Hacker News — zachlatta's comments

This is a website created by a professional lobbyist who still has an active role taking lobbying contracts through a firm they founded called Steinhauser Strategies.


Running a nonprofit is really, really hard and I commend them for doing it. I hope they get through this crisis.

Hack Club almost went insolvent in 2017. We would have, if it wasn’t for 2 friends who stepped in near the end and saved us. Today there are tens of thousands of teens involved and doing amazing things.

I’m disappointed by all the negativity in the comments. I’m not familiar with Session, but raising donor funds is a very different skillset than software engineering and anyone who is trying to do both has my respect.


AI slop. Please see the Hacker News guidelines on writing with AI.


I used AI to fix my English, sir!


I think at this point in time, imperfect but real is more acceptable to this crowd than machine translation pseudoprose.


You are barking up the wrong tree. Ryan can't do anything to make government grants for this kind of work exist.

It would be a huge public service if you could get more public support for open source. Maybe you could do it instead of criticizing Ryan!

Some public support for this kind of work already exists, especially for the Python science ecosystem, but nothing that comes close to "competing" with VC for a project like Deno.

You should be the change you hope to see in the world and make this happen!


This is really not true. It’s important that when people say this, we hold them to it and reward them when they see it through.

The internet has a tendency to penalize people who try to do bold things. As a result, it’s too often strategic to stay quiet and boring and focus on the bottom line.

We shouldn’t be cynical. We should be excited when people say bold things and reward them when they live up to it.


This project was made by a teenager in https://blueprint.hackclub.com, a nonprofit program I'm helping run that helps teenagers learn PCB design and get up to $400 USD in funding to prototype and manufacture their designs!

We just launched https://stasis.hackclub.com, another similar electronics program.

If you know any teens that want to get into PCB design, please encourage them to join Hack Club and these programs!


Watsi is an incredible organization doing incredible work.

Chase and Grace are both incredible people.

When I was 15 and first starting Hack Club, I went to Startup School 2013 and watched Chase’s talk. It was the first time I had ever seen a startup founder who was starting a nonprofit instead of a for-profit. Afterwards he showed me the kindness of speaking with me.

Later as the years went on, both of them always replied to emails and gave great advice.

Many nonprofit founders understandably feel very protective of their experience and relationships because nonprofits can be zero-sum in a way that for-profits aren’t, but Chase and Grace are two of the most generous people you’ll ever meet.

Thank you for starting an incredible organization and being such an inspiration.


I just learned about Handy in this thread and it looks great!

I think the biggest difference between FreeFlow and Handy is that FreeFlow implements what Monologue calls "deep context", where it post-processes the raw transcription with context from your currently open window.

This fixes misspelled names if you're replying to an email / makes sure technical terms are spelled right / etc.

The original hope for FreeFlow was for it to use all local models like Handy does, but with the post-processing step the pipeline took 5-10 seconds instead of <1 second with Groq.
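For readers curious what this kind of two-stage pipeline looks like, here's a minimal sketch: raw speech-to-text first, then an LLM cleanup pass seeded with text from the active window. This is illustrative only, not FreeFlow's actual code; the `groq` client calls follow Groq's published Python API, but the model choices and prompt wording are assumptions.

```python
# Sketch of a "transcribe, then post-process with context" pipeline.
# Stage 1: hosted speech-to-text. Stage 2: an LLM pass that fixes
# misheard names and technical terms using on-screen context.

def build_cleanup_prompt(raw_transcript: str, window_context: str) -> list[dict]:
    """Build chat messages for the post-processing pass.

    The prompt wording is hypothetical; the key idea is feeding the
    model both the raw transcript and the active-window context.
    """
    system = (
        "You clean up dictation output. Fix misheard names and technical "
        "terms using the provided screen context. Return only the "
        "corrected text."
    )
    user = (
        f"Screen context:\n{window_context}\n\n"
        f"Raw transcript:\n{raw_transcript}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]


def transcribe_and_clean(audio_path: str, window_context: str) -> str:
    """End-to-end pipeline (needs GROQ_API_KEY; not executed here)."""
    from groq import Groq  # pip install groq

    client = Groq()
    with open(audio_path, "rb") as f:
        raw = client.audio.transcriptions.create(
            file=f, model="whisper-large-v3"  # fast hosted STT
        ).text
    resp = client.chat.completions.create(
        model="llama-3.3-70b-versatile",  # illustrative model choice
        messages=build_cleanup_prompt(raw, window_context),
    )
    return resp.choices[0].message.content
```

The extra chat-completion round trip is exactly where the latency gap comes from: a hosted model on fast inference hardware keeps the second stage well under a second, while a local LLM on a laptop pushes the whole pipeline into the 5-10 second range.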


There's an open PR in the repo that adds this support and will be merged. Post-processing is an optional feature, and when enabled, end-to-end latency can still easily stay under 3 seconds.


That’s awesome! The specific thing that was causing the long latency was the image LLM call to describe the current context. I’m not sure if you’ve tested Handy’s post-processing with images or if there’s a technique to get image calls to be faster locally.

Thank you for making Handy! It looks amazing, and I wish I'd found it before making FreeFlow.


Could you go into a little more detail about the deep context - what does it grab, and which model is used to process it? Are you also using a groq model for the transcription?


It takes a screenshot of the current window and sends it to Llama on Groq, asking it to describe what you're doing and pull out any key info, like names with their spellings.

You can go to Settings > Run Logs in FreeFlow to see the full pipeline run for each request, with the exact prompt and LLM response, so you can see exactly what is sent and returned.
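For the curious, the screenshot step described above can be sketched as a vision chat request: encode the captured window as a data URL and ask the model to summarize what's on screen, with exact spellings of names. The model name and prompt here are assumptions for illustration, not FreeFlow's actual implementation.

```python
# Sketch of the "deep context" capture step: turn a screenshot into a
# vision-model request that pulls out names and technical terms.
import base64


def image_to_data_url(png_bytes: bytes) -> str:
    """Encode screenshot bytes as a data URL for a vision chat message."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return f"data:image/png;base64,{b64}"


def build_context_request(png_bytes: bytes) -> list[dict]:
    """Build a vision-model message asking for key on-screen info."""
    return [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "Describe what the user is doing and list any "
                        "names or technical terms visible, with exact "
                        "spellings."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {"url": image_to_data_url(png_bytes)},
                },
            ],
        }
    ]


def describe_window(png_bytes: bytes) -> str:
    """Send the screenshot to a vision model on Groq (needs GROQ_API_KEY)."""
    from groq import Groq  # pip install groq

    resp = Groq().chat.completions.create(
        model="meta-llama/llama-4-scout-17b-16e-instruct",  # assumed model
        messages=build_context_request(png_bytes),
    )
    return resp.choices[0].message.content
```

The returned description would then be fed into the transcription cleanup prompt, which is why the image call sits on the critical path for latency.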


You can try ottex for this use case. It has both context capture (app screenshots) and native LLM support, meaning it can send the audio AND the screenshot directly to Gemini 3 Flash to produce the bespoke result.


As a very happy Handy user, I can confirm it doesn't do that. It will be interesting to see if FreeFlow works better; I'll give it a shot, thanks!


Wow, Handy looks really great and super polished. Demo at https://handy.computer/


[I'm using] Handy myself right now. And it's pretty good. I don't have any problems with it, except that I wish that it would slowly roll out the text as you talk instead of waiting to transcribe into the very end. because I like to rant and ramble a little bit and then go back and edit what I've written rather than having to perfectly compose on the first attempt. And that's one of the big advantages, in my opinion, of using a voice to text app is that it would let you ramble and rant and see what you have said and keep making additions and alterations to that. For instance, I'm doing this entire bit using handy in one stream of thought take. And so it's probably gonna be a bit rambly and not very polished, but at the same time it's more representative of a general use case. And I'm talking quite a bit so that I can actually put the system under stress and see how well it responds.

My only issue with it was that it cut off the words [I'm using] at the beginning and obviously it doesn't enter paragraph breaks. It took about 25 seconds to transcribe all of that on a 10th gen i7 laptop processor.

If they could incorporate typing out what was said while you're talking, it would be pretty perfect.


I love this idea, and originally planned to build it using local models, but to have post-processing (that's where you get correctly spelled names when replying to emails / etc), you need to have a local LLM too.

If you do that, the total pipeline takes too long for the UX to be good (5-10 seconds per transcription instead of <1s). I also had concerns around battery life.

Some day!

