Hacker News | _345's comments

The page is hard to read on landscape browsers like Chrome on Win11.

We need more voices like this to cut through the bullshit. It's fine that people want to tinker with local models, but for too long the narrative has been that you can just buy more RAM, run some small-to-medium-sized model, and be productive that way. You just can't: a 35B will never perform at the level of a same-generation 500B+ model. You're basically working with GPT-4 (the very first one to launch) tier performance while everyone else is on GPT-5.4. If that's fine for you because you get to stay local, cool, but that's the part no one ever wants to say out loud, and it made me think I was just "doing it wrong" for so long on LM Studio and Ollama.


> We need more voices like this to cut through the bullshit.

Just because you can't figure out how to use the open models effectively doesn't mean they're bullshit. It just takes more skill and experience to use them :)


> We need more voices like this to cut through the bullshit.

Open models are not bullshit, they work fine for many cases and newer techniques like SSD offload make even 500B+ models accessible for simple uses (NOT real-time agentic coding!) on very limited hardware. Of course if you want the full-featured experience it's going to cost a lot.


I fell for this stuff, went into the open+local model rabbit hole, and am finally out of it. What a waste of time and money!

People that love open models dramatically overstate how good the benchmaxxed open models are. They are nowhere near Opus.


There is absolutely a use case for open models... but anyone expecting to get anywhere near the GPT 5.x or Claude 4.x experience for more demanding tasks (read: anything beyond moderate-difficulty coding) will be sorely disappointed.

I love my little hobby aquarium though... It's pretty impressive what Qwen Coder Next and Qwen 3.5 122B can accomplish (in terms of general agentic use and basic coding tasks), considering the models are freely available. (I've also heard good things about Qwen 3.5 27B, but haven't used it much... yes, I am a Qwen fanboi.)


I actually asked chatgpt to recommend me a great starter tmux conf, and it gave me 80% of this blog post. Not an insult btw.


Weirdly, after I hit submit, the post went up as "VR game I've ever played wasn't a VR game". I had to edit it to add "The best" back in. A misfiring title filter, maybe?


That's about an 8:30 mile, scaled for the fact that it's harder when you have to cover more distance... seems pretty reasonable to me as a fitness baseline for the army. I would struggle to make that now, but if I had one month to prep I could clear it.


Am I crazy? Isn't it 9:40 pace?
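The thread disputes the pace (8:30 vs. 9:40 per mile) without quoting the underlying run time, so it can't be settled here. For reference, pace is just total time divided by distance; a small sketch with hypothetical inputs showing how each claimed figure could arise:

```python
def pace_per_mile(total_minutes, miles):
    """Return the per-mile pace as a (minutes, seconds) tuple."""
    pace = total_minutes / miles
    whole_minutes = int(pace)
    seconds = round((pace - whole_minutes) * 60)
    return whole_minutes, seconds

# Hypothetical two-mile times (not from the thread):
print(pace_per_mile(17.0, 2))        # 17:00 over 2 miles -> (8, 30)
print(pace_per_mile(19 + 20/60, 2))  # 19:20 over 2 miles -> (9, 40)
```

A 17:00 two-mile run yields the 8:30 figure, while 19:20 over the same distance yields 9:40, so the disagreement likely comes down to which event time each commenter had in mind.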


They contract out all the time; they've admitted to it in lots of interviews. So I think the amount of contracting lets them keep their core hires down.


One thing Steam does better than anywhere else is create an incredible store experience for selling games. I don't think any other game distributor has an algorithm as good as theirs, plus all the integrations and hookups that come with it. For example, Nintendo's shop page for each game is sparse in detail and lacks so much of the information buyers have access to on that game's Steam page. Steam's store search and other store views also display games far more efficiently than Nintendo's, making it much easier to find what you're looking for in fewer clicks and fewer minutes.

If you have the time, try to find a game on Nintendo's store vs. on Steam. Don't Google for the pages; go to each base shop page and start from there. Try to avoid searching the title directly; instead, search for keywords as if you're a gamer trying to recall a game suggestion you heard from a friend two weeks ago. Go about it like that and you'll notice a plethora of differences that, combined, put Steam on a whole other level of sales and content distribution.


I never understood that. Simple comments like "Nice!" or "I agree" are really rare anyway, and I don't find it difficult to scroll past them, just like I would with any other comment whose first few sentences I don't find salient.


I would not have started my article with "I could get into Bitcoin anytime, why the rush?" That is not the killer first example you think it is. We now have ~17 years of proof that you would have made a ton of money by simply, mindlessly buying $200 of Bitcoin every month (after lower-risk contributions are made) and just holding onto it.

I mean, if you did that starting in 2010, you'd have contributed ~$38K USD by 2026 and have ~$1.5B USD now. BTC being so cheap back then dominates the whole process, so to demonstrate the point further: if you had heard about it for years, been nervous about trying it, and decided to wait until 2016, you'd still only need to put in ~$24K overall to come out with ~$450K by 2026.
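The contribution figures are easy to verify (the final portfolio values depend on historical BTC prices, which aren't reproduced here); a quick sketch of the monthly-buy math:

```python
def total_contributed(start_year, end_year, monthly=200):
    """Total dollars put in with a fixed monthly buy, ignoring returns."""
    months = 12 * (end_year - start_year)
    return monthly * months

print(total_contributed(2010, 2026))  # 38400 -> the ~$38K figure
print(total_contributed(2016, 2026))  # 24000 -> the ~$24K figure
```

So the ~$38K and ~$24K contribution totals in the comment line up with a flat $200/month schedule over 16 and 10 years respectively.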

That's without biting your fingernails over the price changes, the hype cycles, the price-drop scares. You just set and forget a $200 recurring monthly buy, put your energy elsewhere, and pocket half a million for basically no effort.

And if everything is obvious in hindsight, then why, in hindsight, would you write an article acting like being an early Bitcoin adopter was a bad decision?


yes and it deleted all my fucking cancer research omg

