
How should I update my simplistic understanding that decode is bandwidth-bound, given these results showing the B70 decoding faster than a 4090 (which has about 50% more bandwidth)?

I doubt you'd get the same sort of result on a modern-ish MoE or dense model via a more standard inference engine like llama.cpp or vLLM. I don't think MLPerf is a reasonable benchmark at this point.

Edit: Here is a simple llama.cpp comparison where the token generation results match the rule of thumb.

https://www.reddit.com/r/LocalLLaMA/comments/1st6lp6/nvidia_...
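
For anyone unfamiliar, the rule of thumb: at batch size 1, decode streams the active weights through memory once per token, so throughput is capped at roughly bandwidth divided by active-weight bytes. A back-of-the-envelope sketch; the bandwidth and model-size numbers below are illustrative, not measurements:

    # Batch-1 decode speed is capped by how fast the active weights can
    # be streamed through memory: tok/sec <= bandwidth / active bytes.
    def decode_ceiling_tok_s(bandwidth_gb_s: float, active_weight_gb: float) -> float:
        return bandwidth_gb_s / active_weight_gb

    # Illustrative numbers, not measurements: RTX 4090 at ~1008 GB/s.
    print(decode_ceiling_tok_s(1008, 13))  # ~13 GB dense model: ~78 tok/s ceiling
    print(decode_ceiling_tok_s(1008, 3))   # MoE, ~3 GB active: ~336 tok/s ceiling

Measured throughput lands below the ceiling once KV-cache reads and kernel overhead are counted, but the ranking of two GPUs should still roughly track their bandwidth, which is why the result surprised me.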


Probably the best single resource is https://github.com/pmcfadin/awesome-accord

> Batches all operations. Does large number of reads/edits simultaneously...

I wasn't sure what this meant, so I looked at the source. It seems to refer to tool APIs being designed to take multiple targets as a list parameter, instead of hoping the model makes appropriately parallel tool calls. (This matches my experience, btw: models are reluctant to make a large number of parallel calls at once, and this seems more pronounced with weaker models.)
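
A hypothetical sketch of the pattern (the names and schema here are mine, not the project's):

    # Hypothetical sketch, not the project's actual schema: expose one
    # tool that takes all targets as a list, instead of a single-target
    # tool you hope the model calls N times in parallel.
    read_files_tool = {
        "name": "read_files",
        "description": "Read several files in a single call.",
        "parameters": {
            "type": "object",
            "properties": {
                "paths": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Every file to read, batched together.",
                }
            },
            "required": ["paths"],
        },
    }

    def read_files(paths: list[str]) -> dict[str, str]:
        # One round-trip no matter how many files the model asks for.
        return {p: open(p).read() for p in paths}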


I think Anthropic may have mentioned this first; this pattern is also something my custom agent's tools are designed around, and I'm pretty sure I picked it up from them.

Branimir is an engineer's engineer. Excellent choice.

His work is very cool and I was impressed by the thoroughness and thoughtfulness of his responses.

The newest structural diff tool is RefactoringMiner; there's a paper and a GitHub repo that works out of the box, which is rare for this space. Excellent results, but the mainline is limited to Java IIRC, with a couple of ports for other languages.

What are you using for web search?

We use DuckDuckGo. Sorry for the delayed response as well.

Flash 2 isn't even at EOL until June, but we started seeing ~90% error rates (429s) over the weekend. (So we switched to GPT 5.4 nano.)

Isn't the "KV Compression Strategies (FAIR)" chart showing that the fancy complex algorithm only barely beats simple topk?

The commentary says that topk "degrades rapidly at low ratios" but the same can be seen for HAE (Entropy + OLS).
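
For reference, a top-k baseline is about as simple as KV compression gets. A rough sketch, assuming some per-entry importance score (e.g. accumulated attention mass) is already available:

    import numpy as np

    # Rough sketch of a top-k KV baseline. How you score entries is the
    # interesting part and is assumed as an input here.
    def topk_prune_kv(keys, values, scores, keep_ratio=0.25):
        k = max(1, int(len(scores) * keep_ratio))
        keep = np.argsort(scores)[-k:]  # indices of the k highest scores
        keep.sort()                     # preserve original sequence order
        return keys[keep], values[keep]

    rng = np.random.default_rng(0)
    K = rng.normal(size=(1024, 128))
    V = rng.normal(size=(1024, 128))
    s = rng.random(1024)
    K2, V2 = topk_prune_kv(K, V, s, keep_ratio=0.1)  # 1024 -> 102 entries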


Fair point: the gap isn't huge in that plot, and both degrade at low ratios. The difference is more in how they degrade: TopK can have sharper, localized failures, while HAE tends to degrade more smoothly. That doesn't always show up strongly in average MSE.

That said, the gains are modest right now; this is still a research prototype exploring the tradeoff, and there's clearly more work to be done.


Is it really that fancy and complex, though? The “entropy recycling bin” seems fancy to me, but the other stuff is least squares and an SVD; those are solid workhorse numerical routines.

For coding, Qwen 3.6 35B-A3B solved 11/98 of the Power Ranking tasks (best-of-two), compared to 10/98 for the same-size Qwen 3.5. So it's at best very slightly improved, and not at all in the class of the dense Qwen 3.5 27B (26 solved), let alone Opus 4.6 (95/98 solved).

This has similar problems to SWE-bench, in that models are likely trained on the same open-source projects the benchmark uses.

https://blog.brokk.ai/introducing-the-brokk-power-ranking/


If all models are trained on the benchmark data, you cannot extrapolate the benchmark scores to performance on unseen data, but the ranking of different models still tells you something. A model that solves 95/98 benchmark problems may turn out much worse than that in real life, but probably not much worse than the one that only solved 11/98 despite training on the benchmark problems.

This doesn't hold if some models trained on the benchmark and some didn't, but you can fix this by deliberately fine-tuning all models for the benchmark before comparing them. For more in-depth discussion of this, see https://mlbenchmarks.org/11-evaluating-language-models.html#...


It is much faster, though. On my M1 Max, describing a picture (a quick way to get a pretty large context):

Qwen 3.6 35B-A3B: 34 tok/sec

Qwen 3.5 27B: 10 tok/sec

Qwen 3.5 35B-A3B: doesn't support image input


I've been using Qwen 3.5 35B-A3B with images as input, so I suspect you didn't include the vision part of the model during testing (I use llama.cpp, and I learned I needed to include the separate mmproj part).
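
For anyone hitting the same thing: the vision encoder ships as a separate mmproj GGUF that has to be loaded alongside the main model. A sketch via llama-cpp-python (paths are placeholders, and the right chat handler depends on the model family; the LLaVA-style handler is shown only as an illustration):

    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Llava15ChatHandler

    # The mmproj GGUF is the vision projector; without it the model
    # quietly behaves as text-only. Paths below are placeholders.
    handler = Llava15ChatHandler(clip_model_path="mmproj-model.gguf")
    llm = Llama(model_path="qwen-model.gguf", chat_handler=handler, n_ctx=8192)

    resp = llm.create_chat_completion(messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/pic.jpg"}},
            {"type": "text", "text": "Describe this picture."},
        ],
    }])
    print(resp["choices"][0]["message"]["content"])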

What is the quantization level of your Qwen 3.6 35B-A3B model?

You're comparing a tiny model for local inference against a proprietary, expensive frontier model. It would be fairer to compare against a similarly priced model, or against small frontier models like Haiku, Flash, or GPT nano.

Not when the article they're commenting on was doing exactly the same thing.

Eh, it’s an important perspective, lest someone start thinking they can drop $5k on a laptop and be free of Anthropic/OpenAI. Expensive lesson.

Wonderful, another "my preexisting biases are all correct, because AI" piece.

