Hacker News | extr's comments

Yes, same here. I've used CC almost constantly every day for months across personal and work Max/Team accounts, as well as directly via API on Google Vertex. I have hardly ever noticed an issue (aside from occasional outages/capacity issues, for which I switch to API billing on Vertex). If anything it works better than ever.

I think you are kidding yourself if you think you are going to get remotely close to the quantity/quality of output of a $100 Max sub with Zed/OpenRouter. I easily get $1K+ of usage out of my $100 Max sub. And that's with Opus 4.6 on high thinking.

For personal use I've noticed Claude (via the web-based chat UI) making really bizarre mistakes lately like ignoring input or making completely random assumptions. At work Claude Code has turned into an absolute dog. It fails to follow instructions and builds stuff like a lazy junior developer without any architecture, tests, or verification. This is even with max effort, Opus 4.6, multiple agents, early compaction, etc. I don't know what they did but Anthropic's quality lead has basically evaporated for me. I hope they fix it because I've since adapted my project's Claude artifacts for use with Codex and started using it instead - it feels like Claude Code did earlier this year.

I'd like to give the new GLM models a try for personal stuff.


Same, I'm looking hard for an alternative to what I had.

And I'm seeing the same thing in my sphere - everyone has been bailing on Anthropic the past few weeks. I figure that's why we're seeing more posts like this.

I hope they're paying attention.


I've noticed the same thing, and even done side by side tests where I compare Claude Code with Cursor both running Opus 4.6.

It seems Cursor somehow builds a better contextual description of the workspace, so the model knows what I'm actually trying to achieve.

The problem is that with Cursor I'm paying per-token, so as GP suggested you can easily spend $100+ per month vs $20 on Claude Code.


I saw this immediately with 4.6 and dropped back to 4.5, because when I actually asked it wtf it was doing, its response was "being lazy".

> At work Claude Code has turned into an absolute dog.

Could it be related to this?: https://news.ycombinator.com/item?id=47660925


Some of the newer models available on OpenRouter are good, but I agree that none of them are a replacement for Opus 4.6 for coding.

If you're trying to minimize cost, then having one of the inexpensive models do exploratory work and simple tasks while going back to Opus for the serious thinking and review is a good hybrid approach. Having the $20/month Claude plan available is a good idea even if you're primarily using models available on OpenRouter.

I think trying to use anything other than the best available SOTA model for important work is not a good tradeoff, though.


I've been thinking of doing this — using one of the "pretty good but not Opus 4.6-good, YET very cheap" models for the implementation part of more basic code features, AFTER first using Opus 4.6 high for the planning stage.

Do you think this would be a decent approach?

Also, which client would I use for this? OpenCode? I don't think Claude Code supports using other models. Thoughts?


I have been doing this and the results have been fairly good.

I use Claude to build requirements.md -> implementation.md -> todo.md. Then I tell opencode + OpenRouter to read those files and follow the todo using a cheap (often free) model.

It works 90% of the time. The other 10% it will get stuck, in which case I revert to claude.

That has allowed me to stay on the $20/month Claude subscription as opposed to the $100 one.
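For the "follow the todo with a cheap model" step, here is a minimal sketch of how that could look, assuming OpenRouter's OpenAI-compatible endpoint. The file layout matches the comment above, but the prompt wording and the model name are illustrative assumptions, not the commenter's exact setup:

```python
# Sketch of handing Claude-written planning files to a cheap executor model.
# Assumes requirements.md / implementation.md / todo.md already exist.
from pathlib import Path

def build_executor_prompt(workdir: str) -> list[dict]:
    """Assemble the chat messages the cheap model receives."""
    context = "\n\n".join(
        f"## {name}\n{Path(workdir, name).read_text()}"
        for name in ("requirements.md", "implementation.md", "todo.md")
    )
    return [
        {"role": "system",
         "content": "Follow todo.md step by step. Do not redesign; "
                    "the plan is fixed by the files below."},
        {"role": "user", "content": context},
    ]

# The actual call would go through OpenRouter's OpenAI-compatible API
# (requires OPENROUTER_API_KEY; the model name is a placeholder):
# from openai import OpenAI
# client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="...")
# resp = client.chat.completions.create(
#     model="some/cheap-model", messages=build_executor_prompt("."))
```

Keeping the plan in files rather than in chat history is also what makes the 10% fallback cheap: when the weak model gets stuck, Claude can pick up from the same todo.md.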


> I easily get $1K+ of usage out of my $100 max sub. And that's with Opus 4.6 on high thinking.

And people keep claiming the token providers are running inference at a profit.


>And people keep claiming the token providers are running inference at a profit.

Not everyone gets $1K of usage, and you don't know how fat the per-token margins are. It's like saying the local buffet is losing money because you ate $100 worth of food for $30.


> Not everyone gets $1K of usage, and you don't know how fat the per-token margins are.

Well, we're going to find out sooner rather than later. Right now you don't know how thin (or negative) the margins are, either, after all.

All we know for certain is how much VC cash they got. Revenue, spend, profit, etc. calculated according to GAAP are still a secret.


In addition to the usage-distribution aspects others have called out:

$1K is not the actual cost, just API pricing being compared to subscription pricing. It is quite possible that the API has large operating margins and, say, costs only $100 to deliver $1K worth of API credits.


The model developers across the board maintain that most/all models are profitable by EOL, and that losses come from R&D/training.

Yes, and when we say things like that we are not talking about plans. Running inference at a profit means API token use is run profitably. It's a huge unknown what's happening at the plan level; we know there is subsidy happening, but in aggregate it's impossible to know if it's profitable or not.

Yeah - I just created an Anthropic API key to experiment with pi, and managed to spend $1 in about 30 minutes doing some basic work with Sonnet.

Extrapolating that out, the subscription pricing is HEAVILY subsidized. For similar work in Claude Code, I use a Pro plan for $20/month, and rarely bang up against the limits.


And it scales up - the $200 plan gets you something like 20x what the Pro plan gets you. I've never come close to hitting that limit.

It's obviously capital-subsidized and so I have zero expectation of that lasting, but it's pretty anti-competitive to Cursor and others that rely on API keys.


Ignoring the training costs, the marginal cost for inference is pretty low for providers. They are estimated to break even or better with their $20/month subscriptions.

That being said, they can't stop launching new models, so training is not a one-time task. Therefore one might argue that it is part of the marginal cost.
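A rough back-of-envelope on the break-even claim. The blended per-million-token price here is an assumption standing in for API list pricing; the actual marginal serving cost is unknown and presumably much lower:

```python
# Tokens per month a subscriber could consume before the provider loses
# money, IF per-token cost equaled the assumed rate.
def breakeven_tokens(monthly_fee_usd: float, cost_per_million_usd: float) -> float:
    """Token budget covered by the fee at the given per-million-token cost."""
    return monthly_fee_usd / cost_per_million_usd * 1_000_000

# At an assumed $15/M blended rate, $20/month covers ~1.3M tokens;
# if true marginal cost were 10x lower ($1.50/M), it covers ~13M.
```

This is why the subscriptions can be break-even at marginal cost while still looking heavily subsidized when measured against API list prices.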


I ran ccusage on my work Max account and I spend what would cost $300 a week if it was billed at API rates.

Out of curiosity, how many tokens are people using? I checked my openrouter activity - I used about 550 million tokens in the last month, 320M with Gemini and 240M with Opus. This cost me $600 in the past 30 days. $200 on Gemini, $400 on Opus.
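The figures above imply the following effective blended rates. Input and output tokens are mixed together in the totals, so this is only a rough per-million average, not a list price:

```python
# Effective blended USD-per-million-token rates implied by the usage above.
def per_million(cost_usd: float, tokens: int) -> float:
    """Blended cost per million tokens."""
    return cost_usd / tokens * 1_000_000

gemini_rate = per_million(200, 320_000_000)   # ~ $0.63 per million
opus_rate = per_million(400, 240_000_000)     # ~ $1.67 per million
blended_rate = per_million(600, 560_000_000)  # ~ $1.07 per million overall
```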

  My Claude Code usage stats after ~3 months of heavy use:

    Favorite model: Opus 4.6          Total tokens: 42.6m
    Sessions: 420                     Longest session: 10d 2h 13m
    Active days: 53/95                Longest streak: 16 days
    Most active day: Feb 9            Current streak: 4 days

    ~158x more tokens than Moby-Dick

  Monthly breakdown via claude-code-monitor (not sure how accurate this is):

    Month     Total Tokens     Cost (USD)
    2026-01     96,166,569       $112.66
    2026-02    340,158,917       $393.44
    2026-03  2,183,154,148     $3,794.51
    2026-04  1,832,917,712     $3,412.72
    ─────────────────────────────────────
    Total    4,452,397,346     $7,713.34

According to the meter, I used $15k in tokens with my Max plan (along with $5k of Codex tokens) in the last 30 days. That built an entire working and (lightly) optimized language toolchain - parser, compiler, runtime - among other things.

Not everyone is just vibecoding everything and relying on agents running SOTA models for everything, though.

I actually find Zed pretty reasonable in terms of memory usage. But yeah, like you say, there are lots of small UX/DX papercuts that are just unfortunate. In some cases I'm not sure it's even Zed's fault, it's just years and years of expecting things to work a certain way because of VS Code and they work differently in Zed.

E.g.: Ctrl+P "Open Fol..." in Zed does not surface "Opening a Folder". Zed doesn't call them folders - you have to know that it's called a "Workspace". And even then, if you type "Open Work..." it doesn't surface! You have to purposefully start with "work..."


The issues you described show a critical lack of awareness from the Zed developers that people migrate to their IDE mainly from VS Code.

They are blowing their "weirdness budget" on nonsense.


I don't think it's conscious or even a result of not caring about UX/DX. But I do think you're right - I've noticed the loudest voices in their issue queue are people wanting things like better vim support, Helix keybind support (a super-niche modal terminal editor), etc. Fine if they want to make that their niche, but if you are migrating from VS Code, like 99% of people, you can't have these kinds of papercuts - people will just uninstall.

I think explicit post-training is going to be needed to make this kind of approach effective.

As this repo notes, "The secret to good memory isn't remembering more. It's knowing what to forget." But knowing what is likely to be important in the future implies a working model of the future and your place in it. It's an AGI-complete problem: "Given my current state and goals, what am I going to find important, conditioned on the likelihood of any particular future...". Anyone working with these agents knows they are hopelessly bad at modeling their own capabilities, much less projecting that forward.
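As a toy illustration of the "knowing what to forget" idea (this is not the linked repo's code, and the importance score is exactly the hard, open part the comment is pointing at - here it is just a number the caller supplies):

```python
# Bounded memory store that evicts the lowest-importance entry first.
# The real difficulty is producing the importance score, which requires
# the forward model of the future that the comment argues agents lack.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Memory:
    score: float                          # importance; used for ordering
    text: str = field(compare=False)      # the remembered content

class BoundedMemory:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items: list[Memory] = []     # min-heap: lowest score on top

    def remember(self, text: str, importance: float) -> None:
        heapq.heappush(self.items, Memory(importance, text))
        while len(self.items) > self.capacity:
            heapq.heappop(self.items)     # forget the least important entry

    def recall(self) -> list[str]:
        return [m.text for m in sorted(self.items, reverse=True)]
```

With a perfect scoring oracle this trivial policy would work; without one, it just forgets whatever the model happened to rate low at write time.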


What is Cursor doing? They need to relax a little bit. Recently I saw they released "Glass", which was at https://cursor.com/glass but now just redirects to /download.

Is "Cursor 3" == Glass? I get they feel like their identity means they need to constantly be pushing the envelope in terms of agent UX. But they could stand to have like an "experimental" track and a "This is VS Code but with better AI integration" track.


Glass was a codename while the UI was in early alpha with testers. It redirects to download now because there is no special link anymore. It's just part of Cursor 3 itself.


Just emailed him. Ridiculous issue.


Disagree completely. Works great for me.


If you actually care about this stuff, you run something like https://github.com/waydabber/BetterDisplay, which easily allows for HiDPI @ 4K resolution; it does not "look bizarre" or "require fractional scaling". This is what the OP is about. I do the same thing - I run native res w/ HiDPI on a 27" 4K screen as my only monitor, and it works great.


Unfortunately BetterDisplay cannot set HiDPI @ 4K on the M5 machines - that was the first thing I tried.


Sure, and that is the real tragedy here. The person I'm replying to is just pointing out that native support for high res sucks, which is true, but the real problem is what limits there are on 3rd party support.


It's widely reported and acknowledged as true.


Well, the only people with any ability to acknowledge it have a massive incentive to do so, and I've been around the block enough times to know that startups will use every trick in the book to paint a rosy financial picture, even when it's extremely misleading or occasionally just straight up lies. In the current climate of AI hype my skepticism is even greater.

I'll believe it when I see it.


Where and by whom? Critical context is missing here.



The CEO hyping his product and the viability of his business during an interview with Stripe does not, at least to me, qualify as “widely reported and acknowledged”


K2.5 is dog shit compared to leading OAI/Ant models.

