Hacker News | pxtail's comments

Yes, that's correct, but in doing so remember that the only person who cares most about you and your health is YOU - a doctor cares about you for 10-15 minutes, then the next patient is waiting, and the level of a doctor's care is inversely proportional to their level of burnout.

Yes, that's another issue - CS/IT seems to be unique in that sharing knowledge and openly discussing problems is the norm; in other professions you can get finger-wagging and some generic "advice".

This is actually a business opportunity for WP/Automattic - they could introduce vetted plugins, where the plugin author and/or consumer pays for a review service.


Now the only thing missing is leadership, development, and active maintenance of the project for at least 5+ years - and keeping it and the ecosystem around it fair and open (to some degree at least), because that's what has allowed WP to last so long; it's not a zero-sum game.

For such a rich and resourceful corp as Cloudflare, surely this isn't a problem, and they are going to oversee, maintain, and steward the project for a long, long time. Surely.


There are still plenty of the "leave my fellow multibillion-dollar corp alone" type; it means that the corp can and should screw its loving customer base harder.


The enshittification meme has been taken too seriously, to the point where it is shoehorned into every single place possible.

It is not in Anthropic's interest to screw its customer base. Running a frontier lab comes with tradeoffs between training, inference, and other areas.


The investors are their customers - not the users of the end-product.


This shows a lack of understanding of how markets work. Investors make money when the valuation of the company increases, and the valuation of the company is the best risk-adjusted prediction of future profit.

How would Anthropic increase future profits without satisfying customers?


Early investors make money when later investors buy them out at inflated valuations.


Well sure, all market signals should be considered. As a casual observer, my received signals have been indicating that AI is getting sold at a loss to get market share, and more recent signals have indicated that users are really really sensitive to both costs and performance.

The weakest signal to me is investor money, because when you think about it, investors are betting on a future that may or may not be there. Heck, even trends aren't guaranteed - "past performance is no guarantee," etc.


Have you seen the business models for these companies? Literal underpants gnome memes. OpenAI's goes like this:

1. Build AGI

2. Use said AGI to tell us how to become profitable

3. Profit!

Anthropic seems to be going all in on enterprise sales, which means they don't actually have to please customers; it's what ThePrimeagen humorously calls a "yacht problem", a problem that only needs a solution after the IPO. For now, all they have to do is convince corporate leadership that this is the future of work and sow enough FOMO to close those sales contracts, and their projected sales, and stock valuation, go through the roof.

Of course that value will collapse if they go too long without delivering on their promises. That's why they call it a bubble. But by then, hopefully, Dario and the early investors will be long gone and even richer than they started. Their only competitor, OpenAI, is confronted with the same issues: the scalability problems won't go away, and addressing them doesn't drive stock valuation the way promising high rollers that AGI and total workforce automation are just around the corner does.


It doesn't matter if it is in Anthropic's interest to screw its customer base, if their reported monthly revenue growth is accurate then it makes perfect sense why Claude would be getting dumber...

Demand is way up and compute supply is extremely limited because data center buildouts can't keep up with demand.

In the face of rising demand and insufficient compute, their only practical options (other than refusing new business until demand can be met) are significantly raising the price of tokens (and more tightly limiting subscription options) or doing behind-the-scenes inference optimizations that are likely to make the model dumber.

It is very easy to believe that they took the route of inference optimizations that have reduced the quality of the service, and that that is where the perceived enshittification is coming from.


Good one - at last, an April Fools' joke with some effort.


> Are there more people whose work on a project has been taken away by AI bots?

Of course. What worked for me, and what allows me to keep my sanity in my case of the project owner coming in and remodeling half of the codebase over the weekend with CC, is that I mentally ceded "ownership" of the project code - that is, I no longer feel responsible for what is there and how it is structured. And there are tests.

Apart from that, I can say that I empathize with you, because I know that initially it feels awful - like losing some part of your agency, and also to some degree humiliating, to watch something carefully and meticulously designed get restructured, replaced, or thrown away so quickly and carelessly. What also helps is changing your mental model and perceiving yourself as a controller who oversees the process of "shaping" the code as a whole, in its big mass, to behave in a certain way, instead of staying mentally attached to some part of it because "I designed it".


>when seeing something carefully and meticulously designed restructured, replaced or thrown away so quickly and carelessly.

That’s just a trend that has been accelerating for a while now. Make things quickly; quality and longevity are qualities of the past. It’s better to give a bad solution quickly than to stop and think.


Recently, after noticing how quickly limits are consumed and reading others' complaints about the same issue on Reddit, I was wondering how much of this is a real error or bug hidden somewhere, and how much is testing what threshold of tightened limits will be tolerated without accounts being cancelled. Eventually, in a "shit hits the fan" situation, it can always be dismissed by waving hands and apologizing (or not) about some abstract "bug".

The lack of transparency and accountability behind all of this is, in my perception, incredible.


I've run into this, and I highly doubt I am one of the more extraordinary users. I have delays between working with it, don't have many running at once, am running on smaller codebases, etc. Yet just a few minutes ago I hit a quota. In the past I did far more work with it without running into the quota.

I emailed their support a few days ago with details, concerns, a link to the twitter thread from one of their employees, and a concrete support request, which had an AI agent ('Fin') tell me:

> While our Support team is unable to manually reset or work around usage limits, you can learn about best practices here. If you’ve hit a message limit, you’ll need to wait until the reset time, or you can consider purchasing an upgraded plan (if applicable).

I replied saying that was not an appropriate answer.

You're absolutely right regarding the lack of transparency and accountability. On one hand, Anthropic generates goodwill by appearing to have a more ethical stance than OpenAI, and a better product. On the other hand, they kill it fast through extremely poor treatment of their customers.

If they have a bug, they need to resolve it - and in the meantime, refund quotas. "Unable to" - that's shocking. This is simple and reasonable. It's basic customer service. I don't know if they realise the damage their attitude is doing.


Fin is the most useless thing ever. There's no obvious way to get reports in front of a human in a timely manner, and there's no reason to believe Fin interactions are even retained.

Ultimately, this means no loyalty. I can't stay loyal to a brand that doesn't actually respond to inquiries, bug reports, or down reports at all.

I do understand that Anthropic is operating at a tremendous scale and can't have enough humans in the loop. This sounds like a good use for AI classification and triage, really!


> I can't stay loyal to a brand that doesn't actually respond to inquiries, bug reports or down reports at all.

Amen to this.

Being in business means having to respond to customer enquiries at some point.

Given the amount of billions being pumped into Anthropic's pockets and given the millions their senior-leadership no doubt pay themselves, I'm sure they could spare a bit of cash to get off their backsides and sort out the Customer Service.

I simply do not buy the "poor Anthropic, they are operating at scale, they are too busy winning to deal with customer service" argument that comes up time and time again.

The fact is there are many large businesses and many large governments that are able to deal with customers "at scale".

Scale means you respond a bit slower - maybe a few days, or a couple of weeks AT MOST. But complete silence for months or years is inexcusable.

All of my experiences with "Fin" match those of my friends and colleagues... namely, that "Fin" is a synonym for "black hole". I've got "tickets" opened with "Fin" months ago that have not had a modicum of a reply.


> Being in business means having to respond to customer enquiries at some point.

Tell that to Google or Meta.


[flagged]


What started that though?


It’s funny to me that you think this is a bug.


It is also interesting to observe that the most valuable accounts in this kind of pricing model are the ones that are used the least and therefore never run into the limits. Heavy users cancelling their accounts in frustration is a win for Anthropic, not a punishment - at least in the short term.


Casual users follow the recommendations of power users. Pushing heavy users off your service is a post-growth optimization.


I suspect casual users are MUCH more likely to either cancel their account or switch providers on a whim.


Once you get used to using claude as an abstraction layer you start getting pretty reckless with it.

My organization has the concept of "premium models", where our limits reset every month. I hit my limit pretty quickly last month because I was burning tokens doing things that would have been a simple bash loop in the past - all because I was used to interfacing with Claude at the chat layer for all my automation needs and not thinking any more about it.
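For context, the kind of chore meant here can be a few lines of shell instead of an LLM round-trip. A minimal sketch (the file names and the backup task are made up for illustration, not from the comment):

```shell
# Hypothetical illustration: the sort of task that doesn't need an LLM.
# Create a couple of sample files in a scratch directory, then back
# each one up with a plain bash loop using parameter expansion.
tmp=$(mktemp -d)
cd "$tmp"
touch notes1.txt notes2.txt
for f in *.txt; do
  cp -- "$f" "${f%.txt}.bak"   # strip the .txt suffix, add .bak
done
ls
```

The `${f%.txt}` expansion removes the shortest matching suffix, which is all the "automation" this kind of job actually requires.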


This is a real danger that I think a lot of people will run into as prices go up more and more in the future.

Completely outside of the productivity debate, offloading cognitive tasks to LLMs leaves you less practiced in them and less ready to do them when the LLM isn't available. When you have to delegate only certain tasks to the LLM for financial reasons, you may find yourself very frustrated.


I'm really hoping locally hosted llms get to the point of competing with current-day frontier models so that we all have "unlimited" usage.


This is the bet of many of the big AI companies, and why they're heavily subsidizing the calls. With the latest cracks by the US gov, it seems Anthropic is starting to reduce those subsidies, given their edge in the game. I am starting to consider local models more seriously beyond just testing, but nowadays the RAM/GPU market is inflated.


Local models just don't seem that useful for me for these particular tasks yet - the most recent versions of Codex and Claude Opus are the first time I've found them to be particularly useful in a "real engineering" context that isn't just vibe coding.

Google's TurboQuant might help address this, but it also might just widen the gap even further.

I am far on the skeptic edge when it comes to the generative AI side of ML tools though, so do take my opinion with that weight.


TurboQuant is totally irrelevant compared to current quantization methods. It has been thoroughly tested by people who build inference engines for local models. It's all talk, no actual meat to it.


Do you have any reading on this? I find it hard to believe something announced a week ago has been “thoroughly tested”.


Their paper, TurboQuant (TQ), is not new per se. It was released last year, and heavily rehashes old ideas that were released a year prior (RabitQ). There is also [a bit of drama](https://openreview.net/forum?id=tO3ASKZlok) there, which boils down to what seems to be a bit of malpractice by Google's researchers. TQ claims a few things: better compression quality and speed, and better KV-cache handling. Currently the KV cache takes a load of resources beyond the model itself. Many people have applied different quantization strategies to it, but the quality degradation is too apparent. Enter attention rotation. This seems to have genuinely helped KV-cache compression, as per [llama.cpp's latest tests](https://github.com/ggml-org/llama.cpp/pull/21038). On the other hand, [ik_llama.cpp](https://www.reddit.com/r/LocalLLaMA/comments/1s7nq6b/technic...) did tests on the quality of TurboQuant-3 compared to IQ4-quantized models, and the quality degradation is much worse. So it's two things: KV compression -> good. TurboQuant quantization -> not good.
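For readers who haven't followed the quantization details, a toy sketch of generic round-to-nearest quantization (this is NOT TurboQuant, IQ4, or any real scheme from the comment; the function and sizes are made up) shows why dropping from 4-bit to 3-bit costs quality:

```python
import numpy as np

# Toy symmetric round-to-nearest quantization - just the generic idea
# behind n-bit weight quantization, not any production scheme.
def fake_quant(x, bits):
    qmax = 2 ** (bits - 1) - 1          # grid levels per side: 7 at 4-bit, 3 at 3-bit
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale                     # dequantized approximation of x

rng = np.random.default_rng(0)
x = rng.normal(size=4096).astype(np.float32)
err4 = float(np.mean(np.abs(fake_quant(x, 4) - x)))
err3 = float(np.mean(np.abs(fake_quant(x, 3) - x)))
# Fewer bits -> coarser grid -> larger reconstruction error, which is
# the "quality degradation" being compared between quant formats.
```

Real formats like IQ4 use much smarter grids and per-block scales, but the bit-budget tradeoff sketched here is the same one the llama.cpp comparisons are measuring.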


Seriously, who isnt planning a local first strategy?


I am sure a lot of people and orgs are - but realistically, the majority of users need to understand and prepare not for local-first, but for the fact that they will never have that option for the models they know are the most useful to them.


Every series A-C startup


do you think we're already seeing mental atrophy play out?

or do you think model inference/training will get cheap enough that we won't reach the point of "high prices"?


They keep running experiments, like a free $50 in extra usage credits, or 2x usage outside certain windows where inference is very slow. You can’t help but think this is all a slowly-boiling-the-frog experiment - experimenting with how much they can charge.


They're boiling the frog pretty quickly, honestly. Token usage has clearly been an issue with Claude Code from the beginning. It just blows through tokens.


This feels a lot like the same playbook we’re seeing with dynamic pricing in retail, just applied to compute instead of products. You never really know what you’re getting, and the rules shift under you.

What makes it worse is the lack of transparency. If there were clear, hard limits, people could plan around it. Instead it’s this moving target that makes it impossible to trust for real work.

At some point it stops feeling like a bug and starts feeling like a pricing experiment on users.


The clear trend over the past decade or so has been using analytics and data gathering to extract maximum rents from every customer in every industry and AI is going to massively accelerate this.

The only way out is government regulation which means we are screwed in the US (our government is too far gone to represent average citizen interests in any meaningful way) but Europeans maybe have a chance if they get it together and demand change.


It's been pretty clear for a while that companies who have developed foundation models have essentially unprecedented levels of investment to recoup. For all the talk of faster hardware and more efficient models, that spend hasn't gone away and ultimately that investment needs to get a return somewhere.

Dependency on cloud AI models is, in effect, dependency on VC subsidy. From the user's point of view, this dependency is debt which will either be repaid with interest to a model provider or through the hard work of making themselves independent of such models after having become dependent.


Wow, someone here has above a room temp IQ.


What a horrid glimpse into the future. I hope we won't get there, and that we all collectively fight back with our wallets.


It's going to get much worse. We will soon have enough data and compute (and are losing enough online privacy) to allow every company to apply personalized pricing down to the individual. My local restaurant is going to know that I am willing to buy a burger for at most $4.57 and my neighbor is only willing to pay $2.91 for it, and they will have the ability to charge us individually. Every business is going to soak each of us to the maximum extent that the data says they can.


I think there’s a pretty good argument to be made that this is discriminatory. Certainly it’s not something I would tolerate as a consumer. I suspect there will be heavy pressure to regulate this practice out of existence if it catches on.


Who is going to stop them? The consumer protection bureau?


Depends what the political attitudes are where you live. The EU is unlikely to let it fly for example.


Then your neighbor can charge you up to $1.65 to buy a burger on your behalf ($2.91 + $1.65 = $4.56), and you still get it for less than your $4.57.


How can you compete when the algorithms are custom, individualized, and private? How would you even know that you should?


Not competition, but more like an opportunity for a startup to build a solution that fits in the new gap. A marketplace for people to sell their discounts.


Who would voluntarily do business with a company that does this? Not me.


Do you use Uber, Lyft, or Doordash?

What about airlines? https://fortune.com/2025/07/16/delta-moves-toward-eliminatin...

What about Staples or Home Depot? https://www.wsj.com/articles/SB10001424127887323777204578189...


Eventually, when all of them do this (and they will be effectively forced to in order to remain competitive), then we will not have a choice.


I will make burgers myself. I take this approach with many things and services that lack great suppliers anyway. And I don't care if it's suboptimal, because in the long run I'll have better skills and be protected from exactly this trend.


But the supermarkets will do it too


The prices are literally marked on the shelves.


Today, yes. I can imagine a future where that sticker is replaced with an electronic display, and facial rec shows you an individualized price.


You don't think customers are going to figure that out by talking to each other and comparing prices on the spot?

Do you ever talk to other humans IRL?


Everyone who uses Uber is voluntarily doing business with a company that does this. When was the last time you took an Uber?


I'm worried that the present is actually living off a line of credit that will be spent/closed soon.


That’s what you get when you sign contracts in airline reward miles


Are they going to pay it back if the subscription was paid but the token limit was less than advertised? Is there some tiny text somewhere preventing you from just suing, or pulling the money back via a credit-card chargeback?


Part of the issue is that they don't actually advertise what the token limit is - just something vague like "this is 5x more than free, and 5x more than Pro". They seem to be free to change the baseline however they please, because most of us are more than happy to use what they give us at the discounted subscription pricing.


Everyone on my team has been running into this, including the super users on the Max plan and the skeptics who only use it every few days. The quota is going way faster than it did before, sometimes a single prompt will eat up a third or more of the session quota.


I suspect that Claude had a bug that undercounted tokens and they fixed it.


I wonder if that was why they were offering the bonus off-hours limits - to ease people into the transition.


Working as intended? They openly state that how quickly your limit is reached depends on many factors (which you don't know), as well as the current load on their systems.

Could just be that usage has gone up.


Welcome to the Amazon playbook, replayed again: the most useful, profitable, and popular use cases will be implemented by the platform - and they will do it ruthlessly and quickly, as the money needs to be recouped.


100% it's going to happen - OpenAI will do the same; there were already rumors about them building an internal "GitHub", which is a stepping stone for that. It is also a requirement for completing the lock-in - the dream for these companies.

