
Beautiful design and UX for the bot layouts. Kudos, this is really clean.

A lot of people have spent a considerable amount of time building out "claude -p" workflows trusting Anthropic because of those same Tweet assurances outside of OpenClaw.

It seems with the new "--bare" flag they are introducing, a huge rug pull is coming as they plan to deprecate -p for unlimited users.

The docs now read:

> "Bare mode skips OAuth and keychain reads. Anthropic authentication must come from ANTHROPIC_API_KEY or an apiKeyHelper in the JSON passed to --settings. Bedrock, Vertex, and Foundry use their usual provider credentials. --bare is the recommended mode for scripted and SDK calls, and will become the default for -p in a future release."

Hope I'm reading this wrong, or that this gets clarified.

https://code.claude.com/docs/en/headless


It seems clear that Anthropic wants users to pay API rates, not subscriber rates, for tokens consumed programmatically. As a user, I want to pay subscription rates with -p, but it seems they want to block that.

Oh wow, I love this idea even if it's relatively insignificant in savings.

I am finding my prompt-writing style is naturally getting lazier, shorter, and more caveman, just like this too. If I'm honest, it has made writing emails harder.

While messing around, I did a concept of this with HTML to preserve tokens, worked surprisingly well but was only an experiment. Something like:

> <h1 class="bg-red-500 text-green-300"><span>Hello</span></h1>

AI compressed to:

> h1 c bgrd5 tg3 sp hello sp h1

Or something like that.
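The idea above can be sketched mechanically. The abbreviation table below is invented for illustration (only the one example mapping comes from the comment), and a real scheme would need a dictionary shared with the model so the mapping is reversible:

```python
# Minimal sketch of lossy HTML compression to save tokens.
# The ABBREV table is hypothetical; it just reproduces the example above.
import re

ABBREV = {
    "class": "c",
    "bg-red-500": "bgrd5",
    "text-green-300": "tg3",
    "span": "sp",
}

def compress(html: str) -> str:
    # Drop angle brackets, slashes, and quotes; abbreviate known tokens.
    tokens = re.findall(r"[\w-]+", html)
    return " ".join(ABBREV.get(t, t.lower()) for t in tokens)

print(compress('<h1 class="bg-red-500 text-green-300"><span>Hello</span></h1>'))
# → h1 c bgrd5 tg3 sp hello sp h1
```

Note this throws away nesting information (open vs. close tags collapse to the same token), which is part of why it is lossy.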



You'd like Emmet notation. Just look at the cheat sheet: https://docs.emmet.io/cheat-sheet/

Quick everyone to your side projects. We have ~3 days of un-nerfed agentic coding again.

3 days of side project work is about all I had in me anyway

More like 2 hours considering these usage limits

I've been on 5x for a couple of months and the closest I've got to my weekly limits is 75%. I've hit 5-hr limits twice (expected). I'm a solo dev that uses CC anywhere from 8-12+ hrs a day, 7 days a week. I've never experienced any of the issues others complain about, other than the feeling that my sessions are a little more rushed. I'd say that overall I have very dialed-in context management, which includes:

- breaking work across sessions in atomic units

- a svelte claude.md/rules (sub 150 lines)

- periodic memory audit/cleanup

- good pre-compact discipline

- a few great commands that I use to transfer knowledge effectively between sessions, without leaving a trailing pile of detritus

Some may say that this is exhaustive, but I don't find it much different than maintaining Agile discipline.

This being said, I know I'm an outlier.


Perhaps on the 10x plan.

It went through my $20 plan's session limit in 15 minutes, implementing two smallish features in an iOS app.

That was with the effort on auto.

It looks like full time work would require the 20x plan.


I know limits have been nerfed, but c'mon it's $20. The fact that you were able to implement two smallish features in an iOS app in 15 minutes seems like incredible value.

At $20/month your daily cost is about 67 cents. Are you really complaining that you were able to get it to implement two small features in your app for 67 cents?


Yea, actually, people should be complaining.

If you got in a taxi, and they charged you relative to taking a horse carriage, people should be upset.


That last sentence didn't make sense so I'm not sure what your point is. But I'll run with the analogy.

You got into a taxi and they were charging you horse carriage prices initially. They're still not charging you for a full taxi ride but people are complaining because their (mistaken) assumption was that taxis can be provided as cheaply as horse carriages.

People are angry because their expectations were not managed properly which I understand.

But many of us realized that $20 or even $200 was far too low for such advanced capabilities and are not that surprised that all of the companies are raising prices and decreasing usage limits.

OpenAI is not far behind, they're simply taking their time because they're okay with burning through capital more quickly than Anthropic is, and because OpenAI's clearly stated ambition is to win market share, not to be a responsibly, sustainably run company.


Shortly after I ran out of credits in 15 min, they tweeted that they increased usage limits to compensate for the higher token usage, so perhaps it is not as bad now.

This afternoon I was able to use Codex for like two hours on the $20 plan. Maybe limits will be tighter in the future. But with new data centers, new GPU generations, and research advances it might rather get cheaper.

Anyway, as you said, this is all pretty cheap. I'll go with the $100 Codex plan, since I now figured out how to nicely work on multiple changes in parallel via the Codex app with worktrees. I imagine the same is possible in Claude Code.
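The worktree-per-task setup described above can be sketched like this; the layout and task names are illustrative, not the actual mechanism of the Codex app or Claude Code:

```python
# Sketch: one git worktree per task, so parallel agent sessions don't
# trample each other's working trees. Task names are examples.
import subprocess
from pathlib import Path

def make_worktree(repo: Path, task: str) -> Path:
    """Create ../<repo>-<task> on a fresh branch off the current HEAD."""
    dest = repo.parent / f"{repo.name}-{task}"
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add", "-b", task, str(dest)],
        check=True,
    )
    return dest

# for task in ["fix-login", "add-dark-mode"]:
#     path = make_worktree(Path("."), task)
#     # then launch an agent (claude, codex, ...) with cwd=path
```

Each worktree shares the same object store, so this is cheap even for large repos.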


It seems to me a bit naive to think OpenAI would not increase prices/decrease usage limits at some point. $20 might cover a very small fraction of the actual cost that is incurred over a month of sustained usage.

No, I am happy with the results.

For a first test, it did seem like it burned through the usage even faster than usual.

GitHub Copilot’s 7.5x billing factor over 3x with Opus 4.6 seems to suggest it indeed consumes more tokens.

Now I’m just waiting for OpenAI to show their hand before deciding which of the plans to upgrade from $20 to $100.


> It looks like full time work would require the 20x plan.

Full time work where you have the LLM do all the code has always required the larger plans.

The $20/month plans are for occasional use as an assistant. If you want to do all of your work through the LLM you have to pay for the higher tiers.

The Codex $20/month plan has higher limits, but in my experience the lower quality output leaves me rewriting more of it anyway so it's not a net win.


Exactly. God, it wouldn't be such a problem if they didn't gaslight you and act like it was nothing. Just put up a banner that says Claude is experiencing overloaded capacity right now, so your responses might be whatever.

Clearly you didn't try it yet ;)

... your side projects that will soon become your main source of income after you are laid off because corporate bosses have noticed that engineers are more productive...

FWIW everyone is also building a version of this themselves. Only so many directions to go


Most definitely. Although I haven’t found an (F)OSS project that lets one easily ship [favorite harness SDK] to self-hosted platform yet.

Which projects are standing out in this space right now?


Shameless self promo, but I've been working on Optio specifically for coding. It works by taking any harness you want and tasking it to open GitHub/GitLab PRs based on Notion/Jira/Linear tickets, see: https://news.ycombinator.com/item?id=47520220

It works on top of k8s, so you can deploy and run it in your own compute cluster. Right now it's focused only on coding tasks, but I'm currently working on abstractions so you can similarly orchestrate large runs of any agentic workflow.


@jawiggins saw your repo, it looks like OpenAI Symphony but better, since it works across multiple agents and issue trackers, and the feedback loop is great. One feature request though: can you add a plan mode? Your issues are so detailed that they become the plan to implement (though I guess your plan mode currently happens outside of GitHub issues). But say the issue is "implement support for plan mode": there should be back and forth with the agent via issue tags pointing to Opus max and/or plan mode, so we can correct the agent's plan back and forth, and once the tag is removed it can start implementing. Or something similar?


Thanks for the feedback. Earlier I expected I'd need to do more back and forth with the agents before accepting their work but in general I've found it isn't needed.

I do have some features coming up that will improve the ability to converse with the agent as it's running. I'll make a note to add in a plan setting so you can have that run and converse before it gets going.


Thanks for considering it. So how are you doing it now? Who is generating the plan inside the issue, e.g.: https://github.com/jonwiggins/optio/issues/418

Do you just add the issue title like "feat: CLI improvements — status dashboard, workflow commands, shell completions" and it generates the plan in the issue body and starts working on it, OR is the plan generated by another AI agent and copied into the issue body for pickup by Optio?


I've been building exactly this. It's open source, multi-model (5 providers with fallback); for now it runs locally, but the architecture is designed for self-hosted deployment.


Eh, the value is the unlimited Max plan which they have rightfully banned from third-party use.

People simply want Opus without fear of billing nightmare.

That’s like 99% of it.


If this is true, my old personal agent Claude Code setup that I open sourced last month will finally be obsolete (1 month, lol):

https://clappie.ai

- Telegram Integration => CC Dispatch

- Crons => CC Tasks

- Animated ASCII Dog => CC Buddy


Not necessarily; I would very much like to use those features on a Linux server. Currently the Anthropic implementation forces a desktop (or worse, a laptop) to be turned on instead of working headless as far as I understand it.

I’ll give clappie a go, love the theme for the landing page!


I didn't know this existed. I had claude put this together for me from scratch a week ago. Tmux and claude and telegram is a really powerful combo!


Clappie looks much more fabulous than CC though. I'll have to give it a try. I like how you put the requests straight into an already running CC session instead of calling `claude -p` every time like the claws.


Thanks so much! It's a fancy landing page thanks to Claude.

Tmux is seriously an amazing tool.


Dispatch and scheduled tasks have been available for a few weeks already, although with limitations.


Poor mum


Not at all. I am a big Claude Code fan and glad they are releasing more and more features for users.


This was my setup exactly, I open sourced a framework of it a while ago:

- https://clappie.ai

Plus it gives a little ASCII dog to Claude Code terminal.

The ability to spawn independent CLIs is awesome. No brainer that they would add it eventually, between the great threaded functionality it brings and it being essentially a more controlled version of OpenClaw, IMO.


An even worse day is probably coming:

Imagine if a model ever does get scary good, would the big labs even release it for general use? You couldn't even buy it if you wanted to. Exceptions would be enterprise deals / e.g.: $AMZN niche super contracts.


Very true... also I would say even what I get out of Claude Code is absolutely phenomenal right now, but sometimes it does take minutes. I just had it take 15 minutes to do something. But what if you had access to the hardware to run it basically instantly?

Just think how these big companies will use that kind of power for themselves to get even more extreme uses out of it.


AI ran a git clean on me and wiped out a bunch of untracked changes.

I just asked Claude Code to help recover it. It eventually found it all by replaying itself via its Claude .jsonl session files. I never had to install or leave anything.


Claude Code can certainly recover files from the session files, yes. In my case I had to recover 80 files stored across 20+ sessions over the last month. Recovering all those files in one context window, without a deterministic script that keeps track of what has and hasn't been extracted, seemed too challenging. Claude-file-recovery is able to index all available files and extract files at a certain point in time, without having to rely on the LLM correctly parsing 20+ sessions that won't fit in one context window.
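A deterministic extraction pass of the kind described can be sketched as below. The transcript schema (JSONL events whose Write tool calls carry file_path and content) is an assumption about Claude Code's session files, not a verified spec:

```python
# Sketch: index file contents written during sessions by scanning JSONL
# transcripts. Assumes (unverified) events shaped like
#   {"message": {"content": [{"name": "Write",
#                             "input": {"file_path": ..., "content": ...}}]}}
import json
from pathlib import Path

def index_writes(session_dir: Path) -> dict[str, str]:
    """Latest-wins map of file_path -> content across all *.jsonl transcripts."""
    recovered: dict[str, str] = {}
    for transcript in sorted(session_dir.rglob("*.jsonl")):
        for line in transcript.read_text().splitlines():
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or corrupt lines
            for block in event.get("message", {}).get("content", []):
                if isinstance(block, dict) and block.get("name") == "Write":
                    inp = block.get("input", {})
                    if "file_path" in inp and "content" in inp:
                        recovered[inp["file_path"]] = inp["content"]
    return recovered
```

Because transcripts are processed in sorted order and later writes overwrite earlier ones, the index reflects the last version of each file rather than whatever a single context window happens to hold.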

