That's likely because it takes an entirely different approach to make it work. Augmenting your existing flow with "sophisticated auto complete" isn't as interesting and isn't actually using the tools how they were designed to be used.
I'm not going to pass judgement either way; we'll see how it all shakes out.
I just know for me, personally, I love computers and making them do what I want and in the AI era I am somehow using them even more and doing even more.
It feels so weird to me - people are exhausting their quotas while I am trying very hard to even reach mine with the $200 plan.
We're generating all of the code for swamp[1] with AI. We review all of that generated code with AI as well (via the Anthropic API). Every part of our SDLC is pure AI plus compute: many feature requests every day, bug fixes, etc.
Never hit the quota once. Something weird is definitely going on.
My hypothesis is that people who have continuous sessions that keep the cache valid see the behavior you’re describing: at 95% cache hits (or thereabouts), the max plan goes a long way.
But for people who go more than 5 minutes between prompts and get no cache hits, usage is eaten up quickly, especially when passing in hundreds of thousands of tokens of conversation history.
I know my quota goes a lot further when I sit down and keep sessions active, and much less far when I'm distracted and let it sit for 10+ minutes between queries.
It’s a guess. But n=1 and possible confirmation bias noted, it’s what I’m seeing.
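A rough back-of-envelope model of that hypothesis. The rates below are assumptions based on Anthropic's published ephemeral prompt-cache pricing at the time of writing (cache writes ~1.25x base input price, cache reads ~0.1x, 5-minute TTL); the point is only the shape of the effect, not exact numbers:

```python
# Back-of-envelope model of prompt-cache economics for long sessions.
# Assumed relative rates (per input token): cache write 1.25x, cache read 0.1x.

BASE = 1.0          # full price for uncached input tokens
CACHE_WRITE = 1.25  # first turn that populates the cache
CACHE_READ = 0.10   # later turns while the cache is still warm

def session_cost(history_tokens: int, turns: int, cache_warm: bool) -> float:
    """Relative input cost of re-sending a long conversation prefix each turn."""
    if cache_warm:
        # Pay the cache write once, then cheap reads for every later turn.
        return history_tokens * (CACHE_WRITE + CACHE_READ * (turns - 1))
    # Cache expired between turns (>5 min idle): full price on every turn.
    return history_tokens * BASE * turns

warm = session_cost(200_000, turns=10, cache_warm=True)
cold = session_cost(200_000, turns=10, cache_warm=False)
print(f"warm session: {warm:,.0f}  cold session: {cold:,.0f}  ratio: {cold / warm:.1f}x")
```

Under these assumed rates, a 10-turn session over a 200k-token prefix costs roughly 4-5x more when every turn misses the cache, which would be consistent with idle gaps burning quota much faster.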
Why is it our job to micromanage all this when it used to work fine without? Something's clearly changed for the worse. Why are people insisting on pushing the responsibility on paying users?
Man, what the hell happened to System Initiative? It was a super weird pivot from sociotechnical proclamations to a tool where I honestly have no idea what it does for me. Is it n8n for agents? Is it needed when I have a bunch of skills that approximate whatever swamp is trying to do? Who knows!
I can't really speak to the sociotechnical proclamations, because I didn't make them.
What it does for you is simple: if you want to automate something, it automates it. Load the AI harness of your choice, tell it what to automate, and swamp builds extensions for whatever it needs to accomplish your task.
It keeps a perfect memory of everything that was done, manages secrets through vaults (which are themselves extensions it can write), and leaves behind repeatable workflows. People have built all sorts of shit: full VM lifecycle management, homelab setups, managing infrastructure in AWS and Azure.
What's also interesting is the way we're building it. I gave a brief description in my initial comment.
Ah, interesting, thanks! I think you might consider elevating some of that kind of copy.
The sociotechnical stuff with System Initiative was made by your CEO? The guy who is really into music? And I don't even know how long that product was a thing before the pivot. Not long!
System Initiative was a thing for ~6.5 years. I talked to every person who ever used it or was interested in using it in the last 2.5 years. Thousands of them.
Swamp is better by every metric: it has a lot more promise and is a lot more interesting.
Wasn't that the entire premise of drop, though? Massive stock of quality, curated items. Not necessarily in a specific niche. The "what new drop is there today" is the appeal.
At first, the entire premise was facilitating group buying to meet manufacturer order minimums for unique or high-demand hardware, and discounts for meeting manufacturers' volume discount targets.
Then it morphed into a general specialty/niche retailer for people into keyboards, headphones, and "EDC" gear, and began to also focus on "house brand" type merch.
There is a lot you can do to shape the end result to not have these faults. In the end, the engineering mind and rigor still needs to apply, so the hard work doesn't go away.
But the errors described - no architecture adhesion, lack of comprehension, random files, etc. - are a matter of not leveling up the sophistication of use further, not a gap in those tools.
As an example: very clearly lay out your architecture principles, guidance, how code should look on disk, your theory on imports, etc. Then objectively analyze any proposed change against those principles, and the result converges toward something sane and understandable.
We've been calling it adversarial testing across a number of dimensions - architecture, security, accessibility, among other things. Every PR gets automatically reviewed and scored from these perspectives. If an adversary doesn't OK the PR, it doesn't get merged.
Also, it's not like it's a particularly good piece of tech. It was the first to show a new category, but jeebus, the design and security are a nightmare. Any of the numerous other claws are better choices for anything serious.
I've been programming for literally my entire life. I love it, it's part of me, and there hasn't been more than a week in 30 years that I haven't written some code.
This is the first time that I feel a level of anxiety when I am not actively doing it. What a crazy shift that I am still so excited and enamored by the process after all of this time.
But there's also a double-edged sword: I'm having an even harder time moderating my working hours, which I naturally struggle with anyway. Partly because I am having so much fun and being so productive, but also because it's just so tempting to add one more feature or fix one more bug.
None in my area. Time to disperse. Get out of major cities like the pandemic promised. Fill in this great country we live in. Proliferate the government's surveillance for them.
AI agents like Claude Code are in an arms race to the bottom. Just like frontier model quality, they all converge on the same feature sets over time (plan mode, skills, remote execution, sandboxing, etc.), and opencode is holding its own, even preferred, in a lot of cases.
The real differentiated value comes from the environment the AI Agent operates in, the runtime.
The runtime is agent agnostic but provides a stable interface to your domain. People tried this with MCP, but MCP is a dead end. Local tool calling is so much better. Being able to extend integrations autonomously is the way, instead of being forced into a bloated bag of tools.
This is why we built swamp - https://swamp.club. We can integrate with any technology with an API, CLI, or code base and build repeatable, typed, validated automation workflows. There are no providers to wait for. No weird tool call paths trying to get the right concoction of MCP. The agent builds the integration itself, on the spot, in minutes.