I’d add that token pricing doesn’t work for anyone but the frontier models. Everything else will be commoditized. So Opus can charge us top prices per token until a cheaper (or local) model hits parity, and then the price goes to zero.
Nope, I just dump them all in a folder (~/scripts) that Claude can read, and it picks them up as skills. A good chunk of them are regex-based, many are find/replace-type tools, some are small code generators and template inflators, some are deployment tools, some are audit tools. I can't release them at this time; most of them are specific to our company, infra, and codebase (the main codebase is 1MLoC), sorry about that.
Start with something simple: "Let me build a script for Claude that can rename the namespace for all the files in a folder." If you have 100K+ files, the effort is worth it, and your tools start getting chained together too. Make sure each tool has only one purpose for existing and that its output is perfect. When Claude starts chaining them and you see what is possible, your mind opens up to even more possibilities.
Fixing isn't often a problem for CVEs. The hard part is almost always finding the CVE in the first place.
There are some extreme cases that might require extensive code changes, and those would benefit from LLMs. But a lot of the issues are things like off-by-one errors with pointers.
Most patches are non-trivial, each project/maintainer has a preferred coding style, and they're already being inundated with PRs and don't take kindly to slop.
LLMs can find the CVE with zero human interaction, so it scales trivially.
I’m writing a new type of CRDT that supports move/reorder/remove ops within a tree structure without tombstones. Claude Code is great at writing some of the code but it keeps adding tombstones back to my remove ops because “research requires tombstones for correctness”.
This is true for the usual approach, but the whole reason I'm writing this CRDT is to avoid those tombstones! Anyway, long story short: I did eventually convince Claude I was right, but to do it I basically had to write a structural proof showing clear ordering and forward progression in all cases. And even then, compaction tends to reset it. There are a lot of subtleties these systems don't quite handle yet.
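For context, here is a minimal sketch of the *conventional* tombstone-based remove that most published CRDTs use — i.e. the pattern Claude kept reinserting — not the tombstone-free design described above (which is exactly what the structural proof was needed for). All names and the `(replica_id, counter)` ID scheme are illustrative.

```python
import dataclasses

@dataclasses.dataclass
class Element:
    id: tuple           # (replica_id, counter): unique and totally ordered
    value: str
    removed: bool = False   # the tombstone: removes only mark, never delete

class TombstoneList:
    """Toy replicated list where remove() leaves a tombstone behind."""
    def __init__(self):
        self.elems: list[Element] = []

    def insert(self, elem_id: tuple, value: str) -> None:
        self.elems.append(Element(elem_id, value))
        # Sorting by ID gives every replica the same deterministic order
        self.elems.sort(key=lambda e: e.id)

    def remove(self, elem_id: tuple) -> None:
        # The slot survives, so a concurrent remote insert ordered
        # relative to this element still has its anchor point.
        for e in self.elems:
            if e.id == elem_id:
                e.removed = True

    def read(self) -> list[str]:
        return [e.value for e in self.elems if not e.removed]
```

The cost is visible in `self.elems`: it grows monotonically with every insert ever made, which is precisely what compaction passes and tombstone-free designs try to eliminate.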
Please do elaborate. I’ve only tried switching to codex once or twice, and it’s been probably 3 months since I last tried it, but I was underwhelmed each time. Is it better on novel things in your experience?
My experience is that it is much more terse and realistic with its feedback, and more thoughtful generally. I trust its positive acknowledgements of my work more than claude, whose praise I have been trained to be extremely skeptical of.
In my experience, Codex / ChatGPT are better at telling you where you're wrong, where your assumptions are incomplete, etc., and better at following the system prompts.
But more importantly, as a coding agent, it follows instructions much better. I've frequently had Claude go off and do things I've explicitly told it not to do, or write too much code that did wrong things, and it's more work to corral it than I want to spend.
Codex will follow instructions better. Currently, it writes code that I find a few notches above Claude, though I'm working with C# and SQL, so YMMV; Claude is terrible at coming up with decent schema. When your instructions do leave some leeway, I find the "judgment" of Codex to be better than Claude's. And one little thing I like a lot is that it can look at adjacent code in your project and try to write idiomatically for your project/team. I haven't seen Claude exhibit this behavior; it writes very middle-of-the-road code in terms of style and behavior.
But when I use them, I use them in a very targeted fashion. If I ask them to find and fix a bug, the prompt has as much detail as, or more than, a full bug report in my own ticketing system. If it's new code, it comes with a very detailed and long spec for what is needed, what is explicitly not needed, the scope, the constraints, what output is expected, etc., like it's a wiki page or epic for another real developer to work from. I don't do vague prompts or "agentic" workflow stuff.
How is FE not actual programming? I spend less time on FE than I once did, but it has presented some of the most interesting programming challenges I've encountered in my career. It's a large technical space, rich with 'actual' programming to be done.
From what I've seen a ton of people are using Claude Code or Cursor daily. I wouldn't be surprised if most startups are at 100% use right now. The big tech companies are a bit slower, but have started rolling out almost unlimited token use so I wouldn't be surprised if they are above 50% adoption by the end of the year.
Start with Claude Code if you haven't tried it yet, as it can edit your files directly and has some pretty fantastic skills/plugins that are quite interesting. (Copilot is unfortunately quite far behind.)