I'd wager that a "main agent" is really just a bunch of subagents in a sequential trench coat.
At the end, in both cases, it's a back and forth with an LLM, and every request has its own lifecycle. So it's unfortunately at least a networked systems problem. I think your point works with an infinite context window and one-shotting the whole repo every time... Maybe quantum LLM models will enable that
I don't buy it either. I've been building my own backend framework for the past 2.5 years, and even though it's a DSL over Python and there's no documentation online and barely one in my computer, Claude Code understands it with enough usage examples in my codebase.
On the front end as well: I've been able to go much farther on simple projects using Alpine than with more complex frameworks. For big products I use Elm, which isn't exactly the most common front-end choice, but it provides a declarative programming style that forces the LLM to write more correct code faster.
In general, I think introspectable frameworks have a better case, and whether they're present in the training data becomes less relevant as the introspectability increases. Wiring the Elm compiler into a post-write hook means I've basically not written front-end code by hand in 4 or 5 months. Using web standards and micro-frameworks with no build step means the LLM can inspect the behaviour through the Chrome DevTools MCP and check its work much more effectively than having to deal with the React loop. The ecosystem there is so fragmented that I'm not sure about the "quality because of quantity of training data" argument.
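As a rough sketch, a post-write compile check in Claude Code can look like this (the hooks schema is Claude Code's settings format; the `Write|Edit` matcher and the `src/Main.elm` path are assumptions about a typical Elm project):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "elm make src/Main.elm --output=/dev/null"
          }
        ]
      }
    ]
  }
}
```

The idea is that every file write is immediately followed by a compile, so a failed build surfaces the Elm compiler's famously readable errors right at the moment the mistake is made, instead of at the end of a long generation loop.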
Author here. This is a fair comment. If you already have a corpus that can be used as context, it's not like the LLMs will force you into React; there's probably enough bias (in a good way) to ensure the tool continues to be useful.
What I was trying to get at in the post is that net-new experiences are where I see a massive delta.
Yeah for sure but I think frameworks will adapt. It's like going back to 2002 and saying that it's better to program in Java because of all the IDEs available and all the corporate money being poured into having the best developer experience there can be. But since LSP arrived, developers choosing a smaller language suffer much less.
The 'LSP' that would allow new frameworks or languages to shine with coding agents is already mostly here: things like hooks, MCPs, ACP, etc. They keep the code generation aligned with the final intent, and syntactically correct from the get-go, with the help of very advanced compilers/linters that explain to the LLM the context it's missing.
That's without hypothesising about future model upgrades where fine-tuning becomes simple and cheap, and local, framework-specific models become the norm. Then React's advantage (its presence in the training data) becomes a toll (conflicting versions, fragmented ecosystem).
I also have a huge bias against the javascript/typescript ecosystem, it gives me headaches. So I could be wrong.
I am curious actually! In general about your experiments, but also about integrating this detection algorithm into wider systems. Did you run any AutoGPT-like experiments with the AI-generated text as a critique? My use case is a bit different (decision-making), so I play with relative plausibility instead of writing style. But I haven't found convincing ways of "converging" quite yet, i.e. benchmarks that don't rely solely on LLMs themselves to grade their own output.
To clarify, the style experiment I've referenced earlier was just that – an experiment. I did not implement those methods into my software. Instead, I focused on how to eliminate things like 'talking with authority without evidence', 'contradictions', 'talking in extremely abstract concepts', 'conclusions without insights', etc.
If you need a dataset to benchmark against, download any articles from pre-2017. There are a few ready-made datasets floating around the Internet.
He mentions it in the video. They couldn't rely on Elm, which was too young at the time, for their migration away from Flash. The scale of creating a language was too big for Prezi's immediate needs back then.
I went to a JS conference at the time and the CTO of Prezi was talking to me about compiling Haskell to JS; he really wanted Haskell. I told him the performance wouldn't be good. He wasn't happy with that answer, but I saw in the end that they went with Elm.
No, life goes on and it's not a calamity. It's pretty bad, but we'll have another crisis if tourism goes down because of the bad publicity of the earthquake.
For many Moroccans, welcoming tourists is their livelihood. The best thing you can do to support them is to keep your plans in place.
I'm curious what companies stand to gain the most from this discovery... I understand the promises of LK-99, but where do you think we'll see it applied first?
Depends on its price and properties. If it is affordable, I would go with electric motors. Not having to cool them would be hugely beneficial. If it is expensive, then MRI machines and space equipment.
As an Arabic speaker, the verse in the Qur'an means to me that fasting helps you remember Allah.
Islam has no concept of empathy for all beings, or enlightenment really. The assumption is that without the fear of God and Hell, people would sin by default, and religious practices are meant to please God.
Progressive Muslims often mention Sufis, but they have been persecuted enough, for holding God-consciousness-type beliefs amongst others, to be considered a different thing altogether.
In my use case, I find the lack of features in msgspec more freeing in the long run. Pydantic is good for prototyping, but with msgspec I can build nimble, domain-specific interfaces with fast serialisation/deserialisation without having to fight the library. YMMV!