Have you ever met someone with a true addiction to food? I'm not talking about someone with a habitual craving for sweets. I'm talking about someone who consumes food compulsively like a chain-smoker; someone who, in the absence of whatever their favorites are, will consume and consume with little regard for what the food is: an entire jar of pickles, multiple pounds of grapes, a whole rotisserie chicken, etc.
I used to be one. I once ate six baked white onions¹ in one sitting before vomiting everywhere and rethinking my life.
I broke through on my own, but I wish GLP-1s had been available at the time. Want to know what made breaking it so challenging?
1. Unlike other addictions, you have to continue consuming this one or else you will die.
2. Nearly every social event in the USA is tied in some way to food, which means you have to exercise willpower __constantly__ if you have a social life.
3. People are more interested in shaming you than supporting you. Most want you to fail.
Younger Dryas, definitely. It very likely halted progress toward human agriculture abruptly, then allowed it to restart just as abruptly. Makes the Medieval Warm Period and the Little Ice Age look like a joke: two massive shifts that punctuate the timeline of early human prehistory.
> The Younger Dryas (YD, Greenland Stadial GS-1) was a period in Earth's geologic history that occurred circa 12,900 to 11,700 years Before Present (BP). It is primarily known for the sudden or "abrupt" cooling in the Northern Hemisphere, when the North Atlantic Ocean cooled and annual air temperatures decreased by ~3 °C (5 °F) over North America, 2–6 °C (4–11 °F) in Europe and up to 10 °C (18 °F) in Greenland, in a few decades.
> I don't think very many people predicted that it simply wouldn't matter when photorealistic compromising images of whoever you don't like [could be generated on demand]
This goes hand in hand with the widespread death of belief in absolute truth in the US and other Western nations.
If this technology had been released during the height of the Monica Lewinsky scandal, I'd wager it would have had the impact most of us expected it to have, at least for a little while.
> AI and LLMs have changed one thing very quickly: competent output is now cheap.
If you're working on something not truly novel, sure.
If you're using LLMs to assist with, e.g., mathematics work on as-yet-unproven problems, then this is hardly the case.
Hell, if we just stick to the software domain: Gemini3-DeepThink, GPT-5.4pro, and Opus 4.6 perform pretty "meh" when writing CUDA C++ code for Hopper and Blackwell.
And I'm not talking about poorly-spec'd problems. I'm talking about mapping straightforward mathematics from annotated WolframLanguage files to WGMMA with TMA.
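For readers unfamiliar with the setup, "WGMMA with TMA" means the kernel consumes shared-memory tiles that the Tensor Memory Accelerator copies in asynchronously from a `CUtensorMap` descriptor built on the host. Below is a rough sketch of just that host-side descriptor setup; the tile sizes, helper name, and error handling are illustrative, not taken from my actual files, and a real kernel still needs the device side (mbarrier-tracked `cp.async.bulk.tensor` loads feeding `wgmma.mma_async`):

```cuda
#include <cuda.h>     // CUDA driver API: cuTensorMapEncodeTiled (CUDA 12+, Hopper)
#include <cstdint>
#include <cstdio>

// Illustrative sketch: encode a CUtensorMap describing 128x64 fp16 tiles of a
// row-major rows x cols matrix in global memory. The kernel would receive this
// map as a __grid_constant__ CUtensorMap parameter and issue
// cp.async.bulk.tensor.2d (TMA) copies into shared memory for WGMMA to consume.
// Assumes gmem is 16-byte aligned (cudaMalloc guarantees this) and that
// cols * sizeof(__half) is a multiple of 16 bytes, as TMA requires.
CUtensorMap make_tile_map(void* gmem, uint64_t rows, uint64_t cols) {
    CUtensorMap tmap;
    // Global tensor extents, innermost dimension first.
    uint64_t global_dim[2]    = {cols, rows};
    // Byte stride between rows; rank-1 entries (innermost stride is implicit).
    uint64_t global_stride[1] = {cols * sizeof(uint16_t)};
    // Shared-memory box copied per TMA request: 64 fp16 = 128 bytes along the
    // innermost axis, matching the 128B swizzle pattern WGMMA expects.
    uint32_t box_dim[2]       = {64, 128};
    uint32_t elem_stride[2]   = {1, 1};

    CUresult rc = cuTensorMapEncodeTiled(
        &tmap,
        CU_TENSOR_MAP_DATA_TYPE_FLOAT16,
        /*tensorRank=*/2,
        gmem,
        global_dim,
        global_stride,
        box_dim,
        elem_stride,
        CU_TENSOR_MAP_INTERLEAVE_NONE,
        CU_TENSOR_MAP_SWIZZLE_128B,
        CU_TENSOR_MAP_L2_PROMOTION_NONE,
        CU_TENSOR_MAP_FLOAT_OOB_FILL_NONE);
    if (rc != CUDA_SUCCESS) {
        fprintf(stderr, "cuTensorMapEncodeTiled failed: %d\n", rc);
    }
    return tmap;
}
```

The descriptor itself is the easy half; keeping the swizzle mode, shared-memory layout, and mbarrier bookkeeping consistent between the TMA copies and the WGMMA descriptors is the handoff where, in my experience, the models give up and fall back to plain `cp.async` or global loads.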
I am not sure you set it up right. Did you have a runnable WolframLanguage file so it can compare results? Did you give it H100 / H200 access to compile and then iterate?
My experience is that once you have these two, it does amazing kernel work (Codex-5.4).
> Did you have a runnable WolframLanguage file so it can compare results?
Yes.
> Did you give it H100 / H200 access to compile and then iterate?
Yes, via Lambda.ai. Also, FWIW, I run claude with --dangerously-skip-permissions and codex with the equivalent flag.
> it does amazing kernel work (Codex-5.4)
Specifically with WGMMA + TMA?
---
Once TMA gets involved both Claude and Codex spin endlessly until they dump TMA for a slower fallback.
I've observed this with Claude Code running Opus 4.6 at reasoning levels medium, high, and max; with "adaptive thinking" enabled and disabled; and with thinking tokens maxed out.
I've also observed this with Codex running GPT-5.4 as well as GPT-5.3-Codex, with reasoning effort from medium to xhigh.
---
I've also observed this on the web, as mentioned in my OP, with GPT-5.4pro (Extended Pro), Gemini3-DeepThink, and Opus 4.6.
That is informative, thanks! Yes, I observe the same thing: the model tends to give up (as you said, it dumps TMA for a slower fallback) and needs active steering to get good results. But with that setup it does get much further than a one-shot from the chat interface, and it knows far more about profiling and kernel coding than the chat models do.
It doesn't have to be anything as extreme as novel work. Frontier models still struggle when faced with moderately complex semantics. They've gotten quite good at gluing dependencies together, but it was a rather disappointing nothingburger watching Claude choke on a large xterm project I tried to give him. He spent a month getting absolutely nowhere, just building stuff out until it was so broken the codebase had to be reset and he'd start over from square one. We've come a long way in certain respects, but honestly we're just as far from the silver bullet as we were three years ago (for the shit I care about). I'm already bundling up for the next winter.
[1] https://www.youtube.com/watch?v=xV9spqCzSkQ