A classic hype merchant sales pitch: believe me, I was a doubter just like you, but I saw the light thanks to [insert latest model]!
(Which, for anyone familiar with your long comment history as a regular HN poster, is comically absurd to imply. You've been reliably adamant for years now that AI will demolish this or that entire industry overnight).
The GP commenter has caught my attention over the last few days. Judging by their claims of productivity, they should already be a billionaire. I'm curious what motivates them to make such outrageous claims.
I’ve seen their outrageous comments so often I wonder if it’s Sam Altman’s alt account. Probably the biggest AI snake oil merchant on the forum these days, one who takes a sadistic pleasure in watching people lose their jobs to AI.
> You've been reliably adamant that AI will demolish this or that entire industry overnight for years at this point
We'll see who's right. I never said "overnight". Let's check in at the decade's end.
Y'all dunked on me in 2019 when I said AI was coming for Hollywood. Have you seen Seedance 2.0?
It's coming for us too. I've written five-nines, active-active systems that move billions of dollars daily. These systems can work in those contexts. I didn't think we'd be here this soon, and I actually thought LLMs were a dead end. I was wrong.
I'm not trying to sell Claude Code. I hate the concept of hyperscaler companies. I want there to be viable open source coding models - there just aren't. I'm merely reporting on my findings.
I sit at my machine for hours now in a prompt, review, test cycle. It's addictive. I'm getting more done at a faster rate than any time in my professional career. I'm excited, and I'm also worried. I don't know what happens after this.
If you've seen how much I praise AI, then you've also seen how much I rail against monopolies. I am worried these giant companies are going to take the means of production from us. I don't think enough people are freaking out about this. It's a very real possibility.
I'm just going to keep building. But you should pay close attention to what's happening.
You're not wrong in principle, but you've made some specific extraordinary claims. If you're really that productive, generating useful work product at a rate of 20 kLOC/day by yourself, people would pay just to learn how you're doing it!
> Y'all dunked on me in 2019 when I said AI was coming for Hollywood. Have you seen Seedance 2.0?
Being right at the wrong time is often worse than just being outright wrong, I've found.
You are reviewing 20 kLOC per day? That is about 0.7 LOC per second across an 8-hour working day, assuming you do literally nothing else but review. Pardon me, but I don't find that very believable.
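For the curious, here is the back-of-the-envelope arithmetic behind that figure (a sketch assuming an uninterrupted 8-hour day, no meetings, no writing, no breaks):

```python
# Review throughput implied by the 20 kLOC/day claim.
loc_per_day = 20_000
seconds_per_workday = 8 * 60 * 60  # 8-hour day with zero breaks

loc_per_second = loc_per_day / seconds_per_workday
print(f"{loc_per_second:.2f} LOC/s")  # ≈ 0.69 lines every second, sustained all day
```

Put another way, that is roughly 42 lines every minute for eight hours straight, with no time left over for prompting, testing, or thinking.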
I don't completely disagree with what you are saying -- but there's no way you produce and review 20k lines a day. That part is clearly false. Even if Claude generated it in 10 seconds and it needed no changes you probably still couldn't review that much sensibly.
I used Claude a lot on a recent project where it probably wrote 15-20k lines in a month, and it was overall excellent.
So you now have 400Kloc of Rust code? Doing what? How much of that is "new"?
I can't get Augment / Opus 4.5 to edit a few C++ files from within VSCode without it going off on a wild goose chase or getting stuck in an infinite loop after I tell it what it should be doing: "oh, you're right, I need to do X", "To do X, I must understand how to do Y", "I see now that to do Y, I should look at Z", "Let me look at Z", followed by: "oh, you're right, I need to do X"...
My point is that C++ is a tarpit, famous for nerd-sniping and leading otherwise intelligent people into wild goose chases.
Have you read Alexandrescu’s “Modern C++ Design”? It’s like a piece of modern art, but completely lacking in self-awareness. There’s just something about C++ that lures intellectuals in, like ice age mammoths to a tar pit.
Small wonder LLMs also fall victim to C++’s deranged ways.
To do what, exactly, and are people paying you for your output or are you just making things for yourself?
Building things at a mature company with a market is a lot different than hacking together your own tools. There are a lot more people you can let down at scale.
By that definition of reviewing code, watching the movie is the same as reading the book it’s based on.
No human has ever reviewed 600k lines of code in a month, ever. It’s hard to find someone who can even read and understand that amount in that time.
> They just need to put all of the engineers on HN out of work.
I think you've crossed the line from being an AI maxi to just rage baiting. This comment is a pointless anecdote at best, please take your ridiculous FOMO takes elsewhere.
I’m convinced these “guys you gotta believe me I’m a seasoned veteran and this shit is the real deal” posts that show up in every AI thread are either coming from Sam Altman or a bot.
I'd be interested in seeing how exactly the lawyers figured out how to define AGI. It must be a fairly mundane set of KPIs that they just arbitrarily call AGI; the term will probably devalue significantly in the coming years.
The actual quote is this though:
> hitting an AGI milestone or pursuing an IPO
So it seems softer than actually achieving AGI or finalising an IPO.
I'd assume the real trigger here is "reaching AGI," which would help OpenAI shrug off some of their Microsoft commitments thus making OpenAI models available on Amazon Bedrock. Which is what Amazon is really after.
Very convenient to put "AGI" in all these agreements because the term is fundamentally undefinable. So throw out whatever numbers you want and fight about it and backtrack later.
The problem with AGI is not that it's undefinable, but that everyone has a different definition. Kinda like consciousness in that regard.
Fortunately, OpenAI already wrote theirs down. Well, Microsoft[0] says they did, anyway. Some people claimed it was a secret only a few years ago, and since then LLMs have made it much harder to tell the difference between genuine leaks and hallucinated news, but there is at least a claim of a leak[1].
Extremely debatable. Especially because there is no single "The Turing Test" [0], only an imitation game and a few example exchanges described by Turing. I recommend reading the original paper before making bold claims about it. The bar for the interrogator has certainly been raised, but considering:
- the prevalence of "How many |r|'s are in the word 'strawberry'?"-esque questions that cause(d) LLMs to stumble
- context window issues
It would be naive to claim that there does not exist, or even that it would be difficult to construct/train, an interrogator that could reliably distinguish between an LLM and human chat instance.
Sure, when the expected monetary value was 0. Then they started claiming that investing $1,000,000,000,000.00 (that's $1T) into a 4-year-old startup was a good idea. Change the valuation, change the goal. Then the goal was to be better than human employees (or at least more efficient, or even just improving efficiency), because without that the value of the LLM is far lower than what it is being sold as. All the research so far says that LLMs fall far short of that goal. And if this were someone else's money, fine. But this is basically everyone's retirement savings. Again, higher valuation, higher goal. Finally, when you start losing people's retirement savings, criminal penalties start getting attached to things.
It hasn't even passed the original Turing test, depending on the questions asked. There is an unlimited number of questions that cause LLMs to give inhuman-looking answers.
As for writing in general, slop score is still higher than the human baseline for all models[1], so all a human tester has to do is grade the writing and make the human write a bunch; the interrogator is allowed to submit an arbitrarily long list of questions.
I mean… just ask about something "naughty" and they'll fail? At the very least you'd need to use setups without safeguards to pass any Turing test…
The Turing test could also be considered equivalent to "can humans come up with questions that break the AI?" and the answer to that is still yes I'd say.
Has OpenAI laid out the specific definition of what an AGI is for this case? The one from their mission is quite vague and the general community has nothing close to a universal common definition... which means they will most likely just define it as what they already have when the timing is right.
> Has OpenAI laid out the specific definition of what an AGI is for this case?
Yes and it's actually hilarious: a system that can perform most economically valuable work better than humans, or specifically when the AI generates $100 billion in profits.
At least in their Microsoft contract it means $100 billion in profit, though they don't need to have actually made that money, they just need to show they're on track to do so.
Like the other comment says, OpenAI can force itself into massive index funds like VOO/FXAIX etc., making retail folks provide a liquidity exit for OpenAI's investors.
Those conditions are an IPO or reaching AGI [1].
Nvidia and SoftBank will pay in installments.
Also very interesting that Microsoft decided not to invest in this round. A PR statement was made, though [2].
[1] https://americanbazaaronline.com/2026/02/26/amazon-to-invest...
[2] https://openai.com/index/continuing-microsoft-partnership/