The pricing models published on AWS's website almost certainly have nothing to do with the pricing discussed behind closed doors for a $100 billion commitment.
Of course not, but unless they’re getting the sweetheart deal of a lifetime from Amazon of all places, it’s still hogwash. We’re talking about enough capital to build their own fab and a dozen datacenters*. This deal isn’t going to be buying existing capacity, because that’s already stretched; it will be paying for new buildouts.
Afterwards, Amazon will be milking the machines these commitments buy for nearly a decade. That tradeoff makes sense at a small scale (up to hundreds of millions or even a few billion dollars), but at tens or hundreds of billions?
Color me skeptical. There are plenty of other side benefits, like upgrading to the newest GPUs every few years, but again, we’re talking about paying for new buildouts with upfront commitments anyway.
* Obviously the timelines, scientific risk, and opportunity cost make this completely infeasible, but that’s the scale we’re talking about. It’s a major industrial project on the scale of the thirty-year Space Shuttle program (~$200 billion).
Having just checked my child in for their doctor appointment, I’m reminded that 90% of web software would be dramatically improved by using very boring best practices and building readable, accessible web pages.
GPT 5.4 is the surly physics PhD post-doc who slowly and angrily sits in a basement to write brilliant, undocumented, uncommented code that encapsulates a breakthrough algorithm.
Opus 4.6 is the L5 new hire SWE keen to prove their chops and quickly turn out totally reasonable code with putatively defensible reasons for doing it that way (that are sometimes tragically wrong) and then catch an after-work yoga class with you.
> and then catch an after-work yoga class with you.
That's cute, but do you mean something concrete by this? I.e., is there some non-coding prompting you use it for that you're referring to, or is it simply a throwaway line about L5 SWEs (at a FAANG)?
(FWIW, for some reason I find myself using ChatGPT rather than Claude for non-coding prompting, e.g. random questions like whether oil is fungible.)
I'm not a fan. But what Anthropic SHOULD have done is use plain ol' SSO. Google, GitHub, Microsoft, etc. logins with the option to do this magic link stuff. The third party auth providers would use passkeys at the user's discretion.
I store passkeys and TOTPs in 1Password. I know it means the secrets aren't protected by a hardware secure element, but in return they're trivially synced across my devices.
That tradeoff is worth it to me; it's certainly no worse than email or SMS as a second factor.
The FAA just effectively grounded all drone flights. NOTAM FDC 6/4375, published two weeks ago, makes it illegal to operate a drone within 3,000 feet laterally of any Department of Homeland Security mobile operation in the United States. But given that such locations are not published or marked, and the operations are typically run from unmarked vehicles, it is now impossible for any drone operator anywhere in the United States to be sure they are compliant with the TFR. (It's pretty clear this was published to deter filming of ICE agents in Minneapolis.)
I've started an AI content disclosure working group at the W3C, if folks are interested in helping craft a standard that allows websites to voluntarily disclose the degree to which AI was involved in creating all or part of a page. That would enable publishers to comply with this law as well as Article 50 of the EU AI Act.
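To make that concrete, here's a rough sketch (purely illustrative; the group hasn't settled on any vocabulary yet, so every field name below is a placeholder I made up) of the kind of machine-readable disclosure a site might publish alongside a page:

```python
import json

# Hypothetical AI-involvement disclosure a site could serve alongside a page,
# e.g. at a well-known URL or embedded in the HTML head. Field names and
# values are placeholders, not anything a working group has adopted.
disclosure = {
    "page": "https://example.com/articles/42",
    "ai_involvement": "assisted",          # e.g. none / assisted / generated
    "scope": ["summary", "header-image"],  # which parts of the page AI touched
    "human_review": True,                  # whether a person reviewed the output
    "disclosed_at": "2025-01-01",
}

print(json.dumps(disclosure, indent=2))
```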
Hot saltwater is the worst substance on earth, excepting, maybe, hydrofluoric acid. You really don't want to cool things with ocean water over an extended period of time. And filtering/purifying it takes vast amounts of power (e.g. reverse osmosis).
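For a rough sense of why the power bill matters: seawater reverse osmosis typically needs on the order of 3-4 kWh per cubic metre, so even the pre-treatment adds up. A back-of-envelope sketch (the constants below are my assumptions for illustration, not figures from the thread):

```python
# Back-of-envelope: energy to purify seawater via reverse osmosis before
# using it in a cooling loop. Both constants are rough assumptions.
RO_KWH_PER_M3 = 3.5        # typical specific energy for seawater RO
WATER_M3_PER_DAY = 10_000  # hypothetical daily water draw for a large facility

daily_energy_kwh = RO_KWH_PER_M3 * WATER_M3_PER_DAY
avg_power_kw = daily_energy_kwh / 24

print(f"{daily_energy_kwh:,.0f} kWh/day, ~{avg_power_kw:,.0f} kW of continuous load")
```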