The original expression came out of an internal company discussion that someone summarized (paraphrased) as "when there's a tough choice to make, one is usually less evil. Make that choice."
In the early days of Google in the public consciousness, this turned into "you can make money without being evil." (From the 2004 S-1.)
Over time, it got shortened to "don't be evil." But this phrase became an obligatory catchphrase for anyone's gripes against Google The Megacorp. Hey, Google, how come there's no dark mode on this page? Whatever happened to "don't be evil"? It didn't serve its purpose anymore, so it was dropped.
Answering your question really depends on your priors. I could see someone honestly believing Google was never in that era, or that it has been in it all along. I strongly believe that the original (and today admittedly stale) sentiment has never changed.
Making a loud affair out of its retirement, rather than quietly letting it collect dust and be forgotten over time, was most definitely not a good idea.
The public had already demonstrated that they adopted, misused and weaponized the maxim. Its retirement just sharpened the edge of that weapon. Now instead of "What happened to don't be evil?" it's "Of course Google is being evil," and everything is viewed through that lens.
A similar dynamic is playing out with Anthropic, whose founders left OpenAI in part over a philosophical split that could be described, if you'll grant a little literary license appropriate to this thread, as Anthropic choosing the "don't be evil" path. No surprise that we now see HN commentary skewering Anthropic for not living up to it.
I assume not, but the emphasis here is that a new tool is homogenizing these projects, and because of its scale it matters more that this homogeneous output meets a higher standard.
A hundred self-taught devs not implementing accessibility standards is a different problem than a school teaching 100 students with a curriculum that lacks these standards.
Yes, because a majority just use a CSS framework to save time, and all the big/common frameworks have put a fair bit of effort into their default colours and typography.
I would rather go back to when all side projects used Bootstrap than this purple-on-purple-with-glowing-purple mess of stuff we have now.
The "default" light-mode look of most popular UI frameworks wouldn't have that same issue unless you put a lot of time into customizing your own styling, which most side projects wouldn't bother with (unless that look and feel was the point of the project). There certainly would be poor UI decisions but more likely in layout/placement/navigation, which could still be problematic for accessibility but probably not in a "is this color scheme even readable" kind of way.
Plus, given time constraints, they generally wouldn't try to cram huge amounts of tiny text into every visible inch of the page (in that somewhat hard-to-read console-ish font Claude seems to love as a default) without some intentional reason to do so.
Maybe the dark mode/terminal font/high text density look presents as "cool looking" at first glance for one-shotting evals so they've all converged on it. But to OP's point, this seems like a solvable (or at least mitigable) issue if models or harnesses were concerned about it.
It depends. As with everything else, those who know what they're doing and can make good-looking and readable dark mode sites are the minority. Too many who think they're cool make squeaky white text on a pitch-black background and/or commit other sins. AI represents the average of the majority, not of the best.
Are these kinds of libraries a temporary phenomenon? It strikes me as weird that providers haven't settled on a single API by now. Of course they aren't interested in making it easier for customers to switch away from them, but if a proprietary API was a critical part of your business plan, you probably weren't going to make it anyway.
(I'm asking only about the compatibility layer; the other tracking features would be useful even if there were only one cloud LLM API.)
I've been maintaining an abstraction layer over multiple providers for a couple of years now - https://llm.datasette.io/
The best effort we have at defining a standard is OpenAI harmony/responses - https://developers.openai.com/cookbook/articles/openai-harmo... - but it hasn't seen much pickup. The older OpenAI Chat Completions thing is much more of an ad-hoc standard - almost every provider ends up serving up a clone of that, albeit with frustrating differences because there's no formal spec to work against.
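To show why Chat Completions works as an ad-hoc standard despite the lack of a spec, here's a minimal sketch: the same client code can be pointed at any provider that clones the shape just by swapping the base URL and model name. The endpoints and model names below are only illustrative, not recommendations.

```python
# Minimal sketch: one Chat Completions-shaped call, multiple providers.
# Base URLs and model names here are illustrative placeholders.
from openai import OpenAI

providers = {
    "openai":     ("https://api.openai.com/v1",    "gpt-4o-mini"),
    "openrouter": ("https://openrouter.ai/api/v1", "anthropic/claude-3.5-sonnet"),
    "ollama":     ("http://localhost:11434/v1",    "llama3.2"),
}

base_url, model = providers["ollama"]
client = OpenAI(base_url=base_url, api_key="your-key-here")  # auth requirements vary by provider

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(response.choices[0].message.content)
```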
The key problem is that providers are still inventing new stuff, so committing to a standard doesn't work for them because it may not cover the next set of features.
2025 was particularly turbulent because everyone was adding reasoning mechanisms to their APIs in subtly different shapes. Tool calls and response schemas (which are confusingly not always the same thing) have also had a lot of variance - some providers allow for multiple tool calls in the same response, for example.
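To make that variance concrete, here's the same hypothetical tool declared for two providers, in the shapes I believe they currently expect (the field names may have drifted since I last checked):

```python
# One hypothetical "get_weather" tool, two provider-specific declaration shapes.
# These reflect the OpenAI and Anthropic formats as I understand them today.

openai_style_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {                      # JSON Schema lives under "parameters"
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

anthropic_style_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "input_schema": {                        # same schema, but under "input_schema"
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
```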
My hunch is we'll need abstraction layers for quite a while longer, because the shape of these APIs is still too frothy to support a standard that everyone can get behind without restricting their options for future products too much.
It’s a complete mess, and the hardest part of this kind of tool is maintenance.
It’s not just about incompatible APIs, but also about how messages are structured. Even getting reliable tool calling requires a significant amount of work and testing for each individual model.
Just look at LiteLLM’s commit history and open issues/PRs. They’re still struggling with reliable multi-turn tool calling for Gemini; Kimi requires hardcoded rules (so K2.6 is currently unsupported because it’s not on the list); and so on.
Implementing the basic, generic OpenAI/Anthropic protocols is trivial, and at that point it almost feels like building an AI gateway is done. But it isn’t — that’s just the beginning of a long journey of constantly dealing with bugs, changes, and the quirks of each provider and model.
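As a toy illustration of where the real work goes: even after both base protocols are implemented, the gateway still has to normalize every provider's response into its own internal shape. The field names below reflect the OpenAI and Anthropic response formats as I understand them; the ToolCall type is just a made-up internal representation.

```python
import json
from dataclasses import dataclass

@dataclass
class ToolCall:
    """The gateway's own internal representation of a requested tool call."""
    name: str
    arguments: dict

def tool_calls_from_openai(message: dict) -> list[ToolCall]:
    # Chat Completions: message["tool_calls"][*]["function"], arguments as a JSON string
    return [
        ToolCall(tc["function"]["name"], json.loads(tc["function"]["arguments"] or "{}"))
        for tc in (message.get("tool_calls") or [])
    ]

def tool_calls_from_anthropic(message: dict) -> list[ToolCall]:
    # Messages API: content blocks of type "tool_use" with an already-parsed "input" dict
    return [
        ToolCall(block["name"], block.get("input") or {})
        for block in message.get("content", [])
        if block.get("type") == "tool_use"
    ]
```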
The providers themselves can't keep this straight even within their own ecosystem. Plus everyone is running at a million miles/hour.
For example, `Claude Code` used to set two specific beta headers (with version numbers) in order for their Max subscription to be supported.
OAuth tokens for the Max plan are different from how their API keys look. They look similar at a glance, but have a specific prefix that these tools pre-validate.
It barely works at this point, even within a single provider.
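Rough illustration of the kind of prefix pre-validation I mean; the exact prefix strings here are my assumptions, not the real values.

```python
# Hypothetical sketch of credential pre-validation; prefixes are assumed, not authoritative.
def classify_anthropic_credential(token: str) -> str:
    if token.startswith("sk-ant-oat"):   # assumed prefix for Max-plan OAuth tokens
        return "oauth"
    if token.startswith("sk-ant-api"):   # assumed prefix for regular API keys
        return "api_key"
    return "unknown"
```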
Apple and Microsoft want to be your robot exoskeleton, helping you do whatever you were going to do, but better. Google and Facebook want to do things for you and hand you the results.
Cool article! I never saw that one, even though I do read that blog.
I'd argue that it was from 2018, and it's a different world today. Since then, Microsoft has made a pretty extreme pivot towards the "do things for you" camp and they seem to have become absolutely convinced that "AI" was vaguely the thing they wanted to do for you.
Did it happen suddenly? Or did you go for a long time without using a sauna, and noticed the change only when you resumed? Did anything else about your body change, such as weight loss (perhaps from a GLP-1)?
It's possible that Covid had nothing to do with it, and your body is simply changing with age. It's depressing, but it happens!
No, you don't understand. The people at my company are auto-opt-in premium-communication value-add customer-relationship-establishment specialists. But otherwise, I agree with you: everyone else is a spammer.
I admit it's a habit that's probably weeks out of date. Earlier engines barfed on split GGUFs, but support is a lot better now. Frontends didn't always infer the model name correctly from the first chunk's filename, but once llama.cpp added the models.ini feature, that objection went away.
The purist in me feels the 50GB chunks are a temporary artifact of Hugging Face's uploading requirements, and the authoritative model file should be the merged one. I am unable to articulate any practical reason why this matters.