People who say things like this are exhausting. Exhausting. You make it so very easy to classify you straight into the "loony" bin. People said that WSL was EEE for Linux. When that didn't happen, people said that WSL gaining GPU support was EEE. When that didn't happen, people said that WSLg was EEE for Linux. People said that PowerShell was EEE for Windows.
None of these happened. None of them even appear to have happened, and none of them appear to have even been planned. It's all a hallucination by people who talk like this. It's all imaginary. Show me any evidence of anything like this. ANY AT ALL. Not a hunch, not something that could be interpreted that way: show me the very clear, repeatable steps Microsoft used in the 90s to EEE something showing up in anything they're doing today.
They're too busy copiloting everything and arguing with each other to do this. Show me Microsoft Git with extra features over the open source version. Show me Microsoft Linux with extra features over the open source version. Show me Microsoft ANYTHING with extra features over the open source version they copied, and show me the doors slowly closing behind us. You can't. Because it isn't happening.
Git repos can't be locked up in the way you're describing. GitHub is a wrapper around Git; it would take an enormous amount of work for Microsoft to change this fundamental decision in the design of GitHub. GitHub is a Git server, over both HTTP and SSH. These are core decisions of the software that everything else sits on top of. If pulling Git repos over HTTP or SSH ever stops being supported, so many things are going to break that GitHub just won't be useful at all after that point.
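To make that concrete, here's a minimal sketch (OWNER/REPO is a placeholder) showing that stock git, with no GitHub-specific tooling at all, pulls a repo over either transport:

    import subprocess

    # Stock git talks to GitHub over HTTPS and SSH alike; no gh, no API
    # client, nothing GitHub-specific. OWNER/REPO is a placeholder.
    subprocess.run(["git", "clone", "https://github.com/OWNER/REPO.git", "repo-https"], check=True)
    subprocess.run(["git", "clone", "git@github.com:OWNER/REPO.git", "repo-ssh"], check=True)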
The gh CLI makes API calls, that's all. It just makes API calls easier, exposing a lot of them as friendly commands. It's not something that most GitHub users are even aware of, much less use. gh is not going to lock someone into a GitHub "ecosystem" A) because such a thing doesn't exist and B) again, most people don't use it.
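As a sketch of what gh does under the hood, this is the whole trick: a plain HTTPS request to the public REST API (OWNER/REPO and the client name are placeholders; GitHub requires some User-Agent, which is also how they can tell clients apart):

    import json
    import urllib.request

    # Listing a repo's open issues straight off the REST API, no gh needed.
    req = urllib.request.Request(
        "https://api.github.com/repos/OWNER/REPO/issues",
        headers={
            "User-Agent": "my-own-client/0.1",  # placeholder client name
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        for issue in json.load(resp):
            print(issue["number"], issue["title"])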
Microsoft is far more likely to kill GitHub because of people with MBAs (aka "morons") who somehow call the shots in business these days. They are not going to pilot it into the ground by EEE. They are going to pilot it into the ground because they don't know what they're doing, and they don't know what users want or what they like. That will be the fate of GitHub: incompetence will kill it, not cold, calculating, nefarious competence.
I think the downvotes on this comment are a shame. It's legitimately funny to write a multi-paragraph rant in high dudgeon calling other people "exhausting".
The comment's size is apparent before you read a single word, so you can skip it if it's too long. "EEE" comments are short and exhausting, and they come with no visible warning.
>I give it five years before the GH CLI is the only way to interact with GitHub repos.
I do not doubt this; it already seems to be a pain to deal with some repos on GitHub without using gh. I do not know what gh buys you, and since I have never used it, I do not know if it is "better". To me the standard git commands are fine. But yes, I think the push to force gh upon us is already underway.
I do use a command-line program as the only way to interact with GitHub (using the GitHub API), but I do not use GH CLI; I have my own implementation (which is much smaller than theirs). (They can see that I use my own, because of the User-Agent header, and they can also see what APIs are accessed.) (Git can also be used, but only for the functions of Git rather than the functions of GitHub.)
They're not mutually exclusive. Quite often developers use a filesystem together with a database.
Store the file on the filesystem under a unique name. Use the database to store the original name, the unique name, the owner, tags, and a description, and to handle locking, auth, uniqueness enforcement, and access tracking. (A minimal sketch of this hybrid approach follows below.)
Then try to keep things performant and handle concurrency!
Try doing all of the above just using a filesystem and you'll either:
1. Waste years making a rubbish database.
2. Do a bad job trying to do everything with flat files.
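Here's the minimal sketch of the hybrid approach promised above: bytes on disk, metadata in SQLite (the storage/ directory, files.db, and the schema are all hypothetical):

    import sqlite3
    import uuid
    from pathlib import Path

    STORAGE = Path("storage")          # hypothetical blob directory
    STORAGE.mkdir(exist_ok=True)
    db = sqlite3.connect("files.db")   # hypothetical metadata store
    db.execute("""
        CREATE TABLE IF NOT EXISTS files (
            unique_name   TEXT PRIMARY KEY,
            original_name TEXT NOT NULL,
            owner         TEXT NOT NULL,
            description   TEXT,
            UNIQUE (owner, original_name)  -- uniqueness lives in the DB, not on disk
        )
    """)

    def save(original_name: str, owner: str, data: bytes, description: str = "") -> str:
        unique_name = uuid.uuid4().hex
        (STORAGE / unique_name).write_bytes(data)  # the bytes go on the filesystem
        with db:                                   # the metadata goes in the database
            db.execute(
                "INSERT INTO files VALUES (?, ?, ?, ?)",
                (unique_name, original_name, owner, description),
            )
        return unique_name

Locking, tags, and access tracking would be more tables and columns on the same pattern.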
It's definitely not just for JS web apps anymore: you can run Rust, Python, and even standard Docker containers now. Plus, things like D1 (SQL) and R2 (object storage) give you the entire backend stack ready-made.
But you're completely right that it doesn't replace a raw VM. Cloudflare's goal is to abstract away the infrastructure so you don't have to manage a Linux server just to host an API or SaaS. If you actually need OS-level access, background daemons, or to run legacy code, you absolutely still need EC2 or a traditional VPS.
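For what it's worth, the Python flavor looks roughly like this; a minimal sketch based on my memory of the Python Workers beta docs, so the module and entrypoint names may have drifted:

    # Minimal Cloudflare Python Worker (beta), from memory of the docs;
    # the `workers` module and `on_fetch` entrypoint may have changed since.
    from workers import Response

    async def on_fetch(request, env):
        # Bindings like D1 (e.g. env.DB) or R2 (e.g. env.BUCKET) are configured
        # in wrangler.toml and show up on `env`; those names are hypothetical.
        return Response("Hello from a Python Worker")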
Using vim to do this seems silly. Nano is also nearly always present, and doing those “basic” things is 10x more straightforward in an editor that isn’t modal and just gets out of your way.
I've often in my career witnessed engineers who've cargo-culted the need for vim, but they only know how to hit Esc :wq or whatever, and one errant keystroke puts them in modal hell of some sort, often requiring that they just close the terminal and try again.
I don't begrudge those who want to become vim power users, though it seems wildly awkward to me; to each their own. But if you just want to do the "basics" in ssh sessions, nano makes more sense. PgUp, PgDn, Home, End, and the arrows work just fine to navigate, and the bindings for most things are printed right on the screen (except Ctrl-S to save… for some reason, but it works).
Can you cite a source for this? There's no question that they're vastly more complex, but I would think that modern car manufacturing is far more exacting (and efficient) than in the past.
If you're saying that older cars are more repairable, I'm happy to agree with you, even without a source to back up that claim.
An easily visible one is air intakes. Many manufacturers have shifted to plastic. Petro-engineering has advanced a lot, but plastic parts will still get brittle and break.
Interior-wise, you can look at things like fabric durability: lower-denier fabrics can be cheaper, but will wear sooner. Springs and foam in seats are another example, though this varies across manufacturers, models, and trims.
This isn't exclusive to financial-engineering-driven manufacturers like Stellantis or Nissan, either. Toyota has had issues with simple things like rustproofing (intentional or not) on first-generation Tacomas, leading to massive recalls, and with things like plastic timing guides prone to wearing out. Ford has its wet-belt engines, with timing belts running submerged in oil.
German cars needing body-off access for rear timing chain maintenance at 80k miles. Water-cooled alternators (really, VW?). All kinds of "why?" if you follow cars once they're 3+ years old.
It seems like there are a lot of regressions that probably result from cost cutting, while others may exist to simply drive service revenue.
In the United States, the Environmental Protection Agency assumes the typical car is driven 15,000 miles (24,000 km) per year. According to the New York Times, in the 1960s and 1970s the typical car reached its end of life around 100,000 miles (160,000 km). Due in part to manufacturing improvements, such as tighter tolerances and better anti-corrosion coatings, in 2012 the typical car was estimated to last for 200,000 miles (320,000 km), with the average car in 2024 lasting 160,545 miles, according to the website Junk Car Reaper.
I think you're comparing apples and oranges, as the parent appeared to be cataloguing recent design defects, which are pretty common too.
That'll influence the average reliability minimally, unless you were unlucky enough to buy one of those models.
Personally, that's why I'd rather get something at 120k miles when there are examples with 250k+ on the road by that calendar date. By then you'll know whether they designed a lemon.
Add: undersized Tacoma rear leaf springs, multiple manufacturers' head gaskets, a few early aluminum engines (? from memory)
There are many other considerations, too. Years ago I scraped Craigslist and Autotrader, grouping cars by generation/make/model/drivetrain, to predict longevity from the quantity for sale versus original sales figures. If a model sold 100k units per year for 10 years and only 3 were for sale in year 13, that isn't a great sign.

Cheap cars tend to have cheap owners, who are more likely to skimp on maintenance, typically leading to more accrued issues and a shorter lifespan for the vehicle. Some cars are just poorly engineered, and the markets are relatively efficient at pricing resale value.

The definition of "high mileage" varies by who you ask: domestics 150k, German 80k, Japanese 200k, Korean 100k. These are subjective averages (some cars, like Theta engines, Darts, and even late-model GM 6.2s, have engine failures under 40k), keyed to when models start disappearing because repairs cost more than the vehicle is worth, but that's what I saw then and still roughly observe.
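As a back-of-the-envelope version of that survival signal (all numbers hypothetical):

    # "Survival signal": how much of the original production is still
    # turning up for sale years later. All numbers are hypothetical.
    units_sold_per_year = 100_000
    production_years = 10
    total_sold = units_sold_per_year * production_years  # 1,000,000 built

    listings_year_13 = 3        # the suspect model has all but vanished
    healthy_listings = 900      # a comparable model of the same vintage

    print(f"suspect: {listings_year_13 / total_sold:.1e} of production listed")
    print(f"healthy: {healthy_listings / total_sold:.1e} of production listed")

A vanishingly small listings-to-production ratio at a given age suggests the fleet has already been junked rather than resold.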
Leaning on those previously mentioned product mixes, keep in mind that Japanese manufacturers weren't in the American market 60 years ago, so the market mix would be wildly different. (Multiple 400k+ mile Toyotas in my family, along with 60-year-old GMs, but with aftermarket or rebuilt engines.) The cost of vehicles (and repairs) relative to prevailing wages also shifts the repair-vs-replace balance; trade publications like Cox/NADA/Adesa/etc. are always cited by financial blogs when discussing consumer spending and the state of the economy via the average age of cars on the road.

Why cars get junked or totaled has shifted drastically, too. Steel bumpers were easy to replace; modern bumper covers with styrofoam backing and aluminum crumple zones, not so much.

"Tolerances" is a vague term in that veiled PR piece behind the wiki article. Machining has improved, and tech like direct injection and improved lubrication (synthetics) has done much more for efficiency and longevity. But in a lot of cases, manufacturers chase more horsepower from the same displacement by pushing tighter engine tolerances (crank/main bearings, pistons/rings, valvetrain) and things like higher compression ratios and revs, leading to more heat and earlier failure. So while you have better initial engineering, you are running closer to the point of failure. For another example, interference engines will grenade themselves if you ignore timing belt maintenance, but in the meantime you get more horsepower by getting more air into the cylinders.
A V6 Camry or Accord is going to have more hp, be faster, be more reliable at the same age, be quieter, and get 3x the mpg of nearly any muscle car of the past.
Unfortunately, it seems that many Americans prefer giant vehicles that place more emphasis on size (and status) than on materially important factors like reliability engineering or fuel economy.
Obviously these are anecdotal examples, but they can be confirmed by wasting hours reading about cars and watching mechanic review videos from people who work on them daily (I'm partial to the CarCareNut on YT).
Efficient manufacturing means exactly that: building stuff as cheaply as you can get away with.
There's a reason Roman architecture is still standing: it is massively overbuilt, the very opposite of efficient. (They also used to make the architect stand under his own arches as the temporary supports were removed, which could have contributed to the overbuilding.)
Is it? Every city in the Roman empire had temples and a forum. Where are they still standing? Maybe half a dozen survived, like the Pantheon in Rome or the temple in Nîmes, but that's extremely rare. Maybe they weren't overbuilt at all?
It seems like you two are working from different definitions of "built well": one about how well the car will perform over its lifetime, the other about the build process. Not necessarily mutually exclusive, but different.
I agree there are a lot of things outside the computer that are much more difficult to reverse, but I think we may be conflating things a bit. Most of us just need the code-and-data magic. We aren't all trying to automate doing the dishes or vacuuming the floors just yet.
I hope this sets the trend for cars too.
I would happily buy a new car with a 2000s Japanese engine and no tech.