But you already knew how to code before LLM coding agents existed; juniors will jump straight into using agents without ever learning to code by hand. Hence the premise of the article.
Yes. Just like globalization created companies like TSMC, AI will do the same. Software engineers who don't rely on LLM code generators will have a moat because they can do it cheaply and sustainably.
Another reason is that LLMs train on the existing code we already know, so don't expect new programming languages or frameworks. This means the software engineering skills that exist today will stay relevant for a long time.
I am not so convinced by your last point, the one about new languages and frameworks. I think the cutoff date is closing in on the present. If models cannot easily become bigger, they will likely compete on up-to-dateness instead. Maybe they will be merely a few days behind. Or bigger models will make use of smaller but more up-to-date models.
I think engineering skills will remain relevant because of taste and proper judgement. A model trained on everything and the kitchen sink probably doesn't have the right bias for the specific problems in my project. Accepting too much AI-generated code without steering the ship will cause a drift of taste and ultimately produce a mediocre project, like one built by people without good domain knowledge and without good taste. It might even work as a business in the short term, but it lacks the long-term excellence that sets projects with good judgement apart from the common rabble.
> I think the cutoff date is closing in on the present. If models cannot easily become bigger, they will likely compete on up-to-dateness instead. Maybe they will be merely a few days behind. Or bigger models will make use of smaller but more up-to-date models.
But they will still rely on assembly, C, Rust, Linux, HTML, TCP/IP... No matter how up to date they are, they rely on the existing code they were trained on; they can't just create new languages without the training data.
That's not what I mean. Even I can invent my own language that only I use, but if that language is not widely deployed and used by other people or by LLMs, it's just a toy language.
If you agree that there is no absolute truth in complex problems, and that different parties can have different perspectives which are all true/correct from their own point of view, then this type of system can be far less efficient at making decisions than a single entity. It's like having multiple CEOs in a company, or design by committee.
> It's too bad these services live forever as cash cows to subsidize crazy ideas and schemes. They should graduate to public utilities once the product stabilizes.
In essence, if you take capitalism to its max, you will eventually meet Marx.
I don't buy this. Doesn't supply and demand fix this, unless of course you are dealing with a monopoly and have no choice? Don't customers find the most affordable goods in the marketplace, and aren't goods priced competitively? What am I missing?
If all the airlines (or whatever business) are doing something like this, how would you know what the prices are and what the alternatives are? Custom prices per user, based on willingness to pay, shift market value around a lot.
> If all the airlines (or whatever business) are doing something like this
Then that would be the main issue, and far worse than the algorithms: that's a monopoly or cartel, not a free market. In a competitive market, prices will be competitive, with very little arbitrage.
When the algorithm is for estimating consumer surplus, the line between coordination and independent cost-optimization disappears.
Why would you try to undercut on price if an "objective statistical AI algorithm" tells you you'd be leaving money on the table without gaining market share to compensate? All it takes is for the market to be sufficiently concentrated at that point.
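A toy sketch of that "leaving money on the table" logic, under the simplifying assumption that every seller's algorithm has estimated roughly the same willingness-to-pay (WTP) distribution: each one independently picks the revenue-maximizing price against that distribution, and none of them sees any gain from undercutting, with no coordination required. The numbers and distribution here are made up for illustration.

```python
import random

random.seed(0)
# Hypothetical WTP samples the pricing model has estimated for its customers.
wtp = [random.uniform(0, 100) for _ in range(10_000)]

def expected_revenue(price, samples):
    # Price times the fraction of customers whose WTP still clears that price.
    buyers = sum(1 for w in samples if w >= price)
    return price * buyers / len(samples)

# Each seller's algorithm independently picks the revenue-maximizing price.
best_price = max(range(1, 100), key=lambda p: expected_revenue(p, wtp))
print(best_price)  # lands near 50 for WTP uniform on [0, 100]
```

Every seller running this same optimization lands on the same high price, which is why the coordination/independent-optimization line blurs.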
I've got news for you there. The Biden administration tried to take the first real antitrust action in decades, and then suddenly a bunch of tech oligarchs switched from supporting Democrats to supporting a convicted felon.
Truth is, if mastodon.social gets DDoSed the same way Bluesky did, I can still use the rest of the network fine. The proof is in the pudding: tons of instances make up the fabric of redundancy. I think most people would have been better served if Bluesky had acted differently early on and rolled out in a sharded manner.
i get real tired of people trumpeting that bsky is distributed.
Can i run a private node? can i run a functional node completely within my network segment? because i can with gnusocial and misskey; i've never run mastodon; i am on fosstodon and a couple of other mastodon-likes.
bluesky is to discord what mastodon (fedi) is to IRC.
don't let the fact that most people use the main instances fool you, there's thousands (maybe tens of thousands) of instances. I haven't seen a tally recently, i forget the account that shows them for each "instance type", like pleroma, misskey, mastodon, pixelfed, whatever the reddit clone is, whatever the 4chan clone is, and so on.
anyhow when elon bought twitter mastodon surged. I hope they didn't spend millions upgrading the main instances because most of that dropped off because, you know, everyone's on twitter. only a few million on mastodon.
My whole point is, trying to shoehorn words like "distributed" onto a system that i cannot run independently doesn't work; it's just not distributed, that's all.
edit: maybe this is sour grapes because i never got an invite; but maybe i think it's just twitter with a different coat of paint and different buzzwords attached.
This is half true. If mastodon.social goes down, every single one of the accounts made on that instance goes down as well.
In truly decentralized protocols you own your identity and can take it elsewhere, for instance, in Nostr and SSB, a relay/pub going down is no big deal since you can connect to other servers and maintain communications.
historic posts from the known network (and sometimes media, depending on an instance setting) are cached on your own instance in ActivityPub.
interactions travel across the known network graph.
if an instance vanished forever, overnight, there is at least an imprint of it across the network, albeit instance specific.
that may be by design; there are jurisdictions where operators have to comply with takedown laws and the like. not sure how the ecosystems you mention deal with that in particular
That doesn’t answer the point I’m making.
If the instance your account was made on explodes, YOU lose your social graph. Whether some of your posts survive cached elsewhere is not relevant: your account is gone, and so are your connections.
You have no way to prove an account made after the original instance went down belongs to someone, that’s the issue with federated systems.
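The difference between server-bound and key-bound identity can be sketched in a few lines. This is a toy model, not any real protocol's data format; the handles and "npub" strings are made-up placeholders.

```python
# Toy contrast between server-bound and key-bound identity.

# Federated (ActivityPub-style): identity is name@instance.
federated_follows = ["alice@mastodon.social", "bob@fosstodon.org"]

def federated_reachable(handle, dead_instances):
    # The account lives on its home instance; if that dies, the identity
    # dies with it and cannot be proven again elsewhere.
    return handle.split("@")[1] not in dead_instances

# Key-based (Nostr/SSB-style): identity is a public key, bound to no server.
key_follows = ["npub1alice-placeholder", "npub1bob-placeholder"]

def key_reachable(pubkey, dead_relays):
    # A dead relay doesn't touch the identity; whoever holds the private key
    # can keep signing and publishing through any other relay.
    return True

dead = {"mastodon.social"}
surviving_federated = [h for h in federated_follows if federated_reachable(h, dead)]
surviving_keys = [k for k in key_follows if key_reachable(k, dead)]
print(surviving_federated)  # alice's identity died with her instance
print(surviving_keys)       # both identities survive
```

In the key-based model, proving "this new account is the same person" is a signature check; in the federated model there is nothing left to check against once the home instance is gone.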
As for content moderation, in nostr relay operators such as nostr.build handle legal takedowns on a daily basis, SSB is a little trickier since it’s mostly p2p but pubs are still able to control what flows through them to some degree.
the web, which also gets referred to as decentralized, suffers from the same proof problem. we have identity tied largely to dns. web sites can claim whatever. somewhere a line was drawn to indicate what matters most is creating something without a single point of failure?
The people I follow on mastodon come from a wide variety of instances. While mastodon.social is the largest instance, most of the accounts I follow are elsewhere.
Granted, all the smaller instances are likely easier to DoS, since they are small. But Mastodon is actually decentralized: if any one instance goes down, everything else keeps working. Unlike Bluesky and ATProto, which are more of a theoretical "could be" decentralized.
"AI slop is rapidly destroying the WWW; most of the content is becoming lower and lower quality, and it's difficult to tell if it's true or hallucinated. Pre-AI web content is now more like the gold standard in terms of correctness, and browsing the Internet Archive is much better. This will only push content behind paywalls, and a lot of open-source projects will go closed source, not only because of the increased work maintainers must do to review and audit patches for potential AI hallucinations, but also because their work is being used to train LLMs and re-licensed as proprietary."
Replace AI with "open source and Linux", and "open source" with "Windows" in the statements. That's what Microsoft's PR team would have said about open source and Linux about 20 years back in the 2000s.
After the unsuccessful FUD era, Microsoft is now embracing Linux, shipping it alongside Windows via WSL to counter macOS's Unix-like popularity and in response to Linux and open source dominance in the cloud OS space.
Even worse: Microsoft's FUD was mostly right. The joke about open source being communism played out straight. FOSS pretty much destroyed the ability to make money on software products, accelerating the transition to SaaS models where you can carefully seek rent from the shelter of your secure company servers (later, the cloud), and that is in large part responsible for the modern surveillance economy. As it turns out, some SaaS segments decayed to "free with ads", where, much like with OSS and locally-run software, you cannot compete on price with free.
It doesn't make any sense. Humans are non-fungible; AI is not. They are just arbitrarily imposing such limitations, which can impact businesses' workflows.
You go to an all-you-can-eat restaurant. Would you find it unreasonable for them to charge for each bag you start stuffing with food? You are doing the same: using a tool to extract more value in the same time. Of course they are going to charge more.
You can be against licenses because you think it should be possible to own software. (I somewhat agree with that.) But when you accept a fee for usage (i.e. a license), the vendor gets to pick the units in which usage is measured. Expecting the same cost while changing the pricing unit isn't likely to be accepted by the seller.
Your society and your status within it are predicated on making money. They are taxing your ability to leverage new tooling for a leg up. Their interest is to maximize or maintain the relative disparity between their value-extraction capabilities and yours; otherwise they lose at the game of Money. Isn't the society we've built, which we like to say is about goods and services but is actually only about accreting power and influence through relative disparity in money extraction, grand?
Don't like it? Build your own transnational ERP. /s to be clear.
Always has been since the ZIRP era. The ‘make something people want’ phrase was coined by a famous Silicon Valley investor. I heard he runs a popular forum.