This is literally the same for all professions, only in CS/SE it is for some unknown fucking reason considered "a problem". Why isn't there a "replace extremely expensive doctors/lawyers with AI" movement?
Because programmers made the LLMs, and they first applied it to the problems they know, so the examples of "replacing a programmer" are abundant. Then the hype train rolled in and now it's suddenly going to replace everything, just that software engineering is the low-hanging fruit since they already have "proof" that it works in that domain.
Hint: it actually doesn't work at real depth, and why not is fairly well explained in TFA: the hype always underestimates the depth of the field. So these advances do help make easy things easy (in the case of LLMs, because they've been trained on a billion examples of the easy stuff), but they don't really end up helping with the hard things (because they only produce genuinely new things, ones not covered by their training data, by getting lucky, and because tedious things are different from hard things).
Overly optimistic people are already talking about using LLM-based AI as a way to provide healthcare access in underserved (i.e. rural) areas. There are already lots of studies underway on things like using AI to identify tumors and cancers in MRI and other imaging.
There are national headlines every few months about lawyers getting in trouble for submitting LLM-hallucinated citations in court, so lawyers are starting to do it to themselves as well.
It's early days yet, because unlike with most CRUD apps, the consequences of hallucinations and outright bad calls in medicine and law can be life-ending. Unless the bubble pops soon, though, it's coming.