> In practice, it was seldom done, and here we have LLMs actually doing it, and we're realising the drawbacks.
I spent some time dealing with this today. The real issue for me, though, was that the refactors the agent did were bad. I only wanted it to stop making those changes so I could give it more explicit instructions on what to fix and how.
> There are many reasons why others might not find what you wrote sufficient to understand it. Your boss ran it through AI for a reason, and that reason was most likely that the document was not understandable, or perhaps confusing.
It could also be because their manager is less technical. It's not unusual in my life for a PM to try to "rephrase" or restate things I've written in order to make them "easier to understand" in a way that in fact falsifies them or makes them more difficult to understand for the people who will actually have to work on/with it.
"Tell them [very specific answer targeted at X party]"
PM: "They are still asking about Y, see their response with the follow up question"
Then I look back at the original send: the PM has transformed [specific thing] into [something else]. X party has followed up with a question that was already answered by [specific thing]. Yes, PM, you might have been confused, but you weren't the target audience.
The whole "just sync everything, and if you can't sync everything, pretend to sync everything with fake files and then download the real ones ad hoc" model of storage feels a bit ill-conceived to me. It tries to present a simple facade, but I'm not sure it actually simplifies things. It always results in nasty user surprises and sometimes data loss. I've seen Microsoft OneDrive do the same thing to people at work.
Same. I lost a lot of photos this way. I've recently moved over to Immich + Borg backup, with a 3-2-1 setup between a local Synology NAS and BorgBase. Painful lesson, but at least now I feel much more confident. I've even built some end-to-end monitoring with Grafana.
Thanks... hence, 3-2-1 backups with an offsite copy :) appreciate it though. Will definitely be rolling my own NAS in the future; I just needed something easy at the time.
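For anyone curious what a 3-2-1-style Borg job roughly looks like, here's a minimal sketch. The repo URL, source path, archive name, and retention numbers are all hypothetical placeholders, not the setup described above, and the repository would need to be initialised separately (`borg init`) first:

```shell
#!/bin/sh
# Rough sketch of a 3-2-1-style Borg backup job. The repo URL, source
# path, and retention policy below are illustrative placeholders only.

# Hypothetical BorgBase repository; borg reads this from the environment.
export BORG_REPO="${BORG_REPO:-ssh://xxxx@xxxx.repo.borgbase.com/./repo}"

SRC="${SRC:-/volume1/photos}"   # hypothetical local NAS share

# Wrapper so the script can be previewed: DRY_RUN=1 prints each command
# instead of executing it.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

backup() {
    # Create a timestamped archive of the source directory
    # ({now} is a borg placeholder expanded at archive-creation time).
    run borg create --stats --compression zstd "::photos-{now}" "$SRC"
    # Thin out old archives: keep 7 daily, 4 weekly, 6 monthly copies.
    run borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
    # Periodic consistency check of the repository.
    run borg check
}
```

The offsite leg of 3-2-1 comes from the repo living on BorgBase while the NAS holds the local copies; sourcing the script and running `DRY_RUN=1 backup` previews the commands without touching anything.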
My own approach to simplicity generally means "hide complexity behind a simple interface" rather than pushing for simple implementations because I feel that too much emphasis on simplicity of implementations often means sacrificing correctness.
This particular example is a useful one for me to think about, because it's a version of hiding complexity in order to present a simple interface that I actually hate. (WYSIWYG editors is another one, for similar reasons: it always ends up being buggy and unpredictable.)
Also, to be clear, I don't think violence is the way to confront the oligarch sociopaths. There is clearly enough momentum to fix a lot of the monopoly / anti-consumer issues over the next 4-8 years. Assuming Trumpty Dumpty doesn't try to put our military at polling places or some other anti-democracy putinesque bullshit like that.
My bad, I thought you were referring to the implicit assumption about momentum, not the explicit assumption that Trump doesn't put troops at the polls. He sure is trying his best to normalize troops in civilian settings.
Also generally anything critical of capitalism, imperialism, or the military-industrial complex. It doesn't really matter whether it's a measured analysis or shrill shrieking; literally just using any of those words amounts to soliciting downvotes.
This is true but I don't think the downvotes are "fake" though. There's just a whole lot of people who truly believe they are Making the World a Better Place Through Capitalism
Maybe this is just a symptom of my screen addiction, but I keep a close eye on this site for much of the day. I’ve noticed a pattern: my comments initially get one or two upvotes (within the first 5-10 minutes of posting) but then quickly receive an equal or greater number of downvotes. It happens consistently enough to stand out. The upvotes arrive sporadically; the downvotes arrive simultaneously.
> I both use AI and have very strong anti-AI sentiment.
Me, too. The AI hype machine involves some really bad ideas. The amount of money being poured into "AI" right now distorts everything, and public understanding of how these tools work is low. A lot of contemporary uses, by both corporations and governments, are irresponsible and dangerous: likely to produce or reproduce harmful biases and to reduce the accountability of humans for crucial decisions and outcomes.
At the same time, it's useful for me at work, and I'm curious about it. I sometimes enjoy using it. It lets me do things I didn't have time for before. It eliminates some procrastination problems for me. I think its use in computing is also likely to become increasingly mandatory in the near-to-medium term, so it's probably good for me to get used to using it, to think about it, and to look for new useful things it can do for me.
And my own experiences in using AI are part of what drive my anti-AI sentiment as well! I see it do completely insane and utterly stupid things pretty much every day, both in my personal life and in my professional life. I have a visceral awareness of its unreliability because I use it frequently.
I should hope that as hackers we can muster some understanding and respect both for LLM users and for people with hard "anti-AI" stances. Even if you're "pro-AI" to the core (whatever that means), it's worth understanding the most serious and well-considered arguments of critics of LLMs and the contemporary "AI" race. You might even find, as someone who uses and enjoys using LLMs, that you agree with many of them.
> It's looking like AI will generate a modern version of the early 1800s Luddite Rebellion where British textile workers destroyed machines that displaced jobs, prioritizing factory owners' profits over workers. They targeted technology and industrialists.
It's worth remembering that the way that ended was extremely bloody, particularly for the Luddites themselves. There were a handful of extreme participants, and there was a murder, but there was also a hell of a lot of violence directed at anyone perceived as a Luddite, even though the Luddites themselves mostly avoided violence against other humans.
It would be good if we could somehow avoid such outcomes this time.
Greed drives most of the current crop of Tech Bros.
I once had the chance to be a Bro, far richer than any of the current ones, thanks to the still-secretive and anonymous "original-sn-adjacent cryptographic collective". Things, however, did not work out in my favor, thanks to other nefarious third-party actors. So, I know whereof I speak.
Any outcome is in the hands of the Tech Bros but by the looks of it, greed drives their every action, so things are not looking good!
That's pretty few, at least for the way I'm currently using LLMs. I have them do some Nix work (both debugging and coding) where accuracy and quality matters to me, so they're instructed to behave as I would when it comes to docs, always consulting certain docs and source code in a specific order. It's not unusual for them to chew through 200k - 600k tokens in a single session before they solve everything I want them to. That's what I currently think of when I think of "long horizon within a single context window".
So I need them to not only not devolve into gibberish, but remain smart enough to be useful at contexts several times longer than that.