Have you noticed that comments like "this post seems written with AI" are now appearing on all posts, even those written without AI?
We're starting to become wary due to the abuse of AI and the proliferation of sloppy content, but also because we often have trouble distinguishing authentic content from slop.
Agreed. It's so tedious that the top comment on every HN post for the last six months is "this seems to be written by an LLM", with a bunch of back and forth on whether it is or not.
As a daily user of HN, this is not true. Maybe you are clicking on different headlines, but the ones I am clicking on don't have that as the top comment all that often. They don't even have it as a comment somewhere all that often.
I mean, a lot of it is. Green user, signed up 49 mins ago, 5 comments, which erodes trust in real people as well. I've noticed I've just felt less engaged and more anxious about all kinds of online content. While most platforms have long been botted, full of adverts, etc., you could always find niche corners where there were only people talking about things they genuinely cared about. Now even those spaces can be filled automatically.
Probably because most HN posts in the last six months have been written by LLMs. Not all, but that doesn't matter; trust has been eroded to the point that clicking on an article on the front page of HN and not being immediately met with the sloppiest slop imaginable is now a standout event in my mind.
I've been accused of being AI. My first impression when it happened was that, because I often deal in information people don't like hearing (it challenges their frame of mind, i.e., what they were trained on), "this is AI" is just another convenient tool: a way to dismiss an uncomfortable challenge (cognitive dissonance), and/or another means to keep the mental herd they belong to, or control, in line with dogma.
"This is AI" seems to be just an evolution of other thought-terminating clichés, where the negative conditioning associated with something is used in an abusive, manipulative way to evade a challenge or the truth itself. It is a common tactic of abusive people: the "beyond the pale" moralizing.
Yeah, whenever I see "It's not... it's...", I catch myself instinctively dismissing the content as AI slop, but on reflection I'm not so sure; it used to be a normal phrase.
But I do take extra care to avoid LLM-speak as much as I can.
This is really interesting, although I still can't get my head around the fact that core.async.flow topologies are immutable. I feel like most problems can't be solved with fixed topologies.
I guess one could in theory swap flows the same way values are swapped, but I wonder if this is the way this library is supposed to be used. I also wonder what happens to non-empty channel buffers in this case.
Flow is intended for processes with long-running, stable topologies. Rich has been thinking about options to "patch" the running topology, but it is quite tricky due to the concurrency issues, and I'm not sure that will ever be added.
Even though the flow topology is fixed, it's perfectly acceptable for a flow component to use other variable resources and act merely as a coordinator. So you could, for example, have a process that sends data out to an external dynamic thread pool and gets callbacks via a channel.
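To illustrate the idea (not the core.async.flow API itself), here is a minimal Python sketch of that pattern: a single fixed "coordinator" node reads work from an input queue, hands it to a dynamically sized thread pool, and results flow back through a result queue via completion callbacks. All names here (`coordinator`, the squaring work function) are hypothetical, chosen just for the example.

```python
import queue
from concurrent.futures import ThreadPoolExecutor

def coordinator(in_q, out_q, pool):
    """Fixed node in the topology: it never changes, but it delegates
    actual work to an external, variable-size thread pool. Results
    come back on out_q via completion callbacks."""
    while True:
        item = in_q.get()
        if item is None:  # sentinel: stop the coordinator
            break
        fut = pool.submit(lambda x: x * x, item)  # placeholder work fn
        fut.add_done_callback(lambda f: out_q.put(f.result()))

in_q, out_q = queue.Queue(), queue.Queue()
with ThreadPoolExecutor(max_workers=4) as pool:  # the "dynamic" resource
    for n in (1, 2, 3):
        in_q.put(n)
    in_q.put(None)
    coordinator(in_q, out_q, pool)
# pool shutdown on exiting the with-block waits for pending tasks

results = sorted(out_q.get() for _ in range(3))
print(results)  # [1, 4, 9]
```

The topology (one coordinator, one input channel, one output channel) stays fixed; only the pool behind it scales, which mirrors the suggestion above.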
> We're starting to become wary due to the abuse of AI and proliferation of sloppy content, but also because we often have trouble distinguishing authentic from sloppy content.
Another feature of this AI era that I hate.