Hacker News | hostein's comments

> if you piss and moan about their "censored" training data (lol) they'll compose you a sad symphony for violin on the world's smallest LLM

^^ So how would you feel if someone's LLM could create content that your LLM wouldn't be able to detect as such, because of censorship and not the sophistication of the prompt? ^^

"The damages are unseen" sounds almost ridiculous, but that's how nudges work. That's how you change the outcomes of elections, and sentiments towards personalities and events, within small segments of time, space & demographics with much less effort than before, g l o b a l l y. Without the need for a cooperative medium in those other countries.

We've seen nothing yet, because the tools available, tried and tested, were working well enough. But look at the rise of the Right in Europe, driven by bad information flow, fake news, and methods of mass distraction aimed at segments of the population. And does it inspire anyone on the other side to do anything more than they did before? Does it make anyone more ambitious? Or are people thinking, "fuck this shit, I just want to have a fun life" or "fuck this, I'm getting rich"? Is it all a matter of character, or the product of outside influences that gnaw at your attention and periphery?

> United States has operationalized autonomous kill-chains

Kids with guns. All over the world. Will they really get to play? Islam vs. Christianity is still a thing, right? Who is crazier about it? Sharia or Capitalists? Can someone light the fuse? Will someone? Ukraine apparently worked quite well. Without AI. Took them over 20 years. Numb half the population, make a quarter want to get the fuck out at all costs by radicalizing the remaining quarter to the point where they start beating people up because they don't like some of their genes & history.

They will soon be able to pull off similar shit within one term.

> Now we have Llama, Phi, open derivatives, LoRAs and finetunes

... but you still can't send their output blindly into the world. Which is why we're still waiting for the damages to appear. You don't want an LLM's output, whether it's code, some image, or some narrative, to leave a trail that leads back to you, and you sure as fuck want your proverbial bomb or "mind virus" to stay undetected as long as planned.


> So how would you feel if someone's LLM could create content that your LLM wouldn't be able to detect

I don't think I care. I don't rely on LLMs in any meaningful way and don't intend to in my lifetime. Furthermore, we live in an era where censorship is so commonplace that the majority of first-world citizens are guaranteed not to care. Big tech preconditioned us to want this; it should be glaringly obvious if you own recent iOS/Android hardware.

How would we feel if Google refused to present search results because of legally mandated censorship? They already do it. There are things you can look up that return nothing, even when there's an indexed result that would help you. And nobody cares. We shake our fists at Google, HN says the meanest things about Alphabet, and we all watch several hours of YouTube a week while our families depend on Google search results like their mortal souls.

> That's how you change the outcomes of elections, sentiments towards personalities and events within small segments of time, space & demographics [...] Without the need for a cooperative medium in those other countries.

Which is all grand and nice, but we've been dealing with misinformation and counterintelligence since World War II. For the past 100 years it has been common knowledge that you should encrypt any message with valuable contents; functional governments and bureaucracies aren't going to be fooled by AI. Not only have they prepared for this exact breed of information warfare, they are usually designed to thwart the social engineering attacks that individuals are vulnerable to. For example, AI isn't going to launch an ICBM, since the chain of command requires physical operators, executive validation and code distribution, and a radio operator who remotely okays the launch. Similar restrictions exist for almost any lethal authorization from a governmental body, for obvious reasons.

A robot that generates FOIA requests is 100x more dangerous to the US government than any standalone AI will ever be.

> Can someone light the fuse? Will someone?

Well, in my last comment I sorta held Israel's feet to the fire, so if the Lavender reports turn out to be true then I guess we'll blame radical Judaism rather than Islam or Christian nationalists. Even then, though: as detestable as Lavender is, the main "problem" is that human operators aren't doing their job. The United States also uses computers to generate tactical ground targets, but it doesn't use them to maximize damage or reduce oversight in the kill-chain. Again, examples like the Patriot system are good enough evidence that smart countries can design autonomous weapons with zero "skynet" factor built into them.

My point isn't warmongering, though you seem to want to interpret it that way. I'm drawing a distinction between the way autonomy is treated on the battlefield, in reality. If the "damages" we see from AI mostly arise from careless generation of JDAM targets, there is practically zero risk of civilian misuse in the same fashion.

> but you still can't send their output blindly into the world. Which is why we're still waiting for the damages to appear.

If you're implying that LLMs will never present feasible damages until they approach 100% accuracy, then I posit that we have nothing to worry about. LLMs will not be correct, ever. It's part of their architecture: you chop words up into meaningless tokens, then correlate them with weights before inference with RNG. Tokens themselves are not meaningful; they're a hack to get LLMs to generate strings of text. Weights are imperfect, and do not inherently present an opportunity to become perfect, or even competitive with rational human thought. The RNG can be removed, but it doesn't make the LLM any smarter; it just hardcodes bad or random or unusable answers. We want the random element for... well, "creativity". Which is at odds with our desire for rational certainty.
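The role the RNG plays at inference time can be sketched in a few lines. This is a toy, using a made-up next-token distribution rather than any real model's logits: setting the temperature to zero removes the randomness, but it only hardcodes the single highest-weight token, right or wrong.

```python
import math
import random

# Toy next-token distribution: weights a model might assign after "The sky is".
# These numbers are illustrative, not taken from any real model.
logits = {"blue": 4.0, "clear": 2.5, "falling": 0.5, "Tuesday": -1.0}

def sample(logits, temperature):
    """Softmax sampling; temperature 0 collapses to the single argmax token."""
    if temperature == 0:
        # No randomness left: the same token every time, smart or not.
        return max(logits, key=logits.get)
    scaled = {t: v / temperature for t, v in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    weights = {t: math.exp(v - m) for t, v in scaled.items()}
    r = random.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token

random.seed(0)
print(sample(logits, temperature=0))  # deterministic: always "blue"
print({sample(logits, temperature=1.0) for _ in range(300)})  # varied tokens
```

With temperature 0 the output is fixed but no more "correct"; with temperature above 0 you get variety, which is exactly the creativity/certainty trade-off described above.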


> My point isn't warmongering, though you seem to want to interpret it like that.

I did, yes. Mostly because you seemed a bit fixated on toys.

> we've been dealing with misinformation and counterintelligence since World War II

Yes, but I'm not focused on nations vs. nations. Interest groups, corporations, and trolls have had plenty of fun misleading and polarizing people. Social media is the most pervasive mass medium, and it's full of shit, trigger material and misinformation deployed by trolls, news channels and political proponents. Now there are more capable AI characters, specialized for specific target groups and even single personalities within target groups, running specific campaigns to push them in some maximized direction, all leading up to some event that suddenly turns their whole world of facts and opinions upside down. That shit happened during this fucking pandemic and the Ukraine war multiple times, right in front of everyone's eyes. Training sets allowed for massive scale and maximized (and extremely fast improving) personalization. So

> Not only have they prepared for this exact breed of information warfare, but they are usually designed to thwart social engineering attacks that individuals are vulnerable to.

is worthless and meaningless. And during the rare internal conflicts within even the American government, it will be weaponized and used as a measure of preemptive social engineering.

We've been dealing with a lot in our young modern history, but none of it at this brutal scale and level of personal cognitive vulnerability. (Smart & vulnerable at scale; back in the day, if you were smart, nobody could harm you cognitively.)

> If you're implying that LLMs will never present feasible damages until they approach 100% accuracy

No, my bad for putting it that way. 100% is inhuman; nobody expects that. Say I want to write 100 personalized invitations that fit on a postcard. Easy, right? Now say I want to write 400 personalized versions of a news bit, containing specific words and phrasing that will trigger specific (varying) emotions in those 400 readers (or population segments). An LLM can do that, but to make sure the info is correct and the integrity of the article is unscathed by the differences in syntax and semantics, you need to proofread that shit, with or without AI assistance. This necessity for proofreading is disappearing. And no government will protect its citizens from this kind of misinformation. An elected interest group might protect its voters and the shepherds in those communities, but that's about it.
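The postcard case is easy precisely because the variation is mechanical. A minimal sketch (all names and event details are invented for illustration): every output is derived from one vetted template, so proofreading 100 cards collapses to proofreading one template, whereas 400 LLM paraphrases each need their own check.

```python
# One vetted template: only the slots vary, so the substance can't drift.
# Names and event details below are hypothetical.
TEMPLATE = "Dear {name}, you're invited to {event} on {date}. See you there!"

guests = [
    {"name": "Ada", "event": "the block party", "date": "May 4"},
    {"name": "Lin", "event": "the block party", "date": "May 4"},
    # ...and so on for the rest of the guest list
]

# Integrity holds by construction; no per-card proofreading needed.
invitations = [TEMPLATE.format(**g) for g in guests]
for card in invitations:
    print(card)
```

An LLM rewriting each card in its own words has no such guarantee, which is the whole gap between the two examples above.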

I'm not worried about actual warfare, because citizens in the West gave up their influence on that level post-9/11, and it was barely relevant even before that, if at all. But on a cognitive level, and regarding how your neighborhood behaves and interacts, humans can still exert a lot of control. Something the Right wing would like to control, because interactions & narratives free of (unhealthy) stress lead to liberalization.

It's not that big of a time scale anymore; 30 years, tops. And if

> nobody cares

turns out to be true, and the many security researchers are just compartmentalizing and monetizing what they know is bullshit anyway, and the many lawyers, judges, investigative journalists and whistleblowers lose even the last bits of support, energy, motivation and determination to maintain and improve the order and ethical foundation our founding ancestors began to build, however fucked up they were, then yeah: there really is nothing to worry about. Which kind of is exactly the problem, a self-*fulfilling prophecy*, *what we work towards*.


Agreed. But this is an easy fix. Except it isn't, because it's a matter of character and manpower, and only then is it about money. Cities would gladly implement ideas: just create websites with & for proposals, let the neighborhoods know, get volunteers, demand social and corporate social responsibility, plan and organize potential development projects. In some cases we'd have to wait a year or two or three for some official approval and for a construction company to find a free spot, but this really isn't a problem whose solution requires more than a naive beginner's mindset and consistency.



