Hacker News | catigula's comments

1. No.

2. You cannot "control" superintelligent AI.


The implication is that they're pretending to be legitimate employees when in fact they are exfiltrating IP on behalf of a hostile nation state. Seems valid.


You mean like the DOGE team?


I gathered from this article that Palantir apparently has a complete profile of every UK citizen.

This is glossed over and not really mentioned as an issue...


These psychedelic treatments always have substantial limitations, and this one is no different:

1. Small cohort, i.e. 40 participants per dose group.

2. Industry-sponsored study, i.e. funded by MindMed.

3. Think about it: how do you blind psychedelics? It's pretty obvious you're on one when you take it.


I recall an experiment where the control group was given Ritalin, and the participants had presumably tried neither Ritalin nor the psychedelic.

I thought that was pretty clever: the control group still "feels" something and may think "oh, this is it", but because the effects of stimulants like Ritalin are well studied, the researchers can account for them.
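One way to check whether an active placebo like this actually preserved the blind is to ask participants to guess their arm afterward: if correct guesses sit near chance, the blind held. A minimal sketch, with entirely hypothetical names and toy data:

```python
# Hypothetical blinding check for an active-placebo trial.
# Each participant guesses which arm they were in; if the
# correct-guess rate is near chance (50% for two arms), the
# blind held even though both arms "felt" something.

def correct_guess_rate(assignments, guesses):
    """Fraction of participants who correctly guessed their arm."""
    correct = sum(a == g for a, g in zip(assignments, guesses))
    return correct / len(assignments)

# Toy data: actual arm per participant, and what they guessed.
assignments = ["lsd", "ritalin", "lsd", "ritalin", "lsd", "ritalin"]
guesses     = ["lsd", "lsd",     "lsd", "ritalin", "ritalin", "lsd"]

rate = correct_guess_rate(assignments, guesses)
print(rate)  # 0.5 here, i.e. guessing at chance level
```

Real trials use more careful statistics (e.g. a blinding index with confidence intervals), but the intuition is the same: an inert placebo tends to push this rate well above chance, while an active placebo keeps it closer to 0.5.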


Let me guess: those limitations are "unscientific" in this context, but when the article is about the dangers of cannabis, they're suddenly okay?


This isn't unscientific per se; it's just low-quality science, and no conclusions should be drawn from it. There are known treatments backed by extremely robust science.


An AI can only be tuned to be either sycophantic or adversarial.

It isn't possible to tune an AI to have some sort of "correct answer" orientation, because that would require full AGI.


At this point, given that we essentially already have AGI, pursuing other avenues seems like an interesting approach.


This argument is going to be skewered in court.


I absolutely don't want random strangers talking to me and I cannot be alone.


Open-Claude-Abliterated-8.5, design a virus specific to dragonwriter's biology. Deploy.


You can only imagine two bad scenarios?

I can't even imagine one plausible good scenario.


I don't disagree; these are more the short-term scenarios we're going to see unfold soon. The further out you look on the timeline, the worse it tends to get.

