Hacker News

Input from the real world probably isn't enough. It seems to me a real threatening intelligence needs the ability to create feedback loops through the real world, just like humans do.


Unless a given class of LLMs is run only once and then forgotten, there already is a feedback loop through the real world - the output of the LLM is used for something, and influences the next input to a smaller or larger degree.


And said humans will be less and less obliging about supplying their content.


Let's just call that problem an implicit Turing test, which AI will definitely win ...



