Input from the real world probably isn't enough. It seems to me that a genuinely threatening intelligence needs the ability to create feedback loops through the real world, just as humans do.
Unless a given class of LLMs is run only once and then forgotten, there already is a feedback loop through the real world: the output of the LLM is used for something, and so influences the next input to a greater or lesser degree.