
Here's my theory:

Consider the typical token stream used to train and interact with an LLM.

Now imagine that other aspects of being human (sensory input, emotional input, physical body sensation, gut feelings, etc.) could be added as metadata to the token stream, along with some kind of attention function that amplifies or diminishes the importance of each of those at any given moment -- all still represented as a stream of tokens.
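A minimal sketch of what that enrichment might look like, assuming invented channel names ("mood", "gut") and a made-up per-step salience score that gets softmax-normalized into channel weights -- none of this is a real LLM pipeline, just an illustration of metadata-annotated tokens:

```python
import math

def softmax(xs):
    # Standard numerically-stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def enrich(tokens, body_states, salience):
    """Attach body-state metadata to each token, with each channel
    scaled by a softmax-normalized salience weight (the toy stand-in
    for the 'attention function' over bodily inputs)."""
    enriched = []
    for tok, state in zip(tokens, body_states):
        channels = sorted(state)
        weights = softmax([salience[c] for c in channels])
        meta = {c: state[c] * w for c, w in zip(channels, weights)}
        enriched.append({"token": tok, "meta": meta})
    return enriched

# Hypothetical example: a strong gut signal dominates a weak mood signal.
stream = enrich(
    tokens=["I", "feel", "uneasy"],
    body_states=[{"mood": 0.2, "gut": 0.9}] * 3,
    salience={"mood": 0.1, "gut": 2.0},
)
```

Each element of `stream` is still just a token plus key/value metadata, so in principle it could be serialized back into an extended token vocabulary.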

If an LLM could be trained on input that was enriched by all of the above kind of data, then quite likely the output would feel much more human than the responses we get from LLMs.

Humans are moody, we get headaches, we feel drawn to or repulsed by others, we brood and ruminate at times, we find ourselves wanting to impress some people, some topics make us feel alive while others make us feel bored.

Human intelligence is always colored by the human experience of obtaining it. Obviously we don't obtain it by getting trained on terabytes of data all at once disconnected from bodily experience.

Seemingly we could simulate a "body" and provide that as real time token metadata for an LLM to incorporate, and we might get more moodiness, nostalgia, ambition, etc.

Asking for a theory of mind is in fact committing the Cartesian error of making a mind/body distinction. What is missing with LLMs is a theory of mindbody... the similarity to spacetime is not accidental: humans often fail to unify concepts at first.

LLMs are simply time series predictors that handle massive numbers of parameters, generating sequences of tokens that (when mapped back into words) we judge as humanlike or intelligence-like. But those judgments rest on patterns of logic that come from word order, which in human languages is closely related to semantics.
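The "time series predictor" framing can be illustrated with the smallest possible version of the idea: a bigram model that predicts the next token purely from counts of observed pairs. This is obviously a toy stand-in for what an LLM does at scale, not a claim about how any real model is implemented:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Count how often each token follows each other token.
    counts = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequently observed next token, or None
    if the token was never seen in a leading position."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
```

Here `predict_next(model, "the")` returns `"cat"`, since "the cat" occurs more often than "the mat" in the toy corpus -- prediction from word order alone, no semantics anywhere in sight.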

It's silly to think that we humans are not abstractly representable as a probabilistic time series prediction of information. What isn't?



So my observation is that we could embody an AI so that it learns a theory of mind-body, but then we could remove the body. This gives an entity with a theory of mind-body that does not need a body to exist.

Then the next research step could be to study those properties so as to reconstruct/reproduce a theory-of-mind-body AI, without needing any embodiment process at all to obtain it. Is that, in principle, possible? It is unclear to me.


> we could embody an AI

... a hardware interface that generates a token stream from a living human's body would seem to enable this at some level.

Not sure how it would work at scale. Maybe something much simpler, like phones with built-in VOC sensors that can detect nuances of the user's perspiration, combined with real-time emotion sensing via gait and voice, along with metadata that is already available, would be sufficient to produce such a token stream... who knows.


> ... a hardware interface that generates a token stream from a living human's body would seem to enable this at some level.

A hardware interface that generated a datastream from sensors monitoring the status and surroundings of the hardware the LLM was running on would be more to the point.


Point taken. Maybe LLM beings would be the driving force behind more widespread adoption of ECC RAM.



