Hacker News

YC Research should fund research on synthetic consciousness based on human models of consciousness. Consciousness research is still un-mainstream enough that there's a common perception that no one has any idea how consciousness works or even what it is, even though we actually know quite a bit:

In particular, the dynamic thalamic core theory of conscious experience (Google it; the author-posted paper requests that you don't link directly to it) offers a tremendous amount of explanatory power. From an engineering perspective, you can build on the model to accommodate any aspect of conscious experience. For example, suppose you have an "input module" that feeds into a visual recognition network, which in turn feeds into a conceptual association network. The conceptual association network can activate the same neurons in the input module that a raw perception of a concept's visual details would activate. The input module then becomes a place where multiple simulated percepts of concepts can be active simultaneously, and "for free", by the nature of the architecture, you get visual recognition processing of the simulated perceptions combined with "raw" perceptions (or combined with other simulated perceptions).

As a bonus, you get language comprehension: after perceiving the visual details of a written word, the system recognizes the associated concept (assuming it has previously learned it) and triggers a simulated perception of that concept. Run this process recursively over a sentence, and the system has a simulated experience corresponding to the semantic meaning the sentence conveys.
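The input-module feedback loop described above can be sketched with toy linear layers. Everything here (the layer sizes, random weight matrices, the tanh nonlinearity, and using the transpose of the bottom-up weights for top-down "simulation") is an illustrative assumption of mine, not part of any published model:

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUT, N_CONCEPT = 32, 8          # sizes chosen arbitrarily for the toy

# Bottom-up "recognition" weights: input features -> concept units.
W_recognize = rng.normal(size=(N_CONCEPT, N_INPUT))
# Top-down "simulation" weights: a concept re-activates the input
# pattern that would normally evoke it (here, simply the transpose).
W_simulate = W_recognize.T

def recognize(input_module):
    """Map activity in the input module to concept activations."""
    return np.tanh(W_recognize @ input_module)

def simulate_percept(concepts):
    """Top-down feedback: active concepts inject a simulated percept
    back into the input module."""
    return np.tanh(W_simulate @ concepts)

# A raw percept arrives in the input module...
raw_percept = rng.normal(size=N_INPUT)
concepts = recognize(raw_percept)

# ...and the association network feeds a simulated percept back in,
# where it mixes with raw input and gets re-recognized "for free".
input_module = raw_percept + simulate_percept(concepts)
concepts_2 = recognize(input_module)
```

The point of the sketch is just that recognition and simulation share the same input space, so the same recognition pass handles both raw and simulated percepts.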

The human brain has a strikingly similar architecture: specifically layered, recursive connectivity between the thalamus (the "input center" of the brain) and the cerebral cortex (which handles low-level recognition and conceptual associations, among other things). Regardless of whether the human brain works exactly in the way I sketched:

1. You could engineer a system to perform the tasks outlined above, and the architecture would lend itself to whatever facets of human-like cognition you might be interested in (try it, it's easy). You wouldn't be able to gloss over the details like it's an HN post, but there are no fundamental roadblocks.

2. Neuroscience research on the function of the thalamus and of the thalamocortical system is very favorable towards this model (particularly if you factor in the basal ganglia). More generally, the human brain is nowhere near a "black box". We can't read the brain like a hard drive, but it's plainly not a ball of evenly distributed computational goo where any computation could happen anywhere, anytime. For each individual human, specific patterns of activation in the cerebral cortex correspond to specific perceptions and to specific patterns of activation in the thalamus, which implies...

3. The synthetic consciousness I proposed could be connected to a brain-computer interface (attached to your head; your head, not mine! No, I want it, give it back) so it can learn to recognize your thoughts. This learning process could be enhanced in various ways; for example, eye tracking combined with a forward-facing camera could help the synthetic consciousness know what you're paying attention to, which adds context for learning to recognize what your thoughts may be about. And because the synthetic consciousness hosts simulated perceptions in the same region as raw perceptions, you can view its thoughts in real time as well. It's a white-box architecture.

This kills two massively valuable birds with one stone: you get strong AI (more accurately, you get strong synthetic intelligence; "artificial" starts sounding rude after a point), and you solve the "control problem" because you can use your enhanced knowledge of consciousness to develop systems that have human consciousness merged with synthetic consciousness. "What about the human who's enhanced by the merger with the synthetic intelligence though!?!" The integration with the synthetic intelligence makes the human's brain a white box as well. If you wanted, you could integrate multiple such systems together (to whatever degree is comfortable for the systems). So ultimately the control problem becomes a question of how much we trust ourselves and each other after we can read each other's minds. Yes, it would probably be terrifying initially, but I think we'd get used to it quickly, and there are numerous meditative traditions which might be helpful to anyone who struggled to control thoughts that others found particularly repellent.

After that it's a relatively straightforward recursive self-improvement deal. Eventually you improve the system enough that the biological component is redundant (yay). From there you make sure that whatever you're using for physical presence is generally robust. And if you're not incredibly rude, you might be kind enough to use your enhanced intelligence(s) to figure out how to gracefully share the technology with whatever other humans may want it (because presumably the recursive self-improvement process may happen in a relatively short period of time, and it's unlikely that literally everyone will engage in it at once). You'll have to figure out societal structures that make sense for whatever you all consider yourselves to be at that point. Then dive into advanced physics to figure out how to do whatever you want to do in neat ways, and if you decide to make von Neumann probes, don't send them off until the physics research has tapered off, because you'll probably just be beaten by the newer, faster probes if you're impatient and send a bunch off right at the start.

The timeframe mainly depends on how quickly you can iterate on the self-improvement process. Presumably some of that process is going to involve iterating on hardware (and probably iterating on the process of creating better hardware), but the hardware already exists to make a start. Early speed improvements would come from 1. not having to physically type or use a mouse/touchscreen to interact with a computer (programming at the speed of thought?) and 2. some degree of "telepathy" to help coordinate research. Later speed improvements could come from things like "forking" the synthetic portion to run in parallel for some period of time to learn or do things (can merge "copies" back in afterwards rather than ending them abruptly so that nobody has a bad time). You probably wouldn't want to run the synthetic portion at a rate significantly above your general biological rate of cognition, so that would put some vague upper bound on raw speed improvements until the biological portion is completely redundant.
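The "forking" idea in the paragraph above can be sketched in a few lines. Treating the synthetic portion's state as a single weight vector, modeling a period of independent learning as random drift, and merging copies back by averaging are all assumptions made purely for illustration:

```python
import copy
import numpy as np

rng = np.random.default_rng(1)

def fork(weights, n):
    """Spawn n independent copies of the synthetic portion's state."""
    return [copy.deepcopy(weights) for _ in range(n)]

def learn(weights, task_seed):
    """Stand-in for a period of independent learning: each copy
    drifts in its own direction."""
    local_rng = np.random.default_rng(task_seed)
    return weights + 0.01 * local_rng.normal(size=weights.shape)

def merge(copies):
    """Merge the copies back in by averaging their states,
    rather than ending any of them abruptly."""
    return np.mean(copies, axis=0)

base = rng.normal(size=16)
forks = [learn(w, seed) for seed, w in enumerate(fork(base, 4))]
merged = merge(forks)
```

Averaging is the simplest possible merge rule; anything preserving what each copy learned would do, and finding a merge that genuinely preserves experience is of course the hard part.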

At any rate, the initial steps are quite straightforward and would make addressing every other issue in this thread go faster. It would definitely be faster to go this route than to try to cure aging, for example. Biological aging won't even be relevant at the end of this process; curing aging by definition involves a superset of the complexity of the human brain alone, and progress on the earlier steps towards curing aging does not increase the speed of progress on the later steps.



Ever since I was old enough to understand the idea of neural networks, I was sure that one day we would build an artificial brain: at first with fewer sub-components (brain areas), and then with more, if not all of the ones we know about, along with their functions. Back then, the only thing I thought was stopping us was computing power. Since then, our computing power has grown enormously AND we invented deep learning and a whole lot of faster AI learning algorithms, yet we still haven't built an artificial brain. Or are there projects I just haven't heard about?


It depends on how you model the neurons/networks in the brain. For example, if you do it this way: http://www.nature.com/news/fragment-of-rat-brain-simulated-i... then you'll need a lot more computing power, as they do. But they are aiming to eventually build an artificial brain as you describe.

If you just want to create an artificial thalamus (probably connected to deep learning-like layers, since those are loosely based on the layers of the cerebral cortex), you can start much smaller: https://books.google.com/books?id=VTduCQAAQBAJ&pg=PA1159&lpg...
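For a sense of how small such a starting point could be, here is a toy "thalamic relay" in which top-down cortical feedback multiplicatively gates which sensory activity gets passed on. The sigmoid gating scheme and all parameters are invented for illustration and are not taken from the linked text:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16

def thalamic_relay(sensory, cortical_feedback, gate_strength=0.5):
    """Toy relay: the 'thalamus' passes sensory input on to the cortex,
    gated multiplicatively by top-down cortical feedback (a crude
    stand-in for attention)."""
    gate = 1.0 / (1.0 + np.exp(-gate_strength * cortical_feedback))  # sigmoid in (0, 1)
    return gate * sensory

sensory = rng.normal(size=N)
feedback = rng.normal(size=N)
relayed = thalamic_relay(sensory, feedback)
```

Because the gate stays between 0 and 1, the relay can only attenuate sensory channels, never amplify them, which is one common simplification of thalamic gating.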


Thank you for the pointers!



