
On reversing the Turing Test: If you believe you are communicating with a conscious being and then discover it is a robot, do you stop believing?
There was a great talk on Radio 4’s The Life Scientific this week with Anil Seth, Professor of Cognitive and Computational Neuroscience at Sussex University, who specialises in the study of consciousness. The conversation starts with a philosophical discussion about what consciousness is and ends on that great question – do we have free will? If you have access, the broadcast is available online.
The following is a rough transcript of some of the tastiest soundbites.
Philosophical vs Scientific approaches
“We should always listen to the questions that philosophers ask but we should rarely listen to their answers” – friend of Anil Seth
Philosophy is very good at identifying what we are trying to understand and can keep us conceptually honest. But consciousness is a natural phenomenon that happens as the result of a particular organisation of biological matter. The tools of science can be powerfully productive in unravelling these types of mysteries. Philosophy, science and the humanities need to work together to get to the bottom of what remains one of the big central mysteries of life.
Can we build a brain in a computer?
“When we simulate weather conditions to generate forecasts, we don’t expect it to get windy inside the computer.”
The human brain is perhaps the single most complex object in the universe. It has about 90 billion neurons and a thousand times more connections. Computational modelling has an important role to play. But simulating the brain in very high detail is not the same as instantiating – generating – something like consciousness. When we simulate a hurricane, we know it is just a simulation. Why would it be any different when simulating a brain or consciousness?
The importance of having a theory
Aerodynamics does not depend solely on flapping feathered wings
There is currently a lot of emphasis on whole-brain experiments that try to model, as closely as possible, what we know is going on. But without a good theory this could be a fool’s errand. Early attempts to understand flight just copied what birds do – use feathers and flap wings a lot. None of them worked. We needed a theory of aerodynamics to know which parts of the system were essential to model, understand and simulate. Theory and simulation have to go together.
Conscious vs Unconscious
The signatures of losing and regaining consciousness can be found by quantifying how different parts of the brain communicate.
One approach is biological: look at what happens when consciousness is altered, using sleep states to observe how different parts of the brain communicate with each other. By injecting a pulse of magnetic energy into the brain and recording the echoes, you can see how the pattern differs when you are awake, asleep or under anaesthesia. When you lose consciousness the brain still responds – there is still an echo – but it becomes very localised. When you are awake the echo bounces around all over the place.
Time travel
“Sleep is a bit of an embarrassment for neuroscience”
We spend nearly a third of our lives asleep and we still don’t really know why. There are lots of theories but none have been firmly established. Is dreaming a conscious state? It includes an amazing naive realism where strange things happen that don’t seem strange at the time. Smell seems to be absent from most people’s dreams. But when someone is dreaming, the brain activity looks a lot like when we are awake. And there are definitely dreamless sleep states where we don’t seem to be conscious at all.
There is a remarkable difference between sleep and anaesthesia. When you wake up from sleep, you have a sense that some time has passed, that something has happened. When you wake up from a general anaesthetic, time doesn’t seem to have passed at all. You are out and you are back, and who knows what happened in the middle.
What is normal?
Is a hallucination just a distorted perception?
The current theory is that the brain is always making its best guess about what is out there in the world. It brings to the table prior expectations about the causes of sensory signals. Normally, the brain gets the balance about right between prior expectations and the sensory data currently being received. If the balance is wrong and the brain puts too much emphasis on either the prior expectations or the incoming sensory data, our perceptions can become distorted. It is under these circumstances that we might start to experience hallucinations. The sketch below gives a rough feel for the balancing act.
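To make the balancing act concrete, here is a minimal sketch in Python of textbook precision-weighted Bayesian inference – not Anil Seth’s actual model, and all numbers are made up for illustration. A prior expectation and a noisy sensory reading are each weighted by how reliable (precise) they are assumed to be; give the prior too much precision and the resulting “percept” barely moves from what was expected, regardless of what the senses report.

```python
def posterior(prior_mean, prior_var, sense_mean, sense_var):
    """Combine a Gaussian prior with Gaussian sensory evidence.

    Each source is weighted by its precision (1 / variance),
    the standard conjugate-Gaussian update.
    """
    prior_precision = 1.0 / prior_var
    sense_precision = 1.0 / sense_var
    post_var = 1.0 / (prior_precision + sense_precision)
    post_mean = post_var * (prior_precision * prior_mean
                            + sense_precision * sense_mean)
    return post_mean, post_var

# Expectation says the signal should be around 0; the senses report 10.
balanced = posterior(prior_mean=0.0, prior_var=1.0, sense_mean=10.0, sense_var=1.0)
prior_dominated = posterior(prior_mean=0.0, prior_var=0.1, sense_mean=10.0, sense_var=5.0)

print(balanced)         # (5.0, 0.5)   -> percept sits between expectation and data
print(prior_dominated)  # (~0.2, ~0.1) -> percept stays close to the expectation
```

In the second call the prior is treated as highly reliable and the senses as noisy, so the “percept” stays near the expectation even though the data say otherwise – a crude analogue of expectation-driven distortion.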
The Cybernetic Bayesian Brain
“Fundamentally, what living systems do is keep themselves alive.”
Much of recent artificial intelligence (AI) has been about modelling high-level reasoning. Cybernetics is another branch of AI stretching back to the 1950s that is about regulation, control and homeostasis – what living systems do to keep themselves alive. This branch of AI is coming back into prominence as we advance robotics. We are starting to explore closely-coupled loops between perception and action, and how physiological variables are kept within bounds.
The Bayesian brain is the idea that, because the world is very messy, noisy and uncertain, the brain is always having to make inferences – best guesses – about what is going on. To do that, the brain must also be able to infer its own internal state in order to best regulate it. The outcome is a conscious experience of, in particular, emotion, but also of other aspects of the self.
Does consciousness require a body?
The weakness in the original Turing test is that it is based on disembodied messages.
The idea behind the Turing test is that an observer communicates via keyboard with both another human and, potentially, an intelligent machine. When the observer cannot tell which is which, the computer is deemed to have passed the Turing test.
What the film Ex Machina shows, and others have begun to say, is that the disembodied exchange of messages is not enough. There is something about being intelligent, being conscious, that intimately and fundamentally requires having a body – feeling embodied. It’s not about whether an AI can convince the human that it is conscious, but rather whether the human still believes the AI is conscious even when they know it is an AI.
What is real?
“Is there a real world out there? Or is it all subjective?”
The problem is that if I’m trying to explain consciousness I’m relying on you telling me what you experience. We all experience the world slightly differently. When I see red, it means it’s not green, it’s not blue, it’s not a banana… I’m ruling out a whole bunch of possibilities. When you see red, you are ruling out a whole bunch of possibilities. Maybe they are slightly different, and that will lead to a slightly different experience.
Relying on subjective reports is inevitable; it has always been the case in science. Visual illusions show how consciousness is subjective. They are one of the best ways to show the little tricks our vision plays when interpreting the world.
Do we have free will?
We need to believe in free will to go about our daily lives.
There is a spooky metaphysical sense of free will which I don’t think is happening or is even coherent. It suggests something non-physical that comes in and changes the operation of our brain, or changes something, to alter the flow of events. That what was going to happen is now not going to happen.
Then there’s the more straightforward, commonsensical approach: do I behave according to my beliefs and desires? When we perform actions according to our beliefs and desires there is a corresponding experience of volition – the experience of intending or urging to do something – and agency – the experience of being the cause of an event. These experiences can go wrong: the experience of volition can be manipulated by stimulating parts of the brain. But as a basic perspective, yes, we do have free will because we can behave in the way we wish to behave. What we can’t do is decide what to will.
The real question is what is the point of experiencing free will?
One answer is that it is very important for the brain to distinguish between actions that are more internally generated and actions that are more reflexively generated by immediate circumstances in the world. If you put your hand on a hot stove you don’t need to have an experience of free will. But if you need to decide what job to take, whether to say something in particular to someone, then it is useful to mark that as something that was internally generated so that you can pay more attention to the consequences. Because you might want to do something differently next time.
Audience Q&A
Q: If consciousness is about communication between different areas of the brain, and a simulation can do that, at what point do you start asking ethical questions about what you are doing?
A: Very good question. There is a difference between simulating a hurricane and simulating a brain, in that we still don’t know whether consciousness is a property of functional relations between parts of the body or whether it depends on a particular biological material or substrate, such as the presence of neurons. We can’t answer that yet, so I prefer to remain agnostic. Certainly the kinds of simulations we are doing now are very, very simplified simulations of neural systems. But you are right that it is important to be pre-emptive and foresee what kinds of ethical questions might come up. We need to take seriously the possibility that future simulations or AI devices might have some kind of self-awareness that will demand an ethical response. Do you turn it off or not turn it off?
The real ethical question is the capacity for suffering. If something has the capacity to suffer, then you have to be very mindful about whether or not what you are doing is ethical.
Q: Are the old methods of brain research all flawed because they are based on ideas that assumed structured function, when we now know the brain is much more plastic?
A: The fundamental problem in neuroscience is that we don’t know the wiring diagram of the human brain. Even if we did know it, it would be unclear how much that would tell us. We do know the wiring diagram for the small nematode worm C. elegans*, which has about 300 neurons, and we still don’t know how that works. So to scale up to the level of a human brain is a big challenge. And old-school methods are still applicable. Knowing the precise wiring is critical, because we do know it is changeable – the brain has plasticity. But if you look at the basic motifs, patterns, in the cortex, you can see they repeat. There are structures and clues about what is going on. What we currently don’t have is a very good method for getting all the connections. It’s like knowing the motorways but not knowing the roads needed to get from the motorway to your house. We don’t have the point-to-point connectivity across the whole brain, and current neuroimaging techniques can’t provide it.
* First organism to have its full genome sequenced, in 1998. As of 2012, the only organism to have its connectome (neuronal ‘wiring diagram’) fully mapped. Source: Wikipedia
References and Related Posts
- Radio 4 The Life Scientific – Anil Seth, June 2015
- Self-learning does not create a brain, November 2012
- The Puppet Master – How the brain controls the body, December 2005
Featured image: published under iStockPhoto license.