Can machines think?

May 22, 2014

Regulating robots: the underlying theme of the 2004 dystopian science fiction film I, Robot


The debate about whether machines can think is not a new one. In 1950 the mathematician Alan Turing proposed what is now known as the Turing Test, designed to determine whether machines could exhibit intelligent behaviour.

In its original formulation, a man and a machine reside in separate rooms. A group of judges sits in another room and interrogates the man and the machine with an identical set of questions. Turing argues that if the judges are unable to distinguish the man from the machine, then the machine passes the test and can be said to exhibit intelligent behaviour, or, in Turing’s view, to think.
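To make the protocol concrete, here is a minimal sketch of the setup in Python. Everything in it is illustrative rather than drawn from Turing’s paper: the judge, human_respond and machine_respond functions are hypothetical stand-ins for the participants.

```python
import random

def imitation_game(judge, human_respond, machine_respond, questions):
    """Toy version of Turing's imitation game: the judge sees two
    anonymised transcripts and must say which one came from the machine."""
    respondents = [human_respond, machine_respond]
    random.shuffle(respondents)                      # hide who is who
    labels = {"A": respondents[0], "B": respondents[1]}

    # Each respondent answers the same set of questions.
    transcripts = {label: [(q, respond(q)) for q in questions]
                   for label, respond in labels.items()}

    guess = judge(transcripts)                       # judge returns "A" or "B"
    machine_label = "A" if labels["A"] is machine_respond else "B"
    return guess != machine_label                    # True: the machine passes
```

On this framing, ‘passing’ amounts to nothing more than the judge failing to pick the machine out of the pair.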

It won’t surprise anyone to know that this argument has proved highly contentious. Perhaps the most famous response came from John Searle, whose 1980 ‘Chinese Room’ thought experiment claimed to show that machines cannot think. In the experiment Searle asks us to imagine that he is locked in a room. Searle knows no Chinese, but he is given a set of rules in English which enable him to correlate one set of formal symbols (Chinese characters) with another. With this information he is able to respond in such a way that when his interrogators ask him questions in Chinese, they are unable to distinguish him from a native Chinese speaker. But, Searle points out, he does not understand Chinese. Searle asserts that there is no essential difference between the role of the computer in the Turing test and his own role in this experiment. Therefore, he argues, it follows that the machine does not understand Chinese either.
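Searle’s mechanism is easy to caricature in code. The sketch below is purely illustrative, assuming a hypothetical rule book implemented as a lookup table: it pairs incoming Chinese symbols with outgoing ones without representing their meaning anywhere.

```python
# A hypothetical "rule book": purely formal symbol-to-symbol mappings.
# The example entries are illustrative; nothing here encodes meaning.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(question: str) -> str:
    """Follow the rules mechanically, as Searle does in the room:
    match incoming symbols, return the paired outgoing symbols."""
    return RULE_BOOK.get(question, "请再说一遍。")   # "Please say that again."

print(chinese_room("你好吗？"))   # a fluent reply, with zero understanding
```

However fluent the replies, the program, like Searle in the room, only shuffles uninterpreted symbols.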

Since 1950 we have seen monumental developments in the field of computer science, and more specifically robotics. We live in a world where robots can walk, do the ironing and the hoovering, build and drive cars, explore space, respond to our questions (Siri) and route us all over the world (Google Maps). With this in mind, is it time to revisit the question of whether machines can think?

The Economist recently released a special report entitled ‘Rise of the Robots’. In it they argue that although robots are getting better and better at replicating human behaviour, they have no will of their own. Our concern when it comes to regulating robots, they conclude, should therefore be with the ends they are intended for, i.e. the jobs they are designed to do, rather than the means by which they do them. I wonder whether we should be challenging this view. Is it really the case that robots do not have, nor could ever have, a will of their own? Is it still inconceivable that machines might one day be able to think for themselves? There are many spheres in which the abilities of robots already far outstrip our own: solving complex mathematical problems, storing information, and playing chess, to name a few.

Perhaps we have to revisit the philosophy of mind to answer this question. The disagreement between Turing and Searle can be reduced to a debate between physicalism and dualism. The proponents of physicalism hold that everything in the world is ‘physical’; it follows that humans, and the faculties that enable them to think, are wholly physical too. The proponents of dualism maintain that there are two ontologically separate substances: the mental (non-physical) and the physical. They argue that the mind, i.e. the substance that enables us to think, is non-physical.

Perhaps the answer to whether machines could ever think comes down to which side of the physicalist-dualist fence you sit on. After all, if everything is physical, is there anything to stop us from one day developing a robot that can think? Supposing this is possible, what does it mean for the human race? Two possible outcomes spring to mind. On the one hand, we may battle to reassert ourselves at the top of the food chain and to differentiate ourselves from machines. On the other, perhaps robots will become fully integrated members of society, serving as our colleagues, friends, and even family members.

It is hard to tell what the future holds, but I think we should start by asking the right questions. We should not be asking how to regulate the tasks that robots do, but how to manage a society in which humans are no longer top dog.

For more of our thoughts on this, look out for The Lab's 2014 Cultural Forces piece coming soon!


Lucy