The Turing Test
- Hazel Watson-Smith
- Aug 17, 2021
- 6 min read
Updated: Sep 27, 2021

The Turing Test was developed by the father of modern computer science, Alan Turing, to test the intelligence of a computer. Turing first introduced this adaptation of an old parlour game in his 1950 paper entitled “Computing Machinery and Intelligence” [1]. The original test involves an interviewer and two conversational partners: one a computer and one an actual human. After five minutes of questioning, the interviewer must decide which conversation was with the computer and which was with the human [1].
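Here is a minimal sketch of that protocol in Python. The `interviewer`, `ask_human` and `ask_machine` interfaces (and their method names) are my own hypothetical stand-ins, not from any real implementation; in a real test the text would be relayed so the interviewer can't tell who is answering:

```python
import time

def run_turing_test(interviewer, ask_human, ask_machine, duration_s=300):
    """One imitation-game session: 5 minutes of questioning, then a verdict."""
    partners = {"A": ask_machine, "B": ask_human}  # identities hidden from the judge
    deadline = time.time() + duration_s
    while time.time() < deadline:
        question = interviewer.next_question()
        for label, ask in partners.items():
            interviewer.receive(label, ask(question))  # relay each reply
    return interviewer.guess_machine()  # "A" or "B": which one was the computer?
```

A machine "passes" when, over many such sessions, enough interviewers guess wrong; as we'll see, the 2014 event below used a 30% threshold.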
The original Turing Test assesses the text-conversation abilities of a weak AI, an artificial intelligence that can only carry out a limited task [2]. Classic examples of narrow AI tasks are computation-heavy feats like playing chess (IBM's Deep Blue [3]), database-search applications like playing Jeopardy! (IBM's Watson [4]), and answering simple spoken questions (Siri [5] or Google Assistant [6]).
The most common type of conversational AI is a chatbot. Only a few chatbots have ever passed the Turing Test. The ‘first’ was Eugene Goostman in 2014 [7], a chatbot playing the role of a 13-year-old Ukrainian boy. Eugene only just passed, with 33% of judges mistaking him for the human (he needed 30% or more to pass) [8]. Some critics argued that the character was manipulative: judges were far more likely to excuse mistakes from a young boy speaking his second language. Between the low pass margin and the unfair character, many don't count this as passing the test at all [9]. After having a quick chat with him myself, I'll admit that if he replied a bit slower, I might believe he was a random kid in Ukraine asking if I like Borscht.
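For flavour, here is a toy, ELIZA-style responder in Python. It is emphatically not Goostman's actual code, just a sketch of how a strong persona plus canned deflections can carry a short five-minute chat:

```python
import random

# Keyword-matched canned replies; anything unmatched falls back to the persona.
KEYWORD_REPLIES = {
    "name": "My name is Eugene. What is yours?",
    "where": "I live in Odessa, Ukraine. Have you been there?",
    "old": "I am 13 years old.",
}
PERSONA_DEFLECTIONS = [
    "I am only 13, so maybe I don't understand your question.",
    "My English is not so good, sorry. Ask me something else?",
    "In Odessa we don't talk about such things. Do you like Borscht?",
]

def reply(message: str) -> str:
    lowered = message.lower()
    for keyword, canned in KEYWORD_REPLIES.items():
        if keyword in lowered:
            return canned
    return random.choice(PERSONA_DEFLECTIONS)

print(reply("Where do you live?"))          # canned Odessa answer
print(reply("What do you think of Kant?"))  # deflection, in character
```

The persona does the heavy lifting: every failure of understanding reads as a language barrier rather than a lack of intelligence.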
Most would agree that chatbots aren't intelligent, yet we have chatbots that can pass the traditional Turing Test. Let's say that being mistaken for a human equals intelligence. This definition, although helpful in its simplicity, becomes increasingly problematic as we move towards strong AI, or artificial general intelligence [10]. We have to start thinking about what makes a human a human. Is it appearance, image recognition, speech production and comprehension, or having memories, thoughts, opinions, moods and emotions? Does it also need to successfully navigate the physical world, manipulating objects and responding to a changing environment?
In 1965, Herbert A. Simon wrote in his book The Shape of Automation for Men and Management that “machines will be capable, within twenty years, of doing any work a man can do” [11]. Now 36 years past that deadline, where are the general AIs? Ray Kurzweil, in The Singularity is Near, has since updated the timeline, expecting a general AI by 2045 [12].
If a robot had general intelligence and could sit across from you, Ex Machina style [13], and be interviewed, that would be the complete Turing Test. However, in this situation, I think we would have to pivot to using actual human tests of intelligence. Can it do its times tables? Can it write a recount essay about what it did over the weekend? Can it go to the supermarket, buy ingredients, cook its favourite meal, and experience joy in sharing it with its loved ones? Can it look at any surface and know what it would feel like to lick (try it)? I could continue, but you get the idea.
Some of the hardest things for an AI to achieve are the things that we as humans do almost effortlessly. Chess and Jeopardy! are challenges, tests of human intelligence. The average human can't beat Garry Kasparov at chess or win Jeopardy! every single time, but that's almost all a narrow AI can do. Memorising the entirety of Wikipedia, or having the computational power to think multiple steps ahead of your opponent in a game of chess, is too hard for most humans, but Deep Blue and Watson could do it with their (metaphorical) eyes closed. These tasks play to the strengths of narrow AIs; they play to the strengths of computers.
Humans rely heavily on embodied cognition, so in my opinion, we won't have a true general artificial intelligence until it has a fully functioning body. The mental processes behind a seemingly simple task like picking up a cup involve a multitude of complex subtasks. We must visually locate the cup, then move a hand over to it, keeping in mind that the human arm has seven degrees of freedom and the hand roughly 27 more, and we must instantaneously select the most efficient path. On the way, we must form the hand into the correct shape and angle to grasp the cup, move with the right velocity to maintain a feedback loop with the visual system, then grip with an appropriate amount of force and lift slowly enough, keeping the cup level so nothing spills. Each of these tasks independently is a challenge in the world of computer vision, robotics and mechanical engineering.
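To make the decomposition concrete, here is a sketch of that pipeline in Python. The `robot` and `camera` interfaces and every helper are hypothetical stand-ins; each stub papers over what is, in reality, an open research problem:

```python
def locate_object(frame, target):       # visual search: an open vision problem (stub)
    return {"x": 0.42, "y": 0.10, "z": 0.31, "width_cm": 8.0}

def plan_path(joint_state, goal_pose):  # motion planning (stub)
    return [goal_pose]                  # real planners return many waypoints

def grasp_preshape(pose):               # choose hand shape and approach angle (stub)
    return {"aperture_cm": pose["width_cm"] + 2.0, "angle_deg": 15.0}

def pick_up_cup(robot, camera):
    cup = locate_object(camera.frame(), target="cup")
    for waypoint in plan_path(robot.joints(), cup):
        robot.move_towards(waypoint, grasp_preshape(cup))
        cup = locate_object(camera.frame(), target="cup")  # closed visual feedback loop
    robot.grip(force="gentle")                 # enough force, but don't crush it
    robot.lift(speed="slow", keep_level=True)  # keep it level so nothing spills
```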
If you saw a giraffe for only the second time in your life, could you recognise it from a new angle? Yes, of course. Could you distinguish between a small giraffe and one that's just a little further away? The human mind has processes for dealing with relative size, relative lighting and relative perspective. If you've ever played around with a system like Google Lens [14], you'll know that it's actually getting pretty good. This is thanks to advanced deep learning algorithms that use insane amounts of image data to identify objects and read text. But you still have to select the type of thing you're looking for, e.g. text or a landmark; this narrows the image search massively and separates it from the human visual system. Also, seeing something isn't the same as perceiving it. To visually perceive something you must see it, give it a name, connect it to a prior experience, and so on.
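As a sketch of the kind of deep-learning recognition behind such tools (not Google Lens's actual pipeline), here is a pretrained image classifier, assuming PyTorch and torchvision are installed; "photo.jpg" is a hypothetical input file:

```python
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT   # pretrained on ImageNet
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()    # resize, crop, normalise

batch = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # add batch dimension
with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top = probs.argmax().item()
print(weights.meta["categories"][top], f"{probs[0, top].item():.2f}")
```

Note how constrained this is compared with human vision: the model can only choose among its 1,000 training categories, with no notion of relative size, lighting or perspective.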
Giving an object a name isn't as easy as you might think. If you see a sock on the ground you might think “that's a piece of human clothing”, “that's a sock”, “that's my sock” or “that's the pink women's size 9 ankle sock that I washed yesterday, which my grandma gave me for Christmas in 2019”. The level of abstraction or detail is practically limitless.
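The point is easy to state in code: the same object supports labels at ever finer levels of specificity, and nothing in the scene tells a vision system which level to pick (the labels below are just the examples from the paragraph above):

```python
# The same sock, labelled at increasing levels of specificity.
sock_labels = [
    "a piece of human clothing",
    "a sock",
    "my sock",
    "the pink women's size 9 ankle sock I washed yesterday, "
    "a Christmas 2019 gift from my grandma",
]
for label in sock_labels:
    print(f"that's {label}")  # all equally correct; context picks the level
```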
Selecting the most efficient path to reach out and pick up an object is a motion-planning problem in robotics (the animation world calls the related task of filling in movement between poses ‘inbetweening’). The robot must work out its current position, work out its target position, and then decide how to get from one to the other. How it gets there is a question of how many degrees of freedom its arm has, of efficiency, but also of style. Should it take strictly the most direct path, or move like a human, or like a chicken?
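The very simplest version is linear interpolation in joint space, sketched below with NumPy; the start and end configurations are made up for a hypothetical seven-degree-of-freedom arm, and real planners must also handle joint limits, obstacles, efficiency and style:

```python
import numpy as np

def interpolate_joints(start, end, steps=50):
    """Linearly blend between two joint configurations (angles in radians)."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    return [start + t * (end - start) for t in np.linspace(0.0, 1.0, steps)]

rest_pose = [0.0, -0.5, 0.0, 1.2, 0.0, 0.6, 0.0]   # arm hanging at rest
cup_pose  = [0.4,  0.1, -0.2, 0.9, 0.3, 0.4, 0.1]  # arm reaching the cup
trajectory = interpolate_joints(rest_pose, cup_pose)
```

A straight line in joint space is rarely a straight line through the air, and it certainly has no style; that's where the hard part begins.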
Don't even get me started on how we instantaneously judge the dimensions of an object and shape our hand to match. Have you ever gone to pick up a cup and found you hadn't opened your hand wide enough? Probably not, unless you were a young child or very distracted/drunk. This is a fascinating process of embodied cognition, and no one actually knows how we do it. There are many theories, most involving size judgements based on our own bodies and on prior experience with similar objects.
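One of those theories, reduced to a deliberately naive toy model (the scaling rule and every number here are my assumptions, not an established result):

```python
def hand_aperture(estimated_width_cm: float,
                  safety_margin_cm: float = 2.0,    # tuned by prior grasps?
                  max_aperture_cm: float = 15.0):   # limited by our own hand
    """Guess how wide to open the hand before contact, in centimetres."""
    return min(estimated_width_cm + safety_margin_cm, max_aperture_cm)

print(hand_aperture(8.0))   # a mug: open about 10 cm
print(hand_aperture(30.0))  # too big to palm: capped at the hand's maximum
```

However humans actually do it, it's fast, unconscious, and calibrated to our own bodies.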
I look forward to living in a world where sentient artificial intelligence lives amongst us. I hope that research moves fast enough, and that I live long enough, to see it, though I suspect that might be quite an unpopular opinion. Please remember: AI isn't scary, and it isn't here to take over the world. It's a useful tool with so much untapped potential, and the Turing Test will probably still be used to measure machine intelligence for another 70 years.
References
[1] Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.
[2] Weak AI. (n.d.). Investopedia. https://www.investopedia.com/terms/w/weak-ai.asp
[3] IBM. (n.d.). Deep Blue. https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/
[4] IBM. (n.d.). A Computer Called Watson. https://www.ibm.com/ibm/history/ibm100/us/en/icons/watson/
[5] Apple. (n.d.). Siri. https://www.apple.com/siri/
[6] Google. (n.d.). Google Assistant. https://assistant.google.com/
[7] Just AI. (n.d.). Eugene Goostman. http://eugenegoostman.elasticbeanstalk.com/
[8] Computer AI passes Turing test in ‘world first’. (2014, June 9). BBC News. https://www.bbc.com/news/technology-27762088
[9] Masnick, M. (2014, June 9). No, A ‘Supercomputer’ Did NOT Pass The Turing Test For The First Time And Everyone Should Know Better. TechDirt. https://www.techdirt.com/articles/20140609/07284327524/no-supercomputer-did-not-pass-turing-test-first-time-everyone-should-know-better.shtml
[10] General AI. (n.d.). https://www.general-ai-challenge.org/
[11] Simon, H. A. (1965). The Shape of Automation for Men and Management. New York: Harper & Row.
[12] Kurzweil, R. (2005). The Singularity is Near. Viking Press.
[13] Garland, A. (Director). (2014). Ex Machina [Film]. https://www.imdb.com/title/tt0470752/
[14] Google. (n.d.). Google Lens. https://lens.google/