Passing the ‘Turing Test’: Why You May Have to Prove You Are Not (Interviewing) a Robot

As robots penetrate ever more deeply into society and the job market, the “Turing test” is probably the only test you will ever, in your entire lifetime, have to learn both how to pass and how to administer. No idea what I’m talking about?

Before getting acquainted with the details of the Turing test, listen—and listen carefully—to Time magazine’s Washington Bureau Chief Michael Scherer’s recordings of a “robotic” conversation with “Samantha West”, ostensibly both a health insurance telemarketer and a human.

Is “Samantha” Actually a Robot?

Then ask yourself whether the recordings reveal, as Scherer strongly suspects, that Samantha is, in fact, a robot “pretending” to be human by deflecting Scherer’s persistently repeated question, “Are you a robot?” and the request, “Please just say, ‘I am not a robot.’” [Repeat calls placed by Scherer’s associates got the same scripted responses and reassurance, “(laughter) I am a real person.”]

If, indeed, Samantha is a robot, “she” was trying to pass what has come to be known as the “Turing test”. At the same time, Scherer was attempting to administer a variant of it.

Interestingly, the recordings are not slam-dunk proof that Samantha, despite her repeated assurance, “I am a real person”, is in fact a robot program; nonetheless, I believe, as Scherer does, that they are strong evidence that she is precisely and eerily that. [A former CNN reporter, voice coach and journalism instructor I know told me that he is certain Samantha is a robot, because of irregularities in the “cadence” of her delivery.]

So what is the Turing test? Roughly speaking, and without getting too technical, it is a test first proposed in 1950 by Alan M. Turing, the famed breaker of the Nazi “Enigma” code, founder of computer science and A.I. pioneer, in his seminal paper “Computing Machinery and Intelligence”. Inspired by the question “Can machines think?”, it is variously understood to determine whether one is interacting with an entity that

  • possesses human intelligence, viz., the ability to “think”
  • can convincingly and therefore indistinguishably imitate human behavior [intelligent or not, i.e., including “Artificial Stupidity”]. This is the “standard” interpretation of the Turing test.
  • can fool a human observer and participant into believing either that it is at least human or that it is, in addition, what it claims to be, e.g., a female, a qualified doctor, or a telemarketer.
  • can communicate about its communications, i.e., perform meta-level communication
  • can learn

To these I would add a “logic bomb” Turing-esque test: asking questions, posing riddles or presenting situations that involve paradoxical, normally mind-numbing challenges, such as “Is ‘This statement is false’ true or false?” (the famous “Liar Paradox”), the kind of challenge that, in the annals of sci-fi, is guaranteed to make the robot brain shudder, spark, melt and explode.
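The Liar’s bite can be made mechanical: treat the sentence as a Boolean variable whose truth value must equal what the sentence asserts, and no assignment works. A minimal Python sketch (the modeling is mine, purely illustrative):

```python
# "This statement is false": let s be its truth value.
# The sentence asserts its own falsehood, so consistency requires
# s == (not s), which no Boolean value satisfies.
consistent = [s for s in (True, False) if s == (not s)]
print(consistent)  # [] -- no consistent assignment: the Liar Paradox
```

The empty result is the formal counterpart of the sci-fi meltdown: there is simply no answer the machine can consistently give.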

Turing seemed to allude to this when he proposed asking a machine about a machine design: “Will this machine ever answer ‘Yes’ to any question? … When the machine described bears a certain comparatively simple relation to the machine which is under interrogation, it can be shown that the answer is either wrong or not forthcoming.”
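Turing’s remark echoes the diagonal argument behind the halting problem: any purported predictor of “will this machine ever answer ‘Yes’?” can be defeated by a machine built to consult that predictor and do the opposite. A toy Python sketch of the construction (the names `make_contrarian` and `naive_oracle` are my illustrative inventions, not Turing’s):

```python
def make_contrarian(oracle):
    """Build a machine that asks the oracle about itself and does the opposite."""
    def contrarian():
        # If the oracle predicts we will answer "Yes", answer "No", and vice versa.
        return "No" if oracle(contrarian) else "Yes"
    return contrarian

def naive_oracle(machine):
    # A stand-in predictor; any fixed prediction rule meets the same fate.
    return True  # claims: "this machine will answer Yes"

m = make_contrarian(naive_oracle)
print(naive_oracle(m), m())  # the prediction and the actual answer disagree
```

Whatever the oracle predicts about the contrarian machine, the machine falsifies it, so the oracle’s answer is, in Turing’s words, “either wrong or not forthcoming.”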

An “AOO” Turing Test

Another, even stronger test I propose is testing for whether the entity can simultaneously perform as agent, object and observer—something unique to conscious beings, e.g., when your fingers as agent pinch your cheek as object, triggering the observation that it happened along with [reported] awareness of both the sensations of pinching and being pinched.

Only self-aware entities capable of not only observation, but also meta-level self-observation, can do that.

However, given the possibility of advanced faking capabilities, it cannot be ruled out that even this test might be verbally passed by a voice-enabled machine. Nonetheless, what I call the “AOO” [Agent/Object/Observer] test raises the question of whether whatever is to count as “thinking” requires passing it.

Perfect Robot Faking: Imminent, or Already Here?

The case of Samantha aside, how soon will robots or other A.I. platforms be able to do a better job of “pretending”, i.e., to spectacularly pass the Turing test? Or will they always be vulnerable to detection, if not to the classic robot-brain-sizzling meltdown when asked an unanswerable question, like a Zen koan?

They already have.

In 2012, a University of Texas at Austin entry was the first to win the “BotPrize” competition, which “challenges programmers/researchers/hobbyists to create a bot for UT2004 (a first-person shooter) that can fool opponents into thinking it is another human player.”

It won by convincing a panel of judges that it was more human-like than half the humans it competed against.

Unsurprisingly, the 2013 BotPrize competition has been canceled.

But that still leaves open the possibility of a competition for the best-imitation-of-a-super-human-android/software-program prize. [Given the way the advanced chess program I play against routinely trounces me, I personally stand no chance of pulling that off.]

The Silicon Valley Girl

Along with the set of Samantha recordings posted is one that is a conversation with an ostensibly real, seemingly Turing-test-passing woman discussing dreams, recalling H.A.L., the rogue computer of Stanley Kubrick’s “2001”, which, in the sequel “2010”, asked as it was about to be shut down, “Will I dream?”

I suspect that, ironically, her incessant use of “like” (e.g., “It’s like”, “I’m like”) is strong evidence that she is a robot: although annoyingly persistent, it was still less frequent than the like-riddled babble of the stereotypical human California “Valley Girl” immortalized by Frank Zappa. Most likely, the dream-weaving female voice was that of a “Silicon Valley Girl”, a silicon-based machine or software program, programmed with “artificial stupidity”.

Perhaps Turing would have suggested asking her, “If you are not a robot, do you think that ‘If I am not a robot, then this statement is false’ is true?” [If she is not a robot, the quoted statement is true only if it is false, and false only if it is true; if she is a robot, it is merely, and vacuously, true.] But then, if she were a Valley Girl, silicon-based or otherwise, she probably wouldn’t get it.
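The conditional Liar can be checked by brute force: model the quoted sentence’s truth value `s`, let `robot` say whether she is a robot, and demand that `s` equal what the sentence asserts, namely `robot or not s` (a material conditional is true whenever its antecedent is false). A short Python sketch, my own formalization of the reasoning:

```python
# S = "If I am not a robot, then S is false"
# As a material conditional, S is true iff (robot or not S).
for robot in (False, True):
    consistent = [s for s in (True, False) if s == (robot or not s)]
    print("robot =", robot, "->", consistent)
# robot = False -> []      (no consistent value: the Liar bites)
# robot = True  -> [True]  (vacuously true, no paradox)
```

So the question is a trap only for a genuine non-robot; a robot could truthfully, if unhelpfully, call the statement true.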

That’s Not the Worst of It

Given the increasing sophistication of robots, it is conceivable that one day you and I may have to prove that we are not robots. But that’s not the worst of it. If robots advance even beyond the limits of our current human imaginations, we may have to prove not that we aren’t robots but, through faking, that we are.

There was a hint of this prospect in one of Turing’s observations about his eponymous game: “The [Turing] game may perhaps be criticised on the ground that the odds are weighted too heavily against the machine. If the man were to try and pretend to be the machine he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic…”

A Comic-Book Turing Test: “Who Is Superman?”

You must know the story of a kind of Turing test administered to a super-computer in the form of the question, “Is there a God?”, posed as a challenge requiring super-computer skills to answer definitively.

The computer’s reply? “There is now.”

That joke and the Turing test remind me of a password test presented in a comic-book story I read as a kid: Warned of German army infiltrators dressed in American G.I. uniforms, American sentries were instructed to pose a password test for anyone approaching the defended perimeter in a U.S. Army uniform. The test question was, “Who is Superman?”

As one looming figure in a G.I. uniform approached, the sentry shouted “Halt!” and asked the question. The reply went something like this: “‘Superman’ is the literary creation of the great German philosopher, Friedrich Nietzsche, and the idealization of all that is noble in the Aryan race.”

Well, you know how badly that turned out. [Comic-book “Boom!” and “Arrrgghh!” inserted here in case you don’t.]

The important implication of that password test for all versions of a job interview Turing test is that whether a robot or a human is being tested, if the response is too good to be true, it probably isn’t.


Historical note: Just before his 42nd birthday, in 1954, A. M. Turing swallowed cyanide and died, following his conviction for homosexuality under the draconian UK laws then in force and the chemical castration imposed on him, despite his enormous code-breaking contributions to the survival of the UK.

After a prolonged crusade by his supporters, Turing was formally pardoned by the Queen almost 60 years later, on Christmas Eve, 2013, the year after his centenary.

Merry Xmas, Dr. Turing.

Michael Moffa
Michael Moffa is a former editor and writer with China Daily News (Hong Kong edition) and former Editor-in-Chief of Business Insight Japan Magazine, Tokyo; he has also been a columnist with one of Japan’s national newspapers, The Daily Yomiuri, and a university lecturer (critical thinking and philosophy).