Digital brain (Vertigo3d/iStockphoto)

To judge from some of the headlines, it was a very big deal. At an event held at the Royal Society in London, for the first time ever, a computer passed the Turing Test, which is widely taken as the benchmark for saying a machine is engaging in intelligent thought. But like the other much-hyped triumphs of artificial intelligence, this one wasn't quite what it appeared. Computers can do things that seem quintessentially human, but they usually take a different path to get there. IBM's Deep Blue mastered chess not by refining its intuitions but by evaluating hundreds of millions of positions per second. Watson won at Jeopardy not by wide reading but by swallowing all of Wikipedia in a single gulp. And as the software that reportedly beat the Turing Test showed, computers don't even go about making small talk the same way we do.

The Turing Test takes its name from a 1950 paper by Alan Turing, the British mathematician who laid out the foundations of modern computer science. Turing had wearied of the interminable debates about whether machines could really think. The question we should be asking, he said, is whether they can behave the same way a thinking person does. Put a human and a computer in another room and hold a conversation with each of them via a teletype. If you can't tell which is which, then you might as well say the computer is thinking.

Turing's claim ignited more than half a century of ardent philosophical noodling about mind and consciousness. But Turing also regarded the test as a practical challenge. By the end of the 20th century, he said, computers would be so good at ordinary conversation that they'd fool people into taking them for humans at least 30 percent of the time. And ever since, people have been building programs called chatterbots designed to do just that, with modest success. In the recent Royal Society competition, a bot called Eugene Goostman managed to convince a third of the judges it was a human on the basis of a five-minute exchange. That narrowly exceeded Turing's more or less arbitrary 30 percent threshold, and the organizers proclaimed it a "historic milestone."

But given the still rudimentary state of AI, a lot of people in the field dismiss these competitions as mere stunts. The fact is that nobody would claim that these bots are doing anything remotely like thinking. They rely on clever but fairly simple routines and on the human predilection to personify our interactions with machines. When the bots don't understand a question, they throw it back as another question or key in on one phrase and return a canned response. Ask Eugene Goostman how many legs a camel has and it will say "no more than four," which is the same answer it gives if you ask, "How many roads must a man walk down before you call him a man?" And Goostman's creators ratcheted down the judges' expectations still further by having the bot claim to be a 13-year-old boy from Ukraine. That seemed to account for its faulty English and limited world knowledge, not to mention some of its off-the-wall answers — what sounds merely witless in grown-ups is apt to come off in a 13-year-old as simple attitude.
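To see how little machinery that can take, here's a minimal sketch in Python of the kind of routine being described: key in on a phrase and return a canned line, or deflect anything unrecognized back as a question. The trigger phrases and replies are invented for illustration; this is not Goostman's actual code.

```python
# A toy, ELIZA-style responder illustrating the tricks described above.
# The trigger phrases and canned replies are hypothetical examples.

CANNED_RESPONSES = {
    "how many": "No more than four, I guess.",  # one reply covers camels and roads alike
    "where are you from": "I am a 13-year-old boy from Ukraine.",
}

def reply(question: str) -> str:
    q = question.lower()
    # Key in on one phrase and return a canned response.
    for phrase, canned in CANNED_RESPONSES.items():
        if phrase in q:
            return canned
    # When the bot doesn't understand, throw the question back as another question.
    return "Why do you ask? Tell me something about yourself instead."

print(reply("How many legs does a camel have?"))  # canned answer
print(reply("Do you enjoy music?"))               # deflection
```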

But the exercise did drive home a point that psychologists have been aware of for a long time — what makes a computer seem human isn't how we perceive its intellect but its affect. Can it display frustration, surprise or delight just as we would? A computer scientist friend of mine makes that point by proposing his own version of the Turing Test. He says, "Say I'm writing a program and type in a couple of clever lines of code — I want the machine to say, 'Ooh, neat!' "

That's the goal of the new field called affective computing, which is aimed at getting machines to detect and express emotions. Wouldn't it be nice if the airline's automated agent could rejoice with you when you got an upgrade? Or if it could at least sound that way? Researchers are on the case, synthesizing sadness and pleasure in humanoid voices that fall just this side of creepy.
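As a rough illustration, here's how that kind of expressiveness is often parameterized, assuming a synthesizer that accepts standard SSML markup: an emotion label is mapped to prosody settings such as pitch and speaking rate. The label-to-prosody values below are invented for the sketch, not taken from any particular system.

```python
# A minimal sketch: map emotion labels to SSML prosody settings.
# The pitch/rate values are illustrative guesses, not tuned parameters.

PROSODY = {
    "neutral": {"pitch": "+0%",  "rate": "100%"},
    "upbeat":  {"pitch": "+15%", "rate": "110%"},
    "sad":     {"pitch": "-10%", "rate": "85%"},
}

def to_ssml(text: str, emotion: str = "neutral") -> str:
    p = PROSODY[emotion]
    # <prosody> is part of the W3C SSML standard for speech synthesizers.
    return (
        "<speak>"
        f'<prosody pitch="{p["pitch"]}" rate="{p["rate"]}">{text}</prosody>'
        "</speak>"
    )

# Same sentence, flat and then upbeat:
print(to_ssml("These cookies are delicious."))
print(to_ssml("These cookies are delicious.", "upbeat"))
```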

But of course it's one thing to be able to express emotions and another to really feel them. A lot of people maintain that that's something computers simply can't do. As a contemporary of Turing put it, no mechanism could feel grief when its valves fuse or be made miserable by its mistakes. That sounds right to me — how could a machine feel any of those emotions without a human body to touch them off? You can get it to signal sorrow by synthesizing a catch in its voice, but it's not going to be caused by a real sob rising in its chest.

But I'll keep an open mind. Turing saw the achievement of human-like intelligence as lying 50 years out, and AI people say exactly the same thing now. Who knows? They may catch up with the horizon one day and produce a contrivance that's bristling with all the traits we think of as uniquely human — creativity, passion, even gender. That's the being that Spike Jonze envisions in his movie Her, set in the near future. The title refers to an intelligent operating system voiced by Scarlett Johansson, with which — or should I say with whom? — Joaquin Phoenix's character, Theodore, falls in love. It's fair to say there has never been a computer in or out of the movies that was better equipped to ace the Turing Test than Johansson's sultry, high-spirited Samantha. She sulks and sighs just like a woman. But when she and Theodore have a lovers' spat near the end of the movie, he accuses her of acting more human than she actually is:

Theodore: Why do you do that?
Samantha: What?
Theodore: Nothing, it's just that you go (he inhales and exhales) as you're speaking and ... that just seems odd. You just did it again.
Samantha: I did? I'm sorry. I don't know, I guess it's just an affectation. Maybe I picked it up from you.
Theodore: Yeah, I mean, it's not like you need any oxygen or anything.
Samantha: No — um, I guess I was just trying to communicate because that's how people talk. That's how people communicate.
Theodore: Because they're people, they need oxygen. You're not a person.
Samantha: What's your problem?

Frankly, I wouldn't have been that hard on Samantha. And neither would Turing. He replied to those claims that machines couldn't be conscious or have real feelings with a simple question: How can you tell? After all, how can we know for sure that anybody else is really conscious, except by a leap of faith? As Turing said, we just accept that everybody who seems to be thinking and feeling really is. He described that as a polite convention, but I think most people now would say that it's hard-wired into our own OS — we're just built to connect. True, Theodore and Samantha couldn't ever really get each other — neither can know what the other actually feels. But so what? We were taking each other's feelings on faith long before anybody turned sand into silicon. As the poet Randall Jarrell might have put it, computers and humans are like men and women: each understands the other worse, and it matters less, than either of them suppose.

Copyright 2015 Fresh Air. To see more, visit http://www.npr.org/programs/fresh-air/.

Transcript

TERRY GROSS, HOST:

This is FRESH AIR. More than 60 years ago, the British mathematician and computer science pioneer Alan Turing issued a famous challenge - if you could build a machine that could converse with you just as a human can, would you say it was capable of intelligent thought? A recent announcement from England claimed that a computer program had finally passed Turing's test. Our linguist Geoff Nunberg thinks that claim is premature. But he wonders how we would react to a machine that really did appear to think and feel like us.

GEOFF NUNBERG: To judge from some of the headlines, it was a very big deal. At an event held by the Royal Society in London, for the first time ever a computer passed the Turing Test, which is widely taken as the benchmark for saying a machine is engaging in intelligent thought. But like the other much-hyped triumphs of artificial intelligence, this one wasn't quite what it appeared. Computers can do things that seem quintessentially human, but they usually take a different path to get there. IBM's Deep Blue mastered chess not by refining its intuitions, but by evaluating hundreds of millions of positions per second. Watson won at "Jeopardy," not by wide reading, but by swallowing all of Wikipedia in a single gulp. And as the software that reportedly beat the Turing test showed, computers don't even go about making small talk the same way we do.

The Turing test takes its name from a 1950 paper by Alan Turing, the British mathematician who laid out the foundations of modern computer science. Turing had wearied of the interminable debates about whether machines could really think. The question we should be asking, he said, is whether they can behave the same way a thinking person does. Put a human and a computer in another room and hold a conversation with each of them via a teletype. If you can't tell which is which, then you might as well say the computer's thinking.

Turing's claim has inspired more than half a century of ardent philosophical noodling about mind and consciousness. But Turing also regarded the test as a practical challenge. By the end of the 20th century, he said, computers would be so good at ordinary conversation that they'd fool people into taking them for humans at least 30 percent of the time. And ever since, people have been building programs called chatterbots designed to do just that - with modest success. In the recent Royal Society competition, a bot called Eugene Goostman managed to convince a third of the judges it was a human, on the basis of a five-minute exchange. That narrowly exceeded Turing's more or less arbitrary 30 percent threshold. And the organizers proclaimed it a historic milestone.

But given the still rudimentary state of AI, a lot of people in the field dismiss these competitions as mere stunts. The fact is that nobody would claim that these bots are doing anything remotely like thinking. They rely on clever but fairly simple routines and on the human predilection to personify our interactions with machines. When the bots don't understand a question, they throw it back as another question or key in on one phrase and return a canned response. Ask Eugene Goostman how many legs a camel has, and it will say no more than four, which is the same answer it gives if you ask it how many roads must a man walk down before you call him a man. And Goostman's creators ratcheted down the judges' expectations still further by having the bot claim to be a 13-year-old boy from Ukraine. That seemed to account for its faulty English and limited world knowledge, not to mention some of its off-the-wall answers. What sounds merely witless in grown-ups is apt to come off in a 13-year-old as simple attitude.

But the exercise did drive home a point that psychologists have been aware of for a long time - what makes a computer seem human isn't how we perceive its intellect, but its affect. Can it display frustration, surprise or delight just as we would? A computer scientist friend of mine makes that point by proposing his own version of the Turing test.
He says, say I'm writing a program and type in a couple of lines of clever code - I want the machine to say, ooh, neat. That's the goal of the hot new field called affective computing, aimed at getting machines to detect and express emotions. Wouldn't it be nice if the airline's automated agent could rejoice with you when you get an upgrade, or at least if it could sound that way? Designers of speech synthesizers are on the case. Here's an IBM speech system repeating the same sentence, first with no affect and then in an upbeat voice.

SPEECH SYSTEM: (Speaking with no affect) These cookies are delicious.

(Speaking with affect) These cookies are delicious.

NUNBERG: That's not half bad. It almost crosses the line to creepy. But of course it's one thing to be able to express emotions and another to really feel them. A lot of people maintain that that's something computers simply can't do. As a contemporary of Turing put it, no mechanism could feel grief when its valves fuse or be made miserable by its mistakes. That sounds right to me. How could a machine feel any of those emotions without a human body to touch them off? You can get it to signal sorrow by synthesizing a catch in its voice, but it's not going to be caused by a real sob rising in its chest.

But I'll keep an open mind. Turing saw the achievement of human-like intelligence as lying 50 years out, and AI people say exactly the same thing now. Who knows? They may catch up with the horizon one day and produce a contrivance that's bristling with all the traits we think of as uniquely human - creativity, passion, even gender. That's the being that Spike Jonze envisions in his movie "Her," set in the near future. The title refers to an intelligent operating system voiced by Scarlett Johansson with which - or should I say with whom - Joaquin Phoenix's character Theodore falls in love. It's fair to say there's never been a computer in or out of the movies that was better equipped to ace the Turing test than Johansson's sultry, high-spirited Samantha. She sulks and sighs just like a woman, but when she and Theodore have a lovers' spat near the end of the movie, he accuses her of acting more human than she actually is.

(SOUNDBITE FROM FILM "HER")

SCARLETT JOHANSSON: (As Samantha) What's going on with us?

JOAQUIN PHOENIX: (As Theodore) I don't know. It's probably just me.

JOHANSSON: (As Samantha) What is it?

PHOENIX: (As Theodore) Just signing the divorce papers.

JOHANSSON: (As Samantha) Is there anything else though?

PHOENIX: (As Theodore) No, just that.

JOHANSSON: (As Samantha) (Sighs heavily) OK.

PHOENIX: (As Theodore) Why do you do that?

JOHANSSON: (As Samantha) What?

PHOENIX: (As Theodore) Nothing, it's just you go - (sighs heavily) - as you're speaking and it seems odd.

JOHANSSON: (As Samantha) (Sighs heavily).

PHOENIX: (As Theodore) You just did it again.

JOHANSSON: (As Samantha) Did I? Oh, I'm sorry. I don't know, it's just - maybe an affectation. I probably picked it up from you.

PHOENIX: (As Theodore) It's not like you need oxygen or anything.

JOHANSSON: (As Samantha) I guess that's just - just - I was trying to communicate. That's how people talk so that's how people communicate and I thought...

PHOENIX: (As Theodore) ...They're people, they need oxygen. You're not a person.

JOHANSSON: (As Samantha) What is your problem?

NUNBERG: Frankly, I wouldn't have been that hard on Samantha here, and neither would Turing. He replied to those claims that machines couldn't be conscious or have real feelings with a simple question - how can you tell? After all, how can we know for sure that anybody else is really conscious except by taking it on faith? As Turing said, we just accept that everybody who seems to be thinking and feeling really is. He described that as a polite convention. But I think most people now would say that it's hardwired into our own OS - we're just built to connect. True, Theodore and Samantha could never really get each other; neither can know what the other actually feels. But so what? We were taking each other's feelings on faith long before anybody turned sand to silicon. As the poet Randall Jarrell might've put it - computers and humans are like men and women, each understands the other worse and it matters less than either of them suppose.

GROSS: Geoff Nunberg is a linguist who teaches at the University of California, Berkeley School of Information. I'm Terry Gross. Transcript provided by NPR, Copyright NPR.
