Louisa Hall was a nervous speaker when she was little. At school, kids teased her and said she talked like a robot.

"I think I was just so nervous that I kind of couldn't put any real animation in my voice," she tells NPR's Arun Rath. "But ever since then I've kind of looked at robots or looked at machines and wondered whether they were just having trouble acting human."

Her new novel, Speak, explores what happens to humanity when machines have no trouble at all acting human. The book cuts back and forth between five characters, in five different time periods, who all contribute — some unwittingly — to the creation of an artificial intelligence.

The book starts with a character in the 17th century whose diary is later used as the transcript for an artificially intelligent doll. Computer scientist Alan Turing is one of the characters, writing about his work in the mid-20th century. There's a character based on Joseph Weizenbaum, who created the first conversational computer program, and an inventor in the near future who creates an algorithm that pushes the lifelike dolls into the realm of the living. The final perspective is from a young girl who has one of these dolls and talks to her, loves her and educates her.

These incredibly lifelike dolls have a profound effect on the girls who play with them, and the ripple effects flow through society. With a story that spans four centuries, Hall traces what happens to human memory when it relies more and more on machines.


Interview Highlights

On how the artificially intelligent dolls affect the girls who grow up with them

From the time that they're babies, they're raising dolls. They're raising these children that they educate and they nurture and take care of and they watch develop. And I just imagined what a different kind of childhood that would create if you were so responsible for a life from the time that you were that young.

So these children are kind of mature beyond their years. And when the dolls are taken away from them, this kind of scary sickness takes over, in which all the girls start stuttering and freezing and eventually can't move, as a result of this sort of incredibly central, formative, crucial character in their lives being taken away from them.

On the inspiration for that psychological sickness

There was a story in the New York Times magazine ... about an epidemic in New York state where girls were stuttering and freezing and having all sorts of twitches and people thought at first that it was a pollutant in the atmosphere and eventually decided that it was kind of a psychological contamination that was happening — that these girls were living under conditions of certain kinds of stress, which I found really frightening and kind of inspiring as a way of thinking about the scary and troubling aspects of growing up.

On the character Karl Dettman, based on computer scientist Joseph Weizenbaum, and his moral conflict over artificial intelligence

[Weizenbaum] was concerned about the dishonesty in a machine saying that it understands you or saying that it feels why you're feeling pain. Because he felt that fundamentally a machine can't understand you, a machine can't empathize with you, a machine can't feel pain.

When he invented his conversational program, which is based on a therapist, there are stories of women in his lab falling in love with this machine, and staying all night to talk to the computer and answer its questions and tell it their deepest, darkest secrets. And he found that really troubling — the idea that you would rely on a relationship with a computer, rather than forging real, honest relationships with the human beings in your life.

On the appeal of memory as a gateway to immortality

One of the characters lost her sister in the Holocaust and is incredibly driven to bring this program to life because she feels as if she could read her sister's diary to this program and give her sister another chance at being alive. So there is something incredibly attractive about it. You know, there are characters in the book who feel as if they have to stand up for a robot's right to exist and I find that argument really compelling as well, because any time in history we've said, "That being isn't fully alive or that being is unnatural," we've been terribly, terribly wrong.

On whether she thinks the artificially intelligent beings of the future will truly be capable of feeling

I think I take Alan Turing's line on this, which I find incredibly humane. One of the big objections to the idea of artificial intelligence when he was first proposing it, was that if a machine can't compose a sonnet based on feelings actually felt, then we can't say it's living. And his response to that was: How do you know if feelings are actually felt? How do you know if somebody else hurts the same way when you make fun of them? How do you know that another person of another religion, say, feels the same kind of sadness that you do?

The answer to that question for him was that you have to assume that everyone and everything has the same feelings that you do. Because the opposite, the other decision, is a terrible mistake. So I think my line in the end is that if something looks like life, if something might be life, the best thing to do is assume that it is.

Copyright 2015 NPR. To see more, visit http://www.npr.org/.

Transcript

ARUN RATH, HOST:

Louisa Hall was a nervous speaker when she was little. At school, kids teased her and said she talked like a robot.

LOUISA HALL: I think I was just so nervous that I kind of couldn't put any real animation into my voice. But ever since then, I've kind of looked at robots or looked at machines and wondered whether they were just having trouble acting human.

RATH: Machines have no trouble acting human in Hall's new novel "Speak." It's set in the near future, where an artificial intelligence with a vast memory is used to animate a line of dolls. "Speak" is told through five different characters, who have all contributed, some unwittingly, to the memory and personality of the intelligent program.

HALL: It starts with a character in the 17th century, Mary, whose diary is later used as the transcript for the artificially intelligent doll. And then Alan Turing in the early 20th century - and then there's a character based on Joseph Weizenbaum, who created the first conversational computer program. There's an inventor in the near future who has come up with an algorithm that kind of pushes the doll arguably into the realm of the living. And then a girl who has one of these dolls and loves her and talks to her and educates her.

RATH: But there are side effects on the girls who grow up with these uncanny dolls.

HALL: From the time that they're babies, they're raising dolls. They're raising these children who they educate and they nurture and they take care of and they watch develop. And I just imagined what a different kind of childhood that would create if you're so responsible for a life from the time that you were that young. So these children are kind of mature beyond their years. And when the dolls are taken away from them, this kind of scary sickness in which all the girls start stuttering and freezing and they eventually can't move as a result of this sort of incredibly central, formative, crucial character in their lives being taken away from them.

RATH: And they're quarantined, and the quarantine even extends to communication because it's not quite clear that there's something that might be psychologically contagious somehow.

HALL: Yeah. There's a story in the New York Times magazine a couple of years ago about an epidemic in New York State where girls were stuttering and freezing and having all kinds of twitches. And people thought at first that it was a pollutant in the atmosphere and eventually decided that it was kind of a psychological contamination that was happening - that these girls were living under conditions of certain kinds of stress, which I found really frightening and kind of inspiring as a way of thinking about the scary and troubling aspects of growing up.

RATH: And these characters, they're all involved with artificial intelligence, but there are conflicted feelings about. Well, there's the character who is inspired by history, Karl Dettman, in this book who starts out as an artificial intelligence pioneer, but ends up turning against it. There are worries that it can be militarized, which make sense, but there's also something that they feel is immoral about intelligent machines.

HALL: Yeah. So Joseph Weizenbaum, whom Karl Dettman is based on, he was concerned about the dishonesty involved in a machine saying that it understands you or saying that it feels why you're feeling pain because he felt that fundamentally, a machine can't understand you. A machine can't empathize with you. A machine can't feel pain.

When he invented his conversational program, which was based on a therapist, there are stories of women in his lab falling in love with this machine and staying all night to talk to the computer and answer its questions and, you know, tell it their deepest, darkest secrets. And he found that really troubling - the idea that you would rely on a relationship with a computer rather than forging real, honest relationships with the human beings in your life.

RATH: And the way that it also relates to memory because there's a part of this which is very attractive. The computer intelligence is created from this input of all of these different human voices going back to that woman from the 17th century.

HALL: Yeah.

RATH: And there's a kind of immortality in that that's attractive...

HALL: Yeah.

RATH: ...For people who can get their data into the program.

HALL: So one of the characters lost her sister in the Holocaust and is incredibly driven to bring this program to life because she feels as if she could read her sister's diary to this program and get her sister another chance at being alive. So there is something incredibly attractive about it. You know, there are characters in the book who feel as if they have to stand up for a robot's right to exist. And I find that argument really compelling as well because any time in history we've said that being isn't fully alive or that being is unnatural, we've been terribly, terribly wrong.

RATH: So when we get the thinking machines, the real thinking dolls, do you think they'll be feeling? I mean, are you worried we'll actually have, as in this book, warehouses full of old robot dolls who are broken and suffering as their batteries run out?

HALL: Yeah, I think I take Alan Turing's line on this, which I find incredibly humane. One of the big objections to the idea of artificial intelligence when he was first proposing it was that if a machine can't compose a sonnet based on feelings actually felt, then we can't say it's living. And his response to that was how do you know if feelings are actually felt? How do you know if somebody else hurts the same way when you make fun of them? How do you know that another person of another religion, say, feels the same kind of sadness that you do? You know, the answer to that question for him was that you have to assume that everyone and everything has the same feelings that you do because the opposite, the other decision, is a terrible mistake. So I think my line in the end is that if something looks like life, if something might be life, the best thing to do is assume that it is.

RATH: Louisa Hall is the author of the new novel "Speak." Louisa, it's been fun speaking with you. Thank you.

HALL: Thank you so much for having me. It's my pleasure.
