A friend asked over lunch this week if I had read anything that helped me understand the real-world effects of artificial intelligence – not theoretical future scenarios, but concrete ways it could change the world now.

I suggested a long New Yorker story about Microsoft building AI-driven features into software used by millions of people. And I suggested that he listen to NPR's Morning Edition interview with Microsoft CEO Satya Nadella.

In the NPR interview, Nadella told me that 2023 "will be looked at as the year we started using AI. It just sort of became part of our lives." Even as researchers and lawmakers debate the potential risks of a technology that might grow beyond human control, Microsoft is on its way to deploying it almost everywhere.

Microsoft partnered with OpenAI, creator of the widely used chatbot ChatGPT, and has begun rolling out AI features in its nearly ubiquitous software, such as Microsoft Excel, Outlook and Word. Microsoft's gigantic scale – 220,000 employees in a company valued at $2.75 trillion – gives such moves impact.

Nadella spoke via video from Microsoft's wooded campus in Redmond, Washington, where he said he uses his own company's so-called "copilot" functions to compose "the first draft" of memos and emails.

Engineers developed the "copilot" concept with both the technology's limitations and its users in mind: people are urged to understand that AI will often get things wrong.

What follows is a portion of our conversation, edited for length and clarity.


SATYA NADELLA, CEO, Microsoft

The one big design decision we made was to think about this as a copilot, not as an autopilot. Designing it in such a way that the human is in control. Thinking of anything that gets generated as a first draft, whether it's a piece of code or a piece of text or an image, that's probably a good way for us to think. Humans have to do their job too.

STEVE INSKEEP, host, NPR's Morning Edition

I wanted to play with some of your technology, so I went on to Bing [Microsoft's search engine]. I asked it to help me prepare for this interview. And I said: Tell me a few things that I didn't know about Satya Nadella. And it gave me information almost in the form of a special Wikipedia article with a number of facts, some of which were true, but also said, "Nadella... has actually published a book of poems called Two Shorten the Road."

No. That is a hallucination.

Is this what you want people to be aware of – that what they get back may be wrong? That they need to use their own brains?

That is right. This is not a database. It's a reasoning engine that goes and looks at all the data and tries to help you search through it and reason over it. That's why I think you have to design it with humans, teach humans to correct mistakes. And the technology itself needs to get better.

Would you explain for the layman how you're preventing this technology from being used for ill purposes? Someone says: Give me instructions for how to make a bomb, or any number of other things.

What you have to do is start putting it out [in front of users] in progressive ways, for people to find all the issues... and then support it with all the technology, like filtering out things you don't want it to say. Put in classifiers, right? When somebody says, 'Give me instructions to make a bomb,' that's a query that you can't put into Bing chat and get a response, because we filter those out.

I'm impressed by how there are all these invisible instructions that go out with my query. But I also think about the flip side of trying to control the technology in that way. You become the gatekeeper. Is this right? You're going to decide which information I can get or don't receive, which might cause people to begin questioning you and your power.

Yeah. I mean, as producers of technology, we have to take initial responsibility for the safety standards around our products. But I fully expect that this is a place where our democratic process – whether it's legislation or regulators – will have a thing or two to say about what exactly is safe deployment.

Are you prepared for the possibility that in the election year that's about to begin, you will be under pressure the way that social media companies have been in the past to allow certain information out, or criticized for not allowing certain information out?

We are already in the search business. When you search the web, you get all kinds of results back. There's a ranking algorithm in search today, and there are already people who have concerns about its transparency. So this is not completely new ground, though it does add a next level of user experience. But yes, we will be subject to the same set of pressures, and, more importantly, we subject ourselves to the standards of making sure that what we put in front of people is verifiable and accurate and helpful.

I understand that a lot of people who work on AI do so with an almost religious fervor. They believe so strongly in the technology, and they want it out in the world as quickly as possible. Isn't there a risk that that attitude, commendable as it sounds, could lead us astray?

I think people who are working on this technology, quite frankly, have the right balance. We are not just talking about technology for technology's sake. We are talking about technology and its real-world impact. Inflation-adjusted economic growth around the world is basically very low, so I think we need a new factor of production. [AI is] compatible with creating more economic opportunity. Then the second aspect is what you said, which is: How do we do it safely? We can't break things because, if we break things, we're not going to have a business.

In addition to driving economic growth, can this technology drive economic inequality, making more and more money for the people who control it the best while putting other people out of work?

There will be changes in jobs, and there will be new jobs that will be created. I'll give you an example. Today, you can participate in the increasing digitization inside your company. You can build an application by just [speaking to a computer in English rather than coding]. If you're in health care or in retail, as somebody on the front line with domain expertise, [you can] essentially do IT-class jobs that may have better wage support.

Do you anticipate a moment where there's a large language model and maybe it works with another AI program that can help it to see, and this device thinks it's smarter than you are and would do a better job than you do in running Microsoft?

You know, already the Microsoft Copilot is helping me compose emails better, is able to help me take a meeting and remember things I said and others said. I do think that it is helping me be a better person working at Microsoft.

But do you actually worry about that moment?

I don't worry about the moment of some technology replacing me. If anything, I want technology hopefully to remove the drudgery in work that all of us have.


Copyright 2023 NPR. To see more, visit https://www.npr.org.

Transcript

STEVE INSKEEP, BYLINE: We've been talking with a CEO whose company is spreading artificial intelligence. Satya Nadella of Microsoft came up on a screen from his office outside Seattle.

SATYA NADELLA: '23, I think, will be looked at as the year we started using AI. It just sort of became part of our lives.

INSKEEP: Chatbots captivated the tech industry. AI's potential to replace people became a factor in two big Hollywood strikes. And though it made less news, Microsoft is inserting AI-driven functions into everyday products it already sells to millions.

Are you already telling GPT to draft memos for you as a CEO of...

NADELLA: Oh...

INSKEEP: ...Microsoft?

NADELLA: ...Hundred percent.

INSKEEP: Events this year demonstrated how much Nadella's memos matter. Microsoft has developed products with OpenAI, a high-profile AI developer. When OpenAI's board fired its CEO, Sam Altman, Nadella had the power to compel the board to rehire him. Microsoft has shaped OpenAI's large language models so they can draft things for you in common software like Microsoft Excel, Outlook or Word.

NADELLA: I think that one big design decision we made was to think about this as a copilot. Not as an autopilot, but as a copilot. Designing in such a way that the human is in control. Human agency, human judgment is what is still the core and then building the product around it.

INSKEEP: Now, that reference to human judgment gets to the anxieties about AI. How much can it really do for us, and what might it have the potential to do to us? He wants people to make the final decisions.

NADELLA: You are the editor. That metaphor of thinking of anything that gets generated as a first draft, whether it's a piece of code or a piece of text or an image, that's probably a good way for us to think about how things get accelerated. But at the same time, humans have to do their job, too.

INSKEEP: You know, I wanted to play with some of your technology, so I went on to Bing, your search engine, which has this...

NADELLA: Yep.

INSKEEP: ...Copilot feature that you can use, and I asked it to help me prepare for this interview. And I said, tell me a few things that I didn't know about you, about Satya Nadella. And it gave me information almost in the form of a special Wikipedia article with a number of facts, some of which were true. And it also said, and I'm quoting here, "Nadella is a poetry lover," which I think is true, "and has actually published a book of poems called 'Two Shorten the Road.'"

NADELLA: No. That is a hallucination. But I do have a poetry book right next to me, so you can...

INSKEEP: (Laughter) "A Poem Every Day."

NADELLA: ...Say I love poetry.

INSKEEP: OK.

NADELLA: There you go.

INSKEEP: But not that you wrote. OK. OK. Well, I went back to the Copilot, actually, and said, what was your source for that information and got an apology. It apologized and said that was the wrong information.

NADELLA: That's good.

INSKEEP: Is this what you want people to be aware to do? That what they get back may be wrong? That they need to use their own brains?

NADELLA: That is right. I mean, at the end of the day, this is not a database. It's a reasoning engine that goes and looks at all the data and tries to help you search through it and reason over it. And so that's why I think you have to design it with humans, teach humans to correct, you know, mistakes and the technology itself needs to get better, if you will, in terms of being able to be more accurate and be grounded.

INSKEEP: Would you explain for the layman how you're preventing this technology from being used for ill purposes? Someone says give me the instructions of how to make a bomb or any number of other things. What instructions are in there for the machine itself to tell it not to do something wrong?

NADELLA: What you have to do is start putting it out in progressive ways for people to find all the issues, and then support it by all the technology, like filter out things you don't want it to say, put classifiers, right? When somebody says, you know, give me instructions to make a bomb, that's a query that you can't put into Bing chat and get a response because we filter those out. So those are the kinds...

INSKEEP: Meaning that Bing has been given an instruction by you - don't do that sort of thing.

NADELLA: That is correct. That is correct.

INSKEEP: That sounds very smart. I'm impressed by how there are all these invisible instructions that go out with my query. But I also think about the flip side of trying to control the technology in that way. You become the gatekeeper, is this right? I mean, you're going to decide which information I can get or don't receive, which might cause people to begin questioning you and your power. Is that right?

NADELLA: Yeah. I mean, I think as producers of technology, one is we have to take initial responsibility for what are the safety standards around our products. But I fully expect that this is a place where our democratic process, whether it's legislation or regulators, will have a thing or two to say about what exactly is safe deployment.

INSKEEP: Do you anticipate - are you prepared for the possibility, even the likelihood, that in the election year that's about to begin you will be under pressure the way that social media firms have been in the past to allow certain information out or criticized for not allowing certain information out?

NADELLA: We are already in the search business - right? - so this is not new. This is...

INSKEEP: Right.

NADELLA: ...Basically effectively another - in search, when you sort of search the web, you get all kinds of results back. And there's a ranking algorithm today in search like, you know, and there's already people who have concerns about, hey, what is that ranking algorithm, what's the transparency around it and what have you. And so therefore, I think this is not complete new ground, but yes, we will be subject to the same set of pressures we had before and subject ourselves, more importantly, to the standards of making sure that what we put in front of people is sort of verifiable and accurate and helpful.

INSKEEP: Thinkers about AI are divided between optimists who want to spread it as quickly as possible and pessimists who worry about what happens as computers grow smarter than people. Before talking with Nadella, we called Don Beyer, a Virginia congressman who was among the lawmakers considering how to regulate AI. We played the CEO a question that is on the lawmaker's mind.

DON BEYER: The biggest question I'd put to him is what does he see as the real existential risk from AI? Smart people, you know, the Stephen Hawkings who really are afraid of when we make machines that are smarter than we are - I know a lot of the computer scientists say, oh, no, that'll never happen or it'll be benign. I don't think we can afford to ignore the most important question, which is what does it mean for the future of humanity?

NADELLA: The question, of course, he asked is about existential risk. I'll answer that very specifically first, which is, I think if we somehow hand over control to AI and then we really don't have control of AI, that is the existential risk people talk about, right? What happens when, you know, you have a self-improving piece of software that we are not in control of? But the real thing that we also talked about before was what is the here and now, real-world harms, right? Whether it's bioterrorism or election interference or disinformation.

INSKEEP: So he sees two kinds of risks - short-term abuse by people and the long-term power of machines. Microsoft chose to approach both risks not by containing the technology, but by spreading it. When people use AI, their experience may show what it can do and how it needs to be limited.

NADELLA: Any CEO who wants to maintain a long-term, viable business for their investors and all the other constituents has to be thinking about the consequences, not just in the, you know, in the immediate term, but long term.

INSKEEP: Let me ask a little more about that. I understand that a lot of people who work on AI do so with an almost religious fervor. They believe so strongly in the technology, and they want it out in the world as quickly as possible. Isn't there a risk that that attitude, commendable as it sounds, could lead us astray?

NADELLA: I think people who are working on this technology, quite frankly, have, I think, the right balance. What I celebrate today - I mean, think about, you know, we're not just talking about technology for technology's sake. We're talking about technology and its real-world impact, right? So, for example, in a world where, you know, inflation adjusted, the economic growth around the world is - what? - basically very low. So I think we need a new factor of production. What motivates a lot of us in digital technology is to use this technology to drive - I'll call it economic growth that's compatible with our planet. It's compatible with creating more economic opportunity. Then the second aspect is what you said, which is how do we do it safely? We can't break things because if you break things, one, you're not going to have a business. And so therefore, thinking of both of these things simultaneously and the fact that we are having such a rich dialogue has to be celebrated.

INSKEEP: In addition to driving economic growth, can this technology drive economic inequality, making more and more money for the people who control it the best while putting other people out of work?

NADELLA: You know, it's an interesting question. First of all, there will be changes in jobs, and there will be new jobs that will be created. I'll give you an example. Today, you can participate in the increasing digitization inside your company. For example, you can build an application by just using a natural language dialogue interface. So that is - suddenly, if you're in the healthcare or you're in retail as someone who is in the front line with domain expertise is - are essentially doing IT-class jobs that may have better wage support, even.

INSKEEP: You're telling me I don't speak the programming language or don't write it, but I speak English, and I speak English to the computer and it figures out what I want?

NADELLA: Correct. Correct.

INSKEEP: Do you anticipate a moment where there's a large language model and maybe it works with another AI program that can help it to see, and this device thinks it's smarter than you are and would do a better job than you do in running Microsoft?

NADELLA: You know, like, already, there are - there is - the Microsoft Copilot is helping me compose emails better, is able to help me, in fact, take a Teams meeting, remember things I said, others said, better. And so, yes, I mean, I do think that it is helping me, at least in my current role, be a better sort of, you know, person working at Microsoft.

INSKEEP: But do you actually worry about that moment?

NADELLA: I - like, I don't worry about the moment of some technology replacing me. If anything, I want technology, hopefully, to remove the drudgery in work that all of us have.

INSKEEP: Satya Nadella is the AI-assisted CEO of Microsoft. His company was a big part of the news in artificial intelligence in 2023.

(SOUNDBITE OF MUSIC)

Transcript provided by NPR, Copyright NPR.
