On the night of Jan. 16, Liz O'Sullivan sent a letter she'd been working on for weeks. It was directed at her boss, Matt Zeiler, the founder and CEO of Clarifai, a tech company. "The moment before I hit send and then afterwards, my heart, I could just feel it racing," she says.

The letter asked: Is our technology going to be used to build weapons?

With little government oversight of the tech industry in the U.S., it's tech workers themselves who increasingly are raising these ethical questions.

O'Sullivan often describes technology as magic. She's 34 — from the generation that saw the birth of high-speed Internet, Facebook, Venmo and Uber. "There are companies out there doing things that really look like magic," she says. "They feel like magic."

Her story began two years ago, when she started working at Clarifai. She says one of her jobs was to explain the company's product to customers. It's visual recognition technology, used by websites to identify nudity and inappropriate content. And doctors use it to spot diseases.

Clarifai was a startup, founded by Zeiler, a young superstar of the tech world. But shortly after O'Sullivan joined, Clarifai got a big break — a government contract, reportedly for millions of dollars.

It was all very secretive. At first, the people assigned to work on the project were in a windowless room, with the glass doors covered.

O'Sullivan would walk by and wonder: What are they doing in there?

Zeiler says the contract required secrecy, but everyone working directly on the project knew what it was about. "We got briefed before even writing a single line of code," he says. "And I also briefed everybody I asked to participate on this project."

NPR spoke to one employee who did work directly on the project. That person, who requested anonymity for fear of retaliation, says many of the workers in that room were not entirely clear what this was going to be used for. After all, the technology they were putting together is the same that they had been working on for other projects.

In the months that followed, former employees say, information started trickling down.

They were working with the Department of Defense.

Then, people working on the project got an email that outlined some details. The text included a brief reference to something called Project Maven.

The Pentagon told NPR that the project, also called Algorithmic Warfare, was created in April 2017. Its first task was to use computer vision technology for drones in the campaign against ISIS.

"This could be more effective than humans, who might miss something or misunderstand something," explains Ben Shneiderman, a computer scientist at the University of Maryland. "The computer vision could be more accurate."

Shneiderman had serious ethical concerns about the project. And he wasn't alone. Many people in the tech world were starting to wonder: What will the technology we're building be used for down the road?

O'Sullivan says this question began to haunt her too.

The big fear among tech activists is that this will be used to build autonomous weapons: ones that are programmed to find targets and kill people, without human intervention.

The Department of Defense's current policy requires that autonomous weapons "allow commanders and operators to exercise appropriate levels of human judgment."

It's a definition many find murky. And last year, tech workers began to ask a lot of questions. "It's a historic moment of the employees rising up in a principled way, an ethical way and saying, we won't do this," Shneiderman says.

In 2018, Microsoft employees protested their company's work with Immigration and Customs Enforcement. And several thousand employees demanded that Google stop working on Project Maven. Google did not renew its contract with the project.

Last June, Clarifai CEO Matt Zeiler also weighed in. In a blog post, he explained why the company was working on a military project.

O'Sullivan read that with interest. "You know, the people running these companies are sort of techno-Utopians. And they believe that tech is going to save the world and that we really just have to build everything that we can, and then figure out where the cards fall. But there are a lot of us out here saying, should we be building this at all?"

Former Clarifai employees told NPR that at the office, the mood got tense.

There were plenty of people who felt comfortable working on Project Maven. Others resented that it had been so secretive. And some just found it morally troubling.

As the months went by, O'Sullivan says she realized she couldn't change the direction of the company. So at the beginning of this year, she wrote that letter to Zeiler and sent it to the whole staff.

"We have serious concerns about recent events and are beginning to worry about what we are all working so hard to build," she wrote.

She went on to ask a bunch of questions. Many of them are the same ones being asked across the tech world today.

Are you going to let us know who we're selling our stuff to?

Are you going to vet how it's used?

Do we care if this is used to hurt people?

A week after she sent that letter, she says Zeiler spoke at a staff meeting. "He did say that our technology was likely to be used for weapons," O'Sullivan says, "and autonomous weapons at that."

Zeiler does not deny this. In fact, he says, countries like China are already doing it, and the U.S. needs to keep up.

"We're not going to be building missiles, or any kind of stuff like that at Clarifai," he says. "But the technology ... is going to be useful for those. And through partnerships with the DOD and other contractors, I do think it will make its way into autonomous weapons."

This is where he and O'Sullivan disagree.

Should companies like Clarifai, Google and Amazon be involved in military projects?

Zeiler says Clarifai's technology will help save American soldiers. "At the end of the day, they're out there to do a mission. And if we can provide the best technology so that they can accurately do their mission, in the worst case, there might be a human life at the other end that they're targeting. But in many cases it might be a weapons cache, [without] any humans around or a bridge, to slow down an enemy threat."

And, Zeiler says, it's going to help minimize civilian casualties by improving the accuracy of weapons.

O'Sullivan wasn't buying that. She quit the day after the staff meeting. She describes herself as a conscientious tech objector.

She went on to join a startup that advises companies on how to make trustworthy artificial intelligence.

She says she still thinks tech can be really wonderful — or really dangerous. Like playing with magic.

Copyright 2019 NPR. To see more, visit https://www.npr.org.
