After the recent attacks in Paris and in San Bernardino, Calif., social media platforms are under pressure from politicians to do more to take down messages and videos intended to promote terrorist groups and recruit members.
Lawmakers in Congress are considering a bill that calls on President Obama to come up with a strategy to combat the use of social media by terrorist groups. Another measure, proposed in the Senate, would require Internet companies to report knowledge of terrorist activities to the government.
Obama himself has urged tech leaders to make it harder for terrorists "to use technology to escape from justice," and Democratic presidential candidate Hillary Clinton has recently said that social media companies can help by "swiftly shutting down terrorist accounts, so they're not used to plan, provoke or celebrate violence."
The Wall Street Journal is also reporting, citing an unnamed source, that the Department of Homeland Security is working on a plan to study social media posts as part of the visa application process before certain people are allowed to enter the country.
The companies say they already cooperate with law enforcement, and that the proposed legislation would do more harm than good.
Messages that threaten or promote terrorism already violate the usage rules of most social media platforms. Twitter, for instance, has teams around the world investigating reports of rule violations, and the company says it works with law enforcement entities when appropriate.
"Violent threats and the promotion of terrorism deserve no place on Twitter and our rules make that clear," Twitter said in a statement.
A major challenge is that social networks rely on their users to flag inappropriate content, in part because of the sheer quantity that is posted. Every minute, more than 300 hours of video are uploaded to YouTube and more than 250,000 photos to Facebook, making a timely response to every report difficult.
And because those judgments ultimately rest on human perception, some videos are harder to classify than others:
"There are videos of armed military-style training on YouTube, on Vimeo, on Facebook," says Nicole Wong, a former deputy chief technology officer in the Obama administration and executive at Twitter and Google. "Some of the videos taken by our servicemen in Afghanistan look surprisingly similar to videos taken by the PKK, which is a designated terrorist organization in Turkey."
So what if the process were automated? Social media companies already use sophisticated programs to help identify images of child pornography by comparing uploads against a national database of known images. But no such database exists for terrorist imagery.
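To make the technique concrete, here is a minimal, hypothetical sketch in Python of hash-based matching; every name in it is illustrative, not drawn from any real system. Production tools such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding, whereas the plain SHA-256 digest below only catches byte-identical copies:

    import hashlib

    # Hypothetical database of hashes of known prohibited images.
    # A real deployment would hold millions of perceptual hashes
    # maintained by a central clearinghouse.
    KNOWN_HASHES = {
        "3a1f0c",  # placeholder entry for illustration only
    }

    def fingerprint(image_bytes: bytes) -> str:
        # Compute a hex digest of the uploaded file's raw bytes.
        return hashlib.sha256(image_bytes).hexdigest()

    def matches_known_content(upload: bytes) -> bool:
        # Flag the upload if its fingerprint appears in the shared database.
        return fingerprint(upload) in KNOWN_HASHES

The sketch also shows why the approach stalls here: it only works when a curated database exists to compare against, which is exactly what is missing for terrorist content, and exact hashes break under even trivial edits, which is why real systems depend on perceptual hashing instead.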
And there's a bigger issue: What exactly constitutes terrorist content?
"There's no responsible social media company that wants to be a part of promoting violent extremism," Wong says. To her, a major reason why private companies shouldn't police social media for terrorist content is that "no one has come up with a sensible definition for what terrorist activity or terrorist content would be."
Efforts to legislate the problem run into similar criticism. For instance, the Senate bill that would require companies to report terrorist activity does not define terrorist activity, says Emma Llansó, director of the Free Expression Project at the Center for Democracy and Technology.
"This kind of proposal creates a lot of risks for individual privacy and free expression," she says.
Critics say this could open the door for governments elsewhere to demand reports of postings that they may consider threatening.
It's somewhat similar to an ongoing debate about the ability of government investigators to get access to encrypted communications: If the U.S. government asked for backdoors into these secured conversations, what would stop China, Russia or any other country from demanding the same kind of access?
Cisco Systems' new CEO, Chuck Robbins, spoke about this at a recent small press breakfast attended by NPR's Aarti Shahani. He said the company's technologies don't and won't include backdoors, and that ultimately companies can't build their businesses around swings in public sentiment after terrorist attacks.
"Our technology is commercially available. ... We are not providing any capabilities that aren't well documented and understood. And [we] also operate within the regulations that every government has placed on the technology arena," he said.
"We're operating the way that the public would like for us to operate and we're operating within the construct of the regulatory environment that we live in."
Transcript
DAVID GREENE, HOST: After the attacks in San Bernardino and Paris, social media platforms have come under pressure. Lawmakers want them to do more to take down messages and videos intended to recruit militants or that serve as propaganda. NPR's Brian Naylor reports.
BRIAN NAYLOR, BYLINE: Videos seeking to glorify groups like ISIS abound on the Internet. The Middle East Media Research Institute's TV Monitor Project has a collection. Here's one in English.
(SOUNDBITE OF VIDEO)
UNIDENTIFIED MAN: We are men honored with Islam who climbed its peaks to perform jihad, answering the call to unite under one flag.
NAYLOR: Messages threatening or promoting terrorism violate the usage rules of most social media platforms. The platforms rely on their users to help flag inappropriate content. Nicole Wong is a former executive with Twitter and Google who also served as President Obama's deputy chief technology officer.
NICOLE WONG: When you're a service that has, as YouTube does, more than 300 hours of video uploaded every minute, or, as Facebook does, more than 250,000 photos uploaded every minute, it's really hard to be able to make the proper decisions behind taking stuff down.
NAYLOR: Some content - say, a video showing a beheading - is obviously offensive and an easy call to remove. But Wong says things like military training videos on YouTube are more difficult.
WONG: Some of the videos taken by our servicemen in Afghanistan look surprisingly similar to videos taken by the PKK, which is a designated terrorist organization in Turkey.
NAYLOR: Some have suggested social media sites weed out terrorist posts with the same kinds of sophisticated programs that help them identify images of child pornography - by comparing them to a national database. But videos are constantly changing. A measure in the Senate would require Internet companies to report knowledge of terrorist activities to the government. Emma Llanso of the Center for Democracy and Technology says the proposal is pretty vague.
EMMA LLANSO: The bill does not define what terrorist activity is, but it does create this obligation for companies that, you know, would carry some risk if they failed to comply.
NAYLOR: Llanso says the legislation could force social media platforms to send the government all sorts of personal information about their users. Denise Zheng of the Center for Strategic and International Studies says she believes social media could be more proactive when it comes to taking down problem posts. But she says legislation could dry up important tips for law enforcement.
DENISE ZHENG: A lot of these individuals are actually identified using, you know, intelligence collection capabilities that monitor online behavior. So there are certainly intelligence interests, and we wouldn't want to hamper our efforts to identify ISIS militants and to take action against them.
NAYLOR: And there's a risk that demands here for companies to comply with the government legitimize online censorship in places like China. Nicole Wong, the former White House and social media official, says it's a dilemma.
WONG: We have designed and thrived on an open Internet. And we need to figure out ways to keep those communications channels open, even as more people who we disagree with are getting on board and using these same platforms.
NAYLOR: But with each attack, the pressure on social media companies to do more ratchets up. Brian Naylor, NPR News, Washington.
Transcript provided by NPR, Copyright NPR.