Russia is the most prolific foreign influence actor using artificial intelligence to generate content targeting the 2024 presidential election, U.S. intelligence officials said on Monday.
The cutting-edge technology is making it easier for Russia as well as Iran to quickly and more convincingly tailor often-polarizing content aimed at swaying American voters, an official from the Office of the Director of National Intelligence, who spoke on condition of anonymity, told reporters at a briefing.
“The [intelligence community] considers AI a malign influence accelerant, not yet a revolutionary influence tool,” the official said. “In other words, information operations are the threat, and AI is an enabler.”
Intelligence officials have previously said they saw AI used in elections overseas. “Our update today makes clear that this is now happening here,” the ODNI official said.
Russian influence operations have spread synthetic images, video, audio, and text online, officials said. That includes AI-generated content “of and about prominent U.S. figures” and material seeking to emphasize divisive issues such as immigration. Officials said that’s consistent with the Kremlin’s broader goal to boost former President Donald Trump and denigrate Vice President Kamala Harris.
But Russia is also using lower-tech methods. The ODNI official said Russian influence actors staged a video in which a woman claimed to be a victim of a hit-and-run by Harris in 2011. There’s no evidence that ever happened. Last week, Microsoft also said Russia was behind the video, which was spread by a website claiming to be a nonexistent local San Francisco TV station.
Russia is also behind manipulated videos of Harris's speeches, the ODNI official said. They may have been altered using editing tools or with AI, and were disseminated on social media and through other channels.
“One of the efforts we see Russian influence actors do is, when they create this media, try to encourage its spread,” the ODNI official said.
The official said the videos of Harris had been altered in a range of ways, to “paint her in a bad light both personally but also in comparison to her opponent” and to focus on issues Russia believes are divisive.
Iran has also tapped AI to generate social media posts and write fake stories for websites posing as legitimate news outlets, officials said. The intelligence community has said Iran is seeking to undercut Trump in the 2024 election.
Iran has used AI to create such content in both English and Spanish, and is targeting Americans “across the political spectrum on polarizing issues” including the war in Gaza and the presidential candidates, officials said.
China, the third main foreign threat to U.S. elections, is using AI in its broader influence operations that aim to shape global views of China and amplify divisive topics in the U.S. such as drug use, immigration, and abortion, officials said.
However, officials said they had not identified any AI-powered operations targeting the outcome of voting in the U.S. The intelligence community has said Beijing’s influence operations are more focused on down-ballot races in the U.S. than the presidential contest.
U.S. officials, lawmakers, tech companies, and researchers have been concerned about the potential for AI-powered manipulation to upend this year’s election campaign, such as deepfake videos or audio depicting candidates doing or saying something they didn't or misleading voters about the voting process.
While those threats may still materialize as Election Day draws closer, so far AI has been put to more mundane uses: by foreign adversaries to improve productivity and boost the volume of content, and by political partisans to generate memes and jokes.
On Monday, the ODNI official said foreign actors have been slow to overcome three main obstacles that keep AI-generated content from becoming a greater risk to American elections: first, evading the guardrails built into many AI tools without being detected; second, developing their own sophisticated models; and third, strategically targeting and distributing AI content.
As Election Day nears, the intelligence community will be monitoring for foreign efforts to introduce deceptive or AI-generated content in a variety of ways, including “laundering material through prominent figures,” using fake social media accounts or websites posing as news outlets, or “releasing supposed ‘leaks’ of AI-generated content that appear sensitive or controversial,” the ODNI report said.
Earlier this month, the Justice Department accused Russian state broadcaster RT, which the U.S. government says operates as an arm of Russian intelligence services, of funneling nearly $10 million to pro-Trump American influencers who posted videos critical of Harris and Ukraine. The influencers say they didn’t know the money came from Russia.