How Russia is using artificial intelligence in its propaganda operations

JUANA SUMMERS, HOST:

This week, U.S. officials said they disrupted a Russian propaganda operation that used fake social media accounts posing as Americans. The Kremlin has used these kinds of bot farms before, but now artificial intelligence is making those efforts easier. NPR's Shannon Bond joins us to tell us more. Hey, Shannon.

SHANNON BOND, BYLINE: Hey, Juana.

SUMMERS: So Shannon, start by telling us about this bot farm and the role that AI played.

BOND: Yeah. The Justice Department says this was part of a project run by a Russian intelligence officer and funded by the Kremlin. And it used AI software to create fake profiles on X, the platform formerly known as Twitter, and promote pro-Kremlin narratives. So, for example, there was one user who claimed to live in Minneapolis. He posted videos of Russian President Vladimir Putin justifying Russia's actions in Ukraine. And these are the kinds of messages Russia has been pushing online for years now, particularly to undermine support for Ukraine.

SUMMERS: OK. Is AI making these operations more effective than they have been in the past?

BOND: Well, so far, it doesn't seem that using AI is helping these campaigns reach a lot more people. So X took down almost a thousand accounts involved in this bot farm, but we don't know how many people followed them. The screenshots that the government shared showed very small numbers. X didn't respond to further inquiries. But overall, that tracks with what we've heard from other recent reports on influence operations using AI.

What AI does seem to do is make these efforts cheaper. So, for example, in the past, you know, you had a troll farm. You had to rely on actual people to post online. And AI can replace some of that human labor, you know, whether it's by creating fake accounts like this new Russian operation did or by using chatbots to write posts. I spoke with Clement Briens. He's a senior threat intelligence analyst at the cybersecurity firm Recorded Future.

CLEMENT BRIENS: When you compare the cost of doing that versus paying, like, professional trolls in a troll farm in countries like Macedonia or the Philippines, that, you know, are sort of known for having those content farms, it represents over 100x decrease in costs.

SUMMERS: OK, Shannon, I'm going to need you to simplify this for me. What do these influence campaigns look like to just an average person?

BOND: Yeah. So I'll give you another example. So Recorded Future and other researchers have been investigating a network linked to Russia, a network of websites that pose as news outlets. And they seem to be using AI to rewrite articles from real news sites and then post them. And the goal there seems to be to make these fake publications look more credible so that when they publish Kremlin propaganda or false information or even AI-generated deepfakes, you know, it looks like it's from a legitimate source, a news site with other stories. And it's a form of information laundering that AI has made a lot easier for them to do.

SUMMERS: Shannon, I heard you mention deepfakes there, and we have talked a whole lot about how AI can create realistic but fake audio and images. But those fakes often have clues that suggest they may not be real. What about this AI-generated text?

BOND: Well, it's not always easy to spot this kind of text. You know, it does, actually, in many cases, allow foreign adversaries to avoid some of the errors that previously would have given them away - so things like grammar mistakes or idioms that just don't translate right into English. But sometimes these operations are sloppy. They will, for example, publish part of the prompts they use to generate the text, and so that can be a giveaway. But even so, you know, what we're seeing now with this kind of use of AI is it allows these propaganda campaigns to be more prolific. And, Juana, this is all coming at a time when intelligence officials are warning us that both Russia and Iran are stepping up their foreign influence operations.

SUMMERS: That's NPR's Shannon Bond. Shannon, thank you.

BOND: Thanks so much.

Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.
