Social media firms are prepping for the midterms. Experts say it may not be enough

Voters cast their ballots at a polling station set up in a fire station on Aug. 23, 2022, in Miami Beach, Fla. (Joe Raedle / Getty Images)

With two months to go until the midterms, tech companies are getting ready: rolling out fact checks, labeling misleading claims and setting up voting guides.

The election playbooks being used by Facebook, Twitter, Google-owned YouTube and TikTok are largely in line with those they used in 2020, when they warned that both foreign and domestic actors were seeking to undermine confidence in the results.

But the wave of falsehoods in the wake of that election — including the "big lie" that Donald Trump won — has continued to spread, espoused by hundreds of Republican candidates on ballots this fall.

That's left experts who study social media wondering what lessons tech companies have learned from 2020 — and whether they are doing enough this year.

The host of election-related announcements in recent weeks adds up to a "business as usual" approach, said Katie Harbath, a former elections policy director at Facebook who's now a fellow at the Bipartisan Policy Center.

The return of familiar playbooks

The platforms are largely taking a two-pronged approach: tamping down misleading or outright false claims, and boosting authoritative information from local election officials and reputable news sources.

In the first case, all four major platforms are leaning on labels to flag falsehoods and, in many cases, direct users to fact checks or accurate information. In some cases, users won't be able to share labeled posts and the platforms themselves won't recommend them. YouTube, Facebook and TikTok also say they will remove some specific false claims about voting and threats of violence.

Platforms are often hesitant to spell out exactly how they enforce their policies to avoid giving bad actors a roadmap. The range of approaches to labeling and removal also illustrates the fraught balance the companies try to strike between letting users express themselves and protecting their platforms from being weaponized — all while facing scrutiny from politicians on both sides of the aisle.

Policies diverge most when it comes to political ads. Twitter and TikTok have banned ads from candidates as well as ads about political issues. Google and Facebook both allow them, requiring disclosure of who pays for them. Facebook is once again freezing all new political ads in the week before Election Day but will allow existing ads to continue running.

But defining when an ad or issue qualifies as political isn't straightforward, leaving gaps that experts worry could be exploited.

"It's actually a quite confusing landscape because there is no regulation, there are no standards these companies have to follow," Harbath said. "Everyone is just making the choices that they feel are best for them and their company."

On the flip side, all four platforms are highlighting features that aim to put more reliable information in users' feeds, such as providing information about candidates, voter registration and when and where to cast ballots. That information will also be available in Spanish across platforms.

Branching out beyond English is an important step toward addressing a "glaring omission" in previous elections, said Zeve Sanderson, executive director of New York University's Center for Social Media and Politics.

In the final days of the 2020 election, Latino voters were targeted with social media posts discouraging them from voting, according to voting rights activists and disinformation experts.

Evidence is mixed on how well platform policies work

Even as social media companies double down on their 2020 tactics, researchers say it's not always clear how effective their interventions are.

In the case of labels, there is mixed evidence about whether they help dispel false impressions, or if, in some cases, they may inadvertently encourage people to double down on those beliefs.

Last year, researchers at NYU analyzed what happened after Twitter labeled some of Trump's tweets before and after the 2020 election as containing misinformation. They found the labeled messages spread even further on Twitter, and also took off on other platforms including Facebook, Instagram and Reddit.

The platforms have offered only small peeks into what they know about how well their tools work. Twitter has said that after it redesigned its labels for misleading information last year, more people clicked through to read accurate information.

Facebook, meanwhile, says it will be more choosy about what it labels, after users said labels were "over-used" in 2020. "In the event that we do need to deploy them this time round our intention is to do so in a targeted and strategic way," Nick Clegg, president of global affairs at Facebook parent Meta, wrote in a blog post.

But for NYU's Sanderson, that statement raised more questions than the company has answered.

"What was the feedback? From which users? What do the words 'targeted' and 'strategic' mean?" he said. "It would be really helpful for them to contextualize it within actual details of what their internal research has found."

Pro-Trump protesters gather in front of the U.S. Capitol on Jan. 6, 2021. (Brent Stirton / Getty Images)

Moving beyond misinformation "Whac-A-Mole"

What's more, it's hard to know how well the companies enforce their policies — which Harbath, the former Facebook official, described as a "huge gap."

"The companies are like, 'These are our policies, these are all the things that we're going to do.' But they don't talk enough about, 'OK, but humans are fallible. The technology is not 100% perfect,' " she said.

In the hours after polls closed in 2020, Trump supporters began rallying online under the slogan "Stop the Steal." Facebook quickly removed the first Stop the Steal group on its platform, under its rules against casting doubt on the legitimacy of the election and calling for violence. But more groups kept popping up, and Facebook was unable to keep up.

Researchers warn that the 2020 approach to election falsehoods doesn't address the reality of 2022. Tech companies approach elections as discrete events, typically putting policies in place and then turning them off when the voting is over — even though false claims don't end when the ballots are counted.

"The companies should be doing a lot more to have an always-on policy, because clearly these topics around the integrity of elections are certainly staying in the lexicon and the conversation well beyond Election Day," Harbath said.

The big challenge is for companies to move beyond being reactive and find ways to prevent their platforms from being used to spread these kinds of falsehoods so widely in the first place.

"When it comes to election misinformation and disinformation, platforms are kind of just playing Whac-A-Mole — trying to get on top of something before something else arises," said Spandi Singh, a policy analyst at the Open Technology Institute at the think tank New America.

Editor's note: Facebook parent Meta pays NPR to license NPR content.

Copyright 2023 NPR. To see more, visit https://www.npr.org.

Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.