It’s time to rethink how we fight election misinformation

Last month, all four major online social platforms — Meta, Twitter, YouTube, and TikTok — released their plans for combating misinformation and disinformation in the weeks leading up to the 2022 US midterms. 

Meta will have voting alerts and real-time fact-checking in both English and Spanish, and, as it did in 2020, it will ban “new political, electoral and social issue ads” during the week leading up to the election. Twitter is focusing on “prebunks,” proactively fact-checking content in users’ feeds based on search terms and hashtags, and will have election-themed Explore pages. YouTube is rolling out information widgets on search pages for candidates. And TikTok will have curated election hashtags full of vetted information and will continue to enforce its long-standing ban on political advertising. 

You’d be forgiven, though, if you couldn’t keep any of this straight in your head anymore or can’t immediately parse what makes these any different from any other election. In fact, researchers and fact-checkers feel the same way. 

“I don’t think any platform is in ‘good shape’”

“Unless Facebook has drastically changed the core functions and design of its platform, then I doubt any meaningful changes have happened and ‘policing misinformation’ is still piecemeal, whack-a-mole, and reactive,” Erin Gallagher, a disinformation researcher on the Technology and Social Change research team at the Shorenstein Center, told The Verge.

The world has changed and the internet’s biggest platforms don’t seem to realize it. If polls are to be believed, 45 percent of Americans — and 70 percent of Republicans — believe some variation of the “Big Lie” that former President Donald Trump won the 2020 election. QAnon-affiliated candidates are running in over 25 states, and their conspiracy theories are even more prevalent. And even with new policies pointed directly at banning allegations of 2020 voter fraud, platforms are still full of “Big Lie” content. In the war between platform moderators and conspiracy theories, conspiracy theories won.

November’s election will be the first time many Americans enter a voting booth since last year’s insurrection, which was planned and subsequently livestreamed on many of the same platforms now emphasizing their commitments to democracy. And it’s this troubling new political reality that isn’t reflected in the current policies of the four major platform companies. 

“I hate to be extremely pessimistic but I don’t think any platform is in ‘good shape,’” Gallagher said.

So what comes next? How do we moderate online platforms in a post-insurrection world? Is it all just hopeless? Well, the simple answer might be that we have reached the limit of what individual platforms can do.

The “war room” looked like a room full of computers

Katie Harbath, CEO of Anchor Change and former public policy director for Facebook, said very little of what she’s seen from platforms regarding the US midterms this year feels new. Harbath left Facebook in 2021 and said she’s particularly concerned that none of the Big Tech companies’ election policies mention coordinating across platforms to counter internet-wide conspiracy theories.

“How does this mis- and disinformation spread amongst all these different apps? How do they interplay with one another?” Harbath told The Verge. “We don’t have enough insight into that because nobody has the ability to really look cross-platform at how actors are going to be exploiting the different loopholes or vulnerabilities that each platform has to make up a sum of a whole.”

The idea of a misinformation war room — a specific place with specific staffers devoted entirely to banning and isolating misinformation — was pioneered by Facebook after the Cambridge Analytica scandal. The twin shocks of Donald Trump’s 2016 victory and the unexpected Brexit vote put pressure on internet platforms to show that they were actively guarding against those who wanted to manipulate the democratic process around the world. 

Ahead of the US midterms and the Brazilian presidential election in 2018, Meta (then Facebook) wanted to change the narrative. The company invited journalists to tour a literal physical war room, which The Associated Press described as “a nerve center the social network has set up to combat fake accounts and bogus news stories ahead of upcoming elections.” From pictures, it looked like a room full of computers with a couple of clocks on the wall showing different time zones. The war room was shut down less than a month later, but it proved to be an effective bit of PR for the company, lending a sense of place to the largely mundane work of moderating a large website.

Harbath said that the election war rooms were meant to centralize the company’s rapid response teams and often focused on fairly mundane issues like fixing bugs or quickly taking down attempts at voter suppression. One example of war room content moderation she gave from the 2018 midterms: the Trump campaign was running an ad about caravans of undocumented immigrants at the border, and there was heavy internal debate about whether it should be allowed to run. The company ultimately decided to block the ad. 

“No platform has been transparent about how much content even gets labeled”

“Let’s say I got a phone call from some presidential candidate’s team because their page had gone down,” she said. “I could immediately flag that for the people in the War Room to instantly triage it there. And then they had systems in place to make sure that they were routing things in the right perspective, stuff like that.”

A lot of that triaging was also happening very publicly, with analysts and journalists flagging harmful content and moderators acting in response. In the 2020 election, the platform finally cracked down on “stop the steal” content — more than two months after the results of the election were settled.

Corey Chambliss, a spokesperson for Meta, told The Verge that the 2018 policy of working with “government, cybersecurity, and tech industry partners” during elections still holds for this year’s midterms. Chambliss would not specify which industry peers Meta communicates with but said that its “Election Operations Center” will be in effect ahead of Election Day this year. 

In a report published this month about removing coordinated inauthentic activity in Russia and China, Facebook said, “To support further research into this and similar cross-internet activities, we are including a list of domains, petitions and Telegram channels that we have assessed to be connected to the operation. We look forward to further discoveries from the research community.”

“There are also just more platforms now.”

There are other reasons to be pessimistic. Right now, the bulk of the election response involves using filters and artificial intelligence to automatically flag false or misleading content and to remove higher-level coordinated disinformation outright. But if you’re someone who spends 10 hours a day consuming QAnon content in a Facebook Group, you’re probably not going to see a fact-checking widget and suddenly deradicalize. Making things even more frustrating, according to Gallagher, is that there are no public numbers on how many posts actually get flagged as misleading or false.

“As far as I know, no platform has been transparent about how much content even gets labeled or what the reach of that labeled content was, or how long did it take to put a label on it, or what was the reach before vs. after it was labeled,” she said.

Also, if you’re someone immersed in these digital alternate realities, in all likelihood you’re not just using one platform to consume content and network with other users. You’re probably using several at once, none of which share a uniform set of standards and policies. 

“There are also just more platforms now,” said Gallagher, thinking of alternative social media platforms like Rumble, Gettr, Parler, Truth Social, etc. “And TikTok, which is wildly popular.”

Platforms also function in new ways. Social media is no longer simply a place to add friends, post life updates, and share links with different communities. It has grown into a vast interconnected universe of different platforms with different algorithms and vastly different incentives. And the problems these sites are facing are bigger than any one company can deal with.

“There was a big platform migration that happened both since 2020, and since January 6th.”

Karan Lala, a fellow at the Integrity Institute and a former member of Facebook’s civic integrity team, told The Verge that it’s useful now to focus on how different apps deliver content to users. He divides them into two groups: distribution-based apps versus community-based apps.

“TikTok, apps like Instagram, those are distribution-based apps where the primary mechanism is users consuming content from other users,” Lala said. “Versus Facebook, which has community-based harms. Right?”

That first class of apps, which includes TikTok and Instagram among others, poses a significant challenge during large news events like an election. This year’s midterms won’t be the first “TikTok election” in the US in the literal sense, but they will be the first US election in which TikTok, not Facebook, is the dominant cultural force in the country. Meta’s flagship platform reported that it lost users for the first time this year, and, per a recent report from TechCrunch, TikTok pushed the app out of the Apple App Store top 10 this summer. 

And, according to Brandi Geurkink, a senior fellow at Mozilla, TikTok is also the least transparent of any major platform. “It’s harder to scrutinize, from the outside, TikTok than it is some other platforms, even like Facebook — they have more in terms of transparency tools than TikTok,” Geurkink told The Verge.

Geurkink was part of the team at Mozilla that recently published “These Are ‘Not’ Political Ads,” a report that found TikTok’s ban on political ads is extremely easy to bypass and that the platform’s new tool letting creators pay to promote their content has virtually no moderation, allowing users to easily amplify paid political content. TikTok did, however, update its policy this month, blocking politicians and political parties from using the platform’s monetization tools, such as gifting, tipping, and its Creator Fund. The Verge has reached out to TikTok for comment.

“I think what we’ve advocated for, for a long time, is there to basically be external scrutiny into the platforms,” Geurkink said. “Which can be done by external researchers, and TikTok hasn’t really enabled that in terms of transparency. They’ve done a lot less than the other platforms.”

It’s not just a lack of transparency about how the platforms moderate themselves that’s a problem, however. We also still have little to no understanding of how these platforms operate as a network. Thanks to Meta’s own Widely Viewed Content Reports, though, we do have some sense of how linked these different platforms now are. 

The most viewed domain on Facebook during the second quarter of 2022 was YouTube.com, with almost 170 million views, while the TikTok.com domain accounted for 108 million views. That throws a wrench into the idea of any one platform moderating its content independently. But it’s not just content coming from other big platforms like YouTube and TikTok that creates weird moderation gray areas for a site like Facebook. 

“If people genuinely believe a false claim, all they’re going to think is that the social media company is trying to work against what they perceive to be the truth.”

Sara Aniano, a disinformation analyst at the Anti-Defamation League’s Center on Extremism, told The Verge that fringe right-wing websites like Rumble are increasingly impactful, with their content being shared back on mainstream platforms like Facebook.

“There was a big platform migration that happened both since 2020, and since January 6th,” Aniano said. “People figured out that they were getting censored and flagged with content warnings on mainstream social media platforms. And maybe they went to places like Telegram or Truth Social or Gab, where they could speak more freely, without consequence.”

Bad actors — the users who aren’t just blindly sharing content they think is true or don’t care enough to personally verify — know that larger mainstream platforms will suspend their accounts or put content warnings on their posts, so they’ve gotten better at moving from platform to platform. And when they are banned or have their posts flagged as misleading or false, it can feed a conspiratorial mindset among their followers.

“If people genuinely believe a false claim, all they’re going to think is that the social media company is trying to work against what they perceive to be the truth,” she said. “And that is kind of the tragic reality of conspiracism, not just leading up to the election, but around everything, around medicine, around doctors, around education, and all the other industries that we’ve been seeing attacked over and over again.”

One good example of how this all works together, she said, was the recent Arizona primary, where a conspiracy theory spread claiming that the pens being used by Maricopa County election officials were rigging the election. It was a repeat of a similar conspiracy theory, #SharpieGate, that first went viral in 2020. 

The hashtag #SharpieGate is currently hidden on Facebook, but that hasn’t stopped right-wing publishers from writing about it and having their articles shared on the platform. YouTube’s search results are completely free of conspiracy theory content, and while the hashtag isn’t blocked on Twitter, it is blocked on TikTok, where users are still making videos about it. 

Ivy Choi, the policy communications manager for YouTube, told The Verge that the platform is not blocking #SharpieGate content but is demoting it in search results. “When you look for ‘#Sharpiegate 2.0,’ YouTube systems are making sure authoritative content is at the top,” she said. “And making sure that borderline content is not recommended.”

“I mean, any attempt at mitigation and more stringent content moderation is a good thing,” Aniano said. “I would never say that it’s futile. But I do think that it needs acknowledging that the problem, and the distrust that has been sowed in the democratic process since 2020, is deeply systemic. It cannot be solved in a week, it can’t be solved in a year, it may take lifetimes to rebuild this trust.”

