Twitter Expands Warning Labels To Slow Spread of Election Misinformation

Twitter will add more prominent labels to misleading tweets from U.S. politicians and other high-profile users. Other users won't be able to read the posts until they click past a warning screen. / Twitter

Expect to see more prominent warning labels on Twitter that make it harder to see and share false claims about the election and the coronavirus, the company said on Friday.

This is the latest step that Twitter is taking to prevent the spread of deliberate misinformation as voters cast their ballots amid a pandemic. Like Facebook and other social media platforms, Twitter has announced a cascade of new rules to stop a flood of hoaxes and false claims aimed at misleading voters.

The social media company will more aggressively limit the impact of posts it labels as misleading. Most notably, it will hide tweets with false claims from American political figures, candidates or parties and other high-profile U.S. users behind warning screens. Users will have to click past the warnings to read these tweets.

Twitter will hide misleading tweets behind warning screens, similar to the ones it has used for posts that break its rules but are left up because of public interest. / Screenshot by NPR

"Some or all of the content shared in this Tweet is disputed and may be misleading," the warning will read. That label will also will appear prominently above the tweet, once users click past the warning screen.

It will be harder for such tweets to spread, too. Users won't be allowed to reply to them or retweet them without adding their own comment. And the tweets will not be recommended by Twitter's algorithms, meaning users won't see them in their main timelines.

"We expect this will further reduce the visibility of misleading information, and will encourage people to reconsider if they want to amplify these Tweets," Twitter officials Vijaya Gadde and Kayvon Beykpour wrote in a blog post on Friday.

Despite creating more intricate rules designed to stop misinformation, Twitter has been reluctant to remove posts in most cases. It previously used these kinds of warnings on tweets that violated its rules but which it determined should remain online because of public interest, including abusive posts from political leaders and harmful tweets about the coronavirus.

The expanded use of warning labels is likely to have a visible impact on one of Twitter's most prolific and controversial users: President Trump. He has repeatedly made false claims, including about mail-in voting, that Twitter has labeled as misleading. Under the new policy, more of his posts could be hidden behind warning labels and thus reach fewer users.

With less than a month to go until Election Day, social media companies are increasingly alarmed at the potential that their platforms will be used to manipulate or intimidate voters or to undermine the legitimacy of the election.

Both Twitter and Facebook have struggled to curb the viral spread of misinformation and hoaxes, which often proliferate widely before fact checks and corrections can catch up.

There are some lines Twitter says users cannot cross. On Friday, the company clarified that it would take down posts that try to interfere with the election process or its aftermath, including calls for "violent action."

It gave more details on plans to label posts that claim victory before election results are final. It will direct users to official information about the election and will only consider a race "authoritatively called" if it has been announced by state election officials or in independent, public projections from at least two "authoritative, national news outlets."

Earlier this week, Facebook said it, too, would crack down on voter intimidation, including removing posts that use "militarized language" in urging people to monitor polling places. Concerns have been growing over possible confrontations after Donald Trump Jr., the president's son, posted a video on social media calling for people to join an "Army for Trump." Facebook also plans to label premature claims of victory.

Before users can retweet something labeled as misinformation, they will see an alert that the tweet is "disputed" and possible links to "reliable" information. / Twitter

Other measures Twitter announced on Friday encourage users to think before posting. If a user tries to retweet something labeled as misinformation, she will be shown an alert directing her to "credible information about the topic" before she can continue.

The changes to how misleading information is displayed and shared — whether from high-profile figures or everyday users — go into effect next week and will be permanent.

Some additional restrictions will take effect on Oct. 20 and extend at least until the end of election week.

During that time, Twitter will temporarily prompt users to "quote tweet" — adding their own commentary — rather than simply retweet a post. It will also stop recommending tweets from people whom users do not already follow, a step meant to slow viral amplification.

And it will make changes to the trends it recommends to U.S. users, adding a description to explain why a given term is trending.

"This will help people more quickly gain an informed understanding of the high volume public conversation in the U.S. and also help reduce the potential for misleading information to spread," the company said.

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.