The next time you try to share a post without reading it first, Facebook might warn you to think again.
The social media company announced Monday morning that it is testing a new feature, starting today, that prompts users to open and read articles before sharing them on the platform. Facebook will begin testing the feature with about 6 percent of its global Android users, a company spokesperson told Recode. Twitter started testing a similar feature in June of last year and rolled it out more widely to all of its users in September.
Facebook’s move is the latest example of social media companies trying to curb the rampant spread of misinformation and harmful content on their platforms by getting users to slow down before sharing. Some social media researchers have long advocated this kind of nudge, which they hope will discourage people from reacting to a provocative headline without actually getting the fuller context of the story.
But since these features are relatively new, it’s unclear how well they will actually work, or whether people will simply dismiss the prompts and share news without reading it anyway. And even if someone clicks on a post after Facebook asks them to, there’s no guarantee they’ll actually read the whole story – so it’s not a complete solution.
Facebook announced the news on a company Twitter account on Monday, including a picture of what the prompt will look like.
If you try to share an article without having clicked on it, Facebook will show you the following message:

“You are about to share this article without opening it. Sharing articles without reading them can mean missing key facts.” The prompt will then give users the option to either open the article first or continue sharing without reading it.
Facebook did not immediately respond to a request for additional comment, beyond clarifying the percentage of users who will test the feature.
There are some early signs that while features like this won’t completely stop the spread of fake news or polarizing content, they can at least encourage people to get more context on the news they share.
Back in September, Twitter shared its first data since it began testing a similar feature on its Android app. The data showed that the prompts led people to open articles 40 percent more often.
Last week, Twitter also rolled out a feature that asks people to reconsider tweets containing “offensive or hurtful language.” And ahead of the 2020 US presidential election, Twitter and Facebook began tackling misleading information on their platforms by labeling politically misleading posts and preventing users from liking or replying to those messages.
Social media companies have many levers to slow or stop the spread of harmful information and divisive speech. Outright banning people – as Facebook and Twitter did with Donald Trump – is one of them, but it’s a controversial option and, in many cases, far too blunt a tool. Features like the one Facebook started testing on Monday, which nudge users to pause before sharing content they haven’t read, could potentially do more by gradually changing how people post on the platform – getting them to think twice before sharing divisive or misleading content.