Trying to correct misinformation online can make matters worse
ANI |
Updated: May 22, 2021 18:43 IST
Washington [US], May 22 (ANI): Findings of a recent study show that Twitter users post even more misinformation after other users correct them, which can fuel the spread of misinformation on social media.
According to new research published in the Proceedings of the 2021 Conference on Human Factors in Computing Systems and co-authored by a group of MIT scholars, not only is misinformation widespread online, but attempting to correct it can prompt even less accurate and more toxic posts from those being corrected.
The study revolved around a field experiment on Twitter, in which the research team offered polite corrections, complete with links to solid evidence, in replies to flagrantly false tweets about politics.
“What we found was not encouraging,” says Mohsen Mosleh, a research associate at the MIT Sloan School of Management, a lecturer at the University of Exeter Business School, and a co-author of a new paper detailing the findings.
Mosleh added, “After a user was corrected… they retweeted news that was significantly lower in quality and higher in partisan slant, and their retweets contained more toxic language.”
The paper, “Perverse downstream consequences of debunking: Being corrected by counterparts increases retweeting of low-quality, partisan, and toxic content in a Twitter field experiment,” was published online in CHI ’21: Proceedings of the 2021 Conference on Human Factors in Computing Systems.
The authors of the paper are Mosleh; Cameron Martel, a PhD candidate at MIT Sloan; Dean Eckles, the Mitsubishi Career Development Professor at MIT Sloan; and David G. Rand, the Erwin H. Schell Professor at MIT Sloan.
From attention to embarrassment?
To conduct the experiment, the researchers first identified 2,000 Twitter users, spanning a mix of political persuasions, who had tweeted out any one of 11 frequently repeated false news articles. All of those articles had been debunked by the fact-checking website Snopes.com. Examples of these pieces of misinformation include the incorrect claim that Ukraine donated more money than any other country to the Clinton Foundation, and the false claim that Donald Trump, as a landlord, once evicted a disabled combat veteran for owning a therapy dog.
The team then created a series of Twitter bot accounts, all of which had existed for at least three months and had gained at least 1,000 followers, and which appeared to be genuine human accounts. Upon finding any of the 11 false claims in a tweet, the bots would reply with a message along the lines of, “I’m not sure about this article – it might not be true. I found a link on Snopes that says this headline is false.” That reply also linked to the correct information.
Among other findings, the researchers observed that the accuracy of the news sources Twitter users retweeted declined by roughly 1 percent in the 24 hours after being corrected. Similarly, evaluating more than 7,000 retweets with links to political content made by those Twitter accounts in the same 24-hour window, the scholars found that the partisan lean of the content increased by over 1 percent, and that the “toxicity” of the retweets, based on an analysis of the language used, increased by about 3 percent.
In all of these areas – accuracy, partisanship, and toxicity of language – there was a distinction between retweets and primary tweets written by the Twitter users. Specifically, it was the retweets that declined in quality, while the primary tweets of the accounts under study did not.
“Our observation that the effect occurs only for retweets suggests that the effect is operating through the channel of attention,” says Rand.
He added: “We might have expected that being corrected would shift one’s attention to accuracy. But instead, it seems that being publicly corrected by another user shifted people’s attention away from accuracy – perhaps to other social factors, such as embarrassment.” The effects were slightly larger when people were corrected by an account identified with their own political party, suggesting that the negative response was not driven by partisan animosity.
Ready for prime time?
According to Rand, the current results appear to diverge from some previous findings that he and colleagues have published, such as a study in the journal Nature in March showing that neutral, nonpartisan reminders of the concept of accuracy can increase the quality of the news people share on social media.
“The divergence between these results and our prior work on subtle accuracy nudges highlights how complicated the relevant psychology is,” Rand said.
As the current paper notes, there is a big difference between privately reading an online prompt and having the accuracy of one’s own tweets publicly questioned. And as Rand notes, when it comes to offering corrections, “users could be reminded of the importance of accuracy in general, without debunking or attacking specific posts, and this could help to increase the accuracy and quality of the news shared by others.”
Then again, it is possible that more confrontational corrections could produce even worse results. Rand suggests that both the wording of corrections and the nature of the sources cited in them could be subjects of additional research.
“Future work should explore how to word corrections to maximize their impact, and how the source of the correction affects its impact,” he said. (ANI)