To trust or not to trust: that is the question many of us ask when choosing what information to believe today. The choice can be so overwhelming that scientists want to come to the rescue by adding extra buttons to social media posts.
This comes after researchers at University College London (UCL) ran experiments adding ‘trust’ and ‘distrust’ buttons to social media posts, alongside the existing ‘like’ buttons, and found that this cut the spread of misinformation by half.
Senior author Professor Tali Sharot is from the UCL Department of Psychology & Language Sciences, the Max Planck UCL Centre for Computational Psychiatry and Ageing Research, and the Massachusetts Institute of Technology.
“Part of why misinformation spreads so readily is that users are rewarded with ‘likes’ and ‘shares’ for popular posts, but without much incentive to share only what’s true,” she said.
What Was In The Experiments
The scientists tested 951 participants across six experiments in which they changed the incentive structure of the social media platform. Participants shared accurate or inaccurate news articles, and the recipients of those articles could ‘like’ or ‘dislike’ them, as well as choose to ‘trust’ or ‘distrust’ them.
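To make the setup concrete, here is a minimal sketch of that incentive structure in Python. The reaction probabilities, the reward rule, and the assumption that receivers reserve ‘trust’ mainly for accurate posts are illustrative choices for this sketch, not parameters from the published study.

```python
import random

random.seed(0)

TRUST_CONDITION = True  # toggle off to compare against a like-only baseline

def receiver_reacts(post_is_accurate):
    """Return (likes, trusts) feedback from one receiver.

    Assumption: receivers 'like' posts mostly on appeal, independent of
    accuracy, but grant 'trust' mainly to accurate posts.
    """
    likes = 1 if random.random() < 0.5 else 0
    if not TRUST_CONDITION:
        return likes, 0
    p_trust = 0.7 if post_is_accurate else 0.15
    trusts = 1 if random.random() < p_trust else 0
    return likes, trusts

def expected_reward(post_is_accurate, n_receivers=1000):
    """Average social feedback a sharer earns per receiver."""
    total = 0
    for _ in range(n_receivers):
        likes, trusts = receiver_reacts(post_is_accurate)
        total += likes + trusts
    return total / n_receivers

print("accurate post reward:  ", expected_reward(True))
print("inaccurate post reward:", expected_reward(False))
```

With the trust condition switched on, accurate posts earn noticeably more feedback than inaccurate ones; with it off, both earn roughly the same, which is the asymmetry the experiments were designed to create.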
When the researchers analyzed the results with computational modelling, they found that with the trust/distrust buttons in place, users became more careful about which information they reposted and shared.
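The study’s own models are not reproduced here, but a sketch can show the general shape of such an analysis: treat each sharing decision as a logistic choice whose utility depends on the feedback the sharer expects. All weights and bias values below are made-up illustrations, not estimates from the UCL study.

```python
import math

def p_share(expected_likes, expected_trusts,
            w_like=0.4, w_trust=0.9, bias=-1.0):
    """Logistic sharing rule: expected feedback -> probability of sharing."""
    utility = bias + w_like * expected_likes + w_trust * expected_trusts
    return 1.0 / (1.0 + math.exp(-utility))

# If receivers trust accurate posts more, a sharer who is sensitive to
# trust feedback (w_trust > 0) shares accurate posts more often.
print("accurate:  ", round(p_share(expected_likes=0.5, expected_trusts=0.7), 3))
print("inaccurate:", round(p_share(expected_likes=0.5, expected_trusts=0.15), 3))
```

Fitting weights like these to participants’ choices is how modelling can tell whether users actually factored the new feedback into their sharing decisions.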
Practical Benefits Of The Study
The researchers said their study has practical benefits, as it could help reduce the spread of misinformation on social platforms.
Potential Challenges
However, the researchers note there are potential challenges to implementing ‘trust’ and ‘distrust’ buttons on social platforms, with subjectivity and abuse of the buttons being the primary concerns. Determining the threshold for trust or distrust can be difficult because content evaluation is inherently subjective. Additionally, such buttons may be vulnerable to abuse: users might exploit them to promote personal biases or engage in targeted harassment.
Algorithmic complexity is another challenge, as implementing ‘trust’ and ‘distrust’ buttons requires robust algorithms to prevent gaming of the system. Platforms would need to develop sophisticated algorithms that can distinguish genuine user feedback from malicious or manipulative activity.
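As one illustration of what such a safeguard might look like, the sketch below down-weights trust/distrust votes from accounts whose voting history is extremely one-sided or implausibly bursty. The account fields, thresholds, and weighting scheme are all hypothetical; a production system would combine many more signals.

```python
def vote_weight(account):
    """Return a weight in [0, 1] for one account's trust/distrust vote."""
    total = account["trust_votes"] + account["distrust_votes"]
    if total == 0:
        return 1.0
    one_sidedness = max(account["trust_votes"], account["distrust_votes"]) / total
    weight = 1.0
    if total > 50 and one_sidedness > 0.95:  # votes one way almost always
        weight *= 0.2
    if account["votes_last_hour"] > 30:      # implausibly bursty voting
        weight *= 0.5
    return weight

def weighted_trust_score(votes):
    """Aggregate (account, is_trust) pairs into a weighted net score."""
    score = 0.0
    for account, is_trust in votes:
        score += vote_weight(account) * (1 if is_trust else -1)
    return score

votes = [
    ({"trust_votes": 10, "distrust_votes": 12, "votes_last_hour": 2}, True),
    ({"trust_votes": 400, "distrust_votes": 3, "votes_last_hour": 60}, True),
]
print(weighted_trust_score(votes))  # the second, suspicious vote barely counts
```

Even a toy heuristic like this shows the trade-off the researchers flag: the stricter the filters, the more genuine feedback risks being discounted along with the manipulation.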