January 4th, 2023. Issue #5
TL;DR
A study by MIT's CSAIL found that social media users can effectively assess and filter out misinformation without prior training. The researchers built a prototype platform that lets users rate the accuracy of content and filter posts based on assessments made by people they trust. This decentralized approach to moderation is more efficient and scalable than centralized moderation and may appeal to users who mistrust platforms. Current practices of labeling or removing misinformation can create tension between platforms and users, and may actually increase the reach of the flagged content. The researchers' goal is to make it easier for users to help each other assess misinformation on social media, reducing the moderation workload for everyone.
Think tank
How exactly were users able to filter content on their own?
Users filtered out misinformation on their own through a prototype platform that let them rate the accuracy of posts and then filter their feed based on assessments made by users they trust. Additional filters gave them control over what content appeared in their feed at all.
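To make the mechanism concrete, here is a minimal sketch of that kind of trust-based filtering: hide any post that someone you trust has marked as inaccurate. The names here (Post, Assessment, filter_feed) and the data model are assumptions for illustration, not the prototype's actual API.

```python
# Minimal sketch of trust-based filtering, assuming a simple post/assessment
# model like the one described above. Names and structure are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Assessment:
    assessor: str        # user who rated the post
    accurate: bool       # their accuracy judgment

@dataclass
class Post:
    post_id: str
    text: str
    assessments: list[Assessment] = field(default_factory=list)

def filter_feed(posts: list[Post], trusted: set[str]) -> list[Post]:
    """Hide any post that a trusted user has assessed as inaccurate."""
    visible = []
    for post in posts:
        flagged = any(
            a.assessor in trusted and not a.accurate
            for a in post.assessments
        )
        if not flagged:
            visible.append(post)
    return visible

# Example: the reader trusts alice, so alice's "inaccurate" rating hides p2.
p1 = Post("p1", "Verified report", [Assessment("alice", True)])
p2 = Post("p2", "Dubious claim", [Assessment("alice", False)])
print([p.post_id for p in filter_feed([p1, p2], trusted={"alice"})])  # ['p1']
```

The key design point is that moderation decisions propagate through each user's own trust list rather than through a central team, which is what makes the approach scale.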
How impactful is community-level moderation?
Community-level moderation can take a significant load off your team. In my experience at Yik Yak (an anonymous social platform with over 8 million users), the community downvoting system we implemented accounted for 64% of our total moderated content.
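For readers unfamiliar with how such a system works, here is a rough sketch of a downvote-threshold rule in the spirit of the Yik Yak example. The threshold value and function names are assumptions for illustration, not the platform's actual implementation.

```python
# Rough sketch of a community downvote rule: once a post's net score drops
# below a threshold, it is removed automatically without staff review.
# The threshold of -5 and these names are illustrative assumptions.
DOWNVOTE_THRESHOLD = -5  # net score at which a post is auto-removed

def net_score(upvotes: int, downvotes: int) -> int:
    return upvotes - downvotes

def should_auto_remove(upvotes: int, downvotes: int) -> bool:
    """Community moderation: remove once the net score falls to the threshold."""
    return net_score(upvotes, downvotes) <= DOWNVOTE_THRESHOLD

print(should_auto_remove(upvotes=2, downvotes=8))   # True: net score of -6
print(should_auto_remove(upvotes=10, downvotes=3))  # False: community approves
```

Because the rule is simple and transparent, users understand why content disappears, and the moderation team only sees the cases the community could not resolve.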
As fraud fighters, what other ways can we ensure we are building trust?
Data, data, data. Without data, you're blind. Centering your tools and decision-making on data elevates not just moderation efforts, but effective community building as a whole.
Highlights & Events
Insights | Ushering in a Safer Digital Future, World Economic Forum
Events | Comply with Online Safety, Spectrum Labs
Article | Ethics and AI
Insights | Sharing Economy Insights, Marketplace Risk