This session will focus on how social media companies censor users’ speech on their platforms through the process of moderating content.
To kick off the discussion, we will present the first round of analysis of data drawn from OnlineCensorship.org, a recently launched Knight News Challenge-winning project that crowdsources user-generated reports of content takedowns, account suspensions, and other content moderation issues. The data illustrates the often-discriminatory impact of these practices and users' perceptions of companies' content moderation processes.
The ultimate objective of the project is to hold these companies more accountable to their users. We'd love to discuss with participants how this data can be used to do so effectively.
A few questions we might discuss during the workshop include:
- How does content moderation unequally affect different communities (LGBTQ people, human rights activists, artists, etc.)?
- How is abuse of the reporting (flagging) function tied to the censorship of certain communities?
- What would a more transparent and accountable moderation process look like?