Social media platforms continuously moderate the content hosted on their services. In doing so, they must balance the freedom of expression of those who upload content against the interests of other individuals and groups in having harmful content removed. Platforms such as Facebook and Instagram currently combine automated and human decisions. Over- and under-inclusive interventions remain, however, a daily occurrence: legitimate content is automatically taken down, while harmful content sometimes stays online despite user reports. The GDPR provides a right not to be subject to automated decision-making, but it is an open question whether this right can provide redress with regard to content moderation. The new Digital Services Act introduces a right of redress for users. But what does it entail, and are there alternative solutions to explore? What are the limits of individual access to justice within privately owned online platforms?
• What is the role and what are the limitations of the redress tools against automated content moderation offered by the GDPR?
• What is the role and what are the limitations of the new right of redress introduced by the Digital Services Act against automated content moderation?
• Are there alternatives to automated decisions implementing a social media platform's T&S?
• Is there an “access to justice” right in the context of privately owned social media? What are its main elements?