Businesses and organisations rely on AI to innovate, and increasingly to protect the public interest. For social media platforms, AI can be a powerful tool for content moderation that keeps users and the public safe, for instance by detecting and taking down violating content and accounts. At the same time, platforms must preserve privacy, fairness, and freedom of expression, and there is no one-size-fits-all approach to content moderation. Panelists will dive into how AI-based detection of illegal and harmful content works across different areas of harm, such as hate speech, child safety, and illegal content. The panel will also discuss the associated risks and safeguards, including transparency, control, privacy, fairness, and the role of human review and intervention. Some of the questions that will be addressed are:
• What data is needed for AI to be effective in different areas of harm, such as hate speech, misinformation, or illegal content?
• What are the opportunities and risks of leveraging AI for content moderation, and which challenges must be addressed for it to be effective and safe?
• Which other industry use cases could leverage AI-based content moderation?
• How can regulation optimise for effective and safe use of AI for content moderation?