Content Moderation

The alteration or removal of hateful or dangerous speech or content on digital platforms to ensure a safer, more equitable environment.

  • TechCrunch
  • 2020
  • 5 min
Twitch updates its hateful content and harassment policy after company called out for its own abuses

At the end of 2020, Twitch, a social platform built around live video streaming and commenting, expanded and clarified its definitions of hateful content in order to moderate comments and posts that harassed other users or otherwise harmed them. As a workplace, however, Twitch itself has much to prove before this updated policy can be seen as more than a PR move.

  • Politico
  • 2021
  • 3 min
Library of Congress bomb suspect livestreamed on Facebook for hours before being blocked

Live-streaming technologies are challenging to moderate and may distort society's perception of violent events. They also raise the question of how such content can be deleted once it has been broadcast and potentially copied many times by different recipients.
