Content Moderation (3)

The alteration or removal of hateful or dangerous speech or content on digital platforms to ensure a safer, more equitable environment.

Twitch updates its hateful content and harassment policy after company called out for its own abuses

At the end of 2020, Twitch, a social network built around live-streamed video and viewer comments, expanded and clarified its definitions of hateful content in order to moderate comments and posts that harassed other users or otherwise harmed them. As a workplace, however, Twitch has much to prove before this updated policy can be seen as more than a PR move.

  • TechCrunch
  • 2020
  • 3 min
Library of Congress bomb suspect livestreamed on Facebook for hours before being blocked

Live-streaming technologies are challenging to moderate and can distort society’s perception of violent events. They also raise the question of how such content can be deleted once it has been broadcast and potentially copied many times by different recipients.

  • Politico
  • 2021
  • 30 min
The ChatGPT Congressional Hearing

On May 16, 2023, OpenAI CEO Sam Altman testified before Congress on the potential harms of AI and how it ought to be regulated in the future, especially concerning new tools such as ChatGPT and voice imitators.
After watching the CNET video of the top moments from the hearing, read the Gizmodo overview of the hearing, and read the associated New York Times article last. All three resources highlight the need for governmental intervention to hold companies that produce AI products accountable, especially given the lack of fully effective congressional action on social media companies. While misinformation and deepfakes have concerned politicians since the advent of social media, the hearing raises additional new concerns, such as a new wave of job losses and the crediting of artists.

  • CNET, New York Times, Gizmodo
  • 2023