Recommender Systems (6)

Find narratives by ethical themes or by technologies.

Themes
  • Privacy
  • Accountability
  • Transparency and Explainability
  • Human Control of Technology
  • Professional Responsibility
  • Promotion of Human Values
  • Fairness and Non-discrimination
Technologies
  • AI
  • Big Data
  • Bioinformatics
  • Blockchain
  • Immersive Technology
Additional Filters:
  • Media Type
  • Availability
  • Year
  • Duration

The Toxic Potential of YouTube’s Feedback Loop
  • Wired
  • 2019
  • 6 min

Harmful content spreads through YouTube’s AI recommendation engine, which helps create filter bubbles and echo chambers and leaves users with limited agency over the content they are exposed to.

5 types of recommender systems and their impact on customer experience
  • The App Solutions
  • 15 min

An overview of recommender systems: information-filtering algorithms designed to suggest content or products to a particular user (a short code sketch follows this entry).

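To make the idea concrete, here is a minimal, self-contained sketch of one common family of recommender system, item-based collaborative filtering. It is illustrative only: the ratings matrix, user indices, and similarity weighting are invented for this example and are not taken from the article above.

```python
# Minimal item-based collaborative filtering sketch (illustrative only).
# The ratings matrix below is made up; rows are users, columns are items,
# and 0 means "not rated yet".
import numpy as np

ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 0, 4, 1, 0],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two rating vectors (0 if either is all zeros)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

n_items = ratings.shape[1]
# Item-item similarity computed from the columns of the ratings matrix.
item_sim = np.array([
    [cosine_similarity(ratings[:, i], ratings[:, j]) for j in range(n_items)]
    for i in range(n_items)
])

def recommend(user_idx, top_n=2):
    """Score each unrated item by a similarity-weighted average of the user's ratings."""
    user_ratings = ratings[user_idx]
    rated = user_ratings != 0
    scores = {}
    for item in range(n_items):
        if rated[item]:
            continue  # only suggest items the user has not rated yet
        weights = item_sim[item, rated]
        if weights.sum() > 0:
            scores[item] = float(weights @ user_ratings[rated] / weights.sum())
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(recommend(user_idx=0))  # items user 0 has not rated, best guesses first
```

Real systems differ mainly in scale and in how they blend signals (content features, demographics, implicit feedback), but the basic filtering step is of this kind.
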
YouTube, the Great Radicalizer
  • New York Times
  • 2018
  • 7 min

YouTube’s algorithm serves its users increasingly radical recommendations in order to maximise the amount of time they spend on the platform. This tendency toward inflammatory recommendations often leads to political misinformation (a toy simulation of the feedback loop follows this entry).

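The dynamic described here can be illustrated with a toy simulation. This is not a model of YouTube's actual system; the engagement model, the "extremeness" scores, and every number below are invented assumptions, chosen only to show how optimising for watch time can feed back into a user's preferences.

```python
# Toy engagement-maximisation feedback loop (all numbers invented; not YouTube's system).
# A recommender repeatedly picks the video the user is predicted to watch longest;
# watching nudges the user's taste toward that video, so recommendations can drift
# toward ever more extreme content over time.
import random

random.seed(0)

catalog = [i / 99 for i in range(100)]  # each video has an "extremeness" score in [0, 1]

def predicted_watch_time(user_taste, video):
    # Assumed engagement model: slightly-more-extreme videos hold attention longest.
    return 1.0 - abs((user_taste + 0.05) - video)

user_taste = 0.10  # the user starts out fairly moderate
for step in range(25):
    slate = random.sample(catalog, 10)                   # candidate videos
    chosen = max(slate, key=lambda v: predicted_watch_time(user_taste, v))
    user_taste += 0.3 * (chosen - user_taste)            # watching shifts taste
    if step % 5 == 0:
        print(f"step {step:2d}: recommended {chosen:.2f}, taste now {user_taste:.2f}")
```

Under these assumptions the recommended "extremeness" and the simulated taste creep upward, even though each individual recommendation is only a small step beyond the user's current position.
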
Choose Your Own Fake News
  • n/a
  • 2018
  • 15 min

A choose-your-own-adventure game in which you experience forms of data fraud by playing as one of a cast of characters.

New bill would ban autoplay videos and endless scrolling
  • The Verge
  • 2019
  • 2 min

In this very short narrative, the Social Media Addiction Reduction Technology (SMART) Act is presented in the context of social networks and concerns about digital addiction.

He predicted the dark side of the Internet 30 years ago. Why did no one listen?
  • The Washington Post
  • 2021
  • 10 min

The academic Philip Agre, a computer scientist by training, wrote several papers warning about the impacts of unfair AI and data barons after spending several years studying the humanities and realizing that those perspectives were missing from computer science and artificial intelligence. The papers were published in the 1990s, long before the rise of the data-industrial complex and the normalization of algorithms in citizens' everyday lives. Although he was an educated whistleblower, his predictions were ultimately ignored, and the field of artificial intelligence remained closed off to outside criticism.
