Algorithmic Bias (19)

Algorithms selectively favoring certain groups or demographics.

Dr. Timnit Gebru, Joy Buolamwini, Deborah Raji — an Enduring Sisterhood of Face Queens
  • OneZero
  • 2020
  • 4 min

A group of “Face Queens” (Dr. Timnit Gebru, Joy Buolamwini, and Deborah Raji) join forces to pursue racial justice and equity work in computer vision, confronting racism within the industry and blowing the whistle on biased machine learning and computer vision technologies still deployed by companies like Amazon.

Monster Match
  • Hidden Switch
  • 2018
  • 15 min

A hands-on learning experience exploring the algorithms behind dating apps through the perspective of a monster avatar you create.
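
The game's internals are not spelled out here, but dating apps are commonly described as relying on collaborative filtering, and that is the pattern such games set out to demonstrate. Below is a minimal, hypothetical sketch (all users, profiles, and numbers invented) of how that loop can make less-popular profiles disappear from recommendations entirely:

```python
from collections import defaultdict

# Invented sample data: swipes[user] = profiles that user liked.
swipes = {
    "u1": {"dracula", "mummy"},
    "u2": {"dracula", "banshee"},
    "u3": {"dracula", "mummy"},
    "u4": {"swamp_thing"},
}

def recommend(target, k=2):
    """Rank unseen profiles by likes from users similar to the target."""
    liked = swipes[target]
    scores = defaultdict(int)
    for other, other_liked in swipes.items():
        if other == target or not (liked & other_liked):
            continue  # only users sharing a like count as "similar"
        for profile in other_liked - liked:
            scores[profile] += 1
    # Profiles liked only by dissimilar users never surface at all.
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("u1"))  # ['banshee']; swamp_thing stays invisible
```

Because recommendations come only from users judged similar, a profile with few early likes gets no exposure, so early popularity compounds.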

Survival of the Best Fit
  • Survival of the Best Fit
  • 2018
  • 10 min

Explores bias in AI-driven hiring through a game in which you play the hiring manager.
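
The game's own model is not described here, but the failure mode it dramatizes is standard: a model fit to past hiring decisions learns whatever bias those decisions contain. A minimal, hypothetical sketch with invented data and feature names:

```python
# Invented data: (school, skill 0-10, hired?). The manager favored
# school "A", so the labels encode that preference, not skill.
history = [
    ("A", 4, True), ("A", 6, True), ("A", 3, True),
    ("B", 8, False), ("B", 9, True), ("B", 7, False),
]

def hire_rate(school):
    """'Train' by estimating P(hired | school) from past decisions."""
    outcomes = [hired for s, _, hired in history if s == school]
    return sum(outcomes) / len(outcomes)

model = {school: hire_rate(school) for school in ("A", "B")}
print(model)  # {'A': 1.0, 'B': 0.333...}: the model memorizes the bias

# New candidates with identical skill get very different scores:
for school in ("A", "B"):
    print(f"school {school} candidate predicted hire chance:",
          round(model[school], 2))
```

Note that skill never enters the fitted model; the biased labels alone are enough to reproduce the old pattern on new applicants.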

Teaching Ethics in Data Science
  • Cornell Tech
  • 2019
  • 27 min

Solon Barocas discusses his relatively new course on ethics in data science, part of a larger trend toward developing ethical sensibility in the field. He shares ideas for spreading lessons across courses, promoting dialogue, and ensuring that students genuinely analyze problems while learning to stand up for what is right. The talk offers a case study in these sensibilities through the questions raised by predictive policing algorithms.

The Police Are Using Computer Algorithms to Tell If You’re a Threat
  • Time Magazine
  • 2017
  • 5 min

Chicago police deploy an algorithm that calculates a “risk score” for individuals from factors such as criminal history and age, with the aim of assessing risk and intervening pre-emptively. These numbers, however, are inherently linked to human bias in both their inputs and their outcomes, and can lead to unfair targeting of citizens even as the system supposedly introduces objectivity.
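
The article's input-bias point can be made concrete with a deliberately toy sketch (the real model is not public here; all weights and numbers are invented): arrest counts reflect police attention as much as behavior, so any score that weights them inherits that skew.

```python
def risk_score(recorded_arrests: int, age: int) -> float:
    """Toy linear score: more arrests and younger age raise the score."""
    return 2.0 * recorded_arrests + 0.5 * max(0, 30 - age)

# Two people with identical behavior (2 actual offenses each), but
# neighborhood A is patrolled 3x as heavily as neighborhood B, so
# the same behavior produces 3x the recorded arrests there.
offenses = 2
arrests_a = offenses * 3  # heavy patrol: most offenses recorded
arrests_b = offenses * 1  # light patrol: few offenses recorded

print(risk_score(arrests_a, 25))  # 14.5: flagged as high risk
print(risk_score(arrests_b, 25))  # 6.5: same behavior, low score
```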

Algorithms in the Courtroom
  • Cornell Tech
  • 2019
  • 28 min

Pre-trial risk assessment is part of an attempted answer to mass incarceration, but the data often answers a different question than the one being asked: it measures riskiness as recorded before incarceration, not how dangerous a person actually turns out to be. Technologies and algorithms deployed in contexts of social power differentials can be abused to compound injustice against, for example, people accused of a crime. Numbers are not neutral and can even act as a “moral anesthetic,” especially when the sampled data contains confounding variables that its collectors ignore. Engineers designing these systems do not always see the ethical questions in decisions that ought to be treated as political ones.
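
The point about data answering the wrong question can be illustrated with a hypothetical example (all rates invented): the observable training label is "re-arrested", which depends on policing intensity, while the question the court cares about is actual reoffending.

```python
# Invented rates: both groups reoffend equally often, but group A's
# offenses lead to an arrest twice as often as group B's.
true_reoffense_rate = 0.30
arrest_given_offense = {"A": 0.80, "B": 0.40}

for group, detection in arrest_given_offense.items():
    # A model trained on arrest records estimates this as "risk":
    observed_rate = true_reoffense_rate * detection
    print(f"group {group}: observed 'risk' {observed_rate:.2f}, "
          f"true rate {true_reoffense_rate:.2f}")
# Group A looks twice as risky as group B despite identical behavior.
```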
