Algorithmic Bias (23)

Algorithms selectively favoring certain groups or demographics.


Find narratives by ethical themes or by technologies.

Themes
  • Privacy
  • Accountability
  • Transparency and Explainability
  • Human Control of Technology
  • Professional Responsibility
  • Promotion of Human Values
  • Fairness and Non-discrimination
Technologies
  • AI
  • Big Data
  • Bioinformatics
  • Blockchain
  • Immersive Technology
Additional Filters:
  • Media Type
  • Availability
  • Year
    • 1916 - 1966
    • 1968 - 2018
    • 2019 - 2069
  • Duration

  • 10 min
  • New York Times
  • 2019
As Cameras Track Detroit’s Residents, a Debate Ensues Over Racial Bias

Examines racial bias in the facial recognition software used for government civil surveillance in Detroit. Racially biased technology diminishes the agency of minority groups and amplifies latent human bias.

  • 15 min
  • Hidden Switch
  • 2018
Monster Match

A hands-on learning experience that explores the algorithms used in dating apps through the perspective of a player-created monster avatar.

  • 10 min
  • MIT Technology Review
  • 2020
We read the paper that forced Timnit Gebru out of Google. Here’s what it says.

This article explains Timnit Gebru's ethical warnings against training the large language models used for natural language processing on enormous sets of textual data scraped from the internet. Not only does this process have a negative environmental impact, it also still does not enable these machine learning tools to process semantic nuance, especially as it relates to burgeoning social movements or to countries with lower internet access. Dr. Gebru's refusal to retract this paper ultimately led to her dismissal from Google.

  • 7 min
  • New York Times
  • 2018
Facial Recognition Is Accurate, if You’re a White Guy

This article details Joy Buolamwini's research on racial bias coded into algorithms, specifically facial recognition programs. When auditing facial recognition software from several large companies, such as IBM and Face++, she found that they are far worse at properly identifying darker-skinned faces. Overall, this reveals that facial analysis and recognition programs are in need of external systems of accountability.

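The kind of disparity audit described above can be sketched in a few lines. The following is a minimal, hypothetical illustration rather than Buolamwini's actual methodology: the records and group labels are invented, and a real audit would run a vendor's software against a labeled benchmark. The sketch only shows the arithmetic of comparing accuracy across demographic groups.

```python
from collections import defaultdict

# Hypothetical audit results: (demographic group, whether the face was matched correctly).
# In a real audit these records would come from running a facial-analysis system
# on a benchmark dataset labeled by skin type and gender.
results = [
    ("lighter-skinned", True), ("lighter-skinned", True),
    ("lighter-skinned", True), ("lighter-skinned", False),
    ("darker-skinned", True), ("darker-skinned", False),
    ("darker-skinned", False), ("darker-skinned", False),
]

def accuracy_by_group(results):
    """Per-group accuracy: the fraction of correct identifications in each group."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, ok in results:
        total[group] += 1
        correct[group] += ok
    return {group: correct[group] / total[group] for group in total}

rates = accuracy_by_group(results)
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.0%} accurate")

# A large gap between groups is the disparity an external audit is meant to surface.
print(f"accuracy gap: {max(rates.values()) - min(rates.values()):.0%}")
```
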
  • 10 min
  • Gizmodo
  • 2021
Developing Algorithms That Might One Day Be Used Against You

Physicist Brian Nord, who learned about deep learning algorithms through his research on the cosmos, warns that developing algorithms without proper ethical sensibility can lead to those algorithms doing more harm than good. Essentially, an “a priori” or proactive approach to instilling ethical sensibility in AI, whether through review institutions or the ethical education of developers, is needed to guard against privileged populations using algorithms to maintain hegemony.

  • 7 min
  • Farnam Street Blog
  • 2021
A Primer on Algorithms and Bias

Discusses the main lessons from two recent books explaining how algorithmic bias occurs and how it may be ameliorated. Essentially, algorithms are little more than mathematical operations, but their lack of transparency and the unrepresentative data sets on which they are trained make their pervasive use dangerous.

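To make concrete the primer's point that an algorithm is "little more than mathematical operations" shaped by its training data, here is a small, entirely synthetic sketch (invented data and a toy threshold rule, not an example drawn from either book): when one group is barely represented in the training set, the learned rule fits the majority group and quietly fails for the minority group.

```python
import random

random.seed(0)

# Entirely synthetic setup: one input feature x, one true label y. The feature
# means something slightly different for each group (think of an image statistic
# that shifts with lighting or skin tone): the true cutoff is 0 for group A
# but 2 for group B.
def make_examples(group, n):
    cutoff = 0.0 if group == "A" else 2.0
    return [(x, int(x > cutoff), group)
            for x in (random.uniform(-3, 5) for _ in range(n))]

# Unrepresentative training data: group B is barely present.
train = make_examples("A", 95) + make_examples("B", 5)
test = make_examples("A", 500) + make_examples("B", 500)

# The "algorithm" really is just arithmetic: pick the single threshold that
# minimizes training error, then predict 1 whenever x exceeds it.
def training_errors(threshold, examples):
    return sum(int(x > threshold) != y for x, y, _ in examples)

threshold = min((x for x, _, _ in train), key=lambda t: training_errors(t, train))

# The learned threshold tracks the majority group, so accuracy stays high for
# group A and drops noticeably for group B.
for group in ("A", "B"):
    subset = [(x, y) for x, y, g in test if g == group]
    accuracy = sum(int(x > threshold) == y for x, y in subset) / len(subset)
    print(f"group {group}: accuracy {accuracy:.0%} (learned threshold {threshold:.2f})")
```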