Algorithmic Bias (24)

Algorithms that selectively favor certain groups or demographics.

Find narratives by ethical themes or by technologies.

Themes
  • Privacy
  • Accountability
  • Transparency and Explainability
  • Human Control of Technology
  • Professional Responsibility
  • Promotion of Human Values
  • Fairness and Non-discrimination
Technologies
  • AI
  • Big Data
  • Bioinformatics
  • Blockchain
  • Immersive Technology
Additional Filters:
  • Media Type
  • Availability
  • Year
    • 1916 - 1966
    • 1968 - 2018
    • 2019 - 2069
  • Duration
As Cameras Track Detroit’s Residents, a Debate Ensues Over Racial Bias
  • New York Times
  • 2019
  • 10 min

This article examines racial bias in the facial recognition software used for government civil surveillance in Detroit. The racially biased technology diminishes the agency of minority groups and amplifies latent human bias.
Dr. Timnit Gebru, Joy Buolamwini, Deborah Raji — an Enduring Sisterhood of Face Queens
  • OneZero
  • 2020
  • 4 min

The “Face Queens” (Dr. Timnit Gebru, Joy Buolamwini, and Deborah Raji) have joined forces to pursue racial justice and equity work in the field of computer vision. Struggling against racism within the industry, they have blown the whistle on biased machine learning and computer vision technologies still deployed by companies like Amazon.
She’s Taking Jeff Bezos to Task
  • New York Times
  • 2021
  • 40 min

As facial recognition technology becomes more prominent in everyday life, used by law enforcement officials and private actors to identify faces by comparing them against databases, AI ethicists such as Joy Buolamwini are pushing back against the biases these technologies exhibit, particularly racial and gender bias. Governments often use such technologies callously or irresponsibly, and the lack of regulation of the private companies that sell them could lead society into a post-privacy era.
We read the paper that forced Timnit Gebru out of Google. Here’s what it says.
  • MIT Technology Review
  • 2020
  • 10 min

This article explains Timnit Gebru’s ethical warnings against training natural language processing systems on large language models built from sets of textual data scraped from the internet. Not only does this process have a negative environmental impact, it also fails to give these machine learning tools a grasp of semantic nuance, especially as it relates to burgeoning social movements or countries with lower internet access. Dr. Gebru’s refusal to retract the paper ultimately led to her dismissal from Google.
Facial Recognition Is Accurate, if You’re a White Guy
  • New York Times
  • 2018
  • 7 min

This article details Joy Buolamwini’s research on racial bias coded into algorithms, specifically facial recognition programs. Auditing facial recognition software from several large companies, including IBM and Face++, she found that it is far worse at correctly identifying darker-skinned faces. These findings show that facial analysis and recognition programs need external systems of accountability.
Developing Algorithms That Might One Day Be Used Against You
  • Gizmodo
  • 2021
  • 10 min

Physicist Brian Nord, who came to deep learning algorithms through his research on the cosmos, warns that developing algorithms without proper ethical sensibility can make their negative impacts outweigh the positive. He argues that an a priori, proactive approach to instilling ethical sensibility in AI, whether through review institutions or the ethical education of developers, is needed to guard against privileged populations using algorithms to maintain hegemony.