Algorithmic Bias

Algorithms that selectively favor certain groups or demographics.

As Cameras Track Detroit’s Residents, a Debate Ensues Over Racial Bias (New York Times, 2019, 10 min)

Examines racial bias in the facial recognition software used for government civil surveillance in Detroit. Racially biased technology diminishes the agency of minority groups and amplifies latent human bias.

Justice in the Age of Big Data (TED, 2017, 7 min)

Predictive policing software such as PredPol may claim to be objective through mathematical, “colorblind” analyses of geographical crime areas, yet this supposed objectivity is not free of human bias and is in fact used as a justification for the further targeting of oppressed groups, such as poor communities or racial and ethnic minorities. Further, the balance between fairness and efficacy in the justice system must be considered, since algorithms tend more toward the latter than the former.

Center for Applied Data Ethics suggests treating AI like a bureaucracy (VentureBeat, 2021, 7 min)

As machine learning algorithms become more deeply embedded in all levels of society, including governments, it is critical for developers and users alike to consider how these algorithms may shift or concentrate power, particularly through biased data. Historical and anthropological lenses are helpful in dissecting AI systems: how they model the world, and which perspectives might be missing from their construction and operation.

Why 2020 was a pivotal, contradictory year for facial recognition (MIT Tech Review, 2020, 7 min)

This article examines several case studies from 2020 to discuss the widespread use of facial recognition technology and the prospects for limiting it. The author argues that training and identifying people via social media platforms, in conjunction with law enforcement use, is dangerous for minority groups and protesters alike.

Who Gets a Say in Our Dystopian Tech Future? (The New Republic, 2020, 7 min)

The story of Dr. Timnit Gebru’s termination from Google is inextricably bound up with Google’s irresponsible practices around training data for its machine learning algorithms. Training natural language processing models on enormous data sets is ultimately a harmful practice: despite the environmental costs and the biases against certain languages it introduces, machines still cannot fully comprehend human language.

Researchers Find that Even Fair Hiring Algorithms Can Be Biased (VentureBeat, 2020, 4 min)

A study of the engine behind TaskRabbit, an app whose algorithm recommends the best workers for a given task, demonstrates that even algorithms designed for fairness and parity in representation can fail to deliver what they promise, depending on context.
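TaskRabbit’s actual ranking algorithm is not public, but the parity notion the study invokes can be made concrete. A minimal sketch, using entirely hypothetical worker IDs and group labels: compute the selection rate per demographic group and the gap between the highest and lowest rates, and note that a ranker can look balanced in one context while skewing in another.

```python
from collections import Counter

def selection_rates(candidates, selected):
    """Fraction of each demographic group that the algorithm selected.
    candidates: list of (worker_id, group) pairs; selected: set of worker_ids."""
    pool = Counter(group for _, group in candidates)
    picked = Counter(group for wid, group in candidates if wid in selected)
    return {group: picked[group] / pool[group] for group in pool}

def parity_gap(rates):
    """Spread between the highest and lowest group selection rates;
    0.0 indicates perfect demographic parity."""
    return max(rates.values()) - min(rates.values())

# Hypothetical candidate pool: four workers each from groups A and B.
candidates = [("w1", "A"), ("w2", "A"), ("w3", "A"), ("w4", "A"),
              ("w5", "B"), ("w6", "B"), ("w7", "B"), ("w8", "B")]

# The same ranker, applied in two task contexts, can yield different gaps:
context1 = {"w1", "w2", "w5"}  # A selected at 2/4, B at 1/4 -> gap 0.25
context2 = {"w3", "w6"}        # A and B both selected at 1/4 -> gap 0.0

for selected in (context1, context2):
    rates = selection_rates(candidates, selected)
    print(rates, round(parity_gap(rates), 2))
```

The point of the toy numbers is the study’s finding in miniature: aggregate parity (context2) does not guarantee parity in every context (context1), so fairness audits need to slice results by task and setting, not just overall.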
