Fairness and Non-discrimination

The Police Are Using Computer Algorithms to Tell If You’re a Threat
  • Time Magazine
  • 2017
  • 5 min

Chicago police use an algorithm to calculate a “risk score” for individuals based on factors such as criminal history and age, with the aim of assessing risk and striking pre-emptively. These numbers, however, are inherently linked to human bias in both input and outcome, and could lead to unfair targeting of citizens even as the system supposedly introduces objectivity.
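
To make the input-bias point concrete, here is a minimal sketch of a hypothetical weighted risk score; the formula, weights, and numbers are illustrative assumptions, not Chicago's actual model. Two people with identical behavior can receive different scores when one lives in a more heavily policed neighborhood that produces more recorded arrests.

```python
# Hypothetical risk score: the weights and features are invented for illustration.
def risk_score(prior_arrests: int, age: int) -> float:
    # More prior arrests and younger age raise the score.
    return 2.0 * prior_arrests + max(0, 30 - age) * 0.5

# Same conduct, but heavier police deployment yields more recorded arrests.
lightly_policed = risk_score(prior_arrests=1, age=22)   # 6.0
heavily_policed = risk_score(prior_arrests=4, age=22)   # 12.0

# The formula is "objective", yet the biased input (arrest counts reflecting
# police deployment rather than behavior) flows straight into the output.
print(lightly_policed, heavily_policed)
```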

Taser Maker Says It Won’t Use Biometrics in Bodycams
  • Wired
  • 2019
  • 5 min

Axon’s novel use of an ethics committee led to a decision not to use facial recognition on the body cameras it provides to police departments, on the basis of latent racial bias and privacy concerns. While this is a beneficial step, companies and government offices at multiple levels still debate when and how facial recognition should be deployed and limited.

Investors Urge AI Startups to Inject Early Dose of Ethics
  • Wall Street Journal
  • 2019
  • 5 min

Incorporating ethical practices and outside perspectives into AI companies to prevent bias is beneficial and becoming more popular, driven by the need for consistent human oversight of algorithms.

This dating app exposes the monstrous bias of algorithms
  • Wired
  • 2019
  • 5 min

Monster Match, a game funded by Mozilla, shows how dating-app algorithms reinforce bias: by combining personal data with mass aggregated data, they systematically hide a vast number of profiles from view, effectively caging users into narrow preferences.
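
The mechanism Monster Match demonstrates is collaborative filtering. Below is a minimal toy sketch of that dynamic with an invented "like" matrix, not any real app's scoring: profiles the aggregate crowd ignored score near zero for everyone and sink out of sight, whatever an individual user might have preferred.

```python
import numpy as np

# Toy "like" matrix (rows: users, cols: candidate profiles); all data invented.
# Collaborative filtering scores profiles for you from *everyone's* swipes.
likes = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],   # our user: a single swipe so far
])

me = 2
sims = likes @ likes[me]          # similarity = overlap in liking behavior
sims[me] = 0                      # ignore self-similarity

scores = sims @ likes             # weight other users' likes by similarity
scores[likes[me] == 1] = -1       # don't re-recommend what was already seen

print(np.argsort(scores)[::-1])   # profile 1 leads; profiles 2-4 all score zero
# Profiles the crowd never liked are buried for every user, regardless of
# that user's own preferences.
```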

This is how AI bias really happens—and why it’s so hard to fix
  • MIT Technology Review
  • 2019
  • 5 min

An introduction to how bias is introduced into algorithms during the data-preparation stage, which involves selecting which attributes you want the algorithm to consider. It underlines how difficult bias is to ameliorate in machine learning, given that algorithms are not always attuned to human social contexts.
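
A minimal sketch of the attribute-selection problem the article describes, using invented data: even when the modeler drops a protected attribute during data preparation, a correlated proxy left in the feature set carries the same signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Invented data: a protected attribute the modeler deliberately excludes.
group = rng.integers(0, 2, n)                 # protected attribute (dropped)
zip_code = group + rng.normal(0, 0.3, n)      # proxy: segregation ties it to group
income = rng.normal(50.0, 10.0, n)            # unrelated feature

# "Data preparation": selecting attributes, believing the model is now blind.
X = np.column_stack([zip_code, income])

# The retained proxy still encodes the protected attribute:
print(round(np.corrcoef(X[:, 0], group)[0, 1], 2))   # ~0.86
```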

When AI Goes Wrong in Spatial Reasoning
  • GIS Lounge
  • 2019
  • 5 min

AI-driven GIS, a relatively new form of computational analysis, often contains algorithms whose biases reflect the training data drawn from open data sources; this case study focuses on power-line identification data that is centered on the Western world. The problem can be mitigated by approaching data collection with more intentionality, either broadening the pool of collected geographic data or inputting artificial images to help the tool recognize a greater range of circumstances and thus become more accurate.
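
The second remedy the article mentions, adding artificial images, is standard data augmentation. Here is a minimal sketch with invented arrays standing in for aerial tiles; the transforms and counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(tile: np.ndarray) -> list[np.ndarray]:
    """Synthetic variants of one aerial tile: flips, a rotation, and
    brightness shifts to mimic different lighting and terrain."""
    variants = [tile, np.fliplr(tile), np.flipud(tile), np.rot90(tile)]
    variants += [np.clip(tile * f, 0.0, 1.0) for f in (0.7, 1.3)]
    return variants

# Invented imbalance: plentiful Western imagery, scarce imagery elsewhere.
western = [rng.random((64, 64)) for _ in range(1_000)]
elsewhere = [rng.random((64, 64)) for _ in range(50)]

# Augment the under-represented region until the split is comparable.
augmented = [v for tile in elsewhere for v in augment(tile)] * 3
print(len(western), len(augmented))   # 1000 Western vs. 900 synthetic non-Western
```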
