All Narratives (356)

Find narratives by ethical themes or by technologies.

Filter
Themes
  • Privacy
  • Accountability
  • Transparency and Explainability
  • Human Control of Technology
  • Professional Responsibility
  • Promotion of Human Values
  • Fairness and Non-discrimination
Technologies
  • AI
  • Big Data
  • Bioinformatics
  • Blockchain
  • Immersive Technology
Additional Filters:
  • Media Type
  • Availability
  • Year
    • 1916 - 1966
    • 1968 - 2018
    • 2019 - 2069
  • Duration

Demonstrators scan public faces in DC to show lack of facial recognition laws
  • CNET
  • 2019
  • 5 min

Fight for the Future, a digital activist group, used Amazon’s Rekognition facial recognition software to scan faces on the street in Washington, DC, arguing that stronger guardrails are needed on this type of technology before it is deployed for ends that violate human rights, such as identifying peaceful protesters.

Facial Recognition Is Accurate, if You’re a White Guy
  • New York Times
  • 2018
  • 7 min

This article details Joy Buolamwini’s research on racial bias coded into algorithms, specifically facial recognition programs. Auditing facial recognition software from several large companies, including IBM and Face++, she found that it was far less accurate at identifying darker-skinned faces. Overall, the findings suggest that facial analysis and recognition programs need external systems of accountability.

What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias
  • The Verge
  • 2020
  • 7 min

PULSE is an algorithm that supposedly reconstructs what a face looks like from a pixelated image. The problem: more often than not, it returns a white face, even when the person in the pixelated photograph is a person of color. Rather than actually sharpening the image, the algorithm generates a synthetic face that matches the pixel pattern, and it is these synthetic faces that show a clear bias toward white people, demonstrating how institutional racism works its way deep into technological design. Diversifying data sets will therefore not fully help until broader solutions for combating bias are enacted.
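
The mechanism the Verge article describes, searching a face generator for a synthetic image that merely matches the pixel pattern, can be made concrete with a minimal, hypothetical sketch; the `generator`, `downscale`, and parameter values below are placeholders for illustration, not PULSE’s actual code. The sketch shows why bias emerges: the loss only rewards matching the low-resolution input, so the output is whichever face the generator most readily produces, which is where skew in the generator’s training data enters.

```python
# Illustrative sketch only (not PULSE's published code): upscaling by searching a
# face generator's latent space. `generator` and `downscale` are assumed
# placeholders for a pretrained face GAN and a fixed downsampling operator.
import torch

def latent_search_upscale(lowres, generator, downscale, latent_dim=512,
                          steps=500, lr=0.1):
    """Find a latent code whose generated face, once downscaled, matches the
    pixelated input. The output is whatever face the generator can synthesize,
    so it inherits any demographic skew in the generator's training data."""
    z = torch.randn(1, latent_dim, requires_grad=True)  # random starting point
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        candidate = generator(z)                              # synthetic high-res face
        loss = ((downscale(candidate) - lowres) ** 2).mean()  # match low-res pixels only
        loss.backward()
        opt.step()
    return generator(z).detach()
```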

Google Built the Pixel 6 Camera to Better Portray People With Darker Skin Tones. Does It?
  • Wall Street Journal
  • 2021
  • 7 min

Google claims its new Pixel 6 smartphone has “the world’s most inclusive camera,” based on its purported ability to render darker skin tones more accurately in photographs, a form of digital justice notably absent from earlier generations of computational photography on phones from various tech monopolies.

Developing Algorithms That Might One Day Be Used Against You
  • Gizmodo
  • 2021
  • 10 min

Physicist Brian Nord, who came to deep learning algorithms through his research on the cosmos, warns that developing algorithms without proper ethical sensibility can leave those algorithms doing more harm than good. In essence, an “a priori,” or proactive, approach to instilling ethical sensibility in AI, whether through review institutions or the ethical education of developers, is needed to guard against privileged populations using algorithms to maintain hegemony.

Tim Cook May Have Just Ended Facebook
  • Inc
  • 2021
  • 5 min

On International Data Privacy Day, Apple CEO Tim Cook took aim at Mark Zuckerberg and Facebook’s model of mining user data, through platform analytics and web tracking, to serve up targeted ads. By contrast, Cook painted Apple as a privacy-oriented company that wants to make technology work for its users by neither collecting their data nor manipulating them psychologically through advertising.