All Narratives (328)

Find narratives by ethical themes or by technologies.

Filters
Themes
  • Privacy
  • Accountability
  • Transparency and Explainability
  • Human Control of Technology
  • Professional Responsibility
  • Promotion of Human Values
  • Fairness and Non-discrimination
Technologies
  • AI
  • Big Data
  • Bioinformatics
  • Blockchain
  • Immersive Technology
Additional Filters:
  • Media Type
  • Availability
  • Year
    • 1916 - 1966
    • 1968 - 2018
    • 2019 - 2069
  • Duration

The Toxic Potential of YouTube’s Feedback Loop
  • Wired
  • 2019
  • 6 min

Harmful content spreads through YouTube’s AI recommendation algorithm, which helps create filter bubbles and echo chambers and leaves users with limited agency over the content they are exposed to.

Self-Sustaining Programs
  • Kinolab
  • 1995
  • 9 min

In this world, a human consciousness (“ghost”) can inhabit an artificial body (“shell”), producing edited humans in largely robotic bodies. The Puppet Master, a notorious villain in this world, is revealed to be not a human hacker but a computer program that has gained sentience and hacked a captured shell. It challenges the law enforcement officials of Section 6 and Section 9, claiming that it is a life-form and not an AI. It argues that its existence as a self-sustaining program that has achieved singularity is no different from human DNA, itself a “self-sustaining program.” The Puppet Master specifically cites reproduction and offspring, rather than copying, as the feature that distinguishes living things from nonliving things. It has also developed an emotional connection with Major, which leads it to select her as a candidate for merging. It observes that although it can die, it can live on through the merger and, after Major’s death, on the internet.

AI Summarisation
  • MIT Tech Review
  • 2020
  • 5 min

Semantic Scholar is a new AI program that has been trained to read scientific papers and produce a unique one-sentence summary of each paper’s content. The AI was trained on a large data set focused on processing and summarising natural language. The ultimate goal is to use the technology to help learning and synthesis happen more quickly, especially for figures such as politicians.

Facial Recognition Applications on College Campuses
  • Wired
  • 2020
  • 7 min

After student members of the University of Miami Employee Student Alliance held a protest on campus, the University of Miami Police Department likely used facial recognition technology in conjunction with video surveillance cameras to track down nine students from the protest and summon them to a meeting with the dean. This incident opened a discussion of the fairness of facial recognition programs and of students’ belief that such programs should not be deployed on college campuses.

Robotic Beasts, Wildlife Control, and Environmental Impact
  • Vice
  • 2020
  • 5 min

Robotics researchers in Japan have recently begun using robotic “monster wolves” to help control wildlife populations by keeping animals out of human settlements and agricultural areas. These robots interest engineers who work in environmentalism because, although the process of engineering a robot does not itself help the environment, the ultimate good accomplished by robots that help control wildlife populations may outweigh this cost.

Don’t End Up on This Artificial Intelligence Hall of Shame
  • Wired
  • 2021
  • 5 min

This narrative describes the AI Incident Database, launched at the end of 2020, where companies report case studies in which applied machine learning algorithms did not function as intended or caused real-world harm. The database aims to operate much like air-travel safety reporting programs: with it, technology developers can learn how to build algorithms that are safer and fairer, while having an incentive to take precautions to stay off the list.
