AI (124)


Find narratives by ethical themes or by technologies.

Themes
  • Privacy
  • Accountability
  • Transparency and Explainability
  • Human Control of Technology
  • Professional Responsibility
  • Promotion of Human Values
  • Fairness and Non-discrimination
Technologies
  • AI
  • Big Data
  • Bioinformatics
  • Blockchain
  • Immersive Technology
Additional Filters:
  • Media Type
  • Availability
  • Year
    • 1916 - 1966
    • 1968 - 2018
    • 2019 - 2069
  • Duration

AI Summarisation
  • MIT Tech Review
  • 2020
  • 5 min

Semantic Scholar is a new AI program trained to read through scientific papers and provide a unique one-sentence summary of each paper’s content. The AI was trained on a large data set focused on processing and summarising natural language. The ultimate idea is to use the technology to help learning and synthesis happen more quickly, especially for figures such as politicians.

Facial Recognition Applications on College Campuses
  • Wired
  • 2020
  • 7 min

After student members of the University of Miami Employee Student Alliance held a protest on campus, the University of Miami Police Department likely used facial recognition technology in conjunction with video surveillance cameras to track down nine students from the protest and summon them to a meeting with the dean. The incident opened a broader discussion about the fairness of facial recognition programs and why students believe they should not be deployed on college campuses.

Don’t End Up on This Artificial Intelligence Hall of Shame
  • Wired
  • 2021
  • 5 min

This narrative describes the AI Incident Database, launched at the end of 2020, where companies report case studies in which applied machine learning algorithms did not function as intended or caused real-world harm. The database is meant to work much like air travel safety reporting programs: developers can learn how to build safer and fairer algorithms, with an added incentive to take precautions that keep them off the list.

Digital Environment Analysis
  • Kinolab
  • 2009
  • 3 min

In a distant future after the “Water War,” in which much of the natural environment was destroyed and water became scarce, Asha works as a curator at a museum that displays the former splendor of nature on Earth. She receives a mysterious soil sample which, after digital analysis that uses object recognition to extract data from the soil, surprisingly turns out to contain water.

Dangers of Digital Commodification
  • Kinolab
  • 2013
  • 9 min

In the world of this film, Robin Wright plays a fictional version of herself who has allowed herself to be digitized by the film company Miramount Studios so that she can appear in many films without actually acting in them, becoming digitally immortal in a sense. Once she enters a hallucinogenic mixed reality known as Abrahama City, she agrees to renew her contract with Miramount Studios amid panic over her declining mental health and sense of autonomy. The renewed contract will not only allow movies starring her digital likeness to be made, but will also allow other people to appear as her.

Are ‘bots’ manipulating the 2020 conversation? Here’s what’s changed since 2016.
  • The Washington Post
  • 2019
  • 10 min

After prolonged discussion of how “bots,” or automated accounts on social networks, interfered with the American electoral process in 2016, many worried that something similar could happen in 2020. This article details the shifts in strategy for using bots to manipulate political conversations online, including techniques such as Inorganic Coordinated Activity and hashtag hijacking. Overall, some bot manipulation in political discourse is to be expected, but when used effectively these algorithmic tools still have the power to shape conversations to the will of their deployers.
