Limitations of Digital Technologies (22)

Describes limitations and shortfalls of current digital technologies, particularly when compared to human capabilities.

AI ‘Emotion Recognition’ Can’t Be Trusted
  • The Verge
  • 2019
  • 7 min

Examines the reliance on “emotion recognition” algorithms, which use facial analysis to infer a person’s feelings, and questions the credibility of their results given machines’ inability to recognize the abstract nuances of human emotional expression.

Researchers Find that Even Fair Hiring Algorithms Can Be Biased
  • VentureBeat
  • 2020
  • 4 min

A study of the recommendation engine behind TaskRabbit, an app that uses an algorithm to match the best workers to a specific task, demonstrates that even algorithms designed to account for fairness and parity in representation can fail to deliver what they promise depending on the context in which they are deployed.

Why a YouTube Chat About Chess Got Flagged for Hate Speech
  • Wired
  • 2021

A YouTube algorithm’s struggle to distinguish chess-related terms from hate speech and abuse has revealed shortcomings in artificial intelligence’s ability to moderate online hate speech. The incident underscores the need to develop digital technologies capable of processing natural language with a sufficient degree of social sensitivity.

We read the paper that forced Timnit Gebru out of Google. Here’s what it says.
  • MIT Technology Review
  • 2020
  • 10 min

This article explains Timnit Gebru’s ethical warnings against training natural language processing systems on large language models built from massive sets of textual data scraped from the internet. Not only does this process have a negative environmental impact, it also fails to give these machine learning tools a grasp of semantic nuance, especially language tied to burgeoning social movements or to countries with lower internet access. Dr. Gebru’s refusal to retract the paper ultimately led to her dismissal from Google.

What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias
  • The Verge
  • 2020
  • 7 min

PULSE is an algorithm that can supposedly reconstruct what a face looks like from a pixelated image. The problem: more often than not, the algorithm returns a white face, even when the person in the pixelated photograph is a person of color. Rather than actually sharpening the image, the algorithm generates a synthetic face whose pixel pattern matches the input, and it is these synthetic faces that show a clear bias toward white people, illustrating how deeply institutional racism works its way into technological design. Diversifying data sets alone will not fully help until broader solutions for combating bias are enacted.
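
To make that distinction concrete, here is a minimal sketch of the general idea described above: instead of sharpening the pixelated photo, a generator’s latent space is searched for a synthetic face whose downscaled version matches the input. The generator below is a toy stand-in, and all names and sizes are illustrative assumptions, not PULSE’s actual implementation (the real tool relies on a pretrained face model).

```python
# Sketch of PULSE-style "upscaling": search a face generator's latent space
# for a synthetic face whose DOWNSCALED version matches the low-res input.
# The generator here is an untrained toy network, purely for illustration.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in "face generator": 128-dim latent vector -> 64x64 grayscale image.
generator = torch.nn.Sequential(
    torch.nn.Linear(128, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 64 * 64),
    torch.nn.Sigmoid(),
)

def downscale(img_flat):
    """Average-pool a flattened 64x64 image down to 16x16, mimicking pixelation."""
    return F.avg_pool2d(img_flat.view(1, 1, 64, 64), kernel_size=4)

# Pretend this is the pixelated photo we were given.
low_res_input = torch.rand(1, 1, 16, 16)

# Optimize the latent code so the generated face, once downscaled,
# matches the pixel pattern of the input.
z = torch.randn(1, 128, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    synthetic_face = generator(z)
    loss = F.mse_loss(downscale(synthetic_face), low_res_input)
    loss.backward()
    optimizer.step()

# Whatever face comes out is invented by the generator: if its training data
# skews white, the "reconstruction" will too, regardless of who was in the photo.
print(f"final downscale-match loss: {loss.item():.4f}")
```

The point of the sketch is that the output face is determined by what the generator has learned to produce, not by information recovered from the pixelated photo, which is why biased training data shows up in the result.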

The Second Act of Social Media Activism
  • The New Yorker
  • 2020
  • 10 min

This article situates the Black Lives Matter (BLM) uprisings of 2020 within a larger trend of using social media and other digital platforms to promote activist causes, and weighs the benefits of in-person, on-the-ground activism against activism that takes place through social media.
