Privacy (134)

Find narratives by ethical themes or by technologies.

Filters:
Themes
  • Privacy
  • Accountability
  • Transparency and Explainability
  • Human Control of Technology
  • Professional Responsibility
  • Promotion of Human Values
  • Fairness and Non-discrimination
Technologies
  • AI
  • Big Data
  • Bioinformatics
  • Blockchain
  • Immersive Technology
Additional Filters:
  • Media Type
  • Availability
  • Year
    • 1916 - 1966
    • 1968 - 2018
    • 2019 - 2069
  • Duration

The Toxic Potential of YouTube’s Feedback Loop
  • Wired
  • 2019
  • 6 min

This article examines how harmful content spreads through YouTube’s AI recommendation algorithm, which helps create filter bubbles and echo chambers and leaves users with limited agency over the content they are exposed to.

You Need to Opt Out of Amazon Sidewalk
  • Gizmodo
  • 2020
  • 5 min

This article describes Amazon’s new Sidewalk feature and explains why users should not buy into the service. The feature uses the internet of things created by Amazon devices, such as the Echo or Ring camera, to build a secondary network connecting nearby homes that contain these devices, sustained by each home “donating” a small amount of broadband. The author argues that this is a dangerous concept because the smaller network may be susceptible to hackers, putting a large number of users at risk.

Microsoft patented a chatbot that would let you talk to dead people. It was too disturbing for production
  • CNN
  • 2021
  • 3 min

The wealth of social data on any given person afforded by digital artifacts, such as social media posts and text messages, can be used to train an algorithm patented by Microsoft to create a chatbot that imitates that specific person. This technology has not been released, however, due to its harrowing ethical implications of impersonation. For the Black Mirror episode referenced in the article, see the narratives “Martha and Ash Parts I and II.”

He predicted the dark side of the Internet 30 years ago. Why did no one listen?
  • The Washington Post
  • 2021
  • 10 min

The academic Philip Agre, a computer scientist by training, wrote several papers warning about the impacts of unfair AI and data barons after spending years studying the humanities and realizing that those perspectives were missing from the fields of computer science and artificial intelligence. The papers were published in the 1990s, long before the data-industrial complex and the normalization of algorithms in citizens’ everyday lives. Although he was an educated whistleblower, his predictions were ultimately ignored, and the field of artificial intelligence remained closed off from outside criticism.

Why 2020 was a pivotal, contradictory year for facial recognition
  • MIT Tech Review
  • 2020
  • 7 min

This article examines several case studies from 2020 to discuss the widespread use, and the potential for limitation, of facial recognition technology. The author argues that the technology’s capacity for training and identification using social media platforms, in conjunction with its use by law enforcement, is dangerous for minority groups and protestors alike.

This Site Published Every Face From Parler’s Capitol Riot Videos
  • Wired
  • 2021
  • 7 min

An anonymous college student created a website titled “Faces of the Riot,” a virtual wall containing over 6,000 face images of insurrectionists present at the riot at the Capitol on January 6th, 2021. The site used face-detection software to crawl through videos posted to the right-wing social media site Parler, and its creator hopes that viewers will report any criminals they recognize to the proper authorities. While the creator put privacy safeguards in place, such as using “facial detection” rather than “facial recognition,” and their intentions appear positive, some argue that the privacy implications of the widespread adoption of this technique could be negative.
