News Article (145)
Find narratives by ethical themes or by technologies.
-
- 7 min
- MIT Tech Review
- 2020
Why 2020 was a pivotal, contradictory year for facial recognition
This article examines several case studies from 2020 to discuss both the widespread use of facial recognition technology and the growing push to limit it. The author argues that the technology's capacity to be trained on, and identify people from, social media photos, combined with its use by law enforcement, is dangerous for minority groups and protesters alike.
Should there be a national moratorium on facial recognition technology? How can smaller companies like Clearview AI be more carefully watched and regulated? Do we consent to having our faces identified any time we post something to social media?
-
- 7 min
- Chronicle
- 2021
Artificial Intelligence Is a House Divided
The history of AI is a pendulum swinging back and forth between two approaches to artificial intelligence: symbolic AI, which tries to replicate human reasoning, and neural networks/deep learning, which try to replicate the structure of the human brain. (A toy contrast of the two approaches is sketched below.)
Which approach to AI (symbolic or neural networks) do you believe leads to greater transparency? Which approach to AI do you believe might be more effective in accomplishing a certain goal? Does one approach make you feel more comfortable than the other? How could these two approaches be synthesized, if at all?
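The split the article describes can be made concrete with a small sketch. The Python below is illustrative only and does not come from the article: it solves the same tiny task twice, once with a hand-written symbolic rule and once with a single perceptron that learns its weights from labeled examples. All names and numbers are invented for the illustration.

```python
# Toy contrast between symbolic AI and a learned neural unit.
# Task: decide whether both inputs of a pair are "on" (logical AND).

# Symbolic approach: the reasoning is written out explicitly as a rule.
def symbolic_and(x1: int, x2: int) -> int:
    return 1 if (x1 == 1 and x2 == 1) else 0

# Neural approach: a single perceptron learns weights from labeled examples.
def train_perceptron(examples, epochs=20, lr=0.1):
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = target - out
            w1 += lr * error * x1      # nudge weights toward the target
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)

for (x1, x2), target in data:
    learned = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
    print((x1, x2), "symbolic:", symbolic_and(x1, x2), "learned:", learned)
```

The symbolic version is transparent by construction, while the learned version arrives at the same behavior through three numbers that must be interpreted after the fact, which is one way to think about the transparency question above.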
-
- 7 min
- The New York Times
- 2021
Facebook and all of its apps go down simultaneously.
On October 4, 2021, Facebook's servers experienced an outage that left its apps, including Facebook, Instagram, and WhatsApp, out of commission for several hours. The problem is said to have been caused by a faulty configuration change to Facebook's systems, which led to a Domain Name System failure: the domain names of Facebook's services could no longer be resolved to the numerical IP addresses that computers use to reach them (a minimal sketch of this failure mode appears below). The effects of the outage spread across the globe as businesses lost access to these social networks, and certain other internet services linked to Facebook became inaccessible.
What are the dangers of relying on fallible networks to perform essential functions such as business? How can network infrastructure be better protected? How much data and information should Facebook be trusted with?
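As a toy illustration of the failure mode described above, the Python sketch below shows what a client experiences when a domain name cannot be resolved. The hostnames are examples only, and this is a minimal sketch of the symptom, not Facebook's actual infrastructure or tooling.

```python
# Client-side symptom of a DNS failure: a domain name that can no
# longer be resolved to a numerical IP address.
import socket

def resolve(hostname: str) -> None:
    try:
        ip = socket.gethostbyname(hostname)  # ask DNS for the address
        print(f"{hostname} -> {ip}")
    except socket.gaierror as err:
        # Roughly what clients saw during the outage: the name is
        # valid, but no DNS answer maps it to an address.
        print(f"{hostname} -> resolution failed ({err})")

resolve("example.com")            # normally resolves
resolve("no-such-host.invalid")   # the reserved .invalid TLD never resolves
```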
-
- 10 min
- The Washington Post
- 2021
He predicted the dark side of the Internet 30 years ago. Why did no one listen?
Philip Agre, a computer scientist by training, spent several years studying the humanities and realized that those perspectives were missing from computer science and artificial intelligence. In the 1990s, long before the data-industrial complex and the normalization of algorithms in citizens' everyday lives, he wrote several papers warning about the impacts of unfair AI and data barons. Although he was an educated whistleblower, his predictions were ultimately ignored, and the field of artificial intelligence remained closed off from outside criticism.
Why are humanities perspectives needed in computer science and artificial intelligence fields? What would it take for data barons and/or technology users to listen to the predictions and ethical concerns of whistleblowers?
-
- 10 min
- MIT Technology Review
- 2020
We read the paper that forced Timnit Gebru out of Google. Here’s what it says.
This article explains Timnit Gebru's ethical warnings against building natural language processing systems as large language models trained on huge sets of text scraped from the internet. Not only does this process have a negative environmental impact, it also fails to teach these machine learning tools to process semantic nuance, especially nuance tied to burgeoning social movements or to countries with lower internet access. Dr. Gebru's refusal to retract the paper ultimately led to her dismissal from Google.
How should the models used to train NLP systems be more closely scrutinized? What sorts of voices are needed at the design table to ensure that the impacts of such algorithms are consistent across all populations? Can this ever be achieved?
-
- 2 min
- azfamily.com
- 2018
Facial recognition technology now used in Phoenix area to locate lost dogs
Facial recognition technology has found a new application: reuniting lost dogs with their owners. A machine learning algorithm takes a photo of a dog and searches a database of photos of shelter dogs in hopes of finding a match (a toy sketch of this kind of matching appears below).
How could this beneficial use of recognition technology find even broader use?
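The matching described above reduces to nearest-neighbor search over image features: turn each photo into a numerical vector, then compare the lost dog's vector against every shelter dog's. The sketch below is a hypothetical, stdlib-only illustration; the small hand-made vectors stand in for the output of a real image-embedding model, and the dog IDs, numbers, and threshold are all invented.

```python
# Toy sketch of photo matching as nearest-neighbor search. In a real
# system the vectors would come from an image-embedding model; here
# small hand-made vectors stand in for them.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

# Hypothetical shelter database: dog id -> feature vector of its photo.
shelter_db = {
    "shelter-dog-17": [0.90, 0.10, 0.30],
    "shelter-dog-42": [0.20, 0.80, 0.50],
    "shelter-dog-99": [0.15, 0.25, 0.95],
}

def best_match(query, database, threshold=0.98):
    """Return (dog_id, score) for the closest photo, or None if no
    shelter dog is similar enough to count as a match."""
    scored = [(cosine_similarity(query, vec), dog_id)
              for dog_id, vec in database.items()]
    score, dog_id = max(scored)
    return (dog_id, score) if score >= threshold else None

lost_dog = [0.88, 0.12, 0.33]            # vector for the owner's photo
print(best_match(lost_dog, shelter_db))  # -> ('shelter-dog-17', ...)
```

The threshold is the key design choice: too low and owners are shown wrong dogs, too high and real matches are missed.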