AI (143)
Find narratives by ethical themes or by technologies.
-
-
- 5 min
- Venture Beat
- 2021
Google targets AI ethics lead Margaret Mitchell after firing Timnit Gebru
Relates the story of Google’s investigation of Margaret Mitchell’s account in the wake of Timnit Gebru’s firing from Google’s AI ethics division. With authorities in AI ethics clearly under fire, the Alphabet Workers Union aims to protect workers who can bring ethical perspectives to AI development and deployment.
How can bias in tech monopolies be mitigated? How can authorities on AI ethics be positioned in such a way that they cannot be fired when developers do not want to listen to them?
-
- 7 min
- MIT Tech Review
- 2020
Why 2020 was a pivotal, contradictory year for facial recognition
This article examines several case studies from 2020 to discuss the widespread use, and potential limitation, of facial recognition technology. The author argues that the technology’s capacity to be trained on and to identify faces from social media platforms, combined with its use by law enforcement, is dangerous for minority groups and protestors alike.
Should there be a national moratorium on facial recognition technology? How can it be ensured that smaller companies like Clearview AI are more carefully watched and regulated? Do we consent to having our faces identified any time we post something to social media?
-
- 7 min
- Chronicle
- 2021
Artificial Intelligence Is a House Divided
The history of AI is a pendulum swinging back and forth between two approaches to artificial intelligence: symbolic AI, which tries to replicate human reasoning, and neural networks/deep learning, which try to replicate the human brain.
Which approach to AI (symbolic or neural networks) do you believe leads to greater transparency? Which approach to AI do you believe might be more effective in accomplishing a certain goal? Does one approach make you feel more comfortable than the other? How could these two approaches be synthesized, if at all?
-
- 10 min
- The Washington Post
- 2021
He predicted the dark side of the Internet 30 years ago. Why did no one listen?
The academic Philip Agre, a computer scientist by training, wrote several papers warning about the impacts of unfair AI and data barons after spending several years studying the humanities and realizing that those perspectives were missing from computer science and artificial intelligence. These papers were published in the 1990s, long before the data-industrial complex and the normalization of algorithms in citizens’ everyday lives. Although he was an educated whistleblower, his predictions were ultimately ignored, and the field of artificial intelligence remained closed off to outside criticism.
Why are humanities perspectives needed in computer science and artificial intelligence fields? What would it take for data barons and/or technology users to listen to the predictions and ethical concerns of whistleblowers?
-
- 5 min
- BBC
- 2021
Facial recognition technology meant mum saw dying son
Facial recognition technology used by the South Wales Police can identify an individual from biometric data almost instantly, rather than in the roughly ten days previously required; this speed allowed a mother to say goodbye to her son on his deathbed. The technology appears to have other positive impacts as well, such as identifying criminals earlier than they otherwise might have been. However, as is usually the case, concerns abound about how facial recognition can violate human rights.
Who can be trusted with facial recognition algorithms that can give someone several possibilities for the identity of a particular face? Who can be trusted to decide in what cases this technology can be deployed? How can bias become problematic when a human is selecting one of many faces recommended by the algorithm? Should the idea of constant surveillance or omnipresent cameras make us feel safe or concerned?
-
- 5 min
- CNET
- 2019
Demonstrators scan public faces in DC to show lack of facial recognition laws
Fight for the Future, a digital activist group, used Amazon’s Rekognition facial recognition software to scan faces on the street in Washington DC, arguing that stronger guardrails are needed before this type of technology is deployed for ends that violate human rights, such as identifying peaceful protestors.
Does this kind of stunt seem effective at drawing public attention to the ways facial recognition can be misused? How? Who decides what counts as a “positive” use of facial recognition technology, and how can these use cases be negotiated with citizens who want their privacy protected?