AI (123)
Find narratives by ethical themes or by technologies.

The Toxic Potential of YouTube’s Feedback Loop
- 6 min
- Wired
- 2019
YouTube’s AI recommendation algorithm spreads harmful content and helps create filter bubbles and echo chambers, leaving users with limited agency over the content they are exposed to. (A toy sketch of this feedback loop follows the questions below.)
How much agency do we have over the content we are shown in our digital artifacts? Who decides this? How skeptical should we be of recommender systems?
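
The feedback loop described above can be made concrete with a toy simulation. The sketch below is purely illustrative (the categories, exploration rate, and update rule are hypothetical, not taken from the article): a recommender that mostly exploits its own engagement signal locks onto whichever category happens to score well early, narrowing what the user is shown.

```python
# Toy model of an engagement-driven recommender feedback loop.
# All names and numbers are hypothetical; this is not YouTube's algorithm.
import random
from collections import Counter

CATEGORIES = ["news", "music", "gaming", "cooking", "fringe"]

def simulate(steps: int = 1000, explore: float = 0.05, seed: int = 42) -> Counter:
    rng = random.Random(seed)
    engagement = {c: 0.0 for c in CATEGORIES}  # running engagement signal
    shown = Counter()
    for _ in range(steps):
        if rng.random() < explore:
            pick = rng.choice(CATEGORIES)               # rare exploration
        else:
            pick = max(engagement, key=engagement.get)  # exploit the signal
        shown[pick] += 1                                # item gets recommended
        engagement[pick] += rng.random()                # viewing feeds the signal back
    return shown

print(simulate())  # typically one category captures the vast majority of slots
```

Because each recommendation generates the engagement data that drives the next recommendation, even this tiny policy shows the collapse in diversity the article warns about.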

Tiny four-bit computers are now all you need to train AI
- 7 min
- MIT Technology Review
- 2020
This article details an emerging approach in AI research: instead of using 16 bits to represent each piece of data that trains an algorithm, a logarithmic scale can cut that number to four, which is more efficient in both time and energy. This could allow machine learning models to be trained on smartphones, enhancing user privacy; beyond that, however, it may not change much in the AI landscape, especially in terms of helping machine learning reach new horizons. (A sketch of the log-scale idea follows the questions below.)
Does more efficiency mean more data would be wanted or needed? Would that be a good thing, a bad thing, or potentially both?
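
The core trick behind four-bit training, spacing representable values logarithmically rather than linearly so that a handful of bits still covers a wide dynamic range, can be sketched in a few lines. This is a hypothetical illustration of log-scale quantization in general, not the actual encoding from the research the article covers:

```python
# Minimal sketch: 1 sign bit + 3 bits indexing log2-spaced magnitudes = 4 bits.
# The encoding below is illustrative, not the scheme from the article.
import math

MAG_LEVELS = 2 ** 3  # 3 magnitude bits -> 8 log-spaced levels

def quantize_log4(x: float, max_abs: float) -> int:
    """Map x to a 4-bit code: a sign bit plus a log-spaced magnitude index."""
    if x == 0.0:
        return 0  # the smallest level stands in for zero in this sketch
    sign = 0 if x > 0 else 1
    exp = round(math.log2(abs(x) / max_abs))  # log2 magnitude relative to max
    idx = max(0, min(MAG_LEVELS - 1, MAG_LEVELS - 1 + exp))
    return (sign << 3) | idx

def dequantize_log4(code: int, max_abs: float) -> float:
    sign = -1.0 if code & 0b1000 else 1.0
    idx = code & 0b0111
    return sign * max_abs * 2.0 ** (idx - (MAG_LEVELS - 1))

for v in [0.9, 0.3, 0.04, -0.007]:
    code = quantize_log4(v, max_abs=1.0)
    print(f"{v:+.3f} -> {code:04b} -> {dequantize_log4(code, 1.0):+.4f}")
```

Because the levels are log-spaced, the relative rounding error stays roughly constant from the largest representable value down to the smallest, which is why so few bits can stand in for 16 when the quantities involved span many scales.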

Microsoft patented a chatbot that would let you talk to dead people. It was too disturbing for production
- 3 min
- CNN
- 2021
The wealth of social data on any given person afforded by digital artifacts, such as social media posts and text messages, can be used to train an algorithm newly patented by Microsoft, creating a chatbot meant to imitate that specific person. The technology has not been released, however, due to its harrowing ethical implications of impersonation and dissonance. For the Black Mirror episode referenced in the article, see the narratives “Martha and Ash Parts I and II.”
How do humans control their identity when it can be replicated through machine learning? What sorts of quirks and mannerisms are unique to humans and cannot be replicated by an algorithm?

South Korea has used AI to bring a dead superstar’s voice back to the stage, but ethical concerns abound
- 7 min
- CNN
- 2021
The South Korean company Supertone has created a machine learning algorithm that replicates the voice of the beloved singer Kim Kwang-seok, allowing a new single to be performed in his voice even after his death. However, ethical questions such as who owns artwork created by AI and how to prevent fraud ought to be addressed before such technology is used more widely.
How can synthetic media change the legacy of a certain person? Who do you believe should gain ownership of works created by AI? What factors does this depend upon? How might the music industry be changed by such AI? How could human singers compete with artificial ones if AI concerts became the norm?

Center for Applied Data Ethics suggests treating AI like a bureaucracy
- 7 min
- Venture Beat
- 2021
As machine learning algorithms become more deeply embedded in all levels of society, including government, it is critical for developers and users alike to consider how these algorithms may shift or concentrate power, particularly when they are built on biased data. Historical and anthropological lenses help dissect how AI systems model the world and reveal which perspectives might be missing from their construction and operation.
Whose job is it to ameliorate the “privilege hazard”, and how should this be done? How should large data sets be analyzed to avoid bias and ensure fairness? How can large data aggregators such as Google be held accountable to new standards of scrutinizing data and introducing humanities perspectives in applications?

He predicted the dark side of the Internet 30 years ago. Why did no one listen?
- 10 min
- The Washington Post
- 2021
The academic Philip Agre, a computer scientist by training, spent several years studying the humanities and realized that their perspectives were missing from computer science and artificial intelligence; he went on to write several papers warning about the impacts of unfair AI and of data barons. These papers were published in the 1990s, long before the data-industrial complex and the normalization of algorithms in citizens’ everyday lives. Although he was a well-informed whistleblower, his predictions were ultimately ignored, and the field of artificial intelligence remained closed off from outside criticism.
Why are humanities perspectives needed in computer science and artificial intelligence fields? What would it take for data barons and/or technology users to listen to the predictions and ethical concerns of whistleblowers?