News Article (130)
Find narratives by ethical themes or by technologies.
-
- 6 min
- Wired
- 2019
The Toxic Potential of YouTube’s Feedback Loop
YouTube’s AI recommendation engine spreads harmful content and helps create filter bubbles and echo chambers, leaving users with limited agency over the content they are exposed to.
How much agency do we have over the content we are shown in our digital artifacts? Who decides this? How skeptical should we be of recommender systems?
-
- 3 min
- Politico
- 2021
Library of Congress bomb suspect livestreamed on Facebook for hours before being blocked
Live-streaming technologies are challenging to moderate and may have a negative effect on society’s perception of violent events. They also raise the question of how such content can be deleted once it has been broadcast and potentially copied many times by different recipients.
What are the ethical trade-offs of live-streaming through social media? Is it possible to remediate a socially undesirable broadcast? What actors are responsible for moderating live streaming on social media?
-
- 5 min
- Time
- 2021
4 Big Takeaways From the Facebook Whistleblower Congressional Hearing
In 2021, former Facebook employee and whistleblower Frances Haugen testified that Facebook knew how its products harmed teenagers in terms of body image and social comparison, yet, in the interest of its profit model, did not significantly attempt to ameliorate these harms. This article distills four key lessons from the hearing about how Facebook’s model is harmful.
How does social quantification result in negative self-conception? How are the environments of social media platforms more harmful in terms of body image or “role models” than in-person environments? What are the dangers of every person having easy access to a broad platform of communication in terms of forming models of perfection? Why do social media algorithms want to feed users increasingly extreme content?
-
- 10 min
- The Washington Post
- 2019
Are ‘bots’ manipulating the 2020 conversation? Here’s what’s changed since 2016.
After prolonged discussion of how “bots,” or automated accounts on social networks, interfered with the 2016 American electoral process, many worried that something similar could happen in 2020. This article details shifts in strategy for using bots to manipulate political conversations online, including techniques like inorganic coordinated activity and hashtag hijacking. Some bot manipulation of political discourse is to be expected, but when used effectively, these algorithmic tools still have the power to shape conversations to the will of their deployers.
How can the architecture of social media networks be manipulated to serve an individual’s agenda, and how could this be addressed? Should any kind of bot accounts be allowed on Twitter, or do they all have too much negative potential to be trusted? What affordances of social networks allow bad actors to redirect the traffic of these networks? Is the problem of “trends” or “cascades” inherent to social media?
-
- 5 min
- Wired
- 2021
These Doctors are using AI to Screen for Breast Cancer
A computer vision algorithm created by an MIT PhD student and trained on a large data set of mammogram images collected over several years shows potential for use in radiology. By tagging the data with attributes that human eyes have missed, the algorithm appears to identify breast cancer risk more reliably than older statistical models, which would allow screening and treatment plans to be customized.
Do there seem to be any drawbacks to using this technology widely? How important is transparency of the algorithm in this case, as long as it seems to provide accurate results? How might this change the nature of doctor-patient relationships?
-
- 3 min
- CNN
- 2021
Microsoft patented a chatbot that would let you talk to dead people. It was too disturbing for production
The wealth of social data available on any given person through digital artifacts, such as social media posts and text messages, can be used to train an algorithm patented by Microsoft that creates a chatbot meant to imitate that specific person. The technology has not been released, however, due to the harrowing ethical implications of impersonation and dissonance. For the Black Mirror episode referenced in the article, see the narratives “Martha and Ash Parts I and II.”
How do humans control their identity when it can be replicated through machine learning? What sorts of quirks and mannerisms are unique to humans and cannot be replicated by an algorithm?