All Narratives (356)
Find narratives by ethical themes or by technologies.
The Toxic Potential of YouTube’s Feedback Loop
- 6 min
- Wired
- 2019
Harmful content spreads through YouTube’s AI recommendation algorithm, which helps create filter bubbles and echo chambers while leaving users limited agency over the content they are exposed to.
How much agency do we have over the content we are shown in our digital artifacts? Who decides this? How skeptical should we be of recommender systems?
From hate speech to nudity, Facebook’s oversight board picks its first cases
- 4 min
- Reuters
- 2020
Facebook has a new independent Oversight Board to help moderate content on the site, selecting from the many cases submitted to it those in which removing content may be justified. The cases typically involve hate speech, “inappropriate visuals,” or misinformation.
How much oversight do algorithms or networks with a broad impact need? Who needs to be in the room when deciding what an algorithm or site should or should not allow? Can algorithms be designed to detect and remove hate speech? Should such an algorithm exist?
Who Gets a Say in Our Dystopian Tech Future?
- 7 min
- The New Republic
- 2020
The narrative of Dr. Timnit Gebru’s termination from Google is inextricably bound up with Google’s irresponsible practices around training data for its machine learning models. Training natural language processing algorithms on enormous data sets is ultimately a harmful practice: despite the environmental costs and the biases against certain languages it introduces, machines still cannot fully comprehend human language.
Should machines be trusted to handle and process the incredibly nuanced meaning of human language? How do different understandings of what languages and words mean and represent become harmful when a minority of people are deciding how to train NLP algorithms? How do tech monopolies prevent more diverse voices from entering this conversation?
Facebook hit with antitrust lawsuit from FTC and 48 state attorneys general
- 5 min
- ABC News
- 2020
The United States government is pressing its case to break up the tech monopoly that is Facebook, hoping to restore competition in the social networking and data-selling markets the company dominates. Facebook, of course, is resisting these efforts.
What role did data collection and use play in Facebook’s rise as a monopoly power? What would breaking up this monopoly accomplish? Will users achieve more data privacy if one large company does not own several platforms on which users communicate?
Tiny four-bit computers are now all you need to train AI
- 7 min
- MIT Technology Review
- 2020
This article details a new approach emerging in AI research: instead of representing the numbers used to train an algorithm with 16 bits, a logarithmic scale can cut that to four, saving both time and energy. This may allow machine learning models to be trained directly on smartphones, enhancing user privacy. Beyond that, it may not change much in the AI landscape, especially in terms of pushing machine learning toward new horizons.
Does more efficiency mean more data would be wanted or needed? Would that be a good thing, a bad thing, or potentially both?
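The core idea in the entry above — replacing 16-bit numbers with a 4-bit logarithmic scale — can be illustrated with a minimal sketch. This is not the specific scheme from the article; the function name, the number of levels, and the `max_abs` range are illustrative assumptions. The point it shows is that log-spaced levels give fine resolution near zero (where neural-network weights cluster) at the cost of coarse resolution for large values.

```python
import math

def quantize_log4(x, max_abs=8.0):
    """Map a real value to one of ~16 levels (4 bits) on a logarithmic scale.

    Levels are sign * max_abs / 2**k for k in 0..6, plus zero. Small values
    get finer spacing than large ones, unlike a uniform 4-bit grid.
    """
    if x == 0:
        return 0.0
    sign = 1.0 if x > 0 else -1.0
    magnitude = min(abs(x), max_abs)       # clamp to the representable range
    # Snap the magnitude to the nearest power-of-two fraction of max_abs.
    k = round(math.log2(max_abs / magnitude))
    k = max(0, min(k, 6))                  # 7 magnitudes per sign + zero = 15 codes
    return sign * max_abs / (2 ** k)
```

For example, `quantize_log4(0.9)` snaps to `1.0` and `quantize_log4(-5.0)` snaps to `-4.0`: values near zero land on closely spaced levels, while large values fall on widely spaced ones.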
Hello, World! It is ‘I’, the Internet
- 7 min
- Wired
- 2020
In recounting the history of the singular Internet that many users around the world experience every day, this article reveals a danger of digital technologies becoming transparent through repeated use and reliance: it becomes harder to imagine a world with alternatives to the current digital way of doing things.
Is it too late to imagine alternatives to the Internet? How could people be convinced to get on board with a radical redo of the internet as we know it? Do alternatives need to be imagined before forming a certain digital product or service, especially if they end up being as revolutionary as the internet? Are the most popular and powerful digital technologies and services “tools”, or have they reached the status of cultural norms and conduits?
War Technologies and Global Impacts
Once ships start mysteriously disappearing off the coast of Odo Island in post-WWII Japan, both scientists and villagers are confounded. Eventually, the culprit is revealed to be Godzilla, a massive kaiju thought to date from the Jurassic era, who has returned from the deep sea to wreak havoc and destruction on humanity. Scientists explain to government officials their theory that hydrogen-bomb testing in the deep sea disrupted Godzilla’s natural habitat and provoked the attacks on Odo Island. After debates over whether Godzilla should be killed or studied for contributions to science, the monster attacks Tokyo with its flame breath. Emiko and Ogata implore Serizawa to deploy his new Oxygen Destroyer against the monster — a lethal device that suffocates any living thing by splitting oxygen molecules, liquefying all organic matter within its range. While the technologies on display here are not necessarily digital in nature, this narrative nonetheless provides a non-American voice on the dangers of technology and innovation, especially as they are deployed in wars.
How should dangerous technology be regulated so that it does not purposely or inadvertently harm innocent citizens when deployed in wars? What modern warfare technologies currently in use could have unforeseen consequences? Should dangerous technologies or specimens be kept around for scientific study, or should they not be allowed to exist at all? How can it be ensured that innovations and innovators are not abused by evil powers? What appears to be the metaphorical meaning of Godzilla in this narrative? How can technology exacerbate global divides and xenophobia?