All Narratives (356)
Find narratives by ethical themes or by technologies.

The Toxic Potential of YouTube’s Feedback Loop
- 6 min
- Wired
- 2019
YouTube’s AI recommendation algorithm spreads harmful content and helps create filter bubbles and echo chambers, leaving users limited agency over the content they are exposed to.
How much agency do we have over the content we are shown in our digital artifacts? Who decides this? How skeptical should we be of recommender systems?

From hate speech to nudity, Facebook’s oversight board picks its first cases
- 4 min
- Reuters
- 2020
Facebook has a new independent Oversight Board to help moderate content on the site, selecting from the many cases presented to it those where it will rule on whether removing content is justified. The cases usually involve hate speech, “inappropriate visuals,” or misinformation.
How much oversight do algorithms or networks with broad impact need? Who needs to be in the room when deciding what an algorithm or site should or should not allow? Can algorithms be designed to detect and remove hate speech? Should such an algorithm exist?

Who Gets a Say in Our Dystopian Tech Future?
- 7 min
- The New Republic
- 2020
The narrative of Dr. Timnit Gebru’s termination from Google is inextricably bound up with Google’s irresponsible practices around training data for its machine learning algorithms. Using large data sets to train natural language processing algorithms is ultimately a harmful practice: for all the environmental costs and biases against certain languages it incurs, machines still cannot fully comprehend human language.
Should machines be trusted to handle and process the incredibly nuanced meaning of human language? How do different understandings of what languages and words mean and represent become harmful when a minority of people are deciding how to train NLP algorithms? How do tech monopolies prevent more diverse voices from entering this conversation?

Facebook hit with antitrust lawsuit from FTC and 48 state attorneys general
- 5 min
- ABC News
- 2020
The United States government is pushing to break up the tech monopoly that is Facebook, hoping to restore competition in the social networking and data-selling markets the company dominates. Facebook, unsurprisingly, is resisting these efforts.
What role did data collection and use play in Facebook’s rise as a monopoly power? What would breaking up this monopoly accomplish? Will users achieve more data privacy if one large company does not own several platforms on which users communicate?

Tiny four-bit computers are now all you need to train AI
- 7 min
- MIT Technology Review
- 2020
This article details a new approach emerging in AI research: instead of using 16 bits to represent each number used to train an algorithm, a logarithmic scale can cut that to four, saving both time and energy (a rough sketch of the idea follows the questions below). This may allow machine learning models to be trained on smartphones, enhancing user privacy. Beyond that, it may not change much in the AI landscape, especially in terms of helping machine learning reach new horizons.
Does more efficiency mean more data would be wanted or needed? Would that be a good thing, a bad thing, or potentially both?
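As a loose illustration of why a logarithmic scale stretches four bits so far: log-spaced levels concentrate precision near zero, where most neural network weights live, while still covering a wide range of magnitudes. The Python sketch below is a toy under stated assumptions, not the article’s or IBM’s actual method; the function names, level range, and encoding are invented for this example.

```python
import numpy as np

def quantize_log4(x, eps=1e-8):
    """Encode floats as hypothetical 4-bit codes: 1 sign bit plus a
    3-bit log-spaced magnitude level (powers of two, 2**-7 .. 2**0)."""
    sign = (x < 0).astype(np.uint8)               # 1 bit: the sign
    mag = np.maximum(np.abs(x), eps)              # avoid log2(0)
    exp = np.clip(np.round(np.log2(mag)), -7, 0)  # nearest power of two
    level = (exp + 7).astype(np.uint8)            # store exponent as 0..7
    return sign, level

def dequantize_log4(sign, level):
    """Recover an approximate float from the (sign, level) code."""
    mag = 2.0 ** (level.astype(np.float32) - 7.0)
    return np.where(sign == 1, -mag, mag)

# Round-tripping weight-like values keeps coarse but usable precision:
w = np.array([0.8, -0.05, 0.003, -0.4], dtype=np.float32)
sign, level = quantize_log4(w)
print(dequantize_log4(sign, level))  # ~[1.0, -0.0625, 0.0078, -0.5]
```

Eight powers of two plus a sign bit is far coarser than what the researchers describe, but it shows the core trade: range is preserved while absolute precision shrinks as magnitudes grow.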

Hello, World! It is ‘I’, the Internet
- 7 min
- Wired
- 2020
In discussing the history of the singular Internet that many global users experience every day, this article reveals some dangers of digital technologies becoming transparent through repeated use and reliance. Namely, it becomes more difficult to imagine a world where there could be alternatives to the current digital way of doing things.
Is it too late to imagine alternatives to the Internet? How could people be convinced to get on board with a radical redo of the internet as we know it? Do alternatives need to be imagined before forming a certain digital product or service, especially if they end up being as revolutionary as the internet? Are the most popular and powerful digital technologies and services “tools”, or have they reached the status of cultural norms and conduits?

Will, Evelyn, and Max Part II: Medical Nanotechnology and Networked Humans
Will Caster is an artificial intelligence scientist whose consciousness his wife Evelyn uploaded to the internet after his premature death. Dr. Caster uses his access to the internet to grant himself vast intelligence, building a technological utopia called Brightwood in the desert, where there is enough solar power to develop cutting-edge digital projects. Specifically, he uses nanotechnology to cure fatal or chronic afflictions, inserting tiny robots into people’s bodies to help their cells recover. However, it is soon revealed that these nanorobots stay inside their human hosts, allowing Will to project his consciousness into them, control them, and endow them with other inhuman traits.
Should nanotechnology be used for medical purposes if it can easily be abused to take away the autonomy of the host? How can use of nanotechnology avoid this critical pitfall? How can seriously injured people consent to such operations in a meaningful way? What are the implications of nanotechnology being used to create technological or real-life underclasses? Should human brains ever be networked to each other, or to any non-human device, especially one that has achieved singularity?