All Narratives (356)
Find narratives by ethical themes or by technologies.

The Toxic Potential of YouTube’s Feedback Loop
- 6 min
- Wired
- 2019
YouTube’s AI recommendation algorithm spreads harmful content and helps create filter bubbles and echo chambers, leaving users with limited agency over what they are exposed to (a toy simulation of this feedback dynamic follows below).
How much agency do we have over the content we are shown in our digital artifacts? Who decides this? How skeptical should we be of recommender systems?
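
The feedback loop this article describes can be made concrete with a toy model. The Python sketch below is purely illustrative and assumes nothing about YouTube’s actual system: a recommender that serves categories in proportion to their past engagement amplifies a small initial preference until the feed narrows. All category names and probabilities are invented for the example.

```python
import random
from collections import Counter

# Toy feedback-loop simulation (illustrative only, not any real platform's
# algorithm). The recommender serves categories in proportion to their past
# engagement, so early clicks compound: the rich get richer.

random.seed(0)
CATEGORIES = ["news", "music", "conspiracy"]
engagement = {c: 1.0 for c in CATEGORIES}                    # prior weights
click_prob = {"news": 0.5, "music": 0.5, "conspiracy": 0.6}  # slight lean

shown = Counter()
for _ in range(500):
    # Serve a category with probability proportional to accumulated engagement.
    weights = [engagement[c] for c in CATEGORIES]
    pick = random.choices(CATEGORIES, weights=weights)[0]
    shown[pick] += 1
    if random.random() < click_prob[pick]:
        engagement[pick] += 1.0   # engagement feeds back into future serving

print(shown)  # exposure skews toward whichever category compounds early clicks
```

On most runs the simulated user ends up seeing far more of one category than the others, even though their underlying preferences barely differ; this is the filter-bubble dynamic in miniature.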

From hate speech to nudity, Facebook’s oversight board picks its first cases
- 4 min
- Reuters
- 2020
Facebook’s new independent Oversight Board helps moderate content on the site, selecting individual cases from the many submitted to it and ruling on whether content should be removed. The cases typically involve hate speech, “inappropriate visuals,” or misinformation.
How much oversight do algorithms or networks with a broad impact need? Who needs to be in the room when deciding what an algorithm or site should or should not allow? Can algorithms be designed to detect and remove hate speech? Should such an algorithm exist?

Who Gets a Say in Our Dystopian Tech Future?
- 7 min
- The New Republic
- 2020
The story of Dr. Timnit Gebru’s termination from Google is inextricably bound up with Google’s irresponsible practices around the training data for its machine learning models. Training natural language processing algorithms on enormous data sets is ultimately a harmful practice: despite the environmental costs and the biases against certain languages that it introduces, machines still cannot fully comprehend human language.
Should machines be trusted to handle and process the incredibly nuanced meaning of human language? How do different understandings of what languages and words mean and represent become harmful when a minority of people decide how to train NLP algorithms? How do tech monopolies prevent more diverse voices from entering this conversation?

Facebook hit with antitrust lawsuit from FTC and 48 state attorneys general
- 5 min
- ABC News
- 2020
The United States government is pressing to break up the tech monopoly that is Facebook, hoping to restore some competition to the social networking and data-selling markets the company dominates. Facebook, of course, is resisting these efforts.
What role did data collection and use play in Facebook’s rise to monopoly power? What would breaking up this monopoly accomplish? Will users gain more data privacy if one large company does not own several of the platforms on which they communicate?

Tiny four-bit computers are now all you need to train AI
- 7 min
- MIT Technology Review
- 2020
This article details a new approach emerging in AI research: instead of using 16 bits to represent each number that trains an algorithm, a logarithmic scale can bring that down to four bits, which is more efficient in both time and energy (a sketch of the idea follows below). This may allow machine learning models to be trained directly on smartphones, enhancing user privacy. Beyond that, it may not change much in the AI landscape, especially in terms of helping machine learning reach new horizons.
Does more efficiency mean more data would be wanted or needed? Would that be a good thing, a bad thing, or potentially both?
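
To make the bit reduction concrete, here is a minimal Python sketch of logarithmic 4-bit quantization. It assumes a radix-4 format with one sign bit and three exponent bits; the exponent range and rounding rule are illustrative choices, not the exact format from the research the article covers.

```python
import numpy as np

# Minimal sketch of radix-4 logarithmic 4-bit quantization. One sign bit plus
# three exponent bits give 16 codes; here they represent 0 and +/- 4**k.
# The exponent range below is an assumption made for illustration.

EXP_LEVELS = 7                       # exponents k = EXP_MIN .. EXP_MIN + 6
EXP_MIN = -6                         # smallest magnitude: 4**-6 ~= 2.4e-4
EXP_MAX = EXP_MIN + EXP_LEVELS - 1   # largest magnitude: 4**0 = 1

def quantize_log4(x: np.ndarray) -> np.ndarray:
    """Snap each value to the nearest point (on a log scale) of the form +/- 4**k."""
    sign = np.sign(x)
    mag = np.abs(x)
    with np.errstate(divide="ignore"):
        k = np.clip(np.round(np.log(mag) / np.log(4.0)), EXP_MIN, EXP_MAX)
    q = sign * 4.0 ** k
    q[mag < 4.0 ** EXP_MIN / 2] = 0.0  # values too small to represent become 0
    return q

# In practice tensors would be rescaled to fit this range before quantizing;
# here we just snap a few raw numbers onto the logarithmic grid.
print(quantize_log4(np.array([1e-5, 0.003, -0.25, 0.7])))
# -> approximately [0, 0.0039, -0.25, 1.0]
```

Because the representable points are powers of 4 rather than evenly spaced values, a handful of bits can span several orders of magnitude, which is what makes such low-precision formats plausible for the wide-ranging numbers that appear during training.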

Hello, World! It is ‘I’, the Internet
- 7 min
- Wired
- 2020
In recounting the history of the singular Internet that many global users experience every day, this article reveals a danger of digital technologies becoming transparent through repeated use and reliance: it becomes ever more difficult to imagine a world with alternatives to the current digital way of doing things.
Is it too late to imagine alternatives to the Internet? How could people be convinced to get on board with a radical redo of the Internet as we know it? Do alternatives need to be imagined before a digital product or service takes shape, especially if it ends up being as revolutionary as the Internet? Are the most popular and powerful digital technologies and services “tools,” or have they reached the status of cultural norms and conduits?

Augmented Communication and a Post-Privacy Era
In this imagined future, citizens interact with the world and with each other through brain-computer interface devices that augment reality, letting them send one another visual messages or change their appearance at a moment’s notice. The device also automatically shows everyone a “ranking” of other people, in which Alphas (As) are the best and Epsilons (Es) are the worst. With these features in place, privacy in its many forms is all but outlawed in this society.
How can brain-computer interfaces work together with virtual reality to let us share images, styles, and other information with our friends more seamlessly? What if VR were woven into our everyday communications, and would that improve interactions? How could deception sneak into this system? How do social media quantifications, such as numbers of likes or followers, act as a sort of preliminary “ranking” of a person, and how does this affect people’s opportunities? Have social media and other digital media platforms conditioned society to see a lack of privacy as the norm, and conversely privacy as a sort of vice? How should we continue to value privacy in the age of social media monopolies?