All Narratives (346)
Find narratives by ethical themes or by technologies.
The Toxic Potential of YouTube’s Feedback Loop
- 6 min
- Wired
- 2019
Harmful content spreads through YouTube’s AI recommendation algorithm, which helps create filter bubbles and echo chambers and leaves users with limited agency over the content they are exposed to. (A toy simulation of such a feedback loop is sketched after this entry.)
How much agency do we have over the content we are shown in our digital artifacts? Who decides this? How skeptical should we be of recommender systems?
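The feedback loop named in the title can be made concrete with a toy sketch. This is not YouTube’s actual system; the categories, click model, and scoring rule below are hypothetical assumptions chosen only to show how greedy, engagement-driven recommendation can narrow what a user sees.

```python
import random

# Toy illustration of a recommendation feedback loop (hypothetical model,
# not YouTube's algorithm): the system greedily recommends the category
# with the highest predicted engagement, and every click pushes that
# prediction higher, narrowing what the user is shown.
categories = ["news", "music", "sports", "conspiracy"]
scores = {c: 1.0 for c in categories}  # predicted engagement per category


def recommend() -> str:
    # Greedy choice with no exploration: always the top-scoring category.
    return max(scores, key=scores.get)


def user_clicks(category: str) -> bool:
    # Simulated user who clicks most of whatever is put in front of them.
    return random.random() < 0.9


for _ in range(30):
    choice = recommend()
    if user_clicks(choice):
        scores[choice] += 1.0  # reinforcement: the rich get richer

print(scores)  # one category ends up dominating -- a simple filter bubble
```

In this sketch, adding even a small amount of exploration (recommending a random category some fraction of the time) breaks the loop, which is one reason diversity and exploration terms matter in real recommender systems.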
The Ethics of Rebooting the Dead
- 5 min
- Wired
- 2020
As digital means of preserving deceased loved ones become increasingly feasible, it is critical to consider the implications of technologies that aim to capture and replicate the personality and traits of those who have passed. Not only might this change the natural process of grieving and healing; it may also have alarming consequences for the agency of the dead. For the corresponding Black Mirror episode discussed in the article, see the narratives “Martha and Ash Parts I and II.”
Should anyone be allowed to use digital resurrection technologies if they feel doing so may help them cope? Given the many data points that exist for internet users today, how easy would it be to create versions of deceased people that are uncannily similar to their real identities? What would be missing from such an abstraction? How is a person’s identity kept uniform or recognizable if they are digitally resurrected?
From hate speech to nudity, Facebook’s oversight board picks its first cases
- 4 min
- Reuters
- 2020
Facebook has a new independent Oversight Board to help moderate content on the site, selecting individual cases from the many submitted to it and deciding whether removing the content is justified. The cases usually involve hate speech, “inappropriate visuals,” or misinformation.
How much oversight do algorithms or networks with a broad impact need? Who needs to be in the room when deciding what an algorithm or site should or should not allow? Can algorithms be designed to detect and remove hate speech? Should such an algorithm exist?
Dozens of tech companies sign ‘Tech for Good Call’ following French initiative
- 3 min
- TechCrunch
- 2020
This short article details a pledge, inspired by an initiative of the French government, for tech monopolies to act more responsibly in the areas of taxes and privacy. As of 2020, many companies have signed onto the initiative.
What does accountability for tech monopolies look like? Who should offer robust challenges to these companies, and who actually has the power to do so?
Microsoft’s Creepy New ‘Productivity Score’ Gamifies Workplace Surveillance
- 5 min
- Gizmodo
- 2020
The data privacy of employees is at risk under Microsoft’s new “Productivity Score” program, which lets employers and administrators use Microsoft 365 platforms to collect numerous metrics on their workers in order to “optimize productivity.” This approach causes unnecessary stress for workers and effectively introduces a surveillance program into the workplace.
How are justifications such as “optimizing productivity” used to gather more data on people? How could such a goal be accomplished without the surveillance aspect? How does this approach fail to account for a diversity of working styles?
Rebooting AI: Deep learning, meet knowledge graphs
- 7 min
- ZDNet
- 2020
Dr. Gary Marcus argues that deep learning as it currently exists does not maximize AI’s potential to collect and process knowledge. In his view, these machine “brains” should have more innate knowledge than they do, similar to how animal brains process an environment. Ideally, this baseline knowledge would be used to collect and process information from knowledge graphs: semantic webs of information available on the internet that can be hard for an AI to process without translation into machine vocabularies such as RDF. (A minimal RDF example follows this entry.)
Does giving a machine similar learning capabilities to humans and animals bring artificial intelligence closer to singularity? Should humans ultimately be in control of what a machine learns? What is problematic about leaving AI less capable of understanding semantic webs?
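To make the knowledge-graph idea concrete, here is a minimal sketch of representing a few facts as RDF triples, assuming the Python rdflib library. The entities and the example.org namespace are hypothetical illustrations, not drawn from the article.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import FOAF

# Minimal sketch (assumes rdflib is installed): a knowledge graph is a set
# of subject-predicate-object triples expressed in a machine vocabulary.
EX = Namespace("http://example.org/")  # hypothetical namespace for this demo

g = Graph()
g.bind("ex", EX)
g.bind("foaf", FOAF)

turing = URIRef("http://example.org/Alan_Turing")
g.add((turing, RDF.type, FOAF.Person))            # Alan Turing is a person
g.add((turing, FOAF.name, Literal("Alan Turing")))
g.add((turing, EX.field, EX.Computer_Science))    # ...whose field is CS

# Serialize to Turtle, one common textual RDF syntax.
print(g.serialize(format="turtle"))
```

In Marcus’s framing, a system with some innate grounding could query structures like this (for example, via SPARQL) rather than relearning every fact from raw text.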
Augmented Communication and a Post-Privacy Era
In this imagined future, citizens interact with the world and with each other through brain-computer interface devices which augment reality in ways such as sending each other visual messages or changing one’s appearance at a moment’s notice. Additionally, with this device, everyone can automatically see a “ranking” of other people, in which Alphas or As are the best and Epsilons or Es are the worst. With all of these features of the devices, privacy in its many forms is all but outlawed in this society.
How can brain-computer interfaces work together with virtual reality to let us share images, styles, and other information with our friends more seamlessly? What if humans could also integrate VR into our communications? Would that improve interactions? How could deception creep into this system? How do social media quantifications, such as numbers of likes or followers, act as a sort of preliminary “ranking” of a person, and how does this affect people’s opportunities? Have social media and other digital platforms conditioned society to see a lack of privacy as the norm, and conversely to see privacy as a sort of vice? How should we continue to value privacy in the age of social media monopolies?