Additional Resources
Centuries ago, knowledge was passed mainly by word of mouth, or through books for those who could read and access them. Because of the great leaps that brought us to where we are today, people commonly refer to this period as the Information Age.
As technology has continued to progress, however, more and more means of spreading misinformation have emerged. As the Cambridge Analytica scandal surrounding the 2016 elections and the court proceedings that followed showed, social media can be used to spread false information at dangerous rates. More recently, deepfake technologies have appeared: methods that use artificial intelligence to create "synthetic media."
As deepfakes become more prevalent, they have entered the AI spotlight and face increasing scrutiny for their potential to manipulate and misrepresent reality. To move forward with this technology, we need to consider how best to use and manage it.
The goal is to think about the implications of deepfakes for society and the role of developers in creating these digital artifacts.
In this module, we will investigate and experience deepfakes through video and text narratives.
Watch and read the following narratives:
The Toxic Potential of YouTube’s Feedback Loop
- 6 min
- Wired
- 2019
This article examines how harmful content spreads through YouTube’s AI recommendation engine, how such systems help create filter bubbles and echo chambers, and how little agency users have over the content they are exposed to. (A toy simulation of this feedback loop follows the discussion prompts below.)
Discussion prompts: How much agency do we have over the content we are shown in our digital artifacts? Who decides this? How skeptical should we be of recommender systems?
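To make the dynamic concrete, here is a toy Python simulation of an engagement-driven feedback loop. This is our illustration, not code from the article; the topics, click probabilities, and reinforcement rate are all invented for the sketch. A recommender that boosts whatever gets clicked drifts toward showing one topic almost exclusively:

```python
# Toy sketch of a recommender feedback loop (illustrative assumptions only).
import random
from collections import Counter

random.seed(0)
topics = ["news", "music", "conspiracy", "sports"]
weights = {t: 1.0 for t in topics}   # recommender's learned per-topic scores
user_bias = {"conspiracy": 0.9}      # simulated user clicks this topic 90% of the time

history = []
for step in range(1000):
    # Recommend a topic proportionally to the learned weights.
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    history.append(shown)
    # The user engages more with the topic they are biased toward (30% baseline).
    clicked = random.random() < user_bias.get(shown, 0.3)
    if clicked:
        weights[shown] *= 1.05       # reinforce whatever gets engagement

print(Counter(history[:100]))   # early recommendations: roughly uniform
print(Counter(history[-100:]))  # late recommendations: dominated by one topic
```

Early recommendations are roughly uniform; by the end, the topic the simulated user engages with dominates the feed, which is the filter-bubble effect in miniature.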
When AI Goes Wrong in Spatial Reasoning
- 5 min
- GIS Lounge
- 2019
GIS is a relatively new domain for computational analysis, and its algorithms often inherit biases from the open data sources used for training; this case study focuses on the tendency of power-line identification data to be centered on the Western world. The problem can be mitigated by approaching data collection with more intentionality, either by broadening the pool of collected geographic data or by adding artificial images that help the tool recognize a greater variety of circumstances and thus become more accurate. (A minimal sketch of these two remedies follows the discussion prompts below.)
Discussion prompts: What happens when the source of the data itself (the dataset) is biased? Can the ideas presented in this article (namely, the intentional broadening of the training data pool and the inclusion of composite data) find application beyond GIS?
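As a concrete sketch of the two remedies the article describes (broadening the geographic pool and padding it with artificial images), here is a minimal Python example. The directory names and augmentation choices are illustrative assumptions, not the article's actual pipeline:

```python
# Minimal sketch: combine imagery from many regions and add synthetic variants.
# Paths and augmentation parameters are hypothetical, chosen for illustration.
import random
from pathlib import Path
from PIL import Image, ImageEnhance

def augment(img: Image.Image) -> Image.Image:
    """Create one synthetic variant via random flip, rotation, and brightness."""
    if random.random() < 0.5:
        img = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
    img = img.rotate(random.uniform(-15, 15))
    return ImageEnhance.Brightness(img).enhance(random.uniform(0.6, 1.4))

def build_training_pool(source_dirs, variants_per_image=3):
    """Gather images from several regions, padding each with synthetic variants."""
    pool = []
    for region_dir in source_dirs:                 # broaden: sample many regions
        for path in Path(region_dir).glob("*.png"):
            original = Image.open(path)
            pool.append(original)
            pool.extend(augment(original.copy())
                        for _ in range(variants_per_image))
    return pool

# Hypothetical per-region directories, meant to counter a Western-centric skew.
pool = build_training_pool(
    ["data/north_america", "data/sub_saharan_africa", "data/south_asia"]
)
```

The same pattern, widening where the data comes from while synthesizing extra examples, applies to any vision pipeline whose training set underrepresents the conditions it will face in deployment.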
We will work in trios to discuss the implications of deepfakes and developers' roles in the development of this type of technology.
- Spend a few minutes experiencing these deepfakes put together by the MIT Media Lab.
New bill would ban autoplay videos and endless scrolling
- 2 min
- The Verge
- 2019
In this very short narrative, the Social Media Addiction Reduction Technology (SMART) Act is presented in the context of social networks and concerns about digital addiction.
Discussion prompts: How do digital addiction mechanisms work, and what risks do they pose? How should digital content that can foster addiction be regulated?
- Brainstorm and take notes about these discussion prompts.
- Select one person from your group to report your findings back to the entire class when we return from breakout rooms.
Please answer this short concept check.
For a critical perspective on the video we watched in the pre-meeting activity, watch this documentary by Scientific American.