Centuries ago, knowledge was passed mainly by word of mouth, or through books for those who could read and access them. Given the great leaps made since then, people commonly refer to the current period as the Information Era.

As technology has continued to progress, though, more and more means of spreading misinformation have arisen. As we saw in the Cambridge Analytica scandal surrounding the 2016 elections and the court proceedings that followed, social media can be used to spread false information at dangerous rates. Deepfakes, a method of using artificial intelligence to create “synthetic media,” have also emerged in recent years.

As their prevalence grows, deepfakes have entered the AI spotlight and face increasing scrutiny for their potential to manipulate and misrepresent reality. In order to move forward with this technology, we need to consider how best to use and manage it.


The goal is to think about the implications of deepfakes for society and the role of developers in creating these digital artifacts.

Related Themes and Technologies:

In this module we will investigate and experience deepfakes through some video and text narratives.

Before we meet

Watch and read the following narratives:

  • 10 min
  • n/a
  • 2018
In Event of Moon Disaster

Techniques of misinformation are used to make a film about an alternative history in which the Apollo 11 mission failed and the astronauts became stranded on the moon.

  • 13 min
  • Danielle Citron
  • 2019
How deepfakes undermine truth and threaten democracy

The use of deepfake technology to manipulate video and audio for malicious purposes — whether it’s to stoke violence or defame politicians and journalists — is becoming a real threat. As these tools become more accessible and their products more realistic, how will they shape what we believe about the world? In a portentous talk, law professor Danielle Citron reveals how deepfakes magnify our distrust — and suggests approaches to safeguarding the truth.
Discussion Questions:

What are some of the possible uses for video and audio deepfakes?
What is trust? How do we normally verify information we receive?
How does this type of technology erode trust in existing systems of accountability in society?
Can you think of any possible positive uses of this technology?

  • 5 min
  • MIT Technology Review
  • 2020
The Year Deepfakes Went Mainstream

With the surge of the coronavirus pandemic, the year 2020 became an important one for new applications of deepfake technology. Although a primary concern about deepfakes is their ability to create convincing misinformation, this article describes other uses of deepfakes that center more on entertaining, harmless creations.

During our meeting

We will work in trios to discuss the implications of deepfakes and developers' roles in the development of this type of technology.

  1. Spend a few minutes experiencing these deepfakes put together by the MIT Media Lab.

    • 10 min
    • MIT Media Lab
    • 2021
    Detect DeepFakes

    This is an MIT research project. All data for research is collected anonymously for research purposes.
    This project will show you a variety of media snippets including transcripts, audio files, and videos. Sometimes, we include subtitles. Sometimes, the video is silent. You can watch the videos as many times as you would like. The site will ask you to share how confident you are that the individual really said what we show. If you have seen the video before today, please select the checkbox that says “I’ve already seen this video.” And remember, half of the media snippets that are presented are statements that the individual actually said. Read more about this project and the dataset used to produce this research on the About page.

  2. Brainstorm and take notes about these discussion prompts.

    Please brainstorm with your group and list at least 5 ways in which deepfakes POSITIVELY impact or can impact societies.

    Please discuss with your group and list at least 5 ways in which deepfakes NEGATIVELY impact or can impact societies.

    Take 3 minutes to quickly find a couple of recent news stories about deepfakes. Share them with your colleagues and select one to report to the entire group.

  3. Select one person from your group to report your findings back to the entire class when we return from breakout rooms.

After we meet

Please answer this short concept check.

For a critical perspective on the video we watched in the pre-meeting activity, watch this documentary by Scientific American.


Additional Perspectives
  • 33 min
  • Scientific American
  • 2020
To Make a DeepFake

Produced in conjunction with Scientific American, this thirty-minute documentary brings the film In Event of Moon Disaster to a group of experts on AI, digital privacy, law, and human rights to gauge their reactions to the film and to provide context on this new technology—its perils, its potential, and the possibilities of this brave, new digital world, where every pixel that moves past our collective eyes is potentially up for grabs.

  • 5 min
  • Premium Beat
  • 2020
Is Deepfake Technology the Future of the Film Industry?

This blog post explores what a combination of deepfake and computer-generated imagery (CGI) technologies might mean for filmmakers.

  • 5 min
  • MIT Technology Review
  • 2020
Inside the strange new world of being a deepfake actor

This article details reactions to the deepfake documentary In Event of Moon Disaster.

For Instructors - Module Goal and Learning Objectives

The goal of this module is to provide students with an opportunity to consider deepfake technologies and the ethical issues surrounding their impact on individuals and society. The module is designed for introductory to intermediate CS courses (i.e., Intro to CS through Algorithms). It follows the CEN format, asking students to consider their preexisting ideas and knowledge about the technology (Pre-Activity); consider the history of who created the technology, its original/intended purpose, and the impact of its widespread applications (Activity 1); reflect on society-level impact from multiple perspectives (technology creator, campus community member, person in society) (Activity 2); and assess existing ethical guidelines from technology organizations (Post-Activity).

  • Students will be able to identify a technology, who designed it, and the purpose it was designed for.

  • Students will be able to identify who the technology was not intended for and who it potentially may harm.

  • Students will be able to articulate how the technology has changed over time and the different purpose it now serves.

  • Students will be able to articulate and discuss possible benefits and harms with different types of deepfake technologies.

We have provided examples of assessments you may want to copy and use on your own LMS for secure student data collection. The rationale, format, and time length for the module components are listed below.

Pre-Activity is an online pre-module assignment asking students to respond to a series of questions about what they know about this technology and to view the provided links to narratives.

Activity 1 is a class activity (0.5 hours) that places students into small groups to read and discuss a set of provided narratives to inform them about certain perspectives on the technology.

Activity 2 is a class activity (45 min to 1.5 hours) that provides a chance for the class to break into groups (n=~24 or more with larger class/online class) to evaluate the positive and negative impacts of deepfake technologies from multiple perspectives, using several prompt questions on a class bulletin board (e.g., Jamboard or a similar tool).

Post-Activity is an online post-module assignment that asks students to respond again to a series of questions about what they now know about this technology based on the narratives they viewed or read and the Activity 1 or 2 discussions.

Additional Resources

  • 6 min
  • Wired
  • 2019
The Toxic Potential of YouTube’s Feedback Loop

This article examines the spread of harmful content through YouTube’s AI recommendation algorithm, which helps create filter bubbles and echo chambers and leaves users with limited agency over the content they are exposed to.