Recently Added Narratives
- 30 min
- CNET, New York Times, Gizmodo
- 2023
The ChatGPT Congressional Hearing
On May 16, 2023, OpenAI CEO Sam Altman testified in front of Congress on the potential harms of AI and how it ought to be regulated in the future, especially concerning new tools such as ChatGPT and voice imitators.
After watching the CNET video of the top moments from the hearing, read the Gizmodo overview of the hearing, and then read the associated New York Times article. All three resources highlight the need for governmental intervention to hold companies that build AI products accountable, especially given the lack of fully effective congressional action on social media companies. While misinformation and deepfakes have been concerns among politicians since the advent of social media, the hearing raises additional concerns, such as a new wave of job losses and how to credit artists.
If you were in the position of the congresspeople in the hearing, what questions would you ask Sam Altman? Does Sam Altman put too much of the onus of ethical regulation on the government? How would the “license” approach apply to AI companies that already exist or have already released popular products? Do you believe Congress might still be able to “meet the moment” on AI?
- Wired
- 2021
Why a YouTube Chat About Chess Got Flagged for Hate Speech
The struggle of YouTube’s algorithm to distinguish chess-related terms from hate speech and abuse has revealed shortcomings in artificial intelligence’s ability to moderate online hate speech. The incident reflects the need to develop digital technologies capable of processing natural language with a sufficient degree of social sensitivity.
Where do you draw the line between freedom of speech and online community conduct rules and regulations? What problems do you think AI will face in moderating hate speech such as slurs?
- Wired
- 2021
Far-Right Platform Gab Has Been Hacked—Including Private Data
Following the January 6th Capitol riots, there have been many ongoing investigations into right-wing extremist groups. Pioneering these investigations are left-leaning hacktivists determined to expose hate speech and abuse in private conversations.
Where do we draw the line in content moderation decisions between allowing fake information to circulate and making sure we are not denying access to real news?
- ZDNet
- 2021
Judge approves $650m settlement for Facebook users in privacy, biometrics lawsuit
Facebook’s use of biometrics to develop facial recognition came under scrutiny from those concerned about users’ privacy. The company has agreed to a $650 million settlement, now approved by a judge, to close the lawsuit over this issue.
What role do you think the government should play in establishing precedent for violations of privacy by technology companies?
- ZDNet
- 2021
Amazon makes Alexa Conversations generally available
Alexa Conversations improves the quality of its natural language processing as users feed it sample conversations. This feedback system allows Alexa Conversations to cut the costs of training for developers and of managing the related data.
What are some measures you think technology companies should implement to ensure the protection of users’ privacy? What role do you think the government should play?
- 5 min
- MIT Technology Review
- 2020
Inside the strange new world of being a deepfake actor
This article details reactions to the deepfake film In Event of Moon Disaster.
- 5 min
- Premium Beat
- 2020
Is Deepfake Technology the Future of the Film Industry?
This blog post explores what a combination of deepfake and computer-generated imagery (CGI) technologies might mean for filmmakers.
To Make a DeepFake
Produced in conjunction with Scientific American, this thirty-minute documentary brings the film In Event of Moon Disaster to a group of experts on AI, digital privacy, law, and human rights to gauge their reactions to the film and to provide context on this new technology: its perils, its potential, and the possibilities of this brave new digital world, where every pixel that moves past our collective eyes is potentially up for grabs.
- What new dangers does this technology bring that differ from past forms of media disinformation?
-
- 10 min
- MIT Media Lab
- 2021
Detect DeepFakes
This is an MIT research project; all data is collected anonymously for research purposes.
This project will show you a variety of media snippets, including transcripts, audio files, and videos. Sometimes we include subtitles; sometimes the video is silent. You can watch the videos as many times as you would like. The site will ask you to share how confident you are that the individual really said what we show. If you have seen the video before today, please select the checkbox that says “I’ve already seen this video.” And remember, half of the media snippets presented are statements that the individual actually said. Read more about this project and the dataset used to produce this research on the About page:
https://detectfakes.media.mit.edu/about
- What about these videos is so disturbing?
- How can we be convinced to not trust our own judgement about the information being presented?
- 13 min
- Danielle Citron
- 2019
How deepfakes undermine truth and threaten democracy
The use of deepfake technology to manipulate video and audio for malicious purposes — whether it’s to stoke violence or defame politicians and journalists — is becoming a real threat. As these tools become more accessible and their products more realistic, how will they shape what we believe about the world? In a portentous talk, law professor Danielle Citron reveals how deepfakes magnify our distrust — and suggests approaches to safeguarding the truth.
Discussion Questions:
- What are some of the possible uses for video and audio deepfakes?
- What is trust? How do we normally verify information we receive?
- How does this type of technology erode trust in existing systems of accountability in society?
- Can you think of any possible positive uses of this technology?
- Engadget
- 2021
Hitting the Books: The Brooksian revolution that led to rational robots
This article is an excerpt from a book about the history of AI and the shift in AI research in the 1990s from knowledge-based to context-based approaches to artificial intelligence.