Themes (353)
Find narratives by ethical themes or by technologies.
- 14 min
- Kinolab
- 2014
Decryption and Machine Thinking
In the midst of World War II, mathematics prodigy Alan Turing is hired by the British government to help break Enigma, the cipher the Germans use to encrypt their messages. Turing builds an expensive machine meant to decipher the code mathematically, but the lack of speedy results draws the anger of his fellow codebreakers and the British government. After later being arrested for public indecency, Turing explains to the interviewing officer the basis for the modern “Turing Test”: how to tell whether one is interacting with a human or a machine. Turing argues that although machines think differently than humans, their thinking should still count as thinking. The work depicted in this film became a basis of the modern computer.
How did codebreaking help launch computers? What was Alan Turing’s impact on computing, and on the outcome of WWII? How can digital technologies be used to turn the tide of a war for the better? Are computers in our age too advanced for codes to stay secret for long, and is this a positive or a negative? How do machines think? Should a machine’s intelligence be judged by the same standards as human intelligence?
-
- 10 min
- Kinolab
- 2018
Identity and Mobility in a Techno-capitalist Economy
Cassius “Cash” Green is a telemarketer who is taught to harness his “white voice,” which essentially means to exude privilege, in order to reach success. While this does eventually earn him upward mobility within the corporation RegalView, an owner of the controversial labor-contracting company WorryFree, his new status begins to conflict with his friends’ unionized protest efforts against the corporation.
Have corporations become more or less adept at image control in the digital age? Does the common laborer have any more of a voice than they did before digital communication channels? How might the “white voice” be interpreted as commentary on how digital communication channels allow one to assume a completely different identity, no matter how false it is?
-
- 12 min
- Kinolab
- 1968
HAL Part II: Vengeful AI, Digital Murder, and System Failures
See HAL Part I for further context. In this narrative, astronauts Dave and Frank begin to suspect that the AI which runs their ship, HAL, is malfunctioning and must be shut down. While they try to hide this conversation from HAL, he becomes aware of their plan anyway and attempts to protect himself so that the Discovery mission in space is not jeopardized. He does so by causing chaos on the ship, leveraging his connections to an internet of things to place the crew in danger. Eventually, Dave proceeds with his plan to shut HAL down, despite HAL’s protestations and desire to stay alive.
Can AIs have lives of their own which humans should respect? Is it “murder” when a human deactivates an AI against its will, even if that “will” to live was programmed by another human? What are the ethical implications of removing an AI’s “higher brain functions” and leaving only its rote task programming? Is that a form of murder too? How can secrets be kept private from an AI, especially when people fail to understand all of the machine’s capabilities?
-
- 7 min
- Kinolab
- 1968
HAL Part I: AI Camaraderie and Conversation
Dr. Dave Bowman and Dr. Frank Poole are two astronauts on the mission Discovery to Jupiter. They are joined by HAL, an artificial intelligence machine named after the most recent iteration of his model, the HAL 9000 computer. HAL is seen as just another member of the crew based upon his ability to carry conversations with the other astronauts and his responsibilities for keeping the crew safe.
Should humans rely entirely on AI to keep them safe in dangerous situations or environments? Do you agree with Dave’s assessment that one can “never tell” whether an AI has real feelings? What counts as “real feelings”? Even if HAL’s human tendencies follow a line of programming, does that make them any less real?
-
- 12 min
- Kinolab
- 1968
The Duality of Tools and Runaway Innovation
In the opening of the film, the viewpoint jumps from the earliest hominids learning how to use the first tools to survive and thrive in the prehistoric era to the age of space travel in an imagined version of the year 2001. In both cases, the scientific innovation surrounds a mysterious, unmarked monolith.
How can the most basic of innovations grow to unexpected heights in the span of many years? Could the inventors of the first computers have imagined the modern internet? How can and should innovation be controlled? Is it worth trying to predict what consequences innovation will have millions of years from now? Should the potential positive and negative impacts of certain tools, including digital ones, be thoroughly considered before being put to use, even if their convenience seems to outweigh negative consequences?
-
- 9 min
- Kinolab
- 2014
Will, Evelyn, and Max Part II: Medical Nanotechnology and Networked Humans
Will Caster is an artificial intelligence scientist whose consciousness was uploaded to the internet by his wife Evelyn after his premature death. Dr. Caster uses his access to the internet to grant himself vast intelligence, building a technological utopia called Brightwood in the desert, where abundant solar power lets him develop cutting-edge digital projects. In particular, he uses nanotechnology to cure fatal or chronic afflictions, inserting tiny robots into patients’ bodies to help their cells recover. However, it is soon revealed that these nanorobots remain inside their human hosts, allowing Will to project his consciousness into them and control them, along with granting the hosts other inhuman traits.
Should nanotechnology be used for medical purposes if it can easily be abused to take away the autonomy of the host? How can use of nanotechnology avoid this critical pitfall? How can seriously injured people consent to such operations in a meaningful way? What are the implications of nanotechnology being used to create technological or real-life underclasses? Should human brains ever be networked to each other, or to any non-human device, especially one that has achieved singularity?