Human Control of Technology (68)
Find narratives by ethical themes or by technologies.
- 4 min
- Kinolab
- 1982
Artificial Intelligence as Servants to Humans
Flynn codes a digital avatar, Clu, in an attempt to hack into the mainframe of ENCOM. However, when Flynn fails to get Clu past the virtual, video-game-like defenses, Clu is captured and violently interrogated by a mysterious figure in the virtual world.
How can we program AI to perform tasks remotely for us? How can AI be used to remotely hack into public or private systems? Does every program designed to complete a task, even a program such as malware, have a life of its own? What are the potential consequences of training AI solely to do the bidding of humans?
-
- 8 min
- Kinolab
- 1982
Digital Hegemony in the Real and Virtual Worlds
The Master Control Program, an artificial intelligence, has self-developed beyond the imagination of its creators and sets its sights on hacking global governments, including the Pentagon. It believes that, with its growing intelligence, it can rule better than any human can, and it forces the hand of Dillinger, a human, to help move its hacking beyond corporations. Meanwhile, a team of hackers attempts to break into the mainframe of this system. When the rebel hacker Flynn attempts to hack into the mainframe of the MCP, he is drawn into the digital world of the computer, which is under the dominion of the MCP. Sark, one of the digital beings who serves the MCP, is tasked with killing Flynn.
Is human anxiety over the potential for super-powered AI justified? Would things truly be better if machines and artificial intelligence made authoritative decisions as global actors and rulers?
What could be the implications of ‘teleporting’ into digital space in terms of alienation from the real world? For now, it seems that humans are in charge of computers in the “real” world; if humans were to enter a digital world, who would be in charge? Do AI beings owe subservience to humans for their creation, given their increasing intelligence?
-
- 14 min
- Kinolab
- 2014
Liberty, Autonomy, and Desires of Humanoid Robots
Caleb, a programmer at a large company, is invited by his boss Nathan to test a robot named Ava. During one session of the Turing Test, Ava fearfully interrogates Caleb about what her fate will be if the results of the test deem her insufficiently capable or human. Caleb struggles to deliver an honest answer, especially given that Ava displays attachment toward him, a sentiment he returns. After Caleb discovers that Nathan intends to essentially kill Ava, he loops her into his escape plan, offering her freedom and a chance to live a human life. Once Nathan is killed, Ava goes to his robotics repository and bestows a new physical, humanlike appearance upon herself. She then permanently traps Caleb, the only remaining person who knows she is an android, in Nathan's compound before escaping to live a human life in the real world.
What rights to freedom do AI have? Do sentient AI beings deserve to be at the mercy of their creators? What are the consequences of machines being able to detect and expose lies? Is emotional attachment to AI a valid form of love? What threat could well-disguised, hyper-intelligent AI pose for humanity? If no one knows or can tell the difference, does that matter?
-
- 12 min
- Kinolab
- 1968
HAL Part II: Vengeful AI, Digital Murder, and System Failures
See HAL Part I for further context. In this narrative, astronauts Dave and Frank begin to suspect that HAL, the AI which runs their ship, is malfunctioning and must be shut down. Although they try to hide this conversation from HAL, he becomes aware of their plan anyway and attempts to protect himself so that the Discovery mission is not jeopardized. He does so by causing chaos on the ship, leveraging his connection to its networked systems to place the crew in danger. Eventually, Dave proceeds with his plan to shut HAL down, despite HAL's protestations and desire to stay alive.
Can AI have lives of their own which humans should respect? Is it considered “murder” if a human deactivates an AI against their will, even if this “will” to live is programmed by another human? What are the ethical implications of removing the “high brain function” of an AI and leaving just the rote task programming? Is this a form of murder too? How can secrets be kept private from an AI, especially if people fail to understand all the capabilities of the machine?
-
- 6 min
- Kinolab
- 2017
Virtual Vindictiveness and Simulated Clones Part I: Daly and Walton
Robert Daly is a programmer at the company Callister, which developed the immersive virtual reality game Infinity and its community for the entertainment of users. Daly is typically overshadowed by the company's co-founder, the charismatic James Walton. Unbeknownst to anyone else, Daly possesses a personal modification of the Infinity game in which he can upload sentient digital clones of his co-workers and take out his frustrations upon them, as he does with Walton in this narrative.
What should the ethical boundaries be on creating digital copies of real-life people to manipulate in virtual realities? How would this alter perceptions of autonomy or entitlement? Should the capability to create exact digital likenesses of real people exist for any reason? If so, how should the autonomy of those likenesses be ensured, given that they are technically pieces of programming? Are digital copies of a person entitled to the same rights as their corporeal selves?
-
- 7 min
- Kinolab
- 1968
HAL Part I: AI Camaraderie and Conversation
Dr. Dave Bowman and Dr. Frank Poole are two astronauts on the Discovery mission to Jupiter. They are joined by HAL, an artificial intelligence named after the most recent iteration of his model, the HAL 9000 computer. HAL is regarded as just another member of the crew, given his ability to carry on conversations with the other astronauts and his responsibility for keeping the crew safe.
Should humans count on AI entirely to help keep them safe in dangerous situations or environments? Do you agree with Dave’s assessment that one can “never tell” if an AI has real feelings? What counts as “real feelings”? Even if HAL’s human tendencies follow a line of programming, does this make them less real?