Robotics (65)
Find narratives by ethical themes or by technologies.
- 5 min
- MIT Technology Review
- 2019
When algorithms mess up, the nearest human gets the blame
Humans take the blame for failures of automated AI systems, protecting the integrity of the technological system and becoming a “liability sponge.” The role of humans in sociotechnical systems needs to be redefined.
Should humans take the blame for algorithm-created harm? At what level (development, corporate, or personal) should this liability occur?
-
- 4 min
- Kinolab
- 2017
Robot Relationships and Marriage
K is an android who works with the LAPD to track down and destroy escaped older models of “replicants,” or humanoid robots, in a world where androids work as laborers without compensation. In this clip, we meet K’s virtual wife, Joi. Although she is not “real,” she seems to have genuine human feelings and presents as a human woman who keeps K company and can complete tasks such as making him dinner.
What problems arise from using robotic companions to fulfill gendered tasks? How might this alter perceptions of real people? Consider how Joi is “typecast” as a 1950s housewife and can alter her appearance on command. How could virtual or AI female assistants and robots perpetuate harmful gender norms? Can robots truly love each other, or can this only be achieved through specific coding? If humans are to give robots a full range of emotions and the autonomy to live independently, are humans then responsible for providing them with companions? Would it be more or less uncomfortable if a real human owned and used the Joi hologram, and why?
-
- 12 min
- Kinolab
- 1982
Distinguishing Between Robots and Humans
In dystopian 2019 Los Angeles, humanoid robots known as “replicants” are on the loose, and must be tracked down and killed by bounty hunters. The normal role for replicants is to serve as laborers in space colonies; they previously were not meant to incorporate into human society. The first two clips demonstrate the Voigt-Kampff test, this universe’s Turing Test to determine if someone is a replicant or a human. While the android Leon is discovered and retaliates quickly, Rachel, a more advanced model of android, is able to hide her status as an android for longer because she herself believes she is human due to implanted memories. When this secret is revealed to Rachel, she becomes quite upset.
Will “Turing Tests” such as the one shown here become more common practice if AI becomes seemingly indistinguishable from humans? In this universe, the principal criterion for discovering an android is seeing whether they display empathy toward animals. Is this a fair criterion by which to judge a machine? Do all humans show empathy toward animals? If AI can replicate humans, do they need to disclose their status as androids? Why? What makes Rachel’s life less “real” than any other human’s? What are the dangers of giving human memories to AI?
-
- 11 min
- Kinolab
- 1982
Meaning and Duration of Android Lives
Roy Batty is a rogue humanoid android, known as a “replicant,” who escaped his position as an unpaid laborer in a space colony and now lives among humans on Earth. After discovering that he only has a lifespan of four years, Roy breaks into the penthouse of his creator Eldon Tyrell and implores him to find a way to prolong his life. After Tyrell refuses and lauds Roy’s advanced design, Roy kills Tyrell, despite seeing him as a sort of father figure. After fleeing from the penthouse, he is found by android bounty hunter Rick Deckard, who proceeds to chase him across the rooftops. After a short confrontation with Deckard, Roy delivers a monologue explaining his sorry state of affairs.
Should robots modeled to act like real humans be given a predetermined, short lifespan? Should they ever be expected to complete uncompensated work? How should the creators of robots give their creations the opportunity to make meaning of their lives? Who is ultimately responsible for “parenting” a sentient AI?
-
- 7 min
- Kinolab
- 2002
Retinal Scans and Immediate Identification
In the year 2054, the PreCrime police program is about to go national. At PreCrime, three clairvoyant humans known as “PreCogs” are able to forecast future murders by streaming audiovisual data which provides the surrounding details of the crime, including the names of the victims and perpetrators. Joe Anderson, the former head of the PreCrime policing program, is named as a future perpetrator and must flee from his former employer. Due to the widespread nature of retinal scanning biometric technology, he is found quickly, and thus must undergo an eye transplant. While recovering in a run-down apartment, the PreCrime officers deploy spider-shaped drones to scan the retinas of everyone in the building.
Is it possible that people would consent to having their retinas scanned in public places if it meant a more personalized experience of those spaces? Should governments be able to deceive people into giving up their private data, as social media companies already do? How can people protect themselves from retinal scanning and other biometric identification technologies, on both small and large scales?
-
- 8 min
- Kinolab
- 2016
Murder of Robots and Honesty
Eleanor Shellstrop, a selfish woman, ended up in the utopian afterlife The Good Place by mistake after her death. She spins an elaborate web of lies to ensure that she is not sent to be tortured in The Bad Place. In this narrative, she attempts to prevent Michael, the ruler of The Good Place, from being sent to the torture chambers by murdering Janet, the robotic assistant of The Good Place. However, Eleanor and her companions have a harder time murdering Janet than they had prepared for, thanks to her quite realistic begging for her life.
How can robots be programmed to manipulate humans’ emotional responses? Is the act committed in this narrative “murder”? Is there any such thing as a victimless lie? How has true honesty become harder in the digital age? Is it ethical to decommission older versions of humanoid robots as newer ones come along, or is this evolution in its own right?