Natural Language Interfaces (23)
- 15 min
- 2024
The false promise of keeping a loved one “alive” with A.I. grief bots.
In this piece, Leong—a Catholic attorney and theology graduate student—explores the ethical, spiritual, and emotional implications of “grief tech,” particularly AI-powered “ghostbots” that simulate conversations with deceased loved ones. She critiques this technology through a Christian theological lens, drawing on thinkers like Karl Rahner and Tina Beattie to argue that such digital recreations undermine the embodied nature of human personhood and the Christian understanding of death.
- What does the article suggest about the meaning of personhood, and how might AI griefbots distort this concept?
- How might someone from a different religious or cultural tradition respond differently to the idea of digitally resurrecting the dead?
- 45 min
- The International Journal of Psychoanalysis
- 2024
Mourning, melancholia and machines: An applied psychoanalytic investigation of mourning in the age of griefbots
Because the technology simulates sentience, it removes the ethical imperative of considering the deceased as an irreducible other, fostering attachments that may displace living relationships and misrepresent the dead. While the author concedes that tightly regulated, consent-based applications (e.g., helping a child imagine a deceased parent) might offer therapeutic value, the prevailing danger is that griefbots short-circuit the lifelong, relational work of mourning. Psychoanalysis, the article concludes, must scrutinize these “post-human” tools to preserve an ethics of otherness in a culture increasingly tempted to outsource grief to machines.
- What is the danger of turning mourning into a private, self-regulated loop through the use of a grief bot?
- What are the benefits or harms of disconnecting from a deceased loved one during the grieving process, and why might that be lost through the use of grief bots?
- 50 min
- Science and Engineering Ethics
- 2022
The Ethics of ‘Deathbots’
Lindemann identifies grief bots as techno-social niches that shape the affective state of the user. Focusing on the dignity of the bereaved rather than of the deceased, Lindemann argues that grief bots can both regulate and deregulate users’ emotions. Referring to these relationships as pseudo-bonds, Lindemann characterizes what a typical relationship with a grief bot looks like. The article centers on the grief and well-being of griefbot users.
- What does Lindemann mean by internet-enabled techno-social niches, and what things exemplify them?
- After reading this paper, would you ever use a deathbot, or allow your digital remains to be used to create one? Why or why not?
- Outline the key data-protection and safety requirements you would test in a pilot program before approving any clinical deployment of grief bots.
The ChatGPT Congressional Hearing
On May 16, 2023, OpenAI CEO Sam Altman testified in front of Congress on the potential harms of AI and how it ought to be regulated in the future, especially concerning new tools such as ChatGPT and voice imitators.
After watching the CNET video of the top moments from the hearing, read the Gizmodo overview of the hearing, and then read the associated New York Times article. All three resources highlight the need for governmental intervention to hold companies that build AI products accountable, especially given the lack of fully effective congressional action on social media companies. While misinformation and deepfakes have concerned politicians since the advent of social media, the hearing also raises newer concerns, such as a fresh wave of job losses and the question of crediting artists.
- If you were in the position of the congresspeople in the hearing, what questions would you ask Sam Altman?
- Does Sam Altman put too much of the onus of ethical regulation on the government?
- How would the “license” approach apply to AI companies that already exist or have already released popular products?
- Do you believe Congress might still be able to “meet the moment” on AI?
- ZDNet
- 2021
Amazon makes Alexa Conversations generally available
Alexa Conversations improves the quality of its natural language processing as users feed it sample conversations. This feedback system allows Alexa Conversations to cut the costs of training developers and managing related data.
- What are some measures you think technology companies should implement to ensure the protection of users’ privacy?
- What role do you think the government should play?
- 7 min
- VentureBeat
- 2021
GPT-3: We’re at the very beginning of a new app ecosystem
The GPT-3 natural language processing model, created by OpenAI and released in 2020, is the most powerful of its kind, using a generalized approach to train its machine learning algorithm to mirror human speech. The potential applications of such a powerful program are manifold, but this potential means that many tech monopolies may want to enter an “arms race” to build the most powerful model possible.
- Should AI be able to imitate human speech unchecked?
- Should humans be trained to tell when speech or text might be produced by a machine?
- How might natural language processing cheapen human writing and writing jobs?