AI (143)
Find narratives by ethical themes or by technologies.
- 5 min
- Digital Health
- 2025
The Use of AI Scribing Tools by GPs
The BMA warns GPs to exercise caution when using AI scribing tools, emphasizing the need for proper clinical safety and information governance.
- What should be done to meet clinicians' growing demand for AI scribing tools?
- How might the conversation data recorded during doctor appointments be more sensitive than notes taken under older models of physician note-taking?
- What are the potential ethical issues and risks with this application of generative AI systems?
-
- 5 min
- Nature
- 2025
Why AI Should Not be Used in the Scientific Process
The researchers in this article argue against the use of AI in the scientific process. They contend that the sheer volume of academic articles being produced is putting immense strain on peer review. This limits the capacity for in-depth thought and conflates scientific progress with a skewed notion of academic productivity, in which progress is measured by the number of articles produced.
- What are the benefits and limitations of restricting the use of AI in the academic process?
- What do we as academic researchers and students lose by giving over scientific discovery and communication to generative AI systems?
-
- 5 min
- Nature
- 2025
Is AI watching us?
Research has found that 90% of studies on AI development, as well as 86% of the resulting patents, involve human imaging. This means that the data used to train AI models are well suited to surveillance applications by militaries, law enforcement, corporations, and other private actors. There is also substantial evidence that much of the research behind current models was funded by government and military agencies.
- Why do you think such a significant proportion of the studies involve human imaging?
- What are the ethical barriers in using human imaging to advance AI research?
- What are potential cases in which the images and information collected may be misused by government or private companies?
-
- 10 min
- The Guardian
- 2025
Meta wins AI copyright lawsuit as US judge rules against authors
Meta was accused of breaching copyright law: writers Sarah Silverman and Ta-Nehisi Coates argued that the Facebook owner had used their books without permission to train its AI system. The court ruled that Meta's use was protected as fair use because the plaintiffs could not show market dilution, that is, that a flood of AI-generated work similar to theirs would harm the market for their books. The ruling may, however, say more about the strength of the plaintiffs' argument than about the merits of the practice itself.
- How is media being uploaded to AI without the owner’s permission harmful to people who rely on creative work for their careers?
- How has the movement of open information access and internet technologies helped to fuel recent advances in AI?
-
- 10 min
- Rest of World
- 2024
AI “deathbots” are helping people in China grieve
This article provides an overview of griefbot culture in China. According to the article, users there report being very satisfied with their interactions with griefbots of their deceased loved ones.
- Why might there be a difference in the way griefbots are received in China compared to the US?
- Could griefbot technology be more effective or ethical in cultural traditions that view ancestors in the longer context of family relationships? Why or why not?
-
- 45 min
- The International Journal of Psychoanalysis
- 2024
Mourning, melancholia and machines: An applied psychoanalytic investigation of mourning in the age of griefbots
Because griefbot technology simulates sentience, it removes the ethical imperative of considering the deceased as an irreducible other, fostering attachments that may displace living relationships and misrepresent the dead. While the author concedes that tightly regulated, consent-based applications (e.g., helping a child imagine a deceased parent) might offer therapeutic value, the prevailing danger is that griefbots short-circuit the lifelong, relational work of mourning. Psychoanalysis, the article concludes, must scrutinize these "post-human" tools to preserve an ethics of otherness in a culture increasingly tempted to outsource grief to machines.
- What is the danger of turning mourning into a private, self-regulated loop through the use of a griefbot?
- What are the benefits or harms of disconnecting from a deceased loved one during the grieving process, and why might that be lost through the use of griefbots?