Accountability (39)
Find narratives by ethical themes or by technologies.
-
- 10 min
- MIT Technology Review
- 2020
We read the paper that forced Timnit Gebru out of Google. Here’s what it says.
This article explains Timnit Gebru’s ethical warnings against training large natural language processing models on massive datasets of text scraped from the internet. Not only does this process have a negative environmental impact, it also fails to equip these machine learning tools to process semantic nuance, especially as it relates to burgeoning social movements or countries with lower internet access. Dr. Gebru’s refusal to retract the paper ultimately led to her dismissal from Google.
How should models for training NLP algorithms be more closely scrutinized? What sorts of voices are needed at the design table to ensure that the impact of such algorithms is consistent across all populations? Can this ever be achieved?
-
- 30 min
- CNET, New York Times, Gizmodo
- 2023
The ChatGPT Congressional Hearing
On May 16, 2023, OpenAI CEO Sam Altman testified in front of Congress on the potential harms of AI and how it ought to be regulated in the future, especially concerning new tools such as ChatGPT and voice imitators.
After watching the CNET video of the top moments from the hearing, read the Gizmodo overview of the hearing and then the associated New York Times article. All three resources highlight the need for governmental intervention to hold companies that develop AI products accountable, especially given Congress’s largely ineffective record on regulating social media companies. While misinformation and deepfakes have been concerns among politicians since the advent of social media, the hearing also raises newer issues such as a fresh wave of job loss and the crediting of artists.
If you were in the position of the congresspeople in the hearing, what questions would you ask Sam Altman? Does Sam Altman put too much of the onus of ethical regulation on the government? How would the “license” approach apply to AI companies that already exist/have released popular products? Do you believe Congress might still be able to “meet the moment” on AI?
-
- 5 min
- Companion Proceedings of the Web Conference
- 2024
Content Moderation on Social Media in the EU
This paper provides an empirical analysis of content moderation practices across major social media platforms within the European Union (EU), using data from the Digital Services Act (DSA) Transparency Database.
- What is the distinction between the moderation of content and censorship?
- How would you define effective content moderation? What has shaped your views on this over the last five years?
-
- 15 min
- Splinter
- 2015
This Startup Promised 10,000 People Eternal Digital Life – Then It Died.
Intellitar marketed its service as a form of digital immortality. For a monthly fee of $25, clients could upload personal data, including voice recordings and photographs, to build a lifelike digital version of themselves. The company claimed to have attracted around 10,000 customers. However, despite its ambitious vision, Intellitar ceased operations, leaving its clients without access to their digital counterparts.
- Identify the stakeholders in a situation where a company offering digital immortality services goes bust.
- In what ways are digital remains similar to or different from physical remains and memorials? How might we preserve our digital selves in more permanent ways to avoid start-up failures like this one?
- Is this type of service something you could imagine using yourself or for a loved one?
-
- 90 min
- Minds and Machines
- 2017
The Political Economy of Death in the Age of Information
The authors define the Digital Afterlife Industry (DAI) as the ecosystem of commercial platforms—ranging from startups like Afternote and Departing.com to tech giants like Facebook and Google—that commodify and manage the digital remains (online data, profiles, memories) of deceased users. Using four real-world cases, the authors discuss how economic incentives can distort the “informational body” – rewriting profiles, automating posts, and reshaping digital personas.
- Should the digital remains of a deceased person be editable by family, friends, or the company hosting the digital immortal?
- Do tech companies have an ethical duty to preserve or remove digital remains?
- How are digital remains companies similar or different to funeral homes and cemeteries in the physical world? What laws govern these types of businesses and should they be applied to digital memorial companies?
-
- 5 min
- BBC
- 2025
Drone Use in Gaza
A surgeon in the occupied West Bank describes how Israeli military drones enter neighborhoods and target civilians, including injured children near hospitals.
- In what ways have warfare ethics been changed by the development of AI-based drone technologies?
- Considering how many types of technologies go into current military drones (CV, spatial navigation, robotics, AI, etc.), who are the responsible parties for the impact of drone warfare? Researchers and developers? Military organizations? Drone pilots?