Algorithms selectively favoring certain groups or demographics.
Algorithmic Bias (24)
-
- 10 min
- New York Times
- 2019
As Cameras Track Detroit’s Residents, a Debate Ensues Over Racial Bias
Racial bias in facial recognition software used for government civil surveillance in Detroit: the racially biased technology diminishes the agency of minority groups and amplifies latent human bias.
What are the consequences of employing biased technologies to surveil citizens? Who loses agency, and who gains it?
-
- 4 min
- OneZero
- 2020
Dr. Timnit Gebru, Joy Buolamwini, Deborah Raji — an Enduring Sisterhood of Face Queens
A group of “Face Queens” (Dr. Timnit Gebru, Joy Buolamwini, and Deborah Raji) have joined forces to pursue racial justice and equity in the field of computer vision, confronting racism within the industry while blowing the whistle on biased machine learning and computer vision technologies still deployed by companies like Amazon.
How can the charge these women are leading for more equitable computer vision technologies be made even more visible? Should people need advanced degrees to have a voice in fighting against technologies that are biased against them? How can corporations be made to listen to voices such as those of the Face Queens?
-
- 40 min
- New York Times
- 2021
She’s Taking Jeff Bezos to Task
As facial recognition technology becomes more prominent in everyday life, with law enforcement officials and private actors using it to identify faces by matching them against databases, AI ethicists and experts such as Joy Buolamwini are pushing back against the many forms of bias these technologies exhibit, specifically racial and gender bias. Governments often use such technologies callously or irresponsibly, and the lack of regulation of the private companies that sell these products could lead society into a post-privacy era.
Do you envision an FDA-style approach to technology regulation, particularly for facial recognition, being effective? Can large tech companies be incentivized to make truly ethical decisions about how their technology is created and deployed as long as the profit motive exists? What would this look like? What changes to the technology workforce, such as who designs software products or who chooses data sets, need to be made for technology’s impact to become more equitable across populations?
-
- 10 min
- MIT Technology Review
- 2020
We read the paper that forced Timnit Gebru out of Google. Here’s what it says.
This article explains Timnit Gebru’s ethical warnings against training natural language processing algorithms on large language models built from massive sets of textual data scraped from the internet. Not only does this process have a negative environmental impact, it also fails to equip these machine learning tools to process semantic nuance, especially as it relates to burgeoning social movements or to countries with lower internet access. Dr. Gebru’s refusal to retract this paper ultimately led to her dismissal from Google.
How should models for training NLP algorithms be more closely scrutinized? What sorts of voices are needed at the design table to ensure that the impact of such algorithms is consistent across all populations? Can this ever be achieved?
-
- 7 min
- New York Times
- 2018
Facial Recognition Is Accurate, if You’re a White Guy
This article details Joy Buolamwini’s research on racial bias coded into algorithms, specifically facial recognition programs. When auditing facial recognition software from several large companies, including IBM and Face++, she found that they are far worse at properly identifying darker-skinned faces. Overall, this reveals that facial analysis and recognition programs need external systems of accountability; a sketch of what such an audit might compute follows below.
What does external accountability for facial recognition software look like, and what should it look like? How and why does racial bias get coded into technology, whether explicitly or implicitly?
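As an illustration of the kind of audit described above, here is a minimal sketch, assuming a labeled evaluation set annotated by demographic group. All names and data below are hypothetical, not Buolamwini’s actual benchmark or methodology:

```python
from collections import defaultdict

def audit_by_group(records):
    """Compute per-group accuracy for a face classifier.

    `records` is an iterable of (group, true_label, predicted_label)
    tuples. Returns a dict mapping each demographic group to its
    accuracy, so that disparities between groups become explicit.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        if prediction == truth:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit results, in the spirit of an external audit:
results = audit_by_group([
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),   # misclassification
    ("darker-skinned female", "female", "female"),
])
for group, accuracy in sorted(results.items()):
    print(f"{group}: {accuracy:.0%} accurate")
```

A real audit, such as Buolamwini and Gebru’s Gender Shades study, would use a carefully balanced benchmark and report richer error metrics, but the core move is the same: comparing performance across demographic subgroups rather than reporting a single aggregate accuracy.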
-
- 10 min
- Gizmodo
- 2021
Developing Algorithms That Might One Day Be Used Against You
Physicist Brian Nord, who learned about deep learning algorithms through his research on the cosmos, warns that developing algorithms without proper ethical sensibility can lead them to do more harm than good. Essentially, an “a priori,” proactive approach to instilling ethical sensibility in AI, whether through review institutions or the ethical education of developers, is needed to guard against privileged populations using algorithms to maintain hegemony.
What would an ideal algorithmic accountability organization or process look like? What specific ethical areas should AI developers study before creating their algorithms? How can algorithms or other programs created for one context, such as scientific research or learning, be misused in other contexts?