All Narratives (355)
Find narratives by ethical themes or by technologies.
-
- 4 min
- VentureBeat
- 2020
Researchers Find that Even Fair Hiring Algorithms Can Be Biased
A study of the recommendation engine behind TaskRabbit, an app that uses an algorithm to match the best workers to a given task, demonstrates that even algorithms designed to account for fairness and parity in representation can fail to deliver what they promise depending on the context.
Can machine learning ever be deployed in a way that fully eliminates human bias? Is bias encoded into every trained machine learning model? What would the ideal circumstance look like when using digital technologies and machine learning to reach equitable representation in hiring?
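The study itself is not reproduced here, but the idea of “parity in representation” can be made concrete with a toy check (all names and data below are hypothetical, not from the TaskRabbit study): compare how often workers from each demographic group appear in an algorithm’s top-ranked recommendations.

```python
# Illustrative sketch only: "parity in representation" as the share of
# recommendation slots each demographic group receives.
from collections import Counter

def representation_rates(recommended_workers, group_of):
    """Fraction of recommendation slots going to each group."""
    counts = Counter(group_of[w] for w in recommended_workers)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical data: worker IDs mapped to demographic groups.
group_of = {"w1": "A", "w2": "A", "w3": "B", "w4": "B"}
top_ranked = ["w1", "w2", "w1", "w3"]  # the algorithm's top picks across tasks

rates = representation_rates(top_ranked, group_of)
# Group A fills 3 of 4 slots (0.75) vs. 1 of 4 (0.25) for group B --
# far from parity, even if the algorithm satisfies some other fairness metric.
```

As the article suggests, a system can look fair on one metric (say, equal qualification thresholds) while the observed slot shares still diverge sharply by context, which is why single-number fairness guarantees can fail in deployment.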
-
- 4 min
- OneZero
- 2020
Dr. Timnit Gebru, Joy Buolamwini, Deborah Raji — an Enduring Sisterhood of Face Queens
A group of “Face Queens” (Dr. Timnit Gebru, Joy Buolamwini, and Deborah Raji) have joined forces to do important racial justice and equity work in the field of computer vision, pushing back against racism in the industry and blowing the whistle on biased machine learning and computer vision technologies still deployed by companies like Amazon.
How can the charge these women are leading for more equitable computer vision technologies be made even more visible? Should people need advanced degrees to have a voice in fighting technologies that are biased against them? How can corporations be made to listen to voices such as those of the Face Queens?
-
- 5 min
- Business Insider
- 2020
One of Google’s leading AI researchers says she’s been fired in retaliation for an email to other employees
This article tells the story of Timnit Gebru, a Google employee who was fired after Google refused to fully engage with her research on machine learning and algorithmic bias. She was terminated hastily after sending an email asking Google to meet certain research-based conditions. Gebru is a leading expert in the field of AI and bias.
How are tech monopolies able to dismiss recommendations to make their technologies more ethical? How do bias ethicists such as Gebru gain a more unshakeable platform? Who is going to hold tech monopolies accountable? Should these monopolies even be trying to fix their current algorithms, or might it be better to start fresh?
-
- 4 min
- Reuters
- 2020
From hate speech to nudity, Facebook’s oversight board picks its first cases
Facebook has a new independent Oversight Board to help moderate content on the site, selecting from the many cases presented to it the individual cases in which it will decide whether removing content is justified. The cases typically involve hate speech, “inappropriate visuals,” or misinformation.
How much oversight do algorithms or networks with a broad impact need? Who needs to be in the room when deciding what an algorithm or site should or should not allow? Can algorithms be designed to detect and remove hate speech? Should such an algorithm exist?
-
- 5 min
- Gizmodo
- 2020
Microsoft’s Creepy New ‘Productivity Score’ Gamifies Workplace Surveillance
The data privacy of employees is at risk under Microsoft’s new “Productivity Score” program, in which employers and administrators can use Microsoft 365 platforms to collect metrics on their workers in order to “optimize productivity.” This approach causes unnecessary stress for workers and effectively launches a surveillance program in the workplace.
How are justifications such as using data to “optimize productivity” employed to gather more data on people? How could such a goal be accomplished without the surveillance? How does this approach fail to account for a diversity of working methods?
-
- 5 min
- NPR
- 2020
Amazon, TikTok, Facebook, Others Ordered To Explain What They Do With User Data
After the FTC and 48 states charged Facebook with operating a monopoly in late 2020, the FTC is continuing its push to hold tech monopolies accountable by ordering large social media companies, including Facebook, TikTok, and Twitter, to disclose exactly what they do with user data, in hopes of increased transparency. Pair with “Facebook hit with antitrust lawsuit from FTC and 48 state attorneys general.”
Do you think that users, especially younger users, would trade their highly tailored recommender systems and social network experiences for data privacy? How much does transparency from tech monopolies help when many people are not fluent in how algorithms work? Should social media companies release the abstractions of users that they form from our data?