AI (143)
Find narratives by ethical themes or by technologies.
-
- 5 min
- New York Times
- 2020
A Case for Facial Recognition
Whether law enforcement should be trusted with facial recognition is a difficult question, argues Detroit city official James Tate. On one hand, the combination of bias latent in the technology itself and the human bias of those who use it can lead to over-policing of certain communities. On the other, with the right guardrails it can be an effective tool for securing justice in cases of violent crime. This article details the ongoing debate over how much use of facial recognition technology is appropriate in Detroit.
Who should be deciding on the guardrails surrounding the use of facial recognition technology? How can citizens have more control over when their face is being recorded or captured? Can there ever be enough guardrails to truly ensure that facial recognition technology can be used with no chance of bias?
-
- 6 min
- TED
- 2020
How AI can help shatter barriers to equality
Jamila Gordon, an AI activist and the CEO and founder of Lumachain, tells her story as a refugee from Ethiopia to illuminate the great strokes of luck that eventually brought her to her prominent position in the global tech industry. Her talk makes a strong case for introducing AI into the workplace: computer vision can improve safety, and machine learning can help workers who do not speak the dominant language of their workplace or culture train and acclimate more effectively.
Would constant computer vision surveillance of a workplace ultimately be positive, negative, or both? How could it be ensured that machine learning algorithms are used only as positive forces in a workplace? What responsibility do large companies have to help those in less privileged countries access digital fluency?
-
- 7 min
- ZDNet
- 2020
Rebooting AI: Deep learning, meet knowledge graphs
Dr. Gary Marcus explains that deep learning as it currently exists does not maximize AI's potential to collect and process knowledge. He argues that these machine “brains” should have more innate knowledge than they do, similar to how animal brains function in processing an environment. Ideally, this baseline knowledge would be used to collect and process information from knowledge graphs: semantic webs of information available on the internet, which can be hard for an AI to process without translation into machine vocabularies such as RDF.
Does giving a machine learning capabilities similar to those of humans and animals bring artificial intelligence closer to the singularity? Should humans ultimately be in control of what a machine learns? What is problematic about leaving AI less capable of understanding semantic webs?
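The knowledge-graph idea discussed above can be made concrete with a toy sketch: facts stored as subject-predicate-object triples, the same shape RDF uses, queried by pattern matching. The facts and names below are invented for illustration, not drawn from the article or any real dataset.

```python
# Minimal sketch of a knowledge graph: facts as (subject, predicate,
# object) triples, mirroring the RDF data model mentioned above.
# All entities here are illustrative examples.

triples = {
    ("Dog", "is_a", "Mammal"),
    ("Mammal", "is_a", "Animal"),
    ("Dog", "has_trait", "FourLegs"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Everything known about "Dog": both its class and its traits.
dog_facts = query(subject="Dog")
```

Marcus's point, in these terms, is that a learner with some innate grasp of such structured relations could combine them with statistical learning, rather than having to rediscover every relation from raw data.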
-
- 5 min
- Business Insider
- 2020
One of Google’s leading AI researchers says she’s been fired in retaliation for an email to other employees
This article tells the story of Timnit Gebru, a Google employee who was fired after Google refused to take her research on machine learning and algorithmic bias into full account. She was terminated hastily after sending an email asking Google to meet certain research-based conditions. Gebru is a leading expert in the field of AI and bias.
How are tech monopolies able to dismiss recommendations that would make their technologies more ethical? How can bias ethicists such as Gebru gain a more unshakeable platform? Who is going to hold tech monopolies accountable? Should these monopolies even be trying to fix their current algorithms, or might it be better to start fresh?
-
- 4 min
- OneZero
- 2020
Dr. Timnit Gebru, Joy Buolamwini, Deborah Raji — an Enduring Sisterhood of Face Queens
The “Face Queens” (Dr. Timnit Gebru, Joy Buolamwini, and Deborah Raji) have joined forces to pursue racial justice and equity in the field of computer vision, fighting racism within the industry while blowing the whistle on biased machine learning and computer vision technologies still deployed by companies like Amazon.
How can the charge these women are leading for more equitable computer vision technologies be made even more visible? Should people need advanced degrees to have a voice in fighting against technologies that are biased against them? How can corporations be made to listen to voices such as those of the Face Queens?
-
- 4 min
- VentureBeat
- 2020
Researchers Find that Even Fair Hiring Algorithms Can Be Biased
A study of the engine behind TaskRabbit, an app that uses an algorithm to recommend the best workers for a given task, demonstrates that even algorithms designed for fairness and parity of representation can fail to deliver what they promise depending on the context.
Can machine learning ever be implemented in a way that fully eliminates human bias? Is bias encoded into every trained machine learning program? What would the ideal use of digital technologies and machine learning look like in reaching equitable representation in hiring?
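The parity notion such audits test for can be illustrated with a toy calculation. The metric below (a demographic-parity gap: the difference in selection rates between two groups) is a standard fairness measure, but the numbers are invented assumptions, not data from the TaskRabbit study.

```python
# Hypothetical sketch of a demographic-parity check. The decision
# lists are made up for illustration: 1 = the algorithm recommended
# the worker, 0 = it did not.

def selection_rate(decisions):
    """Fraction of candidates the algorithm recommended."""
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1]  # 3 of 4 recommended
group_b = [1, 0, 0, 1]  # 2 of 4 recommended

# A gap of 0 would mean equal selection rates across groups;
# larger gaps indicate the kind of disparity fairness audits flag.
parity_gap = abs(selection_rate(group_a) - selection_rate(group_b))
```

A zero gap measured on one task category can still coexist with large gaps on others, which echoes the study's finding that fairness guarantees can break down across contexts.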