Privacy (137)
Find narratives by ethical themes or by technologies.
-
- 5 min
- MIT Technology Review
- 2020
Inside the strange new world of being a deepfake actor
This article details the reactions to the deepfake documentary In Event of Moon Disaster.
-
- 33 min
- Scientific American
- 2020
To Make a DeepFake
In conjunction with Scientific American, this thirty-minute documentary brings the film In Event of Moon Disaster to a group of experts on AI, digital privacy, law, and human rights to gauge their reactions to the film and to provide context on this new technology: its perils, its potential, and the possibilities of this brave new digital world, where every pixel that moves past our collective eyes is potentially up for grabs.
What are some of the new dangers this technology brings that are different from other forms of media disinformation in the past?
-
- ZDNet
- 2021
Judge approves $650m settlement for Facebook users in privacy, biometrics lawsuit
Facebook’s use of biometrics to develop facial recognition came under scrutiny from those concerned that users’ privacy was not being protected. The company has now agreed to a $650 million settlement to close the lawsuit over this issue.
What role do you think the government should play in establishing precedent for violations of privacy by technology companies?
-
- 7 min
- The Verge
- 2020
What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias
PULSE is an algorithm that can supposedly reconstruct what a face looks like from a pixelated image. The problem: more often than not, the algorithm returns a white face, even when the person in the pixelated photograph is a person of color. PULSE works by generating a synthetic face that matches the pixel pattern rather than by actually sharpening the original image. It is these synthetic faces that show a clear bias toward white people, demonstrating how thoroughly institutional racism makes its way into technological design. Thus, more diverse data sets will not fully help until broader solutions for combating bias are enacted.
What potential harms could you see from the misapplication of the PULSE algorithm? What sorts of bias-mitigating solutions besides more diverse data sets could you envision? Based on this case study, what sorts of real-world applications should facial recognition technology be trusted with?
-
- 5 min
- BBC
- 2021
Facial recognition technology meant mum saw dying son
The facial recognition technology used by the South Wales Police can identify an individual from biometric data nearly instantly, rather than in the previous standard of 10 days; this speed allowed a mother to say goodbye to her son on his deathbed. The technology appears to have other positive impacts as well, such as identifying criminals earlier than they otherwise would have been. As is usually the case, however, concerns abound about how facial recognition can violate human rights.
Who can be trusted with facial recognition algorithms that can give someone several possibilities for the identity of a particular face? Who can be trusted to decide in what cases this technology can be deployed? How can bias become problematic when a human is selecting one of many faces recommended by the algorithm? Should the idea of constant surveillance or omnipresent cameras make us feel safe or concerned?
-
- 7 min
- New York Times
- 2018
Facial Recognition Is Accurate, if You’re a White Guy
This article details the research of Joy Buolamwini on racial bias coded into algorithms, specifically facial recognition programs. When auditing facial recognition software from several large companies, including IBM and Face++, she found that the systems were far worse at correctly identifying darker-skinned faces. Overall, her findings reveal that facial analysis and recognition programs need external systems of accountability.
What does exterior accountability for facial recognition software look like, and what should it look like? How and why does racial bias get coded into technology, whether explicitly or implicitly?