Privacy (137)
Find narratives by ethical themes or by technologies.
- Wired
- 2021
Far-Right Platform Gab Has Been Hacked—Including Private Data
Following the January 6th Capitol riot, there have been many ongoing investigations into right-wing extremist groups. Pioneering these investigations are left-leaning hacktivists, determined to expose hate speech and abuse in private conversations.
Where do we draw the line in content moderation between allowing a feed of fake information and making sure we are not denying access to real news?
- 2 min
- Kinolab
- 2019
Personal Control over Memories
In an imagined future London, citizens across the globe are connected to the Feed, a device and network accessed constantly through a brain-computer interface. Tom, the son of the Feed’s creator Lawrence, realizes that his father deleted some of his childhood memories from the device in his brain, and that he has thus lost all access to them. For further insights into technology and the nature of parent-child relationships, see the narratives “Marie and Sara Parts I and II.”
What rights do parents have over the minds and bodies of their children? Should parents ever be able to alter the memories of their children, even if this is supposedly for their own good? What are the consequences of the externalisation of memory through digital technology? How should children be able to give consent for alterations to technological implants?
- 9 min
- Kinolab
- 2002
Trusting Machines and Variable Outcomes
In the year 2054, the PreCrime police program is about to go national. At PreCrime, three clairvoyant humans known as “PreCogs” forecast future murders by streaming audiovisual data that provides the surrounding details of the crime, including the names of the victims and perpetrators. Although there are no cameras, the implication is that anyone can be under constant surveillance by this program. Once the “algorithm” has gleaned enough data about the future crime, officers move out to stop the murder before it happens. In this narrative, the PreCrime program is audited, and the officers must explain the ethics and philosophies behind their systems. After Captain John Anderton is accused of a future crime, he flees and learns of “minority reports,” instances of disagreement between the PreCogs that the department covered up to make the justice system seem infallible.
What are the problems with taking the results of computer algorithms as infallible or entirely objective? How are such systems prone to bias, especially when two different algorithms might make two different predictions? Is there any way that algorithms could possibly make the justice system more fair? How might humans inflect the results of a predictive crime algorithm in order to serve themselves? Does technology, especially an algorithm such as a crime predictor, need to be made more transparent to its users and the general public so that people do not trust it with a religious sort of fervor?
- 10 min
- New York Times
- 2019
As Cameras Track Detroit’s Residents, a Debate Ensues Over Racial Bias
An examination of racial bias in the facial recognition software used for government civil surveillance in Detroit. Racially biased technology diminishes the agency of minority groups and amplifies latent human bias.
What are the consequences of employing biased technologies to surveil citizens? Who loses agency, and who gains agency?
- 7 min
- Vice
- 2019
Academics Confirm Major Predictive Policing Algorithm is Fundamentally Flawed
An academic perspective on an algorithm created by PredPol to “predict crime.” Unless every single crime is reported, and unless police pursue all types of crimes committed by all people equally, it is impossible to build a reinforcement learning system that predicts crime itself. Rather, police find crimes in the same places they have been told to look for them, feeding the algorithm skewed data and allowing unjust targeting of communities of color to continue based on trust in the algorithm (a feedback loop sketched in code below).
Can an algorithm which claims to predict crime ever be fair? Is it ever justified for volatile actors such as police to act on directions from a machine whose logic is not always transparent?
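The feedback loop the researchers describe is easy to reproduce in a toy simulation. The sketch below is a hypothetical illustration, not PredPol’s actual model: two districts share an identical true crime rate, the “predictor” sends patrols wherever past recorded crime is highest, and crime is only recorded where police are looking.

```python
import random

random.seed(0)

# Hypothetical toy model, not PredPol's software: two districts with an
# identical true crime rate. The "predictor" allocates patrols to whichever
# district has the most *recorded* crime, and crime is only recorded where
# police actually patrol.
TRUE_CRIME_RATE = 0.3
recorded = {"District A": 1, "District B": 0}  # one early report seeds the bias

for day in range(1000):
    # Predict: patrol the district with the most recorded crime so far.
    patrolled = max(recorded, key=recorded.get)
    # Observe: only the patrolled district can generate a report, even though
    # crime occurs at the same rate in both districts.
    if random.random() < TRUE_CRIME_RATE:
        recorded[patrolled] += 1

print(recorded)  # District A accumulates every report; District B stays at zero
```

Because the allocation rule only looks where it has already found crime, District B’s crime is never observed, and the lopsided data appear to “confirm” the prediction. This is the sense in which the training data are ineffective: they measure police attention, not crime.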
- 3 min
- CNET
- 2019
Thanks to Equifax breach, 4 US agencies don’t properly verify your data, GAO finds
US government agencies rely on outdated verification methods, increasing the risk of identity theft.
If the government does not ensure our cyber security, then who does? Can any digital method for identity verification be completely safe, especially given how much of our personal data lives in the digital world?