Privacy
Find narratives by ethical themes or by technologies.
From hate speech to nudity, Facebook’s oversight board picks its first cases
- 4 min
- Reuters
- 2020
Facebook has convened a new independent Oversight Board to help moderate content on the site, selecting individual cases from the many submitted to it and deciding whether removing the content was justified. The cases typically involve hate speech, “inappropriate visuals,” or misinformation.
How much oversight do algorithms or networks with a broad impact need? Who needs to be in the room when deciding what an algorithm or site should or should not allow? Can algorithms be designed to detect and remove hate speech? Should such an algorithm exist?
-
Microsoft’s Creepy New ‘Productivity Score’ Gamifies Workplace Surveillance
- 5 min
- Gizmodo
- 2020
The data privacy of employees is at risk under Microsoft’s new “Productivity Score” program, which lets employers and administrators use Microsoft 365 platforms to collect a range of metrics on their workers in order to “optimize productivity.” In practice, the program establishes a workplace surveillance regime and creates unnecessary stress for workers.
How are justifications such as “optimizing productivity” used as cover for gathering more data on people? How could that goal be accomplished without the surveillance aspect? How does this approach fail to account for a diversity of working styles?
-
How Tech Transformed How We Hook Up—and Break Up
- 35 min
- Wired
- 2021
In this podcast, interviewees share several narratives about how certain technologies, especially digital photo albums, social media sites, and dating apps, can change the nature of relationships and memories. Once a site’s algorithms form an idea of what a user wants to see, it can be hard for the user to change that idea, as the Pinterest wedding example demonstrates. When it comes to photos, emotional reactions can be hard or nearly impossible for a machine to predict. And while dating apps do not necessarily profit by mining data, Match Group’s near-monopoly, built by carving out different dating niches through a variety of apps, is cause for some concern.
How should algorithms determine what photos a specific user may want to see or be reminded of? Should machines be trusted with this task at all? Should users be able to take a more active role in curating their content in certain albums or sites, and would most users even want to do this? Does the existence of dating apps drastically change the nature of dating? How could creating a new application which introduces a new dating “niche” ultimately serve a tech monopoly?
-
No Google-Fitbit merger without human rights remedies, says Amnesty to EU
- 5 min
- TechCrunch
- 2020
As Google attempts to merge with Fitbit, the NGO Amnesty International has warned EU competition regulators that the move would be detrimental to privacy. Given Google’s history of mishandling user data, and its status as a tech monopoly able to mine data from many different areas of a user’s life, adding wearable health tech to the equation puts users’ privacy and rights at risk. Amnesty calls for scrutiny of the “surveillance capitalism” practiced by tech giants.
When considering how companies and advertisers may use them, what sorts of personal statistics related to health and well-being should and should not be collected by mobile computing devices? How can devices originally built to stand on their own as one technological artifact become more convenient or harmful to a user when they become part of a technological architecture?
-
You Need to Opt Out of Amazon Sidewalk
- 5 min
- Gizmodo
- 2020
This article describes Amazon’s new Sidewalk feature and explains why users should opt out of the service. The feature uses the internet of things formed by Amazon devices such as the Echo and Ring cameras to create a secondary network connecting nearby homes that contain these devices, sustained by each home “donating” a small amount of broadband. The article argues that this is a dangerous design because the shared network may be susceptible to hackers, putting a large number of users at risk.
Why are “secondary networks” like the one described here a bad idea in terms of both surveillance and data privacy? Is it possible for the world to be too networked? How can tech developers make sure the general public has a healthy skepticism toward new devices? Or is it ultimately Amazon’s job to think about the ethical implications of this secondary network before introducing it for profits?
-
How a Dead Professor Is Teaching a University Art History Class
- 10 min
- Slate
- 2021
Using the story of art history professor François-Marc Gagnon, whose video lectures were used to instruct students even after his death, this article raises questions about how technologies such as digital memory and data streaming for education during the coronavirus pandemic may ultimately undervalue the work of educators.
What are the largest possible detriments to automating teaching, both for students and for educators? If large amounts of data from a given course or discipline were used to train an AI to teach a course, what would such a program do well, and what aspects of education would be missed? How can educators have more personal control over the digital traces of their teaching? At what point might broader access to educational materials through digital networks actually harm certain groups of people?