News Article (145)
- 7 min
- New York Times
- 2018
Facial Recognition Is Accurate, if You’re a White Guy
This article details Joy Buolamwini's research on racial bias coded into algorithms, specifically facial recognition programs. When auditing facial recognition software from several large companies, including IBM and Face++, she found that it performs far worse at correctly identifying darker-skinned faces. Overall, this reveals that facial analysis and recognition programs need external systems of accountability.
What does external accountability for facial recognition software look like, and what should it look like? How and why does racial bias get coded into technology, whether explicitly or implicitly?
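The audit described above boils down to comparing a classifier's accuracy across demographic subgroups. A minimal sketch of that idea follows; the data, group labels, and numbers here are entirely hypothetical and only illustrate the shape of such a disparity audit, not Buolamwini's actual methodology or results.

```python
# Sketch of a disparity audit: compare a classifier's accuracy
# across demographic subgroups. All data below is hypothetical.

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label).
    Returns {group: fraction of correct predictions}."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Hypothetical audit sample: gender labels assigned by a face
# classifier, broken down by the subject's skin type.
audit = [
    ("lighter-skinned", "female", "female"),
    ("lighter-skinned", "male", "male"),
    ("lighter-skinned", "male", "male"),
    ("lighter-skinned", "female", "female"),
    ("darker-skinned", "male", "female"),
    ("darker-skinned", "female", "female"),
    ("darker-skinned", "male", "female"),
    ("darker-skinned", "male", "male"),
]
rates = accuracy_by_group(audit)
print(rates)  # accuracy is markedly lower for the darker-skinned group
```

A gap between the per-group rates is exactly the kind of evidence an external auditor would publish to hold vendors accountable.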
-
- 5 min
- Inc
Clubhouse Is Recording Your Conversations. That’s Not Even Its Worst Privacy Problem
Clubhouse, a new, exclusive social network app that appeared during the coronavirus pandemic, has some frightening data collection practices, which this article outlines in detail. Essentially, although the company was not monetized at the time of this article, it collects data not only on users of the platform but also on those users' contacts.
What are the consequences of a social network having detailed data on the personal networks of its users? What are the dangers of collecting data by putting many different social networking platforms into conversation with one another? How do draws such as exclusivity pull attention away from irresponsible data mining practices?
-
- 5 min
- Gizmodo
- 2021
Bots Reportedly Helped Fuel GameStonks Hype on Facebook, Twitter, and Other Platforms
A thorough investigation concluded that bots played a role in the economic disruption of GameStop stock in early 2021. Essentially, automated accounts aided the diffusion of material promoting the purchase and holding of GameStop stock as a ploy to check wealthy hedge fund managers who had bet that the stock would crash. The holistic effect of these bots in this specific campaign, and thus a measure of how bots may generally be used to cause economic disruption in online markets through interaction with humans, remains hard to gauge.
Do you consider this case study, and the use of the bots, to be “activism”? How can this case study be summarized into a general principle for how bots may manipulate the economy? How do digital technologies help both wealthy and non-wealthy people serve their own interests?
-
- 7 min
- VentureBeat
- 2021
GPT-3: We’re at the very beginning of a new app ecosystem
The GPT-3 natural language processing model, created by the company OpenAI and released in 2020, is the most powerful of its kind, using a generalized approach to feed its machine learning algorithm in order to mirror human speech. The potential applications of such a powerful program are manifold, but this potential means that many tech monopolies may want to enter an “arms race” to build the most powerful model possible.
Should AI be able to imitate human speech unchecked? Should humans be trained to be able to tell when speech or text might be produced by a machine? How might Natural Language Processing cheapen human writing and writing jobs?
-
- 3 min
- CNN
- 2021
Microsoft patented a chatbot that would let you talk to dead people. It was too disturbing for production
The wealth of social data on any given person afforded by digital artifacts, such as social media posts and text messages, can be used to train a new algorithm patented by Microsoft to create a chatbot meant to imitate that specific person. This technology has not been released, however, due to its harrowing ethical implications of impersonation and dissonance. For the Black Mirror episode referenced in the article, see the narratives “Martha and Ash Parts I and II.”
How do humans control their identity when it can be replicated through machine learning? What sorts of quirks and mannerisms are unique to humans and cannot be replicated by an algorithm?
-
- 7 min
- VentureBeat
- 2021
Center for Applied Data Ethics suggests treating AI like a bureaucracy
As machine learning algorithms become more deeply embedded in all levels of society, including governments, it is critical for developers and users alike to consider how these algorithms may shift or concentrate power, specifically as it relates to biased data. Historical and anthropological lenses are helpful in dissecting AI in terms of how they model the world, and what perspectives might be missing from their construction and operation.
Whose job is it to ameliorate the “privilege hazard”, and how should this be done? How should large data sets be analyzed to avoid bias and ensure fairness? How can large data aggregators such as Google be held accountable to new standards of scrutinizing data and introducing humanities perspectives in applications?