News Article (145)
Find narratives by ethical themes or by technologies.
-
- 6 min
- Wired
- 2019
The Toxic Potential of YouTube’s Feedback Loop
Harmful content spreads through YouTube’s AI recommendation engine. The algorithm helps create filter bubbles and echo chambers, leaving users with limited agency over the content they are exposed to.
How much agency do we have over the content we are shown in our digital artifacts? Who decides this? How skeptical should we be of recommender systems?
-
- 5 min
- Inc
- 2021
Tim Cook May Have Just Ended Facebook
On International Data Privacy Day, Apple CEO Tim Cook fired shots at Mark Zuckerberg and Facebook’s model of mining user data through platform analytics and web mining to serve targeted ads. By contrast, Cook painted Apple as a privacy-oriented company that wants to make technology work for its users by not collecting their data or manipulating them psychologically through advertising.
Are you convinced that Apple has a better business model than Facebook? Should users be responsible for taking steps to protect themselves against web mining, or should Facebook be responsible for adding in more guardrails? What are the consequences of both Facebook and Apple products being involved in larger architectures that extend beyond the singular digital artifact?
-
- 5 min
- Wired
- 2021
These Doctors Are Using AI to Screen for Breast Cancer
A computer vision algorithm created by an MIT PhD student and trained on a large data set of mammogram images spanning several years shows potential for use in radiology. The algorithm identifies breast cancer risk seemingly more reliably than older statistical models by tagging the data with attributes that human eyes have missed. This would allow screening and treatment plans to be customized.
Do there seem to be any drawbacks to using this technology widely? How important is transparency of the algorithm in this case, as long as it seems to provide accurate results? How might this change the nature of doctor-patient relationships?
-
- 10 min
- The Washington Post
- 2019
Are ‘bots’ manipulating the 2020 conversation? Here’s what’s changed since 2016.
After prolonged discussion of the effect of “bots,” or automated accounts on social networks, interfering with the American electoral process in 2016, many worried that something similar could happen in 2020. This article details the shifts in strategy for using bots to manipulate political conversations online, including techniques like Inorganic Coordinated Activity and hashtag hijacking. Overall, some bot manipulation of political discourse is to be expected, but when used effectively these algorithmic tools still have the power to shape conversations to the will of their deployers.
How are social media networks architectures that can be manipulated to serve an individual’s agenda, and how could this be addressed? Should any kind of bot accounts be allowed on Twitter, or do they all have too much negative potential to be trusted? What affordances of social networks allow bad actors to redirect the traffic of these networks? Is the problem of “trends” or “cascades” inherent to social media?
-
- 5 min
- Time
- 2021
4 Big Takeaways From the Facebook Whistleblower Congressional Hearing
In 2021, former Facebook employee and whistleblower Frances Haugen testified that Facebook knew how its products harmed teenagers in terms of body image and social comparison; yet, because of its profit model, the company did not significantly attempt to ameliorate these harms. This article provides four key lessons to learn from how Facebook’s model is harmful.
How does social quantification result in negative self-conception? How are the environments of social media platforms more harmful in terms of body image or “role models” than in-person environments? What are the dangers of every person having easy access to a broad platform of communication in terms of forming models of perfection? Why do social media algorithms want to feed users increasingly extreme content?
-
- 10 min
- The Atlantic
- 2014
How Self-Tracking Apps Exclude Women
When the Apple Health app first launched, it lacked one crucial component: the ability to track menstrual cycles. This exclusion of women from accessible technology design is not the exception but the rule. It stems from the gender imbalance in technology workplaces, especially at the level of design. Communities such as the Quantified Self offer spaces to help combat this exclusionary culture.
In what ways are women being left behind by personal data tracking apps, and how can this be fixed? How can design strategies and institutions in technology development be inherently sexist? What will it take to ensure glaring omissions such as this one do not occur in other future products? How can apps that track and promote certain behaviors avoid being patronizing or patriarchal?