Podcast (7)
Find narratives by ethical themes or by technologies.
Quantifying Workers
- 27 min
- Cornell Tech
- 2019
A podcast about the quantification of workers in areas such as hiring and productivity. Dives into the discussion of why we should try to make algorithms fair, and warns specifically that algorithms can find "proxy variables" that approximate attributes like race or gender even when the algorithm is supposedly controlled for these factors.
What are the dangers of having an algorithm involved in the hiring process? Is efficiency worth the cost in this scenario? Can humans ever be placed in a binary context?
-
Algorithms in the Courtroom
- 28 min
- Cornell Tech
- 2019
Pre-trial risk assessment is part of an attempted answer to mass incarceration, but the data sometimes answer a different question than the one we are asking (they measure riskiness before incarceration, not how dangerous a person is afterward). Technologies and algorithms deployed amid social power differentials can be abused to compound injustice against, for example, people accused of a crime. Numbers are not neutral and can even act as a "moral anesthetic," especially when the sampled data contain confounding variables that collectors ignore. Engineers designing technology do not always anticipate the ethical questions raised when they make decisions that ought to be political.
Would you rely on a risk-assessment algorithm to make life-changing decisions for another human? How can the transparency culture which Robinson describes be created? How can we make sure that political decisions stay political, and don’t end up being ultimately answered by engineers? Can “fairness” be defined by a machine?
-
Teaching Ethics in Data Science
- 27 min
- Cornell Tech
- 2019
Solon Barocas discusses his relatively new course on ethics in data science, part of a larger trend toward developing ethical sensibility in the field. He shares ideas for spreading lessons across courses, promoting dialogue, and making sure students genuinely analyze problems while learning to stand up for the right thing. The episode offers a case study in technological ethical sensibility through the questions raised by predictive policing algorithms.
Why is it important to implement ethical sensibility in data science? What could happen if we do not?
-
Sexism and Racism in Silicon Valley
- 41 min
- The New York Times
- 2021
In this episode, Ellen Pao, an early whistleblower on gender bias and racial discrimination in the tech industry, tells the story of suing the venture capital firm Kleiner Perkins for gender discrimination. The episode then turns to how Silicon Valley, and the tech industry more broadly, is dominated by white men who focus on PR moves rather than trying to deeply understand or move toward racial and gender equity. In particular, Pao argues that social media companies and their CEOs can be especially performative in addressing inequality, fixating on individual cases rather than building a new, fairer culture.
How did Silicon Valley and the technology industry come to be dominated by white men? How can this be addressed, and how can the culture change? How can social networks in particular be re-imagined to open up doors to more diverse leadership and workplace cultures?
-
Artificial Intelligence and Disability
- 51 min
- TechCrunch
- 2020
In this podcast, several disability experts discuss the evolving relationship between disabled people, society, and technology. The main point of discussion is the difference between the medical and social models of disability: the medical lens tends to spur technologies focused on remedying disability in the individual, whereas the social lens could spur technologies that make the world itself more accessible. Artificial intelligence and machine learning are labelled inherently "normative," since they are trained on data from a biased society and are therefore less likely to work in favor of a social group as varied as disabled people. There is a clear need for institutional change in the technology industry to address these problems.
What are some problems with injecting even the most unbiased of technologies into a system biased against certain groups, including disabled people? How can developers aim to create technology which can actually put accessibility before profit? How can it be ensured that AI algorithms take into account more than just normative considerations? How can developers be forced to consider the myriad impacts that one technology may have on large heterogeneous communities such as the disabled community?
-
How Tech Transformed How We Hook Up—and Break Up
- 35 min
- Wired
- 2021
In this podcast, interviewees share narratives about how technologies such as digital photo albums, social media sites, and dating apps can change the nature of relationships and memories. Once a site's algorithms form an idea of what a user wants to see, it can be hard for the user to change that idea, as the Pinterest wedding example demonstrates. With photos, emotional reactions can be hard or nearly impossible for a machine to predict. And while dating apps do not necessarily profit by mining data, Match Group's practice of carving out dating niches through a portfolio of apps amounts to a near-monopoly and is cause for some concern.
How should algorithms determine what photos a specific user may want to see or be reminded of? Should machines be trusted with this task at all? Should users be able to take a more active role in curating their content in certain albums or sites, and would most users even want to do this? Does the existence of dating apps drastically change the nature of dating? How could creating a new application which introduces a new dating “niche” ultimately serve a tech monopoly?