AI (115)

Find narratives by ethical themes or by technologies.

Filters
Themes
  • Privacy
  • Accountability
  • Transparency and Explainability
  • Human Control of Technology
  • Professional Responsibility
  • Promotion of Human Values
  • Fairness and Non-discrimination
Technologies
  • AI
  • Big Data
  • Bioinformatics
  • Blockchain
  • Immersive Technology
Additional Filters:
  • Media Type
  • Availability
  • Year
    • 1916 - 1966
    • 1968 - 2018
    • 2019 - 2069
  • Duration

The Toxic Potential of YouTube’s Feedback Loop
  • Wired
  • 2019
  • 6 min

Examines how harmful content spreads through YouTube’s AI recommendation engine, how the algorithm helps create filter bubbles and echo chambers, and how users have limited agency over the content they are exposed to.
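
To make the feedback-loop mechanism concrete, the sketch below is a minimal, hypothetical simulation (not YouTube’s actual system): a recommender that ranks purely by past engagement keeps re-surfacing whatever the user has already clicked, so recommendations narrow even when the user starts with no strong preference.

```python
import random
from collections import Counter

# Hypothetical feedback-loop simulation (not YouTube's real algorithm):
# topics the user engages with get higher scores, so they keep being
# recommended, and the slate narrows over time.

TOPICS = ["news", "music", "gaming", "conspiracy", "cooking"]

def recommend(scores, k=3):
    """Return the k topics with the highest engagement scores."""
    return sorted(TOPICS, key=lambda t: scores[t], reverse=True)[:k]

def simulate(steps=50, seed=0):
    rng = random.Random(seed)
    scores = Counter({t: 1.0 for t in TOPICS})  # no initial preference
    clicks = []
    for _ in range(steps):
        slate = recommend(scores)
        choice = rng.choice(slate)   # the user can only click what is shown
        scores[choice] += 1.0        # engagement feeds straight back into ranking
        clicks.append(choice)
    return Counter(clicks)

if __name__ == "__main__":
    # Clicks end up concentrated on the topics that happened to be shown first.
    print(simulate())
```

Because the user can only choose from what is shown, topics that never make the slate never accumulate engagement, which is the limited user agency the article points to.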

Artificial Intelligence and Disability
  • TechCrunch
  • 2020
  • 51 min

In this podcast, several disability experts discuss the evolving relationship between disabled people, society, and technology. The main point of discussion is the difference between the medical and societal models of disability: the medical lens tends to spur technologies focused on remedying an individual’s disability, whereas the societal lens could spur technologies that lead to a more accessible world. Artificial intelligence and machine learning are labelled as inherently “normative,” since they are trained on data that comes from a biased society and are therefore less likely to work in favor of a social group as varied as disabled people. There is a clear need for institutional change in the technology industry to address these problems.

Google targets AI ethics lead Margaret Mitchell after firing Timnit Gebru
  • VentureBeat
  • 2021
  • 5 min

Relates the story of Google’s inspection of Margaret Mitchell’s account in the wake of Timnit Gebru’s firing from Google’s AI ethics division. With authorities in AI ethics clearly under fire, the Alphabet Workers Union aims to protect workers who can bring ethical perspectives to AI development and deployment.

Salesforce researchers release framework to test NLP model robustness
  • VentureBeat
  • 2021
  • 7 min

New research and code released in early 2021 demonstrate that Natural Language Processing models are not as robust as they could be. The project, Robustness Gym, allows researchers and computer scientists to approach evaluation data with more scrutiny, organizing this data and testing the results of preliminary runs through a model to see what can be improved upon and how.
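
As an illustration of the general technique only (perturb inputs, then check whether a model’s predictions stay stable), the sketch below uses a toy keyword classifier. It does not use Robustness Gym’s actual API; `predict` and `add_typo` are hypothetical stand-ins for a real model and a real perturbation.

```python
import random

# Hedged sketch of perturbation-based robustness testing, the general idea
# behind evaluation toolkits like Robustness Gym (not its actual API).
# `predict` stands in for any text classifier you want to evaluate.

def predict(text: str) -> str:
    """Toy sentiment classifier: keyword lookup in place of a real model."""
    lowered = text.lower()
    return "positive" if "good" in lowered or "great" in lowered else "negative"

def add_typo(text: str, rng: random.Random) -> str:
    """Perturbation: swap two adjacent characters at a random position."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    chars = list(text)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def stability(examples, n_perturbations=5, seed=0):
    """Fraction of perturbed inputs whose prediction matches the original's."""
    rng = random.Random(seed)
    stable = total = 0
    for text in examples:
        original = predict(text)
        for _ in range(n_perturbations):
            total += 1
            stable += predict(add_typo(text, rng)) == original
    return stable / total

if __name__ == "__main__":
    examples = ["The movie was great", "The service was not good", "Terrible plot"]
    print(f"prediction stability under typos: {stability(examples):.2f}")
```

A report like this makes it easier to see which kinds of inputs a model handles poorly, which is the kind of scrutiny the article describes.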

The Year Deepfakes Went Mainstream
  • MIT Tech Review
  • 2020
  • 5 min

Amid the surge of the coronavirus pandemic, 2020 became an important year for new applications of deepfake technology. Although a primary concern about deepfakes is their ability to create convincing misinformation, this article describes other uses that center on entertaining, harmless creations.

How Cops Are Using Algorithms to Predict Crimes
  • Wired
  • 2018
  • 12 min

This video offers a basic introduction to the use of machine learning in predictive policing and to how it disproportionately affects low-income communities and communities of color.
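
As a stylized, hypothetical illustration of one mechanism behind that disproportionate impact (not a description of any system covered in the video): if patrols are allocated in proportion to historically recorded incidents, a neighborhood that starts out over-recorded keeps generating more records than an otherwise identical neighborhood, so the disparity compounds.

```python
# Stylized illustration, not any real deployment: patrols follow past
# records, more patrols record more of the (equal) true crime, and the
# initial disparity in recorded incidents keeps growing.

TRUE_RATE = {"A": 10, "B": 10}      # identical underlying incident rates
recorded = {"A": 30.0, "B": 10.0}   # but neighborhood A starts over-recorded

def allocate_patrols(records, total_patrols=10):
    """Assign patrols in proportion to historically recorded incidents."""
    total = sum(records.values())
    return {n: total_patrols * count / total for n, count in records.items()}

for year in range(5):
    patrols = allocate_patrols(recorded)
    for n in recorded:
        # More patrols in a neighborhood means more of its true incidents
        # get recorded, which earns it even more patrols next year.
        recorded[n] += TRUE_RATE[n] * patrols[n] / 10
    print(year, {n: round(v, 1) for n, v in recorded.items()})
```

The recorded gap widens every year even though the true rates never differ, which is one way the disproportionate impact described in the video can arise.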
