Accountability (39)
Find narratives by ethical themes or by technologies.
-
- 5 min
- GIS Lounge
- 2019
When AI Goes Wrong in Spatial Reasoning
GIS, a relatively new form of computational analysis, often relies on algorithms that inherit the biases present in their training data, much of which comes from open data sources. This case study focuses on power-line identification data that skews heavily toward the Western world. The problem can be mitigated by approaching data collection with more intentionality: broadening the pool of collected geographic data, or adding artificial images so the tool learns to recognize a wider range of circumstances and becomes more accurate.
What happens when the source of the data itself (the dataset) is biased? Can the ideas presented in this article (namely the intentional broadening of the training data pool and the inclusion of composite data) find application beyond GIS?
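The article's two remedies, broadening the geographic pool and adding synthetic examples, both amount to rebalancing a skewed training distribution. Below is a minimal, hypothetical sketch of region-based oversampling in Python; the `Sample` type, region tags, and file names are illustrative assumptions, not details from the article.

```python
import random
from collections import Counter
from dataclasses import dataclass

@dataclass
class Sample:
    image_path: str       # satellite tile (hypothetical file name)
    region: str           # coarse geographic tag
    has_power_line: bool  # label for the detection task

def rebalance_by_region(samples: list[Sample], seed: int = 0) -> list[Sample]:
    """Oversample under-represented regions so each region
    contributes equally many examples to the training pool."""
    rng = random.Random(seed)
    by_region: dict[str, list[Sample]] = {}
    for s in samples:
        by_region.setdefault(s.region, []).append(s)
    target = max(len(group) for group in by_region.values())
    balanced: list[Sample] = []
    for group in by_region.values():
        balanced.extend(group)
        # In practice the extra examples would be augmented or
        # synthetic images rather than plain duplicates.
        balanced.extend(rng.choices(group, k=target - len(group)))
    rng.shuffle(balanced)
    return balanced

pool = [
    Sample("tile_0001.png", "north_america", True),
    Sample("tile_0002.png", "north_america", False),
    Sample("tile_0003.png", "sub_saharan_africa", True),
]
print(Counter(s.region for s in rebalance_by_region(pool)))
# e.g. Counter({'north_america': 2, 'sub_saharan_africa': 2})
```

The same pattern generalizes beyond GIS: any dataset whose collection skews toward one population can be audited with a group count like the one above before training begins.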
-
- 7 min
- Venture Beat
- 2021
Center for Applied Data Ethics suggests treating AI like a bureaucracy
As machine learning algorithms become more deeply embedded at all levels of society, including in government, it is critical for developers and users alike to consider how these algorithms may shift or concentrate power, particularly when they are built on biased data. Historical and anthropological lenses are helpful for dissecting how AI systems model the world and which perspectives might be missing from their construction and operation.
Whose job is it to ameliorate the “privilege hazard”, and how should this be done? How should large data sets be analyzed to avoid bias and ensure fairness? How can large data aggregators such as Google be held accountable to new standards of scrutinizing data and introducing humanities perspectives in applications?
-
- 3 min
- Kinolab
- 2009
Digital Environment Analysis
In a distant future after the “Water War,” in which much of the natural environment was destroyed and water has become scarce, Asha works as a curator at a museum that displays the former splendor of nature on Earth. She receives a mysterious soil sample which, after digital analysis using object recognition to extract data from the soil, surprisingly turns out to contain water.
How can technology be used to gather data on certain environments and aspects of an ecosystem to help them reach their full potential? How should this technology be made accessible to communities all across the world?
-
- 5 min
- Vice
- 2020
Robotic Beasts, Wildlife Control, and Environmental Impact
Robotics researchers in Japan have recently begun using robotic “monster wolves” to help control wildlife populations by keeping animals out of human settlements and agricultural areas. These robots are of interest to engineers who work in environmentalism because, although the process of engineering a robot does not itself help the environment, the ultimate good accomplished by robots that help control wildlife populations may outweigh this cost.
What are all the ways, aside from those mentioned in the article, in which robots and robotics could be utilized in environmentalist and conservationist causes? How could robots meant to tell wildlife where not to travel be misused?
-
- 5 min
- ZDNet
- 2020
AI Failure in Elections
In recent municipal elections in Brazil, both the software and the hardware of a machine learning vote-counting system provided by Oracle failed to do their job. Because the AI had not been properly calibrated beforehand, the results were delayed.
Who had responsibility to fully test and calibrate this AI before it was used for an election? What sorts of more dire consequences could result from a failure of AI to properly count votes? What are the implications of an American tech monopoly providing this faulty technology to another country’s elections?
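One concrete form of accountability is a pre-deployment acceptance test: running the counting pipeline against fixture ballots whose correct tallies were established by hand before election day. The sketch below assumes a hypothetical `count_votes` function standing in for the vendor's system; nothing here describes Oracle's actual technology.

```python
from collections import Counter

def count_votes(ballots: list[str]) -> Counter:
    """Stand-in for the vendor's counting system."""
    return Counter(ballots)

# Fixtures: ballot lists with hand-verified tallies.
FIXTURES = [
    (["ana", "ana", "bruno"], {"ana": 2, "bruno": 1}),
    ([], {}),                             # empty precinct
    (["bruno"] * 1000, {"bruno": 1000}),  # lopsided precinct
]

def acceptance_test() -> None:
    for ballots, expected in FIXTURES:
        got = count_votes(ballots)
        assert got == Counter(expected), f"got {got}, expected {expected}"
    print("all fixtures passed")

if __name__ == "__main__":
    acceptance_test()
```

A check of this shape, run and signed off before election day, is one way a calibration failure could surface while the stakes are still low.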
-
- 5 min
- TechCrunch
- 2020
Twitch updates its hateful content and harassment policy after company called out for its own abuses
At the end of 2020, Twitch, a social network built around streaming video content and commenting, expanded and clarified its definitions of hateful content in order to moderate comments and posts that harass other users or otherwise harm them. However, as a workplace, Twitch has much to prove before this updated policy can be seen as more than a PR move.
How can content moderation algorithms be used for a greater good, in terms of recognizing hate speech and symbols? What nuances might be missed by this approach? What does the human part of content moderation look like? What responsibilities does such a position come with? How might content moderation on digital platforms moderate harassment behaviors in real life, and vice versa?
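Content moderation at scale usually layers a fast automated filter over human review: the algorithm removes clear violations and escalates ambiguous cases to people. The toy triage below illustrates that division of labor; it is not Twitch's actual system, and the blocklist pattern and report threshold are invented for the example.

```python
import re
from dataclasses import dataclass

# Illustrative placeholder; a real system would use trained
# classifiers and context, not a fixed word list.
BLOCKLIST = re.compile(r"\b(badword1|badword2)\b", re.IGNORECASE)

@dataclass
class Decision:
    action: str  # "allow", "remove", or "escalate"
    reason: str

def moderate(comment: str, report_count: int) -> Decision:
    """Toy triage: auto-remove clear matches and escalate
    heavily reported comments to human moderators."""
    if BLOCKLIST.search(comment):
        return Decision("remove", "matched hateful-content pattern")
    if report_count >= 3:
        # Sarcasm, reclaimed terms, and coded symbols are exactly
        # the nuances keyword filters miss; route to a person.
        return Decision("escalate", "multiple user reports")
    return Decision("allow", "no automated signal")

print(moderate("you are a badword1", 0).action)       # remove
print(moderate("subtle coded harassment", 5).action)  # escalate
```

The gap between what the pattern catches and what the human queue must handle is one way to make the question about missed nuances concrete.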