Themes (353)
Find narratives by ethical themes or by technologies.
-
- 5 min
- ZDNet
- 2020
AI Failure in Elections
In recent municipal elections in Brazil, the software and hardware of a machine learning system provided by Oracle failed to count the votes properly. This ultimately delayed the results, as the AI had not been properly calibrated beforehand.
Who had responsibility to fully test and calibrate this AI before it was used for an election? What sorts of more dire consequences could result from a failure of AI to properly count votes? What are the implications of an American tech monopoly providing this faulty technology to another country’s elections?
-
- 5 min
- MIT Tech Review
- 2020
AI Summarisation
Semantic Scholar is a new AI program trained to read scientific papers and produce a unique one-sentence summary of each paper's content. The AI was trained on a large data set to learn how to process and summarise natural language. The ultimate idea is to use the technology to help learning and synthesis happen more quickly, especially for figures such as politicians.
How might this technology cause people to become lazy readers? How does this technology, like many other digital technologies, shorten attention spans? How can it be ensured that algorithms like this do not leave out critical information?
-
- 7 min
- MIT Technology Review
- 2020
Tiny four-bit computers are now all you need to train AI
This article details a new approach emerging in AI research: instead of using 16 bits to represent each piece of data that trains an algorithm, a logarithmic scale can be used to reduce that number to four, which is more efficient in both time and energy. This may allow machine learning algorithms to be trained directly on smartphones, enhancing user privacy. Beyond that, however, the technique may not change much in the AI landscape, especially in terms of helping machine learning reach new horizons.
Does more efficiency mean more data would be wanted or needed? Would that be a good thing, a bad thing, or potentially both?
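The logarithmic idea behind 4-bit training can be illustrated in a few lines. The sketch below is a minimal, hypothetical example of quantizing a value to a 4-bit logarithmic code (one sign bit plus three bits selecting a power-of-two magnitude); the function name, bit split, and clamping range are illustrative assumptions, not the researchers' actual scheme.

```python
import math

def quantize_log4(x, max_abs=1.0):
    """Round x to a 4-bit logarithmic code: 1 sign bit plus
    3 bits choosing one of 8 power-of-two magnitudes."""
    if x == 0:
        return 0.0
    sign = 1.0 if x > 0 else -1.0
    # Map |x| to the nearest power of two relative to max_abs.
    exp = round(math.log2(abs(x) / max_abs))
    # Clamp to the 8 magnitudes representable with 3 bits.
    exp = max(-7, min(0, exp))
    return sign * max_abs * (2.0 ** exp)
```

Because the levels are spaced logarithmically rather than linearly, small values keep proportionally fine resolution, which is why so few bits can still capture the wide dynamic range of training data.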
-
- 7 min
- The New Republic
- 2020
Who Gets a Say in Our Dystopian Tech Future?
The narrative of Dr. Timnit Gebru’s termination from Google is inextricably bound up with Google’s irresponsible practices around training data for its machine learning algorithms. Using enormous data sets to train natural language processing algorithms is ultimately a harmful practice: despite the environmental harms and the biases against certain languages that it causes, machines still cannot fully comprehend human language.
Should machines be trusted to handle and process the incredibly nuanced meaning of human language? How do different understandings of what languages and words mean and represent become harmful when a minority of people are deciding how to train NLP algorithms? How do tech monopolies prevent more diverse voices from entering this conversation?
-
- 5 min
- Vice
- 2020
Robotic Beasts, Wildlife Control, and Environmental Impact
Robotics researchers in Japan have recently begun using robotic “monster wolves” to help control wildlife populations by keeping animals out of human settlements and agricultural areas. These robots interest engineers working in environmentalism because, although the process of engineering a robot does not itself help the environment, the good ultimately accomplished by robots that control wildlife populations may outweigh this cost.
What are all the ways, aside from those mentioned in the article, in which robots and robotics could be utilised in environmentalist and conservationist causes? How could robots meant to tell wildlife where not to travel be misused?
-
- 5 min
- Venture Beat
- 2021
Google targets AI ethics lead Margaret Mitchell after firing Timnit Gebru
Relates the story of Google’s investigation of Margaret Mitchell’s account in the wake of Timnit Gebru’s firing from Google’s AI ethics division. With authorities on AI ethics clearly under fire, the Alphabet Workers Union aims to protect workers who bring ethical perspectives to AI development and deployment.
How can bias in tech monopolies be mitigated? How can authorities on AI ethics be positioned in such a way that they cannot be fired when developers do not want to listen to them?