SRI Seminar Series: Sven Nyholm, “AI, responsibility gaps, and asymmetries between praise and blame”

April 5, 2023, 3:10 pm - 4:30 pm

Our weekly SRI Seminar Series welcomes Sven Nyholm, Professor of the Ethics of Artificial Intelligence at the Ludwig Maximilian University of Munich. Nyholm’s research focuses on applied ethics and the philosophy of technology, including topics such as human-robot interaction, self-driving cars, autonomous weapons, human enhancement, and self-tracking technologies.

In this session, Nyholm will discuss “responsibility gaps” and asymmetries regarding praise and blame for outcomes produced by artificial intelligence (AI) technologies. Using contemporary examples such as text produced by large language models, accidents caused by self-driving cars, and medical diagnoses and treatment, Nyholm will demonstrate how praise for good outcomes produced by AI is typically harder to deserve than blame for bad outcomes.

Talk title

“AI, responsibility gaps, and asymmetries between praise and blame”

Abstract

In my presentation, I will discuss what I think are some interesting asymmetries with respect to praise and blame for good and bad outcomes produced by AI technologies. I will suggest that if we apply widely agreed-upon criteria for when people deserve praise, on the one hand, and for when they deserve blame, on the other, it might be harder to be praiseworthy for good outcomes produced when we hand over tasks to AI technologies than it is to deserve blame for bad outcomes that such technologies might produce.

The topic of who is responsible for outcomes produced by AI technologies is usually called the topic of "responsibility gaps." That is, there might be unclarity or gaps with respect to who is responsible for what AI technologies do or the outcomes they produce. This problem is usually discussed in relation to bad outcomes caused by AI technologies (e.g., when a self-driving car hits and harms a human being). I suggest it is also important to discuss possible gaps in responsibility related to good outcomes that might be produced with the help of AI technologies. This can be important, for example, in workplaces where people want to get recognition for the work they do, but where more and more tasks are being handed over to AI. In general, the very idea of AI is to create technologies that can take over tasks that we human beings need our natural intelligence to perform. If tasks can be performed without any need for our intelligence (or perhaps without much effort or any particular talents of ours), there will be less justification for us to claim credit for the performance of these tasks (e.g., work that we used to perform but that has been handed over to AI technologies). In contrast, if we hand over tasks we used to perform to AI, and we allow these technologies to sometimes cause harm, then we might not be off the hook but might instead deserve blame.

Using various examples and theories from the history of philosophy and contemporary ethics research, I will try to illustrate that praise for good outcomes produced by AI technologies might be harder to deserve than blame for bad outcomes produced by AI technologies. As I discuss this asymmetry between praise and blame for good and bad outcomes caused by AI technologies, I will consider examples such as text produced by large language models (such as ChatGPT), accidents caused by self-driving cars, medical diagnoses or treatment recommendations made by medical AI technologies, AI used in military contexts, and more.


About Sven Nyholm

Sven Nyholm is Professor of the Ethics of Artificial Intelligence at the Ludwig Maximilian University of Munich in Germany. Before taking up this professorship in March of 2023, Nyholm worked in the Netherlands for several years, most recently as an associate professor of philosophy at Utrecht University. Nyholm is also a member of the ethics advisory board of the Human Brain Project (one of the world's largest neuroscience projects), and an associate editor of the journal Science and Engineering Ethics. He has published on a wide range of topics within ethical theory, spanning from the history of ethical thought to recent ethical challenges raised by technologies such as artificial intelligence and robots. His publications include Humans and Robots: Ethics, Agency, and Anthropomorphism (2020) and This is Technology Ethics: An Introduction (2023). In his research, Nyholm is particularly interested in how technological developments can create a need for us to reconsider what (if anything!) is unique about human beings, and whether these developments might force us to update or expand our social norms, ethical frameworks, and ideas about what it is to live a good life as a human being.

Registration

To register for the event, visit the official event page.


About the SRI Seminar Series

The SRI Seminar Series brings together the Schwartz Reisman community and beyond for a robust exchange of ideas that advance scholarship at the intersection of technology and society. Seminars are led by a leading or emerging scholar and feature extensive discussion.

Each week, a featured speaker will present for 45 minutes, followed by an open discussion. Registered attendees will be emailed a Zoom link before the event begins. The event will be recorded and posted online.

Details

Date: April 5, 2023
Time: 3:10 pm - 4:30 pm
Website: https://srinstitute.utoronto.ca/events-archive/seminar-2023-sven-nyholm

Organizer

Schwartz Reisman Institute for Technology and Society