SRI Seminar Series: Luke Stark, “Conjecture and the right to reasonable inference in AI/ML decision-making”

November 22, 2023, 12:30 pm - 2:00 pm

Our weekly SRI Seminar Series welcomes Luke Stark, an assistant professor in the Faculty of Information & Media Studies at Western University, and the inaugural Schwartz Reisman Institute Scholar-in-Residence, for a special in-person talk at the University of Toronto’s Rotman School of Management.

Stark researches the ethical, historical, and social impacts of computational technologies, including how these tools mediate social and emotional expression, make inferences about people, and are reshaping our relationships to collective action, our selves, and each other. His current book project, Reordering Emotion: Histories of Computing and Human Feelings from Cybernetics to Artificial Intelligence, is a history of affective computing and the digital quantification of human emotion from cybernetics in the 1940s to today’s social media and AI technologies.

One of the deeper issues underlying the design of AI systems is how they arrive at decisions. To what extent can a machine learning (ML) algorithm apply broad statistical patterns to discrete individual cases in a manner that is fair and accurate? In this talk, Stark will explore his recent efforts to develop detailed conceptual frameworks for assessing and classifying the types of inferences that ML systems deploy in their analyses and decision-making. This project seeks to inform an agenda for the regulation of AI systems based on the “reasonableness” of the inferences they produce.

Talk title

“Conjecture and the right to reasonable inference in AI/ML decision-making”

Abstract

AI systems grounded in deep learning seem to offer the prospect of extrapolating results from data without underlying models or a set of explicit theoretical assumptions guiding the analysis. In the enthusiasm to apply ML to the broadest possible range of problems, however, its developers have inadvertently shone a spotlight on an epistemological fissure whose history long predates the development of these technologies, but which is fundamental to their useful application: the dangers of using inductive or abductive modes of inference to make statistical predictions and to apply those generalized predictions to discrete individual cases.

This problem of applied inference, which has been variously described as a distinction between ‘empirical’ and ‘conjectural’ science (Ginzburg 2009) or between frequentist and subjective definitions of probability (Hacking 2006), has urgent contemporary implications in domains such as criminal justice, hiring, and surveillance. Bias in artificial intelligence (AI) systems is increasingly recognized as endemic, and as a major societal challenge—particularly in psychology, policing, medicine, and social assistance programs, all critical areas where automated decision systems are increasingly deployed (O’Neil 2017; Birhane 2021). Yet this bias is often grounded in the automation of inappropriate conjectural inference about human bodies and behaviours (Stark and Hutson 2021): population-level probability assumptions cannot justifiably be applied to individuals, and human activity is neither reliably regular nor repeatable across cultures and contexts (Piantadosi, Byar, and Green 1988).

In this paper, I argue that regulation via the epistemological structure of an application space is thus one potential mechanism to address the social impact of rapid advances in ML and other AI methods used for automated decision-making. Wachter and Mittelstadt (2019) have called for the formulation of a “right to reasonable inference” in the context of European data protection law, and the abductive nature of machine learning outputs has been elucidated by Amoore (2022) and Hong (2023); this paper seeks to help define just what “reasonable” might mean in the practical application of such a right. I explore the history and contemporary implications of conjectural science to point towards a heuristic taxonomy for the assessment of ML systems around both their conceptual assumptions and their proposed use cases. The taxonomy is grounded both in an analysis of the form of inferential reasoning (inductive, deductive, or abductive) involved in a particular automated analysis and in the domain in which the analysis is performed. This matrix of inferential types and use case categories will support a granular AI regulatory regime of potential interest to technologists, activists, and policymakers in AI and related fields: one that is agnostic towards the technical mechanisms of AI systems and is instead focused on their inferential impacts.

Given the shaky epistemological foundations and social toxicity of much automated conjecture about human activities and behaviours, such use cases deserve heightened legal, technical, and social scrutiny. Restricting the use of automated conjecture across AI’s areas of application would significantly decrease the societal harms caused by these technologies: if the inferences being automated and scaled in AI systems can be shown to be faulty, then no amount of technical, legal, or social buttressing can ever set them aright. This work thus has potentially widespread implications for the regulation and governance of AI systems, and seeks to advance policymaking and regulatory discussions around how to both innovate safely with and judiciously restrict these technologies.


Suggested readings

Amoore, Louise. 2022. “Machine Learning Political Orders.” Review of International Studies, 1–17. https://doi.org/10.1017/s0260210522000031.

Birhane, Abeba. 2021. “The Impossibility of Automating Ambiguity.” Artificial Life, 1–18. https://doi.org/10.1162/artl_a_00336.

Ginzburg, Carlo. 2009. “Morelli, Freud and Sherlock Holmes: Clues and Scientific Method.” History Workshop Journal 9 (1): 5–36.

Hacking, Ian M. 2006. The Emergence of Probability. 2nd edition. Cambridge, UK: Cambridge University Press. https://doi.org/10.1017/cbo9780511817557.

Hong, Sun-ha. 2023. “Prediction as Extraction of Discretion.” Big Data & Society 10 (1): 1–11.

O’Neil, Cathy. 2017. Weapons of Math Destruction. New York: Broadway Books.

Piantadosi, Steven, David P. Byar, and Sylvan B. Green. 1988. “The Ecological Fallacy.” American Journal of Epidemiology 127 (5): 893–904.

Stark, Luke, and Jevan Hutson. 2021. “Physiognomic Artificial Intelligence.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3927300.

Wachter, Sandra, and Brent Mittelstadt. 2019. “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI.” Columbia Business Law Review, 494–620. https://doi.org/10.31228/osf.io/mu2kf.


Venue

Rotman School of Management, University of Toronto, Room 127.

Entrance: 105 St. George Street, Toronto, ON M5S 3E6

The seminar will be broadcast live via Zoom (register for the link).


About Luke Stark

Luke Stark is an assistant professor in the Faculty of Information & Media Studies (FIMS) at Western University. He researches the ethical, historical, and social impacts of computational technologies such as artificial intelligence systems powered by machine learning, and is particularly animated by how these technologies mediate social and emotional expression, make inferences about people, and are reshaping, for better and worse, our relationships to collective action, our selves, and each other.

Stark’s current book project, Reordering Emotion: Histories of Computing and Human Feelings from Cybernetics to Artificial Intelligence, is a history of affective computing and the digital quantification of human emotion from cybernetics in the 1940s to today’s social media platforms and AI technologies.

Before joining Western, Stark was a postdoctoral researcher in the Fairness, Accountability, Transparency, and Ethics (FATE) Group at Microsoft Research Montreal, a fellow and affiliate of the Berkman Klein Center for Internet & Society at Harvard University, and a postdoctoral fellow in the Department of Sociology at Dartmouth College. Stark received his PhD from the Department of Media, Culture, and Communication at New York University, and a BA and MA from the University of Toronto.

Registration

To register for the event, visit the official event page.


About the SRI Seminar Series

The SRI Seminar Series brings together the Schwartz Reisman community and beyond for a robust exchange of ideas that advance scholarship at the intersection of technology and society. Each seminar features a leading or emerging scholar and includes extensive discussion.

Each week, a featured speaker will present for 45 minutes, followed by an open discussion. Registered attendees will be emailed a Zoom link before the event begins. The event will be recorded and posted online.

Details

Date: November 22, 2023
Time: 12:30 pm - 2:00 pm
Website: https://srinstitute.utoronto.ca/events

Organizer

Schwartz Reisman Institute for Technology and Society
