Regulation along the AI Development Pipeline for Fairness, Safety and Related Goals

Visiting speaker
Hybrid
Past Talk
Ben Laufer
PhD Candidate, Cornell Tech
Tue, Apr 1, 2025
3:00 PM UTC
In-person locations:
- 4 Thomas More St, London E1W 1YW, UK
- The Roux Institute, 100 Fore Street, Portland, ME 04101
- Network Science Institute, 2nd floor
- Network Science Institute, 11th floor, 177 Huntington Ave, Boston, MA 02115
- 58 St Katharine's Way, London E1W 1LP, UK

Talk recording

Machine learning (ML) and artificial intelligence (AI) systems are designed within a broader ecosystem involving multiple actors and interests. This talk focuses on attempts to regulate the AI development process so that these technologies are fair, safe, performant, or otherwise aligned with social ends. I will start with a discussion of one proposal for ML regulation, stemming from U.S. disparate impact doctrine, which requires litigants to search for a “less discriminatory alternative” (LDA): an alternative policy that meets the same business needs but exhibits lower disparate impact across protected groups. Defining this concept for data-driven decision-making might open a promising avenue for regulation; however, a number of technical challenges remain. I will present a set of formal results characterizing the “multiplicity” of model designs and the limits of, and opportunities for, searching for LDAs. More generally, AI is often deployed in a way that requires a general-purpose model to adapt to a number of different domains. I will put forward a model of how regulation would operate in such a process. Reasoning about the interaction between regulators, general-purpose AI creators, and domain specialists suggests that even straightforward and modest regulatory measures can backfire, inadvertently undermining safety outcomes. Conversely, stronger regulations, applied strategically along the development pipeline, can boost both safety and performance and yield mutual benefits in utility. The talk will conclude with a discussion of the role of conceptual models in building actionable regulatory frameworks for AI.
About the speaker
Benjamin Laufer is a PhD candidate in the School of Computing and Information Sciences at Cornell Tech, where he is advised by Helen Nissenbaum and Jon Kleinberg. He has spent time at Microsoft Research in the FATE group and is supported by a LinkedIn PhD fellowship. He has received multiple “rising star” recognitions, oral paper presentations, and related accolades. He previously graduated from Princeton University with a B.S.E. in Operations Research and Financial Engineering.