Upcoming Events

ML@GT Virtual Seminar: Shivani Agarwal, University of Pennsylvania


ML@GT invites you to a virtual seminar featuring Shivani Agarwal from the University of Pennsylvania. The seminar is open to Georgia Tech faculty, staff, and students, as well as any interested members of the public.

Registration is required.

Multiclass and Multi-Label Learning with General Losses: What is the Right Output Coding and Decoding?


Abstract

Many practical applications of machine learning involve multiclass learning problems with a large number of classes -- indeed, multi-label learning problems can be viewed as a special case. Multiclass learning with the standard 0-1 loss is fairly well understood; however, in practice, applications with large numbers of classes often require performance to be measured via a different, problem-specific loss. What is the right way to design principled and efficient learning algorithms for multiclass (and multi-label) problems with general losses? 

From a theoretical standpoint, an elegant approach for designing statistically consistent learning algorithms is via the design of convex calibrated surrogate losses. From a practical standpoint, an approach that is often favored is that of output coding, which reduces multiclass learning to a set of simpler binary classification problems. In this talk, I will discuss recent progress in bringing together these seemingly disparate approaches under a unifying lens to develop statistically consistent and computationally efficient learning algorithms for a wide range of problems, in some cases recovering existing state-of-the-art algorithms, and in other cases providing new ones. Our algorithms require learning at most r real-valued scoring functions, where r is the rank of the target loss matrix, and come with corresponding principled decoding schemes. I will also discuss connections with the field of property elicitation, and new tools for deriving quantitative regret transfer bounds via strongly proper losses.
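
The low-rank output coding idea in the abstract can be made concrete with a small numerical sketch. The Python snippet below is an illustration of the general idea only, not the speaker's algorithms or code; the Hamming-loss multi-label example, the SVD-based factorization, and the stand-in probability vector are all assumptions chosen for illustration. It shows that when the loss matrix L factors as U V^T with rank r, the expected loss of each prediction depends on the class-probability vector only through r real-valued scores, and decoding reduces to an argmin.

import numpy as np
from itertools import product

# Illustrative sketch only (not the speaker's implementation): for a loss
# matrix L (rows = true class, columns = predicted class) with rank r and
# factorization L = U @ V.T, the expected loss of predicting class y under
# class probabilities p is (L.T @ p)[y] = <U.T @ p, V[y]>.  So it suffices
# to learn the r real-valued scores U.T @ p(x), and decoding is an argmin.

def low_rank_factors(L, r):
    """Factor L = U @ V.T with r columns each, via truncated SVD."""
    U, s, Vt = np.linalg.svd(L)
    return U[:, :r] * s[:r], Vt[:r, :].T

def decode(scores, V):
    """Predict the class minimizing the estimated expected loss <scores, V[y]>."""
    return int(np.argmin(V @ scores))

# Multi-label prediction with k binary labels, viewed as multiclass over
# n = 2**k classes and scored with Hamming loss -- a standard low-rank case.
k = 3
labels = np.array(list(product([0, 1], repeat=k)))                     # n x k
L = (labels[:, None, :] != labels[None, :, :]).sum(-1).astype(float)   # n x n
n, r = L.shape[0], np.linalg.matrix_rank(L)
U, V = low_rank_factors(L, r)

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(n))    # stand-in for the class-probability vector at some x
scores = U.T @ p                 # the r real-valued quantities a learner would estimate
print(n, r)                                          # r is much smaller than n
print(decode(scores, V), int(np.argmin(L.T @ p)))    # same Bayes-optimal prediction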

About Shivani

Shivani Agarwal is the Rachleff Family Associate Professor of Computer and Information Science at the University of Pennsylvania, where she also directs the NSF-sponsored Penn Institute for Foundations of Data Science (PIFODS) and co-directs the Penn Research in Machine Learning (PRiML) center. She is currently an Action Editor for the Journal of Machine Learning Research and an Associate Editor for the Harvard Data Science Review. She has previously been a Radcliffe Fellow at Harvard University, an Assistant Professor and Ramanujan Fellow at the Indian Institute of Science, and a postdoctoral lecturer at MIT. She received her PhD in computer science from the University of Illinois at Urbana-Champaign and a bachelor's degree in computer science as a Nehru Scholar at Trinity College, University of Cambridge. Her research interests include foundational questions in machine learning, applications of machine learning in the life sciences, and connections between machine learning and other disciplines such as economics, operations research, and psychology. More broadly, she is excited by research at the intersection of computer science, mathematics, and statistics, and its ability to turn data into actionable insights in both the natural and social sciences.
