ML@GT Virtual Seminar: Ellie Pavlick, Brown University

ML@GT is hosting a virtual seminar featuring Ellie Pavlick from Brown University.

Registration is required.

You can lead a horse to water...: Representing vs. Using Features in Neural NLP

Abstract

A wave of recent work has sought to understand how pretrained language models work. Such analyses have produced two seemingly contradictory sets of results. On one hand, work based on "probing classifiers" generally suggests that SOTA language models contain rich information about linguistic structure (e.g., parts of speech, syntax, semantic roles). On the other hand, work that measures performance on linguistic "challenge sets" shows that models consistently fail to use this information when making predictions. In this talk, I will present a series of results that attempt to bridge this gap. Our recent experiments suggest that the disconnect is not due to catastrophic forgetting, nor is it (entirely) explained by insufficient training data. Rather, it is best explained in terms of how "accessible" features are to the model following pretraining, where "accessibility" can be quantified using an information-theoretic interpretation of probing classifiers.
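
For readers unfamiliar with the methodology the abstract refers to, a "probing classifier" is a simple (often linear) model trained on a network's frozen representations to predict a linguistic property; high probe accuracy suggests the property is represented, though not necessarily used. Below is a minimal, illustrative sketch in Python, not code from the talk: the representations and part-of-speech labels are synthetic stand-ins for pretrained-LM hidden states and gold annotations.

```python
# Minimal sketch of a probing classifier: a linear model trained on frozen
# representations to predict a linguistic label. All data here is synthetic
# and stands in for pretrained-LM hidden states and gold POS tags.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for 768-d hidden states of 2,000 tokens from a pretrained LM.
n_tokens, hidden_dim, n_tags = 2000, 768, 12
representations = rng.normal(size=(n_tokens, hidden_dim))

# Synthetic "gold" part-of-speech tags, linearly encoded in the features
# so the probe has a real signal to recover.
encoding = rng.normal(size=(hidden_dim, n_tags))
pos_tags = (representations @ encoding).argmax(axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    representations, pos_tags, test_size=0.2, random_state=0
)

# The probe: if this linear classifier recovers the labels from the frozen
# representations, the feature is "represented"; as the abstract notes,
# that does not imply the model actually uses it when making predictions.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out probe accuracy: {probe.score(X_test, y_test):.2f}")
```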

About Ellie

Ellie Pavlick is an Assistant Professor of Computer Science at Brown University, where she leads the Language Understanding and Representation (LUNAR) Lab. She received her PhD from the one-and-only University of Pennsylvania. Her current work focuses on building more cognitively plausible models of natural language semantics, with particular emphasis on grounded language learning and on the sample efficiency and generalization of neural language models.

Date/Time
-