Week of February 22

Language Universals Workshop


Tal Linzen (NYU)

"Neural networks as cognitive models of syntax"

Abstract: Speakers of a language generalize their knowledge of syntax in a systematic way to constructions they have never encountered before. This observation has motivated the influential position in linguistics that humans are innately endowed with syntax-specific inductive biases. The applied success of deep learning systems that are not designed with such biases invites a reconsideration of this position. In this talk, I will review work that uses paradigms from psycholinguistics to examine the syntactic generalization capabilities of contemporary neural network architectures. Alongside some successes, this work suggests that human-like generalization requires stronger inductive biases than those expressed in standard neural network architectures.


Friday, February 26 | 12:00-1:30pm EST | Check email for Zoom link