Investigating Novel Verb Learning in BERT: Selectional Preference Classes and Alternation-Based Syntactic Generalization

Bibliographic Details
Main Authors: Thrush, Tristan (Author), Wilcox, Ethan (Author), Levy, Roger (Author)
Format: Article
Language: English
Published: Association for Computational Linguistics, 2021-12-01T17:42:36Z.
Subjects:
Online Access: Get fulltext
LEADER 01953 am a22001813u 4500
001 138278
042 |a dc 
100 1 0 |a Thrush, Tristan  |e author 
700 1 0 |a Wilcox, Ethan  |e author 
700 1 0 |a Levy, Roger  |e author 
245 0 0 |a Investigating Novel Verb Learning in BERT: Selectional Preference Classes and Alternation-Based Syntactic Generalization 
260 |b Association for Computational Linguistics,   |c 2021-12-01T17:42:36Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/138278 
520 |a Previous studies investigating the syntactic abilities of deep learning models have not targeted the relationship between the strength of the grammatical generalization and the amount of evidence to which the model is exposed during training. We address this issue by deploying a novel word-learning paradigm to test BERT's (Devlin et al., 2018) few-shot learning capabilities for two aspects of English verbs: alternations and classes of selectional preferences. For the former, we fine-tune BERT on a single frame in a verbal-alternation pair and ask whether the model expects the novel verb to occur in its sister frame. For the latter, we fine-tune BERT on an incomplete selectional network of verbal objects and ask whether it expects unattested but plausible verb/object pairs. We find that BERT makes robust grammatical generalizations after just one or two instances of a novel word in fine-tuning. For the verbal alternation tests, we find that the model displays behavior that is consistent with a transitivity bias: verbs seen few times are expected to take direct objects, but verbs seen with direct objects are not expected to occur intransitively. The code for our experiments is available at https://github.com/TristanThrush/few-shot-lm-learning. 
546 |a en 
655 7 |a Article 
773 |t 10.18653/V1/2020.BLACKBOXNLP-1.25 
773 |t Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
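The abstract (field 520 above) outlines the few-shot paradigm: introduce a novel verb, fine-tune BERT on one or two sentences attesting it in a single frame, then measure whether the model expects the verb in the unattested sister frame. The sketch below shows one way such a setup could look using the Hugging Face Transformers library; the novel verb "daxed", the dative-alternation example sentences, and the pseudo-log-likelihood probe are illustrative assumptions, not the authors' actual stimuli or scoring procedure (those are in the linked repository).

```python
# Minimal sketch of the few-shot novel-verb paradigm (assumed details,
# not the authors' exact method). Requires: torch, transformers.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Add the novel verb as a new vocabulary item and resize the embeddings.
tokenizer.add_tokens(["daxed"])
model.resize_token_embeddings(len(tokenizer))

# One or two fine-tuning instances in a single frame (here: the
# double-object frame of the dative alternation; hypothetical stimuli).
train_sentences = ["the teacher daxed the student a book ."]


def mlm_loss(sentence):
    """Masked-LM loss, masking each non-special token in turn."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    losses = []
    for i in range(1, len(ids) - 1):            # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        labels = torch.full_like(ids, -100)     # ignore unmasked positions
        labels[i] = ids[i]
        out = model(masked.unsqueeze(0), labels=labels.unsqueeze(0))
        losses.append(out.loss)
    return torch.stack(losses).mean()


# A few gradient steps on the tiny fine-tuning sample.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for _ in range(3):
    for sent in train_sentences:
        loss = mlm_loss(sent)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()


def pseudo_log_likelihood(sentence):
    """Sum of log-probabilities of each token when it is masked."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    with torch.no_grad():
        for i in range(1, len(ids) - 1):
            masked = ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(masked.unsqueeze(0)).logits[0, i]
            total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total


# Probe: compare the unattested sister frame (prepositional dative)
# against a non-alternating control sentence.
model.eval()
sister = "the teacher daxed a book to the student ."
control = "the teacher daxed a book despite the student ."
print(pseudo_log_likelihood(sister), pseudo_log_likelihood(control))
```

A higher score for the sister frame than for the control would indicate the kind of alternation-based generalization the paper probes; the selectional-preference experiments could be sketched analogously by fine-tuning on a partial set of verb/object pairs and scoring unattested but plausible objects.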