Linguistically motivated models for lightly-supervised dependency parsing

Bibliographic Details
Main Author: Naseem, Tahira
Other Authors: Regina Barzilay.
Format: Others
Language: English
Published: Massachusetts Institute of Technology 2014
Online Access: http://hdl.handle.net/1721.1/89995
Description
Summary: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 122-132).

Today, the top-performing parsing algorithms rely on the availability of annotated data for learning the syntactic structure of a language. Unfortunately, syntactically annotated texts are available only for a handful of languages. The research presented in this thesis aims at developing parsing models that can perform effectively in a lightly-supervised training regime. In particular, we focus on formulating linguistically aware models of dependency parsing that can exploit readily available sources of linguistic knowledge, such as language universals and typological features. This type of linguistic knowledge can be used to motivate model design and/or to guide the inference procedure. We propose three alternative approaches for incorporating linguistic information into a lightly-supervised training setup. First, we show that linguistic information, in the form of rules imposed on top of standard unsupervised parsing models, can guide the inference procedure. This method consistently outperforms existing monolingual and multilingual unsupervised parsers when tested on a set of 6 Indo-European languages. Next, we show that a linguistically aware model design greatly facilitates cross-lingual parser transfer by leveraging syntactic connections between languages. Our transfer approach outperforms the state-of-the-art multilingual transfer parser across a set of 19 languages, achieving an average gain of 5.9%. The gains are even more pronounced (14.4%) on non-Indo-European languages, where existing transfer methods fail to perform. Finally, we propose a corpus-level Bayesian framework that allows multiple views of the data in a single model. We use this framework to combine a dependency model with a constituency view and universal rules, achieving a performance gain of 1.9% over the top-performing unsupervised parsing model.

by Tahira Naseem. Ph. D.