Word Representations and Machine Learning Models for Implicit Sense Classification in Shallow Discourse Parsing

Bibliographic Details
Main Author: Callin, Jimmy
Format: Others
Language: English
Published: Uppsala universitet, Institutionen för lingvistik och filologi, 2017
Subjects:
Online Access: http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-325876
Description
Summary: CoNLL 2015 featured a shared task on shallow discourse parsing. In 2016, the efforts continued with an increasing focus on sense classification. For implicit sense classification in particular, there was an interesting mix of traditional and modern machine learning classifiers using word representation models. In this thesis, we explore the performance of a number of these classifiers and investigate how they perform with a variety of word representation models. We show that there are large performance differences between word representation models for certain machine learning classifiers, while others are more robust to the choice of word representation model. We also show that, with the right choice of word representation model, simple and traditional machine learning classifiers can reach scores competitive with modern neural network approaches.
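
To make the kind of setup the summary alludes to concrete, here is a minimal, hypothetical sketch: each implicit relation's two arguments are represented by averaged word vectors and fed to a traditional classifier (scikit-learn's logistic regression). The toy relation pairs, the randomly initialized embedding table, and the sense labels are illustrative assumptions only, not the thesis's actual data, features, or models; in practice the vectors would come from a pre-trained word representation model such as word2vec or GloVe.

import numpy as np
from sklearn.linear_model import LogisticRegression

def average_embedding(tokens, vectors, dim=300):
    # Average the vectors of all known tokens; fall back to a zero vector.
    known = [vectors[t] for t in tokens if t in vectors]
    return np.mean(known, axis=0) if known else np.zeros(dim)

def featurize(arg1_tokens, arg2_tokens, vectors, dim=300):
    # Concatenate the averaged embeddings of the two discourse arguments.
    return np.concatenate([average_embedding(arg1_tokens, vectors, dim),
                           average_embedding(arg2_tokens, vectors, dim)])

# Hypothetical toy data: (Arg1 tokens, Arg2 tokens, sense label) triples.
train = [
    (["it", "was", "raining"], ["the", "game", "was", "cancelled"], "Contingency.Cause"),
    (["she", "liked", "tea"], ["he", "preferred", "coffee"], "Comparison.Contrast"),
]

# Stand-in embedding table (random vectors); a real experiment would load
# pre-trained word representations instead.
rng = np.random.default_rng(0)
vocab = {t for a1, a2, _ in train for t in a1 + a2}
vectors = {t: rng.normal(size=300) for t in vocab}

X = np.array([featurize(a1, a2, vectors) for a1, a2, _ in train])
y = [label for _, _, label in train]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:1]))

The point of the sketch is the thesis's central comparison: the classifier itself stays simple, and the quality of the resulting features depends almost entirely on the chosen word representation model.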