Replacing Linguists with Dummies: A Serious Need for Trivial Baselines in Multi-Task Neural Machine Translation

Bibliographic Details
Main Authors: Daniel Kondratyuk, Ronald Cardenas, Ondřej Bojar
Format: Article
Language: English
Published: Sciendo 2019-10-01
Series: Prague Bulletin of Mathematical Linguistics
Online Access: https://doi.org/10.2478/pralin-2019-0005
Description
Summary: Recent developments in machine translation experiment with the idea that a model can improve translation quality by performing multiple tasks, e.g., translating from source to target and also labeling each source word with syntactic information. The intuition is that the network would generalize knowledge across the multiple tasks, improving translation performance, especially in low-resource conditions. We devised an experiment that casts doubt on this intuition. We perform similar experiments in both multi-decoder and interleaving setups that label each target word either with a syntactic tag or a completely random tag. Surprisingly, we show that the model performs nearly as well on uncorrelated random tags as on true syntactic tags. We hint at some possible explanations for this behavior.
ISSN: 1804-0462
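
As a rough illustration of the interleaving setup described in the summary, the sketch below shows one plausible way a target sequence could be augmented with either true syntactic tags or uncorrelated random tags before training an NMT model. The tagset, the interleave_tags helper, and the tag-before-word ordering are assumptions made for illustration; they are not details taken from the paper's implementation.

    import random

    # Small illustrative tagset (coarse POS-style labels); an assumption, not the paper's tagset.
    TAGSET = ["NOUN", "VERB", "ADJ", "ADV", "PRON", "ADP", "DET", "CONJ", "PART", "PUNCT"]

    def interleave_tags(words, tags=None, use_random=False, seed=0):
        """Return a single target sequence with one tag token interleaved per word.

        words      -- list of target-side tokens
        tags       -- list of true syntactic tags (same length as words), if available
        use_random -- if True, ignore `tags` and sample uncorrelated random tags
        """
        rng = random.Random(seed)
        out = []
        for i, word in enumerate(words):
            if use_random or tags is None:
                tag = rng.choice(TAGSET)   # random tag, uncorrelated with the word
            else:
                tag = tags[i]              # true syntactic label for the word
            out.append(f"<{tag}>")         # tag token placed before the word
            out.append(word)
        return out

    # Usage: the NMT model would be trained to emit the interleaved sequence as its target.
    words = ["the", "cat", "sat", "."]
    gold = ["DET", "NOUN", "VERB", "PUNCT"]
    print(interleave_tags(words, gold))             # target with true syntactic tags
    print(interleave_tags(words, use_random=True))  # target with random tags

Comparing a model trained on the first kind of target against one trained on the second is the kind of trivial baseline the paper argues for: if both reach similar translation quality, the gain cannot be attributed to the syntactic information itself.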