Jointly learning word embeddings using a corpus and a knowledge base.
Methods for representing the meaning of words in vector spaces purely from the information distributed in text corpora have proved to be very valuable in various text mining and natural language processing (NLP) tasks. However, these methods still disregard the valuable semantic relational structure...
Main Authors: Mohammed Alsuhaibani, Danushka Bollegala, Takanori Maehara, Ken-Ichi Kawarabayashi
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2018-01-01
Series: PLoS ONE
Online Access: http://europepmc.org/articles/PMC5847320?pdf=render
Similar Items
- Learning linear transformations between counting-based and prediction-based word embeddings.
  by: Danushka Bollegala, et al.
  Published: (2017-01-01)
- An iterative approach for the global estimation of sentence similarity.
  by: Tomoyuki Kajiwara, et al.
  Published: (2017-01-01)
- Metaphor interpretation using paraphrases extracted from the web.
  by: Danushka Bollegala, et al.
  Published: (2013-01-01)
- Explorations in Word Embeddings: graph-based word embedding learning and cross-lingual contextual word embedding learning
  by: Zhang, Zheng
  Published: (2019)
- Punctuation and Parallel Corpus Based Word Embedding Model for Low-Resource Languages
  by: Yang Yuan, et al.
  Published: (2019-12-01)