Summary: | Traditional distant supervision for relation extraction suffers from the problem of introducing noisy labels. In this paper, we present a shared representation generator that de-emphasizes noisy expressions by extracting the features common to instances of a relation. Unlike the widely used attention mechanism, which computes a weighted sum over instances, we directly generate the bag representation in multi-instance learning through feature transformation, retaining only the semantics relevant to predicting the relation. We introduce a generator loss into the objective function to improve the quality of the shared representation. Moreover, the structure of the proposed generator is flexible and scalable. To capture more structural information, the widely used piecewise convolutional neural network (PCNN) divides the output of the convolutional layer into three segments, but this breaks the consistency and inner relationships of the sentence. To alleviate this issue, we encode sentences with a piecewise-LSTM convolutional neural network (PLSTM-CNN), which applies a BiLSTM after the pooling layer of the PCNN. Experimental results show that our model achieves significant improvements over the baselines on relation extraction.
|
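Below is a minimal sketch (not the authors' code) of the PLSTM-CNN sentence encoder described in the summary, assuming a PyTorch implementation and hypothetical hyperparameters: a 1-D convolution over word embeddings, piecewise max-pooling into the three segments delimited by the two entity positions, and a BiLSTM over the three pooled segment vectors to recover their inner relationship.

```python
# Hedged sketch of a PLSTM-CNN encoder: PCNN-style piecewise pooling followed
# by a BiLSTM over the three segment vectors. All sizes are illustrative.
import torch
import torch.nn as nn


class PLSTMCNNEncoder(nn.Module):
    def __init__(self, emb_dim=60, num_filters=230, lstm_hidden=100):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size=3, padding=1)
        # BiLSTM runs over the sequence of three piecewise-pooled segment vectors.
        self.bilstm = nn.LSTM(num_filters, lstm_hidden,
                              batch_first=True, bidirectional=True)

    def forward(self, emb, e1_pos, e2_pos):
        # emb: (batch, seq_len, emb_dim); e1_pos/e2_pos: (batch,) entity indices.
        h = torch.tanh(self.conv(emb.transpose(1, 2)))  # (batch, filters, seq_len)
        segments = []
        for b in range(emb.size(0)):
            p1, p2 = sorted((int(e1_pos[b]), int(e2_pos[b])))
            parts = [h[b, :, :p1 + 1], h[b, :, p1 + 1:p2 + 1], h[b, :, p2 + 1:]]
            # Piecewise max-pooling: one vector per segment (zeros if empty).
            pooled = [p.max(dim=1).values if p.size(1) > 0
                      else h.new_zeros(h.size(1)) for p in parts]
            segments.append(torch.stack(pooled))         # (3, filters)
        seg = torch.stack(segments)                      # (batch, 3, filters)
        out, _ = self.bilstm(seg)                        # (batch, 3, 2*lstm_hidden)
        return out.reshape(out.size(0), -1)              # sentence representation


# Usage sketch with random embeddings and entity positions.
enc = PLSTMCNNEncoder()
x = torch.randn(2, 40, 60)
rep = enc(x, torch.tensor([5, 10]), torch.tensor([20, 30]))
print(rep.shape)  # torch.Size([2, 600])
```

In this sketch, the BiLSTM is applied to the three pooled segment vectors rather than to the raw token sequence, which is one plausible reading of "adopts BiLSTM after the pooling layer of PCNN"; the shared representation generator and its loss term are not shown here.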