Syntactically meaningful and transferable recursive neural networks for aspect and opinion extraction


Bibliographic Details
Main Authors: Wang, Wenya; Pan, Sinno Jialin
Other Authors: School of Computer Science and Engineering
Format: Journal Article
Language: English
Published: 2021
Subjects: Engineering::Computer science and engineering; Aspect Terms; Opinion Terms
Online Access:https://hdl.handle.net/10356/149202
collection NTU
description In fine-grained opinion mining, extracting aspect terms (a.k.a. opinion targets) and opinion terms (a.k.a. opinion expressions) from user-generated texts is the most fundamental task for generating structured opinion summaries. Existing studies have shown that the syntactic relations between aspect and opinion words play an important role in aspect and opinion term extraction. However, most prior work either relied on predefined rules or separated relation mining from feature learning. Moreover, these works focused only on single-domain extraction and failed to adapt well to other domains of interest where only unlabeled data are available. In real-world scenarios, annotated resources are extremely scarce for many domains, motivating knowledge transfer from labeled source domain(s) to any unlabeled target domain. We observe that syntactic relations among the target words to be extracted are not only crucial for single-domain extraction but also serve as invariant "pivot" information to bridge the gap between different domains. In this article, we explore constructions of recursive neural networks based on the dependency tree of each sentence to associate syntactic structure with feature learning. Furthermore, we construct transferable recursive neural networks that automatically learn domain-invariant, fine-grained interactions among aspect words and opinion words. Transferability is built on an auxiliary task and a conditional domain adversarial network that together reduce the domain distribution difference in the hidden spaces, effectively at the word level, through syntactic relations. Specifically, the auxiliary task builds structural correspondences across domains by predicting the dependency relation for each path of the dependency tree in the recursive neural network. The conditional domain adversarial network helps learn a domain-invariant hidden representation for each word, conditioned on the syntactic structure. Finally, we integrate the recursive neural network with a sequence labeling classifier on top that models contextual influence in the final predictions. Extensive experiments and analyses on three benchmark data sets demonstrate the effectiveness of the proposed model and each of its components.
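The core idea in the description above — composing word representations bottom-up along the dependency tree with relation-specific weights, plus an auxiliary task that predicts the dependency relation on each edge — can be sketched as follows. This is a toy illustration under assumed dimensions, relation labels, and a made-up sentence, not the published model (which additionally uses conditional domain adversarial training and a sequence-labeling layer on top).

```python
import numpy as np

# A minimal sketch, NOT the authors' implementation: a recursive encoder over a
# dependency tree with one composition matrix per dependency relation, plus the
# auxiliary task of predicting the relation label on each edge. Sizes, the
# relation inventory, and the toy vocabulary below are illustrative assumptions.

rng = np.random.default_rng(0)
D = 8                                    # hidden/embedding size (assumed)
RELATIONS = ["nsubj", "det", "cop"]      # toy dependency-relation inventory

embed = {w: rng.normal(0, 0.1, D) for w in ["the", "screen", "is", "bright"]}
W_self = rng.normal(0, 0.1, (D, D))                         # transforms the head word
W_rel = {r: rng.normal(0, 0.1, (D, D)) for r in RELATIONS}  # per-relation matrices
W_aux = rng.normal(0, 0.1, (len(RELATIONS), 2 * D))         # auxiliary classifier

def encode(node):
    """Bottom-up composition: node = (head_word, [(relation, child_node), ...])."""
    word, children = node
    h = W_self @ embed[word]
    for rel, child in children:
        h = h + W_rel[rel] @ encode(child)   # relation-specific child contribution
    return np.tanh(h)

def predict_relation(h_head, h_child):
    """Auxiliary task: predict the dependency relation linking head and child."""
    logits = W_aux @ np.concatenate([h_head, h_child])
    return RELATIONS[int(np.argmax(logits))]

# Toy parse of "the screen is bright":
# bright -nsubj-> screen -det-> the, and bright -cop-> is.
tree = ("bright", [("nsubj", ("screen", [("det", ("the", []))])),
                   ("cop", ("is", []))])
h_root = encode(tree)
```

In training, the cross-entropy loss of `predict_relation` on every edge of every (source- or target-domain) parse would be added to the extraction loss, which is what makes the relation signal a domain-invariant "pivot": dependency parses are available for unlabeled target-domain text as well.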
id ntu-10356/149202
institution Nanyang Technological University
citation Wang, W. & Pan, S. J. (2020). Syntactically meaningful and transferable recursive neural networks for aspect and opinion extraction. Computational Linguistics, 45(4), 705-736. https://dx.doi.org/10.1162/coli_a_00362
issn 0891-2017
doi 10.1162/coli_a_00362
funding Ministry of Education (MOE); Nanyang Technological University. This work was supported by NTU Singapore Nanyang Assistant Professorship (NAP) grant M4081532.020, Singapore MOE AcRF Tier-2 grant MOE2016-T2-2-060, and a Singapore Lee Kuan Yew Postdoctoral Fellowship.
rights © 2019 Association for Computational Linguistics. Published version, under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license.
title Syntactically meaningful and transferable recursive neural networks for aspect and opinion extraction
topic Engineering::Computer science and engineering
Aspect Terms
Opinion Terms