Ensemble Multi-Channel Neural Networks for Scientific Language Editing Evaluation
A huge and growing number of scientific papers are authored by non-native English speakers, driving increased demand for effective computer-based writing tools to help writers compose scientific articles. The Automated Evaluation of Scientific Writing (AESW) shared task promotes the use of natural...
Main Authors: | Lung-Hao Lee, Yuh-Shyang Wang, Chao-Yi Chen, Liang-Chih Yu |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2021-01-01 |
Series: | IEEE Access |
Subjects: | Automated writing evaluation; scientific English; natural language processing; ensemble learning; multi-channel neural networks |
Online Access: | https://ieeexplore.ieee.org/document/9625008/ |
_version_ | 1818404521369403392 |
author | Lung-Hao Lee; Yuh-Shyang Wang; Chao-Yi Chen; Liang-Chih Yu |
author_facet | Lung-Hao Lee; Yuh-Shyang Wang; Chao-Yi Chen; Liang-Chih Yu |
author_sort | Lung-Hao Lee |
collection | DOAJ |
description | A huge and growing number of scientific papers are authored by non-native English speakers, driving increased demand for effective computer-based writing tools to help writers compose scientific articles. The Automated Evaluation of Scientific Writing (AESW) shared task promotes the use of natural language processing tools to improve the quality of scientific writing in English by predicting whether a given sentence needs language editing or not. In this study, we propose an Ensemble Multi-Channel Neural Networks (EMC-NN) model for scientific language editing evaluation, comprising three main parts: a multi-channel word embedding representation, a combination of Bidirectional Long Short-Term Memory and Convolutional Neural Networks, and a majority voting ensemble. Experimental results on 143,804 test sentences show that our proposed EMC-NN achieved an F1-score of 0.6367, outperforming the winner of the AESW-2016 shared task and recent BERT transformers. Based on a series of in-depth analyses comparing the number of channels, ensemble size and network architectures, the proposed EMC-NN model is a relatively simple but effective approach that offers significant performance improvements for scientific writing evaluation tasks. |
first_indexed | 2024-12-14T08:41:28Z |
format | Article |
id | doaj.art-64f4281f5d5d49cdb74c95c097a69bb1 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-14T08:41:28Z |
publishDate | 2021-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-64f4281f5d5d49cdb74c95c097a69bb1; indexed 2022-12-21T23:09:18Z; English; IEEE; IEEE Access; ISSN 2169-3536; published 2021-01-01; vol. 9, pp. 158540-158547; DOI 10.1109/ACCESS.2021.3130042; IEEE article no. 9625008; Ensemble Multi-Channel Neural Networks for Scientific Language Editing Evaluation; Lung-Hao Lee (ORCID 0000-0003-0472-7429), Yuh-Shyang Wang, Chao-Yi Chen, Liang-Chih Yu (ORCID 0000-0003-1443-4347); Department of Electrical Engineering, National Central University, Taoyuan, Taiwan (Lee, Wang, Chen); Department of Information Management, Yuan Ze University, Taoyuan, Taiwan (Yu); https://ieeexplore.ieee.org/document/9625008/; keywords: Automated writing evaluation, scientific English, natural language processing, ensemble learning, multi-channel neural networks |
spellingShingle | Lung-Hao Lee; Yuh-Shyang Wang; Chao-Yi Chen; Liang-Chih Yu; Ensemble Multi-Channel Neural Networks for Scientific Language Editing Evaluation; IEEE Access; Automated writing evaluation; scientific English; natural language processing; ensemble learning; multi-channel neural networks |
title | Ensemble Multi-Channel Neural Networks for Scientific Language Editing Evaluation |
title_full | Ensemble Multi-Channel Neural Networks for Scientific Language Editing Evaluation |
title_fullStr | Ensemble Multi-Channel Neural Networks for Scientific Language Editing Evaluation |
title_full_unstemmed | Ensemble Multi-Channel Neural Networks for Scientific Language Editing Evaluation |
title_short | Ensemble Multi-Channel Neural Networks for Scientific Language Editing Evaluation |
title_sort | ensemble multi channel neural networks for scientific language editing evaluation |
topic | Automated writing evaluation; scientific English; natural language processing; ensemble learning; multi-channel neural networks |
url | https://ieeexplore.ieee.org/document/9625008/ |
work_keys_str_mv | AT lunghaolee ensemblemultichannelneuralnetworksforscientificlanguageeditingevaluation AT yuhshyangwang ensemblemultichannelneuralnetworksforscientificlanguageeditingevaluation AT chaoyichen ensemblemultichannelneuralnetworksforscientificlanguageeditingevaluation AT liangchihyu ensemblemultichannelneuralnetworksforscientificlanguageeditingevaluation |
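The abstract above describes EMC-NN as multi-channel word embeddings feeding a combined BiLSTM and CNN, with majority voting over an ensemble of trained models. The record does not give the authors' embedding sources, layer sizes, or ensemble size, so the choices below (three channels, 300-dimensional embeddings, five ensemble members, kernel size 3) are illustrative assumptions; this is a minimal PyTorch sketch of that general architecture, not the authors' implementation.

```python
# Minimal sketch of a multi-channel BiLSTM-CNN sentence classifier with a
# hard majority-voting ensemble, in the spirit of the EMC-NN described above.
# All dimensions, the channel count, and the ensemble size are assumptions.
import torch
import torch.nn as nn


class MultiChannelBiLSTMCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, n_channels=3,
                 lstm_hidden=128, n_filters=100, kernel_size=3):
        super().__init__()
        # One embedding table per channel (e.g. different pre-trained sources).
        self.channels = nn.ModuleList(
            [nn.Embedding(vocab_size, emb_dim) for _ in range(n_channels)]
        )
        self.bilstm = nn.LSTM(emb_dim * n_channels, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * lstm_hidden, n_filters, kernel_size)
        self.classifier = nn.Linear(n_filters, 2)  # needs editing vs. not

    def forward(self, token_ids):                      # (batch, seq_len)
        # Concatenate the channel embeddings along the feature axis.
        x = torch.cat([emb(token_ids) for emb in self.channels], dim=-1)
        x, _ = self.bilstm(x)                          # (batch, seq_len, 2*hidden)
        x = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, filters, L')
        x = x.max(dim=2).values                        # global max pooling
        return self.classifier(x)                      # logits, (batch, 2)


def majority_vote(models, token_ids):
    """Hard majority vote over independently trained ensemble members."""
    with torch.no_grad():
        votes = torch.stack([m(token_ids).argmax(dim=1) for m in models])
    return votes.float().mean(dim=0).round().long()    # (batch,) 0/1 labels


if __name__ == "__main__":
    vocab_size, batch, seq_len = 10_000, 4, 30
    ensemble = [MultiChannelBiLSTMCNN(vocab_size) for _ in range(5)]
    dummy = torch.randint(0, vocab_size, (batch, seq_len))
    print(majority_vote(ensemble, dummy))              # e.g. tensor([0, 1, 0, 1])
```

An odd ensemble size avoids ties in the hard vote; concatenating the channel embeddings before the BiLSTM is one of several plausible ways to combine channels and is not necessarily the scheme used in the paper.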