Automatic Report Generation for Chest X-Ray Images via Adversarial Reinforcement Learning
An adversarial reinforced report-generation framework for chest x-ray images is proposed. Previous medical-report-generation models are mostly trained by minimizing the cross-entropy loss or further optimizing common image-captioning metrics, such as CIDEr, ignoring diagnostic accuracy, which should be the first consideration in this area.
Main Authors: | Daibing Hou, Zijian Zhao, Yuying Liu, Faliang Chang, Sanyuan Hu |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2021-01-01 |
Series: | IEEE Access |
Subjects: | Medical report generation; encoder-decoder; adversarial training; reinforcement learning |
Online Access: | https://ieeexplore.ieee.org/document/9343868/ |
_version_ | 1831677806060240896 |
---|---|
author | Daibing Hou; Zijian Zhao; Yuying Liu; Faliang Chang; Sanyuan Hu |
author_facet | Daibing Hou; Zijian Zhao; Yuying Liu; Faliang Chang; Sanyuan Hu |
author_sort | Daibing Hou |
collection | DOAJ |
description | An adversarial reinforced report-generation framework for chest x-ray images is proposed. Previous medical-report-generation models are mostly trained by minimizing the cross-entropy loss or further optimizing common image-captioning metrics, such as CIDEr, ignoring diagnostic accuracy, which should be the first consideration in this area. Inspired by generative adversarial networks, an adversarial reinforcement-learning approach to chest x-ray report generation is proposed that considers both diagnostic accuracy and language fluency. Specifically, an accuracy discriminator (AD) and a fluency discriminator (FD) are built as evaluators that score a report on these two aspects: the FD checks how likely a report is to originate from a human expert, while the AD determines how well a report covers the key chest observations. The weighted score of the two discriminators is treated as a “reward” for training the report generator via reinforcement learning, which circumvents the problem that gradients cannot be passed back to the generative model when its output is discrete. Simultaneously, the two discriminators are optimized by maximum-likelihood estimation to improve their assessment ability. Additionally, a multi-type medical-concept-fused encoder followed by a hierarchical decoder is adopted as the report generator. Experiments on two large radiograph datasets demonstrate that the proposed model outperforms all compared methods. (An illustrative sketch of the reward scheme is given after the record fields below.) |
first_indexed | 2024-12-20T04:47:26Z |
format | Article |
id | doaj.art-bb739072aad0492b871a592e96848a55 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-20T04:47:26Z |
publishDate | 2021-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-bb739072aad0492b871a592e96848a55; 2022-12-21T19:52:57Z; English; IEEE; IEEE Access, ISSN 2169-3536; published 2021-01-01; vol. 9, pp. 21236-21250; DOI 10.1109/ACCESS.2021.3056175; IEEE document 9343868; Automatic Report Generation for Chest X-Ray Images via Adversarial Reinforcement Learning; Daibing Hou (https://orcid.org/0000-0002-4682-2187), Zijian Zhao (https://orcid.org/0000-0002-7849-814X), Yuying Liu (https://orcid.org/0000-0002-1902-3742), and Faliang Chang (https://orcid.org/0000-0003-1276-2267): School of Control Science and Engineering, Shandong University, Jinan, China; Sanyuan Hu: Department of General Surgery, First Affiliated Hospital, Shandong First Medical University, Jinan, China; abstract as in the description field above; https://ieeexplore.ieee.org/document/9343868/; keywords: Medical report generation; encoder-decoder; adversarial training; reinforcement learning |
spellingShingle | Daibing Hou; Zijian Zhao; Yuying Liu; Faliang Chang; Sanyuan Hu; Automatic Report Generation for Chest X-Ray Images via Adversarial Reinforcement Learning; IEEE Access; Medical report generation; encoder-decoder; adversarial training; reinforcement learning |
title | Automatic Report Generation for Chest X-Ray Images via Adversarial Reinforcement Learning |
title_full | Automatic Report Generation for Chest X-Ray Images via Adversarial Reinforcement Learning |
title_fullStr | Automatic Report Generation for Chest X-Ray Images via Adversarial Reinforcement Learning |
title_full_unstemmed | Automatic Report Generation for Chest X-Ray Images via Adversarial Reinforcement Learning |
title_short | Automatic Report Generation for Chest X-Ray Images via Adversarial Reinforcement Learning |
title_sort | automatic report generation for chest x ray images via adversarial reinforcement learning |
topic | Medical report generation; encoder-decoder; adversarial training; reinforcement learning |
url | https://ieeexplore.ieee.org/document/9343868/ |
work_keys_str_mv | AT daibinghou automaticreportgenerationforchestxrayimagesviaadversarialreinforcementlearning AT zijianzhao automaticreportgenerationforchestxrayimagesviaadversarialreinforcementlearning AT yuyingliu automaticreportgenerationforchestxrayimagesviaadversarialreinforcementlearning AT faliangchang automaticreportgenerationforchestxrayimagesviaadversarialreinforcementlearning AT sanyuanhu automaticreportgenerationforchestxrayimagesviaadversarialreinforcementlearning |
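The description field above explains the core training signal: the weighted scores of the accuracy discriminator (AD) and the fluency discriminator (FD) serve as a reward for reinforcement-learning (REINFORCE-style) updates of the report generator, since discrete token outputs prevent gradients from the discriminators reaching the generator directly. The sketch below illustrates only that reward idea; the PyTorch framing, the function and variable names, and the weighting parameter `lam` are assumptions made for illustration, not the authors' implementation.

```python
# Minimal REINFORCE-style sketch of the weighted-discriminator reward described
# in the record's abstract. All names (reinforce_loss, lam, ad_score, fd_score)
# are illustrative assumptions, not the authors' released code.
import torch
import torch.nn.functional as F


def reinforce_loss(step_logits, sampled_ids, ad_score, fd_score, lam=0.5):
    """step_logits: (T, V) generator logits for each of T generated tokens.
    sampled_ids: (T,) token ids sampled from those logits.
    ad_score, fd_score: discriminator scores in [0, 1] for the sampled report.
    lam: assumed weight balancing diagnostic accuracy against fluency."""
    # The weighted discriminator score acts as the (non-differentiable) reward.
    reward = lam * float(ad_score) + (1.0 - lam) * float(fd_score)

    # Log-probability of the sampled report under the current generator policy.
    log_probs = F.log_softmax(step_logits, dim=-1)
    report_logp = log_probs.gather(1, sampled_ids.unsqueeze(1)).squeeze(1).sum()

    # REINFORCE: weight the sequence log-likelihood by the reward so a gradient
    # reaches the generator even though the sampled tokens are discrete.
    return -reward * report_logp


# Toy usage: random logits for a 5-token report over a 100-word vocabulary.
logits = torch.randn(5, 100, requires_grad=True)
ids = torch.multinomial(F.softmax(logits, dim=-1), num_samples=1).squeeze(1)
loss = reinforce_loss(logits, ids, ad_score=0.8, fd_score=0.6)
loss.backward()  # gradients flow back into the generator's logits
```

In the paper's setup the two scores would come from the trained AD and FD networks evaluating a sampled report, and the discriminators themselves would be updated separately by maximum-likelihood estimation; the scalar placeholders above only stand in for those scores.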