Reduction of Video Compression Artifacts Based on Deep Temporal Networks
It has been shown that deep convolutional neural networks (CNNs) reduce JPEG compression artifacts better than previous approaches. However, the latest video compression standards produce more complex artifacts than JPEG, including flickering, which is not well reduced by CNN-based methods developed for still images. Moreover, recent video compression algorithms include in-loop filters that reduce blocking artifacts, so post-processing barely improves performance. In this paper, we propose a temporal-CNN architecture to reduce the artifacts of video compression standards as well as of JPEG. Specifically, we exploit a simple CNN structure and introduce a new training strategy that captures the temporal correlation of consecutive frames in videos. Similar patches are aggregated from neighboring frames by a simple motion search method and fed to the CNN, which further reduces the artifacts. Experiments show that our approach improves on conventional CNN-based methods of similar complexity for image and video compression standards such as MPEG-2, AVC, and HEVC, with average PSNR gains of 1.27, 0.47, and 0.23 dB, respectively.
Main Authors: | Jae Woong Soh, Jaewoo Park, Yoonsik Kim, Byeongyong Ahn, Hyun-Seung Lee, Young-Su Moon, Nam Ik Cho |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE 2018-01-01 |
Series: | IEEE Access |
Subjects: | Advanced video coding (AVC); compression artifacts; convolutional neural networks (CNN); high efficiency video coding (HEVC); video compression |
Online Access: | https://ieeexplore.ieee.org/document/8502045/ |
_version_ | 1818927883069947904 |
---|---|
author | Jae Woong Soh, Jaewoo Park, Yoonsik Kim, Byeongyong Ahn, Hyun-Seung Lee, Young-Su Moon, Nam Ik Cho |
author_facet | Jae Woong Soh, Jaewoo Park, Yoonsik Kim, Byeongyong Ahn, Hyun-Seung Lee, Young-Su Moon, Nam Ik Cho |
author_sort | Jae Woong Soh |
collection | DOAJ |
description | It has been shown that deep convolutional neural networks (CNNs) reduce JPEG compression artifacts better than previous approaches. However, the latest video compression standards produce more complex artifacts than JPEG, including flickering, which is not well reduced by CNN-based methods developed for still images. Moreover, recent video compression algorithms include in-loop filters that reduce blocking artifacts, so post-processing barely improves performance. In this paper, we propose a temporal-CNN architecture to reduce the artifacts of video compression standards as well as of JPEG. Specifically, we exploit a simple CNN structure and introduce a new training strategy that captures the temporal correlation of consecutive frames in videos. Similar patches are aggregated from neighboring frames by a simple motion search method and fed to the CNN, which further reduces the artifacts. Experiments show that our approach improves on conventional CNN-based methods of similar complexity for image and video compression standards such as MPEG-2, AVC, and HEVC, with average PSNR gains of 1.27, 0.47, and 0.23 dB, respectively. |
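The description above outlines the core idea: find patches in the neighboring frames that resemble each patch of the current frame using a simple motion search, then stack them together as the CNN input. A minimal NumPy sketch of that aggregation step (the function names `match_patch` and `aggregate_patches`, the SAD matching metric, the 8x8 patch size, and the +/-4-pixel search window are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def match_patch(ref_patch, frame, y, x, search=4):
    """Find the patch in `frame` most similar to `ref_patch`,
    minimizing the sum of absolute differences (SAD) over a
    +/-`search` pixel window centered at (y, x)."""
    p = ref_patch.shape[0]
    h, w = frame.shape
    best, best_sad = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= h - p and 0 <= xx <= w - p:
                cand = frame[yy:yy + p, xx:xx + p]
                sad = np.abs(cand.astype(np.int64)
                             - ref_patch.astype(np.int64)).sum()
                if sad < best_sad:
                    best, best_sad = cand, sad
    return best

def aggregate_patches(frames, t, y, x, p=8, search=4):
    """Stack the patch at (y, x) of frame t with its best matches
    from the previous and next frames; the (p, p, 3) tensor would
    serve as the multi-channel input to the artifact-reduction CNN."""
    ref = frames[t][y:y + p, x:x + p]
    prev = match_patch(ref, frames[t - 1], y, x, search)
    nxt = match_patch(ref, frames[t + 1], y, x, search)
    return np.stack([prev, ref, nxt], axis=-1)
```

Feeding temporally aligned patches as extra channels lets the network average out flickering, which a single-frame CNN cannot see by construction.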
first_indexed | 2024-12-20T03:20:05Z |
format | Article |
id | doaj.art-8d531d96854643f18e3c2ad4edded9b0 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-20T03:20:05Z |
publishDate | 2018-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-8d531d96854643f18e3c2ad4edded9b0 (2022-12-21T19:55:14Z); eng; IEEE; IEEE Access; ISSN 2169-3536; 2018-01-01; vol. 6, pp. 63094-63106; DOI 10.1109/ACCESS.2018.2876864; article 8502045; Reduction of Video Compression Artifacts Based on Deep Temporal Networks; Jae Woong Soh (Department of Electrical and Computer Engineering, INMC, Seoul National University, Seoul, South Korea), Jaewoo Park (Department of Electrical and Computer Engineering, INMC, Seoul National University, Seoul, South Korea), Yoonsik Kim (Department of Electrical and Computer Engineering, INMC, Seoul National University, Seoul, South Korea), Byeongyong Ahn (Visual Display Division, Samsung Electronics Co. Ltd., Suwon, South Korea), Hyun-Seung Lee (Department of Electrical and Computer Engineering, INMC, Seoul National University, Seoul, South Korea), Young-Su Moon (Visual Display Division, Samsung Electronics Co. Ltd., Suwon, South Korea), Nam Ik Cho (Department of Electrical and Computer Engineering, INMC, Seoul National University, Seoul, South Korea; ORCID https://orcid.org/0000-0001-5297-4649); https://ieeexplore.ieee.org/document/8502045/; Advanced video coding (AVC); compression artifacts; convolutional neural networks (CNN); high efficiency video coding (HEVC); video compression |
spellingShingle | Jae Woong Soh, Jaewoo Park, Yoonsik Kim, Byeongyong Ahn, Hyun-Seung Lee, Young-Su Moon, Nam Ik Cho; Reduction of Video Compression Artifacts Based on Deep Temporal Networks; IEEE Access; Advanced video coding (AVC); compression artifacts; convolutional neural networks (CNN); high efficiency video coding (HEVC); video compression |
title | Reduction of Video Compression Artifacts Based on Deep Temporal Networks |
title_full | Reduction of Video Compression Artifacts Based on Deep Temporal Networks |
title_fullStr | Reduction of Video Compression Artifacts Based on Deep Temporal Networks |
title_full_unstemmed | Reduction of Video Compression Artifacts Based on Deep Temporal Networks |
title_short | Reduction of Video Compression Artifacts Based on Deep Temporal Networks |
title_sort | reduction of video compression artifacts based on deep temporal networks |
topic | Advanced video coding (AVC); compression artifacts; convolutional neural networks (CNN); high efficiency video coding (HEVC); video compression |
url | https://ieeexplore.ieee.org/document/8502045/ |
work_keys_str_mv | AT jaewoongsoh reductionofvideocompressionartifactsbasedondeeptemporalnetworks AT jaewoopark reductionofvideocompressionartifactsbasedondeeptemporalnetworks AT yoonsikkim reductionofvideocompressionartifactsbasedondeeptemporalnetworks AT byeongyongahn reductionofvideocompressionartifactsbasedondeeptemporalnetworks AT hyunseunglee reductionofvideocompressionartifactsbasedondeeptemporalnetworks AT youngsumoon reductionofvideocompressionartifactsbasedondeeptemporalnetworks AT namikcho reductionofvideocompressionartifactsbasedondeeptemporalnetworks |