PEIPNet: Parametric Efficient Image-Inpainting Network with Depthwise and Pointwise Convolution

Research on image inpainting has mainly focused on enhancing performance by adding stages and modules, a trend that disregards the accompanying growth in model parameters and operational memory and thus the burden on computational resources. To solve this problem, we propose the Parametric Efficient Image-Inpainting Network (PEIPNet) for efficient and effective image inpainting. Unlike other state-of-the-art methods, the proposed model uses a one-stage inpainting framework in which depthwise and pointwise convolutions reduce the number of parameters and the computational cost. To generate semantically appealing results, we selected three components: spatially-adaptive denormalization (SPADE), a dense dilated convolution module (DDCM), and efficient self-attention (ESA). SPADE conditionally normalizes activations according to the mask, distinguishing damaged from undamaged regions. The DDCM is employed at every scale to overcome vanishing gradients and to fill in pixels gradually by capturing global information along the feature maps. The ESA extracts long-range information to obtain clues from unmasked areas. In terms of efficiency, our model requires the least operational memory among the compared state-of-the-art methods. Qualitative and quantitative experiments demonstrate the generalized inpainting ability of our method on three public datasets: Paris StreetView, CelebA, and Places2.
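
The efficiency claim rests on factoring a standard convolution into a depthwise step followed by a pointwise step. The PyTorch sketch below shows a generic depthwise-separable block with an illustrative parameter-count comparison; it is not the authors' implementation, and all layer names and sizes are placeholders.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise-separable block: a per-channel (depthwise) convolution
    followed by a 1x1 (pointwise) convolution that mixes channels."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # groups=in_ch restricts each filter to a single input channel.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameter comparison against a plain 3x3 convolution (64 -> 128 channels):
std = nn.Conv2d(64, 128, 3, padding=1)
sep = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(sep))  # 73856 vs. 8960 -- roughly an 8x reduction
```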

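SPADE (spatially-adaptive denormalization) predicts per-pixel scale and shift maps from a conditioning input, here the inpainting mask, so that normalization treats damaged and undamaged regions differently. A minimal sketch under assumed choices (instance normalization, a single-channel binary mask, a 64-channel hidden layer); the paper's exact configuration is not given in the abstract.

```python
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Mask-conditioned normalization: activations are normalized, then
    rescaled and shifted per pixel by gamma/beta maps predicted from the
    (resized) inpainting mask."""
    def __init__(self, num_features, mask_ch=1, hidden=64):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(mask_ch, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, num_features, 3, padding=1)
        self.beta = nn.Conv2d(hidden, num_features, 3, padding=1)

    def forward(self, x, mask):
        # Resize the mask to the feature map's spatial size.
        mask = F.interpolate(mask, size=x.shape[2:], mode='nearest')
        h = self.shared(mask)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)
```
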
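The DDCM description, dilated convolutions for global context combined with dense connections against vanishing gradients, corresponds to the general pattern sketched below. The dilation schedule, growth rate, and fusion layer are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def sep_conv(in_ch, out_ch, dilation):
    """Depthwise (per-channel, dilated) + pointwise (1x1) convolution pair."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, padding=dilation, dilation=dilation,
                  groups=in_ch),
        nn.Conv2d(in_ch, out_ch, 1),
        nn.ReLU())

class DenseDilatedModule(nn.Module):
    """Dense block of dilated separable convolutions: each layer sees the
    concatenation of all previous outputs (helping gradients flow), while
    growing dilation rates widen the receptive field toward global context."""
    def __init__(self, channels, growth=32, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = channels
        for d in dilations:
            self.layers.append(sep_conv(ch, growth, d))
            ch += growth
        self.fuse = nn.Conv2d(ch, channels, 1)  # project back to input width

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # residual output
```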

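"Efficient self-attention" commonly denotes a linear-complexity variant that normalizes queries and keys separately and aggregates keys with values first, so no n-by-n attention map is ever materialized. The sketch follows that pattern; whether PEIPNet's ESA uses this exact formulation cannot be determined from the abstract alone.

```python
import torch.nn as nn

class EfficientSelfAttention(nn.Module):
    """Linear-complexity self-attention: softmax Q over features and K over
    positions, aggregate K with V first (a small d x c matrix), then apply
    it to Q. Memory scales with n*d rather than n^2."""
    def __init__(self, channels, key_dim=32):
        super().__init__()
        self.q = nn.Conv2d(channels, key_dim, 1)
        self.k = nn.Conv2d(channels, key_dim, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).softmax(dim=1)   # (b, d, n), over features
        k = self.k(x).flatten(2).softmax(dim=2)   # (b, d, n), over positions
        v = self.v(x).flatten(2)                  # (b, c, n)
        context = k @ v.transpose(1, 2)           # (b, d, c) global summary
        out = (q.transpose(1, 2) @ context).transpose(1, 2)  # (b, c, n)
        return out.reshape(b, c, h, w) + x        # residual connection
```
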
Bibliographic Details
Main Authors: Jaekyun Ko, Wanuk Choi, Sanghwan Lee
Affiliation: Department of Mechanical Convergence Engineering, Hanyang University, Seoul 04763, Republic of Korea
Format: Article
Language: English
Published: MDPI AG, 2023-10-01
Series: Sensors, Vol. 23, Iss. 19, Article 8313
Subjects: image inpainting; generative adversarial networks (GANs); lightweight architecture; conditional normalization; dilated convolution; dense block
ISSN: 1424-8220
DOI: 10.3390/s23198313
Online Access: https://www.mdpi.com/1424-8220/23/19/8313