Unsupervised Exemplar-Domain Aware Image-to-Image Translation
Image-to-image translation converts an image in one style into another in a target style while preserving the original content. A desired translator should be capable of generating diverse results in a controllable, many-to-many fashion. To this end, we design a novel deep translator, namely the exemplar-domain aware image-to-image translator (EDIT).
Main Authors: | Yuanbin Fu, Jiayi Ma, Xiaojie Guo
Format: | Article
Language: | English
Published: | MDPI AG, 2021-05-01
Series: | Entropy
Subjects: | image-to-image translation; neural style transfer; unsupervised learning; generative adversarial network
Online Access: | https://www.mdpi.com/1099-4300/23/5/565
author | Yuanbin Fu; Jiayi Ma; Xiaojie Guo
collection | DOAJ |
description | Image-to-image translation converts an image in one style into another in a target style while preserving the original content. A desired translator should be capable of generating diverse results in a controllable, many-to-many fashion. To this end, we design a novel deep translator, namely the exemplar-domain aware image-to-image translator (EDIT for short). From a logical perspective, the translator needs to perform two main functions, i.e., feature extraction and style transfer. With this logical network partition in mind, the generator of our EDIT comprises blocks configured by shared parameters together with blocks whose parameters are exported by an exemplar-domain aware parameter network, explicitly imitating the functionalities of extraction and mapping. The principle behind this is that, for images from multiple domains, the content features can be obtained by a single extractor, while (re-)stylization is achieved by mapping the extracted features to different targets (domains and exemplars). In addition, a discriminator is employed during training to ensure that the output satisfies the distribution of the target domain. Our EDIT can flexibly and effectively handle multiple domains and arbitrary exemplars within a single, unified model. We conduct experiments to show the efficacy of our design, and demonstrate its advantages over other state-of-the-art methods both quantitatively and qualitatively.
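The description above outlines a generator split into shared-parameter blocks (content extraction) and blocks whose parameters are produced by an exemplar-domain aware parameter network (stylization), trained against a discriminator. The following is a minimal PyTorch-style sketch of that partition, not the authors' implementation; class names such as `ExemplarDomainParamNet` and `EDITGenerator`, the layer sizes, and the batch-size-1 simplification are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExemplarDomainParamNet(nn.Module):
    """Maps an exemplar image and a domain code to convolution weights
    (a small hypernetwork); batch size 1 is assumed for simplicity."""
    def __init__(self, num_domains, feat_ch=64, out_ch=64, ksize=3):
        super().__init__()
        self.out_ch, self.in_ch, self.k = out_ch, feat_ch, ksize
        self.encode = nn.Sequential(              # summarize the exemplar into a vector
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.to_weights = nn.Linear(64 + num_domains,
                                    out_ch * feat_ch * ksize * ksize)

    def forward(self, exemplar, domain_onehot):
        code = torch.cat([self.encode(exemplar), domain_onehot], dim=1)
        w = self.to_weights(code)                 # flattened conv kernel
        return w.view(self.out_ch, self.in_ch, self.k, self.k)


class EDITGenerator(nn.Module):
    """Shared-parameter extractor followed by a stylization block whose
    weights come from the exemplar-domain aware parameter network."""
    def __init__(self, num_domains):
        super().__init__()
        self.extract = nn.Sequential(             # shared-parameter blocks (content)
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.param_net = ExemplarDomainParamNet(num_domains)
        self.to_rgb = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, x, exemplar, domain_onehot):
        feat = self.extract(x)                           # content features
        w = self.param_net(exemplar, domain_onehot)      # exemplar/domain-specific weights
        styled = F.relu(F.conv2d(feat, w, padding=1))    # varied-parameter block
        return torch.tanh(self.to_rgb(styled))


# Usage: translate a 128x128 image toward domain 1 with a given exemplar.
gen = EDITGenerator(num_domains=3)
content = torch.randn(1, 3, 128, 128)
exemplar = torch.randn(1, 3, 128, 128)
domain = F.one_hot(torch.tensor([1]), num_classes=3).float()
output = gen(content, exemplar, domain)
print(output.shape)  # torch.Size([1, 3, 128, 128])
```

In this sketch the parameter network plays the role of a hypernetwork: changing the exemplar or the domain code changes the stylization weights while the shared extractor stays fixed, which mirrors the many-to-many, multi-domain behavior described in the abstract.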
format | Article |
id | doaj.art-164cdd9102504b5eafc96bef275978ce |
institution | Directory of Open Access Journals
issn | 1099-4300 |
language | English |
publishDate | 2021-05-01 |
publisher | MDPI AG |
series | Entropy |
doi | 10.3390/e23050565
citation | Entropy, vol. 23, iss. 5, art. no. 565 (2021-05-01)
affiliation (Yuanbin Fu) | College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
affiliation (Jiayi Ma) | Electronic Information School, Wuhan University, Wuhan 430072, China
affiliation (Xiaojie Guo) | College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
title | Unsupervised Exemplar-Domain Aware Image-to-Image Translation |
topic | image-to-image translation; neural style transfer; unsupervised learning; generative adversarial network
url | https://www.mdpi.com/1099-4300/23/5/565 |