A Diverse Domain Generative Adversarial Network for Style Transfer on Face Photographs
Style transfer on real-time photographs is now widely popular. It is used in many applications, especially social networking apps such as Snapchat and beauty cameras. A number of style transfer algorithms have been proposed, but they are computationally expensive and gener...
Main Authors: | Rabia Tahir, Keyang Cheng, Bilal Ahmed Memon, Qing Liu |
---|---|
Format: | Article |
Language: | English |
Published: | Universidad Internacional de La Rioja (UNIR), 2022-09-01 |
Series: | International Journal of Interactive Multimedia and Artificial Intelligence |
Subjects: | generative adversarial network; cyclegan; gated gan; prelu; smooth l1 loss; style transfer |
Online Access: | https://www.ijimai.org/journal/bibcite/reference/3145 |
_version_ | 1811209302304620544 |
---|---|
author | Rabia Tahir; Keyang Cheng; Bilal Ahmed Memon; Qing Liu |
author_facet | Rabia Tahir; Keyang Cheng; Bilal Ahmed Memon; Qing Liu |
author_sort | Rabia Tahir |
collection | DOAJ |
description | Style transfer on real-time photographs is now widely popular. It is used in many applications, especially social networking apps such as Snapchat and beauty cameras. A number of style transfer algorithms have been proposed, but they are computationally expensive and generate artifacts in the output image. Moreover, most prior work focuses only on transferring traditional painting styles to real photographs. In contrast, our work considers diverse style domains that can be transferred onto real photographs with a single model. In this paper, we propose a Diverse Domain Generative Adversarial Network (DD-GAN) that performs fast, diverse-domain style translation on human face images. Our approach is highly efficient and applies different attractive and unique painting styles to human photographs while preserving the content after translation. Moreover, we adopt a new loss function and use the PReLU activation function, which improves and speeds up training and helps achieve high accuracy. Our loss function helps the proposed model produce better reconstructed images, and the model occupies less memory during training. We use various evaluation metrics to assess the accuracy of our model. The experimental results demonstrate the effectiveness of our method compared with the state of the art. |
first_indexed | 2024-04-12T04:37:11Z |
format | Article |
id | doaj.art-af91d36fbbf34f1a9e79795fa57c6c4e |
institution | Directory Open Access Journal |
issn | 1989-1660 |
language | English |
last_indexed | 2024-04-12T04:37:11Z |
publishDate | 2022-09-01 |
publisher | Universidad Internacional de La Rioja (UNIR) |
record_format | Article |
series | International Journal of Interactive Multimedia and Artificial Intelligence |
spelling | doaj.art-af91d36fbbf34f1a9e79795fa57c6c4e; 2022-12-22T03:47:46Z; eng; Universidad Internacional de La Rioja (UNIR); International Journal of Interactive Multimedia and Artificial Intelligence; ISSN 1989-1660; 2022-09-01; Vol. 7, No. 5, pp. 100-108; DOI 10.9781/ijimai.2022.08.001; A Diverse Domain Generative Adversarial Network for Style Transfer on Face Photographs; Rabia Tahir; Keyang Cheng; Bilal Ahmed Memon; Qing Liu; https://www.ijimai.org/journal/bibcite/reference/3145; generative adversarial network; cyclegan; gated gan; prelu; smooth l1 loss; style transfer |
spellingShingle | Rabia Tahir; Keyang Cheng; Bilal Ahmed Memon; Qing Liu; A Diverse Domain Generative Adversarial Network for Style Transfer on Face Photographs; International Journal of Interactive Multimedia and Artificial Intelligence; generative adversarial network; cyclegan; gated gan; prelu; smooth l1 loss; style transfer |
title | A Diverse Domain Generative Adversarial Network for Style Transfer on Face Photographs |
title_full | A Diverse Domain Generative Adversarial Network for Style Transfer on Face Photographs |
title_fullStr | A Diverse Domain Generative Adversarial Network for Style Transfer on Face Photographs |
title_full_unstemmed | A Diverse Domain Generative Adversarial Network for Style Transfer on Face Photographs |
title_short | A Diverse Domain Generative Adversarial Network for Style Transfer on Face Photographs |
title_sort | diverse domain generative adversarial network for style transfer on face photographs |
topic | generative adversarial network; cyclegan; gated gan; prelu; smooth l1 loss; style transfer |
url | https://www.ijimai.org/journal/bibcite/reference/3145 |
work_keys_str_mv | AT rabiatahir adiversedomaingenerativeadversarialnetworkforstyletransferonfacephotographs AT keyangcheng adiversedomaingenerativeadversarialnetworkforstyletransferonfacephotographs AT bilalahmedmemon adiversedomaingenerativeadversarialnetworkforstyletransferonfacephotographs AT qingliu adiversedomaingenerativeadversarialnetworkforstyletransferonfacephotographs AT rabiatahir diversedomaingenerativeadversarialnetworkforstyletransferonfacephotographs AT keyangcheng diversedomaingenerativeadversarialnetworkforstyletransferonfacephotographs AT bilalahmedmemon diversedomaingenerativeadversarialnetworkforstyletransferonfacephotographs AT qingliu diversedomaingenerativeadversarialnetworkforstyletransferonfacephotographs |
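The record's subject keywords (prelu, smooth l1 loss) point to the two low-level design choices highlighted in the abstract. Below is a minimal PyTorch sketch of that pairing only; the block structure, layer sizes, and variable names are illustrative assumptions, not the authors' DD-GAN implementation.

```python
import torch
import torch.nn as nn

# Illustrative generator building block using PReLU (a learnable
# leaky-ReLU variant), as suggested by the record's "prelu" keyword.
# Channel counts, normalization choice, and kernel size are assumptions.
class ConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.InstanceNorm2d(out_ch)
        self.act = nn.PReLU()  # negative slope is learned during training

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.norm(self.conv(x)))

# Smooth L1 (Huber-style) reconstruction loss, matching the "smooth l1 loss"
# keyword: quadratic for small errors, linear for large ones, so it is less
# sensitive to outlier pixels than a plain L2 loss.
reconstruction_loss = nn.SmoothL1Loss()

if __name__ == "__main__":
    # Dummy tensors standing in for a reconstructed face and the input face.
    block = ConvBlock(3, 64)
    reconstructed = torch.randn(1, 3, 256, 256)
    original = torch.randn(1, 3, 256, 256)
    print(block(original).shape)                       # torch.Size([1, 64, 256, 256])
    print(reconstruction_loss(reconstructed, original))
```

In a GAN training loop this reconstruction term would typically be added to the adversarial loss with a weighting factor; that weighting and the rest of the DD-GAN objective are not specified in this record.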