Summary: | Recently, conditional generative adversarial networks (cGANs) have played an important role in image synthesis tasks, and Vision Transformers (ViTs) with self-attention mechanisms have shown state-of-the-art performance in computer vision. In this report, I extend ViT to image synthesis tasks. I propose two ViT-based generator architectures, using upsampling and transposed-convolution encoders respectively, and one ViT-based discriminator. I demonstrate that my models, named cViTGAN, are capable of conditional image synthesis. In experiments on six different benchmarks, the models achieve performance comparable to the baseline models. My work shows that ViT-based models can achieve reasonable results on image synthesis.
|