Image Captioning with Style Using Generative Adversarial Networks

Bibliographic Details
Main Authors: Dennis Setiawan, Maria Astrid Coenradina Saffachrissa, Shintia Tamara, Derwin Suhartono
Format: Article
Language: English
Published: Politeknik Negeri Padang 2022-03-01
Series: JOIV: International Journal on Informatics Visualization
Online Access: https://joiv.org/index.php/joiv/article/view/709
Description
Summary: Image captioning research, which initially focused on describing images factually, is now moving toward incorporating sentiments or styles to produce natural captions that resemble human-written ones. This research addresses the problem that captions produced by existing models are rigid and unnatural due to their lack of sentiment. Its purpose is to design a reliable stylized image captioning model based on the state-of-the-art SeqCapsGAN architecture. The materials needed are the MS COCO and SentiCaps datasets. The research methods are literature studies and experiments. Whereas many previous studies compare their work without accounting for differences in the components and parameters used, this research proposes a different approach to find more reliable configurations and provide more detailed insight into the models' behavior. It also conducts further experiments on aspects of the generator that have not been thoroughly investigated. Experiments cover combinations of feature extractor (VGG-19 and ResNet-50), discriminator model (CNN and Capsule), optimizer (Adam, Nadam, and SGD), batch size (8, 16, 32, and 64), and learning rate (0.001 and 0.0001) via a grid search. In conclusion, more insight into the models' behavior can be drawn, and a configuration and result better than the baseline can be achieved. This research implies that comparative studies of image recognition models in the image captioning context, automated metrics, and larger datasets suited to stylized image captioning may be needed to further this field.
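
For illustration, the grid search described in the summary can be enumerated as in the following minimal Python sketch; the variable names and the train-and-evaluate placeholder are assumptions made for illustration, not the authors' actual code.

    from itertools import product

    # Hyperparameter grid as listed in the abstract (names are illustrative assumptions).
    feature_extractors = ["VGG-19", "ResNet-50"]
    discriminators = ["CNN", "Capsule"]
    optimizers = ["Adam", "Nadam", "SGD"]
    batch_sizes = [8, 16, 32, 64]
    learning_rates = [1e-3, 1e-4]

    # Grid search enumerates every combination: 2 * 2 * 3 * 4 * 2 = 96 configurations.
    for extractor, discriminator, optimizer, batch_size, lr in product(
            feature_extractors, discriminators, optimizers, batch_sizes, learning_rates):
        config = {
            "feature_extractor": extractor,
            "discriminator": discriminator,
            "optimizer": optimizer,
            "batch_size": batch_size,
            "learning_rate": lr,
        }
        # A hypothetical train_and_evaluate(config) would train the SeqCapsGAN model
        # with this configuration and score the generated stylized captions.
        print(config)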
ISSN: 2549-9610, 2549-9904