Evolving storytelling: benchmarks and methods for new character customization with diffusion models

Diffusion-based models for story visualization have shown promise in generating content-coherent images for storytelling tasks. However, how to effectively integrate new characters into existing narratives while maintaining character consistency remains an open problem, particularly with limited data. Two major limitations hinder progress: (1) the absence of a suitable benchmark, due to potential character leakage and inconsistent text labeling, and (2) the challenge of distinguishing between new and old characters, leading to ambiguous results. To address these challenges, we introduce the NewEpisode benchmark, comprising refined datasets designed to evaluate generative models' adaptability in generating new stories with fresh characters using just a single example story. The refined datasets feature refined text prompts and eliminate character leakage. Additionally, to mitigate character confusion in generated results, we propose EpicEvo, a method that customizes a diffusion-based visual story generation model with a single story featuring the new characters, seamlessly integrating them into established character dynamics. EpicEvo introduces a novel adversarial character alignment module that progressively aligns generated images with exemplar images of new characters during the diffusion process, while applying knowledge distillation to prevent forgetting of characters and background details. Our evaluation quantitatively demonstrates that EpicEvo outperforms existing baselines on the NewEpisode benchmark, and qualitative studies confirm its superior customization of visual story generation in diffusion models. In summary, EpicEvo provides an effective way to incorporate new characters using only one example story, unlocking new possibilities for applications such as serialized cartoons.
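
The abstract describes EpicEvo's training objective only in prose. As a rough illustration, below is a minimal PyTorch-style sketch of how the three described ingredients (a denoising loss on the single example story, an adversarial character alignment term applied along the diffusion trajectory, and a knowledge distillation term against the frozen pretrained model) could be combined in one training step. Everything here is an assumption made for illustration, not the paper's implementation: the function name epicevo_step, the unet/teacher_unet/disc interfaces, the hinge adversarial loss, the DDPM schedule, and the loss weights are all hypothetical.

```python
# Hypothetical sketch only: interfaces, names, and loss weights are
# illustrative assumptions, not the authors' actual code.
import torch
import torch.nn.functional as F

T = 1000                                       # assumed number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)          # assumed linear DDPM beta schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal fractions

def epicevo_step(unet, teacher_unet, disc, batch, exemplars,
                 lambda_adv=0.1, lambda_kd=1.0):
    """One customization step on the single example story.
    unet:         trainable denoiser, called as unet(x_t, t, text_emb)
    teacher_unet: frozen copy of the pretrained model (same signature)
    disc:         scores whether an image estimate matches exemplar images
                  of the new character at step t (hypothetical interface)
    """
    x0, text_emb = batch["latents"], batch["text_emb"]
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    a = alpha_bar.to(x0.device)[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0)
    xt = a.sqrt() * x0 + (1.0 - a).sqrt() * noise  # forward diffusion q(x_t | x_0)

    # (1) Standard denoising loss on the new-character story.
    eps = unet(xt, t, text_emb)
    loss_diff = F.mse_loss(eps, noise)

    # (2) Adversarial character alignment: the clean-image estimate at step t
    # should be indistinguishable from exemplars of the new character, so the
    # alignment acts progressively along the diffusion trajectory.
    x0_hat = (xt - (1.0 - a).sqrt() * eps) / a.sqrt()
    loss_adv = -disc(x0_hat, t).mean()                       # generator side
    loss_disc = (F.relu(1.0 - disc(exemplars, t)).mean()     # hinge loss, real
                 + F.relu(1.0 + disc(x0_hat.detach(), t)).mean())  # and fake

    # (3) Knowledge distillation against the frozen teacher so previously
    # learned characters and backgrounds are not forgotten.
    with torch.no_grad():
        eps_teacher = teacher_unet(xt, t, text_emb)
    loss_kd = F.mse_loss(eps, eps_teacher)

    return loss_diff + lambda_adv * loss_adv + lambda_kd * loss_kd, loss_disc
```

In practice the generator-side losses and loss_disc would be optimized in alternating steps, as is standard for adversarial training; the paper's actual module design and weighting may differ from this sketch.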


Bibliographic Details
Main Authors: Wang, Xiyu, Wang, Yufei, Tsutsui, Satoshi, Lin, Weisi, Wen, Bihan, Kot, Alex Chichung
Other Authors: Interdisciplinary Graduate School (IGS)
Format: Conference Paper
Conference: 32nd ACM International Conference on Multimedia (MM '24)
Language: English
Published: 2024
Subjects: Computer and Information Science; Generative diffusion model; Story visualization; Generative model customization
Online Access: https://hdl.handle.net/10356/180038
https://2024.acmmm.org/accepted-list
https://dl.acm.org/doi/10.1145/3664647.3681373

Citation: Wang, X., Wang, Y., Tsutsui, S., Lin, W., Wen, B. & Kot, A. C. (2024). Evolving storytelling: benchmarks and methods for new character customization with diffusion models. 32nd ACM International Conference on Multimedia (MM '24), 3751-3760. https://dx.doi.org/10.1145/3664647.3681373
ISBN: 979-8-4007-0686-8
DOI: 10.1145/3664647.3681373
Pages: 3751-3760
Funding: Ministry of Education (MOE); Nanyang Technological University. The first author is under a scholarship funded by the Interdisciplinary Graduate Programme of Nanyang Technological University. This work was done at the Rapid-Rich Object Search (ROSE) Lab, Nanyang Technological University. This research is supported in part by the NTU-PKU Joint Research Institute (a collaboration between Nanyang Technological University and Peking University, sponsored by a donation from the Ng Teng Fong Charitable Foundation). The computational work for this article was partially performed on resources of the National Supercomputing Centre, Singapore (https://www.nscc.sg).
Rights: © 2024 Copyright held by the owner/author(s). This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License. Published version (application/pdf).