Tango 2: Aligning Diffusion-based Text-to-Audio Generations through Direct Preference Optimization
Main Authors: | Majumder, Navonil; Hung, Chia-Yu; Ghosal, Deepanway; Hsu, Wei-Ning; Mihalcea, Rada; Poria, Soujanya |
---|---|
Format: | Article |
Language: | English |
Published: | ACM, Proceedings of the 32nd ACM International Conference on Multimedia, 2024 |
Online Access: | https://hdl.handle.net/1721.1/157614 |
---|---|
author | Majumder, Navonil; Hung, Chia-Yu; Ghosal, Deepanway; Hsu, Wei-Ning; Mihalcea, Rada; Poria, Soujanya |
collection | MIT |
description | Generative multimodal content is increasingly prevalent in much of the content creation arena, as it has the potential to allow artists and media personnel to create pre-production mockups by quickly bringing their ideas to life. The generation of audio from text prompts is an important aspect of such processes in the music and film industry. Many of the recent diffusion-based text-to-audio models focus on training increasingly sophisticated diffusion models on large datasets of prompt-audio pairs. These models do not explicitly focus on the presence of concepts or events and their temporal ordering in the output audio with respect to the input prompt. Our hypothesis is that focusing on these aspects of audio generation could improve audio generation performance in the presence of limited data. As such, in this work, using the existing text-to-audio model Tango, we synthetically create a preference dataset in which each prompt has a winner audio output and several loser audio outputs for the diffusion model to learn from. The loser outputs, in theory, have some concepts from the prompt missing or in an incorrect order. We fine-tune the publicly available Tango text-to-audio model using the diffusion-DPO (direct preference optimization) loss on our preference dataset and show that it leads to improved audio output over Tango and AudioLDM2, in terms of both automatic and manual evaluation metrics. (A schematic sketch of the diffusion-DPO preference loss appears after this record.) |
format | Article |
id | mit-1721.1/157614 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2024 |
publisher | ACM|Proceedings of the 32nd ACM International Conference on Multimedia |
type | Conference paper (Article) |
date issued | 2024-10-28 |
isbn | 979-8-4007-0686-8 |
doi | https://doi.org/10.1145/3664647.3681688 |
citation | Majumder, Navonil, Hung, Chia-Yu, Ghosal, Deepanway, Hsu, Wei-Ning, Mihalcea, Rada et al. 2024. "Tango 2: Aligning Diffusion-based Text-to-Audio Generations through Direct Preference Optimization." |
rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. The author(s) |
file format | application/pdf |
title | Tango 2: Aligning Diffusion-based Text-to-Audio Generations through Direct Preference Optimization |
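Note on the method named in the description: the record only mentions that Tango is fine-tuned with a diffusion-DPO (direct preference optimization) loss on winner/loser audio pairs. For orientation, below is a minimal, hedged sketch of such a preference loss in PyTorch. It assumes a Tango-style latent-diffusion noise-prediction network called as `model(noisy_latents, timesteps, text_emb)` and a diffusers-style scheduler exposing `add_noise`; the function name `diffusion_dpo_loss`, the argument names, and the `beta` default are illustrative placeholders, not the paper's actual code or hyperparameters.

```python
import torch
import torch.nn.functional as F


def diffusion_dpo_loss(model, ref_model, scheduler, text_emb,
                       latents_w, latents_l,
                       num_train_timesteps=1000, beta=2000.0):
    """Preference loss on one (winner, loser) latent pair per prompt.

    model      : trainable noise-prediction network (epsilon_theta)
    ref_model  : frozen copy of the pre-trained model (epsilon_ref)
    latents_w  : latents of the preferred ("winner") audio
    latents_l  : latents of a rejected ("loser") audio
    beta       : strength of the preference constraint (illustrative default)
    """
    b = latents_w.shape[0]
    t = torch.randint(0, num_train_timesteps, (b,), device=latents_w.device)
    noise = torch.randn_like(latents_w)

    # Noise both branches with the same noise and timestep so the comparison is paired.
    noisy_w = scheduler.add_noise(latents_w, noise, t)
    noisy_l = scheduler.add_noise(latents_l, noise, t)

    def denoise_err(pred):
        # Per-sample squared error of the predicted noise against the injected noise.
        return ((pred - noise) ** 2).mean(dim=tuple(range(1, pred.ndim)))

    err_w = denoise_err(model(noisy_w, t, text_emb))
    err_l = denoise_err(model(noisy_l, t, text_emb))
    with torch.no_grad():
        ref_err_w = denoise_err(ref_model(noisy_w, t, text_emb))
        ref_err_l = denoise_err(ref_model(noisy_l, t, text_emb))

    # inside < 0 when the trainable model improves (relative to the reference)
    # more on the winner than on the loser; the loss rewards exactly that.
    inside = (err_w - ref_err_w) - (err_l - ref_err_l)
    return -F.logsigmoid(-beta * inside).mean()
```

The sign convention follows the common diffusion-DPO formulation: the trainable model is rewarded when it reduces the denoising error on the winner latents, relative to the frozen reference model, by more than it does on the loser latents, which pushes generations toward the preferred (concept-complete, correctly ordered) outputs described in the abstract.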