Large motion model for unified multi-modal motion generation

Human motion generation, a cornerstone technique in animation and video production, has widespread applications in various tasks like text-to-motion and music-to-dance. Previous works focus on developing specialist models tailored to each task, which limits scalability. In this work, we present the Large Motion Model (LMM), a motion-centric, multi-modal framework that unifies mainstream motion generation tasks into a generalist model. A unified motion model is appealing since it can leverage a wide range of motion data to achieve broad generalization beyond a single task. However, it is also challenging due to the heterogeneous nature of substantially different motion data and tasks. LMM tackles these challenges from three principled aspects: 1) Data: We consolidate datasets with different modalities, formats and tasks into a comprehensive yet unified motion generation dataset, MotionVerse, comprising 10 tasks, 16 datasets, a total of 320k sequences, and 100 million frames. 2) Architecture: We design an articulated attention mechanism, ArtAttention, that incorporates body part-aware modeling into the Diffusion Transformer backbone. 3) Pre-Training: We propose a novel pre-training strategy for LMM, which employs variable frame rates and masking forms, to better exploit knowledge from diverse training data. Extensive experiments demonstrate that our generalist LMM achieves competitive performance on various standard motion generation tasks compared with state-of-the-art specialist models. Notably, LMM exhibits strong generalization capabilities and emergent properties across many unseen tasks. Additionally, our ablation studies reveal valuable insights into training and scaling up large motion models for future research.
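
The record itself gives no implementation details, but the body part-aware attention mentioned in the abstract can be illustrated with a minimal sketch. The token layout, part grouping, and mask semantics below are assumptions chosen purely for illustration; they are not the paper's actual ArtAttention design.

# Minimal sketch of a body part-aware attention mask, assuming a toy
# layout of 11 motion tokens per frame grouped into five body parts.
# Illustrative only; not the paper's ArtAttention mechanism.
import numpy as np

PARTS = {                      # hypothetical token indices per body part
    "torso": [0, 1, 2],
    "left_arm": [3, 4],
    "right_arm": [5, 6],
    "left_leg": [7, 8],
    "right_leg": [9, 10],
}
NUM_TOKENS = 11

def part_aware_mask(num_tokens: int = NUM_TOKENS) -> np.ndarray:
    """Return a boolean matrix where True lets token i attend to token j."""
    mask = np.zeros((num_tokens, num_tokens), dtype=bool)
    for idx in PARTS.values():
        mask[np.ix_(idx, idx)] = True   # full attention within a body part
    torso = PARTS["torso"]
    mask[:, torso] = True               # every part can read the torso tokens
    mask[torso, :] = True               # torso tokens aggregate all parts
    return mask

if __name__ == "__main__":
    print(part_aware_mask().astype(int))  # block-structured 11 x 11 pattern

Such a mask would be applied inside each attention layer so that per-part features stay largely decoupled while global context is exchanged through a shared set of tokens; how LMM actually couples body parts within its Diffusion Transformer backbone is specified in the paper itself.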


Bibliographic Details
Main Authors: Zhang, Mingyuan, Jin, Daisheng, Gu, Chenyang, Hong, Fangzhou, Cai, Zhongang, Huang, Jingfang, Zhang, Chongzhi, Guo, Xinying, Yang, Lei, He, Ying, Liu, Ziwei
Other Authors: College of Computing and Data Science, S-Lab
Format: Conference Paper
Language: English
Published: 2024
Subjects: Computer and Information Science; Motion generation; Unified model; Multi-modality
Online Access: https://hdl.handle.net/10356/180277
http://arxiv.org/abs/2404.01284v1
Conference: 2024 European Conference on Computer Vision (ECCV)
Version: Submitted/Accepted version
DOI: 10.48550/arXiv.2404.01284
Citation: Zhang, M., Jin, D., Gu, C., Hong, F., Cai, Z., Huang, J., Zhang, C., Guo, X., Yang, L., He, Y. & Liu, Z. (2024). Large motion model for unified multi-modal motion generation. 2024 European Conference on Computer Vision (ECCV). https://dx.doi.org/10.48550/arXiv.2404.01284
Funding Agencies: Ministry of Education (MOE); Nanyang Technological University
Grants: MOET2EP20221-0012; NTU NAP; IAF-ICP
Funding Note: This study is supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOET2EP20221-0012), NTU NAP, and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).
Rights: © 2024 ECCV. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder.