AvatarStudio: Text-driven Editing of 3D Dynamic Human Head Avatars

Capturing and editing full-head performances enables the creation of virtual characters with various applications such as extended reality and media production. The past few years witnessed a steep rise in the photorealism of human head avatars. Such avatars can be controlled through different input data modalities, including RGB, audio, depth, IMUs, and others. While these data modalities provide effective means of control, they mostly focus on editing the head movements such as the facial expressions, head pose, and/or camera viewpoint. In this paper, we propose AvatarStudio, a text-based method for editing the appearance of a dynamic full head avatar. Our approach builds on existing work to capture dynamic performances of human heads using Neural Radiance Field (NeRF) and edits this representation with a text-to-image diffusion model. Specifically, we introduce an optimization strategy for incorporating multiple keyframes representing different camera viewpoints and time stamps of a video performance into a single diffusion model. Using this personalized diffusion model, we edit the dynamic NeRF by introducing view-and-time-aware Score Distillation Sampling (VT-SDS) following a model-based guidance approach. Our method edits the full head in a canonical space and then propagates these edits to the remaining time steps via a pre-trained deformation network. We evaluate our method visually and numerically via a user study, and results show that our method outperforms existing approaches. Our experiments validate the design choices of our method and highlight that our edits are genuine, personalized, as well as 3D- and time-consistent.
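
The abstract's key optimization is a view-and-time-aware variant of Score Distillation Sampling (VT-SDS) that drives a dynamic NeRF with a personalized diffusion model. The sketch below illustrates only the generic SDS update pattern that such a method builds on; it is not the authors' implementation, and toy_noise_predictor, render_canonical, the toy noise schedule, and all tensor shapes are hypothetical stand-ins for the personalized diffusion model and the NeRF renderer.

# Minimal SDS-style update sketch (hypothetical stand-ins, not the AvatarStudio codebase).
import torch

def toy_noise_predictor(noisy_img, t, view, time_stamp):
    # Placeholder for a view-and-time-conditioned diffusion model's noise prediction.
    return noisy_img * 0.1

def render_canonical(params, view, time_stamp):
    # Placeholder for rendering the dynamic scene representation from a camera view at a time stamp.
    return params.tanh()

params = torch.randn(3, 64, 64, requires_grad=True)  # stand-in for the renderable parameters
opt = torch.optim.Adam([params], lr=1e-2)

for step in range(100):
    view, time_stamp = torch.rand(3), torch.rand(1)   # sample a camera viewpoint and a time stamp
    img = render_canonical(params, view, time_stamp)

    t = torch.randint(1, 1000, (1,))                  # diffusion time step
    alpha = 1.0 - t.float() / 1000.0                  # toy noise schedule
    noise = torch.randn_like(img)
    noisy = alpha.sqrt() * img + (1 - alpha).sqrt() * noise

    eps_hat = toy_noise_predictor(noisy, t, view, time_stamp)

    # SDS trick: detach the residual so gradients flow only through the renderer,
    # skipping the diffusion model's Jacobian.
    residual = (eps_hat - noise).detach()
    loss = (residual * img).sum()

    opt.zero_grad()
    loss.backward()
    opt.step()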

Full details

Bibliographic details
Main authors: Mendiratta, Mohit; Pan, Xingang; Elgharib, Mohamed; Teotia, Kartik; B R, Mallikarjun; Tewari, Ayush; Golyanik, Vladislav; Kortylewski, Adam; Theobalt, Christian
Other authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Material type: Article
Language: English
Published: ACM, 2024
Links: https://hdl.handle.net/1721.1/153278
Published in: ACM Transactions on Graphics, 42 (6)
Date issued: 2023-12-04
ISSN: 0730-0301
DOI: https://doi.org/10.1145/3618368
License: Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/)
Citation: Mendiratta, Mohit, Pan, Xingang, Elgharib, Mohamed, Teotia, Kartik, B R, Mallikarjun et al. 2023. "AvatarStudio: Text-driven Editing of 3D Dynamic Human Head Avatars." ACM Transactions on Graphics, 42 (6).