Creating lifelike 3D head avatars and generating compelling animations for diverse subjects remain challenging in computer vision. This paper presents GaussianHead, which models the active head with anisotropic 3D Gaussians. Our method integrates a motion deformation field and a single-resolution tri-plane to capture the head's intricate dynamics and detailed texture. Notably, we introduce a customized derivation scheme for each 3D Gaussian, facilitating the generation of multiple "doppelgangers" through learnable parameters for precise position transformation. This approach enables efficient representation of diverse Gaussian attributes and ensures their precision. Additionally, we propose an inherited derivation strategy for newly added Gaussians to expedite training. Extensive experiments demonstrate GaussianHead's efficacy, achieving high-fidelity visual results with a remarkably compact model size (≈ 12 MB). Our method outperforms state-of-the-art alternatives in tasks such as reconstruction, cross-identity reenactment, and novel view synthesis.
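To make the derivation idea above more concrete, the following PyTorch sketch illustrates one plausible reading of it: each canonical Gaussian position is mapped to several "doppelganger" positions through learnable per-Gaussian transforms, a single-resolution tri-plane is queried at each doppelganger, and the resulting features are fused before decoding Gaussian attributes. All names (`TriPlane`, `GaussianDerivation`), the axis-angle parameterization, and the average fusion are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of "learnable Gaussian derivation" + tri-plane querying.
# Class names, the rotation-based derivation, and the fusion rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TriPlane(nn.Module):
    """Single-resolution tri-plane: three learnable 2D feature grids (xy, xz, yz)."""

    def __init__(self, resolution: int = 128, channels: int = 16):
        super().__init__()
        self.planes = nn.Parameter(0.1 * torch.randn(3, channels, resolution, resolution))

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) points assumed to lie in [-1, 1]^3.
        coords = torch.stack(
            [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]], dim=0
        )  # (3, N, 2): one 2D projection per plane
        feats = F.grid_sample(self.planes, coords.unsqueeze(2), align_corners=True)  # (3, C, N, 1)
        return feats.squeeze(-1).permute(2, 0, 1).reshape(xyz.shape[0], -1)  # (N, 3*C)


class GaussianDerivation(nn.Module):
    """Derive K 'doppelganger' positions per Gaussian via learnable rotations (assumption)."""

    def __init__(self, num_gaussians: int, num_doppelgangers: int = 4, channels: int = 16):
        super().__init__()
        self.K = num_doppelgangers
        # One learnable axis-angle rotation per Gaussian per doppelganger.
        self.axis_angles = nn.Parameter(0.01 * torch.randn(num_gaussians, num_doppelgangers, 3))
        self.triplane = TriPlane(channels=channels)
        # Small decoder from fused tri-plane features to example attributes (RGB + opacity).
        self.decoder = nn.Sequential(nn.Linear(3 * channels, 64), nn.ReLU(), nn.Linear(64, 4))

    @staticmethod
    def _axis_angle_to_matrix(v: torch.Tensor) -> torch.Tensor:
        # Rodrigues' formula; v: (..., 3) -> (..., 3, 3).
        theta = v.norm(dim=-1, keepdim=True).clamp(min=1e-8)
        k = v / theta
        skew = torch.zeros(*v.shape[:-1], 3, 3, device=v.device)
        skew[..., 0, 1], skew[..., 0, 2] = -k[..., 2], k[..., 1]
        skew[..., 1, 0], skew[..., 1, 2] = k[..., 2], -k[..., 0]
        skew[..., 2, 0], skew[..., 2, 1] = -k[..., 1], k[..., 0]
        eye = torch.eye(3, device=v.device).expand_as(skew)
        s, c = torch.sin(theta)[..., None], torch.cos(theta)[..., None]
        return eye + s * skew + (1.0 - c) * (skew @ skew)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) Gaussian centers (e.g. after a motion deformation field).
        R = self._axis_angle_to_matrix(self.axis_angles)             # (N, K, 3, 3)
        doppelgangers = (R @ xyz[:, None, :, None]).squeeze(-1)      # (N, K, 3)
        feats = self.triplane(doppelgangers.reshape(-1, 3))          # (N*K, 3*C)
        fused = feats.reshape(xyz.shape[0], self.K, -1).mean(dim=1)  # simple average fusion
        return self.decoder(fused)                                   # (N, 4): RGB + opacity


# Usage: decode attributes for 10k Gaussians with random canonical positions.
model = GaussianDerivation(num_gaussians=10_000)
attrs = model(torch.rand(10_000, 3) * 2 - 1)
print(attrs.shape)  # torch.Size([10000, 4])
```

The sketch only conveys the data flow (per-Gaussian learnable transforms produce multiple query points whose tri-plane features are fused); the paper's actual transformation, fusion, and the inherited derivation for newly added Gaussians may differ.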
@misc{wang2024gaussianhead,
    title={GaussianHead: High-fidelity Head Avatars with Learnable Gaussian Derivation},
    author={Jie Wang and Jiu-Cheng Xie and Xianyan Li and Feng Xu and Chi-Man Pun and Hao Gao},
    year={2024},
    eprint={2312.01632},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}