ETH Sensing, Interaction & Perception Lab

AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing

Jiaxi Jiang, Paul Streli, Huajian Qiu, Andreas Fender, Larissa Laich, Patrick Snape, and Christian Holz.

Proceedings of ECCV 2022.

Video

Publication

Jiaxi Jiang, Paul Streli, Huajian Qiu, Andreas Fender, Larissa Laich, Patrick Snape, and Christian Holz. AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing. In Proceedings of ECCV 2022.

Teaser

[Teaser image: AvatarPoser]

We address the new problem of full-body avatar pose estimation from sparse tracking sources, which can significantly enhance embodiment, presence, and immersion in Mixed Reality. Our novel Transformer-based method AvatarPoser takes as input only the positions and orientations of one headset and two handheld controllers (or hands) and generates a full-body avatar pose across 22 joints. Our method reaches state-of-the-art pose accuracy while providing a practical interface into the Metaverse.
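To make this mapping concrete, below is a minimal sketch of such a Transformer-based model that turns sparse tracking signals into per-joint rotations, written in PyTorch. All module names, dimensions, and hyperparameters here are illustrative assumptions for exposition, not the authors' released implementation.

import torch
import torch.nn as nn

class SparsePoseTransformer(nn.Module):
    """Illustrative sketch: maps sparse 6-DoF tracking signals from the
    headset and two hand trackers to local rotations for 22 body joints."""

    def __init__(self, input_dim=54, embed_dim=256, num_layers=3, num_joints=22):
        super().__init__()
        self.num_joints = num_joints
        # Per-frame input (assumed): 3 trackers x (3D position, 6D rotation,
        # and their per-frame velocities) = 3 x (3 + 6 + 3 + 6) = 54 values.
        self.embed = nn.Linear(input_dim, embed_dim)
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Predict a 6D rotation representation per joint.
        self.head = nn.Linear(embed_dim, num_joints * 6)

    def forward(self, x):
        # x: (batch, time, input_dim), a sliding window of tracking signals.
        feats = self.encoder(self.embed(x))
        # Use the last frame's feature to predict the current full-body pose.
        return self.head(feats[:, -1]).view(-1, self.num_joints, 6)

model = SparsePoseTransformer()
window = torch.randn(1, 40, 54)   # 40-frame window of head/hand signals
rot6d = model(window)             # (1, 22, 6) predicted joint rotations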

Abstract

Today’s Mixed Reality head-mounted displays track the user’s head pose in world space as well as the user’s hands for interaction in both Augmented Reality and Virtual Reality scenarios. While this is adequate to support user input, it unfortunately limits users’ virtual representations to just their upper bodies. Current systems thus resort to floating avatars, whose limitations are particularly evident in collaborative settings. To estimate full-body poses from the sparse input sources, prior work has incorporated additional trackers and sensors at the pelvis or lower body, which increases setup complexity and limits practical application in mobile settings. In this paper, we present AvatarPoser, the first learning-based method that predicts full-body poses in world coordinates using only motion input from the user’s head and hands. Our method builds on a Transformer encoder to extract deep features from the input signals and decouples global motion from the learned local joint orientations to guide pose estimation. To obtain accurate full-body motions that resemble motion capture animations, we refine the arm joints’ positions using an optimization routine with inverse kinematics to match the original tracking input. In our evaluation, AvatarPoser achieved new state-of-the-art results on large motion capture datasets (AMASS). At the same time, our method’s inference speed supports real-time operation, providing a practical interface for holistic avatar control and representation in Metaverse applications.
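The inverse-kinematics refinement mentioned above can be pictured as a small gradient-based optimization over the arm-chain joints, assuming a differentiable forward-kinematics function for the body model (e.g., SMPL). The sketch below is a hedged approximation of that idea; the function names, joint indices, and optimizer choice are assumptions, not the paper's exact routine.

import torch

def refine_arms(pose, target_hands, fk, arm_joint_ids, steps=20, lr=0.01):
    """Optimize only the arm joints so that the forward-kinematics hand
    positions match the tracked controller/hand positions.

    pose:          (22, 3) predicted joint rotations (axis-angle assumed)
    target_hands:  (2, 3) tracked left/right hand positions in world space
    fk:            differentiable function mapping a full pose to
                   (22, 3) joint positions (assumed to exist)
    arm_joint_ids: indices of the arm-chain joints to refine
    """
    pose = pose.detach().clone()
    arm = pose[arm_joint_ids].clone().requires_grad_(True)
    optimizer = torch.optim.Adam([arm], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        full = pose.clone()
        full[arm_joint_ids] = arm           # splice optimized joints back in
        joints = fk(full)                   # forward kinematics
        # 20 and 21 are the wrist joints in the 22-joint SMPL body (assumed).
        loss = ((joints[[20, 21]] - target_hands) ** 2).sum()
        loss.backward()
        optimizer.step()
    pose[arm_joint_ids] = arm.detach()
    return pose

Restricting the optimization to the arm joints keeps the learned full-body pose intact while pinning the avatar's hands to the original tracking input, which is the trade-off the abstract describes.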