Multimodal Avatar Face Reconstruction from Smart Glasses (in collaboration with Google XR)
We will explore the future of facial expression tracking for AR/MR smart glasses, combining the sensors built into the glasses (e.g., eye tracker, IMU, egocentric camera) with small additional cameras. We will build a compact capture setup that integrates the smart glasses, the additional cameras, and stationary cameras for reference recordings of facial expressions. Afterwards, we will implement a new method to reconstruct facial expressions from the smart-glasses input alone. The goal of this real-time reconstruction is to drive expressive 3D avatar animation, enabling natural teleconferencing and presence in Augmented Reality.
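As a rough illustration of the reconstruction step, the sketch below shows a late-fusion baseline: per-sensor feature vectors from the eye tracker, IMU, and egocentric camera are concatenated and linearly regressed to blendshape coefficients that drive the avatar. All dimensions, the blendshape count, and the random weights are hypothetical placeholders; in practice the mapping would be learned from the reference recordings captured with the stationary cameras.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sensor feature dimensions (illustrative, not from the project).
DIM_EYE, DIM_IMU, DIM_CAM = 8, 6, 32   # eye tracker, IMU, egocentric camera
NUM_BLENDSHAPES = 52                   # e.g., an ARKit-style blendshape rig

# Placeholder fusion weights; a real system would learn these from data.
W = rng.standard_normal((DIM_EYE + DIM_IMU + DIM_CAM, NUM_BLENDSHAPES)) * 0.1
b = np.zeros(NUM_BLENDSHAPES)

def fuse_to_blendshapes(eye, imu, cam):
    """Late fusion: concatenate sensor features, regress blendshape
    coefficients, and squash them to [0, 1] with a sigmoid."""
    x = np.concatenate([eye, imu, cam])
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

# One frame of simulated sensor input.
coeffs = fuse_to_blendshapes(
    rng.standard_normal(DIM_EYE),
    rng.standard_normal(DIM_IMU),
    rng.standard_normal(DIM_CAM),
)
print(coeffs.shape)  # (52,)
```

A learned model (e.g., a small temporal network per sensor before fusion) would replace the linear map, but the interface — sensor features in, per-frame blendshape coefficients out — stays the same for real-time avatar driving.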