Multi-View 3D Hand Pose Estimation from Dynamic Egocentric Captures
This project explores multi-view 3D hand pose estimation, leveraging several dynamically moving viewpoints to address the persistent challenges of occlusion, viewpoint ambiguity, and fine-grained articulation. The goal is to develop a modern multi-view fusion model that exploits cross-view geometric consistency and spatio-temporal cues to achieve robust hand pose estimation; one such consistency cue is sketched below. We will also benefit from access to a motion simulation engine capable of generating large-scale synthetic multi-camera datasets, which will enable experimentation with synthetic-to-real transfer and state-of-the-art training strategies.
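
To make the cross-view geometric consistency idea concrete, the following is a minimal sketch, not the project's actual model: triangulating a single 3D hand joint from its 2D keypoint detections in several calibrated views via the standard Direct Linear Transform (DLT). The function name `triangulate_joint` and the input conventions are illustrative assumptions, not part of any existing codebase.

```python
import numpy as np

def triangulate_joint(proj_mats, points_2d):
    """Triangulate one 3D joint from N >= 2 calibrated views.

    proj_mats: list of N (3, 4) camera projection matrices P = K [R | t].
    points_2d: (N, 2) array of the joint's 2D pixel coordinates per view.
    Returns the 3D joint position as a (3,) array.
    """
    # Each view contributes two linear constraints on the homogeneous
    # 3D point X: x * (P[2] @ X) = P[0] @ X and y * (P[2] @ X) = P[1] @ X.
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The least-squares solution is the right singular vector of A
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

if __name__ == "__main__":
    # Toy check with two synthetic cameras observing a known 3D point.
    K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
    X_true = np.array([0.1, -0.2, 2.0, 1.0])
    pts = []
    for P in (P1, P2):
        x = P @ X_true
        pts.append(x[:2] / x[2])
    print(triangulate_joint([P1, P2], np.array(pts)))  # ~ [0.1, -0.2, 2.0]
```

In a full pipeline, each of the (typically 21) hand joints would be triangulated independently per frame, and the reprojection error across views could serve as the geometric consistency signal that a learned fusion model refines with temporal cues.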



