Multimodal Deep Learning for Emotion Recognition from Egocentric and Physiological Data
This project investigates how deep learning models can leverage egocentric vision and physiological signals to predict human emotions in real-world scenarios. The goal is to model underlying human affective states, including distinct emotions, personality types, and continuous affect, by combining first-person visual data from smart glasses with concurrent physiological signals such as heart rate, respiration, and skin conductance. The project builds on egoEMOTION, a dataset we recently recorded that explores how wearable and camera-based sensing can capture subtle cues of emotion and personality during naturalistic activities.
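One common way to combine modalities like these is late fusion: each modality is encoded separately and the resulting embeddings are concatenated before a shared prediction head. The sketch below illustrates that idea in NumPy with random placeholder weights; the embedding sizes, the linear `encode` helper, and the five-class emotion output are illustrative assumptions, not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Linear projection + ReLU as a stand-in for a real modality encoder
    # (e.g. a CNN for video frames, a temporal model for biosignals).
    return np.maximum(W @ x, 0.0)

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical inputs: a 512-d egocentric frame embedding and a 3-d
# physiological summary (heart rate, respiration, skin conductance).
visual = rng.normal(size=512)
physio = rng.normal(size=3)

W_v = rng.normal(size=(64, 512)) * 0.05   # visual encoder weights
W_p = rng.normal(size=(64, 3)) * 0.05     # physiological encoder weights
W_out = rng.normal(size=(5, 128)) * 0.05  # head over 5 hypothetical emotion classes

# Late fusion: concatenate the two 64-d modality embeddings, then classify.
fused = np.concatenate([encode(visual, W_v), encode(physio, W_p)])
probs = softmax(W_out @ fused)

print(probs.shape)  # (5,)
```

In practice the same structure also supports regression heads for continuous affect (e.g. valence/arousal) by replacing the softmax with a linear output.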