Causality-preserving Asynchronous Reality

ACM CHI 2022, Best Paper Award
Andreas Fender and Christian Holz

Our system volumetrically captures physical events and lets immersed users experience them asynchronously yet in a causally accurate manner. (1) The user is immersed in a virtual replica of his real office and uses his physical mouse and keyboard to interact with virtual displays. (2) A coworker enters the room and leaves a spoken message while placing an object on the table; because the user has activated Focus Mode, AsyncReality conceals the event's visual and auditory sensations from him. (3) After completing his task, the user discovers an artifact on the table and approaches it, which (4) triggers playback of the captured event. (5) Once the playback finishes, the user can interact with the real object.
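For illustration, the following Python sketch captures the mode logic this sequence walks through. All names are ours, not AsyncReality's actual API, and the real system conceals and replays volumetric recordings rather than text placeholders.

```python
from enum import Enum, auto

class Mode(Enum):
    LIVE = auto()         # (1)/(5): reality is rendered in real time
    FOCUS = auto()        # (2): events are captured but concealed
    CATCHING_UP = auto()  # (4): a recorded event is being played back

class AsyncRealitySketch:
    def __init__(self):
        self.mode = Mode.LIVE
        self.pending = []  # concealed events, oldest first

    def physical_event(self, event):
        """Volumetric capture runs regardless of mode; rendering does not."""
        if self.mode is Mode.FOCUS:
            self.pending.append(event)  # conceal from the user (2)
        else:
            print("render live:", event)

    def approach_artifact(self):
        """Discovering an artifact (3) triggers playback of its event (4)."""
        if self.mode is not Mode.FOCUS and self.pending:
            self.mode = Mode.CATCHING_UP
            print("play back:", self.pending.pop(0))
            self.mode = Mode.LIVE  # (5): the user can now interact live
```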

Abstract

Mixed Reality is gaining interest as a platform for collaboration and focused work to a point where it may supersede current office settings in future workplaces. At the same time, we expect that interaction with physical objects and face-to-face communication will remain crucial for future work environments, which is a particular challenge in fully immersive Virtual Reality. In this work, we reconcile those requirements through a user’s individual Asynchronous Reality, which enables seamless physical interaction across time. When a user is unavailable, e.g., focused on a task or in a call, our approach captures co-located or remote physical events in real-time, constructs a causality graph of co-dependent events, and lets immersed users revisit them at a suitable time in a causally accurate way. Enabled by our system AsyncReality, we present a workplace scenario that includes walk-in interruptions during a person’s focused work, physical deliveries, and transient spoken messages. We then generalize our approach to a use-case-agnostic concept and system architecture. We conclude by discussing the implications of an Asynchronous Reality for future offices.

Video

Reference

Andreas Fender and Christian Holz. 2022. Causality-preserving Asynchronous Reality. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22). ACM.

More images

Asynchronous Reality prototype apparatus

We installed four Azure Kinect cameras (A) in an office (B). The user wears an Oculus Quest 2 (C) with a RealSense D435 camera mounted on it, angled slightly downward to capture the user's hands and nearby objects.
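As a sketch of how frames from these five cameras can be merged, the snippet below transforms each camera's point cloud into a shared world frame via a 4x4 extrinsic matrix and concatenates the results. This assumes a one-time extrinsic calibration; the paper's exact capture pipeline may differ, and the function names are ours.

```python
import numpy as np

def to_world(points_cam: np.ndarray, extrinsics: np.ndarray) -> np.ndarray:
    """Map an (N, 3) camera-space point cloud into the shared world frame
    using a 4x4 camera-to-world extrinsic matrix (assumed calibrated)."""
    homogeneous = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (extrinsics @ homogeneous.T).T[:, :3]

def fuse(frames: dict, extrinsics: dict) -> np.ndarray:
    """Concatenate the world-space clouds of all cameras, e.g. the four
    Azure Kinects plus the headset-mounted RealSense."""
    return np.vstack([to_world(pts, extrinsics[cam]) for cam, pts in frames.items()])
```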

Rendering of the virtual replica of our office space

A virtual replica of the office in our scenario. Instead of the real office, the user sees this reduced virtual representation. Depending on the system state, we augment this static replica with virtual content as well as with parts of real-time point clouds (live or recorded).

Structure of causality graphs

Generalized components of a Causality Graph. Every event consists of one or more Manipulations. Each Manipulation is either a Causality Node (C), which acts as a dependency for at least one other event, or a Trigger (T) if it is not a dependency for any other event. An event can depend on one or more Causality Nodes (not necessarily from the same event), and all of its dependencies must be ‘fulfilled’ before it can be played back.
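The sketch below restates this structure as Python data types. The class names mirror the terms in the caption, but the code and the example at the end are our illustration, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Manipulation:
    """One change an event makes to the scene. It acts as a Causality
    Node (C) if any other event depends on it, otherwise as a Trigger (T)."""
    label: str
    dependents: list = field(default_factory=list)
    fulfilled: bool = False

    @property
    def is_trigger(self) -> bool:
        return not self.dependents

@dataclass
class Event:
    """A recorded event with its Manipulations and the Causality Nodes
    (possibly belonging to several earlier events) it depends on."""
    name: str
    manipulations: list
    dependencies: list = field(default_factory=list)

    def can_play_back(self) -> bool:
        # Playable only once every dependency has been fulfilled.
        return all(dep.fulfilled for dep in self.dependencies)

    def play_back(self) -> None:
        assert self.can_play_back()
        for m in self.manipulations:
            m.fulfilled = True  # unblocks events that depend on this node

# Hypothetical example: a spoken note about a package can only be
# replayed after the event that placed the package has been seen.
drop_off = Manipulation("package placed on desk")
delivery = Event("delivery", [drop_off])
note = Event("spoken note", [Manipulation("message left")], dependencies=[drop_off])
drop_off.dependents.append(note)

assert not note.can_play_back()
delivery.play_back()
assert note.can_play_back()
```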

Architecture for implementing an Asynchronous Reality system

Overview of the general system architecture and data flow for implementing an Asynchronous Reality. The system receives RGB-D data from the local or remote room (top-left) as well as from the space around the user (top-right). If the user is unavailable (e.g., in Focus Mode) or currently catching up with reality, the asynchronous processing and rendering pipeline is used (see left-most condition). Otherwise, the system simply renders the point clouds live (from the local space and/or from the remote space during calls).
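As a rough sketch, the per-frame dispatch described above could look as follows; every object is a hypothetical stand-in for a component in the diagram rather than an actual AsyncReality interface.

```python
def render_frame(user, local_cloud, remote_cloud,
                 recorder, async_pipeline, live_renderer):
    """One frame of the dispatch logic. `user` exposes availability flags;
    the clouds are the latest RGB-D-derived point clouds (top of diagram)."""
    if user.unavailable or user.catching_up:
        # Left-most condition: keep recording incoming reality and render
        # through the asynchronous processing pipeline instead of live.
        recorder.capture(local_cloud, remote_cloud)
        async_pipeline.render(user)
    else:
        # Render the local space live; include the remote space during calls.
        live_renderer.render(local_cloud)
        if user.in_call and remote_cloud is not None:
            live_renderer.render(remote_cloud)
```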