Research and publications

Our research spans computational interaction, physical computing, virtual/mixed reality and haptics, and predictive health monitoring. For general inquiries and requests, email Christian at christian.holz@inf.ethz.ch.

2020

Haptic PIVOT

On-Demand Handhelds in VR. ACM UIST 2020.

Robert Kovacs, Eyal Ofek, Mar Gonzalez-Franco, Alexa Fay Siu, Sebastian Marwecki, Christian Holz and Mike Sinclair. ACM UIST 2020.
PDF

Abstract:

We present PIVOT, a wrist-worn haptic device that renders virtual objects into the user’s hand on demand. Its simple design comprises a single actuated joint that pivots a haptic handle into and out of the user’s hand, rendering the haptic sensations of grasping, catching, or throwing an object – anywhere in space. Unlike existing hand-held haptic devices and haptic gloves, PIVOT leaves the user’s palm free when not in use, allowing users to make unencumbered use of their hand. PIVOT also enables rendering forces acting on the held virtual objects, such as gravity, inertia, or air-drag, by actively driving its motor while the user is firmly holding the handle. When wearing PIVOT devices on both hands, users can add haptic feedback to bimanual interaction, such as lifting larger objects. In our user study, 12 participants evaluated the realism of grabbing and releasing objects of different shapes and sizes (mean score 5.19 on a 1-to-7 scale), rated the ability to catch and throw balls in different directions and at different velocities (mean = 5.5), and verified the ability to render the comparative weight of held objects with 87% accuracy for ~100 g increments.
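
To make the force-rendering idea concrete, here is a minimal Python sketch of how a pivoting handle could render the weight of a held virtual object by driving its motor. The arm length and the torque rule are illustrative assumptions, not values or code from the paper.

    import math

    ARM_LENGTH_M = 0.10   # assumed pivot-arm length, not from the paper
    G = 9.81              # gravitational acceleration (m/s^2)

    def weight_torque(virtual_mass_kg, arm_angle_rad, vertical_accel_ms2=0.0):
        """Motor torque (N*m) so the held handle feels like a mass under
        gravity plus the hand's vertical acceleration (inertia)."""
        force = virtual_mass_kg * (G + vertical_accel_ms2)
        # project onto the lever arm; largest when the arm is horizontal
        return force * ARM_LENGTH_M * math.cos(arm_angle_rad)

    # the ~100 g increments from the study correspond to torque steps of
    print(weight_torque(0.3, 0.0) - weight_torque(0.2, 0.0))   # ~0.098 N*m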

Omni

Volumetric Sensing and Actuation of Passive Magnetic Tools for Dynamic Haptic Feedback. ACM UIST 2020.

Thomas Langerak, Juan Zarate, David Lindlbauer, Christian Holz and Otmar Hilliges. ACM UIST 2020.
PDF

Abstract:

We present Omni, a self-contained 3D haptic feedback system that is capable of sensing and actuating an untethered, passive tool containing only a small embedded permanent magnet. Omni enriches AR, VR and desktop applications by providing an active haptic experience using a simple apparatus centered around an electromagnetic base. The spatial haptic capabilities of Omni are enabled by a novel gradient-based method to reconstruct the 3D position of the permanent magnet in midair using the measurements from eight off-the-shelf Hall sensors that are integrated into the base. Omni’s 3 DoF spherical electromagnet simultaneously exerts dynamic and precise radial and tangential forces in a volumetric space around the device. Since our system is fully integrated, contains no moving parts and requires no external tracking, it is easy and affordable to fabricate. We describe Omni’s hardware implementation, our 3D reconstruction algorithm, and evaluate the tracking and actuation performance in depth. Finally, we demonstrate its capabilities via a set of interactive usage scenarios.
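
The position-reconstruction step can be illustrated with a generic least-squares fit of a magnetic-dipole model to the eight field measurements. The sketch below assumes a known, fixed magnet moment and a made-up circular sensor layout; the paper's gradient-based method, and its handling of tool orientation, is more involved.

    import numpy as np
    from scipy.optimize import least_squares

    MU0 = 4e-7 * np.pi

    # assumed layout: eight Hall sensors on a 3 cm circle in the base plane
    ang = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    SENSORS = np.stack([0.03 * np.cos(ang), 0.03 * np.sin(ang), np.zeros(8)], axis=1)

    def dipole_field(p, m):
        """Flux density of a point dipole with moment m (A*m^2) located
        at p, evaluated at every sensor position."""
        r = SENSORS - p
        d = np.linalg.norm(r, axis=1, keepdims=True)
        rh = r / d
        return MU0 / (4 * np.pi * d**3) * (3 * rh * (rh @ m)[:, None] - m)

    def reconstruct(measured, m, x0=(0.0, 0.0, 0.05)):
        """Recover the magnet's 3D position from the 8 x 3 measurements."""
        fit = least_squares(lambda p: (dipole_field(p, m) - measured).ravel(),
                            np.asarray(x0))
        return fit.x

    m = np.array([0.0, 0.0, 0.1])            # assumed known magnet moment
    truth = np.array([0.01, -0.005, 0.08])   # magnet ~8 cm above the base
    print(reconstruct(dipole_field(truth, m), m))   # ~ truth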

SurfaceFleet

Exploring Distributed Interactions Unbounded from Device, Application, User, and Time. ACM UIST 2020.

Frederik Brudy, David Ledo, Michel Pahud, Nathalie Henry Riche, Christian Holz, Anandghan Waghmare, Hemant Surale, Marcus Peinado, Xiaokuan Zhang, Shannon Joyner, Badrish Chandramouli, Umar Farooq Minhas, Jonathan Goldstein, William Buxton and Ken Hinckley. ACM UIST 2020.
PDF (to appear)

Abstract:

Knowledge work increasingly spans multiple computing surfaces. Yet in status quo user experiences, content as well as tools, behaviors, and workflows are largely bound to the current device—running the current application, for the current user, and at the current moment in time. SurfaceFleet is a system and toolkit that uses resilient distributed programming techniques to explore cross-device interactions that are unbounded in these four dimensions of device, application, user, and time. As a reference implementation, we describe an interface built using SurfaceFleet that employs lightweight, semi-transparent UI elements known as Applets. Applets appear always-on-top of the operating system, application windows, and (conceptually) above the device itself. But all connections and synchronized data are virtualized and made resilient through the cloud. For example, a sharing Applet known as a Portfolio allows a user to drag and drop unbound Interaction Promises into a document. Such promises can then be fulfilled with content asynchronously, at a later time (or multiple times), from another device, and by the same or a different user.

Tilt-Responsive Techniques for Digital Drawing Boards

ACM UIST 2020.

Hugo Romat, Nathalie Henry Riche, Michel Pahud, Christopher Collins, Christian Holz, Adam Riddle, William Buxton and Ken Hinckley. ACM UIST 2020.
PDF (to appear)

Abstract:

Drawing boards offer a self-stable work surface that is continuously adjustable. On digital displays, such as the Microsoft Surface Studio, these properties open up a class of techniques that sense and respond to tilt adjustments. Each display posture—whether angled high, low, or somewhere in-between—affords some activities, but not others. Because what is appropriate also depends on the application and task, we explore a range of app-specific transitions between reading vs. writing (annotation), public vs. personal, shared person-space vs. task-space, and other nuances of input and feedback, contingent on display angle. Continuous responses provide interactive transitions tailored to each use-case. We show how a variety of knowledge work scenarios can use sensed display adjustments to drive context-appropriate transitions, as well as technical software details of how to best realize these concepts. A preliminary remote user study suggests that techniques must balance effort required to adjust tilt, versus the potential benefits of a sensed transition.
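
On the sensing side, mapping a continuously sensed display angle to discrete postures needs hysteresis so the interface does not flicker at a boundary. The sketch below is a generic debouncing scheme; the angle bands and margin are illustrative assumptions, not the paper's values.

    POSTURES = (("flat", 0.0, 15.0), ("drafting", 15.0, 60.0), ("upright", 60.0, 90.1))
    MARGIN_DEG = 5.0   # assumed hysteresis margin

    def classify(angle_deg):
        for name, lo, hi in POSTURES:
            if lo <= angle_deg < hi:
                return name
        return POSTURES[-1][0]

    class TiltMapper:
        """Debounced posture from a stream of display-angle readings."""
        def __init__(self):
            self.current = POSTURES[0][0]

        def update(self, angle_deg):
            candidate = classify(angle_deg)
            # switch only once the angle clears the boundary by the margin
            if candidate != self.current and \
               classify(angle_deg - MARGIN_DEG) == classify(angle_deg + MARGIN_DEG):
                self.current = candidate
            return self.current

    m = TiltMapper()
    print([m.update(a) for a in (10, 16, 22, 58, 70)])
    # ['flat', 'flat', 'drafting', 'drafting', 'upright']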

Virtual Reality Without Vision

A Haptic and Auditory White Cane to Navigate Complex Virtual Worlds. ACM CHI 2020.

Alexa Fay Siu, Mike Sinclair, Robert Kovacs, Eyal Ofek, Christian Holz and Edward Cutrell. ACM CHI 2020.
PDF

Abstract:

Current Virtual Reality (VR) technologies focus on rendering visuospatial effects, and thus are inaccessible for blind or low vision users. We examine the use of a novel white cane controller that enables navigation, without vision, of large virtual environments with complex architecture, such as winding paths and occluding walls and doors. The cane controller employs a lightweight three-axis brake mechanism to convey the large-scale shape of virtual objects. Its multiple degrees of freedom enable users to adapt the controller to their preferred techniques and grip. In addition, surface textures are rendered with a voice coil actuator based on contact vibrations, and spatialized audio is determined by the progression of sound through the geometry around the user. We designed a scavenger hunt game that demonstrates how our device enables blind users to navigate a complex virtual environment. Seven out of eight users were able to successfully navigate the virtual room (6x6 m) to locate targets while avoiding collisions. We conclude with design considerations for creating immersive non-visual VR experiences based on user preferences for cane techniques and cane material properties.

A Rapid Tapping Task on Commodity Smartphones to Assess Motor Fatigability

ACM CHI 2020.

Liliana Barrios, Pietro Oldrati, David Lindlbauer, Marc Hilty, Helen Hayward-Koennecke, Christian Holz and Andreas Lutterotti. ACM CHI 2020.
PDF

Abstract:

Fatigue is a common debilitating symptom of many autoimmune diseases, including multiple sclerosis. It negatively impacts patients’ everyday life and productivity. Despite its prevalence, fatigue is still poorly understood. Its subjective nature makes quantification challenging and it is mainly assessed by questionnaires, which capture the magnitude of fatigue insufficiently. Motor fatigability, the objective decline of performance during a motor task, is an underrated aspect in this regard. Currently, motor fatigability is assessed using a handgrip dynamometer. This approach has been proven valid and accurate but requires special equipment and trained personnel. We propose a technique to objectively quantify motor fatigability using a commodity smartphone. The method comprises a simple exertion task requiring rapid alternating tapping. Our study with 20 multiple sclerosis patients and 35 healthy participants showed a correlation of ρ = 0.8 with the baseline handgrip method. This smartphone-based approach is a first step towards ubiquitous, more frequent, and remote monitoring of fatigability and disease progression.
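
The performance-decline measure lends itself to a compact sketch: compare the tapping rate at the start and end of the bout, then rank-correlate the per-person indices against the handgrip reference. The window length and the toy data are assumptions, not the study's processing pipeline.

    import numpy as np
    from scipy.stats import spearmanr

    def fatigability_index(tap_times_s, window_s=5.0):
        """Relative decline in tap rate between the first and the last
        window of a rapid alternating-tapping bout (0 = no decline)."""
        t = np.asarray(tap_times_s)
        first = np.sum(t < t[0] + window_s)
        last = np.sum(t > t[-1] - window_s)
        return (first - last) / first

    # toy bout: tapping slows from 8 Hz to 4 Hz over the exertion task
    taps = np.cumsum(1.0 / np.linspace(8.0, 4.0, 200))
    print(fatigability_index(taps))   # close to 0.5: the rate roughly halves

    # agreement with the handgrip baseline is a rank correlation
    # across participants (made-up indices for illustration):
    phone = [0.30, 0.10, 0.50, 0.40]
    grip = [0.35, 0.12, 0.40, 0.45]
    print(spearmanr(phone, grip).correlation)   # 0.8 on this toy data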

Towards Privacy-Preserving Ego-Motion Estimation using an Extremely Low-Resolution Camera

IEEE RA-L 2020.

Armon Shariati, Christian Holz and Sudipta N. Sinha. IEEE RA-L 2020.
PDF

Abstract:

Ego-motion estimation is a core task in robotic systems as well as in augmented and virtual reality applications. It is often solved using visual-inertial odometry, which involves using one or more always-on cameras on mobile robots and wearable devices. As consumers increasingly use such devices in their homes and workplaces, which are filled with sensitive details, the role of privacy in such camera-based approaches is of ever increasing importance.
In this paper, we introduce the first solution to perform privacy-preserving ego-motion estimation. We recover camera ego-motion from an extremely low-resolution monocular camera by estimating dense optical flow at a higher spatial resolution (i.e., 4x super resolution). We propose SRFNet for directly estimating Super-Resolved Flow, a novel convolutional neural network model that is trained in a supervised setting using ground-truth optical flow. We also present a weakly supervised approach for training a variant of SRFNet on real videos where ground truth flow is unavailable. On image pairs with known relative camera orientations, we use SRFNet to predict the autoepipolar flow that arises from pure camera translation, from which we robustly estimate the camera translation direction. We evaluate our super-resolved optical flow estimates and camera translation direction estimates on the Sintel and KITTI odometry datasets, where our methods outperform several baselines. Our results indicate that robust ego-motion recovery from extremely low-resolution images can be viable when camera orientations and metric scale are recovered from inertial sensors and fused with the estimated translations.
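
The final geometric step, recovering the translation direction once rotation is compensated, reduces to locating the focus of expansion: every flow vector must be collinear with the line from the focus of expansion to its pixel. Below is a plain least-squares sketch of that constraint, not the paper's robust estimator or network.

    import numpy as np

    def estimate_foe(points, flows):
        """Least-squares focus of expansion from derotated flow:
        each flow (u, v) at (x, y) must be collinear with (point - foe),
        i.e. u*(y - ey) - v*(x - ex) = 0, which is linear in (ex, ey)."""
        x, y = points[:, 0], points[:, 1]
        u, v = flows[:, 0], flows[:, 1]
        A = np.stack([-v, u], axis=1)
        b = u * y - v * x
        foe, *_ = np.linalg.lstsq(A, b, rcond=None)
        return foe

    # toy check: synthesize purely radial flow out of a known epipole
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1, 1, size=(100, 2))
    true_foe = np.array([0.2, -0.1])
    flw = 0.05 * (pts - true_foe)      # expansion away from the FOE
    print(estimate_foe(pts, flw))      # ~ [0.2, -0.1]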

2019

DreamWalker

Substituting Real-World Walking Experiences with a Virtual Reality. ACM UIST 2019.

Jackie Yang, Christian Holz, Eyal Ofek and Andrew D. Wilson. ACM UIST 2019.
PDF

Abstract:

We explore a future in which people spend considerably more time in virtual reality, even during moments when they transition between locations in the real world. In this paper, we present DreamWalker, a VR system that enables such real-world walking while users explore and stay fully immersed inside large virtual environments in a headset. Provided with a real-world destination, DreamWalker finds a similar path in a pre-authored VR environment and guides the user while real-walking the virtual world. To keep the user from colliding with objects and people in the real world, DreamWalker’s tracking system fuses GPS locations, inside-out tracking, and RGBD frames to 1) continuously and accurately position the user in the real world, 2) sense walkable paths and obstacles in real time, and 3) represent paths through a dynamically changing scene in VR to redirect the user towards the chosen destination. We demonstrate DreamWalker’s versatility by enabling users to walk three paths across the large Microsoft campus while enjoying pre-authored VR worlds, supplemented with a variety of obstacle avoidance and redirection techniques. In our evaluation, 8 participants walked across campus along a 15-minute route, experiencing a lively virtual Manhattan that was full of animated cars, people, and other objects.

CapstanCrunch

A Haptic VR Controller with User-supplied Force Feedback. ACM UIST 2019.

Mike Sinclair, Eyal Ofek, Mar Gonzalez-Franco and Christian Holz. ACM UIST 2019.
PDF

Abstract:

We introduce CapstanCrunch, a force-resisting, palm-grounded haptic controller that renders haptic feedback for touching and grasping both rigid and compliant objects in a VR environment. In contrast to previous controllers, CapstanCrunch renders human-scale forces without the use of large, high-force, electrically power-consumptive, and expensive actuators. Instead, CapstanCrunch integrates a friction-based capstan-plus-cord variable-resistance brake mechanism that is dynamically controlled by a small internal motor. The capstan mechanism magnifies the motor’s force by a factor of around 40 as an output resistive force. Compared to active force control devices, it is low-cost, low-power, robust, safe, fast, and quiet, while still providing high force control over user interaction. We describe the design and implementation of CapstanCrunch and demonstrate its use in a series of VR scenarios. Finally, we evaluate the performance of CapstanCrunch in two user studies, comparing our controller to an active haptic controller in its ability to simulate different levels of convincing object rigidity and/or compliance.
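
The ~40x magnification is what the classic capstan equation predicts for a cord wrapped a few turns around a drum: holding force grows exponentially with the friction coefficient times the wrap angle. A quick check under assumed parameters (the paper's actual friction coefficient and wrap count may differ):

    import math

    def capstan_gain(mu, wraps):
        """Capstan equation: T_load / T_hold = e^(mu * theta)."""
        return math.exp(mu * 2 * math.pi * wraps)

    # with an assumed friction coefficient of 0.2, three wraps of cord
    # already magnify the brake motor's force about 43-fold:
    print(round(capstan_gain(0.2, 3.0), 1))   # 43.4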

Mise-Unseen

Using Eye Tracking to Hide Virtual Reality Scene Changes in Plain Sight. ACM UIST 2019.

Sebastian Marwecki, Andrew D. Wilson, Eyal Ofek, Mar Gonzalez-Franco and Christian Holz. ACM UIST 2019.
PDF

Abstract:

Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user’s field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user’s field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen to unnoticeably modify the scene within view (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user’s preferences, (iii) to time the use of low fidelity effects, (iv) to detect user choice for passive haptics even when lacking physical props, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and find that while gaze data indeed supports obfuscating changes inside the field of view, a change is rendered unnoticeably by using gaze in combination with common masking techniques.
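
The gating logic can be sketched as a simple predicate over the gaze data: allow a covert change only while the object sits outside an assumed foveal region, or during a saccade, when change blindness masks it. The thresholds below are illustrative assumptions, not the paper's attention, intention, and memory models.

    import math

    FOVEA_DEG = 20.0          # assumed "safe" eccentricity for a change
    SACCADE_DEG_S = 300.0     # assumed saccade-velocity threshold

    def can_apply_change(gaze_dir, object_dir, gaze_speed_deg_s):
        """Permit a covert scene change if the object is far from the
        current gaze, or if the user is mid-saccade. Both direction
        arguments are unit vectors."""
        dot = max(-1.0, min(1.0, sum(g * o for g, o in zip(gaze_dir, object_dir))))
        eccentricity = math.degrees(math.acos(dot))
        return eccentricity > FOVEA_DEG or gaze_speed_deg_s > SACCADE_DEG_S

    print(can_apply_change((0, 0, 1), (0.5, 0, 0.866), 20.0))  # 30 deg away -> True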

TORC

A Virtual Reality Controller for In-Hand High-Dexterity Finger Interaction. ACM CHI 2019.

Jaeyeon Lee, Mike Sinclair, Mar Gonzalez-Franco, Eyal Ofek and Christian Holz. ACM CHI 2019.
PDF

Abstract:

Recent hand-held controllers have explored a variety of haptic feedback sensations for users in virtual reality by producing both kinesthetic and cutaneous feedback from virtual objects. These controllers are grounded to the user’s hand and can only manipulate objects through arm and wrist motions, not using the dexterity of their fingers as they would in real life. In this paper, we present TORC, a rigid haptic controller that renders virtual object characteristics and behaviors such as texture and compliance. Users hold and squeeze TORC using their thumb and two fingers and interact with virtual objects by sliding their thumb on TORC’s trackpad. During the interaction, vibrotactile motors produce sensations to each finger that represent the haptic feel of squeezing, shearing or turning an object. Our evaluation showed that using TORC, participants could manipulate virtual objects more precisely (e.g., position and rotate objects in 3D) than when using a conventional VR controller.

Sensing Posture-Aware Pen+Touch Interaction on Tablets

ACM CHI 2019.

Yang Zhang, Michel Pahud, Christian Holz, Haijun Xia, Gierad Laput, Michael McGuffin, Xiao Tu, Andrew Mittereder, Fei Su, William Buxton and Ken Hinckley. ACM CHI 2019.
PDF

Abstract:

Many status-quo interfaces for tablets with pen + touch input capabilities force users to reach for device-centric UI widgets at fixed locations, rather than sensing and adapting to the user-centric posture. To address this problem, we propose sensing techniques that transition between various nuances of mobile and stationary use via postural awareness. These postural nuances include shifting hand grips, varying screen angle and orientation, planting the palm while writing or sketching, and detecting what direction the hands approach from. To achieve this, our system combines three sensing modalities: 1) raw capacitance touchscreen images, 2) inertial motion, and 3) electric field sensors around the screen bezel for grasp and hand proximity detection. We show how these sensors enable posture-aware pen+touch techniques that adapt interaction and morph user interface elements to suit fine-grained contexts of body-, arm-, hand-, and grip-centric frames of reference.

SeeingVR

A Set of Tools to Make Virtual Reality More Accessible to People with Low Vision. ACM CHI 2019.

Yuhang Zhao, Edward Cutrell, Christian Holz, Meredith Ringel Morris, Eyal Ofek and Andrew D. Wilson. ACM CHI 2019.
PDF

Abstract:

Current virtual reality applications do not support people who have low vision, i.e., vision loss that falls short of complete blindness but is not correctable by glasses. We present SeeingVR, a set of 14 tools that enhance a VR application for people with low vision by providing visual and audio augmentations. A user can select, adjust, and combine different tools based on their preferences. Nine of our tools modify an existing VR application post hoc via a plugin without developer effort. The rest require simple inputs from developers using a Unity toolkit we created that allows integrating all 14 of our low vision support tools during development. Our evaluation with 11 participants with low vision showed that SeeingVR enabled users to better enjoy VR and complete tasks more quickly and accurately. Developers also found our Unity toolkit easy and convenient to use.

RealityCheck

Blending Virtual Environments with Situated Physical Reality. ACM CHI 2019.

Jeremy Hartmann, Christian Holz, Eyal Ofek and Andrew D. Wilson. ACM CHI 2019.
PDF

Abstract:

Today’s virtual reality (VR) systems offer chaperone rendering techniques that prevent the user from colliding with physical objects. Without a detailed geometric model of the physical world, these techniques offer limited possibility for more advanced compositing between the real world and the virtual. We explore this using a real-time 3D reconstruction of the real world that can be combined with a virtual environment. RealityCheck allows users to freely move, manipulate, observe, and communicate with people and objects situated in their physical space without losing the sense of immersion or presence inside their virtual world. We demonstrate RealityCheck with seven existing VR titles, and describe compositing approaches that address the potential conflicts when rendering the real world and a virtual environment together. A study with frequent VR users demonstrates the affordances provided by our system and how it can be used to enhance current VR experiences.

Cross-Device Taxonomy

Survey, Opportunities and Challenges of Interactions Spanning Across Multiple Devices. ACM CHI 2019.

Frederik Brudy, Christian Holz, Roman Rädle, Chi-Jui Wu, Steven Houben, Clemens Klokmose and Nicolai Marquardt. ACM CHI 2019.
PDF

Abstract:

Designing interfaces or applications that move beyond the bounds of a single device screen enables new ways to engage with digital content. Research addressing the opportunities and challenges of interactions with multiple devices in concert is of continued focus in HCI research. To inform the future research agenda of this field, we contribute an analysis and taxonomy of a corpus of 510 papers in the cross-device computing domain. For both new and experienced researchers in the field we provide: an overview, historic trends and unified terminology of cross-device research; discussion of major and under-explored application areas; mapping of enabling technologies; synthesis of key interaction techniques spanning across multiple devices; and review of common evaluation strategies. We close with a discussion of open issues. Our taxonomy aims to create a unified terminology and common understanding for researchers in order to facilitate and stimulate future cross-device research.

VRoamer

Generating On-The-Fly VR Experiences While Walking inside Large, Unknown Real-World Building Environments. IEEE VR 2019.

Lung-Pan Cheng, Eyal Ofek, Christian Holz and Andrew D. Wilson. IEEE VR 2019.
PDF

Abstract:

Procedural generation in virtual reality (VR) has been used to adapt the virtual world to various indoor environments, fitting different geometries and interiors with virtual environments. However, such applications require that the physical environment be known or pre-scanned prior to use to then generate the corresponding virtual scene, thus restricting the virtual experience to a controlled space. In this paper, we present VRoamer, which enables users to walk unseen physical spaces for which VRoamer procedurally generates a virtual scene on-the-fly. Scaling to the size of office buildings, VRoamer extracts walkable areas and detects physical obstacles in real time, instantiates pre-authored virtual rooms if their sizes fit physically walkable areas or otherwise generates virtual corridors and doors that lead to undiscovered physical areas. The use of these virtual structures allows VRoamer to (1) temporarily block users’ passage, thus slowing them down while increasing VRoamer’s insight into newly discovered physical areas, (2) prevent users from seeing changes beyond the current virtual scene, and (3) obfuscate the appearance of physical environments. VRoamer animates virtual objects to reflect dynamically discovered changes of the physical environment, such as people walking by or obstacles that become apparent. In our proof-of-concept study, participants were able to walk long distances through a procedurally generated dungeon experience and reported high levels of immersion.

2018

Naptics

Convenient and Continuous Blood Pressure Monitoring during Sleep. ACM IMWUT 2018.

Andrew Carek and Christian Holz. ACM IMWUT 2018.
PDF

Abstract:

Normal circadian rhythm mediates blood pressure during sleep, decreasing in value in healthy subjects. Current methods to monitor nocturnal blood pressure use an active blood pressure cuff that repeatedly auto-inflates while the subject sleeps. Since these inflations happen in intervals of thirty minutes to one hour, they cause considerable sleep disturbances that lead to false measurements and impact the person’s quality of sleep. These blood pressure samples are also just spot checks and rarely exceed 10–15 values per night.
We present Naptics, a wearable device woven into shorts. Naptics passively monitors the wearer’s blood pressure throughout the night—continuously and unobtrusively—without disturbing the user during sleep. Naptics detects the micro-vibrations of the wearer’s body that stem from the heartbeat and senses the optical reflections from the pulse wave as it propagates down the wearer’s leg. From the timing between these two events, Naptics computes the pulse transit time, which correlates strongly with the user’s blood pressure.
Naptics’ key novelty is its unobtrusive approach in tracking blood pressure during the night. Our controlled evaluation of six subjects showed a high correlation (r = 0.89) between Naptics’ calibrated mean arterial pressure and cuff-based blood pressure. Our in-the-wild evaluation validates Naptics in tracking five participants’ blood pressure patterns throughout four nights and compares them to before and after cuff measurements. In a majority of the nights, Naptics correctly followed the trend of the cuff measurements while providing insights into the behavior and the patterns of participants’ nocturnal blood pressure. Participants reported high sleep quality in sleep diaries after each night, validating Naptics as a convenient monitoring apparatus.
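
The core measurement is a per-beat timing difference between two signals: the heartbeat's body micro-vibration (proximal) and the optical pulse arriving at the leg (distal). A minimal sketch, assuming clean, equally sampled channels and simple peak picking; the device's actual processing is more robust than this.

    import numpy as np
    from scipy.signal import find_peaks

    FS = 500.0   # assumed sample rate (Hz) for both channels

    def pulse_transit_times(vibration, ppg):
        """Per-beat delay between the heartbeat vibration and the
        optical pulse at the leg; shorter PTT ~ higher pressure."""
        beats, _ = find_peaks(vibration, distance=int(0.4 * FS))
        pulses, _ = find_peaks(ppg, distance=int(0.4 * FS))
        ptts = []
        for b in beats:
            later = pulses[pulses > b]
            if later.size:
                ptts.append((later[0] - b) / FS)
        return np.array(ptts)   # seconds

    # toy signals: beats at 1 Hz, pulse arriving 180 ms later
    t = np.arange(0, 10, 1 / FS)
    vib = np.maximum(0, np.sin(2 * np.pi * 1.0 * t)) ** 50
    ppg = np.maximum(0, np.sin(2 * np.pi * 1.0 * (t - 0.18))) ** 50
    print(pulse_transit_times(vib, ppg).mean())   # ~0.18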

Doubling the Signal Quality of Smartphone Camera Pulse Oximetry Using the Display Screen as a Controllable Selective Light Source

IEEE EMBC 2018.

Christian Holz and Eyal Ofek. IEEE EMBC 2018.
PDF

Abstract:

Recent smartphones have the potential to bring camera oximetry to everyone using their powerful sensors and the capability to process measurements in real-time, potentially augmenting people’s lives through always-available oximetry monitoring everywhere. The challenge of camera oximetry on smartphones is the low contrast between reflections from oxyhemoglobin and deoxyhemoglobin. In this paper, we show that this is the result of using the camera flash for illumination, which illuminates evenly across bands and thus leads to the diminished contrast in reflections. Instead, we propose capturing pulse using the front-facing camera and illuminating with the phone’s display, a selective illuminant in the red, green, and blue band. We evaluate the spectral characteristics of the phone display using a spectroradiometer in a controlled experiment, convolve them with the sensitivity curves of the phone’s camera, and show that the screen’s narrowband display illumination increases the contrast between the reflections in the desired bands by a factor of two compared to flash illumination. Our evaluation showed further support for our approach and findings.
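
The band argument can be reproduced numerically: integrate illuminant x skin reflectance x camera sensitivity over wavelength and compare the oxy/deoxy contrast under a narrowband display primary versus a flat flash. All spectra below are crude Gaussian and step stand-ins, purely for illustration; the paper uses measured spectroradiometer curves.

    import numpy as np

    wl = np.arange(400.0, 701.0)   # wavelength grid, 1 nm steps

    def gaussian(center, width):
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    display_red = gaussian(610, 15)   # assumed narrowband display primary
    flash_white = np.ones_like(wl)    # idealized broadband flash
    camera_red = gaussian(600, 40)    # assumed red-channel sensitivity

    def reading(illuminant, reflectance):
        """Camera red-channel value: sum over the 1 nm grid."""
        return float(np.sum(illuminant * reflectance * camera_red))

    # exaggerated toy reflectances: deoxyhemoglobin absorbs more red light
    r_oxy = 0.6 + 0.2 * (wl > 600)
    r_deoxy = np.full_like(wl, 0.6)

    for name, ill in (("display", display_red), ("flash", flash_white)):
        print(name, round(reading(ill, r_oxy) / reading(ill, r_deoxy), 3))
    # the narrowband illuminant concentrates its energy where the two
    # reflectances differ, so it yields the larger contrast ratio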

Project Zanzibar

A Portable and Flexible Tangible Interaction Platform. ACM CHI 2018.

Nicolas Villar, Daniel Cletheroe, Greg Saul, Christian Holz, Tim Regan, Misha Sra, Hui-Shyong Yeo, William Field and Haiyan Zhang. ACM CHI 2018.
PDF

Abstract:

We present Project Zanzibar, a flexible mat that locates, uniquely identifies, and communicates with tangible objects placed on its surface, as well as senses a user’s touch and hover gestures. We describe the underlying technical contributions: efficient and localised Near Field Communication (NFC) over a large surface area; object tracking combining NFC signal strength and capacitive footprint detection, and manufacturing techniques for a rollable device form-factor that enables portability, while providing a sizable interaction area when unrolled. In addition, we detail design patterns for tangibles of varying complexity and interactive capabilities, including the ability to sense orientation on the mat, harvest power, provide additional input and output, stack, or extend sensing outside the bounds of the mat. Capabilities and interaction modalities are illustrated with self-generated applications. Finally, we report on the experience of professional game developers building novel physical/digital experiences using the platform.

Haptic Revolver

Touch, Shear, Texture, and Shape Rendering on a Reconfigurable Virtual Reality Controller. ACM CHI 2018.

Eric Whitmire, Hrvoje Benko, Christian Holz, Eyal Ofek and Mike Sinclair. ACM CHI 2018.
PDF

Abstract:

We present Haptic Revolver, a handheld virtual reality controller that renders fingertip haptics when interacting with virtual surfaces. Haptic Revolver’s core haptic element is an actuated wheel that raises and lowers underneath the finger to render contact with a virtual surface. As the user’s finger moves along the surface of an object, the controller spins the wheel to render shear forces and motion under the fingertip. The wheel is interchangeable and can contain physical textures, shapes, edges, or active elements to provide different sensations to the user. Because the controller is spatially tracked, these physical features can be spatially registered with the geometry of the virtual environment and rendered on-demand. We evaluated Haptic Revolver in two studies to understand how wheel speed and direction impact perceived realism. We also report qualitative feedback from users who explored three application scenarios with our controller.
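
The basic shear-rendering rule reduces to matching the wheel's surface velocity to the finger's motion along the virtual surface. A one-line sketch with an assumed wheel radius (not a value from the paper):

    WHEEL_RADIUS_M = 0.015   # assumed wheel radius

    def wheel_speed_rad_s(finger_speed_m_s, gain=1.0):
        """Spin the wheel so its surface keeps pace with the fingertip
        sliding along the virtual surface (gain != 1 exaggerates slip)."""
        return gain * finger_speed_m_s / WHEEL_RADIUS_M

    print(wheel_speed_rad_s(0.10))   # a 10 cm/s slide -> ~6.7 rad/s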

CLAW

A Multifunctional Handheld Haptic Controller for Grasping, Touching, and Triggering in Virtual Reality. ACM CHI 2018.

Inrak Choi, Eyal Ofek, Hrvoje Benko, Mike Sinclair and Christian Holz. ACM CHI 2018.
PDF

Abstract:

CLAW is a handheld virtual reality controller that augments the typical controller functionality with force feedback and actuated movement to the index finger. Our controller enables three distinct interactions (grasping virtual objects, touching virtual surfaces, and triggering) and changes its corresponding haptic rendering by sensing the differences in the user’s grasp. A servo motor coupled with a force sensor renders controllable forces to the index finger during grasping and touching. Using position tracking, a voice coil actuator at the index fingertip generates vibrations for various textures synchronized with finger movement. CLAW also supports haptic force feedback in trigger mode when the user holds a virtual gun. We describe the design considerations for CLAW and evaluate its performance through two user studies. The first study obtained qualitative user feedback on the naturalness, effectiveness, and comfort when using the device. The second study investigated the ease of the transition between grasping and touching when using our device.

Enabling People with Visual Impairments to Navigate Virtual Reality with a Haptic and Auditory Cane Simulation

ACM CHI 2018.

Yuhang Zhao, Cynthia Bennett, Hrvoje Benko, Edward Cutrell, Christian Holz, Meredith Ringel Morris and Mike Sinclair. ACM CHI 2018.
PDF

Abstract:

Traditional virtual reality (VR) mainly focuses on visual feedback, which is not accessible for people with visual impairments. We created Canetroller, a haptic cane controller that simulates white cane interactions, enabling people with visual impairments to navigate a virtual environment by transferring their cane skills into the virtual world. Canetroller provides three types of feedback: (1) physical resistance generated by a wearable programmable brake mechanism that physically impedes the controller when the virtual cane comes in contact with a virtual object; (2) vibrotactile feedback that simulates the vibrations when a cane hits an object or touches and drags across various surfaces; and (3) spatial 3D auditory feedback simulating the sound of real-world cane interactions. We designed indoor and outdoor VR scenes to evaluate the effectiveness of our controller. Our study showed that Canetroller was a promising tool that enabled visually impaired participants to navigate different virtual spaces. We discuss potential applications supported by Canetroller ranging from entertainment to mobility training.

SurfaceConstellations

A Modular Hardware Platform for Ad-Hoc Reconfigurable Cross-Device Workspaces. ACM CHI 2018.

Nicolai Marquardt, Frederik Brudy, Can Liu, Ben Bengler and Christian Holz. ACM CHI 2018.
PDF

Abstract:

We contribute SurfaceConstellations, a modular hardware platform for linking multiple mobile devices to easily create novel cross-device workspace environments. Our platform combines the advantages of multi-monitor workspaces and multi-surface environments with the flexibility and extensibility of more recent cross-device setups. The SurfaceConstellations platform includes a comprehensive library of 3D-printed link modules to connect and arrange tablets into new workspaces, several strategies for designing setups, and a visual configuration tool for automatically generating link modules. We contribute a detailed design space of cross-device workspaces, a technique for capacitive links between tablets for automatic recognition of connected devices, designs of flexible joint connections, detailed explanations of the physical design of 3D-printed brackets and support structures, and the design of a web-based tool for creating new SurfaceConstellation setups.

PolarTrack

Optical Outside-In Device Tracking that Exploits Display Polarization. ACM CHI 2018.

Roman Rädle, Hans-Christian Jetter, Jonathan Fischer, Inti Gabriel, Clemens Klokmose, Harald Reiterer and Christian Holz. ACM CHI 2018.
PDF

Abstract:

PolarTrack is a novel camera-based approach to detecting and tracking mobile devices inside the capture volume. In PolarTrack, a polarization filter continuously rotates in front of an off-the-shelf color camera, which causes the displays of observed devices to periodically blink in the camera feed. The periodic blinking results from the physical characteristics of current displays, which shine polarized light either through an LC overlay to produce images or through a polarizer to reduce light reflections on OLED displays. PolarTrack runs a simple detection algorithm on the camera feed to segment displays and track their locations and orientations, which makes PolarTrack particularly suitable as a tracking system for cross-device interaction with mobile devices. Our evaluation of PolarTrack’s tracking quality and comparison with state-of-the-art camera-based multi-device tracking showed a better tracking accuracy and precision with similar tracking reliability. PolarTrack works as standalone multi-device tracking but is also compatible with existing camera-based tracking systems and can complement them to compensate for their limitations.
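
Why displays "blink": by Malus's law the intensity transmitted through the rotating filter goes as cos^2, so a display pixel oscillates at twice the filter's rotation rate while unpolarized surfaces stay flat. Detection is then a spectral-peak test, as in this sketch; the frame rate and rotation rate are assumptions.

    import numpy as np

    FPS = 60.0     # assumed camera frame rate
    F_ROT = 5.0    # assumed polarizer rotation rate (Hz)

    def dominant_frequency(intensity):
        """Strongest frequency in a pixel's intensity over time."""
        x = intensity - np.mean(intensity)
        spec = np.abs(np.fft.rfft(x))
        return np.fft.rfftfreq(len(x), d=1.0 / FPS)[np.argmax(spec)]

    t = np.arange(0, 2, 1 / FPS)
    display_px = np.cos(2 * np.pi * F_ROT * t) ** 2            # Malus's law
    wall_px = 0.5 + 0.01 * np.random.default_rng(1).normal(size=t.size)

    print(dominant_frequency(display_px))   # 10.0 = 2 x F_ROT -> a display
    print(dominant_frequency(wall_px))      # some noise bin, no stable peak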

2017

Glabella

Continuously Sensing Blood Pressure Behavior using an Unobtrusive Wearable Device. ACM IMWUT 2017.

Christian Holz and Edward Wang. ACM IMWUT 2017.
PDF

Abstract:

We propose Glabella, a wearable device that continuously and unobtrusively monitors heart rates at three sites on the wearer’s head. Our glasses prototype incorporates optical sensors, processing, storage, and communication components, all integrated into the frame to passively collect physiological data about the user without the need for any interaction. Glabella continuously records the stream of reflected light intensities from blood flow as well as inertial measurements of the user’s head. From the temporal differences in pulse events across the sensors, our prototype derives the wearer’s pulse transit time on a beat-to-beat basis.
Numerous efforts have found a significant correlation between a person’s pulse transit time and their systolic blood pressure. In this paper, we leverage this insight to continuously observe pulse transit time as a proxy for the behavior of systolic blood pressure levels—at a substantially higher level of convenience and higher rate than traditional blood pressure monitors, such as cuff-based oscillometric devices. This enables our cuff-less prototype to model the beat-to-beat fluctuations in the user’s blood pressure over the course of the day and record its short-term responses to events, such as postural changes, exercise, eating and drinking, resting, medication intake, location changes, or time of day.
During our in-the-wild evaluation, four participants wore a custom-fit Glabella prototype device over the course of five days throughout their daytime job and regular activities. Participants additionally measured their radial blood pressure three times an hour using a commercial oscillometric cuff. Our analysis shows a high correlation between the pulse transit times computed on our devices with participants’ heart rates (mean r = 0.92, SE = 0.03, angular artery) and systolic blood pressure values measured using the oscillometric cuffs (mean r = 0.79, SE = 0.15, angular–superficial temporal artery, considering participants’ self-administered cuff-based measurements as ground truth). Our results indicate that Glabella has the potential to serve as a socially-acceptable capture device, requiring no user input or behavior changes during regular activities, and whose continuous measurements may prove informative to physicians as well as users’ self-tracking activities.

Sparse Haptic Proxy

Touch Feedback in Virtual Environments Using a General Passive Prop. ACM CHI 2017.

Lung-Pan Cheng, Eyal Ofek, Christian Holz, Hrvoje Benko and Andrew D. Wilson. ACM CHI 2017.
PDF

Abstract:

We propose a class of passive haptics that we call Sparse Haptic Proxy: a set of geometric primitives that simulate touch feedback in elaborate virtual reality scenes. Unlike previous passive haptics that replicate the virtual environment in physical space, a Sparse Haptic Proxy simulates a scene’s detailed geometry by redirecting the user’s hand to a matching primitive of the proxy. To bridge the divergence of the scene from the proxy, we augment an existing Haptic Retargeting technique with an on-the-fly target remapping: We predict users’ intentions during interaction in the virtual space by analyzing their gaze and hand motions, and consequently redirect their hand to a matching part of the proxy.
We conducted three user studies on our haptic retargeting technique and implemented a system from the three main results: 1) The maximum angle participants found acceptable for retargeting their hand is 40°, rated 4.6 out of 5 on average. 2) Tracking participants’ eye gaze reliably predicts their touch intentions (97.5%), even while simultaneously manipulating the user’s hand-eye coordination for retargeting. 3) Participants preferred minimized retargeting distances over better-matching surfaces of our Sparse Haptic Proxy when receiving haptic feedback for single-finger touch input.
We demonstrate our system with two virtual scenes: a flight cockpit and a room quest game. While their scene geometries differ substantially, both use the same sparse haptic proxy to provide haptic feedback to the user during task completion.
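
The redirection itself is a small geometric warp: once a target primitive is predicted, the rendered hand is offset by an amount that grows with reach progress, so the real hand lands on the proxy exactly when the virtual hand lands on the virtual surface. A minimal version of such a warp, assuming straight-line reaches; the paper's retargeting and on-the-fly remapping are richer.

    import numpy as np

    def retargeted_hand(real_hand, start, physical_target, virtual_target):
        """Render the hand with an offset that grows from zero at the
        reach's start to (virtual - physical) at the target."""
        total = np.linalg.norm(physical_target - start)
        progress = np.clip(np.linalg.norm(real_hand - start) / total, 0.0, 1.0)
        return real_hand + progress * (virtual_target - physical_target)

    start = np.zeros(3)
    phys = np.array([0.4, 0.0, 0.3])   # matching primitive on the proxy
    virt = np.array([0.5, 0.1, 0.3])   # detailed virtual geometry
    print(retargeted_hand(phys, start, phys, virt))   # -> virtual target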

Finding Common Ground

A Survey of Capacitive Sensing in Human-Computer Interaction. ACM CHI 2017.

Tobias Grosse-Puppendahl, Christian Holz, Gabe Cohn, Raphael Wimmer, Oskar Bechtold, Steven Hodges, Matt Reynolds and Joshua Smith. ACM CHI 2017.
PDF

Abstract:

For more than two decades, capacitive sensing has played a prominent role in human-computer interaction research. Capacitive sensing has become ubiquitous on mobile, wearable, and stationary devices—enabling fundamentally new interaction techniques on, above, and around them. The research community has also enabled human position estimation and whole-body gestural interaction in instrumented environments. However, the broad field of capacitive sensing research has become fragmented by different approaches and terminology used across the various domains. This paper strives to unify the field by advocating consistent terminology and proposing a new taxonomy to classify capacitive sensing approaches. Our extensive survey provides an analysis and review of past research and identifies challenges for future work. We aim to create a common understanding within the field of human-computer interaction, for researchers and practitioners alike, and to stimulate and facilitate future research in capacitive sensing.

2016

NormalTouch and TextureTouch

High-fidelity 3D Haptic Shape Rendering on Handheld Virtual Reality Controllers. ACM UIST 2016.

Hrvoje Benko, Christian Holz, Mike Sinclair and Eyal Ofek. ACM UIST 2016.
PDF

Abstract:

We present an investigation of mechanically-actuated handheld controllers that render the shape of virtual objects through physical shape displacement, enabling users to feel 3D surfaces, textures, and forces that match the visual rendering. We demonstrate two such controllers, NormalTouch and TextureTouch. Both controllers are tracked with 6 DOF and produce spatially-registered haptic feedback to a user’s finger. NormalTouch haptically renders object surfaces and provides force feedback using a tiltable and extrudable platform. TextureTouch renders the shape of virtual objects including detailed surface structure through a 4×4 matrix of actuated pins. By moving our controllers around in space while keeping a finger on the actuated platform, users obtain the impression of a much larger 3D shape by cognitively integrating output sensations over time. Our evaluation compares the effectiveness of our controllers with the two de facto standards in Virtual Reality controllers: device vibration and visual feedback only. We find that haptic feedback significantly increases the accuracy of VR interaction, most effectively by rendering high-fidelity shape output as in the case of our controllers. Participants also generally found NormalTouch and TextureTouch realistic in conveying the sense of touch for a variety of 3D objects.

DuoSkin

Rapidly Prototyping On-Skin User Interfaces Using Skin-Friendly Materials. ACM ISWC 2016.

Hsin-Liu Cindy Kao, Christian Holz, Asta Roseway, Andres Calvo and Chris Schmandt. ACM ISWC 2016.
PDF

Abstract:

Miniature devices have become wearable beyond the form factor of watches or rings—functional devices can now directly affix to the user’s skin, unlocking a much wider canvas for electronics. However, building such small and skin-friendly devices currently requires expensive materials and equipment that is mostly found in the medical domain. We present DuoSkin, a fabrication process that affords rapidly prototyping functional devices directly on the user’s skin using gold leaf as the key material, a commodity material that is skin-friendly, robust for everyday wear, and user-friendly in fabrication. We demonstrate how gold leaf enables three types of interaction modalities on DuoSkin devices: sensing touch input, displaying output, and communicating wirelessly with other devices. Importantly, DuoSkin incorporates aesthetic customizations found on body decoration, giving form to exposed interfaces that so far have mostly been concealed by covers. Our technical evaluation confirmed that gold leaf was more durable and preferable when affixed to skin than current commodity materials during everyday wear. This makes gold leaf a viable material for users to build functional and compelling on-skin devices. In our workshop evaluation, participants were able to customize their own on-skin music controllers that reflected personal aesthetics.

Pre-Touch Sensing for Mobile Interaction

ACM CHI 2016.

Ken Hinckley, Seongkook Heo, Michel Pahud, Christian Holz, Hrvoje Benko, Abigail Sellen, Richard Banks, Kenton O'Hara, Gavin Smyth and William Buxton. ACM CHI 2016.
PDF

Abstract:

Touchscreens continue to advance—including progress towards sensing fingers proximal to the display. We explore this emerging pre-touch modality via a self-capacitance touchscreen that can sense multiple fingers above a mobile device, as well as grip around the screen’s edges. This capability opens up many possibilities for mobile interaction. For example, using pre-touch in an anticipatory role affords an “ad-lib interface” that fades in a different UI—appropriate to the context—as the user approaches one-handed with a thumb, two-handed with an index finger, or even with a pinch or two thumbs. Or we can interpret pre-touch in a retroactive manner that leverages the approach trajectory to discern whether the user made contact with a ballistic vs. a finely-targeted motion. Pre-touch also enables hybrid touch + hover gestures, such as selecting an icon with the thumb while bringing a second finger into range to invoke a context menu at a convenient location. Collectively these techniques illustrate how pre-touch sensing offers an intriguing new back-channel for mobile interaction.

On-Demand Biometrics

Fast and Convenient Cross-Device Login. ACM CHI 2016.

Christian Holz and Frank Bentley. ACM CHI 2016.
PDF

Abstract:

We explore the use of a new way to log into a web service, such as email or social media. Using on-demand biometrics, users sign in from a browser on a computer using just their name, which sends a request to their phone for approval. Users approve this request by authenticating on their phone using their fingerprint, which completes the login in the browser. On-demand biometrics thus replace passwords or temporary access codes found in two-step verification with the ease of use of biometrics. We present the results of an interview study on the use of on-demand biometrics with a live login backend. Participants perceived our system as convenient and fast to use and also expressed their trust in fingerprint authentication to keep their accounts safe. We motivate the design of on-demand biometrics, present an analysis of participants’ use and responses around general account security and authentication, and conclude with implications for designing fast and easy cross-device authentication.

2015

Tracko

Ad-hoc Mobile 3D Tracking Using Bluetooth Low Energy and Inaudible Signals for Cross-Device Interaction. ACM UIST 2015.

Haojian Jin, Christian Holz and Kasper Hornbæk. ACM UIST 2015.
PDF

Abstract:

While current mobile devices detect the presence of surrounding devices, they lack a truly spatial awareness to bring them into the user’s natural 3D space. We present Tracko, a 3D tracking system between two or more commodity devices without added components or device synchronization. Tracko achieves this by fusing three signal types. 1) Tracko infers the presence of and rough distance to other devices from the strength of Bluetooth low energy signals. 2) Tracko exchanges a series of inaudible stereo sounds and derives a set of accurate distances between devices from the difference in their arrival times. A Kalman filter integrates both signal cues to place collocated devices in a shared 3D space, combining the robustness of Bluetooth with the accuracy of audio signals for relative 3D tracking. 3) Tracko incorporates inertial sensors to refine 3D estimates and support quick interactions. Tracko robustly tracks devices in 3D with a mean error of 6.5 cm within 0.5 m and a 15.3 cm error within 1 m, which validates Tracko’s suitability for cross-device interactions.
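
The acoustic part of the fusion rests on a clock-free two-way ranging idea consistent with the abstract's description: both devices chirp, both record, and each measures only the local gap between hearing its own chirp and the other's; clock offsets cancel in the difference of the two gaps. A sketch with assumed constants, ignoring speaker-microphone offsets:

    import numpy as np

    FS = 48_000   # assumed audio sample rate (Hz)
    C = 343.0     # speed of sound (m/s)

    def arrival_sample(recording, chirp):
        """Sample index at which a known chirp arrives (matched filter)."""
        return int(np.argmax(np.correlate(recording, chirp, mode="valid")))

    def distance_m(gap_a_samples, gap_b_samples):
        """Two-way ranging: each gap is the locally measured interval
        between own chirp and the other device's chirp."""
        return C * (gap_a_samples - gap_b_samples) / (2 * FS)

    # toy numbers: at 1 m separation the two gaps differ by 2 m / c,
    # i.e. about 280 samples at 48 kHz
    print(round(distance_m(10_000 + 280, 10_000), 3))   # ~1.0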

Biometric Touch Sensing

Seamlessly Augmenting Each Touch with Continuous Authentication. ACM UIST 2015.

Christian Holz and Marius Knaust. ACM UIST 2015.
PDF

Abstract:

Current touch devices separate user authentication from regular interaction, for example by displaying modal login screens before device usage or prompting for in-app passwords, which interrupts the interaction flow. We propose biometric touch sensing, a new approach to representing touch events that enables commodity devices to seamlessly integrate authentication into interaction: From each touch, the touchscreen senses the 2D input coordinates and at the same time obtains biometric features that identify the user. Our approach makes authentication during interaction transparent to the user, yet ensures secure interaction at all times. To implement this on today’s devices, our watch prototype Bioamp senses the impedance profile of the user’s wrist and modulates a signal onto the user’s body through skin using a periodic electric signal. This signal affects the capacitive values touchscreens measure upon touch, allowing devices to identify users on each touch. We integrate our approach into Windows 8 and discuss and demonstrate it in the context of various use cases, including access permissions and protecting private screen contents on personal and shared devices.

Bodyprint

Biometric User Identification on Mobile Devices Using the Capacitive Touchscreen to Scan Body Parts. ACM CHI 2015.

Christian Holz, Senaka Buthpitiya and Marius Knaust. ACM CHI 2015.
PDF

Abstract:

Recent mobile phones integrate fingerprint scanners to authenticate users biometrically and replace passwords, making authentication more convenient for users. However, due to their cost, capacitive fingerprint scanners have been limited to top-of-the-line phones, a result of the required resolution and quality of the sensor.
We present Bodyprint, a biometric authentication system that detects users’ biometric features using the same type of capacitive sensing, but uses the touchscreen as the image sensor instead. While the input resolution of a touchscreen is ~6 dpi, the surface area is larger, allowing the touch sensor to scan users’ body parts, such as ears, fingers, fists, and palms by pressing them against the display. Bodyprint compensates for the low input resolution with an increased false rejection rate, but does not compromise on authentication precision: In our evaluation with 12 participants, Bodyprint classified body parts with 99.98% accuracy and identified users with 99.52% accuracy with a false rejection rate of 26.82% to prevent false positives.
Scanning users’ ears for identification, Bodyprint achieves 99.8% authentication precision with a false-rejection rate of 1 out of 13, thereby bringing reliable biometric user authentication to a vast number of commodity devices.

2013

Fiberio

A Touchscreen That Senses Fingerprints. ACM UIST 2013.

Christian Holz and Patrick Baudisch. ACM UIST 2013.
PDF

Abstract:

We present Fiberio, a rear-projected multitouch table that identifies users biometrically based on their fingerprints during each touch interaction. Fiberio accomplishes this using a new type of screen material: a large fiber optic plate. The plate diffuses light on transmission, thereby allowing it to act as a projection surface. At the same time, the plate reflects light specularly, which produces the contrast required for fingerprint sensing. In addition to offering all the functionality known from traditional diffused illumination systems, Fiberio is the first interactive tabletop system that authenticates users during touch interaction—unobtrusively and securely using the biometric features of fingerprints, which frees users from carrying identification tokens.

2012

Implanted User Interfaces

ACM CHI 2012.

Christian Holz, Tovi Grossman, George Fitzmaurice and Anne Agur. ACM CHI 2012.
PDF

Abstract:

We investigate implanted user interfaces that small devices provide when implanted underneath human skin. Such devices always stay with the user, making their implanted user interfaces available at all times. We discuss four core challenges of implanted user interfaces: how to sense input through the skin, how to produce output, how to communicate amongst one another and with external infrastructure, and how to remain powered. We investigate these four challenges in a technical evaluation where we surgically implant study devices into a specimen arm. We find that traditional interfaces do work through skin. We then demonstrate how to deploy a prototype device on participants, using artificial skin to simulate implantation. We close with a discussion of medical considerations of implanted user interfaces, risks and limitations, and project into the future.

2011

Understanding Touch

ACM CHI 2011.

Christian Holz and Patrick Baudisch. ACM CHI 2011.
PDF

Abstract:

Current touch devices, such as capacitive touchscreens, are based on the implicit assumption that users acquire targets with the center of the contact area between finger and device. Findings from our previous work indicate, however, that such devices are subject to systematic error offsets. This suggests that the underlying assumption is most likely wrong. In this paper, we therefore revisit this assumption.
In a series of three user studies, we find evidence that the features that users align with the target are visual features. These features are located on the top of the user’s fingers, not at the bottom, as assumed by traditional devices. We present the projected center model, under which error offsets drop to 1.6 mm, compared to 4 mm for the traditional model. This suggests that the new model is indeed a good approximation of how users conceptualize touch input.
The primary contribution of this paper is to help understand touch—one of the key input technologies in human-computer interaction. At the same time, our findings inform the design of future touch input technology. They explain the inaccuracy of traditional touch devices as a "parallax" artifact between user control based on the top of the finger and sensing based on the bottom side of the finger. We conclude that certain camera-based sensing technologies can inherently be more accurate than contact area-based sensing.

2010

The Generalized Perceived Input Point Model

How to Double Touch Accuracy by Extracting Fingerprints. ACM CHI 2010.

Christian Holz and Patrick Baudisch. ACM CHI 2010.
PDF

Abstract:

It is generally assumed that touch input cannot be accurate because of the fat finger problem, i.e., the softness of the fingertip combined with the occlusion of the target by the finger. In this paper, we show that this is not the case. We base our argument on a new model of touch inaccuracy. Our model is not based on the fat finger problem, but on the perceived input point model. In its published form, this model states that touch screens report touch location at an offset from the intended target. We generalize this model so that it represents offsets for individual finger postures and users. We thereby switch from the traditional 2D model of touch to a model that considers touch a phenomenon in 3-space. We report a user study, in which the generalized model explained 67% of the touch inaccuracy that was previously attributed to the fat finger problem.
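
The model's practical consequence is a per-user, per-posture offset table applied at touch time. The class below is a hypothetical illustration of that correction step only; Ridgepad additionally derives the user and finger posture from the fingerprint itself.

    import numpy as np

    class OffsetCorrector:
        """Per-user, per-posture touch-offset correction (sketch of the
        generalized model's consequence, not the prototype's code)."""
        def __init__(self):
            self.offsets = {}   # (user, posture) -> mean 2D offset

        def calibrate(self, user, posture, reported, intended):
            """Store the mean offset between sensed and intended points."""
            self.offsets[(user, posture)] = np.mean(
                np.asarray(reported) - np.asarray(intended), axis=0)

        def correct(self, user, posture, touch_xy):
            off = self.offsets.get((user, posture), np.zeros(2))
            return np.asarray(touch_xy) - off

    c = OffsetCorrector()
    c.calibrate("alice", "steep", [(10.2, 5.1), (9.8, 4.9)], [(8, 3), (8, 3)])
    print(c.correct("alice", "steep", (12.0, 6.0)))   # shifted toward intent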
In the second half of this paper, we present two devices that exploit the new model in order to improve touch accuracy. Both model touch on per-posture and per-user basis in order to increase accuracy by applying respective offsets. Our Ridgepad prototype extracts posture and user ID from the user’s fingerprint during each touch interaction. In a user study, it achieved 1.8 times higher accuracy than a simulated capacitive baseline condition. A prototype based on optical tracking achieved even 3.3 times higher accuracy. The increase in accuracy can be used to make touch interfaces more reliable, to pack up to 3.3^2 > 10 times more controls into the same surface, or to bring touch input to very small mobile devices.