Draft:Heads-Up Computing

Introduction
Heads-Up Computing is a human-computer interaction approach initially proposed by Shengdong Zhao, a professor at City University of Hong Kong. This interaction design approach seeks to integrate computing support seamlessly into people's daily activities in ubiquitous environments.

Related applications have been explored in the design of games, video learning, and subtle interaction techniques. The vision suggests a potential solution to the problems that arise when digital and real-world interactions compete for the same attention. For instance, the phenomenon known as the "smartphone zombie" illustrates how using a mobile phone while walking diminishes situational awareness. In contrast, Heads-Up Computing aims to position digital interactions as complementary to real-world activities.

Heads-Up Computing remains an evolving area of research and development, with ongoing exploration of its practical applications and implications. While the long-term vision may involve embedding computing capabilities directly into the human body, the current definition of Heads-Up Computing centers on wearable technology with body-compatible hardware, multimodal interaction, and resource-aware interaction that adjusts dynamically to the user's context.



Characteristics
Heads-Up Computing is defined by three characteristics:


 * 1) Body-compatible hardware components. This design principle aligns the device's input and output modules with human sensory channels. Recognizing the head and hands as the body's key sensing and actuating hubs, the design includes a head-piece for visual and audio output (such as smart glasses or earphones), a hand-piece (such as a ring or wristband) for manual input and haptic feedback, and potentially a body-piece (such as a robot) that can perform additional physical tasks for the user.
 * 2) Multimodal voice, gaze, and gesture interaction. With the head-, hand-, and body-pieces in place, users can issue commands via voice, gaze, or subtle gestures involving the head, mouth, and fingers. These modalities are chosen because they can largely be performed even when the eyes and hands are otherwise occupied, thereby covering a broad range of interaction needs in daily activities.
 * 3) Resource-aware interaction model. The Heads-Up Computing interface needs to be generated dynamically according to the resources available to the user at any given moment. The system must therefore monitor the activity the user is currently engaged in, as well as the environmental constraints faced at that moment. An important area of development for this paradigm is a quantitative model that optimizes interaction by predicting how the primary task constrains the user's perceptual resources; such a model would be responsible for delivering just-in-time information to and from the head-, hand-, and body-pieces (a simplified sketch of this idea appears after this list).
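
The following is a minimal, hypothetical sketch of how a resource-aware interaction model might choose input and output channels from a monitored user context. It is illustrative only: the names (UserContext, choose_output_channel, choose_input_modality) and the simple rule-based logic are assumptions made for this example, not part of any published Heads-Up Computing system, which envisions a richer quantitative model.

```python
"""Illustrative sketch: a rule-based stand-in for a resource-aware interaction model."""

from dataclasses import dataclass


@dataclass
class UserContext:
    """Resources the system monitors at a given moment (hypothetical fields)."""
    eyes_busy: bool          # e.g., user is walking and watching traffic
    hands_busy: bool         # e.g., user is carrying groceries
    noisy_environment: bool  # e.g., loud street, audio/voice channels unreliable


def choose_output_channel(ctx: UserContext) -> str:
    """Pick the least-contended channel for delivering just-in-time information."""
    if not ctx.eyes_busy:
        return "head-piece display (smart glasses)"
    if not ctx.noisy_environment:
        return "head-piece audio (earphones)"
    return "hand-piece haptics (ring or wristband)"


def choose_input_modality(ctx: UserContext) -> str:
    """Pick an input modality compatible with what the user is currently doing."""
    if not ctx.noisy_environment:
        return "voice"
    if not ctx.hands_busy:
        return "finger micro-gesture"
    return "gaze or head gesture"


if __name__ == "__main__":
    # Example: user is walking (eyes busy) on a quiet street with free hands.
    ctx = UserContext(eyes_busy=True, hands_busy=False, noisy_environment=False)
    print("Output via:", choose_output_channel(ctx))
    print("Input via:", choose_input_modality(ctx))
```

In this toy example the context is reduced to three boolean flags; the quantitative model described above would instead estimate, continuously and predictively, how much of each perceptual and motor resource the primary task consumes before scheduling any digital interaction.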