Sharon Oviatt

The Handbook of Multimodal-Multisensor Interfaces, Volume 1


tend to be abstract, meanings must be mapped to signals. In the absence of a shared understanding of what these stimuli signify, meaning-mapping is driven by personal experience [Schneider and MacLean 2014, Alter and Oppenheimer 2009]. Individual differences in describing and preferring haptic sensations are thus dominated by personal schemas of interpretation and sense-making [Seifi and MacLean 2013, Seifi et al. 2015, Levesque et al. 2012].

      How can design practices accommodate and leverage such extensive differences in perception and interpretation?

      Haptic researchers have been looking for common themes in users’ perception from the start, and many do exist. Shared interpretations can be translated into guidelines for designing sensations that are distinguishable and expressive for at least a significant group of users. For example, most individuals agree that urgency is well represented by higher vibrotactile energy and frequency. Common cultural connotations can also be transferred from other modalities. Audition contributes an understanding of rhythm [Brown et al. 2006a], and auditory icons can be mimicked to achieve a comparable shared perception in their haptic counterparts, whether through direct translation or by exploiting the underlying design principles and parameters. For example, van Erp and Spapé [2003] transformed 59 short music pieces into vibrotactile patterns, while Ternes and MacLean [2008] built a large set of vibration icons using rhythm and pitch (frequency).
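
      To make the urgency guideline concrete, the sketch below maps an abstract urgency level onto vibrotactile parameters, following the shared intuition that higher energy and frequency read as more urgent. The numeric ranges are illustrative assumptions, not values from this chapter; real designs would be tuned per device and, ideally, per user.

```python
def urgency_to_vibration(urgency: float) -> dict:
    """Map an urgency level in [0, 1] to vibrotactile parameters.

    Follows the common finding that higher energy and frequency are
    perceived as more urgent. The numeric ranges below are illustrative
    assumptions only.
    """
    urgency = max(0.0, min(1.0, urgency))  # clamp to the valid range
    return {
        "amplitude": 0.2 + 0.8 * urgency,       # normalized drive level
        "frequency_hz": 80 + 170 * urgency,     # e.g., an 80-250 Hz band
        "pulse_period_s": 1.0 - 0.7 * urgency,  # faster rhythm when urgent
    }

# Example: a low-urgency ambient cue vs. a critical alert.
print(urgency_to_vibration(0.1))
print(urgency_to_vibration(0.9))
```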

      Large individual differences in haptic perception necessitate evaluating designs at scale, with a larger participant pool. Crowdsourced evaluation of haptic designs is an enabling new direction (Section 3.5.2).

      While guidelines enable haptic design for users in the large, support for customization is key to design effectiveness for individuals [Seifi et al. 2014, Seifi et al. 2015, Ledo et al. 2012]. Applications should enable individual haptic meaning-mapping by allowing users to choose desired settings or mappings for a piece of information. The ability to tune pre-designed sensations or create new ones can further support users in tweaking a signal to their specific usage context and preferences.
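
      One way such customization could be structured is as a per-user mapping layer over designer defaults, letting users reassign sensations to events and tune pre-designed ones. The sketch below is a minimal illustration; the `VibrationIcon` fields and event names are hypothetical, not a standard API.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VibrationIcon:
    """A pre-designed sensation; fields are illustrative assumptions."""
    amplitude: float    # 0..1 normalized drive level
    frequency_hz: float
    rhythm: tuple       # on/off durations in seconds

# Designer-provided defaults: event -> sensation.
DEFAULTS = {
    "message":  VibrationIcon(0.4, 150, (0.1, 0.1, 0.1)),
    "calendar": VibrationIcon(0.6, 200, (0.3, 0.2, 0.3)),
}

class HapticSettings:
    """Per-user meaning-mapping layer over the designer defaults."""
    def __init__(self):
        self.overrides = {}

    def remap(self, event, icon):
        """Let the user assign a different sensation to an event."""
        self.overrides[event] = icon

    def tune(self, event, **params):
        """Let the user tweak a pre-designed sensation, e.g., its amplitude."""
        base = self.overrides.get(event, DEFAULTS[event])
        self.overrides[event] = replace(base, **params)

    def icon_for(self, event):
        return self.overrides.get(event, DEFAULTS[event])

settings = HapticSettings()
settings.tune("message", amplitude=0.7)          # stronger message buzz
settings.remap("calendar", DEFAULTS["message"])  # reuse a preferred icon
print(settings.icon_for("message"))
```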

      It will often be necessary to provide non-haptic backup modalities. Some individuals will be unable (e.g., because of sensory, cognitive, or situational constraints) or unwilling to use haptic feedback, whether always or only in certain situations. Interaction designers must allow users to mute haptics or switch to other modalities when needed. When a haptic element is the primary form of information display, as discussed in Section 3.2.3, this may require automatic translation between haptics and other modalities such as audio and visual [Hoggan and Brewster 2007].
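
      The sketch below illustrates one possible shape for such a fallback: a dispatcher that honors user mutes and situational constraints, translating a vibrotactile icon to an audio or visual rendering when haptics is unavailable. The translation rules are deliberately crude stand-ins, loosely in the spirit of crossmodal icon design [Hoggan and Brewster 2007], not their published method.

```python
def to_audio(icon):
    """Crude crossmodal translation: reuse the rhythm, shift the carrier
    frequency into an audible band. A simplified assumption, not a
    principled crossmodal mapping."""
    return {"rhythm": icon["rhythm"], "pitch_hz": icon["frequency_hz"] * 4}

def to_visual(icon):
    """Render the same rhythm as a screen flash pattern."""
    return {"flash_pattern": icon["rhythm"], "brightness": icon["amplitude"]}

def dispatch(icon, user_prefs, context):
    """Pick an output modality, honoring mutes and situational constraints."""
    if user_prefs.get("haptics_enabled", True) and not context.get("device_docked"):
        return ("haptic", icon)
    if user_prefs.get("audio_enabled", True) and not context.get("silent_mode"):
        return ("audio", to_audio(icon))
    return ("visual", to_visual(icon))

icon = {"amplitude": 0.6, "frequency_hz": 200, "rhythm": (0.1, 0.1, 0.1)}
print(dispatch(icon, {"haptics_enabled": False}, {"silent_mode": False}))
```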

      Several factors make haptic design challenging from a technical standpoint today. Hardware elements are typically able to render just one perceptual haptic submodality: vibration, force, shape, texture, shear, or temperature. These hardware elements are difficult to integrate, so the resulting sensations differ greatly from touching in the real world. Hardware also varies widely in expressive nature and degree, even within a given submodality, and hardware configuration (weight, materials, etc.) strongly influences the resulting sensations.

      As a consequence, haptic effects generally must be designed for a specific hardware element, and cannot easily be transferred to another actuator with a different mechanism, manner of being worn, or performance. Moreover, industry suffers a general dearth of tools and expertise for haptic design, and a shortage of examples and accepted practices to draw on. Tool development is a priority for the field, and we will offer a perspective on the space that tools do, and must, jointly cover in Sections 3.4.3 and 3.5.3.

      The touch sense is routinely used in close partnership with other modalities, which must be considered at design time. Here we examine multimodal interaction holistically by analyzing several scenarios in terms of their interactive goals and features (Sections 3.2.1 and 3.2.2); zooming in on the roles haptic sensations take alongside other modalities (Section 3.2.3); and examining the contribution of haptics to those interactions (Section 3.2.4).

      We begin by considering how a multimodal interaction can be structured in terms of goals and design element parameters. We will use the scenarios laid out in Table 3.1 to show how their interactive goals and features define interaction requirements, then build further on these examples throughout the rest of the chapter. These structures are generally neither orthogonal nor mutually exclusive; they might appear alone or in combination.

      A holistic interaction is often dominated by a particular information-display objective. For example, it might provide, notify, and/or guide, deploying a variety of sensory modalities as appropriate. The interaction goal can shift with the user’s momentary need, and a display can reconfigure its utilities to match. To illustrate, a common current approach for a navigation interface on a mobile or wearable device is to guide with “push” auditory directives and/or vibrotactile feedback about an upcoming turn; when the user needs more detail, the map is provided on a graphical screen (scenarios [S1] and [S2] in Table 3.1).

[Table 3.1: interaction scenarios [S1]–[S4] referenced throughout this chapter (original table image not reproduced)]

      When provided (or offered), information is continuously available. It can be accessed at the user’s will, or offered as an ambient stream that the user may consume or ignore. It might be functional, e.g., indicating the time remaining on a clock or progress toward a goal on a wearable display [S1], or adding dexterity-enabling sensory layers to a remote surgery context [S4]. It could enrich an experience, e.g., watching a haptically augmented movie [S3]. An interface might escalate an ambient information-display channel to the notify level (transitioning to a higher-salience, discrete medium) when the information becomes crucial.

      In notify, information is pushed to the user when it becomes of interest, or ready. Notifications can vary in salience, down to sub-attentional levels; but the conceptual distinction from provided information is that notification is event-based, rather than continually available.
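
      The provide-to-notify escalation mentioned above can be pictured as a small state model: a continuously available ambient stream that fires a discrete, higher-salience event when a monitored value becomes crucial. The sketch below is a minimal illustration; the threshold and salience labels are assumptions, not from this chapter.

```python
class AmbientChannel:
    """A continuously available ('provide') stream that escalates to a
    discrete notification when the value becomes crucial. The threshold
    and salience levels are illustrative assumptions."""

    def __init__(self, notify_threshold=0.8):
        self.notify_threshold = notify_threshold
        self.escalated = False

    def update(self, value):
        if value >= self.notify_threshold and not self.escalated:
            self.escalated = True  # event-based: fire the notification once
            return ("notify", {"salience": "high", "value": value})
        if value < self.notify_threshold:
            self.escalated = False  # re-arm once the value drops again
        return ("provide", {"salience": "ambient", "value": value})

channel = AmbientChannel()
for progress in (0.3, 0.6, 0.85, 0.9, 0.5):
    print(channel.update(progress))
```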

      A guiding display supports user movement and action, in real or virtual space or processes. Guiding can be continuous, e.g., steering assistance [Forsyth and MacLean 2006]; or periodic or occasional, e.g., when a wearable exercise device gives pace feedback [Karuei and MacLean 2014] [S1]. There are many other types of guiding interfaces, such as software wizards that take a user through the steps of a complex configuration task, but these may not be as well suited to haptic participation. Guidance can be attentionally dominant or backgrounded, especially once well learned, as when the view of the road and traffic ahead nonconsciously influences one’s speed control of a car.
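
      As a rough illustration of the periodic guiding case, the sketch below compares measured cadence to a target and emits a corrective vibrotactile cue only when the user drifts out of tolerance, staying backgrounded otherwise. The cue encoding is an illustrative assumption, not the published design of Karuei and MacLean [2014].

```python
def pace_feedback(measured_spm, target_spm, tolerance=3.0):
    """Periodic guiding cue for a wearable exercise device.

    Compares measured cadence (steps per minute) to a target and returns
    a corrective vibrotactile cue, or None when within tolerance. The
    encoding below is an assumption made for illustration.
    """
    error = measured_spm - target_spm
    if abs(error) <= tolerance:
        return None  # backgrounded: no cue needed while on pace
    # Encode direction in rhythm and error magnitude in amplitude.
    rhythm = (0.05, 0.05) if error < 0 else (0.3, 0.3)  # speed up vs. slow down
    amplitude = min(1.0, abs(error) / 20.0)
    return {"rhythm": rhythm, "amplitude": amplitude}

print(pace_feedback(150, 160))  # too slow -> quick 'speed up' pulses
print(pace_feedback(160, 160))  # on pace -> no cue
```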

      The larger goals of a multimodal interaction expose design parameters that will define how an interaction can play out, and are a step toward setting its requirements. All modalities can potentially be called upon for these design elements; some may work better than others in a given situation, and redundancy may be