Navigation Systems

Information Requirements for Environmental Learning and Navigation

Much of the work in the spatial cognition literature involves tasks where people learn an arrangement of target locations, pre-defined routes, or small (laboratory-sized) layouts. While all manner of interesting spatial operations can be studied with these experimental stimuli and tasks, natural spatial behavior generally occurs in a very different context: real-world settings that are spatially extended, not limited to a few experimental objects, and subject to myriad sources of ambient variability. A growing number of studies use large-scale environments, either virtual or physical, but most of this work is still concerned with routes rather than with learning the space as a whole (environmental learning) or with spatial inference (e.g., determining shortcuts, detours, or straight-line distances between off-route landmarks). VEMI investigates such issues through studies of spatial knowledge acquisition in large-scale indoor and outdoor environments, as well as the transition between these spaces.

In addition to performance measures such as the speed and accuracy of executing a route or finding a destination, many of our studies address more complex spatial tasks such as cognitive map development, spatial inference making, and spatial updating (keeping track of your position in space as you move). Beyond test performance, we are also interested in the far less studied domain of human spatial learning behavior. This research generally uses a free exploration (open search) paradigm instead of a directed search (route-based) design. Areas of interest include how well people learn environments as a whole (form global representations), what exploratory patterns and decision-making processes they use, what learning strategies they employ, and how they perform under spatial uncertainty. These factors all provide important insight into what information is used during the learning process, and how.
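Two of the tasks described above, spatial updating and straight-line (shortcut) inference, have simple computational analogues. The sketch below is purely illustrative; the function names and the toy three-leg route are our own assumptions, not part of any VEMI system. It dead-reckons a position estimate from heading-and-distance movement steps (path integration) and then derives the homing vector, i.e., the straight-line distance and bearing back to the starting point.

```python
import math

def update_position(x, y, heading_deg, distance):
    """Advance a position estimate by one movement step (path integration).

    heading_deg is measured counterclockwise from the +x axis.
    """
    rad = math.radians(heading_deg)
    return x + distance * math.cos(rad), y + distance * math.sin(rad)

def homing_vector(x, y):
    """Straight-line (shortcut) distance and bearing back to the origin."""
    distance = math.hypot(x, y)
    bearing = math.degrees(math.atan2(-y, -x)) % 360
    return distance, bearing

# Walk three legs of a hypothetical route, updating the estimate each step.
x, y = 0.0, 0.0
for heading, dist in [(0, 10), (90, 10), (180, 5)]:
    x, y = update_position(x, y, heading, dist)

# The shortcut home is computed directly, without retracing the route.
shortcut, bearing = homing_vector(x, y)
```

A traveler who can only retrace the route must cover 25 units, while the inferred shortcut is about 11.2 units; the gap between these two quantities is one way to operationalize the difference between route knowledge and the global (survey) knowledge discussed above.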

We study environmental learning and wayfinding behavior across blind, blindfolded-sighted, and sighted participants, with a growing interest in aging and how spatial performance changes across the lifespan. Comparing these participant groups and age brackets helps us understand how access to different sources of spatial information and the use of different learning strategies affect performance as a function of real or simulated sensory loss and natural lifespan development.

Multimodal Interfaces for Real-Time Navigation Systems

Although we study all manner of spatial layouts, our work concentrates on indoor navigation using both real and virtual environments, as well as augmented reality. Compared to outdoor travel, indoor navigation is supported by far less environmental information, fewer orienting cues, and fewer external aids (such as maps or GPS). As a result, spatial learning and wayfinding in indoor spaces can pose particularly difficult challenges.

A practical outcome of building up and accessing functionally equivalent representations is that, given the appropriate information, different sensor technologies, multimodal interfaces, and spatial displays could support the same level of spatial behavior as vision. Our goal is to identify a core set of sensory-independent spatial primitives for indoor and outdoor environments that support complex spatial behaviors irrespective of the input channel. These spatial primitives are at the heart of our research and development of all navigation interfaces: visual, auditory, haptic, language-based, and multimodal. Although we are interested in the hardware and software used in such displays, our primary focus is on the content and presentation of spatial information: determining the minimal information requirements and the best delivery methods for supporting the highest level of environmental learning and navigation performance. These results are critical for understanding how multimodal spatial cognition is affected by the availability of different sources of information, and they establish design specifications for visual and non-visual interfaces alike that support a similar level of performance across a range of common spatial behaviors and end-user groups.