Wayfinding with Words

Spatial language, terminology that describes the environment or one's position and orientation within it, is traditionally used either for giving static route directions (e.g., being told in advance how to walk or drive from your house to the pub) or for providing real-time route guidance (such as the instructions given by a GPS-based navigation system). My dissertation research and subsequent early investigations focused on whether verbal descriptions could support more complex tasks, such as learning global environmental structure and nonvisual wayfinding behavior. To this end, I developed a verbal protocol for spatial learning and navigation without vision. Key to this description logic were the use of real-time rather than static information about the user's position and orientation in the environment, the inclusion of cues about layout geometry rather than simple route information, and the implementation of dual egocentric/allocentric reference frames instead of only one or the other. A series of studies showed that these descriptions supported accurate spatial learning, cognitive map development, and wayfinding behavior in unfamiliar real and virtual indoor environments by both blindfolded sighted and blind and visually impaired (BVI) people.
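
To make the dual-reference-frame idea concrete, the sketch below generates a single utterance that combines an egocentric clock-face bearing with an allocentric cardinal direction. It is a minimal illustration, not the actual protocol: the map representation, function names, and wording are all invented here, and it assumes a simple 2-D coordinate frame with north along +y.

```python
import math

def relative_bearing(user_pos, user_heading_deg, target_pos):
    """Bearing to target relative to the user's facing direction (egocentric)."""
    dx = target_pos[0] - user_pos[0]
    dy = target_pos[1] - user_pos[1]
    absolute = math.degrees(math.atan2(dx, dy))      # 0 deg = north (+y)
    return (absolute - user_heading_deg + 180) % 360 - 180  # wrap to [-180, 180)

def clock_direction(bearing_deg):
    """Map a relative bearing to a clock-face term (12 o'clock = straight ahead)."""
    hour = round(bearing_deg / 30) % 12
    return "12 o'clock" if hour == 0 else f"{hour} o'clock"

def cardinal(user_pos, target_pos):
    """Coarse allocentric (map-based) direction from user to target."""
    dx = target_pos[0] - user_pos[0]
    dy = target_pos[1] - user_pos[1]
    octants = ["north", "northeast", "east", "southeast",
               "south", "southwest", "west", "northwest"]
    return octants[round(math.degrees(math.atan2(dx, dy)) / 45) % 8]

def describe(user_pos, user_heading_deg, landmark_name, landmark_pos):
    """Combine egocentric and allocentric frames in one utterance."""
    dist = math.hypot(landmark_pos[0] - user_pos[0], landmark_pos[1] - user_pos[1])
    ego = clock_direction(relative_bearing(user_pos, user_heading_deg, landmark_pos))
    allo = cardinal(user_pos, landmark_pos)
    return f"{landmark_name} is at your {ego}, about {dist:.0f} meters away, to the {allo}."

# Example: user at the origin facing east; the landmark is hypothetical.
print(describe((0, 0), 90, "The elevator", (12, 5)))
# -> "The elevator is at your 11 o'clock, about 13 meters away, to the northeast."
```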

This work also led to the development of an offline virtual learning mode, called a virtual verbal display (VVD), which can be used as a training tool to explore and learn new environments before physically visiting them. Pre-journey learning is particularly important for BVI people, as it allows the environment to be explored and learned in a safe, low-stress manner and supports development of a cognitive map of the space that can then be accessed once physically there.
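
The following toy loop conveys the flavor of VVD-style pre-journey exploration: the user issues movement commands over a small grid map and receives a short verbal description of the open hallways at each step. The map, command set, and phrasing are invented for this sketch and do not reproduce the actual VVD implementation.

```python
# Minimal text-based sketch of a virtual-verbal-display-style loop:
# the user "moves" through a toy grid map and hears (here, reads) a
# description after each step. Map and wording are illustrative only.
GRID = [
    "######",
    "#....#",
    "#.##.#",
    "#....#",
    "######",
]
DIRS = {"n": (-1, 0), "s": (1, 0), "e": (0, 1), "w": (0, -1)}
NAMES = {"n": "north", "s": "south", "e": "east", "w": "west"}

def open_directions(r, c):
    """Names of the directions with an open hallway from cell (r, c)."""
    return [NAMES[d] for d, (dr, dc) in DIRS.items() if GRID[r + dr][c + dc] == "."]

def explore():
    r, c = 1, 1  # start in the top-left open cell
    while True:
        print(f"Open hallways: {', '.join(open_directions(r, c))}.")
        cmd = input("Move (n/s/e/w, q to quit): ").strip().lower()
        if cmd == "q":
            break
        if cmd in DIRS and NAMES[cmd] in open_directions(r, c):
            dr, dc = DIRS[cmd]
            r, c = r + dr, c + dc
            print(f"You walk one segment {NAMES[cmd]}.")
        else:
            print("You face a wall; that way is blocked.")

if __name__ == "__main__":
    explore()
```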

Two important outcomes of this research were that (1) learning of both real and virtual environments is more accurate and imposes less cognitive load when using a perceptual interface such as spatialized audio (hearing objects or hallways as coming from their actual locations in space) than when using verbal information alone, and (2) when navigating virtual environments, descriptions paired with physical movement led to significantly better cognitive map development and imposed less cognitive load than descriptions paired with a keyboard or joystick to effect movement. These results have provided much-needed guidelines for the design of verbal and nonvisual interfaces and are increasingly being used in the design of sensory substitution devices and navigation systems.
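
As a rough illustration of outcome (1), the sketch below renders an audio beacon's direction and distance as a pair of stereo gains using constant-power panning. This is only a toy approximation under assumptions I have invented (2-D geometry, 1/d attenuation, simple left/right panning); the spatialized-audio displays used in these studies would rely on proper binaural (HRTF-based) rendering rather than stereo panning.

```python
import math

def beacon_stereo_gains(user_pos, user_heading_deg, source_pos, ref_dist=1.0):
    """Toy stereo rendering of a spatialized audio beacon.

    Returns (left_gain, right_gain): constant-power panning from the
    source's bearing relative to the listener, attenuated with distance.
    """
    dx = source_pos[0] - user_pos[0]
    dy = source_pos[1] - user_pos[1]
    bearing = math.degrees(math.atan2(dx, dy)) - user_heading_deg
    bearing = (bearing + 180) % 360 - 180              # wrap to [-180, 180)
    # Constant-power pan: -90 deg maps to hard left, +90 deg to hard right.
    pan = max(-1.0, min(1.0, bearing / 90.0))
    theta = (pan + 1.0) * math.pi / 4.0                # 0 .. pi/2
    attenuation = ref_dist / max(ref_dist, math.hypot(dx, dy))  # ~1/d falloff
    return math.cos(theta) * attenuation, math.sin(theta) * attenuation

# Example: a doorway about 5 m away, slightly to the listener's right,
# so the right channel is louder than the left.
left, right = beacon_stereo_gains((0, 0), 0, (2, 4.5))
print(f"L={left:.2f}  R={right:.2f}")
```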

My recent work in this domain has focused on using verbal descriptions and other nonvisual information to provide access to local "scenes" for people who cannot see their surroundings, e.g., BVI individuals or sighted people operating in the dark.

Relevant citations:

1. Klatzky, R.L., Marston, J.R., Giudice, N.A., Golledge, R.G., & Loomis, J.M. (2006). Cognitive Load of Navigating Without Vision When Guided by Virtual Sound Versus Spatial Language. Journal of Experimental Psychology: Applied. 12(4), 223-232.

2. Giudice, N.A., Bakdash, J.Z., & Legge, G.E. (2007). Wayfinding with Words: Spatial Learning and Navigation Using Dynamically-Updated Verbal Descriptions. Psychological Research. 71(3), 347-358.

3. Giudice, N.A., & Tietz, J. (2008). Learning with Virtual Verbal Displays: Effects of Interface Fidelity on Cognitive Map Development. In C. Freksa, N. Newcombe, P. Gärdenfors, & S. Wölfl (Eds.), Proceedings of Spatial Cognition VI: Lecture Notes in Artificial Intelligence (Vol. 5248, pp. 121-137). Berlin: Springer.

4. Giudice, N.A., Marston, J.R., Klatzky, R.L., Loomis, J.M., & Golledge, R.G. (2008). Environmental learning without vision: Effects of cognitive load on interface design. Proceedings of the 9th International Conference on Low Vision (Vision 08). July, Montreal, Canada.

5. Giudice, N.A., Bakdash, J.Z., Legge, G.E., & Roy, R. (2010). Spatial Learning and Navigation Using a Virtual Verbal Display. ACM Transactions on Applied Perception. 7(1), Article 3, 3:1-3:22.

6. Kesavan, S., & Giudice, N.A. (2012). Indoor scene knowledge acquisition using a natural language interface. In C. Graf, N.A. Giudice, & F. Schmid (Eds.), Proceedings of the International Workshop on Spatial Knowledge Acquisition with Limited Information Displays (SKALID'12), pp. 1-6. August, Seeon Monastery, Germany.

Complete List of Published Work:

E-pubs at: https://umaine.edu/vemi/publications/

Google Scholar: https://scholar.google.com/citations?user=jD95I7EAAAAJ