Spatial learning and navigation using a virtual verbal display
Published: 2010
Publication Name: ACM Transactions on Applied Perception
Abstract:
We report on three experiments that investigate the efficacy of a new type of interface, called a virtual verbal display (VVD), for nonvisual learning and navigation of virtual environments (VEs). Although verbal information has been studied for route guidance, little is known about the use of context-sensitive, speech-based displays (e.g., the VVD) for supporting free exploration and wayfinding behavior. During training, participants used the VVD (Experiments I and II) or a visual display (Experiment III) to search the VEs and find four hidden target locations. At test, all participants performed a route-finding task in the corresponding real environment, navigating with vision (Experiments I and III) or from verbal descriptions (Experiment II). Training performance was comparable between virtual display modes, but wayfinding in the real environment was worse after VVD learning than after visual learning, regardless of the testing modality. Our results support the efficacy of the VVD for searching computer-based environments but indicate a difference in the cognitive maps built up from verbal versus visual learning, perhaps due to the lack of physical movement in the VVD.
Citation: Giudice, N.A., Bakdash, J.Z., Legge, G.E., & Roy, R. (2010). Spatial learning and navigation using a virtual verbal display. ACM Transactions on Applied Perception, 7(1), Article 3, 3:1-3:22.