Navigating Novel Environments: A Comparison of Verbal and Visual Learning
Published: 2004
Nicholas A. Giudice. Unpublished doctoral dissertation, Dec. 2004, UMN.
Abstract:
The purpose of this research was to investigate how well individuals learn novel indoor environments by means of verbal descriptions. Seven experiments were conducted addressing the following four questions:
1) Does spatial learning performance differ based on whether a building layout is learned using verbal descriptions or visual input?
2) Is learning performance influenced by the amount of environmental detail available to the navigator?
3) Does learning performance differ for exploration of real vs. virtual environments?
4) Does learning performance differ based on visual status? (Experiments included sighted, blindfolded-sighted, and blind participants.)
The studies incorporated two experimental phases. During the training phase, participants freely explored a layout and were instructed to use the verbal/visual information to learn the complete network of corridors as well as to find four hidden target locations (indicated by an auditory cue). During the testing phase, knowledge of the training environment was evaluated by several measures, including straight-line distance estimation between target locations, map reproduction and route navigation.
These experiments represent the first known work to study whether spatial verbal descriptions support wayfinding behavior in large-scale layouts and to investigate whether learning with a virtual verbal display in simulated environments transfers to accurate navigation in the corresponding real environment. The results provide clear evidence that verbal descriptions are an effective non-visual mode of environmental access. Performance during the training period was very similar across modalities, environments, and participant groups, suggesting that verbal information is sufficient to support accurate learning, wayfinding, and cognitive map development across a range of factors. Test performance was also generally good, although transfer to real-world navigation after virtual verbal learning was somewhat less accurate than after real-world learning or learning in visually rendered environments. The level of description provided did not reliably affect performance for verbal learning, suggesting that a minimal message describing local geometry is sufficient. In contrast, the same minimal message led to significantly degraded performance in the visual conditions.