Multimodal Spatial Cognition (MSC)
Historically, spatial cognition research has emphasized visual-spatial information and largely ignored the role of other spatial modalities, such as hearing, touch, and smell. This is curious, as we perceive the world through a synthesis of environmental sensing, and information gathered from nonvisual inputs is frequently used, in isolation or in conjunction with vision, to support spatial activities. I coined the term multimodal spatial cognition to describe my work investigating how we learn about, represent, and navigate our environment using different spatial modalities. Following the success of my early research using spatial verbal descriptions to support all manner of spatial behaviors, I became interested in whether different spatial inputs could support the same level of spatial performance as is possible from vision. I have conducted studies investigating spatial learning within and between modalities, including 3D audio, haptic, linguistic, and visual learning of graphs, maps, object arrays, scenes, and large-scale indoor environments. Although these studies used a range of experimental procedures and test measures, a consistent theme in the results is that learning from all inputs led to highly similar (i.e., functionally equivalent) test performance across a range of spatial tasks, including spatial updating, cognitive mapping, and navigation. Building on a model first elaborated by Jack Loomis, Roberta (Bobby) Klatzky, and others, these findings are interpreted as supporting the development of an amodal spatial representation in memory, called the spatial image, which functions equivalently in the service of action, irrespective of input source.
Anatomical evidence for common computation of spatial information in the brain has also been found by a growing number of researchers studying 'expert' processing regions. In an fMRI study I conducted with colleagues addressing visual and haptic scene learning, our results showed that the Parahippocampal Place Area (PPA), a brain region known for extracting the visuospatial structure of 3D scenes, was similarly activated by scenes learned from vision and from touch. These data provided the first empirical support for the notion of an amodal spatial representation in the PPA, based on neuronal populations that are preferentially tuned for spatial computation of 3D geometric structure irrespective of the modal source. In another fMRI study examining detection of the direction of auditory motion, we found similar involvement of the hMT+ complex (an area known to be recruited for coding visual motion direction) in congenitally blind participants.
The underlying theme advanced by this line of research emphasizes the role of 'space' in spatial cognition, in contrast to the traditional view of vision as its principal mechanism. While vision is a remarkable conduit of spatial information, it by no means has a monopoly on space. My combined behavioral and neuroimaging MSC research has been influential in recent years, given growing interest in multisensory information processing in the brain, assistive technology development, sensory substitution, and navigation by blind and visually impaired (BVI) people.
Relevant citations:
1. Giudice, N.A., Klatzky, R. L., & Loomis, J.M. (2009). Evidence for Amodal Representations after Bimodal Learning: Integration of Haptic-Visual Layouts into a Common Spatial Image. Spatial Cognition & Computation. 9(4), 287-304.
2. Wolbers, T.*, Loomis, J.M., Klatzky, R.L., & Giudice, N.A.* (2011). Modality Independent Coding of Spatial Layout in the Human Brain. Current Biology. 21(11), 984-989. (* Equal contribution of authors)
3. Giudice, N.A., Betty, M.R., & Loomis, J.M. (2011). Functional Equivalence of Spatial Images from Touch and Vision: Evidence from Spatial Updating in Blind and Sighted Individuals. Journal of Experimental Psychology: Learning, Memory, and Cognition. 37(3), 621-634.
4. Wolbers, T., Zahorik, P., & Giudice, N.A. (2011). Decoding the direction of auditory motion in blind humans. Neuroimage, special issue on Multivariate Decoding & Brain Reading. 56(2), 681-687.
5. Giudice, N.A., Klatzky, R.L., Bennett, C.R., & Loomis, J.M. (2013). Perception of 3-D location based on vision, touch, and extended touch. Experimental Brain Research. 224(1), 141-153.
6. Giudice, N.A., Klatzky, R.L., Bennett, C.R., & Loomis, J.M. (2013). Combining locations from working memory and long-term memory into a common spatial image. Spatial Cognition & Computation. 13, 103-128.
7. Loomis, J.M., Klatzky, R.L., & Giudice, N.A. (2013). Representing 3D space in working memory: Spatial images from vision, hearing, touch, and language. In S. Lacey & R. Lawson (Eds.), Multisensory Imagery: Theory & Applications (pp. 131-156). New York: Springer.
Complete List of Published Work:
E-pubs at: https://umaine.edu/vemi/publications/
Google Scholar: https://scholar.google.com/citations?user=jD95I7EAAAAJ