Functional Equivalence
Underlying our interest in Multimodal Spatial Cognition is the theory of functional equivalence of spatial representations. At the heart of this hypothesis is the notion that, when information is matched between inputs, learning from separate encoding modalities can converge on a common spatial representation in memory (called the spatial image in working memory or the cognitive map in long-term memory). The importance of this representation is that it supports spatial behaviors equivalently irrespective of the information source (although learning time may differ between inputs). A growing body of behavioral and neuroimaging evidence, including work from VEMI, demonstrates that information learned from different modalities leads to functionally equivalent test performance across a host of spatial tasks, including spatial updating, orienting, and wayfinding (see our Publications page for specific research). Our theoretical interest is independent of any specific combination of inputs or tasks; we focus instead on which situations will and will not induce equivalence between input modalities and on the structure of the ensuing spatial representations (and their associated neural substrates) that mediate this behavior.
There are at least three theoretical explanations for functionally equivalent behavior:
- Separate but equal hypothesis: isomorphic, modality-specific representations are developed and exist in parallel.
- Recoding hypothesis: all inputs are converted into a visual representation.
- Amodal hypothesis: separate inputs lead to a common “spatial” representation that is not tied to any input source.

Our research results favor this third, amodal explanation.
Our research on functional equivalence and the development of common spatial representations has been supported by several grants and has led to a series of papers and chapters examining the similarity of different combinations of inputs.