VEMI’S RESEARCH


Our Approach & Overview

Experiments in the VEMI Lab are conducted using real-world layouts, Virtual Environments (VEs), and Augmented Reality (AR). We employ a number of methodological approaches in our research, often combining techniques from Psychophysics, Experimental Psychology, and Cognitive Neuroscience with principles from Human-Computer Interaction and Human Factors Engineering. 

VEMI’s underlying theory adopts a holistic perspective of the enterprise of spatiality—one that focuses on common spatial information content, representation, mental computation, and behavior, rather than emphasizing the specific sensory conduit of this spatial information.

VEMI’s research interests bridge several theoretical and applied domains, but the core of our program is linked by a fundamental interest in what we call multimodal spatial cognition (MSC). MSC deals with topics such as spatial learning and navigation from different sensory inputs, the effects of multimodal and cross-modal interactions on the mental representation of space, and comparisons of spatial computation, spatial problem solving, and spatial behavior across different information sources.

Underlying our interest in multimodal spatial cognition is the theory of functional equivalence of spatial representations. At the heart of this hypothesis is the notion that when information is matched between inputs, learning from separate encoding modalities can converge on a common spatial representation in memory (called the spatial image in working memory or the cognitive map in long-term memory).

At VEMI, we study navigation behaviors (environmental learning and wayfinding) in blind, blindfolded-sighted, and sighted participants, with a growing interest in aging and how spatial performance changes across the lifespan. Compared with outdoor travel, indoor navigation is supported by far less environmental information, fewer orienting cues, and fewer external aids (such as maps or GPS).

Current Funded Research Projects

2019  NSF, “Improving user trust of autonomous vehicles through human-vehicle collaboration”; (N.A. Giudice, UMaine (PI) and R.R. Corey, UMaine).

2019  NSF, “Development of a multimodal interface for improving independence of Blind and Visually Impaired people”; (R.R. Corey, UMaine (PI); with H.P. Palani (PI) and N.A. Giudice, Unar Labs).

2018  NEH, “Preservation and access research and development”; Accessible Civil Rights Heritage Proposal; (N.A. Giudice, UMaine (PI); with R.R. Corey, UMaine, and M. Williams (PI); J. Bell, Dartmouth College).

2018  NSF, “A remote multimodal learning environment to increase graphical information access for Blind and Visually Impaired students”; (N.A. Giudice, UMaine (PI); with J.K. Dimmel, UMaine; and S.A. Doore, Bowdoin College).

2017  NSF, “Perceptual and implementation strategies for knowledge acquisition of digital tactile graphics for Blind and Visually Impaired students”; (N.A. Giudice, UMaine (PI); with J. Gorlewicz, Saint Louis University (PI); D.W. Smith, University of Alabama, Huntsville; and A.M. Stefik, University of Nevada, Las Vegas).

2017  NIH, “Roboglasses® electronic travel aid with hands-free obstacle avoidance for blind and vision impaired users”; (N.A. Giudice, UMaine (PI); with Fauxsee Innovations LLC (PI), Little Rock, AR).

2017  NSF, “Touchscreen-based graphics for Blind and Visually Impaired users”; (N.A. Giudice, UMaine (PI); with H.P. Palani, UMaine, and V. Buble).

2015  NIH, “Audio-haptic virtual environments for large-scale navigation in the blind”; (N.A. Giudice, UMaine (PI); with L. Merabet, Harvard (PI); and K. Sathian, Emory). 

2014  NSF, “Non-visual access to graphical information using a vibro-audio display”; (N.A. Giudice, UMaine (PI)).