Research Statement and Contributions

Research Statement

My research program is inherently interdisciplinary, combining principles from human perception, cognitive neuroscience, and human-computer interaction. My mission (and that of the VEMI Lab) is to envision, develop, and evaluate human-inspired nonvisual, enhanced visual, and multimodal information access technologies for improving environmental awareness, spatial learning, and navigation. Our solutions make a difference in people's lives by providing immediate benefits for the information access needs of blind and visually impaired (BVI) people (representing 12 million persons in the U.S. and 285 million worldwide), as well as older adults experiencing vision loss (most visual impairment is age-related, and the reality is that 70-year-old eyes are not as keen as 20-year-old eyes). Visual impairment need not be physical or permanent; sighted people are also frequently “blind” to their environment. My research program addresses these scenarios with solutions for what we call situational blindness (e.g., texting while walking), eyes-free applications (such as performing a secondary task while driving), and cases where accurate imagination requires more than visual information (such as understanding the sight/sound characteristics of a new windfarm installation).

My basic research program has influenced experimental psychologists and cognitive neuroscientists working on blind spatial cognition, navigation, and multimodal information processing, and has been applied by computer scientists and engineers to the development of multimodal information access technology and sensory substitution devices. My experiences as a congenitally blind person provide me with unrivaled first-hand knowledge of the needs and challenges of this demographic and key insight into what does and does not work in the design of nonvisual information access technology, something that is often misunderstood by researchers and designers without this phenomenology.

Primary Contributions to Science

Below are several programmatic areas of particular interest where I believe my work has made the greatest scientific contributions and had the greatest impact on both technology designers and end-users.

I. Wayfinding with Words:

This line of research studies how spatial language, spatialized audio, and real-time verbal descriptions can support nonvisual wayfinding and cognitive map development in large-scale real and virtual environments (with an emphasis on indoor spaces). It also addresses the use of verbal descriptions and other nonvisual information to provide access to local “scenes” for people who cannot see their surroundings, e.g., BVI individuals or sighted people operating in the dark.

II. Multimodal Spatial Cognition (MSC):

Most spatial cognition research addresses only visual-spatial information and ignores the role of other spatial inputs. My research compares spatial learning, updating, and wayfinding behavior within and between modalities (3-D sound, touch, vision, and spatial language). I employ both behavioral and neuroimaging paradigms and incorporate both BVI and sighted people across a range of ages and abilities.

III. Blindness and Visual Impairment:

Most of my interests relate in some way to nonvisual or multimodal spatial abilities and related technologies, but this line of work deals specifically with theories and technologies related to BVI people. The overarching theme is that the majority of challenges, differences, and problems cited in the literature regarding BVI spatial abilities are due to insufficient information access from nonvisual sensing or inadequate spatial problem-solving abilities, rather than vision loss per se.

IV. Multimodal Information Access Technology:

Much of my recent research has dealt with the design, development, and usability evaluation of multimodal information access technology (MIAT) to support spatial perception, environmental awareness, and wayfinding behavior without vision (solutions for blind people), with reduced vision (solutions for visually impaired or older people), or with distracted vision (solutions for sighted people who are operating eyes-free or are situationally blind to their environment, for instance, texting while walking).

V. Spatial Aging and Navigation:

This research investigates how navigation and other spatial behaviors change across the lifespan. Results are used to develop new spatial gerontechnologies that mitigate the problems identified. This work is timely, as our population is rapidly aging and normal declines in spatial abilities can have detrimental effects on the independence, wellbeing, and quality of life of older adults.

VI. Multimodal Information Visualization (MIV):

Humans often have trouble imagining complex data, scenes, or environments. This challenge is exacerbated by traditional information visualization tools, which are static, 2D, and based purely on visual information. This line of research investigates the design of new spatial visualization techniques and the development of improved multimodal interfaces for commercial interests. Our MIV approach is based on cutting-edge virtual and augmented reality technologies and multimodal interfaces employing audio, touch, vision, or combinations thereof to render information in an intuitive, meaningful, and accessible manner.

Complete List of Published Work:

E-pubs at: https://umaine.edu/vemi/publications/

Google Scholar: https://scholar.google.com/citations?user=jD95I7EAAAAJ