The development and validation of a new smartphone based non-visual spatial interface for learning indoor layouts
Publication Name: Unpublished Master's Thesis
Citation: Raja, M.K. (2011). The development and validation of a new smartphone based non-visual spatial interface for learning indoor layouts. Unpublished master's thesis, University of Maine, December 2011. Thesis advisor: N.A. Giudice.
Thesis Advisor: Dr. Nicholas A. Giudice
An Abstract of the Thesis Presented in Partial Fulfillment of the Requirements for the Degree of Master of Science (in Spatial Information Science and Engineering), December 2011
Maps are an important source of spatial knowledge, but most maps are visual in nature and are not accessible to visually impaired users. Although there is significant research on accessible maps based on audio and tactile cues, these non-visual maps have several shortcomings. Most are non-refreshable displays that require substantial time, cost, and effort to create and update. The refreshable displays that do exist are so expensive that they are out of reach for the majority of visually impaired users, especially those living in low-income countries (which account for the largest share of the visually impaired population). Most of these displays are also bulky and cumbersome to carry, often requiring a fixed setup and a desktop computer. To overcome these shortcomings, this thesis designed and tested a novel non-visual interface, called the "Vibro-Audio Map," which conveys indoor spatial layouts using vibro-tactile and audio cues. The interface is implemented on highly portable, comparatively inexpensive, off-the-shelf smartphones. The thesis describes the conceptualization and development of this new interface and presents the results of an initial usability experiment showing that the Vibro-Audio Map is equivalent in spatial path-learning performance to a hardcopy tactile map conveying the same information.

Map panning is inevitable, since most maps are larger than the available device screen. Non-visual map panning presents a unique set of challenges, both cognitive and at the interface level. To facilitate non-visual exploration of large maps, an easy-to-use panning method, called the "Button-based-Pan" method, was developed as part of this thesis. Although easy and intuitive, this panning method still presents the map in a manner that places high cognitive demands on a non-visual user.
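The two interactions described above (vibro-audio feedback when the touch point lands on a map element, and button-based panning of a window over a larger map) can be sketched as follows. This is an illustrative sketch only, not the thesis implementation; the class, method names, and parameters are hypothetical, and real touch handling and vibration output would go through the smartphone platform's APIs.

```python
# Illustrative sketch (hypothetical names/parameters, not the thesis code):
# (1) vibro-audio feedback when the touch point lies on a map element,
# (2) button-based panning that shifts a viewport window over a larger map.

class VibroAudioMap:
    def __init__(self, map_width, map_height, screen_width, screen_height):
        self.map_w, self.map_h = map_width, map_height
        self.screen_w, self.screen_h = screen_width, screen_height
        self.offset_x = 0  # top-left corner of the visible window, in map coords
        self.offset_y = 0
        self.elements = []  # list of (x, y, width, height, label) rectangles

    def pan(self, dx, dy):
        """Button-based pan: shift the window by one step, clamped to map bounds."""
        self.offset_x = max(0, min(self.map_w - self.screen_w, self.offset_x + dx))
        self.offset_y = max(0, min(self.map_h - self.screen_h, self.offset_y + dy))

    def feedback_at(self, touch_x, touch_y):
        """Translate a screen touch into map coordinates and pick the cue to render."""
        mx, my = touch_x + self.offset_x, touch_y + self.offset_y
        for (x, y, w, h, label) in self.elements:
            if x <= mx < x + w and y <= my < y + h:
                return ("vibrate", label)  # vibration plus a spoken label
        return ("silence", None)
```

The key design point the sketch captures is that panning only changes the window offset; the touch-to-cue mapping is unchanged, which is what lets the same finger-scanning strategy work anywhere on a large map.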
One solution to this problem, investigated in this thesis, is to reduce the amount of panning required by extending the device's effective screen space. This requirement led to the "Extended-Display" concept, in which the smartphone becomes an information window (or information lens) onto a virtual map projected on a flat table surface. The Extended-Display increases the effective accessible search space for exploring large maps. It is a generic concept that can serve many purposes and scenarios, including large visual-map exploration. This thesis presents a functional proof-of-concept of an Extended-Display based on an optical target-identification system using a Kinect camera, and demonstrates its efficacy for displaying Vibro-Audio Maps. The thesis then presents experimental results comparing indoor layout-learning performance across three interface conditions: (1) Vibro-Audio Map with pan mode, (2) Vibro-Audio Map with Extended-Display mode, and (3) hardcopy tactile map with audio information. The results provide clear evidence that the Vibro-Audio Map is equivalent in spatial learning performance to the traditional hardcopy tactile map, which is currently the most accepted mode of non-visual map learning. Finally, the thesis suggests future research directions for the Vibro-Audio Map and Extended-Display technology.
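The core of the Extended-Display idea is a coordinate mapping: the overhead camera (a Kinect in the proof-of-concept) tracks where the phone sits on the table, and that position determines which window of the virtual map the phone renders. A minimal sketch of that mapping is below; the function name, units, and parameters are assumptions for illustration, and the optical tracking itself is abstracted away.

```python
# Illustrative sketch of the Extended-Display window selection (hypothetical
# names and units, not the thesis code). An overhead camera tracks the phone's
# position on the table; this function converts that position into the pixel
# offset of the virtual-map window the phone should display.

def window_from_phone_position(phone_x_m, phone_y_m,
                               table_w_m, table_h_m,
                               map_w_px, map_h_px,
                               screen_w_px, screen_h_px):
    """Return the (x, y) map-pixel offset of the window centred under the phone."""
    # Scale the tracked table position into virtual-map pixel coordinates.
    cx = (phone_x_m / table_w_m) * map_w_px
    cy = (phone_y_m / table_h_m) * map_h_px
    # Centre the screen-sized window on that point, clamped to the map bounds.
    ox = max(0.0, min(map_w_px - screen_w_px, cx - screen_w_px / 2))
    oy = max(0.0, min(map_h_px - screen_h_px, cy - screen_h_px / 2))
    return int(ox), int(oy)
```

Because sliding the phone across the table continuously updates this offset, the user explores the whole virtual map by physically moving the device, with no explicit panning commands at all.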