Natural-Language Scene Descriptions for Accessible Non-Visual Museum Exhibit Exploration and Engagement
Indoor navigation and exploration of museum environments present unique challenges for visitors who are blind or have significant vision impairments (BVI). Like other indoor spaces, museums are dynamic environments that require both guided and self-guided tour experiences to support BVI visitor independence. To fully engage with a museum and its exhibits, BVI visitors need assistive technologies that support natural-language (NL) spatial descriptions, providing flexibility in how users receive descriptive information about gallery scenes and exhibit objects. In addition, the user interface must be connected to a robust database of spatial information that can interact with mobile device tracking data and user queries. This paper describes the results of an early-stage demonstration project that utilizes an existing graph database model to support an NL information access and art gallery exploration system. Specifically, we investigated using a commercially available voice assistant interface to support NL descriptions of a gallery space and the art objects within it. Future work involves refining the language structures for scene and object descriptions, integrating the voice assistant interface with tracking and navigation technologies, and conducting additional user testing with sighted and BVI museum visitors.
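To make the graph-database idea concrete, the following is a minimal sketch (not the authors' implementation; all node names, relations, and the `describe_scene` function are hypothetical) of how a gallery scene graph of nodes and labeled spatial edges can be rendered into an NL scene description:

```python
# Hypothetical sketch: a gallery scene graph stored as attributed nodes
# and labeled spatial edges, with a function that renders a
# natural-language description of the objects located in a room.

nodes = {
    "gallery_1": {"type": "room", "name": "the main gallery"},
    "painting_a": {"type": "artwork", "name": "an oil painting"},
    "sculpture_b": {"type": "artwork", "name": "a bronze sculpture"},
}

# Edges are (source, spatial relation, target) triples.
edges = [
    ("painting_a", "on the north wall of", "gallery_1"),
    ("sculpture_b", "near the center of", "gallery_1"),
]

def describe_scene(room_id):
    """Return an NL summary of the objects located in a given room."""
    clauses = [
        f"{nodes[src]['name']} {rel} {nodes[dst]['name']}"
        for src, rel, dst in edges
        if dst == room_id
    ]
    if not clauses:
        return f"There is nothing catalogued in {nodes[room_id]['name']}."
    return ("You are in " + nodes[room_id]["name"] + ". There is "
            + "; and ".join(clauses) + ".")

print(describe_scene("gallery_1"))
```

In a full system of the kind described here, the room identifier would come from mobile device tracking, and the rendered sentence would be passed to the voice assistant interface for speech output.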
Citation: Doore, S.A., Sarrazin, A.C., & Giudice, N.A. (2019). Natural-Language Scene Descriptions for Accessible Non-Visual Museum Exhibit Exploration and Engagement. In Stock, K., Jones, C., & Tenbrink, T. (Eds.), Proceedings of Workshops and Posters at the 14th International Conference on Spatial Information Theory (COSIT 2019) (pp. 91-100). Regensburg, Germany: Springer International Publishing.