Using Augmented Reality (AR) to Create Immersive and Accessible Museums for People with Vision-Impairments

Nihanth Cherukuru, National Center for Atmospheric Research, USA, Rayvn Manuel, SI/National Museum of African American History and Culture, USA, AJ Lauer, NCAR, USA, Tim Scheitlin, National Center for Atmospheric Research, United States, Bhoomika Bhagchandani, Independent Researcher, USA

Abstract

Augmented Reality (AR) technology has been predominantly used to develop applications that visually augment physical spaces. However, the enabling technologies of AR, such as image detection and Simultaneous Localization and Mapping, can also be used to augment spaces through other sensory modalities such as audio and haptics. This paper focuses on an application that uses this approach to serve low-vision and blind visitors by allowing them to autonomously and safely explore exhibitions and galleries. The application enables museum administrators to add auditory virtual tags to real-world objects/images that are triggered based on the proximity of the marked object to the visitor's mobile device running the application. This implementation has the benefit of providing context-aware information that is delivered autonomously and with minimal infrastructure modification, thus addressing the limitations of AI- and sighted-agent-based solutions. The motivation for the project, an overview of the underlying technology, and the implementation details of a prototype, followed by the limitations and challenges of this approach, are presented.

Keywords: Augmented Reality, Accessibility, Blind, Vision Impaired, Smartphones, Museum Exploration, Wayfinding, Navigation, Localization

Introduction

Museums are one of the primary venues for learning outside traditional classrooms and play a critical role in building an informed society. Although it is imperative to make their content accessible to everyone, the ocularcentric designs (designs which privilege vision over other senses) of most museums and galleries exclude people with vision disabilities (Candlin 2003; Lisney et al. 2013). Artifacts that are traditionally displayed behind enclosures pose an additional challenge to visitors with vision disabilities. Lack of information accessibility, and the impossibility of appreciating something that is not accessible (Lisney et al. 2013), was often quoted by people with vision disabilities as one of the barriers that stopped them from visiting museums (Asakawa et al. 2018; Candlin 2003; Ginley 2013). Further, Candlin (2003) argues that attempts to address this issue with occasional educational events and tokenistic drop-in provisions do not qualify as making museums accessible, since they do not address the pervasive issue of ocularcentric museum design. A survey conducted by Asakawa et al. (2018) found that the main accessibility issues faced by people with vision impairments (PVI) fall into two categories: (i) mobility issues, or difficulties with moving around independently, and (ii) inaccessible content, or the lack of non-visual alternatives to museum artifacts. Study participants highlighted their desire to have an independent museum experience through access to tools that provide navigation assistance and detailed contextual audio narration of the exhibits.

The rise of smartphones over the last decade has helped us inch closer to developing tools that could provide detailed navigation assistance through enabling technologies such as WiFi, Bluetooth Low Energy (BLE) beacons, and Ultra Wide Band (UWB) beacons (Zafari et al. 2019). However, there is still a need for a technology that is reliable, affordable, and accurate.

This article covers an Augmented Reality (AR)-based application that serves low-vision and blind visitors by enabling them to autonomously and safely explore exhibitions and galleries. AR is an environment in which computer-generated content is anchored to the physical objects surrounding the user and presented as an overlay through heads-up displays or smartphone camera views. AR, as a technology, has been predominantly used to visually augment physical spaces (Figure 1). However, using the same underlying enabling technologies of AR (such as image detection and tracking, and Simultaneous Localization and Mapping), physical spaces can also be augmented through other sensory modalities such as audio and haptics. The application proposed in this paper enables museum administrators to add auditory virtual tags to real-world objects/images (artifacts) that are triggered based on the proximity of the marked object to the visitor's mobile device running the application. This implementation has the benefit of providing context-aware information that is delivered autonomously and with minimal infrastructure modification, thus addressing the aforementioned limitations of existing solutions. Moreover, the recent advent of native high-level application programming interfaces for smartphone AR applications enables developers to readily access these technologies.

[Image description: Two photographs of applications running visual AR. On the left, a person holds an iPad whose screen shows a globe overlaid on a box. On the right, an AR model of a hurricane is placed on a table.]
Figure 1: Examples of applications implementing visual AR. a) Image detection and tracking-based AR; b) Visual Inertial Odometry (VIO)-based AR.

Related Work

The work presented in this article builds on prior research intersecting disability studies and assistive technologies pertaining to wayfinding. Darken and Sibert (1996) classify wayfinding tasks into three categories: (i) naive search, a search for a specific target whose location is unknown, with no prior knowledge of the environment; (ii) primed search (navigation), a search for a specific target whose location is known; and (iii) exploration, a search without any specific target. Their study suggests that these activities operate with different sets of requirements, highlighting the importance of recognizing these distinctions when studying or developing tools to support wayfinding. Given the current state of technology and the contextual complexity involved in automating naive search tasks, only navigation and exploration tasks are considered in this article.

Preferences and requirements of PVIs for wayfinding

Numerous studies, surveys, and case studies involving PVIs have highlighted the necessity of tools and techniques to facilitate independent indoor wayfinding (Asakawa, et al., 2018; Candlin, 2003; Ginley, 2013; Guerreiro, et al., 2019; Lisney, et al., 2013; Quinones, et al., 2011). Further, in the survey conducted by Quinones, et al. (2011), participants shared that they rarely engage in independent exploration due to the lack of tools and the unique challenges such an activity presents. Other barriers to independent wayfinding quoted by PVIs include over-reliance on friends and family for information due to the lack of non-visual alternatives, the inconsistent quality of audio descriptions provided by sighted guides (Asakawa, et al., 2018), and the insufficient accuracy of existing navigation systems in indoor settings (Quinones, et al., 2011).

One of the earliest studies to examine wayfinding without sight was conducted by Passini and Proulx (1988). In this study, the cognitive process of wayfinding was compared between a group of congenitally totally blind participants and a control group of sighted participants. The study revealed that participants who were blind tended to prepare their journey in more detail and preferred to have a detailed plan before the start of the journey. They also relied on significantly more units of information, and on substantially different types of information, to complete the task. For instance, the study participants relied more on contextual and environmental variables such as auditory, olfactory, and textural information, whereas the control group relied more on building features (architecture and layout) and interior features (layout of furniture and signage) to navigate. Further, many mundane physical features that the study group used as waypoints, such as ashtrays and door frames, were completely overlooked by the control group, pointing to the importance of co-developing any assistive technology with its end users. The study also found that people who are blind understand the geometric characteristics of a space and were able to build a spatial representation of their task to an extent comparable to that of a sighted person.

A more recent study (Quinones, et al., 2011) found that PVIs considered unfamiliar routes more challenging due to the lack of a mental map. A potential solution that was offered was to complement turn-by-turn directions with a tactile map containing information about the surroundings. This need to gain greater awareness of one's current location, and the potential value of knowing the surroundings through multiple modalities, is a recurring finding highlighted by other studies as well (Guerreiro, et al., 2019; Hutchinson & Eardley, 2020; Miao, et al., 2011).

Audio description is one of the most prevalent modalities used in smartphone-based assistive technologies for the blind. Guidelines and requirements for the audio descriptions are covered in great detail by Hutchinson and Eardley (2020) for content narration and Miao, et al. (2011) for navigation.

Enabling Technologies

The advent of smartphones made it possible for people to carry a single device that could serve multiple purposes, meeting a critical requirement brought up in the needs-finding study conducted by Quinones, et al. (2011). The process of navigation consists of two components: (i) localization, the process of determining the location of the device (and consequently the user) in the real world, and (ii) path finding, the process of finding the optimal path from the device location to a target. Path finding algorithms have been around for a long time, and most current research focuses on improving the accuracy of localization. Zafari, et al. (2019) provide a comprehensive review of the current state of localization systems. The most prevalent smartphone-based navigation applications are built using one or a combination of the following technologies (Figure 2, Table 1):

  1. GPS (low accuracy): The oldest and most popular of all the systems, but of limited use for indoor localization due to poor indoor signal quality and an accuracy of ~10 m.
  2. WiFi (moderate accuracy): The most studied of all localization systems (Zafari, et al., 2019), it uses the received signal strength (RSSI) of the WiFi signal from a nearby access point to determine the device location relative to that access point.
  3. Bluetooth Low Energy (BLE) beacons (high accuracy): These systems rely on the RSSI obtained from BLE beacons and have an accuracy of ~3-5 m (see the distance-from-RSSI sketch after this list).
  4. Ultra Wide Band (UWB) beacons (very high accuracy): A relatively new technology that uses ultra-wide-band radio pulses and determines the location of the device by measuring the distance and angle of the signal received from the beacon.
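
To make the accuracy limits of the RSSI-based systems in items 2 and 3 concrete, the sketch below applies the standard log-distance path-loss model commonly used to convert a received signal strength into an approximate distance. The function name, calibration value, and path-loss exponents are illustrative assumptions, not part of the application described in this paper; the path-loss exponent in particular varies strongly with the indoor environment, which is one reason such estimates are only accurate to a few meters.

```swift
import Foundation

/// Rough distance estimate (in meters) from a received signal strength, using the
/// log-distance path-loss model:
///     RSSI = txPower - 10 * n * log10(d)   =>   d = 10^((txPower - RSSI) / (10 * n))
/// - txPower: expected RSSI at 1 m from the beacon (assumed calibration value)
/// - pathLossExponent: environment-dependent constant, typically 2...4 indoors (assumed)
func estimateDistance(rssi: Double,
                      txPower: Double = -59.0,
                      pathLossExponent: Double = 2.5) -> Double {
    return pow(10.0, (txPower - rssi) / (10.0 * pathLossExponent))
}

// The same reading maps to very different distances depending on the assumed
// path-loss exponent, illustrating why WiFi/BLE localization remains coarse.
let reading = -75.0
print(estimateDistance(rssi: reading, pathLossExponent: 2.0))  // ~6.3 m
print(estimateDistance(rssi: reading, pathLossExponent: 3.0))  // ~3.4 m
```
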
[Image description: Figure showing the accuracy of different wayfinding technologies as concentric circles, in increasing order of accuracy: GPS, WiFi, BLE, UWB, AR.]
Figure 2: Comparing the accuracies of the current state-of-the-art smartphone-based wayfinding technologies.

High-fidelity localization and pose tracking (roll, pitch, and yaw angles to determine view direction) are essential requirements for any turn-by-turn navigation system, and the accuracy of the current systems falls short of this requirement. Although higher accuracy can be obtained with UWB beacons, this approach has its limitations, such as the technology not being available in many smartphones and requiring external installation of beacons in the facility. Moreover, all these systems lack pose tracking (determining view direction) and would need to rely solely on the inertial measurement unit (IMU) for this purpose. An IMU is an electronic component used to estimate the device's orientation. Pose tracking based on IMUs alone is unreliable since they are known to be sensitive to nearby metallic objects and suffer from measurement drift. This can be problematic in assistive technologies, where accuracy and safety are of paramount importance. For instance, Guerreiro, et al. (2019) tested a navigation system based on BLE beacons at an airport and found that some blind participants were about to enter an escalator going in the wrong direction. The problem was due to the airport layout, in which the escalators going in opposite directions were situated right next to one another, falling within the accuracy limits of the navigation system.

Technology | Accuracy | Cost | Power | Other limitations
GPS | Low | Low | Low | Not suitable for indoor navigation due to poor signal characteristics indoors
WiFi RSSI | Moderate | Low | Low | Accuracy is not sufficient for turn-by-turn navigation
BLE beacons | Moderate – High | Moderate | Moderate | Requires the addition of beacons in the facility; accuracy falls short for turn-by-turn navigation
UWB beacons | High – Very High | Moderate – High | Moderate | A relatively new addition that is not yet available in all smartphones

Table 1: Comparing the characteristics of the current state-of-the-art smartphone-based wayfinding technologies.

Computer vision-based systems such as the ones used in AR can address many of these concerns while providing accurate localization (~0.1 m). Computer vision algorithms detect unique patterns in the surroundings using the live camera feed, and by tracking these features in subsequent video frames, the device can estimate its relative movement and pose. Advances in the computing power of smartphones have made AR capabilities widely available across devices, and all the major device manufacturers now provide native high-level application programming interfaces for smartphone AR applications, enabling developers to readily access these technologies. AR has predominantly been used to visually augment physical spaces with virtual objects that remember their placement in the real world (Figure 1; Cherukuru & Calhoun, 2016). The approach to assistive technology taken here replaces these virtual objects with non-visual alternatives, such as an audio description that is triggered as the device gets closer to the object.
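
As a minimal sketch of this proximity-triggering idea (not the project's actual code), the following Swift snippet checks the distance between the device and a previously placed ARKit anchor on every frame and plays an audio description once the user comes within a threshold. The anchor name "tigerAudioTag", the 1.5 m trigger radius, and the audio asset name are assumptions made for illustration.

```swift
import ARKit
import AVFoundation

final class AudioTagTrigger: NSObject, ARSessionDelegate {
    private let triggerRadius: Float = 1.5          // meters (assumed threshold)
    private var player: AVAudioPlayer?
    private var played = Set<UUID>()                // avoid replaying the same tag

    // Called by ARKit for every frame while the session runs.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let cameraColumn = frame.camera.transform.columns.3
        let devicePosition = SIMD3<Float>(cameraColumn.x, cameraColumn.y, cameraColumn.z)

        for anchor in frame.anchors where anchor.name == "tigerAudioTag" {  // assumed tag name
            let anchorColumn = anchor.transform.columns.3
            let anchorPosition = SIMD3<Float>(anchorColumn.x, anchorColumn.y, anchorColumn.z)
            if simd_distance(devicePosition, anchorPosition) < triggerRadius,
               !played.contains(anchor.identifier) {
                played.insert(anchor.identifier)
                playDescription(named: "tiger_description")                // assumed audio asset
            }
        }
    }

    private func playDescription(named name: String) {
        guard let url = Bundle.main.url(forResource: name, withExtension: "mp3") else { return }
        player = try? AVAudioPlayer(contentsOf: url)
        player?.play()
    }
}
```
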

One category of assistive technologies not included above is virtual assistance services. Platforms like Aira (https://aira.io) have made significant efforts to address navigation and visual assistance through a remote video-call service with sighted agents. However, this service requires the visitor to depend on a sighted individual for assistance, and because it relies on human agents it is a relatively expensive subscription, pointing to the need for more autonomous solutions. Artificial Intelligence (AI)-based solutions such as Seeing AI (https://www.microsoft.com/en-us/ai/seeing-ai) provide such an alternative; however, the current reliability of AI falls short of addressing the complexities required to facilitate independent exploration in a curated space. This limits their application to identification tasks and activities that only require high-level scene description. Although AI-based object recognition can be implemented to identify and provide descriptive narration of specific pre-trained artifacts in an exhibit space, the effort required to train an AI algorithm makes it difficult to design a generic, scalable solution using this approach alone.

Rationale

To address the hardware and software limitations of the existing wayfinding technologies described in the previous section, an AR and image detection-based approach to wayfinding is proposed. This approach performs localization using inside-out tracking, in which the device itself determines its position by looking out at the surroundings (Gourlay & Held, 2017). The approach has some clear advantages: it requires minimal to no infrastructural changes to museums, it alleviates privacy concerns since the device need not share any data with the content providers, and it improves the accuracy of localization. High-end AR devices such as the HoloLens (a head-mounted display) have been successfully demonstrated to support aspects of visual cognition such as obstacle avoidance, scene understanding, formation and recall of spatial memories, and navigation (Grayson, et al., 2020; Liu, et al., 2018). The current work seeks to extend this functionality to smartphones, which are ubiquitous. The application uses built-in AR APIs, which lets it benefit from hardware and operating system upgrades with minimal modifications. Finally, almost all existing wayfinding applications focus on navigation, although exploration plays a major role in creative institutions like museums. The proposed application is being designed specifically for museums, giving us an opportunity to develop an open-source application capable of being used in any curatorial space.

Case Study

Description of the Core technology

The main working components of AR applications are the computer vision algorithms used by the device to localize and track its movement in the 3D world. Visual Inertial Odometry and image detection and tracking are the two main building blocks of the proposed application.

  • Visual Inertial Odometry (VIO). Visual Inertial Odometry (VIO) is a process that determines a device's movement in the real world by combining the live camera feed with inertial measurement unit (IMU) readings. Computer vision algorithms identify unique patterns and features in the surroundings and track them between frames to estimate the device's movement and orientation in space (Figure 3b). Adding the IMU to the calculation improves the efficiency of the algorithm and incorporates redundancy. The limitation of VIO is that although the device movement and pose can be determined with great accuracy and precision, the measurements are relative to the device's initial state. For instance, if the device moves a certain distance inside a building, VIO can determine the path fairly accurately, but there is no direct mechanism to identify where in the building the device is situated, making VIO alone unsuitable for localization against a floor map. This limitation can be addressed using image detection and tracking.
  • Image detection and tracking. Image detection and tracking is one of the earliest approaches used in AR applications. In this method, computer vision algorithms are used to identify and store unique features and patterns in images provided by the user. At runtime, the application searches for these images in the live camera feed. Once an image is detected, the camera pose can be determined relative to the location of the image. Unlike VIO, this approach determines the device location and pose relative to an external object (the image, in this case). Thus, if the real-world location of the image is known a priori on a floor map, the device can be localized (a minimal sketch of this step follows this list). The limitation of this approach is that the image has to be in the camera's view for it to be reliably tracked. However, the proposed application does not require image tracking and relies only on image detection.
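
The sketch below shows, under assumptions about names and map data, how ARKit's image detection could be wired up for this purpose: reference images are registered with the session, and when one is detected its known position on the floor map is looked up to localize the device. It follows the approach described in the bullets above but is not the project's actual code; the asset group name and map coordinates are illustrative.

```swift
import ARKit

final class ImageLocalizer: NSObject, ARSessionDelegate {
    // Known floor-map coordinates (in meters) of each catalogued image; assumed data.
    private let mapLocations: [String: SIMD2<Float>] = [
        "entrancePoster": SIMD2<Float>(2.0, 5.5),
        "tigerPainting":  SIMD2<Float>(12.3, 8.1)
    ]

    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        // Reference images stored in an asset catalog group named "ExhibitImages" (assumed).
        if let references = ARReferenceImage.referenceImages(inGroupNamed: "ExhibitImages",
                                                             bundle: nil) {
            configuration.detectionImages = references
        }
        session.delegate = self
        session.run(configuration)
    }

    // ARKit adds an ARImageAnchor when one of the reference images is detected.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors {
            guard let name = imageAnchor.referenceImage.name,
                  let mapPosition = mapLocations[name] else { continue }
            // The anchor's transform gives the image's pose in the AR session's coordinate
            // system; combined with the image's known map position, the session coordinates
            // can be aligned to the floor map for localization.
            print("Detected \(name) at map position \(mapPosition); transform: \(imageAnchor.transform)")
        }
    }
}
```
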

The two techniques complement one another, and a robust localization technique can be implemented by combining them (Figure 4). By creating a catalogue of images (such as artwork on display, wallpaper, signage in front of exhibits, etc.) whose locations are known on a floor map, the device can be localized inside a building, and VIO can be used to track the device when no catalogued image is directly in the camera view (Figure 3). In instances where the application is interrupted (which is very likely, given that smartphones have multiple other use cases), the use of multiple images helps the application relocalize from wherever it is resumed. Once the device is localized, external data can be used to provide location-specific audio descriptions and other services as the user explores the space. Navigation functionality can be built on top of this service by applying path finding algorithms to the map of the facility (a minimal path finding sketch follows).
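
To illustrate the path finding step mentioned above, here is a minimal breadth-first search over an occupancy-grid floor map. The grid representation and the choice of BFS are illustrative assumptions; a production system would more likely run A* or a similar weighted-graph search over a richer model of the facility.

```swift
// Breadth-first search over a walkable/blocked grid: returns the shortest sequence
// of cells from start to goal, or nil if the goal is unreachable.
func shortestPath(grid: [[Bool]],                       // true = walkable cell
                  start: (row: Int, col: Int),
                  goal: (row: Int, col: Int)) -> [(Int, Int)]? {
    let rows = grid.count
    let cols = grid[0].count
    var cameFrom = [String: (Int, Int)]()               // child cell -> parent cell
    var visited: Set<String> = ["\(start.row),\(start.col)"]
    var queue = [start]

    while !queue.isEmpty {
        let current = queue.removeFirst()
        if current == goal {
            // Reconstruct the path by walking the cameFrom chain backwards.
            var path = [(current.row, current.col)]
            var key = "\(current.row),\(current.col)"
            while let previous = cameFrom[key] {
                path.append(previous)
                key = "\(previous.0),\(previous.1)"
            }
            return Array(path.reversed())
        }
        for (dr, dc) in [(-1, 0), (1, 0), (0, -1), (0, 1)] {
            let next = (row: current.row + dr, col: current.col + dc)
            let key = "\(next.row),\(next.col)"
            guard next.row >= 0, next.row < rows,
                  next.col >= 0, next.col < cols,
                  grid[next.row][next.col], !visited.contains(key) else { continue }
            visited.insert(key)
            cameFrom[key] = (current.row, current.col)
            queue.append(next)
        }
    }
    return nil
}
```
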

[Image description: A three-panel image demonstrating VIO and wayfinding implemented with the current application. The panels show different views of a point cloud (yellow dots) displayed on top of the floor plan of an apartment, against a black background.]
Figure 3: The localization process performed in an indoor setting (an apartment). a) The colored line denotes the path taken by the device, with the point cloud data obtained from VIO overlaid on the floor map of the apartment. The squiggles in the path correspond to the actual path taken, demonstrating the high sensitivity of this technique to device movement; b) Perspective view of the point cloud data obtained from the VIO process overlaid on the floor plan. The point cloud data could also potentially be used to identify obstacles in the path, thereby allowing the system to be aware of changes in a dynamic environment; c) A close-up view of the objects/images detected (shown in gray) in the surroundings. The known locations of these images relative to the map can be used for initial localization and to correct for any errors in the tracking.

The VIO and image detection and tracking routines used in the proposed project are built on the native AR application programming interfaces (APIs), such as ARKit (https://developer.apple.com/augmented-reality/arkit/) and ARCore (https://developers.google.com/ar), that drove the recent popularity of smartphone AR applications. These APIs abstract the low-level computer vision algorithms, simplifying AR application development. Since the APIs are developed and maintained by the device manufacturers, they are continuously updated to take advantage of hardware and software optimizations, leaving developers to maintain only the application built on top of them.

[Image description: A flow chart giving an overview of the application. The AR APIs (image detection, image library, and VIO) are grouped in one box, which combines with the map data to perform localization.]
Figure 4: An overview of the core technology implemented in the usability study.

Usability Study

As a proof of concept, an AR application was built using the core technology introduced in the previous section. The application was developed using the Unity game engine (https://unity.com) and the ARKit API. The application has two modes of operation: admin mode and visitor mode. The admin mode was used to create a point cloud-based indoor map of an exhibit space, and audio descriptions were anchored at specific locations throughout the space. This data was uploaded to a server along with a trigger image (usually a photograph of an exhibit near the entrance). The trigger image is used to identify the data to be fetched and to provide initial localization for the visitor's device running the application in visitor mode. Once initialized, the previously placed audio descriptions were played back on the visitor's smartphone as they approached the corresponding waypoints, using VIO and the saved map for tracking.
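
The split between the two modes can be sketched with ARKit's world map API, which the prototype may or may not use in exactly this form; the file handling, function names, and server hand-off below are assumptions for illustration. In admin mode the current session map (feature points plus any named audio anchors) is serialized; in visitor mode it is loaded back as the session's initial world map so the saved anchors reappear once the device relocalizes.

```swift
import ARKit

// Admin mode: capture the mapped space (feature points plus named audio anchors)
// and archive it so it can be uploaded alongside the trigger image.
func saveExhibitMap(from session: ARSession, to fileURL: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let worldMap = worldMap else {
            print("World map not available yet: \(String(describing: error))")
            return
        }
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true) {
            try? data.write(to: fileURL)   // in practice, uploaded to the server
        }
    }
}

// Visitor mode: load the downloaded map and start a session with it, so the
// previously placed audio waypoints are restored once ARKit relocalizes.
func startVisitorSession(_ session: ARSession, mapFileURL: URL) {
    guard let data = try? Data(contentsOf: mapFileURL),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                 from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```
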

[Image description: A two-panel image. The larger photograph shows a young woman with a white cane walking in front of posters about climate, holding her phone out with one hand. The inset image shows a screen capture of the phone with the top portion in teal; the caption pointing to the teal section reads: Virtual waypoint to tiger audio description.]
Figure 5: a) Photograph of the co-researcher testing the application to explore an exhibit space at the National Center for Atmospheric Research's Mesa Lab Visitor Center. b) A screenshot of the smartphone running the pilot application, taken at the same instant as the photograph. The teal-colored block at the top of the image shows the location of the waypoint that triggers the audio description.

The use case was tested in the exhibit space at the National Center for Atmospheric Research's Mesa Lab Visitor Center in Boulder, Colorado, with a co-researcher who is blind. The co-researcher used her own smartphone running the application in visitor mode (Figure 5). The main motivation for this study was to test the usability of AR-based wayfinding for independent indoor navigation and to gather information that could guide further development.

The co-researcher shared several experiences with the app that concurred with the findings of previous studies regarding feelings of autonomy and self-confidence. She identified the real-time auditory access to the content of each poster along the wall while walking as an empowering experience. Overall, the observations made during the study reflected the promising potential of AR-based smartphone applications using audio tags for blind visitors in museum spaces. The usability test also highlighted a few limitations that could be addressed in future work.

Limitations

AR applications are built on computer vision algorithms that require heavy computation on data from multiple onboard sensors. Consequently, they are notorious for draining the batteries of mobile devices. While a major obstacle, this is a well-known issue, and given the ever-increasing number of applications implementing AR, device manufacturers have been addressing it through mobile architecture changes (e.g., larger batteries, AR-specific processors) and software improvements (e.g., AI-based image detection).

Unlike most AI-based applications, AR applications can function without sending any user data to an external server, keeping the flow of information one-way, from the content provider to the user, at all times. In fact, during testing, the app functioned normally even when the internet connection was dropped after initialization. While this addresses many privacy concerns for the user, the reliance on a camera does raise privacy concerns for bystanders. The successful adoption of this technology largely depends on the social acceptance of these devices by both PVIs and bystanders. Prior studies conducted on head-mounted displays (such as AR glasses) have found that bystanders in general had a higher level of social acceptance for assistive technologies compared to other use cases, provided the intent is passively communicated through signage or other means (Ahmed, et al., 2018). Moreover, most of the concerns that were raised pertained to misrepresentations in the physical descriptions of people derived from fallible artificial intelligence (Akter, et al., 2020), which is not relevant to the current work.

Cameras are one of the primary sensors used in AR applications and require users to keep the camera view unobstructed. For visual AR applications this is usually not a concern, since the user relies on the phone screen for visual camera feedback, invariably holding the phone out in their hands. Almost all PVIs use a white cane for mobility, often with their dominant hand. The co-researcher (a white cane user) in our study observed that this made using the smartphone inconvenient at times, since it limited her dexterity while performing tasks that require a free hand, such as navigating stairs. This issue would be more pronounced for guide dog users, who might have both hands occupied if they hold the service animal's harness in one hand and a white cane in the other. Head-mounted displays such as AR glasses could provide a hands-free option to address this issue. Given the absence of affordable, consumer-oriented AR glasses at the time of writing, a potential workaround would be to use a lanyard or a chest harness for the phone.

Like human vision, computer vision-based wayfinding approaches are susceptible to poor lighting conditions. Further, since the algorithms visually track patterns in the environment, the fidelity of the tracking system is greatly reduced on smooth, non-textured surfaces and glossy surfaces that reflect light. Recent advancements in smartphone hardware include the addition of lidar (LIght Detection And Ranging) sensors, which make AR usable in low-light conditions while reducing the device's power consumption. Lidars use infrared pulses to measure the distance of various objects from the phone, thereby capturing an almost instantaneous 3D image of the camera view. While this feature is available only on high-end devices at the time of writing, it is likely to become more widespread as AR gains mainstream adoption. Nonetheless, neither of the devices (admin and visitor) used in the usability study had a lidar sensor, yet both functioned without any major performance issues.
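
As a small illustration of the capability discussed above, on devices that do have a lidar sensor an ARKit session can opt into per-pixel depth data roughly as sketched below. This is a hedged example, not a feature of the prototype tested in this study (whose devices lacked lidar).

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()

// Opt into lidar-derived scene depth only when the hardware supports it (iOS 14+).
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    configuration.frameSemantics.insert(.sceneDepth)
}

// Later, inside an ARSessionDelegate's session(_:didUpdate:) callback, each frame
// exposes a depth map that could help with tracking in low light or with obstacle
// detection:
//
//     if let depthMap = frame.sceneDepth?.depthMap {
//         // depthMap is a CVPixelBuffer of distances (in meters) from the camera.
//     }
```
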

Smartphones are ubiquitous, and consequently it makes sense for this assistive technology to be implemented through a bring-your-own-device model, with the content providers delivering the functionality to the user's personal device. However, at the time of writing, state-of-the-art AR functionality is available only on high-end devices, which may raise the issue of affordability. A potential workaround would be for facilities to keep a few loaner devices for use inside the facility.

The proof-of-concept in this study primarily relied on audio description and haptics (phone vibration) to provide guidance to the user. However, prior studies have shown that audio description alone might be inadequate, especially in information-dense settings such as museums (Hutchinson & Eardley, 2020). Thus, access to 3D-printed objects, tactile prints, and maps to complement the audio description could greatly enhance the visitor's experience.

Future Work

This article presents an AR-based wayfinding approach to make museums and exhibit spaces accessible to people who are blind or vision impaired. The work draws from prior research highlighting the need for a precise and accurate wayfinding system that addresses the shortcomings of current techniques while fostering independent indoor wayfinding for visitors who are blind. The prototype developed in this study was meant to test the feasibility of the core technology and to gather preliminary observations on usability. The functionality of the core technology (localization) demonstrated in the pilot project can easily be extended to facilitate navigation by integrating path finding algorithms with the facility map. Building on the observations from this study and on the core technology, a more comprehensive open-source system for exploration and navigation is being developed in collaboration between the Smithsonian's National Museum of African American History and Culture and the National Center for Atmospheric Research. The end goal of this project is to develop a modular system built on native AR APIs, providing developers with a wayfinding solution that could serve as a building block on which other functionalities can be explored in the realm of assistive technologies.

Acknowledgements

This material is based upon work supported by the National Center for Atmospheric Research, which is a major facility sponsored by the National Science Foundation under Cooperative Agreement 1852977.  The work presented in this article was partly funded by the UCAR U-Innovate award and the NCAR Education & Outreach Diversity, Equity, and Inclusion grant. 

References

Ahmed, T., Kapadia, A., Potluri, V., & Swaminathan, M. (2018). Up to a limit? privacy concerns of bystanders and their willingness to share additional information with visually impaired users of assistive technologies. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(3), 1-27. 

Akter, T., Ahmed, T., Kapadia, A., & Swaminathan, S. M. (2020, October). Privacy Considerations of the Visually Impaired with Camera-Based Assistive Technologies: Misrepresentation, Impropriety, and Fairness. In The 22nd International ACM SIGACCESS Conference on Computers and Accessibility (pp. 1-14).

Asakawa, S., Guerreiro, J., Ahmetovic, D., Kitani, K. M., & Asakawa, C. (2018, October). The present and future of museum accessibility for people with visual impairments. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (pp. 382-384).

Candlin, F. (2003). Blindness, art and exclusion in museums and galleries. International Journal of Art & Design Education, 22(1), 100-110.

Cherukuru, N. W., & Calhoun, R. (2016). Augmented reality-based Doppler lidar data visualization: Promises and challenges. In EPJ Web of Conferences (Vol. 119, p. 14006). EDP Sciences.

Darken, R. P., & Sibert, J. L. (1996, April). Wayfinding strategies and behaviors in large virtual worlds. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 142-149).

Ginley, B. (2013). Museums: A whole new world for visually impaired people. Disability Studies Quarterly, 33(3).

Gourlay, M. J., & Held, R. T. (2017). Head‐Mounted‐Display Tracking for Augmented and Virtual Reality. Information Display, 33(1), 6-10.

Grayson, M., Thieme, A., Marques, R., Massiceti, D., Cutrell, E., & Morrison, C. (2020, April). A dynamic AI system for extending the capabilities of blind people. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-4).

Guerreiro, J., Ahmetovic, D., Sato, D., Kitani, K., & Asakawa, C. (2019, May). Airport accessibility and navigation assistance for people with visual impairments. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-14).

Hutchinson, R. S., & Eardley, A. F. (2020). The Accessible Museum: Towards an Understanding of International Audio Description Practices in Museums. Journal of Visual Impairment & Blindness, 114(6), 475-487.

Lisney, E., Bowen, J. P., Hearn, K., & Zedda, M. (2013). Museums and technology: Being inclusive helps accessibility for all. Curator: The Museum Journal, 56(3), 353-361.

Liu, Y., Stiles, N. R., & Meister, M. (2018). Augmented reality powers a cognitive assistant for the blind. ELife, 7, e37841.

Miao, M., Spindler, M., & Weber, G. (2011, November). Requirements of indoor navigation system from blind users. In Symposium of the Austrian HCI and Usability Engineering Group (pp. 673-679). Springer, Berlin, Heidelberg.

Nevile, L., & McCathieNevile, C. (2002). The Virtual Ramp to the Equivalent Experience in the Virtual Museum: Accessibility to Museums on the Web.

Passini, R., & Proulx, G. (1988). Wayfinding without vision: An experiment with congenitally totally blind people. Environment and behavior, 20(2), 227-252.

Quinones, P. A., Greene, T., Yang, R., & Newman, M. (2011). Supporting visually impaired navigation: a needs-finding study. In CHI’11 Extended Abstracts on Human Factors in Computing Systems (pp. 1645-1650).

Sato, D., Oh, U., Naito, K., Takagi, H., Kitani, K., & Asakawa, C. (2017, October). Navcog3: An evaluation of a smartphone-based blind indoor navigation assistant with semantic features in a large-scale environment. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (pp. 270-279).

Zafari, F., Gkelias, A., & Leung, K. K. (2019). A survey of indoor localization systems and technologies. IEEE Communications Surveys & Tutorials, 21(3), 2568-2599.


Cite as:
Cherukuru, Nihanth, Manuel, Rayvn, Lauer, AJ, Scheitlin, Tim and Bhagchandani, Bhoomika. "Using Augmented Reality (AR) to Create Immersive and Accessible Museums for People with Vision-Impairments." MW21: MW 2021. Published January 29, 2021. Consulted .
https://mw21.museweb.net/paper/using-augmented-reality-ar-to-create-immersive-and-accessible-museums-for-people-with-vision-impairments/