Using GIS Platforms to Support Accessibility: the case of GIS UTAD

Hugo Fernandes

University of Trás-os-Montes and Alto Douro, Quinta de Prados, Apt.1013, 5001-801, Vila Real, Portugal

Telmo Adão

University of Trás-os-Montes and Alto Douro, Quinta de Prados, Apt.1013, 5001-801, Vila Real, Portugal

Nuno Conceição

University of Beira Interior, 6201-001 Covilhã, Portugal

Hugo Paredes

GECAD, Knowledge Engineering and Decision Support Research Group, Institute of Engineering of Porto, 4200-072 Porto, Portugal
University of Trás-os-Montes and Alto Douro, Quinta de Prados, Apt.1013, 5001-801, Vila Real, Portugal

Pedro Araújo

University of Beira Interior, 6201-001 Covilhã, Portugal

João Barroso

GECAD, Knowledge Engineering and Decision Support Research Group, Institute of Engineering of Porto, 4200-072 Porto, Portugal
University of Trás-os-Montes and Alto Douro, Quinta de Prados, Apt.1013, 5001-801, Vila Real, Portugal


In everyday life, people need to move, whether for business or leisure. Navigation requires spatial knowledge and the ability to make decisions based on geographic information. Recently, powerful tools have been developed that enhance the capabilities of geographical analysis and decision making. This work presents a platform to handle and provide geographic information, including accessibility-oriented features. This Geographic Information System (GIS) is part of a wider project, called SmartVision, whose aim is to create a system that allows blind users to navigate the University of Trás-os-Montes and Alto Douro (UTAD) campus. The GIS platform, together with the other modules of the SmartVision system prototype, provides information to the blind user, assisting navigation and giving alerts about nearby points-of-interest or obstacles. Besides the GIS platform, this paper also describes how the interface between user and prototype is implemented and how the geographic information is handled to assist navigation.

1. Introduction

Historically, human beings have always moved around, exploring new worlds. Maps, mapping technology and orientation instruments played a key role in this enormous task and were often the key to survival. This remains true today. In modern daily life people need to move, whether for business or leisure, sightseeing or attending a meeting. Often this is done in familiar environments, but in some cases we need to find our way in unfamiliar scenarios. Other situations with more demanding requirements, such as demographic studies, the geographic spread of diseases or other kinds of surveys, involve deep knowledge of specific locations. In both cases, the recording and availability of relevant data is extremely useful. As a new and emerging technology in the early 1970s, Geographic Information Systems (GIS) had a profound influence on the capabilities of geographic analysis. These systems marked a turning point in the reinforcement of geography as an explicitly spatial discipline (Longley et al. 2005). GIS are now widely accepted as powerful and integrated tools for storing, manipulating, visualizing, and analyzing spatial data. GIS software enables users to view spatial data in its proper spatial context, making its interpretation increasingly simple. Despite all these developments and the public acceptance of these technologies, people with disabilities still lack the real benefits that these systems could provide in accessibility-oriented applications. With current technology it is already possible to create systems that assist people with special needs to navigate, eliminating many of their mobility restrictions (Kitsas et al. 2006; Scooter et al. 2005). The availability of GIS via the Web is becoming a reality in many fields (Doyle et al. 1998). Thus, the earlier criticism of GIS as an elitist technology may no longer be valid in the same context (Pickles 1995).
GIS and the Web are ever-evolving technologies and hold great potential for public use, allowing wider involvement in environmental decision making. To build a successful Web GIS, it is necessary to treat the development as a process rather than a single step. The implementation should also respect the available technology and the application requirements (Alesheikh et al. 2002). In this paper we propose a Geographic Information System to assist the navigation of people with disabilities, namely blind users. First we describe the main project behind this initiative, the SmartVision project, and the Geographic Information System of the University of Trás-os-Montes and Alto Douro as the GIS platform. Then we cover the handling of accessibility information by the SmartVision prototype and its user interfaces. Finally, we make some considerations about future work and draw conclusions about the work done.

2. SmartVision Project

Currently, a system to assist the navigation of blind or visually impaired people is being developed at the University of Trás-os-Montes and Alto Douro (UTAD). This project is named SmartVision and its main objective is to develop a system that helps visually impaired people to navigate, providing ways to get to a desired location and, while doing so, giving information regarding obstacles and various points-of-interest (POI) like zebra-crossings, building entrances, etc. The system is built in a modular structure, as seen in Figure 1.

The figure shows the SmartVision prototype modular structure. It comprises three layers. The base layer is the SmartVision base module. The second layer contains five modules: the Interface Module, the GIS Module, the Navigation Module, the Location Module and the Computer Vision Module. The third layer contains the technologies used by the Location Module, namely: RFID, GPS and Wi-Fi.

Figure 1. SmartVision prototype modular structure

The SmartVision Module is responsible for managing and establishing communication between all available modules. This module also receives inputs from the user and decides what information the user should get from the system.

The Location Module is responsible for providing regular updates of the user's current geographic coordinates to the SmartVision Module. To provide this information in both indoor and outdoor environments, this module makes use of different technologies: Global Positioning System (GPS) for outdoor environments and Wi-Fi for indoor environments. Radio-Frequency Identification (RFID) and Computer Vision are common to both indoor and outdoor environments and work by detecting landmarks placed in the ground. Each location technology has a specific accuracy, and the Location Module always chooses the most accurate one available at each moment. In terms of hardware, the RFID reader is placed in the white cane and the camera is chest-mounted. The GPS antenna is connected via Bluetooth and the Wi-Fi antenna is a built-in component of the mobile computer.

The Navigation Module is responsible for route planning and for providing information about surrounding points-of-interest (POI). It connects to the SmartVision Module and requests two different data inputs: GIS data and location data. To get the GIS data, the SmartVision Module queries the GIS server for maps and POIs. The user location is fed from the Location Module. After analyzing the requested data, the Navigation Module feeds the SmartVision Module back with navigation instructions. The amount and accuracy of the GIS data stored in the GIS server is critical to giving the best instructions to the blind user.

The Computer Vision Module provides orientation instructions by detecting known landmarks in the ground and keeping the user within safe routes. The camera used was the Bumblebee2 stereo camera, from Point Grey. Being a stereo vision system, this camera provides disparity information together with the image frames. This information is used to calculate the distance between the user and detected landmarks. So, in addition to giving orientation instructions to the SmartVision Module, the Computer Vision Module is also able to feed the Location Module with location information.

Finally, the Interface Module, as the name indicates, is responsible for providing the user interface. To do this it uses three resources: two outputs and one input. The two outputs are text-to-speech software and vibration actuators. Since the sense of hearing is very important to blind users, the vibration actuators are used while navigating and the voice interface is used only when navigating the menus and giving POI information. The user provides inputs by using a small four-button device to scroll between the menus, confirm options and go back to the previous menus.

The user interacts directly with the SmartVision Module and all other modules are independent. This way, the user can get information even when some modules are not available or cannot provide information. For example, if GPS is not available or if the user is in an indoor environment, the Location Module can get information from the RFID tags, Wi-Fi or Computer Vision. In this paper we focus on the GIS platform, the Navigation Module and the Interface Module. These modules are explained in detail in the next sections.
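The Location Module's policy of always choosing the most accurate source available can be sketched as follows. This is a minimal illustration only: the accuracy figures and source names are assumptions, not measured values from the prototype.

```python
# Sketch of the Location Module's source-selection policy: among the
# technologies currently reporting a fix, pick the one with the best
# (smallest) accuracy radius. The values below are illustrative only.
ACCURACY_M = {
    "RFID": 0.5,            # tag detected directly under the cane tip
    "ComputerVision": 1.0,  # landmark distance from stereo disparity
    "WiFi": 5.0,            # indoor positioning
    "GPS": 10.0,            # outdoor only
}

def best_source(available):
    """Return the available source with the smallest accuracy radius."""
    candidates = [s for s in available if s in ACCURACY_M]
    if not candidates:
        return None
    return min(candidates, key=lambda s: ACCURACY_M[s])
```

For example, indoors with GPS unavailable, `best_source(["WiFi", "RFID"])` would select RFID as soon as a tag is detected.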

3. GIS Platform

In this chapter we explain how the GIS server was developed and how it interacts with the client applications. Given that Wi-Fi is only available in some specific spaces, all information needed for navigation must be stored in the SmartVision prototype. This way it is possible to access the information in any scenario, indoor or outdoor, without regularly querying the GIS server. The geographic information stored in the prototype is updated whenever an Internet connection is available, through the use of webservices. The information about the different elements is stored in digital map files, or shapefiles (ESRI 1998), and in a MySQL database (e.g., the number of available places in a car park). For the distribution of geographic information, the adopted architecture was client/server, three-tier or n-tier (Peng et al. 2006). In this model, the client application (the SmartVision prototype) must be able to handle geographic information and present it to the user. Figure 2 presents a summary of the client/server architecture.
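The update step — refreshing the locally stored layers only when a connection is available — can be sketched as a simple version comparison between the local copy and the server. The layer names and version numbers below are hypothetical, used only to illustrate the policy.

```python
def layers_to_update(local, remote):
    """Compare local and server-side layer versions.

    `local` and `remote` map layer names to integer version numbers.
    Returns the sorted names of layers whose server version is newer,
    plus any layers missing locally.
    """
    return sorted(name for name, version in remote.items()
                  if local.get(name, -1) < version)
```

When the prototype detects connectivity, it would fetch only the layers this function returns, keeping the rest of the local cache untouched.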

This figure shows the three-tier client/server architecture. The three layers, from top to bottom, are the Client Layer, the Application Layer and the Data Layer. The Client Layer contains the user application. The Application Layer is composed of the Web server and the GIS server. The Data Layer contains the MySQL database and the shapefiles.

Figure 2. Three-tier client/server architecture

In this type of structure, the SmartVision prototype receives requests from the user through the Interface Module and makes a request to the GIS. The request is acknowledged by the Web server, which forwards it to the GIS server. The GIS server interprets the request and retrieves the spatial data stored in the digital map files (shapefiles) and in the database to generate the data to be returned. The generated data is divided into two groups: the digital map files and an XML file containing the results of the query made to the database. The Web server accesses the generated data and returns it to the client via a webservice. To improve security, the GIS server and the database server are protected from external access by a firewall. A web application is also available for managing the geographic information stored in the server. A simplified scheme of the overall architecture, following the three-tier model, is presented in Figure 3.
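As an illustration of the client side of this exchange, the sketch below parses an XML result of the kind the GIS server might return for a car-park query. The element and attribute names are assumptions for illustration only, not the actual SmartVision schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML payload of the kind returned by the GIS webservice
# for a database query (element and attribute names are illustrative).
SAMPLE = """\
<results query="car_parks">
  <poi id="p1" name="Library car park" free_places="12"/>
  <poi id="p2" name="Engineering car park" free_places="0"/>
</results>"""

def free_places(xml_text):
    """Map each car park name to its number of free places."""
    root = ET.fromstring(xml_text)
    return {p.get("name"): int(p.get("free_places"))
            for p in root.findall("poi")}
```

The prototype would feed such parsed results to the Interface Module, e.g. to announce how many places remain in a nearby car park.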

The figure shows the overall architecture of the SmartVision system. In this figure the Internet cloud connects the user application to the Web and GIS servers. A firewall protects these servers. In turn, the Web server connects to the MySQL database and the GIS server connects to the shapefiles.

Figure 3. Overall architecture

Since the SmartVision prototype supports both indoor and outdoor navigation, the GIS server must store details of the campus and of building interiors, due to the abstraction created by the Location Module. Currently we have already mapped features helpful for outdoor navigation. The outdoor features already mapped are: roads, buildings, access roads, bus stops, car parking (including the number of parking places), road signs, crosswalks, green zones, sport facilities and Web access facilities. In the case of the UTAD campus (outside of buildings), the conclusion was that the geocoding should be done manually, using an aerial view. However, this can pose some problems in getting the coordinates of some specific points, due to photo resolution. We may use GPS to fix those points (Chuanjun et al. 2008).

4. Navigation Module

The Navigation Module handles the computation of the route that the blind user must follow from his original position to the chosen destination. In terms of functionality, every point-of-interest (POI) in the database is transmitted to the user through the Interface Module, so that he can choose the desired one. The operations that follow are implemented using a well-known routing algorithm, Dijkstra's Shortest Path First (SPF) algorithm. According to Ertl (Ertl 1998), this algorithm calculates, in a graph, the shortest path from a starting vertex to a destination vertex. We chose this algorithm because, in this kind of application, it offers a balanced solution between computational efficiency and implementation simplicity. The road layer in the map is naturally structured like a graph, where the points of beginning, intersection and end of each road correspond to the graph's vertices and the road segments between them correspond to the graph's edges, allowing the algorithm to be applied to this layer. Figure 4 shows the result of applying the algorithm to trace a route. Although the blind user does not take advantage of the graphical interface, it is used to support development and testing.
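A minimal version of the SPF computation on such a road graph can be sketched as follows. The graph used in the example is a toy adjacency list, not campus data.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph given as {node: [(neighbour, cost), ...]}.

    Returns (total_cost, path); (inf, []) if the goal is unreachable.
    """
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    visited = set()
    while queue:
        d, node = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:                      # reconstruct path back to start
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for neighbour, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(queue, (nd, neighbour))
    return float("inf"), []
```

In the prototype, the vertices would be road endpoints and intersections extracted from the road shapefile, with edge weights given by segment length.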

This figure presents the route calculation results, using the Dijkstra Shortest Path First Algorithm, on a map.

Figure 4. Results of the application of the Dijkstra SPF algorithm.

One must note that the Navigation Module is the part of the system responsible for ensuring that the blind user gets to his destination with the assistance of indications provided by the Interface Module. At the moment the Navigation Module is undergoing an improvement process in which, after the algorithm is applied, the route is subdivided into intermediate points, with a tolerance margin used to decide that the user has reached each point. Between two consecutive points, the algorithm tries to anticipate the movement of the user relative to the next point, correcting his trajectory with relevant instructions. In other words, if the path the user is taking starts to leave the tolerance radius of the next route point, the Navigation Module sends an alert to the SmartVision Module, which in turn alerts the Interface Module, with the intention of correcting the user's trajectory by the required number of degrees to the left or right. This operation is repeated until the user reaches the desired destination.
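The correction step described above can be sketched geometrically: compare the user's heading with the bearing to the next waypoint and, if the waypoint has not yet been reached within the tolerance radius, return the signed turn angle. The tolerance value and the flat local (x, y) frame are simplifying assumptions for illustration.

```python
import math

TOLERANCE_M = 2.0  # illustrative tolerance radius around each route point

def bearing(p, q):
    """Bearing in degrees from p to q on a local flat (x, y) frame,
    measured clockwise from north (the +y axis)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.degrees(math.atan2(dx, dy)) % 360

def correction(position, heading_deg, waypoint):
    """Return (reached, turn): turn is the signed correction in degrees,
    negative meaning 'turn left', positive 'turn right', 0 'keep going'."""
    if math.dist(position, waypoint) <= TOLERANCE_M:
        return True, 0.0
    # normalise the angular difference to (-180, 180]
    turn = (bearing(position, waypoint) - heading_deg + 180) % 360 - 180
    return False, turn
```

For instance, a user heading north with the next waypoint due east would receive a "turn right 90 degrees" instruction, which the Interface Module could render as voice or as a right-side vibration.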

5. User Interface

When developing a navigation interface for visually impaired users, one must consider that it has to be intuitive and easy to use, since it must compensate for the partial or total loss of sight (Holland et al. 2002; Kowalik et al. 2004). This kind of interface can be established through touch and/or hearing. In the SmartVision prototype, the interface between user and system is made using voice, vibration and a four-pushbutton device which is still in development. Each interface has a specific use. The voice module is used to guide the user with short messages, or longer messages when intentionally requested by the user. The vibration module is used, for example, when simple instructions, like "turn left" or "turn right", are needed. The pushbutton device works as an input interface between the blind user and the SmartVision prototype. For instance, the user will be able to ask for detailed information about the surrounding environment; if available, this information will be sent to him through the voice module.

The voice module in SmartVision was created as a Dynamic-Link Library (DLL) that uses the Microsoft speech synthesizer API. This DLL is loaded by the system's main module and provides the methods to select the voice, volume, pitch and output device. The Microsoft API was used due to the wide range of voices, in several languages, available in the market that integrate with this API. It is important to mention that these voices are made to sound as natural as possible, which helps the visually impaired person to adapt; it would be harder if the voice sounded like a computer voice. Another important point is that this API has been integrated in all Windows operating systems since Windows 98. To synthesize a text block, two different methods are available. In the first method, the text to be processed is sent directly to the module and synthesized, while in the second method an XML (eXtensible Markup Language) file is used. This file is structured so as to return the text to be processed, selected by language and identifier. Note that the language of the voice and that of the text must match. The main advantage of this method is that all information needed to inform and guide the blind user is previously structured and saved, and can be provided in several languages, but it requires a pre-compilation of all the text blocks. In spite of providing all the information necessary for blind guidance, the audio interface communicates only one way, from SmartVision to the blind user, not allowing information to be requested by the user. To solve this problem, the interface with four pushbuttons was developed (Figure 5). It enables the user to navigate in the system and to request information about buildings or zones. To connect the buttons to the computer where the other SmartVision modules are executed, we used a converter that emulates an RS232 port through a USB interface. This way, we can program the DLL using the simplicity of the RS232 protocol together with the availability of USB ports.
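The second synthesis method above, selecting a pre-compiled text block by language and identifier, can be illustrated as follows. The XML layout, element names and message identifiers are assumptions for illustration; the actual file structure of the prototype may differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical layout for the pre-compiled speech text blocks
# (element/attribute names and identifiers are illustrative only).
BLOCKS = """\
<texts>
  <text id="turn_left" lang="en">Turn left</text>
  <text id="turn_left" lang="pt">Vire \u00e0 esquerda</text>
  <text id="arrived" lang="en">You have arrived at your destination</text>
</texts>"""

def lookup(xml_text, ident, lang):
    """Return the text block matching the identifier and language, or None."""
    root = ET.fromstring(xml_text)
    for t in root.findall("text"):
        if t.get("id") == ident and t.get("lang") == lang:
            return t.text
    return None
```

The string returned by such a lookup would then be passed to the speech synthesizer, using a voice of the same language.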

This figure shows the pushbutton interface hardware connected to the computer via a USB connection.

Figure 5. Pushbuttons interface

Regarding vibration modules, there are some haptic devices in the market for various purposes, including aids for people with disabilities. However, concerning the spatial guidance of blind people, there is still a lot to be done. The vibration interface therefore emerged as an alternative, because it can guide the blind person without demanding any of his other senses. This kind of system uses one or more vibrators which are permanently attached to the person's body and use vibration to guide him in both open and closed spaces. The success of this kind of system depends on two very important aspects. The first is the location and quantity of the vibrators. They must be placed in a zone of the body where they do not interfere with the person's mobility, and there must be at least two vibrators. This way the system is able to move the person forwards and backwards, to the left and to the right, and make him move and stop. The vibrators are usually located in the upper part of the body or in the arms, but some studies suggest they should be located in the feet (Watanabe et al. 2010). The second aspect concerns the way the navigation is performed. It is necessary to define beforehand which signals are to be sent by the vibrators and what each signal means. The signals to be sent depend on the number of vibrators: the fewer vibrators we have at our disposal, the more complex the vibration encoding must be. This type of interface is very useful since it allows the blind person to receive help from an external source without compromising any of his other senses, increasing his perception of the surrounding world.
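The trade-off between the number of vibrators and the complexity of the encoding can be illustrated with a toy scheme for two vibrators, one on each side of the body: each instruction maps to a short pattern of pulses on one or both vibrators. The patterns below are assumptions for illustration, not the prototype's actual encoding.

```python
# Toy vibration encoding for two vibrators ("L", "R"): each instruction
# maps to a sequence of (vibrator, pulse duration in ms) pairs. With
# fewer vibrators, longer pulse patterns are needed per instruction.
PATTERNS = {
    "turn_left":  [("L", 300)],                  # single pulse on the left
    "turn_right": [("R", 300)],                  # single pulse on the right
    "go":         [("L", 100), ("R", 100)],      # short pulse on both sides
    "stop":       [("L", 500), ("R", 500)] * 2,  # long double pulse on both
}

def encode(instruction):
    """Return the pulse pattern for an instruction."""
    if instruction not in PATTERNS:
        raise ValueError(f"no pattern defined for {instruction!r}")
    return PATTERNS[instruction]
```

With a single vibrator, the same four instructions would have to be distinguished purely by pulse count and duration, which is exactly the extra encoding complexity the text refers to.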

6. Conclusions

One of the modules that has been proving to be very important in the task of giving information to blind users is the GIS Module. If the area that we want to cover is described in detail, it is possible to give better information to the user, and more easily. On one hand it is possible to give proximity information, like the vicinity of a street light post or a zebra-crossing. On the other hand it is also possible to give information regarding more general features, like the surrounding buildings and services. The GIS of the UTAD campus has been successfully developed and is now in a consolidation stage, with new information layers being loaded. The GIS Module already feeds the Navigation Module with information about the UTAD campus, making it possible to find routes and guide the blind user to the chosen destination. The Interface Module is divided into three parts: audio, vibration and the pushbutton device. The audio interface is now fully set up and working, while the vibration and pushbutton interfaces are still in the development and test stage. The tests made on this module are encouraging, although they show that the audio interface performs best when playing messages requested by the user. The SmartVision prototype is also composed of other modules, as seen in Section 2, and at the moment they are all being integrated. A set of tests of the assembled system with blind users will be performed in order to validate and improve the system. In the future, we envisage the development of a standalone prototype to assist the navigation of blind people in the UTAD campus.


This research was supported by the Portuguese Foundation for Science and Technology (FCT), through the project PTDC/EIA/73633/2006 - SmartVision: active vision for the blind.

7. References