Abstract
This paper introduces a monitoring and navigation system to help people with disabilities (blind, deaf, and on wheelchair).
1. Introduction
People with disabilities face more daily-life problems than people without disabilities. Technology has helped people with different types of disabilities improve their quality of life. Monitoring and navigation systems for people with disabilities are needed so that they can live better, more independent lives.
Monitoring and navigation systems are technologies that can locate disabled people and facilitate their roaming by extending the range of localization and mobility. This improves disabled people's lifestyle and independence and reduces the risks caused by wandering. Awareness of the surroundings can help address hazards such as unsuitable lighting and slippery wet floors that threaten their safety.
Current monitoring and navigation systems require a wide range of sensors and cameras and record large amounts of information for processing. These systems are built into assistive environments, yet they neither describe events in detail nor adapt to new configurations over wireless technology.
One crucial requirement for people with disabilities is safety, especially while navigating. Blind pedestrians are advised to avoid areas with low illumination, deaf pedestrians avoid crowded and noisy regions, and wheelchair users prefer elevators to stairs. Deaf people are included for completeness of the profiles, since we wanted all the disabled participants to be self-reliant and to benefit from the prototype. For safety reasons, disabled people often prefer longer but safer routes [1].
Wireless sensor networks (WSNs) are composed of low-power nodes that are connected wirelessly. Each sensor node has limited processing and power capabilities, but when the nodes coordinate, they can communicate, measure, and actuate at a much finer level of detail. Disabled people face serious challenges on a daily basis: staying oriented, recalling previously visited places, and keeping safe while travelling in unfamiliar areas. People without disabilities often use maps or written directions as navigation tools to remain oriented.
In this paper, we propose a monitoring and navigation system using wireless sensor networks (MNDWSN) to help people with disabilities. The proposed work describes an approach that provides distributed support for navigation and monitoring. The types of disabilities covered are blind, deaf, and on wheelchair. The architecture is based on wireless sensor networks combined with distributed smartphones.
The routing algorithm used in this paper is Dijkstra's shortest path, adapted to the prevailing severity conditions to ensure safety. A prototype was built and tested in an experimental field with real subjects. The proposed system locates people with disabilities, senses their surroundings, and routes them to the desired destination. Building the system proceeded from designing the prototype, through experiments with volunteer participants, to a performance evaluation on a three-floor university building [2].
The paper is organized as follows. Section 2 discusses the literature survey and the challenges the disabled face in obtaining the desired system. Section 3 presents the proposed system. The experiments conducted are described in Section 4 and the results in Section 5. Section 6 concludes and outlines future work.
2. Literature Review
Sevillano et al. [3] proposed a simple wheelchair system with few onboard sensors that interacts with an intelligent environment: external sensor information is fed into the wheelchair's smart navigation software. Stock et al. [4] described a study examining the impact of a GPS-based system called WayFinder, which runs on handheld devices and helps people with disabilities use public transportation systems.
Carmien et al. [5] proposed a system for delivering instant directions to passengers' handheld devices, using GPS wireless technology installed on the buses. The authors presented an architecture composed of a personal travel assistant that uses GPS data to deliver instant prompts; the system includes a mobile triggering client and collects task conditions from it to warn of potential problems.
Chen et al. [6] provided an active study of the received signal strength indicator (RSSI) for localization, aimed at facilitating disabled people's movement at home. The system was implemented with ZigBee modules, and experiments showed that it can find locations with a minimal error rate. Choi et al. [7] presented research on digital home and home-management systems in which disabled people remotely manage home appliances through a home-network open-service gateway that uses RSSI data.
Santos et al. [8] proposed B-live, an approach for automating home appliances for elderly people with special needs. Marco et al. [9] presented a service for people with special needs that uses ZigBee and ultrasound to meet the application requirements. Li et al. [10] proposed a WSN-based health-monitoring system, describing a comprehensive model for data collection and comparing it with theoretical models.
All the above systems are expensive and cumbersome. To the best of our knowledge, using a WSN with smartphones to select routes and monitor the location of people with three different disabilities has not been studied in the literature.
3. The Proposed System
As described by the human activity assistive technology (HAAT) model shown in Figure 1, an assistive solution has four components: (1) the human who performs the activity, (2) the activity itself, (3) the assistive technology that facilitates the activity, and (4) the context in which the first three components exist.

HAAT model.
The proposed monitoring and navigation system, MNDWSN, is influenced by the HAAT model to assist individuals with disabilities. MNDWSN uses a smartphone as the handheld device, and communication is performed over wireless sensor networks (WSNs). The WSN is composed of nodes that sense the environment to locate and navigate the disabled person, and MNDWSN includes a routing engine that navigates the person towards the chosen destination. The proposed assistive model is shown in Figure 2.

Proposed assistive model.
The smartphone shows the disabled person instant directions and instructions by displaying pictures or videos or by vibrating, with or without verbal aid depending on the person's profile. At each point of interest, a photo can be taken based on the configuration; these photos can be used for possible relocalization [11]. In the proposed system, a node is placed at each destination point on each floor, where a destination point is any location the disabled person could request to navigate to.
Our experiments are based on participants with various disabilities. Empirical studies have shown that blind people have navigation difficulties due to insufficient environmental information; when planning a route indoors, they require a quiet environment free of interrupting sounds. People on wheelchairs have physical problems accessing information, opening doors, and dealing with stairs.
We use the monitoring and navigation system to route disabled people, with routing algorithms selecting the best path. Routes based purely on the physical distances of the links are not always adopted, since safer routes are preferred over shorter ones. When the disabled person reaches a decision point, the smartphone picks up data such as the light level and sound amplitude from the sensor nodes. That information is used to help in navigation and can warn of stairs, a possible power shutdown, or maintenance in progress.
Because a disabled person moves through conditions that can change dynamically, a meeting node known to all other nodes is set up, as shown in Figure 3. The meeting node stores the data received from the other sensor nodes. Updates received by a smartphone are sent to a nearby node, which in turn routes them to the meeting node. Minimum-cost routes are selected using Dijkstra's algorithm. WSN sensors are placed at decision points, such as doorways, corners, the interior of a room, and the intersections of hallways. Routes chosen purely by the physical distance of links within the building structure are not always optimal with respect to safety, since safety depends on the type of disability. A numerical weight is therefore assigned to each link; for example, the weight from point A to point B combines the physical distance with disability-specific penalty factors.
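The weight itself is not printed in the text; one plausible form, consistent with the distance, obstacle, and vision factors the system uses, would be:

```latex
% Hypothetical link weight from A to B; alpha and beta are set per
% disability profile (this exact form is an assumption, not from the paper).
w_{AB} = d_{AB} + \alpha \, o_{AB} + \beta \, v_{AB}
```

where d_AB is the physical distance of the link, o_AB is an obstacle factor (stairs, doors) emphasized for wheelchair users, and v_AB is a vision factor (light level) emphasized for blind users.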

Meeting node.
Disabled people are mobile, either as pedestrians or on wheelchairs, so a special meeting node is set up that is well known to all other nodes and stores the data received from them. Smartphones communicate with the meeting node by sending messages to the closest node, which forwards them to the meeting node. The smartphone then obtains the minimal-cost path, which considers the physical distance together with the obstacle factor for wheelchair users and the vision factor for blind users, taking updated estimates of the edge weights as input.
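As a hedged illustration of this routing, the following Python sketch runs Dijkstra's algorithm over a small graph whose edge weights combine physical distance with disability-specific penalties. The node names, penalty values, and the `safety_weight` helper are hypothetical; the paper does not specify the exact weighting.

```python
import heapq

def safety_weight(distance, profile, link):
    # Hypothetical weighting: physical distance plus penalties drawn from
    # the link's sensed conditions, depending on the disability profile.
    w = distance
    if profile == "blind" and link.get("dim_light"):
        w += 10.0          # penalize poorly lit links for blind users
    if profile == "wheelchair" and link.get("stairs"):
        w += float("inf")  # stairs are impassable on a wheelchair
    return w

def dijkstra(graph, src, dst, profile):
    """Return (cost, path) of the minimum-cost route for a given profile."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist, link in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(
                    pq, (cost + safety_weight(dist, profile, link), nxt, path + [nxt]))
    return float("inf"), []

# Toy example: a short stairway route versus a longer elevator route.
graph = {
    "lobby":     [("stairs_up", 5.0, {"stairs": True}),
                  ("elevator",  8.0, {})],
    "stairs_up": [("room_201",  5.0, {"stairs": True})],
    "elevator":  [("room_201",  9.0, {})],
}
```

For a wheelchair profile the stair links get infinite weight, so the elevator route (cost 17.0) is chosen even though the stair route is physically shorter; a blind profile with no dim-light flags simply gets the shortest path.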
The routing mechanism is used to give directions to the right place, and the WSN depends on the received signal strength (RSS) to locate objects. With MNDWSN we found empirically that localization errors vary with environmental fluctuations such as closed doors, the density of people, and the placement of obstacles in the building.
Disabled people may miss navigation information or misinterpret it, so a realistic system must handle such errors when they occur. Given the computed route, disorientation can be detected instantly: if the person is on the right track, the sensed node should match a node on the path towards the destination, while an unexpected signal indicates a deviation. The correct route is then recalculated on the person's smartphone, with the current position as the starting point of the recalculation. In addition, when a disabled person is detected by a sensor node, the nearby camera is ordered to start recording so that an escort can view the person's condition in case help is needed.
The proposed architecture is shown in Figure 4. The server, located on the ground floor of the prototype building, tracks disabled people who may deviate from the correct route. A tracking interface lets guardians monitor the disabled person while the smartphone is in use. The smartphone records profile data such as the user ID, a time stamp for each visited position, and the elapsed time since leaving the last position, and sends this information wirelessly to the server, which performs the tracking.

Architecture and system interaction for monitoring and navigation prototype.
The smartphone used in the prototype is a Samsung Galaxy S. It supports dual-band CDMA2000/EV-DO at 800 and 1,900 MHz; WiMAX at 2.5 to 2.7 GHz; GSM/GPRS/EDGE at 850, 900, 1,700, 1,800, 1,900, and 2,100 MHz; and 3G (HSDPA 7.2 Mbps, HSUPA 5.76 Mbps) at 900, 1,900, and 2,100 MHz. Each smartphone runs Android 2.3.6 with the TouchWiz UI 3.0.
The development environment was set up by downloading the Android SDK and installing the ADT plugin for Eclipse; the SDK tools and platforms were obtained through the SDK manager. The server is an Intel-based PC that authenticates the disabled people participating in the prototype, plans the routes, and serves photos requested by the smartphone each time a location is visited.
The smartphone software sets a timeout based on the individual, the last visited location, and the selected destination. The server on the ground floor receives an alert when the timeout expires. The timeout value is implemented in JavaScript and personalized according to the profile. Disabled people are told to turn back if they have missed the predefined correct route to the desired destination.
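A minimal sketch of this per-profile timeout logic is given below in Python (the prototype itself used JavaScript); the speed values, slack factor, and function names are illustrative assumptions, not figures from the paper.

```python
# Hypothetical per-profile timeout: the allowed travel time between two
# waypoints depends on the disability profile and the leg's distance.
PROFILE_SPEED_MPS = {"blind": 0.5, "deaf": 1.2, "wheelchair": 0.8}  # assumed speeds
SLACK_FACTOR = 2.0  # allow twice the expected travel time before alerting

def timeout_seconds(profile, leg_distance_m):
    """Personalized timeout for one leg of the route."""
    return SLACK_FACTOR * leg_distance_m / PROFILE_SPEED_MPS[profile]

def is_overdue(profile, leg_distance_m, elapsed_s):
    """True when the server should alert a guardian that the user is overdue."""
    return elapsed_s > timeout_seconds(profile, leg_distance_m)
```

For example, under these assumed speeds a blind user on a 10 m leg gets a 40-second allowance, after which the server would raise an alert.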
In training mode, the platform posts training materials, such as navigation photos and their associated sensors for the waypoints on the correct track, listed in sequential order for each destination. To help trainers brief the disabled person, explanatory descriptions such as position names and the distance to the next position are also included.
Photos show the direction the individual should take: straight ahead, left, right, or taking an elevator. With the trainers' help, users become skilled at operating the smartphone and sending signals to the sensors.
Localization is used to point the user in the right direction. It triggers the WSN sensor nodes based on the received signal strength (RSS), and differences in signal strength are used to activate cameras for monitoring purposes. We found empirically that errors fluctuate significantly with environmental changes such as closed doors, dense crowds, and temporary obstacles in the building.
The smartphone receives signals when passing a decision point: beacons emitted by the sensors trigger the navigation instructions on the smartphone. Our implementation uses two technologies in parallel for localization and navigation, (1) Bluetooth and (2) IEEE 802.15.4 ZigBee, while RSS is used only for monitoring. Bluetooth serves as the beacon source because the modules are inexpensive and most smartphones already include Bluetooth, which simplifies the pickup process. Although Bluetooth is designed to cover an area roughly 10 meters in diameter, we reduced its range to at most 3 meters for better precision and battery consumption.
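The trimmed ~3 m pickup range can be approximated in software as well as in radio power. The sketch below gates beacons with a log-distance path-loss model; the reference power at 1 m and the path-loss exponent are illustrative constants, not values measured in the paper.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Estimate distance (m) from RSSI with a log-distance path-loss model.
    tx_power_dbm is the assumed RSSI at 1 m (hypothetical calibration)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def beacon_in_range(rssi_dbm, max_range_m=3.0):
    """Accept a Bluetooth beacon only inside the trimmed navigation range."""
    return rssi_to_distance(rssi_dbm) <= max_range_m
```

With these constants, a beacon at the calibration power (−59 dBm) reads as 1 m away and is accepted, while a much weaker beacon is rejected as out of range.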
IEEE 802.15.4 is the basis for ZigBee, which we currently implement using WASPmote technology; WASPmote facilitates the communication with smartphones [12]. The prototype thus works in two directions: the Bluetooth direction, with the transmit power adapted as described above, and the IEEE 802.15.4/ZigBee direction.
Disabled people may misinterpret the given information or miss it entirely, so a realistic system must handle disorientation properly. Using both the computed route and the beacons the smartphone picks up, disorientation can be detected very quickly: if the user is on the right track, each beacon should be consistent with the recommended route, while an unexpected beacon indicates a deviation from the correct route. The optimal route is then recalculated on the user's smartphone, with the current position as the new starting point.
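This detection step can be sketched as a comparison between the beacon just received and the remaining waypoints on the planned route, replanning from the beacon's node on a mismatch. The function names and route representation below are illustrative assumptions.

```python
def check_progress(planned_route, next_idx, observed_beacon):
    """Return the index of the waypoint the user actually reached,
    or None if the beacon is not on the remaining route (disoriented)."""
    for i in range(next_idx, len(planned_route)):
        if planned_route[i] == observed_beacon:
            return i
    return None

def on_beacon(planned_route, next_idx, observed_beacon, replan):
    """On each beacon, either advance along the route or recompute it.
    `replan` stands in for the routing engine, starting from the beacon's node."""
    idx = check_progress(planned_route, next_idx, observed_beacon)
    if idx is None:
        # Deviation detected: current position becomes the new start point.
        return replan(observed_beacon), 1
    return planned_route, idx + 1
```

For example, if the plan is ["n1", "n2", "n3"] and the phone hears "n9", the route is rebuilt from "n9"; hearing "n2" simply advances the expected waypoint.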
Each disabled person in the prototype carries a smartphone, which shows just-in-time directions and instructions by displaying pictures or videos, with or without verbal prompts, based on the user's profile. Directions are triggered when the user approaches the area close to a decision point, so that there is time to use them during navigation.
Through experimentation, we found that 2-3 meters is a suitable distance at which to show navigation prompts on the user's smartphone. At each decision point, every orientation requires its own photo, so the routing algorithm substitutes the direction arrows according to the user's current orientation; for indoor intersections, four photos are prepared in advance. In the prototype, a node is placed on the floor plan at each decision point, that is, any physical location where the disabled person is presented with a navigation choice. WSN sensor nodes are placed only at decision points: doorways, corners, the interior of a room, or hallway intersections.
4. Experiments
A connected graph with 60 nodes and different link weights was constructed for the three-floor building. Every floor is similar to the one shown in Figure 5 and contains 10 rooms in addition to stairs and an elevator; each room has an area of 3 m², as shown in Figure 5.

Floor design.
We experimented with ten different disabled people, ensuring a mix of genders, ages, and types of disabilities. The participants' profiles are shown in Table 1.
Profiles for 10 different disabled people.
For each disability, one participant is older (slower) and the other is younger (faster), so that the RSSI could be measured for each case and its allowed range learned. We adopted the approach of Sevillano et al. [3] for locating the cameras in each room, in addition to the sensor nodes, for full coverage.
Baseline experiments were conducted as an initial reference phase before experimenting with MNDWSN. A random walk with paper maps and free walking, without the proposed system, served as the baseline reference.
MNDWSN reduced wandering tenfold compared to this baseline. When a participant deviated, the path was recalculated using Dijkstra's routing algorithm.
The routing algorithm took a mean processing time of 0.001 seconds and a maximum of 0.01 seconds on the three-floor building, a response time low enough for real-time pedestrian navigation. Experiments were repeated 3 times for verification. The MNDWSN nodes used a 64-bit CPU, a 2.4 GHz radio transceiver, and a built-in light sensor.
MNDWSN was tested in a real environment. Starting points were located at a restroom on the ground floor and destination points at a restroom on the second floor. Reference experiments provided a baseline, and their results were compared with those of MNDWSN to compute the performance improvement.
In the reference experiments, participants received verbal instructions only and then carried a paper map to navigate their trips. Participants were afterwards asked which aid they preferred: (1) verbal aids only, (2) pictures only, or (3) pictures with verbal aids. For the rest of the experiments, the disabled participants carried their smartphones and used the MNDWSN system. This experimentation phase comprised a total of 70 sessions.
The experiments test the implemented prototype on seven different trips with different combinations of stairways, elevators, and turns. The routes were designed to exhibit various complexities, as shown in Table 2. Sources and destinations were varied for reliability and to prevent participants from memorizing the path.
Seven different trips for experimentation.
A trip is considered successful if the participant reached the destination without a single outside request or help; if the participant got lost along the way or requested help other than MNDWSN, the trip is considered failed. The seven trips are listed in Table 2.
5. Results
The experiments show promising results for the MNDWSN system and provide evidence of the applicability of the proposed autonomous indoor navigation system. The outcomes were deduced from the observations. The baseline experiments suggested that blind participants could not navigate without a person by their side, whereas MNDWSN was able to assist them efficiently.
Blind participants used verbal aids only and were routed along different paths that excluded dim corridors and/or stairs. Deaf participants' profiles supported pictures only, while wheelchair users' profiles supported both pictures and verbal aids. This resulted in longer but safer trips: for example, a wheelchair participant followed a longer route that replaced the stairs on the suggested route with an elevator, and therefore had to wait some time for the elevator.
MNDWSN guided the disabled volunteers along a route of seven sensor nodes; two were end nodes of the corridor, and two others were points located in a room with a low light level.
The RSSI scheme activates monitoring through the cameras, and different responses were recorded. The main purpose of the RSSI measurements is to trigger the cameras on moving objects: variations in the received signal strength indicator (RSSI) reveal the movement of disabled people in the field. The RSSI values depend on the person under study, as shown in Table 3.
RSSI range based for each disabled person.
As deduced from Table 3, an RSSI lower than −20 means that a disabled person is moving and should be detected. A couple of cases were tested per disability to study the accepted range in each case. To complete a trip, selected positions on the route must be passed successfully; the server determines these positions from the sensor patterns. The volunteers were trained before the experiments to use the smartphone and to sense according to their profiles. Table 4 shows the experimental results when using MNDWSN.
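The −20 threshold rule above can be read as a simple trigger: when a node's RSSI drops below −20 (on this deployment's scale), a person is assumed to be moving past it and the nearby camera records. A minimal sketch follows; the `Camera` stub is hypothetical.

```python
MOTION_RSSI_THRESHOLD = -20  # from Table 3: below this, a person is moving

def should_record(rssi):
    """Infer motion from a sensor node's RSSI reading."""
    return rssi < MOTION_RSSI_THRESHOLD

class Camera:
    """Hypothetical camera stub triggered by the RSSI motion rule,
    so an escort can view the disabled person's condition if needed."""
    def __init__(self):
        self.recording = False

    def update(self, rssi):
        # Start recording on inferred motion; stop once the RSSI
        # rises back above the threshold.
        self.recording = should_record(rssi)
        return self.recording
```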
Success to failure ratio using MNDWSN.
In the 70 trips performed by the 10 users, 7 trips failed and 63 succeeded, a success rate of 90%. Participants 1 through 4 failed to complete the route involving stairs; they apparently did not respond well to the prototype's instructions. Participant 8 passed a sensor on route 4 without it being sensed, which caused a deviation from the correct path.
In the reference experiments, the volunteers were asked to take a route without using the MNDWSN smartphone. These experiments served to estimate the effectiveness of the assistive environment and to establish baseline navigation ability.
If the same routes could be traversed successfully without any navigation aid, this would cast doubt on the proposed system: a successful trip without MNDWSN implies that the participant does not need the smartphone, typically because the route has been memorized, which would bias the results.
Of the 70 trips, 31 failed and 39 succeeded (Table 5), a success rate of 55.7% as opposed to 90% when using MNDWSN.
Success to failure ratio without using MNDWSN.
6. Conclusion and Future Work
This paper proposed MNDWSN, a monitoring and navigation system that helps people with three types of disabilities (blind, deaf, and on wheelchair) of different ages, genders, and profiles. The system is based on wireless sensor networks (WSNs), with sensor nodes and cameras placed in the hallways and at each room of the tested building.
Experiments were conducted with ten real participants: four blind, two deaf, and four on wheelchairs. In the baseline reference experiments, paper maps and a guiding escort helped the disabled person, and the results were compared with those of MNDWSN. A trip is considered successful if the disabled person reaches the destination without external help, and failed otherwise. The proposed MNDWSN system showed a noticeable improvement, outperforming the baseline reference experiments by 34 percentage points (90% versus 55.7% success).
This is ongoing research; more experiments will be conducted with more types of disabilities. Only one disability per person is assumed in this paper; multiple disabilities per person are left for future work.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
Thanks are due to the volunteers who took part in this study and to the referees for their comments and efforts.
