Abstract
In this article, we propose the visual application of a navigation framework for a wheeled robot to disinfect surfaces. Since dynamic environments are complicated, advanced sensors are integrated into the hardware platform to enhance the navigation task. The 2D lidar UTM-30LX from Hokuyo attached to the front of the robot can cover a wide scanning area. To provide better results in laser scan matching, an inertial measurement unit was integrated into the robot's body. The output of this combination feeds into a global costmap for monitoring and navigation. Additionally, incremental encoders that obtain high-resolution position data are connected to the rear wheels. The role of these positioning sensors is to identify the current location of the robot in a local costmap. To detect the appearance of a human, a Kinect digital camera is fixed to the top of the robot. All feedback signals are combined in the host computer to navigate the autonomous robot. For disinfection missions, the robot carries several ultraviolet lamps to autonomously patrol unknown environments. To visualize the robot's effectiveness, our approach was validated using both a virtual simulation and an experimental test. The contributions of this work are summarized as follows: (i) a structure for ultraviolet-based hardware was first established; (ii) the theoretical computations for the robot's localization in the 3D workspace provide a fundamental basis for further developments; and (iii) data fusion from advanced sensing devices was integrated to enable navigation in uncertain environments.
Introduction
In recent times, infectious disease has impacted every place on Earth, including Asia, Europe, and Africa. Most regular activities, such as commercial exchange in increasingly interconnected economies, have been interrupted globally. Although there have been significant efforts to battle the virus, such as breaking the chains of transmission and reducing the infection rate, to date we have neither fully understood the origin of the virus nor the mechanisms of its mutation. Hence, to protect public health, direct interactions between people should be restricted as much as possible. However, it is impossible to prohibit social relations in a modern society. As a result, an intermediate smart robot represents an excellent solution for delivering medications and food, measuring human health, and supporting people's psychological well-being. In waves of infectious disease, the potential applications of robotic systems widen considerably.
In real-world applications, the presence of robots around us is no longer strange, thanks to Industry 4.0, which involves smart sensing devices, wide-band communication, connected multi-agent systems, and visual computation in complicated systems. In most relevant studies, smart sensors and smart manufacturing processes were able to facilitate monitoring and supervising the entire production line, in addition to making their own decisions. Nevertheless, the problems are completely different in medical applications. To adapt to rapid infection, it is necessary to establish an intelligent supply chain for medical disposables and equipment so that patients can receive essential medical items in time.1,2 The pandemic has also affected manufacturing and the economy throughout the world. This reality emphasizes the need for more research into remotely operated machines and autonomous systems that can work far from operators and make decisions by themselves. To respond to these requirements, there are extensive developments and opportunities to be explored and integrated in robotics. In the case of clinical care, some fields of specific importance, including disease prevention, diagnosis and screening, patient care, and disease management, need greater investment.
In this context, an autonomous robot must navigate by itself in an unknown environment. For several decades, various commonplace navigation strategies have been used. These strategies can be classified into two subcategories: classical approaches and reactive ones. During the period when artificial intelligence schemes were not commonly studied, classical methods were very popular for mobile robots to solve issues in navigation. These methods included cell decomposition,3,4 roadmaps,5–7 and artificial potential fields.8,9 The major disadvantages of these methods are their high computational cost and their difficulty adapting to unexpected occurrences in the working environment, which makes them difficult to implement in real-time applications. In the second group, advanced algorithms such as particle swarm optimization,10 artificial bee colony,11 cuckoo search,12 the firefly algorithm,13 and data fusion14 have been suggested for mobile robot navigation in place of conventional methods. These methods can potentially deal with uncertainty when the robot moves alone. Today, most researchers prefer reactive approaches because of their powerful ability to fuse various strategies or data to enhance autonomy. Requiring less computational effort, these approaches promise further development in future studies.
Literature reviews
Generally speaking, there is still room to develop robotic systems to combat COVID-19 in areas such as telesurgery robots, diagnostic testing of COVID symptoms, personal care robots, and disease prevention. For the first kind of robot, teleoperation is a mature technology that can be used for both telemedicine and telecommuting. This technology can be used regardless of the known or suspected severe acute respiratory syndrome coronavirus status of the patient, and all surgery can be performed in an epicenter of the COVID-19 pandemic. Robot-assisted surgery15 could help decrease the duration of a patient's hospital stay and protect the surgical team at the patient's bedside. This technology potentially lessens not only contamination of the surgical area with body fluids and surgical gases but also the number of directly exposed medical staff. However, some questions remain, as surgery robots may not be sufficient for all cases of COVID-19 or for unknown future diseases that also become widespread. Furthermore, the obligatory constraints of facilities and the operating skills required for the surgical robot could become issues if such a robot is deployed in poor nations or developing countries. In the near future, with considerable enhancements in 5G bandwidth, remote communication with surgery robots16,17 will become faster and more stable. In the areas of medicine delivery, health-care services, and daily consultation, robot-assisted care18 is an excellent method for maintaining medical order in a hazardous environment. In this way, the health and wellness of hospital staff can be preserved in the fight against COVID-19. During a long period of treatment, the absence of social interaction for the patient and the intelligence of the machinery should be discussed more thoroughly. For the initial diagnostic testing of COVID-19, most experts suggest gathering and examining nasopharyngeal and oropharyngeal swabs.
When an outbreak occurs, a vital problem is the lack of qualified medical staff to swab patients and process test samples. The greatest values of robots in clinical applications19,20 are their ability to provide noncontact detection and remote sampling in order to minimize risk. However, there is no discussion of isolating mutual infections among a large number of suspected patients, and medical assistants are still needed to detect pathogens in isolation sleeves.
For robots classified under diagnosis and screening, the use of a wheeled platform to measure temperature or recognize disease symptoms could be a practical application in public places, such as the entryways of buildings and offices. Commonly, an automated camera system is utilized to screen multiple people in a large area. In the literature,21,22 several developers introduced models for surveillance robots in order to promote social distancing in complex urban areas and monitor the body temperatures of people in crowds. Incorporating thermal sensors and visual computation schemes into distant mobile robots could increase the efficiency and coverage of screening. Nonetheless, the design of these robots is still very simple, while real-world scenarios are complex. Intelligent algorithms must be integrated into such robots to predict human intentions. Currently, although robot models are small, they can still cause confusion or unease when approaching humans.
For practical implementation, a friendly service robot is one of the best ideas for human-oriented design. Given the issues of an ageing population and the busy lifestyles of youth in many countries, there is a need to utilize intelligent robotic systems and autonomous machines for this purpose. Most notably, elderly people need their mental health taken care of if they remain in isolation. In the literature,23–25 a companion robot was shown to have the potential to mitigate feelings of loneliness by building different types of supportive relationships. Initially, the operationalization and measurement of loneliness and an impact analysis of the companion robot were undertaken. However, existing limitations include the significant need to improve the robot. Moreover, the level of autonomy and the proactive interaction model were investigated only superficially. To enhance its interactive effects, the robot needs to converse deeply with elderly patients by integrating artificial intelligence algorithms. For the well-being of humans, the socially assistive robot plays a role in addressing the secondary impacts of the global pandemic.26 Usually, most researchers focus on the primary aspects of robotic applications, such as monitoring and reducing loneliness when a human remains in isolation. On the other hand, the secondary influences on distance learning, job searches, and vocational training need to be explored. Interdisciplinary robotics investigations should be considered to establish a foundation for fighting against COVID-19.
The most important factor in evaluating the quality of a hotel is its human resources. When making travel decisions, people often compare several accommodation options with different attributes. In the tourism industry, the attitude of staff can influence a customer's pleasure and avoid inducing distress or anxiety.27,28 A change in customers' willingness to accept the presence of robots would considerably encourage many industrial developers. For disease prevention, robot-controlled touchless ultraviolet (UV) light is being employed for disinfection because COVID-19 can persist on contaminated surfaces through respiratory droplet transfer.29 Before the outbreak, disinfection by UV lamp was already considered an efficient touchless solution for the terminal disinfection of rooms.30 During the global pandemic, COVID-19 can remain on inanimate surfaces, including metal, glass, and plastic, for days. The use of a wheeled robot with a UV light device has thus become increasingly common to reduce contamination on high-touch surfaces in offices, hotels, public places, and hospitals.31 Instead of manual disinfection, which involves many operators and increases exposure risk, autonomous wall-following disinfection robots represent a cost-efficient, rapid, and highly effective method.32 However, in uncertain environments that might contain unexpected obstacles, wall-less working spaces, or unseen areas, more studies are needed to develop autonomous UV-based disinfection robots. Table 1 summarizes studies on robotic applications related to the global pandemic. The contributions of our research are as follows. First, the development of a hardware platform for UV-based applications is outlined. Second, a mathematical expression for theoretical localization in a three-dimensional workspace is described to compute the location of the robot. Third, a data fusion technique is developed to autonomously drive the robot in an unknown environment.
Review of state-of-the-art robotic applications in fighting COVID-19.
Proposed approach
Disinfection robots are a recent technology used to deactivate micro-organisms, but they require mastery of a set of techniques including mechanics, electronics, navigation, and programming, in which vision-based driving is the most significant application. Since this technology works in an unknown environment, the autonomous disinfection robot must first build a global map. Then, the robot can robustly navigate and orient itself to reach its final destination without any collisions.
Investigation of UV power sources
Previous studies indicated that an ultraviolet beam (UVB) can effectively destroy various micro-organisms, including COVID-19. UV disinfection technology uses either mercury bulb devices or pulsed xenon bulb devices.29 In both cases, objects and surfaces in the direct line of sight can be decontaminated by UVB more successfully than objects in other areas. It is widely recognized that a UVB source can be modeled as a set of continuously distributed radiant point sources, as shown in Figure 1(a). Each point source radiates the power P_i, which is given by the ratio of the radiant power of the lamp to the total number of point sources. The radiation of point source P_i is a scalar value. Hence, the intensity of UVB can be evaluated at any point A located at a relative distance from the point source.

A set of continuously radiant point sources (a) and intensity at a point on the sphere centered on the point source (b).
The intensity E_A produced by point source P_i of the UVB at point A is computed as

E_A = [P_i / (4π r_i^2)] · e^(−μ r_i)   (1)

where P_i is the radiant power at point source i, r_i is the distance from point source i to point A, and μ is the absorption coefficient of the UV transmission medium. The UV intensity around the point source can therefore be precisely determined by the distance between the point source and the receiving point and by the absorption coefficient of the UV transmission medium.
As shown in Figure 2(a), the intensity at any receiving point in the radiation region is considered to be the sum of all the intensity distributions from the source points in the system.

The intensity at any point receiving from total point sources: without crystal tube (a) and with crystal tube (b).
E = Σ_{i=1..n} E_i = Σ_{i=1..n} [P_i / (4π r_i^2)] · e^(−μ r_i)   (2)

where r_i is the traveling distance radiating from the ith point source to the receiving point, and n is the total number of point sources. If many UV lamps are used, the UV intensity at any one point equals the sum of the UV intensities of the individual lamps at that point. From the results of surveying the UV intensity at any point using the multi-point source method, the total intensity of multiple lamps is the sum over all lamps of the intensity that each lamp contributes at that point.
For the UV disinfection system shown in Figure 2(b), the UV lamp is situated in a crystal tube. The total coefficient of radiation absorption33,34 then combines the absorption of the transmission medium with that of the tube material. Substituting this total coefficient into equation (1) and combining the result with equation (2) gives the intensity at any receiving point outside the tube.
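The multi-point-source summation described above can be illustrated with a short sketch. This is not the authors' implementation; the function name, the discretization of the lamp into point sources, and the numeric values are assumptions chosen for illustration.

```python
import math

def uv_intensity(point, sources, power_per_source, mu=0.0):
    """Sum the UV intensity at `point` contributed by each point source.

    Each source contributes P_i / (4*pi*r_i**2), attenuated by exp(-mu*r_i)
    to model absorption in the transmission medium, as in equations (1)-(2).
    """
    total = 0.0
    for src in sources:
        r = math.dist(point, src)
        total += power_per_source / (4.0 * math.pi * r**2) * math.exp(-mu * r)
    return total

# Model a 1 m lamp as 10 point sources along the z-axis; total power 30 W
# split equally among the sources (illustrative values only).
n = 10
sources = [(0.0, 0.0, 0.1 * i) for i in range(n)]
intensity = uv_intensity((1.0, 0.0, 0.5), sources, power_per_source=30.0 / n)
```

With multiple lamps, the same function can simply be called once per lamp and the results summed, mirroring the superposition stated in the text.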
The authors in the literature 35,36 discussed which UV-based technologies are appropriate for autonomous robots in the current situation. First, it is necessary to locate the adapted UV light on top of the mobile platform, as shown in Figure 3. Given the new restrictions placed on daily life by social distancing requirements, advanced techniques could be utilized for navigation and detection in this scenario, such as lidar, digital cameras, and ultrasound sensors. The robotic UV platform fuses integrated sensors to perform simultaneous localization and mapping (SLAM).

The uniform distribution of UV lamps (a), mobile UV-based disinfection system (b), and inside mechanism of vertically adjustable height (c).
To stabilize the whole system, including both the mobile platform and the UV lamp, the system's center of gravity (CoG) must be adapted to different system states. The robot model is theoretically simulated in Figure 4(a). To avoid tipping over, the moment generated by the gravity force around the center of rotation C must be greater than the moment of the centrifugal force:

P · (b/2) > (m v^2 / R) · h   (7)

where b is the distance between the two active wheels, h is the height of the CoG, P = m g is the gravity force, v is the velocity of the robot, m is the total mass of the system, g is the gravitational acceleration, and R is the radius of the turning motion.

Computational model of the whole system during motion (a) and theoretical validation on a computer (b).
To validate the mathematical computations, the theoretical result was simulated on a computer, as shown in Figure 4(b). The benefit of the adaptive height is that it provides a wide working area for the UV lamp to disinfect while preventing accidental tip-over.37,38 In terms of adjusting the system height, the robot shrinks to its minimum size when turning in order to ensure stable movement. On a linear trajectory, the system height can reach the boundary conditions given in equation (7).
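Because the mass cancels in the tip-over condition of equation (7), the bound on the turning speed depends only on geometry. A small sketch of this bound, with illustrative dimensions that are not taken from the paper:

```python
import math

def max_safe_speed(b, h, R, g=9.81):
    """Upper bound on turning speed from the tip-over condition
    m*g*(b/2) > (m*v**2/R)*h  =>  v < sqrt(g*b*R/(2*h)).
    The mass m cancels, so only the geometry enters the bound."""
    return math.sqrt(g * b * R / (2.0 * h))

# Assumed values: 0.5 m wheel track, 1 m turning radius, CoG height
# 0.8 m with the lamp extended versus 0.4 m with the lamp retracted.
v_tall = max_safe_speed(b=0.5, h=0.8, R=1.0)
v_short = max_safe_speed(b=0.5, h=0.4, R=1.0)
```

Lowering the CoG (retracting the lamp) raises the admissible turning speed, which matches the text's observation that the robot shrinks to its minimum size when turning.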
Autonomous navigation in an unknown environment
In SLAM, the robot operating system (ROS) is an essential tool that allows the robotic system to function in uncertain circumstances. The autonomous navigation function consists of one master node and several other nodes, among which the move_base node plays a crucial role.39 This node plans the desired trajectory and commands the driving motors via linear and angular velocities. The inputs to the move_base node include the scanning data from the sensing devices, odometry information, and translational offset values. In the move_base node, two costmaps are used to store environmental data: the global costmap plans the motion trajectory on a universal map, while the local costmap locally generates an obstacle-avoidance map.
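The two-costmap idea can be illustrated with a deliberately simplified sketch. This is plain Python, not the actual ROS costmap_2d implementation; the grid size, resolution, and cell values are assumptions.

```python
# Minimal sketch of the two-costmap scheme: a fixed global grid used for
# planning, plus a small rolling window around the robot for local
# obstacle avoidance. Cell values: 0 = free, 100 = occupied.

GLOBAL_SIZE = 20          # 20 x 20 cells (assumed resolution, e.g. 0.1 m/cell)
LOCAL_RADIUS = 2          # local window extends 2 cells around the robot

global_costmap = [[0] * GLOBAL_SIZE for _ in range(GLOBAL_SIZE)]

def mark_obstacle(cx, cy):
    """Record a scanned obstacle in the global costmap."""
    if 0 <= cx < GLOBAL_SIZE and 0 <= cy < GLOBAL_SIZE:
        global_costmap[cy][cx] = 100

def local_costmap(rx, ry):
    """Extract the rolling window centered on robot cell (rx, ry)."""
    window = []
    for y in range(ry - LOCAL_RADIUS, ry + LOCAL_RADIUS + 1):
        row = []
        for x in range(rx - LOCAL_RADIUS, rx + LOCAL_RADIUS + 1):
            inside = 0 <= x < GLOBAL_SIZE and 0 <= y < GLOBAL_SIZE
            row.append(global_costmap[y][x] if inside else 100)  # out of map -> lethal
        window.append(row)
    return window

mark_obstacle(5, 4)            # an obstacle seen by the lidar
window = local_costmap(5, 5)   # robot one cell away from the obstacle
```

The global grid persists for trajectory planning, while the window is regenerated around the robot's current cell at every step, mirroring the global/local split described above.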
In this research, we propose a navigation framework that avoids collisions for a mobile platform carrying UV lamps (see Figure 5). Data from the lidar sensor and the inertial measurement unit (IMU) are fused to produce the estimated values used as input for the model of observation. To improve the accuracy of navigation and lessen the computational burden, the weight-estimation and resampling process is executed before the global map of the workspace is created. For localization of the robot, two positioning sensors are required. Based on their signals, a model of motion is established to build the robot's trajectories. The Monte Carlo algorithm, which represents the distribution of possible states as the robot moves and senses the environment, is embedded in the position sampling. Then, a local costmap is formed to indicate the current location of the robot. Based on these investigations, autonomous navigation via data fusion was successfully deployed on the ROS platform.

The proposed navigation framework for the UV-based disinfection robot in an unknown environment.
Theoretical estimation in a 3D workspace
Here, we consider a nonholonomic wheeled mobile robot without wheel slip. A list of mathematical symbols is briefly outlined in Table 2. It is assumed that the robot's pose is expressed with respect to a given fixed frame R_0 and a rotated body frame R_1, as illustrated in Figure 6.
Summary notation of mathematical symbols.

The representation of vector u in the given coordinate frame R_0 and the rotated coordinate frame R_1 (a) and the three rotation angles (b).

The model of motion for a mobile platform in 3D working space.
Any vector u expressed in the rotated frame R_1 can be mapped into the given frame R_0 through a rotation matrix composed of three elementary rotations about the coordinate axes. Composing these elementary rotations and combining the intermediate results of equations (15) and (18) yields the overall rotation matrix that relates coordinates expressed in R_0 to coordinates expressed in R_1.
Using this relationship between R_0 and R_1, the location of the robot in the working space at time step k is defined by its position coordinates together with its orientation angles. The model of motion for the mobile platform then propagates this state over time from the readings of the wheel encoders, while the angular values are provided by the IMU measurements.
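As a minimal sketch of this propagation step (not the authors' implementation; the function name and parameters are illustrative), a standard unicycle motion model for a differential-drive base without wheel slip integrates the linear speed v and yaw rate omega over one time step:

```python
import math

def motion_update(x, y, theta, v, omega, dt):
    """Propagate the planar pose (x, y, theta) of a differential-drive
    base over dt, assuming no wheel slip as stated in the text."""
    if abs(omega) < 1e-9:
        # Straight-line limit: omega ~ 0, so just advance along the heading.
        return x + v * dt * math.cos(theta), y + v * dt * math.sin(theta), theta
    # Exact integration along a circular arc of radius v/omega.
    r = v / omega
    x_new = x - r * math.sin(theta) + r * math.sin(theta + omega * dt)
    y_new = y + r * math.cos(theta) - r * math.cos(theta + omega * dt)
    return x_new, y_new, theta + omega * dt

pose = motion_update(0.0, 0.0, 0.0, v=1.0, omega=0.0, dt=2.0)
```

In a localization framework, this same update is applied to every pose hypothesis before the observation model reweights them.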
Model of motion
Generally speaking, the system state usually involves three linear position variables together with the orientation angles of the platform. Owing to uncertain factors and environmental disturbances, it is impossible to represent the system state by a single deterministic vector at time step t. To overcome this problem, the system state is described by a probability distribution. In Figure 8, the autonomous robot is initially located at a known pose, and its subsequent states are represented by the spreading probability distribution.

Model of probability distribution for the robot’s system state.
Model of observation
The model of observation, which describes the processing of laser data under external disturbances, is defined as a conditional probability distribution p(z_t | x_t, m), where z_t is the measurement, x_t is the robot state, and m is the map. Considering the individual beam measurements to be independent, we can approximate

p(z_t | x_t, m) ≈ ∏_k p(z_t^k | x_t, m)

Map m is a grid map divided into several cells. Each cell m_i contains the coordinates (x, y) together with a flag indicating the presence or absence of an obstacle, so that map m is represented by the set of cells {m_i}.
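Under this independence assumption, the scan likelihood is the product of the per-beam likelihoods. A minimal sketch, assuming a Gaussian per-beam noise model with standard deviation sigma (an illustrative choice, not specified in the text):

```python
import math

def scan_likelihood(measured, expected, sigma=0.1):
    """Approximate p(z|x,m) as a product of independent per-beam Gaussian
    likelihoods, matching the independence assumption in the text.
    Summing log-likelihoods avoids numerical underflow for many beams."""
    log_p = 0.0
    for z, z_hat in zip(measured, expected):
        log_p += -0.5 * ((z - z_hat) / sigma) ** 2
        log_p -= math.log(sigma * math.sqrt(2.0 * math.pi))
    return math.exp(log_p)

# A scan that matches the map prediction scores higher than one that does not.
good = scan_likelihood([1.0, 2.0, 1.5], [1.0, 2.0, 1.5])
bad = scan_likelihood([1.0, 2.0, 1.5], [1.3, 1.7, 1.8])
```

Here `expected` would come from ray-casting the particle's pose into the grid map m; that step is omitted for brevity.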
Monte-Carlo localization
The method of Monte Carlo localization is a type of particle filter used to identify the robot’s position in a given workspace. This method uses a finite number of samples to represent a probability distribution.
40,41
Because the number of samples is limited, this scheme is approximate. The distribution function
Here, each particle
Results of simulation and experiments
In this section, we verify the effectiveness of the proposed approach on the ROS platform. The model of the autonomous surveillance robot was established as shown in Figure 9(a), with the corresponding design in a real scene shown in Figure 9(b). The overall process for operating the robot is as follows. There were several target rooms, and the robot initially moved from the walking corridor to each room. Upon arriving in a room, the robot would patrol around the tables and chairs for several minutes. At that time, a UV lamp was vertically elevated to expand the laser scanning range. The wheeled robot used data fusion from various sensors to avoid collisions, plan its trajectory, and check whether it had completed its patrol round. To further examine the performance of the surveillance robot in disinfecting, we conducted some real-world experiments. The proposed framework was embedded into the microprocessor, which executed the appropriate behaviors.

Model of the mobile robot in the virtual environment (a) and practical scenario (b).
Figure 10 shows the simulation results for the mobile robot using the proposed framework. The blue zone indicates the area the laser beam could reach. The autonomous robot accomplishes its disinfection task in two stages. In the first stage, because of the unknown nature of the environment, the global map is empty. The robot must therefore scan around its location to acquire data about its surroundings. After receiving the feedback signals, the surveillance robot is able to recognize whether there is an obstacle. The robot then approaches subsequent positions without any collisions. The robot's movement is sensed by two rear positioning sensors that exactly reflect its present location on the map. After entering each room, the robot patrols around it. This process is repeated many times until the robot has completely created the visible map. With this knowledge, the robot can autonomously control a UV lamp in the second stage. If an obstacle suddenly appears, the robot updates the status of the global map. It can be seen clearly that the proposed navigation framework performed well in the virtual tests.

Simulation results of the proposed navigation framework for mobile robot: (a) initial stage and (b) final stage. See link shorturl.at/koL57.
We conducted the experimental validation with the same conditions. Our research and development center, which includes several meeting rooms, working rooms, and walking corridors, was used as the area of the proposed framework. The autonomous surveillance robot spent the first period exploring the unknown environment by visiting and scanning each room. After a period of time, the global map in the host personal computer was successfully established, as shown in Figure 11.

Practical result of the successful building map.
The result of the real-world verification of our approach is shown in Figure 12. Ignoring the slipping phenomenon, the autonomous robot accurately tracked the desired trajectory and avoided obstacles. The mixed control, using both self-governing navigation and a UV lamp, flexibly enabled a series of complex actions. Therefore, our proposed framework can be employed for the navigation of UV-based disinfection robots and promises to be a highly applicable technique for disease prevention in public zones. To highlight its competitive performance, Table 3 compares the structure and techniques of the proposed method with those of other works.

Practical results of the proposed navigation framework for the mobile robot. See the link at shorturl.at/sCEI1.
List of comparative specifications among related studies.
Conclusions
In this article, the visual application of a navigation framework for an autonomous system in an unknown environment was presented. The mobile platform, integrated with a UV lamp, patrols around living areas to deactivate micro-organisms. Based on this idea, the proposed framework, which involves data fusion from different sensing devices, navigates the whole system while avoiding obstacles. Several simulation tests and experiments were conducted to demonstrate its effectiveness. We believe that our method is entirely capable of enabling the self-governing navigation of UV-based disinfection robots in public workspaces.
Future work in this field remains necessary. Advanced algorithms should be investigated to enhance the visual capabilities of surveillance robots on any terrain. Moreover, humans or groups of humans might appear in front of the robot. 46,47 Accordingly, socially interactive models and context-based learning techniques for improving the robot’s behavior represent promising research directions.
Supplemental material
Supplemental Material, sj-pdf-1-arx-10.1177_17298806231162202, for "Visual application of navigation framework in cyber-physical system for mobile robot to prevent disease" by Thanh Phuong Nguyen, Hung Nguyen and Ha Quang Thinh Ngo in International Journal of Advanced Robotic Systems.
Acknowledgments
We acknowledge Ho Chi Minh City University of Technology (HCMUT), VNU-HCM for supporting this study.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Supplemental material
Supplemental material for this article is available online.
References
