Abstract
The trajectory tracking performance of an actual robot is usually used as an important standard to evaluate a nonlinear control algorithm. The aim of this article is to design a new experimental robot system based on the Robot Operating System-MATLAB framework. To validate its performance, a nonlinear tracking controller with the following features is designed. Firstly, the surface robot model is expressed in Lie algebra
Introduction
Researchers have worked on mobile robot motion control for many years, and finding a suitable experimental system for validating a control algorithm is very important. However, many institutions cannot afford much of the equipment because of price, capability, or space and size limitations, which prevents many researchers from taking part in the related research work. Therefore, it is necessary to find an affordable, widely accessible robot experiment system.
Generally, an autonomous robot experiment system includes two main modules: a location estimator and a motion planning controller. Classified by performance, there are many kinds of location sensors, including QR codes, infrared localizers, laser radar, and vision-based cameras. In autonomous driving projects, vehicle location and navigation are usually based on laser radar. 1,2 This sensor, also called multi-line lidar, is often used in projects with high accuracy requirements. On the other hand, unmanned aerial vehicle (UAV) location equipment is typically a high-speed camera system, VICON. 3,4 However, laser radar and VICON are not only expensive but also require dedicated computer support. They lack commercial margin because of high manufacturing cost. Therefore, in industrial applications, mobile robots are usually located by encoders, QR codes, or infrared localizers. For example, in automated warehouses, unmanned transport robots are located by QR code labels fixed on the ground. 5,6
Since the microelectronics hardware revolution, computer image-processing ability has been upgraded by graphics processing unit (GPU) modules. Consequently, a mobile robot can generate an environment map 7 and estimate its position in the global frame based on a real-time image database. 8 This technique is called visual odometry (VO), a robot localization technique based on real-time image signals. 9,10 In addition, generating environment maps from VO and image data is called vision-based simultaneous localization and mapping (vSLAM). The robot vSLAM system has been widely used in UAV motion planning projects. 11,12 Embedded GPU computers and the vSLAM technique have been widely used in mobile robot tracking and mapping missions, for example, an autonomous robot with real-time SLAM, 13 a stereo vSLAM method on an outdoor mobile robot, 14 quadruped robot mapping and locomotion, 15 an outdoor mobile platform with multi-line lidar, 16 and indoor multi-robot navigation. 17 A world-leading robot laboratory, GRASP at the University of Pennsylvania, uses a Kalman filter to optimize a quadrotor localization algorithm named visual inertial odometry. 18 The lab has completed short-range and long-term planning with obstacle avoidance algorithm design and implementation. 19 Recently, the GRASP lab has been focusing on UAV motion planning systems based on mathematical graph theory. 20
Despite the importance of the robot's localization ability, the robot tracking control algorithm is even more significant. There are many classical controllers, for example, proportional–integral–derivative (PID), linear quadratic regulator (LQR), and Lyapunov theory-based backstepping control (BKSP). At present, researchers prefer multi-robot autopilot algorithm optimization and implementation, for instance, mobile robots, 21,22 UAV motion planning, 23 unmanned surface vehicle controllers, 24 robot operating system (ROS)-based network optimization for multi-quadrotor control, 25 and search-based UAV motion planning by linear quadratic theory. 26 A new research field is search-based motion planning for UAV flight in
This article aims to design an adaptive nonlinear controller and an experimental mobile robot system with the following features. Firstly, the robot dynamic and kinematic model is structured in Lie algebra
The main structure of this article is summarized as follows. The first section makes a statement about the whole research. The second section introduces the major symbols in this article, including the saturation function, skew-symmetric matrix, rotation matrix, and unit vectors. The mobile robot model and problem are introduced in the third section. An adaptive tracking controller is presented in the fourth section. To validate the controller, a mobile robot tracking experiment system is designed on a novel ROS-MATLAB framework in the fifth section. In the future, the experiment system could be used in related academic or industrial application projects. When ROS2-Industrial is put into operation, the ROS-MATLAB experiment system could be applied to many practical cases, for instance, formation control between UAVs and USVs, autonomous robots for indoor patrol and inspection in a warehouse, rescue robots in extreme conditions, or auto-drive unmanned surface vehicles under a space–earth network.
Notation
Special variables definition
To distinguish different variables, scalars, vectors, and matrices are defined as follows. A scalar is written as a normal character, for instance, mass m, angular velocity r, and gravitational acceleration g. A vector is written in bold, such as linear velocity
A prime
A unit vector is expressed as symbol
A saturation function is
In general, many functions can be used as saturators. For example,
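The specific saturator used in the article is not reproduced above; a common smooth choice is the hyperbolic tangent. The sketch below (illustrative names, not from the article) shows how such a function bounds any input to a symmetric limit:

```python
import numpy as np

def sat(x, limit=1.0):
    """Smooth saturation: bounds x to (-limit, limit) using tanh.
    Small inputs pass through almost unchanged; large inputs are clipped."""
    return limit * np.tanh(x / limit)

small = sat(0.1)    # nearly linear region, close to the input itself
large = sat(10.0)   # deep saturation, close to the limit
```

In controller design, wrapping a control term in such a saturator keeps commanded velocities within actuator bounds without introducing the discontinuity of a hard clip.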
An unknown variable
There are typical characters that represent the space vector’s norm and direction.
Rotation matrix
A rotation matrix
Because of the rotation matrix
Different from the pure rotation calculation in SO(3), an SE(3) object represents a 3D homogeneous transformation matrix consisting of a translation and a rotation.
An SE(3) element in space is usually parameterized by a 6×1 vector, which includes translation and rotation components.
Time derivative of rotation matrix
Because a mobile robot moves on a surface, the time derivative of its rotation matrix
where
Skew-symmetric matrix
During rotation matrix R time derivative calculation, there is a special variable called skew-symmetric matrix, which includes angular velocity vector
During space vector calculations, the skew-symmetric matrix has several special performances.
Firstly, there is a skew-symmetric matrix that includes a force’s direction
Secondly, if the direction vector is in the format as
Thirdly, the cube of the skew-symmetric matrix equals the negative of the original matrix, as
If the mobile robot moves on a surface, the skew-symmetric matrix S can be simplified as
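The properties above can be checked numerically. The sketch below (a minimal illustration, not the article's code) builds S(ω), verifies the cross-product identity, the cube identity for a unit direction vector, and the planar simplification in which only the yaw rate r survives:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix S(w) such that S(w) @ v == np.cross(w, v)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

# S(w) v equals the cross product w x v.
w = np.array([0.2, -0.5, 0.3])
v = np.array([1.0, 2.0, 3.0])
assert np.allclose(skew(w) @ v, np.cross(w, v))

# For a unit vector e, the cube identity holds: S(e)^3 = -S(e).
e = w / np.linalg.norm(w)
S = skew(e)
assert np.allclose(S @ S @ S, -S)

# Planar motion (rotation about z only): S reduces to a single yaw rate r.
r = 0.7
S_plane = skew(np.array([0.0, 0.0, r]))
```

The planar form is the one used for the surface robot: its only nonzero entries are ±r in the upper-left 2×2 block.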
Robot modeling and problem formulation
Turtlebot2 robot
In this project, the trajectory tracking controller, identification name BKSP-V-TB2, is designed for a practical robot, Turtlebot2 (TB2). The robot has been widely used by many research institutions because its platform, KOBUKI, has five programmable data ports for different kinds of additional equipment. For example, the robot can upgrade its computer vision ability by adding an embedded GPU computer (Nvidia Jetson TX2). To achieve simultaneous localization and mapping, the robot is equipped with a laser radar (Rplidar-A2), a depth camera (Kinect V1), and a powerful Wi-Fi module for real-time image-signal communication. As shown in Figure 1, the mobile robot's body-fixed frame in space includes three components along the x, y, and z axes.

The TB2 robot body-fixed frame. TB2: Turtlebot2.
Mathematics model in space
In general, a mobile robot only performs surface motion, as in the classic textbooks by Craig 34 and Fossen. 35 However, to prepare for the space–earth cooperative control task, the mobile robot is defined in space-vector format. The robot position and orientation are defined in the inertial frame

The TB2 robot motion model. TB2: Turtlebot2.
The robot orientation
Kinematic and dynamic model symbols. 39
According to the general definition, the Turtlebot2 mobile robot’s kinematic and dynamic models are
where
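The article's full space-vector model is not reproduced in this excerpt, but for a differential-drive robot such as the TB2 the planar part of the kinematics reduces to the standard unicycle model driven by surge speed u and yaw rate r. A minimal Euler-integration sketch under that assumption (parameter values are illustrative):

```python
import numpy as np

def step(state, u, r, dt):
    """One Euler step of the planar unicycle kinematics:
       x' = u*cos(psi), y' = u*sin(psi), psi' = r."""
    x, y, psi = state
    return np.array([x + u * np.cos(psi) * dt,
                     y + u * np.sin(psi) * dt,
                     psi + r * dt])

# Constant surge u and yaw rate r trace a circle of radius u/r.
state = np.array([0.0, 0.0, 0.0])
u, r, dt = 0.2, 0.2, 0.01                  # radius u/r = 1 m, as in the experiment
for _ in range(int(2 * np.pi / r / dt)):   # approximately one full revolution
    state = step(state, u, r, dt)
```

After one revolution the integrated state returns close to the start, which matches the circular reference path used later in the experiment section.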
Problem formulation
Robot motion model in space
The first challenge of this article is establishing a space model for the practical robot, Turtlebot2. To expand the autonomous system to airplane–vehicle cooperative localization and navigation, the robot model is defined in space vectors. The robot dynamic model, equation (2), and kinematic function (1) have been introduced in the "Mathematics model in space" section. In mathematical terms, a three-dimensional formulation is convenient for rotation matrix R calculation and transformation between space frames. However, the 3D model's calculation is much more complex than that of surface models.
Tracking controller
The second problem is designing a practical TB2 robot tracking controller, which is expanded in the fourth section. It is also the most challenging task because the mobile robot is an underactuated model with nonlinear and unknown parameters. This means the robot model lacks an active controller in the y-axis direction. Based on the features of the mobile robot's kinematic and dynamic model, the control algorithm is based on the Lyapunov direct method. In addition, according to the practical features of the TB2 mobile robot, the motion controllers are the linear velocity ud and angular velocity
Controller stability verification
The third challenge concerns the control algorithm's stability verification. A tracking controller's fundamental function is decreasing the errors between the desired and actual trajectory, here based on backstepping control theory and the Lyapunov direct method. In other words, it is an optimization problem about error boundedness and convergence. In physical terms, the objective is forcing the mobile robot's actual position
Experiment system
The fourth problem is designing a suitable experiment system to test the mobile robot control algorithm. Traditional robot motion and localization experiment equipment is based on high-speed cameras (indoor) or multiple location sensors (outdoor), such as GPS, multi-line radar, and vision-based sensors. All the equipment above shares two drawbacks: high price and difficult maintenance. In this article, the experiment system is based on a low-cost robot, Turtlebot2, and the ROS-MATLAB framework. The TB2 robot is lightweight, modular, and extensible. The ROS system is good at distributed computation and communication, which is why ROS has become one of the most popular programming platforms in many institutions and industrial factories.
Signal transformation
The last problem comes from the ROS-MATLAB framework in the practical experiment. The control algorithm in MATLAB m-files needs to be transformed into signals recognizable by the practical mobile robot and embedded microcomputers. There are many hidden problems during ROS-MATLAB setup, such as communication signal transformation, time-delay cancellation, and singularity avoidance for time-varying variables.
Controller design and stability analysis
The Lyapunov direct method is a fundamental theory for studying nonlinear systems, and most practical robots are not ideal linear models. In this article, the mobile robot controller theory is based on the author's previous work: a nonlinear controller for an autonomous wheelchair, 36 cooperative control of two-wheeled robots, 37,38 USVs, 39,40 and a tracking controller test system. 41–43 The practical robot, Turtlebot2, is a nonlinear control system whose algorithm is based on the Lyapunov direct method and a backstepping control law. The robot is driven by left and right wheels with encoders. Therefore, the control variables are the linear ud and angular ur velocities, as a vector
Position error
In general, a trajectory tracking controller's basic objective is driving the robot along a desired path. It means that the first target is canceling the error, equation (3), between the robot's actual and desired position in the inertial frame
where actual position is
According to mobile robot kinematic equation (1), the time derivative function of the position error is
where
First Lyapunov function
Based on the position error
where estimation error
According to position error equation (4), the first Lyapunov time derivative function is
where W1 is a positive definite parameter
Desired linear velocity
According to the robot features, the control variable is related to velocity
Because the mobile robot is an underactuated system, which cannot be directly controlled by a space vector, the deconstruction is necessary.
Space vector deconstruction
In an underactuated system, the active control vector needs deconstruction before calculating its solution. In general, there are two ways to separate a space vector

Space vector deconstruction.
It is obvious that, in case 1, the projection onto the robot surge u and sway
There is an important conclusion that the robot desired linear velocity
First Lyapunov function time derivative
Based on vector deconstruction equations (6) and (7) definition, the time derivative of the first Lyapunov function becomes
where u1 and
Direction error
Based on the actual and desired velocity vector
where the direction error
Second Lyapunov functions
After direction error definition
The second Lyapunov time derivative function is
where W2 is a positive definite function as
Desired angular velocity
Based on the second time derivative of Lyapunov function
There is a fundamental hypothesis in the Lyapunov direct method that, if the time derivative Lyapunov function is negative definite as time variable
where there is a function be independent of angular velocity
In addition, after replacing unknown disturbance
There is a new generated unknown disturbance part
Finally, the desired angular velocity
Unknown parameter estimator
In general, the unknown parameter
As shown in equation (11), there are two unknown disturbance estimator functions,
where
According to the Lyapunov direct method requirement, the time derivative of the Lyapunov function must be semi-negative definite. It means all components of the
After substituting equation (17) into (19), the integrator-format estimator functions become
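The article's exact regressor and estimator gains are not reproduced in this excerpt, so the sketch below only illustrates the generic integrator-format adaptation idea on a scalar example: the estimate is the time integral of a gain times the tracking error, and for a constant unknown disturbance the estimate converges to the true value while the error decays.

```python
# Generic integrator-type adaptation law (sketch; not the article's exact law):
#   theta_hat_dot = gamma * regressor * error
def update_estimate(theta_hat, regressor, error, gamma, dt):
    """One Euler step of an integral adaptation law."""
    return theta_hat + gamma * regressor * error * dt

# Estimate a constant disturbance d acting on a first-order error system:
#   e_dot = -k*e + (d - d_hat)
d_true, d_hat, e = 0.5, 0.0, 0.0
k, gamma, dt = 2.0, 5.0, 0.001
for _ in range(20000):                      # 20 s of simulated time
    e += (-k * e + d_true - d_hat) * dt
    d_hat = update_estimate(d_hat, 1.0, e, gamma, dt)
```

The closed loop (e, d − d_hat) has eigenvalues with negative real part for any positive k and gamma, which is the scalar analogue of the semi-negative-definite Lyapunov derivative requirement above.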
Controller stability analysis
If the kinematic parameters in equation (1) are exactly known, a perfect control law is
where ud and
Proposition
The control law ud (20),
Proof
Based on the control law ud (20),
and its time derivative
is a negative semidefinite function. Under this condition, the time derivative of V is negative definite, meaning that the origin is uniformly asymptotically stable.
Experiment system design
In general, a robot tracking control system includes three parts: localization, control, and additional function modules. However, most robot localization equipment is very expensive, such as multi-line radar, stereo vision cameras, or high-speed industrial visual sensors. To validate the controller performance without expensive equipment, a robot tracking test system was designed based on the ROS-MATLAB framework. The implementation robot, Turtlebot2, consists of a KINECT camera, a JETSON TX2 embedded GPU, a KOBUKI platform, and an Rplidar-A2. The original target is tracking control of the mobile robot TB2, which has been widely used in academic research, especially in ROS-based simulation systems. As shown in Figure 4, the TB2 robot components include a KOBUKI mobile robot with two active wheels and two speed encoders, a Kinect-V1 depth camera, an RPlidar-A2 laser radar, and an Nvidia TX2 GPU-embedded computer.

The TB2 mobile robot components. TB2: Turtlebot2.
The experiment will be introduced from two aspects, hardware and networks, in the next section.
Embedded computer
As introduced in Nvidia's official datasheet, the Jetson TX2 is a full-featured development platform for vision-based computing. Its powerful graphics unit, a 256-core NVIDIA Pascal GPU, makes it ideal equipment for applications requiring high computational performance in a low-power envelope. The module comes pre-flashed with a Linux environment, includes many common APIs, and is supported by NVIDIA's official development tools and apps. In this project, the computer is installed with ROS and the related TB2 robot packages.
Depth camera
The Kinect V1 camera is a vision-based human body sensor. Using its depth camera, the system can calculate target model states in 3D space. The camera is used for the mobile robot's self-position and orientation estimation task.
Laser radar
The Rplidar-A2 is an accurate radar whose longest reflection distance is up to 20 m. In this project, the radar provides an environment map by scanning surrounding obstacles. Based on the inertial-frame map and the robot's own odometry data, the system can calculate its position in the global frame.
Simulation software
The ROS-MATLAB framework is based on a computation computer running Matlab software, as shown in Figure 5, a Wi-Fi-based Ethernet, and several location sensors, such as vision-based cameras, laser radar, and motor encoders.

The ROS-MATLAB experiment framework. ROS: Robot Operating System.
When running the trajectory tracking experiment, the mobile robot sends odometry data to the embedded computer, Jetson TX2. In addition, the Rplidar generates a local map by scanning the surrounding environment. The vision data captured by the Kinect camera are also sent to the TX2 computer. The robot publishes all data in the ROS-MATLAB system and receives commands over Ethernet. The tracking controller runs in MATLAB software on the host personal computer (PC) and sends commands to the practical robot through the ROS system, as shown in Figure 6.
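One practical detail on the host side is that the orientation in a ROS odometry message arrives as a quaternion, while the planar controller needs the yaw angle. A small helper (illustrative Python; the article performs this step inside MATLAB m-files) implements the standard conversion:

```python
import math

def yaw_from_quaternion(x, y, z, w):
    """Extract the planar heading (yaw) from an odometry quaternion,
    as needed when consuming /odom messages on the host side."""
    return math.atan2(2.0 * (w * z + x * y),
                      1.0 - 2.0 * (y * y + z * z))

# A pure z-rotation by 90 degrees: q = (0, 0, sin(pi/4), cos(pi/4)).
q = (0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
```

Skipping this conversion, or applying it with the wrong component order, is a common source of the sudden initial turning behavior discussed in the experiment section.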

The TB2 robot tracking experiment control screen. TB2: Turtlebot2.
Simulation and experiment results
Parameters setting
The robot tracking controller is validated on the ROS-MATLAB experiment system. In this project, the simulation assumes that the robot runs in an ideal environment, ignoring many additional disturbances, such as ground friction, damping force, motor voltage delay, and mobile robot speed limitation. However, practical pretesting experiments led to the conclusion that it is impossible to cancel the mismatch between the practical and ideal robot models. Therefore, to minimize the errors, an initialization process is necessary before the practical experiments. One of the most important tasks is setting the control algorithm parameters. The experiment requirements, all controller gains, and robot parameters are defined in Table 2.
Main parameter definitions.
Before the practical experiment, communication must be established between the host PC and the mobile robot, Turtlebot2. Generally, the preparation work includes network setup for the ROS host PC and master computer, Turtlebot bring-up programming, connection testing between the host PC and the robot microcomputer, and compiling the code that publishes the mobile robot's odometry and radar scanning data. The preparation work is not finished until the mobile robot can be controlled manually from Matlab software on the PC side.
Trajectory tracking performance
During the practical experiment, the mobile robot's desired path is a circle with a radius of about 1 m, centered at the initial position

The TB2 robot trajectory tracking experiment paths. TB2: Turtlebot2.
The practical robot's tracking path is much more complex than the simulation model's. At the beginning, the mobile robot, TB2, turns left suddenly, which differs from the simulated mobile robot. The reason is that the simulated robot model faces the target direction at the start. However, the practical robot's initial orientation cannot be defined, which is one of the ROS-MATLAB limitations. Therefore, the mobile robot TB2 has to turn suddenly at the beginning until the controller calculates the desired target direction and orientation.
Position signals and errors
Position signals
As shown in Figure 8, there are reference

The TB2 robot tracking position. TB2: Turtlebot2.
Position errors
As shown in Figure 9, the position errors between reference and model

The TB2 robot tracking position errors. TB2: Turtlebot2.
As time passes, all of the errors converge to zero without sudden shocks. This means the controller drives the robot toward the destination normally. In addition, the system works effectively under the unknown disturbance
Controller performance
As shown in Figure 10, the backstepping control law performance in the first 30 s is presented. The surge velocity controller ud approaches the stability point within 10 s. The angular velocity controller

The TB2 robot tracking controller performance. TB2: Turtlebot2.
Simulation and experiment comparison
The practical robot's working environment is much more complex than the simulation. The tracked mobile model is built under the assumption that the robot works under ideal conditions. It can be found in Figure 7 that the practical robot's initial position is a complex number. Even though there are many unknown parameters and disturbances, the BKSP-TB2 robot controller still works normally. In mathematical terms, the control algorithm not only decreases the error between the assumed robot and the desired path but also works on a practical robot. The controller's robustness under nonlinear models and unknown disturbances is much better than that of classic linear controllers.
Lyapunov function performance
As the most important variables, the Lyapunov function and its time derivative are presented in Figure 11. The Lyapunov function represents the "energy" of all errors, which always converges to zero if the system is effective. It means the Lyapunov function should be decreasing (the first and second subfigures) and its time derivative must be negative definite (the third subfigure). When the Lyapunov function is positive and its time derivative is semi-negative definite, the control law achieves trajectory tracking by guaranteeing that all of the robot's state errors

The Lyapunov function and its time derivative.
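The monotonicity claims above can be checked directly on logged data. The sketch below (illustrative, not the article's code) tests that a recorded Lyapunov sequence stays non-negative and that its finite-difference derivative never becomes significantly positive:

```python
import numpy as np

def lyapunov_check(V, dt, tol=1e-9):
    """Check logged Lyapunov samples: V >= 0 everywhere and the
    finite-difference derivative of V is never (significantly) positive."""
    V = np.asarray(V)
    V_dot = np.diff(V) / dt
    return bool(np.all(V >= -tol) and np.all(V_dot <= tol))

# Example with a decaying quadratic "error energy" V(t) = V0 * exp(-2t).
t = np.arange(0.0, 5.0, 0.01)
V = 1.5 * np.exp(-2.0 * t)
```

Running such a check on experiment logs gives a quick numerical confirmation of the behavior shown in the three subfigures of Figure 11.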
Controller features comparison
To validate the robustness of the BKSP-TB2 control law, it is necessary to contrast the backstepping controller with a widely used control algorithm, PID. Firstly, to quantify the performance difference between the backstepping and PID controllers in this case, a simulation system was built in the Matlab–Simulink toolbox. Secondly, the mobile robot dynamic model needs linearization before the PID controller simulation. Facing identical linearized mobile robot models, both controllers work under the same circle-like trajectory. Finally, both controllers' performances are presented as state errors, as shown in Figure 12.

The state errors of backstepping and PID controller. PID: proportional–integral–derivative.
The orientation error signals under both controllers show that backstepping drives the error to equilibrium faster and with less vibration.
To quantify the difference between backstepping and PID, the controller state errors are calculated as ISE (integral of square error), ITSE (integral of timed square error), and IAE (integral of absolute error), as in Table 3. The simulation results are shown in Figure 13.
Controller effect errors comparison.
PID: proportional–integral–derivative.

Controllers effects in ISE, ITSE, and IAE. ISE: integral of square error; ITSE: integral of timed square error; IAE: integral of absolute errors.
As Table 3 demonstrates, the backstepping controller's errors in ISE, ITSE, and IAE are much smaller than PID's, which indicates that the backstepping controller has better performance and robustness.
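The three indices in Table 3 are standard integral error criteria and are easy to reproduce from logged error signals. The sketch below uses hypothetical error traces (not the article's data) to show that a faster-decaying error scores lower on all three indices:

```python
import numpy as np

def error_indices(e, dt):
    """Discrete approximations of the three error indices in Table 3."""
    e = np.asarray(e)
    t = np.arange(len(e)) * dt
    ise = np.sum(e**2) * dt           # Integral of Square Error
    itse = np.sum(t * e**2) * dt      # Integral of Timed Square Error
    iae = np.sum(np.abs(e)) * dt      # Integral of Absolute Error
    return ise, itse, iae

# Two hypothetical error decays over 10 s: a fast one and a slow one.
t = np.arange(0.0, 10.0, 0.01)
fast = np.exp(-2.0 * t)    # faster decay, backstepping-like
slow = np.exp(-0.5 * t)    # slower decay, PID-like
```

ITSE penalizes errors that persist late in the run more heavily than ISE, which is why it separates the two controllers most clearly in Figure 13.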
In conclusion, the backstepping controller is much more suitable for nonlinear systems and has good adaptive features, as shown in Table 4. The PID controller is simple and widely used. However, a single PID gives poor performance when the loop gains must be reduced; it also has difficulties in the presence of nonlinearities and lags in response to large disturbances. The LQR algorithm is just an automated way of finding an appropriate state-feedback controller, and it is often difficult to find the settings of a regulating controller by using a mathematical algorithm that minimizes a cost function with weighting factors.
Controller features comparison.
BKSP: backstepping; PID: proportional–integral–derivative; LQR: linear quadratic regulator.
Conclusion
In this article, a novel experimental mobile robot system based on the ROS-MATLAB framework has been introduced. The aim of the system is to validate the surface trajectory tracking controller designed using Lie algebra
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The work described in this article was partially supported by FDCT funding support project, Macau, China (File No. 0027/2018/ASC, 0041/2018/AIR, and 003/2019/AIR), Education innovation project, Beijing Normal University, Zhuhai, China (Project Code: 202009), and Beijing Normal University at Zhuhai, Undergraduate Training Program for Innovation and Entrepreneurship (Project Codes: S202319027022 and X202319027159).
