Abstract
This paper presents the design, implementation and validation of real-time visual servoing tracking control for a ball and plate system. The position of the ball is measured with a machine vision system. The image processing algorithms of the machine vision system are pipelined and implemented on a field programmable gate array (FPGA) device to meet real-time constraints. A detailed dynamic model of the system is derived for the simulation study. By neglecting the high-order coupling terms, the ball and plate system model is simplified into two decoupled ball and beam systems, and an approximate input-output feedback linearization approach is then used to design the controller for trajectory tracking. The designed control law is implemented on a digital signal processor (DSP). The validity of the performance of the developed control system is investigated through simulation and experimental studies. Experimental results show that the designed system functions well with reasonable agreement with simulations.
1. Introduction
A ball and plate system is a common experimental setup in control laboratories. It consists of a horizontal plate, which is tilted along each of two horizontal axes such that a ball can be controlled to roll to any position on the plate. It serves to visually demonstrate and reinforce the underlying principles of nonlinear dynamics and control theory. Moreover, because of its inherent nonlinearity, instability, and underactuation, this system is widely used as a test bed for verifying the performance and effectiveness of new control algorithms or technologies. Several aspects of mechatronic design, implementation, and control of a ball and plate system have been investigated in the literature [1-10]. In existing studies, the control design has usually been based on a simplified model of the ball and plate system, and derivation of a detailed dynamic model has received far less attention. Based on the model linearized with respect to the equilibrium, [1] designed a PID controller and a lead controller to stabilize the system. In [3, 7], a simplified nonlinear model was given, and sliding mode control was used to achieve trajectory tracking. Based on a linear simplified model, [10] designed a PID controller and a state observer feedback controller to achieve set-point regulation and trajectory tracking.
For a ball and plate system, sensors for sensing the position of the ball can be divided into two main types: the touch screen [1, 10] and the overhead camera [2-6, 8, 9]. Both these sensing devices suffer from certain shortcomings. The ball can easily lose contact with the touch screen plate during the motion, and this causes discontinuity in the position measurement. Since the sampling rate of a common camera (30 Hz) is limited, use of a common camera for sensing the position of the ball imposes real-time constraints on the system. Visual servoing [11] is a control framework that incorporates visual information in feedback control loops. Real-time image data acquisition and processing is a critical issue in visual servoing applications. A high frame rate and low processing latency are essential since the visual servoing system must make quick decisions based on the information extracted from a scene. Real-time image processing requires a high pixel processing rate, massive and parallel computation, and efficient hardware utilization. General-purpose processors cannot always provide enough computational power to fulfil real-time requirements due to their sequential nature. Due to their inherent architectural parallelism and configurable flexibility, field programmable gate arrays (FPGAs) have become increasingly popular implementation platforms for real-time image processing. In [12], an FPGA/DSP (digital signal processor) architecture was proposed for real-time image processing. This architecture was designed to deal with parallel/pipelined procedures to handle multiple input images. A flexible FPGA-based systolic architecture for real-time window-based image processing was proposed in [13]. Its computational core was based on a configurable two-dimensional systolic array of processing elements. In [14, 15], FPGAs with a pipelined architecture were used to implement optical flow estimation from image sequences in real time.
The idea behind feedback linearization [16-18] is to find a diffeomorphism and a state feedback control law that transform the nonlinear system into a linear time-invariant system. The design is then carried out on the linear system using linear control design techniques. Both full-state feedback linearization and partial-state feedback linearization can be considered. In full-state feedback linearization, also called input-state feedback linearization, the state equations are fully linearized. In partial-state feedback linearization, also called input-output feedback linearization, the input-output map of the system is linearized, while the map from the input to the state can be only partially linearized. However, the applicability of feedback linearization is limited to special classes of nonlinear systems satisfying the constraints of controllability, involutivity, and the existence of a relative degree or minimum phase property. One approach to relaxing one or more of these applicability constraints is to use the techniques of approximate feedback linearization. Approximate feedback linearization was first proposed in [19], where necessary and sufficient conditions for approximate full-state feedback linearization of nonlinear systems are given. This method is intended to find a nonlinear coordinate change and state feedback such that a linear approximation is obtained for an equilibrium point accurate to second or higher order. In [20], it was shown that one can approximately input-output linearize a nonlinear system by neglecting the nonlinearities responsible for the failure of the involutivity conditions. An approximate tracking controller was then designed for a ball and beam system.
This paper presents the design, implementation, and validation of tracking control of a ball and plate system using visual feedback. The position of the ball is measured with a machine vision system that uses an image sensor with a frame rate of 150 frames per second to guarantee a certain level of real-time performance. The pipelined image processing of the machine vision system is implemented on an FPGA device to meet real-time constraints. The detailed dynamic model of the ball and plate system is derived. This detailed dynamic model is adopted to build a reliable and accurate simulation model for the purpose of validating the closed-loop performance of the control system, once a controller design is available. For control design purposes, the detailed dynamic model is far too complex. By neglecting the high-order coupling terms, the ball and plate system is simplified into two decoupled ball and beam systems, and an approximate input-output feedback linearization approach [20] is then used to design a controller for trajectory tracking. A DSP is used for the implementation of the control algorithm and complex arithmetic operations. The effectiveness of the controller and machine vision system is validated through simulation and experimental results. Some studies [8, 9] have reported the control of a vision-based ball and plate system via an FPGA implementation. The processing frame rate of the machine vision system developed in this paper (150 frames/sec) is much higher than that of [8] and [9] (16 frames/sec). Unlike the present work, the experimental results given in [8] and [9] are the results of set-point regulation, which is less challenging.
The rest of this paper is organized as follows. In Section 2, a brief description of the experimental setup is given. Section 3 describes the relationship between the world coordinates and the image coordinates. The image processing algorithms for determining the position of the ball from the image data are introduced, and the implementation and architecture of the image processing algorithms on an FPGA device are presented. In Section 4, the dynamic model of a ball and plate system is derived. In Section 5, a review of approximate input-output feedback linearization is presented. Based on approximate input-output feedback linearization, the design of a controller for trajectory tracking is given. In Section 6, the simulation and experimental results are given. Finally, Section 7 contains some concluding remarks.
2. Experimental Apparatus
The experimental setup of a ball and plate system is shown in Figure 1, and the schematic overview of the experimental setup is shown in Figure 2. The system consists of two dc motors with gear heads. Each motor shaft is fixed to an L-shaped linkage that is attached to the plate with a universal joint. This L-shaped linkage consists of two linkages connected with a bearing joint. The plate is pivoted at its centre with a universal joint. When the L-shaped linkage is driven by the motor to move up and down, the plate inclines and changes its rotational angles; thus the ball can roll on the plate. This actuation mechanism is similar to the one designed and implemented in [1]. An optical encoder with a resolution of 1250 pulses/rev is attached to each motor shaft to measure the angular position of the motor. The position of the ball is measured with an overhead machine vision system, which consists of a CMOS camera sensor, static random access memory (SRAM), and an FPGA board. The camera is a Micron CMOS image sensor, MT9M011, which can capture 352×288 pixels quantized to 1024 RGB-colour-levels at 160 frames per second. In this study, the image sensor is set to operate in the grey-level mode at 150 frames per second. The SRAM (13M bits) acts as a frame buffer, which is used for storing an image during camera calibration. An FPGA board is used to implement the image processing algorithms written in very-high-speed integrated circuit hardware description language (VHDL). The FPGA board is an Altera Cyclone EP1C12Q240C8 FPGA. The controller is implemented on a DSP board, which is the Texas Instruments TMS320F240 digital signal processor (150-MHz/32-bit) equipped with two 12-bit D/A converters and two quadrature encoder pulse (QEP) units. The QEP unit provides the angular position and rotational direction of the motor from the quadrature encoder signals. 
The velocity of the ball and the angular velocity of the motors are estimated from the displacement travelled per unit time, and then the estimated velocities are passed through digital low-pass filters to attenuate the high-frequency noise. Voltage signals are generated according to the designed control law and supplied to the power amplifiers that drive the dc motors. The power amplifiers are operational amplifiers that can operate at up to ±20 V and 2.75 A.
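The finite-difference estimate followed by low-pass filtering described above can be sketched as follows. The sampling period matches the 150 Hz vision loop, but the filter coefficient is an illustrative assumption, not a value from the paper.

```python
# Sketch of the velocity estimation: finite differences followed by a
# first-order low-pass filter. The smoothing factor alpha is assumed.

Ts = 1.0 / 150.0      # sampling period (150 Hz vision loop)
alpha = 0.7           # low-pass smoothing factor (assumed, not from the paper)

def make_velocity_estimator(Ts, alpha):
    """Return a function mapping successive position samples to a
    low-pass-filtered velocity estimate."""
    state = {"prev_pos": None, "vel_filt": 0.0}

    def update(pos):
        if state["prev_pos"] is None:
            state["prev_pos"] = pos
            return 0.0
        raw_vel = (pos - state["prev_pos"]) / Ts   # displacement per unit time
        state["prev_pos"] = pos
        # first-order IIR low-pass to attenuate high-frequency noise
        state["vel_filt"] = alpha * state["vel_filt"] + (1.0 - alpha) * raw_vel
        return state["vel_filt"]

    return update

est = make_velocity_estimator(Ts, alpha)
# feed a ramp position signal whose true velocity is 0.3 m/sec
for k in range(200):
    v = est(0.3 * k * Ts)
```

The filtered estimate converges to the true velocity after the filter transient dies out.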

Experimental setup of the ball and plate system.

Schematic diagram of the experimental setup.
3. Visual Measurement and Image Processing Algorithm
A machine vision system is used to determine the displacement of the ball on the plate. A black ball and white plate are used to simplify visual sensing and to increase the accuracy of visual measurements. Determination of the position of the ball is based on the perspective pinhole camera model [21-22], which is given by
where p = [xim, yim, 1]T represents the two-dimensional homogeneous coordinates of the image point in the image coordinate system; PO = [X, Y, Z, 1]T represents the three-dimensional homogeneous coordinates of the object point in the world coordinate system, and λ is a scalar factor. The 3×3 matrix RE and three-dimensional vector t are extrinsic camera parameters that describe the rotation and translation between the world frame and the camera frame, respectively. KI is the intrinsic camera parameter matrix given by
Here, (ox, oy) is the principal point on the image coordinate system in pixels; fsx is the size of the unit length in horizontal pixels; fsy is the size of the unit length in vertical pixels, and fsθ is the skew of the pixels. The world frame oXYZ is chosen with the Z-axis pointing vertically upwards and the origin o located at the centre of the plate. The camera is mounted right above the plate with its optical axis passing through the origin of the world frame. The camera frame is denoted by ocxcmycmzcm. The xcm-axis of the camera frame is chosen to be parallel and aligned to the X-axis of the world frame. Thus, the zcm-axis points vertically downwards. The extrinsic camera parameters are then given by:
where dc = 0.91 (m) is the distance from the origin of the world frame to the origin of the camera frame. The intrinsic camera parameters are obtained using the camera calibration procedure proposed in [21]. In this study, the parameter λ = dc – R is obtained, where R is the radius of the ball. The intrinsic camera parameters are given in Table 1.
Intrinsic camera parameters
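The projection in (1) can be sketched numerically as follows. The intrinsic values are placeholders rather than the calibrated parameters of Table 1, and the extrinsic matrices take a form consistent with the mounting described above (an assumption, since the original equation is not reproduced here).

```python
import numpy as np

# Minimal sketch of the perspective projection p ~ K_I [R_E | t] P_O.
# Intrinsics below are placeholders, NOT the calibrated values of Table 1.
fsx, fsy, fstheta = 900.0, 900.0, 0.0      # assumed focal lengths (pixels) and skew
ox, oy = 176.0, 144.0                      # assumed principal point (image centre)
K_I = np.array([[fsx, fstheta, ox],
                [0.0, fsy,     oy],
                [0.0, 0.0,     1.0]])

# Camera above the plate, optical axis through the world origin:
# x_cm aligned with X, the optical axis looking down along -Z (assumed form).
R_E = np.array([[1.0,  0.0,  0.0],
                [0.0, -1.0,  0.0],
                [0.0,  0.0, -1.0]])
d_c = 0.91                                 # camera height above the plate (m)
t = np.array([0.0, 0.0, d_c])

def project(P):
    """Project a world point P = (X, Y, Z) to pixel coordinates."""
    p_cam = R_E @ P + t
    p_hom = K_I @ p_cam
    return p_hom[:2] / p_hom[2]            # divide out the scale factor lambda

# The world origin (plate centre) maps to the principal point.
```

With this geometry, a point on the plate at depth dc projects with scale λ = dc, and a point at the ball's centre height projects with λ = dc − R, matching the parameter stated above.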
To determine the position of the ball on the image plane, the captured images are processed using a series of image processing algorithms. A sample of a raw image captured by the camera is shown in Figure 3.

Image captured by the camera.
The image processing algorithm steps [23] are described below:
Step 1. Background subtraction
Background subtraction is an approach to differentiate foreground objects from their background. This step is used here to remove brightness fluctuations due to uneven background illumination. Background subtraction was applied to the image in Figure 3 and the resulting image is shown in Figure 4.

Background subtraction applied to the image in Fig. 3.
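Step 1 can be sketched with a toy pair of frames; the 6×6 grey-level arrays below are fabricated for illustration only.

```python
import numpy as np

# Toy sketch of Step 1: subtract a stored background frame from the current
# frame so that only regions differing from the background remain.
background = np.full((6, 6), 40, dtype=np.int16)   # stored background frame
frame = background.copy()
frame[2:4, 2:4] = 220                              # object region differs from background

# absolute difference, clipped back to the valid grey-level range
diff = np.clip(np.abs(frame - background), 0, 255).astype(np.uint8)
```

Pixels matching the background go to zero, leaving only the object region, which also removes slow brightness fluctuations captured in the stored background.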
Step 2. Thresholding
Thresholding is a simple and efficient method applied to grey-level intensity images to differentiate between the object and the background. Moreover, thresholding converts a grey-scale image into a bi-level image which contains all of the essential information concerning the shape, position and number of objects. This process is described as follows:
where f(xim,yim) is the grey-scale level of the pixel at point (xim,yim) in the image coordinate system, and Tth is the threshold value. Applying thresholding with a threshold value of 100 to the image in Figure 4 produces the image in Figure 5. The threshold value is determined experimentally to obtain the best system performance.

Thresholding applied to the image in Fig. 4.
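Step 2 reduces to a single comparison per pixel; a sketch using the experimentally chosen threshold value of 100:

```python
import numpy as np

T_th = 100  # threshold value determined experimentally, as stated in the text

def threshold(image, t=T_th):
    """Convert a grey-scale image to a bi-level image:
    pixels above the threshold become foreground (255), the rest 0."""
    return np.where(image > t, 255, 0).astype(np.uint8)
```

The resulting bi-level image retains the shape, position, and number of objects while discarding all other intensity information.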
Step 3. Median filter
As seen in Figure 5, there is noise in the image, so further filtering is needed. Noise removal is performed by a median filter, which slides a 3×3 window along the image; the median intensity value of the pixels within the window becomes the output intensity of the centre pixel. Applying the median filtering process to the image in Figure 5 produces the image in Figure 6.

Median filtering process applied to the image in Fig. 5.
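Step 3 can be sketched as below. Border pixels are simply copied here for brevity; the hardware implementation may treat borders differently.

```python
import numpy as np

def median3x3(image):
    """3x3 median filter; border pixels are left unchanged for simplicity."""
    out = image.copy()
    h, w = image.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = image[r - 1:r + 2, c - 1:c + 2]
            out[r, c] = np.median(window)
    return out

# an isolated noise pixel in an otherwise black image is removed
noisy = np.zeros((5, 5), dtype=np.uint8)
noisy[2, 2] = 255
clean = median3x3(noisy)
```

An isolated bright pixel is an outlier in every 3×3 window it appears in, so the median suppresses it without blurring larger object regions.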
Step 4. Centroid determination
As shown in Figure 6, the white pixels correspond to the ball, while the plate is shown by the black pixels. Since the ball is symmetrical, the centroid of the image of the ball is taken as its location on the plate. The centroid (xc, yc) is given by
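The centroid computation (its formula is not reproduced in this extract) is simply the mean of the foreground pixel coordinates; a sketch:

```python
import numpy as np

def centroid(binary):
    """Return (x_c, y_c): the mean coordinates of the nonzero
    (foreground) pixels of a bi-level image, or None if it is empty."""
    ys, xs = np.nonzero(binary)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 3:6] = 255          # a 3x3 blob centred at (x, y) = (4, 3)
xc, yc = centroid(img)
```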
In order to process the image data in real time, the image processing algorithms are implemented on an FPGA using VHDL. The FPGA technology is chosen because it is able to provide massive amounts of reprogrammable hardware acceleration and parallel/pipelined processing ability. In this study, the image processing is divided into the modules as shown in Figure 7.

Pipelined FPGA architecture.
The functions of the modules are:
There is no iterative process in this design. Thus, this processing can be fully pipelined in hardware implementation to improve efficiency and provide a high data throughput rate. The design is implemented on an Altera Cyclone EP1C12Q240C8 FPGA. The FPGA configuration data is stored in an electrically-erasable programmable read-only memory (EEPROM) and can be re-programmed by the host computer. A programmable clock generator is used to generate three global clocks for the system clocks and the pixel clock. The I2C serial interface is set to operate at 62.5 kHz; the operating frequency for carrying out image processing is 100 MHz, and the pixel clock is also 100 MHz. In order for the DSP board to communicate with the FPGA, a 16-bit bus connected to the DSP board data bus is used for data exchange. The read/write commands are specified with different addresses. The proposed design can process 150 grey-level images per second at a resolution of 352 × 288 pixels.
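A quick arithmetic check shows that the stated frame rate is comfortably within reach of a fully pipelined design processing one pixel per clock:

```python
# Back-of-the-envelope throughput check for the pipelined design.
width, height, fps = 352, 288, 150
pixel_rate = width * height * fps          # required pixels per second (~15.2 M)
clock_hz = 100e6                           # pixel clock of the design

# one pixel per clock leaves ample headroom at 100 MHz
assert pixel_rate < clock_hz
```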
4. System Modelling
In this section, we derive the mathematical model of the ball and plate system by using the Euler-Lagrange method [24]. Figure 8 illustrates the basic features of the system. Assuming that the ball rolls on the plate without slipping, the ball is always in contact with the plate; there is no rotational motion of the ball with respect to its vertical axis, and all friction forces and torques are neglected. There are two coordinate frames used, as shown in Figure 8.

Coordinate frames for the ball and plate system.
The space-fixed coordinate system (world frame) oXYZ has the Z-axis pointing vertically upwards and the origin located at the centre of the plate. Unit vectors along the oXYZ-coordinate system are denoted by Î, Ĵ, and K̂. The coordinate system fixed on the plate with the z-axis perpendicular to the plate and the origin located at the centre of the plate is denoted by oxyz. Unit vectors along the oxyz-coordinate system are denoted by î, ĵ, and k̂. The oxyz-coordinate system is obtained from the oXYZ-coordinate system by a rotation about the x-axis by an angle θx followed by a rotation about the y-axis by an angle θy. The Euler-Lagrangian equation is
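The rotation just described can be written as a composition of elementary rotations; a sketch follows, with the caveat that the composition order shown is an assumption about the exact form of the rotation matrix, which is not reproduced in this extract.

```python
import numpy as np

# Plate orientation: rotation about the x-axis by theta_x followed by a
# rotation about the y-axis by theta_y (composition order assumed).
def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def plate_rotation(theta_x, theta_y):
    return rot_y(theta_y) @ rot_x(theta_x)

R = plate_rotation(0.05, -0.03)
# any valid rotation matrix satisfies R^T R = I and det R = 1
```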
where L is the Lagrangian function, and Q and q are the generalized forces and generalized coordinates, respectively. The Lagrangian function L is defined as
where T is the kinetic energy, and V is the potential energy. For this system, q is selected as
Q is given by
where τx and τy are the torques exerted on the plate in the x-axis and y-axis, respectively.
If the inclination of the plate changes, the ball moves on the plate. The position of the ball relative to the plate-fixed coordinate can be mapped into the space-fixed coordinate. The transformation between these coordinates is written as follows:
where PRF is the rotation matrix and
The position vector of the ball in the oxyz-coordinate system is given by
where R is the radius of the ball. From (11), we have
By taking the derivative of (14) with respect to time, the velocity vector of the ball in the oXYZ-coordinate system is then given by
where
The angular velocity vector of the plate due to rotation is denoted by
The relation between the rotation matrix and angular velocity [24] is given as
From (20) and (12), it is given that
Let ω1î + ω2ĵ + ω3k̂ denote the angular velocity vector of the ball with respect to the oxyz-coordinate system. Under the assumption of rolling without slipping or twisting, we have the following conditions:
Let
The kinetic energy of the ball is given by
where m and Jb are the mass and moment of inertia of the ball, respectively. Let the inertia matrix of the plate be
where Jpx, Jpy, and Jpz are the moment of inertia about the x, y, and z-axes, respectively. The kinetic energy of the plate is then given by
Then, the total kinetic energy of the system is
From (14), the potential energy of the system is given by
where g is the gravitational acceleration. By using (8), (26) and (27), the Lagrangian function is given by
Then, from (7) the dynamic equations of the ball and plate system are given by
This detailed dynamic model is adopted to build a reliable and accurate simulation model for the later simulation studies. The dynamics given in (29)-(32) are highly coupled and nonlinear. They are far too complex for control design purposes, so model simplification is necessary. The ball and plate system is a two-dimensional extension of the ball and beam system. Assuming that the operating ranges of θx and θy are small, the high-order coupling terms are, therefore, small and negligible. The ball and plate system can be treated as two decoupled ball and beam systems as follows:
From (33)-(36), we know that the dynamic equations in the x and y-axes are identical. Also, the actuation mechanisms driving the tilts of the plate in the x and y-axes are identical. In the following, only the ball and beam system and the actuation mechanism in the x-axis will be discussed. The inclination of the plate is driven by a four-bar linkage as shown in Figure 9, where τ′x is the torque generated by the motor, and τx is the torque exerted on the plate. Based on the force analysis, we have

Connections of the linkages and plate.
We assume that the angles θx and ϕ are small enough that l sin θx = d sin ϕ can be simplified as
Since l = d, we have
Note that θx is not directly measurable. However, for feedback control, θx can be approximated by the measurement of the angular displacement of the motor shaft. Moreover, (37) can be rewritten as
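The small-angle simplification of the linkage constraint can be sanity-checked numerically. With l = d the relation reduces exactly to θx = ϕ; the sketch below uses assumed unequal lengths to show how small the approximation error of θx ≈ (d/l)ϕ is over the relevant range.

```python
import math

# Check the small-angle form theta_x ~ (d/l)*phi against the exact solution
# of l*sin(theta_x) = d*sin(phi). Lengths are illustrative, not from the paper.
l, d = 0.12, 0.10
for phi in [0.02, 0.05, 0.10]:                     # small angles (rad)
    theta_exact = math.asin((d / l) * math.sin(phi))
    theta_approx = (d / l) * phi
    # relative error of the small-angle form stays below 0.5% here
    assert abs(theta_exact - theta_approx) / theta_exact < 0.005
```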
A voltage signal, which is generated according to a control law as designed below, is supplied to an amplifier, which drives the permanent-magnet dc motor to control the L-shaped linkage. Since the electrical time constant of a dc motor is usually much smaller than the mechanical time constant, and the value of the viscous friction coefficient is negligible, the following reduced-order model [24] of the dc motor is adopted:
where vx is the control voltage, Kmx is the motor constant, is the armature resistance, and n is the gear ratio. In this study, the value of n is chosen as 8.
The state vector is defined as
From (34), (35), (39), and (40), the state space equation of the system on the basis of (41) can be written as:
where ux is the new control input, which is given by
and
The state space equation (42) is used for later control design. The physical parameters of the system are listed in Table 2.
Physical Parameters
Based on the perspective projection (1), the machine vision system measures the position of the ball relative to the space-fixed coordinate oXYZ. To determine the actual position of the ball relative to the plate-fixed coordinate oxyz, we consider the coordinates of the centre of the ball measured by the machine vision system and the actual coordinates relative to the plate-fixed coordinate as shown in Figure 10.

Coordinates of the centre of the ball measured by the machine vision system and the actual coordinates relative to the plate-fixed coordinate.
Here, the point
The coordinates of the point
Based on the perspective projection, the points A, B, C must be collinear. Thus, we have
From (46), the centre of the ball on the x-y plane of the plate-fixed coordinate is given by
and
5. Tracking Control Design Based on Approximate Input-Output Feedback Linearization
Thus far, we have discussed how the ball and plate system is simplified into two decoupled ball and beam systems. In this section, for each decoupled ball and beam system we use an approximate input-output feedback linearization technique to design a tracking control law.
Consider a single-input single-output system of the form
where x ∈ Rn, u and yo ∈ R, f and g are smooth vector fields on Rn, and h: Rn → R is a smooth scalar function. We assume that x = 0 is an equilibrium point of the system. The derivative of h(x) along the vector field f(x) is expressed by the Lie derivative
The system of (49) and (50) is said to have the relative degree γ, which is well-defined in an open neighbourhood U of the origin if for
In [20], it was shown that the relative degree of the ball and beam system is not well defined at certain locations, and this system violates the involutivity conditions of feedback linearization. Therefore, neither exact input-output feedback linearization nor full-state feedback linearization is applicable to this particular system. Alternatively, the approximate input-output feedback linearization proposed in [20] provides a method to find a set of coordinate transforms ξi = ϕi(x), i = 1,…,γ, that approximate the output and its subsequent derivatives. The idea of approximate input-output feedback linearization is to construct the coordinate transform in the usual fashion as in the exact input-output feedback linearization by neglecting the second or higher order terms in the approximate output and its subsequent derivatives. We consider the ball and beam system of (42) with the output yo=h(x) = x1. To proceed with approximate input-output feedback linearization, a smooth function ϕ1(x) is chosen to approximate the output
Then, we obtain
The term of 2Ex1x4ux in
we obtain a feedback linearizable system given by
where
approximately linearizes the ball and beam system of (42) with the output yo = h(x) = x1 from v to yo up to the terms of O(x,ux)2. Here, a scalar function δ(x) is said to be O(x)n if
where Ki are chosen so that s4+K3s3+K2s2 + K1s+K0 is a Hurwitz polynomial.
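The Hurwitz requirement on s4 + K3s3 + K2s2 + K1s + K0 can be verified with the Routh-Hurwitz conditions for a quartic; the sketch below applies them to the gains later chosen in Section 6.

```python
def quartic_is_hurwitz(K3, K2, K1, K0):
    """Routh-Hurwitz conditions for s^4 + K3 s^3 + K2 s^2 + K1 s + K0:
    all coefficients positive, K3*K2 > K1, and
    (K3*K2 - K1)*K1 > K3^2 * K0."""
    if min(K3, K2, K1, K0) <= 0:
        return False
    if K3 * K2 - K1 <= 0:
        return False
    return (K3 * K2 - K1) * K1 - K3 ** 2 * K0 > 0

# gains chosen for the tracking controller in Section 6
K3, K2, K1, K0 = 8.7788, 145.6124, 799.0362, 1998.3245
assert quartic_is_hurwitz(K3, K2, K1, K0)
```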
To analyse the tracking error and stability of the closed-loop system, we define the state vector and control input in the x and y-axes thus:
Then, the ball and plate system can be written as two decoupled ball and beam systems with higher order coupling terms as follows:
where Δf(xp,u) are the higher order coupling terms. yox and yoy are the outputs in the x and y-axes, respectively. Using the coordinate transforms (54) and feedback control law (56), the approximate input-output feedback linearization for the system (59) is given by
where ψ(xp,u) are the summation of the higher order coupling terms Δf(xp,u) and the approximation errors. We define the tracking error vector e as
With the stable tracking control law (57), the closed-loop system can be then written as
where
for xp ∈ U ⊂ R8 and ‖yd‖ ≤ d. Since A is a Hurwitz matrix, there exists a positive definite matrix P such that
Consider the Lyapunov candidate function for the error system (62):
The derivative of
Then, for all
Therefore, V̇ < 0 whenever ‖e‖ is large, which implies that ‖e‖ and, hence, the states of the closed-loop system are bounded. Moreover, for a sufficiently small reference command ‖yd‖ and appropriate initial conditions, we can conclude that the tracking error is of order O(ε).
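The Lyapunov-equation step can be reproduced numerically. The sketch below assumes the per-axis error dynamics take companion form with the gains chosen in Section 6 (an assumption, since the matrices are not reproduced in this extract), solves ATP + PA = −Q for Q = I, and confirms that P is positive definite.

```python
import numpy as np

# Companion-form A for the per-axis error dynamics (assumed form),
# using the gains chosen in Section 6.
K3, K2, K1, K0 = 8.7788, 145.6124, 799.0362, 1998.3245
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-K0, -K1, -K2, -K3]])

# Vectorize the Lyapunov equation A^T P + P A = -Q:
# (I (x) A^T + A^T (x) I) vec(P) = -vec(Q), with Q = I.
n = 4
M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(M, -np.eye(n).flatten()).reshape(n, n)
P = 0.5 * (P + P.T)                        # enforce symmetry numerically

# since A is Hurwitz, the solution P must be positive definite
assert np.all(np.linalg.eigvalsh(P) > 0)
```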
6. Simulation and Experimental Results
To measure the performance of the designed control law, the control system was simulated in MATLAB/Simulink using the parameters in Table 2. The control gains of (57) were chosen to be [K3, K2, K1, K0] = [8.7788, 145.6124, 799.0362, 1998.3245]. The simulation of circular trajectory tracking was conducted. The ball was commanded to follow a circular trajectory of radius 0.08 m, centred at the origin, with a constant angular speed of 0.75 rad/sec. The corresponding trajectory references of each axis are ydx = 0.08sin(0.75t) and ydy = 0.08cos(0.75t). The initial conditions were set to x = 0 m, y = 0 m, θx = 0 rad, θy = 0 rad, ẋ = 0 m/sec, and ẏ = 0 m/sec.

Comparison of simulation results (dash-dot line) of the position responses of the ball and tracking command (solid line) in the x-axis.

Comparison of simulation results (dash-dot line) of the position responses of the ball and tracking command (solid line) in the y-axis.

Simulation result of the angular position response of the inclining plate in the x-axis.

Simulation result of the angular position response of the inclining plate in the y-axis.

Simulation result of the control voltage vx.

Simulation result of the control voltage vy.

Simulation results of the position error of the x-axis (dash-dot line) and position error of the y-axis (solid line).

Comparison of the simulation result (dash-dot line) of the trajectory of the ball and commanded trajectory (solid line).
Figure 11(c) shows that the control voltages did not saturate in the simulation for the chosen initial conditions and trajectory references.
For further validation, the designed control law and image processing algorithms were implemented and tested on the experimental setup, as shown in Figure 1. The image processing algorithms were implemented on the FPGA board in VHDL. The designed controller was implemented on the DSP system in C. The camera in the proposed system operated at 150 frames/sec. Thus, the sampling frequency of the system was set to 150 Hz.
The trajectory tracking experiment was conducted with the same initial conditions and controller parameters as in the simulation. The experimental results of the position responses of the ball, the angular position responses of the inclining plate, the control voltages, position errors, and the trajectory of the ball are shown in Figure 12(a), (b), (c), (d), and (e), respectively. As shown in Figure 12(a) and (e), the ball converged towards the desired trajectory in 1.9 seconds with a starting point at the origin. In Figure 12(d) it can be observed that the static position error in the x-axis is within a range from −0.0094 m to 0.0049 m, while the static position error in the y-axis is within a range from −0.0080 m to 0.0091 m. It also shows a good agreement between simulation results and experimental results. The mismatch and amplitude oscillations between simulation and experimental results are possibly caused by approximation errors in control design, disturbances, and unmodelled hardware effects such as friction, ball slipping, plate unevenness, illumination variation, low resolution of the camera (352×288 pixels), etc. As shown in the analysis of tracking error and stability, one can choose a small reference command and initial conditions with a small initial tracking error to improve tracking performance.

Comparison of experimental result (dash-dot line) of the position response of the ball and tracking commands (solid line) in the x-axis.

Comparison of experimental result (dash-dot line) of the position response of the ball and tracking commands (solid line) in the y-axis.

Experimental result of the angular position response of the inclining plate in the x-axis.

Experimental result of the angular position response of the inclining plate in the y-axis.

Experimental result of the control voltage vx.

Experimental result of the control voltage vy.

Experimental results of the position error of the x-axis (dash-dot line) and position error of the y-axis (solid line).

Comparison of the experimental result (dotted line) of the trajectory of the ball and commanded trajectory (solid line).
For a camera with a low frame rate, visual servoing can result in performance degradation or even closed-loop instability. The obtained controller was discretized using Tustin's method, and the control system was then simulated in MATLAB/Simulink. From the simulation shown in Figure 13, we found that the closed-loop system becomes unstable for trajectory tracking when the sampling time is greater than 43 ms (23.26 Hz). The tested system cannot be stabilized if the frame rate is below 31 frames/sec. This is why the system requires a fast camera. A video clip demonstrating the performance of the developed system is available at http://lab.es.ncku.edu.tw/csplab/ball&plate-wmv.htm. The video shows that the system performs well even with different initial conditions and disturbances.
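Tustin's (bilinear) discretization substitutes s = (2/T)(z − 1)/(z + 1) into the controller transfer function. The paper's controller is not reproduced here, so the sketch below applies the transform to a generic first-order low-pass H(s) = a/(s + a); a = 10 rad/sec and T = 1/150 sec are illustrative values, not parameters from the paper.

```python
def tustin_lowpass(a, T):
    """Discretize H(s) = a/(s + a) with the substitution
    s = (2/T)(z - 1)/(z + 1). Returns (b0, b1, a1) for the
    difference equation y[k] = b0*u[k] + b1*u[k-1] - a1*y[k-1]."""
    k = 2.0 / T
    b0 = a / (k + a)
    b1 = b0
    a1 = (a - k) / (k + a)
    return b0, b1, a1

b0, b1, a1 = tustin_lowpass(10.0, 1.0 / 150.0)
# the bilinear transform preserves unit DC gain: (b0 + b1) / (1 + a1) = 1
```

As the sampling time T grows, the transform's frequency warping becomes severe and the discretized controller driving the continuous plant degrades, consistent with the instability observed above at low frame rates.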

The system becomes unstable when performing trajectory tracking with a sampling time of 43 ms.
7. Conclusions
Visual servoing tracking control for a ball and plate system was designed, implemented, and validated. The experimental setup, system modelling, and implementation of the machine vision system were described in detail. The main contribution of this paper is twofold. First, the complete system model was derived for system simulation and analysis, and approximate input-output feedback linearization was used to design the control law for trajectory tracking. Second, an FPGA implementation was used to carry out the image processing algorithms to meet real-time constraints.
The performance of the designed control system was investigated via simulated and experimental responses providing comparisons and the validation of controller design and system modelling. Both simulation and experimental results showed that the designed system had good performance. This experimental setup can serve as a test bed for nonlinear control schemes as well as supporting laboratory practice.
8. Acknowledgements
This work was supported by the Chieftek Precision Co., Ltd. and by the National Science Council, Taiwan, through grants NSC-100-2627-E-009-001 and NSC-101-2627-E-009-00.
