Diverse image-based tracking schemes for a robot moving in free motion have been proposed and experimentally validated. However, few visual servoing schemes have addressed the simultaneous tracking of desired trajectories and contact forces for multiple robot arms. The main difficulty stems from the fact that camera information cannot be used to drive force trajectories. Recognizing this fact, a unique error manifold that includes position-velocity and force errors in orthogonal complements is proposed. A synergistic scheme that fuses camera, encoder and force sensor signals into this unique error manifold allows us to propose a control system which guarantees exponential convergence of the tracking errors under parametric uncertainty. Additionally, a small neural network driven by a second-order sliding mode surface is derived to compensate for the robot dynamics. Residual errors that arise from the finite size of the neural network are compensated via an orthogonalized second-order sliding mode. The performance of the proposed scheme is validated through numerical simulations in two significant applications of multiple robot arms.
To extend the range of tasks that robot manipulators can execute in practical applications, the use of multiple robot arms working cooperatively in a shared workspace has been proposed. Improved working abilities, loading capacity and object manipulation are some advantages of multiple robot arms over a single robot. In order to resolve the complex dynamic interactions in these robotic systems, many studies have focused on designing robust and adaptive control schemes that minimize hardware and software requirements.
Although several approaches for cooperative schemes [9, 13, 15], fast control [5, 8, 14, 26], and load distribution and path planning [19] have been reported in the literature, a difficulty in synthesizing these approaches in practice is the inexact knowledge of almost all system parameters.
Additionally, to avoid the computational load and the ill-posed nature of the inverse kinematics in joint-space robot control, controllers in operational coordinates, or the use of visual information (so-called visual servoing), can be considered as an alternative solution.
Despite the fact that several pieces of research have addressed vision/force control [1, 4, 7, 27] for a single robot, few of them show robustness to uncertainties in the robot or camera parameters. In [27] a controller based on multisensor fusion is proposed, where the signals from the sensors are used to correct uncalibrated parameters; however, the force/position controller and the planner require exact knowledge of the robot manipulator. An adaptive algorithm is used in [21, 22] to estimate the visual Jacobian matrix in free motion. This approach is based on the assumption that the visual Jacobian matrix can be decomposed as a product of two matrices: the first matrix requires the kinematic parameters of the robot manipulator, while the second matrix is assumed unknown; that is, the rotation matrix from the robot base frame to the vision frame is unknown. On the other hand, a neurovisual servoing control of a planar robot manipulator is presented in [23], where it is assumed that the link lengths of the robot manipulator are not exactly known. The neural network is used to avoid drift in the parameter estimates and possible overshoot in the estimate of the gravitational vector; that is, it is not necessary to know the gravity vector, and boundedness of the Jacobian matrix is preserved against perturbations. At the same time, there is a dearth of research that extends or combines these visual approaches to multiple robot arms under parametric uncertainty.
This paper aims to design a neuro-visual servoing controller that analytically guarantees image-based tracking for constrained multiple robot arms subject to parametric uncertainty; that is, the parameters of the i-th robot and of the camera are unknown. The neuro-visual servoing control is based on second-order sliding surfaces for the visual position and contact force trajectories, respectively. To approximate the i-th robot dynamics and the parameter variations, a small neural network driven by the sliding mode is used. The residual error that arises from this bounded approximation is eliminated with the sliding mode within a unique error manifold. In contrast with other approaches, exponential convergence of the image-based position and contact force errors is guaranteed with a smooth, chattering-free control. Finally, the parametric uncertainty in the i-th robot manipulator and the camera is parameterized as the regressor times a parameter vector; to introduce parametric uncertainty, this vector is multiplied by a factor with respect to its nominal value.
The viability of this approach is demonstrated through two typical applications of multiple robot manipulators in a horizontal plane. In the first simulation, a pair of robot manipulators grasps an object firmly, applying the desired force at the contact points, while the end-effector of the i-th robot manipulator follows a desired visual trajectory. In the second simulation a more complex task is presented, one that involves changing dynamics and an impact regime in the manipulation of an object by two robotic fingers. Strictly speaking, transition tasks involve free-motion, impact and constrained-motion regimes. Impulsive and unilateral constraints arise in the impact regime, which makes it very difficult to design a control system. The simplest approach would be to avoid impact and to commute consistently between a free-motion and a constrained controller. Thus, given that algebraic constraints arise in the constrained regime, a virtual constraint will be proposed to model a DAE1 system for all regimes. The control scheme proposed therefore avoids the impact regime through a zero transition velocity from free to constrained motion, which means that impulsive dynamics does not appear. Differently from [18, 24, 25], this approach achieves fast constrained object manoeuvring for non-redundant multiple robot arms. Furthermore, decentralized cooperative tracking is obtained without using the model of the robot manipulator or exact knowledge of the inverse Jacobian.
Constrained dynamics of multiple robot arms
Dynamical Equations. Consider a robotic system composed of l rigid robot manipulators that are holding an object. The differential algebraic dynamic model of the i-th manipulator, i ∊ {1, …, l}, constrained by the object's geometry can be expressed by
Hi(qi)q̈i + Ci(qi, q̇i)q̇i + gi(qi) = τi + JφiT(qi)λi,   (1)
where qi, q̇i, q̈i ∊ ℜni are the angular positions, velocities and accelerations of the revolute joints of the i-th manipulator, which has ni degrees of freedom, Hi(qi) ∊ ℜni×ni is a positive-definite inertia matrix, Ci(qi, q̇i) is a Coriolis matrix which models centripetal and centrifugal forces, gi(qi) is a vector of gravitational torques, and τi is a vector of input-controlled joint torques. If the end-effector of the i-th robot manipulator is in touch with the frictionless and infinitely rigid surface of the object, which is modelled by the geometric function φi(qi) = 0 ∊ ℜmi, with mi > 0 representing the number of independent contact points of the i-th manipulator, then the contact force term JφiT(qi)λi appears in equation (1), where Jφi(qi) = ∂φi(qi)/∂qi is the constraint Jacobian, normalized so as to be an orthonormal matrix, and λi ∊ ℜmi is a Lagrange multiplier that represents the magnitude of the contact force.
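For concreteness, the following minimal sketch (in Python, used for all illustrative snippets in this article) evaluates the constrained model of equation (1) for a two-link planar finger moving in the horizontal plane, the case treated later in the simulations. The point-mass link model and all numerical values are hypothetical placeholders, not the parameters of the paper.

```python
import numpy as np

def planar_two_link_dynamics(q, qd, m=(0.3, 0.2), L=(0.2, 0.2)):
    """Inertia H(q), Coriolis C(q, qd) and gravity g(q) for a two-link
    planar arm with point-mass links; on a horizontal plane g(q) = 0."""
    m1, m2 = m
    l1, l2 = L
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    H = np.array([[m1*l1**2 + m2*(l1**2 + 2*l1*l2*c2 + l2**2),
                   m2*(l1*l2*c2 + l2**2)],
                  [m2*(l1*l2*c2 + l2**2), m2*l2**2]])
    h = m2*l1*l2*s2
    C = np.array([[-h*qd[1], -h*(qd[0] + qd[1])],
                  [ h*qd[0], 0.0]])
    g = np.zeros(2)  # horizontal plane: no gravity torques
    return H, C, g

def constrained_accel(q, qd, tau, j_phi, lam):
    """Joint accelerations from equation (1) with a single contact
    (m_i = 1): H qdd + C qd + g = tau + j_phi^T * lambda, where
    j_phi is the gradient of the constraint function phi(q)."""
    H, C, g = planar_two_link_dynamics(q, qd)
    return np.linalg.solve(H, tau + lam*j_phi - C @ qd - g)
```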
Manipulation Space. When the robotic system manipulates an object, the position of the i-th end effector is geometrically constrained on a pi = ni – mi dimensional manifold Ori, where
From the standpoint of path planning, equation (2) stands for the trajectory to be followed, which runs on the surface of the object. Thus, the constraint imposed by equation (2) should be satisfied at all times, which means that the manipulator does not invade the object's space.
Even though the formalism presented throughout the paper can be extended to any number of rigid manipulators, this work will focus on a robotic system with two manipulators (l = 2), each of them having a single contact point (mi = 1, for i ∊ {1, 2}). This is a planar robotic system; thus the manipulators move in a horizontal plane.
Open Loop Error Equation. Equation (1) has the property that it can be linearly parametrized [10] in terms of a nominal reference as follows
where the regressor
is composed of known non-linear functions, and Θi ∊ ℜpi represents a vector of unknown but constant parameters. Adding and subtracting equation (3) to equation (1), the following open loop error equation arises
where the joint error manifold is defined by
Constrained Positions and Velocities. Once the manipulators jointly grasp an object and hold it securely, they keep pulling and pushing. Thus, in order to work in a cooperative manner and to rigidly manipulate an object, we need to model these effects. This is achieved by using constrained position and constrained velocity variables [14]. Considering the total derivative of the holonomic constraints imposed by equation (2), the constrained position and constrained velocity are defined as
respectively. Consequently, to propose an appropriate error coordinate system, equation (6) should be included in the error manifold.
In order to design an image-based decentralized controller without resorting to the dynamic model and gi(qi), we need to define a joint error manifold that uses the visual position from the CCD camera sensor. Firstly, let us establish a mapping between joint velocities and end-effector visual velocities.
Visual Kinematics. Let the direct and differential kinematics for the i-th robot manipulator be defined by
where fi:ℜn → ℜn is the generally non-linear forward kinematics, represents the end-effector position of the i-th robot manipulator in the operational workspace, and Ji(qi) its Jacobian matrix. Given that a planar robot is used in this work, the operational space has two dimensions and n = 2. In this paper we use a widely accepted fixed (static) camera configuration [6], which has a thin lens without aberration, whose mathematical description consists of a composition of four transformations that take into account: joint to Cartesian transformation, Cartesian to camera transformation, camera to CCD transformation, and CCD to image transformation. Therefore, the 2D visual position of the CCD camera sensor, which maps the i-th end-effector position into the image space (screen), is given by
with
where λf is the focal distance along the optical axis, z stands for the depth of field, defines the distance between the optical axis and the i-th robot base, and defines the translation of the camera centre to the image centre. Taking the time derivative of equation (8), the differential camera model is
Notice that the constant transformation αR maps the static robot Cartesian velocities into visual Cartesian velocities, and that the visual flow does not contain any independent input. Now, substituting equation (7) into equation (10), the relation between image velocities with joint velocities is defined as
where determines the visual velocity of the i-th robot end-effector, and αRJi(qi) maps the i-th joint velocities to visual velocities.
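The composition of transformations just described can be made concrete with a short sketch. The perspective scale λf/(λf − z) and the exact ordering of the offsets are assumptions patterned on the fixed-camera models surveyed in [6]; the numerical defaults (taken from the simulation section) are illustrative only.

```python
import numpy as np

def camera_projection(xb, theta=np.pi, alpha=77772.0, lambda_f=0.008,
                      z=1.5, O_c=np.zeros(2), o_I=np.zeros(2)):
    """Static thin-lens camera: maps a planar end-effector position
    xb [m] to image coordinates [pixels] as a composition of
    Cartesian-to-camera (rotation R, offset O_c), camera-to-CCD
    (scale alpha * lambda_f / (lambda_f - z)) and CCD-to-image
    (offset o_I) transformations."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])   # rotation about the optical axis
    aR = alpha * lambda_f / (lambda_f - z) * R
    return aR @ (xb - O_c) + o_I

# Since aR is constant, differentiating gives the linear velocity map
# xs_dot = aR @ xb_dot of equation (10).
```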
Taking into account equation (11), the inverse differential visual kinematics expression for the i-th robot manipulator is obtained as
where , with a non-degenerate matrix. It is important to notice that the entries of are functions of the robot manipulator and camera parameters. That is, equation (12) states that the joint velocities of the i-th robot manipulator are expressed as a function of the visual velocities.
Global Joint Space Decomposition. Consider a partition of the joint space coordinate qi as follows
where and . Due to the kinematic constraint there are mi dependent coordinates, which define qi1. According to the implicit function theorem [16], a relationship between qi1 and qi2 always exists, i.e., qi1 = Ω(qi2), at least locally. Taking the derivative of φi(qi) = 0, with its corresponding partition given by equation (13), we obtain
with and . Taking into account the inverse differential visual kinematics defined in equation (12), it is possible to determine an mi × mi nonsingular matrix for qi1 in visual space. Then, the velocity of the joints of the i-th robot can be expressed as
where always exists on Ω because rank[Jφ1(qi)] = mi, i.e., there are mi independent contact points, and the inverse is well posed. That is, equation (17) is well posed because exists in the finite workspace imposed by the holonomic constraint defined in equation (2). Taking into account the partitions expressed by equations (13) and (17), the velocity of the generalized coordinates can be written as
where has full rank (ni – mi), and in general will have the following form
Since Jφi(qi)Qi = 0, the image of Qi lies in the null space of Jφi(qi), i.e., the ni-dimensional state space is decomposed into two orthogonal subspaces, so that it can be written as the direct sum ℜni = range{JφiT(qi)} ⊕ range{Qi}.
Then, independent joint generalized coordinates are formed by qi2 and its derivative.
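As an illustration of this decomposition, the sketch below builds an orthogonal complement numerically. The paper constructs Qi explicitly from the partition of equations (13) and (17); the generic SVD construction used here is an assumption that merely yields a basis with the same defining property Jφi(qi)Qi = 0.

```python
import numpy as np

def orthogonal_decomposition(J_phi):
    """Given the constraint Jacobian J_phi (m_i x n_i), return
    (J_pinv, Q): the Moore-Penrose pseudoinverse, spanning
    range{J_phi^T}, and a basis Q of the null space of J_phi."""
    J_pinv = np.linalg.pinv(J_phi)              # n_i x m_i
    _, _, Vt = np.linalg.svd(J_phi)             # null space from the SVD
    r = np.linalg.matrix_rank(J_phi)
    Q = Vt[r:].T                                # n_i x (n_i - m_i)
    return J_pinv, Q

# Example: one contact point (m_i = 1) for a 2-dof finger.
J_phi = np.array([[0.3, 1.0]])
J_pinv, Q = orthogonal_decomposition(J_phi)
qd = np.array([0.5, -0.2])
qd_force = J_pinv @ (J_phi @ qd)   # component along range{J_phi^T}
qd_free  = Q @ (Q.T @ qd)          # component in the tangent subspace
assert np.allclose(qd, qd_force + qd_free)   # orthogonal split of qd
```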
Visual system
Visual Error Manifolds. Given that Qi and Jφi+(qi) are orthogonal complements, and the velocity lies in the tangent plane at the contact point, a natural decomposition arising at the contact point can be defined as
where
In order to assure the position and force tracking, the nominal visual position/force reference can be designed as
where stands for a visual nominal reference which parameterizes the system in terms of visual coordinates, E2i is a positive-definite diagonal matrix, is the desired constraint position (the subscript di denotes a desired reference), bi and are positive scalar gains, is the desired force, and sgn(y) is the discontinuous signum function applied entry-wise to the vector y.
Uncalibrated Joint Error Surface. Assuming that the parameters of the i-th manipulator are uncertain and that the parameters α, θ and b are not well known because the camera is not calibrated, the nominal reference defined previously in equation (21) cannot be used. Since we are interested in simultaneously controlling the visual position and force of the i-th manipulator, the uncalibrated nominal reference at the velocity level is
where stands for an estimate of so that the rank of is ni – mi, for all t and for all qi ∊ Ω. In addition, and are defined as in equation (21). Now, substituting (20) and (22) into equation (5), the uncertain visual error manifold becomes
where is called the extended orthogonalized force manifold. In order to guarantee the tracking of the visual errors, a second order sliding mode is proposed through the nominal visual reference given as
where are positive-definite diagonal matrices, and with . Given that the sliding-mode condition is satisfied for , and not for , the switching over the Cartesian manifold will be invariant to parametric uncertainties thanks to
Open Loop Error Dynamics. The parametrization of the i-th robot manipulator (see equation (3)) requires the derivative of equation (22), which is discontinuous. Given that we will use neural networks to handle the entire system, we simplify equation (3) in order to avoid discontinuous signals. One way of doing this is to decompose it into continuous and discontinuous terms as follows
with
with , and . The continuous function tanh(·) is used as an approximation of the discontinuous function sgn(·), such that tanh(0) = 0 and tanh(μy) → sgn(y) as μ → ∞. Notice that these terms have the following properties: . Finally, the parametrization of (3) using equations (22) and (25) gives rise to
where is continuous due to , and is considered an endogenous bounded disturbance subject to matching conditions. It is important to notice that cannot be compensated by the neural network, because it is discontinuous. However, it is bounded, because and are bounded. Adding and subtracting equation (26) into equation (1), the open loop error equation arises as
It remains now to design a controller ui that solves the problem posed above. Our controller will compensate with a low dimensional neural network, while the approximation error of the i-th neural network and , in , will be compensated with an orthogonalized second-order sliding-mode inner loop. At this stage, before designing ui, we discuss the design of the neural network architecture used as an associator for this approximation.
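Before turning to that architecture, the effect of the tanh smoothing introduced in equation (25) can be verified numerically: tanh(μy) matches sgn(y) away from the origin while remaining continuous through it. The slope gain μ below is a hypothetical value.

```python
import numpy as np

mu = 100.0  # hypothetical slope gain of the smooth approximation
for y in (-0.5, -0.01, 0.0, 0.01, 0.5):
    print(f"y = {y:+.2f}   sgn(y) = {np.sign(y):+.0f}   "
          f"tanh(mu*y) = {np.tanh(mu*y):+.4f}")
```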
Uncertain neuro-visual based controller
Neural Network Architecture
A tree network structure that satisfies the Stone-Weierstrass theorem [2] is used to approximate the continuous regressor; i.e., many neurons in one layer feed a single neuron in the next layer. The input-output relationship for this generic architecture is given as y = ϕ(Σk wk xk), where xk is the k-th input to the neural network, wk is the k-th weight, and ϕ the activation function. Notice that the tree structure could have one or more hidden layers, where a linear activation function is used in the last stage of a multilayer neural network. Considering the property that neural networks can approximate any smooth function f(x), where x belongs to a compact set S of ℜn [3], [11], we can use a sufficiently large neural network to approximate some function f(x) such that
with and is minimized.
Given that the regressor is continuous, it will be approximated by an ADAptive LINear Element (ADALINE) [20], which consists of a single neuron of the McCulloch-Pitts type. Using a matrix representation, the approximator will be designed such that
with and is minimized. Thus, the low dimensional neural network is used to approximate the unknown function by
where are the inputs to the neural network, are the estimated neural network weights for the i-th manipulator and is a functional reconstruction error with ∊N > 0.
Given that the size of the neural network is defined by taking into account only the regressor elements, its approximation capacity is severely limited. However, instead of increasing its number of neurons in order to reduce the residual error, the error will be eliminated by means of a training procedure based on an orthogonalized second order sliding mode.
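A minimal sketch of such an ADALINE is given below. The manifold-driven weight update shown is a generic gradient-type rule standing in for the paper's neuro-adaptive law (30); the gain gamma, the input vector x and the manifold s are placeholders for the corresponding quantities of the scheme.

```python
import numpy as np

class Adaline:
    """Single-layer linear approximator Y(x) = W @ x (McCulloch-Pitts
    neurons with linear activation), as in [20]."""
    def __init__(self, n_out, n_in, gamma=5.0):
        self.W = np.zeros((n_out, n_in))   # estimated weights W_hat
        self.gamma = gamma                 # learning-rate gain

    def output(self, x):
        return self.W @ x

    def train(self, x, s, dt):
        # Manifold-driven update W_hat_dot = -gamma * outer(s, x):
        # a generic rule standing in for the paper's law (30).
        self.W += -self.gamma * np.outer(s, x) * dt
```

In use, x would stack the regressor inputs (joint positions, velocities and reference signals) and s the joint error manifold; the residual ∊N left by this deliberately small approximator is precisely what the orthogonalized second-order sliding mode absorbs.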
Theorem 4.1. Assume that all initial conditions and desired trajectories of the i-th manipulator belong to Ω. Consider the dynamical model of the i-th manipulator in the closed-loop with the control law and neuro-adaptive law defined as
where and is the i-th desired contact force. Then, for small initial conditions and a large enough feedback gain matrix, there is exponential convergence of the visual position and force tracking errors despite the existence of parametric uncertainties in .
where is radially unbounded only when , and for bounded signals is zero only at . For small initial errors belonging to neighbourhoods ∊1 and ∊2 centred at the equilibria and , there exists a large enough feedback gain matrix such that each of the components of converges to ∊1 > 0 as t → ∞, and also converges to ∊2 > 0 as t → ∞. This result stands for local stability of , provided that the state remains near the desired trajectories for the given initial conditions. Boundedness in the L∞ sense leads to the existence of a constant ∊3 > 0 such that
Since the desired trajectories are C2 and the feedback gains are bounded, we have that , which implies that and . Then, the output of the neural network is also bounded. According to the previous arguments and the boundedness of the i-th robot dynamics, its Coriolis matrix and gravitational term, and given that the inertia matrix is positive-definite and upper bounded, the right-hand side of equation (32) is bounded, such that . Then, there exists a bounded scalar ∊4 > 0 such that
So far, we can conclude the boundedness of all closed-loop error signals. To prove convergence of tracking errors, the sliding modes condition must be verified in the force subspace and in the position subspace Qi.
Part II.a [Sliding modes for the velocity subspace]. Adding and subtracting to (23), we obtain
where is called the extended orthogonalized position manifold and for σ > 0, under the assumption that is not degenerate in Ω and is composed of trigonometric functions. Now, since is smooth and lies in the reachable robot space Ω, and and Qi are bounded, we have that and are bounded. Then, there exist bounded constants ∊5 > 0 and ∊6 > 0 such that
Multiplying (35) by and rearranging terms we obtain
and
where and is the local projection of onto . Given that there are ni – mi independent coordinates, and the mi dependent coordinates can be expressed as a function of the independent ones, the holonomic constraint is well posed. That is, once a solution (given by an initial condition) is chosen, this solution cannot jump to another solution of , since we have ruled out crossing singularities, so it always stays in the domain of this solution. This local domain then becomes, in fact, the global domain of the solution, so it is reasonable to assume a bijection in this domain between and . Hence, without loss of generality, we can analyse either or . Taking into account the definition of , (37) becomes
Multiplying the time derivative of equation (38) by , we obtain
where μ2 = γpi – ∊7 and . Therefore, we guarantee the existence of a sliding mode at if
at time . Then, by the definition of and given that
Then, a second order sliding mode is enforced, for all time, in the subspace of position.
Part II.b [Sliding modes for the force subspace] If we multiply equation (35) by , we obtain
where . The time derivative of equation (40), multiplied by , becomes
where and . If gfi > ∊8, then a sliding mode at is induced for all time, since .
Part III.a [Visual tracking errors] Since a sliding mode exists for all time at , we have
implying that the position tracking errors tend to zero globally and exponentially, that is, xsi → xsdi and
Part III.b [Force tracking errors] Since a sliding mode is induced for all time, we have that exponentially fast, which implies that the force tracking errors converge exponentially fast. Then implies that .
Numerical simulation results
In order to demonstrate the usefulness of our decentralized controller, we present digital simulations implemented with the stiff numerical solver of Matlab 5.3, under a 1 [ms] sampling time. In these simulations the motion of the object is confined to the horizontal plane. Additionally, each experiment was executed under different initial conditions, zero initial velocity and parametric uncertainties on the i-th manipulator and the camera. As mentioned in Section 4, the neural network has only one layer and four weights per degree of freedom, where wrm–ij(0) = 0 with m = 1, 2, i = 1, 2 and j = 1, 2, 3, 4.
The i-th robot parameters are defined as: masses , , lengths , , centres of mass , , and inertias , for the i-th robot finger. The Sony DFW-VL500 CCD camera parameters are defined as: clockwise rotation angle , scale factor αv = 77772 [pixels/m], depth of field z = 1.5 [m], camera offset OI = [0.0005 0.0003]T [m] and focal length λf = 0.008 [m]. Finally, it is assumed that the image and joint coordinates are available in these simulations. As a reference, each figure presented contains four plots, indexed by the joint jr and the robot number rm, for jr, rm = 1, 2.
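For readers wishing to reproduce this kind of experiment outside Matlab, the skeleton below shows how the closed loop could be organized with a stiff implicit integrator and a 1 [ms] control period. The dynamics and control law here are trivial stand-ins: robot_ode would evaluate the constrained model of equation (1) and controller the neuro-visual law of Theorem 4.1.

```python
import numpy as np
from scipy.integrate import solve_ivp

dt = 1e-3        # 1 [ms] controller sampling time
t_final = 0.05   # short horizon for the illustration

def robot_ode(x, u):
    """Stand-in dynamics (double integrator per joint); in the real
    setting this would evaluate the constrained model of equation (1)."""
    q, qd = x[:2], x[2:]
    return np.concatenate([qd, u])

def controller(t, x):
    """Stand-in PD law; in the real setting, the neuro-visual
    sliding-mode controller of Theorem 4.1."""
    q, qd = x[:2], x[2:]
    return -20.0*q - 5.0*qd

def closed_loop_step(x, u):
    """Integrate the (stiff) dynamics over one control period with the
    implicit Radau method, holding the torque u constant."""
    sol = solve_ivp(lambda t, x: robot_ode(x, u), (0.0, dt), x,
                    method="Radau", rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

x = np.array([0.1, -0.2, 0.0, 0.0])   # initial positions and velocities
for k in range(int(t_final / dt)):
    x = closed_loop_step(x, controller(k*dt, x))
```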
As a first case we consider a pair of robotic manipulators grasping a rigid object, i.e. the contact between the object and the manipulators is a frictionless point contact. The desired task consists of the end-effector of the i-th robot manipulator following a desired visual trajectory while a desired force is applied in the normal direction at the contact point.
Exponential visual tracking errors for position and velocity are presented in Figures 1 and 2, respectively. The system trajectories converge to an error that can be considered zero (less than 0.1 pixel). After a transient response of a few milliseconds in the first joints of the robotic manipulators, the control effort applied to each joint is smooth and chattering free, Fig. 3. Finally, the performance of the contact force and the force tracking errors is presented in Fig. 4. As expected, the simulation results show exponential convergence without any knowledge of the i-th robot dynamics and the camera; i.e., the Jacobian is parameterized as the regressor times a parameter vector , and this vector is then multiplied by a factor to get X% of uncertainty with respect to the nominal value. In these simulations, the desired image-based trajectories are defined as and , respectively. In addition, the feedback gains used in this experiment are: .
Fig. 1. Visual position tracking errors for two robot fingers grasping an object.
Fig. 2. Visual velocity tracking errors for two robot fingers grasping an object.
Fig. 3. Smooth control applied to each joint.
Fig. 4. Contact force and force errors applied to the object by each finger.
Finally, we assumed that 90% of parametric uncertainty on exists.
As a second case, a general scheme for free motion and constrained motion through a smooth transition is presented. A smooth transition allows us to avoid an impact regime between the robotic finger and the object; that is, a zero transition velocity from free motion to constrained motion can be guaranteed. Therefore, impulsive dynamics does not appear, and the position controller commutes stably to the position/force controller and vice versa. To ensure that each robot finger approaches the object surface with a velocity near zero, it is necessary to change the feedback gain Ai defined in (24) by , where , for a small positive scalar ∊ such that α0 is close to 1 and 0 < δ ≪ 1. The time base generator ξi = ξi(t) ∊ C2 must be provided by the user so that ξi goes smoothly from 0 to 1 in finite time, while its derivative is bell-shaped and satisfies [17]. Once finite-time convergence of the position and velocity trajectories is assured in free motion, a stable switching of the closed-loop system to constrained motion is guaranteed.
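A concrete time base generator with these properties is sketched below. The quintic profile is one common choice satisfying ξ(0) = 0, ξ(tb) = 1 with a bell-shaped derivative that vanishes at both ends; [17] may employ a different C2 profile.

```python
import numpy as np

def tbg(t, tb):
    """Time base generator: xi rises smoothly from 0 to 1 on [0, tb],
    and its derivative xi_dot is bell-shaped, zero at both ends."""
    tau = np.clip(t / tb, 0.0, 1.0)
    xi = 10*tau**3 - 15*tau**4 + 6*tau**5
    xi_dot = (30*tau**2 - 60*tau**3 + 30*tau**4) / tb
    return xi, xi_dot
```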
Under the framework presented in the previous sections, the i-th geometric function in these three phases is defined as
where U is the unconstrained space (Phase 1), I is the transition space (Phase 2), C is the constrained space (Phase 3), t1 is the contact time, defined by the user through tbi, and Tc is a very small transition period. For free and constrained tasks, Jφi+(qi) and Qi are defined as follows
In addition, the noncalibrated nominal reference becomes:
Therefore, the uncertain visual error manifold is defined as
Then, using (42) the open loop error equation for free and constrained tasks is given as in equation (27), i.e.,
where
where is now the contact force. Finally, the neuro-adaptive law is given as
and
where and have the same dimensions as defined in equations (30) and (31), respectively.
From the previous definitions for transition tasks, we proceed to describe the task. Consider two rigid planar robots acting as fingers which initially are not grasping the object. Then, from any initial condition, the end-effector of the i-th robot finger follows a desired trajectory (a cubic polynomial; see the sketch after the figure captions below) in finite time tbi (free-motion control) under 75% of parametric uncertainty on . Once stable contact is achieved, the position controller commutes to a force/position controller (constrained motion). Then, and represent the desired force and visual trajectories in constrained motion under 75% of parametric uncertainty on . Visual tracking errors for free and constrained motion are shown in Fig. 5. The impulse response at the commutation between the position and the force controller is avoided, as shown in Fig. 6; instead, the control signal is smooth and chattering free. Additionally, in Fig. 8 we can observe that the velocity before and after contact is the same, i.e., with h > 0, so that the acceleration is analytic. On the other hand, Fig. 7 shows that for t ≤ 1.5 [s] the contact force is zero; however, for t > 1.5 [s] the force converges to the desired value after a few seconds, i.e., the robots grasp the object rigidly. The feedback gains used in this experiment are: .
Fig. 5. Visual position tracking errors for transition tasks.
Fig. 6. Smooth control input applied to each joint.
Fig. 7. Contact force and force error applied to the object by each finger.
Fig. 8. Visual velocity tracking errors for transition tasks.
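The cubic-polynomial approach trajectory mentioned in the task description can be sketched as follows. The zero-velocity boundary conditions are an assumption consistent with the zero-transition-velocity strategy described above, and the symbols are illustrative.

```python
import numpy as np

def cubic_approach(x0, xc, t, tb):
    """Cubic polynomial from x0 (start) to xc (contact point) over
    [0, tb], with zero velocity at both ends so that the finger reaches
    the object surface with (near-)zero transition velocity."""
    tau = np.clip(t / tb, 0.0, 1.0)
    x  = x0 + (xc - x0) * (3*tau**2 - 2*tau**3)
    xd = (xc - x0) * (6*tau - 6*tau**2) / tb
    return x, xd
```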
Finally, we have not obtained a systematic procedure to tune the control gains, basically because of the nonlinear nature of the closed-loop system. Thus, the feedback gains are tuned on a trial-and-error basis, according to the interplay of each gain in the closed-loop system.
Conclusions
A neuro-visual servoing controller which guarantees image-based tracking of constrained multiple robot arms subject to parametric uncertainty has been proposed. Exponential convergence in the visual force/position subspaces is guaranteed, even when neither the robot parameters nor the camera parameters are known. The main feature of our scheme is the ability to fuse image coordinates/visual velocities and contact forces in orthogonal complements. The neural network control loop compensates for the DAE dynamics, while an inner piecewise-continuous sliding-mode control loop adds the missing effort to induce sliding modes. Numerical simulations allow us to validate the proposed scheme through two significant applications: two robotic manipulators grasping a rigid object, and two robotic fingers executing tasks that involve changing dynamics and an impact regime.
Acknowledgment
This work was partially supported by the FAI project ICI-002-11 of the Univ. de Los Andes, Chile.
Footnotes
1. Differential Algebraic Equation.
References
1. Baeten J., Verdonck W., Bruyninckx H., De Schutter S. (2000), Combining Force Control and Visual Servoing for Planar Contour Following, Machine Intelligence and Robotic Control, vol. 2, no. 2, pp. 3–9.
2. Cotter N.E. (1990), The Stone-Weierstrass Theorem and Its Application to Neural Networks, IEEE Trans. on Neural Networks, vol. 1, no. 4, pp. 290–295.
3. Cybenko G. (1989), Approximation by Superpositions of a Sigmoidal Function, Mathematics of Control, Signals, and Systems, vol. 2, pp. 303–314.
4. Dixon W.E., Zergeroglu E., Fang Y., Dawson D.M. (2002), Object Tracking by a Robot Manipulator: A Robust Cooperative Visual Servoing Approach, Proc. IEEE International Conference on Robotics and Automation, pp. 211–216.
5. Ge S.S., Huang L., Lee T.H. (2001), Model-based and Neural-network-based Adaptive Control of Two Robotic Arms Manipulating an Object with Relative Motion, International Journal of Systems Science, vol. 32, no. 1, pp. 9–23.
6. Hutchinson S., Hager G. (1996), A Tutorial on Visual Servo Control, IEEE Trans. on Robotics and Automation, vol. 12, no. 5, pp. 651–670.
7. Ishikawa T., Sakane S., Sato T., Tsukune H. (1996), Estimation of Contact Position between a Grasped Object and the Environment based on Sensor Fusion of Vision and Force, IEEE/SICE/RSJ Int. Conference on Multisensor Fusion and Integration for Intelligent Systems, pp. 116–123.
8. Kawasaki H., Bin Ramli R., Ueki S. (2006), Decentralized Adaptive Coordinated Control of Multiple Robot Arms for Constrained Tasks, Journal of Robotics and Mechatronics, vol. 8, no. 5, pp. 580–588.
9. Kim K., Hori Y. (1997), Analysis and Control of Grasping Motion by Two Cooperative Robots, Proceedings of the Power Conversion Conference, pp. 513–518.
10. Lewis F.L., Abdallah C.T. (1994), Control of Robot Manipulators, New York: Macmillan.
11. Kwan C.M., Yesildirek A., Lewis F.L. (1999), Robust Force/Motion Control of Constrained Robots Using Neural Network, Journal of Robotic Systems, vol. 16, no. 12, pp. 697–714.
13. Liu Y.H., Arimoto S. (1996), Distributively Controlling Two Robots Handling an Object in the Task Space without any Communication, IEEE Trans. on Automatic Control, vol. 41, no. 8, pp. 1193–1198.
14. Liu Y.H., Arimoto S., Kitagaki S., Parra-Vega V. (1997), Model-based Adaptive and Distributed Controller for Holonomic Cooperations of Multiple Manipulators, Int. Journal of Control, vol. 67, no. 5, pp. 649–673.
15. Lui J.F., Karim A.M. (2000), Robust Control of Planar Dual-arm Cooperative Manipulators, Robotics and Computer-Integrated Manufacturing, vol. 16, no. 2–3, pp. 109–119.
16. McClamroch N.H., Wang D. (1988), Feedback Stabilization and Tracking of Constrained Robots, IEEE Trans. on Automatic Control, vol. 33, pp. 419–426.
17. Parra-Vega V., Hirzinger G. (2000), Finite-time Tracking for Robot Manipulators with Singularity-Free Continuous Control: A Passivity-based Approach, Proc. of the 39th IEEE Conference on Decision and Control, pp. 5085–5090.
18. Sarkar N., Yun X., Ellis R. (1998), Live-Constraint-Based Control for Contact Transitions, IEEE Trans. on Robotics and Automation, vol. 14, no. 5, pp. 743–754.
19. Szewczyk J., Plumet F., Bidaud P. (2002), Planning and Controlling Cooperating Robots through Distributed Impedance, Journal of Robotic Systems, vol. 19, no. 6, pp. 283–297.
20. Widrow B. (1960), Adaptive Switching Circuits, IRE WESCON Convention Record, Part 4, NY, pp. 96–104.
21. Shen Y., Sun D., Liu Y.H., Li K. (2003), Asymptotic Trajectory Tracking of Manipulators using Uncalibrated Visual Feedback, IEEE/ASME Trans. on Mechatronics, vol. 8, no. 1, pp. 87–98.
22. Liu Y.H., Hesheng W., Dongxiang Z. (2006), Dynamic Tracking of Manipulators using Visual Feedback from an Uncalibrated Fixed Camera, Proc. IEEE International Conference on Robotics and Automation, pp. 4124–4129.
23. Loreto G., Garrido R. (2006), Stable Neurovisual Servoing for Robot Manipulators, IEEE Trans. on Neural Networks, vol. 17, no. 4, pp. 953–965.
24. Pagilla P.R., Yu B. (2004), An Experimental Study of Planar Impact of a Robot Manipulator, IEEE/ASME Transactions on Mechatronics, vol. 9, no. 1, pp. 123–128.
25. Shoji Y., Inaba M., Fukuda T. (1991), Impact Control of Grasping, IEEE Trans. on Industrial Electronics, vol. 38, no. 3, pp. 187–194.
26. Uzmay I., Burkan R., Satihaya H. (2004), Application of Robust and Adaptive Control Techniques to Cooperative Manipulation, Control Engineering Practice, vol. 12, no. 2, pp. 139–148.
27. Xiao D. (2000), Sensor-hybrid Position/Force Control of a Robot Manipulator in an Uncalibrated Environment, IEEE Trans. on Control Systems Technology, vol. 8, no. 4, pp. 635–645.