Image moments are global descriptors of an image and can be used to achieve control-decoupling properties in visual servoing. However, only a few methods completely decouple the control. This study introduces a novel closed-form camera pose estimation method based on the image moments of planar objects. Traditional position-based visual servoing estimates the pose of a camera relative to an object, but the proposed method directly estimates the pose of the initial camera relative to the desired camera. Because the estimation method relies on plane parameters, a plane parameters estimation method based on 2D rotation, 2D translation, and scale invariant moments is also proposed. A position-based visual servoing control scheme with a completely decoupled interaction matrix is then built from the two estimation methods. The new scheme exhibits asymptotic stability when the object plane remains in the camera field of view. Simulation results demonstrate the effectiveness of the two estimation methods and the advantages of the visual servo control scheme compared with the classical method.
Visual servoing control refers to the use of computer vision data to control the motion of a robot.1 In visual servoing, two closely linked problems are subjects of active research2: the design of visual features pertinent to the robotic task to be realized, and the design of a control scheme using the chosen visual features such that the desired closed-loop characteristics are obtained. Control schemes typically aim to ensure an exponential, decoupled decrease of the error. As for the features, image-based visual servoing (IBVS)3–5 employs the observed image parameters of geometric primitives (points, straight lines, ellipses, and cylinders), whereas in position-based visual servoing (PBVS)6–8 the geometric primitives are used to reconstruct the camera pose, which then serves as input for the control law. Both approaches subject the image stream to an ensemble of measurement processes, including image processing, image matching, and visual tracking steps, from which the visual features are determined.9 With large measurement errors, PBVS can suffer from instability in the pose estimation, and IBVS designed from image points is subject to local minima, singularities, and a limited convergence domain.10 This is due to image mismatches and to the strong nonlinearities and coupling in the interaction matrix of image points. As a solution to these issues, image moments were introduced for visual servoing.11–14
Image moments have a broad spectrum of applications in image analysis, such as invariant pattern recognition,15,16 pose estimation,17,18 and reconstruction.19 A set of moments computed from a digital image represents the global characteristics of the image shape and provides much information regarding the different geometrical features of the image.20 Because the descriptors used are global and nongeometrical, they avoid image processing such as feature extraction, matching, and tracking.18 Moments are also useful in achieving control-decoupling properties and in choosing a minimal number of features to control all degrees of freedom of the camera. Therefore, visual servoing using image moments obtains a large convergence domain and satisfactory robot trajectories, because suitable combinations of moments reduce the nonlinearities and coupling in the interaction matrix.
The analytical form of the interaction matrix related to any moment computed from segmented images was determined, and the result was applied to classical geometric primitives.11 However, this method requires a configuration in which the object and camera planes are parallel at the desired position. Another drawback is the strong coupling of the interaction matrix. Tahri and Chaumette12 designed a decoupled control scheme that weakens the coupling of the visual features when the object is parallel to the image plane. A generalization of this property to the case where the desired object position is not parallel to the image plane has achieved excellent experimental results, but one must determine the virtual rotation to apply to the camera, and the angular velocities about the x- and y-axes remain coupled. To solve the problem of selecting image moments that control the rotational motions around the x- and y-axes simultaneously with the translational motions along the same axes, new visual features computed from low-order shifted moment invariants were proposed.13 On the basis of the shifted moments, the selection of a unique feature vector independent of the object shape was proposed and exploited in IBVS. Although this method significantly enlarges the convergence domain of the closed-loop system, the interaction matrix is still coupled. The above methods are all IBVS schemes. To date, few studies on PBVS control schemes based on image moments exist, whereas hybrid visual servoing is popular. He et al.14 proposed a novel moment-based 2 1/2D visual servoing method for grasping textureless planar parts. Instead of applying high-order image moments, it uses rotation features, providing a decoupled interaction matrix that has full rank and no local minimum in the control scheme. However, the real-time estimation of the relative rotation of the textureless parts is based on cross-correlation analysis and is not a closed-form solution.
To handle the above issues more effectively, this study proposes a closed-form solution for camera pose estimation based on the image moments of planar objects. This method directly estimates the relative pose between the initial camera and the desired camera rather than the pose of the camera relative to the object plane. Because the estimation method relies on plane parameters, a plane parameters estimation method based on invariant moments is also proposed. Finally, we adopt a PBVS control scheme with a completely decoupled interaction matrix.
The rest of the article is organized as follows. The second section discusses preliminary knowledge; the third section introduces camera pose estimation based on image moments; the fourth section is devoted to plane parameters estimation based on the invariant moments; the fifth section discusses the PBVS control scheme and stability analysis; the sixth section presents the simulation results obtained from the co-simulation of MATLAB and CoppeliaSim; and finally, the seventh section outlines the conclusions and future work.
Preliminaries
This section introduces the preliminary knowledge used in this study. The pinhole camera model and imaging geometry are introduced first, as they play important roles in deriving the formulas. Then, the relationships between the image moments of two cameras are described, laying the foundation for estimating the relative pose of the two cameras.
The pinhole camera model and imaging geometry
From the pinhole camera model,21 a 3D point $\mathbf{X} = (X, Y, Z)^\top$ expressed in the camera frame projects onto the image as the 2D homogeneous normalized coordinates $\mathbf{x} = (x, y, 1)^\top$, so that $\mathbf{x} = \mathbf{X}/Z$ (1). The image plane measurement (in pixels) of the point is denoted by $\mathbf{p} = (u, v, 1)^\top$. The relationship between $\mathbf{p}$ and $\mathbf{x}$ is expressed as $\mathbf{p} = \mathbf{K}\mathbf{x}$, where $\mathbf{K}$ is a nonsingular matrix containing the intrinsic parameters of the camera.22 To simplify the notation, we will assume that every quantity is expressed in the normalized space. This is equivalent to assuming a calibrated camera, that is, full knowledge of the calibration matrix $\mathbf{K}$.
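For concreteness, the following minimal sketch (ours, not from the article) projects a 3D point and normalizes a pixel measurement with an assumed calibration matrix $\mathbf{K}$; all numeric values are arbitrary.

```python
import numpy as np

# Assumed intrinsic matrix K (focal lengths fx, fy and principal point cx, cy).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

X = np.array([0.2, -0.1, 1.5])   # a 3D point (X, Y, Z) in the camera frame

x = X / X[2]                     # normalized coordinates x = X / Z, equation (1)
p = K @ x                        # pixel measurement p = K x
x_back = np.linalg.solve(K, p)   # calibrated camera: recover x from p

print(p[:2], x_back)
```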
The homogeneous normalized coordinates $\mathbf{x}_1$ and $\mathbf{x}_2$ of the two camera image points in Figure 1(a) satisfy a fixed relationship. Assuming that a point $\mathbf{P}$ on the object plane b is represented as $^{1}\mathbf{P}$ and $^{2}\mathbf{P}$ in the camera frames $\mathcal{F}_1$ and $\mathcal{F}_2$, respectively, then $^{1}\mathbf{P}$ and $^{2}\mathbf{P}$ satisfy

$^{1}\mathbf{P} = \mathbf{R}\,{}^{2}\mathbf{P} + \mathbf{t} \quad (2)$

where $Z_1$ and $Z_2$ are the depths of the observed point relative to the camera frames $\mathcal{F}_1$ and $\mathcal{F}_2$, respectively, and $\mathbf{R}$ and $\mathbf{t}$ are the rotation matrix and translation vector of the camera frame $\mathcal{F}_2$ relative to the camera frame $\mathcal{F}_1$, respectively. Substituting $^{i}\mathbf{P} = Z_i\mathbf{x}_i$ using equation (1), equation (2) can be written as

$Z_1\mathbf{x}_1 = Z_2\mathbf{R}\,\mathbf{x}_2 + \mathbf{t} \quad (3)$
Basic geometry for two images: (a) the imaging plane is not parallel to the object plane and (b) the imaging plane is parallel to the object plane.
In the following, we consider only objects that are planar or have a planar limb surface. In that case, the depth Z of any point of the object plane can be expressed as a continuous function of its image coordinates (see the study of Espiau and Chaumette3 for more details) as follows

$\frac{1}{Z} = Ax + By + C \quad (4)$

where A, B, and C are the object plane parameters, and we define the plane parameter vector $\boldsymbol{\pi} = (A, B, C)$. The parameters of a plane containing typical primitives have been studied by Espiau and Chaumette3 and Chaumette et al.23 To find the parameters of a plane containing an arbitrary pattern, a general method for estimating the plane parameters based on the invariant moments will be introduced in the fourth section.
Image moments
Moments are generic (and intuitive) descriptors computed from several kinds of objects, defined either from closed contours or from a set of points. The 2D geometric moments of order $p+q$ are denoted by $m_{pq}$ and can be expressed as

$m_{pq} = \iint_{\mathcal{D}} x^p y^q f(x, y)\, dx\, dy \quad (5)$
where $\mathcal{D}$ is the region of the normalized space in which the image intensity function $f(x, y)$ is defined. The intensity centroid $(x_g, y_g)$ is given by

$x_g = \frac{m_{10}}{m_{00}}, \qquad y_g = \frac{m_{01}}{m_{00}} \quad (6)$
The moments computed with respect to the intensity centroid are called centered moments and are defined as

$\mu_{pq} = \iint_{\mathcal{D}} (x - x_g)^p (y - y_g)^q f(x, y)\, dx\, dy \quad (7)$
The following considers objects defined by a binary region. Therefore, we assume that $f(x, y) = 1$ over the region that defines the object and $f(x, y) = 0$ elsewhere.
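For a binary region, the integrals in equations (5) to (7) reduce to sums over the object pixels. A minimal sketch (ours; the rectangle pattern and grid resolution are arbitrary):

```python
import numpy as np

def moments_up_to_2(mask, xs, ys):
    """Raw moments m_pq (equation (5)), centroid (equation (6)), and centered
    moments mu_pq (equation (7)) of a binary region, approximating the
    integrals by sums over pixels of area dA."""
    dA = (xs[1] - xs[0]) * (ys[1] - ys[0])
    x, y = np.meshgrid(xs, ys, indexing="xy")
    f = mask.astype(float)
    m = {(p, q): np.sum(x**p * y**q * f) * dA
         for p in range(3) for q in range(3) if p + q <= 2}
    xg, yg = m[(1, 0)] / m[(0, 0)], m[(0, 1)] / m[(0, 0)]
    mu = {(p, q): np.sum((x - xg)**p * (y - yg)**q * f) * dA
          for p in range(3) for q in range(3) if p + q <= 2}
    return m, (xg, yg), mu

# Example: an off-center rectangle in the normalized image space.
xs = np.linspace(-1.0, 1.0, 400)
ys = np.linspace(-1.0, 1.0, 400)
xx, yy = np.meshgrid(xs, ys, indexing="xy")
mask = (np.abs(xx - 0.2) < 0.3) & (np.abs(yy + 0.1) < 0.2)
m, cg, mu = moments_up_to_2(mask, xs, ys)
print(m[(0, 0)], cg, mu[(2, 0)], mu[(1, 1)], mu[(0, 2)])
```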
In the studies of Espiau and Chaumette3 and Chaumette,11 the interaction matrix related to any centered moment defined from equation (7) has been determined, so we can write

$\dot{\mu}_{pq} = \mathbf{L}_{\mu_{pq}} \mathbf{v} \quad (8)$
where $\mathbf{v} = (\boldsymbol{v}, \boldsymbol{\omega})$ is the relative kinematic screw between the camera and the object, in which $\boldsymbol{v}$ and $\boldsymbol{\omega}$ represent the translational and rotational velocity components, respectively. The interaction matrix $\mathbf{L}_{\mu_{pq}}$ can be calculated in closed form as a function of the centered moments of order up to $p + q + 1$ and of the plane parameters A, B, and C; its full analytical expression is given by Chaumette.11
Relationship between the image moments of two cameras
If the two cameras are not parallel to the object plane (see Figure 1(a)), then $Z_1$ and $Z_2$ in equation (3) are not constant, which complicates the calculation of the image moments. Therefore, we first introduce the simpler situation in which the two cameras are parallel to the object plane (see Figure 1(b)).
We denote the initial and transformed image coordinates of the feature pattern by $(x, y)$ and $(x', y')$, respectively, the corresponding intensity functions by $f$ and $f'$, and their image moments by $m_{pq}$ and $m'_{pq}$. Therefore

$m'_{pq} = \iint_{\mathcal{D}'} x'^p y'^q f'(x', y')\, dx'\, dy' \quad (9)$

Assuming that the image intensity values are preserved during the transformation, we have $f'(x', y') = f(x, y)$. Further, $dx'\,dy' = |J|\, dx\, dy$, where $J$ is the Jacobian of the transformation. The following notation is therefore used for the moments of the transformed image in terms of the initial one

$m'_{pq} = \iint_{\mathcal{D}} x'^p y'^q f(x, y)\, |J|\, dx\, dy \quad (10)$
According to equation (3), when the two cameras are parallel to the object plane (see Figure 1(b)), the depths $Z_1$ and $Z_2$ of the observed plane relative to the camera frames $\mathcal{F}_1$ and $\mathcal{F}_2$ are constant, and the image coordinates are related by the 2D similarity transformation

$\begin{cases} x' = s(x\cos\gamma - y\sin\gamma) + t_x \\ y' = s(x\sin\gamma + y\cos\gamma) + t_y \end{cases} \quad (11)$

where $s$ is the ratio of the two constant depths, $\gamma$ is the rotation angle about the common z-axis, and $(t_x, t_y)$ is the image translation. The expressions relating the image moments can be derived by substituting the image coordinate transformation equation (11), for different values of p and q, into equation (9). The moment equations up to second order are

$\begin{aligned} m'_{00} &= s^2 m_{00} \\ x'_g &= s(x_g\cos\gamma - y_g\sin\gamma) + t_x \\ y'_g &= s(x_g\sin\gamma + y_g\cos\gamma) + t_y \\ \mu'_{20} &= s^4\big(\mu_{20}\cos^2\gamma - 2\mu_{11}\sin\gamma\cos\gamma + \mu_{02}\sin^2\gamma\big) \\ \mu'_{11} &= s^4\big((\mu_{20} - \mu_{02})\sin\gamma\cos\gamma + \mu_{11}(\cos^2\gamma - \sin^2\gamma)\big) \\ \mu'_{02} &= s^4\big(\mu_{20}\sin^2\gamma + 2\mu_{11}\sin\gamma\cos\gamma + \mu_{02}\cos^2\gamma\big) \end{aligned} \quad (12)$
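The relations in equation (12) can be checked numerically. The following sketch (ours) applies a 2D similarity transformation to a sampled region and compares a transformed second-order moment with its prediction; for a sampled region the area weight $|J| = s^2$ must be applied explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-0.5, 0.5, size=(5000, 2))   # samples of the object region
w = 1.0 / len(pts)                             # equal area weight per sample

s, g = 0.8, 0.3                                # depth ratio and rotation angle
t = np.array([0.05, -0.02])                    # image translation
R = np.array([[np.cos(g), -np.sin(g)],
              [np.sin(g),  np.cos(g)]])
pts2 = s * pts @ R.T + t                       # the similarity of equation (11)
w2 = s**2 * w                                  # area element scales by |J| = s^2

def mu2(p, wgt):
    """Second-order centered moments of a uniformly weighted point set."""
    d = p - p.mean(axis=0)
    return (wgt * np.sum(d[:, 0]**2),
            wgt * np.sum(d[:, 0] * d[:, 1]),
            wgt * np.sum(d[:, 1]**2))

mu20, mu11, mu02 = mu2(pts, w)
mu20t, _, _ = mu2(pts2, w2)
# Prediction of equation (12): order-2 centered moments scale by s^4 and rotate by gamma.
pred = s**4 * (mu20 * np.cos(g)**2 - 2 * mu11 * np.sin(g) * np.cos(g) + mu02 * np.sin(g)**2)
print(mu20t, pred)   # the two values agree
```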
Camera pose estimation based on image moments
This section estimates the relative homogeneous transformation of the two camera frames based on the image moments of the object plane observed by the two cameras and on the plane parameters. In other words, the rotation matrix $\mathbf{R}$ and the translation vector $\mathbf{t}$ in Figure 1(a) will be estimated from image moments and plane parameters. The estimation method is a closed-form solution.
Estimate the rotation matrix of the two camera frames
The estimation method of the rotation matrix $\mathbf{R}$ in Figure 1(a) is introduced in this section. Firstly, to make the imaging planes of the cameras parallel to the object plane, the camera frames $\mathcal{F}_1$ and $\mathcal{F}_2$ need to be rotated; the corresponding rotation matrices are $\mathbf{R}_1$ and $\mathbf{R}_2$, respectively. In other words, the camera frames $\mathcal{F}_1$ and $\mathcal{F}_2$ in Figure 1(a) are converted to the camera frames $\mathcal{F}_{1'}$ and $\mathcal{F}_{2'}$ in Figure 1(b). Then, the rotation matrix $\mathbf{R}_\gamma$ between the rotated camera frames in Figure 1(b) is calculated from the image moments. Finally, the rotation matrix $\mathbf{R}$ is obtained by composing $\mathbf{R}_1$, $\mathbf{R}_\gamma$, and $\mathbf{R}_2$ (equation (13)).
Any rotation matrix can be expressed by Euler angles as follows

$\mathbf{R}(\phi, \theta, \psi) = \mathbf{R}_z(\phi)\,\mathbf{R}_y(\theta)\,\mathbf{R}_x(\psi) = \begin{bmatrix} c_\phi c_\theta & c_\phi s_\theta s_\psi - s_\phi c_\psi & c_\phi s_\theta c_\psi + s_\phi s_\psi \\ s_\phi c_\theta & s_\phi s_\theta s_\psi + c_\phi c_\psi & s_\phi s_\theta c_\psi - c_\phi s_\psi \\ -s_\theta & c_\theta s_\psi & c_\theta c_\psi \end{bmatrix} \quad (14)$

where $\mathbf{R}_z(\phi)$, $\mathbf{R}_y(\theta)$, and $\mathbf{R}_x(\psi)$ are rotation operations about the coordinate frame axes z, y, and x, respectively, and $c_\phi = \cos\phi$, $s_\phi = \sin\phi$, $c_\theta = \cos\theta$, $s_\theta = \sin\theta$, $c_\psi = \cos\psi$, and $s_\psi = \sin\psi$. Appendix 1 explains the relationship between the rotation matrix and Euler angles. The following introduces the calculation of the Euler angles of the rotation matrices $\mathbf{R}_1$, $\mathbf{R}_2$, and $\mathbf{R}_\gamma$.
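A small sketch (ours) constructing the z-y-x composition above and reading the Euler angles back, away from the singularity $\cos\theta = 0$:

```python
import numpy as np

def Rz(a): c, s = np.cos(a), np.sin(a); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
def Ry(a): c, s = np.cos(a), np.sin(a); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
def Rx(a): c, s = np.cos(a), np.sin(a); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def euler_zyx(R):
    """Recover (phi, theta, psi) with R = Rz(phi) @ Ry(theta) @ Rx(psi),
    valid away from the singularity cos(theta) = 0."""
    theta = -np.arcsin(R[2, 0])            # R[2,0] = -sin(theta)
    phi = np.arctan2(R[1, 0], R[0, 0])     # R[1,0]/R[0,0] = tan(phi)
    psi = np.arctan2(R[2, 1], R[2, 2])     # R[2,1]/R[2,2] = tan(psi)
    return phi, theta, psi

R = Rz(0.4) @ Ry(-0.2) @ Rx(0.7)
print(euler_zyx(R))   # ~ (0.4, -0.2, 0.7)
```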
The calculation of the rotation matrices $\mathbf{R}_1$ and $\mathbf{R}_2$
From equation (4), the equation of the object plane relative to a camera frame can be expressed as $AX + BY + CZ = 1$. Therefore, the normal vector of the object plane b in Figure 1(a) is expressed as $\mathbf{n}_1 = (A_1, B_1, C_1)$ and $\mathbf{n}_2 = (A_2, B_2, C_2)$ in the camera frames $\mathcal{F}_1$ and $\mathcal{F}_2$, respectively. $\mathbf{n}_1$ and $\mathbf{n}_2$ are normalized as $\bar{\mathbf{n}}_1 = \mathbf{n}_1/\|\mathbf{n}_1\|$ and $\bar{\mathbf{n}}_2 = \mathbf{n}_2/\|\mathbf{n}_2\|$.
To make the imaging plane of the camera parallel to the object plane, the camera frame $\mathcal{F}_1$ is rotated (see Figure 1(a)). The Euler angles of this rotation matrix $\mathbf{R}_1$ are $\phi_1 = 0$, $\theta_1$, and $\psi_1$, so from equation (14) we can obtain $\mathbf{R}_1 = \mathbf{R}_y(\theta_1)\,\mathbf{R}_x(\psi_1)$.
We find that the unit normal vector of the object plane in the rotated frame is $(0, 0, 1)^\top$. Therefore, we can get

$\mathbf{R}_1\,(0, 0, 1)^\top = \bar{\mathbf{n}}_1 \quad (15)$

To calculate the Euler angles $\theta_1$ and $\psi_1$, equation (15) is expressed as

$\begin{cases} \sin\theta_1\cos\psi_1 = \bar{n}_{1x} \\ -\sin\psi_1 = \bar{n}_{1y} \\ \cos\theta_1\cos\psi_1 = \bar{n}_{1z} \end{cases} \quad (16)$

As a result, the solution of equation (16) is obtained

$\psi_1 = \arcsin(-\bar{n}_{1y}) \;\text{ or }\; \psi_1 = \pi - \arcsin(-\bar{n}_{1y}), \qquad \theta_1 = \operatorname{atan2}\!\left(\frac{\bar{n}_{1x}}{\cos\psi_1},\; \frac{\bar{n}_{1z}}{\cos\psi_1}\right) \quad (17)$

where $\bar{\mathbf{n}}_1 = (\bar{n}_{1x}, \bar{n}_{1y}, \bar{n}_{1z})^\top$.
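The following sketch (ours) implements this step under the convention assumed above, namely that $\mathbf{R}_y(\theta)\mathbf{R}_x(\psi)$ maps $(0, 0, 1)^\top$ onto the unit plane normal; it returns both candidate angle pairs of equation (17):

```python
import numpy as np

def angles_from_normal(n):
    """Candidate Euler angles (theta, psi) such that Ry(theta) @ Rx(psi)
    maps (0, 0, 1) onto the unit normal n -- our assumed convention."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    sols = []
    for psi in (np.arcsin(-n[1]), np.pi - np.arcsin(-n[1])):
        c = np.cos(psi)                       # assumed nonzero (normal not along y)
        theta = np.arctan2(n[0] / c, n[2] / c)
        sols.append((theta, psi))
    return sols

def Ry(a): c, s = np.cos(a), np.sin(a); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
def Rx(a): c, s = np.cos(a), np.sin(a); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

n = np.array([0.3, -0.2, 0.93])
for theta, psi in angles_from_normal(n):
    # Both candidates map the z-axis onto the normalized normal.
    print(Ry(theta) @ Rx(psi) @ np.array([0.0, 0.0, 1.0]), n / np.linalg.norm(n))
```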
Similarly, the camera frame $\mathcal{F}_2$ in Figure 1(a) can also be rotated. The Euler angles of this rotation matrix $\mathbf{R}_2$ are $\phi_2 = 0$, $\theta_2$, and $\psi_2$, so we can also get

$\mathbf{R}_2\,(0, 0, 1)^\top = \bar{\mathbf{n}}_2 \quad (18)$

Therefore, the Euler angles $\theta_2$ and $\psi_2$ can be obtained by analogy with equation (17).
The calculation of the rotation matrix $\mathbf{R}_\gamma$
Firstly, the images n and o are rotated to n′ and o′ by $\mathbf{R}_1$ and $\mathbf{R}_2$, respectively. Then, the following shows how to estimate the rotation matrix $\mathbf{R}_\gamma$ between the rotated frames $\mathcal{F}_{1'}$ and $\mathcal{F}_{2'}$ based on the image moments.
Because the z-axes of the frames $\mathcal{F}_{1'}$ and $\mathcal{F}_{2'}$ are parallel to each other (see Figure 1(b)), $\mathbf{R}_\gamma$ can be calculated from the image-centered moments. The rotation matrix can be calculated by

$\mathbf{R}_\gamma = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (19)$

where $\gamma = \alpha_1 - \alpha_2$, and $\alpha_1$ and $\alpha_2$ are the orientation angles of the inertial principal axes of the target plane b in the images of the two cameras. The orientation angles $\alpha_1$ and $\alpha_2$ can be calculated as (see the study of Horn24 for more details)

$\alpha = \frac{1}{2}\arctan\left(\frac{2\mu_{11}}{\mu_{20} - \mu_{02}}\right)$

where $\mu_{20}$, $\mu_{11}$, and $\mu_{02}$ are the second-order central moments defined by equation (7). Because a principal axis is an undirected line, it is well known that the in-plane rotation can be expressed as

$\gamma = \alpha_1 - \alpha_2 \quad \text{or} \quad \gamma = \alpha_1 - \alpha_2 + \pi \quad (23)$

There are two solutions calculated for $\gamma$ using equation (23), but only one solution is correct. The following explains how to choose the correct solution.
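A sketch (ours) of the orientation computation; using atan2 keeps the formula defined when $\mu_{20} = \mu_{02}$, and the $\pi$ ambiguity of the principal axis produces the two candidate in-plane rotations discussed next:

```python
import numpy as np

def orientation(mu20, mu11, mu02):
    """Orientation of the inertial principal axis from second-order
    centered moments (Horn); defined only up to pi."""
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

alpha_1 = orientation(3.1e-3, 1.2e-3, 2.0e-3)   # image n' (values arbitrary)
alpha_2 = orientation(2.8e-3, -0.9e-3, 2.4e-3)  # image o'
gamma_candidates = (alpha_1 - alpha_2, alpha_1 - alpha_2 + np.pi)
print(alpha_1, alpha_2, gamma_candidates)
```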
Selection of the correct angle $\gamma$
This section introduces the method for selecting the correct angle with the help of third-order moments.
From equation (19), a candidate rotation matrix $\mathbf{R}_\gamma$ can be calculated for each of the two candidate angles (equation (25)). Using equations (25) and (10), the third-order moments of one rotated image can be predicted from those of the other; theoretically, these predicted values are estimates of the measured third-order moments. Therefore, if the candidate angle is correct, the predicted and measured third-order moments agree. To increase the robustness of the angle selection method, we define a residual that accumulates the discrepancies between the predicted and measured third-order moments (equation (26)).
If the candidate angle is correct, the residual is close to zero; otherwise, it is large. A positive threshold coefficient is therefore defined. If the residual test is satisfied, the candidate angle is the correct solution; otherwise, the candidate shifted by $\pi$ is the correct solution.
Finally, the relative rotation matrix $\mathbf{R}$ of the two camera frames $\mathcal{F}_1$ and $\mathcal{F}_2$ can be calculated by equation (13), where $\mathbf{R}_2$, $\mathbf{R}_\gamma$, and $\mathbf{R}_1$ are defined in equations (18), (19), and (14), respectively.
Estimate the translation vector of the two camera frames
This part will introduce the estimation method of the translation vector in Figure 1(a).
From the rotations of the camera frames $\mathcal{F}_1$ and $\mathcal{F}_2$ introduced in the previous section, the rotated normalized coordinates and depths can be obtained (equation (27)), where $\mathbf{R}_1$ and $\mathbf{R}_2$ are defined in equations (14) and (18), respectively, and where the rotated quantities are calculated from the original coordinates and depths as in equation (28). Substituting these quantities using equations (28) and (13), equation (3) can be converted into a relation between the two rotated, plane-parallel images (equation (29)).
Because the imaging planes of the rotated camera frames $\mathcal{F}_{1'}$ and $\mathcal{F}_{2'}$ are parallel to the object plane, the corresponding depths are constant. According to equations (29) and (12), the relationship between the image moments of the two cameras can then be expressed as equation (30), whose coefficients are defined by the constant depths and the components of the rotated translation. As a result, solving these moment equations yields the translation vector $\mathbf{t}$ (equation (31)).
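Equations (30) and (31) are not reproduced here, but the underlying idea can be illustrated under the similarity-transform convention assumed earlier: for plane-parallel views, the zero-order moment fixes the depth ratio and the centroids fix the remaining translation components. A sketch (ours), assuming the rotated desired depth $Z_2$ is known from the plane parameters ($Z_2 = 1/C$ in the parallel configuration):

```python
import numpy as np

def translation_from_moments(m00_n, cg_n, m00_o, cg_o, gamma, Z2):
    """Sketch: translation between the two *rotated* (plane-parallel) frames.
    m00_*: zero-order moments; cg_*: centroids (x_g, y_g) of images n' and o';
    gamma: in-plane rotation angle; Z2: depth of the rotated desired camera."""
    Z1 = Z2 * np.sqrt(m00_o / m00_n)          # projected area scales as 1/Z^2
    c, s = np.cos(gamma), np.sin(gamma)
    R2d = np.array([[c, -s], [s, c]])
    txy = Z1 * np.asarray(cg_n) - Z2 * R2d @ np.asarray(cg_o)
    tz = Z1 - Z2                              # from the third row of equation (3)
    return np.array([txy[0], txy[1], tz])

# Arbitrary consistent example: Z1 = 1.25, Z2 = 1.0, gamma = 0.2.
print(translation_from_moments(m00_n=0.64, cg_n=(0.1, 0.05),
                               m00_o=1.0, cg_o=(0.0, 0.0), gamma=0.2, Z2=1.0))
```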
Therefore, the calculation method of the homogeneous transformation matrix has been completed.
Plane parameters estimation based on image moments
The homogeneous transformation matrix calculation method introduced in the third section requires the plane parameters in both frames $\mathcal{F}_1$ and $\mathcal{F}_2$. In visual servo control, the latter are usually known, but the former are difficult to obtain. So we need to use the known information of image o and the known plane parameters to estimate the unknown plane parameters. The methods introduced in the studies of Chaumette11 and Chaumette et al.23 apply only to typical primitives. Therefore, the following presents a plane parameters estimation method that is suitable for any pattern.
Generally, the plane parameters of the desired configuration are known, so we can calculate the corresponding rotation matrix and obtain the rotated image o′. However, the available estimate of the other plane parameters contains errors. Similarly, a rotation matrix and a rotated image n′ can be calculated from this erroneous estimate (note that, if there were no error, n′ would be parallel to o′). Theoretically, the 2D rotation, 2D translation, and scale invariant moments of images n′ and o′ are the same. So we can correct the plane parameters according to these invariant moments.
We choose two moments that are invariant to 2D rotation, 2D translation, and to scale25 (equation (32)).
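The article's specific pair of invariants in equation (32) is not reproduced here; as a stand-in, the following sketch (ours) uses two classical combinations of normalized central moments that are likewise invariant to 2D translation, 2D rotation, and scale (Hu's first two invariants):

```python
import numpy as np

def hu12(mu20, mu11, mu02, m00):
    """Two classical invariants to 2D translation, rotation, and scale,
    built from normalized second-order central moments (Hu)."""
    eta20, eta11, eta02 = mu20 / m00**2, mu11 / m00**2, mu02 / m00**2
    phi1 = eta20 + eta02
    phi2 = (eta20 - eta02)**2 + 4.0 * eta11**2
    return np.array([phi1, phi2])

print(hu12(3.1e-3, 1.2e-3, 2.0e-3, 0.12))   # values arbitrary
```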
It is easy to know that the invariant moments of the correctly rotated image can be calculated directly from image o. According to equation (9), the interaction matrix related to the invariant moments has a particular form (equation (33)): because the features are invariant to 2D translation, 2D rotation, and scale, only the rotational velocities about the x- and y-axes produce a first-order change.
We want to rotate image n′ to an image that is parallel to image o′, so the rotational displacements about the x- and y-axes will be calculated. Therefore, we can get equation (34), whose right-hand side is not affected by the erroneous plane parameters. The invariant moments can be expressed as a Taylor expansion in these two rotation angles (equation (35)). Truncating the Taylor expansion at first order and substituting using equation (34), we can approximate equation (35) by the linear system of equation (36). Therefore, the rotation matrix of the corrected frame relative to the current rotated frame can be calculated as in equation (37), where a skew-symmetric matrix representation of the estimated rotation vector is used.
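A sketch (ours) of this first-order correction idea: treat the two invariants as a function of the two out-of-plane rotation angles, estimate the Jacobian numerically (standing in for the analytic interaction matrix of equation (33)), and take a Gauss-Newton step; the toy invariant map below is hypothetical.

```python
import numpy as np

def newton_step(inv, angles, target, eps=1e-6):
    """One Gauss-Newton update of the two rotation angles so that the
    invariants inv(angles) move toward the target values -- our sketch of
    the first-order truncation in equation (36)."""
    f0 = inv(angles)
    J = np.zeros((2, 2))
    for j in range(2):                  # forward-difference Jacobian, standing
        d = np.zeros(2)                 # in for the analytic interaction
        d[j] = eps                      # matrix of equation (33)
        J[:, j] = (inv(angles + d) - f0) / eps
    return angles - np.linalg.solve(J, f0 - target)

# Hypothetical smooth invariant map with a known root at (0, 0).
inv = lambda a: np.array([np.sin(a[0]) + 0.1 * a[1], 0.2 * a[0] + np.tan(a[1])])
a = np.array([0.3, -0.2])
for _ in range(5):
    a = newton_step(inv, a, target=np.zeros(2))
print(a)   # converges to (0, 0)
```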
Now, we start to correct the initial plane parameters. First, the image n′ is rotated by the rotation matrix of equation (37). Noting that, for an image parallel to the object plane,

$m_{00} = \frac{S}{Z^2} \quad (38)$

where S is the area of the object plane, the plane parameters in the corrected frame are $(0, 0, C)$ with $C = 1/Z$. According to equations (29) and (38), C can be expressed as

$C = \sqrt{\frac{m_{00}}{S}} \quad (39)$

The final corrected plane parameters are then obtained by rotating $(0, 0, C)$ back through the composed rotations (equation (40)).
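For completeness, the relation $m_{00} = S/Z^2$ used above follows directly from the projection model; this short derivation is ours, under the parallel-plane assumption. Since every object point has the same depth Z, $x = X/Z$ and $y = Y/Z$, so the area element satisfies $dx\,dy = dX\,dY/Z^2$ and

$m_{00} = \iint_{\mathcal{D}} dx\, dy = \frac{1}{Z^2}\iint_{\mathcal{S}} dX\, dY = \frac{S}{Z^2}$

hence $Z = \sqrt{S/m_{00}}$ and $C = 1/Z = \sqrt{m_{00}/S}$.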
The plane parameter error can then be calculated as the difference between the desired plane parameters and the current estimate, where the desired plane parameters are approximately given by the corrected values above. Finally, we can design the correction scheme as the iterative update of equation (41), where the gain is a positive scalar.
PBVS and stability analysis
This section first introduces the PBVS control scheme, which is based on the camera pose estimation described in the third section. Then, the stability of the method is analyzed. This visual servoing scheme has been studied by Chaumette and Hutchinson.26 We briefly revisit it here.
In this study, we define the current and desired values of the visual features as $\mathbf{s} = (\mathbf{t}, \theta\mathbf{u})$ and $\mathbf{s}^* = \mathbf{0}$, in which $\mathbf{t}$ is the translation vector of the current camera frame relative to the desired one, calculated using equation (31), and $\theta\mathbf{u}$ gives the angle/axis parameterization of the corresponding rotation matrix $\mathbf{R}$ calculated by equation (13). Therefore, the vision-based control scheme minimizes an error $\mathbf{e}$, which is defined as

$\mathbf{e} = \mathbf{s} - \mathbf{s}^* = (\mathbf{t}, \theta\mathbf{u})$

The relationship between $\dot{\mathbf{e}}$ and the camera velocity $\mathbf{v}$ is given by

$\dot{\mathbf{e}} = \mathbf{L}_e\,\mathbf{v}$

So, imposing an exponential decrease of the error, the control scheme is obtained by

$\mathbf{v} = -\lambda\,\hat{\mathbf{L}}_e^{-1}\,\mathbf{e}$
where $\hat{\mathbf{L}}_e$ is an estimation of the interaction matrix related to $\mathbf{e}$ and can be calculated as

$\hat{\mathbf{L}}_e = \begin{bmatrix} \mathbf{R} & \mathbf{0} \\ \mathbf{0} & \mathbf{L}_{\theta\mathbf{u}} \end{bmatrix}, \qquad \mathbf{L}_{\theta\mathbf{u}} = \mathbf{I}_3 - \frac{\theta}{2}[\mathbf{u}]_\times + \left(1 - \frac{\operatorname{sinc}\theta}{\operatorname{sinc}^2(\theta/2)}\right)[\mathbf{u}]_\times^2 \quad (43)$

where $[\mathbf{u}]_\times$ is the skew-symmetric matrix of $\mathbf{u}$ and sinc is the sinus cardinal defined such that $x\operatorname{sinc}(x) = \sin x$ and $\operatorname{sinc}(0) = 1$. According to equation (43), rotation and translation are decoupled, which allows us to obtain the simple control scheme

$\begin{cases} \boldsymbol{v} = -\lambda\,\mathbf{R}^\top\mathbf{t} \\ \boldsymbol{\omega} = -\lambda\,\theta\mathbf{u} \end{cases} \quad (44)$
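A sketch (ours) of the decoupled law of equation (44), recovering $\theta\mathbf{u}$ from the estimated rotation via the matrix logarithm; the gain value is arbitrary:

```python
import numpy as np

def theta_u(R):
    """Angle/axis vector theta*u of a rotation matrix (log map),
    valid away from theta = pi."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * w / (2.0 * np.sin(theta))

def pbvs_law(R, t, lam=0.5):
    """Decoupled PBVS law of equation (44): v = -lam R^T t, w = -lam theta*u,
    with (R, t) the estimated relative pose and lam the gain."""
    return -lam * R.T @ t, -lam * theta_u(R)

# Example with an arbitrary displacement.
a = 0.4
R = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
v, w = pbvs_law(R, np.array([0.1, -0.05, 0.2]))
print(v, w)
```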
Next, we briefly analyze the stability. The Lyapunov function candidate is defined as

$\mathcal{L} = \frac{1}{2}\|\mathbf{e}\|^2 \quad (45)$

Differentiating the Lyapunov function with respect to time, equation (45) is transformed into

$\dot{\mathcal{L}} = \mathbf{e}^\top\dot{\mathbf{e}} = -\lambda\,\mathbf{e}^\top\mathbf{L}_e\hat{\mathbf{L}}_e^{-1}\mathbf{e} \quad (46)$

If $\mathbf{L}_e\hat{\mathbf{L}}_e^{-1} > 0$ is satisfied and the object plane is always in the camera field of view, then both $\mathcal{L} > 0$ and $\dot{\mathcal{L}} < 0$ for $\mathbf{e} \neq \mathbf{0}$ will be guaranteed. So $\mathbf{e} \to \mathbf{0}$, and the system is asymptotically stable.
Simulation results
We evaluate the control scheme proposed in this article. Considering a vision sensor and an object plane as examples, a co-simulation was conducted on the MATLAB 2020b and CoppeliaSim 4.1 platforms. Several different patterns, that is, an “octopus,” a “whale,” and a “flame” (see Figures 2(a), 4(a), and 5(a)), were considered in the plane. White noise with a fixed standard deviation (in pixels) is added to the image in the following simulations.
Simulation results using camera pose estimation
This part mainly verifies the effectiveness of the camera pose estimation method introduced in the third section, so we assume that the plane parameters of the desired frame are known. We consider the case where the image and object planes with the “octopus” pattern are nonparallel at the desired position. Figure 2(a) shows the green and red contours representing the desired and initial image contours, respectively. Then, the PBVS scheme is used for camera control.
The obtained results are given in Figure 2. They show the good behavior of the camera pose estimation and of the control law. Because the translation component of the visual features is expressed in the desired frame, the obtained camera trajectory is a straight line (see Figure 2(b)). Although the corresponding displacement is very large, we can note in Figure 2(c) and (d) the decoupled and exponential decrease of the six components of the pose error and of the six camera velocity components. Because the interaction matrix is decoupled, the pose errors in Figure 2(c) all converge to negligible values. Even though noise is added, the pose estimation method proposed in this article remains suitable when the image and object planes are nonparallel at the desired position.
Results obtained using PBVS with known plane parameters: (a) “octopus” contour, (b) camera trajectory, (c) errors on pose, and (d) camera velocities. PBVS: position-based visual servoing.
In addition, the method proposed in this article remains suitable when the desired camera is not parallel to the object plane, whereas some methods based on image moments, that is, the methods proposed by Chaumette11 and Tahri and Chaumette,12 do not have this advantage. The following two simple situations illustrate this problem.
The visual features in the method of Tahri and Chaumette12 are given in equation (47).
The “octopus” contour continues to be adopted, but the desired camera orientation is varied. In one situation, the image and object planes are parallel at the desired position; in the other, they are nonparallel (see Figure 3(a) and (b)). The obtained results are given in Figure 3. The method in this article makes both the feature errors and the pose errors converge in both situations. However, the method proposed by Tahri and Chaumette12 is only suitable for the parallel case (see Figure 3(c) and (e)). For the nonparallel case, this method can only converge the feature errors (see Figure 3(d)), while the pose error remains large (see Figure 3(f)). The reason is that the feature used for the rotational velocity about the z-axis is the orientation angle α, which is only available when the planes are parallel. Therefore, the calculation of $\mathbf{R}_\gamma$ proposed in “The calculation of the rotation matrix $\mathbf{R}_\gamma$” section has certain advantages in the nonparallel case.
Results obtained using PBVS with known plane parameters: (a) “octopus” contour in the parallel situation, (b) “octopus” contour in the nonparallel situation, (c) errors on features in the parallel situation, (d) errors on features in the nonparallel situation, (e) errors on pose in the parallel situation, and (f) errors on pose in the nonparallel situation. PBVS: position-based visual servoing.
Simulation results using plane parameters estimation
This part mainly verifies the plane parameters estimation method introduced in the fourth section. We simulated a situation where the desired image and an object plane with the “whale” pattern are nonparallel at the desired position. Twenty-seven percent errors are randomly added to the true plane parameters to obtain the initial estimates. Note that the plane parameters of the current frame are unknown, but those of the desired frame are known. Figure 4(a) shows the green and red contours representing the desired and initial image contours, respectively. The PBVS schemes with and without plane parameters estimation are used to control the cameras, respectively.
The obtained results are given in Figure 4. The camera trajectories controlled by the PBVS schemes with and without plane parameters estimation are shown in Figure 4(b). Due to the early correction of the plane parameter errors (Figure 4(c)), the trajectory obtained by the former method differs slightly from the straight-line trajectory in Figure 2(b). It is easy to see in Figure 4(c) that the plane parameter errors of the former method quickly converge to negligible values, while those of the latter method remain large. Although the corresponding displacement is large, the pose errors obtained by the former method still all converge to small values, whereas the pose errors obtained by the latter do not (see Figure 4(d)). The corresponding camera velocities are shown in Figure 4(e) and (f); there is no oscillation in any component of the camera velocity for either method.
Results obtained using PBVS with errored plane parameters: (a) “whale” contour, (b) camera trajectories, (c) errors on plane parameters, (d) errors on pose, (e) camera velocities with plane parameters estimation, and (f) camera velocities without plane parameters estimation. PBVS: position-based visual servoing.
As a result, the plane parameters estimation method proposed in this article can effectively eliminate the parameter error, and satisfactory results are obtained with the PBVS scheme.
Simulation results compared to the classical method
This part shows the comparison between the method proposed in this article and the classical method proposed by Tahri and Chaumette.12 The latter's visual features are expressed as equation (47), and its interaction matrix is taken as the mean of the interaction matrices at the current and desired configurations, following Malis,28 which largely improves the system behavior. The former's interaction matrix is calculated by equation (43). We consider the case where the image and object planes with the “frame” pattern (see Figure 5(a)) are parallel at the desired position. The corresponding displacement is large.
Firstly, we assume that the planar parameters are known, and then use these two methods for visual servo control, respectively. The obtained results are shown in Figure 5.
Results obtained using two methods with known plane parameters: (a) “frame” contour, (b) camera trajectory, (c) errors on features using classical method, (d) errors on features using our method, (e) errors on pose using classical method, (f) errors on pose using our method, (g) camera velocities using classical method, and (h) camera velocities using our method.
The camera trajectories controlled by the two methods are shown in Figure 5(b). Because our method adopts the PBVS control scheme, the camera trajectory is a straight line; the classical method adopts the IBVS control scheme, so the camera trajectory is a curve. Both methods make the feature errors converge (see Figure 5(c) and (d)) and the pose errors converge (see Figure 5(e) and (f)). However, the camera velocities produced by our method are more stable than those produced by the classical method (see Figure 5(g) and (h)). The reason is that the interaction matrix computed by our method has a better condition number than that of the classical method, because the former is completely decoupled (equation (43)). The boxplots of the condition numbers of the interaction matrices obtained by the two methods with known plane parameters are shown in Figure 6: the maximum, minimum, and mean condition numbers of our method are all smaller than those of the classical method. Therefore, the method proposed in this article has obvious advantages over the classical method when the planar parameters are known.
The boxplots of condition numbers of interaction matrices obtained by two methods with known plane parameters: (a) classical method and (b) our method.
Finally, errors are randomly added to the true plane parameters to obtain erroneous estimates. Then, the two methods are used for visual servo control, respectively. The obtained results are shown in Figure 7.
Results obtained using two methods with errored plane parameters: (a) camera trajectory, (b) errors on plane parameters, (c) errors on features using classical method, (d) errors on features using our method, (e) errors on pose using classical method, (f) errors on pose using our method, (g) camera velocities using classical method, and (h) camera velocities using our method.
The camera trajectories controlled by the two methods are shown in Figure 7(a). Although there are errors in the plane parameters, both methods can converge the visual features (see Figure 7(c) and (d)). However, our method can also converge the pose error, while the classical method retains a noticeable residual pose error (see Figure 7(e) and (f)). This is because the former method includes plane parameters estimation and quickly converges the plane parameter errors, whereas the latter method does not have this advantage (see Figure 7(b)). It can be seen from Figure 7(g) and (h) that the camera velocities calculated by the method proposed in this article are still more stable than those calculated by the classical method. The boxplots of the condition numbers of the interaction matrices obtained by the two methods with errored plane parameters are shown in Figure 8: the maximum, minimum, and mean condition numbers of our method are significantly smaller than those of the classical method. Therefore, the method proposed in this article still has obvious advantages over the classical method when the planar parameters are errored.
The boxplots of condition numbers of interaction matrices obtained by two methods with errored plane parameters: (a) classical method and (b) our method.
Conclusion
This study proposes two new estimation methods based on image moments, which are used to estimate the camera pose and the plane parameters, respectively. The former method, which is a closed-form solution, directly estimates the relative pose of the initial camera and the desired camera, not the pose of the camera relative to the object plane. The latter method uses the 2D rotation, 2D translation, and scale invariant moments to estimate the plane parameters. From both estimation methods, the article builds a PBVS scheme for planar objects. The simulation results have validated our approaches. One advantage of the two estimation methods is that they do not require image processing such as feature extraction, matching, and tracking; another is that the PBVS scheme remains suitable when the image and object planes are nonparallel at the desired position. In addition, the condition number of the interaction matrix calculated by the method proposed in this article is very small, which is very important for visual servo control. However, some particular configurations can make the object plane leave the camera field of view, which affects the stability of the visual servoing control. Future work will be devoted to designing a visual servoing scheme that increases stability and offers better decoupling of the interaction matrix.
Appendix 1
Acknowledgment
The authors thank Shuo Wang for helping with the language of this manuscript and all colleagues for providing different types of help during the preparation of this manuscript.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was funded by National Key R&D Program of China (2016YFC0803000, 2016YFC0803005).
ORCID iDs
Yuhan Chen
Xiao Luo
References
1. Fathian K, Jin J, Wee SG, et al. Camera relative pose estimation for visual servoing using quaternions. Robot Autonom Syst 2018; 107: 45–62.
2. Bakthavatchalam M, Tahri O, Chaumette F. Improving moments-based visual servoing with tunable visual features. In: 2014 IEEE international conference on robotics and automation (ICRA), 31 May–5 June 2014, Hong Kong, China, pp. 6186–6191. IEEE.
3. Espiau B, Chaumette F. A new approach to visual servoing in robotics. IEEE Transact Robot Automat 1992; 8(3): 313–326.
4. Li CL, Cheng MY, Chang WC. Dynamic performance improvement of direct image-based visual servoing in contour following. Int J Adv Robot Syst 2018; 15(1): 1729881417753859.
5. Bueno-López M, Arteaga-Pérez MA. Fuzzy vs nonfuzzy in 2D visual servoing for robot manipulators. Int J Adv Robot Syst 2013; 10(2): 108.
6. Colombo FT, de Carvalho Fontes JV, da Silva MM. A visual servoing strategy under limited frame rates for planar parallel kinematic machines. J Intelligent Robot Syst 2019; 96(1): 95–107.
7. Noh S, Park C, Park J. Position-based visual servoing of multiple robotic manipulators: verification in Gazebo simulator. In: 2020 international conference on information and communication technology convergence (ICTC), 21–23 October 2020, Jeju Island, South Korea, pp. 843–846. IEEE.
8. Xin J, Chen K, Bai L, et al. Depth adaptive zooming visual servoing for a robot with a zooming camera. Int J Adv Robot Syst 2013; 10(2): 120.
9. Bakthavatchalam M, Tahri O, Chaumette F. A direct dense visual servoing approach using photometric moments. IEEE Transact Robot 2018; 34(5): 1226–1239.
10. Chaumette F, Hutchinson S. Visual servoing and visual tracking. In: Siciliano B, Khatib O, Ang MH (eds) Handbook of robotics. Berlin: Springer, 2008, pp. 563–583.
11. Chaumette F. Image moments: a general and useful set of features for visual servoing. IEEE Transact Robot 2004; 20(4): 713–723.
12. Tahri O, Chaumette F. Point-based and region-based image moments for visual servoing of planar objects. IEEE Transact Robot 2005; 21(6): 1116–1127.
13. Tahri O, Tamtsia AY, Mezouar Y, et al. Visual servoing based on shifted moments. IEEE Transact Robot 2015; 31(3): 798–804.
14. He Z, Wu C, Zhang S, et al. Moment-based 2.5-D visual servoing for textureless planar part grasping. IEEE Transact Indust Electr 2019; 66(10): 7821–7830.
15. Nguyen TT, Nguyen TP, Bouchara F, et al. Momental directional patterns for dynamic texture recognition. Comput Vision Image Understand 2020; 194: 102882.
16. Viriri S, Tapamo J. Iris pattern recognition based on cumulative sums and majority vote methods. Int J Adv Robot Syst 2017; 14(3): 1729881417703931.
17. Tahri O, Chaumette F. Complex objects pose estimation based on image moment invariants. In: Proceedings of the 2005 IEEE international conference on robotics and automation, 18–22 April 2005, Barcelona, Spain, pp. 436–441. IEEE.
18. Hicham H, Tahri O, Benseddik HE. Rotation estimation: a closed-form solution using spherical moments. Sensors 2019; 19(22): 4958.
19. Sun Y, Xu B, Wang X. Pseudo fourth-order moment based bearing fault feature reconstruction and diagnosis. ISA Transactions 2021; 118: 238–246.
20. Mukundan R, Ramakrishnan KR. Moment functions in image analysis—theory and applications. Singapore: World Scientific, 1998.
21. Holmgren DE. An invitation to 3-D vision: from images to geometric models. Hoboken, NJ: John Wiley & Sons, Ltd, 2010.
22. Forsyth D, Ponce J. Computer vision: a modern approach. 2nd ed. Hoboken, NJ: Prentice Hall, 2014.
23. Chaumette F, Rives P, Espiau B. Classification and realization of the different vision-based tasks. Singapore: World Scientific, 1993.
24. Horn BKP. Robot vision. Cambridge, MA: MIT Press, 1986.
25. Tahri O. Application des moments à l'asservissement visuel et au calcul de pose. Thèse de doctorat, Université de Rennes, 2004.
28. Malis E. Improving vision-based control using efficient second-order minimization techniques. In: IEEE international conference on robotics and automation (ICRA'04), 26 April–1 May 2004, New Orleans, LA, USA, volume 2, pp. 1843–1848. IEEE.