
An Improvement of Pose Measurement Method Using Global Control Points Calibration

  • Changku Sun,

    Affiliation State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin, China

  • Pengfei Sun,

    Affiliation State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin, China

  • Peng Wang

    wang_peng@tju.edu.cn

    Affiliations State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin, China, Science and Technology on Electro-optic Control Laboratory, Luoyang Institute of Electro-optic Equipment, Luoyang, China

Abstract

During the last decade, pose measurement technologies have gained increasing interest in computer vision. Vision-based pose measurement methods have been widely applied in complex environments. However, measurement error remains a problem in these applications, and it grows rapidly with increasing measurement range. In order to meet the demand for high accuracy over a large measurement range, a measurement error reduction solution for vision-based pose measurement, called Global Control Point Calibration (GCPC), is proposed. GCPC is an optimization of existing visual pose measurement methods. Its core is to divide the measurement error into two types: the control point error and the control space error. Then, by creating global control points and performing error calibration of the object pose, the two errors are processed: the control point error is eliminated and the control space error is minimized. GCPC is tested on a moving target in the camera’s field of view. The results show an RMS error of 0.175° in yaw angle, 0.189° in pitch angle, and 0.159° in roll angle, demonstrating that GCPC works effectively and stably.

Introduction

Detecting the rigid transformation of an object with known geometry from images, namely pose measurement, is one of the central problems in aircraft in-flight refueling, spacecraft docking, and comprehensive helmet mounted displays [1–3]. In aircraft control during aerial refueling, it is commonly used to provide accurate relative position measurements to the controller of an unmanned air vehicle [4]. In spacecraft docking, pose measurement is central to positioning the docking assembly and is accomplished with the assistance of artificial or natural markers on the spacecraft [5]. In comprehensive helmet mounted displays, it plays a significant role in combining the pose of the helmet with the direction of the weapon or sensor [6].

There are many pose measurement technologies, such as magnetic, ultrasonic, and mechanical ones [7, 8]. Vision-based pose measurement technology stands out for its excellent anti-jamming capability and adaptability in harsh environments.

According to the type of tracked target, vision-based tracking technology is divided into two categories, distinguished by their markers [3]. One type is the planar marker, which uses perpendicular line segments, parallel line segments, intersections of adjacent lines, and the asymmetry of a cut-off corner as the tracked target [9, 10]. Planar markers are rarely used in practical applications, because they require both high manufacturing precision and rigid geometric constraints; hence, natural markers on objects replace manual planar markers. The other type is the point marker, where each marker provides one point correspondence between the scene and the image. Point markers such as circular markers are widely used, because the appearance of circular patterns is relatively invariant under perspective distortion and because their centroids provide stable 2D positions that can easily be determined with sub-pixel accuracy. The widely used pose measurement method based on point markers is known in the literature as the Perspective-n-Point (PnP) problem, whose objective is to measure the object pose from an image of known point markers [11]. Many papers study the PnP problem, and the solutions are classified into two types: polynomial methods and iterative methods [12–14]. The former formulates a fourth- to eighth-order polynomial system from three to five correspondences of the observed points, while the latter treats the PnP problem as the optimization of an affine-invariant cost function. Tests in practical applications confirm that both types of methods need improved precision. A deeper analysis of the pose measurement process reveals a systematic error. Michael D. Grossberg and Shree K. Nayar identify an object space error through an analysis of linear perspective projection [15]; the object space error is defined as the distance between a world point and the projection of that point onto the line of sight. In [16], the object space error is introduced into the PnP problem: Gerald Schweighofer and Axel Pinz recast the PnP problem as the minimization of an objective function over the given world points and their measurements in a camera. Furthermore, Hatem Hmam and Jijoong Kim formulate the object space error as a positive semidefinite relaxation (SDR) program and employ a convex relaxation to solve it [17]. Other papers study the image error of point markers, which stems from the difference between the real centroid and the ideal centroid. Three methods of feature point tracking are proposed in [18] and compared in terms of accuracy and stability, but their impact on pose measurement is ignored. Bart Ons et al. identify this adverse impact and propose a computational model of visual anisotropy [19]: the illusory orientation bias of three Gaussian luminance ellipses is discussed, and it is proved that the extracted center of a bright ellipse and its physical angular coordinates do not coincide.

According to the methods above, the object space error is produced by approximating the theoretical imaging model with the perspective projection model, while the image error is influenced by the brightness distribution and the edges of the feature points. Current papers focus on reducing the measurement error through more accurate imaging model parameters or better centroid extraction [15–19]. However, those two approaches are not powerful enough to eliminate the discrepancy. As the sources and influencing factors of the two errors vary, this paper redistributes them without regard to their physical nature, redefining them as two types: the control point error and the control space error. Then, by creating global control points and performing error calibration of the object pose, the two errors are processed. The control point error is the measurement error between the initial reference point and the global control point, while the control space error is the measurement error between the measuring point and the corresponding global control point. The first error is the primary source of the measurement error and is eliminated by using standard reference data of the moving space. The other error is reduced by decreasing the control range of each global control point. Based on this analysis, the Global Control Point Calibration (GCPC) is proposed to create the global control points and calibrate the measurement error of the object pose.

Description of System

The schematic diagram of the proposed GCPC system for pose measurement is shown in Fig 1. The system consists of (1) a target, (2) a three-axis turntable, (3) a turntable control box, (4) a camera, and (5) a computer.

The devices work in the following way: the target is fixed on the three-axis turntable, which is controlled by the turntable control box; the image of the rotating target is captured by the camera while the three standard Euler angles of the turntable are read by the control box; both sets of data are transmitted to the computer simultaneously.

The initial coordinates of the feature points are calculated when the image of the rotating target is transmitted to the computer. The algorithm used for the point coordinates is Pose from Orthography and Scaling with Iterations (POSIT) [20]. POSIT is a classical algorithm that has been widely adopted in academia, industry, and defense applications [21, 22].

The principle of POSIT is shown in Fig 2. Each feature point has its own depth $Z_m$, while the orthographic projecting points share the same average depth $\bar{Z}$. The cost function is formed as:

$\varepsilon_x = x'_m - x_m = f\frac{X_m}{\bar{Z}} - f\frac{X_m}{Z_m}, \quad \varepsilon_y = y'_m - y_m = f\frac{Y_m}{\bar{Z}} - f\frac{Y_m}{Z_m}$ (1)

where f is the focal length of the camera and $\bar{Z}$ is the average depth of the feature points (m = 0, 1, 2, 3). In scaled orthographic projection, the image of a point is $(x'_m, y'_m)$, while in perspective projection the image of the same point is $(x_m, y_m)$. The coordinates of the feature points are calculated by Eq (2) once $\varepsilon_x$ and $\varepsilon_y$ fall below a threshold value or the number of iterations reaches its limit:

$^{c}P_m = R\,^{o_j}P_m + T$ (2)

where $o_j$ is the object coordinate system and $(R, T)$ is the pose that corresponds to the minimum of $\varepsilon_x$ and $\varepsilon_y$.

Fig 2. Geometric interpretation of POSIT.

The pinhole camera has its center of projection at Oc, its optical axis aligned with Oc-Zc, and the image plane uv at a distance f from Oc. The origin of the object coordinate system is at the feature point $P_0$; m = 0, 1, 2, 3 indexes the feature points, i is the number of the object's position, c denotes the camera coordinate system, $_mI_i$ is the corresponding image point set, and its scaled orthographic counterpart is the corresponding orthographic projecting point set.

https://doi.org/10.1371/journal.pone.0133905.g002
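To make the iteration concrete, the following is a minimal sketch of classical POSIT [20] in Python/NumPy, assuming four or more non-coplanar feature points whose first point is the object origin, and image coordinates already centered on the principal point. The function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def posit(object_pts, image_pts, f, n_iter=20, tol=1e-6):
    """object_pts: (n,3), first row is the reference point at the object origin.
    image_pts: (n,2) image coordinates with the principal point subtracted."""
    A = object_pts[1:] - object_pts[0]   # vectors from P0 to each other point
    B = np.linalg.pinv(A)                # (3, n-1) pseudo-inverse, reused every iteration
    eps = np.zeros(len(A))               # eps_m = (P0->Pm . k) / Z0, zero for pure SOP
    x0, y0 = image_pts[0]
    for _ in range(n_iter):
        # image coordinates corrected by the current epsilons
        xs = image_pts[1:, 0] * (1.0 + eps) - x0
        ys = image_pts[1:, 1] * (1.0 + eps) - y0
        I, J = B @ xs, B @ ys            # scaled first two rows of the rotation
        s = (np.linalg.norm(I) + np.linalg.norm(J)) / 2.0   # scale s = f / Z0
        i_row = I / np.linalg.norm(I)
        j_row = J / np.linalg.norm(J)
        k_row = np.cross(i_row, j_row)
        Z0 = f / s
        new_eps = A @ k_row / Z0
        if np.max(np.abs(new_eps - eps)) < tol:
            eps = new_eps
            break
        eps = new_eps
    R = np.vstack([i_row, j_row, k_row])
    T = np.array([x0, y0, f]) * (Z0 / f)  # translation of P0 in the camera frame
    return R, T
```

The epsilon-correction loop is what separates POSIT from a single scaled orthographic (POS) solution: each pass refines the depths until the SOP and perspective images agree.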

The Measurement Reference of GCPC

Through Eqs (1) and (2), the spatial coordinates $^{c}P_m$ of the feature points, expressed in the camera coordinate system, are obtained. The following step is to transform $^{c}P_m$ into $^{ms}P_m$, the spatial coordinates based on the measurement reference. Furthermore, both the creation of the global control points and the error calibration are conducted in the measurement reference. The relationship of the measurement reference to the other coordinate systems is shown in Fig 3.

Fig 3. Coordinate systems of object moving.

Oms-XmsYmsZms represents the measurement reference.

https://doi.org/10.1371/journal.pone.0133905.g003

The relationship between Oc-XcYcZc and Oms-XmsYmsZms is described as:

$^{ms}P = R\,^{c}P + T$ (3)

The matrix R and the vector T are described as:

$R = \begin{bmatrix} h_1^{T} \\ h_2^{T} \\ h_3^{T} \end{bmatrix}, \quad T = (u, v, w)^{T}$ (4)

where h1 and h2 are respectively the unit direction vectors of oms-xms and oms-yms, h3 = h1 × h2, and (u, v, w) is the translation vector T.

The point sets that rotate only around a single axis are selected to establish oms-xms and oms-yms respectively; x denotes the number of the object's position rotating around oms-xms, and y the number rotating around oms-yms. Substituting the point sets into Eq (5) [23, 24]:

(5a) $ax + by + dz = 1$, (5b) $(x - e)^2 + (y - g)^2 = r^2$ (5)

where Eq 5(a) is the plane fitting equation, whose coefficients (a, b, d) give the direction vector of the plane, and Eq 5(b) is the circle fitting equation, where the point (e, g) is the anchor point of the axis located in the fitting plane and r is the circle radius. The unit vectors of oms-xms and oms-yms are determined through Eq (5). The fitting process is described in Fig 4: the two dotted circles are determined by Eq 5(b), and they lie in the planes determined by Eq 5(a), respectively.
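As a concrete illustration of Eq (5), the sketch below recovers one axis of the measurement reference from a single-axis trajectory, assuming a hypothetical (n, 3) array pts of one feature point's positions. The plane normal comes from an SVD fit and the anchor point from a linear least-squares circle fit; this is one standard formulation [23, 24], not necessarily the authors' exact one.

```python
import numpy as np

def fit_axis(pts):
    centroid = pts.mean(axis=0)
    # Plane fit (Eq 5a): the normal is the singular vector of the smallest
    # singular value of the centered trajectory.
    _, _, Vt = np.linalg.svd(pts - centroid)
    normal = Vt[2]                        # unit direction vector of the axis
    # Project the trajectory into an in-plane 2D basis (u, v).
    u, v = Vt[0], Vt[1]
    xy = np.column_stack([(pts - centroid) @ u, (pts - centroid) @ v])
    # Circle fit (Eq 5b), linearized: x^2 + y^2 = 2*e*x + 2*g*y + c.
    A = np.column_stack([2 * xy, np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (e, g, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + e**2 + g**2)          # circle radius
    anchor = centroid + e * u + g * v     # anchor point of the axis in 3D
    return normal, anchor, r
```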

Fig 4. The measurement reference of GCPC.

The parameters (ax, bx, dx) and (ex, gx) correspond to the axis oms-xms, while (ay, by, dy) and (ey, gy) correspond to the axis oms-yms. The two dotted circles are determined by the two solid arcs respectively.

https://doi.org/10.1371/journal.pone.0133905.g004

As the point sets rotating around the axes oms-xms, oms-yms, and oms-zms theoretically share a single center of rotation, and their trajectories are non-coplanar arcs, a sphere fitting is adopted to describe the arcs. Substituting the three point sets into the following sphere fitting equation, the sphere center is the shared center of rotation [25]:

$(x - l)^2 + (y - n)^2 + (z - q)^2 = h^2$ (6)

where (l, n, q) is the sphere center and h is the sphere radius.
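A minimal sketch of a linear least-squares solution of Eq (6), assuming a hypothetical (n, 3) array pts that pools the three single-axis trajectories; the linearization below is a common choice, not necessarily the authors' exact solver.

```python
import numpy as np

def fit_sphere(pts):
    # Expand (x-l)^2 + (y-n)^2 + (z-q)^2 = h^2 into a linear system:
    # 2*l*x + 2*n*y + 2*q*z + (h^2 - l^2 - n^2 - q^2) = x^2 + y^2 + z^2
    A = np.column_stack([2 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    (l, n, q, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    h = np.sqrt(c + l**2 + n**2 + q**2)   # sphere radius
    return np.array([l, n, q]), h         # shared center of rotation, radius
```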

Once the measurement reference Oms-XmsYmsZms is established, the coordinates $^{ms}P_m$ of the feature points are obtained.

The Principle of GCPC

The information of the object pose in the measurement reference is formally defined as:

$FM_i = \{I_i, x_i\}$ (7)

where the object pose is represented by Ii and xi, which are respectively the image feature and the standard pose vector. According to this expression, GCPC is organized as the overview in Fig 5.

Fig 5. The schematic diagram of GCPC.

FMi keeps control of the space around it. ki and t respectively denote the indices of the control points and the measuring point. yji is the measured value of the pose vector between FMi and FMj.

https://doi.org/10.1371/journal.pone.0133905.g005

The control point error is the measurement error between the global control point FMi and the reference point, while the control space error is the measurement error between the measuring point FMt and the corresponding control point FMi. The measured pose vector is obtained in two ways:

$\hat{x}_t = y_t$ (8)

$\hat{x}_t = x_i + y_t^{i}$ (9)

In Eq (8), $y_t$ is the directly measured value of xt, and it contains both the control point error and the control space error. In Eq (9), $y_t^{i}$ is the measured value of the pose vector between FMi and FMt, which contains only the control space error of FMi. $x_i$ is the standard pose vector between the reference point and the control point FMi. GCPC optimizes the object pose FMt by using Eq (9).
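A toy numeric illustration of Eqs (8) and (9), with made-up values: one pose component of a measuring point FMt is shown, where the direct route picks up both error types but the relative route through the control point picks up only the control space error.

```python
xt_true = 30.0   # true pose component of FMt (unknown in practice), degrees
xi_std  = 25.0   # standard pose vector of control point FMi (calibrated reference)
e_cp    = 0.8    # control point error accumulated from the reference point
e_cs    = 0.1    # control space error inside the small controlled fragment

xt_direct = xt_true + e_cp + e_cs       # Eq (8): direct measurement, both errors
y_ti      = (xt_true - xi_std) + e_cs   # measured pose vector between FMi and FMt
xt_gcpc   = xi_std + y_ti               # Eq (9): only the control space error remains
print(xt_direct, xt_gcpc)               # 30.9 vs 30.1
```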

The Implementation of GCPC

Feature points at different positions fill the moving space, and their spatial distribution is simulated. Part of the results is displayed in Fig 6. Each intersection point on the curve mesh corresponds to a feature point. The space enclosed by the mesh, which is filled with the feature points, is parameterized by the angle information of the feature points.

Fig 6. The spatial distribution of feature point.

(a) Roll angle: 0°~10°; (b) roll angle: 10°~20°; (c) roll angle: 20°~30°. For convenience of observation, the simulated mesh has been divided into three parts by the roll angle. The other two angles of the mesh are the yaw angle from 0° to 60° and the pitch angle from 0° to 60°. The angle between adjacent points is 10°.

https://doi.org/10.1371/journal.pone.0133905.g006

As the feature points in Fig 6 rotate around the three axes oms-xms, oms-yms, and oms-zms simultaneously, their trajectories are too complex to describe analytically. Fig 6 therefore differs from Fig 4: only if the trajectories in Fig 6 are scaled down do they turn into non-coplanar arcs. The biggest difference between Figs 4 and 6 is how the feature points are used. Fig 4 focuses on the solid arcs that are part of the dotted circles, while Fig 6 focuses on the moving space that is filled with the feature points. The moving space in Fig 6 is subdivided into small fragments by the curve mesh. The central point of each fragment is selected as the control point, and the measuring points are constrained by the control point in the same fragment. The implementation of GCPC then follows two steps: the creation of the control points and the calibration of the measuring points.

The Creation of Global Control Points

Given a set of feature points $^{ms}P_{i,j,k}$, where (i, j, k) are respectively the indices of the object's position around oms-xms, oms-yms, and oms-zms, a sparse point set MI is selected as the initialized control points. As the moving space is parameterized by the angle information of the feature points, the initialized control points are equally distributed in the angle space. The angle-based space is divided as shown in Fig 7, and a sketch of this initialization is given below.
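A minimal sketch of laying MI out on a uniform angle grid. The 20° interval and the index ranges i, j = -2..2, k = -1..1 are taken from the experiment section; the exact angular offsets are an assumption for illustration.

```python
import numpy as np
from itertools import product

# hypothetical 20-degree grid matching the MI of the experiment section
yaw   = np.arange(-40, 41, 20)    # i = -2..2
pitch = np.arange(-40, 41, 20)    # j = -2..2
roll  = np.arange(-20, 21, 20)    # k = -1, 0, 1
MI = [np.array(p, float) for p in product(yaw, pitch, roll)]  # 5*5*3 = 75 points
```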

Fig 7. The spatial distribution of initialized control points.

The range of the mesh is the same as that of Fig 6, and it is an eighth of the moving space, which is symmetrical around the origin of the measurement reference. The initialized control points ‘○’ take control of the surrounding space, which is partitioned by the adjacent control points.

https://doi.org/10.1371/journal.pone.0133905.g007

With the assistance of adjacent points, the points in MI divide the moving space into ideal subspaces, and the measuring point in an ideal subspace is calibrated by the corresponding control point. However, the analysis of the measurement reference shows that a systematic error exists in Oms-XmsYmsZms: the axis fitting of oms-xms and oms-yms is inaccurate, and the three axes are not perfectly perpendicular. An angle filter is introduced to eliminate the impact of the inaccurate measurement reference. Each point in the moving space corresponds to a pose vector (msα, msβ, msγ), and the angle between a control point and a measuring point is calculated as:

$\theta = \arccos\dfrac{x_{i,j,k} \cdot x_t}{\|x_{i,j,k}\|\,\|x_t\|}$ (10)

where t is the index of the measuring point. The pose vector is defined as follows:

$x = ({}^{ms}\alpha, {}^{ms}\beta, {}^{ms}\gamma)^{T}$ (11)

The matrix msRi,j,k is the rotation matrix of the object at position (i, j, k) expressed in the measurement reference:

$^{ms}R_{i,j,k} = R\,^{c}R_{i,j,k}$ (12)

and it is turned into Euler angles through Eq (13):

$^{ms}R_{i,j,k} = \begin{bmatrix} C\beta C\gamma & S\alpha S\beta C\gamma - C\alpha S\gamma & C\alpha S\beta C\gamma + S\alpha S\gamma \\ C\beta S\gamma & S\alpha S\beta S\gamma + C\alpha C\gamma & C\alpha S\beta S\gamma - S\alpha C\gamma \\ -S\beta & S\alpha C\beta & C\alpha C\beta \end{bmatrix}$ (13)

where C = cos, S = sin, and (α, β, γ) is an abbreviation for (msα, msβ, msγ).

It is assumed that the total number of points in MI is M and the total number of measuring points is N. There are M angles θ corresponding to each measuring point, and only the control point with the minimum θ is selected as its optimized control point. Under this assumption, there are N point pairs of optimized control point and measuring point. The frequency of occurrence of the optimized control points is counted, and the cutoff frequency of the angle filter is N/M. Candidate control points with a lower frequency of occurrence are filtered out, and the filtered control point set MII is established.
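A minimal sketch of the angle filter, assuming the vector-angle form of Eq (10) reconstructed above and hypothetical lists of control and measuring pose vectors; the function names are illustrative.

```python
import numpy as np
from collections import Counter

def pose_angle(p, q):
    # Eq (10)-style angle (degrees) between two pose vectors (yaw, pitch, roll)
    p, q = np.asarray(p, float), np.asarray(q, float)
    c = p @ q / (np.linalg.norm(p) * np.linalg.norm(q))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def angle_filter(control_pts, measuring_pts):
    M, N = len(control_pts), len(measuring_pts)
    hits = Counter()
    for mp in measuring_pts:
        # the nearest control point (minimum theta) controls this measuring point
        best = min(range(M), key=lambda i: pose_angle(control_pts[i], mp))
        hits[best] += 1
    cutoff = N / M                      # cutoff frequency of the angle filter
    return [control_pts[i] for i in range(M) if hits[i] >= cutoff]
```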

The control space of each point in MII is extended when the candidate control points are removed. Another subdivision of the control space is therefore employed to improve the calibration capability of the points in MII. This second division optimizes the control space by decreasing the angle between adjacent control points, forming the subdivided control point set MIII. The angle filter is then reapplied to MIII, and the final global control point set MIV is created.

The Calibration of Measuring Point

The calibration process is separated into two steps: the determination of the control point and measuring point pair, and the calibration of the pose vector. According to Eq (10), the point pair FMt and FMi is determined by the minimum of θ. Substituting the standard pose vector of the control point into the following equation:

$\hat{x}_t = x_i + y_t^{i}$ (14)

where $\hat{x}_t$ is the calibrated pose vector of the measuring point FMt.
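Reusing the hypothetical pose_angle() from the angle-filter sketch above, the calibration step can be sketched as follows; how the relative measurements y_t^i are gathered per control point is assumed, not specified by the paper.

```python
import numpy as np

def calibrate(measuring_pose_meas, control_poses_std, rel_meas):
    # Step 1: pick the control point with the minimum Eq (10) angle.
    i = min(range(len(control_poses_std)),
            key=lambda i: pose_angle(control_poses_std[i], measuring_pose_meas))
    # Step 2, Eq (14): calibrated pose = standard pose of the matched control
    # point + measured pose vector between that control point and FMt.
    return np.asarray(control_poses_std[i]) + np.asarray(rel_meas[i])
```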

The Measurement Procedure

The measurement procedure of GCPC is shown in Fig 8. GCPC for pose measurement is divided into three steps: the establishment of the measurement reference, the creation of the global control points, and the calibration of the measuring points. The first two steps run only once, when the moving space is established.

Results and Discussion

Experiment system

For the experiment with real data, an infrared camera is used, and the camera’s field angle is 80°. The internal camera parameters are calibrated [26, 27], and the results are shown in Table 1.

Infrared LEDs are selected as positioning feature points, and the relative spatial positions of the four feature points are shown in Table 2. Small holes are drilled in the support board of the target during its production, and the LEDs are recessed into the holes. All devices are located on the experiment platform. Fig 9 shows the practical system in the laboratory.

The global control points of GCPC

The range of the moving space is -50° to 50° in yaw angle, -50° to 50° in pitch angle, and -30° to 30° in roll angle. The sampling interval is 5° at each DOF, and there are 5118 target images within the camera’s field of view.
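A quick check of the sampling grid size, assuming a full 5° grid over the stated ranges: 21 × 21 × 13 = 5733 candidate poses, of which the 5118 falling within the camera's 80° field of view are kept.

```python
import numpy as np

grid = np.mgrid[-50:51:5, -50:51:5, -30:31:5]  # yaw, pitch, roll in degrees
print(grid[0].size)                            # 5733 candidate poses before the FOV cut
```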

The images with single DOF rotation can be used to establish the measurement reference. The parameters of measurement reference are shown in Table 3.

MI = {msPi,j,k} (i = -2, -1, 0, 1, 2; j = -2, -1, 0, 1, 2; k = -1, 0, 1) is selected as the initialized control point set, with an interval angle of 20° at each DOF. Part of MI is beyond the camera’s field of view, and the cutoff frequency for MI is 125. MI is then filtered, and its frequency of occurrence is shown in Fig 10.

Fig 10. The frequency of occurrence of MI.

Only the filtered control points are displayed. The red line is the cutoff frequency.

https://doi.org/10.1371/journal.pone.0133905.g010

Fig 10 demonstrates that the control points succeed in controlling the space around them, and the uneven distribution is clearly affected by the systematic error of the measurement reference. The control points in Fig 10 constitute the control point set MII. According to the statistical results, the control space of the points in MII has expanded considerably, so the points in MII are subdivided by decreasing the interval angle to 10°. The new control points are grouped into the control point set MIII, whose cutoff frequency is 466. The angle filter is performed on MIII, and the statistical results are shown in Fig 11.

Fig 11. The frequency of occurrence of MIII.

Only the filtered control points are displayed. The red line is the cutoff frequency.

https://doi.org/10.1371/journal.pone.0133905.g011

From the frequency of occurrence in Fig 11, the final control point set MIV is established. The point set MIV is formed by the points in Table 4.

The pose measurement results which respectively correspond to the four control point sets MI, MII, MIII, and MIV are compared in the next section.

Pose measurement results

To demonstrate the role of GCPC, pose measurement of the target is performed over the whole moving space, which has been established in Oms-XmsYmsZms. The gathered data are fed into POSIT and GCPC, and the pose measurement results are analyzed. The control point sets MI, MII, MIII, and MIV are adopted by GCPC in turn. The root mean square (RMS) errors of GCPC and POSIT are displayed in Fig 12.

Fig 12. The RMS error of the results.

The x axis is used to distinguish the yaw angle, pitch angle, and roll angle.

https://doi.org/10.1371/journal.pone.0133905.g012

By comparing the results of GCPC and those of POSIT, it is obvious that the measurement accuracy of GCPC is higher than that of POSIT in the whole moving space. The comparisons of the four control point sets demonstrate that the creation of global control points is effective.

To examine the error distribution of GCPC, the measuring points are grouped onto angle surfaces determined by the three angles. The first two angles range from -50° to 50° in yaw and -50° to 50° in pitch, while the third angle varies from -30° to 30°. The RMS errors over the angle surfaces are shown in Fig 13.

Fig 13. The RMS error of the surfaces of angle.

The x axis represents the third angle.

https://doi.org/10.1371/journal.pone.0133905.g013

The RMS error of GCPC is far smaller than that of POSIT: the former is stable and reduced to 0.2°, while the latter fluctuates along the roll angle and reaches 1.2°. The steep trend of POSIT demonstrates that the measurement error mentioned earlier exists in the pose measurement process, and the gentle trend of GCPC proves that the measurement error is calibrated successfully over the whole moving space.

The above data analysis is based on the RMS error; in addition, the 100 measuring points with the largest errors are selected. The optimization of these measuring points by GCPC is shown in Fig 14.

Fig 14 makes it evident that the measurement error is reduced by GCPC. The control point error, the primary source of the measurement error, is eliminated successfully, and the residual error curve fluctuating around zero is caused by the control space error.

Conclusions

In this paper, GCPC is developed to reduce the pose measurement error. The control point error is identified as the primary source of the measurement error and is calibrated by the corresponding global control point. The control space error has less impact on the pose measurement and is minimized by the subdivision of the control space. Both the creation of the global control points and the calibration of the pose measurement have been confirmed by experiment, and the results show that the pose measurement process is calibrated successfully by the global control points. In summary, GCPC improves the accuracy of pose measurement.

Supporting Information

S1 Dataset. Camera captured dataset.

This Excel file contains the captured data used as the basis for the pose measurement solution described in the manuscript. The data are given as image coordinates.

https://doi.org/10.1371/journal.pone.0133905.s001

(XLSX)

Author Contributions

Conceived and designed the experiments: CS PS. Performed the experiments: PS. Analyzed the data: PW. Contributed reagents/materials/analysis tools: CS PS. Wrote the paper: PS. Modified the manuscript: PW.

References

  1. Mao W, Eke F. A survey of the dynamics and control of aircraft during aerial refueling. Nonlinear Dynamics and Systems Theory. 2008; 8(4): 375–388.
  2. Murphy-Chutorian E, Trivedi MM. Head pose estimation in computer vision: a survey. IEEE Trans Pattern Anal Mach Intell. 2009; 31(4): 607–626.
  3. Lepetit V, Fua P. Monocular model-based 3D tracking of rigid objects: a survey. Foundations and Trends in Computer Graphics and Vision. 2005; 1(1): 1–89.
  4. Valasek J, Gunnam K, Kimmett J, Tandale MD, Junkins JL, Hughes D. Vision-based sensor and navigation system for autonomous air refueling. J Guid Control Dynam. 2005; 28(5): 979–989.
  5. Pan H, Huang JY, Qin SY. Relative pose estimation under monocular vision in rendezvous and docking. Applied Mechanics and Materials. 2014; 433: 799–805.
  6. Valenti R, Sebe N, Gevers T. Combining head pose and eye location information for gaze estimation. IEEE Trans Image Process. 2012; 21(2): 802–815. pmid:21788191
  7. Kim SJ, Kim BK. Dynamic ultrasonic hybrid localization system for indoor mobile robots. IEEE Trans Ind Electron. 2013; 60(10): 4562–4573.
  8. Schall G, Wagner D, Reitmayr G, Taichmann E, Wieser M, Schmalstieg D, et al. Global pose estimation using multi-sensor fusion for outdoor augmented reality. ISMAR 2009: Proceedings of the 8th International Symposium on Mixed and Augmented Reality; 2009; Santa Barbara, CA. IEEE; 2009. p. 153–162.
  9. Xiaopeng C, Rui L, Wang X, Ye T, Qiang H. A novel artificial landmark for monocular global visual localization of indoor robots. ICMA 2010: Proceedings of the IEEE International Conference on Mechatronics and Automation; 2010; Xi'an, China. IEEE; 2010. p. 1314–1319.
  10. Yun L, Yimin C, Renmiao L, Deyi M, Qiming L. A novel marker system in augmented reality. ICCSNT 2012: Proceedings of the 2nd International Conference on Computer Science and Network Technology; 2012; Changchun, China. IEEE; 2012. p. 1413–1417.
  11. Fan B, Du Y, Cong Y. Robust and accurate online pose estimation algorithm via efficient three-dimensional collinearity model. IET Comput Vis. 2013; 7(5): 382–393.
  12. Wu Y, Hu Z. PnP problem revisited. J Math Imaging Vis. 2006; 24(1): 131–141.
  13. Hesch JA, Roumeliotis SI. A direct least-squares (DLS) method for PnP. ICCV 2011: Proceedings of the 13th International Conference on Computer Vision; 2011; Barcelona, Spain. IEEE; 2011. p. 383–390.
  14. Tang J, Chen W-S, Wang J. A novel linear algorithm for P5P problem. Appl Math Comput. 2008; 205(2): 628–634.
  15. Grossberg MD, Nayar SK. A general imaging model and a method for finding its parameters. ICCV 2001: Proceedings of the 8th IEEE International Conference on Computer Vision; 2001; Vancouver, Canada. IEEE; 2001. p. 108–115.
  16. Schweighofer G, Pinz A. Globally optimal O(n) solution to the PnP problem for general camera models. BMVC 2008: Proceedings of the British Machine Vision Conference; 2008. p. 1–10.
  17. Hmam H, Kim J. Optimal non-iterative pose estimation via convex relaxation. Image Vision Comput. 2010; 28(11): 1515–1523.
  18. Pan H, Huang J, Qin S. High accurate estimation of relative pose of cooperative space targets based on measurement of monocular vision imaging. Optik. 2014; 125(13): 3127–3133.
  19. Ons B, Verstraelen L, Wagemans J. A computational model of visual anisotropy. PLoS One. 2011; 6(6): e21091. pmid:21738607
  20. Dementhon D, Davis L. Model-based object pose in 25 lines of code. Int J Comput Vision. 1995; 15(1–2): 123–141.
  21. David P, DeMenthon D, Duraiswami R, Samet H. SoftPOSIT: simultaneous pose and correspondence determination. Int J Comput Vision. 2004; 59(3): 259–284.
  22. Gramegna T, Venturino L, Cicirelli G, Attolico G, Distante A. Optimization of the POSIT algorithm for indoor autonomous navigation. Robot Auton Syst. 2004; 48(2–3): 145–162.
  23. Al-Sharadqah A, Chernov N. Error analysis for circle fitting algorithms. Electron J Stat. 2009; 3: 886–911.
  24. Ahn SJ, Rauh W, Warnecke H-J. Least-squares orthogonal distances fitting of circle, sphere, ellipse, hyperbola, and parabola. Pattern Recogn. 2001; 34(12): 2283–2303.
  25. Lukács G, Martin R, Marshall D. Faithful least-squares fitting of spheres, cylinders, cones and tori for reliable segmentation. In: Burkhardt H, Neumann B, editors. Computer Vision—ECCV'98. Freiburg, Germany: Springer; 1998.
  26. Tsai RY. An efficient and accurate camera calibration technique for 3D machine vision. CVPR 1986: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 1986; Miami, FL. IEEE; 1986.
  27. Wang J, Shi F, Zhang J, Liu Y. A new calibration model of camera lens distortion. Pattern Recogn. 2008; 41(2): 607–615.