
Algebraic Error Based Triangulation and Metric of Lines

  • Fuchao Wu,

    Affiliation National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, P.O. Box 2728, Beijing, 100190, China

  • Ming Zhang,

    zhangming@nlpr.ia.ac.cn

    Affiliation National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, P.O. Box 2728, Beijing, 100190, China

  • Guanghui Wang,

    Affiliation Department of Electrical Engineering & Computer Science, University of Kansas, 1520 West 15th Street, Lawrence, KS, 66045–7608, United States of America

  • Zhanyi Hu

    Affiliation National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, P.O. Box 2728, Beijing, 100190, China

Abstract

Line triangulation, a classical geometric problem in computer vision, is to determine the 3D coordinates of a line from its 2D image projections in two or more views of cameras with known projection matrices. Compared to point features, line segments are more robust to matching errors, occlusions, and image uncertainties. Beyond the triangulation itself, a suitable metric is also needed to evaluate the 3D error of line triangulation. In this paper, the line triangulation problem is investigated using the theory of Lagrange multipliers. The main contributions include: (i) based on the Lagrange multipliers theory, a formula to compute the Plücker correction is provided, and from this formula a new linear algorithm, LINa, is proposed for line triangulation; (ii) two optimal algorithms, OPTa-I and OPTa-II, are proposed to minimize the algebraic error; and (iii) two metrics on 3D line space, the orthogonal metric and the quasi-Riemannian metric, are introduced for the evaluation of line triangulations. Extensive experiments on synthetic data and real images validate and demonstrate the effectiveness of the proposed algorithms.

Introduction

Line triangulation [1], [2] refers to the process of determining a 3D line given its projections in two or more images and the corresponding camera matrices. As one of the fundamental problems in computer vision, it is trivial in theory, since the corresponding 3D line is the intersection of the back-projection planes of the image lines. However, when the number of views is larger than two, the back-projection planes usually do not intersect in a single 3D line because of measurement errors and image noise. This leads to the problem of finding a 3D line that optimally fits the measured data, i.e., optimal line triangulation.

Minimizing the algebraic error of line triangulation is a linear least squares problem with a quadratic constraint (the Klein constraint), as defined in Section 2 of this paper. Bartoli and Sturm [3], [4] proposed a linear algorithm for the algebraic error minimization. This algorithm first finds a solution of the corresponding linear least squares problem (i.e., ignoring the Klein constraint); the solution is then corrected by a singular value decomposition (SVD) method that enforces the Klein constraint. The algorithm yields only an approximation of the optimal solution to the algebraic error minimization. The work in [5] proposed a suboptimal solution to algebraic-error line triangulation, found by relaxing the quadratic unit norm constraint to six linear constraints. However, this still cannot yield an optimal solution to the algebraic error minimization. To the best of our knowledge, how to find the optimal solution of the algebraic error minimization is still an open problem.

In studies on line triangulation, a natural question is which of the optimality criteria is the “best”. In order to answer this question, we need a criterion that is independent of these optimality criteria to describe the “bestness”. One intuitive criterion is the 3D error, i.e., the distance between a reconstructed line and its ground truth. The Euclidean distance does not give a reasonable measure since it is not an intrinsic distance on 3D line space. So far, no study on metrics of 3D lines is available in the literature, and thus the evaluation of line triangulations remains an open problem.

This paper focuses on the triangulations and metrics of lines. The main contributions are summarized as follows:

  • Based on the Lagrange multipliers theory, a formula to compute the Plücker correction is given; this Plücker correction formula is also used to establish a quasi-Riemannian metric on 3D line space. From the formula, a new linear algorithm, LINa, is proposed for line triangulation. The computational cost of the new linear algorithm is much lower than that of the SVD method in the literature.
  • For the algebraic error minimization, two new algorithms, OPTa-I and OPTa-II, are proposed to find the optimal solution. The OPTa-I is based on finding roots of a system of 2-degree polynomial equations in five variables; and the OPTa-II is based on solving a system of polynomial equations in two variables (one polynomial is of 6-degree and the other is of 10-degree). The continuous homotopy algorithm [6], [7] is used to solve these systems of polynomial equations.
  • Two new metrics on 3D line space, named the orthogonal metric and the quasi-Riemannian metric, are proposed for the evaluation of line triangulations. The orthogonal metric is based on the angular distance on rotation groups [8] and the orthogonal representation of 3D lines [4]; the quasi-Riemannian metric is based on the Riemannian metric on the 5-dimensional unit sphere and our proposed Plücker correction formula.

The rest of the paper is organized as follows. Section 2 presents some preliminaries used in the paper. The Plücker correction formula and a new linear algorithm are presented in Section 3. Section 4 elaborates the two optimal algorithms for the algebraic error minimization. Section 5 gives two new metrics on 3D line space. Some experimental results with synthetic and real data are presented in Section 6 and Section 7, respectively. Finally, the paper is concluded in Section 8.

Preliminaries

2.1 Plücker Coordinates

In 3D projective space, the Plücker coordinates of a line are defined by a nonzero 6-vector: (1) L = (u^T, v^T)^T, with u = x × y and v = x4·y − y4·x, where X = (x^T, x4)^T and Y = (y^T, y4)^T are two non-coincident points on the line. The Plücker coordinates are homogeneous, since the two 6-vectors computed from two different pairs of points on the line are equal up to a nonzero factor. From Eq (1), it is easy to see that u^T v = (x × y)^T(x4·y − y4·x) = 0, i.e., the Plücker coordinates satisfy u^T v = 0, or written in matrix form: (2) L^T K L = 0, where K = [0 I3; I3 0].

In 5D projective space, the quadric defined by Eq (2) is called the Klein quadric [9]; thus, the Plücker coordinates satisfy the Klein quadric constraint. Conversely, if a nonzero 6-vector satisfies the Klein constraint, it must be the Plücker coordinates of a line in 3D projective space.
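
As a quick numerical illustration of Eq (1) and the Klein constraint (a minimal sketch; the point coordinates below are arbitrary examples):

```python
import numpy as np

def plucker_from_points(X, Y):
    """Plücker coordinates L = (u, v) of the line through the homogeneous
    3D points X = (x, x4) and Y = (y, y4), following Eq (1):
    u = x cross y, v = x4*y - y4*x."""
    x, x4 = X[:3], X[3]
    y, y4 = Y[:3], Y[3]
    u = np.cross(x, y)
    v = x4 * y - y4 * x
    return np.concatenate([u, v])

# The 6x6 matrix of the Klein constraint: L^T K L = 2 u^T v = 0.
K = np.block([[np.zeros((3, 3)), np.eye(3)],
              [np.eye(3), np.zeros((3, 3))]])

X = np.array([1.0, 0.0, 0.0, 1.0])   # two points in homogeneous coordinates
Y = np.array([0.0, 1.0, 2.0, 1.0])
L = plucker_from_points(X, Y)
print(L @ K @ L)                      # ~0: L lies on the Klein quadric
```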

2.2 Point-Line Distance

In the image plane, the algebraic distance from a point x = (x, y, 1)^T to a line l is defined as [10]: (3) da(x, l) = x^T l.

Given a measured point set of a line l, {xj = (xj, yj, 1)^T: 1 ≤ j ≤ M}, let (4) la = argmin Σj (xj^T l)² s.t. ‖l‖ = 1; then la is called the linear least squares fitting of the measured point set, which has a linear solution [10].
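
For illustration, a minimal sketch of this linear solution, assuming Eq (4) takes la as the unit vector minimizing the sum of squared algebraic distances (the minimizer is then the eigenvector of Σj xj xj^T for the smallest eigenvalue):

```python
import numpy as np

def fit_line_algebraic(points):
    """Linear least squares line fit: l_a = argmin sum_j (x_j^T l)^2 over
    unit vectors l, with x_j = (x_j, y_j, 1)^T (cf. Eq (4)). The minimizer
    is the eigenvector of B = sum_j x_j x_j^T for its smallest eigenvalue."""
    X = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    B = X.T @ X
    w, V = np.linalg.eigh(B)          # eigenvalues in ascending order
    return V[:, 0]

pts = np.array([[0.0, 1.0], [1.0, 2.1], [2.0, 2.9], [3.0, 4.0]])
l_a = fit_line_algebraic(pts)         # pts lie roughly on y = x + 1
print(l_a / l_a[1])
```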

2.3 Optimality Criteria

Given N line projection matrices Pi (1 ≤ i ≤ N), let {xij = (xij, yij, 1)^T: 1 ≤ j ≤ Mi} be a measured point set from the image of a 3D line L in the i-th view; line triangulation means estimating the 3D line L from these N measured point sets. The algebraic point-line distance in the image plane leads to the following optimality criterion for this problem [4], [10]: (5) La = argmin Σi Σj (xij^T Pi L)² s.t. ‖L‖ = 1, L^T K L = 0, where La is called the optimal solution minimizing the algebraic error. La makes the sum of squared algebraic distances from the measured points xij to the re-projected lines Pi La reach a minimum; thus, the re-projected lines are the linear least squares fittings of the N measured point sets.

The minimization term in Eq (5) can be expressed as (6) Σi Σj (xij^T Pi L)² = L^T A L, where A = Σi Pi^T (Σj xij xij^T) Pi.

Thus, the cost function Eq (5) can be rewritten as (7) La = argmin L^T A L s.t. ‖L‖ = 1, L^T K L = 0, which means that the minimization of the algebraic error is a linear least squares problem with the Klein constraint.
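
The matrix A gathers all the measurements. A minimal sketch of its assembly, under the assumption from Eq (5) that each Pi is a 3×6 line projection matrix mapping Plücker coordinates to the image line of view i:

```python
import numpy as np

def build_A(proj_mats, point_sets):
    """Assemble A of Eqs (6)-(7): sum_ij (x_ij^T P_i L)^2 = L^T A L, so
    A = sum_i P_i^T (sum_j x_ij x_ij^T) P_i, where P_i is the 3x6 line
    projection matrix of view i and x_ij are its measured image points."""
    A = np.zeros((6, 6))
    for P, pts in zip(proj_mats, point_sets):
        X = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
        A += P.T @ (X.T @ X) @ P
    return A
```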

Linear Solution to Minimize Algebraic Error

Bartoli and Sturm [4] first proposed a linear algorithm to estimate La, which consists of the following two steps:

(a) Solve the linear least squares problem: (8) L̂ = argmin L^T A L s.t. ‖L‖ = 1.

The solution L̂ is the eigenvector corresponding to the smallest eigenvalue of the matrix A.

(b) Compute the nearest point L̄ to L̂ on the Klein quadric as the final estimate: (9) L̄ = argmin ‖L − L̂‖² s.t. L^T K L = 0.

Bartoli and Sturm [4] gave an SVD method to compute the nearest point L̄.

Step (b) of the above algorithm is called the Plücker correction. When there are errors in the measurement data, L̂ does not strictly satisfy the Klein constraint; hence, it cannot be the Plücker coordinates of a line in 3D projective space. The Plücker correction is therefore an important step of the algorithm. This section presents a formula to compute the Plücker correction and a new linear algorithm, LINa.

3.1 Linear Algorithm LINa

We consider the following minimization: (10) L* = argmin ‖L − L̂‖² s.t. L^T L = 1, L^T K L = 0.

Although this minimization contains a unit norm constraint, it is in fact equivalent to Eq (9) according to the following Lemma.

Lemma 1: (a) If L* is the optimal solution of Eq (10), then a suitable scalar multiple of L* must be the optimal solution of Eq (9).

(b) Conversely, if L̄ is the optimal solution of Eq (9), then L̄/‖L̄‖ must be the optimal solution of Eq (10).

Proof: For an arbitrary unit 6-vector L, since ‖L̂‖ = 1, there must be (11) ‖L − L̂‖² = 2(1 − L^T L̂).

Since is the optimal solution of Eq (10), and (12)

Let , then (13)

Let , then, (14)

Since is the optimal solution of Eq (9), and , thus (15)

(a): If Lk is not the optimal solution of Eq (9), then from Eqs (13) and (15) we have (16). Then, by Eqs (12) and (14), this is contrary to the fact that L* is the optimal solution of Eq (10). Therefore, Lk must be the optimal solution of Eq (9).

Similarly, (b) can be proved.

According to Eq (11), the minimization problem Eq (10) simplifies to (17) L* = argmax L^T L̂ s.t. L^T L = 1, L^T K L = 0.

Proposition 1 below gives an analytical expression for L*. Compared with the SVD method, this computation of L* is much simpler.

Proposition 1: Write L̂ = (û^T, v̂^T)^T. (a) If û + v̂ ≠ 0 and û − v̂ ≠ 0, the minimization Eq (10) has the unique solution: (18a) u* = ((û + v̂)/‖û + v̂‖ + (û − v̂)/‖û − v̂‖)/2, v* = ((û + v̂)/‖û + v̂‖ − (û − v̂)/‖û − v̂‖)/2.

(b) If û + v̂ = 0 or û − v̂ = 0, the minimization Eq (10) has infinitely many solutions, given by Eq (18b).

The proof of the proposition is given in the next subsection. The geometric interpretations of Eqs (18a) and (18b) are shown in Fig 1. Since ‖û‖² + ‖v̂‖² = 1, we have ‖û ± v̂‖² = 1 ± 2û^T v̂, and Eq (18a) can be rewritten as (19)

Fig 1. Geometric interpretation of the Plücker correction.

https://doi.org/10.1371/journal.pone.0132354.g001

Thus, when L̂ satisfies the Klein constraint û^T v̂ = 0, there must be L* = L̂.


Based on the above discussion, our linear algorithm LINa can be summarized in Table 1.
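
For concreteness, a sketch of LINa in code; the correction step follows our reading of Eq (18a) (equivalently, separately normalizing the components of L̂ along û + v̂ and û − v̂, the ±1 eigenspace directions of K), so treat it as a reconstruction under stated assumptions rather than the verbatim published formula:

```python
import numpy as np

def lina(A, eps=1e-12):
    """Sketch of the linear algorithm LINa.
    Step 1 (Eq (8)): the unit-norm minimizer L_hat of L^T A L is the
    eigenvector of A's smallest eigenvalue.
    Step 2 (cf. Eq (18a)): the nearest unit-norm point on the Klein
    quadric is obtained by separately normalizing u+v and u-v."""
    w, V = np.linalg.eigh(A)
    L_hat = V[:, 0]
    u, v = L_hat[:3], L_hat[3:]
    a, b = u + v, u - v
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    if na < eps or nb < eps:
        # Case (b) of Proposition 1: the correction is not unique.
        raise ValueError("degenerate input: u = v or u = -v")
    a, b = a / na, b / nb
    return np.concatenate([(a + b) / 2, (a - b) / 2])
```

The returned vector has unit norm and satisfies u*^T v* = (‖a‖² − ‖b‖²)/4 = 0, i.e., the Klein constraint, and it reduces to L̂ whenever û^T v̂ = 0, as stated after Eq (19).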

Remark 1: In practice, case (b) of Proposition 1 rarely happens. The Klein constraint makes {u, v} orthogonal, hence linearly independent. When there are errors in the measurement data, the solution of Eq (8) cannot guarantee the orthogonality of {û, v̂}, but it generally preserves their linear independence, which excludes û = ±v̂. Hence, case (b) rarely happens in practice.

By Lemma 1 and Proposition 1, the optimal solution of Eq (9) can be obtained as follows:

(a) If , then (20a)

(b) If , then (20b)

3.2 Proof of Proposition 1

Construct the Lagrange function of Eq (17) as follows: (21) f(L, α, β) = L^T L̂ − (α/2)(L^T L − 1) − (β/2) L^T K L.

According to optimization theory [11], the solution of Eq (17) must be a stationary point of the Lagrange function, i.e., there are multipliers (α*, β*) such that (L*, α*, β*) is a solution of the following Lagrange equations: (22)

Thus, by solving the Lagrange equations we can obtain the optimal solution L*. The first equation in Eq (22) can be rewritten as (23) (αI + βK)L = L̂.

From the last two equations in Eq (22), we have (24)

(i) If , then by Eqs (23) and (24), we have (25)

Let α′ = (α + β)⁻¹ and β′ = (α − β)⁻¹; then from Eq (23) we have (26) u + v = α′(û + v̂), u − v = β′(û − v̂).

Thus, (27) α′²‖û + v̂‖² + β′²‖û − v̂‖² = 2 and (28) α′²‖û + v̂‖² − β′²‖û − v̂‖² = 0.

Therefore, the following linear equations in (α′², β′²) hold: (29) whose solution is (30) α′² = ‖û + v̂‖⁻², β′² = ‖û − v̂‖⁻².

Substituting Eq (30) into Eq (26) gives the following four solutions for L, one per choice of the two signs: (31)

The geometric interpretations of the four solutions are shown in Fig 2.

It can be easily verified that taking both plus signs maximizes L^T L̂, and thus, (32)

(ii) When , there must be . According to Eq (24) and the second equation of Eq (23), we have β = α. Substituting it into the first equation in Eq (23), we have (33)

Therefore, (34)

By the last two equations in Eq (22), we have (35)

Thus, substituting it into Eq (23), we have (36)

If , then (37) (38)

Thus, (39) and , (40)

Similarly, if , then (41) and , (42)

By Eqs (40) and (42), the minimization has infinitely many solutions: (43)

Next, we consider the set . Let u = sd (where d is a unit 3-vector, s≠0), then (44)

Thus, Eq (43) can be rewritten as (45)

(iii) Similarly, when , also has infinitely many solutions: (46)

Optimal Solution by Minimizing Algebraic Error

The algorithm LINa provides only an approximate solution to the algebraic error minimization. This section presents two algorithms, OPTa-I and OPTa-II, that compute the optimal solution. OPTa-I converts the optimization problem into finding the real solutions of two systems of 2-degree polynomial equations in five variables; OPTa-II converts it into finding the real solutions of a system of polynomial equations in two variables (one of degree 6 and the other of degree 10).

4.1 Algorithm OPTa-I

The optimal algorithm OPTa-I is summarized in Table 2.

Eq (47) is a system of 2-degree polynomial equations in six variables, and by the theory of algebraic equations it has at most 64 solutions. Proposition 2 below shows that this system can be simplified into two systems of 2-degree polynomial equations in five variables. We first prove that Eq (48) is the optimal solution to the algebraic error minimization.

Proof: Consider the Lagrange function and the Lagrange equations of Eq (7): (49) f(L, α, β) = L^T A L − αL^T K L − β(L^T L − 1), (50) (A − αK − βI)L = 0, L^T K L = 0, L^T L − 1 = 0.

The first equation in Eq (50) can be rewritten as (51) AL = αKL + βL.

It is obvious that this equation is equivalent to the following equation: (52)

By eliminating the multipliers (α,β) in the above equation, we obtain the following 2-degree polynomial equations in (u,v): (53)

The last two equations in Eq (50) can be rewritten as (54) u^T v = 0, ‖u‖² + ‖v‖² = 1.

Combining Eqs (53) and (54) yields Eq (47). The L component of any stationary point (L, α, β) of the Lagrange function Eq (49) must be a real solution of Eq (47); thus the optimal solution of Eq (7) must belong to the real solution set of Eq (47). Therefore, (55)

Proposition 2: The solution set of Eq (47) is the union of solution sets of two systems of 2-degree polynomial equations in five variables.

Proof: Let S be the solution set of Eq (47); then it must be the union of the following two sets: (56) S1 = {L ∈ S: v3 = 0}, S2 = {L ∈ S: v3 ≠ 0}.

Clearly, S1 is the solution set of the system of 2-degree polynomial equations in five variables obtained by setting v3 = 0 in Eq (47). For the set S2, we consider the equation system obtained by removing the unit norm constraint from Eq (47): (57)

This system is homogeneous of degree two in L = (u^T, v^T)^T, and the set formed by normalizing all of its nonzero solutions is exactly the solution set of Eq (47). Thus, letting S̃2 be the solution set of the system of 2-degree polynomial equations in five variables obtained by setting v3 = 1 in Eq (57), there must be (58)

Hence, Proposition 2 holds.

4.2 Algorithm OPTa-II

Let A(α, β) = A − αK − βI and denote its adjugate (classical adjoint) matrix by adj A(α, β); all of its elements are polynomials of degree at most 5 in (α, β). It is easy to see that (59), where Ai^T(α, β) is the i-th row vector of A(α, β); therefore, (60)

For each k (1 ≤ k ≤ 6), this gives a system of 5-degree polynomial equations in (α, β). Next, we prove that this system has at least one real solution.

Let Ak(α, β) be the sub-matrix formed by deleting the k-th row of A(α, β); then the system can be expressed as

(a): (61)

and it has the same solutions as the following equation system:

(b): (62)

This is because: from (b), both a1 and a2 can be linearly represented by the remaining rows; thus, for arbitrary indices, {a1, a2, ai, aj, ak} must be linearly dependent, i.e., det(a1, a2, ai, aj, ak) = 0. Hence, solutions of (b) must also be solutions of (a). Obviously, solutions of (a) are also solutions of (b). Therefore, (a) has the same solutions as (b).

Since the non-real solutions of a system of real polynomial equations occur in complex conjugate pairs, at least one of the 25 solutions of (b) is real. Thus, the system has at least one real solution.

The algorithm OPTa-II is summarized in Table 3.

Of the two polynomial equations in step (b) of OPTa-II, one is of degree 6 and the other of degree 10; by the theory of algebraic equations the system thus has at most 60 solutions. Next, we prove that Eq (65) is the optimal solution to the algebraic error minimization.

Proof: The first equation in Eq (50) can be rewritten as A(α, β)L = 0; thus, L ≠ 0 leads to the following 6-degree polynomial equation in (α, β): (66) det A(α, β) = 0, i.e., the multipliers (α, β) of the stationary points (L, α, β) of the Lagrange function Eq (49) satisfy Eq (66).

Since the system of polynomial equations in question has no real solutions, for every (α, β) ∈ R² at least one of the corresponding minors is nonzero. This leads to rank(A(α, β)) = 5 for real (α, β) satisfying Eq (66). Therefore, from Eqs (60) and (66), the L component of a stationary point (L, α, β) can be expressed as (67)

By the second equation in Eq (50), the multipliers (α, β) of the stationary points (L, α, β) must belong to one of the real solution sets of the following two systems of polynomial equations: (68). Thus, from Eq (66) and the unit norm constraint L^T L = 1, the L component of a stationary point (L, α, β) must belong to the following set: (69)

Therefore, (70)

The algorithm OPTa-II needs only to solve systems of polynomial equations in two variables, which considerably simplifies OPTa-I. If the corresponding system of polynomial equations has a real solution for the {i, j} pairs involved, the algorithm OPTa-II may fail; however, this situation never occurred in our extensive numerical simulations.

Remark 2: In the experiments of this paper, we use the continuous homotopy method [6], [7] to solve the systems of polynomial equations. The method was first proposed in [12]. Through thirty years of effort by many researchers, it has achieved great success in computing the zeros of nonlinear mappings, and it can find all zeros of a polynomial mapping [6], [7], [13]. In computer vision, the method has been used for camera self-calibration, e.g., to solve the Kruppa equations [14], the modulus constraint equations, and the absolute quadric constraint equations [15]. For the 2-degree polynomials in five variables in OPTa-I and the high-degree polynomials in two variables in OPTa-II, the continuous homotopy method is computationally efficient.
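
For readers who wish to experiment, the solver step can be prototyped with any polynomial-system solver; the toy sketch below uses SymPy in place of homotopy continuation, and its two equations are illustrative placeholders, not the actual system Eq (68):

```python
import sympy as sp

# Placeholder polynomial system in the two multipliers (alpha, beta);
# the paper instead solves Eq (68) by continuous homotopy [6], [7].
alpha, beta = sp.symbols('alpha beta')
f = alpha**2 + beta**2 - 2
g = alpha * beta - 1
solutions = sp.solve([f, g], [alpha, beta], dict=True)

# Keep only the real stationary multipliers, as in OPTa-II.
real_solutions = [s for s in solutions
                  if all(sp.im(val) == 0 for val in s.values())]
print(real_solutions)   # here both solutions, (1, 1) and (-1, -1), are real
```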

Metrics on 3D Line Space

In order to evaluate the 3D errors of line triangulations, we need a metric on 3D line space. The Euclidean distance ‖L − L′‖ (where L and L′ are the normalized Plücker coordinates of the two lines) is not appropriate for the evaluation of line triangulations since it is not an intrinsic distance on 3D line space. The aim of this section is to introduce two new metrics on 3D line space, called the orthogonal metric and the quasi-Riemannian metric. Compared with the Euclidean metric and the orthogonal metric, the quasi-Riemannian metric appears more appropriate.

In this section, the 5-dimensional unit sphere centered at the origin of R⁶ is denoted by S⁵, and the intersection of the Klein quadric with S⁵, which is a 4-dimensional smooth sub-manifold of S⁵, is called the unit Klein quadric.

5.1 Orthogonal Metric in 3D Line Space

The proposed orthogonal metric derives mainly from the angular distance on rotation matrices [8] and the orthogonal representation of 3D lines [4]. The angular metric on the rotation group is given in Appendix I.

If L = (u^T, v^T)^T lies on the unit Klein quadric, then (a) u ≠ 0, v ≠ 0; or (b) u = 0, ‖v‖ = 1; or (c) ‖u‖ = 1, v = 0. By the definition of Plücker coordinates, in case (b), L is the Plücker coordinates of a 3D line passing through the origin and v is its direction; in case (c), L is the Plücker coordinates of a 3D line on the plane at infinity and u is its normalized coordinates as a 2D line on that plane.

Let , and . We define L = (u, v) for , then (71)

Thus, from the following mappings: (72) (73)

we obtain the following mapping [4]: (74) which is called the orthogonal representation.

The above mapping fails in cases (b) and (c). In order to obtain a complete mapping, we add definitions for these two cases as follows: (75) where Wπ/2 is the 2D rotation by the angle π/2. An explanation of this definition will be given later.

Using the angular distance on SO(3)×SO(2), the following distance on the unit Klein quadric is obtained: (76)

Since L lies on the unit Klein quadric if and only if L is the normalized Plücker coordinates of a 3D line, and L and −L are the normalized Plücker coordinates of the same 3D line, the distance dO leads directly to the following distance on 3D line space: (77) which is called the orthogonal distance of 3D lines.

Now we can give an explanation of the definition Eq (75). If both lines pass through the origin, then (78)

Thus, the first mapping in the definition Eq (75) means that the orthogonal distance between two lines passing through the origin is exactly twice their included angle.

Similarly, an analogous identity holds for two lines at infinity. Since u and u′ are the normalized coordinates of the two lines at infinity, they are the normal vectors of the planes through the origin containing those lines. Hence, the second mapping in the definition Eq (75) means that the orthogonal distance between two lines at infinity is exactly twice the included angle of the two planes.

5.2 Quasi-Riemannian Metric on 3D Line Space

Based on the Riemannian metric [16] and the analysis in Appendix II, the quasi-Riemannian distance on the unit Klein quadric leads directly to the quasi-Riemannian distance on 3D line space: (79)

It is not difficult to verify that two lines are coplanar if and only if their Plücker coordinates satisfy L^T K L′ = 0. Thus, the quasi-Riemannian distance between coplanar lines is given by the following formula: (80)
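
A small numerical check of this coplanarity criterion (a sketch; the three example lines are arbitrary choices written in the Plücker convention of Eq (1)):

```python
import numpy as np

# K encodes the pairing L1^T K L2 = u1^T v2 + v1^T u2.
K = np.block([[np.zeros((3, 3)), np.eye(3)],
              [np.eye(3), np.zeros((3, 3))]])

def coplanar(L1, L2, tol=1e-9):
    """Two 3D lines are coplanar (intersecting or parallel) iff L1^T K L2 = 0."""
    return abs(L1 @ K @ L2) < tol

x_axis = np.array([0, 0, 0, 1, 0, 0.0])   # zero moment: passes through the origin
y_axis = np.array([0, 0, 0, 0, 1, 0.0])
skew = np.array([-1, 0, 0, 0, 1, 0.0])    # y-direction line through (0, 0, 1)

print(coplanar(x_axis, y_axis))           # True: they meet at the origin
print(coplanar(x_axis, skew))             # False: a skew pair
```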

5.3 Comparison of the Three Metrics

In order to compare the performance of the different metrics, we generated a 3D unit cube centered at the origin; Fig 3 shows the 12 edges of the unit cube. Fig 4(a), 4(b) and 4(c) show, respectively, the distances between the edges computed by the Euclidean metric, the orthogonal metric, and the quasi-Riemannian metric, where different distance values are represented by different colors.

Fig 3. 12 edges on the unit cube.

https://doi.org/10.1371/journal.pone.0132354.g003

Fig 4. Distances between the edges on the unit cube by the three metrics.

(a) Euclidean metric; (b) Orthogonal metric; (c) Quasi-Riemannian metric.

https://doi.org/10.1371/journal.pone.0132354.g004

Based on their relative positions, the edge pairs fall into two parallel relationships (P-I and P-II) and two orthogonal relationships (O-I and O-II).

Each of the three metrics gives a unique distance for each relationship, as shown in Table 4. However, it can be seen from Table 4 that the Euclidean metric cannot distinguish O-I from O-II, and the orthogonal metric cannot distinguish P-I from O-I, while the quasi-Riemannian metric gives different distances for all four relationships; moreover, these distances are consistent with the intuition that the distances for P-I, P-II, O-I and O-II should increase gradually. This observation implies that the quasi-Riemannian metric is more reasonable than the Euclidean metric or the orthogonal metric.

Table 4. Distances computed by the three metrics for P-I, P-II, O-I and O-II.

https://doi.org/10.1371/journal.pone.0132354.t004

In the experiments of this paper, the quasi-Riemannian metric is used to evaluate the 3D errors of line triangulations. In real experiments, the true line and the estimated line are close to each other, so they can be regarded as approximately coplanar and the coplanar formula Eq (80) applies. Let arccos(L^T L′) = θ. Since the angle between the two lines is small, the Euclidean metric ‖L − L′‖ = 2 sin(θ/2) is approximately θ. Therefore, the quasi-Riemannian metric is approximately equal to the Euclidean metric, and the same holds for the orthogonal metric. As a result, the three metrics are approximately equal to each other, or equal up to a scale factor.
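
A brief numerical illustration of this small-angle agreement:

```python
import numpy as np

# For nearby lines with arccos(L^T L') = theta, the chordal (Euclidean)
# distance ||L - L'|| = 2 sin(theta/2) converges to theta itself.
for theta in [0.1, 0.01, 0.001]:
    d_e = 2 * np.sin(theta / 2)
    print(theta, d_e, abs(theta - d_e) / theta)   # relative gap -> 0
```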

Experiments with Simulated Data

In the experiments of this section, we simulated eight 3D lines lying on two orthogonal planes, as shown in Fig 5. Using the synthetic data, we generated six images by adjusting the camera locations and parameters. The image size is 1024×1024. In order to simulate the effect of image noise, we evenly sampled 20 points on each image line segment and added Gaussian noise with zero mean and standard deviation σ to these sampled image points; the actual projected image line was then fitted to the noise-corrupted point set by orthogonal least squares.
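
A minimal sketch of this noise model and the orthogonal least squares fit (the segment endpoints and random seed are arbitrary example values):

```python
import numpy as np

rng = np.random.default_rng(0)

def orthogonal_fit(pts):
    """Orthogonal (total) least squares line fit: the line through the
    centroid along the principal direction of the centered points,
    returned as l = (a, b, c) with a*x + b*y + c = 0 and a^2 + b^2 = 1."""
    c = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - c)
    n = Vt[-1]                  # unit normal: direction of least variance
    return np.array([n[0], n[1], -n @ c])

# 20 evenly sampled points on a segment, perturbed by sigma-pixel noise.
sigma = 1.5
t = np.linspace(0.0, 1.0, 20)[:, None]
seg = (1 - t) * np.array([100.0, 200.0]) + t * np.array([800.0, 650.0])
noisy = seg + rng.normal(scale=sigma, size=seg.shape)
print(orthogonal_fit(noisy))
```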

Fig 5. Eight 3D lines (in pink color) on two orthogonal planes used in the simulations, where the small pyramids stand for camera viewpoints.

https://doi.org/10.1371/journal.pone.0132354.g005

We evaluated and compared the performance of the linear algorithm LIN [4], the proposed linear algorithm LINa, and the optimal algorithms based on the algebraic optimality criterion (AOC): OPTa-I and OPTa-II. The evaluation criteria are the RMS (root mean square) of the 3D errors (i.e., the quasi-Riemannian distance of a reconstructed line to its ground truth), the algebraic errors, and the orthogonal errors.

6.1 Stability to Noise

This experiment tests the numerical stability of the algorithms with respect to different noise levels in the same geometric configuration. Gaussian noise with zero mean and standard deviation σ is added to each image point; the noise level σ varies from 0.0 to 3.0 pixels in steps of 0.5, and 150 independent trials are carried out at each noise level. Fig 6 shows the experimental results on 6 views.

Fig 6. Stability of the algorithms with respect to different noise levels.

(a) 3D errors; (b) algebraic errors; (c) orthogonal errors.

https://doi.org/10.1371/journal.pone.0132354.g006

According to Lemma 1, the LIN and LINa algorithms should yield the same result. On the other hand, since OPTa-I and OPTa-II solve the same algebraic-error minimization problem with the same cost function, the two optimal algorithms should yield comparable estimates, the only differences being caused by computational errors in solving the high-degree equations. These expectations were confirmed by the experiments: LIN and LINa produced identical errors, while OPTa-I and OPTa-II yielded very close results; thus, only the results of LINa and OPTa-II are shown in Fig 6. From this experiment, we can see that the RMS errors of all the algorithms increase with the noise level. The two optimal algorithms based on the AOC yield lower 3D errors, algebraic errors, and orthogonal errors than the two linear algorithms. Please note that since the three criteria have different meanings and units, they are not comparable to each other.

In the experiments, the OPTa-I and OPTa-II algorithms rarely encounter the situation of no real solutions, although its probability increases slowly with the noise level and the number of images.

We also compared the computational cost of these algorithms. The computation times of the LIN, LINa, OPTa-I, and OPTa-II algorithms are 0.002, 0.002, 11.681, and 36.688 seconds, respectively. The two linear algorithms have comparable running times, while the two optimal algorithms are much more computationally intensive. Of the two optimal algorithms, OPTa-I is faster than OPTa-II since the former only needs to solve 2-degree polynomial equation systems, while OPTa-II must solve one 6-degree and one 10-degree system. Thus, OPTa-I is the better choice in practice.

6.2 Stability to Configurations

This experiment tests the numerical stability of the algorithms with respect to geometric configurations. The number of views varies from 4 to 12 in steps of 2, and 150 independent trials are carried out at each number of views. Fig 7 shows the experimental results at noise level σ = 1.5, where only the results of LINa and OPTa-II are plotted since, as analyzed in Section 6.1, LIN and LINa yield the same results and OPTa-I and OPTa-II produce very similar results. We can see from this experiment that the RMS errors of all the algorithms decrease as the number of views increases. The two optimal algorithms outperform the two linear algorithms in terms of 3D error, algebraic error, and orthogonal error.

Fig 7. Stability of different algorithms with respect to geometrical configurations.

(a) 3D errors; (b) algebraic errors; (c) orthogonal errors.

https://doi.org/10.1371/journal.pone.0132354.g007

Experiments with Real Images

The proposed algorithms were evaluated on extensive real images; the experimental results on four data sets are reported below. As shown in Fig 8, the test images include a calibration cube, a planar checkerboard, and the Oxford datasets “Model House” and “Corridor” (http://www.robots.ox.ac.uk/~vgg/data/data-mview.html). The lines marked in white and red in these images are used to test the algorithms.

Fig 8. Image sets used in the experiments.

(a) Calibration cube; (b) planar checkerboard; (c) Corridor; (d) Model House.

https://doi.org/10.1371/journal.pone.0132354.g008

For the calibration cube, six images were taken with a Nikon D40 camera at an image size of 3008×2000; the correspondences between the 3D points on the cube and their images are used to compute the camera matrices. For the planar checkerboard, six images were taken with a Sony HX5C camera at an image size of 2592×1944, and the camera matrices are computed with the calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/). For the Model House and Corridor images, the camera matrices and the endpoint coordinates of the image lines are provided by the Oxford datasets.

Fig 9 shows the 3D errors, algebraic errors, and orthogonal errors of the different algorithms on the four data sets. From these experiments we can draw the same conclusion as from the simulation tests: the two optimal algorithms yield similar results, which are better than those of the two linear algorithms. Although the 3D error, algebraic error, and orthogonal error are plotted in one graph in Fig 9, these three errors are not comparable to each other since they are obtained using different criteria with different units. Fig 10 shows the 3D reconstruction results of the four objects using the OPTa-I algorithm; the 3D line models are correctly recovered by the proposed algorithm.

Fig 9. Experimental results.

(a) Calibration cube; (b) planar checkerboard; (c) model house; (d) corridor.

https://doi.org/10.1371/journal.pone.0132354.g009

Fig 10. 3D Reconstruction results.

(a) Calibration cube; (b) planar checkerboard; (c) model house; (d) corridor.

https://doi.org/10.1371/journal.pone.0132354.g010

Conclusion

In this paper, we have investigated the triangulation and metrics of lines. First, a new formula for the Plücker correction is introduced, from which a new linear algorithm for line triangulation is derived. Then, two optimal algorithms are proposed under the algebraic optimality criterion. In addition, two metrics on 3D line space, the orthogonal metric and the quasi-Riemannian metric, are proposed for the quality evaluation of line triangulations. The experiments on simulated data and real images validate the proposed algorithms and show that the optimal solutions reconstruct more accurate 3D lines.

Appendix I: Angular Metric on SO(3)×SO(2)

Let SO(3) be the 3D rotation group. For R ∈ SO(3) there is the following angle-axis representation: (81) R = I + sin θ·[a]× + (1 − cos θ)·[a]ײ, where θ (0 ≤ θ ≤ π) and a (‖a‖ = 1) are respectively the rotation angle and the rotation axis of R, and the rotation angle satisfies: (82) 1 + 2 cos θ = trace(R).

The angular distance between R1, R2 ∈ SO(3) is defined as [8]: (83) d(R1, R2) = θ(R1 R2^T), the rotation angle of R1 R2^T.

Similarly, for the 2D rotation group SO(2), the angular distance is defined as (84)

From the angular distances on SO(3) and SO(2), the angular distance d on SO(3)×SO(2) can be defined as (85)

Since the geodesic distances of the metric spaces SO(3) and SO(2) are the angular distances themselves [8], it is not difficult to verify that d is also the geodesic distance of the metric space SO(3)×SO(2).
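
A minimal sketch of the SO(3) part of this metric, assuming Eq (83) takes the distance to be the rotation angle of R1 R2^T recovered from the trace identity Eq (82):

```python
import numpy as np

def rotation_angle(R):
    """Rotation angle theta in [0, pi], recovered from the trace via
    1 + 2*cos(theta) = trace(R) (Eq (82))."""
    return np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))

def angular_distance(R1, R2):
    """Angular distance on SO(3) (cf. Eq (83)): the angle of R1 R2^T."""
    return rotation_angle(R1 @ R2.T)

def rot_z(t):
    """Example rotation about the z-axis by angle t."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

print(angular_distance(rot_z(0.3), rot_z(0.1)))   # ~0.2
```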

Appendix II: Quasi-Riemannian Metric on the Unit Klein Quadric

We first briefly describe the Riemannian metric on S⁵ in order to introduce the quasi-Riemannian metric on the unit Klein quadric. Let S = (0,…,0,−1)^T and N = (0,…,0,1)^T, called respectively the south pole and the north pole of S⁵. We define the stereographic mappings as follows: (86). Their inverse mappings are (87), and together they form a smooth structure on S⁵. The Riemannian metric on S⁵ induced by the standard Euclidean metric of R⁶ is (88)

Let γ be a smooth or piecewise smooth curve in S⁵; its length is defined as (89)

For Y0, Y1 ∈ S⁵, let Γ be the set of all smooth or piecewise smooth curves with endpoints Y0 and Y1; the Riemannian distance induced by the metric Eq (88) is (90) and the curve attaining it is the short arc from Y0 to Y1 on a great circle in S⁵.

It is not difficult to verify that the Riemannian distance ds and the Euclidean distance dE (= ‖Y0Y1‖) both satisfy the following relation:

Lemma 2: For , (91)

Next, we introduce the quasi-Riemannian distance on from the Riemannian metric on .

For , let (92)

Then, we have the following lemma.

Lemma 3: (a) If u0 ± v0 ≠ −(u1 ± v1), then u(t) ± v(t) ≠ 0 for t ∈ [0,1].

(b) If u0 + v0 = −(u1 + v1), then

(c) If u0 − v0 = −(u1 − v1), then

Proof: From u(t) ± v(t) = (1 − t)(u0 ± v0) + t(u1 ± v1), (93)

Since ‖u0 ± v0‖ = ‖u1 ± v1‖ = 1, if u(t) ± v(t) = 0 then s = 1 by the first equation in Eq (93), and thus u0 ± v0 = −(u1 ± v1); therefore (a) holds. If u0 ± v0 = −(u1 ± v1), there must be t = 1/2 by the second equation in Eq (93), and thus (b) holds. Similarly, (c) holds.

Clearly, the short arc from X0 to X1 on a great circle in S⁵ is (94)

Since

we have

  1. If , then ;
  2. If , then .

For case (a), the Riemannian distance on S⁵ leads directly to the Riemannian distance between X0 and X1 on the unit Klein quadric: (95)

We consider case (b) next. According to Proposition 1 and Lemma 3, the best approximation of the arc on the sub-manifold (the unit Klein quadric) under the Euclidean metric is (96)

By Lemma 2, X*(t) is also the best approximation under the Riemannian metric; thus X*(t) is the orthogonal projection of the arc onto the unit Klein quadric under the Riemannian metric. By Lemma 3, X*(t) is a smooth or piecewise smooth curve on the unit Klein quadric. Thus, taking its length, a quasi-Riemannian distance between X0 and X1 is obtained from the Riemannian metric on S⁵: (97a)

Proposition 3: The integral Eq (97a) can be expressed as: (97) where (98)

Specifically, if then , i.e., Eq (91) is a special case of Eq (97).

Proof: After some mathematical manipulation, the integrand of Eq (97a) can be expressed as: (99)

Thus, (100)

If , then , and (101)

Author Contributions

Conceived and designed the experiments: FCW MZ GHW. Performed the experiments: MZ. Analyzed the data: FCW MZ GHW. Contributed reagents/materials/analysis tools: FCW MZ GHW ZYH. Wrote the paper: FCW MZ GHW. Revision: FCW MZ GHW ZYH.

References

  1. Josephson K. and Kahl F. (2008) Triangulation of points, lines and conics. J. Math. Imaging Vis. 32: 215–225.
  2. Faugeras O. and Mourrain B. (1995) On the geometry and algebra of the point and line correspondences between n images. In: Proc. ICCV95, Cambridge, Massachusetts, USA, pp. 951–956.
  3. Bartoli A. and Sturm P. (2004) The 3D line motion matrix and alignment of line reconstructions. Int. J. Comput. Vis. 57: 159–178.
  4. Bartoli A. and Sturm P. (2005) Structure-from-motion using lines: Representation, triangulation, and bundle adjustment. Computer Vision and Image Understanding 100: 416–441.
  5. Zhang Q., Wu Y., Wang F., Dong Q. and Jiao L. (2014) Suboptimal solutions to the algebraic-error line triangulation. Journal of Mathematical Imaging and Vision 49(3): 611–632.
  6. Morgan A. (1987) Solving polynomial systems using continuation for engineering and scientific problems. Englewood Cliffs, NJ: Prentice Hall.
  7. Morgan A. and Sommese A. (1987) Computing all solutions to polynomial systems using homotopy continuation. Appl. Math. Comp. 24: 115–138.
  8. Freund R. W. and Jarre F. (2001) Solving the sum-of-ratios problem by an interior-point method. J. Glob. Opt. 19: 83–102.
  9. Ronda J., Valdés A. and Gallego G. (2008) Line geometry and camera autocalibration. J. Math. Imaging Vis. 32: 193–214.
  10. Hartley R. and Zisserman A. (2000) Multiple view geometry in computer vision. Cambridge: Cambridge University Press.
  11. Lasdon L. S. (2002) Optimization theory for large systems. Mineola, NY: Dover Publications, Inc.
  12. Chow S. N., Mallet-Paret J. and Yorke J. (1978) Finding zeros of maps: homotopy methods that are constructive with probability one. Math. Comp. 32: 887–889.
  13. Chow S. N., Mallet-Paret J. and Yorke J. (1979) A homotopy method for locating all zeros of a system of polynomials. Lecture Notes in Math. 730: 77–88.
  14. Maybank S. and Faugeras O. (1992) A theory of self-calibration of a moving camera. Int. J. Comput. Vis. 8: 123–151.
  15. Wu F. C., Zhang M. and Hu Z. Y. (2013) Self-calibration under the Cayley framework. Int. J. Comput. Vis. 103: 372–398.
  16. Petersen P. (1998) Riemannian geometry, GTM 171. New York: Springer-Verlag.