
Correction Method for Line Extraction in Vision Measurement

  • Mingwei Shao,

    Affiliation Beihang University, Key Laboratory of Precision Opto-mechatronics Technology, Ministry of Education, Beijing, 100191, China

  • Zhenzhong Wei,

    zhenzhongwei@buaa.edu.cn

    Affiliation Beihang University, Key Laboratory of Precision Opto-mechatronics Technology, Ministry of Education, Beijing, 100191, China

  • Mengjie Hu,

    Affiliation Beihang University, Key Laboratory of Precision Opto-mechatronics Technology, Ministry of Education, Beijing, 100191, China

  • Guangjun Zhang

    Affiliation Beihang University, Key Laboratory of Precision Opto-mechatronics Technology, Ministry of Education, Beijing, 100191, China

Abstract

Over-exposure and perspective distortion are two of the main factors underlying inaccurate feature extraction. First, based on Steger’s method, we propose a method for correcting curvilinear structures (lines) extracted from over-exposed images. A new line model based on the Gaussian line profile is developed, and its description in the scale space is provided. The line position is analytically determined by the zero crossing of its first-order derivative, and the bias due to convolution with the normal Gaussian kernel function is eliminated on the basis of the related description. The model considers over-exposure features and is capable of detecting the line position in an over-exposed image. Simulations and experiments show that the proposed method is not significantly affected by the exposure level and is suitable for correcting lines extracted from an over-exposed image. In our experiments, the corrected result is found to be more precise than the uncorrected result by around 45.5%. Second, we analyze perspective distortion, which is inevitable during line extraction owing to the projective camera model. The perspective distortion can be rectified on the basis of the bias introduced as a function of related parameters. The properties of the proposed model and its application to vision measurement are discussed. In practice, the proposed model can be adopted to correct line extraction according to specific requirements by employing suitable parameters.

Introduction

In the field of remote sensing, curvilinear structures (lines for short) are extracted from aerial and satellite images to determine key information such as roads and rivers [1–4]. Further, in the field of medical image analysis, line extraction facilitates the detection of blood vessels and nerves, and the obtained information is important for medical diagnosis [5–9]. Moreover, in some fields of vision measurement, including 3D reconstruction using structured light, stereo reconstruction [10–15], 3D object retrieval and recognition [16–22], and so on, the image of the feature that reflects information on the target morphology is often a line. Therefore, line extraction is an indispensable technique in various fields.

Thus far, various methods for line extraction have been proposed. Lines can be extracted using skeletons, whereby the candidate skeleton is extracted from the Euclidean distance map [23]. The skeletal points of the candidate skeleton are classified into three types: ridge, ravine, and stair. Based on the classification, the line regions are reconstructed. In [24], the clustering of principal curves was adopted to detect curvilinear features in spatial point patterns. Based on the theory of hierarchical, agglomerative, and related iterative relocation, line features in spatial point patterns can be determined in a straightforward manner. A line extraction method used primarily for detecting roads in spaceborne synthetic aperture radar images is presented in [25]. This method is based on a genetic algorithm, and it can detect roads accurately via curve segment extraction and postprocessing. Thus, the three different methods described above employ three distinct theories. However, problems such as extensive computation and significant bias persist in line extraction. Owing to the large number of classified candidate points, optimization is necessary in some methods; thus, the extraction may be time-consuming. Moreover, the extraction may involve additional noise when a line is asymmetrical. Therefore, models for asymmetric bar-shaped [26], parabolic, and Gaussian line profiles have been proposed to describe the profile of a line in [27] (Steger’s method). The center of the profile can be determined by the zero crossing of its first-order derivative, whereas the edge can be determined by the zero crossing of its second-order derivative. Partial derivatives of a real image are computed by convolving the image with the corresponding partial derivative of a normal Gaussian kernel. A bias function that can be obtained by the bisection method [28] is used to rectify the center and edge of the line. In addition, the bar-shaped model has been used to extract line positions of light stripes [29]. Owing to its high accuracy, this method is widely used for image processing in the field of vision measurement.

In vision measurement systems based on line-structured light, over-exposure is ubiquitous owing to intense illumination and extensive reflection. Because of the finite sensitivity of the vision sensor, it becomes saturated easily; thus, over-exposure of images is inevitable. In [30], a license-plate detection method has been proposed to facilitate the detection of license plates in over-exposed images. This method involves the following steps: converting a color image into a grayscale image, equalizing the image, detecting the edges, checking the black pixel ratio, verifying the license plate, and outputting the license plate. In [31], an approach for correcting over-exposure in photographs has been introduced; it is based on the separate recovery of color and lightness. However, these methods are effective only in some specific fields because they are not based on a special model, as defined in [27]. Moreover, over-exposure is not considered in Steger's method, so the extracted center of the line profile normally contains additional error in an over-exposed image.

Moreover, most cameras used for capturing images are based on a projective model; therefore, perspective distortion of the captured image is inevitable, and it results in inaccurate line extraction. The typical projective model is shown in Fig 1. The centerline of a curve in the scene is L, whereas the centerline of its image is l. The line l’ in the image is the projection of L on the image plane. Further, o-xy is the image coordinate system, whereas O-XYZ is the camera coordinate system. Owing to the existence of perspective distortion, the lines l and l’ are not coincident. Similarly, a circle in the scene also suffers from the same problem. In [32], the perspective distortion of a circular center in the image plane has been modeled. In [33], the perspective distortion of an elliptical center, which is a more general case, has been analyzed on the basis of projective transformation and analytic geometry. Thus, perspective distortion is a significant issue that needs to be rectified from the viewpoint of line extraction, especially in some industrial measurement fields that require high precision.

In this paper, we propose a new model for correcting line positions in an over-exposed image. The proposed model is based on the Gaussian model of Steger’s method, with the additional capability of suitably fitting line profiles in over-exposed images. As a result of this correction procedure, the center of the line profile becomes close to the ideal/actual center (as shown in Fig 2). We also discuss the related properties of perspective distortion. In addition, we describe the relationship between the centerline of a scene and the centerline of its image, which can serve as a reference for correcting the bias introduced by perspective distortion. Further, we discuss the practical applications of the proposed method, including its application to vision measurement. Finally, we describe some simulations and experiments conducted to verify the validity of our method. Because the line position is critical in vision measurement, obtaining the line position is the primary objective of this study.

Fig 2. (a) Over-exposed image; (b) extraction result for the over-exposed image.

https://doi.org/10.1371/journal.pone.0127068.g002

Correction of Line Position in Over-Exposed Image

A) Model description

a) Steger’s line extraction model.

As lines exhibit a characteristic profile across the line at each point, the problems concerning the extraction of lines are essentially one-dimensional in nature. The related analysis can therefore be carried out on one-dimensional line profiles. In Steger's method, a bar-shaped line model, a parabolic line model, and a Gaussian line model are used to fit the one-dimensional profile and to remove the bias [26,27]. The images of the three models in Steger's method are shown in Fig 3.

Fig 3. Images of (a) a bar-shaped line, (b) a parabolic line, and (c) a Gaussian line with equal line widths.

https://doi.org/10.1371/journal.pone.0127068.g003

The functions of the three models stated in Steger's method are listed below: fb(x) denotes the bar-shaped line profile, fp(x) denotes the parabolic line profile, and fg(x) denotes the Gaussian line profile. (1) (2) (3) where w denotes the width of the profile and a (0 ≤ a < 1) denotes the asymmetry of the profile. Among these three models, the Gaussian line profile is the most precise model for line extraction after correction, whereas the bar-shaped line profile is the simplest one.
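Eqs (1)–(3) are given in [26,27] and are not reproduced here. For reference, a sketch of the asymmetric bar-shaped profile in the standard form of [26] is shown below; the parabolic and Gaussian profiles are defined analogously in [27].

```latex
% Asymmetric bar-shaped line profile (standard form from [26]).
\[
f_b(x) =
\begin{cases}
0, & x < -w, \\
1, & |x| \le w, \\
a, & x > w,
\end{cases}
\qquad 0 \le a < 1 .
\]
```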

To extract the line position in an over-exposed image, the bar-shaped profile and the Gaussian profile are adopted to fit the profile, including the ideal and real profiles. The related fitting results are shown in Fig 4.

Fig 4. (a) Ideal over-exposure model and fitting results using the bar-shaped profile and the Gaussian profile; (b) line profile in an over-exposed image.

https://doi.org/10.1371/journal.pone.0127068.g004

In Fig 4, there is a certain bias between the ideal center and the fitted one when either the bar-shaped line profile or the Gaussian line profile is used. Image saturation is the main reason for this phenomenon, as shown in Fig 4(A). Similar problems also exist in real images, as shown in Fig 4(B). Thus, the models stated in Steger’s method cannot fit the line profile in an over-exposed image properly. Therefore, a bias is introduced between the extracted center and the ideal/actual one.

b) Over-exposure model.

Owing to the saturation of over-exposed images, Steger’s line extraction models, including the bar-shaped, parabolic, and Gaussian line profiles, do not allow accurate fitting of the line profile. Therefore, we propose a new model, namely, the over-exposure model, based on Steger’s Gaussian line profile.

As shown in Fig 5, let us assume that T is the saturation value in the over-exposure model. From Eq (3), f(x) is given by (4) where a and w are defined as in Eq (3). The relation between T and a is 1 > T > a > 0 in the scale space. The description of the real profile requires a scale factor related to the description in the scale space. The fit of a profile in a real image obtained using the proposed model is plotted in Fig 5.

Fig 5. Fit of a real profile using the proposed model.

https://doi.org/10.1371/journal.pone.0127068.g005

The center of the gray value profile is determined by the zero crossing of the first-order derivative (f'(x) = 0), i.e., the local maximum (for bright lines) or local minimum (for dark lines) of the gray value profile. For real images, this criterion must be augmented with a criterion for selecting salient lines, since real images contain noise. This can be achieved with a threshold on |f''(x)|, as mentioned in Steger's method, i.e., by requiring that f''(x) << 0 for bright lines and f''(x) >> 0 for dark lines. Furthermore, derivatives of the real image are estimated by convolving the image with the derivative of a normal Gaussian kernel [34]. The normal Gaussian kernel and its derivatives are given by (5) (6) (7) In the scale space, the description of the profile can be determined by convolving the gray value profile with the corresponding derivative of the normal Gaussian kernel. The scale space description is given by

(8)(9)(10)

The related notations are defined as
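For illustration, the following sketch estimates the derivatives of a sampled gray-value profile by convolution with Gaussian-derivative kernels and locates a bright-line center at the zero crossing of the first derivative where the second derivative is strongly negative. The synthetic profile, σ, and the saliency threshold are illustrative assumptions, and the scipy filters stand in for the analytic convolutions of Eqs (8)–(10).

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def line_center(profile, sigma=1.0, saliency=0.02):
    """Locate bright-line centers in a 1D gray-value profile (illustrative sketch).

    Derivatives are estimated by convolving the profile with the first and
    second derivatives of a Gaussian kernel; a center is reported at a
    positive-to-negative zero crossing of r' where r'' is strongly negative.
    """
    r1 = gaussian_filter1d(profile.astype(float), sigma, order=1)  # r'
    r2 = gaussian_filter1d(profile.astype(float), sigma, order=2)  # r''

    centers = []
    for i in range(len(profile) - 1):
        if r1[i] > 0.0 >= r1[i + 1] and r2[i] < -saliency:
            # Linear interpolation of the sub-pixel zero-crossing position.
            t = r1[i] / (r1[i] - r1[i + 1])
            centers.append(i + t)
    return centers

# Example: a synthetic Gaussian-like bright line centered at pixel 40.
x = np.arange(80.0)
profile = np.exp(-(x - 40.0) ** 2 / (2.0 * 3.0 ** 2))
print(line_center(profile, sigma=2.0))   # -> approximately [40.0]
```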

B) Removal of bias

As the line position is the essential information in vision measurement, we analyze the bias of the center of the profile in an over-exposed image as well as the effect of the saturation value T. The center of the profile is determined from the zero crossing of the first-order derivative, i.e., r'(x,σ,w,a,T) = 0. Because direct determination is difficult, the centerline is computed using a numerical root-finding algorithm, as described in Steger's method. We use the bisection method [28] in the proposed extraction method. The following proposition is obtained from [27]: if both σ and w are scaled by the same constant factor s, the line and edge locations will be sl, sel, and ser, where l is the line position, el is the left edge point, and er is the right edge point. Thus, bias analysis can be performed for σ = 1, and all other values can be obtained via simple multiplication by the actual scale σ.
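A minimal sketch of this root-finding step is given below; r_prime is a hypothetical callable standing in for the first derivative of the scale-space description (Eq (9)), and the bracket [lo, hi] is assumed to contain exactly one sign change.

```python
def bisect_center(r_prime, lo, hi, args=(), tol=1e-8, max_iter=100):
    """Find the zero crossing of r' in [lo, hi] by the bisection method [28].

    r_prime(x, *args) is assumed to return the first derivative of the
    scale-space profile description; r'(lo) and r'(hi) must have opposite
    signs so that a root is bracketed.
    """
    f_lo = r_prime(lo, *args)
    f_hi = r_prime(hi, *args)
    if f_lo * f_hi > 0:
        raise ValueError("root is not bracketed by [lo, hi]")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        f_mid = r_prime(mid, *args)
        if abs(f_mid) < tol or 0.5 * (hi - lo) < tol:
            return mid
        if f_lo * f_mid < 0:          # root lies in the left half
            hi, f_hi = mid, f_mid
        else:                          # root lies in the right half
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)
```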

a) Effect of T on bias.

In this section, we analyze the effect of the parameter T on the bias, under the conditions of asymmetry and symmetry. The relationship between the convolution of the profile with the Gaussian kernel function and the parameter T is shown in Fig 6.

Fig 6. (a) Gray value as a function of T and position x in the case of symmetry; (b) Gray value as a function of T and position x in the case of asymmetry.

https://doi.org/10.1371/journal.pone.0127068.g006

The line position, i.e., the maximum of the profile, is invariant in the case of symmetry. In contrast, in the asymmetric condition, the maximum of the profile varies according to T. Thus, in the case of asymmetry, correction should be performed in order to obtain a precise position.
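This behavior can be reproduced with a small numerical experiment. The profiles used below are hypothetical stand-ins (a symmetric Gaussian and a two-sided Gaussian with a wider right flank) rather than the exact profiles of Eq (3), and over-exposure is modeled, as an assumption, by clipping at the saturation value T before smoothing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
smooth_sigma = 3.0                       # smoothing scale in the units of x

# Hypothetical stand-in profiles with unit height: a symmetric Gaussian and a
# two-sided Gaussian whose right-hand flank is wider (asymmetric case).
symmetric = np.exp(-x**2 / (2.0 * 3.0**2))
asymmetric = np.where(x < 0.0,
                      np.exp(-x**2 / (2.0 * 3.0**2)),
                      np.exp(-x**2 / (2.0 * 6.0**2)))

for T in (1.0, 0.9, 0.7, 0.5):
    for name, profile in (("symmetric", symmetric), ("asymmetric", asymmetric)):
        clipped = np.minimum(profile, T)             # assumed over-exposure model
        smoothed = gaussian_filter1d(clipped, smooth_sigma / dx)
        print(name, T, round(x[np.argmax(smoothed)], 2))
# The extracted maximum stays at x = 0 for the symmetric profile, whereas for
# the asymmetric profile it moves away from x = 0 and changes with T.
```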

b) Correction in asymmetric condition.

The profile in the over-exposure model is a part of the Gaussian line profile. Let the entire Gaussian line profile be Rg(x,σ,w,a) and the missing part be . Then, the expression for r can be rewritten as (11) When T << 1, the extreme point of r is not unique, i.e., the center of the profile will be incorrect. Then, can be substituted for f(x,σ,w,a,T), because they have the same center point. Further, the description of can be determined in a similar manner as the description of f(x,σ,w,a,T): (12) (13) (14) The related notations are defined as in Eq (10).

It is known that the edge points of the line are the zero crossings of its second-order derivative, while the line positions are the zero crossings of its first-order derivative. In order to eliminate the scale effect, the symbol λ is introduced: (15) The bias function is a map from vσ and λ to wσ, T, and a, where vσ is the width of the profile, i.e., vσ = |el − er|; thus, the correction can be determined [27].

Since a function from R2 to R3 can only be injective, it is difficult to analyze the bias removal function. For this reason, the parameter w is fixed, and in this case, the bias removal function is from R2 to R2, which is surjective. In our paper, we analyze only the relationships of vσ, λ, and the bias with the parameters T and a when w equals 1, 2, and 3, respectively. As in the case of the approach described in [27], the results for other values can be obtained by interpolation. The relations are shown in Figs 7, 8 and 9. Our profile involves the constraint 1 > T > a > 0, but the data are expanded by interpolation in these illustrations for the sake of clarity.
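The tabulation-and-interpolation step can be sketched as follows; forward_model is a hypothetical callable that returns the observables (vσ, λ) and the center bias predicted by the scale-space description for given (w, a, T), standing in for the analytic expressions above.

```python
import numpy as np
from scipy.interpolate import griddata

def build_bias_table(forward_model, w=1.0, n=40):
    """Tabulate (v_sigma, lambda) and the center bias on a grid of (a, T) for a
    fixed w, respecting the constraint 1 > T > a > 0."""
    observables, biases = [], []
    for a in np.linspace(0.0, 0.9, n):
        for T in np.linspace(a + 0.05, 0.99, n):
            v_sigma, lam, bias = forward_model(w, a, T)
            observables.append((v_sigma, lam))
            biases.append(bias)
    return np.asarray(observables), np.asarray(biases)

def correct_center(center, v_sigma, lam, observables, biases):
    """Interpolate the bias at the measured (v_sigma, lambda) and remove it
    from the extracted center."""
    bias = griddata(observables, biases, np.array([[v_sigma, lam]]),
                    method="linear")[0]
    return center - bias
```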

Fig 7. When w = 1, (a) width as a function of a and T; (b) λ as a function of a and T; (c) bias as a function of a and T.

https://doi.org/10.1371/journal.pone.0127068.g007

Fig 8. When w = 2, (a) width as a function of a and T; (b) λ as a function of a and T; (c) bias as a function of a and T.

https://doi.org/10.1371/journal.pone.0127068.g008

Fig 9. When w = 3, (a) width as a function of a and T; (b) λ as a function of a and T; (c) bias as a function of a and T.

https://doi.org/10.1371/journal.pone.0127068.g009

The results indicate that the line width increases as a increases or T decreases. Thus, the line width increases from Fig 7(A) to Fig 9(A). Therefore, the line width is proportional to w. From Fig 7(B), λ increases as a decreases or T increases. Similarly, we can conclude that λ is proportional to w from Figs 7(B), 8(B) and 9(B). The bias, which is shown in Figs 7(C), 8(C) and 9(C) when w equals 1, 2, and 3, respectively, is proportional to a and w but inversely proportional to T. Therefore, correction is necessary for accurate extraction of the line position, especially when a and w are large or T is small.

Correction of Perspective Distortion

A) Related properties

As the camera model is projective, perspective distortion exists in the captured line image, i.e., the line position of the image does not correspond with the projection of the line position of the scene on the image plane. The perspective distortion in the camera model is shown in Fig 10.

In Fig 10, O is the optical center of the camera and O-XYZ is the camera coordinate system (CCS). The profile of the line is denoted by line segment AC, and its midpoint is point B. The points corresponding to points A, B, and C on the image plane are points a, b, and c, respectively. Two lines parallel to ac, passing through point A and point C, respectively, intersect with line OO1 at point O2 and point O3. Point F and point G are the intersections of line AO2 with line OB and line OC, respectively. The vertical lines that pass through point B and point C intersect with the line AO2 at point D and point E, respectively. Owing to the limited field of view (FOV) of the lens, the range of angle BAD (0~90°) and angle GOO1 (0~45°) can be determined; we define angle BAD as θ, angle GOO1 as β, the width of the profile as l, and the distance from the line to the image plane as d. The following relation is obtained: (16) Then, (17) Therefore, the length of line segment EF is (18) As the length of segment EG is |EG| = l sin θ tan β, the following equation can be derived: (19) Then, the deviation ratio is given by (20) Given the width and location of the line, the deviation ratio is proportional to θ. Moreover, lines with the same z-coordinate in the CCS have the same deviation ratio.

The width of line segment ac, which is the line profile on the image plane, can be derived as follows: (21) The offset of the center is easily determined as (22) As θ and β are known, Eq (22) can be rewritten as (23) where δ = (sinθcosθ+sin2θtanβ).
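A minimal pinhole-model check of this effect is given below, with illustrative values for f, d, l, θ, and a lateral offset that fixes β; it only verifies that the midpoint of the projected segment differs from the projection of the scene midpoint whenever θ ≠ 0.

```python
import numpy as np

# Illustrative values (millimetres and radians); X0 is a lateral offset that
# makes the viewing angle beta nonzero.
f, d, l, theta, X0 = 8.0, 500.0, 20.0, np.deg2rad(30.0), 50.0

A = np.array([X0 - 0.5 * l * np.cos(theta), 0.0, d + 0.5 * l * np.sin(theta)])
C = np.array([X0 + 0.5 * l * np.cos(theta), 0.0, d - 0.5 * l * np.sin(theta)])
B = 0.5 * (A + C)                        # midpoint of the profile in the scene

def project_x(P):
    """Ideal perspective projection x = f * X / Z (image x-coordinate)."""
    return f * P[0] / P[2]

midpoint_of_image = 0.5 * (project_x(A) + project_x(C))
image_of_midpoint = project_x(B)
print(midpoint_of_image - image_of_midpoint)   # nonzero whenever theta != 0
```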

The partial derivatives of Δ with respect to l, f, and d are obtained as (24) The following properties can be deduced from Eq (24):

  1. The offset is proportional to l and inversely proportional to d. Further, it is proportional to the focal length f when d > f, and inversely proportional to f when d < f. In general, d is far larger than f; therefore, the assumption that the offset is proportional to f can be treated as a fact in practice.
  2. The offset is a function of θ and β when f, l, and d are determined. The relationship is shown in Fig 11.
Fig 11. Perspective distortion as a function of θ and β.

https://doi.org/10.1371/journal.pone.0127068.g011

Because the related scale factor is assumed to be 1 in Fig 11, the offset should be multiplied by the actual scale factor in practice. We can see that the offset is zero when θ is zero. Although the offset is proportional to θ, the available information is reduced as l decreases and θ increases. Considering these factors together, θ should be minimized to obtain sufficient information and to decrease the offset. If required, correction can be performed on the basis of Eq (22).

B) Perspective distortion in measurement

In this section, we consider a typical vision measurement system, namely, the line-structured light vision system (LSLVS), as an example. In the measurement process of LSLVS, a light stripe is projected onto the target surface. The light stripe, which represents the available information, is captured for reconstruction.

The schematic of LSLVS is shown in Fig 12. The profile of the laser beam, i.e., the line segment ab in Fig 12, can be represented as the normal Gaussian line profile. The profile of the projection on the image plane is AB. Line segment BC is parallel to line segment ab, and it also satisfies the normal Gaussian distribution. Point D is the midpoint of BC, while line DF is parallel to line AC and intersects with line AB at point F. Because the projective angle of the laser projector (angle Φ in Fig 12) is small (in general, around 0.02°), the line segment AB can also be treated as the normal Gaussian line profile. Similarly, the midpoint of line segment AB can be treated as the projection (point E) of the center of the laser projector, i.e., point E and point F are approximately coincident.

Fig 12. Schematic of line-structured light vision system.

https://doi.org/10.1371/journal.pone.0127068.g012

Point a' and point b' are the projective points of point A and point B on the image plane. The profile along line segment a'b' is not a Gaussian line profile; rather, it can be expressed as the Gaussian line profile multiplied by a function. Then, the description of line segment a'b' is given by (25)

Owing to the offset of the light stripe in the x-direction, Eq (25) is simplified as (26) where F = cosθ + sinθtanβ, and θ and β are defined as in Fig 10. For a vision measurement system, the normal FOV is less than 90° (e.g., a 1/3” CCD camera with a 2.8-mm lens); similarly, β is less than 45°. Then, Eq (26) can be rewritten as (27) F and its gradient as a function of θ and β are shown in Fig 13.

Fig 13. (a) F as a function of θ and β; (b) Gradient of F as a function of θ and β.

https://doi.org/10.1371/journal.pone.0127068.g013

In Fig 13(A), the value of F is less than 1.5. Moreover, the continuity of F with a small gradient can be deduced from Fig 13(B). As there exists only a small variation in θ and β in the CCS, the variation of F is small, i.e., the offset is small. In the measurement process of LSLVS, the width of the laser stripe is small, whereas the distance from the lens to the target (d) is much greater than the focal length (f). It therefore follows that the offset due to perspective distortion is small. The geometrical center of the line image can be considered as the representation of the center of the line in the scene. In some special fields that require high precision, the line position can be corrected, if required, on the basis of the method described above.
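The bound on F can be checked numerically from the expression given above; the grid spacing is an illustrative choice.

```python
import numpy as np

# F = cos(theta) + sin(theta) * tan(beta) over theta in (0, 90) deg and
# beta in (0, 45) deg stays below 1.5, consistent with Fig 13(A).
theta = np.deg2rad(np.linspace(0.0, 90.0, 901))
beta = np.deg2rad(np.linspace(0.0, 45.0, 451))
TH, BE = np.meshgrid(theta, beta)
F = np.cos(TH) + np.sin(TH) * np.tan(BE)
print(F.max())   # ~1.414 (= sqrt(2)), attained at theta = beta = 45 deg
```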

Experiment

A) Images using the over-exposure model

In this section, we explain how the over-exposure model is used to extract the line position from an over-exposed image. In the field of medical image analysis, owing to over-exposure, the image of the line is always saturated in the negative direction (Fig 14(A)). During laser measurement, over-exposure is inevitable; Fig 14(B) and 14(C) show some examples. The line position is extracted using the proposed over-exposure model, and the related results are plotted in Fig 14.

Fig 14. Application of the proposed method to (a) medical imaging (the image is taken from [27]); (b) image of a 1D target in the line-structured light vision system; (c) image of a planar target in the line-structured light vision system.

https://doi.org/10.1371/journal.pone.0127068.g014

To verify the effect of exposure level on the proposed method, images were captured under different levels of exposure. Fig 15 shows the setup of the line-structured light vision system. In this system, AVT F-504B was employed as the camera for capturing images of the light stripe at different levels of exposure. The light stripe was projected onto the surface of an experimental workpiece. By controlling the exposure time, a series of images at different exposure levels can be captured. As the camera faces the surface of the experimental workpiece, the perspective distortion is negligible in this experiment. The results are shown in Fig 16.

Fig 16. Extraction of the light stripe at different exposure levels.

The green line represents the gray value of the profile, whereas the red line represents the fitting result. Extraction with exposure duration of (a) 70 ms; (b) 253 ms; (c) 590 ms; (d) 870 ms; (e) 1030 ms; (f) 2723 ms.

https://doi.org/10.1371/journal.pone.0127068.g016

The images captured at different levels of exposure are shown in Fig 16. The exposure time of the image in Fig 16(A) is 70 ms, and the gray value of the line is unsaturated. Therefore, for Fig 16(A), extraction using the Gaussian model of Steger's method is identical to that using the proposed method. Further, it is verified that the extraction accuracy is sufficiently high that the obtained value can be regarded as the true value in this experiment. From Fig 16(A) to Fig 16(F), the exposure time increases, and the over-exposure becomes increasingly evident. The centers in these images (Fig 16(B)–16(F)) can nevertheless be detected by the proposed method. Unlike the method described in [27], the proposed method detects the centers according to the fitted edges of the profile, and the result obtained is compared with the true value.

In order to compare with the results of other methods, extraction was also performed using Steger's method and the method described in [23] (in Fig 16, the extractions obtained using the other methods are not plotted for the sake of clarity). The line centers obtained by the three different methods as a function of exposure time are listed in Table 1.

Table 1. Extraction results under various exposure times by three methods.

https://doi.org/10.1371/journal.pone.0127068.t001

According to Table 1, the root mean square (RMS) error of the centers extracted by the proposed method is less than 0.5 pixels, while the RMS error of the centers extracted by the other two methods is less than 1.0 pixel. Although Steger’s method and the method in [23] also provide good results, they are less accurate than the over-exposure model. The corrected result is more precise than the uncorrected result by around 45.5%. Thus, the proposed method is capable of precisely detecting the center from an over-exposed image.

B) Correction of perspective distortion

Because it is difficult to determine the center of the line, a planar target with zebra stripes is employed in our experiment. The image is captured at an angle of around 60°. In the experiment, the adjacent black and white stripes are treated as a group. In this case, the center is obvious because of the distinct contrast between black and white. Then, the group is processed as a single color (Fig 17(A)), and the image center of the group can be determined, while the corrected center can also be determined using Eq (20). The related result is shown in Fig 17.

Fig 17. Effect of perspective distortion on extraction.

(a) Processed image, (b) Original image. The green circle denotes the image center, whereas the red cross denotes the corrected one.

https://doi.org/10.1371/journal.pone.0127068.g017

Furthermore, a series of experiments was conducted to evaluate the effect of perspective distortion on the extraction. Because it is difficult to measure the shooting angle, we merely increased the angle to observe the perspective distortion. We obtained the center of the line using the extraction method and then corrected it on the basis of the related properties. The ideal center was obtained as described above. With the extraction method described in this paper, the center of the line at a given position can be obtained from its one-dimensional line profile; in this case, the center reduces to a one-dimensional quantity. The results are listed in Table 2.

In general, the image center does not correspond with the projection of the scene center on the image plane. According to Table 2, the extraction improves markedly after correction of the perspective distortion. The root mean square (RMS) error of the extracted center after correction is about 0.22 pixels, while the RMS error of the uncorrected extraction is about 1.44 pixels. In our experiments, the shooting angle increases gradually from No. 1 to No. 9. To illustrate this clearly, the perspective distortion as a function of the shooting angle is shown in Fig 18.

Fig 18. Perspective distortion as a function of the shooting angle.

https://doi.org/10.1371/journal.pone.0127068.g018

It can be seen that the perspective distortion cannot be neglected, especially when the shooting angle is large. After correction based on Eq (20), the distortion is nearly eliminated. In practice, the correction should be performed according to specific requirements.

Conclusion

In this paper, we proposed a method for correcting line positions extracted from an over-exposed image. Based on the Gaussian line profile of Steger’s method, a new line model was developed by incorporating over-exposure features. Accordingly, the proposed model was used to determine the line position from an over-exposed image. Simulations and experiments showed that the proposed model is more suitable than Steger’s method for line extraction from an over-exposed image in terms of its accuracy and its ability to detect the actual line center in the over-exposed image. We also analyzed the perspective distortion, which is inevitable during line extraction owing to the projective camera model employed in vision measurement. The perspective distortion can be rectified on the basis of the bias introduced as a function of related parameters. In addition, we discussed the properties of the proposed model and its application to vision measurement. In practice, a suitable model should be selected to correct line extraction, and the perspective distortion can be corrected according to specific requirements. Moreover, the proposed model can be used not only in vision measurement but also in other fields such as remote sensing and medical image analysis, where over-exposed images are frequently encountered.

Although the proposed line model is based on the Gaussian line profile, higher accuracy can be achieved by replacing the Gaussian line profile with a better extraction method. In addition, when the over-exposure is severe, the saturation value is small. In such cases, the proposed line model becomes ineffective, and it should be improved to obtain the actual line position. We plan to investigate such scenarios in future studies.

Acknowledgments

We would like to thank Dr. Y. Wang from Beihang University who helped the authors to finish the experiments. We would also like to thank one anonymous reviewer for helpful suggestions that improved this manuscript.

Author Contributions

Conceived and designed the experiments: MS ZW. Performed the experiments: MS. Analyzed the data: MS. Contributed reagents/materials/analysis tools: MS ZW GZ. Wrote the paper: MS MH.

References

  1. Wessel B. Automatische Extraktion von Straßen aus SAR-Bilddaten, Deutsche Geodätische Kommission, Reihe C, Heft 600, München, 2006.
  2. Hedman K. Statistical fusion of multi-aspect synthetic aperture radar data for automatic road extraction, Deutsche Geodätische Kommission, Reihe C, Heft 654, München, 2010.
  3. Hinz S, Baumgartner A. Automatic extraction of urban road networks from multi-view aerial imagery. ISPRS J. Photogramm. Remote Sens., 58 (2003), pp. 83–98.
  4. Hinz S, Wiedemann C. Increasing efficiency of road extraction by self-diagnosis. Photogramm. Eng. Remote Sens., 70 (2004), pp. 1457–1466.
  5. Fleming MG, Steger C, Zhang J, Gao J, Cognetta AB, Pollak I, et al. Techniques for a structural analysis of dermatoscopic imagery. Comput. Med. Imaging Graph., 22 (1998), pp. 375–389. pmid:9890182
  6. Fleming MG, Steger C, Cognetta AB, Zhang J. Analysis of the network pattern in dermatoscopic images. Skin Res. Technol. (1999), pp. 42–48.
  7. Xiong G, Zhou X, Degterev A, Ji L, Wong STC. Automated neurite labeling and analysis in fluorescence microscopy images. Cytometry Part A, 69A (2006), pp. 494–505.
  8. Zhang Y, Zhou X, Witt RM, Sabatini BL, Adjeroh D, Wong STC. Dendritic spine detection using curvilinear structure detector and LDA classifier. NeuroImage, 36 (2007), pp. 346–360. pmid:17448688
  9. Nuydens R, Dispersyn G, Kieboom GVD, de Jong M, Connors R, Ramaekers F, et al. Bcl-2 protects against apoptosis-related microtubule alterations in neuronal cells. Apoptosis, 5 (2000), pp. 43–51. pmid:11227490
  10. Wei Z, Zhou F, Zhang G. 3D coordinates measurement based on structured light sensor. Sensor Actuat. A: Phys., 120 (2005), pp. 527–535.
  11. Wong AK, Niu P, He X. Fast acquisition of dense depth data by a new structured light scheme. Comput. Vision Image Understand., 98 (2005), pp. 398–422.
  12. Sun J, Zhang G, Wei Z, Zhou F. Large 3D free surface measurement using a mobile coded light-based stereo vision system. Sensor Actuat. A: Phys., 132 (2006), pp. 460–471.
  13. Hinz S, Stephani M, Schiemann L, Zeller K. An image engineering system for the inspection of transparent construction materials. ISPRS J. Photogramm. Remote Sens., 64 (2009), pp. 297–307.
  14. Lemaître C, Miteran J, Matas J. Definition of a model-based detector of curvilinear regions, in: Kropatsch WG, Kampel M, Hanbury A (Eds.), Computer Analysis of Images and Patterns, Lecture Notes in Computer Science, vol. 4673, Springer-Verlag, Berlin (2007), pp. 686–693.
  15. Guan T, Duan LY, Yu JQ, Chen YJ, Zhang X. Real time camera pose estimation for wide area augmented reality applications. IEEE Computer Graphics and Applications, 31(3), pp. 56–68, 2011. pmid:24808092
  16. Gao Y, Wang M, Ji RR, Wu XD, Dai QH. 3D object retrieval with Hausdorff distance learning. IEEE Transactions on Industrial Electronics, 61(4), pp. 2088–2098, 2014.
  17. Gao Y, Wang M, Zha ZJ, Shen JL, Li XL, Wu XD. Visual-textual joint relevance learning for tag-based social image search. IEEE Transactions on Image Processing, 22(1), pp. 363–376, 2013. pmid:22692911
  18. Gao Y, Wang M, Tao DC, Ji RR, Dai QH. 3D object retrieval and recognition with hypergraph analysis. IEEE Transactions on Image Processing, 21(9), pp. 4290–4303, 2012. pmid:22614650
  19. Ji RR, Duan LY, Chen J, Yao H, Yuan J, Rui Y, Gao W. Location discriminative vocabulary coding for mobile landmark search. International Journal of Computer Vision, 96(3), pp. 290–314, 2012.
  20. Guan T, He Y, Gao J, Yang J, Yu J. On-device mobile visual location recognition by integrating vision and inertial sensors. IEEE Transactions on Multimedia, 15(7), pp. 1688–1699, 2013.
  21. Guan T, He YF, Duan LY, Yu JQ. Efficient BOF generation and compression for on-device mobile visual location recognition. IEEE Multimedia, 21(2), pp. 32–41, 2014.
  22. Guan T, Fan Y, Duan LY. On-device mobile visual location recognition by using panoramic images and compressed sensing based visual descriptors. PLOS ONE, 9(6), 2014.
  23. Jang J-H, Hong K-S. Detection of curvilinear structures and reconstruction of their regions in gray-scale images. Pattern Recognition, 35 (2002), pp. 807–824.
  24. Stanford DC, Raftery AE. Finding curvilinear features in spatial point patterns: principal curve clustering with noise. IEEE Trans. Pattern Anal. Mach. Intell., 22(6), June 2000.
  25. Jeon B-K, Jang J-H, Hong K-S. Road detection in spaceborne SAR images using a genetic algorithm. IEEE Trans. Geosci. Remote Sens., 40(1), Jan 2002.
  26. Steger C. An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Mach. Intell., 20 (1998), pp. 113–125.
  27. Steger C. Unbiased extraction of lines with parabolic and Gaussian profiles. Computer Vision and Image Understanding, 117 (2013), pp. 97–112.
  28. Press WH. Numerical Recipes in C: The Art of Scientific Computing, 1992.
  29. Yang X, He J, Zhang G, Zhou F. A method of sub-pixel extraction from circular structured light stripes center. Opto-Electronic Engineering, 31(4), pp. 46–49.
  30. Tseng HY, Lai CH, Yu SS. An effective license-plate detection method for overexposure and complex vehicle images. ICHIT 2008, pp. 176–181.
  31. Guo D, Cheng Y, Zhuo S, Sim T. Correcting over-exposure in photographs. CVPR 2010, pp. 515–521.
  32. Heikkilä J, Silvén O. A four-step camera calibration procedure with implicit image correction. Computer Vision and Pattern Recognition, 1997.
  33. Zhang G, Wei Z. A position-distortion model of ellipse center for perspective projection. Measurement Science and Technology, 2003, 14:1420–1426.
  34. Florack LMJ, ter Haar Romeny BM, Koenderink JJ, Viergever MA. Scale and the differential structure of images. Image Vision Comput., 10 (1992), pp. 376–388.