
Automation in Construction 101 (2019) 59–71

Development of high-accuracy edge line estimation algorithms using terrestrial laser scanning

Qian Wang a, Hoon Sohn b, Jack C.P. Cheng c,*

a Department of Building, School of Design and Environment, National University of Singapore, Singapore
b Department of Civil and Environmental Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
c Department of Civil and Environmental Engineering, The Hong Kong University of Science and Technology, Hong Kong, China

Abstract

Accurate estimation of object edge lines is key to accurate dimension estimation. However, due to the limited spatial resolution of terrestrial laser scanning and the existence of mixed pixels along object edges, edge line estimation is always imperfect. Most existing edge line estimation algorithms simply remove mixed pixels before edge line estimation without utilizing the mixed pixel data. This study develops high-accuracy edge line estimation algorithms for flat surfaces using 3D terrestrial laser scanning. Four different edge line estimation algorithms are proposed and compared, two of which leverage information from mixed pixels to improve the edge line estimation accuracy. The effects of scan data parameters on the performance of the edge line estimation algorithms are investigated using numerical simulations and validation experiments. Based on the simulation results, recommendations on the edge line estimation algorithm and scanning settings are provided, which are further validated through experiments.

1. Introduction

Terrestrial laser scanning (TLS) is being adopted in the construction industry as it is able to acquire 3D range measurement data at a high speed and with high accuracy [1]. Recent research efforts have been reported on laser scanning based quality inspection [2–5], as-built model reconstruction [6–8], structural health monitoring [9,10], and construction progress tracking [11–13]. One important task in these applications is to estimate the dimensions of construction components. According to their range measurement principles, TLS can be divided into two categories: time-of-flight (TOF) and amplitude-modulated continuous-wave (AMCW). TOF scanners measure distance by emitting a laser pulse and detecting the returned pulse; the distance is obtained from the travelling time. AMCW scanners emit amplitude-modulated continuous waves and measure the phase difference (known as phase shift) between the emitted and returned signals; the distance is then obtained from the phase shift. In general, AMCW scanners are more suitable for dimension estimation because they have a higher ranging accuracy (a few millimeters) than TOF scanners (a few centimeters) [14].

The existence of mixed pixels can, however, deteriorate the accuracy of dimensions estimated from AMCW laser scan data [15]. A mixed pixel occurs when the laser beam crosses an object edge and the beam is split into two parts. When the TLS receives the reflections of the two split beams, the estimated distance of the mixed pixel becomes random [16].

Fig. 1(a) illustrates an example of a mixed pixel, where P3 is a mixed pixel and the other points (P1, P2, P4, P5) are normal points (i.e., non-mixed pixels). Fig. 1(b) displays mixed pixels obtained from real laser scan data. Note that, if the distance between two surfaces is larger than half of the ambiguity interval of the scanner, mixed pixels can occur in front of the foreground surface (i.e., surface 1) or behind the background surface (i.e., surface 2) [16]. In the example shown in Fig. 1(b), the shortest wavelength used by the scanner is 2.4 m and the ambiguity interval is half of the wavelength, i.e., 1.2 m.

Usually an object edge is defined as the intersection of two surfaces. When both surfaces are scanned, the edge can be accurately estimated as the intersection of the surfaces despite the existence of mixed pixels. However, when only one surface defining the edge can be effectively scanned, mixed pixels negatively affect the edge estimation accuracy. Therefore, mixed pixels are usually removed prior to edge estimation using a mixed pixel filter [15–20]. Once mixed pixels are removed, object edges are estimated using only the normal points that fall on the target. However, this estimation often leads to underestimation of object dimensions, as shown in Fig. 2; the dimension difference between the measured and the true edges is termed "edge loss" in Tang et al. [21]. Although Tang et al. [21] and Wang et al. [2] proposed methods to estimate edge lines with consideration of the edge loss effect, these previous studies simply removed mixed pixels and did not utilize them for edge line estimation.

This study aims to develop high-accuracy edge line estimation algorithms for edge lines on flat surfaces and to investigate the effects of scan data parameters on edge line estimation. Four different edge line estimation algorithms are developed and compared in this study. Among them, two algorithms utilize the information from mixed pixels to improve the edge line estimation accuracy. Furthermore, the effects of three scan data parameters on the developed edge line estimation algorithms are investigated through numerical simulations. Based on the simulation results, recommendations of scanning settings are provided to achieve high-accuracy edge line estimations, and experiments are conducted to validate these findings.

The rest of this paper is organized as follows. Section 2 introduces the research background on laser scanning and the processing of laser scan data. Section 3 describes the development of the edge line estimation algorithms. Section 4 investigates the effects of scan data parameters and compares the performance of the proposed algorithms. Discussions and recommendations of scanning settings are elaborated in Section 5. Experimental results are provided in Section 6 to validate the findings from the numerical simulations. Section 7 concludes this study and provides suggestions for future work.

2. Research background

2.1. Laser scan data

While in operation, a TLS rotates 360° horizontally and a built-in mirror reflects the laser beam vertically with a certain field of view depending on the scanner specification, as shown in Fig. 3(a). In this way, the TLS scans objects with a certain angular resolution, θ, in both the horizontal and vertical directions. Note that some scanners can set the horizontal and vertical angular resolutions separately. The resulting scan points are arrayed in horizontal and vertical grids with row indexes (e.g., 1, 2, 3…) and column indexes (e.g., A, B, C…), as shown in Fig. 3(a). The angle of a laser beam with respect to the target surface normal is known as the incident angle β. As shown in Fig. 3(b), the spacing s between two adjacent scan points is related to β, θ, and the distance L from the TLS to the target. The TLS emits a circular laser beam with a diameter of d_0 at exit. The size d of a laser spot (the area illuminated by the laser beam on the target) is determined as:

d = d_0 + (L + L_0)·α_D/cos β + ω·t_s·L/cos β    (1)

where L_0 is the distance from the laser emitter to the focal point, α_D is the laser divergence, ω is the rotation rate of the scanner, and t_s is the sampling time of a scan point [21].
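To make Eq. (1) concrete, the following is a minimal sketch that evaluates the spot size; the function name and the sample parameter values are illustrative assumptions, not values from the paper.

```python
import math

def laser_spot_size(d0, L, L0, alpha_D, beta, omega, t_s):
    """Laser spot size d on the target per Eq. (1).

    d0      -- beam diameter at exit (m)
    L       -- distance from the scanner to the target (m)
    L0      -- distance from the laser emitter to the focal point (m)
    alpha_D -- laser beam divergence (rad)
    beta    -- incident angle (rad)
    omega   -- rotation rate of the scanner (rad/s)
    t_s     -- sampling time of a scan point (s)
    """
    # Both the divergence term and the sweep term are stretched by
    # 1/cos(beta) on an oblique surface.
    return d0 + (L + L0) * alpha_D / math.cos(beta) \
              + omega * t_s * L / math.cos(beta)

# Illustrative values only (not taken from the paper):
d = laser_spot_size(d0=0.0025, L=8.0, L0=0.0, alpha_D=0.0003,
                    beta=math.radians(20), omega=2 * math.pi, t_s=1e-5)
print(f"spot size: {d * 1000:.2f} mm")
```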

While scanning, surrounding background objects are scanned along with the target object. Hence, raw scan data consist of three types of data points: valid points, background points, and mixed pixels, as shown in Fig. 4(a). A valid point and a background point occur when the laser beam fully falls on the target and on the background object, respectively. Among these three types of points, mixed pixels and background points are usually removed before edge line estimation. For example, the model developed by Tang et al. [21] removes non-valid points, including mixed pixels and background points. The model extracts the last valid point in each row (filled dots in Fig. 4(b)) and finds the locations of hypothetical adjacent points (empty dots in Fig. 4(b)), which are non-valid points. To ensure that valid points are fully inside the target and non-valid points are not, the edge line must be located between the lower bound and upper bound of the edge position, as shown in Fig. 4(b). If the unknown location of the edge position is assumed to follow a uniform distribution within this range, the average of the lower and upper bounds becomes the estimated edge position.
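A minimal sketch of this bound-averaging idea from [21], assuming the hypothetical adjacent point sits one point spacing past the last valid point; the 1D coordinates are hypothetical.

```python
def estimate_edge_position(last_valid_pos, spacing):
    """Bound-averaging estimate of the edge position along one scan row
    (a sketch of the model in [21]): the edge lies between the last
    valid point (lower bound) and the hypothetical adjacent non-valid
    point one spacing away (upper bound); under a uniform-distribution
    assumption, the midpoint of the bounds is the estimate."""
    lower = last_valid_pos
    upper = last_valid_pos + spacing
    return 0.5 * (lower + upper)

# Example: last valid point at 994.0 mm, 10 mm point spacing
print(estimate_edge_position(994.0, 10.0))  # -> 999.0 mm
```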

2.2. Edge detection

A number of research efforts have been reported on edge detection from 3D point cloud data. Depth discontinuity in point cloud data is an important feature for edges. Tang et al. [19] analyzed and compared two different algorithms for detecting depth discontinuity: the normal-angle filter and the edge-length filter. Both algorithms triangulate the point cloud data into a triangle mesh. The normal-angle filter detects depth discontinuity by finding triangles with oblique orientations, and the edge-length filter detects depth discontinuity based on triangles with long edge lengths. Another popular method for edge detection is to search for high-curvature points. For example, Fan et al. [22] detected edges of point cloud data by finding the local extrema of curvatures measured at each point. Rusu et al. [23] used region growing based on a smoothness constraint to extract planar patches from point cloud data, and the curvature of each point was used to decide whether the point belonged to the region or became an edge point. Other edge detection methods, such as normal variation analysis [20], the 3D Canny algorithm [24], the combination of normal estimation and graph theory [25], eigenvalues of the covariance matrix of neighbors [26], and alpha-shapes [27], have also been developed.

Fig. 1. Mixed pixels in laser scan data. (a) Occurrence of a mixed pixel P3 when the laser beam crosses the object edge (adapted from [2] with permission). (b) Example of mixed pixels obtained from real scan data (adapted from [15] with permission).

Fig. 2. Edge loss between the measured and the true edges.

Most of the existing studies focus on the detection of edge points with a large variance in normal or curvature, and they are usually applicable to both flat and curved surfaces. Different from these existing studies, this study focuses on the detection of straight boundary lines (e.g., edge lines) of flat surfaces when only one surface defining the edge can be effectively scanned. Previous studies on edge line estimation did not explicitly consider the edge loss problem illustrated in Fig. 2. Without proper handling of the edge loss problem, there is always a difference between the extracted edges and the actual edges. To the best knowledge of the authors, this study is the first attempt to accurately estimate edge lines with arbitrary angles while considering the edge loss problem. Although Tang et al. [21] proposed a model to quantify the edge loss, that model is applicable only to horizontal and vertical edge lines.

2.3. Mixed pixel filtering

Point cloud data often contain noisy measurements that can affect the accuracy of edge line estimation. Hebert and Krotkov [16] summarized a list of effects that can result in corrupted or degraded laser scan data, including mixed pixels, range/intensity crosstalk, and range drift. Lichti et al. [28] discussed several types of artifacts, including angular displacement of features, mixed pixels, detector saturation, blooming, multipath, and non-normal incident angles. Among these effects and artifacts, mixed pixels have drawn the most attention from researchers, and a number of methods have been developed for filtering out mixed pixels. For example, Hebert and Krotkov [16] proposed using a median filter to remove mixed pixels, considering that mixed pixels are isolated from normal points. Later, Adams and Probert [17] proposed a physical model for the distance measurement process of laser scanners and the occurrence of mixed pixels; sudden changes in surface reflectance and/or range were then used to detect mixed pixels. Tuley et al. [18] developed an algorithm for removing mixed pixels from scan data of terrains with vegetation, which was applicable to thin structures only. Tang et al. [19] compared several different algorithms for removing mixed pixels, which were fundamentally established on three different filters: the normal-angle filter, the edge-length filter, and the cone of influence algorithm. Recently, the authors' research group developed a mixed pixel filter based on theoretical modeling of mixed pixel measurements [19]. The distance measurement of a mixed pixel was first modeled, and the optimal threshold value for differentiating mixed pixels from normal points was calculated based on probabilistic analysis. Che and Olsen [20] recently developed a normal variation analysis (Norvana) method, which is able to filter mixed pixels from terrestrial laser scan data.

Mixed pixels are usually removed from laser scan data before estimating the edge lines of objects. However, simple removal of mixed pixels is not the best strategy for edge line estimation, because mixed pixels also contain useful information for edge line estimation. Because the laser beams for mixed pixels are split by the edge line, mixed pixels reveal the position of the edge line. Therefore, two edge line estimation algorithms proposed in this study assume that mixed pixels are detected using the algorithm developed in [19], and the positions of the mixed pixels are then exploited to enhance the edge line estimation accuracy.

Fig. 3. Illustrative example of TLS data. (a) 3D view of TLS data. (b) Top view of TLS data.

Fig. 4. Edge line estimation from raw scan data. (a) Three types of points in raw scan data (adapted from [2] with permission). (b) Edge line estimation based on only valid points using the model developed in [21].


Note that this study does not develop a new algorithm for mixed pixel filtering. Instead, an existing algorithm is adopted to detect mixed pixels, and this study utilizes the information from the detected mixed pixels to enhance the edge line estimation accuracy.

3. Development of edge line estimation algorithms

This study proposes four different edge line estimation algorithms, namely SVM1, LSR1, SVM2, and LSR2, as summarized in Table 1. These algorithms are named after the mathematical models used in them: support vector machines (SVM) [29] and least squares regression (LSR). The SVM1 and LSR1 algorithms assume that valid points and non-valid points (including background points and mixed pixels) in the raw scan data are distinguishable, and non-valid points are removed before edge line estimation. On the other hand, the SVM2 and LSR2 algorithms assume that valid points, mixed pixels, and background points in the raw scan data are distinguishable using a mixed pixel detection algorithm. Therefore, the latter two algorithms take advantage of mixed pixels for edge line estimation. The details of the four algorithms are described in Sections 3.1 to 3.4.

3.1. SVM1 algorithm

The SVM1 algorithm is implemented in the following three steps.

(1) Extraction of the last valid point in each row: From the raw scan data, some valid points on the target are first chosen, and all the scan points are projected onto the least squares fitting plane of these valid points. As shown in Fig. 5, valid points on the target are represented as solid dots, including both empty and filled ones. Each point is labeled with a row index (from 1 to 5) and a column index (from A to E) so that each point can be identified by its location. For the given true edge line, five filled solid dots, namely (1, D), (2, D), (3, C), (4, C), and (5, B), are extracted as the last valid point (i.e., the valid point closest to the edge line) in each row. In fact, the last valid point can be extracted from each column as well. Whether to select the last valid point in each row or in each column depends on how many points can be found. In this example, if the last valid points in each row are selected, the abovementioned five points will be found. However, if the last valid points are selected in each column, only two points will be found, namely points (4, C) and (2, D). Therefore, finding the last valid points in each row rather than in each column is the better choice in this example.

(2) Creation of the first hypothetical non-valid point (HNVP) in each row: In each row, the first HNVP is created next to the last valid point. For example, point (1, E) is hypothetically created next to point (1, D), as shown in Fig. 5. Here, the distance between points (1, D) and (1, E), s_DE, is obtained from s_CD, the distance between points (1, C) and (1, D), assuming that point (1, E) is on the target surface, as shown in Fig. 6. First, s_CD is determined as:

s_CD = L_0·tan β_D − L_0·tan β_C    (2)

where L_0 is the perpendicular distance from the scanner to the target, and β_D and β_C are the incident angles of points (1, D) and (1, C), respectively. β_D and β_C are related to the distances from the scanner to points (1, D) and (1, C), denoted as L_D and L_C, as follows:

β_C = cos⁻¹(L_0/L_C)    (3)

β_D = cos⁻¹(L_0/L_D)    (4)

Note that the s_CD, L_C, and L_D values are obtained from the laser scan data, and the L_0, β_D, and β_C values can be obtained from Eqs. (2)–(4) (see Appendix A). Then, s_DE can be obtained as L_0·tan β_E − L_0·tan β_D according to Eq. (2). Here, β_E − β_D is equal to β_D − β_C because the angular resolution is constant and the target surface is flat. In addition, the laser spot size d of the created point in both the horizontal and vertical directions is calculated from the scanning parameters according to Eq. (1). Note that the laser spot may not be a circle on an oblique surface.
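A sketch of how L_0 and s_DE could be computed from Eqs. (2)–(4), using the closed form derived in Appendix A and SciPy's brentq root finder; the distances in the example are illustrative assumptions.

```python
import math
from scipy.optimize import brentq

def solve_L0(s_CD, L_C, L_D):
    """Solve s_CD = sqrt(L_D^2 - L0^2) - sqrt(L_C^2 - L0^2) for L0
    (the form derived in Appendix A); L0 must lie in (0, L_C)."""
    f = lambda L0: (math.sqrt(L_D**2 - L0**2)
                    - math.sqrt(L_C**2 - L0**2) - s_CD)
    return brentq(f, 1e-9, L_C - 1e-9)

def spacing_s_DE(s_CD, L_C, L_D):
    """Spacing between the last valid point (1, D) and the first HNVP
    (1, E), assuming the HNVP lies on the flat target plane."""
    L0 = solve_L0(s_CD, L_C, L_D)
    beta_C = math.acos(L0 / L_C)
    beta_D = math.acos(L0 / L_D)
    # Constant angular resolution on a flat target:
    # beta_E - beta_D = beta_D - beta_C
    beta_E = 2.0 * beta_D - beta_C
    return L0 * math.tan(beta_E) - L0 * math.tan(beta_D)

# Illustrative distances (m): ~10 mm point spacing at ~4 m range
print(spacing_s_DE(s_CD=0.010, L_C=4.000, L_D=4.008))
```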

Table 1. Summary of proposed edge line estimation algorithms.

Algorithm | Mathematical model | Assumption
SVM1 | SVM | Valid points and non-valid points are distinguishable
LSR1 | LSR | Valid points and non-valid points are distinguishable
SVM2 | SVM | Valid points, mixed pixels, and background points are distinguishable
LSR2 | LSR | Valid points, mixed pixels, and background points are distinguishable

Fig. 5. Estimation of the edge line using the SVM1 algorithm (adapted from [2] with permission).

Fig. 6. Creation of the first HNVP (1, E) next to the last valid point (1, D).


Once s_DE is obtained, the location of point (1, E) is determined. As shown in Fig. 5, a total of five HNVPs are created and labeled as empty dashed dots. Because the first HNVP can be either a mixed pixel or a background point, it can be partially or fully outside the target.

(3) Estimation of the edge line using SVM: Because the last valid point in each row is fully inside the target, its endpoint nearest to the edge line (crossings in Fig. 5) must be inside the target. As for the first HNVP in each row, its endpoint farthest from the edge line (triangles in Fig. 5) must be outside the target. The true edge line should therefore lie somewhere between the two endpoints in each row. Thus, SVM for two-class classification can be deployed to estimate the edge line. Taking the two groups of endpoints as data sets from two different classes, SVM gives a separation line with the maximum margin to the support vectors of each class.

In reality, the nearest or farthest endpoints of the last valid points and the first HNVPs are unknown until the angle of the edge line is determined. Thus, the edge line is estimated with an iterative approach. In the first iteration, taking the center points of the last valid points and the first HNVPs as data sets from two different classes, an initial separation line is obtained using SVM. Then, based on the angle of the initial separation line, the two groups of endpoints are obtained, and the first edge line estimation is obtained from SVM by taking the two groups of endpoints as two classes of data sets. In the second iteration, the two groups of endpoints are updated based on the angle of the edge line estimated in the previous iteration, and an updated edge line estimation is obtained using the updated endpoints. The endpoints and the edge line estimation are updated iteratively until the angle difference between two consecutive edge line estimations is less than 0.1°. The value of 0.1° is selected because an angle difference of 0.1° in the edge line estimation results in a difference of less than 0.01 mm in the locations of the endpoints when the laser spot size is 10 mm, which is sufficiently small.
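A minimal sketch of the SVM step (one pass, without the iterative endpoint update), assuming scikit-learn is available; a large C approximates the hard-margin separation described above, and the coordinates are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

def svm_edge_line(inside_endpoints, outside_endpoints, C=1e6):
    """Maximum-margin line between endpoints known to be inside the
    target and endpoints known to be outside it.

    Returns (a, b, c) of the separating line a*x + b*y + c = 0.
    """
    X = np.vstack([inside_endpoints, outside_endpoints])
    y = np.hstack([np.ones(len(inside_endpoints)),
                   -np.ones(len(outside_endpoints))])
    clf = SVC(kernel="linear", C=C).fit(X, y)  # large C ~ hard margin
    (a, b), c = clf.coef_[0], clf.intercept_[0]
    return a, b, c

# Hypothetical endpoints (mm) around a roughly vertical edge near x = 10:
inside = np.array([[9.0, 0.0], [9.2, 5.0], [8.9, 10.0]])
outside = np.array([[11.1, 0.0], [10.9, 5.0], [11.0, 10.0]])
print(svm_edge_line(inside, outside))
```

In the full algorithm, the endpoints would be recomputed from the angle of this line and the fit repeated until the angle changes by less than 0.1°.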

3.2. LSR1 algorithm

Similar to the SVM1 algorithm, the LSR1 algorithm also operates on the last valid points and the first HNVPs, as shown in Fig. 7. As mentioned before, the edge line must be located between the nearest endpoint (shown as a crossing) of the last valid point and the farthest endpoint (shown as a triangle) of the first HNVP. Therefore, the edge line can be estimated by fitting a straight line to all the middle points (filled squares in Fig. 7) between the two groups of endpoints.

However, as the nearest or farthest endpoints are unknown until the angle of the edge line is determined, the edge line is estimated with an iterative approach. In the first iteration, the middle points between the centers of the last valid points and the first HNVPs in each row are obtained, and the middle points are fitted to a straight line using LSR. Then, based on the angle of the fitted line, the two groups of endpoints are obtained, and the first edge line estimation is obtained by finding the LSR-fitted line of the middle points between the two groups of endpoints. In the second iteration, the two groups of endpoints are updated based on the angle of the edge line estimated in the previous iteration, and an updated edge line estimation is obtained using the updated endpoints. The endpoints and the edge line estimation are updated iteratively until the angle difference between two consecutive edge line estimations is less than 0.1°.
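One LSR pass of the midpoint-fitting idea can be sketched as below (the iterative endpoint update is again omitted). Fitting x as a function of y avoids the degenerate case of a near-vertical edge; that parameterization is an implementation choice of this sketch, not a detail from the paper, and the endpoint pairs are hypothetical.

```python
import numpy as np

def lsr_edge_line(nearest_endpoints, farthest_endpoints):
    """Fit a straight line through the midpoints between the paired
    endpoints (one pair per scan row). Returns (m, b) of x = m*y + b,
    parameterized in y because the example edge is closer to vertical
    than horizontal."""
    mids = 0.5 * (np.asarray(nearest_endpoints, dtype=float)
                  + np.asarray(farthest_endpoints, dtype=float))
    m, b = np.polyfit(mids[:, 1], mids[:, 0], 1)  # x = m*y + b
    return m, b

# Hypothetical per-row endpoint pairs (mm):
near = [[9.0, 0.0], [9.2, 5.0], [8.9, 10.0]]
far = [[11.1, 0.0], [10.9, 5.0], [11.0, 10.0]]
print(lsr_edge_line(near, far))
```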

3.3. SVM2 algorithm

In the SVM1 and LSR1 algorithms, the edge line is estimated using the last valid point and the first HNVP in each row. In contrast, the SVM2 algorithm estimates the edge line using the last valid point and the first hypothetical background point (HBP) in each row, as follows.

(1) Identification of the first HBP in each row: In the SVM1 and LSR1 algorithms, the first HNVP in each row is identified by creating a hypothetical point next to the last valid point. Using the mixed pixel detection algorithm previously developed by the authors' group [19], it is possible to decide whether the first HNVP is a mixed pixel or a background point. If it is a background point, this point becomes the first HBP in the row, such as points E2 and D4 in Fig. 8. If it is a mixed pixel, additional hypothetical points are created next to the mixed pixel to find the first HBP in the row. The number of mixed pixels in a row can be obtained using the mixed pixel detection algorithm. Then, the HBP closest to the edge line (with the shortest perpendicular distance from the point center to the edge line) after all mixed pixels becomes the first HBP. For example, points F1, E3, and D5 are identified as the first HBPs after a mixed pixel, as shown in Fig. 8.

(2) Estimation of the edge line using SVM: Because the last valid point in each row is fully inside the target, the endpoints of the last valid points nearest to the edge line (crossings in Fig. 8) must be inside the target. As the first HBP is fully outside the target, the endpoints of the first HBPs nearest to the edge line (triangles in Fig. 8) must be outside the target. Therefore, SVM can be deployed to estimate a decision boundary that separates the two groups of endpoints with the maximum margin. A similar iterative approach is adopted to estimate the edge line.

3.4. LSR2 algorithm

Similar to the SVM2 algorithm, the LSR2 algorithm also operates on the last valid point and the first HBP in each row. The LSR2 algorithm finds the middle point between the nearest endpoint of the last valid point and the nearest endpoint of the first HBP in each row, and the edge line is estimated by fitting a straight line to all these middle points using LSR. As the endpoints are unknown until the edge line angle is determined, an iterative approach similar to that of the LSR1 algorithm is adopted to estimate the edge line.

Fig. 7. Estimation of the edge line using the LSR1 algorithm.


4. Parameter study and comparison of edge line estimation algorithms

This section investigates the effects of scan data parameters on the proposed edge line estimation algorithms and compares the performance of the proposed algorithms. Section 4.1 defines three scan data parameters affecting the performance of edge line estimation. Section 4.2 describes the numerical simulations performed to investigate the effects of the parameters. Sections 4.3 to 4.5 analyze the effects of the three parameters based on the simulation results. Last, Section 4.6 compares the performance of the proposed algorithms.

4.1. Scan data parameters

Scan data can be described by three parameters: (1) the spacing s between two adjacent scan points, (2) the angle α between the edge line and the scanning direction, and (3) the laser spot size d. All three parameters potentially affect the edge line estimation accuracy, as explained in the following subsections.

4.1.1. Spacing s between two adjacent scan points
The edge line estimation accuracy always improves as the spacing s becomes smaller. For example, assuming that all the scan points have constant s and d values, with a fixed vertical edge line, the estimation error using the SVM1 algorithm can vary from 0 (Fig. 9(a)) to s/2 (Fig. 9(b)), depending on the relative position of the scan data with respect to the true edge line. If the relative position of the scan data follows a uniform distribution, the average estimation error becomes s/4, proportional to the spacing s. It is worth noting that this proportional relationship may not hold for edge lines with other angles.
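The s/4 average can be checked with a quick Monte Carlo sketch: for a vertical edge, the midpoint-of-bounds estimate sits s/2 past the last valid point, while the true edge is uniformly distributed within [0, s) past it. The spacing value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
s = 10.0                              # spacing in mm (illustrative)
edge = rng.uniform(0.0, s, 100_000)   # true edge offset past the last valid point
estimate = s / 2.0                    # midpoint between the two bounds
errors = np.abs(estimate - edge)
print(errors.mean())                  # ~= s/4 = 2.5 mm
```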

4.1.2. Angle α between edge line and scanning direction
The angle α between the edge line and the scanning direction also affects the edge line estimation accuracy. The scanning direction is defined as the direction of a row or a column of scan points. Although the two scanning directions (vertical and horizontal) result in two α values, this study takes the smaller value (the one less than 45°). Assuming that all the scan points have constant s and d values, when α = 0°, the maximum estimation error is s/2 using the SVM1 algorithm (Fig. 10(a)). When α is 45°, the maximum error becomes √2·s/4 with the same algorithm (Fig. 10(b)).

4.1.3. Laser spot size d
With a different laser spot size, the division of valid points, mixed pixels, and background points varies, thereby resulting in different edge line estimations. However, no analytical relationship between the laser spot size d and the edge line estimation accuracy can be drawn at this point. Therefore, the effect of the laser spot size is further investigated in the following numerical simulations. It is worth noting that the spacing s and the laser spot size d are correlated.

4.2. Numerical simulation

Numerical simulations were conducted to investigate the effects of the three parameters (s, α, and d) on the proposed edge line estimation algorithms. Simulated scan data are generated based on a predefined set of scanning parameters and the specifications of a FARO S70 laser scanner. The s and d values of the scan data are varied by changing the perpendicular distance L_0 from the scanner to the target and the angular resolution θ. As shown in Table 2, a total of 12 groups of simulations are conducted with different L_0 (2 m, 4 m, 8 m, and 16 m) and θ (0.144°, 0.288°, and 0.576°) values. This range of L_0 is selected because 2 m to 16 m is the common application range for geometric quality inspection of construction products using laser scanning, which is the application scenario that requires high-accuracy edge line estimations. Low angular resolutions are selected to demonstrate that the proposed techniques can provide high-accuracy edge line estimations even with low angular resolutions. Note that a lower resolution takes less scanning time. The scan data are generated for a square area of 1000 mm × 1000 mm, assuming that the direction from the scanner to the center of the scanned area is perpendicular to the target, as shown in Fig. 11(a). Note that the s and d values shown in Table 2 are the values for the scan point with an incident angle of 0° only (i.e., the point at the center in Fig. 11(a)).

Fig. 8. Edge line estimation using the SVM2 algorithm.

Fig. 9. Edge line estimation error for a vertical edge line. (a) Error equal to 0. (b) Error equal to s/2.


For other scan points, the s and d values become larger as the incident angle increases. For example, for simulation group 1, the s and d values for a scan point near the edge of the scanned area are 5.34 mm and 2.82 mm, respectively, while the s and d values at the center point are 5.0 mm and 2.72 mm, respectively.

For each group, ten sub-groups of simulations are performed by varying the α value from 0° to 45° with an increment of 5°. Here, α is the angle of the true edge line with respect to a vertical line, and a 1000 mm-long true edge line is placed in the scanned area, as shown in Fig. 11(b). In each simulation, assuming that the target object is on the left side of the true edge line, all the simulated scan points are divided into valid points, mixed pixels, and background points based on their locations. For example, all the simulated points crossing the edge line are regarded as mixed pixels. Then, the last valid points and the first HNVPs or first HBPs are extracted as needed for each estimation algorithm, as shown in Fig. 11(b). The average perpendicular distance from the true edge line to the estimated edge line is defined as the edge line estimation error.

Once the scan data parameters (L_0, θ, and α) are fixed, the position of the true edge line is varied. For each specific combination of scan data parameters, 500 repetitive simulations are conducted by randomly varying the position of the true edge line within the scanned area. These repeated simulations produce 500 different edge line estimation errors, and the average of the 500 estimation errors is denoted as E for further analyses.
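The error metric can be sketched as the mean perpendicular distance from points sampled along the finite true edge to the estimated line; the sampling density and the example line are assumptions of this sketch.

```python
import numpy as np

def edge_line_error(true_p0, true_p1, est_line, n=101):
    """Average perpendicular distance from the true edge segment
    (true_p0 to true_p1) to the estimated line a*x + b*y + c = 0."""
    a, b, c = est_line
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = (1.0 - t) * np.asarray(true_p0, float) + t * np.asarray(true_p1, float)
    dist = np.abs(a * pts[:, 0] + b * pts[:, 1] + c) / np.hypot(a, b)
    return dist.mean()

# Hypothetical: true vertical edge at x = 10 mm, estimate tilted slightly
print(edge_line_error([10.0, 0.0], [10.0, 1000.0], (1.0, 0.001, -10.0)))
```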

4.3. Effects of spacing s

Fig. 12 shows the relations between E and s for the different algorithms. Each sub-figure shows the relationship for different α and d values. These data are obtained from simulation groups 1–3 and 7–9. Overall, the estimation error E increases as s increases, and this relation is closer to quadratic than linear. It is speculated that the number of scan points on the target is inversely proportional to s², and that the accuracy of the edge line estimation is linearly related to the number of scan points.

4.4. Effects of angle α

Fig. 13 shows the relations between E and α for different algorithms.

Fig. 10. Effects of angle α on edge line estimation accuracy (adapted from [2] with permission). (a) The maximum estimation error of s/2 occurs when α = 0°. (b) The maximum estimation error of √2·s/4 occurs when α = 45°.

Table 2. The 12 groups of simulations with different s and d values. For every group, α takes the values 0°, 5°, 10°, …, 45°, with 500 repetitions for each specific combination of scan data parameters.

Simulation group no. | L_0 (m) | θ (°) | s (mm) | d (mm)
1 | 2 | 0.144 | 5.0 | 2.72
2 | 2 | 0.288 | 10.1 | 2.72
3 | 2 | 0.576 | 20.1 | 2.72
4 | 4 | 0.144 | 10.1 | 3.32
5 | 4 | 0.288 | 20.1 | 3.32
6 | 4 | 0.576 | 40.2 | 3.32
7 | 8 | 0.144 | 20.1 | 4.52
8 | 8 | 0.288 | 40.2 | 4.52
9 | 8 | 0.576 | 80.4 | 4.52
10 | 16 | 0.144 | 40.2 | 6.92
11 | 16 | 0.288 | 80.4 | 6.92
12 | 16 | 0.576 | 160.8 | 6.92

Fig. 11. Simulation of scan data for edge line estimation. (a) Simulated scan data. (b) Edge line estimation using one of the proposed edge line estimation algorithms.


Each sub-figure shows the relations for different s and d values. These data are obtained from simulation groups 2–3 and 7–8. For all cases, E is always very high when α equals 0° and drastically decreases once α deviates from 0°. This is because all the last valid points and first HNVPs (or HBPs) are aligned with the edge line when α is 0°, providing almost the same information for the edge line estimation. The situation is similar when α is 45°: because all the last valid points and first HNVPs (or HBPs) are approximately aligned with the edge line (e.g., the case shown in Fig. 10(b)), the E value is relatively high when α is 45°. For other α values, the last valid points and first HNVPs (or HBPs) provide different information for the edge line estimation, resulting in a more accurate edge line estimation. For all cases, α values between 15° and 35° always provide sufficiently small E.

4.5. Effects of laser spot size d

Fig. 14 shows the relations between E and d for the different algorithms. Each sub-figure shows the relations for different s and α values. These data are obtained from simulation groups 3, 5–8, and 10. Fig. 14 shows that E has no specific trend with respect to d. Compared with the effects of s and α, the effect of d on the edge line estimation algorithms appears negligible. This is potentially because the laser spot size d is small owing to the short scanning distances and low incident angles.

4.6. Comparisons of edge line estimation algorithms

This subsection compares the performance of the four proposed algorithms. In addition, an algorithm that does not consider the edge loss problem is presented as a benchmark for comparison. The benchmark algorithm (denoted as LSR_bm) extracts the last valid point in each row, and the edge line is estimated by fitting a straight line to all the last valid points using LSR. The edge line estimation error of the LSR_bm algorithm is obtained using the same simulation method.

As the E value varies significantly with respect to the scan data parameters, the E value is normalized for a proper comparison of the algorithms.

Fig. 12. Relations between E and s with different α and d values.

Fig. 13. Relations between E and α with different s and d values.


First, for given s, α, and d values, the E values of all five algorithms (four proposed algorithms and one benchmark algorithm) are obtained. Then, the E values are normalized by the minimum E value among the five algorithms: normalized E = E / minimum E for a specific scan parameter setup. Fig. 15 shows the averaged normalized E of the four proposed algorithms with respect to all s, d, and α values. When the relation between the averaged normalized E and one parameter, say s, is computed, the normalized E is averaged over all the d and α values. Fig. 15 shows that the LSR2 algorithm has the best overall performance for most s, d, and α values, and the SVM2 algorithm ranks second. On the other hand, the LSR1 and SVM1 algorithms are generally less accurate.

The performance of the benchmark algorithm LSR_bm was also compared. The normalized average E value of the benchmark algorithm is found to be at least one order of magnitude larger than those of the four proposed algorithms; hence, the corresponding E value is not shown in Fig. 15. The results indicate that considering edge loss compensation in edge line estimation can greatly improve the edge line estimation accuracy.

5. Discussions and recommendations

5.1. Sensitivity analysis with respect to scan data misclassification

So far, the performance of the edge line estimation algorithms has been analyzed without considering scan data misclassification (for the SVM1 and LSR1 algorithms, scan data are assumed to be perfectly classified into valid and non-valid points, and for the SVM2 and LSR2 algorithms, scan data are assumed to be perfectly divided into valid points, mixed pixels, and background points).

Fig. 14. Relations between E and d with different s and α values.

Fig. 15. Averaged normalized E of the four proposed algorithms with different (a) s, (b) d, and (c) α values.


In reality, the classification between valid points, mixed pixels, and background points is not perfect, and misclassification often occurs. Here, the effect of scan data misclassification on edge line estimation is investigated.

For simplicity, only four cases of misclassification are considered here: case 1) some of the last valid points are misclassified as mixed pixels; case 2) some mixed pixels are misclassified as last valid points; case 3) some mixed pixels are misclassified as first background points; and case 4) some of the first background points are misclassified as mixed pixels. Because the SVM1 and LSR1 algorithms divide scan data into valid and non-valid points, these algorithms are affected by the case 1 and case 2 misclassifications. On the other hand, because scan data are divided into valid points, mixed pixels, and background points for the SVM2 and LSR2 algorithms, these algorithms are affected by all four misclassification cases. Furthermore, it is assumed that the four misclassification cases have the same misclassification probability p.

Simulations were conducted by varying the p value from 10% to 20% and 30%. For given s, α, and d values, the increase rate of E with a specific p value relative to the E value without misclassification was calculated. Then, for each algorithm and p value, the increase rate of E averaged over all the simulations was obtained, as shown in Table 3. Among the four algorithms, the LSR2 algorithm shows the lowest sensitivity to scan data misclassification. When p = 30%, the estimation error of the LSR2 algorithm increases by only 4.7%. The LSR1 and SVM2 algorithms also have relatively low sensitivities to scan data misclassification, while the SVM1 algorithm is the most affected by misclassification.
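The misclassification cases can be emulated by randomly flipping point labels with probability p before running an estimation algorithm, as in the hypothetical sketch below; the label names, and letting a misread mixed pixel go either way at random, are assumptions of the sketch.

```python
import numpy as np

def inject_misclassification(labels, p, rng):
    """Flip labels with probability p to emulate the four cases:
    a last valid point may become a mixed pixel (case 1), a mixed pixel
    may become a last valid point or a first background point (cases 2
    and 3), and a first background point may become a mixed pixel
    (case 4)."""
    out = list(labels)
    for i, lab in enumerate(out):
        if rng.random() >= p:
            continue
        if lab == "last_valid":
            out[i] = "mixed"                                   # case 1
        elif lab == "mixed":
            out[i] = rng.choice(["last_valid", "background"])  # cases 2, 3
        elif lab == "background":
            out[i] = "mixed"                                   # case 4
    return out

rng = np.random.default_rng(1)
print(inject_misclassification(
    ["last_valid", "mixed", "background", "mixed"], p=0.3, rng=rng))
```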

5.2. Recommendations of scanning settings

Based on the simulation results in Section 4 and the discussions in Section 5.1, the following recommendations on scanning settings are provided to improve the edge line estimation accuracy.

(1) Estimate the edge line using the LSR2 algorithm. Based on the simulation results, the LSR2 algorithm provides the best edge line estimation accuracy as well as the best robustness to misclassification. Therefore, the LSR2 algorithm is recommended for edge line estimation.

(2) Scan with α values between 15° and 35°. Rather than aligning the scanning direction with one of the edge lines, it is recommended to skew the scanning direction with respect to the edge lines. The simulation results show that α values between 15° and 35° always result in smaller E values.

(3) Determine the s value based on the required accuracy of the edge line estimation. A smaller s value always provides a higher edge line estimation accuracy than a larger one, but it also demands a longer scanning and data processing time. Therefore, a trade-off between the estimation accuracy and the time cost needs to be made.

Based on the above recommendations, Table 4 shows the E values obtained by the LSR2 algorithm for all 12 groups of simulations with α between 15° and 35°. For each group of simulations, the E averaged over all α values between 15° and 35° is shown in Table 4, because this study suggests a range of α values and the performance of the algorithm over all α values in this range is evaluated using the average. In addition, the averaged E value obtained with the LSR_bm algorithm over α between 15° and 35° is also shown in Table 4. The results show that, for the same s value (i.e., for the same scanning and processing time), the LSR2 algorithm reduces the edge line estimation error by at least 85% compared with the benchmark algorithm LSR_bm. It is worth noting that, as s increases from 5.0 mm to 160.8 mm, the reduction ratio of E gradually decreases from 99% to 85%. This indicates that the LSR2 algorithm is more advantageous than the benchmark algorithm when the s value is small.

6. Experimental validation

Experiments were conducted to validate the major findings from the simulations. Specifically, the experiments aimed to validate 1) the performance of the four proposed algorithms compared with the benchmark algorithm, and 2) the effects of α on the edge line estimation accuracy. A rectangular specimen was manufactured and scanned using a laser scanner. The four edge lines of the specimen and the four corner points, defined as the intersection points of the four edge lines, were estimated from the scan data using the proposed algorithms. Then, the lengths of the four edges of the specimen were calculated as the distances between the extracted corner points. Note that the edge line estimation accuracy cannot be directly evaluated through experiments because the true edge lines are unknown. Instead, the edge lengths estimated by the proposed algorithms were compared with manually measured ground truth edge lengths, and the errors of the edge length measurements were used to evaluate the edge line estimation accuracy.
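Given the four estimated edge lines in a·x + b·y + c = 0 form, the corner points and edge lengths used in this evaluation follow from elementary geometry, as sketched below with hypothetical lines.

```python
import numpy as np

def line_intersection(l1, l2):
    """Corner point: intersection of two lines a*x + b*y + c = 0."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    A = np.array([[a1, b1], [a2, b2]], dtype=float)
    return np.linalg.solve(A, -np.array([c1, c2], dtype=float))

# Hypothetical estimated edge lines of a 1000 mm x 800 mm specimen:
left, right = (1.0, 0.0, 0.0), (1.0, 0.0, -1000.0)   # x = 0, x = 1000
bottom, top = (0.0, 1.0, 0.0), (0.0, 1.0, -800.0)    # y = 0, y = 800
c1 = line_intersection(left, bottom)
c2 = line_intersection(right, bottom)
print(np.linalg.norm(c2 - c1))  # estimated bottom edge length -> 1000.0
```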

6.1. Experimental setup

The experiments were conducted on a rectangular foam specimen with a size of 1000 mm × 800 mm. The specimen was scanned using a FARO S70 laser scanner with different s, α, and d values, as shown in Fig. 16. As shown in Table 5, a total of 12 groups of experiments were conducted with different distances L_0 and the same angular resolution θ, resulting in four different spacing s values (5.0 mm, 10.1 mm, 20.1 mm, and 40.2 mm). Note that these s values are approximate and apply only to the scan point with an incident angle of 0°. Furthermore, three different groups of α values were adopted by rotating the specimen: 0°, a random value between 15° and 35°, and 45°. The random value was used because the previous simulation results show that α values between 15° and 35° always result in sufficiently small estimation errors, and any α value between 15° and 35° yields a similar estimation error. Each group of experiments was repeated 20 times with slightly different scanner locations (varying within 100 mm).

Table 3. Averaged increase rate of E (%) with varying p values for the different algorithms.

p | SVM1 | LSR1 | SVM2 | LSR2
10% | 1.5 | 1.8 | 1.3 | 1.1
20% | 13.5 | 3.5 | 3.9 | 3.7
30% | 17.6 | 6.6 | 5.8 | 4.7

Table 4. E values obtained by the LSR2 algorithm with α between 15° and 35° (α = 15°, 20°, 25°, 30°, 35°) for all 12 groups of simulations.

Simulation group no. | s (mm) | d (mm) | Averaged E (mm) | Averaged E with LSR_bm (mm) | Reduction ratio of E
1 | 5.0 | 2.72 | 0.04 | 4.78 | 99%
2 | 10.1 | 2.72 | 0.11 | 7.06 | 98%
3 | 20.1 | 2.72 | 0.28 | 12.10 | 98%
4 | 10.1 | 3.32 | 0.12 | 7.15 | 98%
5 | 20.1 | 3.32 | 0.30 | 11.88 | 97%
6 | 40.2 | 3.32 | 0.79 | 21.12 | 96%
7 | 20.1 | 4.52 | 0.29 | 12.36 | 98%
8 | 40.2 | 4.52 | 0.79 | 20.65 | 96%
9 | 80.4 | 4.52 | 3.74 | 40.42 | 91%
10 | 40.2 | 6.92 | 0.78 | 22.59 | 97%
11 | 80.4 | 6.92 | 3.69 | 41.74 | 91%
12 | 160.8 | 6.92 | 12.6 | 84.91 | 85%


6.2. Experimental results

For each group of experiments, the estimated lengths of all four edges of the specimen were compared with the ground truth, and the estimation errors of the edge lengths were calculated. Because each group of experiments was repeated 20 times, a total of 80 (= 4 edges × 20 repetitions) estimation errors were obtained. The mean, minimum (Min), maximum (Max), and standard deviation (SD) values of the 80 errors are shown in Table 6 for each experiment group and each algorithm.

Two major findings were observed from the experimental results. First, for all 12 groups of experiments, the LSR2 algorithm always has the lowest mean and SD values of estimation errors among all the algorithms. In some cases, the SVM2 algorithm also attains the lowest mean or SD values. The LSR_bm algorithm always has a much larger estimation error than the proposed algorithms. The advantages of the proposed algorithms become more substantial when the α value falls between 15° and 35°. For cases with α values between 15° and 35°, the mean estimation error is reduced by at least 88% when using the SVM2 algorithm compared with the LSR_bm algorithm.

Second, for all four proposed algorithms, experiments with an α value of 0° show much larger errors than those with α values between 15° and 35°. In most cases, the mean estimation error is reduced by more than 80% with α values between 15° and 35° compared with an α value of 0°. In addition, the errors with an α value of 45° are also larger than the errors with α values between 15° and 35°, but smaller than those with an α value of 0°. On average, the mean estimation error with an α value of 45° is 131% larger than that with α values between 15° and 35°.

In conclusion, the experimental results show that the proposed algorithms are much more accurate than the LSR_bm algorithm owing to the consideration of the edge loss problem. In particular, the SVM2 and LSR2 algorithms are more accurate than the other two algorithms, showing the benefits of utilizing mixed pixel information. Furthermore, scanning with α values between 15° and 35° greatly reduces the estimation error compared with an α value of 0°.

Fig. 16. Experimental setup of the specimen and the laser scanner.

Table 5. Scanning parameters for the validation experiments (20 repetitions per group).

Group no. | L_0 (m) | θ (°) | s (mm) | d (mm) | α (°)
1 | 2 | 0.144 | 5.0 | 2.72 | 0
2 | 2 | 0.144 | 5.0 | 2.72 | 15–35 (random)
3 | 2 | 0.144 | 5.0 | 2.72 | 45
4 | 4 | 0.144 | 10.1 | 3.32 | 0
5 | 4 | 0.144 | 10.1 | 3.32 | 15–35 (random)
6 | 4 | 0.144 | 10.1 | 3.32 | 45
7 | 8 | 0.144 | 20.1 | 4.52 | 0
8 | 8 | 0.144 | 20.1 | 4.52 | 15–35 (random)
9 | 8 | 0.144 | 20.1 | 4.52 | 45
10 | 16 | 0.144 | 40.2 | 6.92 | 0
11 | 16 | 0.144 | 40.2 | 6.92 | 15–35 (random)
12 | 16 | 0.144 | 40.2 | 6.92 | 45

Table 6. Comparison of edge length estimation errors for the four proposed algorithms and the benchmark algorithm.

Group no. | Algorithm | Mean (mm) | Min (mm) | Max (mm) | SD (mm)
1 | SVM1 | 3.1 | 0.2 | 6.2 | 1.7
1 | LSR1 | 3.3 | 0.1 | 6.5 | 1.7
1 | SVM2 | 2.1 | 0.1 | 4.1 | 1.3
1 | LSR2 | 1.9 | 0.1 | 3.7 | 1.1
1 | LSR_bm | 5.2 | 0.4 | 11.2 | 3.6
2 | SVM1 | 0.6 | 0 | 1.3 | 0.4
2 | LSR1 | 0.7 | 0 | 1.5 | 0.4
2 | SVM2 | 0.3 | 0 | 0.5 | 0.2
2 | LSR2 | 0.3 | 0 | 0.5 | 0.2
2 | LSR_bm | 3.8 | 0.2 | 5.2 | 2.3
3 | SVM1 | 1.3 | 0 | 2.8 | 0.7
3 | LSR1 | 1.5 | 0 | 2.6 | 0.8
3 | SVM2 | 1.0 | 0 | 1.9 | 0.5
3 | LSR2 | 0.9 | 0 | 1.9 | 0.4
3 | LSR_bm | 4.4 | 0.2 | 8.2 | 2.7
4 | SVM1 | 5.6 | 0.2 | 12.1 | 3.1
4 | LSR1 | 4.6 | 0.1 | 9.9 | 2.8
4 | SVM2 | 3.5 | 0.1 | 7.2 | 2.1
4 | LSR2 | 3.5 | 0.2 | 6.3 | 2.0
4 | LSR_bm | 10.9 | 0.4 | 20.4 | 6.6
5 | SVM1 | 1.0 | 0 | 2.3 | 0.6
5 | LSR1 | 1.1 | 0 | 2.4 | 0.6
5 | SVM2 | 0.8 | 0 | 1.7 | 0.5
5 | LSR2 | 0.8 | 0 | 1.5 | 0.5
5 | LSR_bm | 7.1 | 0.2 | 14.5 | 3.8
6 | SVM1 | 2.3 | 0 | 4.9 | 1.3
6 | LSR1 | 2.1 | 0 | 4.3 | 1.3
6 | SVM2 | 1.3 | 0 | 3.1 | 0.9
6 | LSR2 | 1.3 | 0 | 2.7 | 0.8
6 | LSR_bm | 8.3 | 0.4 | 15.9 | 4.7
7 | SVM1 | 9.6 | 0.6 | 20.0 | 5.4
7 | LSR1 | 8.4 | 0.4 | 18.9 | 4.6
7 | SVM2 | 7.4 | 0.2 | 14.9 | 4.1
7 | LSR2 | 7.0 | 0.7 | 12.6 | 3.9
7 | LSR_bm | 19.7 | 1.8 | 36.8 | 10.8
8 | SVM1 | 1.7 | 0.1 | 3.8 | 0.9
8 | LSR1 | 1.8 | 0 | 3.4 | 0.9
8 | SVM2 | 1.0 | 0 | 2.2 | 0.6
8 | LSR2 | 0.9 | 0.2 | 1.5 | 0.5
8 | LSR_bm | 13.4 | 1.5 | 25.8 | 7.6
9 | SVM1 | 4.2 | 0.1 | 8.2 | 2.1
9 | LSR1 | 3.7 | 0 | 7.4 | 1.9
9 | SVM2 | 3.1 | 0 | 5.7 | 1.5
9 | LSR2 | 3.0 | 0.1 | 5.5 | 1.5
9 | LSR_bm | 15.4 | 1.4 | 28.3 | 8.2
10 | SVM1 | 18.9 | 0.8 | 35.6 | 9.9
10 | LSR1 | 17.5 | 1.3 | 32.2 | 9.4
10 | SVM2 | 14.4 | 0.4 | 27.8 | 8.4
10 | LSR2 | 12.8 | 0.1 | 20.0 | 7.6
10 | LSR_bm | 36.7 | 0.9 | 70.7 | 21.2
11 | SVM1 | 4.2 | 0.3 | 7.9 | 2.3
11 | LSR1 | 3.9 | 0 | 8.2 | 2.3
11 | SVM2 | 2.5 | 0.1 | 4.1 | 1.2
11 | LSR2 | 2.3 | 0 | 3.6 | 1.0
11 | LSR_bm | 23.8 | 1.2 | 44.7 | 14.3
12 | SVM1 | 7.9 | 0.3 | 14.9 | 4.7
12 | LSR1 | 6.8 | 0.4 | 13.3 | 4.6
12 | SVM2 | 5.3 | 0 | 11.2 | 3.6
12 | LSR2 | 5.1 | 0 | 7.8 | 3.1
12 | LSR_bm | 29.0 | 1.1 | 56.1 | 16.8


Therefore, the experimental results are in agreement with the findings from the numerical simulations.

7. Conclusions and future work

This study developed and compared four edge line estimation algorithms, namely SVM1, LSR1, SVM2, and LSR2. The LSR1 and LSR2 algorithms are based on least squares regression, whereas the SVM1 and SVM2 algorithms are based on the support vector machine. While SVM1 and LSR1 simply remove mixed pixels before edge line estimation, SVM2 and LSR2 utilize mixed pixels to improve the estimation accuracy. Numerical simulations were conducted to evaluate the performance of the proposed algorithms and to study the effects of the scan data parameters on edge line estimation, including the spacing s between two adjacent scan points, the angle α between the edge line and the scanning direction, and the laser spot size d. A total of 12 simulation groups were run with varying scan data parameters, and misclassification of data among valid points, mixed pixels, and background points was also considered in the simulations. The simulation results show that s and α strongly affect the edge line estimation accuracy, whereas d has no clear effect. Among the proposed algorithms, LSR2 achieves the best edge line estimation accuracy as well as the best robustness to misclassification. Based on the simulation results, the following recommendations are made for more accurate edge line estimation: 1) use the LSR2 algorithm, 2) set α between 15° and 35°, and 3) select s based on the required edge line estimation accuracy (see the sketch below).
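Because s, L, and the angular resolution θ are tied by s = L·tan θ, the third recommendation can be turned into a simple pre-scan check. The hypothetical helper below is a minimal sketch; the function name and interface are ours for illustration and are not part of the proposed algorithms.

import math

def required_angular_resolution(s_mm: float, L_m: float) -> float:
    # Invert s = L * tan(theta) to get the angular resolution (deg) that
    # yields a point spacing of s_mm (mm) at a scanning distance of L_m (m).
    return math.degrees(math.atan(s_mm / (1000.0 * L_m)))

# Example: keeping s at 5 mm from 10 m away requires theta <= ~0.029 deg.
print(round(required_angular_resolution(5.0, 10.0), 3))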

Experiments were also conducted to validate the findings from the numerical simulations. The experimental results show that the proposed algorithms are much more accurate than the benchmark algorithm LSR_bm, which does not compensate for edge loss. Among the four proposed algorithms, SVM2 and LSR2 are more accurate than the other two, demonstrating the benefit of utilizing mixed pixels for edge line estimation. Furthermore, scanning with α values between 15° and 35° greatly reduces the edge line estimation error compared with scanning at α = 0°. Overall, the experimental results validate the findings from the numerical simulations.

Further studies are warranted before these findings can be generalized to real applications. The variation of the s and d values in the laboratory experiments was limited by the size of the specimen; additional studies are therefore needed to investigate how larger variations of s and d affect the proposed edge line estimation algorithms.

Acknowledgement

This research was supported by a grant (13SCIPA01) from the Smart Civil Infrastructure Research Program funded by the Ministry of Land, Infrastructure and Transport (MOLIT) of the Korean Government and the Korea Agency for Infrastructure Technology Advancement (KAIA).

Appendix A. Derivation of L0, βD, and βC values from Eqs. (2)–(4)

Based on Eqs. (3) and (4), Eq. (2) can be written as:

$s = L_0 \tan\left(\cos^{-1}\frac{L_0}{L_D}\right) - L_0 \tan\left(\cos^{-1}\frac{L_0}{L_C}\right)$

Because $\tan\left(\cos^{-1}\frac{L_0}{L_D}\right) = \frac{\sqrt{L_D^2 - L_0^2}}{L_0}$ and $\tan\left(\cos^{-1}\frac{L_0}{L_C}\right) = \frac{\sqrt{L_C^2 - L_0^2}}{L_0}$, we get:

$s = \sqrt{L_D^2 - L_0^2} - \sqrt{L_C^2 - L_0^2}$

where L0 can be solved from the above equation as it is the only unknown. Then, βD and βC can be obtained from Eqs. (3) and (4) once L0 is known.
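For completeness, the last equation admits a closed-form solution: squaring once isolates the term $\sqrt{L_C^2 - L_0^2}$, from which L0 follows directly. The Python sketch below illustrates this algebra with made-up input values, assuming L_D > L_C, consistent units, and that Eqs. (3) and (4) take the form β = cos⁻¹(L0/L) as used in the identity above.

import math

def solve_L0(s, L_C, L_D):
    # Solve s = sqrt(L_D^2 - L0^2) - sqrt(L_C^2 - L0^2) for L0 in closed form.
    # Squaring s + sqrt(L_C^2 - L0^2) = sqrt(L_D^2 - L0^2) gives
    # u = sqrt(L_C^2 - L0^2) = (L_D^2 - L_C^2 - s^2) / (2 s).
    u = (L_D**2 - L_C**2 - s**2) / (2.0 * s)
    L0 = math.sqrt(L_C**2 - u**2)               # perpendicular scanner-target distance
    beta_D = math.degrees(math.acos(L0 / L_D))  # incident angle at the farther point
    beta_C = math.degrees(math.acos(L0 / L_C))  # incident angle at the nearer point
    return L0, beta_D, beta_C

# Made-up example: adjacent points at 2.000 m and 2.004 m, spaced 0.0367 m apart.
L0, bD, bC = solve_L0(0.0367, 2.000, 2.004)
print(round(L0, 3), round(bD, 1), round(bC, 1))  # about 1.99, 6.8, 5.7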

A.1. Table of notations

Notation Meaning

θ        Angular resolution of laser scanner
β        Incident angle of laser beams
L        Distance from laser scanner to target
s        Spacing between two adjacent scan points
α        Angle between edge line and scanning direction
d        Laser spot size
L0       Perpendicular distance from the scanner to the target
E        Estimation error of an edge line estimation algorithm

References

[1] FARO, Laser Scanner FARO Focus3D, http://www.faro.com/en-us/products/3d-surveying/faro-focus3d/overview, (2018), Accessed date: 1 January 2018.

[2] Q. Wang, M.K. Kim, J.C. Cheng, H. Sohn, Automated quality assessment of precast concrete elements with geometry irregularities using terrestrial laser scanning, Autom. Constr. 68 (2016) 170–182, https://doi.org/10.1016/j.autcon.2016.03.014.

[3] Q. Wang, M.K. Kim, H. Sohn, J.C. Cheng, Surface flatness and distortion inspection of precast concrete elements using laser scanning technology, Smart Struct. Syst. 18 (3) (2016) 601–623, https://doi.org/10.12989/sss.2016.18.3.601.

[4] M.K. Kim, Q. Wang, J.W. Park, J.C. Cheng, H. Sohn, C.C. Chang, Automated dimensional quality assurance of full-scale precast concrete elements using laser scanning and BIM, Autom. Constr. 72 (2016) 102–114, https://doi.org/10.1016/j.autcon.2016.08.035.

[5] P. Tang, D. Huber, B. Akinci, Characterization of laser scanners and algorithms for detecting flatness defects on concrete surfaces, J. Comput. Civ. Eng. 25 (1) (2010) 31–42, https://doi.org/10.1061/(ASCE)CP.1943-5487.0000073.

[6] S. Pu, G. Vosselman, Knowledge based reconstruction of building models from terrestrial laser scanning data, ISPRS J. Photogramm. Remote Sens. 64 (6) (2009) 575–584, https://doi.org/10.1016/j.isprsjprs.2009.04.001.

[7] P. Tang, D. Huber, B. Akinci, R. Lipman, A. Lytle, Automatic reconstruction of as-built building information models from laser-scanned point clouds: a review of related techniques, Autom. Constr. 19 (7) (2010) 829–843, https://doi.org/10.1016/j.autcon.2010.06.007.

[8] X. Xiong, A. Adan, B. Akinci, D. Huber, Automatic creation of semantically rich 3D building models from laser scanner data, Autom. Constr. 31 (2013) 325–337, https://doi.org/10.1016/j.autcon.2012.10.006.

[9] H.S. Park, H.M. Lee, H. Adeli, I. Lee, A new approach for health monitoring of structures: terrestrial laser scanning, Comput.-Aided Civ. Inf. Eng. 22 (1) (2007) 19–30, https://doi.org/10.1111/j.1467-8667.2006.00466.x.


[10] M.J. Olsen, F. Kuester, B.J. Chang, T.C. Hutchinson, Terrestrial laser scanning-based structural damage assessment, J. Comput. Civ. Eng. 24 (3) (2009) 264–272, https://doi.org/10.1061/(ASCE)CP.1943-5487.0000028.

[11] S. El-Omari, O. Moselhi, Integrating 3D laser scanning and photogrammetry for progress measurement of construction work, Autom. Constr. 18 (1) (2008) 1–9, https://doi.org/10.1016/j.autcon.2008.05.006.

[12] Y. Turkan, F. Bosche, C.T. Haas, R. Haas, Automated progress tracking using 4D schedule and 3D sensing technologies, Autom. Constr. 22 (2012) 414–421, https://doi.org/10.1016/j.autcon.2011.10.003.

[13] C. Kim, H. Son, C. Kim, Automated construction progress measurement using a 4D building information model and 3D data, Autom. Constr. 31 (2013) 75–82, https://doi.org/10.1016/j.autcon.2012.11.041.

[14] M. Lemmens, Terrestrial laser scanning, Geo-information, Springer, Dordrecht, 2011, pp. 101–121, https://doi.org/10.1007/978-94-007-1667-4_6.

[15] Q. Wang, H. Sohn, J.C. Cheng, Development of a mixed pixel filter for improved dimension estimation using AMCW laser scanner, ISPRS J. Photogramm. Remote Sens. 119 (2016) 246–258, https://doi.org/10.1016/j.isprsjprs.2016.06.004.

[16] M. Hebert, E. Krotkov, 3D measurements from imaging laser radars: how good are they? Image Vis. Comput. 10 (3) (1992) 170–178, https://doi.org/10.1016/0262-8856(92)90068-E.

[17] M.D. Adams, P.J. Probert, The interpretation of phase and intensity data from AMCW light detection sensors for reliable ranging, Int. J. Robot. Res. 15 (5) (1996) 441–458, https://doi.org/10.1177/027836499601500502.

[18] J. Tuley, N. Vandapel, M. Hebert, Analysis and removal of artifacts in 3-D LADAR data, Proceedings of the 2005 IEEE International Conference on Robotics and Automation, IEEE, 2005, pp. 2203–2210, https://doi.org/10.1109/ROBOT.2005.1570440.

[19] P. Tang, D. Huber, B. Akinci, A comparative analysis of depth-discontinuity and mixed-pixel detection algorithms, Proceedings of the Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007), IEEE, 2007, pp. 29–38, https://doi.org/10.1109/3DIM.2007.5.

[20] E. Che, M.J. Olsen, Multi-scan segmentation of terrestrial laser scanning data based on normal variation analysis, ISPRS J. Photogramm. Remote Sens. 143 (2018) 233–248, https://doi.org/10.1016/j.isprsjprs.2018.01.019.

[21] P. Tang, B. Akinci, D. Huber, Quantification of edge loss of laser scanned data at spatial discontinuities, Autom. Constr. 18 (8) (2009) 1070–1083, https://doi.org/10.1016/j.autcon.2009.07.001.

[22] T.J. Fan, G. Medioni, R. Nevatia, Segmented descriptions of 3-D surfaces, IEEE J. Robot. Autom. 3 (6) (1987) 527–538, https://doi.org/10.1109/JRA.1987.1087146.

[23] R.B. Rusu, Z.C. Marton, N. Blodow, M. Dolha, M. Beetz, Towards 3D point cloud based object maps for household environments, Robot. Auton. Syst. 56 (11) (2008) 927–941, https://doi.org/10.1016/j.robot.2008.08.005.

[24] O. Monga, R. Deriche, 3D edge detection using recursive filtering: application to scanner images, Proceedings of CVPR '89: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, 1989, pp. 28–35, https://doi.org/10.1109/CVPR.1989.37825.

[25] J.A. Yagüe-Fabra, S. Ontiveros, R. Jiménez, S. Chitchian, G. Tosello, S. Carmignato, A 3D edge detection technique for surface extraction in computed tomography for dimensional metrology applications, CIRP Ann. Manuf. Technol. 62 (1) (2013) 531–534, https://doi.org/10.1016/j.cirp.2013.03.016.

[26] D. Bazazian, J.R. Casas, J. Ruiz-Hidalgo, Fast and robust edge extraction in unorganized point clouds, Proceedings of the 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, 2015, pp. 1–8, https://doi.org/10.1109/DICTA.2015.7371262.

[27] H. Edelsbrunner, E.P. Mücke, Three-dimensional alpha shapes, ACM Trans. Graph. 13 (1) (1994) 43–72, https://doi.org/10.1145/174462.156635.

[28] D.D. Lichti, S.J. Gordon, T. Tipdecho, Error models and propagation in directly georeferenced terrestrial laser scanner networks, J. Surv. Eng. 131 (4) (2005) 135–142, https://doi.org/10.1061/(ASCE)0733-9453(2005)131:4(135).

[29] C. Cortes, V. Vapnik, Support-vector networks, Mach. Learn. 20 (3) (1995) 273–297, https://doi.org/10.1007/BF00994018.
