Automatic integration of 3-D point clouds from UAS and
airborne LiDAR platforms
Journal: Journal of Unmanned Vehicle Systems
Manuscript ID juvs-2016-0034.R1
Manuscript Type: Article
Date Submitted by the Author: 12-Jul-2017
Complete List of Authors: Persad, Ravi; York University Armenakis, Costas; York University, Department of Earth and Space Science and Engineering; Hopkinson, Chris; University of Lethbridge Brisco, Brian; Natural Resources Canada Earth Sciences
Keyword: point clouds, UAS, LiDAR, matching, registration, automation
Is the invited manuscript for consideration in a Special
Issue? : UAV-g (Unmanned Aerial Vehicles in geomatics)
https://mc06.manuscriptcentral.com/juvs-pubs
Journal of Unmanned Vehicle Systems
Automatic registration of 3-D point clouds from
UAS and airborne LiDAR platforms
Ravi Ancil Persad1, Costas Armenakis
1, Chris Hopkinson
2, Brian Brisco
3
1York University,
2University of Lethbridge,
3Natural Resources Canada
Abstract: An approach to automatically co-register 3-D point cloud surfaces from
Unmanned Aerial Systems (UASs) and Light Detection and Ranging (LiDAR)
systems is presented. A 3-D point cloud co-registration method is proposed to
automatically compute all transformation parameters without the need for initial,
approximate values. The approach uses a pair of point cloud height map images for
automated feature point correspondence. Initially, keypoints are extracted on the
height map images, and then a log-polar descriptor is used as an attribute for matching
the keypoints via a Euclidean distance similarity measure. Our study area is the
Peace-Athabasca Delta (PAD) situated in north-eastern Alberta, Canada. The PAD is
a world heritage site; therefore, regular monitoring of this wetland is important. Our
method automatically co-registers UAS point clouds with airborne LiDAR data
collected over the PAD. Together with UAS data acquisition, our approach can
potentially be used in the future to facilitate automated co-registration of
heterogeneous data throughout the PAD region. Reported transformation parameter
accuracies are: a scale error of 0.02, an average rotation error of 0.123° and an
average translation error of 0.237m.
Keywords: point clouds, UAS, LiDAR, matching, registration, automation
Introduction
In the geomatics engineering community, Unmanned Aerial Systems (UASs) have
received considerable attention and continue to be an area of significant interest for
both academia and industry. As we move forward, UAS technology will continue to
strongly influence future commercial trends and research directions undertaken in
various geospatial fields such as urban planning, cadastral and topographic
surveying/mapping, photogrammetry and low-altitude remote sensing. However,
significant advancements may not necessarily come from a single technology alone,
but rather through the integration of multiple technologies and their respective data.
UAS platforms provide many benefits including portability and cost-effective data
acquisition. UASs can potentially have a substantial impact due to their capability and
convenience for flying on a frequent basis. UASs also have low mobilization and
operational costs, thus facilitating continuous data acquisition for mapping
applications. This is critical for various applications such as topographic mapping /
map-updating of smaller areas and for detecting changes in non-urban (e.g., glaciers,
icefields, rivers) and urban (e.g., cities) environments. Nevertheless, there are also
several limitations with geospatial data collected by UASs. Coverage of an area may
be hindered by the short flight times and payload restrictions of the UAS. This
coverage may be sufficient for 'small-scale' mapping of an area but will not suffice
for larger projects with time restrictions. The quality and accuracy of UAS-generated
data are another concern. Small, non-metric cameras are often utilized on UASs, and
the majority of these cameras have low resolution and are prone to various types of
lens distortion. As a result, the density and accuracy of 3-D point clouds generated via
structure-from-motion (SFM) algorithms (Forsyth and Ponce 2002) will be negatively
affected. In such instances when there are coverage and accuracy concerns for data
collection, combinations of sensors and platforms such as airborne Light Detection
and Ranging (LiDAR) systems, satellites and large airborne metric camera systems
are instead employed. These sensors are considerably more expensive and complex
when compared to small UASs. Therefore, data acquisition from these larger
platforms has higher operational and processing costs. Data integration requires that
all datasets must be referenced in the same coordinate system. Generally, data co-
registration is achieved by the manual selection of corresponding feature points on the
source and target datasets to be aligned. Multi-sensor data integration is commonly
referred to as alignment or co-registration in the fields of photogrammetry and
computer vision. Due to the large amounts of point data collected and the repeated
periods of data acquisition as with the case for map revision, automatic co-registration
is desired. This minimizes or eradicates the need for manual input from a human
operator.
The data co-registration problem
The objective of co-registration is to align a source dataset with a target dataset.
The source and target datasets are typically in different coordinate systems which vary
in terms of a scale factor, rotation angles and translational shifts. Automatic co-
registration of a dataset pair (i.e., a source dataset and a target dataset) is a two-fold
problem. The first aspect is the ‘correspondence’ problem. This refers to the
extraction and establishment of conjugate geometric key features (e.g., points, lines or
planes) on both the source and target domains by matching their feature attributes.
The second issue is the ‘transformation’ problem and refers to the computation of
transformation parameters using the corresponding key features. The determination of
the relationship between two 3-D coordinate systems is a well-studied area and has
various applications in both photogrammetry and computer vision (Lorusso et al.
1995). It has been referred to as the 3-D similarity (or conformal) transformation or
the absolute orientation problem over the years. Nevertheless, the objective remains
the same, i.e., the estimation of the rotation, translation and scale elements which are
required to bring a pair of 3-D objects from two different Cartesian coordinate
systems into a common system, thereby aligning them. In our case, we seek to
transform the source point clouds, P_src, into the system of the target point clouds,
P_tgt, using Eq. (1):

P'_src = s R P_src + T        (1)

where, s is the single, global scaling factor,
R is a 3x3 orthogonal rotation matrix comprising three angles, ω, φ, κ,
T is a 3x1 translation vector with 3 components, (tx, ty, tz),
P'_src is the P_src when aligned with P_tgt.
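Given matched 3-D keypoint pairs, the seven parameters of Eq. (1) can be estimated in closed form (cf. Horn 1987; Lorusso et al. 1995). The following is a minimal NumPy sketch using an SVD-based solution; the function names are illustrative and not from the manuscript:

```python
import numpy as np

def estimate_conformal_3d(src, tgt):
    """Closed-form least-squares estimate of scale s, rotation R and
    translation T such that tgt ~= s * R @ src_i + T, following the
    absolute-orientation family of solutions (cf. Horn 1987).
    src, tgt: (N, 3) arrays of corresponding 3-D points, N >= 3."""
    mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
    ds, dt = src - mu_s, tgt - mu_t
    H = ds.T @ dt                                   # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                              # proper rotation, det(R) = +1
    s = np.trace(np.diag(S) @ D) / (ds ** 2).sum()  # optimal global scale
    T = mu_t - s * R @ mu_s
    return s, R, T

def apply_conformal_3d(pts, s, R, T):
    """Eq. (1) applied row-wise: P'_src = s * R * P_src + T."""
    return s * (R @ pts.T).T + T
```

For noise-free correspondences, three non-collinear point pairs suffice to recover the exact transformation; in practice the fit is computed in a least-squares sense over all inlier matches.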
Related work on UAS and LiDAR point cloud integration
There has been prior work in the area of point cloud integration from UAS and
LiDAR sensors. Generally, UAS point clouds are generated in an un-georeferenced
local photo coordinate system via SFM, whilst LiDAR point data is typically derived
in a georeferenced coordinate system. In this case, UAS data can be georeferenced
using ‘direct’ or ‘indirect’ georeferencing approaches (Colomina and Molina 2014).
In the latter, surveyed ground control points are used to estimate the position and
orientation of the sensor platform through photogrammetric triangulation, while the
former case directly uses position and orientation parameters provided by navigation
sensors from the global navigation satellite systems (GNSS) and inertial measurement
units (IMUs).
A case of the direct georeferencing approach to co-register UAS data with 3-D
LiDAR point clouds has been presented by Yang and Chen (2015). They collected
images and LiDAR points using a rotor-type mini-UAS equipped with a Canon 5D
Mark II camera and a Riegl LMS-Q160 scanner. The position and orientation of the
UAS were computed using an on-board NovAtel SPAN GNSS/IMU receiver through
direct georeferencing. These position and orientation parameters were used to align
the LiDAR point cloud data with the UAS images. Afterwards, dense 3-D points were
computed from the UAS imagery. Although the two datasets were then in the same
reference system, an additional step was necessary to refine the UAS-to-LiDAR point
cloud co-registration. The well-known Iterative Closest Point (ICP)
algorithm (Besl and McKay 1992) was used for this purpose. Their approach relies on
data collected from the GNSS and IMU sensors. GNSS positioning may be affected
by satellite geometry and availability, as well as by other systematic effects such as
multi-path, while IMU data are known to degrade over time.
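The point-to-point ICP refinement referred to above (Besl and McKay 1992) alternates nearest-neighbour matching with a closed-form rigid fit. A minimal illustrative sketch follows, using brute-force neighbour search suitable only for small clouds; this is not the cited authors' implementation:

```python
import numpy as np

def icp_refine(src, tgt, n_iter=20):
    """Minimal point-to-point ICP sketch (Besl and McKay 1992):
    alternate nearest-neighbour matching against the target cloud with a
    closed-form rigid update, returning the refined source cloud.
    src, tgt: (N, 3) and (M, 3) arrays of 3-D points."""
    cur = src.copy()
    for _ in range(n_iter):
        # nearest target point for each current source point (brute force)
        d2 = ((cur[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
        match = tgt[d2.argmin(axis=1)]
        # closed-form rigid fit (Kabsch) to the matched pairs
        mu_c, mu_m = cur.mean(0), match.mean(0)
        H = (cur - mu_c).T @ (match - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        T = mu_m - R @ mu_c
        cur = (R @ cur.T).T + T
    return cur
```

In production, the brute-force search would be replaced by a spatial index (e.g., a k-d tree), and convergence would be monitored rather than running a fixed number of iterations.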
Therefore, we seek a purely data-driven co-registration approach such as those
presented in Novak and Schindler (2013), and in Persad and Armenakis (2015). Their
methods utilized automatic feature extraction and feature correspondence to co-
register 3-D point clouds from UAS and LiDAR systems. Both works relied on the
projection of the 3-D point clouds into a 2-D planimetric height map image domain to
perform the matching process.
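One simple way to obtain such a planimetric height map image is to bin the 3-D points onto a horizontal grid and keep the highest elevation per cell. A sketch under that assumption (the cell size and the NaN fill for empty cells are illustrative choices, not parameters from the cited works):

```python
import numpy as np

def point_cloud_to_height_map(points, cell_size):
    """Project a 3-D point cloud (N, 3) onto a planimetric (x, y) grid,
    keeping the highest z per cell; empty cells become NaN.
    Returns the height map image and the grid origin in (x, y)."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    cols, rows = ((xy - origin) // cell_size).astype(int).T
    h, w = rows.max() + 1, cols.max() + 1
    hmap = np.full((h, w), np.nan)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if np.isnan(hmap[r, c]) or z > hmap[r, c]:
            hmap[r, c] = z
    return hmap, origin
```

Keypoint detection and descriptor computation then operate on this 2-D image, while the original 3-D coordinates of matched pixels feed the transformation estimate.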
Persad and Armenakis (2015) automatically extracted 2-D height map image point
features or ‘keypoints’ on both the UAS and LiDAR datasets using a surface
curvature-based keypoint detector. Afterwards, they formed attributes or descriptors
of these 2-D keypoints. Specifically, they utilized the ‘SURF’ keypoint descriptor
(Bay et al. 2008) for the matching process. The SURF descriptor is based on the
computation of Haar wavelet filter statistics in both the horizontal and vertical image
directions. Keypoints with similar descriptors from both UAS and LiDAR datasets are
then established as corresponding points. The associated 3-D coordinates of matched
2-D keypoints are then used for computing the 3-D conformal transformation
parameters to enable the co-registration of UAS and LiDAR point clouds. On the
other hand, Novak and Schindler (2013) employed local image gradient information
as a feature descriptor. The hypothesize-and-test algorithm RANdom SAmple
Consensus (RANSAC) (Fischler and Bolles 1981) was then used to find matching
point features.
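Establishing keypoint correspondences by comparing descriptor vectors with a Euclidean distance similarity measure can be sketched as follows; the Lowe-style ratio test used here to reject ambiguous matches is an illustrative addition, with remaining outliers typically filtered afterwards (e.g., by RANSAC):

```python
import numpy as np

def match_descriptors(desc_src, desc_tgt, ratio=0.8):
    """Match keypoint descriptors by Euclidean nearest neighbour, with a
    ratio test discarding matches whose best distance is not clearly
    smaller than the second best. desc_src: (M, D), desc_tgt: (N, D)
    with N >= 2. Returns a list of (source_idx, target_idx) pairs.
    The ratio threshold is an illustrative assumption."""
    matches = []
    for i, d in enumerate(desc_src):
        dist = np.linalg.norm(desc_tgt - d, axis=1)
        order = np.argsort(dist)
        best, second = order[0], order[1]
        if dist[best] < ratio * dist[second]:
            matches.append((i, best))
    return matches
```

The 3-D coordinates associated with the surviving 2-D matches then serve as input to the conformal transformation estimate.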
Automatic alignment of UAS and LiDAR point clouds
In this work, we address the alignment of 3-D point cloud datasets generated by
SFM from overlapping UAS camera images with those obtained by airborne LiDAR.
Therefore, our objective is the computation of the seven-parameter 3-D conformal
transformation (i.e., scale factor s, three rotation angles (ω, φ, κ), and three
translations (tx, ty, tz)).
References

Colomina, I., and Molina, P. 2014. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 92, 79-97.
Förstner, W. 1994. A framework for low level feature extraction. In European
Conference on Computer Vision, pp. 383-394. Springer Berlin Heidelberg.
Forsyth, D.A. and Ponce, J. 2002. Computer vision: a modern approach. Prentice
Hall Professional Technical Reference.
Hopkinson, C., Chasmer, L., Gynan, C., Mahoney, C. and Sitar, M. 2016. Multi-
sensor and Multi-spectral LiDAR Characterization and Classification of a Forest
Environment. Canadian Journal of Remote Sensing, 42(5), 501-520.
Horn, B.K. 1987. Closed-form solution of absolute orientation using unit
quaternions. JOSA A, 4(4), 629-642.
Jaques, D.R. 1989. Topographic Mapping and Drying Trends in the Peace-Athabasca
Delta, Alberta Using LANDSAT MSS Imagery. Report prepared by Ecostat
Geobotanical Surveys Inc. for Wood Buffalo National Park, Fort Smith, Northwest
Territories, Canada, 36 pp. and appendix.
Kokkinos, I., Bronstein, M.M., and Yuille, A. 2012. Dense scale invariant
descriptors for images and surfaces. Technical Report, INRIA.
Li-Chee-Ming, J., Murnaghan, K., Sherman, D., Poncos, V., Brisco, B., and
Armenakis, C., 2015. Validation of Spaceborne Radar Surface Water Mapping with
Optical sUAS Images. The International Archives of Photogrammetry, Remote
Sensing and Spatial Information Sciences, 40(1), p.363.
Lindeberg, T. 2013. Scale-space theory in computer vision (Vol. 256). Springer
Science & Business Media.
Lorusso, A., Eggert, D., and Fisher, R. 1995. A Comparison of Four Algorithms for
Estimating 3-D Rigid Transformations. In Proceedings of the 4th British Machine
Vision Conference.