Simultaneous Segmentation and Superquadrics Fitting in Laser-Range Data

Ricardo Pascoal, Vitor Santos, Member, IEEE, Cristiano Premebida, and Urbano Nunes, Member, IEEE

Abstract—This work presents a method for simultaneous segmentation and modeling of objects detected in range data gathered by a laserscanner mounted onboard ground-robotic platforms. Superquadrics are used as the model for both segmentation and object shape fitting. The proposed method, which we name Simultaneous Segmentation and Superquadrics Fitting (S3F), relies on a novel global objective function which accounts for the size of the object, the distance of the range points, and for partial occlusions. Experimental results, using 2D range data collected from indoor and outdoor environments, are qualitatively and quantitatively analyzed. The results are compared with popular and state-of-the-art segmentation methods. Moreover, we present results using 3D data obtained from an in-house setup, and also from a Velodyne LIDAR. This work finds applications in areas of mobile robotics and autonomous vehicles, namely object detection, segmentation and modeling.

I. INTRODUCTION

OBJECT detection and modeling is a long-standing research topic for the mobile robotics community, with the majority of the approaches relying on data from laser range sensors and cameras mounted on board a robot. While perception systems using cameras have extensive applications, laser-based solutions constitute a field of particular importance in robotics, with many applications focussing on intelligent transportation, as evidenced by the works of [1], [2], [3], [4], [5] and [6].

Common to most on-board perception systems using laser range data is the use of a segmentation

Copyright (c) 2013 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request [email protected].

R. Pascoal is with IEETA, University of Aveiro, Campus Universitario Santiago, 3810-193 Aveiro, Portugal, e-mail: [email protected].

V. Santos is with the Dept. of Mechanical Engineering and IEETA, University of Aveiro, e-mail: [email protected].

C. Premebida and U. Nunes are with the Institute of Systems and Robotics (ISR-UC), Dept. of Electrical and Computer Engineering, University of Coimbra, Portugal, e-mail: cpremebida,[email protected].

process as the primary step towards the detection and tracking of moving objects or the detection of landmarks. Generally, laser range data segmentation, or clustering, is the process of grouping range-points, belonging to a given scan, into a limited number of sets. Points within a set (a cluster or a segment) share common spatial characteristics, usually defined in terms of a distance criterion. In the robotics community, methods for laser-range data segmentation have been reported by [7], [4], and [8]. In general, segmentation is carried out independently of other processes; however, some authors have proposed methods which perform segmentation and geometric primitive extraction concurrently. Among the most common geometric primitives used in the context of mobile robotics are lines, see the survey by [9], and circles, as proposed by [10].

Usually, object tracking is conducted based on the object's centroid and its relative shape; thus one of the concerns is how to model the shape (appearance) of the segment (or cluster) which characterizes a given object under tracking. Depending on the environment where the robot moves, the problem of assigning a proper geometric model to a segment is not trivial. This problem becomes more challenging in the presence of partial information, in non-structured scenarios, and with moving objects. In order to mitigate such situations, we propose to use superquadrics as a general geometric primitive upon which additional hypotheses may be set forth. Superquadrics, when compared to ellipsoids, provide an interesting increase in modeling capability at the cost of very few additional parameters and, on the other hand, it is simple to constrain their parameters in order to obtain a specific range of primitive shapes.

Superquadrics, as defined by [11], are a family of geometric shapes with many applications in computer vision, computer graphics and, recently, in robotics as well. The research works of [12] and [13] are examples of the use of superquadrics (or superellipses, in the two-dimensional case) in computer vision. The work of [12] still constitutes an important reference on superquadrics fitting in range images. In robotics, superquadrics are in most cases devoted to object modeling for the purpose of manipulation and grasping, such as the work on 3D hand tracking of [14]. On the other hand, the use of superquadrics in mobile robotics and autonomous vehicle applications has been underexplored so far. Exceptions to this observation are the works of [15], which employs superquadric surfaces to model free configuration spaces for autonomous navigation, and [16], which uses superquadrics to model an indoor environment, as perceived by a mobile robot, with the purpose of detecting changes in such an environment.

In this work we expand on this topic and propose a method for concurrent segmentation and superquadric fitting in range data collected in indoor-outdoor scenarios. This method, named Simultaneous Segmentation and Superquadric Fitting (S3F), explores the superquadrics formulation as an intrinsic model to be used in both segmentation and shape fitting. Additionally, we discuss and propose a solution to deal with self-occlusions, and report experiments on 2D and 3D field data, allowing a better understanding of the problem of fitting superquadrics in laser-range data collected from moving ground-robotic platforms.

This paper is organized as follows. Background material, superquadric models and their properties are presented in Section II. In Section III, we address the specific formulations necessary to support the new proposed objective function used in our method. Experiments, comparisons and discussions are presented in Section IV. Finally, conclusions are given in Section V.

II. BRIEF REVIEW ON SUPERQUADRICS

For the sake of completeness, the canonical and parametric formulations associated with superquadrics, and existing objective functions, are briefly presented in the sequel (see [17] for a more complete description). Superquadrics have great potential in modeling a large set of shapes, some of which are illustrated in Fig. 1, and have been used as geometrical primitives in computer graphics, computer vision, and object modeling for robotics applications, as presented by [13].

Fig. 1: Examples of superquadric shapes and their 2D projection, for all nine combinations of ε1 ∈ {0.1, 1, 2} and ε2 ∈ {0.1, 1, 2}.

Superquadrics may be represented in canonical implicit form by the contour surface of a function as formulated by [12]. This function, for a fixed set of size and shape parameters, respectively ai and εj, with ai ∈ R>0 (i = 1, 2, 3) and εj ∈ [0.1, 2] (j = 1, 2), is given by

F(x) = ((x1/a1)^(2/ε2) + (x2/a2)^(2/ε2))^(ε2/ε1) + (x3/a3)^(2/ε1),   (1)

where x ∈ R3 with x = [x1, x2, x3]^T. This function is often termed the inside-outside function because for F(x) < 1 a given point x lies inside the enclosed volume, and for F(x) > 1 the point lies outside the volume enclosed by the superquadric surface (visible side). On the other hand, if a point lies on the surface of a superquadric, here denoted by xs, then F(xs) = 1. For x ∈ R2, i.e. 2D shapes, equation (1) simplifies to:

F(x) = (x1/a1)^(2/ε2) + (x2/a2)^(2/ε2).   (2)
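As a concrete illustration (our own sketch, not code from the paper), the inside-outside test of (1) and (2) takes only a few lines of Python; the absolute values implement the even-power convention, and the helper name is ours:

```python
def inside_outside(x, a, eps1=1.0, eps2=1.0):
    """Inside-outside function: eq. (1) for 3D points, eq. (2) for 2D.

    Returns F < 1 inside, F = 1 on the surface, F > 1 outside.
    Absolute values stand in for the even (signed) powers.
    """
    f1 = abs(x[0] / a[0]) ** (2.0 / eps2)
    f2 = abs(x[1] / a[1]) ** (2.0 / eps2)
    if len(x) == 2:                          # 2D superellipse, eq. (2)
        return f1 + f2
    f3 = abs(x[2] / a[2]) ** (2.0 / eps1)    # 3D superquadric, eq. (1)
    return (f1 + f2) ** (eps2 / eps1) + f3

# Unit sphere (a1 = a2 = a3 = 1, eps1 = eps2 = 1):
print(inside_outside((1.0, 0.0, 0.0), (1.0, 1.0, 1.0)))   # 1.0 (on surface)
print(inside_outside((0.5, 0.0, 0.0), (1.0, 1.0, 1.0)))   # 0.25 (inside)
```

With ε1 = ε2 = 1 the function reduces to the usual quadric, which makes the sphere a convenient sanity check.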

Equations (1) and (2) are valid for superquadrics centered at a local reference frame. Considering the coordinate system shown in Fig. 2, where Xp is a point in the sensor (laser) coordinate system (X1, X2, X3), the origin of the local reference system is given by the coordinates of the vector µ and represented by (x1, x2, x3). Therefore, in order to cope with superquadrics having arbitrary orientation and centered at a distance of µ from the sensor


Fig. 2: Coordinate systems associated with the sensor (laser) and the local (superquadric) reference system. The centroid of the quadric is given by µ.

reference frame, a rotation (expressed by R) is applied to the superquadric local reference frame and then followed by a translation (expressed by µ). As suggested by [12], these operations may be collected into a homogeneous transformation as follows:

[Xp; 1] = T [xp; 1], with T = [R µ; 0_{1×3} 1],   (3)

where xp is a point in the local reference system. The rotation is performed based on Euler-Rodrigues parameters; thus, the matrix may be written as suggested by [18]:

R = [ 1−2(e2²+e3²)   2(e1e2+e0e3)   2(e1e3−e0e2)
      2(e1e2−e0e3)   1−2(e1²+e3²)   2(e2e3+e0e1)
      2(e1e3+e0e2)   2(e2e3−e0e1)   1−2(e1²+e2²) ],   (4)

with e0 = cos(θ/2), ei = λi sin(θ/2) for i = 1, 2, 3, where θ is the angle of rotation about the direction vector λ and ‖λ‖ = 1. Using Euler-Rodrigues parameters enables an unconstrained and singularity-free transformation.
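The construction of R from (λ, θ) can be sketched as follows (a hypothetical helper, not the authors' code; it reproduces the sign convention of (4) as printed, and normalizes the axis internally):

```python
import math

def euler_rodrigues_matrix(axis, theta):
    """Rotation matrix of eq. (4) from Euler-Rodrigues parameters.

    axis: direction vector lambda (normalized here); theta: rotation angle.
    """
    n = math.sqrt(sum(c * c for c in axis))
    l1, l2, l3 = (c / n for c in axis)
    e0 = math.cos(theta / 2.0)
    s = math.sin(theta / 2.0)
    e1, e2, e3 = l1 * s, l2 * s, l3 * s
    # Sign convention copied from eq. (4) as printed in the paper.
    return [
        [1 - 2*(e2*e2 + e3*e3), 2*(e1*e2 + e0*e3),     2*(e1*e3 - e0*e2)],
        [2*(e1*e2 - e0*e3),     1 - 2*(e1*e1 + e3*e3), 2*(e2*e3 + e0*e1)],
        [2*(e1*e3 + e0*e2),     2*(e2*e3 - e0*e1),     1 - 2*(e1*e1 + e2*e2)],
    ]

# Example: rotation about the x3 axis by 90 degrees.
R = euler_rodrigues_matrix((0.0, 0.0, 1.0), math.pi / 2.0)
```

Because (e0, e1, e2, e3) has unit norm by construction, R is orthogonal with det R = 1 for any axis and angle, which is the singularity-free property mentioned above.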

For the purpose of graphical representation, the parametric representation is necessary. Given the size and shape parameters (ai, εj), the 3D parametric representation is of the form

xs1 = a1 sgn(cos(ω)) cos^ε1(η) |cos(ω)|^ε2
xs2 = a2 sgn(sin(ω)) cos^ε1(η) |sin(ω)|^ε2
xs3 = a3 sgn(sin(η)) |sin(η)|^ε1   (5)

where (xs1, xs2, xs3) are the coordinates of the points on the surface of the superquadric, η ∈ [−π/2, π/2] and ω ∈ [−π, π). The parametric form allows one to identify that there exists a bounding box [−a1, a1] × [−a2, a2] × [−a3, a3] and a one-to-one correspondence between points belonging to the surface and the parameters (angles) η and ω. Because of the one-to-one correspondence, the angular parameters corresponding to points on the surface can be retrieved by

η = atan2( sgn(xs3) |xs3/a3|^(1/ε1), ((xs1/a1)^(2/ε2) + (xs2/a2)^(2/ε2))^(ε2/(2ε1)) )
ω = atan2( sgn(xs2) |xs2/a2|^(1/ε2), sgn(xs1) |xs1/a1|^(1/ε2) ).   (6)
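A quick numerical sanity check (our own sketch; the helper names are assumptions, not from the paper) that points generated with the parametric form (5) do satisfy F(xs) = 1 under the inside-outside function (1):

```python
import math

def _spow(base, p):
    """Signed power sgn(base) * |base|**p, as used in eq. (5)."""
    return math.copysign(abs(base) ** p, base)

def surface_point(a, eps1, eps2, eta, omega):
    """Point on the superquadric surface from the parametric form (5)."""
    ce = math.cos(eta) ** eps1           # cos(eta) >= 0 for |eta| <= pi/2
    return (a[0] * _spow(math.cos(omega), eps2) * ce,
            a[1] * _spow(math.sin(omega), eps2) * ce,
            a[2] * _spow(math.sin(eta), eps1))

def inside_outside(x, a, eps1, eps2):
    """Inside-outside function of eq. (1)."""
    f1 = abs(x[0] / a[0]) ** (2.0 / eps2)
    f2 = abs(x[1] / a[1]) ** (2.0 / eps2)
    f3 = abs(x[2] / a[2]) ** (2.0 / eps1)
    return (f1 + f2) ** (eps2 / eps1) + f3

a, e1, e2 = (2.0, 1.0, 0.5), 0.5, 1.5
xs = surface_point(a, e1, e2, eta=0.3, omega=1.0)
print(inside_outside(xs, a, e1, e2))   # ≈ 1.0, as expected for a surface point
```

The check works for any (η, ω) in the valid ranges, since substituting (5) into (1) cancels exactly to cos²η + sin²η = 1.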

A. Common objective functions

The importance of selecting an adequate objective function for superquadric fitting has been addressed by [19], [20] and [21], while [19] and [21] have performed numerical experiments specifically to characterize the behavior of the objective functions used in fitting 3D superquadrics. In [19], dense synthetic range data were used, while [21] worked with field data. They analyzed aspects such as the concavity of the objective function towards the minimum and the precision of the recovered parameters. In particular, the conclusion of the study reported by [21] was that the "mean square error and minimal volume" objective function used in [12], expressed by

G = ∏_{i=1}^{3} ai · ∑_j (1 − F^ε1(x_p^j))²,   (7)

is less adequate than the "radial Euclidean" objective function proposed by [19]:

G = ∑_j ( ‖x_p^j‖ (1 − F^{−ε1/2}(x_p^j)) )²,   (8)

where the summation is over all points being fitted, which may be the full laser scan or just part of the scan. The analysis presented by [19] shows that the radial objective function outperformed other measures in terms of precision, sensitivity to parametric changes, and concavity towards the minimum.
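A minimal sketch of (8) in Python (the function names are ours): for a point radially aligned with the surface, the residual ‖x‖(1 − F^{−ε1/2}) is the Euclidean distance along the ray from the sensor to the superquadric surface:

```python
import math

def inside_outside(x, a, eps1, eps2):
    """Inside-outside function of eq. (1)."""
    f1 = abs(x[0] / a[0]) ** (2.0 / eps2)
    f2 = abs(x[1] / a[1]) ** (2.0 / eps2)
    f3 = abs(x[2] / a[2]) ** (2.0 / eps1)
    return (f1 + f2) ** (eps2 / eps1) + f3

def radial_objective(points, a, eps1, eps2):
    """Radial Euclidean objective of eq. (8), summed over all points."""
    g = 0.0
    for x in points:
        F = inside_outside(x, a, eps1, eps2)
        r = math.sqrt(sum(c * c for c in x)) * (1.0 - F ** (-eps1 / 2.0))
        g += r * r
    return g

sphere = ((1.0, 1.0, 1.0), 1.0, 1.0)                 # unit sphere
print(radial_objective([(1.0, 0.0, 0.0)], *sphere))  # 0.0: point on the surface
print(radial_objective([(2.0, 0.0, 0.0)], *sphere))  # 1.0: point one unit off
```

For the unit sphere, a point at radius 2 has F = 4, so the residual is 2·(1 − 1/2) = 1, exactly its radial distance to the surface.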


The objective function from the landmark paper of [12] is expressed by the product between a volume penalty, GV = ∏_{i=1}^{3} ai, and a quadratic error, GF = ∑_j (1 − F^ε1(x_p^j))², resulting in (7). Within our framework, both expressions (7) and (8) do not fulfill all the requirements to solve the problem studied in this work. More specifically, we noted that a null error in GF will in fact preclude a desirable further minimization of GV. For instance, in indoor environments, a null distance error is easily induced by a straight wall segment (in the form of a line-segment), or by a planar patch in the 3D case, and this will not allow a proper volume minimization. The objective function (8) was shown to be more robust to outliers, but it also evidenced difficulties in properly minimizing the superquadric volume. Notice that line-segments (or planar patches) would constitute ill-posed problems if the volume minimization term were not properly chosen.

While volume minimization and distance error are quite common subjects in the literature dedicated to superquadrics, visibility is not commonly addressed as part of the recovery process but, instead, as a condition for shading in computer graphics. Objects represented by superquadrics are frequently shaded using information on the angle between the superquadric normal and the light source. Unless information about the viewpoint is available, or estimated, the superquadric fitting procedure may produce surfaces fitting points which would actually be self-occluded from the laserscanner (assuming such a surface actually exists). This is a problem, not considered in previous publications, that occurs when no penalty is given to points in an object occluded by a recovered superquadric. In fact, when using solely distance and volume objective functions, a superquadric could fit just as well to scanned points with one of its self-occluded sides along a segment. Without penalizing points which lie on the self-occluded side, the fitting procedures will often fail to correctly recover even the simplest partially represented object. This finding led us to propose a method comprising a new objective function, which is presented in the sequel.

III. SIMULTANEOUS SEGMENTATION AND SUPERQUADRIC FITTING IN RANGE DATA

An appropriate segmentation stage is a key step towards detecting and modeling objects using data from laserscanners. Usually, segmentation is performed independently of shape extraction and, as a consequence, there is no guarantee of consistency between the hypotheses used to perform the segmentation and the shapes extracted from the segments (objects). This is pointed out, e.g., in [22], where it is stated that performing segmentation as an independent process is not adequate for use with robot manipulators, because segmentation itself will fulfill constraints other than those imposed by fitting superquadrics.

In this work, the characteristics and the conditions under which our dataset was collected considerably differ from those of the related works mentioned in the previous sections. Due to the sparseness of the data, the type of environment, scale variations, noise, and partial information, a new optimization procedure was needed to cope with typical mobile robotics and intelligent transportation environments. The shortcomings mentioned in Section II motivated us to propose an objective function based on partial costs, so that the volume of the superquadric, the distance error and the visibility are all taken into account (in the 2D case, this is adapted accordingly to area, distance and visibility). The proposed objective function takes the form of a weighted sum of all the portions to be minimized:

G = α1 GV + α2 GF + α3 G∇,   (9)

where the partial costs GV, GF and G∇ refer to volume, distance and visibility, respectively, and the weight hyperparameters αi control the behavior and influence of each cost parcel of (9). The volume penalty is important once the other partial costs have low values, more specifically in situations of self-occlusion which would otherwise result in an indeterminate volume.

We begin by considering the extracted volume,which is penalized in the form of the product

GV = ∏_{i=1}^{n} ai,   (10)

where n = 3 if (1) is considered (3D case) or n = 2 if (2) is used (2D case). The ai, as defined when (1) was established, represent the size parameters.


Taking x_p^i to be the ith scanned point from a set of N points represented in the local reference frame, and considering a bounding sphere with diameter d = max_{j,k}(‖x_p^j − x_p^k‖), where it is assumed that the superquadric is inscribed in the sphere, [0, d³[ is a conservative interval for the values of GV.

The distance part of our objective function is based on the work of [19] and is expressed as:

GF = (1/N) ∑_{i=1}^{N} ( ‖x_p^i‖ (1 − F^{−ε1/2}(x_p^i, Λ)) )²,   (11)

where [0, d²[ is assumed as a conservative interval for the values of GF, with zero being the case when the scanned points are exactly on the surface. The vector of unknowns is Λ = {ai, εm, µ, λ1, λ2, θ}, i.e., a total of 11 components in 3D.

Potential self-occlusion can be readily verified by using the dot product between the superquadric surface normal and the unit vector defined from the sensor to the range point. The dot product should be negative at the time of detection. When a parametric formulation exists, it is common to calculate the surface normal as the cross-product between the tangents, using derivatives with respect to the parameters (angles). However, this procedure would require the solution of (6) to recover the parameters η and ω in (5) that correspond to a particular range point. The additional equations to recover the correct quadrant of η and ω, and the use of the cross-product, can be easily avoided. In fact, if ε1 or ε2 are extended to be functions of x (see [23], [24]), there are additional benefits in avoiding the use of η and ω, since they result in implicit equations. Instead, the gradient of the canonical representation has been used to calculate the surface normal directly at the range points, n = ∇F/‖∇F‖. This is possible because the gradient of the canonical representation, calculated at a range point xp = βxs, with β a scalar and xs on the surface of the superquadric, results in

∇F(xp) = β^{2/ε1} ∇F(xs),   (12)

i.e., for radially aligned points, the normal has the same direction (orientation and sense).
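Property (12) can be verified numerically (our own sketch, using a finite-difference gradient rather than the analytic one): the normalized gradients at x and at βx coincide for β > 0:

```python
import math

def inside_outside(x, a, eps1, eps2):
    """Inside-outside function of eq. (1)."""
    f1 = abs(x[0] / a[0]) ** (2.0 / eps2)
    f2 = abs(x[1] / a[1]) ** (2.0 / eps2)
    f3 = abs(x[2] / a[2]) ** (2.0 / eps1)
    return (f1 + f2) ** (eps2 / eps1) + f3

def grad_F(x, a, eps1, eps2, h=1e-6):
    """Central-difference gradient of the inside-outside function."""
    g = []
    for i in range(3):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((inside_outside(xp, a, eps1, eps2)
                  - inside_outside(xm, a, eps1, eps2)) / (2.0 * h))
    return g

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

a, e1, e2 = (2.0, 1.0, 0.5), 0.8, 1.2
x = (0.9, 0.4, 0.2)                      # an arbitrary point off the surface
beta = 1.7
n1 = unit(grad_F(x, a, e1, e2))
n2 = unit(grad_F([beta * c for c in x], a, e1, e2))
print(max(abs(u - v) for u, v in zip(n1, n2)))   # ≈ 0: same normal direction
```

This holds because F is homogeneous of degree 2/ε1 in the radial scale, so the gradient direction is invariant along a ray from the centre.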

The gradient of an extended superquadric, ∇F = [∂x1F, ∂x2F, ∂x3F], i.e., the general case which considers that the exponents are themselves functions of the coordinates (see [23], [24]), is given by

∂x1F = (f1+f2)^{ε2/ε1} ( (1/ε1) ∂ε2/∂x1 − (ε2/ε1²) ∂ε1/∂x1 ) ln(f1+f2)
     + (f1+f2)^{ε2/ε1 − 1} ( f1 ( 2/(x1 ε1) − (1/(ε2 ε1)) ∂ε2/∂x1 ln(x̄1²) ) − (f2/(ε2 ε1)) ∂ε2/∂x1 ln(x̄2²) )
     − (f3/ε1²) ∂ε1/∂x1 ln(x̄3²),   (13)

∂x2F = (f1+f2)^{ε2/ε1} ( (1/ε1) ∂ε2/∂x2 − (ε2/ε1²) ∂ε1/∂x2 ) ln(f1+f2)
     + (f1+f2)^{ε2/ε1 − 1} ( f2 ( 2/(x2 ε1) − (1/(ε2 ε1)) ∂ε2/∂x2 ln(x̄2²) ) − (f1/(ε2 ε1)) ∂ε2/∂x2 ln(x̄1²) )
     − (f3/ε1²) ∂ε1/∂x2 ln(x̄3²),   (14)

∂x3F = −(ε2/ε1²) ∂ε1/∂x3 (f1+f2)^{ε2/ε1} ln(f1+f2) + f3 ( 2/(x3 ε1) − (1/ε1²) ∂ε1/∂x3 ln(x̄3²) ),   (15)

with x̄i = xi/ai and f1 = x̄1^{2/ε2}, f2 = x̄2^{2/ε2} and f3 = x̄3^{2/ε1}.

The pure visibility constraint results in a discontinuous function at the superquadric surface (visible, on the surface, and not visible). This characteristic is highly undesirable in the context of iterative optimization, and thus an additional contribution of the present work is to propose a smooth objective function to improve convergence. The self-occlusion objective function has been designed using the directional derivative ∇r, and the hyperbolic tangent as a replacement for the discontinuous Heaviside function. This replacement ensures a smooth transition zone with controllable width for the shift from occluded to visible. The hyperbolic tangent is a C∞ function and has also been suggested for other optimization problems, such as described by [25]; thus, the self-occlusion objective function is expressed by

G∇ = 1/2 + (1/(2N)) ∑_{i=1}^{N} tanh( α0 ∇rF(x_p^i, Λ) ),   (16)

where G∇ takes values in the interval ]0, 1[, with zero being the limit value when all points lie on the visible side of the surface, and one being the limit as α0 → ∞ (i.e., the hyperbolic tangent converges to a Heaviside function). The gradient is calculated in the superquadric local reference frame, and the rotations to the sensor reference frame are performed afterwards. Indeed, the parameters associated with rotation and translation are constants and do not need to be applied before differentiation. Translation itself is not necessary for the analysis of gradients or directional derivatives. The parameter α0 controls the transition width of the hyperbolic tangent and thus its similarity with the Heaviside function. The directional derivative of the inside-outside function used in equation (16), ∇rF, is determined by ∇rF = ∇F · Xp/‖Xp‖. To calculate this directional derivative, it is important to have data from the laser scanner in the sensor reference frame at the time of scanning (even if the reference frame is moving, which is the case in this work). Applications using 2D data require the dimension of the vector of unknowns to be adequately reduced. Because the proposed objective function is well behaved, the Levenberg-Marquardt algorithm was used to achieve the results reported in this work.
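The smooth visibility cost (16) can be sketched as follows (our own illustration, under simplifying assumptions: sensor at the origin, superquadric centre at µ with no rotation, and a numeric gradient in place of (13)-(15)):

```python
import math

def inside_outside(x, a, eps1, eps2):
    """Inside-outside function of eq. (1)."""
    f1 = abs(x[0] / a[0]) ** (2.0 / eps2)
    f2 = abs(x[1] / a[1]) ** (2.0 / eps2)
    f3 = abs(x[2] / a[2]) ** (2.0 / eps1)
    return (f1 + f2) ** (eps2 / eps1) + f3

def grad_F(x, a, eps1, eps2, h=1e-6):
    """Central-difference gradient of the inside-outside function."""
    g = []
    for i in range(3):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((inside_outside(xp, a, eps1, eps2)
                  - inside_outside(xm, a, eps1, eps2)) / (2.0 * h))
    return g

def visibility_cost(local_pts, mu, a, eps1, eps2, alpha0=2.0):
    """Smooth self-occlusion cost of eq. (16).

    local_pts are in the superquadric frame; the viewing direction
    Xp/||Xp|| is taken in the sensor frame (sensor at the origin,
    centre at mu, no rotation: a simplifying assumption of this sketch).
    """
    total = 0.0
    for x in local_pts:
        Xp = [c + m for c, m in zip(x, mu)]
        norm = math.sqrt(sum(c * c for c in Xp))
        dr = sum(g * c / norm for g, c in zip(grad_F(x, a, eps1, eps2), Xp))
        total += math.tanh(alpha0 * dr)
    return 0.5 + total / (2.0 * len(local_pts))

# Unit sphere five metres in front of the sensor (along x2):
mu, shape = (0.0, 5.0, 0.0), ((1.0, 1.0, 1.0), 1.0, 1.0)
print(visibility_cost([(0.0, -1.0, 0.0)], mu, *shape))  # < 0.5: visible side
print(visibility_cost([(0.0,  1.0, 0.0)], mu, *shape))  # > 0.5: self-occluded
```

On the visible (near) side the directional derivative is negative, pushing the cost toward 0; on the self-occluded (far) side it is positive, pushing the cost toward 1.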

The Levenberg-Marquardt algorithm requires the Jacobian of the objective function. Special care in establishing the cost function has enabled stable numerical differentiation and, additionally, it allows calculation of the analytical Jacobian. The numerical conditioning of the analytical Jacobian, and the use of other optimization methods which may be considered in evaluating trade-offs between memory usage and processing time, are important topics that require extensive additional numerical experiments, which exceed the scope of this work. Furthermore, our method is iterative and needs initial values for the shape, size and orientation of the superquadric. These values have been estimated based on a bounding box which encompasses the whole scan, i.e. a rounded-corner rectangular superquadric has been used, aligned with the direction of motion of the mobile platform.
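As an illustration of the kind of optimization involved (not the authors' implementation), a toy Levenberg-Marquardt loop with fixed damping and a numeric Jacobian can recover the two size parameters of a 2D superellipse from the radial residuals of (11); all helper names and settings here are our own assumptions:

```python
import math

def F2(x, y, a1, a2, eps=1.0):
    """2D inside-outside function, eq. (2)."""
    return abs(x / a1) ** (2.0 / eps) + abs(y / a2) ** (2.0 / eps)

def residuals(p, pts, eps=1.0):
    """Radial residuals ||x|| (1 - F**(-eps/2)), the 2D analogue of (11)."""
    a1, a2 = p
    return [math.hypot(x, y) * (1.0 - F2(x, y, a1, a2, eps) ** (-eps / 2.0))
            for x, y in pts]

def lm_fit_sizes(pts, init, eps=1.0, iters=30, lam=1e-3, h=1e-6):
    """Tiny Levenberg-Marquardt loop: fixed damping, central-difference
    Jacobian, direct solution of the 2x2 normal equations."""
    p = list(init)
    for _ in range(iters):
        r = residuals(p, pts, eps)
        J = []                                   # Jacobian columns
        for k in range(2):
            pp = list(p); pp[k] += h
            pm = list(p); pm[k] -= h
            rp, rm = residuals(pp, pts, eps), residuals(pm, pts, eps)
            J.append([(u - v) / (2.0 * h) for u, v in zip(rp, rm)])
        # Damped normal equations (JtJ + lam*I) delta = -Jt r
        A = [[sum(J[i][n] * J[j][n] for n in range(len(r)))
              + (lam if i == j else 0.0) for j in range(2)] for i in range(2)]
        b = [sum(J[i][n] * r[n] for n in range(len(r))) for i in range(2)]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        p[0] += (-b[0] * A[1][1] + b[1] * A[0][1]) / det
        p[1] += (-b[1] * A[0][0] + b[0] * A[1][0]) / det
    return p

# Noise-free points on the ellipse with a = (2, 1):
pts = [(2.0 * math.cos(t), math.sin(t))
       for t in (2.0 * math.pi * k / 24.0 for k in range(24))]
a1, a2 = lm_fit_sizes(pts, init=(1.0, 1.0))
print(round(a1, 3), round(a2, 3))   # close to 2.0 and 1.0
```

A production fitter would of course optimize the full vector Λ (pose, shape and size) with an adaptive damping schedule, as discussed above.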

A. Implementation

Our method concurrently segments and extracts superquadrics from 2D range data and can be applied as a self-contained 2D method, or expanded to the 3D case (for selected environments). Concurrent, in this context, means that segmentation is based on successfully fitting superquadrics; thus, at some point in the segmentation process, an extracted superquadric is obtained automatically while

a segment is identified. For the 2D case, which is detailed in this section, equation (2) is used as part of the solution in both the segmentation and the goodness of fit, thus ensuring consistency of the model used simultaneously for segmentation and shape fitting. In 3D environments, the same procedure applies when the data contain sufficient information allowing a consistent projection to a 2D plane. Notice that, for the 3D case, the formulation given in (1) is used to recover the parameters of the superquadrics.

Considering a set S of N 2D range-points xi in the local coordinate system (where i = 1, ..., N), the main steps of the method, which is summarized in Algorithm 1, are the following:

1) Calculate the midpoints xm (where m = 1, ..., N − 1) between each two consecutive angularly ordered scanned points.

2) Fit a superquadric to the scanned points by minimizing the objective function (9).

3) Check the value of the objective function against the threshold (ThrG) and the minimum number of points (equal to 2 in our case). If the value is below the threshold, then a valid superquadric and a segment have been found, and its parameters Λ are added to a list. Remove from S the points belonging to the extracted superquadric, and jump to step 5.

4) Calculate the most-interior midpoint w.r.t. the superquadric (b1), and the point with worst 'shear' (b2). Midpoint b1 corresponds to the smallest value of F, and point b2 to the worst difference between gradients at successive points (interpreted as forces which would cut the superquadric). Select b2 unless b1 has no neighbor within a distance d. Distance d is a scene-preservation parameter which is used only to reject superquadrics which have a good fit but are bridging gaps greater than d (fixed in our experiments as d = 1.5 m). Return to step 2 with the subset that ends with the point closest to the segmenting point.

5) Group unclassified points within unprocessed subsets (if they exist), thus making a new initial set, and jump to step 2 if the number of points is greater than a certain minimum (set to 1).

6) The algorithm stops when all points have beenassigned to a given superquadric or labeled asoutlier.
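The point-selection machinery of steps 1 and 4 can be sketched as follows (helper names are ours, and numeric partial derivatives stand in for the analytic gradients of Section III):

```python
import math

def F2(x, y, a1, a2, eps=1.0):
    """2D inside-outside function, eq. (2)."""
    return abs(x / a1) ** (2.0 / eps) + abs(y / a2) ** (2.0 / eps)

def midpoints(pts):
    """Step 1: midpoints of consecutive, angularly ordered scan points."""
    return [(0.5 * (x1 + x2), 0.5 * (y1 + y2))
            for (x1, y1), (x2, y2) in zip(pts, pts[1:])]

def partials(x, y, a1, a2, eps=1.0, h=1e-6):
    """Central-difference partial derivatives of F2 at a point."""
    return ((F2(x + h, y, a1, a2, eps) - F2(x - h, y, a1, a2, eps)) / (2 * h),
            (F2(x, y + h, a1, a2, eps) - F2(x, y - h, a1, a2, eps)) / (2 * h))

def break_candidates(pts, a1, a2, eps=1.0):
    """Step 4: b1 = most-interior midpoint (smallest F); b2 = worst 'shear'
    (largest gradient difference between successive points)."""
    mids = midpoints(pts)
    b1 = min(range(len(mids)), key=lambda m: F2(*mids[m], a1, a2, eps))
    def shear(i):
        g1 = partials(*pts[i], a1, a2, eps)
        g2 = partials(*pts[i + 1], a1, a2, eps)
        return abs(g1[0] - g2[0]) + abs(g1[1] - g2[1])
    b2 = max(range(len(pts) - 1), key=shear)
    return b1, b2

pts = [(1.0, 0.0), (0.2, 0.2), (1.0, 1.0)]
print(break_candidates(pts, 1.0, 1.0))  # (0, 1): first midpoint most interior
```

The selection rule between b1 and b2, and the neighbor-distance test with d, follow the criteria described in step 4.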


Algorithm 1 Simultaneous segmentation and superquadric extraction algorithm
Input: S: set of 2D range-points (x) sorted by their angular values, where x ∈ S;
Output: Ω: set of parameters (Λ) for all recovered superquadrics;
1: ΛL and ΛU: element-wise lower and upper bounds for Λ;
2: Λ* = {Λ : ΛL ≤ Λ ≤ ΛU}: element-wise operators assumed;
3: G: measure of the goodness of fit, see (9); ThrG: adopted threshold on the goodness of fit, e.g., ThrG = 5.0;
4: i: the index of a given laser-point xi;
5: m: the index of a given midpoint xm = 0.5(xi + x_{i+1});
6: b: the index of a given laser-point selected as break point xb;
7: S*: set of range-points which do not have an associated superquadric;
8: Ss: set of points belonging to a superquadric that complies with ThrG;
9: S* ← S;
10: while |S*| > 1 do
11:   while G > ThrG do
12:     Λ : min_{Λ∈Λ*} G, i.e. the Λ which minimizes G;
13:     if G ≤ ThrG then
14:       Ss ← S*;
15:       Ω ← Ω ∪ {Λ};
16:     else
17:       b1 = min_{xm∈S*} F(xm, Λ);
18:       b2 = max_{xi∈S*} |∂x1(F(xi, Λ) − F(x_{i+1}, Λ))| + |∂x2(F(xi, Λ) − F(x_{i+1}, Λ))|;
19:       b = select(b1, b2);
20:       S* ← S* \ {xi | b < i ≤ |S*|};
21:     end if
22:   end while
23:   S* ← S \ Ss;
24: end while

The reason for using midpoints at step 4, and not the range-points as proposed by [26]¹, is to reduce the chance of inappropriate segmentation, which may happen when range-points lie on the superquadric boundary. This modification was found necessary, in particular, to cope with parallel lines from detected walls (e.g. in a tunnel).

To illustrate, sequentially, the effects of each step of the algorithm, an example of recovering a superquadric from a range-scan is shown in Fig. 3. The result of step 2 is shown in the left part of Fig. 3, where the segment containing the break point is chosen, according to step 4, to break the scan into a new sub-scan. The procedure is repeated for the remaining sub-scans until a good fit is found. The superquadric shown in the right part of Fig. 3 is, for this example, the first to be recovered. The range-points belonging to this superquadric form the first segment and are not considered in further iterations, reflecting the simultaneity of the method. Finally, the algorithm restarts at step 1 and the procedure is repeated until the fitting criteria are no longer met.

As demonstrated, our method works from global to local, has a deterministic behavior, and was designed primarily to deal with 2D data. These aspects are in contrast with most of the

¹Nevertheless, [26] studied superquadrics in a different application field.

published algorithms using superquadric models, such as the Segmentor, detailed in [17], which work from local to global, rely on a large number of initial random (stochastic) seeds, and are designed exclusively for 3D data. Moreover, in the applications we are dealing with, the CPU-time of the Segmentor seems to be prohibitive.

Fig. 3: Sequence illustrating the extraction of a superquadric from a scan of 2D range-points (panels, left to right: input scan and a break point; first subset and a break point; first extracted superquadric).

Due to the visibility constraint imposed by the present procedure, which tends to force the superquadric to go behind the points as seen by the sensor, our method tends to extract a superquadric in such a way that the 'hidden' part of a segment of range-points with a convex-like shape (for example, the L-shape of a vehicle as perceived by a laserscanner) is filled by the superquadric shape. Conversely, in segments with a concave-like appearance (e.g., interior wall corners) the method tends to extract two quadrics instead of one, as shown in Fig. 4 by the highlighted segments.

Fig. 4: Segmented indoor scenes with some concave regions.

IV. EXPERIMENTS

The experiments reported in this paper aim at demonstrating the performance of the proposed method in terms of segmentation and superquadric fitting to laser range data. To support the experimental analysis, a set of laser scans collected in indoor and outdoor scenarios is used. Most of the dataset used in our experiments was obtained with our robotic vehicles, the AtlasCAR and the ISRobotCar, shown in Fig. 5. The first vehicle is described by [27], while the second is presented in the video-paper of [28].

Fig. 5: The AtlasCAR and the ISRobotCar instrumented robotic vehicles.

A. Experiments on 2D laser-range data

We have evaluated our method on a set of 2D range-data collected from indoor and outdoor environments with laserscanners mounted on mobile platforms. The weighting parameters intrinsic to the S3F method, α0 in (16) and αi in (9), have been adjusted and fixed throughout the experiments as α_{0,...,3} = {2, 1, 80, 30}. The inequality which is verified during optimization, comparing the cost function against a threshold, is invariant to a non-zero scale factor applied simultaneously to the explicit hyperparameters of (9) and the threshold. It is thus important to keep the ratio between those hyperparameters and the threshold ThrG which provides the optimal results. The value of α1 may be set to one; the other hyperparameters may then be chosen as multiples of α1. The ratio between α2 and α3 is especially important because α2 is responsible for controlling the fit to the set of points, while α3 is responsible for recovering superquadrics which are consistent with the sensor viewpoint. It has been found through numerical experiments that α2/α3 close to 2.7 and α3 close to 30 provide good results for outdoor scenes. The hyperparameter α0 is important to the numerical stability of the optimization and controls the transition width of the visibility constraint; it is not invariant to the mentioned scale factor and should be chosen as large as possible. Before discussing the quantitative

evaluation, detailed in Section IV-B, it is appropriate at this point to present some qualitative analysis. This consideration comes from the fact that there are some interesting details of fitting superquadrics to real data that deserve preliminary discussion. Macroscopic quantitative analysis tends to obscure some cases of difficult fitting which appear to be simple. In [4], some of the peculiarities of real outdoor data are briefly discussed and, before presenting statistical results, we will further build upon those.

Fig. 6: Segmentation of a scene containing road, poles, trees and bushes.

Results obtained on range-data collected in urban-like scenarios are shown in Fig. 6 to Fig. 8. The scans appear noisy, but this is mostly not due to intrinsic sensor noise; rather, it stems from the characteristics of the objects being scanned and from ego-motion, which constitutes one of the challenging aspects of real-world data. In Fig. 6, tree trunks and road poles have been properly segmented, and the extracted superquadrics, which depend on the number of support points available for fitting, resemble rounded squares and ellipses. The roadside bushes were also segmented, and the openings between them have been correctly preserved. The S3F algorithm works from global to local and the final fit has only local support (i.e., occluding superquadrics are not avoided in the global sense); moreover, line-segments do not constitute an ill-posed problem, since volume minimization is active.

Figure 7 shows a road scene where a vehicle is approaching from the opposite direction with respect to the ego-vehicle. On the left, an overview of the scan and the respective extracted superquadrics is depicted. On the bottom-right, a detailed view of the scene with the oncoming vehicle and some nearby poles is shown, giving evidence of the robustness of the proposed method under sparse data.

The road scene of Fig. 8 is characterized by several road agents, some of them with partial data,


Fig. 7: Case of an approaching car in the opposite direction of the ego-vehicle.

Fig. 8: Scene with pedestrians, vehicle, trees and sign poles.

including the presence of pedestrians in very close proximity. Typically, trees and lamp posts have been segmented and modeled by superquadrics resembling square shapes with rounded corners, while the vehicle and some nearby pedestrians have been segmented by a single superquadric.

Fig. 9: On the left, superquadrics resulting from Algorithm 1 with ThrG = 2 (red solid line) and ThrG = 5 (blue dashed line). On the right, an image of the scene is shown.

Finally, a road scene with pedestrians and a moving vehicle (at the left) is shown in Fig. 9. Using different values of ThrG in Algorithm 1, namely ThrG = 2 and ThrG = 5, we obtained the superquadrics drawn in red-solid and blue-dashed lines, respectively. As ThrG increases, the number of detected segments and respective superquadrics decreases, indicating a tendency to merge points. This behavior is intrinsic, since high thresholds tend to merge segments while lower thresholds have the opposite effect, establishing a tradeoff. It is noticed, as in [4], that vehicles pose a particular problem to segmentation algorithms, normally leading to oversegmentation due to the heavy discontinuity in the scan of the fender area.

B. Performance evaluation

The methodology used to quantitatively evaluate the segmentation algorithms is described as follows. We have manually labeled 3098 segments in order to compose the set Γ. The labeling process was performed in the laserscanner Cartesian space with the aid of image-frames collected from the on-board cameras. For each scan s, the objects (segments) of interest were labeled according to the following criteria:

1) The objects of interest are: vehicles, poles, tree-trunks, sign-posts and pedestrians.

2) Points belonging to rigid-body objects, such as vehicles, were labeled as a single object.

3) For any pair of clearly spaced groups of range-points that correspond to the legs of a given pedestrian, the pair is labeled as two independent segments. Otherwise, if they appear too close, a single segment is added to the ground-truth.

4) A group of objects appearing together in an image-frame and in the corresponding scan, especially people close to each other, was usually labeled by a number of segments smaller than the actual number of objects, since a more informed decision could not be established.

Let Γs denote the set of labeled segments for a given scan s, where ns = |Γs| is the number of segments belonging to Γs and |·| denotes the cardinality of a set. Each labeled segment in Γs is denoted by Si (with i = 1, ..., ns), and the number of range-points in each segment Si is designated by Ti. Similarly, let Ks be the number of superquadrics extracted from s and Mj the number of points in a given superquadric Qj. For the purpose of evaluation, each superquadric is referred to as a 'segment'. For each segment we calculate

Ri = ∑_{j=1}^{Ks} |Si ∩ Qj|

as the total number of points of Si whose range-coordinates lie in Qj. Thus, if there is a perfect match between Si and Qj, then Ri = Ti = Mj; if Si has no association with any Qj, Ri = 0; while if there exists a partial match, Ri < Ti.

Let 𝒫i be the set of superquadrics which contain at least one point associated to Si, and let Pi = |𝒫i| be the number of superquadrics matched to Si. Denoting by Mi the total number of points associated with the superquadrics in 𝒫i, we propose, based on the above notation, the following 'performance measure' calculated for each ith segment:

Di = 2Ri / ((Mi + Ti) Pi),   (17)

where Di ∈ ]0,1]. In (17), Pi penalizes oversegmentation since, when it occurs, 1/Pi < 1. On the other hand, undersegmentation and undetected fractions of the ground truth affect (17) through 2Ri/(Mi + Ti) < 1. The per-scan performance measure is expressed by

Ls = ∑_{i=1}^{ns} log(Di)   (18)

and the global measure L, considering all N = 1081 scans of the dataset, reduces to L = ∑_{s=1}^{N} Ls. The maximum value of (18) (Ls = 0) represents a perfect match, which means that each segment in Γ is represented unambiguously by a single superquadric. For the purpose of comparison with other methods, the value of L should be normalized by the total number of segments in Γ (equivalent to using a geometric mean).
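The measures (17) and (18) can be sketched as follows, assuming each labeled segment Si and each extracted superquadric Qj is represented as a set of range-point indices; the function names are illustrative, not from the paper's implementation.

```python
# Sketch of the per-segment measure (17) and per-scan measure (18),
# assuming segments and superquadrics are sets of range-point indices.
import math

def per_segment_measure(S_i, quadrics):
    """D_i = 2 R_i / ((M_i + T_i) P_i) for one labeled segment."""
    T_i = len(S_i)
    matched = [Q for Q in quadrics if S_i & Q]    # the set P_i of matched quadrics
    P_i = len(matched)
    if P_i == 0:
        return 0.0                                # no association: R_i = 0
    R_i = sum(len(S_i & Q) for Q in matched)      # points of S_i recovered by the quadrics
    M_i = sum(len(Q) for Q in matched)            # total points of the matched quadrics
    return 2.0 * R_i / ((M_i + T_i) * P_i)

def per_scan_measure(segments, quadrics):
    """L_s = sum_i log(D_i); equals 0 only for a perfect one-to-one match."""
    d_values = [per_segment_measure(S, quadrics) for S in segments]
    return sum(math.log(d) for d in d_values if d > 0)   # D_i lies in ]0,1]

# Perfect match: one quadric per labeled segment -> every D_i = 1, L_s = 0
print(per_scan_measure([{0, 1, 2}, {3, 4}], [{0, 1, 2}, {3, 4}]))   # 0.0
# Oversegmentation: one segment split across two quadrics -> D_i = 0.5
print(per_segment_measure({0, 1, 2, 3}, [{0, 1}, {2, 3}]))          # 0.5
```

Note how the 1/Pi factor in the second example halves the score even though every point is recovered.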

We benchmark the S3F method against two popular segmentation methods used by the robotics community and a state-of-the-art method. The first one is a simple but effective method designated Jump Distance Segmentation (JDS), which considers an approximation of the Euclidean distance JD = |ri − ri+1| between consecutive range-points (ri, ri+1) and a threshold ThrJ. In the JDS method, a break-point is detected if JD > ThrJ. The second evaluated method, called KF-based Segmentation (KFS), uses a Kalman filter in conjunction with a statistical test, under a Chi-square validation region, to detect breakpoints. The stochastic model used to describe the spatial-dynamic evolution of the range measurements, as well as the transition matrices, is described in [29]. Basically, a breakpoint is detected if the normalized innovation squared exceeds a threshold Thrχ according to a χ²₁ distribution table. The model used in the KFS framework is not restrictive with respect to shape; it assumes a constant rate of change between the range-distance and the angle (also termed a constant speed model). The third method considered here, called Segmentation using Invariant Parameters (SIP), has been recently developed by Fortin et al. [4], where the authors reported experimental results on vehicle detection. The SIP method was primarily developed to deal with laser measurements in polar coordinates. This approach is founded on the use of lines to model segmented objects, and leads to the definition of a criterion for line-segment detection that depends only on the sensor intrinsic parameters and range measurement noise. In [4], a confidence interval in a Mahalanobis-based merging procedure and a scene preservation distance are considered design parameters. Finally, after adjusting these design parameters by means of validation tests, we perform experiments with the SIP method considering the standard deviation (ThrS) of the range measurements as the variable parameter.
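As a concrete illustration, the JDS breakpoint rule described above can be sketched as follows. The paper computes JD as an approximation of the Euclidean distance between consecutive points; this sketch uses the exact Cartesian distance from polar readings, and the variable names are ours.

```python
# Minimal sketch of Jump Distance Segmentation (JDS): break the scan
# wherever consecutive range-points are farther apart than Thr_J.
import math

def jds_segment(ranges, angles, thr_j):
    """Split an ordered scan into segments at jump-distance breakpoints."""
    points = [(r * math.cos(a), r * math.sin(a)) for r, a in zip(ranges, angles)]
    segments, current = [], [0]
    for i in range(1, len(points)):
        jd = math.dist(points[i - 1], points[i])   # distance between consecutive points
        if jd > thr_j:                             # breakpoint detected
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return segments

# Two clusters of readings separated by a large range jump
ranges = [1.0, 1.02, 1.01, 3.0, 3.05]
angles = [math.radians(a) for a in (0, 1, 2, 3, 4)]
print(jds_segment(ranges, angles, thr_j=0.5))      # [[0, 1, 2], [3, 4]]
```

The single threshold makes the tradeoff visible: a larger thr_j merges the clusters, a smaller one splits them further.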

For N = 1081 scans, Fig. 10 shows the values of L as a function of normalized thresholds, where Thrmax L is the threshold which provides the maximum value of L (the thresholds were normalized in order to align the maximum of each curve at a common abscissa). One may notice that all curves have identical trends, which is explained by the fact that distance-based criteria form the basis of all the methods.

Fig. 10: Results of the global measure L as a function of the normalized threshold for the S3F, JDS, KFS and SIP methods.


The nomenclature used for indicating our evaluation measures is as follows:

N.Det.seg = number of detected (extracted) segments used to represent a total of 3098 labeled segments of Γ.

N.Match = total number of extracted segments with unique and complete ground-truth correspondence.

N.Overseg = total number of oversegmentation instances.

N.Underseg = total number of undersegmentation instances.

The benchmark results are summarized in Table I and were obtained using the thresholds whose values correspond to the maximum of the global measure L. The maximum values for each method correspond to the absolute thresholds ThrG = 5.5, ThrJ = 0.1, Thrχ = 2.0 and ThrS = 0.3, respectively for S3F, JDS, KFS and SIP. According to the results given in Table I, the S3F method outperforms all the other methods with respect to the characteristics of the recovered set. The KFS slightly outperforms the JDS in terms of perfect matches (given by N.Match), additionally providing less undersegmentation and better stability near the maximum L (Fig. 10), while the JDS method presents lower oversegmentation compared to the KFS method. Besides vehicle segmentation and detection, which was the primary application presented in [4], here SIP was benchmarked on a real-world dataset with a variety of objects of different sizes and shapes. Under these circumstances the performance of SIP was mainly penalized by its higher rate of oversegmentation, induced by missing laser returns and by the variability intrinsic to object shapes.

Complementary results of the S3F method are provided in Fig. 11, with values of L shown as a function of the normalized hyperparameters α2 and α3, chosen for their very distinctive roles in the optimization. The parameters were varied one at a time and normalized by αi max L, the respective value of αi (i = 2, 3) which results in the maximum L for ThrG = 5.5. One can verify that α3 has a strong influence on segmentation performance: low values of α3 alleviate the visibility constraints at the cost of undersegmentation, while large values induce oversegmentation in order to impose visibility.

TABLE I: Summary of the experimental results of the algorithms. The ground truth has 3098 segments.

Method   N.Det.seg   N.Match   N.Overseg   N.Underseg
S3F      3308        2270      309         142
KFS      3523        1949      555         203
JDS      3419        1853      458         222
SIP      3588        1332      615         222

Fig. 11: Global measure L achieved by S3F as a function of the normalized α2 and α3 (curves of L vs α2 and L vs α3, with abscissa αi/αi max L).

Further performance assessment is based on precision-recall curves that characterize segmentation results in terms of over- and undersegmentation [30]. In this work, we define the precision Pr and recall Re measures by

Pr(Thr) = N.Match / (N.Match + N.Overseg)   (19)

Re(Thr) = N.Match / (N.Match + N.Underseg),   (20)

where each value of Thr gives a point on the curve. We have varied the thresholds according to ThrG ∈ [0.1, 14], ThrJ ∈ [0.005, 0.22], Thrχ ∈ [0.005, 9.0] and ThrS ∈ [0.083, 3.3], with the resulting curves shown in Fig. 12. Among the methods, JDS was the most sensitive to an increase of the threshold level (measured as a fraction of the optimal value) and is thus the most prone to low recall (high undersegmentation). Compared to JDS, the KFS method presents better behavior; however, Thrχ acts in a more complex manner than ThrJ. S3F exhibits robustness with respect to precision and shows a more stable behavior around the optimal operating point of the Pr-Re curve (the upper-right point), while SIP presents the highest stability near the optimal operating point.
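For illustration, the measures (19) and (20) evaluated at the S3F operating point of Table I give the following; the function names are ours.

```python
# Precision/recall measures (19)-(20) from the match counts;
# the example counts are taken from the S3F row of Table I.
def precision(n_match, n_overseg):
    return n_match / (n_match + n_overseg)

def recall(n_match, n_underseg):
    return n_match / (n_match + n_underseg)

pr = precision(2270, 309)   # S3F: N.Match = 2270, N.Overseg = 309
re = recall(2270, 142)      # S3F: N.Underseg = 142
print(round(pr, 3), round(re, 3))   # 0.88 0.941
```

The higher recall than precision at this operating point reflects that S3F's residual errors lean toward oversegmentation rather than undersegmentation.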

All code has been implemented in an interpreted language and without exercising special care with respect to resource management (memory and calls to the processor). For timing purposes, the conceptual Algorithm 1 for S3F has been set to a 0.01 tolerance on the estimated parameters (one centimeter in the case of size parameters) and to the optimal threshold, and has been given the initial values discussed in Section III.

In these conditions, extracting up to 10 two-dimensional superquadrics using our method demands on average 1.5 s of CPU time running in a single thread on a single core (Intel T9300 2.5 GHz CPU, Ubuntu 12.04) in the Matlab environment. However, most of the time of the conceptual algorithm is spent in operations whose number can be reduced if the implementation exchanges information on the breakpoints between iterations, in which case the single-threaded execution time is expected to fall to around 0.2 s. Further reductions require optimization of resources. Both the KFS and the JDS are several times faster. The version of SIP applied to our data runs in 0.05 s on average.

Fig. 12: Results of precision-recall for the S3F, JDS, KFS and SIP methods.

C. Experiments on 3D laser-range data

Processing of 3D point clouds normally results in computationally intensive procedures. The projection of 3D point clouds onto a 2D plane can often be used as a means to reduce problem complexity and to make 2D superquadric-based segmentation applicable as a pre-processing step for the 3D problem. This is the main reason why 2D representations of 3D scenes have been chosen and successfully applied in other studies, e.g. [31], [32].

In this section we have used 3D point clouds available from the KITTI dataset [33] and from an in-house platform which extrinsically rotates a 2D range finder, as shown in Fig. 13. The 3D point clouds from the KITTI dataset were generated by a Velodyne sensor and have an intrinsic ordering, while those from the in-house setup are not ordered.

Raw point clouds are sparse and noisy, the scenes themselves are unstructured, and it becomes complex to fit general models to the objects. Direct superquadric fitting would be prone to errors and would be highly time consuming. To overcome these problems, we first project the 3D point cloud onto a 2D plane and then apply the S3F algorithm directly. The decision on how to perform the projection, or whether there is indeed a projection that greatly simplifies the segmentation, depends on the type of scene, and it is out of the scope of this work to satisfy all situations.

Fig. 13: The in-house 3D laser platform.

We assume that the correct projections are vertical onto a plane that fits the ground. The plane is described by all points in space satisfying:

n · (xp−x0) = 0, (21)

with n the plane normal and x0 a point known to belong to that plane. This point is chosen to be the intersection between the plane and the vertical axis of the reference frame moving with the vehicle (relative reference frame, i.e., x3). Thus, there are three unknowns: two normal components and one offset. The unknown unit normal components have been chosen as n1 and n2, with n3 assumed to be positive.

We start by determining the plane that best fits the points on the ground, if they exist. Assuming that geometric moments and principal directions of sparse clouds, conditioned on partial information, would induce errors, we have decided to fit the ground-plane using nonlinear least squares and a cost G∇ similar to the self-occlusion cost proposed for the superquadrics. This cost states that the plane cannot have many scanned points under it (some assumptions have been made to account for sensor noise).

Once the plane is found, a 2D description is sought, which is required to be compatible with the concurrent extraction procedure; further processing is therefore necessary. The 3D point cloud is projected vertically onto the previously determined plane and a synthetic 2D scan is obtained by using a grid procedure, similar to the occupancy grids presented in [34], whose grid size depends on the smallest element we wish to detect. The grid size in our work has been chosen to be 5 cm. The resulting 2D scan is processed using the same procedures as before.
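A minimal sketch of this projection-plus-grid reduction, assuming the ground plane (normal n, point x0) has already been estimated; the in-plane basis construction and the cell bookkeeping are our assumptions, not the paper's implementation.

```python
# Reduce a 3D cloud to a synthetic 2D scan: project points vertically
# onto the estimated ground plane, then keep one representative point
# per occupied grid cell (5 cm cells in the paper).
import numpy as np

def cloud_to_2d_scan(points, n, x0, cell=0.05):
    """points: (N,3) array; n: plane normal; x0: point on the plane."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    # Build an in-plane orthonormal basis (u, v) completing the normal n
    u = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-9:                  # n parallel to the x-axis
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    d = points - x0
    coords = np.stack([d @ u, d @ v], axis=1)     # drop the normal component
    occupied = {}                                 # grid cell -> representative 2D point
    for p in coords:
        occupied.setdefault((int(p[0] // cell), int(p[1] // cell)), p)
    return np.array(list(occupied.values()))

# Two stacked points project into the same cell -> a single 2D return
pts = np.array([[1.02, 2.02, 0.0], [1.03, 2.03, 1.5], [3.0, 4.0, 0.5]])
scan = cloud_to_2d_scan(pts, n=[0, 0, 1], x0=[0.0, 0.0, 0.0])
print(len(scan))                                   # 2
```

Keeping only one return per cell is what turns a vertical structure (many stacked points) into a single 2D scan point, which is exactly what the 2D S3F stage expects.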

Results are given here for a simple harbor scene scanned by the in-house scanner and for more complex scenes available from the KITTI dataset. The 2D superquadrics are used to determine the points that will contribute to a 3D fit, by calculating the inside-outside function for all candidates and considering a specified neighborhood, which we have fixed arbitrarily at 0.1 m. In case overlap exists, the overlapping points are attributed in a greedy manner to the largest superquadric. Since 2D superquadrics are a subset of the 3D ones, they are also used to estimate the initial parameters for the 3D fit. Fitting each 3D superquadric to centimeter tolerance then takes on average 0.2 s on raw point clouds from the KITTI dataset.
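A sketch of this point-selection step for a single 2D superquadric, using the standard superellipse inside-outside function; the parameterization and the conversion of the 0.1 m neighborhood into a radial margin are our assumptions, not the paper's exact rule.

```python
# Gather candidate points for the 3D fit with the 2D inside-outside
# function: F = |x'/a1|^(2/eps) + |y'/a2|^(2/eps) equals 1 on the
# contour and is smaller than 1 inside.
import math

def inside_outside(p, center, theta, a1, a2, eps):
    """Evaluate F for point p expressed in the superquadric's local frame."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    x = math.cos(theta) * dx + math.sin(theta) * dy     # rotate into local frame
    y = -math.sin(theta) * dx + math.cos(theta) * dy
    return abs(x / a1) ** (2.0 / eps) + abs(y / a2) ** (2.0 / eps)

def candidate_points(points, sq, margin=0.1):
    """Keep points inside the superquadric or within `margin` of its contour."""
    center, theta, a1, a2, eps = sq
    keep = []
    for p in points:
        f = inside_outside(p, center, theta, a1, a2, eps)
        if f <= 1.0:                       # inside or on the contour
            keep.append(p)
            continue
        r = math.hypot(p[0] - center[0], p[1] - center[1])
        scale = f ** (eps / 2.0)           # contour crossing lies at r/scale along the ray
        if r - r / scale <= margin:        # approximate distance to the contour
            keep.append(p)
    return keep

sq = ((0.0, 0.0), 0.0, 1.0, 1.0, 1.0)      # unit circle: a1 = a2 = eps = 1
pts = [(0.5, 0.0), (1.05, 0.0), (2.0, 0.0)]
print(candidate_points(pts, sq))           # [(0.5, 0.0), (1.05, 0.0)]
```

For eps = 1 the radial test is exact; for other exponents it is an along-ray approximation of the Euclidean margin, which is sufficient for selecting support points.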

Presented in the top-right part of Fig. 14 are the extracted 2D superquadrics for the harbor scene shown adjacent. Finally, the resulting 3D fit is shown at the bottom of the same figure. It can be seen that the 2D segmentation has been performed well and that the 3D superquadrics are adequate to model the containers. Figures 15 and 16 show results for point clouds from the KITTI dataset; an image of the scene is presented at the top and the modeled 3D scene at the bottom. These scenes further demonstrate the importance and applicability of 2D projections to reduce the complexity and computational time of processing point clouds from real outdoor scenarios. In Fig. 15 it can be seen that not all vehicles have been correctly recovered, in particular the dark ones on the right side of the road. This is a problem on the sensor side: the light emitted by the laserscanner, which then hits the dark vehicles, provides a very weak return signal, eventually not discernible from noise and tagged as missing.

Fig. 14: Harbor scene as described by 2D and 3D superquadrics. In the lower-left part, the extracted 2D superquadrics are shown, while the final 3D superquadrics are illustrated on the right.

V. CONCLUSION

In this paper we have proposed a method for multiple object segmentation and modeling, using the superquadrics formulation, in range data collected from laserscanners mounted onboard ground-robotic platforms. The proposed method is directly applicable to 2D cases and is based on the minimization of an objective function and on segmentation criteria introduced in this work. The objective function accounts for the size of the recovered object, the distance between the recovered superquadric and the range points, and for partial occlusions. The criteria adopted for segmentation are based on the superquadric inside-outside function and its gradient. As a consequence, the method preserves consistency between the formulations used for segmentation and fitting.

Fig. 15: Scene from the KITTI dataset with vehicles.

Fig. 16: City scene from the KITTI dataset with pedestrians, poles, bicycle and buildings.

In the form of an iterative algorithm, and considering the experimental results obtained with a labeled dataset comprising 2D range scans from indoor and outdoor environments, the segmentation aspect of the present proposal outperformed two popular segmentation approaches in terms of a segmentation performance measure, introduced in this work, which penalizes over- and undersegmentation. Regarding object modeling, which is an integral part of the method, we can conclude that, at the cost of a few shape parameters, superquadrics have a strong capability of representing multiple shapes of interest within a unified functional approach.

Finally, the algorithm used in the 2D case has been applied as the primary preprocessing stage in 3D range data, preserving the consistency between the 2D and 3D formulations and providing a convenient solution towards more complete 3D object segmentation and modeling.

ACKNOWLEDGMENT

The first author would like to thank the Portuguese Foundation for Science and Technology (FCT) for funding under post-graduation grant SFRH/BPD/69555/2010 (funds from POPH-QREN, co-financed by the European Social Fund and MEC National Funds). This work has also been supported by FCT and the COMPETE program (grant PDCS10: PTDC/EEA-AUT/113818/2009). We would like to thank Dr. Benoît Fortin for sharing the original scripts of the SIP method with pre-clustering and merging. We would also like to thank the anonymous reviewers, whose comments have led to a significant improvement of the manuscript.

REFERENCES

[1] S. Gidel, P. Checchin, C. Blanc, T. Chateau, and L. Trassoudaine, "Pedestrian detection and tracking in an urban environment using a multilayer laser scanner," Intelligent Transportation Systems, IEEE Transactions on, vol. 11, no. 3, pp. 579–588, 2010.

[2] C. Premebida and U. Nunes, "Fusing lidar, camera and semantic information: A context-based approach for pedestrian detection," The International Journal of Robotics Research, vol. 32, no. 3, pp. 371–384, 2013.

[3] J. Han, D. Kim, M. Lee, and M. Sunwoo, "Enhanced road boundary and obstacle detection using a downward-looking lidar sensor," Vehicular Technology, IEEE Transactions on, vol. 61, no. 3, pp. 971–985, 2012.

[4] B. Fortin, R. Lherbier, and J.-C. Noyer, "Feature extraction in scanning laser range data using invariant parameters," Vehicular Technology, IEEE Transactions on, vol. 61, no. 9, pp. 3838–3850, August 2012.

[5] S. Rodriguez, V. Fremont, P. Bonnifait, and V. Cherfaoui, "An embedded multi-modal system for object localization and tracking," Intelligent Transportation Systems Magazine, IEEE, vol. 4, no. 4, pp. 42–53, 2012.

[6] J. Choi, J. Lee, D. Kim, G. Soprani, P. Cerri, A. Broggi, and K. Yi, "Environment-detection-and-mapping algorithm for autonomous driving in rural or off-road environment," Intelligent Transportation Systems, IEEE Transactions on, vol. 13, no. 2, pp. 974–982, 2012.

[7] T.-D. Vu and O. Aycard, "Laser-based detection and tracking moving objects using data-driven Markov chain Monte Carlo," in Robotics and Automation, 2009. ICRA '09. IEEE International Conference on, 2009, pp. 3800–3806.

[8] C. Premebida and U. Nunes, "Segmentation and geometric primitives extraction from 2D laser range data for mobile robot applications," in 5th National Festival of Robotics Scientific Meeting (Robotica 2005), Coimbra, Portugal, April 2005.

[9] V. Nguyen, S. Gächter, A. Martinelli, N. Tomatis, and R. Siegwart, "A comparison of line extraction algorithms using 2D range data for indoor mobile robotics," Autonomous Robots, vol. 23, no. 2, pp. 97–111, 2007. [Online]. Available: http://dx.doi.org/10.1007/s10514-007-9034-y

[10] S. Zhang, M. Adams, F. Tang, and L. Xie, "Geometrical feature extraction using 2D range scanner," in Control and Automation, 2003. ICCA '03. Proceedings. 4th International Conference on, 2003, pp. 901–905.

[11] A. H. Barr, "Superquadrics and angle preserving transformations," IEEE Comput. Graph. Appl., vol. 1, no. 1, pp. 11–23, 1981.

[12] F. Solina and R. Bajcsy, "Recovery of parametric models from range images: the case for superquadrics with global deformations," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 12, no. 2, pp. 131–147, 1990.

[13] D. Katsoulas, C. Bastidas, and D. Kosmopoulos, "Superquadric segmentation in range images via fusion of region and boundary information," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 30, no. 5, pp. 781–795, 2008.

[14] R. Cipolla, B. Stenger, A. Thayananthan, and P. Torr, "Hand tracking using a quadric surface model," in Mathematics of Surfaces, LNCS 2768, 2003, pp. 129–141.

[15] D. Conner, H. Choset, and A. Rizzi, "Towards provable navigation and control of nonholonomically constrained convex-bodied systems," in Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA '06), May 2006.

[16] P. Drews, P. Nunez, R. Rocha, M. Campos, and J. Dias, "Novelty detection and 3D shape retrieval using superquadrics and multi-scale sampling for autonomous mobile robots," in Robotics and Automation (ICRA), 2010 IEEE International Conference on, May 2010, pp. 3635–3640.

[17] A. Jaklic, A. Leonardis, and F. Solina, Segmentation and Recovery of Superquadrics, ser. Computational Imaging and Vision, vol. 20. Dordrecht: Kluwer, 2000. ISBN 0-7923-6601-8.

[18] T. I. Fossen, Marine Control Systems - Guidance, Navigation, and Control of Ships, Rigs and Underwater Vehicles. Trondheim: Marine Cybernetics, 2002.

[19] A. H. Gross and T. E. Boult, "Error of fit measures for recovering parametric solids," in Second International Conference on Computer Vision, 1988, pp. 690–694.

[20] E. van Dop and P. Regtien, "Fitting undeformed superquadrics to range data: Improving model recovery and classification," in Internat. Conf. Computer Vision and Pattern Recognition, 1998, pp. 396–401.

[21] Y. Zhang, "Experimental comparison of superquadric fitting objective functions," Pattern Recognition Letters, vol. 24, pp. 2185–2193, 2003.

[22] G. Biegelbauer and M. Vincze, "Efficient 3D object detection by fitting superquadrics to range image data for robot's object manipulation," in Robotics and Automation, 2007 IEEE International Conference on, April 2007, pp. 1086–1091.

[23] L. Zhou and C. Kambhamettu, "Extending superquadrics with exponent functions: Modeling and reconstruction," in IEEE Conference on Computer Vision and Pattern Recognition, 1999, pp. 73–78.

[24] L. Zhou and C. Kambhamettu, "Representing and recognizing complete set of geons using extended superquadrics," in IEEE Conference on Computer Vision and Pattern Recognition, 2002.

[25] D. Skanda and D. Lebiedz, "An optimal experimental design approach to model discrimination in dynamic biochemical systems," Bioinformatics, vol. 26, no. 7, pp. 939–945, 2010.

[26] T. S. Bhabhrawala, "Shape recovery from medical image data using extended superquadrics," Master's thesis, Department of Mechanical and Aerospace Engineering, State University of New York at Buffalo, 2004.

[27] V. Santos, J. Almeida, E. Avila, D. Gameiro, M. Oliveira, R. Pascoal, R. Sabino, and P. Stein, "Atlascar - technologies for a computer assisted driving system on board a common automobile," in Intelligent Transportation Systems (ITSC), 2010 13th International IEEE Conference on, September 2010, pp. 1421–1427.

[28] M. Silva, F. Moita, U. Nunes, L. Garrote, H. Faria, and J. Ruivo, "Isrobotcar: The autonomous electric vehicle project," in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, October 2012, pp. 4233–4234.

[29] G. A. Borges and M.-J. Aldon, "Line extraction in 2D range images for mobile robotics," J. Intell. Robotics Syst., vol. 40, no. 3, pp. 267–297, Jul. 2004.

[30] F. Estrada and A. Jepson, "Quantitative evaluation of a novel image segmentation algorithm," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, San Diego, USA, 2005, pp. 1132–1139.

[31] O. Wulf, D. Lecking, and B. Wagner, "Robust self-localization in industrial environments based on 3D ceiling structures," in Proceedings of the 2006 IEEE International Conference on Intelligent Robots and Systems. Beijing, China: IEEE, 2006.

[32] D. Ferguson, M. Darms, C. Urmson, and S. Kolski, "Detection, prediction, and avoidance of dynamic obstacles in urban environments," in Proceedings of the 2008 IEEE Intelligent Vehicles Symposium. Eindhoven: IEEE, June 2008, pp. 1149–1154.

[33] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite," in Conference on Computer Vision and Pattern Recognition (CVPR), 2012.

[34] M. Himmelsbach, F. von Hundelshausen, and H.-J. Wunsche, "Lidar-based perception for offroad navigation," in Proceedings of FAS 2008, Fahrerassistenzsysteme Workshop 2008. Walting, Germany: C. Stiller and M. Maurer, April 2008.
