
Research Collection

Conference Paper

A toolbox for easily calibrating omnidirectional cameras

Author(s): Scaramuzza, Davide; Martinelli, Agostino; Siegwart, Roland

Publication Date: 2006

Permanent Link: https://doi.org/10.3929/ethz-a-005704721

Originally published in: Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, http://doi.org/10.1109/IROS.2006.282372

Rights / License: In Copyright - Non-Commercial Use Permitted

This page was generated automatically upon download from the ETH Zurich Research Collection. For more information please consult the Terms of use.

ETH Library


Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems

October 9 - 15, 2006, Beijing, China

A Toolbox for Easily Calibrating Omnidirectional Cameras

Davide Scaramuzza, Agostino Martinelli, Roland Siegwart
Autonomous Systems Lab

Swiss Federal Institute of Technology Zurich (ETH), CH-8092 Zurich, Switzerland

{davide.scaramuzza, agostino.martinelli, r.siegwart}@ieee.org

Abstract - In this paper, we present a novel technique for calibrating central omnidirectional cameras. The proposed procedure is very fast and completely automatic, as the user is only asked to collect a few images of a checker board, and click on its corner points. In contrast with previous approaches, this technique does not use any specific model of the omnidirectional sensor. It only assumes that the imaging function can be described by a Taylor series expansion whose coefficients are estimated by solving a four-step least-squares linear minimization problem, followed by a non-linear refinement based on the maximum likelihood criterion. To validate the proposed technique, and evaluate its performance, we apply the calibration on both simulated and real data. Moreover, we show the calibration accuracy by projecting the color information of a calibrated camera on real 3D points extracted by a 3D SICK laser range finder. Finally, we provide a Toolbox which implements the proposed calibration procedure.

Index Terms - catadioptric, omnidirectional, camera, calibration, toolbox.

I. INTRODUCTION

An omnidirectional camera is a vision system providing a 360° panoramic view of the scene. Such an enhanced field of view can be achieved either by using catadioptric systems, which suitably combine mirrors and conventional cameras, or by employing purely dioptric fish-eye lenses [10].

Omnidirectional cameras can be classified into two classes, central and non-central, depending on whether they satisfy the single effective viewpoint property or not [1]. As shown in [1], central catadioptric systems can be built by combining an orthographic camera with a parabolic mirror, or a perspective camera with a hyperbolic or elliptical mirror. Conversely, panoramic cameras using fish-eye lenses cannot in general be considered central systems, but the single viewpoint property holds approximately true for some camera models [8].

In this paper, we focus on the calibration of central omnidirectional cameras, both dioptric and catadioptric. After describing our novel procedure, we provide a practical Matlab Toolbox [14], which allows the user to quickly estimate the intrinsic model of the sensor.

II. RELATED WORK

Previous works on omnidirectional camera calibration can be classified into two categories. The first includes methods that exploit prior knowledge about the scene, such as the presence of calibration patterns [3, 4] or plumb lines [5]. The second covers techniques that do not use this knowledge: calibration from pure rotation [4] or planar motion of the camera [6], and self-calibration procedures, which use point correspondences and the epipolar constraint to minimize an objective function [7, 8, 9, 11]. All of these techniques yield accurate calibration results, but they primarily focus on particular sensor types (e.g. hyperbolic and parabolic mirrors or fish-eye lenses). Moreover, some of them require a special scene setup and ad-hoc equipment [4, 6].

In recent years, novel calibration techniques have been developed which apply to any kind of central omnidirectional camera. For instance, in [2], the authors extend the geometric distortion model and the self-calibration procedure described in [8] to include mirrors, fish-eye lenses and non-central cameras. In [15, 17, 18], the authors describe a method for central catadioptric cameras using geometric invariants; they show that any central catadioptric system can be fully calibrated from an image of three or more lines. In [16], the authors present a unified imaging model for fisheye and catadioptric cameras. Finally, in [19], the authors present a general imaging model which encompasses most projection models used in computer vision and photogrammetry, and introduce theory and algorithms for a generic calibration concept.

In this work, we also address the calibration of any kind of central omnidirectional camera, but we aim to provide a technique that is practical and easy to apply. The result of this work is a Matlab Toolbox that requires minimal user interaction. We use a checker board as a calibration pattern, shown at different unknown positions. The user is only asked to collect a few images of this board and click on its corner points. No a priori knowledge about the mirror or the camera model is required.

The work described in this paper reexamines the generalized parametric model of a central system, which we presented in our previous work [20]. This model assumes that the imaging function, which describes the relation between a pixel point and the 3D half-ray emanating from the single viewpoint, can be described by a Taylor series expansion, whose coefficients are the parameters to be estimated.

The contributions of the present work are the following. First, we simplify the camera model by reducing the number of parameters. Next, we refine the calibration output by using a four-step least-squares linear minimization, followed by a non-linear refinement based on the maximum likelihood criterion. By doing so, we improve the accuracy of our previous technique and allow calibration to be done with a smaller number of images. Then, in contrast with our previous work, we no longer need the circular boundary of the mirror to be visible in the image. In that work, we used the appearance of the mirror boundary to compute both the position of the center of the omnidirectional image and the affine transformation. Conversely, here these parameters are computed automatically using only the points the user selected.

In this paper, we evaluate the performance and robustness of the calibration by applying the technique to simulated data. Then, we calibrate a real catadioptric camera, and show the accuracy of the result by projecting the color information of the image onto real 3D points extracted by a 3D SICK laser range finder. Finally, we provide a Matlab Toolbox [14] which implements the procedure described here.

The paper is organized as follows. For the sake of clarity, we report in section III the camera model introduced in our previous work, and provide its new simplified version. In section IV, we describe our camera calibration technique and the automatic detection of both the image center and the affine transformation. Finally, in section V, we show the experimental results, on both simulated and real data, and present our Matlab Toolbox.

III. A PARAMETRIC CAMERA MODEL

For clarity, we first report the central camera model introduced in [20]; then, we provide its new simplified version. We will use the notation given in [8].

In the general central camera model, we identify two distinct references: the camera image plane $(u', v')$ and the sensor plane $(u'', v'')$. The camera image plane coincides with the camera CCD, where points are expressed in pixel coordinates. The sensor plane is a hypothetical plane orthogonal to the mirror axis, with the origin located at the plane-axis intersection. In Fig. 1, the two reference planes are shown in the case of a catadioptric system. In the dioptric case, the sign of $u''$ would be reversed because of the absence of a reflective surface. All coordinates will be expressed in the coordinate system placed in $O$, with the $z$ axis aligned with the sensor axis (see Fig. 1a). Let $X$ be a scene point. Then, let $u'' = [u'', v'']^T$ be the projection of $X$ onto the sensor plane, and $u' = [u', v']^T$ its image in the camera plane (Fig. 1b and 1c). As observed in [8], the two systems are related by an affine transformation, which incorporates the digitizing process and small axes misalignments; thus $u'' = A u' + t$, where $A \in \mathbb{R}^{2 \times 2}$ and $t \in \mathbb{R}^{2 \times 1}$.

At this point, we can introduce the imaging function $g$, which captures the relationship between a point $u''$ in the sensor plane and the vector $p$ emanating from the viewpoint $O$ to a scene point $X$ (see Fig. 1a). By doing so, the relation between a pixel point $u'$ and a scene point $X$ is:

$$\lambda \cdot p = \lambda \cdot g(u'') = \lambda \cdot g(A u' + t) = P \cdot X, \quad \lambda > 0, \qquad (1)$$

where $X \in \mathbb{R}^4$ is expressed in homogeneous coordinates and $P \in \mathbb{R}^{3 \times 4}$ is the perspective projection matrix. By calibration of the omnidirectional camera we mean the estimation of the matrix $A$, the vector $t$, and the non-linear function $g$, so that all vectors $g(A u' + t)$ satisfy the projection equation (1). We assume for $g$ the following expression:

$$g(u'', v'') = \left( u'', \; v'', \; f(u'', v'') \right)^T. \qquad (2)$$

We assume that the function $f$ depends on $u''$ and $v''$ only through $\rho'' = \sqrt{u''^2 + v''^2}$. This hypothesis corresponds to assuming that the function $g$ is rotationally symmetric with respect to the sensor axis.

Fig. 1 (a) Coordinate system in the catadioptric case. (b) Sensor plane, in metric coordinates. (c) Camera image plane, expressed in pixel coordinates. (b) and (c) are related by an affine transformation.

The function $f$ can have various forms depending on the mirror or lens construction. These functions can be found in [10, 11, 12]. Rather than using a specific model for the sensor in use, we choose a generalized parametric model of $f$, which is suitable for different kinds of sensors. The reason for doing so is that we want this model to compensate for any misalignment between the focus point of the mirror (or the fisheye lens) and the camera optical center. Furthermore, we want our generalized function to approximately hold for sensors where the single viewpoint property is not exactly verified (e.g. generic fisheye cameras). In our earlier work, we proposed the following polynomial form for $f$:

$$f(u'', v'') = a_0 + a_1 \rho'' + a_2 \rho''^2 + \dots + a_N \rho''^N, \qquad (3)$$

where the coefficients $a_i$, $i = 0, 1, 2, \dots, N$, and the polynomial degree $N$ are the parameters to be determined by the calibration. Thus, (1) can be rewritten as



$$\lambda \cdot g(A u' + t) = P \cdot X, \quad \lambda > 0. \qquad (4)$$

As mentioned in the introduction, in this paper we want to reduce the number of calibration parameters. This can be done by observing that all definitions of $f$ which hold for hyperbolic and parabolic mirrors or fisheye cameras [10, 11, 12] always satisfy:

$$\left. \frac{df}{d\rho} \right|_{\rho = 0} = 0. \qquad (5)$$

This allows us to assume $a_1 = 0$, and thus (3) can be rewritten as:

$$f(u'', v'') = a_0 + a_2 \rho''^2 + \dots + a_N \rho''^N. \qquad (6)$$
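As a concrete illustration of the simplified model (6), the following Python sketch maps an image point, given in pixels relative to the image center (i.e. after the affine transformation has been applied), to its viewing ray through the single effective viewpoint. This is our own illustration, not code from the Toolbox; the function name and the coefficient layout are assumptions.

```python
import numpy as np

def pixel_to_ray(u, v, coeffs):
    """Map an image point (u, v), in pixels relative to the image center,
    to a unit ray through the single effective viewpoint.

    coeffs = [a0, a2, ..., aN] are the coefficients of
    f(rho) = a0 + a2*rho**2 + ... + aN*rho**N  (a1 = 0, eq. (6)).
    """
    rho = np.hypot(u, v)
    # Powers 1, rho^2, rho^3, ..., rho^N (the linear term is omitted).
    powers = np.concatenate(([1.0], rho ** np.arange(2, len(coeffs) + 1)))
    f = float(np.dot(coeffs, powers))
    ray = np.array([u, v, f])
    return ray / np.linalg.norm(ray)
```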


IV. CAMERA CALIBRATION

By calibration of an omnidirectional camera we mean the estimation of the parameters $[A, t, a_0, a_2, \dots, a_N]$. In order to estimate $A$ and $t$, we introduce a method which, unlike previous works, does not require the visibility of the circular external boundary. This method is based on an iterative procedure. First, it sets $A$ to the identity matrix; its elements will later be estimated through a non-linear refinement. Then, our method assumes that the center of the omnidirectional image $O_c$ coincides with the image center $I_c$, that is $O_c = I_c$, and thus $t = A \cdot (O_c - I_c) = 0$. Observe that, for $A$, the identity assumption is reasonable because the eccentricity of the external boundary in the omnidirectional image is usually close to 0. Conversely, $O_c$ can be very far from the image center $I_c$; the method we will discuss is not affected by this. In sections IV.D and IV.E, we will discuss how to compute the correct values of $A$ and $O_c$.

To resume, from now on we assume $u'' = \alpha \cdot u'$. Thus, by substituting this relation in (4) and using (6), we have the following projection equation:

$$\lambda \cdot p' = \lambda \cdot g(\alpha \cdot u') = \lambda \cdot \alpha \cdot \begin{bmatrix} u' \\ v' \\ f(\rho') \end{bmatrix} = \lambda \cdot \alpha \cdot \begin{bmatrix} u' \\ v' \\ a_0 + a_2 \rho'^2 + \dots + a_N \rho'^N \end{bmatrix} = P \cdot X, \quad \lambda \cdot \alpha > 0, \qquad (7)$$

where now $u'$ and $v'$ are the pixel coordinates of an image point with respect to the image center, and $\rho'$ is its Euclidean distance from the center. Also, note that the factor $\alpha$ can be directly integrated into the depth factor $\lambda$; thus, only $N$ parameters ($a_0, a_2, \dots, a_N$) need to be estimated.

During the calibration procedure, a planar pattern of known geometry is shown at different unknown positions, which are related to the sensor coordinate system by a rotation matrix $R^i = [r_1^i, r_2^i, r_3^i]$ and a translation $t^i$, called extrinsic parameters. Let $I^i$ be an observed image of the calibration pattern, $M_{ij} = [X_{ij}, Y_{ij}, Z_{ij}]$ the 3D coordinates of its points in the pattern coordinate system, and $m_{ij} = [u_{ij}, v_{ij}]^T$ the corresponding pixel coordinates in the image plane. Since we assumed the pattern to be planar, without loss of generality we have $Z_{ij} = 0$. Then, equation (7) becomes

$$\lambda_{ij} \cdot p_{ij} = \lambda_{ij} \cdot \begin{bmatrix} u_{ij} \\ v_{ij} \\ a_0 + a_2 \rho_{ij}^2 + \dots + a_N \rho_{ij}^N \end{bmatrix} = P^i \cdot X = \left[ r_1^i \;\; r_2^i \;\; r_3^i \;\; t^i \right] \cdot \begin{bmatrix} X_{ij} \\ Y_{ij} \\ 0 \\ 1 \end{bmatrix} = \left[ r_1^i \;\; r_2^i \;\; t^i \right] \cdot \begin{bmatrix} X_{ij} \\ Y_{ij} \\ 1 \end{bmatrix}. \qquad (8)$$

Therefore, in order to solve for camera calibration, the extrinsic parameters have to be determined for each pose of the calibration pattern.
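The projection equation (8) can be turned into a reprojection routine, which later steps use to measure reprojection errors. The sketch below is our own reading of the model, not the Toolbox implementation: it maps pattern points into the camera frame via $[r_1 \; r_2 \; t]$, then projects a camera-frame point by solving $f(\rho)/\rho = z / \sqrt{x^2 + y^2}$ for $\rho$ as a polynomial root; all helper names are hypothetical.

```python
import numpy as np

def pattern_to_camera(points_xy, r1, r2, t):
    """Right-hand side of eq. (8): [r1 r2 t] . [X, Y, 1]^T for every
    pattern point (X, Y); the Z = 0 column is already dropped."""
    H = np.column_stack((r1, r2, t))                    # 3x3
    pts_h = np.column_stack((points_xy, np.ones(len(points_xy))))
    return pts_h @ H.T                                  # L x 3

def project_to_image(p_cam, coeffs):
    """Project one camera-frame point to pixel coordinates relative to
    the center, by solving a0 + a2*rho^2 + ... + aN*rho^N = (z/r)*rho.
    Assumes a valid root in front of the camera exists."""
    x, y, z = p_cam
    r = np.hypot(x, y)
    N = len(coeffs)                 # coeffs = [a0, a2, ..., aN], degree N
    poly = np.zeros(N + 1)          # highest-degree coefficient first
    poly[:N - 1] = coeffs[:0:-1]    # aN, ..., a2
    poly[N - 1] = -z / r            # coefficient of rho^1: -(z/r)
    poly[N] = coeffs[0]             # a0
    roots = np.roots(poly)
    rho = min(rt.real for rt in roots if abs(rt.imag) < 1e-9 and rt.real > 0)
    return rho * x / r, rho * y / r
```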

A. Solving for camera extrinsic parameters

Before describing how to determine the extrinsic parameters, let us eliminate the dependence on the depth scale $\lambda_{ij}$. This can be done by multiplying both sides of equation (8) vectorially by $p_{ij}$:

$$\lambda_{ij} \cdot p_{ij} \times p_{ij} = p_{ij} \times \left[ r_1^i \;\; r_2^i \;\; t^i \right] \cdot \begin{bmatrix} X_{ij} \\ Y_{ij} \\ 1 \end{bmatrix} = 0. \qquad (9)$$

Now, let us focus on a particular observation of the calibration pattern. From (9), each point $p_j$ on the pattern contributes three homogeneous equations:

$$v_j \cdot (r_{31} X_j + r_{32} Y_j + t_3) - f(\rho_j) \cdot (r_{21} X_j + r_{22} Y_j + t_2) = 0 \qquad (10.1)$$

$$f(\rho_j) \cdot (r_{11} X_j + r_{12} Y_j + t_1) - u_j \cdot (r_{31} X_j + r_{32} Y_j + t_3) = 0 \qquad (10.2)$$

$$u_j \cdot (r_{21} X_j + r_{22} Y_j + t_2) - v_j \cdot (r_{11} X_j + r_{12} Y_j + t_1) = 0 \qquad (10.3)$$

Here $X_j$, $Y_j$ and $Z_j$ are known, and so are $u_j$, $v_j$. Observe that only (10.3) is linear in the unknowns $r_{11}, r_{12}, r_{21}, r_{22}, t_1, t_2$. Thus, by stacking all the unknown entries of (10.3) into a vector, we rewrite equation (10.3) for $L$ points of the calibration pattern as a system of linear equations:


$$M \cdot H = 0, \qquad (11)$$

where $H = [r_{11}, r_{12}, r_{21}, r_{22}, t_1, t_2]^T$ and

$$M = \begin{bmatrix} -v_1 X_1 & -v_1 Y_1 & u_1 X_1 & u_1 Y_1 & -v_1 & u_1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ -v_L X_L & -v_L Y_L & u_L X_L & u_L Y_L & -v_L & u_L \end{bmatrix}. \qquad (12)$$

A linear estimate of $H$ can be obtained by minimizing the least-squares criterion $\min \| M H \|^2$, subject to $\| H \|^2 = 1$. This is accomplished by using the SVD. The solution of (11) is


known up to a scale factor, which can be determined uniquely since the vectors $r_1$, $r_2$ are orthonormal. Because of the orthonormality, the unknown entries $r_{31}$, $r_{32}$ can also be computed uniquely.

To resume, the first calibration step finds the extrinsic parameters $r_{11}, r_{12}, r_{21}, r_{22}, r_{31}, r_{32}, t_1, t_2$ for each pose of the calibration pattern, except for the translation parameter $t_3$. This parameter will be computed in the next step, which concerns the estimation of the image projection function.
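A minimal sketch of this linear step, under the same notation: build the matrix $M$ of eq. (12) from one image and take the right singular vector associated with the smallest singular value as the solution of (11). The closed-form scale fixing via the orthonormality of $r_1$, $r_2$ is only indicated by a comment; the function name is ours, not the Toolbox API.

```python
import numpy as np

def estimate_extrinsics_linear(img_pts, world_pts):
    """Linear estimate of r11, r12, r21, r22, t1, t2 from eq. (10.3),
    for one calibration image. img_pts: Lx2 pixel coordinates w.r.t.
    the image center; world_pts: Lx2 pattern coordinates (Z = 0)."""
    u, v = img_pts[:, 0], img_pts[:, 1]
    X, Y = world_pts[:, 0], world_pts[:, 1]
    M = np.column_stack((-v * X, -v * Y, u * X, u * Y, -v, u))
    # Solution of M.H = 0 with ||H|| = 1: the right singular vector of
    # the smallest singular value.
    H = np.linalg.svd(M)[2][-1]
    r11, r12, r21, r22, t1, t2 = H
    # The overall scale, the sign, and the entries r31, r32 follow in
    # closed form from ||r1|| = ||r2|| and r1 . r2 = 0 (omitted here;
    # see the paper).
    return r11, r12, r21, r22, t1, t2
```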

B. Solving for camera intrinsic parameters

In the previous step, we exploited equation (10.3) to find the camera extrinsic parameters. Now, we substitute the estimated values into equations (10.1) and (10.2), and solve for the camera intrinsic parameters $a_0, a_2, \dots, a_N$ that describe the shape of the imaging function $g$. At the same time, we also compute the unknown $t_3$ for each pose of the calibration pattern. As done above, we stack all the unknown entries of (10.1) and (10.2) into a vector and rewrite the equations as a system of linear equations. But now, we incorporate all $K$ observations of the calibration board. We obtain the following system:

$$\begin{bmatrix} A_1 & A_1 \rho_1^2 & \cdots & A_1 \rho_1^N & -v_1 & \cdots & 0 \\ C_1 & C_1 \rho_1^2 & \cdots & C_1 \rho_1^N & -u_1 & \cdots & 0 \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ A_K & A_K \rho_K^2 & \cdots & A_K \rho_K^N & 0 & \cdots & -v_K \\ C_K & C_K \rho_K^2 & \cdots & C_K \rho_K^N & 0 & \cdots & -u_K \end{bmatrix} \cdot \begin{bmatrix} a_0 \\ a_2 \\ \vdots \\ a_N \\ t_3^1 \\ \vdots \\ t_3^K \end{bmatrix} = \begin{bmatrix} B_1 \\ D_1 \\ \vdots \\ B_K \\ D_K \end{bmatrix}, \qquad (13)$$

where, for each observation, the extrinsics of the corresponding image are used and

$$A_i = r_{21} X_i + r_{22} Y_i + t_2, \quad B_i = v_i \cdot (r_{31} X_i + r_{32} Y_i), \quad C_i = r_{11} X_i + r_{12} Y_i + t_1, \quad D_i = u_i \cdot (r_{31} X_i + r_{32} Y_i).$$

Finally, the least-squares solution of this overdetermined system is obtained by using the pseudoinverse. Thus, the intrinsic parameters $a_0, a_2, \dots, a_N$, which describe the model, are now available. In order to compute the best polynomial degree $N$, we actually start from $N = 2$. Then, we increase $N$ by unitary steps and compute the average reprojection error of all calibration points. The procedure stops when a minimum error is found.
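The degree-selection loop just described can be sketched as follows; `solve_intrinsics` and `reproj_error` are placeholders standing in for the linear solution of system (13) and the average reprojection error, not actual Toolbox functions.

```python
import numpy as np

def select_poly_degree(solve_intrinsics, reproj_error, max_degree=8):
    """Start from N = 2, increase N in unit steps, and stop at the
    first minimum of the mean reprojection error."""
    best_N, best_err = None, np.inf
    for N in range(2, max_degree + 1):
        err = reproj_error(solve_intrinsics(N))
        if err >= best_err:        # error stopped decreasing: minimum found
            break
        best_N, best_err = N, err
    return best_N, best_err
```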

C. Linear refinement of intrinsic and extrinsic parameters

To resume, the second linear minimization step described in part B finds the intrinsic parameters of the camera, and simultaneously estimates the remaining extrinsic parameter $t_3$. The next two steps aim at refining this primary estimation. This refinement is still performed by linear minimization; in subsection E, we will apply a non-linear refinement based on the maximum likelihood criterion. The structure of the linear refinement algorithm is the following:

1. The first step uses the camera model ($a_0, a_2, \dots, a_N$) estimated in B, and recomputes all extrinsic parameters by solving equations (10.1), (10.2) and (10.3) all together. The problem leads to a linear homogeneous system, which can be solved, up to a scale factor, using the SVD. The scale factor is then determined uniquely by exploiting the orthonormality of the vectors $r_1$, $r_2$.

2. In the second stage, the extrinsic parameters recomputed in the previous step are substituted into equations (10.1) and (10.2) to further refine the intrinsic camera model. The problem leads to a linear system, which can be solved as usual by using the pseudoinverse.

D. Iterative center detection

As stated at the beginning of section IV, we want our calibration toolbox to be as automatic as possible; therefore, we need the capability of identifying the center of the omnidirectional image $O_c$ (Fig. 1c) even when the external boundary of the sensor is not visible in the image.

To this end, observe that our calibration procedure correctly estimates the intrinsic parametric model only if $O_c$ is taken as the origin of the image coordinates. If this is not the case, back-projecting the 3D points of the checker board into the image produces a large reprojection error with respect to the calibration points (see Fig. 2a). Motivated by this observation, we performed many trials of our calibration procedure for different center locations, and, for each trial, we computed the Sum of Squared Reprojection Errors (SSRE). As a result, we verified that the SSRE always has a global minimum at the correct center location.


Fig. 2 When the position of the center is wrong, the 3D points of the checker board do not back-project correctly (green circles) onto the calibration points (red crosses) (a). Conversely, (b) shows the reprojection result when the center is correct.

This result leads us to an iterative search for the center $O_c$, which stops when the difference between two potential center locations is less than a certain fraction of a pixel $\varepsilon$ (we reasonably set $\varepsilon = 0.5$ pixels):

1. At each step of this iterative search, a particular image region is uniformly sampled in a certain number of points.


2. For each of these points, calibration is performed by using that point as a potential center location, and the SSRE is computed.

3. The point giving the minimum SSRE is taken as a potential center.

4. The search proceeds by refining the sampling in theregion around that point, and steps 1, 2 and 3 are repeateduntil the stop condition is satisfied.

Observe that the computational cost of this iterative search is very low: it takes only about 3 seconds to converge.
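A compact sketch of this search, with `calibrate_ssre` standing in for a full calibration run that returns the SSRE for a candidate center; the grid size and shrinking schedule are our assumptions, not the Toolbox's.

```python
import numpy as np

def find_center(calibrate_ssre, x0, y0, half_width, eps=0.5, n=7):
    """Iterative center search (steps 1-4 above): sample a square region
    on an n x n grid, keep the point of minimum SSRE, then refine the
    sampling around it until the grid spacing falls below eps pixels."""
    cx, cy = x0, y0
    while 2 * half_width / (n - 1) > eps:        # grid spacing above eps?
        xs = np.linspace(cx - half_width, cx + half_width, n)
        ys = np.linspace(cy - half_width, cy + half_width, n)
        # Steps 2-3: calibrate at every sample, keep the minimum-SSRE one.
        cx, cy = min(((x, y) for x in xs for y in ys),
                     key=lambda c: calibrate_ssre(*c))
        # Step 4: refine the sampling around the current best point.
        half_width = 2 * half_width / (n - 1)
    return cx, cy
```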

E. Non-linear refinement

The linear solution given in the previous subsections A, B and C is obtained by minimizing an algebraic distance, which is not physically meaningful. We therefore refine it through maximum likelihood inference.

Let us assume we are given $K$ images of a model plane, each containing $L$ corner points. Next, let us assume that the image points are corrupted by independent and identically distributed noise. Then, the maximum likelihood estimate can be obtained by minimizing the following functional:

Fig. 3 A picture of our simulator showing several calibration patterns and the virtual omnidirectional camera at the axis origin.

$$E = \sum_{i=1}^{K} \sum_{j=1}^{L} \left\| m_{ij} - \hat{m}\left( R^i, T^i, A, O_c, a_0, a_2, \dots, a_N, M_{ij} \right) \right\|^2, \qquad (14)$$

where $\hat{m}(R^i, T^i, A, O_c, a_0, a_2, \dots, a_N, M_{ij})$ is the projection of the point $M_{ij}$ of the plane $i$ according to equation (1). $R^i$ and $T^i$ are the rotation and translation matrices of each plane pose; $R^i$ is parameterized by a vector of 3 parameters related to $R^i$ by the Rodrigues formula. Observe that now we incorporate into the functional both the affine matrix $A$ and the center of the omnidirectional image $O_c$.

By minimizing the functional defined in (14), we compute the intrinsic and extrinsic parameters which minimize the reprojection error. In order to speed up convergence, we split the non-linear minimization into two steps. The first refines the extrinsic parameters, ignoring the intrinsic ones. Then, the second step uses the extrinsic parameters just estimated, and refines the intrinsic ones. Through many simulations, we found that this splitting does not affect the final result with respect to a global minimization.

To minimize (14), we used the Levenberg-Marquardt algorithm, as implemented by the Matlab function lsqnonlin. The algorithm requires an initial guess of the intrinsic and extrinsic parameters, which is obtained using the linear technique described in the previous subsections. As a first guess for $A$, we used the identity matrix, while for $O_c$ we used the position estimated through the iterative procedure explained in subsection D.
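In Python, a similar refinement could be prototyped with scipy's Levenberg-Marquardt implementation, the closest analogue of lsqnonlin. The sketch below performs a joint refinement for brevity rather than the paper's two-step splitting; `project` is a placeholder for the full projection model of eq. (14), and the function name is ours.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_all(intr0, poses0, img_pts, world_pts, project):
    """Minimize the reprojection functional (14) over the intrinsic
    parameters and the per-image poses. poses0: K x 6 rows [rvec | t],
    rotations parameterized by Rodrigues 3-vectors as in the paper."""
    n_intr = len(intr0)

    def residuals(x):
        intr = x[:n_intr]
        poses = x[n_intr:].reshape(-1, 6)
        res = []
        for pose, m, M in zip(poses, img_pts, world_pts):
            R = Rotation.from_rotvec(pose[:3]).as_matrix()  # rvec -> matrix
            res.append((project(intr, R, pose[3:], M) - m).ravel())
        return np.concatenate(res)

    x0 = np.concatenate([np.asarray(intr0, float),
                         np.asarray(poses0, float).ravel()])
    return least_squares(residuals, x0, method='lm').x   # Levenberg-Marquardt
```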

V. EXPERIMENTAL RESULTS

In this section, we present the experimental results of the proposed calibration procedure on both computer-simulated and real data.

A. Simulated Experiments

The reason for using a simulator is that we can monitor the actual performance of the calibration and compare the results against a known ground truth. The simulator we developed allows choosing both the intrinsic parameters (i.e. the imaging function $g$) and the extrinsic ones (i.e. the rotation and translation matrices of the simulated checker boards). Moreover, it permits fixing the size of the virtual pattern, and also the number of calibration points, as in the real case. A pictorial image of the simulation scenario is shown in Fig. 3. As a virtual calibration pattern, we used a checker plane containing 6x8 = 48 corner points. The size of the pattern is 150x210 mm. As a camera model, we chose a 4th-order polynomial, whose parameters were set according to those obtained by calibrating a real omnidirectional camera. Then, we set the image resolution of the virtual camera to 900x1200 pixels.

A.1. Performance with respect to the noise level

In this simulation experiment, we study the robustness of our calibration technique against inaccuracy in detecting the calibration points. To this end, we use 14 poses of the calibration pattern. Then, Gaussian noise with zero mean and standard deviation $\sigma$ is added to the projected image points. We vary the noise level from $\sigma = 0.1$ pixels to $\sigma = 3.0$ pixels, and, for each noise level, we perform 100 independent calibration trials. The results shown are the average.
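This experimental protocol can be sketched as follows; `calibrate` stands in for a complete calibration run returning the mean reprojection error, and the helper name is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_trials(true_pts, calibrate, sigmas=np.arange(0.1, 3.01, 0.1),
                 trials=100):
    """Perturb the projected image points with zero-mean Gaussian noise
    of standard deviation sigma and average the reprojection error over
    independent trials."""
    mean_errors = []
    for sigma in sigmas:
        errs = [calibrate(true_pts + rng.normal(0.0, sigma, true_pts.shape))
                for _ in range(trials)]
        mean_errors.append(np.mean(errs))
    return np.array(mean_errors)
```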

Fig. 4 shows the plot of the reprojection error vs. $\sigma$. We define the reprojection error as the distance, in pixels, between the back-projected 3D points and the correct image points. Figure 4 shows the plots obtained both by using just the linear minimization method, and with the non-linear refinement. As can be seen, the average error increases linearly with the noise level in both cases. Observe that the reprojection error of the non-linear estimation is always lower than that of the linear


method. Furthermore, note that for $\sigma = 1.0$, which is larger than the normal noise in a practical calibration, the average reprojection error of the non-linear method is less than 0.4 pixels.

Fig. 4 The reprojection error vs. the noise level, with the linear minimization (dashed blue line) and after the non-linear refinement (solid red line). Both units are in pixels.

Fig. 5 Accuracy of the extrinsic parameters: the absolute error (mm) of the translation vector vs. the noise level (pixels). The errors along the x, y and z coordinates are shown in red, blue and green, respectively.

Fig. 6 An image of the calibration pattern, projected onto the simulated omnidirectional image. The calibration points are affected by noise with $\sigma = 3.0$ pixels (blue circles). Ground truth (red crosses). Reprojected points after the calibration (red squares).

In Fig. 6, we show the 3D points of a checker board back-projected onto the image. The ground truth is highlighted by red crosses, while the blue circles represent the calibration points perturbed by noise with $\sigma = 3.0$ pixels. Despite the large amount of noise, the calibration is able to compensate for the error introduced: after calibration, the reprojected calibration points are very close to the ground truth (red squares).

We also evaluated the accuracy in estimating the extrinsic parameters R and T of each calibration plane. To this end, Fig. 5 shows the plots of the absolute error (measured in mm) in estimating the origin coordinates (x, y and z) of a given checker board. The absolute error is very small, always below 2 mm. Although we do not show the plots here, we also evaluated the error in estimating the plane orientations, and found an average absolute error of less than 2°.

B. Real Experiments Using the Proposed Toolbox

Following the steps outlined in the previous sections, we developed a Matlab Toolbox [14] which implements our new calibration procedure. This tool was tested on a real central catadioptric system, consisting of a hyperbolic mirror and a camera with a resolution of 1024x768 pixels. Only three images of a checker board, taken all around the mirror, were used for calibration. Our Toolbox only asks the user to click on the corner points. The clicking is facilitated by a Harris corner detector with sub-pixel accuracy. The center of the omnidirectional image was automatically found as explained in IV.D. After calibration, we obtained an average reprojection error of less than 0.3 pixels (Fig. 2b). Furthermore, we compared the estimated location of the center with that extracted using an ellipse detector, and found that they differ by less than 0.5 pixels.

B.1. Mapping Color Information on 3D Points

One of the challenges we are facing in our laboratory is obtaining high-quality 3D maps of the environment by using a 3D rotating SICK laser range finder (SICK LMS 200 [13]). Since this sensor cannot provide color information, we used our calibrated omnidirectional camera to project the color onto each 3D point. The results are shown in Fig. 7.

In order to perform this mapping, both the intrinsic and extrinsic parameters have to be accurately determined. Here, the extrinsic parameters describe the position and orientation of the camera frame with respect to the SICK frame. Note that even small errors in the estimated intrinsic and extrinsic parameters would produce a large offset in the output map. In this experiment, the colors reprojected precisely onto the 3D structure of the environment, showing that the calibration was done accurately.
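The color mapping itself reduces to transforming each laser point into the camera frame and sampling the calibrated image. A minimal sketch, with hypothetical laser-to-camera extrinsics (R_cl, t_cl) and a `project` placeholder standing in for the calibrated projection model:

```python
import numpy as np

def colorize_points(points_laser, image, R_cl, t_cl, project):
    """Assign an RGB color to each laser point: transform it into the
    camera frame, project it into the calibrated image, and sample the
    nearest pixel. `project` maps a camera-frame point to (row, col)."""
    colors = np.zeros((len(points_laser), 3), dtype=image.dtype)
    for i, p in enumerate(points_laser):
        p_cam = R_cl @ p + t_cl                     # laser -> camera frame
        row, col = np.round(project(p_cam)).astype(int)
        if 0 <= row < image.shape[0] and 0 <= col < image.shape[1]:
            colors[i] = image[row, col]             # sample the color
    return colors
```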

VI. CONCLUSIONS

In this paper, we presented a novel and practical technique for calibrating any central omnidirectional camera. The proposed procedure is very fast and completely automatic, as the user is only asked to collect a few images of a checker board and click on its corner points. This technique does not use any specific model of the omnidirectional sensor. It only assumes that the imaging function can be described by a Taylor series expansion, whose coefficients are the parameters to be estimated. These parameters are estimated by solving a four-step least-squares linear minimization problem, followed by a non-linear refinement based on the maximum likelihood criterion.


Fig. 7 The panoramic picture shown in the upper window was taken using a hyperbolic mirror and a perspective camera with a resolution of 640x480 pixels. After intrinsic camera calibration, the color information was mapped onto the 3D points extracted from a SICK laser range finder. The lower windows show the mapping results. The colors are reprojected precisely onto the 3D structure of the environment, showing that the camera calibration was done accurately.

In this work, we also presented a method to iteratively compute the center of the omnidirectional image without exploiting the visibility of the circular field of view of the camera. The center is computed automatically using only the points the user selected.

Furthermore, we used simulated data to study the robustness of our calibration technique against inaccuracy in detecting the calibration points. We showed that the non-linear refinement significantly improves the calibration accuracy, and that accurate results can be obtained using only a few images.

Then, we calibrated a real catadioptric camera. The calibration was very accurate, as we obtained an average reprojection error of less than 0.3 pixels at an image resolution of 1024x768 pixels. We also showed the accuracy of the result by projecting the color information from the image onto real 3D points extracted by a 3D SICK laser range finder.

Finally, we provided a Matlab Toolbox [14], which implements the entire calibration procedure.

ACKNOWLEDGEMENTS

This work was supported by the European project COGNIRON (The Cognitive Robot Companion). We also want to thank Jan Weingarten, from EPFL, who provided the data from the 3D SICK laser range finder [13].

REFERENCES

1. S. Baker and S. K. Nayar. A theory of catadioptric image formation. In Proceedings of the 6th International Conference on Computer Vision, Bombay, India, IEEE Computer Society, pp. 35-42, 1998.
2. B. Micusik and T. Pajdla. Autocalibration & 3D Reconstruction with Non-central Catadioptric Cameras. CVPR 2004, Washington, US, June 2004.
3. C. Cauchois, E. Brassart, L. Delahoche, and T. Delhommelle. Reconstruction with the calibrated SYCLOP sensor. In IEEE International Conference on Intelligent Robots and Systems (IROS'00), pp. 1493-1498, Takamatsu, Japan, 2000.
4. H. Bakstein and T. Pajdla. Panoramic mosaicing with a 180° field of view lens. In Proc. of the IEEE Workshop on Omnidirectional Vision, pp. 60-67, 2002.
5. C. Geyer and K. Daniilidis. Paracatadioptric camera calibration. PAMI, 24(5), pp. 687-695, May 2002.
6. J. Gluckman and S. K. Nayar. Ego-motion and omnidirectional cameras. ICCV, pp. 999-1005, 1998.
7. S. B. Kang. Catadioptric self-calibration. CVPR, pp. 201-207, 2000.
8. B. Micusik and T. Pajdla. Estimation of omnidirectional camera model from epipolar geometry. CVPR, I: 485-490, 2003.
9. B. Micusik and T. Pajdla. Para-catadioptric Camera Auto-calibration from Epipolar Geometry. ACCV 2004, Korea, January 2004.
10. J. Kumler and M. Bauer. Fisheye lens designs and their relative performance.
11. B. Micusik, D. Martinec, and T. Pajdla. 3D Metric Reconstruction from Uncalibrated Omnidirectional Images. ACCV 2004, Korea, January 2004.
12. T. Svoboda and T. Pajdla. Epipolar Geometry for Central Catadioptric Cameras. IJCV, 49(1), pp. 23-37, Kluwer, August 2002.
13. J. Weingarten and R. Siegwart. EKF-based 3D SLAM for Structured Environment Reconstruction. In Proceedings of IROS 2005, Edmonton, Canada, August 2-6, 2005.
14. Google for "OCAMCALIB".
15. X. Ying and Z. Hu. Catadioptric Camera Calibration Using Geometric Invariants. IEEE Trans. on PAMI, Vol. 26, No. 10, pp. 1260-1271, October 2004.
16. X. Ying and Z. Hu. Can We Consider Central Catadioptric Cameras and Fisheye Cameras within a Unified Imaging Model? ECCV 2004, Prague, May 2004.
17. J. Barreto and H. Araujo. Geometric Properties of Central Catadioptric Line Images and their Application in Calibration. IEEE Trans. on PAMI, Vol. 27, No. 8, pp. 1327-1333, August 2005.
18. J. Barreto and H. Araujo. Geometric Properties in Central Catadioptric Line Images. ECCV 2002, Copenhagen, Denmark, May 2002.
19. P. Sturm and S. Ramalingam. A Generic Concept for Camera Calibration. ECCV 2004, Prague, 2004.
20. D. Scaramuzza, A. Martinelli, and R. Siegwart. A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion. Proceedings of IEEE International Conference on Computer Vision Systems (ICVS'06), New York, January 2006.
