
ARTICLE

International Journal of Advanced Robotic Systems

Indoor Slope and Edge Detection by using Two-Dimensional EKF-SLAM with Orthogonal Assumption

Regular Paper

Jixin Lv 1*, Yukinori Kobayashi 2, Takanori Emaru 2 and Ankit A. Ravankar 1

1 Graduate School of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
2 Faculty of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
* Corresponding author(s) E-mail: [email protected]

Received 30 September 2014; Accepted 12 February 2015

DOI: 10.5772/60407

© 2015 The Author(s). Licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In an indoor environment, slope and edge detection is an important problem in simultaneous localization and mapping (SLAM), which is a basic requirement for mobile robot autonomous navigation. Slope detection allows the robot to find areas that are more traversable, while edge detection can prevent the robot from falling. Three-dimensional (3D) solutions usually require a large memory and high computational costs. This study proposes an efficient two-dimensional (2D) solution to combine slope and edge detection with a line-segment-based extended Kalman filter SLAM (EKF-SLAM) in a structured indoor area. The robot is designed to use two fixed 2D laser range finders (LRFs) to perform horizontal and vertical scans. With a local area orthogonal assumption, the slope and edge are modelled into line segments swiftly from each vertical scan, and then merged into the EKF-SLAM framework. The EKF-SLAM framework features an optional prediction model that can automatically decide whether the application of iterative closest point (ICP) is necessary to compensate for the dead reckoning error. The experimental results demonstrate that the proposed algorithm is capable of building an accurate 2D map swiftly, which contains crucial information of the edge and slope.

Keywords EKF-SLAM, line segments, ICP, slope and edge detection, local orthogonal assumption

1. Introduction

To perform autonomous navigation, a robot that enters an unknown environment needs to incrementally reconstruct a consistent map of the environment and simultaneously estimate its position with respect to the map so that it can continue navigating without collision or falling. In most cases, the dead reckoning method is executed to estimate the displacement of the robot, which is equipped with proprioceptive sensors, such as wheel encoders and inertial sensors. However, dead reckoning suffers from unbounded error accumulation. Therefore, external sensors (typically LRF, ultrasonic sensors, cameras, and 3D sensors) are essential not only for mapping the environment but also for the localization correction. It is noted that the data gathered from external sensors while exploring the environment are also noisy. To achieve localization and mapping simultaneously and accurately, the data from the sensors must be precisely fused to get the optimal estimation. This problem is well-known as the SLAM problem, which has attracted a lot of interest from researchers in the past decade. In [1], a brief history of the SLAM research and two influential solutions to the SLAM problem, i.e., EKF-SLAM and Rao-Blackwellized particle filters (FastSLAM), are presented. The major issues in the SLAM research, such as computational complexity, data association, and environment representation, are discussed in [2]. A general overview and detailed analysis of SLAM can also be found in [3] and [4].

Int J Adv Robot Syst, 2015, 12:44 | doi: 10.5772/60407

Occupancy grid maps and feature-based maps are two widely adopted methods to represent the environment. Occupancy grid maps can represent arbitrary forms of the environment by dividing it into grids, where each cell of the grid is either occupied or free. However, they require a large amount of memory for the divided grids, and the update process is computationally expensive. Feature-based maps are popular in various SLAM studies due to their compactness. Points, line segments, and planes are typical features that are employed in the feature maps. Line-segment-based maps, which can represent structured environments adequately, are often employed in indoor environment applications [5, 6] due to their small memory requirement and low computational cost.

With regard to the sensors, 2D LRFs are popular in the SLAM research due to their high speed and accuracy. Another merit of using LRFs is that they are robust to variations of lighting and temperature conditions. A fixed LRF takes measurements in one plane and thus only a 2D map can be built. To acquire a 3D point cloud of the environment, some continuing or reciprocating rotation mechanisms to drive the LRF have been developed by researchers in [7] and [8]. An alternative method is to fix two LRFs orthogonally to realize horizontal and vertical scans [9]. Compared with rotating mechanisms, the latter solution has some merits, such as saving energy, reducing noise, and avoiding extra vibration.

EKF and ICP, which refer to the probabilistic method and the scan matching method, respectively, are the most popular algorithms employed in the SLAM research. EKF is the extended nonlinear version of the Kalman filter, obtained by linearizing the state and measurement equations via Taylor expansion. Usually, EKF-SLAM is based on features, such as points, line segments, and planes. Various successful applications and analyses of EKF-SLAM have been reported by researchers in [10] and [11]. ICP addresses the problem of estimating the optimal transformation that aligns two sets of points that partially overlap [12]. There are many variants of ICP, such as iterative dual correspondence [13] and metric-based ICP [14]. A fast 2D version of ICP has been proposed for motion estimation in [15] and is adopted in this study because of its straightforward usability.

In the case of SLAM application in indoor environments, the orthogonal assumption is a sound constraint, especially for structured buildings that feature many planes that are parallel or perpendicular to each other. By utilizing the features (typically line segments and constrained planes) scanned from these planes, a specific orientation that indicates the planes' alignment can be estimated. This assumption has been adopted in many studies to achieve good results with a limited use of sensors [16, 17] and lower computational cost [18].

As compared with a 3D map, a 2D map only carries planar information about the environment. Therefore, the memory and the computational costs of building, updating, and consulting the map are dramatically reduced. It is mainly applied to represent indoor environments, especially neat interior corridors. However, in some buildings, even on the same floor, there are often stairs because of the height variation between different areas. Some of them are accompanied by slopes or elevators for the convenience of wheelchair users. It is extremely important to detect these types of slopes and edges so that the robot can avoid falling and decide whether it is possible to move along the slope. By using RGB-D cameras or other 3D sensors, 3D SLAM technology [19, 20] is capable of mapping and updating these features, but with various increased costs. The cost not only increases in the mapping process but also rises in localization, since a 3D feature has more parameters. To overcome this, an alternative option is to enrich the 2D map by introducing key information on the essential 3D features [21]. Using ultrasonic sensors, a slope detection system has been proposed in [22]. However, it only estimates the gradient of the slope, and the robot is still at risk of falling. 2D maps that record all crucial information of slopes and edges are reported in [23] and [24], using a stereo camera and a 3D LRF, respectively. These two works extract the features from 3D data and then transfer them to the 2D map. The limitation is that they only generate a local map instead of integrating it into the SLAM algorithm to create the global map necessary for global path planning during autonomous navigation.

In this paper, we present an ICP-assisted line-segment-based EKF-SLAM framework to construct 2D maps that record essential information on slopes and edges. The proposed algorithm uses an odometer and two fixed LRFs as the main sensors. The horizontally placed LRF is used for the traditional planar scanning that is parallel to the ground, whereas the vertically fixed LRF is used to monitor the ground in front of the robot to detect slopes and edges. There are three main contributions that result from this study. First, a 2D ICP algorithm is applied to evaluate the dead reckoning result and decide whether a further step is needed to compensate for the error. Second, the slopes and edges detected in the vertical scans are modelled into 2D line segments, with the angular parameter estimated by the local orthogonal assumption. Third, a method to integrate and update the extracted features (the line segments that represent the slope and edge) into the EKF-SLAM framework is proposed.

The rest of this paper is structured as follows. Section 2 describes the mobile robot and system architecture developed in this research. The algorithm of ICP-assisted line-segment-based EKF-SLAM is introduced in Section 3; it consists of the line segment extraction, 2D EKF-SLAM, and ICP application. Section 4 presents the method for extracting the features of the slope and edge and integrating them into the EKF-SLAM framework. The experiment that verifies the efficiency of the proposed algorithm is detailed in Section 5. Section 6 includes several conclusions and proposed future work.

Figure 1. The mobile robot and sensors used for the experiments

2. Robot platform and system architecture

This section first introduces the mobile robot and its equipped sensors. Then, the system architecture is briefly explained.

2.1 Robot platform and sensors

Figure 1 shows the experimental mobile robot employed in this research. This robot is developed by assembling two LRFs (both manufactured by Hokuyo Co. Ltd) on the platform (Plat_F1, Japan Systems Design Co. Ltd). LRF1 (UHG-08LX) is attached on the front part of the bottom base to fulfil the horizontal scans. The second layer is placed above the base to support the PC. An aluminium frame is fixed on the forefront of the second layer to vertically sustain LRF2 (UBG-04LX-F01) at an appropriate height so that the ground can be detected well enough by the vertical scanning. The odometer, which consists of two encoders (RE20F_100_200, Copal Electronics), is installed next to the rear wheels to estimate the displacement of the robot.

2.2 System architecture

The whole system architecture can roughly be divided into three procedures with regard to the data read from the three sensors, as shown in Figure 2.

The dead reckoning process is executed to estimate the transformation of the robot during the time interval between step (t-1) and step t. After being transformed by the estimation result, a copy of the horizontal scan S_h(t) is matched against the former horizontal scan S_h(t-1) in order to evaluate the quality of the dead reckoning result. Based on this evaluation, the decision is made on whether ICP must be executed to compensate for the odometer error. Finally, the EKF prediction process is conducted by utilizing the original or compensated dead reckoning result.

Meanwhile, the line segments are extracted from the horizontal scan and then imported into the EKF-SLAM to execute the feature association and correction. In addition, the orthogonal angle is estimated based on the extracted line segments if any slope or edge has been detected in the vertical scan process.

In parallel with the horizontal procedure, the orthogonal estimation is conducted at every step of the vertical scan S_v(t) to ensure that the central axis of the scan plane is parallel to the ground. The vertical feature extraction mainly targets the line segments scanned from the slope and the points that are next to the edge. With the angular parameter provided by the horizontal orthogonal estimation, the vital information of the vertical features can be estimated and then passed to the EKF-SLAM for feature association and correction.

Figure 2. Overview of the system architecture


3. ICP-assisted line-segment-based EKF-SLAM

The first two parts of this section are brief reviews of our previous work on a 2D line-segment-based EKF-SLAM algorithm [6]. The last subsection briefly introduces the ICP algorithm, then details an ICP-based EKF-SLAM prediction model, and finally proposes a simple criterion for picking a suitable prediction model.

3.1 Line segments extraction

A good line extraction algorithm is expected to extract the parameters of straight line segments from the noisy raw points scanned by the LRF accurately and swiftly. Many well-known algorithms have been proposed in the past, including the incremental, split-and-merge, Hough transform, line regression, RANSAC, and expectation-maximization algorithms. Based on the comparison between these algorithms conducted in [25], split-and-merge is employed in this study because of its superior speed and accuracy. As the name indicates, it first splits the raw points into collinear clusters and then fits and merges the line segments.
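As a concrete illustration, the split-and-merge procedure described above can be sketched as follows. This is a minimal Python sketch of the generic algorithm (the function names, the total-least-squares fit, and the merge criterion are our assumptions, not the implementation used in the paper):

```python
import numpy as np

def fit_line(pts):
    """Total-least-squares fit; returns (rho, alpha) in Hessian normal form."""
    c = pts.mean(axis=0)
    # The line normal is the minor principal axis of the centred points.
    _, _, vt = np.linalg.svd(pts - c, full_matrices=False)
    normal = vt[-1]
    alpha = np.arctan2(normal[1], normal[0])
    rho = float(c @ normal)
    if rho < 0:                              # keep rho non-negative
        rho, alpha = -rho, alpha + np.pi
    return rho, alpha

def max_deviation(pts, rho, alpha):
    n = np.array([np.cos(alpha), np.sin(alpha)])
    dist = np.abs(pts @ n - rho)
    return int(np.argmax(dist)), float(dist.max())

def split(pts, tol):
    """Split step: recursively break a cluster at its farthest point."""
    rho, alpha = fit_line(pts)
    idx, dev = max_deviation(pts, rho, alpha)
    if dev < tol or len(pts) <= 2:
        return [pts]
    idx = min(max(idx, 1), len(pts) - 2)     # keep both halves non-trivial
    return split(pts[:idx + 1], tol) + split(pts[idx:], tol)

def split_and_merge(pts, tol):
    """Merge step: re-join adjacent clusters that are still collinear."""
    segs = split(np.asarray(pts, float), tol)
    merged = [segs[0]]
    for s in segs[1:]:
        cand = np.vstack([merged[-1], s])
        rho, alpha = fit_line(cand)
        if max_deviation(cand, rho, alpha)[1] < tol:
            merged[-1] = cand                # still collinear: merge back
        else:
            merged.append(s)
    return [fit_line(s) for s in merged]
```

Feeding an L-shaped scan (points on two perpendicular walls) to `split_and_merge` yields two (rho, alpha) line parameters, one per wall.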

3.2 Line segments based EKF-SLAM

The EKF-SLAM algorithm utilizes a large state vector to record the localization (robot pose state P) and mapping result (feature state F). The state vector is modelled by a Gaussian variable and maintained by the EKF through prediction, feature association, correction, and new feature addition.

At time t, the state vector considering line segments as the features is expressed as follows:

\mu_t = \left( P_t \;\; F_1 \;\; \cdots \;\; F_n \right)^T, \qquad (1)

where the subscript “n” stands for “feature number.” As shown in Figure 3, the robot pose and the parameters of the features with respect to the global frame O-X_G Y_G are as follows:

P_t = \left( x_{p(t)} \;\; y_{p(t)} \;\; \theta_{p(t)} \right)^T, \quad F_i = \left( \rho_{f(i)} \;\; \alpha_{f(i)} \right)^T. \qquad (2)
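For illustration, the state layout of Eqs. (1) and (2) can be held as one flat array: three pose entries followed by two entries (rho, alpha) per line feature. The helper names below are hypothetical, not part of the paper:

```python
import numpy as np

def make_state(pose, features):
    """Stack robot pose (x, y, theta) and n line features (rho, alpha)."""
    return np.concatenate([np.asarray(pose, float)] +
                          [np.asarray(f, float) for f in features])

def get_pose(mu):
    """First three entries hold the robot pose P_t."""
    return mu[:3]

def get_feature(mu, i):
    """Return (rho_f(i), alpha_f(i)) of the ith feature, i starting at 1."""
    j = 3 + 2 * (i - 1)
    return mu[j:j + 2]
```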

Figure 3. Geometrical relationship between the robot pose and feature parameters


where the subscript “p” stands for “pose,” and “f” stands for “feature.” Moreover, the covariance matrix of the state vector is expressed as follows:

\Sigma_t = \begin{pmatrix}
\Sigma_{P_t P_t} & \Sigma_{P_t F_1} & \cdots & \Sigma_{P_t F_n} \\
\Sigma_{F_1 P_t} & \Sigma_{F_1 F_1} & \cdots & \Sigma_{F_1 F_n} \\
\vdots & \vdots & \ddots & \vdots \\
\Sigma_{F_n P_t} & \Sigma_{F_n F_1} & \cdots & \Sigma_{F_n F_n}
\end{pmatrix}. \qquad (3)

The endpoints of the line segments are stored and updated outside of the EKF framework.

3.2.1 Dead reckoning based EKF-SLAM prediction

EKF-SLAM prediction is a procedure to estimate the transformation of the robot pose between two steps, such as from time (t-1) to time t. Based on the kinematic model shown in Figure 4, the predicted new pose of the robot as derived from the odometer-based dead reckoning is calculated as follows:

\bar{\mu}_t = g_{dr}(\mu_{t-1}, u_t), \qquad (4)

x_{p(t)} = x_{p(t-1)} - \frac{\Delta R + \Delta L}{2} \sin\!\left( \theta_{p(t-1)} + \frac{\Delta R - \Delta L}{2D} \right), \qquad (5)

y_{p(t)} = y_{p(t-1)} + \frac{\Delta R + \Delta L}{2} \cos\!\left( \theta_{p(t-1)} + \frac{\Delta R - \Delta L}{2D} \right), \qquad (6)

\theta_{p(t)} = \theta_{p(t-1)} + \frac{\Delta R - \Delta L}{D}, \qquad (7)

where D stands for the distance between the rear wheels, u_t = (\Delta R \;\; \Delta L) denotes the travel distances of the wheels measured by the odometer during the time interval, \Delta R corresponds to the right wheel, while \Delta L stands for the left one. The corresponding covariance matrix of the odometer measurement is as follows:

C_w = \begin{pmatrix} \sigma_{\Delta R}^2 & 0 \\ 0 & \sigma_{\Delta L}^2 \end{pmatrix}. \qquad (8)

The covariance matrix of the state vector is updated by the equation given below:

\bar{\Sigma}_t = G_p \Sigma_{t-1} G_p^T + G_w C_w G_w^T, \qquad (9)

with the Jacobian matrices G_p = \partial g_{dr}(\mu_{t-1}, u_t) / \partial \mu_{t-1} and G_w = \partial g_{dr}(\mu_{t-1}, u_t) / \partial u_t.
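A direct transcription of the dead reckoning prediction of Eqs. (5)-(7) might look as follows (the function name and argument conventions are ours):

```python
import numpy as np

def dead_reckoning(pose, dR, dL, D):
    """Odometer-based pose prediction following Eqs. (5)-(7).

    pose: (x, y, theta) at time t-1; dR, dL: wheel travel distances;
    D: distance between the rear wheels.
    """
    x, y, th = pose
    s = (dR + dL) / 2.0          # travelled distance of the robot centre
    dth = (dR - dL) / D          # heading change, Eq. (7)
    x_new = x - s * np.sin(th + dth / 2.0)   # Eq. (5)
    y_new = y + s * np.cos(th + dth / 2.0)   # Eq. (6)
    return np.array([x_new, y_new, th + dth])
```

For pure forward motion (dR = dL) the heading is unchanged and the robot advances along its +y axis, matching the sign convention of Eqs. (5)-(6).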


Figure 4. Kinematic model of robot


This typical dead reckoning-based prediction model usually performs very well when the robot is moving at low rotational speeds with no serious slippage occurring. Another merit is that it requires a low computational cost because of its simplicity.

3.2.2 EKF-SLAM feature association

After the horizontal scan, in which the LRF explores the environment at time t, a number of line segments are extracted as the measurements with respect to the horizontal robot frame O-X_R Y_R (marked as frame R_h):

z_i = \left( \rho_{l(i)}^{R_h} \;\; \alpha_{l(i)}^{R_h} \right)^T, \quad i = 0, 1, \ldots, m, \qquad (10)

with the corresponding covariances:

C_i = \begin{pmatrix} \sigma_{\rho_{l(i)}}^{2} & \sigma_{\rho_{l(i)} \alpha_{l(i)}} \\ \sigma_{\alpha_{l(i)} \rho_{l(i)}} & \sigma_{\alpha_{l(i)}}^{2} \end{pmatrix}, \quad i = 0, 1, \ldots, m, \qquad (11)

where all of the entries refer to frame R_h.

Some of these measurements are scanned from the same objects that have been stored as features in the state vector. The main task of feature association is to find and associate these types of re-observed features so that the EKF-SLAM can correct the predicted state by comparing the measurements and stored features.

For the jth stored feature, the estimated observation based on the predicted state can be calculated as follows:

\hat{z}_j = h_j(\bar{\mu}_t) = \begin{pmatrix} \hat{\rho}_{f(j)}^{R_h} \\ \hat{\alpha}_{f(j)}^{R_h} \end{pmatrix} = \begin{pmatrix} \rho_{f(j)} - x_{p(t)} \cos\alpha_{f(j)} - y_{p(t)} \sin\alpha_{f(j)} \\ \alpha_{f(j)} - \theta_{p(t)} \end{pmatrix}. \qquad (12)

The corresponding covariance matrix of \hat{z}_j is as follows:

\hat{C}_j = H_j \bar{\Sigma}_t H_j^T, \qquad (13)

with the Jacobian matrix H_j = \partial h(\bar{\mu}_t) / \partial \bar{\mu}_t, and \bar{\Sigma}_t is the latest covariance matrix of the state vector.

To associate measurements with estimated observations, from all possible line pairs, the pairs that have similar parameters are first picked out by simply comparing the differences of their parameters with the preset thresholds. Then, for each picked pair that contains two lines, the shorter line is marked as l_1 while the longer line is marked as l_2. The length of l_2 is given by L_2. Along the direction of l_2, the distances from the endpoints of l_1 to the centre of l_2, which are given by d_ce1 and d_ce2 respectively, are calculated. If either of d_ce1 and d_ce2 is smaller than L_2 / 2, the overlap between l_1 and l_2 is confirmed and the pair will be treated as a candidate association pair, as shown in Figure 5.

Figure 5. Overlap confirmation of two line segments
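The overlap confirmation described above can be sketched as follows, assuming each line segment is given by its two endpoints (the function name and conventions are ours):

```python
import numpy as np

def overlap(l1, l2):
    """Overlap confirmation for a candidate pair: l1 is the shorter segment,
    l2 the longer one, each given as a pair of endpoints.

    Projects the endpoints of l1 onto the direction of l2 and checks whether
    either lands within L2/2 of the centre of l2 (d_ce1 or d_ce2 < L2/2).
    """
    p1, p2 = (np.asarray(p, float) for p in l1)
    q1, q2 = (np.asarray(q, float) for q in l2)
    direction = q2 - q1
    L2 = np.linalg.norm(direction)
    u = direction / L2                    # unit vector along l2
    centre = (q1 + q2) / 2.0
    d_ce1 = abs((p1 - centre) @ u)        # offsets along l2's direction
    d_ce2 = abs((p2 - centre) @ u)
    return bool(d_ce1 < L2 / 2.0 or d_ce2 < L2 / 2.0)
```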


After a series of candidate association pairs {(z_i, \hat{z}_j), \ldots} has been picked out, the Mahalanobis distance of each pair is calculated as given below:

d_{ij} = (z_i - \hat{z}_j)^T \left( C_i + \hat{C}_j \right)^{-1} (z_i - \hat{z}_j). \qquad (14)

Among the calculated results of the pairs, the minimum value d_{min} is compared with an artificial criterion \chi^2:

d_{min} < \chi^2. \qquad (15)

If this judgment (15) is satisfied, the corresponding pair is successfully associated. After the correction procedure based on this associated pair is executed, another iteration of feature association will be carried out until no more pairs can be associated.
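The association gate of Eqs. (14)-(15) can be sketched as follows. The gate value 5.99 is an illustrative 95% chi-square threshold for a 2-DOF innovation; the paper leaves the criterion \chi^2 as a preset parameter, and the names below are ours:

```python
import numpy as np

# Illustrative 95% chi-square gate for a 2-DOF innovation (assumption,
# not the paper's tuned value).
CHI2_GATE = 5.99

def mahalanobis(z, z_hat, C, C_hat):
    """Squared Mahalanobis distance of Eq. (14)."""
    v = np.asarray(z, float) - np.asarray(z_hat, float)
    S = np.asarray(C, float) + np.asarray(C_hat, float)
    return float(v @ np.linalg.solve(S, v))

def best_association(pairs):
    """Pick the minimum-distance candidate pair and gate it with Eq. (15).

    pairs: iterable of (i, j, z_i, z_hat_j, C_i, C_hat_j);
    returns the winning (i, j), or None if the gate rejects it.
    """
    scored = [(mahalanobis(z, zh, C, Ch), i, j)
              for i, j, z, zh, C, Ch in pairs]
    d_min, i, j = min(scored)
    return (i, j) if d_min < CHI2_GATE else None
```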

3.2.3 EKF-SLAM correction

For an observation z_i that has been associated with the feature F_j, EKF-SLAM will implement the correction process on the state vector and its covariance matrix. The EKF correction process is typically written as follows:

K_j = \bar{\Sigma}_t H_j^T \left( C_i + \hat{C}_j \right)^{-1}, \qquad (16)

\mu_t = \bar{\mu}_t + K_j \left( z_i - \hat{z}_j \right), \qquad (17)


\Sigma_t = \left( I - K_j H_j \right) \bar{\Sigma}_t. \qquad (18)

K_j is usually called the Kalman gain, which is calculated by utilizing the uncertainties of the real measurements and the estimated observations. After the Kalman gain has been calculated, the state vector and the corresponding covariance are updated by (17) and (18).
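Eqs. (16)-(18) amount to a standard EKF update; a minimal sketch (the names and argument conventions are ours) is:

```python
import numpy as np

def ekf_correct(mu, sigma, z, z_hat, H, C, C_hat):
    """EKF correction of Eqs. (16)-(18) for one associated measurement.

    mu, sigma: predicted state and covariance; z, z_hat: measurement and
    estimated observation; H: observation Jacobian; C, C_hat: their
    covariance matrices.
    """
    K = sigma @ H.T @ np.linalg.inv(C + C_hat)       # Kalman gain, Eq. (16)
    mu_new = mu + K @ (z - z_hat)                    # state update, Eq. (17)
    sigma_new = (np.eye(len(mu)) - K @ H) @ sigma    # covariance, Eq. (18)
    return mu_new, sigma_new
```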

3.2.4 EKF-SLAM new feature addition

For a measurement z_i that has failed to associate with any stored feature, EKF-SLAM will treat it as a newly detected feature. Thus, z_i will become the (n+1)th feature F_{n+1} in the state vector:

F_{n+1} = \begin{pmatrix} \rho_{f(n+1)} \\ \alpha_{f(n+1)} \end{pmatrix} = f(P_t, z_i) = \begin{pmatrix} \rho_{l(i)}^{R_h} + x_{p(t)} \cos\alpha_{f(n+1)} + y_{p(t)} \sin\alpha_{f(n+1)} \\ \theta_{p(t)} + \alpha_{l(i)}^{R_h} \end{pmatrix}. \qquad (19)

Moreover, the covariance matrix of the state vector also has to be augmented as follows:

\Sigma_t = \begin{pmatrix}
\Sigma_{P_t P_t} & \cdots & \Sigma_{P_t F_n} & \Sigma_{P_t P_t} J_p^T \\
\vdots & \ddots & \vdots & \vdots \\
\Sigma_{F_n P_t} & \cdots & \Sigma_{F_n F_n} & \Sigma_{F_n P_t} J_p^T \\
J_p \Sigma_{P_t P_t} & \cdots & J_p \Sigma_{P_t F_n} & J_p \Sigma_{P_t P_t} J_p^T + J_l C_{l(i)} J_l^T
\end{pmatrix}, \qquad (20)

where J_p = \partial f(P_t, z_i) / \partial P_t and J_l = \partial f(P_t, z_i) / \partial z_i are the Jacobian matrices that linearize the new feature with respect to the robot pose and the measurement variables, respectively.
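The augmentation of Eqs. (19)-(20) can be sketched as follows, assuming the pose occupies the first three entries of the state and the new feature z is already expressed in the global frame (the helper name and argument layout are our assumptions):

```python
import numpy as np

def add_feature(mu, sigma, z, Jp, Jl, C_l):
    """Augment state and covariance with a new line feature, Eqs. (19)-(20).

    z: new 2D feature (rho, alpha) in the global frame; Jp (2x3), Jl (2x2):
    Jacobians of f with respect to the pose and the measurement; C_l:
    measurement covariance.
    """
    mu_new = np.concatenate([mu, z])
    n = len(mu)
    # Cross terms Sigma_{*,P} Jp^T, obtained from the pose rows by symmetry.
    cross = (Jp @ sigma[:3, :]).T
    corner = Jp @ sigma[:3, :3] @ Jp.T + Jl @ C_l @ Jl.T
    sigma_new = np.zeros((n + 2, n + 2))
    sigma_new[:n, :n] = sigma
    sigma_new[:n, n:] = cross
    sigma_new[n:, :n] = cross.T
    sigma_new[n:, n:] = corner
    return mu_new, sigma_new
```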

3.3 Dead reckoning result evaluation and ICP compensation

The prediction procedure is critical for EKF-SLAM since it usually has a great influence on the subsequent feature association. In spite of its high speed, pure dead reckoning-based prediction is risky because of unpredictable slippage and the increasing linearization error of the kinematic model when the robot increases its rotational speed. Some fatal errors may lead to the divergence of EKF-SLAM. To overcome this inconvenience and make the prediction more robust, ICP is introduced to evaluate the dead reckoning result and to compensate for it when necessary.

3.3.1 ICP

In general, ICP is an algorithm that iteratively finds correspondences between two given scans and calculates a transformation that minimizes the distance between corresponding points until the termination condition is satisfied [26]. A fast 2D version of the ICP method, discussed in [15], is adopted in this study. For each iteration, it calculates the Euclidean squared distance for every possible combination of points in the two scans and establishes the correspondence index based on the traditional closest-point rule. Outliers are detected and discarded from the valid correspondences if their closest distance is over a given rejection threshold E_rej. Finally, the transformation parameters are calculated by minimizing a general matching index I_m, which is formulated by the following equation:

$$I_m = \frac{\sum_{i=0}^{n_c} e_i}{n_c\,P_o}, \quad (21)$$

where e_i is the Euclidean squared distance between the ith pair of corresponding points, n_c stands for the number of valid correspondences, and P_o is the overlapping degree.
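The closest-point bookkeeping behind equation (21) can be sketched as follows. This is a pure-Python illustration under an assumed scan representation (lists of (x, y) tuples), not the authors' implementation:

```python
def matching_index(scan_a, scan_b, e_rej):
    """Compute the matching index I_m of equation (21) for one ICP iteration.
    scan_a, scan_b: lists of (x, y) points; e_rej: rejection threshold on the
    squared closest-point distance (outliers above it are discarded)."""
    errors = []
    for (bx, by) in scan_b:
        # Traditional closest-point rule: squared distance to nearest point.
        closest = min((bx - ax) ** 2 + (by - ay) ** 2 for (ax, ay) in scan_a)
        if closest <= e_rej:
            errors.append(closest)
    n_c = len(errors)                 # number of valid correspondences
    P_o = n_c / len(scan_b)           # overlapping degree
    I_m = sum(errors) / (n_c * P_o)   # equation (21)
    return I_m, P_o, n_c
```

Dividing by the overlapping degree P_o penalizes solutions that achieve a small mean error by matching only a few points.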

3.3.2 ICP-based EKF-SLAM prediction

To estimate the accurate transformation of the robot pose X_m(t) = (Δx_m(t), Δy_m(t), Δθ_m(t))^T between time (t−1) and time t, ICP is applied to match the horizontal scans S_h(t−1) and S_h(t). It is noted that, instead of referring to the global frame, X_m(t) is a relative transformation with respect to the robot frame at time (t−1). The initial estimation X⁰_m(t) is given by the odometer-based dead reckoning result as follows:

$$\Delta\theta_{m(t)}^0 = \frac{\Delta R - \Delta L}{D}, \quad (22)$$

$$\Delta x_{m(t)}^0 = -\frac{\Delta R + \Delta L}{2}\sin\Delta\theta_{m(t)}^0, \quad (23)$$

$$\Delta y_{m(t)}^0 = \frac{\Delta R + \Delta L}{2}\cos\Delta\theta_{m(t)}^0. \quad (24)$$
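The initial estimate of equations (22)–(24) can be sketched as below. ΔR, ΔL are taken to be the right and left wheel displacements and D the wheel separation of a differential-drive platform; these names are our assumption for illustration. Note the sin/cos arrangement, which reflects the robot's Y_R axis pointing forward:

```python
import math

def dead_reckoning_init(dR, dL, D):
    """Odometer-based initial guess for ICP, equations (22)-(24).
    dR, dL: right/left wheel displacements; D: wheel separation (assumed)."""
    d_theta = (dR - dL) / D            # eq. (22)
    d_trans = (dR + dL) / 2.0
    d_x = -d_trans * math.sin(d_theta)  # eq. (23)
    d_y = d_trans * math.cos(d_theta)   # eq. (24)
    return d_x, d_y, d_theta
```

For a straight move (dR == dL) the heading change vanishes and the whole displacement falls on the forward Y_R axis.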

The rejection threshold E_rej is a vital parameter for ICP implementations. A low value may lead to bad convergence, whereas a large value tends to deteriorate the result. For each point in S_h(t−1), the maximum rejection threshold E_rej^max is initialized as follows:

$$E_{rej}^{max} = E_r + E_s = \chi_{(0.99)}^2\,\sigma^2 + E_s = 6.63\,\sigma^2 + E_s, \quad (25)$$

where σ corresponds to the standard deviation of the scanned point and E_s stands for the possible slippage error, which is determined by taking the robot speed and the time interval into consideration. The rejection threshold is updated by the algorithm named relative motion threshold (RMT),


proposed in [27], when it is bigger than a fixed minimumcriterion:

$$E_{rej}^{min} = \chi_{(0.99)}^2\,\sigma^2 = 6.63\,\sigma^2. \quad (26)$$

After ICP has run k iterations and arrived at a fit solution X^k_m(t), the corresponding covariance Σ_m(t) has to be determined for the EKF-SLAM prediction. By utilizing a closed-form estimation method [28], the covariance is derived as follows:

$$\Sigma_{m(t)} = \left(\frac{\partial^2 I_m}{\partial X^2}\right)^{-1}\left(\frac{\partial^2 I_m}{\partial S\,\partial X}\right)^{\mathrm T}\Sigma(S)\left(\frac{\partial^2 I_m}{\partial S\,\partial X}\right)\left(\frac{\partial^2 I_m}{\partial X^2}\right)^{-1}, \quad (27)$$

where Σ(S) stands for the covariance of the measurements S_h(t−1) and S_h(t). The necessary derivatives ∂²I_m/∂X² and ∂²I_m/∂S∂X can easily be computed in closed form based on the analytical solution of X_m(t), as stated in [15].

Finally, the ICP-based predicted pose of the robot is calculated as follows:

$$\mu_t = g_m\!\left(\mu_{t-1},\, X_{m(t)}^k\right), \quad (28)$$

$$x_{p(t)} = x_{p(t-1)} + \Delta x_{m(t)}^k\cos\theta_{p(t-1)} - \Delta y_{m(t)}^k\sin\theta_{p(t-1)}, \quad (29)$$

$$y_{p(t)} = y_{p(t-1)} + \Delta x_{m(t)}^k\sin\theta_{p(t-1)} + \Delta y_{m(t)}^k\cos\theta_{p(t-1)}, \quad (30)$$

$$\theta_{p(t)} = \theta_{p(t-1)} + \Delta\theta_{m(t)}^k. \quad (31)$$

The covariance matrix of the state vector is updated by the following equation:

$$\Sigma_t = G_p\,\Sigma_{t-1}\,G_p^{\mathrm T} + G_m\,\Sigma_{m(t)}\,G_m^{\mathrm T}, \quad (32)$$

with the Jacobi matrices G_p = ∂g_m(μ_{t−1}, X^k_m(t))/∂μ_{t−1} and G_m = ∂g_m(μ_{t−1}, X^k_m(t))/∂X^k_m(t).
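The pose composition of equations (28)–(31) amounts to rotating the relative ICP motion, expressed in the robot frame at time (t−1), into the global frame. A minimal sketch (names assumed):

```python
import math

def compose_pose(pose_prev, delta_icp):
    """Fold the ICP-refined relative motion (dx, dy, dtheta) into the global
    pose, equations (28)-(31).  pose_prev = (x, y, theta) at time t-1."""
    x, y, th = pose_prev
    dx, dy, dth = delta_icp
    x_new = x + dx * math.cos(th) - dy * math.sin(th)   # eq. (29)
    y_new = y + dx * math.sin(th) + dy * math.cos(th)   # eq. (30)
    th_new = th + dth                                   # eq. (31)
    return x_new, y_new, th_new
```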

3.3.3 Optional EKF-SLAM prediction

Although the ICP-based EKF prediction gives a more robust result, it is associated with a much higher computational cost than dead reckoning, for both the iterations and the final covariance estimation of the ICP result. Therefore, an optional EKF-SLAM prediction model is proposed that selects between the two addressed models based on an evaluation of the dead reckoning result.

To verify whether the dead reckoning result is precise enough, only one iteration of ICP is executed with the rejection criterion E_rej^eva, which is determined by E_rej^min plus one small tolerance E_T. The overlapping degree P_o and matching index I_m can be found after this single iteration. Then, a simple judgment is conducted:

$$P_o > P_o^{eva} \;\;\&\&\;\; I_m < I_m^{eva}, \quad (33)$$

where P_o^eva is the overlapping threshold and I_m^eva is the error index threshold. The judgment (33) is satisfied only when two successive scans mostly overlap with low error, which means the dead reckoning result provides a good estimation. Thus, if the judgment is satisfied, the faster dead reckoning-based EKF prediction is conducted. Otherwise, a further ICP iteration has to be employed with the reinitialized rejection threshold E_rej^max, so that the ICP-based EKF prediction is executed to guarantee the robustness of the prediction.
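The decision logic of the optional prediction model, together with the threshold initialization of equations (25)–(26), can be sketched in Python as follows (the threshold names are our assumptions for illustration):

```python
def rejection_thresholds(sigma, E_s):
    """Initial ICP rejection thresholds, equations (25)-(26): the 99% chi-square
    quantile for one degree of freedom (6.63) scales the point variance; the
    possible slippage error E_s widens only the initial maximum threshold."""
    E_rej_min = 6.63 * sigma ** 2
    E_rej_max = E_rej_min + E_s
    return E_rej_min, E_rej_max


def choose_prediction(P_o, I_m, P_o_eva, I_m_eva):
    """Judgment (33): keep the cheap dead reckoning-based prediction only when
    the single evaluation ICP iteration reports large overlap and a small
    matching index; otherwise fall back to the full ICP-based prediction."""
    if P_o > P_o_eva and I_m < I_m_eva:
        return "dead_reckoning"
    return "icp"
```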

4. Slope and edge feature in EKF-SLAM

4.1 Vertical feature extraction and integration

The basic concept of feature detection and extraction in the vertical scan is shown in Figure 6. A slope marked as ABCD (corresponding to its four corners) is placed next to an edge marked as GH. When LRF2 finds a line segment SL scanned from the slope, partial information of the slope can be extracted by decomposing the vector SL→ into SN→ and NL→. The distance between the robot and the edge can be estimated from the detected edge point E by a similar decomposition. For correct decomposition, the exact orientation of the slope and edge line needs to be known in advance; this can be obtained by utilizing the horizontally scanned line segments with the orthogonal assumption.


Figure 6. Slope and edge detection by the vertical scan

Figure 7. Simplified top view of the vertical feature detection


Since this research focuses on the 2D map, a top view of Figure 6 is represented in Figure 7. To simplify the explanation of the slope and edge detection, the vertical scanning plane is designed to be in the same plane as O-Y_R Z_R. The main extraction procedure is executed in the horizontal frame O-X_R Y_R (marked as frame R_h) and the vertical frame O-Y_R Z_R (marked as frame R_v).


4.1.1 Orthogonal angle estimation

The exact orientation of the slope and edge line is estimated with a well-known geometrical constraint named the orthogonal assumption. When a slope or edge is detected by LRF2, the local orthogonal angle α_o^{R_h} and its variance σ²_{α_o^{R_h}} are calculated [29] based on the horizontally scanned line segments, as shown in Figure 8. Since the slope is typically located in the corridor, α_o^{R_h} can precisely indicate the orientation of the slope with respect to the robot's horizontal frame O-X_R Y_R.


Figure 8. Orthogonal angle estimation of the horizontal scan

The global orthogonal angle α_o can be added to the EKF-SLAM state vector μ_t as a feature F^O:

$$F^{O} = f_O\!\left(P_t,\, \alpha_o^{R_h}\right) = \alpha_o = \theta_{p(t)} + \alpha_o^{R_h}. \quad (34)$$

Moreover, the covariance matrix of the state vector can be augmented in a similar way as in equation (20), and it will not be detailed here.

Another application of the orthogonal assumption is the vertical scan correction. The central axis of LRF2 is designed to be parallel to the ground. However, even in an indoor environment, the ground where the robot is moving cannot be absolutely flat and smooth. The space between tiles can easily affect the parallelism, which may lead to errors during the vertical feature extraction. To overcome this problem, orthogonal estimation of the vertical scan is executed in every step to compensate for the error. This is feasible because most features in the vertical scan plane, such as the ceiling, walls, and ground, are parallel or perpendicular to each other, as shown in Figure 9.


Figure 9. Vertical scanning plane and features of interest

4.1.2 Vertical scan classification

Before starting to build the line segment model of the slope and edge, the vertically scanned data need to be classified under the frame of LRF2. Basically, the classification is executed after the extraction of line segments. The slope line segment can be easily found by checking parameters such as length, range, angle, and covariance. For edge point detection, two situations must be considered. The first situation is that the edge point is the intersection point of the ground line and the down-slope line segment, as shown in Figure 9. The second situation is shown in Figure 10: a short ground line segment has been detected without a following slope line segment. In this case, the remote endpoint of the ground line segment is treated as the edge point if the incidence angle of its ray is bigger than a given threshold ω_T^E and no points are scanned from other features within the range d_T^E of the edge point.


Figure 10. Second situation of edge point detection



Figure 11. Edge line estimation based on the edge point E

4.1.3 Line segments modeling and integration of edge

In a 2D map, the key characteristic of the edge can be perfectly represented by the edge line, which is marked as line segment HG in Figure 7. However, the vertical scan plane can only detect one edge point, which is supposed to be the closest one to the real edge line. Thus, as shown in Figure 11, the edge feature F^E that is going to be added into the state vector can be estimated by utilizing the single detected edge point E, the robot pose P_t, and the corresponding orthogonal angle α_o as follows:

$$F^{E} = f_E\!\left(P_t,\, F^{O},\, y_e^{R}\right) = \rho_e = y_e^{R}\sin\!\left(\alpha_o - \theta_{p(t)}\right) + x_{p(t)}\cos\alpha_o + y_{p(t)}\sin\alpha_o. \quad (35)$$

The covariance matrix of the state vector also has to beaugmented as follows:

æ öS S Sç ÷ç ÷S = S S Sç ÷ç ÷S S Sè ø

æ öS S Sç ÷ç ÷= S S Sç ÷ç ÷S S S +ç ÷è ø

( ) ( ) ( )

( )

( )

T T( ) ( ) ( )

T( ) ( )

T T( ) ( ) ( )

O O O E

t t i t

O Eii t i

E O E Et i

O O Ot t i t

O Oii t i t

O O Ot t i t

P F P F F P F F

t FF P F F F

F P F F F F

P F P F F P F

FF P F F P F

y y yP F P F F P F

J

J

J J J J J C J

m

m

m m m m

(36)

where C_y is the variance of the measurement y_e^R, and the Jacobi matrices are J_μ = ∂f_E(P_t, F^O, y_e^R)/∂(P_t, F^O) and J_y = ∂f_E(P_t, F^O, y_e^R)/∂y_e^R. The block Σ_(t) collects the pose and orthogonal-angle covariances:

$$\Sigma_{(t)} = \begin{pmatrix}\Sigma_{P_tP_t} & \Sigma_{P_tF^{O}}\\ \Sigma_{F^{O}P_t} & \Sigma_{F^{O}F^{O}}\end{pmatrix}. \quad (37)$$

The first added edge feature F^E is incomplete because it has no length, with only one endpoint instead of two. This problem is solved when another edge point is scanned, and then the length can be estimated.
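As an illustration, the lift from the measured distance y_e^R to the global edge-line distance ρ_e in equation (35) can be sketched as follows. The function name and argument layout are our assumptions; the robot's Y_R axis is taken as its forward direction:

```python
import math

def edge_feature(pose, alpha_o, y_e):
    """Equation (35): convert the single edge point, measured at distance y_e
    along the robot's forward (Y_R) axis, into the global edge-line distance
    rho_e using the associated global orthogonal angle alpha_o."""
    x, y, th = pose
    # Project the forward measurement onto the edge-line normal, then add the
    # robot position projected onto the same normal direction.
    return (y_e * math.sin(alpha_o - th)
            + x * math.cos(alpha_o)
            + y * math.sin(alpha_o))
```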

4.1.4 Line segments modeling and integration of slope

There are several key characteristics of a slope, such as width, gradient, length, and location. The location and width can be represented by the slope start line, which can be treated as line segment AB in Figure 7. The estimation of the slope start line, however, differs from the edge line estimation. Instead of finding a point that is closest to the slope start line, the intersection point between the scanned slope line segment and the ground is adopted. This is because the start line may not be scanned even when some line segment is scanned from the slope, as shown in Figure 12.


Figure 12. Slope is detected without scanning the start line



Figure 13. Slope start line and scanned area estimation

From the geometrical relationship shown in Figure 13 (a), the distance y_s^R, which is measured from the robot origin to the intersection point in the Y_R axis direction, can be calculated by using the robot frame height h^R and the scanned slope line segment l_sl^{R_v} in the plane O-Y_R Z_R as

$$y_s^{R} = \frac{\rho_{sl}^{R_v} - h^{R}\cos\alpha_{sl}^{R_v}}{\sin\alpha_{sl}^{R_v}}, \quad (38)$$


where α_sl^{R_v} is the angular parameter of l_sl^{R_v} with respect to the negative Z_R axis and is measured positive in the counter-clockwise direction. After y_s^R has been estimated, the slope feature F^S, which contains the distance parameter of the slope start line, can be calculated as follows:

$$F^{S} = f_S\!\left(P_t,\, F^{O},\, y_s^{R}\right) = \rho_s = y_s^{R}\sin\!\left(\alpha_o - \theta_{p(t)}\right) + x_{p(t)}\cos\alpha_o + y_{p(t)}\sin\alpha_o. \quad (39)$$

Moreover, the covariance matrix of the state vector can be augmented in a similar way as with the edge line.
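A sketch of the start-line distance computation in equation (38) follows; the parameter names are assumed for illustration, with h_R the sensor height above the ground:

```python
import math

def slope_start_distance(h_R, rho_sl, alpha_sl):
    """Equation (38): intersect the scanned slope line segment, given in
    (rho, alpha) form in the vertical frame (alpha measured from the negative
    Z_R axis), with the ground plane at height -h_R below the sensor to get
    the start-line distance y_s along the Y_R axis."""
    return (rho_sl - h_R * math.cos(alpha_sl)) / math.sin(alpha_sl)
```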

The partially discovered slope area can be determined by projecting the endpoints of l_sl^{R_v} along and perpendicular to the orthogonal angle. The area can be found as the blue dots plotted in Figure 13 (b). It is augmented when another line segment is scanned from the same slope.

The last key characteristic of the slope is the gradient. From the slope model plotted in Figure 14, the geometric relationship between the slope gradient tan α_s and the scanned line angle α_sl^{R_v} can be found as follows:

$$\tan\alpha_s = \frac{\tan\alpha_{sl}^{R_v}}{\cos\alpha_o^{R_h}}, \quad (40)$$

where α_o^{R_h} is the local orthogonal angle that has been discussed before. The slope gradient can be excluded from the EKF-SLAM frame because it is independent of the coordinate transformation in the 2D frame. From every vertical scan that finds the same slope, the slope gradient tan α_s can be updated by a simple independent one-dimensional EKF with the observation equation (40).

Up to this point, three necessary characteristics of the edge and slope have been successfully integrated into the 2D EKF-SLAM frame. They are treated as one set but added as three individual features, because the slope and edge may not be detected simultaneously. Other features, such as the endpoints, are stored and updated outside the EKF framework.


Figure 14. Slope model with the scanned slope line segment


Figure 15. The detailed flowchart of the vertical feature processing in the EKF-SLAM frame

4.2 Vertical feature association and correction

Although a rough EKF processing of the vertical features can be found in Figure 2, a detailed flowchart is plotted in Figure 15, because the feature associations of the orthogonal angle and the vertical feature (edge or slope) are dependent; vertical features differ from the horizontal features. For a stored orthogonal angle α_o and its corresponding edge and slope, the estimated observations are as follows:

$$\hat{\alpha}_o^{R_h} = h_O\!\left(\mu_t\right) = \alpha_o - \theta_{p(t)}, \quad (41)$$

$$\hat{\rho}_e^{R_h} = h_E\!\left(\mu_t\right) = \rho_e - x_{p(t)}\cos\alpha_o - y_{p(t)}\sin\alpha_o, \quad (42)$$

$$\hat{\rho}_s^{R_h} = h_S\!\left(\mu_t\right) = \rho_s - x_{p(t)}\cos\alpha_o - y_{p(t)}\sin\alpha_o. \quad (43)$$

Moreover, the estimation of their corresponding covariance can be expressed in a general equation as follows:

$$\hat{C}_X = H_X\,\Sigma_t\,H_X^{\mathrm T}, \quad (44)$$

where the Jacobi matrix is H_X = ∂h_X(μ_t)/∂μ_t and the subscript X can stand for each feature. The association between a measurement z_X and its estimated observation ẑ_X is based on their Mahalanobis distance:


$$d_X = \left(z_X - \hat{z}_X\right)^{\mathrm T}\left(\hat{C}_X + C_X\right)^{-1}\left(z_X - \hat{z}_X\right) < \chi^2. \quad (45)$$

When a vertical feature has been detected in the scan, a local horizontal orthogonal angle α_o^{R_h} is first estimated as the measurement, and an attempt is made to associate the measured angle with a stored orthogonal angle. If no stored orthogonal angle can be associated with this measurement, the local orthogonal angle and the detected vertical feature are added as a new set of features. Otherwise, the stored vertical feature that corresponds to the associated α_o will try to associate with the detected vertical features. If the vertical feature association fails, the orthogonal angle and the detected vertical features are still added as a new set of features. If the association is successful, the EKF correction is executed based on the associated features as follows:

$$K = \Sigma_t H_X^{T} (C_X + \hat{C}_X)^{-1}, \quad (46)$$

$$\mu_t = \mu_t + K (z_X - \hat{z}_X), \quad (47)$$

$$\Sigma_t = (I - K H_X) \Sigma_t. \quad (48)$$
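As a minimal NumPy sketch, the correction step of Eqs. (44) and (46)-(48) can be written in a few lines (variable names are ours; this is an illustration under the stated equations, not the authors' implementation):

```python
import numpy as np

def ekf_correct(mu, Sigma, z, z_hat, H, C):
    """One EKF correction with an associated feature.
    mu, Sigma: state mean and covariance; z, z_hat: measurement and
    estimated observation; H: Jacobian of the observation model;
    C: measurement noise covariance."""
    C_hat = H @ Sigma @ H.T                     # Eq. (44): observation covariance
    K = Sigma @ H.T @ np.linalg.inv(C + C_hat)  # Eq. (46): Kalman gain
    mu = mu + K @ (z - z_hat)                   # Eq. (47): mean update
    Sigma = (np.eye(len(mu)) - K @ H) @ Sigma   # Eq. (48): covariance update
    return mu, Sigma
```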

5. Experiment and result

5.1 Experimental environment

The experiment was conducted in a real environment that consists of a crossroad and corridors, as shown in Figure 16. Three boxes (marked as Box 1, 2, 3) are intentionally placed along the long, straight corridor to add more detectable features to the environment. There are two slopes and one edge in the experimental environment. Slope 1 is a downslope accompanied by an edge, while Slope 2 is an upslope next to the stairs. Figure 17 gives a detailed view of these two slopes. The robot is controlled to move roughly along the green dashed line (a sketch, not the exact route) and finally comes back to the initial position, as shown in Figure 16. A total of 700 steps are conducted in this experiment. For each step, incremental encoder data, a horizontal scan, and a vertical scan are recorded.


Figure 16. Experimental environment


Figure 17. One downslope, one edge and one upslope

5.2 Experimental raw data

Figure 18 is plotted by utilizing the raw horizontal scanned data registered by the dead-reckoning robot trajectory. The scanned data are severely corrupted because of the unbounded accumulating error of the dead-reckoning pose. Notice that the data scanned from Slope 2 are very noisy considering the small shift of the robot during the scanning on Slope 2; this is mainly due to the inclined surface of Slope 2 and the rough ground in front of it. Figure 19 shows similar results for the registered raw vertical scanned data. The area of Slope 2 can easily be identified, while the area of Slope 1 is covered by the data scanned from the ground because of the huge dead-reckoning error when the robot returned to the initial position.


Figure 18. Horizontal scan registered by dead reckoning trajectory

Figure 19. Vertical scan registered by dead reckoning trajectory

5.3 EKF-SLAM results

Figure 20 shows the EKF-SLAM result at the point when the robot finished the vertical scanning on Slope 2 and turned back toward the original position. Compared with the trajectory reckoned by the odometer, the trajectory has been obviously corrected, but it still has a certain amount of residual error, which can be observed through the latest detected features in the left part of the figure.

The final 2D EKF-SLAM result is shown in Figure 21. The features in the left part of the figure have been well corrected after the robot returned to the original position and re-observed the previous features. Overall, this 2D map gives a good top view of the environment.

Figure 20. EKF-SLAM result halfway through the experiment


However, as shown in Figure 21, an unexpected small crack can be found between the start line of Slope 1 and the edge. Since the slope start line is extracted from the intersection points of the line segment scanned from the slope and the ground line, this error is presumed to be mainly caused by the discontinuity between Slope 1 and the ground, as shown in Figure 22. As for Slope 2, a noisy horizontal line segment scanned from the slope's surface is recorded; this is mainly due to the imperfectly flat ground. When LRF1 scans the inclined surface of Slope 2, the noise is easily aggravated as the robot moves on the tile-covered ground and vibrates slightly; some indication of this can be found in the scanned data pictured in Figure 18. The slightly overestimated width of Slope 2 can be explained by the same reason.

To verify the validity of the proposed algorithm, all the vital parameters of the edge and slopes estimated by the EKF-SLAM are compared with reference values to calculate the error as follows:

Error = EKF_result − Reference.  (49)


Figure 21. Final EKF-SLAM result with detected edge and slopes

Figure 22. Discontinuity of the slope 1 with the ground


It is noted that the reference values cannot be exactly true since they were measured by the author. The error of the reference may be bigger than expected, especially for the reference of the slope distance parameter ρs(1), which is an indirect measurement due to the discontinuity shown in Figure 22.

The changes in all the errors with respect to time (step) are shown in Figure 23, Figure 24 and Figure 25, respectively. The errors of the orthogonal angles are shown in Figure 23. Angle αo(1) indicates the orientation of the edge line and the start line of Slope 1, while αo(2) only represents the angular parameter of the start line of Slope 2. Both angles vary and are corrected during the robot motion. Figure 24 shows the errors of the distance parameters ρe, ρs(1), and ρs(2), which correspond to the edge, Slope 1 and Slope 2, respectively. The errors of αo(2) and ρs(2) diminish after step 460, when the robot moves towards the original position and re-observes the previous features. As for the gradient of the slopes, each of them is updated by an independent EKF so that they are corrected only when the corresponding slope is under observation, as shown in Figure 25.

Figure 23. Error of orthogonal angles

Figure 24. Error of distance parameters

Figure 25. Error of slope angle

In the final SLAM result, the residual error of the orthogonal angle is mainly introduced by the error of the local orthogonal angle estimation. The residual error is small but still slightly affects the orientation of the slope and edge; it can be seen in the bottom-left inset of Figure 21. The accuracy of the distance parameters of the slope and edge is mainly determined by the accuracy of the orthogonal angle and of the scanned slope line segment extracted from the vertical scan. Considering the size of the explored environment, the residual errors of the distance parameters are acceptable. Based on the distance parameter and the corresponding orthogonal angle, the location of the slope and edge can be determined in the 2D map and is easy to consult. The error of the slope gradient is free from the localization error and is mostly introduced by the error of the scanned slope line segment and the local orthogonal angle estimation. The estimated slope gradients are accurate enough for the robot to determine whether a slope is traversable.

5.4 Optional prediction results

To verify the efficiency of the optional prediction model, the error of the predicted transformation of the robot at each step should be presented. However, it is difficult to find the exact value of the robot pose transformation between two successive steps. Thus, the robot trajectory of the EKF-SLAM is adopted as the reference route, and the transformation between two steps can be calculated from it. The error can then be calculated as follows:

Error = Prediction_Trans − EKF_Trans.  (50)

The optional prediction is executed in this EKF-SLAM; out of a total of 700 steps, the ICP-based prediction is adopted in 174 steps, while the remaining 526 steps adopt the dead reckoning-based prediction.

The pose transformation is composed of translation and rotation. The translational error is usually small since the robot cannot move at a high speed due to safety considerations. Figure 26 and Figure 27 show plots of the translational errors along the XR and YR directions at each step, respectively. Overall, the error along YR is bigger than the error along XR since the robot was moving forward instead of turning most of the time. However, no big difference can be found between the errors of the two prediction models. Compared with the translational error, the rotational error is more critical because it normally leads to a very big error as the range of the scanned data increases, which may lead to the failure of feature association. The fatal rotational errors are substantially reduced by the optional prediction, as shown in Figure 28.

Figure 26. Comparison of translational error along XR

Figure 27. Comparison of translational error along YR

Figure 28. Comparison of rotational errors

The statistical results of the error properties are listed in Table 1 and Table 2. All the means of the errors listed in Table 1 are close to zero. Table 2 lists the standard deviations of the errors, which can act as a judgment of the performance of the two prediction models. The optional prediction shows slightly worse results in translation. This is usually due to the scan error caused by the LRF itself and the slight vibration when the robot moves on the tiled floor. However, the translational distance between steps is usually over 50 mm, so these differences between the two prediction models can be neglected. On the contrary, the optional prediction gives superior performance in the rotational error correction, reducing the rotational error by over 32.7 per cent. Since it makes the prediction more accurate, the feature association of the EKF-SLAM is therefore more robust.

                 X (mm)   Y (mm)    Angle (°)

Dead Reckoning   0.6751   -0.2111   0.0459

Optional         0.3692   -0.3479   0.0337

Table 1. Means of translational and rotational errors

                 X (mm)   Y (mm)    Angle (°)

Dead Reckoning   6.0679   10.5175   1.1434

Optional         6.6889   11.2768   0.7687

Table 2. Standard deviations of translational and rotational errors
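The reported reduction of over 32.7 per cent follows directly from the rotational standard deviations in Table 2:

```python
# Rotational standard deviations from Table 2 (degrees)
dead_reckoning = 1.1434
optional = 0.7687

# relative reduction achieved by the optional prediction model
reduction = 100.0 * (dead_reckoning - optional) / dead_reckoning
print(f"{reduction:.1f}% reduction")  # prints "32.8% reduction"
```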

5.5 Computational cost

It is well known that the computational cost of EKF-SLAM is O(n²) per step, where n is the number of stored features [10]. The difference between the 3D version and the 2D version is that a 3D feature has three parameters while a 2D feature has only two. Because of the quadratic form, 3D EKF-SLAM requires (3/2)² = 2.25 times the computational cost per step of 2D EKF-SLAM when they store the same number of features. Another extra computational cost of 3D SLAM comes from the feature extraction. The constrained plane and the line segment are the preferred features of 3D SLAM [20] and 2D SLAM, respectively, to represent a structured indoor environment. The extraction of a constrained plane from a spatial point cloud requires much more computation than that of a line segment, which is extracted from far fewer planar points.
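The factor of 2.25 follows from the ratio of feature dimensions and the quadratic per-step cost:

```python
# A 2D line feature has two parameters, a 3D plane feature has three,
# so with the same number of features the state dimension grows by 3/2
# and the O(n^2) per-step cost by (3/2)**2.
cost_ratio = (3 / 2) ** 2
print(cost_ratio)  # 2.25
```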

Figure 29 shows the computational time of the proposed 2D algorithm at each step during the experiment. Overall, the time cost of each step is low. It keeps increasing along the experiment, however, because more and more features are detected and added into the EKF-SLAM.

In general, a step that adopts the ICP-based prediction model requires a higher computational cost than a step that adopts the dead reckoning-based prediction. The application of the optional prediction successfully reduces the computational cost by selecting the appropriate prediction model. A few heavy costs, which require much more computational time than the neighbouring steps, can be found. This is because bad dead-reckoning results require more iterations of ICP to compensate for the errors, especially when the rotational error is huge.


6. Conclusion and future work

In this study, a computationally inexpensive algorithm has been proposed to detect slopes and edges in a structured indoor environment. The algorithm is mainly composed of a 2D line-segment-based EKF-SLAM and the orthogonal assumption. Instead of introducing an additional mechanism to rotate an LRF to obtain 3D point clouds, a vertically fixed LRF framework has been adopted to enable the robot to perform both horizontal and vertical scans. Also, an optional prediction model has been proposed to select the suitable model from the dead reckoning-based prediction and the ICP-based prediction, which saves computational cost and ensures the robustness of the prediction result. An experiment was conducted in a real indoor environment containing edge, downslope, and upslope features. The satisfactory results and analysis verify the validity of the proposed methods.

There are two main improvements that can be made in the future. First, considering the rotational component of the prediction error, a metric-based ICP is usually reported to give a better solution than the traditional ICP. Thus, other variants of ICP can be compared and introduced to achieve better results. Second, using a single vertical scan plane to monitor the ground is still risky for navigation. The robot may fall from an edge that is nearly parallel to the robot's displacement orientation, which is difficult to detect. Additional cheaper sensors, such as sonar, can be introduced to monitor the two sides of the robot to guarantee its safety.

7. Acknowledgements

This research was supported by the China Scholarship Council and Hokkaido University.

8. References

[1] Durrant-Whyte H. and Bailey T. Simultaneous localization and mapping: Part I. Robotics & Automation Magazine, IEEE, 13(2): 99-108, June 2006. DOI: 10.1109/MRA.2006.1638022

Figure 29. Computational time of each step

[2] Bailey T. and Durrant-Whyte H. Simultaneous localization and mapping (SLAM): Part II. Robotics & Automation Magazine, IEEE, 13(3): 108-117, Sep. 2006. DOI: 10.1109/MRA.2006.1678144

[3] Thrun S., Burgard W., and Fox D. Probabilistic Robotics (Intelligent Robotics and Autonomous Agents). The MIT Press, 2005. DOI: 10.1145/504729.504754

[4] Frese U. A discussion of simultaneous localization and mapping. Autonomous Robots, 20: 25-42, 2006. DOI: 10.1007/s10514-006-5735-x

[5] An S. Y., Kang J. G., Lee L. K., and Oh S. Y. Line Segment-Based Indoor Mapping with Salient Line Feature Extraction. Advanced Robotics, 26: 437-460, 2012. DOI: 10.1163/156855311X617452

[6] Lv J., Kobayashi Y., Ravankar A. A., and Emaru T. Straight Line Segments Extraction and EKF-SLAM in Indoor Environment. Journal of Automation and Control Engineering, 2(3): 270-276, 2014. DOI: 10.12720/joace.2.3.270-276

[7] Chou Y. S., Liu J. S. A Robotic Indoor 3D Mapping System Using a 2D Laser Range Finder Mounted on a Rotating Four-Bar Linkage of a Mobile Platform. International Journal of Advanced Robotic Systems, 10, 2013. DOI: 10.5772/54655

[8] Elseberg T., Borrmann D., and Nüchter A. 6DOF semi-rigid SLAM for mobile scanning. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.

[9] Thrun S., Burgard W., and Fox D. A real-time algorithm for mobile robot mapping with applications to multi-robot and 3D mapping. Proceedings of IEEE International Conference on Robotics and Automation, 2000.

[10] Paz L., Tardos J., and Neira J. Divide and conquer: EKF SLAM in O(n). IEEE Transactions on Robotics, 24(5), 2008. DOI: 10.1109/TRO.2008.2004639

[11] Ahn S., Choi M., Choi J., and Chung W. Data association using visual object recognition for EKF-SLAM in home environment. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, 2588-2594, 2006.

[12] Besl P. J. and McKay N. D. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2): 239-256, 1992. DOI: 10.1109/34.121791

[13] Lu F. and Milios E. Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans. Journal of Intelligent and Robotic Systems, 18: 3557-3563, 2005. DOI: 10.1023/A:1007957421070

[14] Minguez J., Lamiraux F., and Montesano L. Metric-Based Scan Matching Algorithms for Mobile Robot Displacement Estimation. Proceedings of IEEE International Conference on Robotics and Automation, 2005.

[15] Martínez J. L., González J., Morales J., Mandow A., and García-Cerezo A. J. Mobile robot motion estimation by 2D scan matching with genetic and iterative closest point algorithms. Journal of Field Robotics, 23: 21-34. DOI: 10.1002/rob.20104

[16] Choi Y. H., Lee T. K., and Oh S. Y. A line feature based SLAM with low grade range sensors using geometric constraints and active exploration for mobile robot. Autonomous Robots, 24(1): 13-27, 2008. DOI: 10.1007/s10514-007-9050-y

[17] Yap T. N. and Shelton C. R. SLAM in Large Indoor Environments with Low-Cost, Noisy, and Sparse Sonars. Proceedings of IEEE International Conference on Robotics and Automation, 2009.

[18] Nguyen V., Harati A., Martinelli A., Siegwart R., and Tomatis N. Orthogonal SLAM: a step toward lightweight indoor autonomous navigation. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.

[19] Ruhnke M., Kummerle R., Grisetti G., and Burgard W. Highly accurate 3D surface models by sparse surface adjustment. Proceedings of IEEE International Conference on Robotics and Automation (ICRA), 2012.

[20] Weingarten J. and Siegwart R. EKF-based 3D SLAM for structured environment reconstruction. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005.

[21] Luo R. C. and Lai C. C. Enriched indoor map construction based on multisensor fusion approach for intelligent service robot. IEEE Transactions on Industrial Electronics, 59(8): 3135-3145, Aug. 2012. DOI: 10.1109/Tie.2011.2141090

[22] Wei B., Gao J., Li K., and Chen H. Navigation and Slope Detection System Design for Autonomous Mobile Robot. The Ninth International Conference on Electronic Measurement & Instruments, 2009.

[23] Murarka A. and Kuipers B. A Stereo Vision Based Mapping Algorithm for Detecting Inclines, Drop-offs, and Obstacles for Safe Local Navigation. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 2009.

[24] Akdeniz B. C. and Bozma H. I. Local Terrain Mapping via 3D Laser Based Bubble Surfaces. European Conference on Mobile Robots (ECMR), Sep. 2013.

[25] Nguyen V., Gächter S., Martinelli A., Tomatis N., and Siegwart R. A comparison of line extraction algorithms using 2D range data for indoor mobile robotics. Autonomous Robots, 23(2): 97-111, 2007. DOI: 10.1007/s10514-007-9034-y

[26] Segal A., Haehnel D., and Thrun S. Generalized-ICP. Proceedings of Robotics: Science and Systems, 2009.

[27] Pomerleau F., Colas F., Ferland F., Michaud F. Relative Motion Threshold for Rejection in ICP Registration. Proceedings of The 7th International Conference on Field and Service Robotics (FSR), July 2009.

[28] Censi A. An accurate closed-form estimate of ICP's covariance. Proceedings of IEEE International Conference on Robotics and Automation (ICRA), 2007.

[29] Lv J., Kobayashi Y., Hoshino Y., Emaru T. Slope Detection based on Orthogonal Assumption. Proceedings of IEEE/SICE International Symposium on System Integration (SII), 2012.
