Map Building without Localization by Dimensionality Reduction Techniques

Takehisa YAIRI
RCAST, University of Tokyo
Session: VISION, GRAPHICS AND ROBOTICS



Outline

• Background

– Motivation, Purpose and Problem to consider

• Related Works

– SLAM, and Mapping with DR methods

• Proposed Framework - LFMDR

– Basic idea, Assumptions, Formalization

• Experiment

– Visibility-Only and Bearing-Only Mappings

• Conclusion

Motivation

• Map building
  – An essential capability for intelligent agents

• SLAM (Simultaneous Localization and Mapping)
  – Has been the mainstream approach for many years
  – Very successful both in theory and practice

• I like SLAM too, but I feel something is missing:
  – Are mapping and localization really inseparable?
  – Are the motion and measurement models necessary?
  – What about the aspect of map building as an abstraction of the world?

• Is there another map building framework?

Purpose

• Reconsider robot map building from the viewpoint of dimensionality reduction and propose an alternative framework
  – Localization-Free Mapping by Dimensionality Reduction (LFMDR)
  – No localization; no motion or measurement models
  – Heuristic: closely located objects tend to share similar histories of being observed by the robot

• Reduce dimensionality while preserving locality (t = 1, 2, …, N)

Map Building Problem to Consider

• Feature-based map (i.e., not topological, not occupancy-grid)
  – A map is represented by the 2-D coordinates of objects:
    m = (ξ_1, ξ_2, …, ξ_M)   (positions of objects)

• Motion and measurement models EXIST
  – But they are not necessarily known in advance

• State-space model (state x_t, control u_t, observation y_t):
  – Motion model (state transition model): x_t = f(x_{t-1}, u_t, m) + v_t
  – Measurement model (observation model): y_t = g(x_t, m) + w_t
  – Both exist, but may be unknown
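The state-space model above can be sketched in code. This is a minimal, hypothetical instance, assuming a unicycle-style motion model f and a bearing-to-objects measurement model g; the concrete functions and noise levels are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_model(x_prev, u, v_std=0.01):
    """x_t = f(x_{t-1}, u_t) + v_t : noisy unicycle-style step (illustrative f)."""
    px, py, th = x_prev
    dist, dth = u
    x_new = np.array([px + dist * np.cos(th),
                      py + dist * np.sin(th),
                      th + dth])
    return x_new + rng.normal(0.0, v_std, 3)

def measurement_model(x, m, w_std=0.01):
    """y_t = g(x_t, m) + w_t : noisy bearing to every object (illustrative g)."""
    px, py, th = x
    bearings = np.arctan2(m[:, 1] - py, m[:, 0] - px) - th
    return bearings + rng.normal(0.0, w_std, len(m))

m = rng.uniform(0.0, 2.5, size=(50, 2))   # map: 50 objects in a 2.5 m square
x = np.zeros(3)                           # robot state (px, py, heading)
for t in range(5):
    x = motion_model(x, u=(0.05, 0.1))
    y = measurement_model(x, m)           # one observation vector y_t
```

SLAM assumes f and g are given; LFMDR, introduced below, does not.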

Related Works : SLAM [Thrun 02]

• Problem:
  "Estimate m and x_{1:T} from y_{1:T}, given f and g"

• Solutions:
  – Kalman Filter with extended state
  – Incremental maximum likelihood [Thrun, et al. 98]
  – Rao-Blackwellized Particle Filter [Montemerlo, et al. 02]

• Motion and measurement models must be given
• Estimates of the map and the robot position are coupled

• Given: motion model x_t = f(x_{t-1}, u_t, m) + v_t, measurement model y_t = g(x_t, m) + w_t
• Input: measurement data y_{1:T} = (y_1, y_2, …, y_T)
• Output: map m = (ξ_1, ξ_2, …, ξ_M) and robot trajectory x_{1:T} = (x_1, x_2, …, x_T)

Related Works : Dimensionality Reduction and Mapping (1)

• The idea of using DR for robot map building is not new in itself

• [Brunskill & Roy 05]
  – PPCA to extract low-dimensional geometric features (line segments) from range measurements

• [Pierce & Kuipers 97]
  – PCA to obtain low-level mappings between the robot's actions and perceptions (sensorimotor mapping)

[Figure: point features (high dimensional) → DR → line segments (low dimensional)]

Related Works : Dimensionality Reduction and Mapping (2)

• Another existing idea is to estimate the robot's states (locations, poses) from a sequence of high-dimensional observation data

• Appearance manifolds [Ham, et al. 05]
  – LLP + Kalman filter

• Action respecting embedding [Bowling, et al. 05]
  – SDE

• WiFi-SLAM [Ferris, et al. 07]
  – GP-LVM

[Figure: DR maps the observation space into a 2-D state space (x1, x2)]

Related Works : Dimensionality Reduction and Mapping (cont.)

• Treat the ROW vectors of the observation data matrix as data points
• Estimate x_{1:N} and g from y_{1:N}, given f

• Observation data (from time 1 to N; rows = time, columns = feature dimensions):

  Y_{1:N} = [ y_1^(1)  ...  y_1^(j)  ...  y_1^(M) ]
            [   :            :             :      ]
            [ y_t^(1)  ...  y_t^(j)  ...  y_t^(M) ]
            [   :            :             :      ]
            [ y_N^(1)  ...  y_N^(j)  ...  y_N^(M) ]

[Figure: DR maps the row vectors y_t from the observation space onto a trajectory in a 2-D state space (x1, x2)]

Proposed Framework : LFMDR (1) - Assumptions

1. All objects are uniquely identifiable

2. The measurement model can be decomposed into homogeneous submodels for individual objects

3. The locations of at least 3 objects are known in advance (anchor objects)

• Decomposable measurement model:
  y_t = (y_t^(1), y_t^(2), …, y_t^(M)) = (g(ξ_1, x_t), g(ξ_2, x_t), …, g(ξ_M, x_t)) + w_t
  i.e., y_t^(j) = g(ξ_j, x_t) + w_t^(j)
  – An observation about an object is roughly dependent only on its location, given the map and the robot's position

• The second assumption may look too restrictive, but …

Proposed Framework : LFMDR (2) - Interpretation as a DR Problem

• Imagine a mapping between an object's position and its history of observations:
  ξ_j ↦ y_{1:N}^(j) = (y_1^(j), …, y_N^(j))^T = (g(ξ_j, x_1), …, g(ξ_j, x_N))^T + noise

• Each column of the observation data matrix Y_{1:N} is one such history vector: the mapping takes the XY-coordinate space (2-dimensional) into the observation history space (N-dimensional)

• "If two objects are closely located, their histories of observation are similar"

Proposed Framework : LFMDR (4) - Procedure

1. Explore the environment and obtain the observation history data Y_{1:N}

2. Break Y_{1:N} into a set of column vectors {y_{1:N}^(j)}, j = 1, …, M

3. Apply a DR method to the vectors and obtain a set of 2-D vectors

4. Perform the optimal affine transformation w.r.t. the anchor objects, and obtain the final estimates ξ̂_j
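The four steps above can be sketched compactly. This is a minimal sketch on synthetic data, assuming a visibility-style observation matrix (object "visible" within a 1 m radius, an assumption of this sketch) and linear PCA as the DR method; any method from the comparison later in the talk could be substituted in step 3.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
N, M = 500, 30
true_pos = rng.uniform(0.0, 2.5, size=(M, 2))   # ground truth (unknown to LFMDR)
robot = rng.uniform(0.0, 2.5, size=(N, 2))      # exploration trajectory

# Step 1: observation history data Y_{1:N} (rows = time, columns = objects)
dists = np.linalg.norm(robot[:, None, :] - true_pos[None, :, :], axis=2)
Y = (dists < 1.0).astype(float)                 # shape (N, M)

# Step 2: break Y into column vectors -- one N-dimensional point per object
cols = Y.T                                      # shape (M, N)

# Step 3: apply a DR method to obtain 2-D vectors (linear PCA here)
Z = PCA(n_components=2).fit_transform(cols)

# Step 4: affine transform fitted by least squares on the anchor objects
anchors = [0, 1, 2, 3]
A = np.hstack([Z[anchors], np.ones((len(anchors), 1))])
T, *_ = np.linalg.lstsq(A, true_pos[anchors], rcond=None)
estimate = np.hstack([Z, np.ones((M, 1))]) @ T  # final 2-D map estimate
```

Note that the robot trajectory is used only to generate the synthetic data; the pipeline itself never estimates it.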

Features of LFMDR (1) (Comparison with SLAM)

• Common
  – Based on the state-space model: x_t = f(x_{t-1}, u_t, m) + v_t, y_t = g(x_t, m) + w_t

• Different
  – Advantages:
    – No assumption that the motion and measurement models are known
    – The map is estimated directly, without robot localization (localization-free mapping)
  – Disadvantages:
    – Off-line procedure
    – Larger amount of data required
    – Assumption of no missing data

Features of LFMDR (2) (Comparison with Other DR-based Approaches)

• Comparison with [Brunskill & Roy 05]
  – Global vs. local

• Comparison with [Ham, et al. 05], [Bowling, et al. 05], [Ferris, et al. 07]
  – Column vectors vs. row vectors of the observation data matrix
    (i.e., object positions vs. robot positions)

Experiment

• Applied to 2 different situations
  – [Case 1] Visibility-only mapping
  – [Case 2] Bearing-only mapping

• Common settings:
  – 2.5 [m] x 2.5 [m] square region
  – 50 objects (incl. 4 anchors)
  – Exploration with random direction changes and obstacle avoidance
  – WEBOTS simulator

• Evaluation
  – Mean Position Error (MPE)
  – Mean Orientation Error (MOE): based on differences in triangle orientation (A-B-C vs. A-C-B) between the true and estimated maps
  – Averaged over 25 runs
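The two evaluation metrics can be made concrete in code. This is our reading of the slide: MPE as the mean distance between true and estimated object positions, and MOE as the percentage of object triples (A, B, C) whose orientation (clockwise vs. counter-clockwise, as in the A-B-C / A-C-B figure) flips between the true map and the estimate. The exact definitions used in the talk may differ.

```python
import numpy as np
from itertools import combinations

def orientation(a, b, c):
    """Sign of the 2-D cross product (b-a) x (c-a): +1 = CCW, -1 = CW."""
    return np.sign((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))

def mean_position_error(true_pos, est_pos):
    """MPE: mean Euclidean distance between true and estimated positions."""
    return float(np.mean(np.linalg.norm(true_pos - est_pos, axis=1)))

def mean_orientation_error(true_pos, est_pos):
    """MOE: percentage of object triples whose triangle orientation flips."""
    idx = range(len(true_pos))
    flips = [orientation(*true_pos[list(t)]) != orientation(*est_pos[list(t)])
             for t in combinations(idx, 3)]
    return 100.0 * float(np.mean(flips))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
mirror = pts * np.array([-1.0, 1.0])   # a reflection flips every triple
```

A mirrored map scores 100% MOE even if distances are preserved, which is why the affine alignment to anchors matters.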

DR Methods

1. Linear PCA

2. SMACOF-MDS [de Leeuw 77]
   (a) Equal weights, (b) kNN-based weighting

3. Kernel PCA [Schölkopf, et al. 98]
   (a) Gaussian, (b) Polynomial

4. ISOMAP [Tenenbaum, et al. 00]

5. LLE [Roweis & Saul 00]

6. Laplacian Eigenmap [Belkin & Niyogi 02]

7. Hessian LLE [Donoho & Grimes 03]

8. SDE [Weinberger, et al. 05]

• Parameters (k, σ², d) were tuned manually
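Most of the listed methods have scikit-learn counterparts, which makes this kind of side-by-side comparison easy to reproduce. In this sketch, SMACOF-MDS corresponds to sklearn's metric `MDS` and Laplacian Eigenmap to `SpectralEmbedding`; SDE (maximum variance unfolding) has no sklearn implementation and is omitted. Swiss-roll data stands in for the observation history vectors, and all parameter values are illustrative, not the tuned ones from the slides.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA, KernelPCA
from sklearn.manifold import MDS, Isomap, LocallyLinearEmbedding, SpectralEmbedding

X, _ = make_swiss_roll(n_samples=300, random_state=0)  # stand-in data

methods = {
    "LPCA":   PCA(n_components=2),
    "KPCA":   KernelPCA(n_components=2, kernel="rbf", gamma=0.05),
    "SMACOF": MDS(n_components=2, random_state=0),
    "ISOMAP": Isomap(n_neighbors=8, n_components=2),
    "LLE":    LocallyLinearEmbedding(n_neighbors=8, n_components=2,
                                     random_state=0),
    "HLLE":   LocallyLinearEmbedding(n_neighbors=8, n_components=2,
                                     method="hessian", eigen_solver="dense",
                                     random_state=0),
    "LEM":    SpectralEmbedding(n_components=2, n_neighbors=8, random_state=0),
}
# every method maps the same data to a 2-D embedding
embeddings = {name: m.fit_transform(X) for name, m in methods.items()}
```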

Case 1 : Visibility-Only Mapping - Description

• Building a map using only visibility information
  – i.e., whether each object is visible (1) or not (0)

• An assumption in this simulation:
  – An object is visible if the horizontal visual angle of its non-occluded part is larger than 5 deg

Case 1 : Visibility-Only Mapping - Visibility Measurements

• The visibility observation data form an N x M binary matrix (rows = time 1…N, columns = object ID 1…M), e.g.:

    1 0 0 1 . 0 . 1 1
    1 1 0 0 . 0 . 1 1
    1 0 0 0 . 1 . 1 1
    0 0 0 0 . 1 . 0 1
    : : : : : : : : :
    1 0 0 0 . 1 . 0 1

• The observation history vector of an object is a column of this matrix, e.g.:
  y_{1:N}^(j) = (0, 0, 1, 1, 1, 1, …, 1, 1)^T

• Normalization (Euclidean norm): ỹ_{1:N}^(j) = y_{1:N}^(j) / ||y_{1:N}^(j)||
  – Compensates for the varying frequencies with which the objects are observed
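The normalization step, as we read it from the slide, divides each object's binary visibility history (a column of the data matrix) by its Euclidean norm, so frequently-seen and rarely-seen objects contribute comparably. A tiny synthetic example:

```python
import numpy as np

Y = np.array([[1, 0, 1],
              [1, 0, 0],
              [1, 1, 0],
              [1, 0, 0]], dtype=float)   # rows = time, columns = objects

norms = np.linalg.norm(Y, axis=0)        # per-column Euclidean norms: 2, 1, 1
Y_tilde = Y / norms                      # y~^(j) = y^(j) / ||y^(j)||
# column 0 becomes (0.5, 0.5, 0.5, 0.5): the always-visible object no longer
# dominates distances in the observation history space
```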

Case 1 : Visibility-Only Mapping - Maps After 2000 Time Steps

[Figure: estimated maps for LPCA, KPCA (Gaussian, σ²=0.5), SMACOF (k=5), Isomap (k=6), LLE (k=8), LEM (k=6), HLLE (k=8), SDE (k=7)]

Case 1 : Visibility-Only Mapping - Mean Position Errors

[Figure: mean position error (m, 0-2) vs. number of steps (0-2000) for LPCA, SMACOF (WGT, k=5), KPCA (Gaussian, σ²=0.5), ISOMAP (k=6), LLE (k=8), LEM (k=6), HLLE (k=8), SDE (k=7)]

Case 1 : Visibility-Only Mapping - Final Map Errors

DR method    | Opt. param. | MPE [m] | Rnk | MOE [%] | Rnk
LPCA (CMDS)  | -           | 1.055   | 10  | 18.19   | 8
SMA (UNWGT)  | -           | 0.421   | 7   | 5.86    | 6
SMA (WGT)    | k=5         | 0.206   | 4   | 4.83    | 4
KPCA (GAUS)  | σ²=0.5      | 0.926   | 8   | 23.29   | 9
KPCA (POLY)  | d=8         | 0.953   | 9   | 27.03   | 10
ISOMAP       | k=6         | 0.177   | 2   | 4.11    | 2
LLE          | k=8         | 0.241   | 5   | 5.40    | 5
LEM          | k=6         | 0.352   | 6   | 8.17    | 7
HLLE         | k=8         | 0.192   | 3   | 4.24    | 3
SDE          | k=7         | 0.138   | 1   | 3.65    | 1

Case 2 : Bearing-Only Mapping - Description

• Building a map only with bearing measurements (relative direction angles to objects)
  – Motivated by the recent popularity of Bearing-Only SLAM
  – Assuming all objects are always visible (no missing observations)

Case 2 : Bearing-Only Mapping - Bearing Measurements

• Original bearing data: an N x M matrix of angles θ_{j,t} (rows = time 1…N, columns = object ID 1…M)

• Raw angles have a discontinuity at ±π, so each θ_{j,t} is replaced by the unit directional vector (cos θ_{j,t}, sin θ_{j,t})^T

• The observation history space therefore becomes 2N-dimensional:
  ỹ_{1:N}^(j) = (cos θ_{j,1}, sin θ_{j,1}, …, cos θ_{j,N}, sin θ_{j,N})^T
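The bearing preprocessing above can be sketched as follows: each raw angle is replaced by its unit vector (cos θ, sin θ), so the 2π wrap-around discontinuity (e.g., between +179° and -179°) disappears from the feature space. Synthetic angles stand in for real measurements.

```python
import numpy as np

N, M = 4, 3
theta = np.random.default_rng(2).uniform(-np.pi, np.pi, size=(N, M))

# interleave cos and sin rows: each object's history becomes 2N-dimensional
features = np.empty((2 * N, M))
features[0::2] = np.cos(theta)
features[1::2] = np.sin(theta)

# two nearly identical directions on opposite sides of the +/- pi cut are
# close in feature space, even though their raw angles differ by ~2*pi
a, b = np.pi - 0.01, -np.pi + 0.01
va = np.array([np.cos(a), np.sin(a)])
vb = np.array([np.cos(b), np.sin(b)])
```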

Case 2 : Bearing-Only Mapping - Maps After 2000 Time Steps

[Figure: estimated maps for LPCA, SMACOF (k=8), Isomap (k=9), LLE (k=8), LEM (k=7), SDE (k=7)]

Case 2 : Bearing-Only Mapping - Mean Position Errors

[Figure: mean position error (m, 0-0.4) vs. number of steps (500-2000) for LPCA, SMA-WGT (k=8), ISOMAP (k=9), LLE (k=8), LEM (k=7), SDE (k=7)]

Case 2 : Bearing-Only Mapping - Final Map Errors

DR method     | Opt. param. | MPE [m] | Rnk | MOE [%] | Rnk
LPCA (CMDS)*  | -           | 0.168   | 5   | 2.33    | 5
SMA (UNWGT)   | -           | 0.101   | 4   | 1.38    | 3
SMA (WGT)     | k=8         | 0.0609  | 1   | 1.00    | 1
KPCA (GAUS)   | σ²=1.0      | 3.47    | 9   | 49.2    | 9
KPCA (POLY)   | d=2         | 0.605   | 8   | 9.15    | 8
ISOMAP        | k=9         | 0.0979  | 3   | 1.83    | 4
LLE           | k=8         | 0.173   | 6   | 3.03    | 6
LEM           | k=7         | 0.367   | 7   | 8.46    | 7
HLLE          | -           | NA      | -   | NA      | -
SDE           | k=7         | 0.0741  | 2   | 1.36    | 2

(*) This might imply that the data distribution is close to linear

Conclusion

• Reconsidered robot map building from the viewpoint of dimensionality reduction

• Proposed a new framework named LFMDR
  – Motion and measurement models are not required
  – No need to estimate the robot's poses (localization-free)
  – However, a larger amount of data is needed

• Tested on two types of sensor measurements
  – Visibility information and bearing angles

• Compared a variety of DR methods

Future Works

• Relaxation of restrictions
  – Missing measurements
  – Data association problem

• Scalability
  – Mapping of a larger number of objects

• On-line algorithm
  – Tracking of moving objects

• Multi-sensor fusion
  – e.g., mapping with bearing and range measurements
