Parameterized Sensor Model and
Handling Specular Reflections for Robot Map Building

Md. Jayedur Rashid

May 16, 2006
Master's Thesis in Computing Science, 20 credits

Supervisor at CS-UmU: Thomas Hellström
Examiner: Per Lindström

Umeå University
Department of Computing Science
SE-901 87 UMEÅ
SWEDEN


Abstract

Map building is one of the most popular classical problems in the field of mobile robotics, and a sensor model is the means of interpreting raw sensory information in the spatial world, especially for building maps. For mobile robots working in indoor environments, the Ultrasonic Sonar (US) is popular because of its availability, wide-angle coverage and long sensing range, in spite of several problems. Among them, the specular reflection problem is the most prominent one.

A parameterized sensor model for map building using an occupancy grid with Bayesian updating is proposed in the first part of this MS thesis project. To optimize the parameters of the proposed model, a measure of map goodness has been applied. A new approach for measuring the goodness of maps without a handcrafted map of the actual environment has been tested, and implementation results are reported. Three different processes have been studied to obtain such a measure: statistical analysis, a map scoring technique over binary maps, and the derivative of the image using the 8-connected neighbor concept borrowed from the image processing field. Although a satisfactory result has not been achieved for measuring the goodness of maps without a ground map, this study shows that the proposed sensor model generates better maps than the standard sensor model described by R. Murphy [27] with the same parameter set.

Dealing with the specular reflection problem when using ultrasonic sonar is investigated in the second part of this MS thesis project. The robot has a number of ultrasonic sonars whose fields of view partly overlap, especially over time when the robot is moving. Based on this redundancy, a data processing algorithm, HSR (Handling Specular Reflection), has been developed; it uses a linear feature called the Estimated Wall in this project. HSR is an off-line data processing algorithm operating in batch mode. The HSR algorithm is shown to improve the quality of maps in two different experimental environments.


Contents

1 Introduction
  1.1 Motivation
  1.2 Goals
  1.3 Organization of This paper

2 Literature Review
  2.1 Map Building and Sensor Model
  2.2 Measurement of map goodness
  2.3 Specular Reflection

3 Ultrasonic Sensors
  3.1 Physical Description
  3.2 Advantages and Problems

4 Sonar Based Map Building
  4.1 Map Building
  4.2 Evidence Grid
  4.3 Sonar Sensor Model
  4.4 Bayesian Updating / Bayesian Sensor Fusion
    4.4.1 Brief Theoretical Review of Bayesian Method
    4.4.2 Getting Numerical Values
    4.4.3 Updating With Bayes' Rule

5 Proposed Sensor Model and Measurement of Map Goodness
  5.1 Proposed Parameterized Sensor Model
  5.2 Parameter Optimization of the Proposed Model
  5.3 Measurement of Map Goodness
  5.4 Result, Discussion and Conclusion

6 Dealing with Specular Reflection
  6.1 HSR Algorithm Description with Pseudo-Code
    6.1.1 Processing Sensor Readings (Step-1)
    6.1.2 Estimating Variant Walls (Step-2)
    6.1.3 Finding and Updating Specularly Reflected Readings (Step-3)
  6.2 Parameters of the HSR Algorithm
  6.3 Relation between Openings and Sensor's Distance
  6.4 Implementation
    6.4.1 Experiment 1
    6.4.2 Experiment 2
  6.5 Discussion, Conclusion and Recommendation

7 Acknowledgements

References

A Pseudo Code of HSR Algorithm


List of Figures

3.1 Polaroid ultrasonic transducer. The membrane is the disk. This figure is adapted from [27].
3.2 Typical beam pattern of the Polaroid ultrasonic sensor (adapted from [15], Thomas Hellström's "Intelligent Robotics" class lecture, Spring 2005).
3.3 Sonar cone with a wide angle of 30° and range response R.
3.4 Angular uncertainty of a sonar reading. 2α is the VOF and R is the sonar response.
3.5 Specular Reflection.
3.6 Cross-talking problem, adapted from Introduction to AI Robotics, R. Murphy [27].
3.7 Cross-talking problem, adapted from [3]. A Critical Path is any path of ultrasound that causes cross-talking.

4.1 An example of an Occupancy Grid (taken from Thomas Hellström's class lecture [15]). Red, blue and green cells represent occupied, empty and unknown respectively.
4.2 Two-dimensional representation of a sonar beam projected onto an occupancy grid. β is the half angle of the sensor cone width and R is the maximum range of a sonar. Grid elements are divided into four regions (1, 2, 3 and 4).
4.3 Three-dimensional representation of a sonar beam (taken from R. Murphy [27]).
4.4 An example of calculating (α, r) for elements of Region 1. Here the sensor reading s = 6, the cell of interest is the dark one, R = 10 and β = 15° (taken from R. Murphy [27]).
4.5 An example of calculating (α, r) for elements of Region 2. Here the sensor reading s = 6, the cell of interest is the dark one, R = 10 and β = 15° (taken from R. Murphy [27]).
4.6 (a) Two-dimensional and (b) three-dimensional representation of a sonar beam projected onto an occupancy grid, using equations (4.9), (4.10), (4.11) and (4.12).

5.1 Probability trend for a sensor readout along the acoustic axis, α = 0. Sensor readout s = 5, Maxoccupied = 0.9 and Tolerance = ±0.5 in Region 1. r is the distance of grid elements from the sensor origin. The graph is generated by simulating equations (4.9), (4.10), (4.11) and (4.12).
5.2 Proposed belief or probability pattern/trend for Region 1 along the acoustic axis, α = 0. This is the generic model, without specifying the parameters. Here s is the assumed exact sensor readout obtained by a sensor and tol is the tolerance for Region 1.
5.3 Proposed belief or probability pattern/trend for Region 2 along the acoustic axis, α = 0. This is the generic model, without specifying the parameters, as for Region 1. Here s is the assumed exact sensor readout obtained by a sensor and tol is the tolerance for Region 1.
5.4 Updated version of Figure 5.2.
5.5 Proposed Parameterized Sensor Model.
5.6 2D view of the Proposed Parameterized Sensor Model with sensor readout s = 7.5 units. The x-axis is the sensor readout.
5.7 3D view of the Proposed Parameterized Sensor Model with sensor readout s = 7.5 units. The x-axis is the sensor readout. No spikes point in the opposite direction in Region 1 and Region 2.
5.8 Map built (a) using the standard sensor model (described in Chapter 4) with Beta = 15, Tolerance = 1.0, MaxOccupied = 0.98, and (b) using the Proposed Parameterized Sensor Model with some arbitrary parameter values: P1 = 0.02 (P1 = 1 − P4), P2 = 0.15, P3 = 1.0, P4 = 0.98, P5 = 15. For data collection, an Amigobot (with 8 ultrasonic sensors) has been used.
5.9 (a) Histogram generated from the mean of standard deviation. (b) Table containing the values of the histogram.
5.10 Finding the optimal parameter set by the mean of standard deviation method.
5.11 Optimal map by the derivative of maps and comparison method.
5.12 Optimal map, with parameter set, obtained by comparing two different maps of the same environment.

6.1 Amigobot's geometry and the direction of a reading sensed by the specific sensor, S (figures adapted from the Amigobot ActiveMedia manual).
6.2 Translating sensor readings into Cartesian (x, y) format by our program. The table in the upper right-hand corner represents the robot odometry and the sensed sensor reading at one instant at that odometry. The colored figure is the visualization of those readings after translation, with origin (500, 500).
6.3 Actual environment. For the discussion of this section, data were collected while wall 4 was white cardboard (used for making posters), which was very smooth.
6.4 Map S, obtained by plotting the data set of the environment described in Figure 6.3 after translation into Cartesian coordinates. The blue points are the robot's trajectory and the red dots represent sensed data. 1, 2, 3, 4 denote the boundary walls of the environment.
6.5 Estimated Wall concept for a single sensor reading r. Point A represents the data point for reading r as well as the middle point of the estimated wall CB.
6.6 After executing the estimateVariantWalls function for some readings with minWallSize = 40 mm, maxWallSize = 150 mm and threshold distance Rth = Rmax = 3000 mm. This map is called ES.
6.7 Representation of search points and directions for Step-3 of the HSR algorithm for a sensor reading r.
6.8 For reading r (represented by point B), for some instance of C(x, y), ES[xr][yr] is occupied by an estimated wall and that instance of C(x, y) satisfies all three conditions mentioned above. So C(x, y) becomes Cs and reading r is shown to be the result of specular reflection or cross-talk. As a result, reading r should be updated by Distance(ACs).
6.9 Explanation of Condition 1. When the robot is at PosB, the obstacle (the wall represented by the long black line on which B stands) is sensed by the sensor in the normal way. But when the robot is at PosA, the obstacle is out of range, so infinity values are returned. We may also get some readings as infinity because of specular reflection. Point C represents the maximum sensor range from PosA.
6.10 The actual environment, wall AB, is represented by sensor readings obtained at different times with different odometry positions. In the map, wall AB can be represented by the points near the wall (d, c, e, f, b etc.), but not by point a. Point a is either the result of cross-talk or specular reflection, or not noise at all; TDTh gives this decision.
6.11 Determining the value of TDTh. This is one example of the experiment when d = 2000 mm. Several readings with different d values have been taken, and the average of (r1 r2) gives the value of TDTh.
6.12 The relation between hole/opening size and the distance at which this opening/hole can be sensed almost perfectly by a sonar mounted on the Amigobot.
6.13 Amigobot ActiveMedia robot.
6.14 Physical environment for Experiment 1.
6.15 Comparison among different parameter values, as well as without the HSR algorithm, when there is no opening/hole in the environment.
6.16 Comparison among different parameter values, as well as without the HSR algorithm, with different opening sizes in the environment. Sub-figure (a) describes the actual environment. D1 and D2 are openings with a minimum size of 80 cm. Walls E, T and A are not sensed by the robot, so readings from that side are not of interest.

List of Tables

5.1 Parameter set used, with minimum (2nd column) and maximum (3rd column) range. The 4th column denotes the optimal parameter set found using the lowest column of the histogram shown in Figure 5.9. The 5th column contains the set that generated the lowest VSDNS, and the 6th column is obtained by taking the arithmetic mean over 1440 VSDNS values. *P1: the relation P1 = 1 − P4 has been used.

5.2 This table contains all parameter sets obtained from the first bar of the histogram shown in Figure 5.9. *P1: the relation P1 = 1 − P4 has been used.

6.1 Solution area for the minWallSize and maxWallSize parameters. The first column of the table represents the minimum wall size, the minWallSize parameter. The 2nd and 3rd columns of the table represent the minimum and maximum size of maxWallSize respectively, for a given minWallSize.


Chapter 1

Introduction

1.1 Motivation

Service robots are gaining popularity day by day, so they have become attractive to researchers and developers. For building a service robot, one of the most basic requirements is that it work as properly as possible in an indoor environment. To achieve this goal, mobile robots usually use a spatial representation of the actual environment, called a map. Robotic mapping addresses the problem of acquiring spatial models of the physical environment through a mobile robot [35]. Correct mapping of the working environment is quite helpful for navigating mobile robots autonomously [20, 9]. In fact, sensing the environment and interpreting it as a map is one of the basics of an autonomous robot [1, 17, 18]. Among the several problems in building autonomous robots, the mapping problem is considered one of the leading ones, although several methods for map building exist [35].

In an unknown environment, sensors provide the information that allows an autonomous agent, whether physical or software, to act intelligently like a natural being [6]. So sensors work as the organs of a robot or an autonomous agent. But using this sensory information directly for building a map of the environment is not convenient, because a model of the world built from raw sensory data may be incorrect even though the sensors are functioning correctly [6]. Generally, a map is built from raw sensory information through one or more interpreting methods, which are called a sensor model. A detailed discussion of sensor models is available in Chapter 4.

The ultrasonic sensor is popular for robotic mapping. Ultrasonic sonar sensors detect obstacles by the reflection of sound. They emit ultrasonic sound that hits obstacles and spreads out in several directions. Some energy comes back to the sonar sensor because of reflections, and after it is received a simple calculation is done with the time of flight of the sound [37, 34]. With this calculation, the sensors give the readings that are used for an internal representation of the environment, e.g. a map [34].

Ultrasonic sensors are inexpensive, fast and have a long operating range with wide angular coverage [27]. But they have several shortcomings as well. According to Robin Murphy [27], the three major problems with sonar range readings are as follows (for a detailed description see Chapter 3):
1) Foreshortening
2) Specular Reflection
3) Cross-Talk


Foreshortening: According to Robin R. Murphy [27], "A sonar has a 30 degree field of view. This means that sound is being broadcast in a 30° wide cone. If the surface is not perpendicular to the transducer, one side of the cone will reach the object first and return a range first. Most software assumes the reading is along the axis of the sound wave. If it uses the reading (which is really the reading for 15°) the robot will respond to erroneous data." This problem is called foreshortening.

This is a problem of a different type. So far there is no way to solve the foreshortening problem [27]. That is why we overlook this problem and assume that ranges come along the acoustic axis¹ of the sensor.
Specular reflection: Specular reflection occurs when the wave form hits a surface sufficiently far from the perpendicular of that surface; only a very small amount of energy, too little to be reliable for calculation, is received, and most of the energy is reflected away [9, 11]. This happens for several reasons.
Cross-Talk: Robin R. Murphy [27] writes: "Consider a ring of multiple sonars. Suppose the sonars fire (emit a sound) at about the same time. Even though they are each covering a different region around the robot, some specularly reflected sound from a sonar might wind up getting received by a completely different sonar. The receiving sonar is unable to tell the difference between sound generated by itself or by its peers. This source of wrong reading is called cross-talk, because the sound waves are getting crossed."

This problem occurs mainly in the corners of a room, or in places where several walls or obstacles stand side by side and create consecutive reflections. There are some solutions to this problem, but it is not the main focus of this thesis.

1.2 Goals

The goal of this master's thesis project can be divided into two parts:

• The first part is to propose a new parameterized sensor model for map building using ultrasonic sonar sensors. The model depends on a number of parameters, and the best model is found from the optimum values of those parameters, so parameter optimization is also included in this part. For finding the optimal parameter set, a map goodness strategy has been used; this requires defining a measure of goodness for the map. A new approach, measuring the goodness of maps without using a ground map (an original map of the environment), has also been tried in this part.

• The second part of this master's project is to analyze when and why specular reflections occur and to try to find methods to reduce the problems. In this part we propose a technique/algorithm for handling specular reflections when using ultrasonic sensors, especially for map building. This method is an off-line batch process.

¹Acoustic axis: the axis along which the sound wave travels.


1.3 Organization of This paper

The motivation and goals of this thesis have already been described in this chapter. Chapter 2 contains a review of existing related work. We have used Polaroid ultrasonic sonar in this project, and its physical and theoretical description is given in Chapter 3. A detailed discussion of map building and of a standard sensor model adapted from [27] is given in Chapter 4; only the evidence grid with the Bayesian updating approach is covered in detail for the map building problem. In the next chapter, Chapter 5, we propose a parameterized sensor model, and for finding the optimum parameter set of the proposed model we also propose a new optimization technique, a measure of map goodness that does not use a ground map. Implementation results, conclusions and a discussion of this part, together with future work, are given at the end of that chapter. In Chapter 6, which deals with the specular reflection problem, an algorithm, HSR, is proposed and described in detail together with its necessary parameters. The HSR algorithm has been tested in two different environments; the implementation, test results and recommended further work for this part are given in the same chapter. Acknowledgements are given in Chapter 7.


Chapter 2

Literature Review

This chapter reviews the existing literature on robotic map building and sensor models, measurement of map goodness, and the specular reflection problem, in that order.

2.1 Map Building and Sensor Model

Many researchers are interested in mobile robots based on ultrasonic sensors. Some of their work (related to the robot evidence grid) is mentioned in the annotated bibliography of [23]. Most of the discussion concerns two issues: map making and localization. According to R. Murphy [27], the two are closely related, i.e., conjugates of each other: without localization an accurate map is not possible, and vice versa. But our concern is mainly map building and the sensor model for sonar sensors; localization is assumed to be correct.

There are two types of map building methods: grid based and feature based [27, 7, 36, 14].

The grid based map building concept was introduced by Moravec and Elfes [23, 25, 11]. In this approach, a two- or three-dimensional grid is used for representing the robot's working environment. So far the two-dimensional grid is the most popular, and each small square of the 2D grid is called a cell. Each cell represents the status of the physical environment: the corresponding place is either empty or occupied by an obstacle. Many sensors report the distance, called the range, to the nearest object in a given direction. Given the robot's position, the probabilities of the cells near the indicated object are increased, and the probabilities between the sensor and the sensed object are decreased (since it is the first object in that direction). The exact amount of increase or decrease applied to the various cells forms the sensor model. So in the representing grid, the value of a cell depends not only on the range values of the sensor but also on a probabilistic estimation from the sensory information, and sensor responses are projected onto the grid through a proper sensor model [23].

The three most common methods for using sonar sensor models to update a grid, in the case of multi-sensory information integration, are Bayesian, Dempster-Shafer and HIMM [27]. The occupancy grid or grid based method is also known as the certainty or evidence grid [27]. The grid method was first introduced to turn wide-angle range measurements into detailed spatial maps [37, 23], so it is especially effective for ultrasonic sensors with their sparse responses [36], and the approach is quite useful in ultrasonic based robotic applications [32, 7, 24], usually in localization and map building.

Feature-based methods, demonstrated by [30], first try to locate features in the environment, then localize them, and finally use them as known landmarks by which to localize the robot as it searches for the next landmarks. Several researchers have used this method. In [28], Don Murray and Cullen Jennings describe a visually guided robot that can plan paths, construct maps and explore an indoor environment.

A sonar sensor model is needed to represent the sonar beam. The sensor model is the technique that processes the sonar reading for visual representation, so it controls the visual map representation of the physical environment. Robin Murphy describes a general sensor model [27], which is presented in Chapter 4. For building high resolution maps, Moravec and Elfes give a probabilistic sensor model in [26]. Matthias Fichtner and Axel Großmann designed a probabilistic sensor model for camera pose estimation in hallways and cluttered spaces, using a 3D geometrical model of the environment obtained from images taken by a camera, and they used this model for probabilistic robot localization [13].

In this thesis, we propose a parameterized probabilistic sensor model for mobile robot mapping. It is a modified version of the general sensor model described in [27] and in Chapter 4 of this thesis.

2.2 Measurement of map goodness

A map is used for robot navigation in either a known or an unknown environment. As the map is the visual representation of the physical environment, it is desirable to build as good a map as possible for a mobile robot. Measuring the goodness of maps is important for several reasons, for example to provide merit values for automatically learning a sensor model [23], or for building a Self-Organizing Map [38].

In [23], some processes for assessing map goodness, such as matching, scoring and entropy, are described. Moravec uses formulae from information theory for scoring between two maps.

Ypma [38] notes that "failure detection in process monitoring involves a classification mainly on the basis of data from normal operation. When a Self-Organizing Map is used for the description of normal system behavior, a compatibility measure is needed for declaring a map and a data set as matching." They propose a novel variant of one such measure and investigate the usefulness of existing and novel measures, both with synthetic data and in two real-world applications. As the Self-Organizing Map involves both a vector quantization and a nonlinear projection of the input space to some (usually lower-dimensional) output space, the average quantization error can be used as a measure of map goodness [38, 19].

Moravec and Elfes [26] proposed several processes for measuring map goodness or matching maps, such as scoring over grid cell similarities: if an empty cell of one map overlaps an empty cell of the other map, the positive score is increased, otherwise the negative score is increased, and the same process is applied to occupied cells.

For finding the optimal parameter set of the sonar model, we have tried a new approach based on statistical measurement; it is described in Chapter 5.


2.3 Specular Reflection

The definition of specular reflection was given in Chapter 1 (Introduction). When using ultrasonic sonar, it is assumed that ideally all objects would have a flat surface perpendicular to the transducer¹, but of course this rarely happens [27]. The situation can be worse if the reflected signal bounces off several objects until, by coincidence, some of the source energy returns to the transducer. Specular reflection is not only a problem in itself, it also generates another problem, cross-talk [27]. According to the researcher Monet Soldo [27], specular reflection and cross-talk can cause problems in different ways each time. Specular reflection is actually a problem caused by the environment, not by the program or by the physical characteristics of the sonar sensor. But sometimes, because of low operating power levels, the source signal is not strong enough and the returned signal is worthless, and this problem is difficult to distinguish from specular reflection returns [27]. According to Robin Murphy [27], taking the average of three readings from each sensor, the current one plus the last two, is one method of eliminating spurious readings. This ad hoc strategy is common on purely reactive robots.

A good understanding of the physical description and properties of ultrasonic sonar is always helpful for a better understanding of specular reflection and other problems with ultrasonic sonar, as well as their solutions. The physical principles of ultrasonic sensors can be found in [29]. Johann Borenstein and Alex Cao have carried out an experimental characterization of ultrasonic sonar in [4].

Michael Drumheller discussed specular reflection and other sonar response problems in [8] in his analysis of the Polaroid sonar. In [3], a successful method for error elimination, "Error Eliminating Rapid Ultrasonic Firing (EERUF)", was proposed by Johann Borenstein and Yoram Koren for eliminating cross-talk. Zou, in Chapter 5 of his master's thesis [37], gives a detailed description of the Range Confidence Factor (RCF), introduced by Jong Hwan Lim and Dong Woo Cho [21] to reduce unreliable ultrasonic readings, mainly specular reflections. He also proposed a new method that integrates the conflict factor of evidence theory into an ultrasonic sensor model to obtain an adaptive model, which reduces errors caused by specular reflections. A novel algorithm called the "outlier rejection algorithm" was also proposed in the same work for further eliminating unreliable readings. In the feature prediction paper [34], a method called the "Feature Prediction Algorithm" was introduced for detecting and handling specularly reflected readings.

In this thesis, we propose an approach for handling specular reflection in Chapter 6. It is an off-line data analysis performed in a batch processing manner. It has some similarities with the Feature Prediction Algorithm, but it is not the same. Our algorithm is also able to handle the cross-talk problem in some cases.

¹Transducer: the mechanism, or element, of the sensor that transforms the energy associated with what is measured into another form of energy. This term is often used interchangeably with sensor [27].


Chapter 3

Ultrasonic Sensors

Ultrasonic sensors have been used in our project, so a physical and theoretical description of the ultrasonic sensor follows as a consequence; this chapter contains these descriptions, together with some advantages and disadvantages, in a nutshell.

The ultrasonic sensor is a proximity sensor, which measures the relative distance between the sensor and an object in the environment. Since the sensor is mounted on the robot, it is a straightforward computation to translate the range relative to the sensor into a range relative to the robot at large. Most proximity sensors are active [27], i.e., they emit energy into the environment to either change it or enhance it. The ultrasonic sensor is also called sonar, and sonar is the most popular proximity sensor.

3.1 Physical Description

Sonar refers to any system that uses sound, in either air or water depending on the medium of the environment, to measure range. Generally ultrasonic sound, just beyond the edge of human hearing, is used in air. As a result, the terms "sonar" and "ultrasonics" are used interchangeably when discussing extracting range from acoustic energy.

Ultrasonic sonar is possibly the most common sensor on commercial robots operating indoors and on research robots. The sensors emit a sound and measure the time it takes for it to bounce back, so they are active, like most proximity sensors. The time of flight (the time from emission to return), together with the speed of sound in the environment (even though air changes density with altitude), is sufficient to compute the range of the object from the equation S = V × t, where V is the velocity of sound, t is half of the total traveling period and S is the calculated distance.
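
As a minimal sketch of this relation (not the sensor's firmware; the speed of sound constant and the function name are assumptions for illustration), the range computation can be written as:

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed value)

def range_from_time_of_flight(round_trip_time_s: float) -> float:
    """Return the sensor-to-object distance S = V * t,
    where t is half of the measured round-trip time."""
    t = round_trip_time_s / 2.0
    return SPEED_OF_SOUND * t

# Example: an echo received after 10 ms corresponds to about 1.7 m.
print(range_from_time_of_flight(0.010))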

A robotic sonar transducer is shown in Figure 3.1. It consists of a thin metallic membrane and is about the size and thickness of a dollar coin. A strong electrical signal generates a waveform, causing the membrane on the transducer to produce sound; a timer is started and the membrane becomes stationary. The reflected sound, or echo, vibrates the membrane; the signal is amplified and then thresholded on the returned signal strength. If too little sound is received, the sonar assumes the reflected sound is noise and ignores it. But if the received signal is strong enough to be valid, the timer is stopped and the time of flight is obtained. In the above equation, V and t are then known, so S, the distance between the sensor and the object in the environment, can be calculated.

Figure 3.1: Polaroid ultrasonic transducer. The membrane is the disk. This figure is adapted from [27].

In reality, when the sound wave is generated by the Polaroid transducer, the sound beam also produces multiple secondary sound waves, which interact over different regions of space around the transducer before dissipating. These secondary sound waves are called side lobes. Figure 3.2 gives the sound intensity pattern of the Polaroid ranging module.

Figure 3.2: Typical beam pattern of the Polaroid ultrasonic sensor (adapted from [15], Thomas Hellström's "Intelligent Robotics" class lecture, Spring 2005).

Most robotic systems, however, assume that only sound from the main, or centermost, lobe is responsible for a range measurement. It is generally accepted that the range of a sonar response forms a 30° wide cone at about 5 meters away, as shown in Figure 3.3. R is the range response produced by the sonar and α = 15° is the half angle of the sonar cone.


Figure 3.3: Sonar cone with a wide angle of 30° and range response R.

3.2 Advantages and Problems

Besides providing direct range measurements, ultrasonic sensors are cheap, fast and have terrific coverage. They are easy to maintain, which is one of the big advantages in practical use. The high range detection accuracy of sonar is another reason for its popularity: for the Polaroid sensor, a typical accuracy is 1% of the reading over the entire range. The ultrasonic sensor is able to provide range information from 0.1524 m (6 inches) to 10.668 m (35 feet) (but in our project, the range of the Amigobot used is from 0.1524 m (6 inches) to 3 m (10 feet)).

Despite the above advantages, ultrasonic sensors have many shortcomings and limitations. They are as follows (they have already been mentioned in Chapter 1, with definitions only):

• Angular uncertainty or foreshortening
Angular uncertainty means uncertainty about the angular information of a sonar response while detecting an object at a certain range. When an ultrasonic sensor gets a range response of R meters, the response simply represents a cone of angle 2α (according to Section 3.1, Figure 3.3) within which the object may be present. There is no way to find out exactly where the object is. If the surface is not perpendicular to the acoustic axis of the sonar, one side of the cone may reach the object first and return a range first. As shown in Figure 3.4, the VOF (view of focus) of the ultrasonic sensor is 2α, and the object can be anywhere in the shaded region for the response R. This angular uncertainty is also known as foreshortening.

So far there is no solution to this problem. But according to most researchers, for practical purposes it is assumed that the reading lies along the axis of the sound wave, i.e., along the acoustic axis [27].

• Specular reflection
Specular reflection refers to a sonar response that is not reflected back directly from the object, or not reflected back to the sonar at all. In specular reflection, the ultrasound wave emitted by the sonar hits the surface sufficiently far from its perpendicular, and the sound is reflected away from the reflecting surface, which results in a longer range being reported or in the detection of the object being missed altogether.


Figure 3.4: Angular uncertainty of a sonar reading. 2α is the VOF and R is the sonar response.

Figure 3.5: Specular Reflection

Specular reflection is due to the different relative positions of the ultrasonic transducer and the reflecting surfaces. Figure 3.5 shows sonar responses in two different situations. In Figure 3.5a, the sonar transducer's axis is perpendicular to the reflecting surface, i.e., the inclination angle is zero, so most of the sound energy is reflected directly back to the ultrasonic sensor and only a very small percentage of the energy is scattered in other directions. However, in Figure 3.5b, because the sonar transducer's axis is not perpendicular to the surface, much of the energy is reflected away, and this may produce a longer range reading or even no response at all. The amount of reflected sound energy depends strongly on the surface structure of the obstacle and on the inclination angle θ (see Figure 3.5).
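
As a rough illustration of this geometric condition (an assumed heuristic for exposition, not one of the algorithms developed in this thesis), a reading can be flagged as likely to be specular when the acoustic axis deviates from the surface normal by more than the half angle of the beam cone:

HALF_CONE_ANGLE_DEG = 15.0  # half angle of the 30 degree sonar cone

def likely_specular(incidence_angle_deg: float) -> bool:
    """Heuristic: if the acoustic axis deviates from the surface normal
    by more than the half cone angle, most energy is reflected away and
    the echo is likely to be lost or to report a longer range."""
    return abs(incidence_angle_deg) > HALF_CONE_ANGLE_DEG

print(likely_specular(5.0))   # near-perpendicular surface: strong echo expected
print(likely_specular(40.0))  # oblique surface: specular reflection likely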

• Cross-talk
When more than one sensor or a sonar ring is used, the problem of one sensor receiving the sound emitted by another sensor is called cross-talk. This is an undesirable phenomenon. Figure 3.6 explains the cross-talk problem in detail.

Figure 3.6: Cross-talking problem, adapted from Introduction to AI Robotics, R. Murphy [27].

Figure 3.7: Cross-talking problem, adapted from [3]. A Critical Path is any path of ultrasound that causes cross-talking.

In most cases, the cross-talk problem is the result of specular reflection [27]. But according to Johann Borenstein, it can also occur without specular reflection. Figure 3.7, adapted from his article [3], describes the situation. In the same article, he proposed a method, "Error Eliminating Rapid Ultrasonic Firing" (EERUF), which can almost eliminate the cross-talk problem.

In this thesis, we have tried to develop an algorithm, HSR (Chapter 6), which can handle specular reflection, as well as cross-talk caused by specular reflection.


Chapter 4

Sonar Based Map Building

This chapter describes the theory behind grid based map building. At the beginning of the chapter, map representation with the popular evidence grid is the main focus. Then the sensor model is discussed. Finally, for mapping sensor model values onto the evidence grid and fusing them with existing values, Bayesian sensor fusion, the most popular sensor fusion method, is discussed in detail.

4.1 Map Building

The robotic mapping problem is to acquire a spatial representation of the robot's working environment; in other words, it is a process of constantly updating the representation of knowledge about the robot's surroundings [7]. For collecting information about the environment, or to perceive the outside world, the robot needs devices called sensors. Cameras, range finders using sonar, laser and infrared technology, radar, tactile sensors, compasses and GPS are sensors commonly brought to bear on this task [35]. These sensors directly provide the environmental information as images or range readings. We have used ultrasonic sonar sensors for map building as they are easily available, inexpensive and easy to control [36, 27].

Generally, there are two approaches to sonar-based map building, according to the form of data representation:

• Feature-based world model
In this model, the physical world is represented by points, lines or surface descriptions of the environment derived from the sensed data. The Kalman filter and the Extended Kalman filter have been widely used in the feature-based world model to integrate the sensory information and maintain the map [20].

• Grid-based world model
Sensory information is represented on a two- or three-dimensional grid; each cell of the grid is either occupied or empty. This grid is also called an occupancy grid [27], as each element in the grid holds a value representing whether the corresponding location in space is occupied or empty. For ultrasonic sensors, the grid based model is more suitable than the other approach because of their unreliable readings [36]. This approach is based on a probabilistic model, Bayes' rule [35]. It allows a statistical expression of confidence in the correctness of the data by projecting the sonar response onto the grid.


We are using ultrasonic sensors in our project, and they have a wide angle of view (most commonly 30 degrees). The evidence grid representation is quite useful for turning wide-angle range measurements from ultrasonic sensors into detailed spatial maps. Moreover, diffuse multiple sonar readings can be integrated on the evidence grid into an increasingly confident and detailed map of a robot's surroundings [37]. Fusing the grid with other sensor readings increases the confidence values of the grid. Though the evidence grid approach is computationally expensive, it provides more robust and reliable navigation than other methods [23]. "The evidence grid approach has been used in a great number of robotic systems" [35]. For the above reasons, the evidence grid approach has been used for representing the physical world in our project too.

4.2 Evidence Grid

In 1983, Martin C. Martin and Hans P. Moravec first formulated the evidence grid at the CMU (Carnegie Mellon University) Mobile Robot Laboratory to turn wide-angle range measurements from cheap ultrasonic sonars mounted on a robot into spatial maps [25, 23]. "It accumulates diffuse evidence about the occupancy of a grid of small volumes of nearby space from individual sensor readings into increasingly confident and detailed maps of a robot's surroundings" [23].

The evidence grid is a two- or three-dimensional grid (2D is most commonly used, and we use it too) corresponding to the actual world. In the evidence grid, the world is defined as a discrete spatial lattice in which each cell of the grid is a discrete state variable [15, 10]. The evidence grid approach is a spatial world representation based on probability. Each cell of the grid gets a value for the belief that it is occupied by an object, empty or unknown, derived from each sensor readout and some evidential method¹. The grid can contain considerable information about the environment, such as occupancy by objects, reachability and danger, which can be used for various activities such as exploration, localization, motion planning, landmark identification and obstacle avoidance for a mobile robot. Using a 2D grid, the 3D shape of an object cannot be known, but the position of the object can be found. The most common evidential methods for occupancy grids are Bayesian, Dempster-Shafer and HIMM [15, 27]. We have used the Bayesian approach, so only the Bayesian method is discussed. Figure 4.1 is an example of an occupancy grid.

In Figure 4.1, red, blue and green cells represent high evidence for being occupied, empty and unknown respectively. At the beginning, the environment is assumed to be unknown. The robot moves along the Y-axis while collecting the ultrasonic sensors' responses. No information is available beyond the maximum range, 10 units in the figure.
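
As a minimal sketch of this representation (the grid size, cell resolution and variable names are illustrative assumptions, not values from this project), an occupancy grid can be stored as a 2D array of occupancy probabilities initialized to the unknown value 0.5:

# Minimal occupancy grid sketch: a 2D array of P(Occupied) values.
CELL_SIZE_M = 0.1          # each cell covers 10 cm x 10 cm (assumed)
WIDTH, HEIGHT = 100, 100   # a 10 m x 10 m map (assumed)

# 0.5 encodes "unknown"; values near 1 mean occupied, near 0 mean empty.
grid = [[0.5 for _ in range(WIDTH)] for _ in range(HEIGHT)]

def world_to_cell(x_m: float, y_m: float) -> tuple[int, int]:
    """Map world coordinates (meters) to grid indices."""
    return int(x_m / CELL_SIZE_M), int(y_m / CELL_SIZE_M)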

One of the most attractive features of the occupancy grid is the possibility of incorporating information from sensors of different types in a single grid. Very accurate localization using an evidence grid, for a robot equipped with a set of 16 sonar sensors and a triangulation-based structured light range finder, has been demonstrated experimentally by A. C. Schultz and W. Adams [33].

4.3 Sonar Sensor Model

Figure 4.1: An example of an occupancy grid (taken from Thomas Hellström's class lecture [15]). Red, blue and green cells represent occupied, empty and unknown respectively.

¹Evidential Method: a sensor model together with some process to get numerical values from the sensor model.

In order to incorporate raw sensory information (sensor readouts) into the occupancy grid, a common presentation of the sensor information is essential, and the sensor model function is used for converting sensory information to this common representation. Whatever method of updating uncertainty is used requires a sensor model. A sensor model can be generated in a number of ways [27]:

• Empirical method: based on testing. Test the sonar and collect data on the correctness of the sensor model result. The sensor model is formed from the set of beliefs for all possible observations, and the beliefs come from the frequency of correct readings in an observation.

• Analytical method: focuses on the physical properties of the sensor, i.e., the sensor model is generated from an understanding of the physical properties of the device.

• Subjective method: the result of the designer's experience, an unconscious expression of empirical testing.

The Polaroid ultrasonic sensor, or sonar, has been heavily studied, but the theoretical concepts of scoring, fusing beliefs and so on are also applicable to any sensor [27]. We have used an Amigobot ActiveMedia robot containing 8 ultrasonic sonars [2].

Most roboticists have converged on a model of sonar uncertainty that looks like Figure 4.2; Figure 4.3 is the three-dimensional view of the standard sensor model.

This (Figure 4.2) is a basic model of a single sonar beam. The sonar beam has a field of view specified by β, the half angle representing the width of the cone, and R, the maximum range the sensor can detect. This field of view can be projected onto the evidence grid. From Figure 4.2, the model can be divided into four different regions:

• Region 1: The elements of this region are probably occupied (drawn as a 'hill' in Figure 4.3) by an object, as the readout has been returned from this region.


Figure 4.2: Two-dimensional representation of a sonar beam projected onto an occupancy grid. β is the half angle of the sensor cone width and R is the maximum range of a sonar. Grid elements are divided into four regions (1, 2, 3 and 4).

Figure 4.3: Three-dimensional representation of a sonar beam (taken from R. Murphy [27]).

• Region 2: The affected elements of this region are probably empty (drawn as a 'valley' in Figure 4.3), as this region lies between the sonar and the obstacle.

• Region 3: This region is beyond the sensor readout but inside the field of view and within the maximum range of the sonar. So, logically, the condition of the affected elements is unknown (drawn as a flat surface in Figure 4.3).

• Region 4: This region is outside the sonar sensor's field of view, so nothing is known about the elements of this region.

For a given range reading, the elements of Region 2 are more likely to be truly empty than the elements of Region 1 are to be truly occupied [27]. Elements close to the acoustic axis are more reliable than those towards the edge of the sonar cone, regardless of whether they are empty or occupied. In a nutshell (see the sketch after this list):

• The closer to the acoustic axis, the higher the belief

• The nearer to the origin of the sonar beam, the higher the belief.
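
To make the geometry concrete, the following sketch (an illustrative reading of Figure 4.2 with assumed variable names, not code from this project) classifies a grid cell into one of the four regions given its distance r from the sensor and its angle α from the acoustic axis, for a reading s, maximum range R and half cone angle β:

def classify_region(r: float, alpha_deg: float, s: float, R: float,
                    beta_deg: float, tolerance: float) -> int:
    """Classify a grid cell against a single sonar reading (see Figure 4.2).
    r: distance of the cell from the sensor origin
    alpha_deg: angle of the cell from the acoustic axis
    s: sensor readout, R: maximum range, beta_deg: half cone angle
    tolerance: half width of the band around s treated as Region 1."""
    if abs(alpha_deg) > beta_deg or r > R:
        return 4                      # outside the field of view
    if abs(r - s) <= tolerance:
        return 1                      # probably occupied ("hill")
    if r < s - tolerance:
        return 2                      # probably empty ("valley")
    return 3                          # beyond the reading: unknown

print(classify_region(r=6.0, alpha_deg=0.0, s=6.0, R=10.0,
                      beta_deg=15.0, tolerance=0.5))  # -> 1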

The sensor model discussed above reflects a general concept. There are several different ways to convert the model into a numerical value for belief. Bayesian sensor fusion, or the Bayesian approach, is one of the most popular [25, 27]; two other methods, Dempster-Shafer theory and HIMM, are also well known. Each of the three methods does the translation slightly differently [27]. In our project we have used the Bayesian approach, so only the Bayesian approach is discussed in detail in the next section.

4.4 Bayesian Updating/ Bayesian Sensor Fusion

The most popular method for fusing evidence is to translate sensor readouts into probabilities and to combine the probabilities using Bayes' rule. In the early 1980s, Moravec and Elfes at CMU (Carnegie Mellon University) pioneered this probabilistic approach, and later Moravec turned to a form of Bayes' rule that uses probabilities expressed as likelihoods and odds. This has some computational advantages, as well as some problems with priors.

In the Bayesian approach, the sensor model generates conditional probabilities of the form P(s|H). These are then converted into the desired P(H|s) by using Bayes' rule. It is also possible to fuse two probabilities, either from two different sensors sensing at the same time or from two different times, using Bayes' rule. In the following subsections these steps are described one by one; this part has been adapted from R. Murphy's Introduction to AI Robotics [27].

4.4.1 Brief Theoretical Review of Bayesian Method

Let H stand for the "hypothesis" obtained from an experiment. In our case (updating an occupancy grid using ultrasonic sonar), the experiment is sending the acoustic wave out and measuring the time of flight, and the outcome of the range reading reports whether the region being sensed is Empty or Occupied. So the hypothesis for an element grid[i][j] is Empty or Occupied. As a consequence, it can be written H = {H, ¬H} or H = {Occupied, Empty}. Moreover, a probability function scores evidence on a scale of 0 to 1 for a particular event H.

The probability that H has really occurred is denoted by P(H):

0 ≤ P(H) ≤ 1

One of the most useful features of probabilities is that if the probability of an event happening is known, then the probability of it not happening can be found easily, since the probabilities for an event sum to 1. So if P(H) is known, P(¬H) can be computed from the following formula:

P (¬H) = 1− P (H)

So far we have discussed unconditional probabilities. As these probabilities give us only a priori information and are quite independent of the sensor readings s, they are not very interesting. It is more useful for a robot to have a function computing the state, or probability, of an element grid[i][j] for a particular sensor readout s, where the state is either Occupied or Empty. Using conditional probabilities, we can get this sort of information. P(H|s) represents the probability that H has really occurred given a particular sensor reading s; the "|" denotes "given". Interestingly, conditional probabilities² also have the property

P (H|s) + P (¬H|s) = 1.0

For the occupancy grid, P(Occupied|s) and P(Empty|s) are calculated for each element grid[i][j] covered by the sensor scan s. At each grid element, the tuple of the two probabilities for the region is stored. But a transfer function is still needed that can translate sensor readings into the probabilities for each element in the manner of Figure 4.2. The discussion of this transfer function is available in the next subsection (Subsection 4.4.2, "Getting Numerical Values"). We can proceed assuming that we have these numerical values, because they are not essential for the discussion right now.

From the sensor model we get P(s|H): the probability that the sensor would return the value being considered given that the region was really occupied. But we need the probability that the area at grid[i][j] is really occupied given a particular sensor reading, s: P(H|s). So we need a relation to get the probability P(H|s) from P(s|H), and Bayes' rule is the tool that specifies the relationship between P(s|H) and P(H|s). Following is a short note on Bayes' rule adapted from [22, 31].
Bayes' Rule:

Let A and B be two events with P(A) > 0 and P(B) > 0, and let P(A|B) be the conditional probability of event A given event B. Then

P(A|B) = P(A ∩ B) / P(B)    (4.1)

where P(A ∩ B) is the probability of the intersection of A and B, with

A ∩ B = {x : x ∈ A and x ∈ B}    (4.2)

We can write (4.1) as follows:

P (A ∩B) = P (A|B)P (B) = P (B|A)P (A) (4.3)

If {Bi} is a countable set of exhaustive events and A is an arbitrary event, then according to the total probability rule

P(A) = Σ_{i∈I} P(Bi) P(A|Bi)    (4.4)

²In [27], page 381, it is written "Unconditional probabilities also have the property". This appears to be a printing mistake; it should read "Conditional probabilities also have the property".


where

P(Bi) > 0 and Bi ∩ Bj = ∅ (i ≠ j; i, j ∈ I),

A = ⋃_{i∈I} (A ∩ Bi)

and I ⊆ {1, 2, 3, ..., n}.

Based on the conditional probability and total probability rules described in Equations (4.1) to (4.4), the Bayesian formula can be generalized as [22],

P(Bj|A) = P(A|Bj) · P(Bj) / Σ_{i∈I} P(Bi) P(A|Bi)    (4.5)

where {Bi} is a countable set of exhaustive events such that P(Bi) ≥ 0 and A is an event with P(A) ≥ 0.

Bayes formula (4.5) can be expressed informally in English [31] as

posterior = (likelihood × prior) / evidence    (4.6)

From equation (4.5), it is clear that by observing the value of A, we can convert P(A|Bj) into the posterior probability P(Bj|A), the probability of the state being Bj given that A has been measured, with the help of the prior probability P(Bj). The term P(A|Bj) is called the likelihood of Bj with respect to A. The posterior probability is thus proportional to the product of the likelihood and the prior probability; the evidence factor, Σ_{i∈I} P(Bi) P(A|Bi), works as a scaling factor that ensures the posterior probabilities sum to one, as all proper probabilities must [31].

So, from the above short note on Bayes' rule, we can get our expected probability P(H|s) from P(s|H) by using relation (4.5) as follows:

P(H|s) = P(s|H) P(H) / [P(H) P(s|H) + P(¬H) P(s|¬H)]    (4.7)

Substituting Occupied for H, equation (4.7) becomes

P(Occupied|s) = P(s|Occupied) P(Occupied) / [P(Occupied) P(s|Occupied) + P(Empty) P(s|Empty)]    (4.8)

P(s|Occupied) and P(s|Empty) are obtained from the sensor model. The other terms, P(Empty) and P(Occupied), are unconditional probabilities, usually called priors. If the prior probabilities are known, it is straightforward to convert the probabilities obtained from the sensor model to the form of the occupancy grid. In some cases the prior probabilities are known, but in most cases they are unknown. If this knowledge isn't available, it is assumed that P(Occupied) = P(Empty) = 0.5, according to [27].
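As a small numerical illustration of equation (4.8), the following Python sketch (not part of the thesis implementation; the function name is chosen only for this example) converts a sensor-model value P(s|Occupied) into P(Occupied|s) using the priors:

    def occupied_given_s(p_s_occ, prior_occ=0.5):
        """Bayes' rule (4.8): convert P(s|Occupied) into P(Occupied|s).

        p_s_occ   -- sensor model value P(s|Occupied)
        prior_occ -- prior P(Occupied); 0.5 when nothing is known
        """
        p_s_emp = 1.0 - p_s_occ        # P(s|Empty), cf. eq. (4.10)
        prior_emp = 1.0 - prior_occ    # P(Empty)
        num = p_s_occ * prior_occ
        return num / (num + p_s_emp * prior_emp)

    # With uninformative priors of 0.5 the posterior equals the likelihood:
    print(occupied_given_s(0.6))       # prints 0.6

With priors of 0.5 the prior factors cancel, so the posterior equals the sensor-model value; with unequal priors the two quantities differ.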

4.4.2 Getting Numerical Values

This section describes in detail how to get numerical values of P(s|Occupied) and P(s|Empty) from the sensor model described in Section 4.3. It is one of the most important parts of this MS Thesis.


According to R. Murphy [27], one set of functions that quantifies the sensor model into probabilities is as follows:

Each grid element falling in Region 1 should be updated using the following equations:

P(s|Occupied) = [ (R − r)/R + (β − α)/β ] / 2 × Max_occupied    (4.9)

P (s|Empty) = 1.0− P (s|Occupied) (4.10)

where
r = the distance of the grid element from the sensor position
α = the angle of the grid element from the acoustic axis of the sensor
R = the maximum range that the sensor can sense
β = the width of the sensor cone
Max_occupied expresses the assumption that a reading of occupied is never completely reliable [27].

From the above equations we can see some properties a sensor model should have. The term (β − α)/β represents the idea that the closer a grid element is to the acoustic axis, the higher the belief, and the term (R − r)/R captures the idea that the nearer a grid element is to the origin of the sonar beam, the higher the belief. Generally the value of Max_occupied is 0.98, which means a grid element can never have a belief of being occupied higher than 0.98. Another important quantity is the thickness or width of Region 1, generally called the tolerance, ε. The tolerance arises from the resolution of the sonar. One common value of the tolerance is ±0.5, but it may take other values too. The width of Region 1 depends on this parameter, so the tolerance has a significant impact on how many grid elements are covered; in other words, Region 1 has a finite thickness because of the tolerance. For a pictorial explanation, please see Figure 4.4. The dark cell represents the element of interest in the grid.

Figure 4.4: An example of calculating (α, r) for elements of Region 1. Here the sensor reading is s = 6 and the cell of interest is the dark one, R = 10 and β = 15 (taken from R. Murphy [27]).


Every grid element falling in Region 2 is updated according to the following formulae:

P (s|Occupied) = 1.0− P (s|Empty) (4.11)

P(s|Empty) = [ (R − r)/R + (β − α)/β ] / 2    (4.12)

All parameters have the same meaning as for Region 1, except that Max_occupied is not used, because in Region 2 a grid element is assigned a probability of being empty [27]. Figure 4.5 explains the generation of probabilities for Region 2 with an example calculation.

Figure 4.5: An example of calculating (α, r) for elements of Region 2. Here the sensor reading is s = 6 and the cell of interest is the dark one, R = 10 and β = 15 (taken from R. Murphy [27]).

The sensor model obtained from equations (4.9), (4.10), (4.11) and (4.12) looks like Figure 4.6.
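To make equations (4.9)-(4.12) concrete, the following Python sketch (an illustrative re-implementation, not the code used in this project) computes the pair (P(s|Occupied), P(s|Empty)) for a grid element at distance r and angle α from the sensor, given a reading s:

    def standard_sensor_model(r, alpha, s, R=10.0, beta=15.0,
                              tolerance=0.5, max_occupied=0.98):
        """Murphy-style sensor model, eqs. (4.9)-(4.12).

        r, alpha -- distance and angle of the grid element w.r.t. the sensor
        s        -- the range reading
        R, beta  -- maximum range and width of the sonar cone
        Returns (P(s|Occupied), P(s|Empty)), or None if the element is
        beyond the regions affected by this reading.
        """
        common = ((R - r) / R + (beta - alpha) / beta) / 2.0
        if abs(r - s) <= tolerance:            # Region 1: around the reading
            p_occ = common * max_occupied      # eq. (4.9)
            return p_occ, 1.0 - p_occ          # eq. (4.10)
        elif r < s - tolerance:                # Region 2: probably empty
            p_emp = common                     # eq. (4.12)
            return 1.0 - p_emp, p_emp          # eq. (4.11)
        return None                            # beyond Region 1: no update

Elements farther away than s + tolerance are simply left without an update in this sketch.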

So far we have discussed only a single reading; the fusion of more than one reading remains. The following subsection explains how to fuse one reading with other readings on the evidence grid using the Bayesian method.

4.4.3 Updating With Bayes’ Rule

As we don't have any prior knowledge about the environment, we can assume the prior probabilities to be 0.5, i.e., P(H) = P(¬H) = 0.5³, according to the "Brief Theoretical Review of Bayesian Method" (Subsection 4.4.1) of this report. So the grid is initialized with 0.5, and as a result the first update is quite simple. For the first observation, Bayes' rule is applied to compute the new probability, and the prior P(H) is replaced with the new value.

³The use of 0.5 as priors can give the impression that P(Occupied|s) and P(s|Occupied) are interchangeable, but in general P(H|s) ≠ P(s|H) [27].


Figure 4.6: (a) Two-dimensional and (b) three-dimensional representation of a sonar beam projected onto an occupancy grid using equations (4.9), (4.10), (4.11) and (4.12).

A problem arises for the second and subsequent observations, or for observations made by other sensors at the same time, because then the priors are not known directly. But we can use Bayes' rule iteratively, so that the probability at time t_{n−1} is used as the prior for calculating the probability at the current observation, t_n. With this concept, Bayes' rule becomes:

P(H|s1, s2, ..., sn) = P(s1, s2, ..., sn|H) P(H) / [P(s1, s2, ..., sn|H) P(H) + P(s1, s2, ..., sn|¬H) P(¬H)]    (4.13)

where s1, s2...sn are the n multiple observations.

From equation (4.13), we see that we would need a sensor model that can provide P(s1, s2, ..., sn|H), the occupied or empty evidence for grid element grid[i][j] given all n sensor readouts together, instead of only P(s|H), the evidence for a single readout. We can simplify P(s1, s2, ..., sn|H) to P(s1|H) P(s2|H) ... P(sn|H) if we consider s1 to be the result of a different experiment than s2 and the others. But then we would need to keep all previous (n−1) readings, and it is impossible to predict how many times an element of the grid will be sensed. To implement this idea, one would have to maintain a linked-list data structure for each grid element, and such a list can grow very long, even hundreds of elements. Moreover, equation (4.7) needs only 3 multiplications, whereas this update would need 3(n−1) multiplications. So from a programming point of view, it is not realistic.

Fortunately, a recursive version of Bayes' rule can be derived by using the property P(H|s) P(s) = P(s|H) P(H). The recursive Bayes' rule is as follows:

P(H|s_n) = P(s_n|H) P(H|s_{n−1}) / [P(s_n|H) P(H|s_{n−1}) + P(s_n|¬H) P(¬H|s_{n−1})]    (4.14)

So with each new observation, equation (4.14) is computed and the result is stored at grid[i][j]. As the rule is commutative, the order in which simultaneous readings are applied to the grid does not affect the result.
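A minimal Python sketch of the recursive update (4.14), assuming each grid cell stores P(H|s_1...s_{n−1}) and is initialized to 0.5 (illustrative only, not the project code):

    def bayes_update(prior_occ, p_s_occ):
        """Recursive Bayes' rule (4.14) for one grid cell.

        prior_occ -- P(Occupied | s_1..s_{n-1}), the value stored in the cell
        p_s_occ   -- P(s_n | Occupied) from the sensor model for the new reading
        Returns the new P(Occupied | s_1..s_n) to store back in the cell.
        """
        p_s_emp = 1.0 - p_s_occ
        num = p_s_occ * prior_occ
        return num / (num + p_s_emp * (1.0 - prior_occ))

    # Fusing three readings that all support "occupied"
    cell = 0.5
    for evidence in (0.6, 0.7, 0.65):
        cell = bayes_update(cell, evidence)
    print(cell)   # the belief in Occupied grows above 0.5

Because the update is commutative, feeding the evidence values in any order gives the same final belief.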


Now we have all the information necessary for applying the Bayesian method to an evidence grid for robotic map building. All maps in this report have been developed using the method described above, except for the equations converting the sensor model to numerical values. We have proposed a new sensor model, the Parameterized Sensor Model, with different transfer equations. Chapter 5 contains the details of our proposed model.


Chapter 5

Proposed Sensor Model and Measurement of Map Goodness

In this chapter, we first describe our proposed parameterized sensor model in Section 5.1, together with a description of its parameters. Section 5.2 and Section 5.3 cover parameter optimization and the map-goodness measurement methods that have been tried in this project, with their implementations. Results, discussion and conclusions, with future work for this part of the project, are available in Section 5.4.

5.1 Proposed Parameterized Sensor Model

We¹ have tried to improve the sensor model described in detail in Chapter 4. The Bayesian approach for building sonar-based maps was covered in depth there, with equations to convert sensor model values into probabilities. But if we look at Figure 4.6 (page 24), we find some discrepancies with the expected sensor model shown in Figure 4.3 (page 18). From the theoretical point of view of a good sensor model, Region 1 should not contain belief values that suggest empty space to the robot, and likewise Region 2 should not contain occupied information. In the 2D view of the standard sensor model in Figure 4.6, there are some red cells at the corner of the sensor cone in Region 2 as well as some blue cells at the corner of Region 1. From the description of the evidence grid, we know that red cells and blue cells stand for high probability of being occupied and of being empty, respectively. In the 3D view of the sensor model, these cells (red in Region 2 and blue in Region 1) appear as spikes in the opposite direction, i.e., upward peaks in Region 2 and downward spikes in Region 1.

Again, because of the sensor's sensitivity (the resolution of the sonar), the exact distance or range to the obstacle cannot be trusted one hundred percent [27]. That is why the sensor model uses a tolerance value in Region 1; the tolerance is the parameter that tells us the error range of the sensor model. If we look at the sensor model described in Chapter 4 and plot the trend of beliefs (probabilities), both Empty and Occupied, along the acoustic axis, where the angle between the grid element and the acoustic axis is zero, i.e., α = 0 (α is the notation from Chapter 4), we get a graph like Figure 5.1.

¹In this report, "we" is used instead of "I" in several sections and chapters, especially in the chapters on our proposed models and methods, as it reads more naturally.


Figure 5.1: Probability trend for a sensor readout along the acoustic axis, α = 0. Sensor readout s = 5, Max_occupied = 0.9 and tolerance = ±0.5 in Region 1. r is the distance of the grid element from the sensor origin. The graph was generated by simulating equations (4.9), (4.10), (4.11) and (4.12).

In Figure 5.1, the following features are noticed in Region 1:

• at the beginning of the region the belief is highest, and

• towards the end of the region the belief decreases gradually.

But it is more logical to give the highest belief to the point corresponding to the returned sensor reading, with both sides of that point receiving lower beliefs. For example, if the sensor reading is 500 units, the point exactly 500 units away from the origin of the sensor should be more reliable than the other points in Region 1. As a result, the belief or probability trend for Region 1 should look like Figure 5.2.

Figure 5.2: Proposed belief/probability trend for Region 1 along the acoustic axis, α = 0. This is the generic model, without specifying any parameters. Here, s = assumed exact sensor readout obtained by a sensor, tol = tolerance for Region 1.

Figure 5.2 gives the model only for Region 1 on the acoustic axis, α = 0. It represents just the generic model of Region 1 without saying anything about the parameters, such as how much higher the probability at s should be with respect to the other points of Region 1, or with respect to the horizontal line shown in the figure. Not even the names or symbols of the parameters are mentioned here. The detailed description comes later in this section with the complete proposed model.

If we look at Region 2 in Figure 5.1, it is noticeable that the lowest belief in this region is zero (Pr = 0) at the beginning of the region. By analyzing equations (4.11) and (4.12) (page 23), which convert sensor model values to probabilities or beliefs for Region 2, the highest belief in this region goes up to 0.5 (if r = R and α = 0). But a belief or probability of 0.5 states the status of a grid element, grid[i][j], as unknown, i.e., neither Occupied nor Empty. That (a belief of 0.5 in Region 2) is not the correct interpretation for a good sensor model. So we have the opportunity to modify the belief of Region 2 with a parameter that controls the highest belief of Region 2. Again, if the belief of a grid element, grid[i][j], is zero or very small (close to zero), and the same element is judged by another sensor as occupied or as a non-zero value, then the updating of that element can be zero or very slow. This happens if the sensor gives readings shorter than a threshold value. For example, for the Amigobot ActivMedia robot (http://www.activrobots.com/ROBOTS/index.html), objects that are closer than 10 cm are reported as 10 cm away [2]. But the same places can be reported as occupied with values greater than 10 cm by another sensor, or at another time by the same sensor. So it is not wise to set the lower probability to zero; it should be more than zero, and we can control it with another parameter. As a result, the generic model of Region 2 looks like Figure 5.3.

Figure 5.3: Proposed belief/probability trend for Region 2 along the acoustic axis, α = 0. This is the generic model, without specifying parameters, like Region 1. Here, s = assumed exact sensor readout obtained by a sensor, tol = tolerance for Region 1.

Similarly, Region 1 should not report the status of a grid element as unknown. So another parameter, which controls the lowest belief value, can be introduced in Region 1, and Figure 5.2 can be modified into Figure 5.4.

From the above discussion, we can derive a complete modified probability trend for the acoustic axis, where α = 0. Including the parameters, the proposed model looks like Figure 5.5.

Figure 5.4: Updated version of Figure 5.2.

Figure 5.5: Proposed Parameterized Sensor Model.

Actually, we have defined the sensor model in two phases. One phase is for the acoustic axis, i.e., for α = 0, and the other phase is for 0 < α ≤ the view of focus of the sensor. Our concept is to develop the model for the acoustic axis and then distribute the probabilities/beliefs to both sides of the acoustic axis. So, the highest priority has been given to the acoustic axis.

In Figure 5.5, the notation P(S|O_{r,α}) represents the probability/belief of the grid element that lies at angle α and straight-line distance r from the sensor origin. So the belief graph P(S|O_{r,α=0}) vs. r is the model for the acoustic axis, and equation (5.1) distributes the probability values over the whole area covered by the sensor's view of focus. If the view of focus of the sensor is P5, the distribution equation for the range 0 < α ≤ P5 is as follows:

P(S|O_{r,α}) = ((P5 − α)/P5) · (P(S|O_{r,α=0}) − 0.5) + 0.5    (5.1)

This equation controls:

• The probability of any grid element, grid[i][j], in Region 2 off the acoustic axis will not be more than 0.5, so no spikes show up in this region.

• There will be no valley in Region 1, as the probability is at least (0.5 + P2) ≥ 0.5, where P2 ≥ 0.

• With increasing values of α, P(S|O_{r,α}) decreases; that is, the farther from the acoustic axis, the lower the belief. This is one of the most important features of a good sensor model according to many researchers [27, 15].

Our proposed sensor model is shown in 2D in Figure 5.6 and in 3D in Figure 5.7.



Figure 5.6: 2D view of the Proposed Parameterized Sensor Model with sensor readout s = 7.5 units. The x-axis is along the sensor readout.


Figure 5.7: 3D view of the Proposed Parameterized Sensor Model with sensor readout s = 7.5 units. The x-axis is along the sensor readout. No spikes appear in the opposite direction in Region 1 or Region 2.

In our proposed Parameterized Sensor Model we use several parameters to control the probabilities/beliefs. A description of those parameters follows:

• Parameter 1 (P1): This parameter works as the threshold for the minimum probability value in Region 2 on the acoustic axis. The purpose of this parameter has already been described briefly above. Using P1 = 1 − P4 is a good idea, for the reason given in the description of P4, but it is also interesting to assign an individual value to this parameter.

• Parameter 2 (P2): This parameter controls the highest probability in Region 2 and the lowest probability in Region 1 along the acoustic axis. In other words, this parameter is the threshold between known (empty or occupied) and unknown (probability = 0.5) elements along the acoustic axis. If one wants to allow the transition probability 0.5 between Region 1 and Region 2, setting P2 = 0 is enough.

• Parameter 3 (P3): P3 denotes the tolerance for Region 1. Several researchers use ε (epsilon) to represent this tolerance, and we also used ε in Chapter 4. For consistency of naming, we use the P notation from now on.

• Parameter 4 (P4): The maximum probability of occupancy for a grid element is controlled by P4. The reason has already been described above in this section and also in Chapter 4. As P1 is the maximum probability of empty for a grid element, one can set P1 = 1 − P4, so that Occupied_Maximum + Empty_Maximum = 1². This is not a mandatory rule, just an interesting relation between P1 and P4.

• Parameter 5 (P5): The view of focus of the sensor is one of the most important parameters in sonar-based map building, and P5 stands for this. P5 is not used in the acoustic-axis part of our proposed model.

We convert the sensor model value to a probability using linear (straight-line) equations along the acoustic axis. In Figure 5.5, we have used line1, line3 and line4 to represent these linear equations. They are as follows:

P(S|O_{r,α=0}) =
    line1,  if r < (S − P3)
    line3,  if (S − P3) ≤ r < S
    line4,  if S ≤ r ≤ (S + P3)    (5.2)

We can also express these equations explicitly in straight-line form:

line1: y = P1 + (0.5− P1 − P2)/(S − P3) ∗ r ; if r < (S − P3) (5.3)

line3: y = (0.5 + P2) + (P4 − 0.5− P2)/P3 ∗ (r − S + P3) ; if (S − P3) ≤ r < S (5.4)

line4: y = P4 + (0.5 + P2 − P4)/P3 ∗ (r − S) ; if S ≤ r ≤ (S + P3) (5.5)

In our sensor model, we have divided the working area into two regions, like the existing model (discussed in Chapter 4). We can express these regions as follows:
Region 1 (R-I): This region is represented by line3 and line4 (see Figure 5.5). line3 denotes the increasing probabilities, whereas line4 stands for the decreasing beliefs in Region 1. The intersection of line3 and line4 is the peak value, which is the probability or belief at the sensor readout s; that is the highest belief in Region 1.
Region 2 (R-II): line1 (see Figure 5.5) is the sensor model for this region along the acoustic axis.
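The following Python sketch (an illustrative re-implementation of equations (5.1)-(5.5), with hypothetical function names, not the project code) computes the belief of a grid element at distance r and angle α for a reading S and a parameter set P1...P5:

    def psm_axis(r, S, P1, P2, P3, P4):
        """Proposed model on the acoustic axis (alpha = 0), eqs. (5.2)-(5.5)."""
        if r < S - P3:                                    # Region 2: line1
            return P1 + (0.5 - P1 - P2) / (S - P3) * r
        elif r < S:                                       # Region 1: line3
            return (0.5 + P2) + (P4 - 0.5 - P2) / P3 * (r - S + P3)
        elif r <= S + P3:                                 # Region 1: line4
            return P4 + (0.5 + P2 - P4) / P3 * (r - S)
        return None                                       # beyond Region 1

    def psm(r, alpha, S, P1=0.02, P2=0.15, P3=1.0, P4=0.98, P5=15.0):
        """Distribute the acoustic-axis belief over the cone, eq. (5.1)."""
        if alpha > P5:
            return None                                   # outside the view of focus
        axis = psm_axis(r, S, P1, P2, P3, P4)
        if axis is None:
            return None
        return (P5 - alpha) / P5 * (axis - 0.5) + 0.5

On the acoustic axis (α = 0) the sketch reproduces the piecewise-linear trend of Figure 5.5, and for α > 0 the belief is pulled towards 0.5 exactly as equation (5.1) prescribes.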

So far, we have described all regions, the parameters, the probability conversion functions of the sensor model along the acoustic axis (α = 0), and the equation (equation (5.1)) for distributing the probabilities of the acoustic axis over the view of focus of the sensor. Now we can take a comparative look at the output of our sensor model and the model discussed in Chapter 4. Maps of the same environment, generated by the existing and the proposed sensor model with the same parameter values as far as possible, are shown in Figure 5.8.

²Empty_Maximum means the lowest probability.



Figure 5.8: Maps built using (a) the Standard Sensor Model (described in Chapter 4) with Beta = 15, Tolerance = 1.0, MaxOccupied = 0.98, and (b) the Proposed Parameterized Sensor Model with some arbitrary parameter values: P1 = 0.02 (P1 = 1 − P4), P2 = 0.15, P3 = 1.0, P4 = 0.98, P5 = 15. For data collection, an Amigobot (with 8 ultrasonic sensors) was used.

In Figure 5.8b, the proposed sensor model has been used with some arbitrary parameter values (P1 = 0.02 (P1 = 1 − P4), P2 = 0.15, P3 = 1.0, P4 = 0.98, P5 = 15). But the proposed model works better with some specific set (or sets) of parameters. We call this set the "optimal parameter set" (in this report we speak of a single set of parameters, though several sets are also possible). In Section 5.2 of this chapter, we describe the methods for finding the optimal parameter set.

5.2 Parameter Optimization of the Proposed Model

In order to get the best result with the proposed sensor model described in the above section, an optimal parameter set is needed. The optimal parameter set can be found by measuring the goodness of the maps generated by the sensor model. Some existing methods for this measurement have been described in Section 2.2 of the Literature Review chapter. In this project, however, we have tried some new approaches to measuring the goodness of maps. So far, all existing methods use an original handcrafted map of the working environment (ground map) for this purpose, whereas we have tried approaches that can give this measurement without any handcrafted map of the working environment. The next section of this chapter explains these approaches in detail, together with their implementation.


5.3 Measurement of map goodness

In this MS Thesis project, we have tried to find a statistical method for measuring the goodness of maps without using a ground map³, in several variant approaches. The basis of our first approach is that a good sensor model should give as uniform an opinion as possible about each cell of the occupancy grid, whether the cell denotes an empty or an occupied place in the environment. As a result, the standard deviation of the values assigned to a cell during map generation should be as small as possible. For example, let grid[x][y] represent a cell observed as occupied by several sensor readings. Then grid[x][y] is updated with several different (or equal) values obtained from the sensor model, using the Bayesian updating process. The more similar the values obtained from the sensor model, the closer to optimal the sensor model is. Another approach is to estimate the noise in a map by using the derivative of the image (described in detail under "Derivative of maps and Comparison" below) and comparing the noise among all maps; the map containing the lowest noise is the best one. The third approach is to run the robot more than once in the same environment with the same odometry⁴ position and heading, compare the resulting maps for different sets of parameters, and assign a score based on the matching method of the first method of [26]. The highest correct matching and the lowest incorrect matching denote the best map. This approach is described under "Comparing two different maps of same Environment" below.

Mean of Standard Deviations (First Approach): This is our first approach, based on the standard deviation. For each cell of the mapping grid of a single map, the numerical values generated by our proposed model are saved and the standard deviation of those values is calculated. If the grid's dimension is L × M × N, then for each cell X_{i,j} in the L × M plane the standard deviation over the third dimension (towards N) is

σ_{i,j} = sqrt( (1/(N−1)) Σ_{k=1}^{N} (X_{i,j,k} − X̄_{i,j})² )

where i ∈ L and j ∈ M, and X̄_{i,j} = (1/N) Σ_{k=1}^{N} X_{i,j,k}.

So we have an L × M matrix MSD containing the standard deviation σ for each element:

MSD = [ σ_{1,1}  σ_{1,2}  ...  σ_{1,M}
          ...      ...          ...
        σ_{L,1}  σ_{L,2}  ...  σ_{L,M} ]

But we are interested only in the non-zero elements of MSD. A vector consisting of the non-zero σ_{i,j} can be formed from MSD; in this analysis this vector is called the vector of standard deviations of non-zero sigma, VSDNS. So VSDNS = [σ_1 σ_2 ... σ_P], where σ_p = σ_{i,j} ≠ 0, p = 1, 2, 3, ..., P, and P is the total number of non-zero elements of the matrix MSD. Each VSDNS can be summarized by its arithmetic mean, so that mean VSDNS = σ̄ = (1/P) Σ_{p=1}^{P} σ_p. For our project, we have generated 1440 maps with different combinations of parameters and calculated VSDNS_l, where l = 1, 2, ..., 1440. After generating a histogram⁵ of the 1440 mean VSDNS values with 14 bins, we found that the histogram resembles a normal distribution, as shown in Figure 5.9.

³Ground map: a map of the actual environment drawn by hand or with a painting tool, not generated by the robot.

⁴Odometry: the position of the robot with respect to the global coordinate frame at any moment. In 2D (X-Y) coordinates, a point's position needs only (x, y), and to denote direction we need an angle θ with respect to the X-axis. So odometry means the (x, y, θ) of the robot at that time. The θ is also called the heading, heading angle or robot heading.

⁵Histogram: a bar graph that shows frequency data.


The table in Figure 5.9 shows the value contained in each bin.


Figure 5.9: (a) Histogram generated from the mean of standard deviations. (b) Table containing the values of the histogram.
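A minimal sketch of the mean-VSDNS computation in Python with NumPy, assuming the sensor-model values fused into each cell during map building have been collected in per-cell lists (this data structure is an assumption made only for this example):

    import numpy as np

    def mean_vsdns(cell_values):
        """Mean of the standard deviations of the non-zero-sigma cells.

        cell_values -- dict mapping (i, j) -> list of sensor-model values
                       that were fused into that cell while building one map
        Returns the scalar mean VSDNS used to rank one parameter set.
        """
        sigmas = []
        for values in cell_values.values():
            if len(values) > 1:
                sigma = np.std(values, ddof=1)   # sample standard deviation
                if sigma > 0.0:
                    sigmas.append(sigma)
        return float(np.mean(sigmas)) if sigmas else 0.0

A lower mean VSDNS indicates a parameter set whose sensor-model values agree more closely for the same cell.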

As lower mean standard deviations (VSDNS) represent a better parameter set, we can take the decision about our parameters from the lowest bin of this histogram. The width of the bin containing the lowest values (in our case 0.054079 to 0.059045) can be called the Expected Interval or Flat Interval for our parameter set in this project, and we can find all maps and parameter sets that fall in this expected interval. If we take the average of all these parameter sets for each individual parameter, we get the best value of each parameter as the optimum parameter set. Table 5.1 lists the range of the different parameters used for testing this method (2nd and 3rd columns) and the optimum parameter set obtained by this approach (4th column). In Figure 5.10, subfigure (d) shows the map for the optimal parameter set found by this method. If the minimum VSDNS is considered instead, the obtained parameter set is in the 5th column of the table and the map looks like subfigure (e). If, instead of using the histogram, we consider the overall mean VSDNS, we get subfigure (f), which is remarkably better than subfigures (d) and (e) in Figure 5.10, though this is a crude method; the corresponding parameter values are in the 6th column of Table 5.1.

If we look at Table 5.2, which gives a deeper insight into the first bar of the histogram in Figure 5.9, some tendencies can be observed for the individual parameters. Lower values of P3 and P5 and higher values of P2 in the search space give lower VSDNS, whereas P1 and P4 do not show any specific pattern. We can call this observation a stability test for the parameters.

But there are some flaws in this analysis method. Taking the decision from the histogram is not very sound, because the number of bins is arbitrary. The method also needs a lot of memory and a long running time: about 50 hours were spent generating the 1440 maps for the VSDNS computation on a Pentium 3 (1.5 GHz) machine with Windows XP and 1.0 GB of memory.


Parameter   Min    Max    Optimum             Optimum            Optimum
                          (using histogram)   (VSDNS minimum)    (mean VSDNS)
*P1         0.02   0.05   0.03                0.02               0.02
P2          0.05   0.2    0.171               0.2                0.05
2*P3        0.5    3.0    0.786               0.5                2.0
P4          0.95   0.98   0.97                0.98               0.98
P5          6.0    40.0   6.14                6                  14

Table 5.1: Parameter ranges used, with minimum (2nd column) and maximum (3rd column). The 4th column is the optimal parameter set obtained using the lowest bin of the histogram in Figure 5.9. The 5th column contains the set giving the lowest VSDNS, and the 6th column is obtained by taking the arithmetic mean over the 1440 VSDNS values.
*P1: the relation P1 = 1 − P4 has been used.

Derivative of maps and Comparison (Second Approach): Here the basic concept is that each cell of the grid should contain a value similar to those of its nearest neighbors. This can be measured using the concept of image derivatives [16]. If we calculate this value for all cells of the grid and apply some statistical operation like the mean or the sum (we have tested with the sum), we get a measurement of the mapping grid. We call this measurement the noise of a map in our project, and it is calculated as follows. Let the 2D mapping grid be M × N and the window be W = m × n. Then, for any cell (x, y) of the mapping grid, the derivative D_{x,y} is

S_{x,y} = Σ_{i=x−⌊m/2⌋}^{x+⌊m/2⌋} Σ_{j=y−⌊n/2⌋}^{y+⌊n/2⌋} |W_{i,j} − W_{x,y}|
D_{x,y} = S_{x,y} / (m × n − 1)

where x = 2, 3, ..., M−1 and y = 2, 3, ..., N−1. Now the derivative of a single map, or the noise of the map, D_map, can be calculated as

D_map = Σ_{x=2}^{M−1} Σ_{y=2}^{N−1} D_{x,y}

The lower the noise content, the better the map, and hence the better the parameter set of the sensor model used for generating the map.
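A sketch of this noise measure in Python with NumPy, using a 3 × 3 window (8-connected neighbors); the map is assumed to be a 2D array of cell probabilities:

    import numpy as np

    def map_noise(grid, m=3, n=3):
        """Noise D_map of a map: summed mean absolute difference between each
        interior cell and its (m x n - 1) neighbours, cf. the formulas above."""
        M, N = grid.shape
        hm, hn = m // 2, n // 2
        d_map = 0.0
        for x in range(hm, M - hm):
            for y in range(hn, N - hn):
                window = grid[x - hm:x + hm + 1, y - hn:y + hn + 1]
                s = np.abs(window - grid[x, y]).sum()
                d_map += s / (m * n - 1)
        return d_map

Smoother maps give a lower D_map; as described in the text, D_map should be normalized over the updated cells before comparing maps of different sizes.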

We have tried this method with 8-connected neighbors for the same 1440 maps with different parameter values as in Table 5.1 (see Mean of Standard Deviations). For each map, the derivative of every cell is calculated, and the sum of these values is the derivative of that map. As the size of the map may change when the parameters change, normalization is required before finding the optimum parameter set; for normalization, only updated cells are considered. This method yielded the parameter set P1=0.05 (1−P4); P2=0.2; P3=0.5; P4=0.95; P5=40 as optimal, and the map with this set is shown in Figure 5.11.

Though this method identifies a parameter set as optimal, it does not reflect the actual optimal map: the resulting map is worse than most of the maps in Figure 5.10. So it is not possible to find the actual optimal set using this approach.

Comparing two different maps of the same Environment (Third Approach): If the environment is the same, the map should be similar every time. If the robot is driven twice from the same position with the same direction, the same exploring behavior and the same speed, we get two different maps of the same environment. Let


[Figure 5.10, panels (a)-(f): maps built with the following parameter sets:
(a) P1=0.04; P2=0.13; 2P3=1.9; P4=0.96; P5=6
(b) P1=0.02; P2=0.13; 2P3=1.0; P4=0.98; P5=30
(c) P1=0.02; P2=0.2; 2P3=3.0; P4=0.98; P5=40
(d) P1=0.03; P2=0.17; 2P3=0.78; P4=0.97; P5=6.14
(e) P1=0.02; P2=0.2; 2P3=0.5; P4=0.98; P5=6
(f) P1=0.02; P2=0.05; 2P3=2.0; P4=0.98; P5=14]

Figure 5.10: Finding the optimal parameter set by the mean of standard deviations method.

these maps be A and B, respectively. If a threshold value is used for deciding which places on the map are occupied, binary maps A and B containing only the obstacles of the real environment can be obtained. Now we can compare binary maps A and B and assign a score based on [26]. If a specific cell grid[x][y] has the same status, either occupied or empty, in both map A and map B, this cell is called a correct cell or point, and the positive score is increased. Otherwise, that cell is considered a provider of wrong information and the negative score is increased. (This concept is used only for measuring map goodness in this project; such a cell may provide correct information in both maps even though the two maps are considered wrong with respect to each other in this comparison.)


P5   P4     2P3   P2     VSDNS
6    0.98   0.5   0.2    0.054079
6    0.97   0.5   0.2    0.054256
6    0.96   0.5   0.2    0.054435
6    0.95   0.5   0.2    0.054615
6    0.98   0.5   0.15   0.055861
6    0.97   0.5   0.15   0.056043
6    0.96   0.5   0.15   0.056228
6    0.95   0.5   0.15   0.056414
6    0.98   1     0.2    0.056517
6    0.97   1     0.2    0.056663
6    0.96   1     0.2    0.056811
6    0.95   1     0.2    0.056961
7    0.98   0.5   0.2    0.057684
6    0.98   0.5   0.1    0.057705
7    0.97   0.5   0.2    0.05787
6    0.97   0.5   0.1    0.057893
7    0.96   0.5   0.2    0.058058
6    0.96   0.5   0.1    0.058082
6    0.98   1     0.15   0.058194
7    0.95   0.5   0.2    0.058248
6    0.95   0.5   0.1    0.058273
6    0.97   1     0.15   0.058347
6    0.96   1     0.15   0.0585
6    0.95   1     0.15   0.058655
6    0.98   1.5   0.2    0.058682
6    0.97   1.5   0.2    0.058801
6    0.96   1.5   0.2    0.058922
6    0.95   1.5   0.2    0.059045

Table 5.2: This table contains all parameter sets obtained from the first bar of the histogram in Figure 5.9. *P1: the relation P1 = 1 − P4 has been used.

Intersection(A, B) and Exclusive-OR(A, B)⁶ provide the total positive score and total negative score, respectively, for one combination of parameters. For different combinations of parameters, we can calculate positive and negative scores from the two binary maps. The optimum parameter set is the one with the maximum positive score and the minimum negative score.

We have tested 1440 different combinations of parameters of our proposed sensor model and found P1=0.02 (1−P4); P2=0.05; 2*P3=3; P4=0.98; P5=35 as the optimal set; Figure 5.12 shows the map. It is better than several other sets but not the best one, so this method does not work as well as expected from the theoretical point of view. Moreover, it is not always practical to run the robot twice.
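A sketch of the scoring step in Python with NumPy, assuming A and B are boolean arrays of the same shape in which True marks cells above the chosen occupancy threshold (an illustrative implementation, not the project code):

    import numpy as np

    def compare_binary_maps(A, B):
        """Score the agreement between two binary maps of the same environment.

        Positive score: cells marked occupied in both maps (intersection).
        Negative score: cells marked occupied in exactly one map (XOR).
        """
        positive = int(np.logical_and(A, B).sum())
        negative = int(np.logical_xor(A, B).sum())
        return positive, negative

The parameter set that maximizes the positive score while minimizing the negative score is then taken as the candidate optimum.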

⁶Intersection (∩) is a set operation and Exclusive-OR is the logical XOR operation on two data sets.


Figure 5.11: Optimal map found by the derivative of maps and comparison method, built with parameter set P1=0.05, P2=0.2, 2P3=1.0, P4=0.95, P5=40.

Figure 5.12: Optimal map found by the method of comparing two different maps of the same environment, built with parameter set P1=0.02, P2=0.05, 2*P3=3.0, P4=0.98, P5=35.

5.4 Result, Discussion and Conclusion

In this MS Thesis project, we have tried to find a statistical approach for measuring the goodness of maps without using the ground map, or original map, of the environment. This measurement can then be used to find an optimal parameter set for a parameterized sensor model.

We have proposed a parameterized sensor model in this chapter and wanted to use the measurement of map goodness (the various methods described in Section 5.3), without a ground map, for determining the optimal parameter set. It is a matter of fact that none of the map-goodness measures we tried without a ground map has given fully satisfactory results, although some of the methods described above did yield better parameter sets. So we did not find the best parameter set of our proposed parameterized sensor model. However, our Mean of Standard Deviations (mean VSDNS) measurement of map goodness gives better results than the other ideas, so the Mean of Standard Deviations method for measuring map goodness without a ground map can be applied, and the parameter set found by this method for our proposed sensor model can be used. Moreover, we have found that our proposed sensor model gives better results than the standard sensor model (described in Chapter 4, Figure 4.6) with the same parameter set, whether optimal or not (see Figure 5.8). In that sense, the proposed model is better than the sensor model described in Chapter 4.

All of the methods we applied for measuring the goodness of maps depend on the search space; this is one of the limitations. Long running time is another problem with all approaches. In our proposed sensor model, the computational complexity is higher than for the standard sensor model equations (described in Chapter 4), though the proposed model gives better results than the standard sensor model.

Future Work:
Finding the optimal parameter set for the proposed sensor model by using a ground map can be mentioned as future work; a genetic algorithm could be applied for this purpose. Trying to reduce the computational complexity as well as the running time is an important next step. Though the map building algorithm is not the main concern of this project, building a combined map by using different types of sensors would be very interesting, especially when sonar is used. It would also be nice to represent the proposed sensor model by a probability distribution function (PDF). A neural network could be used to convert the sensor model information to numerical values, though collecting sample data is not so easy.

We have tried the different map-goodness methods with a short-range sensor (the maximum considered sensor range was 1 meter). Maybe that is why we did not get maps as distorted by parameter changes as we expected, as in 'Figure 7' of [23]; all our maps (see Figure 5.10) are nearly similar. So trying with a long-range sonar can also be mentioned as future work.


Chapter 6

Dealing with Specular Reflection

In this chapter, an algorithm that can handle data that is noisy because of specular reflection when using ultrasonic sonar sensors is developed and described in detail. The algorithm is named HSR, after "Handling Specular Reflection". The specular reflection problem, and why and when it occurs, has already been described in Section 3.2 of Chapter 3.

6.1 HSR Algorithm Description with Pseudo-Code

The HSR algorithm takes raw sensor data with robot odometry¹ and tries to correct the specularly reflected readings in the raw data set. Though the updated data set can be used for various purposes later on, the main concern of this project is its use in indoor robotic map building. This is off-line data processing, and batch processing is the basis. We use the global odometry coordinate frame for all calculations and a grid for representing those calculations as temporary maps at different levels. HSR can be used once or iteratively; in this project, HSR has been implemented iteratively.

If we get any returned sensor reading, there is at least one obstacle, and it is assumed that the obstacle is perpendicular to the direction of the sensor's acoustic axis (see Figure 6.5) [27]. We can assume the obstacle is linearly shaped with a fixed length, as any curved shape is piecewise linear. This type of linear obstacle, estimated from the sensor reading based on the above assumption, is called an Estimated Wall in our project (for a detailed explanation, please see Section 6.1.2). If the size of the Estimated Wall varies with some parameters (especially with the sensor reading), we call it an Estimated Variant Wall. By using this estimated wall information, we can identify and update the sensor readings that are the result of specular reflection or cross-talk. If any reading intersects such an estimated wall, we can consider the reading as noise (based on some conditions described in Section 6.1.3) and can update it with the distance between the Estimated Wall and the relevant sonar. A temporary

¹Odometry: the position of the robot with respect to the global coordinate frame at any moment. In 2D (X-Y) coordinates, a point's position needs only (x, y), and to denote direction we need an angle θ with respect to the X-axis. So odometry means the (x, y, θ) of the robot at that time. The θ is also called the heading, heading angle or robot heading.


map interpreting all sensor readings as points is called S, and it is used for generating the Estimated Walls. But we don't need to consider an Estimated Wall for all readings. As shorter-range readings are more reliable than longer-range readings in the case of sonar, readings shorter than a certain range, called the Threshold Reading in our project, are enough for generating Estimated Walls; these readings are considered correct in the preliminary stage, though they can still be updated. The version of map S extended with walls is called Extended S, or ES. In order to apply this idea, we need to process the raw sensor data before estimating the walls. So the HSR algorithm can be divided into three steps:

• Processing sensor readings for next steps

• Estimating Variant Walls for relevant readings

• Finding and updating specularly reflected readings by using Estimated Wall information and other conditions

For applying this idea, some parameters are essential; they are described in the sections of this chapter where they are needed. Following is the pseudo-code of the HSR algorithm:

function HSR (data, pose, minWallSize, maxWallSize, Rth, TDTh)

Input parameters: data, the sensor readings; pose, the robot odometry at data collection; minWallSize and maxWallSize, the minimum and maximum size of the estimated wall, respectively; Rth, the threshold value for considering readings reliable; TDTh, the targeted distance threshold or cross-talk tolerance. (Please see the description in "Step-2" for minWallSize, maxWallSize and Rth, and in "Step-3" for TDTh.)

Output parameters: uData, the updated data set.

Step-1: Processing Sensor Readings
Convert all sensor readings, data, into global Cartesian coordinates (x, y) by using the odometry position of each data sample and the robot's geometry, and save all translated information (odometry positions, sensor positions and data points) in a database DBR.
  DBR ← [translated odometry, sensor positions and data points];
  S ← plot of raw sensory information after translation;   // S is a temporary map
(For details, see Section 6.1.1, "Processing Sensor Readings".)

Step-2: Estimating Variant Walls
(Please see Section 6.1.2 for a detailed description with parameters.)
Estimate a variant wall for each reading data_i ≤ Rth of data and save this information as a map, ES. The estimateVariantWalls function does this step.
  ES ← estimateVariantWalls(data, pose, DBR, S, minWallSize, maxWallSize, Rth);

Step-3: Finding and Updating Specularly Reflected Readings
Update the sensor readings that can be updated by using the information obtained from Step-2 and the conditions checked in the upDate function.
  uData ← upDate(data, pose, DBR, ES, TDTh);
(This step is described in detail, with its parameters, in Section 6.1.3.)


End HSR

Each step is described, with the relevant parameters, in separate sections below in this chapter. The complete pseudo-code of HSR, including the various steps, is available in Appendix A.

6.1.1 Processing Sensor Readings (Step-1)

A sonar sensor reading denotes the distance between the sensor and the obstacle from which the ultrasonic sound is reflected back to the receiver of the sonar. Sensors are mounted on the robot at fixed positions, with fixed distances and directions with respect to a specific point of the robot. Let this point be C and one of the sensor positions be S. C can be the center or any other point, depending on the shape of the robot. So, from the physical structure of the robot, we can calculate the direction of CS (from the specific point to the sensor position). In our project, we have used the Amigobot ActivMedia robot [2], and from the Amigobot manual we get the physical structure of the robot, shown in Figure 6.1. We call this physical shape the robot geometry.

Figure 6.1: Amigobot's geometry and the direction of a reading sensed by a specific sensor, S (the figures are adapted from the Amigobot ActivMedia manual).

The direction of the data sensed by a sensor is the same as the direction of the sensor from the reference point C of the robot (see Figure 6.1). From the robot geometry, the direction of the reading can be calculated for a specific sensor. If the direction of the robot changes, it is still possible to calculate the direction and position of all sensors, as well as of all readings, by simple geometry. So we can use the odometry information without any problem. If we use the odometry information, we have the global odometry coordinate frame and can convert any reading into Cartesian format (x, y) with respect to that frame. So we can consider each reading as a point in the global odometry coordinate frame, and we call each reading/datum a data point. The sensorPos and dataPos functions do these tasks in our project (Figure 6.2 demonstrates this). All these data points are saved in a database, DBR (in the program, DBR is the combination of DBXR and DBYR, which contain the x and y coordinates, respectively, of all necessary information).

Figure 6.2: Translating sensor readings into Cartesian (x, y) format by our program. The table in the upper right corner of this figure shows the robot odometry and the sensor readings sensed at one instant at that odometry. The (colored) figure is the visualization of those readings after translation, with origin (500, 500).

After translating all data readings into Cartesian format with the relevant odometry poses, we can get a map of the environment by plotting these translated data points, together with the odometry, onto a grid. Let's call this temporary map S. Figure 6.4 is an example of map S, representing the environment described in Figure 6.3.
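A minimal Python sketch of this translation for a single reading (the sensor mounting offset used in the example is a placeholder, not the Amigobot's actual geometry):

    import math

    def data_point(reading, robot_pose, sensor_offset):
        """Translate one sonar reading into a global (x, y) data point.

        reading       -- range measured by the sonar (same unit as the pose)
        robot_pose    -- (x, y, theta) odometry of the robot, theta in radians
        sensor_offset -- (dx, dy, phi) mounting position and direction of the
                         sensor relative to the robot's reference point C
        Returns the sensor position and the data point in the global frame.
        """
        rx, ry, theta = robot_pose
        dx, dy, phi = sensor_offset
        # Sensor position in the global frame (rotate the offset by theta)
        sx = rx + dx * math.cos(theta) - dy * math.sin(theta)
        sy = ry + dx * math.sin(theta) + dy * math.cos(theta)
        # The reading is assumed to lie along the sensor's acoustic axis
        heading = theta + phi
        px = sx + reading * math.cos(heading)
        py = sy + reading * math.sin(heading)
        return (sx, sy), (px, py)

    # Example: a 600 mm reading from a sensor mounted 100 mm ahead of C, facing forward
    print(data_point(600.0, (500.0, 500.0, 0.0), (100.0, 0.0, 0.0)))

Applying this to every reading, with the odometry recorded at the same time, yields the set of data points plotted as map S.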

6.1.2 Estimating Variant Walls (Step-2)

The temporary map S, described in the section above, contains all sensor readings, whether they denote empty or occupied, i.e., whether or not an echo is received by the receiver of the sonar. But we don't need to consider all readings on the grid in map S; only those readings that are less than or equal to a threshold distance, Rth, are needed, because only those readings will be used for detecting the specularly reflected readings. In other words, we expect that readings shorter than Rth are less noisy and more reliable.


Figure 6.3: The actual environment. For the discussion in this section, data were collected while wall 4 was a white cardboard (of the kind used for making posters) that was very smooth.

Figure 6.4: Map S, obtained by plotting the data set of the environment described in Figure 6.3 after translation into Cartesian coordinates. The blue points are the robot's trajectory and the red dots represent sensed data. 1, 2, 3, 4 denote the boundary walls of the environment.

In general, one can use the maximum range of the sensor, Rmax, as Rth. But if readings shorter than Rmax still do not reflect the true structure of the real environment, then Rth = Rmax is not a wise choice (this concept has been


described in detail in the "Parameters of HSR Algorithm" section of this chapter). So Rth is a threshold parameter that can take values other than Rmax, as well as Rmax in some cases. Though map S now contains only the readings that are less than or equal to Rth, all data points are still saved in DBR as before.

If we get a reading r shorter than Rmax from a sonar s, that means there is at least one obstacle at distance r from the sonar sensor s. But it is not possible to know where exactly the obstacle is within the sonar's field of view (the 'foreshortening' problem). Though the sonar's ultrasound propagates as a cone, it is assumed that the obstacle stands on the acoustic axis (along the axis of the sound wave) [27] and that the orientation of the obstacle is perpendicular to the acoustic axis. Based on this concept, we can assume that there is an obstacle at the data point A (from where the sound is reflected back to the sonar) corresponding to the reading r of sonar s (see Figure 6.5), and that it is shaped like a straight line, which can be taken as the wall for that reading, perpendicular to the direction of the sound wave (see Figure 6.5, "Estimated Wall Concept"). We call this wall the Estimated Wall in our project.

Figure 6.5: The Estimated Wall concept for a single sensor reading r. Point A represents the data point for reading r as well as the middle point of the estimated wall CB.

So it is possible to augment the map S with these lines/walls for sensor readings that are less than or equal to Rth. We call this new map ES (S with Estimated Walls, or Extended S). As the sensor's coverage is wider at longer distances than at shorter ones, it is more logical to make the wall size vary in proportion to distance. So for the size of the wall we have two parameters: the minimum wall size (minWallSize) and the maximum wall size (maxWallSize). Every sensor has a sensitivity range, and we call its limits the minimum and maximum range of the sonar, Rmin and Rmax, respectively. The minimum wall size, minWallSize, applies at Rmin and the maximum wall size, maxWallSize, at Rmax. The wall size in between, i.e., for any reading r where Rmin < r < Rmax, is calculated using linear interpolation. The formulas


are as follows:
  dR ← Rmax − Rmin;
  dr ← r − Rmin;
  dWall ← maxWallSize − minWallSize;
  estWall ← minWallSize + (dWall/dR) ∗ dr;   // estWall denotes the estimated wall size

The data point corresponding to the reading should always be taken as the middle of the estimated wall, because of the angular uncertainty of the sonar and the assumption that the obstacle stands on the acoustic axis (see Figure 6.5; A is the middle point of CB). So estWall/2 is the length of the wall on each side of the data point. The wall size is still calculated in the real-world coordinate frame, i.e., the global coordinate frame used for the odometry. The grid is then used for plotting the estimated wall: all points on the wall are calculated in the global coordinate frame and then mapped onto the grid. This approach reduces the rounding error. This process is applied to all readings less than or equal to Rth. But no point of an estimated wall should overwrite any actual information, such as an original data reading point, points acquired by the robot during exploration, or a sensor's position (in Figure 6.4, blue and dark red points should not be overlapped by an estimated wall); the estimated wall points have the lowest priority, i.e., physical information is kept unchanged when estimated information is considered. Any cell of the grid, grid[x][y], at time t should be considered an estimated wall cell only if grid[x][y] does not contain any of the following information at time t or (t−n):
1. the robot's position
2. any sensor's position
3. a sensor reading sensed physically by a sensor
4. estimated wall information already
where n is any non-zero positive number (please see Figure 6.6).

The estimateVariantWalls function performs the operations described so far in this section (Step-2); its pseudo-code is given in Appendix A. After executing estimateVariantWalls we obtain the map ES with estimated wall information; ES looks like Figure 6.6. The figure shows some odometry positions with the corresponding readings and explanations for one sensor only. Dark blue (R) and light blue (S) dots represent the robot's trajectory and the position of that single sensor, respectively. Dark red (D) and light red (W) show the exact reading sensed by that sensor and the estimated wall for the corresponding D point. The longer the reading, the larger the estimated wall associated with it.

6.1.3 Finding and Updating Specularly Reflected Readings (Step-3)

From the previous section we have a temporary map ES, consisting of points and lines, with estimated wall information for all readings less than Rth. We now want to identify erroneous readings in the data set provided to the HSR Algorithm and, where possible, update them using the estimated variant wall information. This is Step-3 of the HSR Algorithm.

Without knowledge of the environment, it is not possible to tell directly which readings in the data set are caused by specular reflection. That is, if we do not know whether a given place is empty or not, we cannot say anything definite about a sensor reading, in particular one that goes out of range. Short-range readings are, however, more reliable than long-range readings in the case of ultrasonic sonar.


Figure 6.6: After executing the estimateVariantWalls function for some readings, with minWallSize=40mm, maxWallSize=150mm and threshold distance Rth=Rmax=3000mm. This map is called ES.

Since we do not know whether a given reading is the result of specular reflection or not, we check all readings against the conditions listed below. To do this, we sort all readings in descending order (so that the longer, less reliable readings are examined first against the shorter, more reliable ones), take the readings one by one, and apply the updating procedure described below.

From the database DBR created in Step-1 (Section 6.1.1) of the HSR Algorithm, we know, for any time t, the odometry position, the position of the specific sensor that produced a reading, and the Cartesian coordinates of the reading itself. For a specific reading r, let A(x1,y1) be the coordinates of the sensor that sensed the reading and B(x2,y2) the coordinates of the reading (see Figure 6.7). These coordinates are expressed in the global odometry frame, not in the grid. We now search from point A to point B along the straight line AB in the global frame. Let C(x,y) be the point reached at each search interval for the particular reading r. For each instance of C(x,y) on the search line AB, we check the following conditions (for a detailed explanation of all conditions, see the "Explanation of the Conditions" subsection below):

• 1. Distance(AC) ≤ Rmax; Rmax is the maximum range of the sonar sensor and Distance(AC) represents the Cartesian distance between points A and C.

• 2. C(x,y) is not outside of the temporary map ES.

If the above two conditions are satisfied, we then inspect the map ES to see whether the cell ES[xr][yr] corresponding to C(x,y), where (xr,yr) are the values of (x,y) scaled by the grid's scaling factor, is occupied by 1) an actual reading physically sensed by a sensor, or 2) estimated wall information. The reasons are: 1) there might be a reading


Figure 6.7: Representation of searching points and direction for Step-3 in the HSR Algorithm, for a sensor reading r.

sensed by another sensor at the same time t or at another time t´ where t ≠ t´, or by the same sensor at a different time t´. This can happen because of a different angular position of the odometry, or because one sensor has an angular advantage over another for that part of the environment. 2) There might be an obstacle very near to this point which could also cover the direction of this reading (from the Estimated Wall Concept). If the cell ES[xr][yr] corresponding to C(x,y) is occupied, we then consider the following condition (Condition 3) when deciding whether the reading r, represented by point B(x2,y2), is correct or is the result of specular reflection.

• 3. Distance(CB) > TargetDistanceThreshold (in short TDTh); TDTh is a tolerance parameter for cross-talk or for the sensitivity of a reading. (TDTh is explained in detail later in the "Explanation of the Conditions" section of this chapter.)

If one of the above conditions fails, the search either moves on to the next search point or stops, as detailed in the upDate pseudo-code in Appendix A. Let Cs(x,y) be a specific instance of C for the current reading r for which all three conditions are satisfied. In that case point B(x2,y2) should not exist at that position, i.e., reading r should not be as long as it is, because in reality it is not possible to get a reading from beyond an obstacle or wall. We can therefore say that point B(x2,y2) is the effect of specular reflection, i.e., reading r is the result of specular reflection. In this way all readings caused by specular reflection can be identified.

If we find that reading r, represented by point B(x2,y2), is erroneous, i.e., if an instance Cs satisfying the above three conditions exists, then we can update this reading r


with the value Distance(ACs), because Cs might be the exact position of B(x2,y2) had no specular reflection occurred (see Figure 6.8). We still use points in the global coordinate frame, which gives a better estimate and smaller rounding errors than calculating the distance from the grid.

Figure 6.8: For reading r (represented by point B), an instance of C(x,y) for which ES[xr][yr] is occupied by an estimated wall and all three conditions mentioned above are satisfied. C(x,y) then becomes Cs, and reading r is identified as the result of specular reflection or cross-talk. As a result, reading r should be updated with Distance(ACs).

In our project the upDate function finds specularly reflected readings and updates them as described above. The pseudo-code of the upDate function is given in Appendix A.
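For readers who prefer code to prose, the following sketch mirrors the three conditions of Step-3 for a single reading. It is an illustrative Python version, not the MATLAB implementation used in the project, and the function name and grid representation are assumptions:

   import math

   def distance(p, q):
       return math.hypot(p[0] - q[0], p[1] - q[1])

   def check_reading(a, b, es_grid, r_max, td_th, scale=10, step=10.0):
       """Search along the line from sensor position a to data point b (global mm
       coordinates). Return the corrected range if the reading looks specular,
       otherwise None. es_grid cells are truthy where a real reading or an
       estimated wall has been marked."""
       n = max(int(distance(a, b) / step), 1)
       for i in range(n + 1):
           t = i / n
           c = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
           if distance(a, c) > r_max:                 # Condition 1: beyond sensor range
               return None                            # the place really is empty
           xr, yr = int(round(c[0] / scale)), int(round(c[1] / scale))
           if not (0 <= xr < len(es_grid) and 0 <= yr < len(es_grid[0])):
               return None                            # Condition 2: outside map ES
           if es_grid[xr][yr]:                        # occupied by reading or est. wall
               if distance(c, b) <= td_th:            # C is essentially B itself
                   return None
               return distance(a, c)                  # specular reflection: corrected r
       return None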

Explanation of the Conditions

Condition 1. Distance(AC) ≤ Rmax; Rmax is the maximum range of the sonar: This condition states that, for any reading r (with corresponding data point B(x2,y2)), if the straight-line distance between the starting point A(x1,y1) and an intermediate point C(x,y) on the search line AB (Figure 6.7) becomes greater than the maximum sensor range Rmax, then we should not treat reading r as erroneous noise. Instead of carrying erroneous information, reading r then denotes a genuinely empty place in the environment. This condition is applied when finding and updating specularly reflected readings in the HSR Algorithm (all notation is as in the description of Step-3 of HSR).

In fact, sonar readings are only reliable between the minimum and maximum range of the sensor. If we update a reading with a value outside this reliable range, we may add noise to the data set instead of cleaning it. And if a situation


arises that contradicts this condition for a reading r, then r denotes a genuinely empty place, i.e., there really is no obstacle there and the reading is correct. Figure 6.9 explains this condition.

Figure 6.9: Explanation of Condition 1. When the robot is at PosB, the obstacle (the wall, represented by the long black line on which B lies) is sensed by the sensor in the normal way. But when the robot is at PosA, the obstacle is out of range, so the sensor returns an 'infinity' value. We may also get such infinity readings because of specular reflection. Point C represents the maximum sensor range from PosA.

In Figure 6.9, the data has been sensed from two different robot positions at different times. When the robot was at position PosA the reading was dA, and when the robot was at PosB the reading was dB, with Rmax = distance(PosA, C). After plotting the collected data together with the robot's positions (odometry), point A corresponds to reading dA and point B to reading dB. Since distance(PosA, Wall) > Rmax, dA is a very large value, just like a specularly reflected reading that runs away from the sonar, even though it is not really the result of specular reflection; that is why point A lies beyond the actual wall. Point B, on the other hand, is a reasonable, correct reading less than Rmax. If we did not apply Condition 1, point A (reading dA) could be updated by either point B (reading dB) or some other point on the estimated wall generated from reading dB (the line through B perpendicular to PosB–B). But reading dA is not caused by specular reflection; it genuinely denotes an empty place.

Condition 2. C is not outside of ES: As in Condition 1, C(x,y) denotes the search point on the line AB from the sensor position A(x1,y1) to the data point B(x2,y2), which represents the reading r (see Figure 6.7). ES is the map containing the actual points as well as the estimated wall information used for finding and updating erroneous readings. An update occurs if C becomes Cs (see Figure 6.8), as described in Step-3 of the HSR Algorithm. Since ES


contains the information on the basis of which an instance of C becomes Cs, if the search point C goes outside ES there is no possibility of B being updated by any reliable reading or estimated wall information. In that case it is better to stop the search, and reading r is taken to represent a genuinely empty place.

Condition 3. Distance(CB) > TargetDistanceThreshold (in short TDTh): A(x1,y1), B(x2,y2) and C(x,y) have the same meaning as in Figure 6.7 and the other conditions. Since the search starts at A and moves towards the target B via the point C on the line AB, we call the straight-line distance from C to B the target distance. If we find a search point such that C becomes Cs (see Figure 6.8), we must check whether Cs in fact represents the point B itself, and this is done using TDTh (the Target Distance Threshold). Updating a reading by itself would be meaningless and, moreover, would distort the actual environment in the map. Why such a situation can arise, and how it can be controlled, is described below.

With a sonar sensor it is quite normal to obtain different readings from the same obstacle or wall at different times and from different odometry positions. The readings may not fall at exactly the same position in the grid or in the real odometry coordinates, even though they denote the same object in the same environment, as in Figure 6.10.

Figure 6.10: The actual environment, wall AB, is represented by sensor readings obtained at different times from different odometry positions. In the map, wall AB is represented by the points near the wall (d, c, e, f, b etc.), but not by point a. Whether point a is the result of cross-talk, of specular reflection, or not noise at all is decided by TDTh.

In a situation like Figure 6.10, the question is whether points such as a, c, b or f should be updated or not; c, b and f lie quite close to the wall AB. The parameter TargetDistanceThreshold (TDTh) answers this question: TDTh fixes the region within which points are considered to be the same. Such situations can also arise from unavoidable calculation errors, such as rounding or truncation errors once a grid has been introduced. Moreover, when searching from A(x1,y1) to B(x2,y2) and updating B(x2,y2) by Cs(x,y) (see Figure 6.8), TDTh prevents B(x2,y2) from being updated by itself, calculated from the global


coordinate value, instead of by its actual value. Situations like the one in Figure 6.10 are very common at the corners of a room because of cross-talk, so this parameter also protects against cross-talk errors. From the above discussion, TDTh can be seen as the sensitivity of a sonar reading or as a tolerance for cross-talk.

6.2 Parameters of HSR Algorithm

Threshold Distance, Rth: Sensor readings lower than or equal to this parameter are considered correct readings and are used for the estimation of walls (Step-2 of the HSR Algorithm). This parameter only gives an initial guess, because some of these readings may later be updated. Rth = Rmax is a sufficient guess in the general case, where Rmax is the maximum range of the sensor. But if there are openings or holes in the environment, such as doors, Rth may need a different value, which depends on the size of the opening and on the distance from which the opening can be sensed correctly by the sonar during the robot's exploration. I have found a relation between the opening size and the distance from which the opening can be sensed correctly (see Section 6.3). Based on this relation, Rth = 800 mm gave better results with an opening size of 80 cm, where the data was collected sufficiently close to the opening (closer than Rth) for the sonar to detect it clearly. I have used this value in the second experiment (see Subsection 6.4.2). Note: the data collected by the Amigobot is in millimetres, so Rth = 800 mm.

Estimated Wall Size: This parameter is determined by the minimum and maximum size of the wall. minWallSize is the size of the estimated wall at the minimum range of the sensor, and maxWallSize is the size at the maximum range. The parameter depends on the environment, so it is very hard to give an exact value. From our experiments we have found that minWallSize=20 mm and maxWallSize=110–190 mm usually works well. This is not correct in all cases, because it depends on both the environment and the robot's sensors. When there are openings or holes in the environment, smaller wall sizes work better, but then less noise is removed than in an environment without openings. Based on the relation between the size of the opening and the distance from which the opening can be sensed correctly, I have found that minWallSize=20 mm and maxWallSize=80 mm give reasonable results with Rth = 800 mm (see Experiment 2 in Section 6.4.2 below).

TargetDistanceThreshold: This is the threshold value representing the sensitivity of a reading or data point (described under the third condition in the "Explanation of the Conditions" section of Step-3 of the HSR Algorithm). From experiments (trial and error with a data set) I have found that TDTh=60 mm gives reasonable results with the Amigobot. Another way to set this parameter is to calculate it from the physical properties of the sensor. The sonar senses within a cone-shaped area, so for a certain distance d it is possible to measure the reading along the acoustic axis at point A, and the reading at the farthest point B that can still be sensed by the sonar at that distance d from the centre of the cone, A (see Figure 6.11 below). Let the reading at A be r1 and at B be r2. Although the readings are different, they will in practice be treated as the same point when building the map, because it is not possible to know from which part of the sonar cone the reading originates, A or B. We can therefore regard readings that differ by less than (r2 − r1) as the same reading, with B an instance of A. The maximum range of the sonar


of the Amigobot is 3 metres, and readings below 1 metre contain very little noise, so we chose d ≈ 2 metres (an average reading between 1 and 3 metres) for the TDTh experiment shown in Figure 6.11. One instance with d=2000 mm gave r1=2005 mm and r2=2067 mm. Several readings were taken with different values of d, and the average TDTh = (r2 − r1) was found to be about 60 mm.

Figure 6.11: Determining the value of TDTh. This is one example of the experiment with d=2000 mm. Several readings with different d values were taken, and the average of (r2 − r1) gives the value of TDTh.
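The same value can be approximated from the cone geometry alone. The sketch below assumes a 30-degree field of view (as stated in the Amigobot specification) and a flat obstacle perpendicular to the acoustic axis; it is only a rough cross-check of the measured value, not the procedure used in the thesis:

   import math

   def theoretical_tdth(d_mm, full_cone_deg=30.0):
       """Approximate r2 - r1 for a flat obstacle perpendicular to the acoustic axis:
       r1 = d on the axis, r2 = d / cos(half-angle) at the edge of the cone."""
       half = math.radians(full_cone_deg / 2.0)
       return d_mm * (1.0 / math.cos(half) - 1.0)

   print(round(theoretical_tdth(2000)))  # about 71 mm, the same order as the measured ~60 mm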

Resolution of the grid for ES: A coarser resolution is preferable for representing the ES map, which holds the estimated wall information in the HSR Algorithm, since ES is used during the search (see Condition-3 in the pseudo-code of the upDate function). Unit resolution would give almost no search error in theory, but it requires a lot of memory and long computation time. In our project each small square is 1 cm (10 mm), since our data set is in millimetres, and this has given reasonable results.

Rmax and Rmin: These are sensor parameters. We used an Amigobot fitted with 8 sonar sensors of the same type for this project; the manual states Rmin = 100 mm and Rmax = 3000 mm, but in practice we found Rmin = 165 mm and Rmax = 3000 mm, so we used the latter values.

6.3 Relation between Openings and Sensor’s distance

To build a map that resembles the physical environment, it is important that the ultrasonic sonar senses open and occupied space as correctly as possible. During this project,


we found that if there is a hole or opening in the environment, for example an open door, the sonar cannot detect this opening beyond a certain distance from it. This distance depends on the physical properties of the sonar (such as field of view and maximum range) and on the size of the opening. In our algorithm, knowing from which distance an opening can be sensed correctly is important for choosing the parameter Rth (described in Section 6.2, "Parameters of the HSR Algorithm"). In other words, the robot should explore at less than this distance in order to build a correct picture of the environment. We therefore performed an experiment to find the relation between holes in the environment and the distance from which they can be sensed clearly. This section describes the experiment and the results obtained.

The experimental environment is very simple: a hole, acting as a door in a real environment, between two smooth poster boards. We again used the Amigobot ActivMedia robot; the minimum range of the Amigobot's sonar is 165 mm and its field of view is 30°. If we drive the robot straight with one of its sensors perpendicular to the boards, at a distance of less than 165 mm from the boards and the hole, we obtain the number of readings that detect the hole, and this is the maximum number of readings for that hole. Let this number of readings be n and the hole size be h, measured at distance d = dmin < 165 mm. We then increase the distance to di and count the number of readings ni that detect the hole from distance di. The distance dj is selected as the maximum distance at which the hole h can still be sensed clearly, i.e., the largest di for which the relation nj = 60% · n still holds, where j is one instance of i for the hole size h. For different hole sizes h, we obtained the relationship between h and d shown in Figure 6.12.
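Interpreting the 60% rule as a lower bound, the selection of the maximum sensing distance can be written compactly as follows. The distance/count pairs in the example are hypothetical, not measured values from this experiment:

   def max_sensing_distance(counts_by_distance, n_max, ratio=0.6):
       """Given {distance_mm: readings_detecting_hole}, return the largest distance
       at which at least `ratio` of the maximum count n_max still detects the hole."""
       ok = [d for d, n in counts_by_distance.items() if n >= ratio * n_max]
       return max(ok) if ok else None

   # Hypothetical counts: with n_max = 20 readings at d < 165 mm, a hole that still
   # yields 12 readings at 800 mm but only 9 at 900 mm has a maximum sensing
   # distance of 800 mm.
   print(max_sensing_distance({400: 18, 600: 15, 800: 12, 900: 9}, n_max=20))  # -> 800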

(a) Graphical representation. (b) Collected data table for (a).

Figure 6.12: The relation between hole/opening size and the distance at which the opening/hole can be sensed almost perfectly by the sonar mounted on the Amigobot.

6.4 Implementation

We tested the developed HSR Algorithm in two different environments; the first environment contains no holes or openings (Experiment 1) and the other contains three


openings of different sizes (Experiment 2). We ran the HSR algorithm iteratively, with the following terminating conditions:

• If the number of readings updated in the first iteration is X and the number updated in some later iteration is Y, the iteration should continue as long as Y > 10% · X. An improvement of less than 10% is not cost effective; it does not improve the quality of the map noticeably compared to the first iteration.

• We also noticed that if an iteration updates more readings than its immediate predecessor, the map starts to distort. This must not be allowed, i.e., (prevUpdatedReadings − curUpdatedReadings) > 0 is the other terminating condition for the iterative process. (A small sketch of this stopping logic is given after this list.)
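The following minimal sketch shows how these two stopping rules combine around repeated calls to HSR. The hsr function and the data representation are assumptions for illustration; the actual implementation is the runHSR pseudo-code in Appendix A:

   def run_hsr_iteratively(data, hsr, max_iter=20):
       """Apply hsr repeatedly until the number of updated readings falls below
       10% of the first iteration's count or stops decreasing."""
       new_data = hsr(data)
       first = sum(1 for a, b in zip(data, new_data) if a != b)   # updates in iteration 1
       prev, data = first, new_data
       for _ in range(max_iter):
           new_data = hsr(data)
           cur = sum(1 for a, b in zip(data, new_data) if a != b)
           if cur == 0 or cur <= 0.1 * first or prev - cur <= 0:  # stopping rules
               break
           prev, data = cur, new_data
       return data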

Equipment: An Amigobot ActivMedia robot (see Figure 6.13) was used for collecting data. The Amigobot has the following technical specifications [2]:

• Eight (8) range-finder sonars of the same type, each with 30 degrees of angular coverage, are mounted on the Amigobot; 6 sensors at the front and 2 at the rear.

• Two wheels with individual motors and one rear caster

• Two 500-tic shaft encoders

• IEEE 802.11b wireless ethernet communication

Figure 6.13: The Amigobot ActivMedia robot.

The speed and exploration behaviour, including the sampling rate, are described in each experiment section. The program development environment was MATLAB 7.0, tested on a computer running Microsoft Windows XP Professional, with ARIA 2.1 and the ARIA-MATLAB Adapter (developed by J. Borgstrom in his MS Thesis [5] at Umea University, Sweden, 2005, in the Department of Computing Science under the supervision of Thomas Hellstrom) for communicating with the robot.
Map building algorithm: For map building, the evidence grid method with Bayesian updating has been used (as described in Chapter 4).
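As a reminder of the update rule behind the evidence grid, the generic snippet below illustrates the Bayesian update of a single cell; the actual likelihood values come from the sensor model described in Chapter 4, so the numbers here are only illustrative:

   def bayes_update(prior_occ, p_reading_given_occ, p_reading_given_empty):
       """One Bayesian evidence-grid update of a single cell's occupancy probability,
       given the sensor-model likelihoods for the current reading."""
       num = p_reading_given_occ * prior_occ
       den = num + p_reading_given_empty * (1.0 - prior_occ)
       return num / den

   # Example: prior 0.5, likelihoods 0.7 (occupied) vs 0.3 (empty) -> posterior 0.7
   print(round(bayes_update(0.5, 0.7, 0.3), 2))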


Figure 6.14: Physical environment for Experiment 1.

6.4.1 Experiment 1

Environment: The environment for this experiment is shown in Figure 6.14. There were no holes or openings and no objects inside the room, but most of the walls are smooth enough to cause specular reflection.
Data Collection: Data was collected using a kind of wall-following behaviour that does not depend on the sensor data. The speed was kept constant at speed=[40 75] (speed=[LeftWheel RightWheel]), so the robot moved along a constant trajectory. The sampling time at this speed was 175 ms per sample, i.e., on average 5.7 data samples were collected per second.

Result: We tested the HSR Algorithm, run iteratively, on the collected data with different parameter values, in particular the variant wall size parameters: minWallSize from 10 mm to 70 mm and maxWallSize from 20 mm to 250 mm. Since TDTh=60 was found to be sufficient for the sonar mounted on the Amigobot, this value was kept the same in every case. As there are no openings in the environment, Rth = Rmax is a sufficient guess, and this is what we used. Figure 6.15 shows the effect of the HSR Algorithm with different parameters.

In Figure 6.15, sub-figure (a) is generated without the HSR Algorithm and contains the specularly reflected readings; it is almost impossible to recognize the actual environment of Figure 6.14 from it. With the HSR Algorithm, for the parameter set/range given in Table 6.1, we obtained outputs similar to sub-figure (c), which is quite close to the actual environment. The first column of the table gives the minWallSize parameter. For each minimum wall size, a similar improvement is obtained for a range of maximum wall sizes; the 2nd and 3rd columns give the smallest and largest maxWallSize for that minWallSize, respectively. Sub-figure (b) is obtained when maxWallSize is smaller than the value in the 2nd column for a given minWallSize, and sub-figure (d) shows the situation when maxWallSize is larger than the value in the 3rd column for the corresponding minWallSize. In any case, every sub-figure in Figure 6.15 is better than sub-figure (a), which was generated without the HSR Algorithm.


(a) Without the HSR Algorithm.   (b) After HSR; Rth=3000, Rmax=3000, Scale=10, wall=10−30.

Figure 6.15: Comparison among different parameter values, as well as without the HSR Algorithm, when there is no opening/hole in the environment.


(c) After HSR; Rth=3000, Rmax=3000, Scale=10, wall=20−120.   (d) After HSR; Rth=3000, Rmax=3000, Scale=10, wall=30−140.

Figure 6.15 (Continued).


minWallSize (mm)   Minimum maxWallSize (mm)   Maximum maxWallSize (mm)
      10                    160                         200
      20                    110                         190
      30                     50                         110
      40                     50                         100
      50                     60                          90
      60                     70                          80
      70                     80                          80

Table 6.1: Solution area for the minWallSize and maxWallSize parameters. The first column gives the minWallSize parameter; the 2nd and 3rd columns give the smallest and largest maxWallSize, respectively, for that minWallSize.

From this experiment we can conclude that the HSR Algorithm is able to handle specularly reflected data when there are no openings in the environment, and that it works well with minWallSize in the range 10–50 mm and the corresponding maxWallSize ranges of Table 6.1. Using minWallSize=20 mm with maxWallSize between 110 and 190 mm is preferable, because the solution area is wider and the probability of adding noise within this range is lower when there are no openings in the environment.

6.4.2 Experiment 2

Environment: This experiment was carried out in an environment with some openings or holes. Sub-figure (a) in Figure 6.16 shows the actual environment; one wall consists of glass. There were no objects inside the room. The measurements of the walls and holes (doors) are given in the figure.

Data Collection: The Amigobot was used for this experiment with the same kind of wall-following behaviour. The speed was kept constant at speed=[40 75] (speed=[LeftWheel RightWheel]), so the robot moved along a constant trajectory, and the sampling time at this speed was 175 ms per sample, i.e., on average 5.7 data samples were collected per second. During data collection, D1 and D2 were kept open (see Figure 6.16), giving three holes in total in the environment. Most importantly, the robot has to be close enough to the openings for all of them to be detected correctly by the sonar. In this experiment the minimum opening size was 80 cm, so according to Section 6.3 the robot's sensing distance from the openings should not exceed 80 cm.

Result: We tested the collected data with different estimated wall sizes in the HSR Algorithm to examine how openings and walls are handled. Since the same sonar was used, TDTh=60 mm was kept unchanged from the first experiment, but Rth was changed and is stated at the top of each figure. Figure 6.16 shows the maps obtained with and without the HSR Algorithm.

After running the HSR Algorithm iteratively, we found that minWallSize = 10 mm and maxWallSize = 80 mm works better than other combinations when the minimum opening size in the environment is 80 cm (sub-figure (e)). maxWallSize < 80 mm (e.g., 40 to 70 mm) handles openings better but cannot handle specularly reflected readings very well (see sub-figures (c) and (d)). On the other hand, maxWallSize >


(a) Original environment.

(b) (c)

(d) (e)

Figure 6.16: Comparison among different parameter values, as well as without the HSR Algorithm, for different opening sizes in the environment. Sub-figure (a) shows the actual environment. D1 and D2 are openings with a minimum size of 80 cm. Walls E, T and A are not sensed by the robot, so readings from that side are not of interest.


(f) (g)

(h) (i)

Figure 6.16 (Continued).


80 mm (e.g., 90 to 110, 120 to 170, 180 to 200 mm) can update specularly reflected readings better than 80 mm, but these values add noise in the openings (sub-figures (f), (g) and (i)); sometimes (see sub-figure (h)) they even erase the openings from the environment, which is not acceptable at all. Sub-figure (g) is particularly interesting: it handles specularly reflected data best, apart from adding some noise in opening D1 and at wall X. Sub-figure (e), however, overcomes this problem, so (e) is better than the others. Sub-figure (i) shows the same problem in the openings; if that amount of noise is tolerable, then minWallSize = 20 mm and maxWallSize = 80 mm is the best parameter set. Although sub-figure (i) gives a similar result, it is more expensive than sub-figure (g). This shows that HSR can give good results with more than one parameter set, whatever opening sizes exist in the environment.

Again, every sub-figure looks better than the one generated without the HSR Algorithm (sub-figure (b)), except sub-figure (h). So the HSR Algorithm is also able to handle specularly reflected data in environments with openings. There is, however, an important trade-off between the opening sizes in the environment and the handling of specularly reflected readings: if the robot does not explore the environment according to the relation described in Section 6.3 ("Relation between Openings and Sensor's distance"), it is very difficult to get good results.

6.5 Discussion, Conclusion and Recommendation

Discussion and Conclusion: From the two experiments above (Experiments 1 and 2 in Section 6.4), we can conclude that the HSR Algorithm is able to detect specularly reflected readings and to update them in indoor environments both with and without openings. This can contribute to improved mobile robot map building for indoor environments; the results were demonstrated by superimposing the handcrafted original environment on the maps generated both with and without the HSR Algorithm, and they are quite satisfactory.

The main flaw of the HSR Algorithm is its dependence on some prior knowledge of the environment: whether there are openings and, if so, the minimum opening size, etc. This dependency is a consequence of one of the major deficiencies of ultrasonic sonar: the sonar cannot detect an opening beyond a certain distance, and this distance depends on the size of the opening (see Section 6.3, "Relation between Openings and Sensor's distance"). If this problem were overcome, HSR could work independently of the environment.

Nevertheless, HSR greatly increases the quality of the maps when this prior environmental knowledge is provided.

Future work:
1) Evaluating the performance of the developed algorithm is important. The developed methods could be evaluated using the benchmarks proposed in [34]; for lack of time this has not been done, so it can be taken as the next step of this project.
2) Sonars are good for detecting obstacles within a certain range, but they have problems detecting holes in the environment, and for map building it is important to detect both holes and obstacles as they really are. In that sense it would be useful to combine the sonar with another sensor that detects holes more reliably, such as an IR sensor or a camera. Another option is to use a sonar with a narrower beam so that it can identify holes as well. This could be a follow-up to the current project.


3) Establishing a relationship for calculating estimated walls under various grid resolutions and the connected parameters.
4) Searching scope: improving the search order and avoiding calculation errors.
5) It would be useful to express the relation between holes and the distance from which they can be sensed as a mathematical equation, rather than graphically with interpolation or extrapolation for obtaining values.
6) HSR depends on some information about the working environment, such as hole/opening size. Using a probabilistic approach to avoid this dependency could be a future task.
7) Applying a neural network to the specular reflection problem is suggested; this could also help make the method environment independent.
8) The HSR Algorithm has some problems with adding noise at openings in the environment. Topology-based maps augmented with geometric information, as proposed by Fabrizi and Saffiotti in [12], could help overcome this problem, since topology-based maps concentrate on open spaces. Investigating HSR with topology-based map building instead of a metric map 2 would therefore be interesting.

2 Metric map: a map that represents the environment according to the absolute positions of objects.


Chapter 7

Acknowledgements

I would like to thank my supervisor Thomas Hellstrom at the Department of Computing Science, Umea University, who guided me whenever needed during this project, both morally and practically. Thanks to my friends Fabian and Erik Billing, who helped me several times by discussing ideas, data collection strategies, etc. It is my pleasure to thank all of my dear colleagues in the MS Thesis Lab of the Department of Computing Science for their kind cooperation during this project.


References

[1] Mongi A. Abidi and Ralph C. Gonzalez. Data Fusion in Robotics and Machine Intelligence. Academic Press, Boston, 1992.

[2] ActivMedia Robotics, http://robots.amigobot.com/SHAmigos.html. AmigoBot User's Guide with ActivMedia Intelligent Robotics Software: ARIA, Saphira, Mapper, Simulator. 1.9 edition, August 2003.

[3] J. Borenstein and Y. Koren. Error eliminating rapid ultrasonic firing for mobile robot obstacle avoidance. Proceedings of the IEEE International Conference on Robotics and Automation, 11, No. 1:132–138, 1995.

[4] Johann Borenstein and Alex Cao. Experimental Characterization of Polaroid Ultrasonic Sensors in Single and Phased Array Configuration. In UGV Technology Conference at the 2002 SPIE AeroSense Symposium, Michigan, USA, 2002.

[5] Jonas Borgstrom. ARIA and Matlab Integration With Applications. http://www.cs.umu.se/education/examina/Rapporter/JonasBorgstrom.pdf.

[6] Jennifer Carlson and Robin R. Murphy. Use of Dempster-Shafer Conflict Metric to Detect Interpretation Inconsistency. Technical report, Safety Security Rescue Research Center, University of South Florida, 4202 E. Fowler Avenue, Tampa, FL 33620, USA, 2004-05.

[7] Kok Seng Chong and Lindsay Kleeman. Sonar based map building for a mobile robot. Proceedings of the 1997 IEEE International Conference on Robotics and Automation, pages 1700–1705, 1997.

[8] Michael Drumheller. Mobile robot localization using sonar. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9, Issue 2:325–332, 1987.

[9] Hugh F. Durrant-Whyte. Integration, Coordination and Control of Multi-Sensor Robot Systems. Kluwer Academic Publishers, 1988.

[10] Alberto Elfes. Sonar-based real-world mapping and navigation. IEEE Journal of Robotics and Automation, 3:249–265, 1987.

[11] Alberto Elfes. Occupancy grids: A stochastic spatial representation for active robot perception. Proceedings of the Sixth Conference on Uncertainty in AI, pages 60–70, 1990.

[12] Elisabetta Fabrizi and Alessandro Saffiotti. Augmenting topology-based maps with geometric information. Robotics and Autonomous Systems, 40(2):91–97, 2002.


[13] Matthias Fichtner and Axel Großmann. A Visual-Sensor Model for Mobile Robot Localisation. Technical report, Artificial Intelligence Institute, Department of Computer Science, Technische Universitat Dresden, Germany, 2003.

[14] Giuseppe Oriolo, Giovanni Ulivi, and Marilena Vendittelli. Real-time map building and navigation for autonomous robots in unknown environments. IEEE Transactions on Systems, Man, and Cybernetics, 28:316–333, 1998.

[15] Thomas Hellstrom. Class lecture of the intelligent robotics course (spring semester 2005). http://www.cs.umu.se/kurser/TDBD17/VT05/utdelat/ch11a.pdf.

[16] Berthold K. P. Horn. Robot Vision. The MIT Press, 1986.

[17] Johann Borenstein, H. R. Everett, and L. Feng. Navigating Mobile Robots: Systems and Techniques. A. K. Peters, Ltd, Wellesley, MA, 1996.

[18] Johann Borenstein, H. R. Everett, and L. Feng. Where am I? Sensors and Methods for Mobile Robot Positioning. Technical report, University of Michigan, Michigan, USA, 1996.

[19] Teuvo Kohonen. Self-Organizing Maps. Springer Series in Information Sciences. Springer, Berlin, Heidelberg, New York, 1995, 1997, 2001.

[20] John J. Leonard and Hugh F. Durrant-Whyte. Directed Sonar Sensing for Mobile Robot Navigation. Kluwer Academic Publishers, 1992.

[21] John Hwan Lim and DongWoo Cho. Specular reflection probability in the certainty grid representation. Transactions of the ASME, 116:512–520, 1994.

[22] Eugene Lukacs. Probability and Mathematical Statistics. Academic Press, New York and London, 1972.

[23] Martin C. Martin and Hans P. Moravec. Robot Evidence Grids. Technical report, The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA, 1996.

[24] Larry Matthies and Alberto Elfes. Integration of sonar and stereo range data using a grid-based representation. IEEE Journal of Robotics and Automation, pages 727–737, 1988.

[25] Hans P. Moravec. Sensor fusion in certainty grids for mobile robots. AI Magazine, 9:61–74, 1988.

[26] Hans P. Moravec and Alberto Elfes. High Resolution Maps from Wide Angle Sonar. Technical report, The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA, 1984.

[27] Robin R. Murphy. Introduction to AI Robotics. MIT Press, 55 Hayward Street, Cambridge, MA 02142-1493, USA, 2000.

[28] Don Murray and Cullen Jennings. Stereo vision based mapping and navigation for mobile robots. Technical report, Department of Computer Science, University of British Columbia, Vancouver, Canada, 1996.


[29] OLYMPUS, http://www.panametrics-ndt.com/index.html. Ultrasonic Transducer Technical Notes.

[30] W. Rencken. Autonomous sonar navigation in indoor, unknown and unstructured environments. In Proc. IEEE Int'l Conf. on Intelligent Robots and Systems, pages 431–438, 1994.

[31] Richard O. Duda, Peter E. Hart, and David G. Stork. Pattern Classification (Second Edition). Wiley-Interscience, John Wiley and Sons, Inc., New York, Chichester, Weinheim, Brisbane, Singapore, Toronto, 2000.

[32] Alan C. Schultz and William Adams. Continuous localization using evidence grids. Proceedings of the 1998 IEEE International Conference on Robotics and Automation, pages 1401–1406, 1998.

[33] Alan C. Schultz and William Adams. Continuous localization using evidence grids. In ICRA, pages 2833–2839, 1998.

[34] Shane O'Sullivan, J. J. Collins, Mark Mansfield, David Haskett, and Malachy Eaton. Linear Feature Prediction for Confidence Estimation of Sonar Readings in Map Building. In 9th International Symposium on Artificial Life and Robotics (AROB), Oita, Japan, 2004. http://www.skynet.ie/~sos/.

[35] Sebastian Thrun. Robotic Mapping: A Survey. Technical report, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA, 2002.

[36] U. Raschke and Johann Borenstein. A comparison of grid-type map-building techniques by index of performance. Proceedings of the IEEE International Conference on Robotics and Automation, 3:1828–1832, 1990.

[37] Zou Yi. Multi-Ultrasonic Sensor Fusion for Mobile Robots in Confined Spaces. www.duke.edu/~yz21/research/ZouyiMEThesis.pdf.

[38] A. Ypma and R. P. W. Duin. Novelty detection using self-organizing maps. Progress in Connectionist-Based Information Systems - Proceedings of ICONIP97, pages 1322–1325, 1997.


Appendix A

Pseudo Code of HSR Algorithm

function HSR (data, pose, minWallSize, maxWallSize, Rth, TDTh)
Input Parameters: data, the sensor readings; pose, the robot's odometry during data collection; minWallSize and maxWallSize, the minimum and maximum size of the estimated wall respectively; Rth, the threshold value for considering reliable readings; TDTh, the target distance threshold or cross-talk tolerance.
(See the description of Step-2 for minWallSize, maxWallSize and Rth, and Step-3 for TDTh, in detail.)

Output Parameters: uData, the updated data set

Step-1: Processing Sensor Readings
Convert all sensor readings, data, into global Cartesian coordinates (x,y) by using the odometry position of each data sample and the robot's geometry, and save all translated information (odometry positions, sensor positions and data points) in a database DBR.
   DBR ← [translated odometry, sensor positions and data points];
   S ← plot of the raw sensory information after translation;   // S is a temporary map
// (for details see Section 6.1.1: "Processing Sensor Readings")

Step-2: Estimating Variant Walls
// (See Section 6.1.2 for a detailed description with parameters)
Estimate a wall for each datai ≤ Rth of data and save this information as a map ES.
The estimateVariantWalls function does this step:
   ES ← estimateVariantWalls (data, pose, DBR, S, minWallSize, maxWallSize, Rth);

Step-3: Finding and Updating Specularly Reflected Readings
Update those sensor readings for which it is possible, using the information obtained from Step-2 and the conditions stated in the upDate function.
   uData ← upDate(data, pose, DBR, ES, TDTh);
// (This step is described with its parameters in detail in Section 6.1.3)

End HSR


function estimateVariantWalls (data, pose, DBR, ES, minWallSize, maxWallSize, Rth)
Input parameters: as in function HSR; ES is initialized with the temporary map S from Step-1.
Output parameters: ES, extended with estimated walls for the relevant readings.

interestedData ← data ≤ Rth;
Foreach datai ∈ interestedData

   I. Calculate the estimated size of the wall, estWall, linearly as follows:
      dR ← Rmax − Rmin;
      dr ← datai − Rmin;
      dWall ← maxWallSize − minWallSize;
      estWall ← minWallSize + (dWall/dR) ∗ dr;

   II. Calculate the two terminal coordinates, (x1,y1) and (x2,y2), of the estimated wall, such that the wall is perpendicular to the direction of datai from the sensor by which datai was obtained, and the coordinate of datai, DBR(datai), is the mid-point of the estimated wall.

   III. Divide estWall into n intervals, where the interval size is calculated as follows:
      n ← maximum(abs(x1 − x2), abs(y1 − y2));
      interval ← estWall/n;
      where (x1,y1) and (x2,y2) are the terminal points of the estimated wall obtained in step II.

   IV. Foreach interval intervalj
      1. Calculate the point (x,y) of the estimated wall
      2. If (ES[x][y] is not occupied)
            Update ES[x][y] with a value that can be distinguished from an exact reading point:
            ES[x][y] ← CONSTVAL.estWallVal;
         End If
      End For // interval

End For // interestedData
End estimateVariantWalls // end of function

function upDate (data, pose, DBR, ES, TDTh)
Input Parameters: as in function HSR, plus the database DBR from Step-1 and the map ES from Step-2.
Output Parameters: the updated data set, data.

1. Consider the longer-range readings first: sort data in descending order, in such a way that the synchronization among DBR, pose and data is preserved.

2. Foreach datai of the sorted data
   I. Get the coordinates of datai and of the relevant sensor from DBR as follows:
      (x1, y1) ← sensor point from DBR(datai);   // DBR contains both pieces of information
      (x2, y2) ← data point from DBR(datai);

   II. Search from (x1, y1) to (x2, y2) with n intervals of size d (n and d are calculated as in step III of the estimateVariantWalls function) as follows:


      flag ← true;   // the search continues while flag = true
      While (flag) do
         (x, y) ← coordinate of the next interval, dj, from (x1,y1);
         dist ← straight-line distance[(x1,y1), (x,y)];   // "dist" is Distance(AC) in our discussion

         // Condition-1:
         If (dist > Rmax)
            The reading datai denotes no obstacle, i.e. the place is really empty.
            So, no update and no further search.
            flag ← false;
            Exit from the loop;
         End If // Condition-1
         (xr, yr) ← translation of (x,y) onto the grid by the grid's scaling factor;

         // Condition-2:
         If ((xr,yr) is outside the map ES)
            There is no possibility of updating datai;
            flag ← false;
            Exit from the loop;
         End If // Condition-2

         // Condition-3:
         If (ES[xr][yr] denotes a sensor reading physically sensed by a sensor, or an estimated wall point)
            a) Calculate the target distance, targetDist, from the point (xr,yr) to (x2,y2) as follows:
               targetDist ← straight-line distance[(x2, y2), (xr, yr)];
            b) If (targetDist ≤ TDTh)
                  (xr,yr) and (x2,y2) denote the same point (x2,y2). So it is meaningless
                  to update datai; moreover, updating datai in this case would distort the map.
                  flag ← false;
                  Exit from the loop;
               End If // TDTh
            // Now datai is the result of specular reflection and the reading
            // should be (x,y), not (x2,y2). In the grid, (x,y) is represented as (xr,yr).
            c) Update datai with dist, calculated from (x,y) and (x1,y1), i.e.:
               datai ← dist;
               flag ← false;
         End If // Condition-3
      End While

End For // datai

End upDate //end of function

function runHSR (data, pose, minWallSize, maxWallSize, Rth, TDTh)
// This function applies the HSR Algorithm iteratively.
Input Parameters: same meaning as in function HSR
Output Parameters: filteredData, the updated data set


// Call the HSR function
fd1 ← HSR(data, pose, minWallSize, maxWallSize, Rth, TDTh);
// Count the number of updated readings
count ← 0;
Foreach datai in data
   If (datai ≠ fd1i)
      count ← count + 1;
End For
improveFirst ← count;
improvePrev ← infinity;
For i = 1 to maximum number of iterations

   fd2 ← HSR(fd1, pose, minWallSize, maxWallSize, Rth, TDTh);
   // Count the number of updated readings
   count ← 0;
   Foreach datai in fd1
      If (datai ≠ fd2i)
         count ← count + 1;
   End For
   improveCur ← count;
   If (improveCur > 0 And improveCur > 0.1 ∗ improveFirst
       And (improvePrev − improveCur) > 0)
      improvePrev ← improveCur;
      fd1 ← fd2;
   Else
      Exit from loop
   End If
End For // iteration loop
filteredData ← fd1;
End runHSR // end of function