University of Alexandria
Faculty of Engineering
Computer Science and Automatic Control Departement
Graduation Project
Academic Year 2005 / 2006
Exploration and Map-Building
Using a Mobile Robot
Ahmed Mohamed El-Sayed Hassan
Ayman Mohammed Abdel-Hameed
Mohamed Abd El-Rahman Al-Khazendar
Mohamed Amir Mansour Yousef
Supervisor: Prof. Dr Mohamed Salah El-Deen Selim
To our families, who supported us through all the logical and nonlogical
actions we did in the previous four years...
To all people who gave us material and/or immaterial help...
Preface
This document consists of nine chapters, each describing a certain part of the
project.
Chapter 1 Gives an overview of the whole system, the required specification
of the hardware, and the general architecture of the system.
Chapter 2 Gives a detailed view of the components of the robot. It
illustrates the theory of operation of the different sensors used.
Chapter 3 Explains robot motion and how to calibrate it.
Chapter 4 Illustrates how the communication between the computer and
the robot is accomplished. It shows the details of the software and
hardware required for the communication.
Chapter 5 Explains the selected methods of exploration and map-building.
It also shows the results of simulating such methods.
Chapter 6 Shows the experiments performed to test the system.
Chapter 7 Shows the analysis of different sources of errors in measurements.
Chapter 8 Gives a proposed approach to extend the capabilities of the
robot. It also shows the progress status of implementing this approach.
Chapter 9 Gives a short summary of the project. It also shows the
suggested future work.

Chapter 1

System Overview

1.1 Abstract
This project addresses the problem of map-building using a mobile robot for
small scale indoor environments. The robot will collect data using a range
finder sensor while an application is running on a remote station, which has
a kind of wireless connection with the mobile robot, and whose task is the
control of the mobile robot and the construction of the map from the received
data using an appropriate method.
1.2 Introduction
Building maps of indoor environments is a pivotal problem in mobile robotics.
The problem of map building is the problem of determining the location of
entities of interest such as landmarks and obstacles, often relative to a global
frame of reference (such as a Cartesian coordinate frame). The mapping
problem is often referred to as the concurrent mapping and localization problem.
The distance is measured relative to the position of the mobile robot.
Therefore, the more accurately the location of the robot is determined, the
more accurately the locations of the surrounding sensed objects are determined,
and the more accurately the map is constructed. A hardware calibration has
been carefully performed to obtain more accurate motion and more accurate
sensor measurements.
Because the processing unit on the mobile robot does not provide enough
processing power for the data processing and map construction, a more
powerful remote processing unit is used: a PC in this project.
Because of the need for robot mobility and remote control, a wireless
link is used to exchange data between the mobile robot and the PC. The PC
runs an application responsible for controlling the behaviour of the mobile
robot and constructing the map from the data received from the robot.
Bluetooth was chosen for the wireless communication. An API has been
developed as the interface between the PC application and the mobile robot.
This interface is composed of the commands that control and guide the
mobile robot through the execution of its tasks. The data and commands are
sent over the Bluetooth connection according to a simple protocol which
provides synchronization and data reliability.
The map is extracted from the received data according to one of two methods.
The first is the Points Locality method. It uses a line-based representation
of the map, i.e. the map is represented as a set of line segments fitted
to sets of two-dimensional range scan data acquired from multiple positions.
The second method is the Occupancy Grid method. It represents the
environment as a two-dimensional grid whose cells each hold a probability
of being occupied by objects. Figure 1.2 and Figure 1.3 show the results of
applying the two methods when exploring the area shown in Figure 1.1.
Figure 1.1: An example of simple environment
Figure 1.2: An example of output of the points locality method
Figure 1.3: An example of output of the occupancy grid method
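As a rough illustration of the occupancy-grid idea described above, the following Python sketch updates per-cell occupancy probabilities with a simple binary Bayes rule. The grid size, the sensor-model constants P_FREE and P_HIT, and all helper names are illustrative assumptions, not the project's actual implementation.

```python
# Minimal occupancy-grid sketch (illustrative, not the project's code).
# Each cell holds the probability that it is occupied; a range reading
# lowers the probability of cells along the beam and raises it at the hit.

GRID_W, GRID_H = 10, 10
P_FREE, P_HIT = 0.3, 0.9   # hypothetical sensor-model constants

def make_grid():
    """Start every cell at 0.5: completely unknown."""
    return [[0.5] * GRID_W for _ in range(GRID_H)]

def update_cell(prior, p_meas):
    """Binary Bayes update of one cell given a measurement likelihood."""
    num = p_meas * prior
    return num / (num + (1.0 - p_meas) * (1.0 - prior))

def apply_beam(grid, cells_free, cell_hit):
    """Mark traversed cells as more likely free, the end cell as occupied."""
    for (x, y) in cells_free:
        grid[y][x] = update_cell(grid[y][x], P_FREE)
    hx, hy = cell_hit
    grid[hy][hx] = update_cell(grid[hy][hx], P_HIT)

grid = make_grid()
apply_beam(grid, cells_free=[(0, 0), (1, 0), (2, 0)], cell_hit=(3, 0))
print(grid[0][3])  # > 0.5: likely occupied
print(grid[0][1])  # < 0.5: likely free
```

Repeated scans from multiple positions would keep refining the same cells, which is what lets the grid accumulate evidence over time.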
1.3 System Architecture
The proposed system consists of three independent entities: the application
entity, the API entity, and the robot hardware entity. Each entity has its own
design, which does not depend on the other entities. This approach provides
extendibility and changeability of each part of the system: because the
commands are sent according to a predefined API, changing the application will
not affect the program on the robot. On the other hand, changing some part
of the robot, or even replacing the robot with another one that supports the
same API, will not affect the application. Figure 1.4 shows these entities.
Physically, the system consists of three parts: the computer side, the robot
side, and the communication link between them. The computer side contains
both the application and the API. The robot side consists of the physical
components and the software modules that control them. The communication
between the robot and the computer is accomplished over a Bluetooth
connection: a Bluetooth dongle is connected to the computer, and a Bluetooth
module is connected to the robot.

Figure 1.4: System Architecture
1.4 Required Computer Specifications
• Processor: Pentium M 1.6GHz Centrino
• Memory: 512MB
• Platform: Windows XP, .NET framework 2
• Connectivity: Bluetooth connection
1.5 Robot Hardware
The Board of Education Rev C carrier board for BASIC Stamp 24-pin
microcontroller modules is used. The Board of Education makes it easy to
connect a power supply and a serial cable to the BASIC Stamp module. Its
major components and their functions are indicated by labels in Figure 1.5.
The BOE-BOT robot kit provided by Parallax has been used. Below is a list
of all components fixed on the BOE-BOT board. This section shows how to
connect them; details of each component come later.
• 6-9 Volt Battery (regulated to 5 Volt)
Figure 1.5: Board of Education
• BS2pe Basic Stamp module
• 2 Continuous rotation servo motors (one for each side to rotate wheels)
• 2 Digital Encoders (one for each wheel)
• 2 Infrared headlight modules
• PING)))TM ultrasonic range finder
• Standard servo motor (to rotate PING)
• Hitachi HM55B Digital Compass Module
• EmbeddedBlue eb500 Bluetooth Module
Figure 1.6 shows the BOE-BOT with all components fixed on it. The schematic
diagram for component connections is shown in Figure 1.7. The pin assignment
for the BASIC Stamp is shown in Table 1.1.
Figure 1.6: BOE-BOT Robot Components
Figure 1.7: Robot Components Schematic Diagram
I/O PIN number Device PIN
0 Bluetooth INPUT
1 Bluetooth OUTPUT
2 Infrared Leds (in both sides)
3 Left Infrared detector
4 Right Infrared detector
5 Bluetooth Status
6 Bluetooth Mode
7 Compass Enable
8 Compass Clock
9 Compass INPUT/OUTPUT
10 Right Digital Encoder
11 Left Digital Encoder
12 Right Continuous Rotation Servo
13 Left Continuous Rotation Servo
14 PING Standard Servo
15 PING Signal
Table 1.1: PIN Assignment for Basic Stamp Module
Chapter 2
Robot Components
2.1 Hardware Components at a Glance
BASIC Stamp, Figure 2.1, which is produced by Parallax Company, is the
microcontroller module used to control the other robot components.
Four types of sensors are fixed on robot board: infrared headlights, ul-
trasonic range finder, digital compass and digital encoder.
Infrared headlights, Figure 2.2, are used to detect objects but cannot
determine the exact distance to an object. They are important in many
applications such as obstacle avoidance and roaming. The ultrasonic range
finder is used mainly to detect obstacles and measure how far away they are.
The Parallax PING)))TM Ultrasonic Range Finder, Figure 2.3, has been used.
The third type of sensor is the digital compass, which is used to determine
the robot's direction with an accuracy of around five degrees. The Hitachi
HM55B Compass Module, Figure 2.4, has been used. The last type of sensor is
the digital encoder, Figure 2.5. The digital encoder is a reflective sensor
used to detect objects that are very close to the sensor. It is mainly used
to control robot motion.
The ultrasonic sensor is mounted on a Bracketing Kit produced by Parallax.
The main part of this kit is the standard servo motor, Figure 2.6, which is
used to rotate the sensor through 180 degrees. Another type of servo motor is
the continuous rotation servo motor, which is used to rotate the robot's wheels.
Figure 2.1: BASIC Stamp Module, Model: BS2pe
Figure 2.2: Infrared Transmitter and Receiver
2.2 Basic Stamp Module
In the project, the BS2pe packaged in 24-PIN DIP 1 is used. The module
may be programmed using PBASIC2 language. The main specifications of
this module are:
Microcontroller Ubicom SX48AC
Clock Speed 8 MHz Turbo3
Program Execution Speed 6000/sec4
RAM Size 38 Bytes (12 I/O, 26 Variable)
Scratch Pad RAM 128 Bytes
1. DIP stands for "Dual In-line Package": a typical IC with two rows of legs
parallel to one another.
2. Parallax Basic: All-Purpose Simple Instruction Code.
3. The instruction execution time is derived by dividing the oscillator
frequency by either one (turbo mode) or four (non-turbo mode).
4. The exact number depends on the instructions executed and the number of
parameters to each.
Figure 2.3: PING)))TM Ultrasonic Range Finder
Figure 2.4: Hitachi HM55B Compass Module
EEPROM (Program) Size 16×2K Bytes (16 K for source)
Number of I/O pins 16 + 2 Dedicated Serial
PC Programming Interface Serial (9600 baud)
2.2.1 Memory organization
EEPROM (Program Memory) This is the memory portion that holds
the program. It’s divided into 16 pages, or slots, each of size 2K Bytes.
RAM Organization The BS2pe has 32 bytes of Variable RAM space. Of
these, the first six bytes are reserved for input, output, and direc-
tion control of the I/O pins. The remaining 26 bytes are available for
general-purpose use as variables.
Scratchpad RAM The BS2pe has some additional RAM called Scratchpad
RAM: 128 bytes (locations 0-127). Scratchpad RAM can only be accessed
with the GET and PUT commands5 and cannot have variable names
assigned to it. The highest location in Scratchpad RAM, location 127
on the BS2pe, is read-only and always contains the number of the
currently running program slot. This can be handy for programs that
need to know which program slot they are running in.

5. See the GET and PUT command descriptions for more information.

Figure 2.5: Digital Encoder

Figure 2.6: Parallax Standard Servo Motor
2.3 Sensors
2.3.1 Infrared Headlights
Theory of Operation
Infrared light is used to illuminate the robot’s path and determine when the
light reflects off an object.
The infrared object detection system built on the Boe-Bot is like a car’s
headlights in several respects. When the light from a car’s headlights reflects
off obstacles, your eyes detect the obstacles and your brain processes them
and makes your body guide the car accordingly. The Boe-Bot uses infrared
LEDs for headlights as shown in Figure 2.7. They emit infrared, and in some
cases, the infrared reflects off objects and bounces back in the direction of
the Boe-Bot. The eyes of the Boe-Bot are the infrared detectors. The
infrared detectors send signals indicating whether or not they detect
infrared reflected off an object.

Figure 2.7: Using infrared in detecting objects
The IR detectors have built-in optical filters that allow very little light
through except the 980 nm infrared that the internal photodiode sensor is
meant to detect. The detector also has an electronic filter that only passes
signals around 38.5 kHz; in other words, the detector is only looking for
infrared that is flashing on and off 38,500 times per second. This prevents
IR interference from common sources such as sunlight and indoor lighting.
Sunlight is DC interference (0 Hz), and indoor lighting tends to flash on and
off at either 100 or 120 Hz, depending on the mains power frequency in the
region. Since 100-120 Hz is far outside the electronic filter's 38.5 kHz
band-pass frequency, it is completely ignored by the IR detectors.
Using Basic Stamp to Control Infrared sensor
Below is a code snippet that uses infrared.
FREQOUT 8, 1, 38500
irDetector = IN9
In this code snippet, FREQOUT sends a 38.5 kHz signal to the IR LED connected
to pin 8; the next line stores the IR detector's output, which is connected to
pin 9, in a bit variable named irDetector. The IR detector's output state when
it sees no IR signal is high. When the IR detector sees the 38500 Hz harmonic
reflected off an object, its output is low. The output only stays low for a
fraction of a millisecond after the FREQOUT command is done sending the
harmonic, so it is essential to store the IR detector's output in a variable
immediately after the FREQOUT command.
As we have seen, each IR LED/detector pair is connected to two pins of the
Basic Stamp controller, one for the light source and the other for the
detector. If two sensors are used, one on each side of the robot, four Stamp
pins are needed; with three sensors, the count becomes too high for the
limited number of Stamp I/O pins. A good solution to this problem is to use
one pin for all light sources; it does no harm if all sources transmit light
even when only one detector is read. With two sensors, the total number of
Stamp pins becomes three, which saves one I/O pin.
2.3.2 Ultrasonic Range Finder
Theory of Operation
The Parallax PING ultrasonic range finder provides precise, non-contact dis-
tance measurements from about 3 cm to 3 meters. It is very easy to connect
to BASIC Stamp, requiring only one I/O pin.
The Ping sensor works by transmitting an ultrasonic (well above human
hearing range) burst and providing an output pulse that corresponds to the
time required for the burst echo to return to the sensor. By measuring the
echo pulse width, the distance to target can easily be calculated.
Figure 2.8 shows how this operation is done. Under control of a host
microcontroller (the trigger pulse), the sensor emits a short 40 kHz
(ultrasonic) burst. This burst travels through the air at about 1130 feet
per second, hits an object, and then bounces back to the sensor. The PING)))
sensor provides an output pulse to the host that terminates when the echo is
detected; hence the width of this pulse corresponds to the distance to the target.
Figure 2.8: Ping Signals
Using Basic Stamp to Control Ping
Below is a code snippet that uses the Ping sensor.
LOW Ping ’ make trigger 0-1-0
PULSOUT Ping, Trigger ’ activate sensor
PULSIN Ping, IsHigh, rawDist ’ measure echo pulse
rawDist = rawDist / 2 ’ remove return trip
rawDist = rawDist * Scale ’ convert to uS
In this code snippet, PULSOUT generates the trigger pulse on the Ping pin,
and PULSIN measures the echo pulse, which terminates when the echo is
detected. The width of this pulse is stored in the rawDist variable, which is
divided by 2 to remove the return trip and then converted to microseconds.
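The same conversion can be sketched numerically. The following Python snippet is an illustrative calculation, not the project's code; it assumes only the quoted speed of sound of about 1130 feet per second.

```python
# Convert a PING))) echo pulse width to distance (illustrative sketch).
# Sound travels about 1130 ft/s ~= 34442 cm/s, i.e. ~0.0344 cm per microsecond.
CM_PER_US = 1130 * 30.48 / 1_000_000   # ft/s converted to cm per microsecond

def echo_to_cm(echo_us):
    """Half the echo time is the one-way trip; multiply by speed of sound."""
    return (echo_us / 2.0) * CM_PER_US

# A round-trip echo of about 5800 microseconds corresponds to roughly 1 meter.
print(round(echo_to_cm(5800), 1))
```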
2.3.3 Digital Compass
Theory of Operation
The Hitachi HM55B Compass Module is a dual-axis magnetic field sensor
that can add a sense of direction to the robot. The sensing device on the
Compass Module is a Hitachi HM55B chip. An onboard regulator and resis-
tor protection make the 3 volt HM55B chip compatible with 5 volt BASIC
Stamp microcontroller supply and signal levels.
As shown in Figure 2.9, The Hitachi HM55B Compass Module has two
axes, x and y. Each axis reports the strength of the magnetic field's
component parallel to it. The x-axis reports (field strength) · cos(θ), and
the y-axis reports (field strength) · sin(θ). To resolve θ into a clockwise
angle from north, use arctan(−y/x), which in PBASIC 2.5 is x ATN −y. The ATN
command returns the angle in binary radians.

Figure 2.9: Calculating the angle using the compass
The Hitachi HM55B chip on the Compass Module reports its x and y
axis measurements in terms of microteslas µT in 11-bit signed values. The
HM55B is designed to return a value of 1 for a north magnetic field of 1 µT
parallel to one of its axes. If the magnetic field is south, the value will be
-1. These are nominal values; according to the HM55B datasheet, the actual
field for a measurement of 1 could range anywhere from 1 to 1.6 µT.
Using Basic Stamp to Control Compass
The microcontroller connected to the HM55B must control its enable and
clock inputs and use synchronous serial communication to get the axis mea-
surements from its data input and data output pins. For example, a BASIC
Stamp 2 can be programmed to control the Compass Module’s enable lines
with HIGH/LOW and send values that reset the device and start a mea-
surement with SHIFTOUT commands. The SHIFTOUT command controls
the Compass Module’s clock input as it sends data bit values to its data
input. The converse of SHIFTOUT is SHIFTIN, which also controls the de-
vice’s clock input as it collects data bits sent by the device’s data output pin.
Below is a code snippet that controls compass:
HIGH En: LOW En ’ Send reset command to HM55B
SHIFTOUT DinDout,clk,MSBFIRST,[Reset\4]
HIGH En: LOW En ’ HM55B start measurement command
SHIFTOUT DinDout,clk,MSBFIRST,[Measure\4]
status = 0 ’ Clear previous status flags
DO ’ Status flag checking loop
HIGH En: LOW En ’ Measurement status command
SHIFTOUT DinDout,clk,MSBFIRST,[Report\4]
SHIFTIN DinDout,clk,MSBPOST,[Status\4] ’ Get Status
LOOP UNTIL status = Ready ’ Exit loop when status is ready
SHIFTIN DinDout,clk,MSBPOST,[x\11,y\11] ’ Get x & y axis values
HIGH En ’ Disable module
IF (y.BIT10 = 1) THEN y = y | NegMask ’ Store 11-bits as signed word
IF (x.BIT10 = 1) THEN x = x | NegMask ’ Repeat for other axis
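To illustrate the angle computation described above, here is a hedged Python sketch that resolves the two axis readings into a clockwise heading from north, including the conversion to binary radians (256 brads per full circle) that the ATN command uses. The function names and the sample axis values are illustrative.

```python
import math

BRADS_PER_CIRCLE = 256  # PBASIC's ATN works in binary radians

def heading_degrees(x, y):
    """Clockwise angle from north via arctan(-y/x), as in the text."""
    return math.degrees(math.atan2(-y, x)) % 360.0

def heading_brads(x, y):
    """Same heading expressed in binary radians, as ATN would report it."""
    return round(heading_degrees(x, y) * BRADS_PER_CIRCLE / 360.0) % 256

# Field entirely along the +x axis: heading 0 (north).
print(heading_degrees(10, 0))
# A quarter-circle heading, 90 degrees, is 64 binary radians.
print(heading_brads(0, -10))
```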
2.3.4 Digital Encoders
Digital Encoders are reflective sensors fixed in front of robot wheels to detect
their motion. The sensors emit infrared light and look for its return from a
reflective surface. They are calibrated for optimal sensing of surfaces a few
millimeters away. The Boe-Bot’s wheels, even though they are black, reflect
sufficient IR to cause the sensors to respond. When a sensor ”sees” part of
a wheel, it pulls its output low. When it’s looking through a hole, its output
floats, and the pullup resistor pulls it high. Because the sensors emit and
detect only modulated IR (at about 7.8 kHz), they are relatively insensitive to
ambient light. Nevertheless, some fluorescent fixtures may also emit light at
this frequency and could interfere with their operation. As a Boe-Bot wheel
turns, the sensor will see an alternating pattern of hole-no hole-hole-no hole,
etc. Its output will be a square wave whose frequency corresponds to the
speed of rotation.
More details about digital encoders and their use in motion control and
calibration are given in Chapter 3.
2.4 Servo Motors
2.4.1 Continuous Rotation Servo Motors
Continuous rotation servo motors6 are pulse-width-controlled motors. They
are fed with a train of pulses, with 20 ms between consecutive pulses. The
width of each pulse controls the speed of rotation: a 1.5 ms pulse width
stops the motor, a greater pulse width rotates it counterclockwise, and a
smaller pulse width rotates it clockwise. Figure 2.10 shows this operation.
Figure 2.10: Pulse Width Control
2.4.2 Ultrasonic Bracketing Kit
The PING Bracketing Kit includes a standard servo and all mounting hard-
ware required to attach the PING ultrasonic sensor to the front of the Paral-
lax Boe-Bot robot (or any custom-made robot chassis with a flat mounting
spot on the front).

6. For more information about continuous rotation servo motors, check Appendix C.
The standard servo7 provides 180 degrees of ultrasonic scanning ability. A
train of pulses is fed to the motor to rotate it; the pulse width controls
the amount of rotation and hence the angle of the Ping sensor.
The appropriate pulse width for each angle was determined experimentally by
trial and error. Table 2.1 contains the experiment results.
Angle Pulse width (no. of Stamp cycles) Pulse width (ms)
90 (left) 1148 2.296
45 (left) 910 1.82
0 (forward) 693 1.386
45 (right) 465 0.93
90 (right) 245 0.49
Table 2.1: Standard servo pulse width for different Ping angles
7. For more information about standard servo motors, check Appendix C.
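For Ping angles between the five measured points in Table 2.1, linear interpolation between neighbouring table entries is a natural approach. The sketch below is illustrative, not the project's routine; the sign convention (negative = right, positive = left) is an assumption for the example.

```python
# Interpolate the standard-servo pulse width (in Stamp cycles) for an
# arbitrary Ping angle, using the measured points from Table 2.1.
# Assumed convention: negative angles = right, positive = left.
TABLE = [(-90, 245), (-45, 465), (0, 693), (45, 910), (90, 1148)]

def pulse_for_angle(angle):
    """Piecewise-linear interpolation between the calibration points."""
    if angle <= TABLE[0][0]:
        return TABLE[0][1]
    for (a0, p0), (a1, p1) in zip(TABLE, TABLE[1:]):
        if angle <= a1:
            return round(p0 + (p1 - p0) * (angle - a0) / (a1 - a0))
    return TABLE[-1][1]

print(pulse_for_angle(0))    # 693: straight ahead, as in the table
print(pulse_for_angle(30))   # an intermediate angle not in the table
```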
Chapter 3
Robot Motion
Robot motion is either in straight lines or pivoting on the robot's center.
General motion is possible but would complicate the problem and increase
measurement inaccuracies; hence, for simplicity, it is not used. Parallax
continuous rotation servo motors and digital encoders have been used to
implement motion.
Motion calibration is the most important part of any robotics application.
Without calibration, motion is completely unreliable and will break down any
application. Several experiments and techniques are used to calibrate the
robot's motion.
3.1 Motion Requirements
In order to move the robot, continuous rotation servo motors are used. Two
servo motors are fixed at the two sides of the robot; they must be identical
and perfectly aligned to achieve proper motion.
The Boe-Bot is able to perform two motion actions: move forward or backward
a certain distance, and rotate right or left around its center by a certain
angle. Each action is treated in a different way.
• Motion in a straight line:
In order to move in a straight line, the two wheels of the robot must rotate
with the same velocity; e.g., to move the robot forward, the right wheel must
rotate clockwise and the left wheel counterclockwise. Below is a Basic Stamp
code snippet to move the robot in a straight line:
MOVE:
  DO
    ' Assume that 750 is the pulse width that stops the robot
    PULSOUT RightWheel, 650   ' 750 - 100
    PULSOUT LeftWheel, 850    ' 750 + 100
    PAUSE 20                  ' 20 ms between consecutive pulses
  LOOP
  RETURN
Unfortunately, this code will not work as expected because of calibra-
tion considerations. This point will be mentioned in details in Section
3.2.
The previous code uses an infinite loop to rotate the wheels. It needs some
modification to move the robot a certain distance. To achieve that, we need a
way to calculate the distance the robot has traveled so far. The simplest
solution is to compute the distance from the robot's velocity and the elapsed
time; however, this is not practical since it requires an ideal environment.
Therefore, another solution is adopted that uses the digital encoders.
Figure 3.1: Using Digital Encoder to calculate distance
The BOE-BOT wheels come equipped with eight evenly spaced holes. A digital
encoder, a reflective sensor fixed in front of each wheel, detects these
holes. As shown in Figure 3.1, sixteen encoder pulses indicate that the
wheel has made a full rotation (2πr cm, where r is the wheel radius). Using
the wheel radius, any distance can easily be converted from meters to
encoder pulses. The MOVE routine counts encoder pulses until they reach the
required number.
According to the BOE-BOT dimensions, one encoder pulse indicates that the
wheel traveled about 1.3 cm. When the robot moves forward, this is the
distance that the robot actually moved.
Below is a pseudo code that moves robot for a certain distance.
MOVE(Distance)
    EP = convert Distance to encoder pulses
    While (EP <> 0)
        PULSOUT right_wheel, 650
        PULSOUT left_wheel, 850
        If (right encoder pulsed)
            EP = EP - 1
    Loop
RETURN
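The conversion step at the top of the MOVE routine can be sketched as follows. This is an illustrative Python calculation using the figures stated above (8 holes, hence 16 transitions per revolution, about 1.3 cm per pulse); the wheel radius constant is an assumption chosen to match that figure.

```python
import math

HOLES = 8                    # holes per wheel
PULSES_PER_REV = 2 * HOLES   # each hole yields a rising and a falling edge
WHEEL_RADIUS_CM = 3.3        # assumed radius; yields ~1.3 cm per pulse

def cm_per_pulse(radius_cm=WHEEL_RADIUS_CM):
    """Wheel circumference divided over the sixteen encoder transitions."""
    return 2 * math.pi * radius_cm / PULSES_PER_REV

def distance_to_pulses(distance_cm):
    """Number of encoder pulses MOVE must count for a given distance."""
    return round(distance_cm / cm_per_pulse())

print(round(cm_per_pulse(), 2))   # about 1.3 cm per pulse
print(distance_to_pulses(50))     # pulses to count for a 50 cm move
```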
• Rotation around center:
The main disadvantage of the digital encoder is its coarse resolution. All
distances are measured in encoder pulses, and one encoder pulse corresponds
to about 1.3 cm of wheel travel. This problem appears more clearly in
rotation: using digital encoders may introduce more than 5◦ of error, which
is very high and hurts application accuracy. Another point: why involve the
digital encoder in rotation when a digital compass is available? Although the
compass has nearly the same error tolerance, it is easier to use. In
addition, compass error does not accumulate as encoder error does, because
the compass gives absolute measurements independent of previous inaccuracies.
Rotation is split into two routines:
– General rotation: using compass, rotate robot slowly until it reaches
the target angle.
– 90◦ rotation: it must be more accurate, as it will be frequently
used in the application. Special routine is used to handle this
action by calculating the exact number of Stamp pulses needed to
rotate robot 90◦.
3.2 Motion Calibration
The key to success in any odometry system is calibration. Calibration is
important because the real world is not ideal: documented specifications of
the robot's motors, wheels, and other components may not be accurate, or may
even change with the environment. For example, motor current consumption
depends on the power supply, wheel rotation depends on surface friction and
the weight of the robot, the wheels may not be exactly the same size, and
their axles may not align perfectly.
Below are four ways to calibrate robot motion.
3.2.1 Pre-Motion Calibration
Pre-motion calibration aims at determining the actual operating values of the
servo motors, based on the current state of the robot's motors, wheels, power
supply, etc. It is performed after the final assembly of the robot parts and
before using it.
The first step is to center the servos, so that they do not rotate when fed
with 1.5 ms pulses; the second is to establish the relationship between pulse
width and servo speed.
• Step 1: Centering the Servos
The first source of error is that the pulse width expected to stop the
servo motor, 1.5 ms in our case, may not be the actual one. The following
experiment is done to avoid this problem. Figure 3.2 shows the signal
that has to be sent to the servo to calibrate it. This is called the center
signal; after the servo has been properly adjusted, this signal instructs it
to stay still.

Figure 3.2: Centering Servo Motor
If the servo turns, it needs calibration: using a screwdriver, the
potentiometer in the servo is adjusted until the servo stops turning.
After this step, supplying the servo with a 1.5 ms pulse train will keep it
from turning.
• Step 2: Establishing relationship between pulse width and
speed
In step two, the goal is to determine for each servo, because they may
differ, the correspondences between the various pulse widths and the ac-
tual rotation speeds. The importance of this step is to ensure that both
wheels will rotate with the same velocity when moving in a straight line.
Experiment steps are as follows:
– Center the two servos.
– Determine a single maximum speed for both wheels. To do this,
both servos are sent a stream of 256 pulses of the same width,
at one extreme of their pulse range. This causes the Boe-Bot to
spin around. While it is doing this, the program counts the
transitions on each encoder output. Next, the same is done at the
other extreme, and the Boe-Bot spins in the other direction.
Finally, the lowest of the four counts measured becomes the
maximum common sustainable speed.
– Next, cycle the servos through a series of pulse streams, each with
256 pulses, but each series with a different pulse width. Again,
count edges for each servo. From this data one could construct
a graph for each servo of its velocity at each of the tested pulse
widths. But this is not what we want.
– What we really want is a graph of the pulse width for each of
several equally-spaced velocities. To get these, the program uses
linear interpolation between the points on the first graph to ap-
proximate the needed pulse values.
After this experiment, each servo has sixteen pulse width values corre-
sponding to sixteen equally spaced velocities.
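The interpolation step described above can be sketched as follows: given measured (pulse width, encoder count) samples for one servo, invert the speed curve to find the pulse width for each of sixteen equally spaced target speeds. The sample values below are invented for illustration; the project's measured data would differ.

```python
# Sketch of step 2: invert a measured speed-vs-pulse-width curve by linear
# interpolation. The (pulse_width, encoder_count) samples are hypothetical.
SAMPLES = [(750, 0), (760, 8), (780, 20), (810, 31), (850, 38)]

def width_for_speed(target):
    """Find, by interpolation, the pulse width producing a target speed."""
    for (w0, s0), (w1, s1) in zip(SAMPLES, SAMPLES[1:]):
        if s0 <= target <= s1:
            return w0 + (w1 - w0) * (target - s0) / (s1 - s0)
    raise ValueError("target speed outside measured range")

# Sixteen equally spaced speeds between zero and the measured maximum.
max_speed = SAMPLES[-1][1]
levels = [width_for_speed(max_speed * i / 16) for i in range(1, 17)]
print([round(w) for w in levels])
```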
Unfortunately, these values are not very accurate, since they depend on the
digital encoder with its range of error. For simplicity and better accuracy,
motion in a straight line uses other values, determined by trial and error;
these values are not valid for different surfaces, motors, or other factors.
Table 3.1 shows these values.1
Speed Level Right Motor Pulse Width Left Motor Pulse Width (Stamp cycles)
No Motion(Null) 744 744
Level 1 733 754
Level 2 725 762
Level 3 720 767
Level 4 707 780
Level 5 655 815
Table 3.1: Motors pulse width for motion in straight line with different speeds
3.2.2 Coordinating Motion
The previous calibration ensures that the robot constants are correct, but we
still need to overcome errors caused by the surrounding environment.
One of the most important sources of error is the difference between the
velocities of the two wheels. It may arise because the two motors, or the two
wheels, are not identical, or because of friction. To overcome this problem,
we must first detect that one wheel is getting proportionately ahead of the
other, then leave out pulses to retard its motion until the other wheel
catches up. To detect this, the digital encoder is used again, but now during
the motion itself. If the robot moves in a straight line, the two wheels must
cover the same distance at the same velocity, which means the digital
encoders must give a consistent pulse sequence, {left, right, left, right,
...}. If the left wheel is faster than the other, the sequence of encoder
pulses will contain {left, left}, which can easily be detected by the robot
program. Below is a pseudo code of this scenario:
While motion
    read digital encoder pulses
    if left encoder reads two consecutive pulses
        stop left wheel for some servo pulses
    if right encoder reads two consecutive pulses
        stop right wheel for some servo pulses
Loop

1. Pulse width is measured in number of BASIC Stamp clock pulses. To convert
it to milliseconds, multiply by 0.002.
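The detection rule in this pseudo code can be simulated offline: two consecutive pulses from the same encoder mean that wheel has gotten ahead. The Python sketch below is illustrative only; the function name and sequence encoding are assumptions.

```python
# Simulate the wheel-coordination check from the pseudo code above:
# two consecutive pulses from the same encoder mean that wheel is running
# fast and should be held back for a few servo pulses.
def find_runaways(pulse_sequence):
    """Return (index, wheel) pairs where one encoder fired twice in a row."""
    events = []
    for i in range(1, len(pulse_sequence)):
        if pulse_sequence[i] == pulse_sequence[i - 1]:
            events.append((i, pulse_sequence[i]))
    return events

# A well-coordinated robot alternates strictly between the two encoders:
print(find_runaways(["L", "R", "L", "R"]))       # []
# Here the left encoder pulses twice in a row: the left wheel is fast.
print(find_runaways(["L", "R", "L", "L", "R"]))  # [(3, 'L')]
```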
3.2.3 Ramping
Inertia is a fact of life. ”A body at rest will remain at rest, and a body in
motion will remain in motion, unless acted upon by another force.” And this,
of course, applies to robots and robot wheels. Starting or stopping motion
without gradual acceleration and deceleration is not only jarring to the
servos' internal mechanisms but also wastes precious battery energy. If a
STOP command is sent to the robot while it is moving, inertia will keep it
moving a little, possibly for another servo pulse or two, completely messing
up the precision we set out to achieve. On the other hand, if a MOVE command
is sent to the robot, inertia may make it shake and divert from its proper path.
Fortunately, ramping can solve the problem. Figure 3.3 shows how: to move
the robot a distance D with velocity Vmax, first ramp the velocity from Vmin
up to Vmax, then move at Vmax until the distance is almost covered, and
finally ramp the velocity back down from Vmax to Vmin.
Figure 3.3: Ramping wheel velocity curve
Implementing ramping on the BOE-BOT is easy using the values in Table 3.1,
and it greatly improved the motion.
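The ramped profile of Figure 3.3 can be sketched with the discrete speed levels of Table 3.1 (0 = stop up to 5 = fastest). The helper below is an illustrative assumption about how the profile could be laid out per servo pulse frame, not the project's routine.

```python
# Sketch of the ramping profile in Figure 3.3 using discrete speed levels
# (0 = stop .. 5 = fastest, as in Table 3.1). Each list entry is the level
# used for one servo pulse frame.
def ramp_profile(total_frames, max_level=5):
    """Ramp 0 -> max, cruise at max, then ramp max -> 0."""
    up = list(range(1, max_level + 1))
    down = list(reversed(up))
    cruise = [max_level] * max(0, total_frames - len(up) - len(down))
    return (up + cruise + down)[:total_frames]

print(ramp_profile(12))  # [1, 2, 3, 4, 5, 5, 5, 5, 4, 3, 2, 1]
```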
3.2.4 Error Detection and Correction
After all the calibration steps mentioned previously, the BOE-BOT moves with
acceptable behavior. But what if some accidental problem causes the robot to
divert from its path?
A final solution using compass is provided. If the robot receive MOVE 50 cm
FORWARD command, figure 3.4 shows its new position if it diverts from its
path by an angle θ, assuming that diversion was in a straight line, the steps
Chapter 3. Robot Motion 26
to detect and correct this error will be as follows.
Figure 3.4: Error detection in robot motion
• The compass detects that the robot's direction has changed by an angle θ.
• Using the compass, rotate the robot back to the correct direction.
• Using the assumption that the diversion was in a straight line, calculate
the new location of the robot, point B:
(X_B, Y_B) = (X_A + D sin θ, Y_A + D cos θ) (3.1)
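Equation 3.1 can be checked with a small sketch (angles given in degrees; placing point A at the origin is an arbitrary choice):

```python
import math

def corrected_position(xa, ya, distance, theta_deg):
    """Estimate the robot's actual position B after it travelled
    `distance` while diverted by `theta_deg` from its intended heading,
    assuming the diversion was along a straight line (Equation 3.1)."""
    theta = math.radians(theta_deg)
    return (xa + distance * math.sin(theta), ya + distance * math.cos(theta))

# 50 cm commanded forward, but the robot drifted 30 degrees.
xb, yb = corrected_position(0.0, 0.0, 50.0, 30.0)
print(round(xb, 1), round(yb, 1))  # 25.0 43.3
```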
Chapter 4
Robot-Computer
Communication
4.1 Communication Environment
4.1.1 Communication Hardware Components at a Glance
The communication between the robot and the computer is accomplished
over a Bluetooth connection. That necessitates having a Bluetooth module on
each of the involved devices. For the robot, the EmbeddedBlue Series eb500,
Figure 4.1, has been used. It is produced by the "A7 Engineering" company and
distributed by Parallax, from which most of the components used in this project
were bought. For the computer, the D-Link DBT-122 Bluetooth USB Adapter,
Figure 4.2, is used.
4.1.2 Robot Communication Side
The module supports many features. The most important is that it supports
simple serial UART1 communication and control. Its range in the open field
is 328 feet, and it has low current consumption for long battery life.
The eb500 implements all components of the Bluetooth stack on board,
so no additional host processor code is required. Once a connection to
another Bluetooth device has been established, the link has the appearance
of a cabled serial connection, eliminating the need for special wireless protocol
knowledge.
1For more information, check appendix B
Figure 4.1: EmbeddedBlue eb500 Bluetooth Module
Figure 4.2: D-Link DBT-122 Bluetooth USB Adapter
The module supports two modes. The first is Data mode, where any data
sent to the module from the robot is transferred directly over the Bluetooth
connection to the other end. The second is Command mode, where any data
sent to the module from the robot is interpreted by the module itself as a
command. The module supports commands for connection establishment and
termination, as well as commands for changing the security settings. The full
listing of the commands can be found in the module's manual.
The steps done at the robot side to communicate with the computer are
as follows:
• The connection is established and that can be checked from the status
pin of the eb500
• The module automatically changes to data mode.
• Both the Stamp microcontroller and the Bluetooth module have USART
modules, which are used to exchange data between them.
• To disconnect, the Bluetooth module is changed to Command Mode
and a disconnect command is sent to it.
Below is a code snippet that exchanges data with the eb500 module.
'Wait for the connection to be established and switch into data mode.
'When switching into data mode, a 300ms timeout is required to give the
'module enough time to make the change.
'in5 is the status pin.
'0 = Not Connected, 1 = Connected
WaitForConnection:
IF in5 = 0 THEN WaitForConnection
'Switch to Data Mode
'6 is the number of the mode pin
'LOW = Command Mode, HIGH = Data Mode
HIGH 6
PAUSE 300
'Now the connection is established
'Send "Hello World"
SEROUT 1,84,["Hello World",CR]
'Receive some data and store it in variable "ReceivedData"
SERIN 0,84,ReceivedData
'Switch to Command Mode
LOW 6
'Wait for acknowledgment from the module
SERIN 0,84,[WAIT(CR,">")]
'Disconnect from the remote device by sending the dis command
SEROUT 1,84,["dis",CR]
'Wait for acknowledgment from the module
SERIN 0,84,[WAIT(CR,">")]
'It is disconnected now
4.1.3 Computer Communication Side
For the hardware part, the D-Link adapter driver is easily installed. The
adapter supports SPP2 and is mapped to a new serial port installed along
with the driver. The application running on the computer should use this
serial port for sending and receiving data; the Serial Port Profile3 hides the
complexities of the Bluetooth connection.
For the software part, the application should use the aforementioned serial
port. To simplify the task of interfacing with the serial port, the .NET 2.0
framework is used; it is the first .NET framework to provide classes for
communicating with serial ports. The SerialPort class is used, which supports
methods to send/receive bytes to/from the serial port.
The steps required at the computer side to communicate with the robot are
as follows:
• Make the connection between the computer and the robot using Blue-
tooth adapter tools.
• From System Settings check the serial port assigned to the Bluetooth
adapter.
• In the SerialPort class, use the method Open to allocate the required
resources by the system for the port.
• Now, you can easily use the methods provided by SerialPort class for
sending and receiving data.
2SPP stands for Serial Port Profile
3For more information about Bluetooth profiles, check appendix A
4.2 Communication API
One of the points taken into consideration while designing this project is
how the different modules in the project can communicate together. For
software modules, OOP concepts are applied to provide encapsulation and
reusability. The tricky point is that some hardware modules exist in the
project. This chapter gives an overview of the correspondence between the
hardware modules and the application running on the computer.
4.2.1 Why Use an API?
The robot is a very generic module that can be used in many applications.
To account for such generality, there should be a well-defined API through
which any application that uses the robot can control it. To make this simpler,
the commands related to controlling the robot and its modules are gathered
together into an API. In addition to simplicity, this design provides
changeability, reusability and extendibility.
Changeability and reusability are achieved by being able to replace any
module on the robot, even the whole robot, without affecting the application
that uses it. That is possible provided the new robot configuration supports
the same commands the previous one supported. For example, as mentioned
in Chapter 2, the robot has a compass on it; the compass can be replaced
with a more accurate one without affecting the application running on the
computer. That is changeability at the robot side. At the application side,
the whole application can be changed without affecting the robot. For
example, in this project the proposed application is map-building; it can be
smoothly changed to any other application without modifying the robot,
assuming the robot's components fulfill the requirements of the new
application.
Extendibility is achieved by putting the commands in a separate layer.
More commands can be added to the API seamlessly. Even at the robot
side, new hardware modules can be added, and once their corresponding
commands are provided to the API, the result is a more powerful API
with minor changes.
4.2.2 API Commands
According to the modules attached to the robot, the API commands have
been determined. Below are the commands and their description.
Initialize Initializes the status variables of the robot, e.g. the position, the
angle and the sensor directions.
Connect Initializes the connection with robot.
Disconnect Disconnects from the robot.
Move Forward Makes the robot move forward until a stop command is
invoked.
Move Distance Moves the robot forward a certain distance and then stops
it automatically.
Stop Stops the robot whether it was moving or rotating.
Measure Ping Returns the distance in front of the ultra-sonic sensor at its
current position and angle.
Measure Compass Returns the angle of the robot with respect to the mag-
netic north.
Rotate Ping Rotates the ultra-sonic sensor to one of five positions (90◦ to
the left, 45◦ to the left, straight ahead, 45◦ to the right, 90◦ to the right).
Rotate Left 90◦ Rotates the robot 90◦ to the left without using the com-
pass.
Rotate Right 90◦ Rotates the robot 90◦ to the right without using the
compass.
Rotate Left Slow Rotates the robot slowly to the left till a stopping com-
mand is invoked.
Rotate Right Slow Rotates the robot slowly to the right till a stopping com-
mand is invoked.
Rotate With Compass Rotates the robot to any angle with respect to the
magnetic north using the compass.
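On the computer side, the commands above could be dispatched roughly as follows. This is a pure sketch: the opcode values and handler names are assumptions, since the project's actual wire encoding is not specified here:

```python
# Hypothetical opcode table -- the byte values are assumptions, not the
# project's real protocol.
COMMANDS = {
    0x01: "initialize",
    0x02: "move_forward",
    0x03: "move_distance",
    0x04: "stop",
    0x05: "measure_ping",
    0x06: "measure_compass",
    0x07: "rotate_ping",
}

def dispatch(opcode, handlers):
    """Look up the API command for an opcode and invoke its handler."""
    name = COMMANDS.get(opcode)
    if name is None:
        raise ValueError("unknown command opcode: %#x" % opcode)
    return handlers[name]()

# Record which handler ran, using trivial stand-in handlers.
log = []
handlers = {name: (lambda n=name: log.append(n)) for name in COMMANDS.values()}
dispatch(0x04, handlers)
print(log)  # ['stop']
```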
Figure 4.3: The UML of the API Implementation at the Computer Side
4.2.3 API Implementation
The API implementation is split between the application and the robot. There
is a one-to-one correspondence between the methods in the application part
of the API and the modules on the robot's microcontroller.
At the robot side, the code is divided into modules. Upon the reception
of a command, the appropriate module is executed. Such modular design
facilitates the pre-explained changeability and extendibility.
At the application side, the API is implemented as certain classes. The UML
shown in Figure 4.3 illustrates them.
The abstract class Robot encapsulates the standard data members and
methods that should be available in any robot supporting this API. Those
methods are abstract and have no implementation. Any actual robot class,
BoeBotRobot in this project, must inherit from the Robot class and give
the appropriate implementation for such methods.
The Robot class, or any class inherited from it, needs a class implementing
the interface CommunicationPort. This interface provides the methods
required to open a connection with the robot, disconnect from it, send
data to it, and receive data from it. In this project, the serial port profile
of the Bluetooth dongle is used, and hence a class SerialPortCommunication
implements the interface CommunicationPort. An instance of the
SerialPortCommunication class is passed to the constructor of the
BoeBotRobot class.
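The class structure described above can be sketched in Python (the project itself uses .NET classes; the toy transport and its method names below are illustrative assumptions):

```python
from abc import ABC, abstractmethod

class CommunicationPort(ABC):
    """Interface hiding the transport (serial port, Bluetooth, ...)."""
    @abstractmethod
    def open(self): ...
    @abstractmethod
    def send(self, data): ...
    @abstractmethod
    def receive(self): ...
    @abstractmethod
    def close(self): ...

class Robot(ABC):
    """Abstract robot: concrete robots implement the API commands."""
    def __init__(self, port: CommunicationPort):
        self.port = port
    @abstractmethod
    def move_distance(self, cm): ...
    @abstractmethod
    def measure_ping(self): ...

class EchoPort(CommunicationPort):
    """Toy transport used here instead of a real serial port."""
    def __init__(self): self.sent = []
    def open(self): pass
    def send(self, data): self.sent.append(data)
    def receive(self): return self.sent[-1]
    def close(self): pass

class ToyRobot(Robot):
    def move_distance(self, cm): self.port.send(("MOVE", cm))
    def measure_ping(self): return 42   # fake fixed reading

robot = ToyRobot(EchoPort())
robot.move_distance(50)
print(robot.port.receive())  # ('MOVE', 50)
```

Swapping EchoPort for a real serial transport would not touch ToyRobot, which is exactly the changeability argument of Section 4.2.1.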
Chapter 5
Map Building Application
Map building is one of the fundamental tasks of mobile robots, and many
researchers have focused on the problem of how to represent the environment
as well as how to acquire models using this representation. Exploration is
the task of guiding a vehicle in such a way that it covers the environment
with its sensors.
Two methods have been used to build the map. The first is Points
Locality, which aims to fit the points returned by sensor readings into line
segments. The second is a well-known method called Occupancy Grid,
which uses a probabilistic approach to build the map. For navigation, the
Wall Follower technique has been used.
5.1 Method 1: Points Locality
This section discusses the first of the two methods used in map construction. In
this method the map is represented as a list of line segments that describe
the contours of the objects. This representation provides simplicity and
compression, because it saves the memory needed to hold the map while
still providing an appropriate level of detail.
Now, how can we find the line segments that describe the map? A method
has been developed to obtain these line segments from the raw sensor data
received from the robot. We called it the "Points Locality Method", a
name extracted from the idea the method depends on to determine each
segment.
If we try to extract the line segments from the raw data measured by the robot's
range finder sensor, we face two problems. First, we should divide the
set of points into a number of subsets of co-linear points, each holding
the points related to one segment. Second, because of the inaccuracy in the
measured points, we need to fit these points to find a candidate line segment
that results in minimum error.
5.1.1 Co-linearity Problem
We can solve this problem by grouping the points from the beginning into
co-linear groups, depending on the locality of the new point and the locality of
the existing line segments. Each group of neighbouring points should be related
to the same line segment. Each new point can either be inserted into one of the
existing line segments or form a new one, depending on the locality of
the new point and the locality of the existing segments. Hence the locality of
points enables us to decide whether they compose one segment or not, as
explained in the next section.
5.1.2 Neighbourhood Decidability using Points Locality
Each newly received point is checked against each existing segment to see
whether it is a neighbour of its points, i.e. whether it corresponds to this
segment. There are several cases:
1. The new point is near the start or the end of the line segment. In
this case either the point belongs to this segment or it belongs to another
segment that intersects it, as at corners. If the height of the new point is
small enough and the total error after inserting the new point is still small
enough, then the new point belongs to this segment; otherwise it belongs
to an intersecting segment.
2. The new point is far from the start and end points of the line segment,
but its height is small enough. As in the first case, the new point may belong
to the line segment or to an intersecting segment, and this is decided
according to the current and the new total error.
3. The new point is far from the start and end points of the line segment
and its height is not small enough. In this case the new point does not
belong to this line segment.
If the new point is found not to belong to any of the existing line segments,
it is inserted into a new line segment, which will have only one point until
other neighbouring points are received.
Figure 5.1: The solid points belong to the line segment; the non-solid points
are far points and do not belong to it.
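The height test at the core of the cases above can be sketched as follows. The threshold value and the omission of the endpoint-proximity and total-error checks are simplifications:

```python
import math

def point_height(px, py, a, b):
    """Perpendicular distance ("height") of point (px, py) over the
    line y = a*x + b."""
    return abs(a * px - py + b) / math.sqrt(a * a + 1.0)

def belongs(px, py, a, b, height_threshold):
    """Simplified membership test from Section 5.1.2: the point can join
    the segment only if its height over the fitted line is small enough
    (the full method also rechecks the total error after insertion)."""
    return point_height(px, py, a, b) <= height_threshold

# Segment fitted to the line y = x: (2, 2.1) is close, (2, 5) is far.
print(belongs(2, 2.1, 1.0, 0.0, 0.5))  # True
print(belongs(2, 5.0, 1.0, 0.0, 0.5))  # False
```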
5.1.3 Fitting Problem
Because of the errors resulting from robot motion and sensor measurements,
we need to find the candidate line segment that gives the minimum error
over its set of points. Using perpendicular regression, which minimizes
the heights of the points over the line segment, we can find that candidate
segment. A mathematical method has been developed to implement
perpendicular regression and is explained in the next section.
5.1.4 Perpendicular Regression
We aim to minimize the error of the points that determine the line segment.
According to perpendicular regression, the error can be defined as the sum of
the points' heights (the perpendicular distance between each point and the
candidate line segment):
Figure 5.2: An example of Perpendicular fitting
ε = Σ_{i=1}^{n} h_i (5.1)

ε = Σ_{i=1}^{n} √(Δ_X² + Δ_Y²) (5.2)

and for simplicity, the error will be defined as the sum of the squares of
the heights rather than the sum of the heights:

ε = Σ_{i=1}^{n} h_i² (5.3)

ε = Σ_{i=1}^{n} (Δ_X² + Δ_Y²) (5.4)
Now we need to minimize the error ε, which is a function of the coefficients
(a, b) of the equation of the line segment (y = ax + b). We need to find the
a and b that minimize ε. By partially differentiating with respect to a and b
and equating to zero, we get:

∂ε/∂a = a Σ_{i=1}^{n} P²_{xi} − a Σ_{i=1}^{n} P²_{yi} + (a² − 1) Σ_{i=1}^{n} P_{xi}P_{yi} − b(a² − 1) Σ_{i=1}^{n} P_{xi} + 2ab Σ_{i=1}^{n} P_{yi} − nab² = 0 (5.5)

∂ε/∂b = a Σ_{i=1}^{n} P_{xi} − Σ_{i=1}^{n} P_{yi} + nb = 0 (5.6)
Chapter 5. Map Building Application 39
By solving these two equations simultaneously, we get the values of a and b
as follows1:

a = (−ψ + √(ψ² + 4φ²)) / (2φ),   b = (Σ_{i=1}^{n} P_{yi} − a Σ_{i=1}^{n} P_{xi}) / n (5.7)

where:

ψ = n(Σ_{i=1}^{n} P²_{xi} − Σ_{i=1}^{n} P²_{yi}) − (Σ_{i=1}^{n} P_{xi})² + (Σ_{i=1}^{n} P_{yi})²,
φ = n Σ_{i=1}^{n} P_{xi}P_{yi} − (Σ_{i=1}^{n} P_{xi})(Σ_{i=1}^{n} P_{yi}) (5.8)
After finding the values of a and b, we can decide the start and end points of
the line segment according to the list of its points. Now each line segment is
determined by its start and end points, and the map can ignore the large
number of data points.

Figure 5.3: An example of the corners problem in the points locality method
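Equations 5.7 and 5.8 translate directly into code. A minimal sketch, assuming φ is non-zero (i.e. the points are not in a degenerate axis-aligned configuration):

```python
import math

def perpendicular_fit(points):
    """Fit y = a*x + b by perpendicular (total least squares) regression
    using the closed form of Equations 5.7 and 5.8.
    Assumes phi != 0 (no perfectly horizontal/vertical point set)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    psi = n * (sxx - syy) - sx * sx + sy * sy
    phi = n * sxy - sx * sy
    a = (-psi + math.sqrt(psi * psi + 4 * phi * phi)) / (2 * phi)
    b = (sy - a * sx) / n
    return a, b

# Noise-free points on y = 2x + 1 are recovered exactly.
a, b = perpendicular_fit([(0, 1), (1, 3), (2, 5)])
print(round(a, 6), round(b, 6))  # 2.0 1.0
```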
Two problems appear when using this method. The first is curves, which
cannot be extracted by fitting line segments to the data points; a curve will
appear as a number of non-continuous line segments, as shown in Figure 5.4.
The second is the corner problem, which leads to undesired results, as shown
in Figure 5.3, because of the way the neighbourhood of points belonging to
one line segment is determined, as described in Section 5.1.2.
1For more information about the mathematical proof, see appendix D

Figure 5.4: An example of the curves problem in the points locality method
5.2 Method 2: Occupancy Grid
The second map-building method is called Occupancy Grid. This method
can overcome the problem of uncertainty about distance measured by the
ultrasonic sensor. This method uses a probabilistic approach to build the
map. The map is implemented as a two dimensional grid in which each cell
(x, y) has a value that represents the probability of occupancy of this cell, i.e.
the probability that this cell contains an object. This value is updated upon
each sensor reading. The final value of the cell is the value used to draw the
map.
5.2.1 Measurement Uncertainty
Although the ultrasonic sensor can measure distance with an acceptable
error, that alone is not enough to determine an obstacle's position. As shown
in Figure 5.5, the ultrasonic sensor only gives an arc on which the detected
obstacle is located. We could not use the ultrasonic measurements directly to
build the map by assuming the obstacle lies on the line of sight of the sensor,
especially for readings taken at a large distance, as the resulting map would
be too erroneous.
Figure 5.5: Ultrasonic Sensor Range Regions
To overcome the measurement uncertainty problem we used a probabilistic
method. As shown in Figure 5.5, when the sensor detects an object, region A
contains the cells that may be occupied by the object, while region B contains
the cells that are most probably empty.
We have to raise the probability of occupancy of all cells in region A and
decrease the probability of occupancy of all cells in region B. The problem
now is to determine the probability of each cell. This probability can be
estimated in different ways; we used a mathematical method to estimate the
occupancy probability of each cell.
For region A, the probability of occupancy of each cell given this sensor
reading is inversely proportional to both the distance and the angle between
the cell and the sensor. On the contrary, for every cell in region B the
probability of emptiness given this sensor reading is inversely proportional to
both the distance and the angle between the cell and the sensor. We used
these factors to estimate the probability of each cell given this sensor reading
as follows.

Figure 5.6: Calculating Probability From Distance and Angle Between Sensor
and Cell

As shown in Figure 5.6, let D be the maximum distance the sensor can
measure, β be the maximum cone angle, and d and α be the distance and
angle, respectively, of the current cell to be updated.
In region A:

Pr(Occ) = ( (c1·D − d)/(c1·D) + (c2·β − α)/(c2·β) ) / 2 (5.9)

In region B:

Pr(¬Occ) = ( (c1·D − d)/(c1·D) + (c2·β − α)/(c2·β) ) / 2 (5.10)

where Pr(Occ) is the probability that the cell is occupied and Pr(¬Occ) is the
probability that the cell is free. c1 and c2 are constants that depend on the
cell size: the smaller the cell size, the larger the constants. The minimum
value of these constants is 2, to ensure that the probability stays greater than
0.5; otherwise a sensor reading could have a negative effect on the total
probability.
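Equations 5.9 and 5.10 share the same right-hand side, so a single helper can compute both the occupancy value for region A and the emptiness value for region B; the numbers in the example are arbitrary:

```python
def cell_probability(d, alpha, D, beta, c1=2.0, c2=2.0):
    """Probability assigned to a cell at distance d and angle alpha from
    the sensor (Equations 5.9/5.10): occupancy for cells in region A,
    emptiness for cells in region B.  c1, c2 >= 2 keep the value > 0.5
    for any cell inside the sensor cone."""
    return ((c1 * D - d) / (c1 * D) + (c2 * beta - alpha) / (c2 * beta)) / 2.0

# A cell on the sensor axis (alpha = 0) at half the maximum range:
p = cell_probability(d=1.5, alpha=0.0, D=3.0, beta=20.0)
print(p)  # 0.875
```

At the cone's far edge (d = D, α = β) the value bottoms out at exactly 0.5, so such a reading leaves the cell's stored probability unchanged.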
5.2.2 Probability updating over time
Till now, we have just estimated the probability of occupancy of each cell given
one sensor reading. To get a good estimate of the occupancy probability,
every cell should be sensed more than once. The problem is how to update
the cell occupancy probability upon each new sensor reading. Let the
probability of occupancy of cell (x, y) given the τ-th sensor reading Sτ be
Pr(Occx,y|Sτ). The final stored probability after T sensor readings is
Pr(Occx,y|S1, S2, ..., ST). Using Bayes' law and the rules of conditional
probability, the following formula can be proved2:

Pr(Occx,y|S1, ..., ST) = 1 − [ 1 + (Pr(Occx,y|S1) / (1 − Pr(Occx,y|S1))) · Π_{τ=2}^{T} ( (Pr(Occx,y|Sτ) / (1 − Pr(Occx,y|Sτ))) · ((1 − Pr(Occx,y)) / Pr(Occx,y)) ) ]^{−1} (5.11)

The previous formula is suitable for cells in region A. For region B, the
suitable value is the emptiness probability Pr(¬Occx,y|S1, S2, ..., ST), which
can be calculated similarly using the formula:

Pr(¬Occx,y|S1, ..., ST) = 1 − [ 1 + (Pr(¬Occx,y|S1) / (1 − Pr(¬Occx,y|S1))) · Π_{τ=2}^{T} ( (Pr(¬Occx,y|Sτ) / (1 − Pr(¬Occx,y|Sτ))) · ((1 − Pr(¬Occx,y)) / Pr(¬Occx,y)) ) ]^{−1} (5.12)

In the real implementation, the only information needed for each cell is
Pr(Occx,y|S1, ..., ST), updated when the cell lies in region A, and
Pr(¬Occx,y|S1, ..., ST), updated when the cell lies in region B. When drawing
the map, the maximum of these two values is chosen.
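Because the formulas are products of odds ratios, they can be applied one reading at a time. A minimal sketch (with the prior at 0.5, the prior-odds factor equals 1, so the same function serves Equations 5.11 and 5.12):

```python
def update(stored, reading, prior=0.5):
    """Fold one sensor reading into the stored probability using the
    odds form of Equations 5.11/5.12; works for Pr(Occ) and Pr(notOcc)
    alike.  `stored` starts at `prior` before the first reading."""
    odds = (stored / (1.0 - stored)) * (reading / (1.0 - reading)) \
           * ((1.0 - prior) / prior)
    return 1.0 - 1.0 / (1.0 + odds)

p = 0.5                      # unknown cell
for r in [0.8, 0.8, 0.8]:    # three readings that suggest "occupied"
    p = update(p, r)
print(round(p, 3))  # 0.985
```

Repeated agreeing readings drive the probability toward 1, while a contradicting reading (below 0.5) pulls it back, which is exactly the noise-averaging behaviour the method relies on.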
5.2.3 Method Implementation
• Initialize every cell with an initial probability of 0.5, because we
have no information about the map. With this initialization, the map is
totally unknown and the entropy is at its maximum.
• By substituting into Equations 5.9 and 5.10, get the probability of occupancy
given the current sensor reading for all cells in regions A and B.
• Use the recursive formulae 5.11 and 5.12 to update the two values of
each cell, Pr(Occx,y|S1, S2, ..., ST) and Pr(¬Occx,y|S1, S2, ..., ST).
• After finishing navigation, and for each cell, select the maximum of
Pr(Occx,y|S1, S2, ..., ST) and Pr(¬Occx,y|S1, S2, ..., ST).
• Using the selected values, fill the grid points with a grayscale color
corresponding to the probability.
2See Appendix E for the complete proof
5.2.4 Occupancy-Grid Advantages
• It is independent of object shape; it can draw both curves and
straight lines with the same accuracy.
• It is applicable whether a map already exists or not; in other words, we
can use this method to update an existing map if the environment is
dynamic.
• It treats the problem of uncertainty with an acceptable solution.
• It can be used, with some modifications, to build 3D maps. In this
case we assume that the occupancy probability of a cell is directly
proportional to its height; this assumption could give good results, as
the ultrasonic sensor cone is 3D and its readings depend on the height
of the object.
5.3 Navigation
A common technique for exploration strategies is to extract frontiers between
known and unknown areas and to visit the nearest unexplored place. Another,
simpler technique is the wall follower. In this technique the robot is assumed
to begin moving at a point with the wall to its right. The technique guides
the robot to follow the wall such that the wall always remains to its right;
the robot keeps moving until it reaches its starting location again. This
technique may not cover the whole area if some locations are far from any
wall, but for simplicity the wall follower has been chosen as the navigation
technique. The frontier-based approach is left as future work.
Below is pseudo code for the wall follower technique:
WallFollow
For i = 1 to number_of_steps
    rotate sensor to forward direction and read it (Forward Read)
    rotate sensor to right direction and read it (Right Read)
    if Forward Read < forward_threshold then
        rotate left 90 degrees
    else if Right Read > right_threshold then
        rotate right 90 degrees
    end if
    move one step
Loop
RETURN
This code snippet checks for two cases. The first is when the forward sensor
reading falls below a certain threshold, meaning there is a wall in front of the
robot. The second is when the right sensor reading rises above a certain
threshold, meaning the wall to the right of the robot has just ended.
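The decision logic of the loop body can be isolated into a single function; the threshold values below are arbitrary examples, not the project's tuned constants:

```python
def wall_follow_action(forward_read, right_read,
                       forward_threshold=20, right_threshold=40):
    """One decision step of the wall follower: turn left when a wall is
    ahead, turn right when the wall on the right has ended, otherwise
    keep moving along the wall."""
    if forward_read < forward_threshold:
        return "rotate_left_90"    # wall in front of the robot
    if right_read > right_threshold:
        return "rotate_right_90"   # the wall to the right just ended
    return "move_one_step"

print(wall_follow_action(15, 30))   # rotate_left_90
print(wall_follow_action(100, 90))  # rotate_right_90
print(wall_follow_action(100, 30))  # move_one_step
```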
5.4 Simulation Results
Simulation is an important step before real implementation, to validate and
verify the mapping methods. Simulation ensures that the methods are valid and
well implemented, apart from robot hardware problems. The next step is to
implement the validated and verified mapping methods in a real application
with the robot itself; the real experiments are described in Chapter 6.
The MobotSim free-trial simulator has been used to simulate robot motion
and sensors. The following figures show the simulation results.
Figure 5.7 shows the result of simulating the occupancy grid method, and
Figure 5.8 shows the result of simulating the points locality method. Note that
in the occupancy grid there is no constraint on the shape of the border; it may
consist of either lines or curves. The points locality method will not give the
same result if the border contains curves. In general, the simulation results
are encouraging.
Figure 5.7: Simulation Result for Occupancy Grid Method
Figure 5.8: Simulation Result for Points Locality Method
Chapter 6
Experiments and Results
This chapter illustrates some experiments in a prepared environment for
small-scale indoor areas. These experiments target examining the system's
behavior, determining its points of weakness and discovering any
recommended refinements. Such experiments were performed using the following