Navigation System Design

Eduardo Nebot
Centre of Excellence for Autonomous Systems
The University of Sydney, NSW 2006, Australia
[email protected]
http://acfr.usyd.edu.au/homepages/academic/enebot/

May 5, 2005
Version 1.1

Page 2: Navigation

Navigation System Design (KC-4) i

Contents

1 Introduction to Navigation Systems
  1.1 An Historical Perspective
  1.2 A Modern Perspective
    1.2.1 The Kalman Filter
    1.2.2 Other Issues
  1.3 Course Outline

2 Introduction to Vehicle Modelling
  2.1 Kinematics
    2.1.1 Basic Kinematic Model
  2.2 Kinematic Model of the Utility Car (UTE)

3 Beacon Based Navigation
  3.1 Linear Kalman Filter Estimator
    3.1.1 Prediction Stage
    3.1.2 Update Stage
  3.2 The Extended Kalman Filter
  3.3 The Information Filter
  3.4 The Extended Information Filter
  3.5 Sensors Used for Beacon Based Navigation Systems
  3.6 Bearing Only Navigation Systems
    3.6.1 State Transition (Prediction) Equations
    3.6.2 Sensor Prediction
    3.6.3 Update Equations
  3.7 Range-Bearing Navigation Systems
  3.8 Data Association
  3.9 Filter Implementation

4 The Global Positioning System (GPS)
  4.1 GPS observables
  4.2 Position evaluation
  4.3 Most Common GPS Errors
  4.4 Fundamentals of DGPS (Differential GPS)
  4.5 Carrier phase GPS
  4.6 Coordinate Transformation

5 Inertial Sensors
  5.1 Fundamental Principles of Accelerometers and Gyroscopes
    5.1.1 Accelerometers
    5.1.2 Gyroscopes
    5.1.3 Vibratory gyroscopes
    5.1.4 Fiber Optic Gyros
  5.2 Sensor Errors
    5.2.1 Evaluation of the Error Components
    5.2.2 Faults associated with Inertial Sensors
  5.3 Application of Inertial Sensors
  5.4 Coordinate Frames
  5.5 Inertial Navigation Equations
    5.5.1 The Coriolis Theorem
    5.5.2 Navigation in large areas
    5.5.3 Navigation in local areas, Earth Frame
    5.5.4 Schuler Damping
  5.6 Attitude Algorithms
    5.6.1 Euler Representation
    5.6.2 Direction Cosine Matrix (DCM) Representation
    5.6.3 Quaternion Representation
    5.6.4 Discussion of Algorithms
  5.7 Attitude evaluation error sources
    5.7.1 Errors in attitude computation
    5.7.2 Vibration
    5.7.3 Minimising the Attitude Errors
  5.8 Error Models
  5.9 Calibration and Alignment of an Inertial Measurement Unit
    5.9.1 Alignment Techniques
    5.9.2 Alignment for Low Cost Inertial Units
    5.9.3 Calibration
  5.10 Position, Velocity and Attitude Algorithms

6 GPS/INS Integration
  6.1 Navigation architectures for Aided Inertial Navigation Systems
  6.2 Linear, Direct Feedback Implementation
  6.3 Aiding in Direct Feedback Configurations
    6.3.1 Real Time Implementation Issues
  6.4 Aiding Using a Vehicle Model
    6.4.1 General Three-Dimensional Motion
    6.4.2 Motion of a Vehicle on a Surface
  6.5 Estimation of the Vehicle State Subject to Constraints
    6.5.1 Observability of the States
    6.5.2 Multiple Aiding of an Inertial System
  6.6 Tightly Coupled GPS/INS
    6.6.1 Advantages and Disadvantages of the Tightly and Loosely Coupled Configurations

7 Simultaneous Localisation and Mapping (SLAM)
  7.1 Fundamentals of SLAM
    7.1.1 Process Model
    7.1.2 The Observation Model
  7.2 The Estimation Process
    7.2.1 Example
  7.3 Fundamental results in SLAM
  7.4 Non-linear Models
  7.5 Optimization of SLAM
    7.5.1 Standard Algorithm Optimization
    7.5.2 Compressed Filter
    7.5.3 Sub-Optimal SLAM


1 Introduction to Navigation Systems

Navigation may be defined as the process of determining vehicle position (also known as Localisation). This is distinct from Guidance (or Control), which is the process of controlling a vehicle to achieve a desired trajectory.

An autonomous vehicular system generally must include these two basic competencies in order to perform any useful task.

In these notes, the problem of navigation is discussed in detail. Probabilistic methods are used in order to perform accurate, predictable localisation.

1.1 An Historical Perspective

The navigation information that is generally needed is the orientation, speed and position of the vehicle. Modern navigation techniques were introduced several hundred years ago to aid vessels crossing the ocean. The first navigation techniques were used to estimate the position of a ship through dead reckoning, using observations of the ship's speed and heading. With this information, the trajectory of the ship could be predicted, albeit with errors that accumulated over time. The heading of the ship was determined by observing the position of the sun or other stars. The velocity was found with a log, compensating for the effects of local currents.

Absolute information was used to provide a position fix, to compensate for the errors accumulated through dead reckoning. These fixes were obtained when well known natural or artificial landmarks were recognised.

The Mediterranean was the proving ground for these navigational techniques. It was an appropriate environment, as the water currents are light in most places and many natural landmarks can be seen along most possible journeys.

A new navigational problem arose amongst the explorers of the fifteenth century. Even with the invention of the compass, dead reckoning was only useful for short periods of time due to difficulties in obtaining accurate measurements of velocity. The difficulties in compensating for local currents made dead reckoning only useful for periods of a few hours. In the open sea, natural landmarks are scarcely available, making an accurate position update impossible.

It was clear at that time that new methods, or new sensors, were needed to obtain absolute position information. Techniques to determine Latitude were developed in the early 1550's by the Portuguese. Latitude information could be found with relative accuracy (30 miles) by observation of the Pole star. The determination of Longitude is a much more complicated problem and, in fact, took another 300 years to be solved.


This problem was of fundamental economic importance, since many ships were lost at sea due to total uncertainty in Longitude. In 1714, England's parliament offered a significant prize to anyone who could formulate a reliable method to obtain absolute position information in the open sea.

Two approaches were mainly investigated at this time. One method was based on accurate prediction and observation of the moon. Difficulties in predicting the lunar orbit with enough precision, and the accuracy required of the instrumentation, made this approach almost impossible to implement on standard ships at that time.

The other important approach was based on knowing the time with enough accuracy to evaluate the Longitude. By 1770 both techniques were mature enough and were used in many journeys across the ocean. By the end of the century, navigation in the open sea became predictable and safe, making possible the growth of businesses involving commercial transactions from all over the world. In fact, Captain Cook tested one of Harrison's chronometers (the Mark IV) on his last journey.

1.2 A Modern Perspective

The previous section introduced the essential elements of navigation: Prediction and Correction.

Prediction can be considered to be the use of a model of some description to provide dead reckoning information. Dead reckoning has the property of accumulating position error with time.

Correction is the process whereby the observation of landmarks (either natural or artificial) can reduce the location uncertainty inherent in dead reckoning.

It may be argued that, with the advent of modern sensors such as the Global Positioning System (GPS), dead reckoning is no longer a necessary part of navigation. This assertion may be readily refuted by the following statement: there is no such thing as a perfect sensor. All sensors have some measure of error or uncertainty present within every measurement. Similarly, if it were possible to perfectly model vehicle motion, external sensors would not be needed.

Therefore it is essential to understand not only the sensors used for navigation, but also the model used for prediction, as they both contribute to the fidelity of the position solution.

As both the prediction and correction steps contain uncertainty, it is useful to pose navigation as an Estimation problem. If the error in prediction and the error in correction can be modelled as probability distributions (they can, as will be seen in later sections), then the Kalman filter can be used to fuse all available information into a common estimate that may then be used for guidance.


1.2.1 The Kalman Filter

A consistent methodology for estimating position from navigation sensors is through the use of Kalman filtering and, for nonlinear systems, through the use of the extended Kalman filter.

The Kalman filter is a linear statistical algorithm used to recursively estimate the states of interest. The states of interest will usually consist of the vehicle pose and other relevant vehicle parameters. In map building, the state vector can be augmented with feature positions, so that they too may be estimated.

To aid in the estimation of the states, the Kalman filter requires two mathematical models: the process model and the observation model. The process model describes how the states evolve over time, together with an estimate of the errors introduced by the system. The observation model explicitly describes the information supplied by a sensor as a function of the state, together with a model of the measurement noise. These models correspond to prediction and correction respectively.

For a linear system subject to Gaussian, uncorrelated, zero mean measurement and process noises, the Kalman filter is the optimal minimum mean squared error estimator.

An additional benefit of using the Kalman filter estimator is that it keeps track of the uncertainties present in the system via the computation of a covariance matrix. This is important in many applications where it is desired to know how well (or how poorly) a system is performing.

The Kalman filter has been used successfully in many multi-sensor AGV applications, including transport vehicles [9], road vehicles [13, 15] and indoor mobile robotics [5].

1.2.2 Other Issues

There is considerable research into different types of dead reckoning and external sensors that can work reliably in different environments and weather conditions. Different sensor suites are usually required for each application, such as navigation in underground mining, underwater vehicles, ships, and aerospace. Each system requires a particular combination of dead reckoning and absolute sensors.

In many of these applications, reliability and integrity are very important issues. Such is the case in the autonomous operation of very large machines. In these applications, the navigation system must be designed with enough redundancy to be able to detect any possible fault that may occur. This usually requires that multiple redundant sensors are used, each based on different physical principles to avoid common failure modes.


1.3 Course Outline

This course presents the tools and techniques necessary for designing navigation systems based on natural and artificial landmarks, working in structured and unstructured environments.

Part 1 presents the introductory navigation topics of vehicle modelling and beacon (known feature) based navigation. Laboratory: localisation assignment with outdoor experimental data.

Part 2 presents an introduction to inertial sensors and inertial navigation. Laboratory: calibration, alignment and integration of data obtained with a full six degree of freedom inertial measurement unit (IMU).

Part 3 presents the fundamentals of the Global Positioning System (GPS) and GPS/INS data fusion algorithms. Laboratory: integration of GPS/INS data.

Part 4 presents the fundamentals of Simultaneous Localisation and Mapping (SLAM), together with optimal and suboptimal algorithms to reduce the computational and memory requirements of SLAM. Laboratory: SLAM assignment with outdoor experimental data.


2 Introduction to Vehicle Modelling

This section introduces the topic of vehicle modelling. The vehicle model can be a very important component of a navigation system; however, the issue of vehicle modelling for navigation is largely neglected in the literature. One of the most important issues with modelling is knowing the limitations of the model selected. For example, in the case of a pure kinematic model it is important to know the operational range where the model is valid. A simple kinematic model can be very accurate in predicting the trajectory of a vehicle moving in a straight line, but can degrade significantly when turning or under different load conditions due to tyre deformation. There are different types of models that can be used for a particular application, such as kinematic, dynamic or lumped parameter models. In this course, basic kinematic Ackerman steered vehicle models will be presented and used as a running example for the different navigation algorithms introduced. A much more detailed reference on the various aspects of vehicle modelling as they relate to navigation can be found in [14].

2.1 Kinematics

Perhaps the simplest method of vehicle modelling is simply to exploit the vehicle's kinematic constraints. These constraints are known as the rigid body and rolling motion constraints. The rigid body constraint makes the assumption that the vehicle frame is rigid, whereas the rolling motion constraint assumes that all points on the vehicle rotate about the Instantaneous Centre of Rotation with the same angular velocity.

It is further assumed that there is no slip between the vehicle tyres and the ground, and that vehicle motion may be adequately represented by a two-dimensional model whose motion is restricted to a plane. The constraints mentioned above have the additional effect of reducing the total number of degrees of freedom for an Ackerman steered vehicle from a total of six (the speeds of each wheel plus the steer angle of each of the front wheels) to two.

It is often convenient for modelling purposes to replace the four wheels of an Ackerman steered vehicle with two 'virtual' wheels located along the vehicle centerline, as can be seen in Figure 1. These virtual wheels encapsulate the two degrees of freedom we are left with once the rigid body and rolling motion constraints are applied.

2.1.1 Basic Kinematic Model

Consider the vehicle model shown in Figure 2. It can be shown that the continuous time form of the vehicle model (with respect to the centre of the front wheel) can be derived as follows:


Figure 1: Ackerman Steered Vehicle Geometry

ẋ(t) = V(t) cos(φ(t) + γ(t))
ẏ(t) = V(t) sin(φ(t) + γ(t))
φ̇(t) = V(t) sin(γ(t)) / B        (1)

where x(t) and y(t) denote the position of the vehicle, the angle φ(t) is the orientation of the vehicle with respect to the x axis, and V(t) represents the linear velocity of the front wheel. The angle γ is defined as the steer angle of the vehicle, and B is the base length of the vehicle.

These equations are reported widely in the literature [6, 8, 13, 19].

Measurements are made of the wheel’s angular velocity ω and steer angle γ.

It should be noted that for nonlinear models such as this, a closed form solution for the discrete time vehicle model cannot in general be found by integrating Equation 1 between the time instants t(k−1) and t(k), as is possible in the linear case. For nonlinear models, an Euler approximation of the integral can be used, assuming the control inputs are approximately constant over this interval. This approximation yields a discrete time vehicle model of the form

x(k) = x(k − 1) + ∆T V(k − 1) cos(φ(k − 1) + γ(k − 1))
y(k) = y(k − 1) + ∆T V(k − 1) sin(φ(k − 1) + γ(k − 1))
φ(k) = φ(k − 1) + ∆T V(k − 1) sin(γ(k − 1)) / B        (2)

where ∆T is defined as the sample time from t(k−1) to t(k).

This discrete time vehicle model is now implementable within a discrete time Extended Kalman Filter.
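As an illustration, the discrete model of Equation 2 is only a few lines of code. The sketch below is ours, not from the notes (the function and variable names are illustrative), and assumes SI units throughout:

```python
import math

def predict_pose(x, y, phi, V, gamma, dT, B):
    """One Euler step of the discrete-time kinematic model (Equation 2).

    (x, y) is the position of the centre of the front wheel, phi the heading,
    V the front-wheel speed, gamma the steer angle, dT the sample time and
    B the base length of the vehicle."""
    x_new = x + dT * V * math.cos(phi + gamma)
    y_new = y + dT * V * math.sin(phi + gamma)
    phi_new = phi + dT * V * math.sin(gamma) / B
    return x_new, y_new, phi_new

# Drive straight ahead for 10 steps of 0.1 s at 2 m/s
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = predict_pose(*pose, V=2.0, gamma=0.0, dT=0.1, B=2.5)
print(pose)  # x ≈ 2.0, y = 0.0, phi = 0.0
```

Repeated calls of this prediction accumulate error, which is exactly the dead reckoning behaviour discussed in Section 1.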

2.2 Kinematic Model of the Utility Car (UTE)

This section presents a more realistic model of the standard outdoor vehicle shown in Figure 3. In general, the models that predict the trajectory of the vehicle and the models that relate the observations to the states are non-linear. The navigation algorithms to be used require the linearisation of these models; in this case, the Jacobians of the process and observation models are needed to propagate the uncertainties.

Figure 3: Utility car used for the experiments. The vehicle is equipped with a Sick laser range and bearing sensor, a linear variable differential transformer sensor for the steering, and back wheel velocity encoders.

Assume a vehicle equipped with dead reckoning capabilities and an external sensor capable of measuring the relative distance between the vehicle and the environment, as shown in Figure 4. The steering control α and the speed υc are used with the kinematic model to predict the position of the vehicle. In this case the external sensor returns range and bearing information to the different features Bi (i = 1..n). This information is obtained with respect to the vehicle coordinates (xl, yl), that is z(k) = (r, β), where r is the distance from the beacon to the range sensor and β is the sensor bearing measured with respect to the vehicle coordinate frame.

Considering that the vehicle is controlled through a demanded velocity υc and steering angle α, the process model that predicts the trajectory of the centre of the back axle is given by

[ ẋc ]   [ vc · cos(φ)       ]
[ ẏc ] = [ vc · sin(φ)       ] + γ        (3)
[ φ̇c ]   [ (vc / L) · tan(α) ]

where L is the distance between the wheel axles, as shown in Figure 5. To simplify the equations in the update stage, the kinematic model of the vehicle is designed to represent the trajectory of the centre of the laser. Based on Figures 4 and 5, the position of the laser can be obtained by translating the centre of the back axle:

PL = PC + a · T⃗φ + b · T⃗φ+π/2        (4)

PL and PC are the positions of the laser and the centre of the back axle in global coordinates respectively. The transformation is defined by the orientation angle, according to the following vectorial expression:


Figure 4: Vehicle Coordinate System

T⃗φ = (cos(φ), sin(φ))        (5)

The scalar representation is

xL = xc + a · cos (φ) + b · cos (φ + π/2)

yL = yc + a · sin (φ) + b · sin (φ + π/2)

Finally the full state representation can be written

[ ẋL ]   [ vc · cos(φ) − (vc / L) · (a · sin(φ) + b · cos(φ)) · tan(α) ]
[ ẏL ] = [ vc · sin(φ) + (vc / L) · (a · cos(φ) − b · sin(φ)) · tan(α) ] + γ        (6)
[ φ̇L ]   [ (vc / L) · tan(α)                                           ]

The velocity υc is measured with an encoder located on the back left wheel. This velocity is translated to the centre of the axle with the following equation:

vc = νe / (1 − tan(α) · H/L)        (7)

where for this car H = 0.75 m, L = 2.83 m, b = 0.5 m and a = L + 0.95 m.

Figure 5: Kinematics parameters

Finally, the discrete model in global coordinates can be approximated with the following set of equations:

[ x(k) ]   [ x(k−1) + ∆T · ( vc(k−1) · cos(φ(k−1)) − (vc(k−1) / L) · (a · sin(φ(k−1)) + b · cos(φ(k−1))) · tan(α(k−1)) ) ]
[ y(k) ] = [ y(k−1) + ∆T · ( vc(k−1) · sin(φ(k−1)) + (vc(k−1) / L) · (a · cos(φ(k−1)) − b · sin(φ(k−1))) · tan(α(k−1)) ) ] + γ        (8)
[ φ(k) ]   [ φ(k−1) + ∆T · (vc(k−1) / L) · tan(α(k−1))                                                                   ]

where ∆T is the sampling time, which in this case is not constant.
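Equations 7 and 8 can be combined into a single prediction step. The following sketch uses the vehicle parameters quoted above; the function name, argument names and the zero-steer usage example are ours, not from the notes:

```python
import math

# Vehicle parameters quoted in the text (metres)
H, L = 0.75, 2.83
b, a = 0.5, L + 0.95

def predict_ute(xL, yL, phi, ve, alpha, dT):
    """One step of the discrete UTE model (Equations 7 and 8), written for
    the centre of the laser. ve is the encoder speed measured at the back
    left wheel and alpha the steering angle."""
    vc = ve / (1.0 - math.tan(alpha) * H / L)   # Equation 7: speed at axle centre
    t = math.tan(alpha)
    xL_new = xL + dT * (vc * math.cos(phi)
                        - (vc / L) * (a * math.sin(phi) + b * math.cos(phi)) * t)
    yL_new = yL + dT * (vc * math.sin(phi)
                        + (vc / L) * (a * math.cos(phi) - b * math.sin(phi)) * t)
    phi_new = phi + dT * (vc / L) * t
    return xL_new, yL_new, phi_new

# With zero steer angle the laser centre simply moves straight ahead
print(predict_ute(0.0, 0.0, 0.0, ve=1.0, alpha=0.0, dT=0.5))  # (0.5, 0.0, 0.0)
```

Note that with α = 0 the correction terms involving a and b vanish, so the laser centre and axle centre translate identically; the offset only matters while turning.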

The observation equation relating the vehicle states to the observations is

z = h(X, xi, yi) = [ z_r  z_β ]ᵀ = [ √((xi − xL)² + (yi − yL)²) ,  atan((yi − yL) / (xi − xL)) − φL + π/2 ]ᵀ + γh        (9)

where z is the observation vector, (xi, yi) are the coordinates of the landmarks, xL, yL and φL are the vehicle states defined at the external sensor location, and γh is the sensor noise.

The complete non-linear model can be expressed in general form as:

X(k + 1) = F(X(k), u(k) + γu(k)) + γf(k)
z(k) = h(X(k)) + γh(k)        (10)


The propagation of uncertainties and estimation of states will require the evaluation of the Jacobians of the process and measurement models.

The Jacobian of the Process model with respect to the state variables is:

Fx = ∂F/∂X =
[ 1  0  −∆T · ( vc · sin(φ) + (vc / L) · tan(α) · (a · cos(φ) − b · sin(φ)) ) ]
[ 0  1   ∆T · ( vc · cos(φ) − (vc / L) · tan(α) · (a · sin(φ) + b · cos(φ)) ) ]
[ 0  0  1                                                                     ]
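Sign errors are easy to make when differentiating models like Equation 8, so a finite-difference check is a useful habit. The sketch below is our own code (with the UTE parameters above as defaults) and compares the third column of Fx against a numerical derivative of the process model:

```python
import math

def f(state, vc, alpha, dT, L=2.83, a=3.78, b=0.5):
    """Discrete UTE process model of Equation 8 (illustrative names)."""
    x, y, phi = state
    t = math.tan(alpha)
    return (x + dT * (vc * math.cos(phi) - (vc / L) * (a * math.sin(phi) + b * math.cos(phi)) * t),
            y + dT * (vc * math.sin(phi) + (vc / L) * (a * math.cos(phi) - b * math.sin(phi)) * t),
            phi + dT * (vc / L) * t)

def Fx(state, vc, alpha, dT, L=2.83, a=3.78, b=0.5):
    """Analytic Jacobian of f with respect to (x, y, phi)."""
    _, _, phi = state
    t = math.tan(alpha)
    return [[1.0, 0.0, -dT * (vc * math.sin(phi) + (vc / L) * t * (a * math.cos(phi) - b * math.sin(phi)))],
            [0.0, 1.0,  dT * (vc * math.cos(phi) - (vc / L) * t * (a * math.sin(phi) + b * math.cos(phi)))],
            [0.0, 0.0, 1.0]]

# Central-difference check of the third column (derivative w.r.t. phi)
s, eps = (1.0, 2.0, 0.3), 1e-6
hi = f((s[0], s[1], s[2] + eps), 1.0, 0.1, 0.1)
lo = f((s[0], s[1], s[2] - eps), 1.0, 0.1, 0.1)
num = [(h - l) / (2 * eps) for h, l in zip(hi, lo)]
ana = [row[2] for row in Fx(s, 1.0, 0.1, 0.1)]
print(max(abs(n - m) for n, m in zip(num, ana)))  # close to zero: the columns agree
```

The same check can be applied column by column to the input Jacobians derived next.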

The evaluation of the Jacobian with respect to the inputs requires a few additional mathematical steps. In this section the full procedure is shown for two different cases [11]:

• Using the steering encoder: φ̇ = (1/L) · v1 · β, with β = tan(α)

• Using gyro information: φ̇ = φ̇z

The general kinematic model is

F(v1, φ̇) =
[ v1 · cos(φ) + φ̇ · ηx(φ) ]
[ v1 · sin(φ) + φ̇ · ηy(φ) ]
[ φ̇                       ]

v1 = v2 / (1 − β · H/L)

φ̇ = (1/L) · v1 · β    or    φ̇ = φ̇z

G(v2, β, φ̇z) = F(v1, φ̇)

with
ηx = −(a · sin(φ) + b · cos(φ))
ηy = a · cos(φ) − b · sin(φ)

The general partial derivatives with respect to the three inputs are:


∂G/∂v2 = ∂F/∂v1 · ∂v1/∂v2 + ∂F/∂φ̇ · ∂φ̇/∂v1 · ∂v1/∂v2

∂G/∂β = ∂F/∂v1 · ∂v1/∂β + ∂F/∂φ̇ · ( ∂φ̇/∂v1 · ∂v1/∂β + ∂φ̇/∂β )

∂G/∂φ̇z = ∂F/∂φ̇ · ∂φ̇/∂φ̇z

with the individual terms

∂F/∂v1 = [ cos(φ)  sin(φ)  0 ]ᵀ

∂F/∂φ̇ = [ ηx(φ)  ηy(φ)  1 ]ᵀ

∂v1/∂v2 = 1 / (1 − β · H/L) ,    ∂v1/∂β = v2 / (1 − β · H/L)² · H/L

∂φ̇/∂v1 = β/L (steering encoder)    or    ∂φ̇/∂v1 = 0 (gyro)

∂φ̇/∂β = v1/L (steering encoder)    or    ∂φ̇/∂β = 0 (gyro)

∂φ̇/∂φ̇z = 0 (steering encoder)    or    ∂φ̇/∂φ̇z = 1 (gyro)

If the steering angle is used to predict the orientation (as its derivative), then the gradient is


∂G/∂(v2, β, φ̇z) = [ ∂G/∂v2   ∂G/∂β   ∂G/∂φ̇z ]

∂G/∂v2 = [ cos(φ)  sin(φ)  0 ]ᵀ · 1 / (1 − β · H/L) + [ ηx(φ)  ηy(φ)  1 ]ᵀ · (β/L) · 1 / (1 − β · H/L)

∂G/∂β = [ cos(φ)  sin(φ)  0 ]ᵀ · v2 / (1 − β · H/L)² · H/L + [ ηx(φ)  ηy(φ)  1 ]ᵀ · ( (β/L) · v2 / (1 − β · H/L)² · H/L + v1/L )

∂G/∂φ̇z = [ ηx(φ)  ηy(φ)  1 ]ᵀ · 0 = 0

If a gyro is used to predict the change in orientation, then the following gradient is used:

∂G/∂(v2, β, φ̇z) = [ ∂G/∂v2   ∂G/∂β   ∂G/∂φ̇z ]

∂G/∂v2 = [ cos(φ)  sin(φ)  0 ]ᵀ · 1 / (1 − β · H/L)

∂G/∂β = [ cos(φ)  sin(φ)  0 ]ᵀ · v2 / (1 − β · H/L)² · H/L

∂G/∂φ̇z = [ ηx(φ)  ηy(φ)  1 ]ᵀ · 1 = [ ηx(φ)  ηy(φ)  1 ]ᵀ

Finally, the Jacobian of the observation model with respect to the state variables, evaluated at (xL, yL, φL), is


∂H/∂(xL, yL, φL) = ∂(r, β)/∂(xL, yL, φL) =
[ (xL − xi)/√d    (yL − yi)/√d    0  ]
[ −(yL − yi)/d    (xL − xi)/d    −1  ]

where d = (xL − xi)² + (yL − yi)²

These models will be used with a standard EKF algorithm to navigate, and to build and maintain a navigation map of the environment.
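As a sketch of how Equation 9 and its Jacobian go together in code (our own function names; atan2 is used in place of atan so the bearing lands in the correct quadrant):

```python
import math

def observe(xL, yL, phiL, xi, yi):
    """Noise-free range-bearing observation model of Equation 9."""
    dx, dy = xi - xL, yi - yL
    r = math.hypot(dx, dy)                          # sqrt(dx^2 + dy^2)
    beta = math.atan2(dy, dx) - phiL + math.pi / 2  # bearing in vehicle frame
    return r, beta

def observe_jacobian(xL, yL, phiL, xi, yi):
    """Jacobian of (r, beta) with respect to (xL, yL, phiL)."""
    dx, dy = xi - xL, yi - yL
    d = dx * dx + dy * dy
    rd = math.sqrt(d)
    return [[-dx / rd, -dy / rd,  0.0],
            [ dy / d,  -dx / d,  -1.0]]

# Landmark 5 m straight ahead of a vehicle at the origin with zero heading
print(observe(0.0, 0.0, 0.0, 5.0, 0.0))  # (5.0, 1.5707963267948966)
```

The π/2 offset in the bearing follows Equation 9, so a landmark directly ahead is reported at β = π/2 in this convention.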


3 Beacon Based Navigation

In this section, a navigation system employing artificial beacons (or targets) is developed. The kinematic vehicle model developed in the previous section will be used, and sensor models will be described.

The generally accepted definition of a 'beacon' is a target whose position (and perhaps other parameters) is known to the navigation system. By observing a beacon whose absolute position is known, the position of the vehicle with respect to the beacon may be easily computed. Therefore, beacon based navigation systems are perhaps the easiest to implement.

The drawback, of course, is that infrastructure is usually needed to implement a navigation system of this type. A form of navigation that overcomes this limitation will be discussed in Section 7.

3.1 Linear Kalman Filter Estimator

This section presents a brief introduction to Kalman filter techniques, to familiarise the reader with the notation and terms that will be used throughout the rest of the course. The Kalman filter is a linear estimator that fuses information obtained from a model, observations and prior knowledge in an optimal manner. We can start by assuming that the system we are interested in can be modelled by:

x(k) = F(k) x(k − 1) + B(k) u(k) + G(k) v(k)        (11)

where x(k − 1) is the state of the system at time k − 1, u(k) is an input, v(k) is additive noise, B(k) and G(k) are the input and noise transition matrices, and F(k) is the transition matrix that propagates the state from time k − 1 to time k, with k the index denoting one incremental step of time. We can also assume that the observation model has the following form:

z(k) = H(k)x(k) + w(k) (12)

where z(k) is the observation at time k, H(k) is the observation matrix that relates the states to the observations, and w(k) is additive noise. It is also assumed that all noises are Gaussian, uncorrelated and zero mean.

E[v(k)] = E[w(k)] = 0

with covariances:

E[v(i) vᵀ(j)] = δij Q(i) ,    E[w(i) wᵀ(j)] = δij R(i)


The process and observation noises are also assumed uncorrelated:

E[v(i) wᵀ(j)] = 0

Finally it is assumed that we know with a given uncertainty the initial state:

x(0) = x0 , P (0) = P0

where P0 is in general a diagonal covariance matrix with an initial guess of the uncertainty on the initial value x0. With the process and observation models F, B, G, H, the covariances Q, R and P0, and the initial state x0, the following Kalman filter equations can be applied to recursively estimate the state in two stages: Prediction and Update.

3.1.1 Prediction Stage

The prediction stage is usually responsible for the high frequency information and uses dead reckoning sensor inputs to drive the process model, generating a prediction of the state x at time k given all the information up to time k − 1.

x(k/k − 1) = F(k) x(k − 1/k − 1) + B(k) u(k)        (13)

The uncertainty of this prediction is also evaluated:

P(k/k − 1) = F(k) P(k − 1/k − 1) Fᵀ(k) + G(k) Q(k) Gᵀ(k)        (14)

3.1.2 Update Stage

The filter will work in prediction mode until an observation becomes available. The state x(k) will then be updated with the observation obtained at time k. In this stage we obtain the estimate x(k/k), which is usually read as the state x at time k given all the information up to time k. The update equation for the state is:

x(k/k) = x(k/k − 1) + W(k) (z(k) − H(k) x(k/k − 1))        (15)

It can be seen that the state x at time k given all the information up to time k is obtained from the prediction x(k/k − 1) plus a correction factor formed by the product of the Kalman gain W and the difference between the observation z(k) and the predicted observation H(k) x(k/k − 1). At the same time, since an observation was incorporated, the uncertainty of the state, P(k/k), must decrease. This can be seen in the update equation


for the state covariance P :

P(k/k) = P(k/k − 1) − W(k) S(k) Wᵀ(k)        (16)

where the Kalman gain matrix W (k) and innovation covariance are given by

W(k) = P(k/k − 1) Hᵀ(k) S⁻¹(k)        (17)

S(k) = H(k)P (k/k − 1)HT (k) + R(k) (18)


The last equation represents the covariance of the innovation sequence:

ν(k) = z(k)−H(k)x(k/k − 1)

Since in most cases the true value of the state is never known, the innovation sequence gives an indication of how well the filter is working. This measure can be obtained by comparing the actual value of the innovation sequence with the expected variance predicted by S(k). This approach will be used to validate the data before incorporating the observation, and for data association purposes. More on this later in this chapter.
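The prediction and update stages above can be sketched directly in code. The following is a minimal numpy sketch of Equations 13-18; the function names kf_predict and kf_update are illustrative, not part of any particular library.

```python
import numpy as np

def kf_predict(x, P, F, B, u, G, Q):
    """Prediction stage: Equations 13 and 14."""
    x_pred = F @ x + B @ u
    P_pred = F @ P @ F.T + G @ Q @ G.T
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Update stage: Equations 15-18."""
    nu = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance (Eq. 18)
    W = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain (Eq. 17)
    x_upd = x_pred + W @ nu                # Eq. 15
    P_upd = P_pred - W @ S @ W.T           # Eq. 16
    return x_upd, P_upd, nu, S
```

Note that the update always decreases the covariance, as the subtraction in Equation 16 makes explicit.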

3.2 The Extended Kalman Filter

In most real cases the process and/or observation models do not behave linearly and hence the linear Kalman filter described above cannot be implemented. To overcome this problem, the extended Kalman filter (EKF) is used. It provides the best MMSE estimate and is in principle a linear estimator that linearises the process and observation models about the current estimated state. The non-linear discrete time system is described as

x(k) = F(x(k − 1),u(k), k) + w(k), (19)

where F(·, ·, k) is a non-linear state transition function at time k which forms the current state from the previous state and the current control input. The non-linear observation model is represented as

z(k) = H(x(k)) + v(k). (20)

Page 23: Navigation

Navigation System Design (KC-4) 18

Following the same definitions outlined in the Kalman filter, the state prediction equation is

x(k|k − 1) = F(x(k − 1|k − 1),u(k)), (21)

and the predicted covariance matrix is

P(k|k − 1) = ∇Fx(k)P(k − 1|k − 1)∇FTx (k) + Q(k). (22)

The term ∇Fx(k) is the Jacobian of the non-linear state transition function F(k) with respect to the state, evaluated at the previous estimate x(k − 1|k − 1). When an observation occurs, the state vector is updated according to

x(k|k) = x(k|k − 1) + W(k)ν(k) (23)

where ν(k) is the nonlinear innovation,

ν(k) = z(k)−H(x(k|k − 1)). (24)

The gain and innovation covariance matrices are

W(k) = P(k|k − 1)∇HTx (k)S−1(k), (25)

S(k) = ∇Hx(k)P(k|k − 1)∇HTx (k) + R(k), (26)

where the term ∇Hx(k) is the Jacobian of the observation model with respect to the state, evaluated at the predicted state x(k|k − 1). The updated covariance matrix is

P(k|k) = (I−W(k)∇Hx(k))P(k|k − 1)(I−W(k)∇Hx(k))T + W(k)R(k)WT (k) (27)

In the linear Kalman filter, if the models are time invariant, the Kalman gains can be computed off-line and used within the filter in real time. In the non-linear implementation, however, since the process and observation models depend on the current state, this benefit is not available. Furthermore, the non-linear filter must be properly initialised to ensure that the linearised models are accurate, since poor linearisation can lead to filter instabilities. Finally, the presence of the Jacobians adds a higher level of complexity and increases the computational processing required.
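Equations 23-27 can be collected into a single update routine. The sketch below assumes the caller supplies the observation function h and its Jacobian; it uses the Joseph-form covariance update of Equation 27, which preserves symmetry and positive-definiteness numerically. All names are illustrative.

```python
import numpy as np

def ekf_update_joseph(x_pred, P_pred, z, h, H_jac, R):
    """EKF update step (Equations 23-27). h(x) is the non-linear
    observation model; H_jac(x) returns its Jacobian at x."""
    nu = z - h(x_pred)                      # nonlinear innovation (Eq. 24)
    Hx = H_jac(x_pred)                      # Jacobian at the predicted state
    S = Hx @ P_pred @ Hx.T + R              # innovation covariance (Eq. 26)
    W = P_pred @ Hx.T @ np.linalg.inv(S)    # gain (Eq. 25)
    x_upd = x_pred + W @ nu                 # Eq. 23
    I = np.eye(P_pred.shape[0])
    # Joseph form of Eq. 27: numerically safer than P - W S W^T
    P_upd = (I - W @ Hx) @ P_pred @ (I - W @ Hx).T + W @ R @ W.T
    return x_upd, P_upd
```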


3.3 The Information Filter

The information filter is mathematically equivalent to the Kalman filter. That is, if both filters use the same data then identical results would be obtained (the reader is referred to [20] for a derivation of the information filter). The key components in the information filter are the information matrix Y(k|k) and the information vector y(k|k). Assuming that the noise is Gaussian, Y(k|k) is the inverse of the covariance matrix,

Y(k|k) = P−1(k|k), (28)

and hence represents the amount of information present amongst the states of interest and their correlations. The information vector represents the information content in the states themselves and can be evaluated by transforming the state values over to information space,

y(k|k) = Y(k|k)x(k|k). (29)

The predicted information vector is

y(k|k − 1) = L(k|k − 1)y(k − 1|k − 1) + B(k)u(k), (30)

where L(k|k − 1) = Y(k − 1|k − 1)F(k)Y−1(k − 1|k − 1).

The transformation matrix B(k) transforms the control input u(k) to information space. The corresponding predicted information matrix is

Y(k|k − 1) = [F(k)Y−1(k − 1|k − 1)FT (k) + Q(k)]−1. (31)

Again, these two equations are evaluated at each sample of the dead reckoning sensor. The information contribution to the states due to an observation is in the form of the information observation vector,

i(k) = HT (k)R−1(k)z(k). (32)

The amount of certainty associated with this observation is in the form of the information observation matrix,

I(k) = HT (k)R−1(k)H(k) (33)

The benefit of implementing the information filter lies in the update stage. Since the observation has been transformed over to information space, the update procedure is


simply the information contribution of this observation to the prediction,

y(k|k) = y(k|k − 1) + i(k) (34)

Y(k|k) = Y(k|k − 1) + I(k). (35)

If there is more than one sensor providing observations, the information observation vector and matrix are simply the sums of the individual observation information vectors and matrices, hence

y(k|k) = y(k|k − 1) + ∑j ij(k) (36)

Y(k|k) = Y(k|k − 1) + ∑j Ij(k) (37)

where the sum is taken over all sensors j providing information at time k. There are a number of advantages associated with the information filter:

• No innovation covariance matrix S(k) has to be computed;

• Furthermore, there is no gain matrix W(k) which needs to be computed;

• The initial information matrix can be easily initialised to zero information;

• It is computationally easier to implement an information filter in a multisensor environment, since the update is simply the summation of the individual information contributions. Furthermore, a Kalman filter cannot accommodate the situation where multiple observations must be handled by the filter at the same time, due to the correlation of the innovation amongst the observations.
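The additive, multi-sensor update of Equations 32-37 is short enough to show directly. The sketch below (illustrative names, numpy) folds any number of (z, H, R) observations into the predicted information vector and matrix by simple summation.

```python
import numpy as np

def info_update(y_pred, Y_pred, observations):
    """Information-filter update (Equations 34-37): each sensor j
    contributes i_j = H_j^T R_j^-1 z_j and I_j = H_j^T R_j^-1 H_j,
    and the update is a plain summation over sensors."""
    y, Y = y_pred.copy(), Y_pred.copy()
    for z, H, R in observations:
        Rinv = np.linalg.inv(R)
        y = y + H.T @ Rinv @ z    # information observation vector (Eq. 32/36)
        Y = Y + H.T @ Rinv @ H    # information observation matrix (Eq. 33/37)
    return y, Y
```

No gain or innovation covariance matrix is needed, which is exactly the advantage listed above.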

3.4 The Extended Information Filter

The derivation of the extended information filter (EIF) can be found in [20]. As with the EKF, the EIF estimates the states of interest given non-linear process and/or observation models, however in information space. The predicted information vector and matrix are

y(k|k − 1) = Y(k − 1|k − 1)F(x(k − 1|k − 1),u(k)), (38)

Y(k|k − 1) = [∇Fx(k)Y−1(k − 1|k − 1)∇FTx (k) + Q(k)]−1. (39)

When an observation occurs, the information contribution and its associated informationmatrix is

i(k) = ∇HTx (k)R−1(k)[ν(k) +∇Hx(k)x(k|k − 1)], (40)

I(k) = ∇HTx (k)R−1(k)∇Hx(k), (41)


where the innovation is

ν(k) = z(k)−H(x(k|k − 1)). (42)

Although the non-trivial derivation of the Jacobians still occurs in this form, the EIF has the benefit of being easily initialised; that is, one can set the initial conditions to zero information. Furthermore, there is the added benefit of decentralising the filter across multiple nodes and of incorporating multiple observations into a single filter [20].

3.5 Sensors Used for Beacon Based Navigation Systems

The sensors used in beacon based navigation systems typically fall into two distinct categories. The first, bearing only, refers to sensors that simply return the bearing and/or azimuth to a given beacon. Such laser systems are usually cheaper since they do not need to implement the range measurement circuitry. Single cameras also fall into this category, unless some form of stereo vision can also provide range information.

The second category of sensors are those that return both range and bearing to a beacon.Typically used range-bearing sensors include sonar, laser, radar, and stereo vision.

Figure 6: Bearing Laser, range and bearing radar and laser

In most navigation applications the beacons will be clearly differentiated from the background but will not report their identity. The navigation system will have a list with the locations of all the beacons and will be responsible for determining which beacon has been detected. This process is called data association, and it will be more difficult and less reliable with bearing only sensors.


3.6 Bearing Only Navigation Systems

This section presents bearing only localisation algorithms using the basic nonlinear kinematic and observation models presented in the previous chapter. The UTE model will be used in the laboratory assignments and will be referred to in this chapter when necessary. The extended Kalman filter is also introduced to deal with nonlinear models.

3.6.1 State Transition (Prediction) Equations

Consider the Ackerman steered vehicle discussed in Section 2.1. Equation 1 gives the continuous time vehicle model. For implementation in a discrete time EKF, the model must be discretised as was shown in Equation 2. This discretised vehicle model (state transition equation) is reproduced here for clarity:

x(k) = x(k − 1) + ∆T V (k − 1) cos(φ(k − 1) + γ(k − 1))

y(k) = y(k − 1) + ∆T V (k − 1) sin(φ(k − 1) + γ(k − 1))

φ(k) = φ(k − 1) + ∆T V (k − 1) sin(γ(k − 1))/B (43)

To best illustrate this example, assume that the kinematic assumptions (rigid body, rolling motion) hold true. When this is the case, the only source of process noise is injected via the control inputs: steer angle and wheel velocity. These parameters may be modelled as:

ω(t) = ω(t) + δω(t)

γ(t) = γ(t) + δγ(t)

where δω(t) and δγ(t) are Gaussian, uncorrelated, zero mean random noise processes with variances σ2ω and σ2γ respectively.

The discrete time state vector at time k + 1 can now be defined as

x(k + 1) = f(x(k),u(k), k) + v(k) (44)

= [x(k + 1), y(k + 1), φ(k + 1)]T

where u(k) = [ω(k), γ(k)]T

is the control vector at time k, v(k) is additive process noise representing the uncertaintyin slip angles and wheel radius, and f(·) is the nonlinear state transition function mapping

Page 28: Navigation

Navigation System Design (KC-4) 23

the previous state and current control inputs to the current state, here represented byEquation 43.

Assume the system has an estimate available at time k which is equal to the conditionalmean

x(k|k) = E[x(k)|Zk] (45)

The prediction x(k + 1|k) for the state at time k + 1 based on information up to time k is given by expanding Equation 44 as a Taylor series about the estimate x(k|k), eliminating second and higher order terms, and taking expectations conditioned on the first k observations, giving

x(k + 1|k) = E[x(k + 1)|Zk] = f(x(k|k),u(k), k) (46)

The vector state prediction function f(·) is defined by Equation 43 assuming zero process and control noise. The prediction of the state is therefore obtained by simply substituting the previous state and current control inputs into the state transition equation with no noise.

Define the noise source vector as

δw(k) = [δω(k), δγ(k)]T

The error between the true state and the estimated state is given by

δx(k|k) = x(k)− x(k|k) (47)

the prediction of covariance is then obtained as

P(k + 1|k) = E[δx(k|k)δx(k|k)T |Zk] (48)

= ∇fx(k)P(k|k)∇fTx (k) + ∇fw(k)Σ(k)∇fTw (k) (49)

where ∇fx(k) represents the gradient or Jacobian of f(·) evaluated at time k with respect to the states, ∇fw(k) is the Jacobian of f(·) with respect to the noise sources, and Σ(k) is the noise strength matrix given by

Σ(k) = [ σ2ω   0
         0     σ2γ ]   (50)

∇fx(k) and ∇fw(k) are given by the following,


∇fx(k) = [ 1   0   −∆T V sin(φ(k|k) + γ(k))
           0   1    ∆T V cos(φ(k|k) + γ(k))
           0   0    1 ]   (51)

and

∇fw(k) = [ ∆T cos(φ(k|k) + γ(k))    −∆T V sin(φ(k|k) + γ(k))
           ∆T sin(φ(k|k) + γ(k))     ∆T V cos(φ(k|k) + γ(k))
           ∆T sin(γ(k))/B            ∆T V cos(γ(k))/B ]   (52)

For the UTE model the reader needs to select the type of information used to determine the change of bearing, steering encoder or gyro, and then select the appropriate equation to evaluate the Jacobian.
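As a concrete illustration, Equations 43, 50, 51 and 52 can be combined into a single EKF prediction step. This is a sketch under the stated kinematic assumptions, with illustrative names; B_len stands for the wheelbase B.

```python
import numpy as np

def predict_vehicle(x, P, V, gamma, dT, B_len, sigma_v2, sigma_g2):
    """EKF prediction for the Ackerman model of Equation 43, using the
    Jacobians of Equations 51 and 52 and the noise strengths of Eq. 50."""
    px, py, phi = x
    c, s = np.cos(phi + gamma), np.sin(phi + gamma)
    # state transition (Eq. 43)
    x_pred = np.array([px + dT * V * c,
                       py + dT * V * s,
                       phi + dT * V * np.sin(gamma) / B_len])
    # Jacobian with respect to the state (Eq. 51)
    Jx = np.array([[1.0, 0.0, -dT * V * s],
                   [0.0, 1.0,  dT * V * c],
                   [0.0, 0.0,  1.0]])
    # Jacobian with respect to the noise sources (Eq. 52)
    Jw = np.array([[dT * c, -dT * V * s],
                   [dT * s,  dT * V * c],
                   [dT * np.sin(gamma) / B_len, dT * V * np.cos(gamma) / B_len]])
    Sigma = np.diag([sigma_v2, sigma_g2])          # Eq. 50
    P_pred = Jx @ P @ Jx.T + Jw @ Sigma @ Jw.T     # Eq. 49
    return x_pred, P_pred
```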

3.6.2 Sensor Prediction

Assume that the vehicle has a laser sensor attached such that the bearing to a number of fixed beacons Bi = [Xi, Yi], i = 1, . . . , n can be obtained.

This observation can be predicted by a non-linear model

z(t) = h(x(t)) + v(t) (53)

The bearing to a beacon is given by arctan((Yi − y(t))/(Xi − x(t))). However, the platform is oriented in the direction φ, so for this system the measurement equation for each beacon detected by the laser is given by the model

ziθ(t) = arctan((Yi − y(t))/(Xi − x(t))) − φ(t) + viθ(t) (54)

where the laser observation noise viθ is uncorrelated, zero mean and Gaussian with strength σ2θ, and is assumed to be identical for each beacon observed.

The predicted observation is found by taking expectations conditioned on all previous observations, truncating at first order to give

z(k + 1|k) = E[z(k + 1)|Zk] = h(x(k + 1|k)) (55)

If there is a predicted state for the vehicle, x(k + 1|k), we can therefore predict the observations that will be made at that state. From Equations 54 and 55,


we have the predicted observations as

z(k + 1|k) = ziθ(k + 1|k) = arctan((Yi − y(k + 1|k))/(Xi − x(k + 1|k))) − φ(k + 1|k) (56)

Now, the innovation (observation prediction error) covariance S(k), used in the calculation of the Kalman gains, must be computed. This is found by squaring the estimated observation error and taking expectations conditioned on the first k measurements, giving

S(k + 1) = ∇hx(k + 1)P(k + 1|k)∇hTx (k + 1) + R(k + 1) (57)

In this case, the Jacobian ∇hx(k + 1) is given by

∇hx(k + 1) = [ −(y(k + 1|k) − Yi)/d2   (x(k + 1|k) − Xi)/d2   −1 ]   (58)

where d = √((Xi − x(k + 1|k))2 + (Yi − y(k + 1|k))2) is the predicted distance between a beacon and the vehicle.

The observation variance term R(k) is given by

R(k) = [σ2θ]   (59)

3.6.3 Update Equations

The state update equations for the EKF are given by

x(k + 1|k + 1) = x(k + 1|k) + K(k + 1)[z(k + 1)− h(x(k + 1|k))] (60)

where

K(k + 1) = P(k + 1|k)∇hTx (k + 1)S(k + 1)−1 (61)

3.7 Range-Bearing Navigation Systems

The implementation of a range-bearing beacon based navigation system is very similar to the bearing only system presented in Section 3.6.

The only major difference is in the derivation of the sensor model. The sensor still observes a number of fixed beacons; however, it now also returns the range to these beacons.


The prediction equations for such a sensor can be derived as

z(k + 1|k) = [ zir(k + 1|k)
               ziθ(k + 1|k) ]

           = [ √((Xi − x(k + 1|k))2 + (Yi − y(k + 1|k))2)
               arctan((Yi − y(k + 1|k))/(Xi − x(k + 1|k))) − φ(k + 1|k) ]   (62)

where zir is the range measurement returned by the sensor when observing the i’th beacon.

In this case, the Jacobian ∇hx(k + 1) is given by

∇hx(k + 1) = [ (x(k + 1|k) − Xi)/d       (y(k + 1|k) − Yi)/d       0
               −(y(k + 1|k) − Yi)/d2     (x(k + 1|k) − Xi)/d2     −1 ]   (63)

where d = √((Xi − x(k + 1|k))2 + (Yi − y(k + 1|k))2) is (as before) the predicted distance between the i'th beacon and the vehicle.

The observation variance term R(k) is given by

R(k) = [ σ2r   0
         0     σ2θ ]   (64)

where σ2r is the variance associated with the range measurement. As before, it is assumed that the sensor is corrupted by Gaussian, zero-mean, uncorrelated measurement noise.

The state and covariance prediction and update steps are identical to those in Section 3.6.

3.8 Data Association

This is a fundamental problem in beacon navigation: we have a list of beacons with position information, but when a beacon is sensed we face the following question:

What is the identity of this beacon?

There are several ways to address this problem. The first (and simplest) is to use beacons that are not identical. A common method for achieving this in bearing only navigation is to use bar codes for uniquely identifying targets. The second method uses statistics to determine how likely a given beacon is to be the one that was sensed.

The first method makes the association very simple, but it can become limited with respect to the number of possible beacons, or expensive to use in large areas. The second method can use an unlimited number of identical passive beacons (reflective tape), but makes the data association more complex. This section will discuss basic data association algorithms based on statistical methods.


The innovation (ν) for a navigation filter is defined as the difference between the observed and predicted observations:

ν(k) = z(k)−H(k)x(k/k − 1) (65)

The innovation mean can be found as

E[ν(k)|Zk−1] = E[z(k) − z(k|k − 1)|Zk−1]
             = E[z(k)|Zk−1] − z(k|k − 1)
             = 0   (66)

with variance

S(k) = E[ν(k)νT (k)]
     = E[(z(k) − H(k)x(k|k − 1))(z(k) − H(k)x(k|k − 1))T ]
     = E[(H(k)[x(k) − x(k|k − 1)] + v(k))(H(k)[x(k) − x(k|k − 1)] + v(k))T ]
     = R(k) + H(k)P(k|k − 1)HT (k)   (67)

So the innovation sequence is white with zero mean and variance S(k). For beacon matching (data validation), these properties can be exploited to determine how likely a beacon is to correspond to the current observation.

The normalised innovation squared q(k) is defined as

q(k) = νT (k)S−1(k)ν(k) (68)

which, under the assumption that the filter is consistent, can be shown to be χ2 distributed with m degrees of freedom, where m = dim(z(k)) is the dimension of the measurement [3].

For gating observations, q is simply formed (usually for each beacon Bi), a threshold is chosen from χ2 tables (given a confidence value, 95% or 99% for example), and then q is compared to the threshold. If the value of q for a given beacon is less than the threshold, then that beacon is likely to have been the one observed.

One caveat, however: it is important to keep track of the number of beacons gated for a given observation. Depending on the geometry of the beacons, it is sometimes possible for multiple beacons to (quite validly) match an observation. In these cases it is usually prudent to ignore the observation rather than trying to guess which beacon was observed. In other words, it is usually safer to throw away good data than to use bad data. This is very important since the incorporation of a wrong beacon will most likely generate a catastrophic fault in the filter; that is, the filter will not be able to match any more


beacons unambiguously, since the vehicle will be at another position and the state covariance error will grow indefinitely. There are other methods to improve the robustness of data association that consider more than one observation at a time or perform delayed batch association.
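The gating test of Equation 68 can be sketched as follows. The function names and the threshold value are illustrative; in practice the threshold is read from χ2 tables for the chosen confidence level and dim(z).

```python
import numpy as np

def gate_beacons(z, x_pred, P_pred, beacons, R, h, H_jac, threshold=6.0):
    """Innovation gating (Equation 68): accept only beacons whose normalised
    innovation squared q falls below a chi-square threshold. The default
    threshold 6.0 is illustrative (roughly the 95% point for 2 d.o.f.)."""
    accepted = []
    for i, b in enumerate(beacons):
        nu = z - h(x_pred, b)                     # innovation for beacon i
        Hx = H_jac(x_pred, b)
        S = Hx @ P_pred @ Hx.T + R                # innovation covariance
        q = float(nu.T @ np.linalg.inv(S) @ nu)   # Eq. 68
        if q < threshold:
            accepted.append(i)
    # Per the text: if zero or several beacons gate, it is safer to
    # discard the observation than to guess.
    return accepted[0] if len(accepted) == 1 else None
```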

3.9 Filter Implementation

Any real localization application will have dead reckoning sensors that provide high frequency information at fixed intervals, and external sensors that provide low frequency information, not necessarily at a pre-fixed interval. Examples of dead reckoning sensors are velocity and steering encoders that are sampled at given intervals. External sensors usually monitor the environment and report information when a possible beacon is detected. In general there are several prediction cycles between pieces of external information, and the external information will in most cases not arrive exactly at a prediction time. Furthermore, the external information will in some cases arrive with a significant delay, due to the processing necessary to extract the possible features or the validation process. One example is shown in Figure 7. In this case the system performs predictions 1, 2 and 3. At time 3 we realise that an observation is available with a time stamp somewhere between 2 and 3. The correct procedure to incorporate this information is to save the previous state at 2 and make a new prediction up to the observation time, then perform an update followed by a prediction to time 3 again. This process requires the filter to save a few previous states, in accordance with the maximum delay expected in the external information.

Figure 7: Asynchronous Prediction Update Cycle
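The roll-back procedure just described can be sketched as a small wrapper around the filter. Everything here is illustrative: predict and update stand for the prediction and update routines of this chapter, and the buffer length would be chosen from the maximum expected observation delay.

```python
from collections import deque

class DelayedObsFilter:
    """Sketch of the scheme of Figure 7: buffer recent (time, state,
    covariance, control) tuples so that a delayed observation can be
    incorporated by re-predicting from the saved state just before its
    time stamp, updating, then re-predicting to the present."""
    def __init__(self, predict, update, buffer_len=50):
        self.predict = predict          # predict(x, P, u, dt) -> (x, P)
        self.update = update            # update(x, P, z) -> (x, P)
        self.history = deque(maxlen=buffer_len)

    def on_control(self, t, x, P, u):
        """Record each prediction-cycle state for possible roll-back."""
        self.history.append((t, x, P, u))

    def on_delayed_obs(self, t_obs, z):
        # find the last saved state at or before the observation time
        past = [rec for rec in self.history if rec[0] <= t_obs]
        if not past:
            return None                 # delay exceeded buffer: discard
        t0, x, P, u = past[-1]
        x, P = self.predict(x, P, u, t_obs - t0)   # predict to obs time
        x, P = self.update(x, P, z)                # incorporate observation
        # replay the buffered controls up to the present
        for (t, _, _, u) in self.history:
            if t > t_obs:
                x, P = self.predict(x, P, u, t - t_obs)
                t_obs = t
        return x, P
```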


4 The Global Positioning System (GPS)

The Global Positioning System is an all-weather, continuous signal system designed to provide information to evaluate accurate position worldwide. GPS achieved initial operational capability on 8 December 1993. It consists of three major segments: space, control and user.

Space Segment: It consists of the GPS satellites. The GPS operational constellation consists of 24 satellites that orbit the earth with a period of 12 hours. There are 6 different orbital planes with 55 degrees inclination, and each orbit, at an altitude of approximately 20200 km, is followed by 4 satellites. This distribution ensures that at least four satellites are always in view anywhere in the world. The maximum number of satellites that can be seen from any point on the surface of the earth is 12, although this is very rare due to natural terrain undulation or other obstructions.

Control Segment: This segment consists of a system of tracking stations distributed around the world. The main objective is to determine the position of the satellites to update their ephemerides. Satellite clock corrections are also updated.

User Segment: The user segment consists of the GPS receivers that use the GPS signal information. The satellites transmit at frequencies L1 = 1575.42 MHz and L2 = 1227.6 MHz, modulated with the pseudo random C/A and P codes and with a navigation message. The C/A code is a repeating 1 MHz pseudo random binary sequence. There is a separate C/A code for each satellite in operation. The code is used by the receiver to identify the satellite and to obtain range information. The Precise Positioning Service (PPS) is provided by the P code included in the L2 signal. This code is only available to authorised users. The Standard Positioning Service (SPS) is based on the C/A code that is included in both frequencies. The original accuracy of SPS, with Selective Availability, is 100 m horizontal (2drms), 156 m vertical (95%) and 340 ns time (95%). The drms metric stands for distance root mean square; 2drms is the radius of the circle that bounds the error with 95% probability. The accuracy of the PPS is 22 m horizontal (2drms), 27.7 m vertical (95%) and 200 ns time (95%).

The navigation message is present on both the L1 and L2 frequencies. It contains information about the precise satellite ephemeris, clock correction parameters, and low precision ephemeris data for the other satellites. After 30 seconds of continuous satellite tracking the GPS receiver is able to use the tracked satellite for position determination.


4.1 GPS observables

Two types of observations are of main interest. One is the pseudo-range, which is the distance from the satellite to the vehicle plus additional errors due to clock drifts, the ionosphere, the troposphere, multi-path and Selective Availability (SA). SA is a deliberate error introduced by the US Department of Defense and consists of additional noise included in the transmitted satellite clock and satellite ephemeris. SA has been eliminated since May 2000, making standard GPS more accurate. Another observation used by GPS receivers is Doppler frequency information. As the vehicle and the satellites are in constant motion with respect to each other, the received signal will experience a change in frequency proportional to their relative velocities. This information, usually referred to as delta range, can be used to generate very accurate velocity estimates. Although most GPS receivers generate position and velocity information, only the more sophisticated systems generate velocity from the Doppler observations, making it independent of the position information. This is an important consideration for data fusion algorithms, since position and velocity observation errors must not be correlated. The precision of the solution is affected by two main factors: the geometry of the relative positions of the satellites with respect to the receiver, usually referred to as PDOP (position dilution of precision), and the precision in range determination. With time-stamped range and satellite position information from at least 4 satellites, the 3-D position x, y and z and the GPS receiver clock drift ∆t can be obtained using standard triangulation techniques. Equation 69 presents the set of non-linear equations that needs to be solved to evaluate the position of the GPS receiver. The position of the satellites is predicted with the ephemeris information. Some GPS receivers include all the satellites in view to obtain a position fix. They usually implement a least squares or Kalman filter algorithm to estimate the position and velocity of the receiver.

4.2 Position evaluation

Once ephemerides and ranges from 4 satellites become available, the set of non-linear equations presented in Equation 69 can be solved. These equations are functions of the known ranges and satellite positions, and of the unknown position of the GPS receiver and its clock drift.

r1 = √((x − x1)2 + (y − y1)2 + (z − z1)2) + c∆t

r2 = √((x − x2)2 + (y − y2)2 + (z − z2)2) + c∆t

r3 = √((x − x3)2 + (y − y3)2 + (z − z3)2) + c∆t

r4 = √((x − x4)2 + (y − y4)2 + (z − z4)2) + c∆t   (69)

Although there are closed form solutions for this system, GPS receivers usually linearise these equations and evaluate the position by solving a linear set of equations, or by implementing a least squares solution for the case where redundant information is available. Linearising Equation 69 we have:

[ r1(p)        [ r1(p0)             [ x − x0        [ εx
  r2(p)    =     r2(p0)    +    A     y − y0    +     εy
  r3(p)          r3(p0)               z − z0          εz
  r4(p) ]        r4(p0) ]             c∆t    ]        εclock ]   (70)

with A:

A = [ (x − x1)/δS1   (y − y1)/δS1   (z − z1)/δS1   1
      (x − x2)/δS2   (y − y2)/δS2   (z − z2)/δS2   1
      (x − x3)/δS3   (y − y3)/δS3   (z − z3)/δS3   1
      (x − x4)/δS4   (y − y4)/δS4   (z − z4)/δS4   1 ]   (71)

with

δSi = √((x − xi)2 + (y − yi)2 + (z − zi)2)   (72)

where p = [x, y, z, c∆t]T and ε are the errors due to range noise and the higher order terms missing from the linearisation. With this approximation the change in position can then be evaluated:

∆p = A−1∆r (73)

When ranges from more than four satellites are available, a least squares solution can be implemented:

∆p = (ATA)−1AT∆r (74)

Finally, the previous position is updated with the correction to obtain the position of the rover at the time stamp of the pseudo-range information:

p = p0 + ∆p (75)

The pseudorange is obtained by the GPS hardware through a correlation process, and the position of the satellites needs to be computed using the precise satellite ephemerides broadcast as part of the navigation message by each satellite.
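Equations 70-75 amount to an iterated linearised solve, which can be sketched compactly. The function name is illustrative; for more than four satellites the least squares form of Equation 74 is applied automatically by the solver.

```python
import numpy as np

def gps_fix(sat_pos, pseudoranges, p0=None, iters=5):
    """Iterative linearised solution of Equation 69 (Equations 70-75).
    sat_pos: (n, 3) satellite positions; pseudoranges: (n,) metres.
    Returns p = [x, y, z, c*dt]; needs n >= 4 satellites."""
    p = np.zeros(4) if p0 is None else np.asarray(p0, float)
    for _ in range(iters):
        diff = p[:3] - sat_pos                     # receiver minus satellites
        dS = np.linalg.norm(diff, axis=1)          # geometric ranges (Eq. 72)
        A = np.hstack([diff / dS[:, None],         # rows of Eq. 71
                       np.ones((len(dS), 1))])     # clock-bias column
        dr = pseudoranges - (dS + p[3])            # range residuals
        dp, *_ = np.linalg.lstsq(A, dr, rcond=None)  # Eq. 73 / 74
        p = p + dp                                 # Eq. 75
    return p
```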


4.3 Most Common GPS Errors

There are several sources of ranging error for GPS. Depending on the type of GPS implementation, they can introduce significant errors in the position estimation.

Satellite Clock Errors The GPS system ultimately depends on the accuracy of the satellite clocks. The ground stations are responsible for estimating the clock errors. These stations do not correct the clock but upload the parameters for the correction formula that the satellite broadcasts as part of the navigation message. Each GPS receiver needs to compensate the pseudorange information with a polynomial function of these parameters. The major source of clock error is SA. With SA on, the RMS value of the error could be as large as 20 m. Without SA, the clock drift can be predicted, reducing the error to less than 2 metres over a 12 hour period.

Ephemeris Errors These errors result from the satellite transmitting the ephemeris parameters with errors. This translates into errors in the prediction of the satellite position, which generate an incorrect position fix at the GPS receiver. These errors grow with time since the last update performed by the ground stations. GPS receivers usually do not use satellites with ephemerides older than 2 hours.

Ionosphere Errors Since there are free electrons in the ionosphere, the GPS signal does not travel at the speed of light while in transit through this region. The delay is proportional to the number of free electrons encountered and inversely proportional to the square of the signal frequency. The user can compensate for these errors using a diurnal model of these delays. The parameters of this model are part of the GPS navigation message. The error remaining after this compensation is of the order of 2-5 m. Another compensation method uses the signals at both frequencies, L1 and L2, to solve for the delay. This method can reduce the errors to 1-2 m.

Troposphere Errors Another delay is introduced by variation of the speed of the signal due to variations in temperature, pressure and humidity. Model corrections can reduce this error to the order of 1 metre.

Receiver Errors This is the error introduced by the GPS receiver when evaluating the range through the correlation process. It is mostly dependent on non-linear effects and thermal noise. The magnitude of this error is of the order of 0.2-0.5 m.


Multipath Errors The range information is evaluated by measuring the travel time of the signal from the satellite to the rover. Sometimes the direct path may be obstructed, but the signal will still reach the rover antenna through an indirect path involving one or more reflections. The receiver will then obtain erroneous range and carrier phase difference information, and an incorrect position fix will be generated. The multipath problem can be reduced with an appropriate selection of the antenna and GPS receiver, and by accepting observations only from satellites above a minimum elevation angle. Since multipath is due to reflections, satellites with lower elevation angles are more prone to generate these errors. The antenna can also be retrofitted with a choke-ring ground plate to eliminate low angle reflections. Some GPS receivers include algorithms and extra hardware to detect and eliminate various types of multipath errors. Although this technology has made enormous progress in the past few years, it is almost impossible to guarantee a given GPS accuracy 100% of the time when working in applications such as mining or stevedoring. Large pieces of equipment, such as cranes and draglines, and large structures will almost certainly cause significant perturbations to the system at unpredictable times.

Selective Availability Selective Availability (SA) is a deliberate error introduced by the US Department of Defense and consists of additional noise included in the transmitted satellite clock and satellite ephemeris. SA has been switched off since May 2000, and it is not the intent of the US to ever use SA again on the civil GPS signal. For this reason this section gives only a brief description of this type of error, to demonstrate the complexity of filtering coloured noise. Figure 8 shows the longitude and latitude errors in metres obtained with a standard GPS receiver with SA on.

Figure 8: Latitude and Longitude errors due to SA


It is clear from this figure that the error is time correlated, showing a characteristic oscillatory behaviour with an amplitude of approximately 60 metres. Figure 9 presents the average PSD obtained with sequences of 512 samples of longitude information. Similar results are obtained for latitude information. The position estimation is corrupted with

Figure 9: PSD of Longitude Error. The PSD has a gain of approximately 43 dB below 2 mHz, continuing with a sharp roll-off of approximately 40 dB/decade up to 6 mHz, where the magnitude starts to decrease at approximately 20 dB/decade.

coloured noise of very low frequency. It becomes clear that without additional information it is practically impossible to eliminate this added noise over short periods of time. For more information about SA the reader can see [22].

Geometric Dilution of Precision The quality of the solution of the position and clock bias error, in relation to the error in the pseudorange measurement, is a function of the matrix A of Equation 73. Assuming that the standard deviation of the pseudorange observation is σ, it can be shown that the covariance matrix for the state p, Equation 75, is given by:

P = (AᵀA)⁻¹ σ² = H σ²      (76)

From this equation the various definitions of estimation accuracy are derived:

VDOP = √H33      (77)

HDOP = √(H11 + H22)      (78)

PDOP = √(H11 + H22 + H33)      (79)

GDOP = √(H11 + H22 + H33 + H44)      (80)


Finally, the estimated errors of the individual components of the state vector p can be given as functions of the DOP variables:

√var(z) = σ · VDOP      (81)

√(var(x) + var(y)) = σ · HDOP      (82)

√(var(x) + var(y) + var(z)) = σ · PDOP      (83)

√(var(x) + var(y) + var(z) + var(c∆t)) = σ · GDOP      (84)

Most GPS receivers evaluate these uncertainties in real time, allowing the user to monitor the accuracy of the solution. This is important in real time applications of stand-alone GPS or when integration with an INS is used. In some cases the GPS may be able to report a solution, but due to the geometrical configuration of the satellites in view the accuracy of the solution can be very poor.
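The DOP definitions above can be checked numerically. The sketch below (Python; the satellite geometry is hypothetical, and the matrix A of Equation 73 is assumed to contain one row per satellite with the unit line-of-sight vector and a clock column of ones) forms H = (AᵀA)⁻¹ and reads off the DOP figures of Equations 77-80:

```python
import numpy as np

def dop_values(A):
    """Compute DOP figures from the linearized geometry matrix A.

    A has one row per satellite: [unit line-of-sight (3 entries), 1 (clock)].
    Returns (VDOP, HDOP, PDOP, GDOP) per Equations 76-80.
    """
    H = np.linalg.inv(A.T @ A)          # P = (A^T A)^-1 sigma^2 = H sigma^2
    vdop = np.sqrt(H[2, 2])
    hdop = np.sqrt(H[0, 0] + H[1, 1])
    pdop = np.sqrt(H[0, 0] + H[1, 1] + H[2, 2])
    gdop = np.sqrt(H[0, 0] + H[1, 1] + H[2, 2] + H[3, 3])  # includes clock term
    return vdop, hdop, pdop, gdop

# Hypothetical geometry: four satellites spread across the sky.
los = np.array([[0.0,   0.0,  1.0],
                [0.9,   0.0,  0.44],
                [-0.45, 0.78, 0.44],
                [-0.45, -0.78, 0.44]])
A = np.hstack([los, np.ones((4, 1))])
vdop, hdop, pdop, gdop = dop_values(A)
```

Note that PDOP² = VDOP² + HDOP² follows directly from the definitions, which makes a convenient sanity check on any implementation.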

4.4 Fundamentals of DGPS (Differential GPS)

The errors in range determination can be significantly reduced with the use of another station placed at a known, surveyed location. This station processes the information from all satellites in view and determines the errors in the range information. These errors are then broadcast by the base station with a radio modem, and the other GPS receivers use the received range corrections to compensate their pseudorange observations. The base station is usually placed in a location with good sky visibility and processes only satellites at least 5 degrees above the horizon, to avoid multipath problems. This implementation is called differential GPS (DGPS) and is shown in Figure 10. Although the user can provide the range corrections, base station and telemetry, there are a number of commercial services that provide standard RTCM corrections. With DGPS the errors due to delays in the troposphere and ionosphere are significantly reduced, and almost all the errors due to SA (satellite ephemeris and clock dither) are eliminated. DGPS was of fundamental importance with SA on: with this approach the positioning errors were reduced from approximately 100 metres to under a metre. Without SA the reduction is not as important, especially for low quality DGPS systems, as can be seen in the following tables. The predicted magnitude of the errors for different types of GPS (SPS/PPS/DGPS, with and without SA) is presented in Table 1 (SPS autonomous and differential, without SA), Table 2 (SPS autonomous and differential, with SA) and Table 3 (PPS autonomous and differential).
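The core of the DGPS idea can be sketched in a few lines. In the toy example below (Python; all positions, satellite identifiers and error magnitudes are hypothetical), the base station, sitting at a surveyed position, turns the difference between the known geometric range and its measured pseudorange into a correction, and the rover applies it. The error common to both receivers cancels, while rover-local effects such as multipath remain:

```python
import math

def geometric_range(sat_pos, station_pos):
    """True distance from a surveyed station to a satellite (ECEF metres)."""
    return math.dist(sat_pos, station_pos)

def base_station_corrections(measured, sat_positions, base_pos):
    """Correction = true geometric range - measured pseudorange, per satellite."""
    return {sv: geometric_range(sat_positions[sv], base_pos) - rho
            for sv, rho in measured.items()}

def apply_corrections(rover_measured, corrections):
    """Rover adds the broadcast corrections to its own pseudoranges."""
    return {sv: rho + corrections[sv] for sv, rho in rover_measured.items()}

# Hypothetical numbers: one satellite, a common 25 m range error
# (e.g. ionospheric delay) seen by both receivers.
sat = {1: (15600e3, 7540e3, 20140e3)}
base_pos = (-4.6e6, 2.6e6, -3.5e6)
true_range = geometric_range(sat[1], base_pos)
base_meas = {1: true_range + 25.0}
corr = base_station_corrections(base_meas, sat, base_pos)
rover_meas = {1: true_range + 25.0 + 0.4}   # same bias plus rover-local error
corrected = apply_corrections(rover_meas, corr)
# corrected[1] - true_range leaves only the 0.4 m rover-local residual
```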

Finally, Table 4 presents actual experimental errors obtained from a 24-hour data set logged with a standard stand-alone GPS receiver. It can be seen that the magnitudes of the errors are smaller than the predicted values.


Error Source                     Bias   Random   Total   DGPS
Ephemeris                        2.1    0        2.1     0
Satellite Clock                  1      0.7      2.1     0
Ionosphere                       4      0.5      4       0.4
Troposphere                      0.5    0.5      0.7     0.2
Multipath                        1      1        1.4     1.4
Receiver measurement             0.5    0.2      0.5     0.5
User Equiv. Range Error          5.1    1.4      5.3     1.6
Vertical 1σ error (VDOP)                         12.8    3.9
Horizontal 1σ error (HDOP 2.0)                   10.2    3.1

Table 1: Standard Errors (metres) - L1 C/A. Without SA

Error Source                     Bias   Random   Total   DGPS
Ephemeris                        2.1    0        2.1     0
Satellite Clock                  20     0.7      20      0
Ionosphere                       4      0.5      4       0.4
Troposphere                      0.5    0.5      0.7     0.2
Multipath                        1      1        1.4     1.4
Receiver measurement             0.5    0.2      0.5     0.5
User Equiv. Range Error          20.5   1.4      20.6    1.6
Vertical 1σ error (VDOP)                         51.4    3.9
Horizontal 1σ error (HDOP 2.0)                   41.1    3.1

Table 2: Standard Errors (metres) - L1 C/A. With SA


Error Source                     Bias   Random   Total   DGPS
Ephemeris                        2.1    0        2.1     0
Satellite Clock                  2.0    0.7      2.1     0
Ionosphere                       1.0    0.5      1.2     0.1
Troposphere                      1.0    1.0      1.4     1.4
Multipath                        1      1        1.4     1.4
Receiver measurement             0.5    0.2      0.5     0.5
User Equiv. Range Error          3.3    1.5      3.6     1.5
Vertical 1σ error (VDOP)                         8.6     3.7
Horizontal 1σ error (HDOP 2.0)                   6.6     3.0

Table 3: Standard Errors (metres) - PPS Dual Frequency P/Y Code

Confidence   Hor. GPS   Hor. DGPS   Vert. GPS   Vert. DGPS
50%          2.5        1.6         5.6         1.8
68.27%       3.8        2.2         7.4         2.7
95.45%       7          4.2         14.4        6.4
99.7%        9.8        6.7         21.3        12.0

Table 4: Comparison of the errors between GPS and DGPS without SA


Figure 10: Differential GPS implementation

4.5 Phase carrier GPS

The more sophisticated GPS receivers make use of carrier phase information to improve the accuracy of the position fix. Differential carrier phase tracking consists of measuring the phase shift between the same signal received at the base and the rover stations. This phase shift is proportional to the distance between the two stations. Much more elaborate hardware and software algorithms are needed for this implementation, since these measurements are subject to phase ambiguities, that is, uncertainty in the number of whole carrier waves between the receiver and the satellite. Algorithm convergence is a vital factor in achieving the accuracy of the carrier phase measurements, and the resolution of the integer ambiguity plays a very important role in the solution of the baseline vector. When the receiver unit is switched on and commences logging, the initial whole cycle difference (ambiguity) between the satellite and the receiver is unknown. As time progresses and the receiver continuously locks on to satellite signals without interruption, the receiver moves towards generating a fixed integer ambiguity for each corresponding satellite signal. Once the state of the ambiguity is held fixed, the receiver is said to have "converged" and the ambiguity is resolved. In some cases the receiver is unable to reach a fixed integer ambiguity state; it is then held floating with minimal tolerance and is also said to have converged. Once the ambiguities have converged, each corresponding satellite signal ambiguity, N, is held constant and the change in phase, ∆ψ, is used to calculate the change in the receiver's position.

Convergence Time for Phase Carrier GPS In any navigation application it is expected that on many occasions the number of satellites in view will not be enough to obtain a carrier phase solution. The system will then be restarted and the ambiguities will need to be resolved again. During the convergence period the accuracy is considerably degraded. The convergence time depends on a number of factors, including the baseline, the number of visible satellites, the satellite configuration, and the method of ambiguity resolution, i.e. single frequency, dual frequency, or combined dual frequency and code pseudorange data. The most important factor influencing the time taken for a receiver to solve the unknown ambiguities, to either a fixed or floating solution, is the applied method of ambiguity resolution. However, both the visible satellite count and the length of the baseline vector also have an important influence.

Method for Ambiguity Resolution: This section presents a comparison between two high quality phase carrier implementations. The experimental data was obtained with a GPS receiver manufactured by NovAtel working with two different algorithms, named RT20 and RT2. The RT20 algorithm is capable of generating a predictable position accuracy within 20 cm CEP in real time using the single L1 frequency. The RT2 algorithm is capable of generating a predictable accuracy within 2 cm CEP, also in real time. The RT2 algorithm utilizes both the L1 and the L2 frequencies, making it possible to resolve ambiguities faster and more accurately using linear combinations of the L1 and L2 frequencies. Figure 11 presents the GPS receiver's estimate of its standard deviation for the RT2 and RT20 algorithms. For this test the GPS antenna was located at a perfectly known position so that the real convergence time could be evaluated. It can be seen that, depending on the algorithm implementation, the system will take between 100 and 180 seconds to be within specifications.

Figure 11: Convergence time for steady state position (standard deviation in metres against time in seconds, RT20 and RT2 algorithms)

The RT2 algorithm had a very distinct and sudden time of convergence when compared to the RT20 algorithm, which demonstrates a more gradual process of solving for the ambiguities, slowly moving towards convergence. This distinct difference is a result of the dual frequency capabilities of the RT2 algorithm. Each individual ambiguity value for any particular observation corresponds to one possible pseudorange value, defined as a lane. For any particular observation there is therefore a large number of possible lane combinations, meaning the receiver must analyze each possible lane value in order to select the correct one. Receivers with only single frequency capabilities have no alternative but to tediously sort through the possible lanes in search of the correct solution. However, dual frequency monitoring of the L1 and L2 frequencies introduces the advantage of building linear combinations of the two frequencies, leading to additional "wider" and "narrower" lanes. The increased wide lane wavelength provides an increased ambiguity spacing, thus making it easier to choose the correct lane. When the correct wide lane has been selected, the software can then search for the narrow lane and as a result resolve the integer ambiguities faster and more accurately. Once the ambiguities are fixed, these values are considered constant, resulting in a more consistent and accurate position estimate. However, when the number of satellites in view drops below five the ambiguities are lost and a full cycle will be restarted.

Figure 12: Convergence time for the RT2 algorithm with different baselines (approx. 100 m and approx. 1 km)

Visible Satellites An adequate number of visible satellites is an essential component of the GPS system. The number of visible satellites influences not only the accuracy of the generated position but also, in the case of phase measurements, the resolution of the signal ambiguities; it therefore influences the convergence time of a receiver. There is a large number of possible lane combinations for every observation. In a situation of increased satellite visibility the number of available observations also increases, thus providing a greater chance of selecting the correct lane and improving the ambiguity resolution time.

Baseline Length The length of the baseline between the remote and the reference station has arguably the second largest influence on the time taken for a receiver to reach a converged state, since essentially the key to convergence is solving the starting vector. For a receiver to reach convergence, the calculations performed by the relative carrier phase GPS system must generate the position of the remote station with respect to the reference station, i.e. the baseline vector. The length of the baseline therefore plays an important role in the convergence time. Figure 12 shows the time required for the same receiver, using the RT2 algorithm, to converge to a fixed integer ambiguity solution, first with a baseline of 100 m and then with a baseline of 1 km. At both locations the number of visible satellites was six.

GLONASS GLONASS is the Russian equivalent of the GPS system; the name stands for Global Navigation Satellite System. It also transmits navigation and range data on two frequencies, L1 and L2. Unlike GPS, where the satellites are distinguished by their spread-spectrum codes, the satellites in GLONASS are distinguished by the frequency of the signal: L1 between 1597-1617 MHz and L2 between 1240-1260 MHz. The system is also envisioned to handle 24 satellites in almost the same configuration as its GPS equivalent.

Both systems employ a military-only code which is modulated onto both the L1 and L2 frequencies; the corresponding civilian access code lies only on the L1 frequency. The initial purpose of implementing two frequencies is to remove the ionospheric errors associated with the transmission of the signal through the atmosphere.

If both systems are combined then a theoretical doubling of the number of satellites in view is achievable. This increase in the number of satellites improves both the navigation performance and the fault detection capabilities. There are various commercial GPS receivers that use both systems.

4.6 Coordinate Transformation

This section presents the coordinate transformation equations that convert position and velocity given in the standard ECEF frame to the local navigation frame used by the data fusion algorithms presented in this course. The position Pg = (PX, PY, PZ) and velocity Vg = (VX, VY, VZ) obtained from the GNSS receiver are delivered in the WGS-84 datum, represented by the ECEF coordinate frame, Figure 13.


The latitude and longitude are obtained by transforming the position observation received by the GNSS receiver from ECEF to geodetic coordinates. The equations are:

Semimajor Earth axis (metres): a = 6378137.0
Flattening: f = 0.003352810664747
Semiminor Earth axis (metres): b = a(1 − f)

Coefficients:

p = √(PX² + PY²)

t = tan⁻¹( (PZ · a) / (p · b) )

e1 = 2f − f²

e2 = (a² − b²) / b²

latitude (degrees): φ = tan⁻¹( (PZ + e2 · b · sin³t) / (p − e1 · a · cos³t) )

longitude (degrees): λ = tan⁻¹( PY / PX )

Given the latitude and longitude, a transformation matrix Cng can be formed which transforms the position and velocity data given by the GNSS receiver from the ECEF frame to the navigation frame:

        ⎡ −sin φ cos λ   −sin φ sin λ    cos φ ⎤
Cng =   ⎢ −sin λ          cos λ          0     ⎥      (85)
        ⎣ −cos φ cos λ   −cos φ sin λ   −sin φ ⎦

Thus the position and velocity of the vehicle in the local navigation frame are

Pn = Cng Pg      (86)

Vn = Cng Vg      (87)

If the vehicle's work area has negligible change in latitude and longitude, then Cng is effectively fixed. Equations 86 and 87 form the observations needed for the aided inertial navigation system.
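The conversion chain above can be collected into a short routine. The following Python sketch (the ECEF position values are hypothetical, and height is ignored, consistently with the equations above) computes latitude and longitude from an ECEF fix, builds the Cng matrix of Equation 85, and rotates a velocity as in Equation 87:

```python
import numpy as np

A_WGS84 = 6378137.0
F_WGS84 = 0.003352810664747
B_WGS84 = A_WGS84 * (1.0 - F_WGS84)

def ecef_to_geodetic(px, py, pz):
    """Closed-form ECEF -> (latitude, longitude) in radians,
    following the equations above."""
    p = np.hypot(px, py)
    t = np.arctan2(pz * A_WGS84, p * B_WGS84)
    e1 = 2.0 * F_WGS84 - F_WGS84**2
    e2 = (A_WGS84**2 - B_WGS84**2) / B_WGS84**2
    lat = np.arctan2(pz + e2 * B_WGS84 * np.sin(t)**3,
                     p - e1 * A_WGS84 * np.cos(t)**3)
    lon = np.arctan2(py, px)
    return lat, lon

def ecef_to_ned_matrix(lat, lon):
    """Direction cosine matrix C_ng of Equation 85 (ECEF -> NED)."""
    sphi, cphi = np.sin(lat), np.cos(lat)
    slam, clam = np.sin(lon), np.cos(lon)
    return np.array([[-sphi * clam, -sphi * slam,  cphi],
                     [-slam,         clam,         0.0],
                     [-cphi * clam, -cphi * slam, -sphi]])

# Hypothetical ECEF fix near Sydney; velocity rotates with the same matrix.
Pg = np.array([-4.646e6, 2.553e6, -3.534e6])
lat, lon = ecef_to_geodetic(*Pg)
Cng = ecef_to_ned_matrix(lat, lon)
Vn = Cng @ np.array([1.0, 2.0, 0.5])   # Equation 87
```

Since Cng is a rotation matrix, Cng Cngᵀ = I, which makes a quick correctness check for any hand-typed implementation.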


Figure 13: Coordinate representations. XYZ represents the orthogonal axes of the ECEF coordinate frame used by the GNSS receiver. λ represents the longitude and φ the latitude of the vehicle. NED represents the local navigation frame in which the vehicle states are evaluated.


5 Inertial Sensors

Inertial sensors make measurements of the internal state of the vehicle. A major advantage of inertial sensors is that they are non-radiating and non-jammable, and may be packaged and sealed from the environment. This makes them potentially robust in harsh environmental conditions. Historically, Inertial Navigation Systems (INS) have been used in aerospace vehicles and military applications such as ships, submarines and missiles, and to a much lesser extent in land vehicle applications. Only a few years ago the application of inertial sensing was limited to high performance, high cost aerospace and military applications. However, motivated by requirements from the automotive industry, a whole variety of low cost inertial systems have now become available for diverse applications such as heading and attitude determination. The most common types of inertial sensors are accelerometers and gyroscopes. Accelerometers measure acceleration with respect to an inertial reference frame; this includes gravitational and rotational acceleration as well as linear acceleration. Gyroscopes measure the rate of rotation independent of the coordinate frame. The most common application of inertial sensors is the use of a heading gyro: integration of the gyro rate information provides the orientation of the vehicle. Other applications of inertial sensors include the use of accelerometers as inclinometers and for vibration analysis. The most sophisticated use of inertial sensors is in full six degree of freedom navigation systems. In this case at least three gyros are used to track the orientation of the platform, and a minimum of three accelerometers are integrated to provide velocity and position. They can also provide 3-D position information and, unlike encoders, have the potential of observing wheel slip. This section presents the different types of inertial sensors and the fundamental principles of inertial navigation.

5.1 Fundamental Principles of Accelerometers and Gyroscopes

5.1.1 Accelerometers

Accelerometers measure the inertial force generated when a mass is affected by a change in velocity. This force may change the tension of a string, cause the deflection of a beam, or change the vibrating frequency of a mass. Accelerometers are composed of three main elements: a mass, a suspension mechanism that positions the mass, and a sensing element that returns an observation proportional to the acceleration of the mass. Some devices include an additional servo loop that generates an opposing force to improve the linearity of the sensor. A basic one degree of freedom accelerometer is shown in Figure 14. This accelerometer is usually referred to as open loop, since the acceleration is indicated by the displacement of the mass.

The accelerometer described in this example can be modelled with a second order equation:


Figure 14: Basic components of an open loop accelerometer (a mass m suspended by a spring k and damper c within a frame, with displacement x along the sensitive axis)

F = m (d²x/dt²) + c (dx/dt) + k x      (88)

where F is the applied force to be measured. The damping can be adjusted to obtain a fast response without oscillatory behaviour. Many commercial accelerometers are based on the pendulum principle; they are built with a proof mass, a spring hinge and a sensing device. These accelerometers are usually constructed with a feedback loop to constrain the movement of the mass and avoid cross-coupling accelerations. Some accelerometers implement variations of this principle, such as the Sundstrand "Q-Flex" design and the Analog Devices silicon micro-machined accelerometer. One important specification of an accelerometer is the minimum acceleration that can be measured. This is of fundamental importance when working in land vehicle applications, where the expected acceleration is usually in the range of 0.1-0.3 g. Figure 15 shows the acceleration measured when travelling in a car at low speed. A standard accelerometer for such applications is usually capable of measuring accelerations of less than 500 µg. The dependence of the bias on temperature and the linearity of the device are also important specifications.
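As an illustration of Equation 88, the short simulation below (Python; the mass, stiffness and damping values are hypothetical, chosen to give a critically damped response) integrates the proof-mass motion for a constant 0.2 g input. The steady-state displacement settles at x = m·a/k, which is the open-loop scale factor of the device:

```python
import numpy as np

def open_loop_accel_response(a_in, m=1e-3, k=40.0, c=0.4, dt=1e-4, t_end=0.5):
    """Integrate m*x'' + c*x' + k*x = m*a_in (Equation 88 with F = m*a_in)
    with semi-implicit Euler steps; returns the proof-mass displacement."""
    n = int(t_end / dt)
    x = np.zeros(n)
    v = 0.0
    for i in range(1, n):
        acc = a_in - (c * v + k * x[i - 1]) / m
        v += acc * dt
        x[i] = x[i - 1] + v * dt
    return x

# A steady 0.2 g input settles at x = m*a/k metres.
g = 9.81
x = open_loop_accel_response(0.2 * g)
x_steady = x[-1]
```

With these hypothetical parameters the natural frequency is 200 rad/s, so the response settles within tens of milliseconds, well before the end of the simulated interval.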

5.1.2 Gyroscopes

These devices return a signal proportional to the rotational velocity. There is a large variety of gyroscopes based on different principles, and the price and quality of these sensors vary significantly. The following sections present some of the most common types of gyroscope available for industrial applications.

Figure 15: Typical acceleration in land vehicle applications

5.1.3 Vibratory Gyroscopes

These gyroscopes can be manufactured in very different forms, but they are all based on obtaining the rotation rate by measuring the Coriolis acceleration. The device can be modelled by a simple mass-spring system, as shown in Figure 16. The purpose of the gyroscope is to measure the angular velocity of the particle, which is assumed to be rotating about the axis OZ.

In this approach the particle is made to vibrate with constant amplitude along the axis OX. This motion is usually referred to as the primary motion and is controlled by an embedded circuit that maintains the oscillation at constant amplitude. Under rotation the mass will experience a Coriolis inertial force that is proportional to the applied rate of turn and has a direction parallel to the OY axis. This motion is referred to as the secondary motion, and its amplitude is measured to provide information proportional to the angular rotation. Some devices provide an additional feedback mechanism for this secondary motion to increase the linearity of the sensor. The performance of gyroscopes is mainly characterized by two parameters: scale factor and drift. A gyro with low distortion in the scale factor will be able to accurately sense angular rates at different angular velocities. Various system imperfections as well as environmental conditions can cause significant variations in the scale factor. Gyro drift is the nominal output of the gyro in steady state.


Figure 16: Mass-spring model (primary motion along OX, secondary motion along OY)

This non-zero output is usually temperature dependent.

Murata manufactures a gyroscope, model Gyrostar ENV05, that is based on this principle. This device presents very good high frequency characteristics but poor low frequency performance; the expected drift is in the order of 1 degree/minute. Noise and significant bias temperature dependence also affect the performance of this particular sensor. Better performance can be expected from the Vibrating Structure Gyroscopes from BAE and the quartz rate sensors built by Systron Donner. These devices have a drift of less than 0.15 degrees/minute, and their linearity is better than 0.05 % of the full range.

5.1.4 Fiber Optic Gyros

Fiber optic gyros are based on the Sagnac effect, discovered by Georges Sagnac in 1913. This effect can be explained by assuming two waves of light circulating in opposite directions around a path of radius R. If the source is rotating at speed ω, the light travelling against the direction of rotation will reach the source sooner than the wave travelling in the same direction, as shown in Figure 17. The wave travelling with the rotation covers a distance D1 in a transit time t1, while the other wave covers a distance D2 in time t2:

D1 = 2πR + Rωt1
D2 = 2πR − Rωt2      (89)

Making the waves travel the path N times, the difference in transit time becomes:

∆t = N(t1 − t2) = 4πNR²ω / c²      (90)


It is important to relate this time difference to a phase shift at a particular frequency:

φ = 2π∆t f      (91)

For a given rotation rate ω the phase shift will be

∆φ = (8π²NR²f / c²) ω      (92)

Most low cost implementations of these devices work in an open loop manner. The maximum phase shift that can be evaluated without ambiguities is 90 degrees.
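Plugging representative (hypothetical) numbers into the Sagnac relation shows why many fibre turns are needed: the phase shift for a small coil at typical land vehicle rotation rates is only milliradians. The sketch below assumes ∆φ = 8π²NR²fω/c² with f = c/λ (a form consistent with Equations 90 and 91); coil radius, turn count and wavelength are illustrative values:

```python
import math

C = 299792458.0

def sagnac_phase_shift(rate_rad_s, radius_m, n_turns, wavelength_m=850e-9):
    """Phase shift between counter-propagating beams,
    Delta_phi = 8 pi^2 N R^2 f omega / c^2 with f = c / lambda."""
    f = C / wavelength_m
    return 8.0 * math.pi**2 * n_turns * radius_m**2 * f * rate_rad_s / C**2

# Hypothetical open-loop FOG: 4 cm coil radius, 1000 fibre turns.
rate = math.radians(1.0)                 # 1 deg/s input
dphi = sagnac_phase_shift(rate, 0.04, 1000)
# Since dphi is linear in the rate, the open-loop limit of pi/2 sets the
# maximum measurable rotation rate for this coil:
max_rate = rate * (math.pi / 2) / dphi   # rad/s
```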

Figure 17: Transmission time difference between the counter-propagating waves

There are commercial fiber optic gyros such as the KVH E-Core 2000 and 1000, which are capable of drifts as low as 0.036 and 0.12 degrees per minute respectively. Closed loop optical gyros are also available, but they are more expensive.

5.2 Sensor Errors

The measurement errors associated with inertial sensors depend on the physical operational principle of the sensor itself. The general error equation used for gyroscopic performance along, say, the x direction is written as

δωx = b + bg [ax ay az]ᵀ + sf·ωx + my·ωy + mz·ωz + η,      (93)


and that of the accelerometers as

δfx = b + sf·ax + my·ay + mz·az + η,      (94)

where

• b is the residual bias

• bg is a 1× 3 vector representing the g-dependent bias coefficients

• sf is the scale factor term

• m represents mounting and internal misalignments and

• η is random noise on the sensor signal

Other terms, such as anisoelastic and anisoinertia errors, mainly affect mechanical gyros, while vibro-pendulous errors affect mechanical accelerometers; these will not be considered here. Apart from the random noise term η, all other error terms are in principle predictable, thus allowing for some form of compensation.
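Since the deterministic terms of Equations 93 and 94 are predictable, they can be removed once calibrated. A minimal sketch (Python; all coefficient values are hypothetical calibration results) evaluates the x-gyro error model and subtracts it from a raw reading:

```python
import numpy as np

def gyro_error(omega, accel, b=0.01, bg=(0.002, 0.001, 0.0005),
               sf=0.003, my=0.001, mz=-0.002):
    """Deterministic x-gyro error per Equation 93 (noise term excluded):
    delta_wx = b + bg . a + sf*wx + my*wy + mz*wz."""
    wx, wy, wz = omega
    return b + np.dot(bg, accel) + sf * wx + my * wy + mz * wz

def compensate(measured_x, omega_est, accel):
    """Subtract the calibrated deterministic errors from the raw reading."""
    return measured_x - gyro_error(omega_est, accel)

# Hypothetical calibration check: simulate a corrupted reading, then
# remove the deterministic errors again.
true_omega = np.array([0.1, 0.02, -0.05])     # rad/s
accel = np.array([0.0, 0.0, -9.81])           # stationary, gravity on z
raw = true_omega[0] + gyro_error(true_omega, accel)
corrected = compensate(raw, true_omega, accel)
```

In practice the rate and acceleration fed to the error model are themselves estimates, so the cancellation is never exact; only the random term η is genuinely irreducible.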

5.2.1 Evaluation of the Error Components

This section discusses tests which can be performed on gyros in order to systematically remove the errors; similar tests can be conducted on accelerometers. It should be noted that temperature and memory effects play a significant role in the stability of the output of low cost inertial sensors. It is for this reason that, when one purchases low cost inertial units, not all the values for the error terms are available, and so testing is required based on the application at hand. If the gyro is left stationary, then the only error term encountered is the g-independent bias. One of the strongest correlations that can be found in inertial sensors is that between the g-independent bias and temperature, also known as the in-run bias. Thus, by cycling through the temperatures that would be encountered in the target application, the bias of the gyro can be determined. The better the gyro, the smaller the bias variation over the temperature range; furthermore, the better the sensor, the greater the linearity of this bias variation. There is also a hysteresis effect encountered with most inertial sensors. Thus, in an ensemble of tests, cycling through the temperature in the same manner each time, the variation in the bias between ensembles can be determined. This is known as the switch-on to switch-on bias. Again, the better the gyro, the better the switch-on to switch-on bias. When testing for the remaining error components, the g-independent bias is assumed


known and is removed. Start with a rate table and place the gyro such that its supposed sensitive axis is orthogonal to the input rotation vector; any output signal will then be due to internal misalignment of the sensor's input axis. The purpose of mounting the sensor orthogonal to the input rotation is to deal with a small error about null, as opposed to a small error about a large value (as would be encountered if the sensitive axis were parallel to the rotation input). With the misalignment and g-independent bias available, the gyro is placed on the rate table with its sensitive axis parallel to the rotation input. The rate table is cycled in steps from stationary to the maximum rotation measurable by the gyro, holding the rotation rate at particular points. The scale factor and the scale factor linearity can then be determined by comparing the output of the gyro to the rotation rate input. With low cost gyros the temperature also has a part to play in scale factor stability, so the tests should be conducted with varying temperature as well. Finally, the gyro can be placed in a centrifuge machine, which applies both a rotation rate and an acceleration to the gyro. The output reading from the gyro, minus the previously calibrated terms, gives the bias due to acceleration, or g-dependent bias. By mathematically integrating Equations 93 and 94, the effect of each error term on the performance of the gyro can be determined. For each error term that is accounted for there is a corresponding increase in performance. Table 5 compares the error values available for the RLG, FOG, ceramic and silicon VSGs.

Characteristic                    RLG        FOG        Ceramic VSG   Si VSG
g-independent bias (°/hr)         0.001-10   0.5-50     360-1800      > 2000
g-dependent bias (°/hr/g)         0          < 1        36-180        36-180
Scale factor non-linearity (%)    0.2-0.3    0.05-0.5   5-100         5-100

Table 5: Comparison of some of the major errors with various gyro implementations

5.2.2 Faults associated with Inertial Sensors

Accelerometers measure the absolute acceleration with respect to an inertial frame. We are interested in the translational acceleration, hence the algorithms used should compensate for all other accelerations. For practical purposes it can be considered that gravity is the only other acceleration present. In Figure 15 it can be seen that the average acceleration obtained from a land vehicle is smaller than 0.3 g. This implies that the orientation of the accelerometer has to be known with very good accuracy in order to compensate for gravity without introducing errors comparable to the actual translational acceleration.


This has been the main limitation preventing the widespread use of inertial sensors as position sensors in low cost land navigation applications. The orientation of the accelerometer can be tracked using a gyroscope. These sensors provide an output proportional to the rotation rate but are always contaminated by noise and drift. For short periods of time the drift can be approximated by a constant bias. The orientation is evaluated with a single integration of the gyroscope output:

θm = ∫ (θ̇ + b + ν) dt = θ + bt + ∫ ν dt      (95)

The integration given in Equation 95 returns the rotation angle with two additional undesired terms: a random walk due to the noise, and a term that grows with time in proportion to the gyro bias. As shown in Figure 18, the gyro drift will introduce an error

Figure 18: Errors due to drift in the gyro

in acceleration given by

a = ax sin(bt) (96)

For small angle increments the errors in acceleration, velocity and position are given by:


a = ax b t      (97)

v = (1/2) ax b t²      (98)

p = (1/6) ax b t³      (99)

It is important to remark that a constant gyro bias introduces an error in position determination that grows proportionally to t³. For standard low cost, good quality gyros the expected bias is in the order of 1-10 degrees/hour. Without calibration, the bias could introduce an error of approximately 140 metres, due to the incorrect compensation of gravity, after only 2 minutes of operation. Another aspect that needs to be considered is the resolution of the analog-to-digital converter used, especially when working with low drift gyroscopes: the effective standard deviation of the gyro noise can be increased by selecting an inappropriate analog-to-digital converter. A gyro with a stability of the order of 1 degree per hour that works over a range of +/-100 deg/sec will require a 16-bit converter to operate within its specifications. The bias in the accelerometer will also increase the error in position, proportionally to the square of time:

v = ba t      (100)

p = (1/2) ba t²      (101)
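The growth rates of Equations 99 and 101 are easy to check numerically. The sketch below (Python) reproduces the figure quoted in the text: an uncompensated 10 deg/hr gyro bias, through incorrect gravity compensation, accumulates roughly 140 m of position error after 2 minutes. The 1 mg accelerometer bias used for comparison is a hypothetical value:

```python
import math

def gyro_bias_position_error(bias_deg_hr, t_s, a=9.81):
    """Position error p = (1/6) a b t^3 (Equation 99) from an uncompensated
    gyro bias b, with gravity leaking into the horizontal channel."""
    b = math.radians(bias_deg_hr) / 3600.0      # deg/hr -> rad/s
    return a * b * t_s**3 / 6.0

def accel_bias_position_error(bias_g, t_s, g=9.81):
    """Position error p = (1/2) b_a t^2 (Equation 101) from an
    uncompensated accelerometer bias."""
    return 0.5 * bias_g * g * t_s**2

# The case quoted in the text: a 10 deg/hr gyro bias after 2 minutes.
p_gyro = gyro_bias_position_error(10.0, 120.0)    # roughly 140 m
# For comparison, a hypothetical 1 mg accelerometer bias over the same time.
p_acc = accel_bias_position_error(1e-3, 120.0)    # roughly 70 m
```

The cubic growth of the gyro term is why gyro quality, not accelerometer quality, usually dominates free-inertial position performance.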

The error introduced by the accelerometer is too important to be neglected. Figure 19 presents the bias measured over a period of six hours in a set of identical accelerometers belonging to one inertial unit while it was stationary. Evidently the biases do not remain constant, and it can be clearly seen that no single accelerometer is indicative of the general performance of the others. The different bias values indicate that they are physically different low cost inertial sensors. The changes in value occur due to an increase in the temperature of the unit caused by ambient temperature variations. The estimation of the biases needs to be done each time the vehicle is stationary to minimize this effect. The calibration procedure must incorporate the identification of gyro and accelerometer biases at the beginning of the navigation task and become active each time the vehicle stops. The identification of the biases can also be done on-line, but information from other sensors is then required. A second major error is due to the integration of the noise term (∫ ν dt), which causes a growing error known as random walk. Figure 20 presents the effect of integrating a gyro signal on several occasions, after the bias was removed and with the unit stationary. Although the average error is zero, in any particular run the error growth will occur in a random direction. Assuming unity strength white Gaussian noise and letting x(t) = ∫₀ᵗ ν(u) du, then the ensemble


Figure 19: Bias change of the X, Y and Z accelerometers, together with the unit temperature, over a period of 6 hours (acceleration in g, time in hours).

mean value is:

E[ ∫₀ᵗ ν(u) du ] = ∫₀ᵗ E[ν(u)] du (102)

and the variance is

E[x²(t)] = E[ ∫₀ᵗ ν(u) du ∫₀ᵗ ν(v) dv ] = ∫₀ᵗ ∫₀ᵗ E[ν(u)ν(v)] du dv (103)

E[ν(u)ν(v)] is the auto-correlation function, which in this case (assuming uncorrelated errors) is simply a Dirac delta function δ(u − v). Hence Equation 103 becomes:

E[x²(t)] = ∫₀ᵗ ∫₀ᵗ δ(u − v) du dv = t (104)

The standard deviation of the noise is:

σν = √t (105)

Therefore, without any aiding, the white noise will cause an unbounded error growth in the inertial sensors, whose value at any particular time will be bounded by ±2√t 95% of the time. For white noise of strength K the standard deviation is σν = K√t: the larger the standard deviation of the noise, the greater the standard deviation of the error. This is an important specification of the system, since a good gyro can be degraded with


Figure 20: Random walk of the integrated X gyro signal over several runs (angle in degrees versus time in minutes), with the bias removed and the unit stationary.

inappropriate signal conditioning and electronics.
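The √t error growth of Equations 102-105 can be verified with a small Monte Carlo sketch (Python/NumPy; the sample period and ensemble size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, runs = 0.1, 100.0, 500      # sample period (s), duration (s), ensemble size
n = int(T / dt)

# Unity-strength white noise: variance 1/dt per sample, so that the
# integral x(t) = int nu(u) du has variance t (Equation 104).
noise = rng.normal(0.0, 1.0 / np.sqrt(dt), size=(runs, n))
walk = np.cumsum(noise * dt, axis=1)

# Ensemble statistics at t = T: mean near 0, standard deviation near sqrt(T).
print(walk[:, -1].mean(), walk[:, -1].std(), np.sqrt(T))
```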

5.3 Applications of Inertial Sensors

Inclinometers: These sensors are based on accelerometers that resolve the direction of the gravity vector. The gravity vector is a known quantity that is constant in magnitude and direction. Figure 21 presents a basic example of a tilt sensor. These sensors are usually calibrated through a look-up table to determine the offset as a function of temperature. Low cost devices can provide bank and elevation information with an accuracy of up to 0.1 degrees while the vehicle is stationary. They cannot be used when the platform is moving since they are sensitive to translational and rotational accelerations. Pendulum accelerometers are also used to determine the bank and elevation angles. These devices use the principle that a pendulum with a length equal to the radius of the earth will always point to the vertical irrespective of the acceleration. This effect can be approximated with appropriate feedback loop compensation. Although they are able to obtain some rejection of the undesired acceleration due to the movement of the platform, they can only be used under very low vehicle accelerations.
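As an illustration of the relation in Figure 21, the tilt can be recovered by inverting Vout = Scale Factor · G · sin(B) + Offset. A minimal sketch (the scale factor and offset values are hypothetical, and this only holds while the platform is stationary):

```python
import math

def tilt_angle(v_out, offset, scale_factor, g=1.0):
    """Invert the inclinometer model Vout = ScaleFactor*G*sin(B) + Offset
    to recover the tilt angle B in degrees. Valid only when the platform
    is stationary (no translational or rotational acceleration)."""
    return math.degrees(math.asin((v_out - offset) / (scale_factor * g)))

# Hypothetical sensor: 2 V/g scale factor, 2.5 V offset when level.
print(tilt_angle(2.6, 2.5, 2.0))   # ~2.87 degrees
```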

Vibration: Vibration analysis consists of measuring the strength of vibrations over a range of frequencies. Machinery vibration problems can consume excessive power and impose additional wear on bearings, seals, couplings and foundations. Vibrations


Figure 21: Basic principle of inclinometers. With the accelerometer level, the output is Vout = Accelerometer Offset; tilted by an angle B, the output is Vout = Scale Factor · G · sin(B) + Accelerometer Offset.

are typically caused by machinery misalignment and unbalance. These problems can be detected by obtaining the frequency response of the vibration of the machine under investigation, commonly via analysis of the FFT of an acceleration signal. There are other applications where vibration monitoring may be useful, such as the analysis and identification of vibration sources, the detection of problems in structures, vibration and shock testing, and analysis to ensure products comply with the vibration specifications required for the application.
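A minimal sketch of this kind of FFT-based vibration analysis (a synthetic 50 Hz unbalance signal; all parameters are illustrative):

```python
import numpy as np

fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)

# Synthetic accelerometer signal: 50 Hz unbalance vibration plus noise.
accel = (0.5 * np.sin(2 * np.pi * 50.0 * t)
         + 0.05 * np.random.default_rng(1).normal(size=t.size))

# Amplitude spectrum; the dominant peak identifies the vibration source.
spectrum = np.abs(np.fft.rfft(accel)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
print(freqs[np.argmax(spectrum[1:]) + 1])   # dominant frequency, skipping DC
```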

Heading: Another application of inertial sensors is as a heading gyro. Integration of the gyro rate information provides the orientation of the vehicle. A good quality gyro will have zero or constant bias, and small noise variance. There are many types of gyroscopes, but few in the price range that can be afforded in commercial mobile robot applications. Fibre optic and vibratory gyroscopes have been released with satisfactory specifications and reasonable price ranges for many industrial applications.

Inertial Measurement Unit (IMU): A full inertial navigation system (INS) consists of at least three (triaxial) accelerometers and three orthogonal gyroscopes that provide measurements of acceleration in three dimensions and rotation rates about three axes. The physical implementation of inertial sensors can take one of two forms:

• Gimballed arrangement: This unit consists of a platform suspended by a gimbal structure that allows three degrees of freedom, as shown in Figure 22. This device is usually attached to a vehicle through the outermost gimbal. The vehicle can then perform any trajectory in 3-D while the platform is maintained level with respect to a desired navigation frame using feedback control loops. These loops use the gyro signals to provide appropriate control signals to the actuators placed in each gimbal. With this approach the platform can be maintained aligned with respect to a coordinate system, and the information provided by the accelerometers can then be integrated directly to obtain position and velocities.


Figure 22: Gimballed six degree of freedom INS. Each axis has a control loop to maintain the axis at a particular orientation with respect to a coordinate frame. The gyros provide the feedback information to control the actuators of each gimbal.

• Strapdown arrangement: An IMU assembled from low cost solid-state components is almost always constructed in a "strap-down" configuration. This term means that all of the gyros and accelerometers are fixed to a common chassis and are not actively controlled on gimbals to align themselves in a pre-specified direction. As shown in Figure 23, a gyro and an accelerometer are placed along each axis. This design has the advantage of eliminating all moving parts. The strapdown construction, however, means that substantially more complex software is required to compute and distinguish true linear acceleration from angular acceleration and body roll or pitch with respect to gravity. Once true linear acceleration has been determined, vehicle position may be obtained, in principle, by double integration of the acceleration. Vehicle orientation and attitude may also, in principle, be determined by integration of the rotation rates of the gyros. In practice this integration leads to unbounded growth in position errors with time due to the noise associated with the measurements and the non-linearity of the sensors.

5.4 Coordinate Frames

Inertial Navigation systems require the transformation of the different measurementquantities between different frames of reference. The accelerometer and gyros measure


Figure 23: Strapdown six degree of freedom INS. Each axis (x, y, z) has an accelerometer (Ax, Ay, Az) and a gyro (Gx, Gy, Gz). The gyro signals are used to determine the attitude of the unit and compensate the acceleration information for gravity.

acceleration and rotation rates with respect to an inertial frame. Other sensors such as the Global Positioning System (GPS) report information in the Earth Centered Earth Fixed (ECEF) coordinate frame. The navigation system needs to transform all the information to a common coordinate frame to provide the best estimate, and sometimes must be able to report the output in various other coordinate frames. This section introduces the different coordinate systems used in this course.

Inertial Frame: According to Newton's laws of motion, an inertial frame is a frame that does not rotate or accelerate. It is almost impossible to find a true inertial frame; a good approximation is a frame fixed with respect to a distant star. For practical purposes we define an inertial frame with origin at the earth's centre of mass, with the x axis pointing towards the vernal equinox at time T0, the z axis along the earth's spin axis and the y axis completing the right handed system.


Earth-Centered Earth-Fixed (ECEF): This frame has its origin at the mass centre of the Earth; the x axis points to the Greenwich meridian in the equatorial plane, the y axis lies at 90 degrees east in the same plane, and the z axis points along the rotation axis of the reference ellipsoid. This frame is not an inertial frame, but it can be related to one by a negative rotation equivalent to the Greenwich Mean Sidereal Time (GMST). The earth's geoid is a surface that minimises the difference with the earth's sea level. This geoid is approximated with an ellipsoid around the earth's minor axis; different parameters are available for different areas of the world. The most common reference ellipsoid used is WGS 84. This frame is also referred to as the Transport frame.

Local Level Earth Frame: This frame, also known as the Geographic or Earth frame, is defined locally relative to the earth's geoid. The x axis points North, the z axis is perpendicular to the tangent to the ellipsoid, pointing towards the interior of the earth (not necessarily to the centre of the earth), and the y axis points East to complete a right handed orthogonal system.

Wander Frame: The Local Level frame presents some numerical problems when integrating the data from an inertial system close to the poles. In these areas significant rotations around the z axis are required to maintain the x axis pointing North, even for small movements in the y direction. This problem can be solved by performing all the computation in a frame whose x axis is not required to point North; the x axis then wanders from North at a rate chosen by the user.

Geocentric Frame: This frame is similar to the Local Level frame with the difference that the z axis points to the centre of the earth.

Local Geodetic: This coordinate system is very similar to the Local Level frame. The main difference is that when the system is in motion the tangent plane origin is fixed, while in the Local Level frame the origin is the projection of the platform origin onto the earth's geoid. This frame is mainly used to perform navigation in local areas relative to a given origin point.

Body Frame: Most navigation systems have sensors attached to the vehicle. The body frame is a frame rigidly attached to the vehicle. The origin of this frame can be arbitrarily chosen, although some locations may simplify the kinematic models.


Summary on frames

Figure 24 shows the coordinate systems used in this course. Absolute sensors like GPS provide position in ECEF coordinates, using X, Y, Z or latitude, longitude and height, Φ, λ, H. Inertial sensors return information in the body frame, which is in permanent rotation and translation with respect to the navigation frame N, E, D shown in the figure. All the sensory information needs to be converted to the navigation frame prior to performing the data fusion task to obtain the best position estimate. This is one of the tasks of the navigation system.

Figure 24: Different navigation frames: the ECEF axes X, Y, Z, latitude and longitude φ, λ, and the local navigation frame N, E, D.
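The conversion from latitude, longitude and height (Φ, λ, H) to ECEF X, Y, Z on the WGS 84 ellipsoid can be sketched with the standard closed-form expressions (the example coordinates are illustrative):

```python
import math

# WGS 84 ellipsoid parameters
A = 6378137.0                  # semi-major axis, m
E2 = 6.69437999014e-3          # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert latitude/longitude/height to ECEF X, Y, Z (metres)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat)**2)   # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

# Sydney, roughly 33.87 S, 151.21 E, at sea level.
print(geodetic_to_ecef(-33.87, 151.21, 0.0))
```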

5.5 Inertial Navigation Equations

This section presents the derivation of the inertial navigation equations for global and local area navigation. For additional information the reader can see [24].

5.5.1 The Coriolis Theorem

Navigation with respect to a rotating frame such as the earth requires the Coriolis theorem.The theorem states that the velocity of a vehicle with respect to a fixed inertial frame vi,


is equal to the ground speed of a vehicle ve (the velocity of the vehicle with respect to theearth), plus the velocity of the vehicle due to the rotation rate of the earth with respectto the inertial frame ωie, at the point on earth where the vehicle is located r, that is,

vi = ve + ωie × r, (106)

where ωie = [0 0 Ω] and Ω is the rotation rate of the Earth. Differentiating this equation with respect to the inertial frame gives

v̇i|i = v̇e|i + ω̇ie|i × r + ωie × vi. (107)

Now assuming that the angular acceleration of the Earth is zero, that is ω̇ie = 0, and substituting Equation 106 into 107,

ωie × vi = ωie × [ve + ωie × r]
         = ωie × ve + ωie × [ωie × r], (108)

hence Equation 107 becomes

v̇i|i = v̇e|i + ωie × ve + ωie × [ωie × r]. (109)

Now v̇i|i = f + g, where f is the specific force encountered due to the vehicle motion and g is the force due to gravity. Hence Equation 109 becomes

f + g = v̇e|i + ωie × ve + ωie × [ωie × r], (110)

that is,

v̇e|i = f − [ωie × ve] + [g − ωie × [ωie × r]]. (111)

Equation 111 simply states that the acceleration over the Earth's surface is equal to the acceleration measured by the accelerometers, compensated for the Coriolis acceleration encountered due to the velocity of the vehicle over a rotating Earth, ωie × ve, and for the local gravity acceleration, which comprises the Earth's gravity, g, and the term due to the rotation of the Earth, also known as the centripetal acceleration, ωie × [ωie × r].

5.5.2 Navigation in large areas

The Transport frame is used when a vehicle travels over large distances around the earthsuch as in missile or air transport applications. The objective of this framework is to ob-tain the ground speed of a vehicle with respect to a navigation frame, generally expressedin North, East and Down coordinates, at a given location, expressed in latitude, longitude


and height. In such situations the rotation of the earth needs to be taken into consideration along with the fact that the navigation frame is also rotating. As an example, constantly keeping the North axis aligned on a flight from Sydney to London will require the navigation frame to constantly rotate. This, coupled with the fact that the earth is rotating during this transition, causes a Coriolis acceleration between the navigation frame, the earth rotation and the ground speed of the vehicle. Hence, the Transport framework is defined as follows: the acceleration of the vehicle with respect to the navigation frame, v̇e|n, is equal to the acceleration of the vehicle with respect to the inertial frame, v̇e|i, minus the Coriolis acceleration due to the navigation frame rotating, ωen, on a rotating earth, ωie. That is,

v̇e|n = v̇e|i − (ωie + ωen) × ve. (112)

Substituting in Equation 111,

v̇e|n = fn − [ωie × ve] + [g − ωie × [ωie × r]] − (ωie + ωen) × ve
     = fn − (2ωie + ωen) × ve + [g − ωie × [ωie × r]], (113)

where fn represents the acceleration in the navigation frame. Since the inertial unitmeasures the acceleration in the body frame fb, this acceleration needs to be transformedinto the navigation frame, that is,

fn = Υfb. (114)

The transformation Υ can take on three forms, as will be discussed in Section 5.6. Regardless of how Υ is obtained, the rotation data relating the body to the navigation frame, ωbn, is required. However, the gyros measure the total inertial rotation ωib, which comprises ωbn plus the rotation of the navigation frame with respect to the inertial frame, which is the rotation rate of the navigation frame with respect to the earth plus the rotation rate of the earth with respect to the inertial frame,

ωbn = ωib − [ωie + ωen], (115)

where ωen is known as the transport rate and is defined as

ωen = [ ve/(Ro + h), −vn/(Ro + h), −ve tan L/(Ro + h) ]. (116)

L is the latitude of the vehicle, Ro is the radius of the Earth and h is the height of thevehicle above the earth’s surface. The transport rate is what rotates the navigation frameas the vehicle travels across the surface of the earth.


5.5.3 Navigation in local areas, Earth Frame

In the Earth frame the acceleration of the vehicle with respect to the earth, v̇e|e, is equal to the acceleration of the vehicle with respect to the inertial frame, v̇e|i, minus the Coriolis acceleration due to the velocity of the vehicle ve over a rotating earth ωie,

v̇e|e = v̇e|i − ωie × ve. (117)

The Earth frame will be used throughout this work since its definition is well suited forland vehicle applications. Substituting in Equation 111 gives

v̇e|e = fe − 2[ωie × ve] + [g − ωie × [ωie × re]]. (118)

Again, since the inertial unit measures the acceleration in the body frame, the accelerationmeasurements need to be transformed into the earth frame,

fe = Υfb, (119)

where Υ now comprises the rotation rates ωbe which relate the body to the earth frame. However, the gyros measure the total inertial rotation ωib, which comprises ωbe plus the rotation of the earth with respect to the inertial frame transformed into the body frame, so

ωbe = ωib −Υbeωie. (120)
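To get a feel for the size of the Coriolis correction in Equation 118 for a land vehicle, the 2(ωie × ve) term can be evaluated numerically (the latitude and speed below are illustrative choices):

```python
import numpy as np

OMEGA = 7.292115e-5                      # Earth rotation rate, rad/s
lat = np.radians(-33.87)                 # example latitude (Sydney)

# Earth rotation vector resolved in the local N, E, D frame.
w_ie = OMEGA * np.array([np.cos(lat), 0.0, -np.sin(lat)])

v = np.array([20.0, 0.0, 0.0])           # 20 m/s due north (~72 km/h)
coriolis = 2.0 * np.cross(w_ie, v)       # the 2*(w_ie x v_e) term of Eq. 118
print(np.linalg.norm(coriolis))          # on the order of 1e-3 m/s^2, tiny next to g
```

At land-vehicle speeds this correction is three to four orders of magnitude below gravity, which is why it is often the sensor biases, not the Coriolis term, that dominate the error budget.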

5.5.4 Schuler Damping

The major difference between the Transport and Earth frame equations is the transport rate defined by Equation 116. The transport rate is subtracted from the body rates and hence defines the attitude of the system with respect to the Transport frame. However, incorporating the transport rate also adds a bound to the errors associated with inertial systems. This is because the acceleration in the navigation frame is twice integrated to provide velocity and then position; the velocity is used to determine the transport rate, which is then subtracted from the body rates. For example, assume that the pitch is in error by a small angle δβ. This error in turn causes an incorrect calculation of the acceleration due to the misalignment and the measurement of gravity given by gδβ. Repeating this process shows that the misalignment feeds back onto itself. Looking at the characteristic equation of such a system, it may be observed that it exhibits simple harmonic motion with angular frequency

ωs = √(g/Ro),


corresponding to a period of 84.4 minutes. This is known as the Schuler oscillation. In essence, by implementing the Transport frame, the errors are bounded within the Schuler oscillations. If a system is tuned to the Schuler frequency then some general rules of thumb can be obtained for inertial systems:

• An initial velocity error δvo causes a position error of δvo sin(ωst)/ωs;

• An initial attitude error δβ, causes a position error of δβR(1− cos(ωst));

• An acceleration bias δf causes a position error of δf(1 − cos(ωst))/ωs²; and

• A gyro bias δωb causes a position error of δωbR(t − sin(ωst)/ωs).

Thus for all errors, except for the gyro bias, the position error is contained within the Schuler oscillation. The gyro bias causes Schuler oscillations which however grow linearly with time. It is for this reason that, with accurate gyros, passenger airliners can travel vast distances relying solely on inertial navigation. A reasonable question is "Why not use the Transport frame for land vehicle applications and hence benefit from this bounding property?". The Transport frame can be used for any inertial navigation implementation and thus allows for bounded error growth, irrespective of how large this error may be. However, as an example, a low cost inertial system with a typical gyro bias of 0.01 deg/sec will give rise to a bounded position error of 160000 km, while a gyro bias of less than 0.001 deg/hr, such as encountered with the ring laser gyros found in passenger airliners, gives rise to a bounded position error of 4 km. Thus Schuler bounding has no purpose when using low grade inertial units. If the transport term from Equation 113 is removed, the resulting equation has a similar structure to that of the Earth frame, Equation 118. However, the Earth frame provides a better conceptual understanding when dealing with navigation within confined areas.
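The Schuler frequency and its 84.4 minute period follow directly from the gravity and Earth-radius values:

```python
import math

g = 9.81            # gravity, m/s^2
Ro = 6.378e6        # Earth radius, m

ws = math.sqrt(g / Ro)              # Schuler angular frequency, rad/s
period_min = 2 * math.pi / ws / 60.0
print(period_min)                   # ~84.4 minutes
```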

5.6 Attitude Algorithms

The transformation matrix Υ is required to evaluate the acceleration vector in the desired frame given the acceleration vector in the body frame. This transformation matrix has to be accurate, since misalignment (errors in estimated attitude) causes the components of the gravity vector measured by the accelerometers to be confused with true vehicle acceleration. Integrated over time, even small misalignment errors will cause large estimation errors. The transformation matrix consists of the roll, pitch and yaw angles required to rotate the body frame to the navigation frame, and hence is updated continuously since the body frame is always rotating with respect to the navigation frame. The updating process propagates this matrix based on data obtained from the gyros; thus any error in Υ is caused


by both the physical errors associated with the gyros and the errors in the algorithms used to propagate the transformation matrix. The physical errors are discussed in Section 5.2. There are a number of algorithms available for attitude propagation. However, regardless of the type of attitude algorithm implemented, in their analytical forms they provide identical results. The choice of algorithm depends on the processing power available, on the errors introduced due to discretisation of the algorithms, and on the errors introduced due to the simplification of the algorithms, which is done to ease the computational load on the processor. There are in principle three approaches to propagating the transformation matrix: the Euler, Direction Cosine and Quaternion methods. The Euler approach is conceptually easy to understand although it has the greatest computational expense. Although the Quaternion approach is computationally inexpensive compared to the other two, it has no direct physical interpretation of the motion of the vehicle. The Direction Cosine approach fits in between, both in terms of physical understanding and in computational expense.

5.6.1 Euler Representation

The Euler representation is the easiest form to understand: it is the mathematical analog of a gimbal system. The roll θ, pitch β and yaw γ angles used to represent the rotations from the body frame to the navigation frame are placed in a transformation matrix Enb, obtained by multiplying the consecutive rotation matrices representing a roll, followed by a pitch and then a yaw,

Enb = [ γc  −γs  0 ] [ βc   0  βs ] [ 1   0    0  ]
      [ γs   γc  0 ] [ 0    1  0  ] [ 0   θc  −θs ]
      [ 0    0   1 ] [ −βs  0  βc ] [ 0   θs   θc ]

    = [ βcγc   −θcγs + θsβsγc    θsγs + θcβsγc  ]
      [ βcγs    θcγc + θsβsγs   −θsγc + θcβsγs  ]
      [ −βs     θsβc             θcβc           ]   (121)

where subscripts s and c represent sine and cosine of the angle. As the body frame rotates,the new angles are obtained from

θ̇ = (ωy sin θ + ωz cos θ) tan β + ωx (122)

β̇ = ωy cos θ − ωz sin θ (123)

γ̇ = (ωy sin θ + ωz cos θ) sec β, (124)

where ω is the rotation rate measured by the gyros about the x, y and z axes. As is apparent from these equations, both the roll and yaw updates have singularities when the pitch of the vehicle is π/2. This is equivalent to the physical phenomenon of “gimbal lock”


in gimbal systems. In a strapdown system, as the vehicle approaches π/2, theoretically infinite word length and iteration rates are required in order to accurately obtain the result. It is for this reason that Euler updates are not generally implemented, although this is not a problem for land vehicle applications. Furthermore, this approach is computationally expensive due to the trigonometry required in the updates and in forming Enb.

Discretisation The discretisation procedure is as follows:

• The new roll angle is obtained through integration by

θ(i + 1) = ∫ θ̇ dt + θ(i) (125)

and similarly for the pitch and yaw. The angles are then placed into the matrix to form Enb(i + 1).

• Obtain the acceleration data in the body frame fb = [fx, fy, fz] and evaluate theacceleration in the navigation frame

fn = Enb fb (126)

• Integrate the acceleration to obtain velocity and then position.
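The Euler discretisation above can be sketched with a first-order integration of Equations 122-124 (the function name is my own):

```python
import math

def euler_update(angles, w, dt):
    """One Euler-angle attitude update (Equations 122-124) using
    first-order integration. angles = (roll, pitch, yaw) in rad;
    w = (wx, wy, wz) body rates in rad/s; dt = sample period in s."""
    th, be, ga = angles
    wx, wy, wz = w
    th_dot = (wy * math.sin(th) + wz * math.cos(th)) * math.tan(be) + wx
    be_dot = wy * math.cos(th) - wz * math.sin(th)
    ga_dot = (wy * math.sin(th) + wz * math.cos(th)) / math.cos(be)  # sec(beta)
    return (th + th_dot * dt, be + be_dot * dt, ga + ga_dot * dt)

# Pure roll rate: only the roll angle should change.
a = (0.0, 0.0, 0.0)
for _ in range(100):
    a = euler_update(a, (0.1, 0.0, 0.0), 0.01)
print(a)   # roll ~0.1 rad, pitch and yaw ~0
```

Note the tan(β) and sec(β) factors: this is exactly where the π/2 singularity discussed above appears.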

5.6.2 Direction Cosine Matrix (DCM) Representation

The direction cosine matrix Cnb is a 3 × 3 matrix containing the cosines of the angles between the body frame and the navigation frame. Cnb is propagated by

Ċnb = Cnb Ωbn, (127)

where Ωbn is a skew-symmetric matrix representing rotation rates as measured by thegyros,

Ωbn = [ 0    −ωz   ωy ]
      [ ωz    0   −ωx ]
      [ −ωy   ωx   0  ]. (128)

The initial Cnb is simply the initial Enb.


Discretisation The discretisation procedure is as follows:

• Obtain the gyro outputs ωx, ωy, ωz and integrate to determine the change in angleφx, φy, φz.

• Determine the angular vector magnitude

φ = √(φx² + φy² + φz²) (129)

• Determine the series coefficients

α = sin φ / φ (130)

β = (1 − cos φ) / φ² (131)

• Form the angular skew matrix

φ = [ 0    −φz   φy ]
    [ φz    0   −φx ]
    [ −φy   φx   0  ] (132)

• Update the direction cosine matrix

Cnb(i + 1) = Cnb(i)[I3×3 + αφ + βφ²] (133)

The transformation matrix can be simplified if one assumes small angular rotations (< 0.05 deg); thus,

Cnb(i + 1) = Cnb(i)[I + ∆θ] (134)

with

∆θ = [ 0     −∆θY   ∆θP ]
     [ ∆θY    0    −∆θR ]
     [ −∆θP   ∆θR   0   ] (135)

• Obtain the acceleration data in the body frame and evaluate the acceleration in the navigation frame

fn = Cnb(i + 1)fb (136)

• Integrate the acceleration to obtain velocity and then position.
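The DCM discretisation steps above can be sketched as follows (Python/NumPy; the function name is illustrative):

```python
import numpy as np

def dcm_update(C, phi):
    """One DCM update step (Equations 129-133).
    C: current 3x3 body-to-navigation matrix; phi: integrated gyro
    angle increments [phi_x, phi_y, phi_z] over the sample period (rad)."""
    mag = np.linalg.norm(phi)                     # Eq. 129
    skew = np.array([[0.0, -phi[2], phi[1]],      # Eq. 132
                     [phi[2], 0.0, -phi[0]],
                     [-phi[1], phi[0], 0.0]])
    if mag > 1e-12:
        alpha = np.sin(mag) / mag                 # Eq. 130
        beta = (1.0 - np.cos(mag)) / mag**2       # Eq. 131
    else:
        alpha, beta = 1.0, 0.5                    # small-angle limits
    return C @ (np.eye(3) + alpha * skew + beta * skew @ skew)  # Eq. 133

# A 90-degree yaw built from many small increments.
C = np.eye(3)
for _ in range(1000):
    C = dcm_update(C, np.array([0.0, 0.0, np.pi / 2 / 1000]))
print(np.round(C, 3))   # ~rotation of 90 deg about z
```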

Page 72: Navigation

Navigation System Design (KC-4) 67

5.6.3 Quaternion Representation

In the Quaternion approach the rotation from one frame to another can be accomplished by a single rotation about a vector through a given angle. A quaternion consists of four parameters which are a function of this vector and angle. The initial quaternion is obtained from the roll, pitch and yaw angles defined in the Euler representation, or alternatively through the Direction Cosine parameters,

q(i) = [ 0.5 √(1 + Cnb11 + Cnb22 + Cnb33) ]
       [ (1/(4 q(1))) (Cnb32 − Cnb23)     ]
       [ (1/(4 q(1))) (Cnb13 − Cnb31)     ]
       [ (1/(4 q(1))) (Cnb21 − Cnb12)     ]   (137)

Discretisation The discretisation procedure is as follows:

• Obtain the gyro outputs ωx, ωy, ωz and integrate to determine change in angleφx, φy, φz.

• Determine the quaternion operator coefficients

γ = sin(φ/2) / φ (138)

δ = cos(φ/2) (139)

• Determine the quaternion operator

h(i) = [ δ    ]
       [ γ φx ]
       [ γ φy ]
       [ γ φz ]   (140)

• Generate the quaternion matrix and update

q(i + 1) = [ q(i)1  −q(i)2  −q(i)3  −q(i)4 ]
           [ q(i)2   q(i)1  −q(i)4   q(i)3 ]
           [ q(i)3   q(i)4   q(i)1  −q(i)2 ]
           [ q(i)4  −q(i)3   q(i)2   q(i)1 ] h(i)   (141)

• Obtain the acceleration data in the body frame and evaluate the acceleration in the navigation frame,

fn = q(i + 1) fb q∗(i + 1), (142)

where q∗ is the complex conjugate of q. A less computationally expensive approach is to convert the quaternion back into Cnb by

Cnb = [ q1² + q2² − q3² − q4²   2(q2q3 − q1q4)          2(q2q4 + q1q3)         ]
      [ 2(q2q3 + q1q4)          q1² − q2² + q3² − q4²   2(q3q4 − q1q2)         ]
      [ 2(q2q4 − q1q3)          2(q3q4 + q1q2)          q1² − q2² − q3² + q4²  ],

and then

fn = Cnb fb (143)

• Integrate the acceleration to obtain velocity and then position.
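The quaternion update above can be sketched as follows (function names are my own; note that the coefficient of Equation 138 is taken here in the normalised form sin(φ/2)/φ so that the quaternion keeps unit norm):

```python
import numpy as np

def quat_update(q, phi):
    """One quaternion attitude update (Equations 138-141).
    q = [q1, q2, q3, q4] (scalar first); phi = integrated gyro angle
    increments [phi_x, phi_y, phi_z] over the sample period (rad)."""
    mag = np.linalg.norm(phi)
    gamma = np.sin(mag / 2.0) / mag if mag > 1e-12 else 0.5
    delta = np.cos(mag / 2.0)
    h = np.array([delta, gamma * phi[0], gamma * phi[1], gamma * phi[2]])
    q1, q2, q3, q4 = q
    Q = np.array([[q1, -q2, -q3, -q4],     # Equation 141
                  [q2,  q1, -q4,  q3],
                  [q3,  q4,  q1, -q2],
                  [q4, -q3,  q2,  q1]])
    return Q @ h

def quat_to_dcm(q):
    """Convert the quaternion back to the direction cosine matrix."""
    q1, q2, q3, q4 = q
    return np.array([
        [q1*q1 + q2*q2 - q3*q3 - q4*q4, 2*(q2*q3 - q1*q4), 2*(q2*q4 + q1*q3)],
        [2*(q2*q3 + q1*q4), q1*q1 - q2*q2 + q3*q3 - q4*q4, 2*(q3*q4 - q1*q2)],
        [2*(q2*q4 - q1*q3), 2*(q3*q4 + q1*q2), q1*q1 - q2*q2 - q3*q3 + q4*q4]])

# 90-degree yaw from many small increments; the norm stays 1.
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    q = quat_update(q, np.array([0.0, 0.0, np.pi / 2 / 1000]))
print(np.round(quat_to_dcm(q), 3), np.linalg.norm(q))
```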

5.6.4 Discussion of Algorithms

All three approaches produce identical results in their complete form.

The Euler approach is not commonly used due to the presence of the roll/yaw singularity. This does not pose a problem for land vehicle applications; however, the Euler approach requires high computational effort and thus powerful processing if fast updates are required.

The Quaternion approach has the benefit of only requiring the update of four variables. It is most often used in military and space applications. In order to save on computational cost, the quaternion is converted to a DCM representation for the transformation of the acceleration from the body frame to the navigation frame. This can be a significant computational burden if fast sampling rates are required. To overcome this, most military users implement a dual sampling system. As an example, a Laser INS (LINS) system typically samples Ring Laser Gyros (RLGs) at 1200 Hz and the accelerometers at 400 Hz. The lower sampling rate is sufficient for typical LINS navigation applications. Three samples of the RLGs are used to update the quaternion, thus forming a "super" quaternion representing a single rotation. This quaternion is then converted to the DCM representation and used to transform the acceleration measurements.

The DCM approach is commonly used in all forms of inertial navigation. It also offers the best representation for land navigation. Although nine variables are updated at each sample, it is less computationally expensive than the Euler representation, and unlike the Quaternion approach, it requires no conversion to another representation in order to transform the acceleration data from the body frame to the navigation frame. In order


to satisfy the small angle assumption, the update rates generally have to be high. For example, land vehicles can undergo rotation rates of 50 deg/sec, and so a sampling rate of at least 1 kHz is required (to keep the angles below 0.05 deg). Such a sampling rate is high for low cost inertial systems, although in the more expensive systems used in military and space applications 20 kHz is not uncommon.

5.7 Attitude evaluation error sources

The accuracy of the attitude evaluation can be affected by a number of factors, such as the approximations made by the computation algorithm and the hardware limitations of the technology used. This section presents a description of the most important sources of error.

5.7.1 Errors in attitude computation

Equations 130 and 131 presented the series coefficients required for the DCM updating process. If these coefficients could be determined exactly then the DCM generated would be exact. However, in practice a Taylor series expansion is required,

α = 1 − φ²/3! + φ⁴/5! − ... (144)

β = 1/2! − φ²/4! + φ⁴/6! − ... (145)

The number of terms which are included in the expansion, and hence the accuracy of theseries, describes the order of the algorithm. In [27] it is shown that the error in the DCMalgorithm can be represented as

Ddc = (1/δt)(φ a1 cos φ − sin φ + φ² a2 sin φ), (146)

where a1 = 1 and a2 = 0 for a first order algorithm, a1 = 1 and a2 = 0.5 for a second order algorithm, a1 = 1 − φ²/3! and a2 = 0.5 for a third order algorithm, and so on.

For example, consider a land vehicle whose axes experience rotations in the order of 20 deg/sec. Table 6 shows the drift in the algorithm when the sampling rate is set to 10, 125, 200 and 500 Hz and the order of the algorithm is increased from first to third.

What is apparent from the table is that although the performance of the algorithm improves with each increase in sampling rate, there is a more marked improvement from increasing the order of the algorithm. Furthermore, although the sampling rates suggested in this table are not comparable to


Order of Algorithm   10 Hz   125 Hz       200 Hz        500 Hz
1                    30      0.23         0.04          0.012
2                    14.7    0.11         0.021         0.006
3                    0.001   5 × 10⁻⁷     4.85 × 10⁻⁷   1.4 × 10⁻⁹

Table 6: Algorithm drift rates in deg/hr caused by sampling a continuous rotation rate of 20 deg/sec

the kHz sampling rates implemented by high grade inertial systems, it is the relative magnitude between the sampling rate and the continuous rotation rate that is of concern. Hence, as is apparent from the table, the drift errors caused by the simplification of the algorithm are kept small as long as the sampling rate is of higher magnitude than the continuous rotation rate.

5.7.2 Vibration

Vehicle vibration has a detrimental effect on the performance of inertial navigation systems. This is due to the low bandwidth of low cost inertial sensors, causing attenuation or total rejection of motion vibration at high frequencies. Vibration is generally a combination of both angular and translational motion. If the bandwidths of the gyros and accelerometers are equal, and the vibration of the vehicle at frequencies above this bandwidth has a smaller magnitude than the resolution of the sensors (which is likely with low cost units), then vibratory motion can safely be ignored. However, the bandwidth of low cost gyros is almost always significantly less than that of low cost accelerometers. Vibrations cause small changes in attitude which are measured by the accelerometers as a component of gravity. This acceleration component will be incorrectly evaluated as apparent vehicle motion since the gyros cannot measure the change in attitude. To overcome this problem it is necessary to introduce vibration absorbers into the physical system to remove high frequency vibration. As an example, ceramic VSGs have a bandwidth of approximately 70 Hz, those of a particular brand of piezo-accelerometers 300 Hz. Thus the approach would be to obtain vibration absorbers that attenuate as much of the signal as possible above 70 Hz. However, while low cost inertial units have the benefit of being light weight, it is difficult to find absorbers that can attenuate high frequencies given the light load on the absorbers themselves. Furthermore, it is important to ensure that the natural frequency associated with the absorbers does not lie in a region where any excitation of the vibration causes inaccurate readings of the sensors.

The effect of vibration on the attitude algorithms is termed “coning” or “non-commutativity”.


The definition of coning motion is simply stated as the cyclic motion of one axis due to the rotational motion of the other two axes. If one axis has an angular vibration rotation of A sin(ωt) and the other axis A sin(ωt + φ), then the third axis will have a motion describing a cone. A perfect cone motion would occur if the phase difference φ between the two axes is ninety degrees; otherwise the motion will exhibit some other form (and in reality would be random). In [10] three forms of coning errors are discussed:

• True Coning Errors Type 1: Where the true coning motion due to vibration stimulates the presence of coning motion in the algorithms used to determine the attitude of the inertial unit.

• True Coning Errors Type 2: Where the true coning motion due to vibration causes a real net angular rotation rate ωc, which if not properly taken into account will be assumed to be rotation of the vehicle when it is not.

• Pseudo Coning Errors Type 3: When there is no real coning motion due to vibration but coning motion is produced in the algorithms due to induced errors in the inertial unit or in the algorithms themselves.

Type 1 errors arise because of sensor errors, namely gyro scale factor and gyro orthogonality errors, both of which are large with low cost inertial units.

An example of Type 2 errors may occur when an inertial unit uses RLGs. This gyro works on the principle of two opposing light beams operating in a fixed optical path having the same optical frequencies. When the gyro undergoes rotation the frequencies change in order to maintain the resonant frequency required for the laser beams. At very low rotation rates this does not happen and the two beams assume the same frequency, so it appears as though there is no rotation; a phenomenon known as “lock-in”. To remove this the gyros are “dithered”; that is, a low amplitude angular vibration is applied to the gyro. This vibration occurs at high frequency and hence can cause coning motion. One way to minimise the effect is to dither the gyros at different frequencies.

Type 3 errors are of most concern. They arise from errors associated with truncation in the attitude algorithm and the limited bandwidth of the algorithm, both of which are alleviated as the order of the algorithm is increased along with the sampling rate.

The problem with coning motion is determining whether the right order algorithm or sampling rate has been chosen, and whether effects such as quantisation errors or dithering are causing any errors. The approach taken in industry is to place the system on a vibration table which can subject the unit to coning motion. The drift in position and attitude is an indication of coning error magnitude. This, however, is only beneficial in cases where the application is known and the inertial unit is built to suit. For generic low cost inertial applications one purchases the unit “off the shelf” and hence such techniques are not available. In these situations vibration can cause errors in the attitude evaluation and hence drive navigation errors, thus requiring aiding from an external sensor in order to bound these errors.


5.7.3 Minimising the Attitude Errors

In light of the arguments presented, the approach taken for minimising the errors associated with attitude algorithms for low cost inertial units consists of the following steps:

• The bandwidth of the gyros employed will be the limiting factor in performance. Generally the bandwidth of the accelerometers employed and the sampling rate achievable will be higher than the bandwidth of the gyro. If vehicle vibration exceeds the bandwidth of the gyro, then the only reasonable choice is to use vibration absorbers to attenuate incoming signals above the bandwidth of the gyro, taking into consideration the natural frequency of the absorber.

• The sampling rate of the inertial sensors should be above the Nyquist frequency and furthermore should be as high as possible.

• The higher the order of the algorithm the better; the achievable order comes down to the sampling rate and the processing power available.

5.8 Error Models

The accuracy of inertial navigation will depend on the accuracy of the sensors employed and on the computer implementation of the inertial algorithms. To analyse how these two effects contribute to the inertial navigation system development, inertial error equations are derived. These equations provide the position, velocity and attitude error growth, given the sensor and algorithm errors. The development of the error equations is accomplished by perturbing (or linearising) the nominal equations. The perturbation can occur in two forms:

• The Computer Approach (ψ angle), where the analysis occurs about the computed geographical position determined by the inertial computer. Since the computer frame is known, the perturbations of the angular position and rate are zero.

• The True Frame Approach (φ angle), where the perturbation occurs about the true position of the vehicle. The true position is not known and hence all the elements are perturbed.

The benefit of implementing the perturbation in the computer frame is that the misalignment ψ between the computer frame and the true frame is independent of the position of the computer frame. When perturbing in the true frame the misalignment is coupled with the position. However, perturbation in the true frame is simpler. Both perturbation techniques produce identical results, as shown in [1, 2, 4].

Perturbation analysis has traditionally centred on the Transport or Wander Azimuth frames because of their widespread use. The perturbation of the Earth frame is the objective of this section and is carried out using the true frame approach. It will be shown that in this frame, misalignment propagation is independent of position, thus delivering the same benefit as the computer frame approach.

Perturbation in the true frame is accomplished by stating that the error in a particular state is the difference between the estimated state and the true state. Thus, given that the Earth frame velocity equation (Equation 118) is

\dot{v}^e = C_b^e f^b - 2[\omega_{ie}^e \times v^e] + [g - \omega_{ie}^e \times [\omega_{ie}^e \times r^e]],    (147)

and defining the total gravity acceleration as

g_t = [g - \omega_{ie}^e \times [\omega_{ie}^e \times r^e]],    (148)

then by definition of the perturbation

\delta\dot{v} = \tilde{\dot{v}}^e - \dot{v}^e
             = [\tilde{C}_b^e \tilde{f}^b - C_b^e f^b] - [2\omega_{ie}^e \times \tilde{v}^e - 2\omega_{ie}^e \times v^e] + [\tilde{g}_t - g_t],    (149)

where \tilde{v}^e is the evaluated velocity vector and v^e is the true velocity vector. The evaluated transformation matrix \tilde{C}_b^e is the true transformation matrix C_b^e misaligned by the misalignment angles δψ. The misalignment is represented in skew-symmetric form [δψ×]. Hence

\tilde{C}_b^e = [I_{3\times 3} - [\delta\psi\times]] C_b^e,    (150)

thus

\delta\dot{v} = C_b^e \tilde{f}^b - C_b^e f^b - [\delta\psi\times] C_b^e \tilde{f}^b - \delta[2\omega_{ie}^e \times v^e] + \delta g_t.    (151)

If the errors in the Coriolis and gravity terms are insignificant then

\delta\dot{v} = C_b^e \delta f^b - [\delta\psi\times] C_b^e f^b
             = C_b^e \delta f^b - [\delta\psi\times] f^e.    (152)

Now

-[\delta\psi\times] f^e = [f^e\times] \delta\psi,    (153)


hence

\delta\dot{v} = [f^e\times]\delta\psi + C_b^e \delta f^b.    (154)

Thus the propagation of velocity errors in the Earth frame, \delta\dot{v}, is equal to the acceleration error in the Earth frame due to the misalignment of the frame, [f^e\times]\delta\psi, together with the errors associated with the measurements of the acceleration, \delta f^b, transformed over to the Earth frame via C_b^e. The errors in the measurements of the acceleration are associated with the accelerometers and are discussed in the next section.

Rearranging Equation 150,

[\delta\psi\times] = I_{3\times 3} - \tilde{C}_b^e C_b^{eT},    (155)

and differentiating gives

[\delta\dot{\psi}\times] = -\dot{\tilde{C}}_b^e C_b^{eT} - \tilde{C}_b^e \dot{C}_b^{eT}.    (156)

The update equations of the transformation matrix for the true and evaluated frames are

\dot{C}_b^e = C_b^e \Omega_{eb}^b \quad \text{and} \quad \dot{\tilde{C}}_b^e = \tilde{C}_b^e \tilde{\Omega}_{eb}^b.    (157)

Substituting into Equation 156 gives

[\delta\dot{\psi}\times] = -[\tilde{C}_b^e \tilde{\Omega}_{eb}^b] C_b^{eT} - \tilde{C}_b^e [C_b^e \Omega_{eb}^b]^T
                        = -\tilde{C}_b^e [\tilde{\Omega}_{eb}^b - \Omega_{eb}^b] C_b^{eT}.    (158)

Perturbation of the rotation update matrix gives

\delta\Omega_{eb}^b = \tilde{\Omega}_{eb}^b - \Omega_{eb}^b,    (159)

therefore

[\delta\dot{\psi}\times] = -\tilde{C}_b^e\, \delta\Omega_{eb}^b\, C_b^{eT}.    (160)

Substituting in Equation 150,

[\delta\dot{\psi}\times] = -[[I_{3\times 3} - [\delta\psi\times]] C_b^e]\, \delta\Omega_{eb}^b\, C_b^{eT}
                        = -C_b^e \delta\Omega_{eb}^b C_b^{eT} + [\delta\psi\times] C_b^e \delta\Omega_{eb}^b C_b^{eT}.    (161)


The product of the small terms [δψ×] and δΩ is assumed zero. Therefore

[\delta\dot{\psi}\times] = -C_b^e \delta\Omega_{eb}^b C_b^{eT}
                        = -C_b^e \delta\Omega_{eb}^b C_e^b
                        = -[C_b^e \delta\omega_{eb}^b \times].    (162)

The error form of Equation 120 is

\delta\omega_{eb}^b = \delta\omega_{ib}^b - C_e^b \delta\omega_{ie}^e.    (163)

Given that the rotation rate of the Earth is known, and thus \delta\omega_{ie}^e = 0, Equation 162 can be rewritten as

[\delta\dot{\psi}\times] = -[C_b^e \delta\omega_{ib}^b \times] \quad \text{or} \quad \delta\dot{\psi} = -C_b^e \delta\omega_{ib}^b.    (164)

That is, the propagation of attitude errors \delta\dot{\psi} is simply the errors associated with the rotation rate measurements, \delta\omega_{ib}^b, transformed over to the Earth frame via C_b^e. The errors associated with the measurements are purely defined by the gyros and are discussed in the next section. Note that the propagation of the attitude errors is independent of position. Thus, although the derivation was approached through perturbation of the true frame, the result delivers the same benefit as found in the computer frame approach, and this is due to the structure of the inertial equations in the Earth frame.

Thus, given the velocity and attitude error propagation equations and an input trajectory, the error growth of the inertial system can be determined. The only terms which need to be determined are the errors in the acceleration measurement, \delta f^b = [\delta f_x, \delta f_y, \delta f_z]^T, and the errors in the rotation rate, \delta\omega_{ib}^b = [\delta\omega_x, \delta\omega_y, \delta\omega_z]^T. These terms can be experimentally evaluated.
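As a concrete illustration, the error growth implied by Equations 154 and 164 (together with the position error rate being the velocity error) can be stepped forward with a simple Euler integration. This is only a sketch: the function and variable names are illustrative, all quantities are plain Python lists, and a fixed C_b^e and specific force are assumed.

```python
def propagate_errors(dpsi, dv, dp, f_e, C_eb, dfb, dwb, dt):
    """One Euler step of the Earth-frame error equations:
    attitude error rate (Eq 164), velocity error rate (Eq 154),
    and position error rate (equal to the velocity error)."""
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]]
    dpsi_dot = [-x for x in matvec(C_eb, dwb)]      # Eq 164: -C_b^e * delta omega
    Cdf = matvec(C_eb, dfb)                         # C_b^e * delta f
    fxpsi = cross(f_e, dpsi)                        # [f^e x] * delta psi
    dv_dot = [fxpsi[i] + Cdf[i] for i in range(3)]  # Eq 154
    dpsi = [dpsi[i] + dpsi_dot[i] * dt for i in range(3)]
    dv = [dv[i] + dv_dot[i] * dt for i in range(3)]
    dp = [dp[i] + dv[i] * dt for i in range(3)]     # position error integrates velocity error
    return dpsi, dv, dp
```

Even a small constant gyro error produces a linearly growing attitude error, which couples through the specific force into a quadratically growing velocity error: the familiar drift behaviour of an unaided low cost unit.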

5.9 Calibration and Alignment of an Inertial Measurement Unit

The objective of calibration is to determine the biases in the accelerometers and gyros. This is obtained by first determining the initial alignment of the inertial unit and hence in turn evaluating the initial direction cosine matrix.

5.9.1 Alignment Techniques

If the accelerometer readings are known perfectly then the attitude of the inertial unit can be determined by resolving the gravity component. In order to determine the gravity component measured by the accelerometers, Equation 136 is rearranged to give

f^b = (C_b^n)^{-1} f^n.    (165)

Since C_b^n is orthogonal, its inverse is simply its transpose. The inertial unit is stationary and hence the only acceleration measured is that due to gravity along the vertical axis. Thus

f^n = [0 \;\; 0 \;\; -g]^T.    (166)

Equation 165 becomes

\begin{bmatrix} f_{xT} \\ f_{yT} \\ f_{zT} \end{bmatrix} =
\begin{bmatrix}
\cos\beta\cos\gamma & \cos\beta\sin\gamma & -\sin\beta \\
-\cos\theta\sin\gamma + \sin\theta\sin\beta\cos\gamma & \cos\theta\cos\gamma + \sin\theta\sin\beta\sin\gamma & \sin\theta\cos\beta \\
\sin\theta\sin\gamma + \cos\theta\sin\beta\cos\gamma & -\sin\theta\cos\gamma + \cos\theta\sin\beta\sin\gamma & \cos\theta\cos\beta
\end{bmatrix}
\begin{bmatrix} 0 \\ 0 \\ -g \end{bmatrix},    (167)

where the subscript T represents the true acceleration component due to gravity. Hence

f_{xT} = g\sin\beta,    (168)
f_{yT} = -g\sin\theta\cos\beta,    (169)
f_{zT} = -g\cos\theta\cos\beta.    (170)

Although no sensor is perfect, the higher the accuracy, the tighter the error tolerances and thus the better the accuracy of alignment which can be obtained. As the accuracy of the sensors decreases, due to the errors mentioned previously, the alignment accuracy also decreases. Rearranging Equation 168 to determine the pitch β, and substituting this into either Equation 169 or 170, will solve for the roll θ. This procedure for determining alignment is called “coarse alignment”. If the accuracy provided by coarse alignment is not sufficient for navigation performance then external alignment information will be required. This information can come from tilt sensors or GNSS attitude information, for example. Coarse alignment is generally used for rapid alignment, such as in missile applications, where the time required for averaging data from external sensors is not available.

The final term which needs to be evaluated is the heading of the vehicle γ. Gyrocompassing is the self determination of the heading. Once the attitude of the unit is found, the reading on the gyros can be used to determine the components of the Earth’s rotation, and by knowing the initial position of the vehicle, heading can be resolved. However, with low cost gyros, gyrocompassing is generally not possible due to the high noise and low resolution of these sensors. In such cases external information is required to determine the initial heading.
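A minimal sketch of coarse alignment, solving Equations 168 and 169 for pitch and roll from averaged stationary accelerometer readings (function and variable names are illustrative; the readings are assumed already averaged, in m/s²):

```python
import math

def coarse_alignment(fx, fy, g=9.81):
    """Coarse alignment: recover pitch (beta) from Eq 168 and roll (theta)
    from Eq 169, given stationary x/y accelerometer readings that sense
    only gravity."""
    # Eq 168: f_xT = g sin(beta)  ->  beta = asin(f_x / g)
    pitch = math.asin(max(-1.0, min(1.0, fx / g)))
    # Eq 169: f_yT = -g sin(theta) cos(beta)  ->  theta = asin(-f_y / (g cos(beta)))
    roll = math.asin(max(-1.0, min(1.0, -fy / (g * math.cos(pitch)))))
    return pitch, roll
```

The clamping to [-1, 1] guards against sensor noise pushing the argument of asin out of its domain.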


5.9.2 Alignment for Low Cost Inertial Units

Due to the low accuracy of the inertial units implemented in this thesis, none of the common self-aligning or calibration methods provide results accurate enough for navigation. Instead, the inertial unit uses two tilt sensors which provide measurements of the bank and elevation of the inertial platform, thus providing the initial alignment information required. A positive bank will cause the y-accelerometer to measure a component of gravity equal to

f_{yT} = -g\sin(\mathrm{bank}).    (171)

Similarly a positive elevation will cause the x-accelerometer to measure

f_{xT} = g\sin(\mathrm{elevation}).    (172)

Equating Equation 168 with 172, and Equation 169 with 171, the pitch and roll angles are

\beta = \mathrm{elevation},    (173)
\theta = \sin^{-1}\!\left(\frac{\sin(\mathrm{bank})}{\cos\beta}\right).    (174)

To resolve the heading, a compass is used. Equations 173 and 174 are used along with the initial heading angle to determine the initial C_b^n.
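The initial direction cosine matrix can then be assembled from the tilt-sensor and compass data. The sketch below combines Equations 173–174 with the Euler-angle matrix of Equation 167 (transposed to give C_b^n); angles are in radians and the function name is illustrative:

```python
import math

def initial_dcm(bank, elevation, heading):
    """Build the initial C_b^n from bank and elevation (tilt sensors) and
    heading (compass), via Eqs 173-174 and the matrix of Eq 167."""
    beta = elevation                                    # pitch, Eq 173
    theta = math.asin(math.sin(bank) / math.cos(beta))  # roll, Eq 174
    gamma = heading                                     # yaw from compass
    cb, sb = math.cos(beta), math.sin(beta)
    ct, st = math.cos(theta), math.sin(theta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    # C_n^b (navigation -> body) as in Eq 167
    cnb = [[cb * cg,                 cb * sg,                 -sb     ],
           [-ct * sg + st * sb * cg,  ct * cg + st * sb * sg,  st * cb],
           [ st * sg + ct * sb * cg, -st * cg + ct * sb * sg,  ct * cb]]
    # C_b^n is the transpose, since the matrix is orthogonal
    return [[cnb[j][i] for j in range(3)] for i in range(3)]
```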

5.9.3 Calibration

The simplest method of obtaining the biases of the inertial sensors is to measure the readings from each sensor whilst the vehicle is stationary. These bias values are used to calibrate the IMU. For gyros, the bias is simply the reading from these sensors when the vehicle is stationary. However, the alignment of the inertial unit is required in order to determine the biases on the accelerometers. This is accomplished during the alignment stage, since the “expected” acceleration due to gravity can be determined and hence any anomalies in these values are attributed to bias. Thus the bias on the x-accelerometer is obtained by

b_{fx} = f_x - f_{xT},    (175)

where f_x is the measured acceleration and f_{xT} is the expected acceleration obtained during the alignment stage. The bias is obtained similarly for the remaining accelerometers. For more information and experimental results the reader can see [21].
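The stationary bias estimation described above amounts to simple averaging; a sketch (names illustrative, samples as 3-element lists):

```python
def calibrate_biases(gyro_samples, accel_samples, expected_accel):
    """Stationary calibration: the gyro bias is the mean stationary gyro
    reading; the accelerometer bias (Eq 175) is the mean measured
    acceleration minus the gravity component expected from alignment."""
    n = float(len(gyro_samples))
    gyro_bias = [sum(s[i] for s in gyro_samples) / n for i in range(3)]
    m = float(len(accel_samples))
    accel_mean = [sum(s[i] for s in accel_samples) / m for i in range(3)]
    accel_bias = [accel_mean[i] - expected_accel[i] for i in range(3)]  # Eq 175
    return gyro_bias, accel_bias
```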

5.10 Position, Velocity and Attitude Algorithms

Figure 25 presents the tasks required to work with a full inertial measurement unit. The bank, elevation and heading information is used to calibrate and align the unit while the vehicle is stationary. Once the initial transformation matrix C_b^n(0) is obtained, it is updated at fast rates with the gyroscope information to obtain C_b^n(k). This matrix is used to transform acceleration in the body coordinate frame to the navigation coordinate frame. Then the single and double integration is performed to obtain inertial velocity and position.

Figure 25: Inertial attitude, position and velocity prediction.


6 GPS/INS Integration

Inertial sensors have been used in numerous applications for the past 50 years. This technology, originally developed for military purposes, has started to appear in industrial applications. This has been possible due to the significant reduction in cost of inertial sensors. Low cost, full six degree of freedom inertial systems are currently available for less than US$10,000. Unfortunately this reduction in cost comes with a substantial reduction in quality. Without any aiding, these units can only perform navigation for very short periods of time. The solution to this problem is to aid inertial systems with external information to maintain the error within certain bounds. The most common aiding sensor for outdoor applications has been GPS in all its forms (Autonomous / Differential / RTK). This section presents various navigation architectures that fuse GPS, INS and modelling information in an optimal manner.

6.1 Navigation Architectures for Aided Inertial Navigation Systems

The navigation architecture depends on the types of sensors and models employed. For aided inertial navigation systems, the inertial component can either be an Inertial Measurement Unit (IMU), which only provides the raw acceleration and rotation rate data, or an Inertial Navigation System (INS) providing position, velocity and attitude information. The aiding source can either be considered as a sensor providing raw sensor information, or as a navigation system providing position, velocity and/or attitude information. The principal states of interest which are estimated by the filter, and hence which govern the type of model implemented, are the position, velocity and attitude of the vehicle, or the position, velocity and attitude errors.

The most natural implementation of an aided inertial navigation system is to drive a non-linear filter with the raw acceleration and rotation rate data provided by the IMU, as shown in Figure 26. This implementation is known as a “direct” filter structure. The process model usually represents the kinematic relationship of the vehicle and the states of interest. The state vector is propagated by the model and inertial data. The aiding information can be obtained from a navigation system where observations such as position and velocity are supplied to the system. The estimate would be in the form of the vehicle states.

The disadvantage of such an implementation is that the prediction equations have to be evaluated at each sample of the inertial data. This requires substantial processing due to the high sample rates of IMUs. Another problem is that, in general, high frequency maneuvers are usually filtered in the linear (or linearised) model. The consequence of


Figure 26: The “direct” structure implements a non-linear filter to estimate the position, velocity and attitude of the vehicle. The inertial data is provided by an IMU and the aiding data by a navigation system.

Figure 27: The “indirect feedback” method allows a linear filter implementation and minimises the computational overhead on the filter structure.


Figure 28: The “direct feedback” method overcomes the problem of large observation values being provided to the filter by correcting the INS directly.

this omission is that the filter will unnecessarily attenuate the high frequency information provided by the INS. With this approach the system may not be able to track fast maneuvers. To overcome this, an INS should be employed so that a constant stream of vehicle state information is available external to the filter. To correct any errors, the filter estimates the errors in these states. The inertial information can still be obtained even if no additional information is available. Figure 27 illustrates this implementation, which is known as the “indirect feedback” method. The observation which is delivered to the filter is the “observed error” of the inertial navigation solution, that is, the difference between the inertial navigation solution and the navigation solution provided by the aiding source. In this implementation the inertial high frequency information is fed directly to the output without attenuation, while the Kalman filter provides the low frequency correction to the IMU. Since the observation is the observed error of the inertial navigation solution, and since the filter is estimating the errors in the inertial navigation solution, the process model has to be in the form of an error model of the standard inertial navigation equations. Thus the inertial navigation equations are linearised to form error equations. Since the equations are linearised, the filter implementation takes on a linear form. Although the prediction stage is still implemented, it can run at the same rate as the sampling rate of the INS or at lesser intervals.

The disadvantage of the indirect feedback method is the unbounded error growth of the INS, which causes an unbounded growth in the error observation delivered to the filter. This poses a problem to the linear filter, since only small errors are allowed due to the linearisation process. That is, large drift in the state values from the INS causes large observation errors to be fed into the filter, invalidating the assumptions held by the filter. The optimal implementation is illustrated in Figure 28 and is known as the “direct feedback” method. In this structure the estimated errors are fed back to the INS, thus minimising the growth of the observed error that is delivered as an observation to the filter.

6.2 Linear, Direct Feedback Implementation

This navigation architecture is driven with observation errors formed from the predicted and observed positions and/or velocities. In order to implement the Kalman filter proposed, a model of the propagation of the errors is needed. Equations 154 and 164 described the inertial error equations in the Earth frame for velocity and attitude. The rate of change of the position error can be equated to the velocity error. Thus the error equations are

\delta\dot{p} = \delta v,    (176)
\delta\dot{v} = [f^e\times]\delta\psi + C_b^e \delta f^b,    (177)
\delta\dot{\psi} = -C_b^e \delta\omega_{ib}^b.    (178)

C_b^e transforms vectors from the body frame to the Earth frame. As in the GNSS implementation, the Earth frame is represented by North (N), East (E) and Down (D) vectors, and hence for clarity the transformation matrix is represented as C_b^n. The navigation loop implements a linear Kalman filter. The state vector consists of the error states,

x = [\delta p_N \;\; \delta p_E \;\; \delta p_D \;\; \delta v_N \;\; \delta v_E \;\; \delta v_D \;\; \delta\psi_N \;\; \delta\psi_E \;\; \delta\psi_D]^T.    (179)

The process model is

F = \begin{bmatrix}
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & -f_D & f_E \\
0 & 0 & 0 & 0 & 0 & 0 & f_D & 0 & -f_N \\
0 & 0 & 0 & 0 & 0 & 0 & -f_E & f_N & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}.    (180)

In compact form,

\begin{bmatrix} \delta\dot{r} \\ \delta\dot{v} \\ \delta\dot{\psi} \end{bmatrix} =
\begin{bmatrix} 0 & I & 0 \\ 0 & 0 & [f^n\times] \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} \delta r \\ \delta v \\ \delta\psi \end{bmatrix} +
\begin{bmatrix} I_{3\times 3} & 0 & 0 \\ 0 & C_b^n & 0 \\ 0 & 0 & C_b^n \end{bmatrix}
\begin{bmatrix} 0 \\ \delta f^b \\ \delta\omega_{ib}^b \end{bmatrix},

\dot{x} = F_g x + G w.    (181)

This model can be extended to consider the biases and drifts of the accelerometers and gyros. The gyro error model presented here consists of a first order Markov process with correlation time τ plus white noise ν_g:

\dot{\theta} = -(1/\tau)\theta + \nu_g, \qquad E[\nu_g] = 0, \quad E[\nu_g \nu_g^T] = R_g.    (182)

The matrix T_g that takes into account the time constants for the three gyros is

T_g = \begin{bmatrix} -1/\tau_x & 0 & 0 \\ 0 & -1/\tau_y & 0 \\ 0 & 0 & -1/\tau_z \end{bmatrix}.    (183)

A standard error model for the accelerometers consists of a random constant component plus white noise:

\dot{a} = \nu_a, \qquad E[\nu_a] = 0, \quad E[\nu_a \nu_a^T] = R_a.    (184)

The matrix T_a that models the accelerometer errors is

T_a = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.    (185)

Finally, the augmented model that considers the new states has the following extended matrices:

F = \begin{bmatrix} F_g & \begin{matrix} 0 \\ C_b^n \\ 0 \end{matrix} & \begin{matrix} 0 \\ 0 \\ C_b^n \end{matrix} \\ 0 & T_a & 0 \\ 0 & 0 & T_g \end{bmatrix}, \quad
G = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & C_b^n & 0 & 0 & 0 \\ 0 & 0 & C_b^n & 0 & 0 \\ 0 & 0 & 0 & I & 0 \\ 0 & 0 & 0 & 0 & I \end{bmatrix}, \quad
x = [\delta r \;\; \delta v \;\; \delta\psi \;\; \delta f^b \;\; \delta\omega_{ib}^b]^T,    (186)

where δr, δv and δψ are the errors in position, velocity and angle misalignments, and δf^b and δω_{ib}^b are the errors in the accelerometers and gyros in the body frame.

Note that in this implementation the acceleration inputs are fed directly into the process model and hence there is no control vector, u = 0. The process model F comprises time-varying terms, so in general numerical methods are required to determine the state transition matrix. If, however, F is constant over the sampling interval, then the state transition matrix is simply F(k) = exp(FΔt), where Δt is the sampling time of the inertial unit. In the case of land vehicles, the dynamics are of a much lower frequency than the sampling frequency. Hence over the sampling period F remains constant, and

F(k) = \exp(F\Delta t) = I + F\Delta t + \frac{(F\Delta t)^2}{2!} + \ldots

The discretisation is only taken to the second term, since any extra terms added to the final state transition matrix are of negligible value. Thus the state prediction is

x(k|k-1) = F(k)\, x(k-1|k-1).    (187)

The advantage of using this model is that the implementation is linear and the model is independent of the vehicle dynamics. Initially the inertial sensors are calibrated and all the errors are removed, thus x(1|0) is set to zero. From Equation 187 the state prediction in the next cycle is then also zero, and so forth. Therefore the state prediction is always zero and no correction of the inertial errors occurs during the prediction cycle. That is, the position, velocity and attitude information obtained from the navigation system is simply the data from the INS, since there are no predicted errors to correct it.
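The second-order truncation of the matrix exponential can be coded directly; a sketch for a generic n × n nested-list F (names illustrative):

```python
def discretise(F, dt):
    """State transition matrix F(k) = I + F*dt + (F*dt)^2 / 2!,
    the second-order truncation of exp(F*dt)."""
    n = len(F)
    Fdt = [[F[i][j] * dt for j in range(n)] for i in range(n)]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    Fdt2 = matmul(Fdt, Fdt)
    return [[(1.0 if i == j else 0.0) + Fdt[i][j] + 0.5 * Fdt2[i][j]
             for j in range(n)] for i in range(n)]
```

For the position/velocity coupling of Equation 180 this reproduces the familiar [1, Δt; 0, 1] constant-velocity transition block.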

However, due to the drift in the INS solution there is a corresponding growth in uncertainty in the states, and this is evaluated through the predicted covariance matrix

P(k|k-1) = F(k)\, P(k-1|k-1)\, F^T(k) + Q(k).    (188)

This is a 9 × 9 matrix representing the uncertainty in the inertial predicted errors. Q(k) is the discrete process noise matrix, also of dimension 9 × 9, and is evaluated using [18]

Q(k) = \frac{1}{2}\left[F(k) G(k) Q_c(k) G(k)^T F(k)^T + G(k) Q_c(k) G(k)^T\right]\Delta t.    (189)

Equations 176–178 show how the noise terms (in acceleration and angular velocity) are converted from the body frame to the navigation frame, and hence Q_c(k), which is the uncertainty in the process model over a period of one second, is defined as

Q_c(k) = \begin{bmatrix} \delta p & 0 & 0 \\ 0 & \delta f^b & 0 \\ 0 & 0 & \delta\omega_{ib}^b \end{bmatrix}.    (190)

δp is the noise injected into the position error evaluation and its value is purely dependent on the uncertainty in the evaluation of the position from the integration of velocity. The remaining error terms in this matrix are the noise values on the sensor readings, that is, the errors in the body frame, which can be obtained from manufacturer specifications or through experimentation. G(k) is

G(k) = \begin{bmatrix} I_{3\times 3} & 0 & 0 \\ 0 & C_b^n(k) & 0 \\ 0 & 0 & -C_b^n(k) \end{bmatrix}.    (191)
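Equation 189 can be evaluated with straightforward matrix products; a sketch with nested-list matrices (names illustrative):

```python
def discrete_process_noise(Fk, G, Qc, dt):
    """Discrete process noise of Eq 189:
    Q(k) = 0.5 * [F G Qc G^T F^T + G Qc G^T] * dt."""
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]
    def transpose(A):
        return [list(row) for row in zip(*A)]
    GQG = matmul(matmul(G, Qc), transpose(G))       # G Qc G^T
    FGQGF = matmul(matmul(Fk, GQG), transpose(Fk))  # F G Qc G^T F^T
    n = len(Fk)
    return [[0.5 * (FGQGF[i][j] + GQG[i][j]) * dt for j in range(n)]
            for i in range(n)]
```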

6.3 Aiding in Direct Feedback Configurations

In a direct feedback structure, the model implemented in the filter is a linear error model representing the errors in the vehicle states, generally position, velocity and attitude. When an observation becomes available, the filter estimates the errors in these states. Since the model is an error model of the inertial equations, the observation z(k) is the observed error of the inertial navigation solution and not the observation delivered by the aiding sensor. Thus, if an aiding system provides position and velocity data, the observation vector becomes

z(k) = \begin{bmatrix} z_P(k) \\ z_V(k) \end{bmatrix} = \begin{bmatrix} P_{inertial}(k) - P_{GPS}(k) \\ V_{inertial}(k) - V_{GPS}(k) \end{bmatrix}.    (192)

Figure 29 illustrates the observation structure [18]. The true acceleration, velocity and position of the vehicle have noise added to them to represent measurements taken by the sensors. The acceleration, as measured by the inertial navigation system, is integrated twice to obtain the indicated velocity and position of the vehicle. The acceleration information is obtained from the accelerometers and it is assumed that the acceleration due to gravity has been compensated for.


Figure 29: Illustration of how the observation measurements z_P and z_V are obtained from the inertial and aiding information.

By defining the terms δP(k) and δV(k) as the position and velocity errors in the inertial data after the integration process, the observation model becomes

z(k) = \begin{bmatrix} P_{inertial}(k) - P_{GPS}(k) \\ V_{inertial}(k) - V_{GPS}(k) \end{bmatrix}
     = \begin{bmatrix} (P_T(k) + \delta P(k)) - (P_T(k) - v_P(k)) \\ (V_T(k) + \delta V(k)) - (V_T(k) - v_V(k)) \end{bmatrix}
     = \begin{bmatrix} \delta P(k) \\ \delta V(k) \end{bmatrix} + \begin{bmatrix} v_P(k) \\ v_V(k) \end{bmatrix}.    (193)

The observation is thus the error between the inertial indicated position and velocity and that of the aiding sensor, and the uncertainty in this observation is reflected by the noise of the aiding observation. This offers another benefit of the direct feedback implementation, involving the tuning of the filter: the noise on the observation is the noise on the aiding sensor. Thus, once an inertial unit and process model are fixed, the process noise matrix Q(k) is also fixed, and tuning the filter is based solely on the observation noise


matrix R(k). The estimate of the error states at time k, given all observations up to time k, can then be evaluated using the standard Kalman filter update equations:

x(k|k) = x(k|k − 1) + W(k)ν(k), (194)

where W(k) is a gain matrix produced by the Kalman filter and ν(k) is the innovation vector,

ν(k) = z(k)−H(k)x(k|k − 1). (195)

W(k) = P(k|k − 1)HT (k)S−1(k), (196)

where S(k) is known as the innovation covariance and is obtained by

S(k) = H(k)P(k|k − 1)HT (k) + R(k). (197)

Finally the covariance matrix is updated due to this observation:

P(k|k) = (I - W(k)H(k))\, P(k|k-1)\, (I - W(k)H(k))^T + W(k) R(k) W^T(k).    (198)

However, with this approach, since the prediction of the error states x(k|k − 1) is always zero, the state estimate becomes

x(k|k) = W(k)z(k) (199)

That is, the update is simply a weighted sum of the observations. The observation model H(k) is

H(k) = \begin{bmatrix} I_{3\times 3} & 0 & 0 \\ 0 & I_{3\times 3} & 0 \end{bmatrix}.    (200)

The updated position and velocity errors can now be used to correct the position and velocity of the INS:

P_{inertial}(k|k) = P_{inertial}(k|k-1) - \delta p(k|k),    (201)
V_{inertial}(k|k) = V_{inertial}(k|k-1) - \delta v(k|k).    (202)
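For a single axis the whole direct-feedback update collapses to a few lines, since the error-state prediction is always zero (Equation 199). The sketch below takes H = I and scalar covariances; names are illustrative and this is not the full 9-state filter:

```python
def update_one_axis(p_ins, v_ins, p_gps, v_gps, Pp, Pv, Rp, Rv):
    """Scalar direct-feedback update for one axis (Eqs 192, 196-202)."""
    z_p = p_ins - p_gps          # observed position error, Eq 192
    z_v = v_ins - v_gps          # observed velocity error
    Wp = Pp / (Pp + Rp)          # gain, Eqs 196-197 with H = 1
    Wv = Pv / (Pv + Rv)
    dp = Wp * z_p                # estimated errors, Eq 199
    dv = Wv * z_v
    p_corr = p_ins - dp          # correct the INS, Eq 201
    v_corr = v_ins - dv          # Eq 202
    Pp_new = (1.0 - Wp) * Pp     # scalar covariance update; with the
    Pv_new = (1.0 - Wv) * Pv     # optimal gain, Eq 198 reduces to this
    return p_corr, v_corr, Pp_new, Pv_new
```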

Equation 150 related the true direction cosine matrix to the estimated direction cosine matrix. In the terminology used in this section this is written as

C_b^n(k|k-1) = [I_{3\times 3} - [\delta\psi\times]]\, C_b^n(k|k).    (203)

Rearranging this equation as

C_b^n(k|k) = [I_{3\times 3} - [\delta\psi\times]]^{-1}\, C_b^n(k|k-1),    (204)

provides the update equation for the direction cosine matrix once an estimate of the attitude errors is available. Since the term in the square brackets is orthogonal, its inverse is simply its transpose (A^{-1} = A^T), and furthermore [δψ×] is skew-symmetric ([δψ×]^T = -[δψ×]), hence Equation 204 is written as

C_b^n(k|k) = [I_{3\times 3} + [\delta\psi\times]]\, C_b^n(k|k-1),    (205)

where

[\delta\psi\times] = \begin{bmatrix} 0 & -\delta\psi_D & \delta\psi_E \\ \delta\psi_D & 0 & -\delta\psi_N \\ -\delta\psi_E & \delta\psi_N & 0 \end{bmatrix}.    (206)

Note that [δψ×] is in the navigation frame, yet it is used to correct the C_b^n matrix whose elements are defined in the body frame. It is assumed that the misalignment errors are small and hence the errors about the navigation frame are equal to those about the body frame. When large misalignments are encountered, the linear assumptions held are not valid; [16] deals with such situations.
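A sketch of the attitude correction of Equations 205–206 (nested-list 3 × 3 matrix, illustrative names):

```python
def correct_dcm(Cbn, dpsi_n, dpsi_e, dpsi_d):
    """Apply C_b^n(k|k) = [I + [d_psi x]] C_b^n(k|k-1), with the
    skew-symmetric matrix of Eq 206."""
    skew = [[0.0,     -dpsi_d,  dpsi_e],
            [dpsi_d,   0.0,    -dpsi_n],
            [-dpsi_e,  dpsi_n,  0.0]]
    corr = [[(1.0 if i == j else 0.0) + skew[i][j] for j in range(3)]
            for i in range(3)]
    return [[sum(corr[i][k] * Cbn[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

With zero estimated misalignment the matrix is left unchanged, as expected.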

6.3.1 Real Time Implementation Issues

Filter Consistency and Fault Detection The filter implementations discussed provide various methods of estimating the states of interest. However, unless the true values of those states are known, there is no way of determining whether the filter is computing correct estimates. The only information available from the real world is the observation, and hence the only measure for determining the behaviour of the filter is the difference between the observation and the predicted observation, that is, the innovation (Equations 195 and 225 for the Kalman filter and EKF). The innovation has the property that it must be both unbiased and white, with covariance S(k), if the filter is operating correctly. To determine whether this is the case, the innovation is normalised:

γ = νT (k)S−1(k)ν(k). (207)


If the filter assumptions are correct then the samples of γ are distributed as a χ2 distribution with m degrees of freedom, where m is the dimension of the observation vector. Instead of using Equation 207 only as a check of filter consistency, it can be used as a gating function. When an observation is obtained, Equation 207 is formed, and if the value γ is less than a predefined threshold the observation is accepted. This provides a means of detecting faults within the observation. The threshold value is obtained from standard χ2 tables and is chosen based on the confidence level required. For example, at a 95% confidence level and for an observation vector comprising three position and three velocity components, γ = 12.6.
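As a concrete sketch of the gate (assuming the innovation ν and its covariance S are available as NumPy arrays; the innovation and covariance values below are purely illustrative, and 12.6 is the 95% threshold for six degrees of freedom quoted above):

```python
import numpy as np

def innovation_gate(nu, S, threshold=12.6):
    """Normalised innovation of Equation 207 and the chi-squared
    acceptance test: accept the observation when gamma < threshold."""
    gamma = float(nu.T @ np.linalg.solve(S, nu))  # nu^T S^-1 nu
    return gamma, gamma < threshold

# Six-dimensional observation: three position and three velocity terms.
nu = np.array([0.5, -0.3, 0.2, 0.05, -0.02, 0.01])   # illustrative innovation
S = np.diag([1.0, 1.0, 4.0, 0.01, 0.01, 0.04])       # illustrative covariance
gamma, accepted = innovation_gate(nu, S)
```

In a real implementation the threshold would be read from χ2 tables (or an inverse-CDF routine) for the actual observation dimension and confidence level.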

Detection of Multipath High frequency faults arise when the GNSS signals undergo multipath errors. The reflected signals arrive with a longer time delay, affecting the fix of the standard differential receiver, and with an altered phase, affecting the carrier phase differential fix. Another high frequency fault, although it occurs less often and with less effect, arises when the receiver utilises a different set of satellites to determine the position fix. The accuracy of the fix is dependent on the geometry of the observed satellites, and changes in satellite configuration due to blockages of the satellite view will in turn alter the resulting fix. Both forms of high frequency fault cause abrupt jumps in the position and velocity fixes obtained by the GNSS receiver.

High frequency faults are therefore environment dependent. An open area such as a quarry will be less likely to produce multipath errors than, for example, a container terminal. Consequently, the tuning of the filter which fuses the inertial unit and GNSS data is dependent on the environment. The most common method of rejecting multipath errors relies on the receiver distinguishing between the true signal and the reflected signal; how well the receiver can do this depends on the accuracy of its internal timing procedures. However, these systems cannot reject multipath errors completely, and even with the constant improvement of GNSS technology, high frequency faults such as multipath always need to be factored into the development of the navigation system.

Equation 207 can be used to maintain filter consistency by evaluating a gating function which has the property of being a χ2 distribution. This gating function can be used to determine if the process or observation models are valid or if any observations are spurious, and thus it can potentially determine whether multipath errors have occurred. Due to satellite geometry, the GNSS fix in the vertical plane is significantly less accurate than that in the horizontal plane. Consequently the fix in North and East may lie well within the validation region, whilst that of Down may exceed it and push the result of the gating function beyond the threshold if the same noise values were used for all the terms in the observation noise matrix R(k). However, if these noise terms are accounted for by taking the values from the GNSS receiver directly, then the validation gate will detect multipath errors correctly.

Filter Tuning There are two stages in the filter flow: the prediction stage, where the predicted inertial errors are always zero and the uncertainty grows with time, and the estimation stage, where the estimates of the inertial errors are obtained by a weighted sum of the observations and the uncertainty is bounded. If no observation is obtained for an extended period of time, or equivalently if GNSS fixes are rejected due to errors, the filter will continuously cycle in prediction mode and no corrections to the inertial navigation solution will be made. The longer the duration without correction, the greater the uncertainty in the navigation solution. When a fix does occur, the observation may pass the gating function test even though the fix may be in error, since the uncertainty in the inertial navigation solution is large. This is shown in Figure ??.


As with any Kalman filter implementation, tuning lies in the choice of values for the process Q(k) and observation R(k) covariance matrices. For example, a large Q(k) will imply an inaccurate inertial system. During the prediction stage the uncertainty in the inertial data will grow according to the magnitude of Q(k). When a GNSS fix does occur there is a greater possibility the inertial unit will be corrected using the first available GNSS fix, irrespective of the accuracy of this fix. Likewise, small values in R(k) will imply accurate GNSS fixes, which may pose a problem when the fix is in error and is being fused with low accuracy inertial sensors. Thus tuning becomes a delicate adjustment of both the Q(k) and R(k) matrices along with the employment of the gating function, Equation 207, in order to reject the high frequency faults of the GNSS.

The variances along the diagonal of R(k) are determined simply by obtaining the GDOP values from the receiver and assuming there is no correlation between the fixes of the navigation axes (which is how the GDOP is provided). Determining the values for Q(k) depends on the noise level of the sensors implemented, which can be obtained from the manufacturer or through experimentation. The principal tuning parameters which need to be addressed are the variances of velocity and attitude. A large variance in the velocity terms will imply greater reliance on the GNSS velocity fixes. Furthermore, the greater the accuracy of the velocity estimates, the greater the dampening of the attitude errors. If there are no attitude fixes then only the velocity information can contain the drift in attitude, and how quickly and accurately this happens depends on the variance of the attitude.

Algorithm There are two points of concern in a real time implementation of an inertially aided GNSS navigation system: processing power and latency. Data latency is extremely important, especially with the use of GNSS sensors, where it is common to find position latencies in the order of tens of milliseconds and velocity latencies reaching over a second. The processing power internal to a GNSS sensor is used to control the correlators to lock onto satellites, determine the ephemeris of the satellites, and then compute the position and velocity of the receiver with respect to some coordinate frame. The complexity of these calculations increases with the number of satellites used to determine the solution. Standard pseudorange solutions require the least computational processing; RTK technology requires the most, and if attitude information is required this demands additional processing again. The processing time is usually of no concern to GNSS users who do not require real time information, surveyors for example.

To overcome the latency problem, careful consideration needs to be given to the real time implementation, and when the latency is large a complete redesign of the filter structure may be required. When the latency of the solution is low, the position information can be propagated using velocity data and a constant velocity model, which is quite sufficient for low speed vehicles. Problems arise when the latency of the GNSS solution is high and the vehicle dynamics are fast. A GNSS position solution with a latency of 50 ms will have an error of 0.8 m for a vehicle travelling at 60 km/h. If the vehicle is moving with constant velocity in a straight line then the position can simply be propagated forward. However, any deviation from a straight line will lead to incorrect estimates when the GNSS fix is fused with the current inertial state. What is required is to store the inertial data, process the estimates at the time that the GNSS fix should have occurred, and then propagate the inertial solution through the backlog of data. All is well if both the position and velocity GNSS data have equal latencies. However, if the GNSS receiver obtains velocity and position with independent measurements then alternative methods are required. For example, in RTK systems a Doppler approach is used to obtain velocity while the determination of the phase ambiguity is used for the position measurement. In such systems the latency of the velocity data can sometimes be over 1 s, and hence the position and velocity determination occurs at different points in time. To overcome this, GNSS manufacturers propagate the velocity through the position data so that both the position and velocity output occur at the same time with the same latencies. However, such an approach introduces correlation between the position and velocity data, which is not ideal for navigation systems.
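The constant velocity propagation of a delayed fix can be sketched as follows (a minimal illustration; the fix values are hypothetical, and the 50 ms latency at 60 km/h reproduces the error magnitude quoted above):

```python
import numpy as np

def propagate_delayed_fix(p_fix, v_fix, latency):
    """Propagate a delayed GNSS position fix to the current time
    using a constant velocity model: p_now = p_fix + v_fix * latency."""
    return p_fix + v_fix * latency

# 60 km/h = 16.67 m/s; a 50 ms latency shifts the fix by about 0.83 m.
p_fix = np.array([100.0, 200.0, 0.0])   # delayed North, East, Down fix (m)
v_fix = np.array([16.67, 0.0, 0.0])     # velocity at fix time (m/s)
p_now = propagate_delayed_fix(p_fix, v_fix, latency=0.05)
```

This is only valid for straight-line, constant-velocity motion; as the text notes, for fast dynamics the inertial data must instead be buffered and re-processed from the fix epoch.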

Hardware It was shown before that extremely high sampling of inertial measurement sensors is not required in land field robotics since the maximum rotation and acceleration frequencies are not high (with obvious consideration for vibration). In addition, the power of modern processors is sufficient to handle the data throughput required in these applications.

The navigation functions required for the aided INS strapdown system are usually partitioned between the fast guidance unit and the navigation processor, as shown in Figure 30.

[Block diagram: the INS platform supplies gyro and acceleration data at 200 Hz to a fast guidance unit (prediction), which outputs position and velocity at 200 Hz and passes accelerations, position, velocity and the transformation matrix to the navigation unit (Kalman filter, estimation) at 10 Hz; the DGPS receiver supplies position and velocity at 10 Hz; bias and misalignment, position and velocity corrections are fed back to the guidance unit.]

Figure 30: INS/GPS Navigation architecture


The system can be implemented as two independent processes. The first process is im-plemented in the fast guidance unit. This process communicates with the inertial unitthrough a very fast link. The inertial measurement unit transmits triaxial gyro and accel-eration information at a high frequency rate. The guidance unit integrates the directioncosine matrix and transforms the acceleration to the navigation frame. It also generatesthe predictions for velocity and position through single and double integration respectively.The positions, velocities, accelerations and the transformation matrix are transmitted tothe navigation unit at a much lower rate. Although the vehicle state information is al-ready transformed into the local navigation coordinate frame, the transformation matrixis still required to implement the data fusion algorithm. The navigation unit receivesobserved position and velocity information from the DGPS receiver at much lower rates5-20 Hz. It is responsible for implementing the direct feedback Kalman Filter algorithm.

The navigation algorithm flow can be implemented as follows:

1. Since the vehicle is generally autonomous, all guidance commands are known in advance and hence the navigation filter knows when the vehicle is stationary. Whilst the vehicle is stationary the system reads in all acceleration, rotation rate and tilt sensor data from the inertial unit and averages these sensor readings. The logging time for this average depends on the actual vibration present in the system.

2. Once the averaging process has been completed, calibration of the inertial unit is accomplished as described in Section 5.9. This is used to determine biases and to obtain the initial Cnb matrix. At this stage the GPS position fix is used to determine the initial Cng matrix, Section 4.6.

3. The navigation system then proceeds onto the inertial navigation system with the initial position as determined by the GPS receiver and the initial attitude as obtained from the alignment stage. Initially the GPS sensor may not provide the position of the vehicle until it has reached its most accurate solution, which in the case of RTK systems may take up to a few minutes. The filter is initialised, and the guidance computer is then informed that the navigation system is ready to provide the navigation solution;

4. As the vehicle moves, the acceleration and gyro values are read in and the biases removed, Equation 175. Cnb is updated, Equation 133, and the acceleration in the navigation frame is computed, Equation 136. These values are then integrated to provide the position and heading of the vehicle;

5. If no GPS fix is available then return to Step 4, otherwise determine the GPS fix in the navigation frame, Equations 86 and 87. When the latency is small and the vehicle dynamics are low, the velocity data can be used to propagate the position information using a constant velocity model with reasonable accuracy;

6. Fuse the inertial and GPS data as described in Section 6.2;

7. Use the gating function, Equation 207, to determine if there are any high frequency faults. If high frequency faults exist then only send out the current state values as determined by the inertial system and return to Step 4. If the validation check has passed then correct the inertial unit as shown in Section 6.2 and then return to Step 4. The estimation procedure occurs within the sampling time of the inertial unit; if this were not the case and the latency is low, the corrected velocity would be used to propagate the corrected position value.
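The cycle of steps 4-7 can be illustrated with a deliberately simplified one-dimensional sketch (a hypothetical state [position, velocity] with illustrative Q, R and gate values; this is not the full strapdown mechanisation, only the predict / poll / gate / correct pattern):

```python
import numpy as np

def aided_ins_step(x, P, accel, dt, gps=None, R=None, gate=3.84):
    """One cycle of steps 4-7 for a simplified 1-D vehicle.
    State x = [position, velocity]; 'accel' is the (bias-corrected)
    inertial acceleration; 'gps' is an optional position fix.
    The gate 3.84 is the 95% chi-squared threshold for 1 DOF."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = np.diag([1e-4, 1e-3])                 # illustrative process noise
    # Step 4: inertial prediction (integrate the acceleration).
    x = F @ x + np.array([0.5 * dt**2, dt]) * accel
    P = F @ P @ F.T + Q
    if gps is None:                           # Step 5: no fix available yet
        return x, P
    H = np.array([[1.0, 0.0]])                # observe position only
    nu = gps - H @ x                          # Step 6: innovation
    S = H @ P @ H.T + R
    gamma = float(nu.T @ np.linalg.solve(S, nu))
    if gamma >= gate:                         # Step 7: reject a faulty fix
        return x, P
    W = P @ H.T @ np.linalg.inv(S)
    x = x + W @ nu
    P = (np.eye(2) - W @ H) @ P
    return x, P

x = np.array([0.0, 1.0]); P = np.eye(2) * 0.1
x, P = aided_ins_step(x, P, accel=0.0, dt=0.1)   # predict only (no fix)
x, P = aided_ins_step(x, P, accel=0.0, dt=0.1,
                      gps=np.array([0.21]), R=np.array([[0.01]]))
```

A grossly inconsistent fix fails the gate, in which case only the inertial prediction is returned, exactly as step 7 prescribes.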

6.4 Aiding Using a Vehicle Model

In this section we present a new type of aiding information that does not come from external sensors. This alternative method is based on the application of constraints on the inertial equations if the motion of the vehicle is itself constrained in some way. This approach uses the land vehicle's motion to constrain the error growth of inertial units. The filter structure illustrated in Figure 31 is presented here.

[Block diagram: the IMU supplies acceleration and rotation rates to a non-linear Kalman filter; the vehicle constraints supply velocity observations; the filter outputs position, velocity and attitude estimates.]

Figure 31: Filter structure using vehicle constraints as velocity observations

The general three dimensional inertial equations are derived from Newton's laws and are thus applicable to all forms of motion. The inertial equations on board a train apply equally as well to an aerobatic aircraft. However, the motion of the train is clearly different to that of the aircraft; in particular, the train is constrained to move along the rails. This fact can be used as a "virtual observation", constraining the error growth along the lateral and vertical directions. The motion of a general land vehicle is not much different from that of the train, and hence constrained motion equations can also be used. If the vehicle undergoes excessive slip or movement in the vertical direction off the ground, then extra modelling is required; however, the use of constraints as virtual observations is still valid.

6.4.1 General Three-Dimensional Motion

Let the motion of the vehicle be described by the state equation,

ẋ = f(x, u) (208)

where the vehicle state vector is x = [VnT, PnT, γ, β, θ]T and the measurement vector is u = [fbT, ωbT]T.

Using the kinematic relationship between ωb and the rates of change of the Euler angles, and assuming that the rate of rotation of the earth is negligible, the state equations for vehicle motion can be written as

V̇n = Cnb fb (209)

Ṗn = Vn (210)

γ̇ = (ωy sin θ + ωz cos θ) / cos β (211)

β̇ = ωy cos θ − ωz sin θ (212)

θ̇ = ωx + (ωy sin θ + ωz cos θ) tan β (213)

Equations 209-213 are the fundamental equations that enable the computation of the state x of the vehicle from an initial state x(0) and a series of measurements fb and ωb. The Euler method is implemented here for a better conceptual understanding. It is important to note the following with respect to these equations.

1. These equations are valid for the general motion of a body in three-dimensionalspace.

2. Equations 209-213 represent a set of non-linear differential equations that can easily be solved using a variety of different techniques.

3. It is possible to linearise these equations, for sufficiently small sampling intervals, by incorporating all the elements of the direction cosine matrix Cnb into the state equation.
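A minimal Euler-integration sketch of Equations 211-213 (attitude states only; a full mechanisation would also integrate Equations 209-210, and the step size dt is an assumption of the sketch):

```python
import numpy as np

def euler_attitude_step(gamma, beta, theta, omega_b, dt):
    """One Euler step of Equations 211-213.
    gamma: yaw, beta: pitch, theta: roll; omega_b = (wx, wy, wz) body rates."""
    wx, wy, wz = omega_b
    gamma_dot = (wy * np.sin(theta) + wz * np.cos(theta)) / np.cos(beta)      # Eq. 211
    beta_dot = wy * np.cos(theta) - wz * np.sin(theta)                        # Eq. 212
    theta_dot = wx + (wy * np.sin(theta) + wz * np.cos(theta)) * np.tan(beta) # Eq. 213
    return (gamma + gamma_dot * dt,
            beta + beta_dot * dt,
            theta + theta_dot * dt)

# A pure yaw rate wz on a level vehicle changes only the heading gamma.
g1, b1, t1 = euler_attitude_step(0.0, 0.0, 0.0, (0.0, 0.0, 0.1), dt=0.01)
```

In practice a higher-order integrator (or the direction cosine update of Equation 133) would be preferred; the Euler step is shown only for conceptual clarity, in line with the text.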

Clearly, the rate of error growth can be reduced if the velocity of the vehicle and the Euler angles β and θ can be estimated directly. It will be shown in the next section that this is indeed possible for a vehicle moving on a surface.

6.4.2 Motion of a Vehicle on a Surface

Motion of a wheeled vehicle on a surface is governed by two non-holonomic constraints. When the vehicle does not jump off the ground and does not slide on the ground, the velocity of the vehicle in the plane perpendicular to the forward direction (x-axis) is zero. Under ideal conditions there is no side slip along the direction of the rear axle (y-axis) and no motion normal to the road surface (z-axis), so the constraints are

Vy = 0 (214)

Vz = 0 (215)

In any practical situation these constraints are violated to some degree due to the presence of side slip during cornering and vibrations caused by the engine and suspension system. In particular, the side slip is a function of the vehicle state as well as the interaction between the vehicle tyres and the terrain. A number of models are available for determining side slip, but these models require knowledge of the vehicle, tyre and ground characteristics that is not generally available. Alternatively, information from external sensors can be used to estimate slip on-line. As a first approximation, however, it is possible to model the constraint violations as follows:

Vy − ηy = 0 (216)

Vz − ηz = 0 (217)

where ηy and ηz are Gaussian white noise sources with zero mean and variances σ2y and σ2z respectively. The strength of the noise can be chosen to reflect the extent of the expected constraint violations. Using the following equation that relates the velocities in the body frame Vb = [Vx, Vy, Vz]T to Vn,

Vb = [Cnb]T Vn,


it is possible to write constraint Equations 216 and 217 as a function of the vehicle state x and a noise vector w = [ηy, ηz]T,

[Vy]   =   M   +   [ηy]
[Vz]               [ηz],   (218)

where M is

[ VN(sin θ sin β cos γ + cos θ sin γ) + VE(cos θ cos γ + sin θ sin β sin γ) + VD sin θ cos β ]
[ VN(sin θ sin γ + cos θ sin β cos γ) + VE(cos θ sin β sin γ − sin θ cos γ) + VD cos θ cos β ]   (219)

The navigation frame implemented here is North (N), East (E) and Down (D). It is now required to obtain the best estimate of the state vector x, modelled by the state Equations 209-213, from a series of measurements fb and ωb, subject to the constraint Equation 218.

6.5 Estimation of the Vehicle State Subject to Constraints

The state equation, obtained by the discretisation of Equations 209-213, is

x(k|k − 1) = f(x(k − 1|k − 1),u(k)), (220)

and the discrete time version of the constraint equation, obtained from Equation 218, is

z(k) = H(x(k)) + v(k). (221)

Estimation of the state vector x subject to stochastic constraints can be done in the framework of an extended Kalman filter. It is proposed to treat Equation 221 as an observation equation where the "observation" at each time instant k is in fact identical to zero. The Kalman filter algorithm proceeds recursively in three stages:

Prediction: The algorithm first generates a prediction for the state estimate and the state estimate covariance at time k according to

x(k|k − 1) = F(x(k − 1|k − 1),u(k)), (222)

P(k|k − 1) = ∇Fx(k)P(k − 1|k − 1)∇FTx (k) + Q(k). (223)

The term ∇Fx(k) is the Jacobian of the non-linear state transition function F(k) evaluated at the previous state estimate x(k − 1|k − 1).


Observation: Following the prediction, it is assumed that an observation z(k) that is identical to zero is made. An innovation is then calculated as follows

ν(k) = z(k)− z(k|k − 1) (224)

where z(k) is in fact set to zero. An associated innovation covariance is computed using

S(k) = ∇Hx(k)P(k|k − 1)∇HTx (k) + R(k) (225)

Update: The state estimate and corresponding state estimate covariance are then updated according to

x(k|k) = x(k|k − 1) + W(k)ν(k) (226)

where ν(k) is the nonlinear innovation,

ν(k) = z(k)−H(x(k|k − 1)). (227)

The gain and innovation covariance matrices are

W(k) = P(k|k − 1)∇HTx (k)S−1(k) (228)

where the term ∇Hx(k) is the Jacobian of the observation model evaluated at the predicted state x(k|k − 1). The updated covariance matrix is

P(k|k) = (I − W(k)∇Hx(k))P(k|k − 1)(I − W(k)∇Hx(k))T + W(k)R(k)WT(k). (229)
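The zero-valued virtual observation can be sketched in a reduced planar case (an illustration, not the full filter: state x = [VN, VE, γ] with γ the heading, so the lateral body velocity is Vy = −VN sin γ + VE cos γ, and the constraint noise σy is an assumed value):

```python
import numpy as np

def constraint_update(x, P, sigma_y=0.1):
    """EKF update with the virtual observation z = 0 applied to the
    lateral velocity constraint Vy = 0 (planar sketch, x = [VN, VE, gamma])."""
    VN, VE, g = x
    h = -VN * np.sin(g) + VE * np.cos(g)     # predicted lateral velocity
    H = np.array([[-np.sin(g), np.cos(g), -VN * np.cos(g) - VE * np.sin(g)]])
    nu = 0.0 - h                             # innovation, Equation 227 with z = 0
    S = float(H @ P @ H.T) + sigma_y**2      # innovation covariance, Equation 225
    W = (P @ H.T / S).ravel()                # gain, Equation 228
    x = x + W * nu                           # update, Equation 226
    P = (np.eye(3) - np.outer(W, H)) @ P
    return x, P

x = np.array([1.0, 0.1, 0.0])   # heading North with a 0.1 m/s lateral drift
P = np.eye(3) * 0.01
x, P = constraint_update(x, P)  # pulls the lateral component VE toward zero
```

The same pattern extends to the full nine-state filter of the text, where the observation model is the two-row function of Equation 219.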

6.5.1 Observability of the States

The extended Kalman filter algorithm obtains estimates of the state x. However, not all the state variables in this estimate are observable. For example, inspection of the state and observation equations suggests that estimation of the vehicle position, Pn, requires direct integration and is therefore not observable. Furthermore, if the vehicle moves in a trajectory that does not excite the relevant degrees-of-freedom, the number of observable states may be further reduced. Intuitively, during motion along straight lines the forward velocity is the direct integral of the measured forward acceleration and is therefore not observable. Clearly an analysis is required to establish the conditions for observability. As the state and observation equations are non-linear, this is not straightforward. In this section an alternative formulation of the state equations, one that directly incorporates the non-holonomic constraints, is developed in order to examine this issue.

Consider the motion of a vehicle on a surface as shown in Figure 32.

Figure 32: Motion of a vehicle on a surface. The navigation frame is fixed and the body frame is on the local tangent plane to the surface and is aligned with the kinematic axes of the vehicle.

Assume that the non-holonomic constraints are strictly enforced and therefore the velocity vector V of the vehicle in the navigation frame is aligned with bx. Let s, ṡ and s̈ be the distance measured from some reference location to the current vehicle location along its path, together with its first and second derivatives with respect to time. Therefore,

V = ṡ bx.

Acceleration of the vehicle is given by

f = V̇ = s̈ bx + ṡ ḃx.

As the angular velocity of the coordinate frame b is ωb,

f = s̈ bx + ṡ ωb × bx,

f = s̈ bx + ṡωz by − ṡωy bz.


Components of the acceleration of the vehicle in the body frame b become

f · bx = s̈, (230)

f · by = ṡωz, (231)

f · bz = −ṡωy. (232)

Given that fn = Cnb fb and using this in the above equations,

[fx]   [  V̇f  ]   [ βcγc               βcγs               −βs  ] [ 0 ]
[fy] = [ Vfωz ] + [ −θcγs + θsβsγc      θcγc + θsβsγs      θsβc ] [ 0 ]
[fz]   [−Vfωy ]   [ θsγs + θcβsγc      −θsγc + θcβsγs      θcβc ] [−g ],

where g is the gravitational constant, Vf = ṡ is the speed of the vehicle, and the subscripts s and c denote the sine and cosine of the corresponding angle. Rearranging this, the following three equations relating the vehicle motion to the measurements from the inertial unit can now be obtained,

V̇f − fx + g sin β = 0, (233)

Vf ωz − fy − g sin θ cos β = 0, (234)

Vf ωy + fz + g cos θ cos β = 0. (235)

It is clear from the above equations that,

• when the forward acceleration is zero, the roll (θ) and pitch (β) can be directly computed from the inertial measurements;

• if one of the angular velocities ωy or ωz is not zero, the forward velocity can also be computed directly;

• even when the forward acceleration is non-zero, it is possible to write a differential equation containing only the forward velocity and the inertial measurements by substituting Equations 234 and 235 into 233. Therefore, Vf can be obtained by one integration step involving the inertial measurements; if the constraints are not used, two integration steps are required to obtain velocities. This result is of significant importance: the fact that the forward acceleration is observable makes the forward velocity error growth only a function of the random walk due to the noise present in the observed acceleration;

• it is possible to use Equations 234 and 235 directly to obtain the complete vehicle state without going through the Kalman filter proposed in the previous section. This, however, makes it difficult to incorporate models for constraint violations in the solution. Also, when the constraint violation is significant, such as in off road situations or when cornering at high speed, the white noise model is inadequate; for example, if there is significant side slip, explicit slip modelling may be required.
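The first observation above can be illustrated directly: at rest (Vf = 0, V̇f = 0) Equations 233-235 reduce to fx = g sin β, fy = −g sin θ cos β and fz = −g cos θ cos β, so pitch and roll follow from the accelerometer readings alone (a sketch assuming noise-free, stationary measurements):

```python
import numpy as np

G = 9.81  # gravitational constant (m/s^2)

def attitude_from_accel(fx, fy, fz):
    """Pitch (beta) and roll (theta) from a stationary accelerometer,
    using Equations 233-235 with Vf = 0 and dVf/dt = 0."""
    beta = np.arcsin(fx / G)        # Equation 233: fx = g sin(beta)
    theta = np.arctan2(-fy, -fz)    # Equations 234-235: tan(theta) = fy / fz
    return beta, theta

# Level vehicle: the accelerometer senses only -g on the z-axis.
beta, theta = attitude_from_accel(0.0, 0.0, -G)
```

This is exactly the accelerometer levelling used in the coarse alignment stage; in motion, the full equations (with Vf and the angular rates) must be used instead.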


Figure 33 shows the performance of the algorithm in a large area with the vehicle moving at low speed. It can clearly be seen that the incorporation of the constraints Vy = 0, Vz = 0 results in performance very close to that of GPS/INS. For comparison purposes the integration of the raw data is also presented, showing that a large error is obtained after the first minute of operation.

[Plot: POSITION, East (m) vs North (m), comparing the IMU/GPS, raw data and constrained data trajectories.]

Figure 33: Comparison between integration of raw data, GPS/INS and constraint algorithms

6.5.2 Multiple Aiding of an Inertial System

This section will discuss the aiding of an inertial unit with three forms of external observation: position and velocity derived from GPS, speed from a drive shaft encoder, and the vehicle model constraints. By incorporating all three observations, more information is provided for the correction of errors in the inertial data. Furthermore, the constraint algorithms contain the growth of the attitude error when there are no GPS fixes, thus sustaining the accuracy of the inertial unit. The speed information provided by the encoder data makes the forward velocity observable.

The approach taken is to use a linear information filter with the inertial error model developed as a process model. Thus a direct feedback structure is implemented as shown in Figure 34. This makes the system practically achievable, and also allows easy addition of information from the aiding sensors, especially since they are delivered asynchronously and at different rates. It is shown that the additional sensors significantly improve the quality of the location estimate. This is of fundamental importance since it makes the inertial system less dependent on external information.

[Block diagram: the INS output of position, velocity and attitude is differenced with GNSS position and velocity observations, and with velocity observations from the vehicle constraints and wheel encoder, to form observed errors; the Kalman filter produces estimated errors of position, velocity and attitude which are fed back to correct the INS.]

Figure 34: Direct feedback structure

The inertial error model outlined in Section 6.2 is implemented here using the information form of the Kalman filter. This form is adopted since various types of external information are incorporated asynchronously into the system. Nevertheless, identical results can be obtained with the standard Kalman filter.

Prediction: The prediction stage is implemented using:

y(k|k − 1) = L(k|k − 1)y(k − 1|k − 1) + Y(k|k − 1)B(k)u(k), (236)

where L(k|k − 1) = Y(k|k − 1)F(k)Y−1(k − 1|k − 1), and

Y(k|k − 1) = [F(k)Y−1(k − 1|k − 1)FT (k) + Q(k)]−1. (237)

Observation: When an observation from the aiding sensor or constraint is made, the observation vector generated is the observed error of the inertial system,

z(k) = zinertial(k) − zaiding(k).

At each sampling of the inertial unit a constraint observation is made, that is, Vy = 0 and Vz = 0. At this stage the velocity vector is only partly complete, requiring the speed of the vehicle along the body x-axis, which is obtained from the speed encoder as Vx.


The sampling rate of the speed encoder is 25 Hz. However, constant velocity can safely be assumed between these samples due to the slow dynamics of the vehicle, and hence the observation can be formed at the sampling rate of the inertial unit. This observation, which is in the body frame, is converted to the North (N), East (E), Down (D) frame using Cnb. That is,

zvelocityV(k) = Cnb [Vx(k), Vy(k), Vz(k)]T (238)

             = [Vx(k) cos β cos γ, Vx(k) cos β sin γ, −Vx(k) sin β]T. (239)

Thus the observation vector is

z(k) = zinertialV(k) − zvelocityV(k).

The observation model is given by

HvelocityV = ( 03×3  I3×3  03×3 ) (240)

and the observation covariance matrix is

RvelocityV = diag(σ2x, σ2y, σ2z). (241)

Since the velocity vector is transformed from the body frame to the navigation frame, the observation noise covariance needs to be transformed as well:

Rvelocity = Cnb Rvelocity [Cnb]T. (242)
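Equations 238-239 and 242 can be sketched together; the direction cosine matrix is built here from the yaw-pitch-roll sequence implied by Equation 239 (the Euler-angle convention, speed and noise values are assumptions of the sketch):

```python
import numpy as np

def dcm_nb(gamma, beta, theta):
    """Direction cosine matrix Cnb (body -> navigation) for yaw gamma,
    pitch beta, roll theta, matching the convention of Equation 239."""
    cg, sg = np.cos(gamma), np.sin(gamma)
    cb, sb = np.cos(beta), np.sin(beta)
    ct, st = np.cos(theta), np.sin(theta)
    c_bn = np.array([                        # navigation -> body rotation
        [cb * cg,                 cb * sg,                 -sb],
        [-ct * sg + st * sb * cg, ct * cg + st * sb * sg,  st * cb],
        [st * sg + ct * sb * cg,  -st * cg + ct * sb * sg, ct * cb]])
    return c_bn.T

def velocity_observation(vx, gamma, beta, theta, R_body):
    """Equations 238-239 and 242: transform the encoder/constraint velocity
    observation [vx, 0, 0] and its covariance to the navigation frame."""
    C = dcm_nb(gamma, beta, theta)
    z = C @ np.array([vx, 0.0, 0.0])         # Equations 238-239
    R_nav = C @ R_body @ C.T                 # Equation 242
    return z, R_nav

# 10 m/s forward speed, 0.3 rad yaw, 0.1 rad pitch, level roll.
z, R_nav = velocity_observation(10.0, 0.3, 0.1, 0.0,
                                R_body=np.diag([0.01, 0.04, 0.04]))
```

Because C is orthogonal, the rotation changes the correlation structure of the noise but preserves its total power (the trace of R).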

When the position and velocity are obtained from the GNSS receiver the observation vector is

z(k) = [ zinertialP(k) − zGNSSP(k) ]
       [ zinertialV(k) − zGNSSV(k) ]   (243)

The observation model is

HGNSS(k) = ( I3×3  03×3  03×3 )
           ( 03×3  I3×3  03×3 ),   (244)

Page 109: Navigation

Navigation System Design (KC-4) 104

and the observation noise matrix is

RGNSS(k) = diag(σ2PN, σ2PE, σ2PD, σ2VN, σ2VE, σ2VD) (245)

where the individual variances are obtained from the GDOP values.

Update: Once the observations are formed, the information state vector is generated

i(k) = HT (k)R−1(k)z(k). (246)

The amount of certainty associated with this observation is expressed in the form of the information observation matrix,

I(k) = HT (k)R−1(k)H(k) (247)

The estimate then proceeds according to

y(k|k) = y(k|k − 1) + i(k) (248)

Y(k|k) = Y(k|k − 1) + I(k). (249)

and if there is more than one sensor reporting at this time stamp,

y(k|k) = y(k|k − 1) + Σj ij(k) (250)

Y(k|k) = Y(k|k − 1) + Σj Ij(k), (251)

where the sums run over the j sensors providing information at time k. The algorithms were tested in the very large area shown in Figure 35. In this test we used the velocity constraint and GPS information. The resulting position and velocity errors are shown in Figures ?? and ??. It can be seen that the velocity error tends to zero as expected, and the position error is also controlled by the near-zero velocity error.
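The additive fusion of Equations 246-251 can be sketched as follows (a minimal illustration with hypothetical scalar observation models; in the real system H and R would come from Equations 240-245):

```python
import numpy as np

def information_update(y, Y, observations):
    """Equations 246-251: add the information contributions i_j = H^T R^-1 z
    and I_j = H^T R^-1 H from every sensor reporting at this time step."""
    for H, R, z in observations:
        Rinv = np.linalg.inv(R)
        y = y + H.T @ Rinv @ z            # Equations 246 and 250
        Y = Y + H.T @ Rinv @ H            # Equations 247 and 251
    return y, Y

# Two hypothetical sensors both observing the first state of a 2-state system.
y = np.zeros(2); Y = np.eye(2) * 0.1
H1 = np.array([[1.0, 0.0]]); R1 = np.array([[0.5]]); z1 = np.array([2.0])
H2 = np.array([[1.0, 0.0]]); R2 = np.array([[1.0]]); z2 = np.array([2.2])
y, Y = information_update(y, Y, [(H1, R1, z1), (H2, R2, z2)])
x_hat = np.linalg.solve(Y, y)             # recover the state estimate
```

The additive form is exactly why the information filter suits asynchronous multi-sensor aiding: each sensor's contribution is summed in whenever it arrives, with no other change to the update.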

6.6 Tightly Coupled GPS/INS

The three methods discussed so far are also known as "loosely coupled" implementations since there is no feedback to the aiding sensor/navigation system. If feedback is provided

[Plot: POSITION, East (m) vs North (m).]

Figure 35: Vehicle Path

to the aiding source, a tighter configuration is obtained which in turn improves system integrity. Figure 38 illustrates what is known as a "tightly coupled" configuration. It offers the advantage of robustness and increased performance, since it allows the system designer to delve into the operation and algorithms of both sensors.

The inertial sensor provides raw data to the filter, which usually incorporates a kinematic model of the vehicle. The aiding sensor also provides raw information. The filter estimates the states of the vehicle, and uses these estimates to assist the aiding sensor in obtaining observations. For example, the aiding information can help GNSS with tracking satellites, or assist a scanning radar with pointing and stabilisation.

6.6.1 Advantages and Disadvantages of the Tightly and Loosely Coupled Configurations

The loosely coupled configuration offers the distinct advantage of being highly modular in accuracy and cost. The system designer can implement the model of choice along with the desired INS in whatever navigation structure is preferred. Any aiding sensor can then be added to the navigation system. Although the tightly coupled configuration is more robust than the loosely coupled configuration, it is more expensive to implement and more difficult to develop. Furthermore, if a different aiding sensor is employed, the models and algorithms must change substantially.

The use of GNSS as an aiding system for inertial navigation systems has been the subject


Figure 36: Position error incorporating the velocity constraint and GPS (North and East errors in metres vs. time in seconds; traces: constrained inertial, constrained + GPS)

of study for a number of years. The majority of implementations have been loosely coupled. This is because companies specialising in inertial systems develop INS units with built-in filtering techniques in a loosely coupled configuration, while GNSS companies focus on delivering state-of-the-art GNSS navigation systems. Implementing a tightly coupled configuration requires close collaboration with GNSS companies, since the aiding information from the navigation filter that is fed back to the GNSS sensor assists the satellite tracking algorithms. These algorithms are kept secret, since the speed of satellite reacquisition (the ability to locate and track satellites after they have been lost) is what separates the quality of receivers from different manufacturers, and hence is a strong marketing tool.


Figure 37: Velocity error incorporating the velocity constraint and GPS (North and East errors in m/s vs. time in seconds; traces: constrained inertial, constrained + GPS)

[Figure 38 block diagram: the IMU supplies acceleration and rotation rates, and the external aiding sensor supplies raw observations, to a non-linear filter; the filter returns position, velocity and attitude estimates to the aiding sensor, and outputs position, velocity and attitude estimates.]

Figure 38: The “tightly coupled” configuration treats both inputs as sensors and not as navigation systems. Furthermore, the filter estimates are sent to the aiding sensor and not the inertial sensor. This configuration allows for a more robust system.


7 Simultaneous Localisation and Mapping (SLAM)

Reliable localization is an essential component of any autonomous vehicle system. The basic navigation loop is based on dead reckoning sensors that predict high frequency vehicle manoeuvres and low frequency absolute sensors that bound positioning errors. The problem of localization given a map of the environment, or of estimating the map knowing the vehicle position, has been addressed and solved using a number of different approaches. Section 3 presents a Kalman filter technique to estimate the position of the vehicle based on the known position of artificial landmarks. Although this method can be made very reliable, it has the drawback that it requires modification of the environment through the addition of special infrastructure. In addition, the locations of this infrastructure need to be surveyed.

A related problem arises when both the map and the vehicle position are unknown. In this case the vehicle starts at an unknown location in an unknown environment and proceeds to incrementally build a navigation map of the environment while simultaneously using this map to update its location. In this problem, vehicle and map estimates are highly correlated and cannot be obtained independently of one another. This problem is usually known as Simultaneous Localization and Map Building (SLAM) and was originally introduced in [26]. During the past few years significant progress has been made towards the solution of the SLAM problem [17] [23] [12] [7] [25].

Kalman filter methods can also be extended to perform simultaneous localization and map building. There have been several applications of this technology in a number of different environments, such as indoors, underwater and outdoors. One of the main problems with the SLAM algorithm has been its computational requirements. Although the algorithm is originally of O(N³), the complexity of the SLAM algorithm can be reduced to O(N²), N being the number of landmarks in the map. For long duration missions the number of landmarks will increase and eventually computer resources will not be sufficient to update the map in real time. This N² scaling problem arises because each landmark is correlated to all other landmarks. The correlation appears because the observation of a new landmark is obtained with a sensor mounted on the mobile robot, and thus the landmark location error will be correlated with the error in the vehicle location and the errors in the other landmarks of the map. This correlation is of fundamental importance for the long-term convergence of the algorithm and needs to be maintained for the full duration of the mission. This section presents an introduction to the SLAM problem, a description of its computational complexity, and different approaches that make possible the implementation of SLAM in real time in very large environments.


7.1 Fundamentals of SLAM

The SLAM algorithm addresses the problem of a vehicle with a known kinematic model, starting at an unknown position, moving through an unknown environment populated with artificial or natural features. The objective of SLAM is then to localize the vehicle and at the same time build an incremental navigation map with the observed features. The vehicle is equipped with a sensor capable of taking measurements of the relative location of the features with respect to the vehicle itself. This scenario is shown in Figure 39. To facilitate the introduction of the SLAM equations, a linear model for the vehicle and observation is used.

Figure 39: A vehicle taking relative measurements to environmental landmarks (vehicle state x_v and feature positions p_i in a global reference frame)

7.1.1 Process Model

The state of the system consists of the position and orientation of the vehicle augmented with the positions of the landmarks. Assuming that the state of the vehicle is given by xv(k), the motion of the vehicle through the environment can be modelled by the following equation:

xv(k + 1) = Fv(k)xv(k) + uv(k + 1) + vv(k + 1) (252)

where Fv(k) is the state transition matrix, uv(k) a vector of control inputs, and vv(k) a vector of temporally uncorrelated process noise errors with zero mean and covariance


Qv(k) (see [18] for further details). The location of the ith landmark is denoted pi. SLAM considers all landmarks to be stationary. The “state transition equation” for the ith landmark is

pi(k + 1) = pi(k) = pi ,   (253)

It can be seen that the model for the evolution of the landmarks does not have any uncertainty. Assuming that N landmarks are actually validated and incorporated by the system, the vector of all N landmarks is denoted

$$\mathbf{p} = \begin{bmatrix} \mathbf{p}_1^T & \cdots & \mathbf{p}_N^T \end{bmatrix}^T \qquad (254)$$

The augmented state vector containing both the state of the vehicle and the states of all landmark locations is denoted

$$\mathbf{x}(k) = \begin{bmatrix} \mathbf{x}_v^T(k) & \mathbf{p}_1^T & \cdots & \mathbf{p}_N^T \end{bmatrix}^T. \qquad (255)$$

The augmented state transition model for the complete system may now be written as

$$\begin{bmatrix} \mathbf{x}_v(k+1) \\ \mathbf{p}_1 \\ \vdots \\ \mathbf{p}_N \end{bmatrix} =
\begin{bmatrix} \mathbf{F}_v(k) & 0 & \cdots & 0 \\ 0 & \mathbf{I}_{p_1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{I}_{p_N} \end{bmatrix}
\begin{bmatrix} \mathbf{x}_v(k) \\ \mathbf{p}_1 \\ \vdots \\ \mathbf{p}_N \end{bmatrix}
+ \begin{bmatrix} \mathbf{u}_v(k+1) \\ \mathbf{0}_{p_1} \\ \vdots \\ \mathbf{0}_{p_N} \end{bmatrix}
+ \begin{bmatrix} \mathbf{v}_v(k+1) \\ \mathbf{0}_{p_1} \\ \vdots \\ \mathbf{0}_{p_N} \end{bmatrix} \qquad (256,\ 257)$$

$$\mathbf{x}(k+1) = \mathbf{F}(k)\mathbf{x}(k) + \mathbf{u}(k+1) + \mathbf{v}(k+1) \qquad (258)$$

where I_{p_i} is the dim(p_i) × dim(p_i) identity matrix and 0_{p_i} is the dim(p_i) null vector.

As can be seen from Equation 256, the size of the matrices involved is augmented by n · N, where n is the number of states required to represent a landmark and N is the number of landmarks incorporated into the map. In a large environment this number will tend to grow, and eventually the computer resources will not be sufficient to process the information from the external sensor in real time.
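The growth of the augmented state can be sketched as follows; how the cross-covariance of a new landmark is initialised is problem specific, so the zero cross-covariance in the example call is purely illustrative.

```python
import numpy as np

def augment_with_landmark(x, P, p_new, P_new_block, P_cross):
    """Append one landmark to the SLAM state and covariance.

    x, P        : current augmented state (dim M) and covariance (M x M)
    p_new       : new landmark position estimate (dim n, here n = 2)
    P_new_block : n x n covariance of the new landmark
    P_cross     : M x n cross-covariance with the existing state
    """
    M, n = x.size, p_new.size
    x_aug = np.concatenate([x, p_new])
    P_aug = np.zeros((M + n, M + n))
    P_aug[:M, :M] = P                 # existing vehicle/map block
    P_aug[:M, M:] = P_cross           # correlation with the new landmark
    P_aug[M:, :M] = P_cross.T
    P_aug[M:, M:] = P_new_block
    return x_aug, P_aug

# Vehicle (3 states) plus one existing 2-D landmark; add a second landmark.
x = np.zeros(5)
P = np.eye(5)
x2, P2 = augment_with_landmark(x, P, np.array([10.0, 5.0]),
                               np.eye(2) * 2.0, np.zeros((5, 2)))
```

Each call grows the state by n and the covariance by n rows and columns, which is exactly the n · N growth discussed above.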

7.1.2 The Observation Model

The vehicle is equipped with a sensor that can obtain observations of the relative location of landmarks with respect to the vehicle. Assuming the observation to be linear and


synchronous, the observation model for the ith landmark is written in the form

zi(k) = Hix(k) + wi(k) (259)

= Hpi p − Hv xv(k) + wi(k) (260)

where wi(k) is a vector of temporally uncorrelated observation errors with zero mean and variance Ri(k). The term Hi is the observation matrix that relates the output of the sensor zi(k) to the state vector x(k) when observing the ith landmark. It is important to note that the observation matrix for the ith landmark has the structure

Hi = [−Hv ,0 · · ·0,Hpi ,0 · · ·0] (261)

This structure reflects the fact that the observations are “relative” between the vehicle and the landmark, often in the form of relative location, or relative range and bearing (see Section 4).

7.2 The Estimation Process

In the estimation-theoretic formulation of the SLAM problem, the Kalman filter is used to provide estimates of vehicle and landmark location. We briefly summarise the notation and main stages of this process. The Kalman filter recursively computes estimates for a state x(k) which evolves according to the process model in Equation 256 and which is observed according to the observation model in Equation 259. The Kalman filter computes an estimate which is equivalent to the conditional mean x̂(p|q) = E[x(p) | Z^q] (p ≥ q), where Z^q is the sequence of observations taken up until time q. The error in the estimate is denoted x̃(p|q) = x̂(p|q) − x(p). The Kalman filter also provides a recursive estimate of the covariance P(p|q) = E[x̃(p|q) x̃^T(p|q) | Z^q] of the estimate x̂(p|q). The Kalman filter algorithm proceeds recursively in three stages:

• Prediction: Given that the models described in Equations 256 and 259 hold, and that an estimate x̂(k|k) of the state x(k) at time k together with an estimate of the covariance P(k|k) exist, the algorithm first generates a prediction for the state estimate, the observation (relative to the ith landmark) and the state estimate covariance at time k + 1 according to

x(k + 1|k) = F(k)x(k|k) + u(k) (262)

zi(k + 1|k) = Hi(k)x(k + 1|k) (263)

P(k + 1|k) = F(k)P(k|k)FT (k) + Q(k), (264)

respectively.


• Observation: Following the prediction, an observation zi(k + 1) of the ith landmark of the true state x(k + 1) is made according to Equation 259. Assuming correct landmark association, an innovation is calculated as follows

νi(k + 1) = zi(k + 1)− zi(k + 1|k) (265)

together with an associated innovation covariance matrix given by

Si(k + 1) = Hi(k) P(k + 1|k) Hi^T(k) + Ri(k + 1). (266)

• Update: The state estimate and corresponding state estimate covariance are then updated according to:

x(k + 1|k + 1) = x(k + 1|k) + Wi(k + 1) νi(k + 1) (267)

P(k + 1|k + 1) = P(k + 1|k) − Wi(k + 1) Si(k + 1) Wi^T(k + 1) (268)

where the gain matrix Wi(k + 1) is given by

Wi(k + 1) = P(k + 1|k) Hi^T(k) Si^{-1}(k + 1) (269)

The update of the state estimate covariance matrix is of paramount importance to the SLAM problem. Understanding the structure and evolution of the state covariance matrix is the key component of the solution of the SLAM problem.

7.2.1 Example

Assume a vehicle moving in one dimension and observing relative range to a number of landmarks. We would like to design a filter to track the position and velocity of the vehicle. We can also assume that the position of the vehicle is known with some uncertainty, although this is not relevant for this example. Since no additional absolute information, such as GPS, is available, if one assumes the vehicle is travelling at constant velocity, the estimated uncertainty of its position will grow with time. In addition, due to the initial errors in position and velocity, the difference between the estimated and real position will grow indefinitely.

$$\begin{bmatrix} pos(k+1) \\ vel(k+1) \end{bmatrix} =
\begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} pos(k) \\ vel(k) \end{bmatrix} +
\begin{bmatrix} 0 \\ 1 \end{bmatrix} v(k) \qquad (270)$$

In this example we can assume that two landmarks are already incorporated into the map. The process model is then extended as follows:


$$\begin{bmatrix} pos(k+1) \\ vel(k+1) \\ p_1(k+1) \\ p_2(k+1) \end{bmatrix} =
\begin{bmatrix} 1 & \Delta t & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} pos(k) \\ vel(k) \\ p_1(k) \\ p_2(k) \end{bmatrix} +
\begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} v(k) \qquad (271)$$

Once a landmark is incorporated into the map it will remain part of the state vector. The full augmented system needs to be used each time a prediction or observation is made. In a later section we will see that optimizations are possible to reduce the computational complexity of SLAM. On the other hand, the observation model will be a function of the number of landmarks observed. In the case where two landmarks are observed we have:

$$\begin{bmatrix} z_1(k) \\ z_2(k) \end{bmatrix} =
\begin{bmatrix} 1 & 0 & -1 & 0 \\ 1 & 0 & 0 & -1 \end{bmatrix}
\begin{bmatrix} pos(k) \\ vel(k) \\ p_1(k) \\ p_2(k) \end{bmatrix} +
\begin{bmatrix} 1 \\ 1 \end{bmatrix} w(k) \qquad (272)$$

In this example it was also assumed that both observations were taken with the same sensor, or with another sensor with similar noise characteristics.
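The four-state example above is small enough to run directly. The sketch below (the numeric values are illustrative assumptions) performs one prediction and one update of the system of Equations 271 and 272, following the stages of Equations 262 to 269:

```python
import numpy as np

dt = 0.1
F = np.array([[1, dt, 0, 0],
              [0, 1,  0, 0],
              [0, 0,  1, 0],
              [0, 0,  0, 1]], dtype=float)      # Eq. 271
G = np.array([[0.0], [1.0], [0.0], [0.0]])      # process noise enters velocity
H = np.array([[1, 0, -1, 0],
              [1, 0, 0, -1]], dtype=float)      # Eq. 272: relative ranges
Q = G @ G.T * 0.01                               # process noise covariance
R = np.eye(2) * 0.05                             # observation noise covariance

x = np.array([0.0, 1.0, 5.0, 12.0])              # pos, vel, p1, p2
P = np.diag([1.0, 0.5, 2.0, 2.0])

# Prediction (Eqs. 262 and 264)
x_pred = F @ x
P_pred = F @ P @ F.T + Q

# Observation and update (Eqs. 265 to 269), noise-free simulated measurement
z = H @ np.array([0.12, 1.0, 5.0, 12.0])         # measurement of a "true" state
nu = z - H @ x_pred                              # innovation
S = H @ P_pred @ H.T + R                         # innovation covariance
W = P_pred @ H.T @ np.linalg.inv(S)              # gain
x_upd = x_pred + W @ nu
P_upd = P_pred - W @ S @ W.T
```

Even though only relative ranges are measured, the update reduces both the vehicle position variance and the landmark variances, and the total uncertainty never grows during the update.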

7.3 Fundamental results in SLAM

This section presents three fundamental results in the solution of SLAM. For full proofs of these results the reader is encouraged to see [7].

The state covariance matrix may be written in block form as

$$\mathbf{P}(i|j) = \begin{bmatrix} \mathbf{P}_{vv}(i|j) & \mathbf{P}_{vm}(i|j) \\ \mathbf{P}_{mv}(i|j) & \mathbf{P}_{mm}(i|j) \end{bmatrix}$$

where Pvv(i|j) is the error covariance matrix associated with the vehicle state estimate, Pmm(i|j) is the map covariance matrix associated with the landmark state estimates, and Pvm(i|j) is the cross-covariance matrix between the vehicle and landmark states.

Theorem 1 The determinant of any sub-matrix of the map covariance matrix decreases monotonically as successive observations are made.

The algorithm is initialised using a positive semi-definite (psd) state covariance matrix P(0|0). The matrices Q and Ri are both psd, and consequently the matrices P(k + 1|k), Si(k + 1), Wi(k + 1) Si(k + 1) Wi^T(k + 1) and P(k + 1|k + 1) are all psd. From Equation 268, and for any landmark i,

det P(k + 1|k + 1) = det(P(k + 1|k) − Wi(k + 1) Si(k + 1) Wi^T(k + 1)) ≤ det P(k + 1|k) (273)

The determinant of the state covariance matrix is a measure of the volume of the uncertainty ellipsoid associated with the state estimate. Equation 273 states that the total uncertainty of the state estimate does not increase during an update.
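This monotonic behaviour is easy to verify numerically. The toy system below (dimensions and noise values are illustrative assumptions) repeatedly applies the update of Equation 268 with a relative observation model shaped like Equation 261, and records the determinant of the map block after each update:

```python
import numpy as np

# Toy stationary system: one vehicle coordinate and two landmarks,
# observed repeatedly through relative (vehicle-to-landmark) measurements.
H = np.array([[-1.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0]])      # relative observations, as in Eq. 261
R = np.eye(2) * 0.1
P = np.diag([1.0, 2.0, 2.0])          # vehicle, landmark 1, landmark 2

dets = []
for _ in range(50):
    S = H @ P @ H.T + R
    W = P @ H.T @ np.linalg.inv(S)
    P = P - W @ S @ W.T               # Eq. 268 (no process noise)
    dets.append(np.linalg.det(P[1:, 1:]))   # det of the map block P_mm
```

The recorded determinants decrease monotonically, and after many updates the map determinant becomes very small, illustrating the growing correlation between the landmark estimates.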

Theorem 2 In the limit the landmark estimates become fully correlated

As the number of observations taken tends to infinity, a lower limit on the map covariance will be reached such that

lim_{k→∞} Pmm(k + 1 | k + 1) = Pmm(k | k) (274)

Also, in the limit the determinant of the covariance matrix of a map containing more than one landmark tends to zero:

lim_{k→∞} det Pmm(k | k) = 0 (275)

This result implies that the landmarks become progressively more correlated as successive observations are made. In the limit, given the exact location of one landmark, the locations of all other landmarks can be deduced with absolute certainty, and the map is fully correlated.

Theorem 3 In the limit, the lower bound on the covariance matrix associated with any single landmark estimate is determined only by the initial covariance in the vehicle estimate P0v at the time of the first sighting of the first landmark.

When the process noise is not zero, the two competing effects of loss of information content due to process noise and increase in information content through observations determine the limiting covariance. It is important to note that the limit on the covariance applies because all the landmarks are observed and initialised solely from observations made from the vehicle. The covariances of the landmark estimates cannot be further reduced by making additional observations of previously unknown landmarks. However, incorporating external information, for example an observation of a landmark whose location is available through external means such as GPS, will reduce the limiting covariance.

In summary, the three theorems presented above describe, in full, the convergence properties of the map and its steady state behaviour. As the vehicle progresses through the environment, the total uncertainty of the estimates of landmark locations reduces monotonically to the point where the map of relative locations is known with absolute precision. In the limit, errors in the estimates of any pair of landmarks become fully correlated. This means that given the exact location of any one landmark, the location of any other landmark in the map can also be determined with absolute certainty. As the map converges in the above manner, the error in the absolute location estimate of every landmark (and thus the whole map) reaches a lower bound determined only by the error that existed when the first observation was made.

Thus a solution to the general SLAM problem exists, and it is indeed possible to construct a perfectly accurate map describing the relative locations of landmarks and simultaneously compute vehicle position estimates without any prior knowledge of landmark or vehicle locations.

7.4 Non-linear Models

In general, the models that predict the trajectory of the vehicle and the models that relate the observations to the states are non-linear. SLAM can still be formulated, but requires the linearization of these models. In this case the Jacobians of the process and observation models are used to propagate the covariances. In this section we present a more realistic model of a standard outdoor vehicle.

Assume a vehicle equipped with dead reckoning capabilities and an external sensor capable of measuring the relative distance between the vehicle and the environment, as shown in Figure 40. The steering control α and the speed υc are used with the kinematic model to predict the position of the vehicle. The external sensor returns range and bearing information to the different features Bi (i = 1..n). This information is obtained with respect to the vehicle coordinates (xl, yl), that is z(k) = (r, β), where r is the distance from the beacon to the range sensor and β is the sensor bearing measured with respect to the vehicle coordinate frame.

Considering that the vehicle is controlled through a demanded velocity υc and steering angle α, the process model that predicts the trajectory of the centre of the back axle is given by

$$\begin{bmatrix} \dot{x}_c \\ \dot{y}_c \\ \dot{\phi}_c \end{bmatrix} =
\begin{bmatrix} v_c \cos\phi \\ v_c \sin\phi \\ \frac{v_c}{L} \tan\alpha \end{bmatrix} + \gamma \qquad (276)$$

where L is the distance between the wheel axles, as shown in Figure 41. To simplify the equations in the update stage, the kinematic model of the vehicle is designed to represent the trajectory of the centre of the laser.

Figure 40: Vehicle Coordinate System

Based on Figures 40 and 41, the translation from the centre of the back axle can be given as

PL = PC + a · T_φ + b · T_{φ+π/2} (277)

PL and PC are the positions of the laser and the centre of the back axle in global coordinates, respectively. The transformation is defined by the orientation angle, according to the following vectorial expression:

T_φ = (cos(φ), sin(φ)) (278)

The scalar representation is

xL = xc + a · cos (φ) + b · cos (φ + π/2)

yL = yc + a · sin (φ) + b · sin (φ + π/2)


Figure 41: Kinematic parameters

Finally the full state representation can be written

$$\begin{bmatrix} \dot{x}_L \\ \dot{y}_L \\ \dot{\phi}_L \end{bmatrix} =
\begin{bmatrix} v_c \cos\phi - \frac{v_c}{L}\left(a \sin\phi + b \cos\phi\right)\tan\alpha \\
v_c \sin\phi + \frac{v_c}{L}\left(a \cos\phi - b \sin\phi\right)\tan\alpha \\
\frac{v_c}{L}\tan\alpha \end{bmatrix} + \gamma \qquad (279)$$

The velocity υc is measured with an encoder located on the back left wheel. This velocity is translated to the centre of the axle with the following equation:

$$v_c = \frac{v_e}{1 - \tan(\alpha)\, H/L} \qquad (280)$$

where for this car H = 0.75 m, L = 2.83 m, b = 0.5 m and a = L + 0.95 m. Finally, the discrete model in global coordinates can be approximated with the following set of equations:

$$\begin{bmatrix} x(k) \\ y(k) \\ \phi(k) \end{bmatrix} =
\begin{bmatrix} x(k-1) \\ y(k-1) \\ \phi(k-1) \end{bmatrix} + \Delta T
\begin{bmatrix} v_c(k-1)\cos\phi(k-1) - \frac{v_c(k-1)}{L}\left(a\sin\phi(k-1) + b\cos\phi(k-1)\right)\tan\alpha(k-1) \\
v_c(k-1)\sin\phi(k-1) + \frac{v_c(k-1)}{L}\left(a\cos\phi(k-1) - b\sin\phi(k-1)\right)\tan\alpha(k-1) \\
\frac{v_c(k-1)}{L}\tan\alpha(k-1) \end{bmatrix} + \gamma \qquad (281)$$

where ΔT is the sampling time, which in this case is not constant.
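Equation 281, together with the velocity translation of Equation 280, transcribes directly into code. The function below is a sketch using the parameter values quoted in the text; it is not the author's implementation.

```python
import numpy as np

# Vehicle geometry from the text
H_OFF = 0.75          # encoder wheel offset H (m)
L = 2.83              # wheel base (m)
B = 0.5               # lateral laser offset b (m)
A = L + 0.95          # longitudinal laser offset a (m)

def predict(x, y, phi, v_e, alpha, dt):
    """One discrete prediction step of the laser-centred kinematic model."""
    v_c = v_e / (1.0 - np.tan(alpha) * H_OFF / L)      # Eq. 280
    t = np.tan(alpha)
    x_new = x + dt * (v_c * np.cos(phi)
                      - (v_c / L) * (A * np.sin(phi) + B * np.cos(phi)) * t)
    y_new = y + dt * (v_c * np.sin(phi)
                      + (v_c / L) * (A * np.cos(phi) - B * np.sin(phi)) * t)
    phi_new = phi + dt * (v_c / L) * t                  # Eq. 281
    return x_new, y_new, phi_new
```

With zero steering the model reduces to straight-line motion of the laser centre; with a positive steering angle the heading rate is v_c tan(α)/L, as in the continuous model of Equation 279.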

The observation equation relating the vehicle states to the observations is


$$z = h(X, x_i, y_i) = \begin{bmatrix} z_r^i \\ z_\beta^i \end{bmatrix} =
\begin{bmatrix} \sqrt{(x_i - x_L)^2 + (y_i - y_L)^2} \\
\operatorname{atan}\!\left(\dfrac{y_i - y_L}{x_i - x_L}\right) - \phi_L + \pi/2 \end{bmatrix} + \gamma_h \qquad (282)$$

where z is the observation vector, (x_i, y_i) are the coordinates of the ith landmark, x_L, y_L and φ_L are the vehicle states defined at the external sensor location, and γ_h is the sensor noise.

The complete non-linear model can be expressed in general form as:

$$\begin{aligned} X(k+1) &= F\left(X(k),\, u(k) + \gamma_u(k)\right) + \gamma_f(k) \\ z(k) &= h\left(X(k)\right) + \gamma_h(k) \end{aligned} \qquad (283)$$

The effect of the input signal noise is approximated by a linear representation

$$\begin{aligned} F\left(X(k),\, u(k) + \gamma_u(k)\right) + \gamma_f(k) &\cong F\left(X(k), u(k)\right) + \gamma(k) \\ \gamma(k) &= J_u \cdot \gamma_u(k) + \gamma_f(k) \\ J_u &= \left.\frac{\partial F}{\partial u}\right|_{X=X(k),\, u=u(k)} \end{aligned} \qquad (284)$$

The matrix noise characteristics are assumed zero mean and white:

$$\begin{aligned} &E[\gamma_f(k)] = E[\gamma_u(k)] = E[\gamma_h(k)] = 0 \\
&E[\gamma_f(i)\,\gamma_f^T(j)] = \delta_{i,j} \cdot Q_f(i) \\
&E[\gamma_u(i)\,\gamma_u^T(j)] = \delta_{i,j} \cdot Q_u(i) \\
&E[\gamma_h(i)\,\gamma_h^T(j)] = \delta_{i,j} \cdot R(i) \\
&E[\gamma_f(i)\,\gamma_h^T(j)] = 0 \\
&\delta_{i,j} = \begin{cases} 0 & i \neq j \\ 1 & i = j \end{cases} \\
&E[\gamma(i)\,\gamma^T(j)] = \delta_{i,j}\left(J_u \cdot Q_u(i) \cdot J_u^T + Q_f(i)\right) = \delta_{i,j} \cdot Q(i) \end{aligned} \qquad (285)$$

An Extended Kalman Filter (EKF) observer based on the process and output models can be formulated in two stages: a prediction stage and an update stage. The prediction stage is required to obtain the predicted value of the states X and their error covariance P at time k based on the information available up to time k − 1,

$$\begin{aligned} X(k+1, k) &= F\left(X(k, k),\, u(k)\right) \\ P(k+1, k) &= J \cdot P(k, k) \cdot J^T + Q(k) \end{aligned} \qquad (286)$$

The update stage is a function of the observation model and the covariances:


$$\begin{aligned} S(k+1) &= H \cdot P(k+1, k) \cdot H^T(k+1) + R(k+1) \\
W(k+1) &= P(k+1, k) \cdot H^T(k+1) \cdot S^{-1}(k+1) \\
\vartheta(k+1) &= Z(k+1) - h\left(X(k+1, k)\right) \\
X(k+1, k+1) &= X(k+1, k) + W(k+1) \cdot \vartheta(k+1) \\
P(k+1, k+1) &= P(k+1, k) - W(k+1) \cdot S(k+1) \cdot W^T(k+1) \end{aligned} \qquad (287)$$

where

$$J = J(k) = \left.\frac{\partial F}{\partial X}\right|_{(X,u)=(X(k),u(k))}, \qquad H = H(k) = \left.\frac{\partial h}{\partial X}\right|_{X=X(k)} \qquad (288)$$

are the Jacobian matrices of the vector functions F(X, u) and h(X) with respect to the state X, and R is the covariance matrix characterizing the noise in the observations.

Under the SLAM framework the system will detect new features at the beginning of the mission and when exploring new areas. Once these features become reliable and stable they are incorporated into the map, becoming part of the state vector. The state vector is then given by:

$$X = \begin{bmatrix} X_L \\ X_I \end{bmatrix}, \qquad X_L = (x_L,\, y_L,\, \phi_L)^T \in \mathbb{R}^3, \qquad X_I = (x_1,\, y_1,\, \ldots,\, x_N,\, y_N)^T \in \mathbb{R}^{2N} \qquad (289)$$

where (x, y, φ)_L are the states of the vehicle and (x, y)_i the features incorporated into the map, respectively. Since the environment is considered to be static, the dynamic model that includes the new states becomes:

$$\begin{aligned} X_L(k+1) &= f\left(X_L(k)\right) + \gamma \\ X_I(k+1) &= X_I(k) \end{aligned} \qquad (290)$$

It is important to remark that the landmarks are assumed to be static. The Jacobian matrix for the extended system is then

$$\frac{\partial F}{\partial X} = \begin{bmatrix} \frac{\partial f}{\partial X_L} & \emptyset \\ \emptyset & I \end{bmatrix} = \begin{bmatrix} J_1 & \emptyset \\ \emptyset & I \end{bmatrix}, \qquad J_1 \in \mathbb{R}^{3\times3},\ \emptyset \in \mathbb{R}^{3\times2N},\ I \in \mathbb{R}^{2N\times2N} \qquad (291)$$

The observations z_r and z_β are obtained from a range and bearing sensor, relative to the vehicle position and orientation. The observation equation given in Equation 282 is a function of the states of the vehicle and the states representing the position of the landmark. The Jacobian matrix of the vector h with respect to the variables (x_L, y_L, φ_L, x_i, y_i) can be evaluated using:

$$\frac{\partial h}{\partial X} = \begin{bmatrix} \frac{\partial z_r}{\partial X} \\[4pt] \frac{\partial z_\beta}{\partial X} \end{bmatrix} =
\begin{bmatrix} \dfrac{\partial r_i}{\partial (x_L, y_L, \phi_L, x_j, y_j)_{j=1..N}} \\[8pt] \dfrac{\partial \beta_i}{\partial (x_L, y_L, \phi_L, x_j, y_j)_{j=1..N}} \end{bmatrix} \qquad (292)$$

This Jacobian will always have a large number of null elements, since only a few landmarks will be observed and validated at a given time. For example, when only one feature is observed the Jacobian has the following form:

$$\begin{bmatrix} \frac{\partial z_r}{\partial X} \\[4pt] \frac{\partial z_\beta}{\partial X} \end{bmatrix} =
\begin{bmatrix} \frac{\Delta x}{\Delta} & \frac{\Delta y}{\Delta} & 0 & 0 \cdots 0 & -\frac{\Delta x}{\Delta} & -\frac{\Delta y}{\Delta} & 0 \cdots 0 \\[4pt]
-\frac{\Delta y}{\Delta^2} & \frac{\Delta x}{\Delta^2} & -1 & 0 \cdots 0 & \frac{\Delta y}{\Delta^2} & -\frac{\Delta x}{\Delta^2} & 0 \cdots 0 \end{bmatrix} \qquad (293)$$

where Δx = (x_L − x_i), Δy = (y_L − y_i) and Δ = √((Δx)² + (Δy)²).

These models can then be used with a standard EKF algorithm to build and maintain anavigation map of the environment and to track the position of the vehicle.
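A convenient sanity check is to compare the analytic Jacobian of Equation 293 with a numerical derivative of the observation model of Equation 282. The sketch below is illustrative; it uses atan2 rather than atan so the bearing is well defined in all quadrants.

```python
import numpy as np

def h(X, i):
    """Range-bearing observation of landmark i (Eq. 282), noise free."""
    xL, yL, phiL = X[0], X[1], X[2]
    xi, yi = X[3 + 2 * i], X[4 + 2 * i]
    r = np.hypot(xi - xL, yi - yL)
    beta = np.arctan2(yi - yL, xi - xL) - phiL + np.pi / 2
    return np.array([r, beta])

def H_jacobian(X, i, N):
    """Analytic sparse Jacobian for landmark i (Eq. 293)."""
    xL, yL = X[0], X[1]
    xi, yi = X[3 + 2 * i], X[4 + 2 * i]
    dx, dy = xL - xi, yL - yi
    d = np.hypot(dx, dy)
    Hm = np.zeros((2, 3 + 2 * N))
    Hm[:, 0:3] = [[dx / d, dy / d, 0.0],
                  [-dy / d**2, dx / d**2, -1.0]]
    Hm[:, 3 + 2 * i:5 + 2 * i] = [[-dx / d, -dy / d],
                                  [dy / d**2, -dx / d**2]]
    return Hm

# Compare with a central-difference Jacobian at an arbitrary state (N = 2).
X = np.array([1.0, 2.0, 0.3, 6.0, 5.0, -2.0, 4.0])
Ha = H_jacobian(X, 0, 2)
Hn = np.zeros_like(Ha)
eps = 1e-7
for j in range(X.size):
    dX = np.zeros_like(X)
    dX[j] = eps
    Hn[:, j] = (h(X + dX, 0) - h(X - dX, 0)) / (2 * eps)
```

Note that the columns corresponding to unobserved landmarks are identically zero, which is the sparsity exploited in the next section.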

7.5 Optimization of SLAM

Under the SLAM framework the size of the state vector is equal to the number of vehicle states plus twice the number of landmarks, that is, M = 2N + 3. This is valid when working with point landmarks in 2-D environments. In most SLAM applications the number of vehicle states will be insignificant with respect to the number of landmarks. The number of landmarks grows with the area of operation, making the standard filter computation impracticable for on-line applications. In this section we present a series of optimizations in the prediction and update stages that reduce the complexity of the SLAM algorithm from O(M³) to O(M²). Then a compressed filter is presented to reduce the real-time computational requirement to O(2N_a²), N_a being the number of landmarks in the local area. This also makes the SLAM algorithm extremely efficient when the vehicle remains navigating in this area, since the computational complexity becomes independent of the size of the global map. These algorithms do not make any approximations and the results are exactly equivalent to a full SLAM implementation.

7.5.1 Standard Algorithm Optimization

Prediction Stage Considering the zeros in the Jacobian matrix of Equation 291, the prediction Equation 286 can be written:


$$P^+ = J \cdot P \cdot J^T + Q = \begin{bmatrix} J_1 & \emptyset \\ \emptyset^T & I \end{bmatrix}
\begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}
\begin{bmatrix} J_1^T & \emptyset^T \\ \emptyset & I \end{bmatrix} +
\begin{bmatrix} Q_V & \emptyset \\ \emptyset & \emptyset_2 \end{bmatrix} \qquad (294)$$

$$J_1 \in \mathbb{R}^{3\times3},\ \emptyset \in \mathbb{R}^{3\times2N},\ I \in \mathbb{R}^{2N\times2N},\ P_{11} \in \mathbb{R}^{3\times3},\ P_{12} \in \mathbb{R}^{3\times2N},\ P_{21} = P_{12}^T,\ P_{22} \in \mathbb{R}^{2N\times2N}$$

The time sub-indexes are not used in this explanation, for clarity of presentation. Performing the matrix operations explicitly, the following result is obtained:

$$J \cdot P = \begin{bmatrix} J_1 & \emptyset \\ \emptyset^T & I \end{bmatrix}
\begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} =
\begin{bmatrix} J_1 P_{11} & J_1 P_{12} \\ P_{21} & P_{22} \end{bmatrix}$$

$$J \cdot P \cdot J^T = \begin{bmatrix} J_1 P_{11} & J_1 P_{12} \\ P_{21} & P_{22} \end{bmatrix}
\begin{bmatrix} J_1^T & \emptyset^T \\ \emptyset & I \end{bmatrix} =
\begin{bmatrix} J_1 P_{11} J_1^T & J_1 P_{12} \\ (J_1 P_{12})^T & P_{22} \end{bmatrix} \qquad (295)$$

It can be proved that the evaluation of this matrix requires only approximately 9M multiplications. In general, more than one prediction step is executed between two update steps. This is due to the fact that the prediction stage is usually driven by high frequency sensory information that acts as input to the dynamic model of the vehicle and needs to be evaluated in order to control the vehicle. The low frequency external sensors report the observations used in the estimation stage of the EKF. This information is processed at a much lower frequency. For example, the steering angle and wheel speed can be sampled every 20 milliseconds, while the laser frames are obtained with a sample time of 200 milliseconds. In this case we have a ratio of approximately 10 prediction steps to one update step. The compact form for n prediction steps without an update is

$$P(k+n, k) = \begin{bmatrix} P_{11}(k+n, k) & G_1 \cdot P_{12}(k, k) \\ \left(G_1 \cdot P_{12}(k, k)\right)^T & P_{22}(k, k) \end{bmatrix} \qquad (296)$$

where

$$G_1 = G_1(k, n) = \prod_{i=0}^{n-1} J_1(k+i) = J_1(k+n-1) \cdots J_1(k) \qquad (297)$$


Update Stage Since only a few features associated with the state vector are observed at a given time, the Jacobian matrix H will have a large number of zeros. When only one feature is incorporated into the observation vector we have:

$$\begin{aligned} H = H(k) &= \left.\frac{\partial h}{\partial X}\right|_{X=X(k)} = \left[H_1,\ \emptyset_1,\ H_2,\ \emptyset_2\right] \in \mathbb{R}^{2\times M}, \quad M = 2N + 3 \\
H_1 &= \left.\frac{\partial h}{\partial X_L}\right|_{X=X(k)} = \left.\frac{\partial h}{\partial (x_L, y_L, \phi_L)}\right|_{X=X(k)} \in \mathbb{R}^{2\times3} \\
H_2 &= \left.\frac{\partial h}{\partial X_i}\right|_{X=X(k)} = \left.\frac{\partial h}{\partial (x_i, y_i)}\right|_{X=X(k)} \in \mathbb{R}^{2\times2} \\
\emptyset_1,\ \emptyset_2 &= \text{null matrices} \quad \left(\frac{\partial h}{\partial X_j} = \emptyset \ \ \forall j \neq i\right) \end{aligned} \qquad (298)$$

At a given time k, the Kalman gain matrix W requires the evaluation of P · H^T:

$$P \cdot H^T = P_1 \cdot H_1^T + P_2 \cdot H_2^T, \qquad P_1 \in \mathbb{R}^{M\times3},\ P_2 \in \mathbb{R}^{M\times2}$$

It can be proved that this evaluation requires 10M multiplications. Using the previous result, the matrices S and W can be evaluated with a cost of approximately 20M:

$$S = H \cdot P \cdot H^T + R \in \mathbb{R}^{2\times2}, \qquad W = P \cdot H^T \cdot S^{-1} \in \mathbb{R}^{M\times2} \qquad (299)$$

The cost of the state update operation is proportional to M. The main computational requirement is in the evaluation of the covariance update, where the complexity is O(M²).
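The savings come from multiplying only the non-zero blocks of H. The sketch below uses illustrative dimensions and random values; it reproduces, rather than replaces, the dense result.

```python
import numpy as np

rng = np.random.default_rng(1)
N, i = 6, 2                            # N landmarks, landmark i observed
M = 3 + 2 * N
A = rng.normal(size=(M, M))
P = A @ A.T                            # a valid (psd) covariance
H1 = rng.normal(size=(2, 3))           # dh/dX_L  (Eq. 298)
H2 = rng.normal(size=(2, 2))           # dh/dX_i  (Eq. 298)
R = np.eye(2) * 0.1

# Full (dense) H, as in Eq. 298
H = np.zeros((2, M))
H[:, 0:3] = H1
H[:, 3 + 2 * i:5 + 2 * i] = H2

# Sparse evaluation: P H^T = P1 H1^T + P2 H2^T, cost proportional to M
P1 = P[:, 0:3]
P2 = P[:, 3 + 2 * i:5 + 2 * i]
PHt = P1 @ H1.T + P2 @ H2.T
S = H1 @ PHt[0:3] + H2 @ PHt[3 + 2 * i:5 + 2 * i] + R   # 2 x 2
W = PHt @ np.linalg.inv(S)             # gain, as in Eq. 299
```

The sparse products touch only 5 of the M columns of P when forming S, which is where the 10M and 20M multiplication counts quoted above come from.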

Experimental results The SLAM algorithm presented was tested in an outdoor environment with a standard utility vehicle retrofitted with dead reckoning sensors and a laser range sensor, as shown in Figure 42. In this application the most common relevant features in the environment were trees. The profiles of trees were extracted from the laser information. A Kalman filter was also implemented to reduce the errors due to the different profiles obtained when observing the trunk of a tree from different locations. The vehicle was started at a location with known uncertainty and driven in this area for approximately 20 minutes. Figure 43 presents the vehicle trajectory and the navigation landmarks incorporated into the relative map. This run includes all the features in the environment and the optimisation presented. The system built a map of the environment and localized itself. The accuracy of this map is determined by the initial vehicle position uncertainty and the quality of the combination of dead reckoning and external sensors. In this experimental run an initial uncertainty in coordinates x and y was assumed. Figure 44 presents the estimated error of the vehicle position and of selected landmarks. The states corresponding to the vehicle present oscillatory behaviour, displaying the maximum deviation farthest from the initial position. This result is expected, since there is no absolute information incorporated into the process. The only way this uncertainty can be reduced is by incorporating additional information not correlated to the vehicle position, such as GPS position information or recognizing a beacon located at a known position. It can also be appreciated that the covariances of all the landmarks decrease with time. This means that the map is learned with more accuracy while the vehicle navigates. The theoretical limit uncertainty, in the case of no additional absolute information, is the original uncertainty in the vehicle location. Figure 45 presents the final estimation of the landmarks in the map. It can be seen that after 20 minutes the estimated errors of all the landmarks are below 60 cm.

Figure 42: Utility car used for the experiments. The vehicle is equipped with a Sick laser range and bearing sensor, a linear variable differential transformer sensor for the steering, and back wheel velocity encoders.

7.5.2 Compressed Filter

In this section we demonstrate that it is not necessary to perform a full SLAM update when working in a local area. This is a fundamental result because it reduces the computational requirement of the SLAM algorithm to the order of the number of features in the vicinity of the vehicle, independent of the size of the global map. A common scenario is to have a mobile robot moving in an area and observing features within this area. This situation is shown in Figure 46, where the vehicle is operating in a local area A. The rest of the map is part of the global area B. This approach also presents significant advantages when the vehicle navigates for long periods of time in a local area or when the


Figure 43: Vehicle trajectory and landmarks. The '*' marks show the estimated positions of objects that qualified as landmarks for the navigation system. The dots are laser returns that are not stable enough to qualify as landmarks. The solid line shows the 20-minute vehicle trajectory estimated using full SLAM.

[Plot for Figure 44: deviation (metres) of Xv, Yv and (x, y) of landmarks [3][5][10][30][45] versus sub-samples.]

Figure 44: History of selected states' estimated errors. The vehicle states show oscillatory behaviour, with an error magnitude that decreases with time due to the learning of the environment. The landmarks always present an exponentially decreasing estimated error, with a limit given by the initial uncertainty of the vehicle position.


Figure 45: Final estimated error of all states. For each state the final estimated error is presented. The maximum error is approximately 60 cm.

external information is available at high rate. Although high-frequency external sensors are desirable to reduce position error growth, they also introduce a high computational cost in the SLAM algorithm. For example, a laser sensor can return 2-D information at frequencies of 4 to 30 Hz. Incorporating this information using the full SLAM algorithm would require updating M states at 30 Hz. In this work we show that while working in a local area, observing local landmarks, we can preserve all the information by processing a SLAM algorithm of the order of the number of landmarks in the local area. When the vehicle departs from this area, the information acquired can be propagated to the global landmarks without loss of information. This also allows incorporating high-frequency external information at very low computational cost. Another important implication is that the global map need not be updated sequentially at the same rate as the local map.

Update step  Consider the state vector divided into two groups:

X = \begin{bmatrix} X_A \\ X_B \end{bmatrix}, \quad X_A \in \mathbb{R}^{2N_A+3}, \quad X_B \in \mathbb{R}^{2N_B}, \quad X \in \mathbb{R}^{2N+3}, \quad N = N_A + N_B \qquad (300)


Figure 46: Local and global areas.

The states X_A can be initially selected as all the states representing landmarks in an area of a certain size surrounding the vehicle. The states representing the vehicle pose are also included in X_A. Assume that for a period of time the observations obtained are only related to the states X_A and do not involve states of X_B, that is

h(X) = h(X_A) \qquad (301)

Then at a given time k

H = \left.\frac{\partial h}{\partial X}\right|_{X=X(k)} = \left.\frac{\partial h}{\partial (X_A, X_B)}\right|_{X=X(k)} = \begin{bmatrix} \frac{\partial h}{\partial X_A} & \frac{\partial h}{\partial X_B} \end{bmatrix} = \begin{bmatrix} H_a & 0 \end{bmatrix} \qquad (302)

Considering the zeros of the matrix H, the Kalman gain matrix W is evaluated as follows:

P = \begin{bmatrix} P_{aa} & P_{ab} \\ P_{ba} & P_{bb} \end{bmatrix}, \quad P \cdot H^T = \begin{bmatrix} P_{aa} \cdot H_a^T \\ P_{ba} \cdot H_a^T \end{bmatrix}

H \cdot P \cdot H^T = H_a \cdot P_{aa} \cdot H_a^T, \quad S = H_a \cdot P_{aa} \cdot H_a^T + R

W = P \cdot H^T \cdot S^{-1} = \begin{bmatrix} P_{aa} \cdot H_a^T \cdot S^{-1} \\ P_{ba} \cdot H_a^T \cdot S^{-1} \end{bmatrix} = \begin{bmatrix} W_a \\ W_b \end{bmatrix} \qquad (303)

From these equations it is possible to see that

1. The Jacobian matrix Ha has no dependence on the states XB.


2. The innovation covariance matrix S and Kalman gain W_a are functions of P_{aa} and H_a. They have no dependence on P_{bb}, P_{ab}, P_{ba} or X_B.

The update term dP of the covariance matrix can then be evaluated

dP = W \cdot S \cdot W^T = \begin{bmatrix} P_{aa} \cdot \kappa \cdot P_{aa} & \xi \cdot P_{ab} \\ (\xi \cdot P_{ab})^T & P_{ba} \cdot \kappa \cdot P_{ab} \end{bmatrix} \qquad (304)

with \kappa = H_a^T \cdot S^{-1} \cdot H_a and \xi = P_{aa} \cdot \kappa. In the previous demonstration the time subindexes were omitted for clarity of presentation. These indexes are now incorporated to present the recursive update equations. The covariance matrix after one update is

P(k+1, k+1) = P(k+1, k) - dP(k+1, k)

P_{aa}(k+1, k+1) = P_{aa}(k+1, k) - P_{aa}(k+1, k) \cdot \kappa(k) \cdot P_{aa}(k+1, k)

P_{ab}(k+1, k+1) = P_{ab}(k+1, k) - \xi(k) \cdot P_{ab}(k+1, k) = (I - \xi(k)) \cdot P_{ab}(k+1, k)

P_{bb}(k+1, k+1) = P_{bb}(k+1, k) - P_{ba}(k+1, k) \cdot \kappa(k) \cdot P_{ab}(k+1, k) \qquad (305)
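As a numerical sanity check on the equations above, the following sketch (hypothetical dimensions and random matrices, not the experimental system) compares one full EKF covariance update with H = [H_a 0] against the compressed update that touches only P_aa, P_ab and P_bb:

```python
import numpy as np

rng = np.random.default_rng(0)
na, nb, m = 4, 6, 2                       # local states, global states, observation size

# Random joint covariance P = [[Paa, Pab], [Pba, Pbb]], positive definite
A = rng.standard_normal((na + nb, na + nb))
P = A @ A.T + (na + nb) * np.eye(na + nb)
Paa, Pab, Pbb = P[:na, :na], P[:na, na:], P[na:, na:]

Ha = rng.standard_normal((m, na))         # observation Jacobian involves only X_A
R = np.eye(m)                             # observation noise covariance

# Full update with H = [Ha 0]
H = np.hstack([Ha, np.zeros((m, nb))])
S_full = H @ P @ H.T + R
W = P @ H.T @ np.linalg.inv(S_full)
P_new = P - W @ S_full @ W.T

# Compressed update: S, kappa and xi depend only on Paa and Ha
S = Ha @ Paa @ Ha.T + R
kappa = Ha.T @ np.linalg.inv(S) @ Ha
xi = Paa @ kappa
Paa_new = Paa - Paa @ kappa @ Paa
Pab_new = (np.eye(na) - xi) @ Pab
Pbb_new = Pbb - Pab.T @ kappa @ Pab        # Pba = Pab^T
```

The three compressed blocks coincide exactly with the corresponding blocks of the full update, which is the content of Equation (304).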

And the covariance variation after t consecutive updates:

P_{ab}(k+t, k+t) = \Phi(k+t-1) \cdot P_{ab}(k, k)

P_{bb}(k+t, k+t) = P_{bb}(k, k) - P_{ba}(k, k) \cdot \psi(k+t-1) \cdot P_{ab}(k, k) \qquad (306)

with

\Phi(k+t) = (I - \xi(k+t)) \cdot (I - \xi(k+t-1)) \cdots (I - \xi(k)) = \prod_{i=k}^{k+t} (I - \xi(i))

\psi(k+t) = \sum_{i=k}^{k+t} \Phi^T(i-1) \cdot \kappa(i) \cdot \Phi(i-1)

\Phi(k-1) = I, \quad \psi(k-1) = 0 \qquad (307)

The matrices \Phi(k) and \psi(k) can be evaluated recursively according to:

\Phi(k+t) = (I - \xi(k+t)) \cdot \Phi(k+t-1)

\psi(k+t) = \psi(k+t-1) + \Phi^T(k+t-1) \cdot \kappa(k+t) \cdot \Phi(k+t-1) \qquad (308)
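The deferral expressed by Equations (306)-(308) can also be checked numerically. The sketch below (random \kappa(i) and \xi(i) matrices, purely illustrative) applies t sequential updates of P_ab and P_bb and compares the result with the one-shot update through the accumulated \Phi and \psi:

```python
import numpy as np

rng = np.random.default_rng(1)
na, nb, t = 3, 5, 4
Pab0 = rng.standard_normal((na, nb))
Pbb0 = rng.standard_normal((nb, nb)); Pbb0 = Pbb0 + Pbb0.T
kappas = [k + k.T for k in (rng.standard_normal((na, na)) for _ in range(t))]
xis = [0.1 * rng.standard_normal((na, na)) for _ in range(t)]

# Sequential form: apply the per-step update at every observation
Pab, Pbb = Pab0.copy(), Pbb0.copy()
for xi, kap in zip(xis, kappas):
    Pbb = Pbb - Pab.T @ kap @ Pab          # uses Pba = Pab^T before this step's update
    Pab = (np.eye(na) - xi) @ Pab

# Deferred form: accumulate Phi and psi recursively, then update once
Phi, psi = np.eye(na), np.zeros((na, na))
for xi, kap in zip(xis, kappas):
    psi = psi + Phi.T @ kap @ Phi          # uses Phi from *before* this step
    Phi = (np.eye(na) - xi) @ Phi
Pab_deferred = Phi @ Pab0
Pbb_deferred = Pbb0 - Pab0.T @ psi @ Pab0
```

Both forms agree, which is why P_ab and P_bb need not be touched while the vehicle remains in the local area.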


with \Phi(k), \psi(k), \kappa(k), \xi(k) \in \mathbb{R}^{2N_a \times 2N_a}. During long-term navigation missions, the number of states in X_a will in general be much smaller than the total number of states in the global map, that is N_a \ll N_b < M. The matrices \xi(k) and \kappa(k) are sparse, and the calculation of \Phi(k) and \psi(k) has complexity O(N_a^2). It is noteworthy that X_b, P_{ab}, P_{ba} and P_{bb} are not needed when the vehicle is navigating in a local region 'looking' only at the states X_a. They are only required when the vehicle enters a new region. The evaluation of X_b, P_{bb}, P_{ab} and P_{ba} can then be done in one iteration at full SLAM computational cost using the compressed expressions. The estimates X_b can be updated after t update steps using

X_b(k+t, k+t) = X_b(k+t, k) - P_{ba}(k, k) \cdot \theta(k+t) \qquad (309)

with \theta(k+t) = \sum_{i=k}^{k+t-1} \Phi^T(i-1) \cdot H_a^T(i) \cdot S^{-1}(i) \cdot \vartheta(i), where \vartheta(i) is the innovation and m is the number of observations (in this case range and bearing); \theta(k) \in \mathbb{R}^{2N_a}, Z(k) \in \mathbb{R}^m, \Phi(k) \in \mathbb{R}^{2N_a \times 2N_a}, H_a(k) \in \mathbb{R}^{m \times 2N_a} and S(k) \in \mathbb{R}^{m \times m}. Similarly, since H_a is a sparse matrix, the evaluation cost of \theta is proportional to N_a. The derivation of these equations is presented in [12].

Extended Kalman Filter formulation for the compressed filter  In order to maintain the information gathered in a local area it is necessary to extend the EKF formulation presented in Equations (286) and (287). The following terms must be added in the prediction and update stages of the filter to be able to propagate the information to the global map once a full update is required:

Prediction step:
\Phi(k) = J_{aa}(k, k-1) \cdot \Phi(k-1), \quad \psi(k) = \psi(k-1), \quad \Phi(0) = I, \quad \psi(0) = 0

Update step:
\Phi(k) = (I - \xi(k)) \cdot \Phi(k-1), \quad \psi(k) = \psi(k-1) + \Phi^T(k-1) \cdot \kappa(k) \cdot \Phi(k-1) \qquad (310)

When a full update is required, the global covariance matrix P and state X are updated with Equations (306) and (309) respectively. At this time the matrices \Phi and \psi are also reset to their initial values of time 0, since the next observations will be a function of a new set of landmarks.

Map Management  It has been shown that while the vehicle operates in a local area, all the information gathered can be maintained with a cost complexity proportional to the number of landmarks in this area. The next problem to address is the selection of local areas. One convenient approach consists of dividing the global map into rectangular
regions with size at least equal to the range of the external sensor. The map management method is presented in Figure 47. When the vehicle navigates in region r, the compressed filter includes in the group X_A the vehicle states and all the states of landmarks that belong to region r and its eight neighbouring regions. This implies that the local states belong to nine regions, each of size of the range of the external sensor. The vehicle is able to navigate inside this area using the compressed filter. A full update is only required when the vehicle leaves the central region r. Every time the

Figure 47: Map management for the compressed algorithm. The local area is composed of nine squares with side length approximately equal to the range of the sensor. The vehicle always remains within the central region r, inside the threshold area.

vehicle moves to a new region, the active state group X_A changes to those states that belong to the new region r and its adjacent regions. The active group always includes the vehicle states. In addition to the swapping of the X_A states, a global update is also required, at full SLAM algorithm cost. Each region has a list of landmarks that are known to be within its boundaries. Each time a new landmark is detected, the region that owns it appends an index of the landmark definition to its list of owned landmarks. It is not critical if the landmark belongs to this region or to a closely connected region. In case of strong updates, where the estimated positions of the landmarks change significantly, the owners of those landmarks can also be changed. A hysteresis region is included bounding the local area r to avoid multiple map switching when the vehicle navigates close to the boundaries between the region r and the surrounding areas. If the side length of the regions is smaller than the range of the external sensor, or if the hysteresis region is made
too large, there is a chance of observing landmarks outside the defined local area. These observations will be discarded, since they cannot be associated with any local landmarks. In such a case the resulting filter will not be optimal, since this information is not incorporated into the estimates. Although these marginal landmarks would not contribute significant information, since they are far from the vehicle, this situation can easily be avoided with an appropriate selection of the size of the regions and the hysteresis band. Figure 47 presents an example of the application of this approach. The vehicle is navigating in the central region r; if it never leaves this region, the filter maintains its position and the local map at the cost of a SLAM over the number of features in the local area formed by the nine neighbouring regions. A full SLAM update is only required when the vehicle leaves the region.
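The region bookkeeping described above can be sketched as follows (helper names, grid indexing and the hysteresis test are illustrative assumptions, not the authors' implementation):

```python
# Index of the square region of side `side` that contains the vehicle position.
def region_of(x, y, side):
    return (int(x // side), int(y // side))

# The active set X_A draws landmarks from the central region r and its 8 neighbours.
def active_regions(r):
    rc, rr = r
    return {(rc + dc, rr + dr) for dc in (-1, 0, 1) for dr in (-1, 0, 1)}

# Hysteresis: trigger the full update / region swap only once the vehicle leaves
# an enlarged copy of the central region, avoiding map switching near boundaries.
def needs_switch(x, y, r, side, band):
    cx, cy = (r[0] + 0.5) * side, (r[1] + 0.5) * side
    return abs(x - cx) > side / 2 + band or abs(y - cy) > side / 2 + band
```

With 40 m regions, for instance, a vehicle at (50, 50) sits in region (1, 1) and keeps the same nine active regions until it crosses the hysteresis band.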

Computational Cost  The total computational requirement of this algorithm is O(N_a^2), and the cost of the full update when the vehicle leaves the local area is O(N_a N_b^2).

Provided that the vehicle remains for a period of time in a given area, the computational saving will be considerable. This has important implications, since in many applications it will allow the exact implementation of SLAM in very large areas, given an appropriate selection of local areas. The system evaluates the location of the vehicle and the landmarks of the local map continuously at the cost of a local SLAM. Although a full update is required at a transition, this update can be implemented as a parallel task. The only states that need to be fully updated immediately are the new states in the new local area. A selective update can then be applied only to those states, while the full update for the rest of the map runs as a background task with lower priority. These results are important since they demonstrate that even in very large areas the computational limitation of SLAM can be overcome with the compression algorithm and an appropriate selection of local areas.

Experimental Results  The compressed algorithm was implemented using local square regions of 40 × 40 metres. These regions are appropriate for the Sick laser range sensor used in this experiment. Figure 48 shows part of the trajectory of the vehicle with the local area composed of the nine squares surrounding the vehicle. To demonstrate that the compressed algorithm maintains and propagates all the information obtained, the histories of the covariances of the landmarks were compared with the ones obtained with the full SLAM algorithm. Figure 49 shows the time evolution of the standard deviation of a few landmarks. The dotted line corresponds to the compressed filter and the solid line to full SLAM. It can be seen that the estimated errors of some landmarks are not continuously updated with the compressed filter. These landmarks are not in the local area. Once the vehicle makes a transition, the system updates all the landmarks, performing a full SLAM update. At this time the landmarks outside the local area are updated in one iteration and their estimated errors become exactly equal to those of full SLAM. This is clearly
shown in Figure 50, where at the full-update time stamps both estimated covariances become identical. Figure 51 shows the difference between the full SLAM and compressed filter estimated landmark covariances. It can be seen that at the full-update time stamps the difference between the estimates of the two algorithms becomes zero, as demonstrated before. This shows that while working in a local area it is possible to maintain all the information gathered with a computational cost proportional to the number of landmarks in the local area. This information can then be propagated to the rest of the landmarks in the map without any loss of information.

[Plot for Figure 48: longitude (metres) versus latitude (metres).]

Figure 48: Vehicle and local areas. This plot presents the estimated trajectory and the estimated positions of the navigation landmarks. It also shows the local region r with its surrounding regions. The local states X_A are the ones included in the nine regions enclosed by the rectangle in the bottom left corner of the figure.

7.5.3 Sub-Optimal SLAM

In this section we present a series of simplifications that can further reduce the computational complexity of SLAM. This sub-optimal approach reduces the computational requirements by considering only a subset of the navigation landmarks present in the global map. It is demonstrated that this approach is conservative and consistent, and can generate close to optimal results when combined with the appropriate relative map representation. Most of the computational requirements of the EKF arise during the update of the error covariance matrix. Once an observation is validated and associated to


Figure 49: Landmark estimated position error for full SLAM and the compressed filter. The solid line shows the estimated error provided by the full SLAM algorithm, which updates all the landmarks with each observation. The dotted line shows the estimated error provided by the compressed filter. The landmarks that are not in the local area are only updated when the vehicle leaves it. At this time a full update is performed and the estimated error becomes exactly equal to that of full SLAM.


[Plot for Figure 50: standard deviation versus time, comparing the compressed filter and full SLAM around a full update.]

Figure 50: Landmark estimated position error for full SLAM and the compressed filter (detail). This plot presents an enlarged view of the instant when the compressed algorithm performs a full update. At time 165 the full SLAM (solid line) and the compressed algorithm (solid line with dots) report the same estimated error, as predicted.

[Plot for Figure 51: covariance difference (×10⁻³) versus time, with the full-update instants marked.]

Figure 51: Estimated error differences between full SLAM and the compressed filter. The estimated error difference between the two algorithms becomes identically zero when the full update is performed by the compressed algorithm.


a given landmark, the covariance error matrix of the states is updated according to

P = P - \Delta P, \quad \Delta P = W \cdot S \cdot W^T \qquad (311)

The time subindexes are omitted where possible to simplify the equations. The state vector can be divided into two groups, the Preserved "P" and the Discarded "D" states:

X = \begin{bmatrix} X_P \\ X_D \end{bmatrix}, \quad X_P \in \mathbb{R}^{N_P}, \quad X_D \in \mathbb{R}^{N_D}, \quad X \in \mathbb{R}^{N}, \quad N = N_P + N_D \qquad (312)

With this partition it is possible to generate conservative estimates by updating the statesXD but not updating the covariance and cross-covariance matrices corresponding to thissub-vector. The covariance matrix can then be written in the following form:

P = \begin{bmatrix} P_{PP} & P_{PD} \\ P_{PD}^T & P_{DD} \end{bmatrix}, \quad \Delta P = \begin{bmatrix} \Delta P_{PP} & \Delta P_{PD} \\ \Delta P_{PD}^T & \Delta P_{DD} \end{bmatrix} = W \cdot S \cdot W^T \qquad (313)

Conservative updates are obtained if the nominal update matrix \Delta P is replaced by the sub-optimal \Delta P^*:

\Delta P^* = \begin{bmatrix} \Delta P_{PP} & \Delta P_{PD} \\ \Delta P_{DP} & 0 \end{bmatrix} = \Delta P - \begin{bmatrix} 0 & 0 \\ 0 & \Delta P_{DD} \end{bmatrix}

P^* = P - \Delta P^* = P - \Delta P + \begin{bmatrix} 0 & 0 \\ 0 & \Delta P_{DD} \end{bmatrix}

It can be shown that this simplification generates consistent error covariance estimates. Demonstration: the covariance error matrix P^*(k+1) can be rewritten as follows:

P^*(k+1) = P(k) - \Delta P^* = P(k) - \Delta P + \delta \qquad (314)

where

\Delta P^* = \begin{bmatrix} \Delta P_{PP} & \Delta P_{PD} \\ \Delta P_{DP} & 0 \end{bmatrix} = \Delta P - \delta

\Delta P = \begin{bmatrix} \Delta P_{PP} & \Delta P_{PD} \\ \Delta P_{DP} & \Delta P_{DD} \end{bmatrix} \ge 0, \quad \delta = \begin{bmatrix} 0 & 0 \\ 0 & \Delta P_{DD} \end{bmatrix} \ge 0 \qquad (315)

The matrices \Delta P and \delta are positive semi-definite since:


\Delta P = \begin{bmatrix} \Delta P_{PP} & \Delta P_{PD} \\ \Delta P_{PD}^T & \Delta P_{DD} \end{bmatrix} = W \cdot S \cdot W^T \ge 0, \quad \Delta P_{DD} = W_D \cdot S \cdot W_D^T \ge 0 \qquad (316)

As given in Equation (314), the total update is formed by the optimal update plus an additional positive semi-definite matrix \delta. The matrix \delta increases the covariance uncertainty:

P^*(k+1) = P(k+1) + \delta \qquad (317)

hence the sub-optimal estimate P^* is more conservative than the full update:

P(k+1) \le P^*(k+1) \le P(k) \qquad (318)

Finally, the sub-matrices that need to be evaluated are P_{PP}, P_{PD} and P_{DP}. The significance of this result is that P_{DD} is not evaluated. In general this matrix will be of high order, since it includes most of the landmarks. The fundamental problem then becomes the selection of the partition P and D of the state vector. The diagonal of the matrix \Delta P can be evaluated on-line with low computational cost. Inspecting the diagonal elements of \Delta P shows that many terms are very small compared with the corresponding previous covariance value in the matrix P. This indicates that the new observation does not contribute significant information to this particular state, which is an indication to select it as belonging to the subset D. The other criterion used is based on the value of the actual covariance of the state: if it is below a given threshold, the state is a candidate for the sub-vector D. In many practical situations a large number of landmarks can usually be assigned to the sub-vector D. This introduces significant computational savings, since P_{DD} can potentially become much larger than P_{PP}. The cross-correlations P_{PD} and P_{DP} are still maintained, but are in general of lower order, as can be appreciated in Figure 52.
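The conservativeness argument can be illustrated numerically. The sketch below (random low-rank update, illustrative dimensions) zeroes the Discarded/Discarded block of \Delta P and checks that the resulting P^* dominates the optimal P(k+1) by a positive semi-definite difference:

```python
import numpy as np

rng = np.random.default_rng(2)
n, nd = 8, 5                          # total states; last nd states are Discarded
A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)           # prior covariance, positive definite

W = rng.standard_normal((n, 2))       # Kalman gain for a 2-dimensional observation
S = np.eye(2)                         # innovation covariance
dP = W @ S @ W.T                      # optimal decrement (Eq. 311)

dP_sub = dP.copy()
dP_sub[-nd:, -nd:] = 0.0              # sub-optimal decrement: skip the DD block

P_opt = P - dP                        # optimal P(k+1)
P_sub = P - dP_sub                    # conservative estimate P*
delta = P_sub - P_opt                 # = blockdiag(0, dP_DD), PSD by construction
```

All eigenvalues of `delta` are non-negative, so P^* \ge P(k+1) element-wise in the matrix-inequality sense, exactly the property claimed in the demonstration.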

Finally, the selection criterion for the partition of the state vector can be given as the union of the following sets I_i:

I_1 = \{ i : \Delta P(i,i) < c_1 \cdot P(i,i) \}, \quad I_2 = \{ i : P(i,i) < c_2 \}, \quad I = I_1 \cup I_2 \qquad (319)

Then \Delta P^* is evaluated as follows:

\Delta P^*(i,j) = 0 \quad \forall i,j : i \in I \text{ and } j \in I

\Delta P^*(i,j) = \Delta P(i,j) \quad \forall i,j : i \notin I \text{ or } j \notin I \qquad (320)



Figure 52: Full covariance matrix divided into the covariance blocks corresponding to the vehicle and Preserved landmark states (X_P) and the Discarded landmark states (X_D). The cross-correlation covariances between the Preserved and Discarded states are fully updated, as shown in grey. The cross-correlations between the elements of the "Discarded landmarks" states are not updated, as shown in white.

The error covariance matrix is updated with the simplified matrix \Delta P^*:

P^*(k+1, k+1) = P(k+1, k) - \Delta P^* \qquad (321)

The practical meaning of the set I_1 is that with an appropriate selection of c_1 we can reject negligible covariance updates. As mentioned before, the selection of I_1 requires the evaluation of the diagonal elements of the matrix \Delta P. The evaluation of the \Delta P(i,i) elements requires a number of operations proportional to the number of states, instead of the quadratic cost required for the evaluation of the complete matrix \Delta P. The second subset, defined by I_2, is related to the states whose covariances are small enough to be considered practically zero. In the case of natural landmarks, these become almost equivalent to beacons at known positions. The number of elements in the set I_2 will increase with time and can eventually make the computational requirements of SLAM algorithms comparable to standard beacon localisation algorithms. Finally, the magnitude of the computational saving depends on the size of the set I. With appropriate exploration policies and real-time mission planning, the computational requirements can be maintained within the bounds of the on-board resources.
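The partition test of Equation (319) uses only diagonal terms and can be sketched as follows (function name and threshold values are illustrative assumptions):

```python
import numpy as np

def discarded_set(dP_diag, P_diag, c1=0.01, c2=1e-4):
    """Indices for the Discarded sub-vector D: states whose update is negligible
    relative to their current covariance (I1), or that are already known almost
    perfectly (I2)."""
    I1 = dP_diag < c1 * P_diag
    I2 = P_diag < c2
    return np.flatnonzero(I1 | I2)
```

Entries of \Delta P with both indices in the returned set are then zeroed, as in Equation (320), while all the other entries keep the full update.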

Implementation Issues: Relative Map Representation  The sub-optimal approach presented becomes less conservative when the cross-correlations between the non-relevant landmarks become small. This is very unlikely if an absolute reference frame is used, that is, when the vehicle, landmarks and observations are represented with respect to
a single reference frame. The cross-correlations between landmarks of different regions can be substantially decreased by using a number of different bases and making the observations relative to those bases. With this representation the map becomes grouped into constellations. Each constellation has an associated frame based on two landmarks that belong to it. The 'base' landmarks that define the associated frame are represented in a global frame. All the other landmarks that belong to the constellation are defined in the local frame. For a particular constellation, the local frame is based on the two base landmarks

L_a = \begin{bmatrix} x_a \\ y_a \end{bmatrix}, \quad L_b = \begin{bmatrix} x_b \\ y_b \end{bmatrix} \qquad (322)

It is possible to define two unit vectors that describe the orientation of the base frame:

v_1 = \frac{1}{\|L_b - L_a\|} \cdot (L_b - L_a) = \frac{1}{\sqrt{(x_b-x_a)^2 + (y_b-y_a)^2}} \cdot \begin{bmatrix} x_b - x_a \\ y_b - y_a \end{bmatrix} = \begin{bmatrix} v_{11} \\ v_{12} \end{bmatrix}

v_2 = \begin{bmatrix} v_{21} \\ v_{22} \end{bmatrix} = \begin{bmatrix} -v_{12} \\ v_{11} \end{bmatrix}, \quad \langle v_2, v_1 \rangle = 0 \qquad (323)

The rest of the landmarks in this particular constellation are represented using a local frame with origin at L_a and axes parallel to the vectors v_1 and v_2:

L_i = \begin{bmatrix} x_i \\ y_i \end{bmatrix}, \quad L_i^a = \begin{bmatrix} \xi_i \\ \eta_i \end{bmatrix} \qquad (324)

with

\xi_i = \langle (L_i - L_a), v_1 \rangle = (L_i - L_a)^T \cdot v_1

\eta_i = \langle (L_i - L_a), v_2 \rangle = (L_i - L_a)^T \cdot v_2 \qquad (325)

The following expression can be used to obtain the absolute coordinates from the relative coordinate representation:

L_i = L_a + \xi_i \cdot v_1 + \eta_i \cdot v_2 \qquad (326)
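A worked numerical instance of the base-frame construction and the round trip between relative and absolute coordinates (the landmark coordinates are made up for illustration):

```python
import numpy as np

La, Lb = np.array([2.0, 1.0]), np.array([6.0, 4.0])   # base landmarks
Li = np.array([5.0, 7.0])                              # landmark to be encoded

v1 = (Lb - La) / np.linalg.norm(Lb - La)               # unit vector along the base
v2 = np.array([-v1[1], v1[0]])                         # perpendicular unit vector

xi = (Li - La) @ v1                                    # relative coordinates
eta = (Li - La) @ v2

Li_back = La + xi * v1 + eta * v2                      # absolute recovery
```

Here v1 = (0.8, 0.6), so the relative coordinates come out as (\xi, \eta) = (6, 3), and the reconstruction returns exactly the original landmark position.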

The reference frame is formed with two landmarks, as shown in Figure 53. The observations are then obtained relative to this frame. Assuming that the external sensor returns range and bearing, the observation functions are:


h_i = L_i - X_L - R_i \cdot (\cos(\beta_i), \sin(\beta_i))^T = 0

\beta_i = \alpha_i + \phi - \pi/2

\alpha_i: object angle with respect to the laser frame
R_i: object range with respect to the laser
(X_L, \phi) = (x_L, y_L, \phi): vehicle states \qquad (327)

Finally,

h_i = L_a + \xi_i \cdot v_1 + \eta_i \cdot v_2 - X_L - R_i \cdot (\cos(\beta_i), \sin(\beta_i))^T = 0 \qquad (328)
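The implicit observation equation can be sketched as a residual function (names are illustrative); it is zero exactly when the range/bearing return lands on the landmark:

```python
import numpy as np

def observation_residual(Li, xL, yL, phi, Ri, alpha_i):
    # beta_i converts the laser-frame bearing alpha_i into a global heading
    beta = alpha_i + phi - np.pi / 2
    return Li - np.array([xL, yL]) - Ri * np.array([np.cos(beta), np.sin(beta)])
```

For example, a vehicle at the origin with \phi = \pi/2 observing a landmark at (3, 4) would receive a return with R_i = 5 and \alpha_i = atan2(4, 3), for which the residual vanishes.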

With this representation the landmarks defining the bases become the natural "Preserved" landmarks. The observations in each constellation are evaluated with respect to the bases and can be considered, in the limit, as observations contaminated with white noise. This makes the relative elements of a constellation uncorrelated with the relative elements of the other constellations. The only landmarks that maintain strong correlations are the ones defining the bases, which are represented in absolute form.


Figure 53: Local reference frame


Experimental Results  The next set of plots presents a comparison of the performance of the sub-optimal algorithm. Figures 54 and 55 present two runs, one using most of the states and the other using only 100 states. The plots show that the total number of states used by the system grows with time as the vehicle explores new areas. They also show the number of states used by the system in grey and the number of states not updated with stars '*'. In the first run, very conservative values were selected for the thresholds defining I_1 and I_2, so most of the states were updated with each observation. The second run corresponds to a less conservative selection plus a limitation on the maximum number of states. Figure 55 shows that a large number of states are not updated at every time step, resulting in a significant reduction in the computational cost of the algorithm. From Figures 56 and 57 it can be seen that the accuracy of the SLAM algorithm has not been degraded by this simplification. These figures present the final estimated errors of all the states for both runs. It is noteworthy that only the bases are represented in absolute form. The other states are represented in relative form and their standard deviations become much smaller. One important remark regarding the advantage of the relative representation with respect to the simplification presented: since the bases are in absolute form, they maintain a strong correlation with the other bases and the vehicle states. They are more likely to be chosen as "preserved" landmarks, since the observations contribute more to them than to the relative states belonging to distant bases. In fact, the states chosen will most likely be the bases and the states associated with the landmarks in the local constellation. It is also important to remark that with this representation the simplification becomes less conservative than with the absolute representation. This can clearly be seen by looking at the correlation coefficients for all the states in each case, shown in Figures 58 and 59, where the correlations of the relative and absolute maps respectively are presented. In Figure 58 each block on the diagonal corresponds to a particular constellation, and the last block contains the vehicle states and the bases. The different constellations become de-correlated from each other and remain correlated only to this last block, whose cross-correlations are updated by the sub-optimal algorithm presented. These results imply that with the relative representation the cross-correlation between constellations becomes close to zero and the sub-optimal algorithm becomes close to optimal. This is not the case for the absolute representation, as shown in Figure 59, where all the states maintain strong cross-correlations.

Finally, Figure 60 presents the results of a 4 km trajectory using the compressed algorithm in a large area. In this case there are approximately 500 states in the global map. The system created 19 different constellations to implement the relative map. The cross-correlation coefficients between the different constellations become very small, as shown in Figure 61. This demonstrates the advantages of the compressed algorithm, since the local areas are significantly smaller than the global map. When compared with the full SLAM implementation, the algorithm generated identical results (states and covariances) with the advantage of very low computational requirements. For larger areas the algorithm becomes more efficient, since the cost is basically a function of the number of local
landmarks. These results are important since they demonstrate that even in very large areas the computational limitation of SLAM can be overcome with the compressed algorithm and an appropriate selection of local areas.

Figure 54: Total number of states, states used, and states not updated. The figure presents the total number of states with a solid black line. This number increases because the vehicle is exploring new areas and validating new landmarks. The states used by the system are represented in grey, and the number of states not used is represented with '*'. In this run the system used most of the available states.


Figure 55: Total number of states, states used, and states not updated. In this run a maximum number of states was imposed as a constraint on the sub-optimal SLAM algorithm. This can be appreciated in the grey plot, where the number of states used remains below a given threshold. The number of states not updated increases with time.

[Plot for Figure 56: final deviations (1 sigma, metres) versus landmark states.]

Figure 56: Final estimation errors for relative and absolute states, using most of the states.


[Plot for Figure 57: final deviations (1 sigma, metres) versus landmark states.]

Figure 57: Final estimated errors of relative and absolute states using a reduced number of states. These results are similar to the ones obtained using most of the states, showing that the algorithm is not only consistent but close to optimal when used with the appropriate map representation.


[Image for Figure 58: covariance coefficients, states ordered by constellation.]

Figure 58: Correlation coefficients for the relative representation. Each block represents the cross-correlation coefficients of the elements of a different constellation. The block in the corner contains the vehicle states and all the bases. It can be seen that the cross-correlation between different constellations is very small. The non-zero cross-correlations between the bases and the different constellations are also clear; these correlations are updated by the sub-optimal filter.

[Image for Figure 59: covariance coefficients.]

Figure 59: Correlation coefficients for the absolute representation. In this case the map appears completely correlated, and the sub-optimal algorithm will generate consistent but more conservative results.


Figure 60: Constellation map and vehicle trajectory. 19 constellations were formed by the algorithm. The origin of each base is presented with a '+' and the other end of the base segment with an 'o'. The relative landmarks are represented with '*', and their association with a base is represented by a line joining the landmark to the origin of its relative coordinate system.

[Image for Figure 61: covariance coefficients, states ordered by constellation.]

Figure 61: Cross-correlation coefficients. The plot shows the 19 constellations and a block in the corner containing the correlation coefficients for the bases and the vehicle states. It can be appreciated that the cross-correlation between the relative states of the different bases is very small.


References

[1] I.Y. Bar-Itzhack. Identity Between INS Position and Velocity Error Models. In Journal ofGuidance and Control, pages 568–570, September 1981.

[2] I.Y. Bar-Itzhack and D. Goshen-Meskin. Identity Between INS Position and Velocity Er-ror Equations in the True Frame. In Journal of Guidance and Control, pages 590–592,November 1988.

[3] Y. Bar-Shalom and X. Li. Estimation and Tracking: Principles, Techniques and Software. Artech House, Boston, 1993.

[4] D.O. Benson. A Comparison of Two Approaches to Pure-Inertial and Doppler-Inertial Error Analysis. IEEE Transactions on Aerospace and Electronic Systems, pages 447–455, July 1975.

[5] R. Chatila and S. Lacroix. Adaptive Navigation for Autonomous Mobile Robots. International Symposium on Robotics Research, 1995.

[6] I. Cox. Blanche - An Experiment in Guidance and Navigation of an Autonomous Robot Vehicle. IEEE Transactions on Robotics and Automation, 7:193–204, 1991.

[7] G. Dissanayake, P. Newman, S. Clark, and H. Durrant-Whyte. A Solution to the Simultaneous Localisation and Map Building (SLAM) Problem. IEEE Transactions on Robotics and Automation, 17(3), June 2001.

[8] H. Durrant-Whyte. Where Am I? Industrial Robot, 21(2), 1994.

[9] H. F. Durrant-Whyte. An Autonomous Guided Vehicle for Cargo Handling Applications.International Journal of Robotics Research, 15, 1996.

[10] D.J. Flynn. A Discussion of Coning Errors Exhibited by Inertial Navigation Systems. Royal Aircraft Establishment Technical Memorandum Rad-Nav 243, 1984.

[11] J. Guivant. Efficient Simultaneous Localisation and Mapping in Large Environments. PhD thesis, University of Sydney, 2003.

[12] J. Guivant and E. Nebot. Optimization of the Simultaneous Localization and Map Building Algorithm for Real Time Implementation. IEEE Transactions on Robotics and Automation, 17(3):731–747, June 2001.

[13] S. Julier. Process Models for the High-Speed Navigation of Conventional Road Vehicles.Technical report, University of Oxford, 1994.

[14] S. Julier. Process Models for the Navigation of High Speed Land Vehicles. PhD thesis,University of Oxford, 1996.

[15] A. Kelly. A 3D State Space Formulation of a Navigation Kalman Filter for Autonomous Vehicles. CMU Robotics Institute Technical Report, 1994.


[16] X. Kong. Development of a Non-Linear Psi-Angle Model for Large Misalignment Errors and its Application in INS Alignment and Calibration. In IEEE International Conference on Robotics and Automation, pages 1430–1435, May 1999.

[17] J. Leonard and H. Feder. Decoupled stochastic mapping for mobile robot and AUV navigation. IEEE Journal of Oceanic Engineering, 26(4):561–571, 2001.

[18] P. Maybeck. Stochastic Models, Estimation and Control, volume 1. Academic Press, 1982.

[19] P. McKerrow. Introduction to Robotics. Addison-Wesley, 1991.

[20] A.G.O. Mutambara. Decentralised Estimation and Control with Applications to a Modular Robot. PhD thesis, Department of Engineering Science, University of Oxford, 1994.

[21] E. Nebot and H. Durrant-Whyte. Initial Alignment and Calibration of Low Cost Inertial Navigation Units for Land Vehicle Applications. Journal of Robotic Systems, 16(2):81–92, February 1999.

[22] E. Nebot, H. Durrant-Whyte, and S. Scheding. Frequency domain modelling of aided GPS for vehicle navigation systems. Robotics and Autonomous Systems, 25:73–82, February 1990.

[23] J. Neira and J.D. Tardos. Data association in stochastic mapping using the joint compatibility test. IEEE Transactions on Robotics and Automation, pages 890–897, 2001.

[24] S. Sukkarieh. Aided Inertial Navigation Systems for Autonomous Land Vehicles. PhD thesis, Department of Mechanical and Mechatronic Engineering, University of Sydney, 2000.

[25] M. Montemerlo and S. Thrun. FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem. http://citeseer.nj.nec.com/503340.html.

[26] R. Smith, M. Self, and P. Cheeseman. A stochastic map for uncertain spatial relationships. In 4th International Symposium on Robotics Research, MIT Press, 1987.

[27] D.H. Titterton and J.L. Weston. Strapdown Inertial Navigation Technology. Peter Peregrinus Ltd, 1997.