
University of Southern Denmark

Master’s thesis
MSc in Engineering - Robot Systems

Autonomous Airborne Tool-carrier

Authors:

Hjalte B. L. Nygaard

Peter E. Madsen

Supervisors:

Ulrik P. Schultz

Rasmus N. Jørgensen

Kjeld Jensen

May 30, 2012


Abstract

The field of Unmanned Aerial Vehicles has been under rapid development during the last decade. Eyes in the sky have proven to be a powerful tool in a variety of domains, and many different approaches have been taken to create these types of systems. However, these varying approaches differ mainly in the task they complete and thus the payload they carry. This thesis proposes a modular approach to the fixed wing UAV domain, aiming to enable reuse of the same flying platform in a multitude of applications. This is done through swappable software as well as hardware structures, utilizing ROS as a common communication basis. Although a completely autonomous flying prototype is not yet implemented, promising results have been achieved. A hardware base has been constructed, using MEMS sensors and an ARM processor. As state estimation is a common problem for all airborne platforms, it has been the main concern of this thesis. A cascaded Extended Kalman Filter has been implemented, and some of its performance measures have been verified using vision based horizon tracking. Aided flights have been conducted, utilizing simple PID loops to maintain plane attitude, based on pilot commands and feedback from the state estimator.


Acknowledgments

We owe our deepest gratitude to a number of people who have been involved in this project, one way or the other. First of all, our beloved and patient sweethearts: without their love and support, this thesis would have remained a dream. We really appreciate the time, patience and especially the steady hand and eagle eye of Carsten Albertsen, in assisting us with SMD soldering and bug fixing during the PCB prototype manufacturing. In the same breath, gratitude shall be expressed to David Brandt, for helping us out with Gumstix related hardware design issues and ad-hoc components. Morten Larsen should be thanked for his tireless support on Linux and CMake related issues. We have benefited from his ability to decipher illegible compiler errors. We also thank Stig Hansen and Kold College for kindly lending us field and airspace to perform flight tests. We are deeply grateful for Jimmi Friis' guidance into the world of R/C model planes; your patience and kind instructions were invaluable. Without the initial help of Henning Porsby we would never have gotten into the air. Andreas Rune Fugl's input and ideas have proved inspirational for our implementation and future work. We owe our gratitude to Lars Peter Ellekilde for breaking the ice on the subjects of quaternions and Kalman filtering, and to Dirk Kraft for suggesting vision procedures. Lastly, we would like to thank Bent Bennedsen and Henrik Midtiby for their input on the use of imagery in agriculture.

"Flexible is much too rigid, in aviation you have to be fluid." - Verne Jobst


Contents

Introduction

1 Background & Analysis
  1.1 Related Work
  1.2 Application areas
      1.2.1 Conditions
      1.2.2 Regulations
      1.2.3 Practicalities
  1.3 System requirements & Architecture

2 Auto Pilot
  2.1 Low level control loops
      2.1.1 PID controller
      2.1.2 Bank and Heading control
      2.1.3 Climb and Altitude control
      2.1.4 Miscellaneous control
  2.2 Flight Management System
      2.2.1 Trajectory Smoothing
      2.2.2 Trajectory Tracking
      2.2.3 Tool Interaction
  2.3 Flight Computer
  2.4 Conclusion

3 State Feedback
  3.1 Sensors
      3.1.1 Velocities and course parameters
      3.1.2 Position
      3.1.3 Altitude
      3.1.4 AHRS
  3.2 Sensor fusion
      3.2.1 Methods
      3.2.2 Estimator architecture
      3.2.3 Kinematic models
  3.3 Noise considerations
  3.4 Conclusion

4 Visualization and Simulation
  4.1 Telemetry
      4.1.1 Link hardware
      4.1.2 Message passing
  4.2 Visualization
  4.3 Simulator
  4.4 Conclusion

5 Implementation
  5.1 Airframe
  5.2 Autopilot PCB
      5.2.1 Processor and data buses
      5.2.2 Airspeed sensor
      5.2.3 Accelerometer
      5.2.4 Gyroscope
      5.2.5 Magnetometer
      5.2.6 Barometer
      5.2.7 GPS
      5.2.8 Zigbee
      5.2.9 Ultrasonic proximity
      5.2.10 R/C interface & Failsafe operation
      5.2.11 Power management
      5.2.12 Auxiliary components
  5.3 Software building blocks
      5.3.1 fmFusion
      5.3.2 fmController
      5.3.3 GPS
      5.3.4 fmTeleAir and fmTeleGround
  5.4 Ground Control system
  5.5 Simulator
  5.6 Flight test
      5.6.1 Vision based post flight roll verification
      5.6.2 Aided Flight

6 Perspective
  6.1 Future work
  6.2 Project analysis
  6.3 Conclusion

Bibliography

Nomenclature

Appendix A Attitude kinematics
Appendix B Heading kinematics
Appendix C Position and Wind kinematics
Appendix D Wind statistics
Appendix E Wikis and how-tos
Appendix F PCB design
Appendix G Thesis proposal
Appendix H Sensor Fusion for Miniature Aerial Vehicles
Appendix I Project Log


Introduction

During the past decade, the field of small scale autonomous airborne vehicles has undergone rapid development. This development has become possible through new sensor technologies: small, light-weight, integrated circuit components now replace larger, heavier mechanical sensors. Development has been seen in a wide variety of applications, including military, education, commercial and recreational use. Example uses can be seen in agriculture, aerial photography, disaster management and inspection systems. Common to these areas are specialized systems, with integrated sensors for very specific areas of application. Through this thesis, the state of the art will be assessed. Automated flight and ways to control unmanned aerial vehicles will be reviewed. By implementing state estimation and low level control on a highly capable embedded platform, we aim to break with the single purpose paradigm seen in today's solutions. We propose a single, modular system, applicable to various use cases and airframes. Through increased computational capabilities, the same autopilot can be used with many different tools, and thereby application areas. We call this the Autonomous Tool Carrier. It is hypothesized that, through aided control, unmanned aerial vehicles can be controlled by inexperienced pilots, and that the implementation of such a system can be used as the first step towards realizing this new concept.


Chapter 1
Background & Analysis

This chapter will analyse existing systems available in the domain of Unmanned Aerial Vehicles (UAVs), identify application areas and assess the available technologies. Based on this analysis, it will be formulated how an Autonomous Tool Carrier can supplement the current state of the art. Furthermore, the overall building blocks of such a system will be identified and introduced.

The remainder of this introductory Chapter is structured as follows: Section 1.1 gives an overview of the existing commercial and open source based projects, Section 1.2 surveys applications for Unmanned Aerial Vehicles and finally Section 1.3 describes the proposed system and its architecture. The latter Section also outlines the structure of this thesis.

1.1 Related Work

A review of existing autopilot systems is conducted in this section. The available systems have been divided into two groups: commercial and open source autopilots. A select number of open source autopilot projects are presented in Table 1.1. Common to these systems are the relatively low hardware costs, compared to the commercial units. The majority of the projects use complementary filtering for state estimation, as it is easier to implement than the Kalman filters used by the commercial units. These projects generally seem to lack coherent documentation and scientific foundation. None of the systems provide tool interaction, as the purpose of the projects is limited to enabling automated flight with R/C model planes.

A number of commercial autopilots are available, as can be seen in Table 1.2. Based on various sales documents, it can be deduced that these autopilots are commonly based on Extended Kalman Filters. The turn-key prices are relatively high, ranging from 4,000 Euro and up. System information beyond sales documents is generally sparsely available. The Kestrel autopilot [12] is an exception. This autopilot originates from Brigham Young University [23] and has been used comprehensively by Beard et al. in their research, in various aspects regarding automated flight of miniature aerial


Table 1.1: Table of open source autopilot projects

Name           Computer              State Est.      HW Price [EUR]   Payload comm.
Paparazzi[14]  60 MHz ARM            Complementary   400              no
GluonPilot[4]  80 MHz dsPIC          EKF             480              no
ArduPilot[1]   20 MHz AVR            Complementary   160              no
OpenPilot[6]   150 MIPS Cortex-M3    Complementary   70               no

Table 1.2: Table of commercial autopilot systems

Name              Payload                      Computer       State Est.     Price [EUR]   Payload comm.
Kestrel[12]       -                            29 MHz 8-bit   EKF            4,000         one-way
SenseFly[8]       12 MP Canon PowerShot        -              -              8,500         no
MicroPilot[5]     Gimbal camera (EUR 19,250)   150 MIPS       12 state EKF   6,400         one-way
Piccolo II[7]     -                            -              -              6,000         one-way(a)
Avior 100[2]      -                            600 MHz        15 state EKF   N/A           one-way
Gatewing x100[3]  10 MP digital camera         -              -              N/A           no

(a) Might have two-way communication. No specific information found.

vehicles [19, 45, 54, 59]. Due to the academic history of this autopilot, some insight into its internal workings is available. Most of the commercial autopilots carry some sort of tool, and most of them are indeed capable of communicating the position and orientation of the airframe to the tool. However, none of them suggest capabilities of involving the tool in the flight itself; others simply fixate a regular camera to the airframe.

When surveying the field, it is soon recognized that state estimation plays a major role in the overall system performance. This is due to the fact that this subsystem provides the feedback for the actual controller to react on. Thus, a bad state estimate yields poor overall autopilot performance. This is reflected in the hobbyists using relatively simple methods, as opposed to the more complex and thus expensive commercial systems.

1.2 Application areas

The primary role of UAVs is to provide an eye in the sky. This is useful for ad hoc data collection in many domains. This section will survey some different application areas of UAVs and delimit the requirements, in terms of payloads, versatility and various other aspects.

Aerial photography in agriculture

In recent decades there has been an increasing interest in remote sensing-based precision agriculture [44, 52, 70]. Especially Site Specific Weed Management (SSWM) has great potential for reducing the amount of pesticides needed for adequate weed management [33]. Historically, weed management has involved a vast amount of manual labor. In the past decades the use of herbicides has greatly


reduced the amount of manual labor. However, these chemical agents impact both the environment and the yield of the crop. Gutjahr and Gerhards show that the use of herbicides can be reduced by approximately 80% by spraying selectively in weed infested areas, rather than broad spraying the entire field. Further optimization can be achieved by selectively activating the relevant spraying nozzles when driving a spraying boom over a weed plant [40]. The two examples require different vision system approaches; the former needs to localize weed patches and the latter individual plants within the patch. As weed control is relevant in the germination period, weed patches can be identified by excessive biomaterial density. This can be measured using the Normalized Difference Vegetation Index (NDVI) [70]. This indexing method exploits the fact that photosynthesis utilizes most red light with a wavelength of λ ≈ 680 nm, while reflecting most near-infrared light with a wavelength of λ ≈ 750 nm, as illustrated in Figure 1.1a. This steep rise in reflection is known as the red edge.

(a) Reflection curves for soil and different plant species, illustrating typical red edges [70, figure 8.1]

(b) Weed detection by row relative placement [22, figure 10.c]

Figure 1.1: Reprinted figures, illustrating two different approaches to image based weed detection.
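The NDVI mentioned above is the standard normalized band ratio NDVI = (NIR − R)/(NIR + R). A minimal per-pixel sketch; the reflectance values below are illustrative, not measured data:

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index for one pixel.

    Vegetation absorbs red light (~680 nm) for photosynthesis but reflects
    strongly beyond the red edge (~750 nm), so NIR >> red pushes NDVI
    towards 1, while bare soil stays close to 0.
    """
    return (nir - red) / (nir + red)

# Illustrative reflectances (fractions of incoming light):
print(ndvi(0.05, 0.50))  # dense biomass: high NDVI
print(ndvi(0.20, 0.30))  # bare soil: low NDVI
```

A weed patch map is then obtained by thresholding NDVI over the image, as biomass density stands out against soil.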

Another method, when dealing with row crops, is identification of biomass between rows, as seen in [22] and depicted in Figure 1.1b.

As pointed out by Weis and Sokefeld [70], successful crop-to-weed discrimination is essential for SSWM. Therefore, spatial and temporal variations of weed populations need to be assessed if treatment is to vary within the field. Satellite based image data has proven useful for localizing big weed patches [39], by combining RGB (red, green & blue) and NIR (Near Infra Red) images with NDVI. Weed patches are detected in the images from the increased biomass, compared to the surrounding areas. However, due to the limited spatial resolution of the satellite based images, small patches remain undetectable. Furthermore, it is expensive, and the revisiting time of the QuickBird satellite used in [39] is up to 3.5 days.


Disaster overview and search & rescue aid

As noted by Bendea et al. [20], small scale unmanned aerial vehicles (UAVs) are ideal for disaster overview. In a multitude of situations, be that floods, earthquakes or post hurricane situations, an immediate overview of the situation can prove very helpful. The revisiting times of satellites and their limited spatial resolution limit their use in these cases; thus supplementary imagery from a UAV can aid disaster management.

An eye in the sky could also be invaluable in the search for missing persons. Combined with thermal cameras, an effective search could be conducted without the cost of a manned aircraft. The aircraft also requires less space for storage, and can easily be kept on standby for immediate dispatch. Additionally, multiple UAVs can be launched to perform a swift overflight of an area without additional personnel requirements.

Other applications

The need to inspect gas pipelines is explained by Hausamann et al. [34], where it is concluded that the remote sensing possibilities of UAVs could potentially be applied. Espinar and Wiese [29] use a UAV to collect dust samples from tropical trade winds from North Africa's Sahara, in an effort to link it with health effects on humans and the ecosystem. Data could also be sampled from other, more inhospitable environments, like the gamma radiation of the recent Fukushima disaster in Japan. Furthermore, regular civil aerial photography is also an area of application. This can be used for producing pictures for sale, wildlife monitoring, and cattle inspection and counting.

The different application areas place different constraints on the UAV and particularly the payload it carries. In agriculture, RGB and IR images are required to map fields and extract NDVI maps, which can be used to localize weed patches that need treatment. The spatial resolution should be high enough to clearly distinguish row from inter-row areas. In applications such as search and rescue, thermal cameras are deemed useful, as they can detect body heat. Thermal cameras are likewise useful in agriculture, as they can be used to spot deer [69] in the field before harvesting. The majority of the application areas utilize cameras to provide 'the grand overview'. However, the specifications of these cameras vary. As has been pointed out, other applications require very different sensing tools.

1.2.1 Conditions

In order to identify the requirements of UAVs in Denmark, a number of practicaland regulative considerations must be addressed.

1.2.2 Regulations

The Civil Aviation Administration - Denmark has a number of regulations regarding recreational model plane flying and automated unmanned aerial vehicles. The most essential parts of the regulations, contained in article BL 9-4 [48], have been identified as:


- Weight: BL 9-4 §4.2, model planes with a take-off weight of 7-25 kg require special permission and registration of the aircraft. A maximum weight of 7 kg is therefore required for anyone not possessing this permit.

- Flying height: BL 9-4 §2.2.e, model planes are limited to a maximum altitude of 100 m.

- Line of sight: BL 9-4, a model plane must always be in line of sight (without using aids such as binoculars), and if the plane is autonomous, the person operating it must always be able to switch to manual mode in flight, to be able to avert potential collisions.

1.2.3 Practicalities

Apart from the regulative considerations, a number of other practicalities must be assessed. When selecting an aerial platform, a number of requirements and desired capabilities must be defined. The platform needs to operate outdoors, and has to be operable on any, weather-wise, regular day. When dealing with light weight smaller aircraft, the main concern before takeoff is the wind.

DMI [42] provides a wind rose average spanning 1993-2002, see Figure 1.2. From the wind rose it can be concluded, see Appendix D, that if a UAV can fly in winds up to 10.7 m/s, it can operate 347 days of the year. The wind tends to slow down at night, in the morning and in the evening. Thus, for all practical purposes, a plane that flies stably in 10.7 m/s wind can be used all year in Denmark.

Wind speed [m/s]   Days a year [%]
0.3 - 5.4          58
5.5 - 10.7         37
> 10.8             5

Figure 1.2: DMI wind rose [42]. Average wind speeds and direction in the period from 1993 to 2002 at Odense Airport.
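The 347-day figure quoted above follows directly from the wind rose percentages; a quick check of the arithmetic:

```python
# Fractions of days per wind class, taken from the DMI wind rose table
calm, moderate, strong = 0.58, 0.37, 0.05   # 0.3-5.4, 5.5-10.7, >10.8 m/s

# A platform that handles winds up to 10.7 m/s covers the first two classes
operable_fraction = calm + moderate
print(round(operable_fraction * 365))  # -> 347 days a year
```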

It is desirable to use electrical propulsion over fuel driven motors, as it is less demanding in maintenance, cleaner, quieter and simpler to work with and control. As the plane will be used as a tool carrier, it has to be able to carry extra weight. It is estimated that a typical tool, like a camera, would weigh up to 1 kg - this is the minimum extra weight capability required of the plane. It is also desirable to have as low a stall speed as possible. Low stall speed is essential for slow flight and allows for added weight while still maintaining realistic take-off and landing speeds.

When adding weight to the plane, the flight dynamics will change. The added mass increases the inertia of the system, and as a result the aircraft step-response to control input will be slower. As seen in (1.1), the lift force of a wing depends on the wing area A, traveling speed v, lift coefficient CL and air density ρ:

L/A = 1/2 · v² · ρ · CL    (1.1)

Thus, as an effect of added weight, the take-off and landing speed will increase proportionally to the square root of the added weight, as seen in (1.2) through (1.3), where WS denotes the wing loading:

v² = (2 · L) / (A · ρ · CL),  where L/A = M·g/A = WS·g    (1.2)
   = (2 · g · WS) / (ρ · CL)    (1.3)
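As a numeric sanity check of (1.2)-(1.3), the sketch below computes the minimum level-flight speed from the wing loading. All airframe numbers are made up for illustration; only the formula comes from the text.

```python
import math

def min_speed(mass_kg, wing_area_m2, cl, rho=1.225, g=9.82):
    """v = sqrt(2 * g * WS / (rho * CL)), with wing loading WS = M / A."""
    ws = mass_kg / wing_area_m2
    return math.sqrt(2.0 * g * ws / (rho * cl))

# Hypothetical airframe: 3 kg on a 0.6 m^2 wing with CL = 1.2
v_empty = min_speed(3.0, 0.6, 1.2)
v_loaded = min_speed(4.0, 0.6, 1.2)   # with a 1 kg tool added

# Speed grows with the square root of the weight, as stated above
print(round(v_empty, 1), round(v_loaded, 1), round(v_loaded / v_empty, 3))
```

The ratio v_loaded/v_empty equals sqrt(4/3) ≈ 1.155, i.e. roughly 15% higher take-off and landing speeds for a 33% heavier plane.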

When dealing with UAVs a number of issues must be dealt with. The foremost challenge is that a fixed wing platform must be in motion in order to maintain lift. Thus, stopping in the air in case of an unforeseen event is clearly not an option. As a result it must be ensured that a pilot or operator is always ultimately in control of the airframe; even when it is flying autonomously, he or she must be ready to manually take over the flight.

The fact that the platform is airborne also introduces a number of challenges in the development phase. First of all, it is non-trivial to do a flight test. A number of conditions must be met: 1) one must have a capable pilot, 2) the weather must be right, and 3) one must have an airspace to perform the test in. As debugging directly on the flying platform is not applicable, every piece of data that might be, or might become, relevant in later analysis must be logged or transmitted wirelessly to the ground during flights.

That being said, the domain of autonomous flight is an inspiring area to do development in, and the many challenges inspire long working hours.

1.3 System requirements & Architecture

Based on the above analysis, it is concluded that a modular system capable of interfacing a range of tools would enable future developers to focus on the task, and thus the payload at hand, rather than on building the vehicle carrying it. Furthermore, by providing bi-directional platform-to-tool interfacing, new possibilities could arise. In order to provide such a system, it is of the utmost importance that basic building blocks like state estimation, autopilot and telemetry link are in place. Providing these fundamental subsystems will be the main focus of this thesis.

In order to provide modularity, it was chosen to take the abstraction to a higher level than seen in any of the existing UAV systems. This has been done by building the system using ROS [58] as middleware. By utilizing ROS, a well established open source community with a broad code and sensor base is drawn upon. In order to accommodate ROS, it is necessary to provide a hardware platform capable of running the system.

Figure 1.3 depicts the suggested structure of such a system. As seen in this block diagram, a simulator is capable of replacing the entire airframe. This is done in order to enhance development and debugging, which is necessary because flight tests are not trivially conducted, as concluded in Subsection 1.2.3.

[Figure 1.3 block diagram: the Autopilot (Chapter 2) contains Control (Chapter 2), State Estimator (Chapter 3), Flight Plan (Chapter 2) and Tool (Chapter 2) blocks; Telemetry (Chapter 4) links it to the Ground Control Station (Chapter 4); the Airframe (Chapter 5) with its Sensors (Chapter 3) and Actuators (Chapter 5) can be replaced by the Simulator (Chapter 4).]

Figure 1.3: Overview of information flow in the proposed system structure. It should be noted that chapter references are to the main discussion of the topic.

The remainder of this thesis is structured as follows. Chapter 2, 'Auto Pilot', discusses the subject of autonomous flight along with the processing needs of the flight computer. Chapter 3, 'State Feedback', surveys sensors, methods and algorithms for determining quantities such as position and orientation of the moving UAV. Various tools developed to aid debugging and monitoring are discussed in Chapter 4, 'Visualization and Simulation'. Chapter 5, 'Implementation', condenses all these aspects into a flying prototype of the proposed system and evaluates some overall performance aspects. Finally, Chapter 6, 'Perspective', debriefs this thesis and outlines the future of the project.


Chapter 2
Auto Pilot

The overall purpose of the autopilot is to enable automated, purposeful flight. The purpose of the flight depends on the payload and task at hand, but will typically be some sort of overflight of a predefined area. When dealing with aircraft control, a number of standard aviation terms are used. The most important ones are listed in the Nomenclature.

Different approaches to autopilot design are found in the literature. Commonly, they are built in some sort of hierarchy, with the lowest layer stabilising the roll axis, pitch axis and velocity of the airframe. On top of this, methods for controlling altitude and heading can be built. The higher abstraction layers utilize these abilities to execute a plan, in one format or another. The plan is typically composed of a set of waypoints, i.e. 'dots' which the airframe must 'connect' during the flight, in a specified order. As fixed wing aircraft need velocity to maintain lift, turning on the spot is not an option. Hence, some sort of smoothing must be applied when interpolating the waypoints, to provide a flyable path.

Although the autopilot is not fully implemented, an overview of the proposed structure is given here and depicted in Figure 2.1.

- The top level is Planning: the act of creating a set of waypoints, i.e. a list of chronologically ordered places, velocities, measurements and other conditions which must be satisfied, one by one. This list will be referred to as the flight plan. As planning involves the operator and can be done pre-flight, it is done on the Ground Control Station (GCS).

- The intermediate level is the Flight Management System (FMS), which carries out the flight plan on a higher abstraction level, waypoint by waypoint.

- The actual real time control of the airframe is done by a set of controllers, which aim to position the airframe correctly in terms of attitude, altitude, velocity etc., according to the current setpoints given by the FMS. Typically one controller is devoted to each set of control surfaces. Such a


[Figure 2.1 block diagram: the Operator supplies a flight plan to Planning; the Flight Management System (Section 2.2) turns it into control modes and setpoints for the Controllers (Section 2.1), whose control efforts drive the Actuation (Section 5.2.10); the FMS also interfaces the Tool/Payload (Section 2.2.3); all of this runs on the Flight Computer (Section 2.3), and the loop is closed by State Feedback (see Chapter 3). The Operator can override the control efforts.]

Figure 2.1: Overview of the autopilot sub-system structure. The operator is in control in the planning phase, but is also capable of overriding the control efforts and thus manually controlling the airframe.

controller can work in different modes, depending on what condition the FMS requires it to maintain.

As pointed out by Low [47], the separation between the FMS and the controllers introduces an abstraction which allows the FMS to control different types of airframes without modification. The controllers must be tuned to the airframe in question, but the way of controlling it is in essence the same. It should be noted that key parameters of the airframe and its controllers, such as turn radius and cruising speed, need to be known in the planning phase, as they vary with the type and proportions of the airframe.

The FMS also communicates with the payload tool. This communication provides the tool with the information required for sampling data points when a set of conditions is met (i.e., when the airframe is at the appropriate position). It is likewise possible to change flight parameters during flight, based on observations acquired by the tool. E.g. if the tool misses a data sample, it can request the FMS to fly back to the point in question for another attempt. Other conceivable scenarios include following a 'rabbit' or path monitored by the tool, or expanding the search area based on the tool's observations.

The final layer of the system is actuation. In this layer, the servos are signalled to position the control surfaces at the deflection commanded by the controllers. Here too, the operator can get involved, as he shall at all times be able to disengage the autopilot system and take over control via a remote control.

The technical details presented in the remainder of this Chapter are structured bottom-up: Section 2.1 describes the low level controllers, and Section 2.2 describes how the low level controllers can be utilized by the FMS when executing a flight plan. Section 2.3 evaluates the need for computational power and other constraints associated with


[Figure 2.2 block diagram: setpoint (desired value) and feedback (actual value) each pass through a low-pass filter before the error e(t) = setpoint − feedback is formed; the output is the sum Kp·e(t) + Ki·∫e(t)dt + Kd·(d/dt)e(t), low-pass filtered before being applied to the system.]

Figure 2.2: Block diagram of a basic PID loop with bandwidth limited inputs and output. Please note that implementations might deviate from the depicted version.

the UAV domain, and finally chooses a platform. Section 5.2.10 describes the subsystem for interacting with the R/C components, which performs the actual execution of the flight.

It should be noted that throughout this Chapter, the state of the airframe is assumed to be known and correct. Chapter 3, 'State Feedback', describes methods for acquiring this state.

2.1 Low level control loops

The low level controllers serve to control flight parameters such as roll, pitch, yaw and velocity. They do so by continuously controlling the deflection of the control surfaces. The output to these surfaces is based on the requirements from the FMS and the feedback from the state estimator.

2.1.1 PID controller

The lowest level of control is stabilisation of the roll and pitch axes of the airframe. For this form of control, the PID scheme is applicable, as it is generally sufficient for controlling most types of systems and can be tuned with relative ease. Figure 2.2 depicts such a controller. It takes two input signals, referred to as setpoint and feedback. The setpoint is the desired value of the plant under control. The actual value of the plant must be measured and returned as feedback. By subtracting the current plant value (feedback) from the desired value (setpoint), the error is found. The controller outputs a single value, output, which must be connected to an actuator or subsystem controlling the plant. The output of the controller is the weighted sum of the current error, the integral of the error and the derivative of the error. The applied weights are referred to as Kp, Ki and Kd in the ideal form, for proportional, integral and derivative respectively.
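The controller described here can be sketched in a few lines. This is a minimal discrete-time version with the integrator clamped against wind-up; the gains, limits and plant below are illustrative, not tuned values from the thesis.

```python
class PID:
    """Ideal-form PID: output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, integral_limit=5.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral_limit = integral_limit  # symmetric clamp against wind-up
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, feedback, dt):
        error = setpoint - feedback
        # Integrate, then clamp so an unreachable setpoint cannot wind up
        self.integral += error * dt
        self.integral = max(-self.integral_limit,
                            min(self.integral_limit, self.integral))
        # Backward-difference derivative; zero on the very first sample
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a simple first-order plant towards a setpoint of 1.0
pid, plant, dt = PID(kp=2.0, ki=0.5, kd=0.1), 0.0, 0.02
for _ in range(2000):
    plant += (pid.update(1.0, plant, dt) - plant) * dt
print(round(plant, 2))  # -> 1.0
```

The integral term removes the steady-state error the P term alone would leave, at the cost of the wind-up risk handled by the clamp.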

Various schemes exist for tuning the three parameters of a PID controller without modelling the system under control. Most notable is Ziegler-Nichols [72]. This method is a step-by-step recipe for tuning a PID controller for an arbitrary system, by applying various gains and inputs and observing the system's reaction.

In Figure 2.2, the PID controller is depicted with bandwidth limiting low pass filters on setpoint, feedback and output. These are not strictly necessary, but


can be applied to reduce the bandwidths, in order to filter out noise in feedback and setpoint. Such a filter can likewise limit the rate of change of the output. If the cut-off frequencies of these filters are not too low, they will not compromise the functionality of the PID controller, as it does not make sense to control the airframe faster than its natural frequency. Moving average filtering can also be applied to e.g. the setpoint, to reshape a step input into a linear ramp.

Integral wind-up is the situation where the controller aims to control the output towards an unreachable setpoint, such that the integrator term never gets 'satisfied' and keeps integrating. This scenario will make the output stick to the upper limit, even when the setpoint is back in the reachable region. Integral wind-up is handled by limiting the value of the integrator.
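The PID law and integrator clamping described above can be sketched as follows; the class layout, default limit and gains are illustrative, not taken from the thesis code base:

```python
class PID:
    """Ideal-form PID with integrator clamping (anti wind-up).
    Layout and default clamp limit are illustrative, not from the thesis."""
    def __init__(self, kp, ki, kd, i_limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_limit = i_limit
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, feedback, dt):
        error = setpoint - feedback
        # Integrate, then clamp so an unreachable setpoint cannot wind up
        self.integral += error * dt
        self.integral = max(-self.i_limit, min(self.i_limit, self.integral))
        # Backward-difference derivative of the error
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

A roll stabiliser would then call e.g. `update(phi_desired, phi_actual, dt)` every control cycle and send the result to the ailerons.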

By nesting PID controllers, more complex controller structures can be built. This is done by letting one controller output the setpoint for the next. This way, e.g. the heading of the airframe can be controlled by outputting the desired roll angle to another controller. When dealing with nested controller loops, the controller being controlled by another is referred to as the inner control loop, and the controller controlling it as the outer control loop. It should be noted that the innermost controller should always be the fastest. Controlling a slow controller with a fast one will result in an unstable system.

2.1.2 Bank and Heading control

The bank angle of the airframe is controlled using a PID loop, as seen in Figure 2.3a on the next page. The feedback is the actual roll angle, φa, and the setpoint is given by outer control loops as the desired roll angle, φd. The output of the controller is connected to the ailerons, with opposite signs for left and right, to induce roll.

An outer control loop can control the heading of the airframe by banking it, under the assumption that a lateral controller is used to maintain the altitude and that the velocity is constant, as:

\[ r = \frac{V_a^2}{g \cdot \tan(\phi)} \tag{2.1} \]

Where $V_a$ is the velocity through air, $r$ is the turn radius and $g$ is the gravitational acceleration. Based on this, the angular velocity around the yaw axis, $\dot{\psi}$ (i.e. the rate of turning), can be controlled, as:

\[ \dot{\psi} \cdot r = V_a \iff \dot{\psi} = \frac{V_a}{r} \tag{2.2} \]

\[ \dot{\psi} = \frac{V_a}{\frac{V_a^2}{g \cdot \tan(\phi)}} = \frac{g \cdot \tan(\phi)}{V_a} \tag{2.3} \]

When using a PID loop for controlling the heading, the output must be limited. If the airframe rolls too much, the lift is compromised, such that the lateral controller is unable to maintain the altitude.
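Equations (2.1)-(2.3) can be checked numerically; the airspeed and bank angle below are arbitrary example values:

```python
import math

def turn_radius(v_a, phi, g=9.81):
    """Eq. (2.1): r = Va^2 / (g * tan(phi)) for a coordinated, level turn."""
    return v_a ** 2 / (g * math.tan(phi))

def yaw_rate(v_a, phi, g=9.81):
    """Eq. (2.3): psi_dot = g * tan(phi) / Va."""
    return g * math.tan(phi) / v_a

v_a = 15.0                # true airspeed [m/s], example value
phi = math.radians(30.0)  # bank angle
# Eq. (2.2) ties the two together: psi_dot * r == Va
assert abs(yaw_rate(v_a, phi) * turn_radius(v_a, phi) - v_a) < 1e-9
```

At 15 m/s and 30° of bank this yields a turn radius of roughly 40 m.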


[Figure: four nested PID block diagrams acting on the airframe]

(a) Longitudinal low level controller loops (heading χ → roll φ → aileron).

(b) Lateral low level controller loops (altitude → pitch θ → elevator).

(c) Vertical low level controller / turn coordinator (y-acceleration → rudder).

(d) Throttle controller / Cruise control (velocity → throttle).

Figure 2.3: Low level PID controllers.


By controlling the yaw of the airframe, the course over ground can be controlled, as course and heading covary:

\[ \chi = \psi + \beta \tag{2.4} \]

Where χ is the course over ground and β is the side slip angle, due to the wind component. This time- and heading-varying offset can be trimmed away by the integral term of the controller.

2.1.3 Climb and Altitude control

The lateral controller, depicted in Figure 2.3b on the preceding page, is similar to the longitudinal one; the inner PID loop controls the pitch of the airframe by outputting the elevator control effort. The feedback is the current pitch angle, θa, and the setpoint is given by an outer control loop as the desired pitch angle, θd. An outer loop delivers the setpoint for the inner loop, in order to maintain a given altitude. The coupling between pitch and altitude is more intuitive, compared to φ → ψ, as:

\[ \dot{Alt} \approx V_a \cdot \sin(\gamma) \cdot \cos(\phi) \tag{2.5} \]

Where the course climb angle, γ, is given by the pitch angle, θ, and the angle of attack, α:

\[ \gamma = \theta - \alpha \tag{2.6} \]

Strictly speaking, (2.5) is an approximation, as the wind is neglected. Please refer to Figure 3.2b on page 29. However, this approximation is viable for altitude control purposes.

Similarly to the longitudinal controller, the output of the outer loop must be limited, as it is undesirable to climb or descend too steeply, at least under normal circumstances.

2.1.4 Miscellaneous control

The vertical controller outputs the rudder control effort, see Figure 2.3c on the preceding page. This controller will by default maintain the y-acceleration of the airframe at zero [38], such that coordinated turns are performed. As the servo used to control the rudder also controls the steering wheel of the airframe, a special controller must be used to handle take-off, landing and taxiing situations.

Figure 2.3d on the previous page depicts a simple PID controller maintaining the airframe's velocity, referred to as throttle or cruise control. It should be noted that either the indicated airspeed, Vi, or the true airspeed, Va, can be utilized as feedback for this controller. By using the true airspeed as feedback, time-of-flight and ground speed can be controlled. But if the airframe experiences strong tailwind, the velocity relative to the air might get too small to produce proper lift, possibly resulting in stall situations. A solution could be using the true airspeed by default and incorporating a safety mechanism based on the indicated airspeed.
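Such a feedback selection could be sketched as below; the stall speed and safety margin are invented placeholder numbers, not values from the thesis:

```python
def cruise_feedback(v_true, v_indicated, v_stall=8.0, margin=1.3):
    """Use true airspeed as cruise-control feedback by default, but switch
    to indicated airspeed when the speed relative to the air approaches
    stall (e.g. in strong tailwind). Thresholds are illustrative only."""
    if v_indicated < margin * v_stall:
        return v_indicated  # protect lift: regulate speed through the air
    return v_true
```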


Figure 2.4: Examples of circle fitting trajectory smoothing. Reprinted from [17, Fig. 13].

2.2 Flight Management System

The foremost task of the Flight Management System (FMS) is to ensure that the flight is executed correctly, by keeping the flight plan and its state of execution up-to-date. This includes a set of subtasks:

1. Interpolate the flight plan into a smooth, flyable trajectory.

2. Ensure that the low level controllers do in fact fly the trajectory.

3. Communicate state of plan and airframe to the tool and receive updatesfrom it.

2.2.1 Trajectory Smoothing

Trajectory smoothing is the task of interpolating the waypoints from the flight plan into a continuous, flyable trajectory. Normally, it is desirable to fly as straight as possible in between waypoints, as these will be the lines along which data must be sampled by the tool. Thus, flying straight lines in between the waypoints and turning in minimum circles at the line segment transitions does intuitively seem like a good idea. However, one must carefully consider how the circle segments are used: in Figure 2.4, Anderson et al. [17] illustrate three different approaches to this, as the FMS can either:

a) Fly the minimum distance path (left path)

b) Force the trajectory through the waypoint (center path)

c) Fly the equal length distance (right path)


(a) Nested PID loops can be used for trajectory tracking, by minimising the distance from the desired path. Partial reprint from [38, Fig. 17]

(b) Non-linear approach to trajectory tracking. The method aims to intersect the desired path a fixed distance ahead of the airframe. Reprinted from [57, Figure 1]

(c) A vector field can be described that guides the UAV towards the desired trajectory and gradually towards its direction. Partial reprint from [54, Fig. 1]

Figure 2.5: Reprinted figures of different approaches on trajectory tracking.

If the purpose of the flight is to gather data along the line segments, option b) seems advantageous, as it maximizes the line segments' length. Is the purpose on the other hand to get from A to B fast, option a) is the way to go. Finally, option c) is advantageous if the flight time is important, as the flown distance is easily calculated from the sum of waypoint distances. The information on which scheme to use should be part of the flight plan description, to ensure that the planner and FMS share their interpretations of the trajectory.
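For option a), the fillet between two consecutive line segments follows from elementary geometry; this sketch assumes 2D waypoints, a fixed turn radius and a genuine corner (it is not from the thesis implementation):

```python
import math

def fillet(a, b, c, radius):
    """Minimum-distance fillet (option a): replace the corner at waypoint b
    with a circular arc of the given turn radius, tangent to both segments.
    Returns (entry_point, exit_point, circle_center). Geometry sketch only;
    assumes a non-zero deflection and does not check segment lengths."""
    def unit(p, q):
        dx, dy = q[0] - p[0], q[1] - p[1]
        n = math.hypot(dx, dy)
        return (dx / n, dy / n)

    u1 = unit(a, b)  # inbound course direction
    u2 = unit(b, c)  # outbound course direction
    # Deflection angle between the two course lines
    dot = max(-1.0, min(1.0, u1[0] * u2[0] + u1[1] * u2[1]))
    delta = math.acos(dot)
    t = radius * math.tan(delta / 2.0)  # corner-to-tangent distance
    entry = (b[0] - t * u1[0], b[1] - t * u1[1])
    exit_ = (b[0] + t * u2[0], b[1] + t * u2[1])
    # Arc center lies on the corner bisector, radius/cos(delta/2) from b
    bx, by = u2[0] - u1[0], u2[1] - u1[1]
    n = math.hypot(bx, by)
    d = radius / math.cos(delta / 2.0)
    center = (b[0] + d * bx / n, b[1] + d * by / n)
    return entry, exit_, center
```

For a right-angle corner the arc enters one turn radius before the waypoint and exits one turn radius after it.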

2.2.2 Trajectory Tracking

A number of different approaches to trajectory tracking can be found in the literature. A few examples are picked and reviewed here. Commonly, they rely on the longitudinal low level controller, but differ in the way the desired heading is produced.

Ippolito [38] proposes a controller based on nested PID controllers. This controller essentially extends the longitudinal controller depicted in Figure 2.3a on page 18, by letting an additional PID loop control the desired


heading from the orthogonal distance between the line and the airframe, or track a circle segment, by comparing the distance from the circle center to the desired circle radius. The controller is depicted in Figure 2.5a on the preceding page. This scheme is temptingly simple, but not quite flawless. As the orthogonal distance between the airframe and the trajectory is used, the controller can only work reactively. This will inevitably result in the airframe overshooting the trajectory, before any error is present for the PID to react on.
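A minimal sketch of such an outer loop maps the signed orthogonal distance to a heading setpoint; the gain and saturation below are illustrative values, not from [38]:

```python
import math

def heading_setpoint(p, w1, w2, k=0.02, max_corr=math.radians(45.0)):
    """Outer loop of a nested (Ippolito-style) tracker: map the signed
    orthogonal cross-track distance to a desired heading. The gain k and
    the correction limit are illustrative values, not from the thesis.
    Planar math convention: angles counter-clockwise from the x-axis."""
    ux, uy = w2[0] - w1[0], w2[1] - w1[1]
    n = math.hypot(ux, uy)
    ux, uy = ux / n, uy / n
    path_heading = math.atan2(uy, ux)
    # Signed cross-track error, positive when left of the path
    e = ux * (p[1] - w1[1]) - uy * (p[0] - w1[0])
    correction = max(-max_corr, min(max_corr, k * e))
    return path_heading - correction
```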

Park et al. [57] propose a non-linear controller, which follows an imaginary rabbit a fixed distance ahead of the airframe, on the trajectory. A conceptual illustration is given in Figure 2.5b on the previous page. The turn radius is continuously set such that the airframe will intercept the trajectory at that very distance. Thus, the airframe will smoothly approach the trajectory and adjust the turn radius such that the curve will be tracked.

Nelson et al. [54] propose a vector field based navigation and control scheme for fixed wing UAVs. The general idea of this scheme is that various 'interest' maps can be combined. E.g. the UAV can be directed away from densely populated areas, towards unexplored areas etc. Thus this scheme is not necessarily classified as a strict trajectory tracker, but it is interesting indeed. In Figure 2.5c on the preceding page, such a vector field is depicted for a straight line segment of the trajectory.
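For the straight-line case, a common formulation of such a field commands a course that saturates at an approach angle far from the path and decays to the path direction on it; the approach angle and gain below are illustrative tuning values:

```python
import math

def vector_field_course(e, chi_path, chi_inf=math.radians(60.0), k=0.05):
    """Desired course for straight-line vector field following: far from
    the line the field points toward it at the approach angle chi_inf and
    decays to the line direction as the cross-track error e vanishes.
    chi_inf and k are illustrative, not values from [54]."""
    return chi_path - chi_inf * (2.0 / math.pi) * math.atan(k * e)
```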

2.2.3 Tool Interaction

Beyond the capabilities of a regular autopilot in a UAV, the autopilot of an ATC must be coupled with the tool it is carrying. This coupling is bidirectional, as the tool affects the dynamics of the airplane, while the autopilot provides information about the current state of the airframe. A use-case could be a camera tool; mounting a camera under the belly of the plane increases its weight and thus changes its flight characteristics, i.e. the take-off and landing speeds are increased, as well as the step response time to control input. Therefore, the tool must provide information about its size and weight to the autopilot, such that these parameters can be accounted for. Furthermore, the tool needs information from the autopilot. Assume the camera is mounted in a gimbal and the mission goal is to photograph a specific location on the ground from 360° around. The tool would then need to know where to point the gimbal, and when to take a photo. Should the tool fail, it could inform the autopilot, such that a second circle could be flown. The publish/subscribe [30] model is ideal for accommodating this sort of feature, as the tool can just subscribe to this information. The autopilot could likewise subscribe to the information provided by the tool.

Various tools can be contemplated, and a theoretical classification has beenmade:

1) Passive tool: Works independently of the airframe state. Ex. a camera filming the entire flight. Requires no autopilot support or interfacing.

2) Interactive tool: Executes commands sent from the FMS. Ex. take picture, turn gimbal here. Requires the tool to subscribe to the autopilot commands.

3) Controlling tool: Tool provides autopilot control commands. Ex. tool tracks a red car and provides target coordinates to the autopilot.

The various tool categories require different levels of support. 1) Passive requires little implementation in the autopilot software; as long as its dimensions and weight are known, it is just a matter of fixating it in the tool mount. 2) Interactive requires some support; typical message types must be predefined, such that the tool can subscribe to them. 3) Controlling requires a more complete integration, as it must be ensured that the commands sent from the tool do not result in hazardous control - i.e. the autopilot should supply failsafe mechanisms.
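A toy publish/subscribe bus illustrates the coupling for category 2); the topic name and message fields are invented for this sketch (the actual platform uses ROS topics):

```python
from collections import defaultdict

class Bus:
    """Toy in-process publish/subscribe bus (stand-in for ROS topics)."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        for cb in self._subs[topic]:
            cb(msg)

bus = Bus()
shots = []
# An 'interactive' camera tool (category 2) reacts to FMS commands.
# Topic name and message fields are invented for this sketch.
bus.subscribe("/fms/tool_cmd", lambda msg: shots.append(msg))
bus.publish("/fms/tool_cmd", {"cmd": "take_picture", "gimbal_deg": 45})
```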

At this stage of the project, tool interfacing has not been implemented. It has however played a major role in the design and architecture of both hardware and software, such that interfaces are provided and the software has been module based.

2.3 Flight Computer

At the heart of the ATC lies the main computer. Its tasks include, but are not limited to, state estimation, control, tool interfacing and telemetry linkage. In those regards, a number of requirements have to be met.

The ATC being intended as a development platform outlines the requirements for the main computer. It is desirable to have 'enough' processing power to try out new ideas, without compromising the main task of keeping the platform airborne. It is likewise desired to use the publish/subscribe model, as described earlier. This inherently includes some amount of computational message transport overhead, which must be accommodated. The performance requirements are similar to those of a modern desktop PC. Due to space and power constraints, carrying a desktop PC around is not an option. Furthermore, a variety of hardware interfaces must be available, including SPI, I2C, UART, USB and Ethernet. High performance embedded computers have these interfaces.
As seen in for example the Paparazzi project1, a dedicated processor like the STM32/LPC214x can handle the inertial navigation system. This approach yields a fine solution for a dedicated autopilot. However, it limits expandability and modularity, as all resources are dedicated to keeping the airframe in the air. Thus, there are very limited options for adding extra sensors and tools.
Another approach is to use a processor like an ARM, and run a Linux kernel on it. There are various distributors of this type of processor board. Most notable are the BeagleBoard and Gumstix. A comparison of the features offered by the two is listed in Table 2.1 on the next page.

1 http://paparazzi.enac.fr/wiki/Umarimv10


Feature                        Beagleboard xM        Gumstix FE
Dimensions                     78.74 x 76.2 mm       58 x 17 x 4.2 mm
Weight                         37 g                  5.6 g
Processor                      DM3730                OMAP3530
Clock                          1000 MHz              600 MHz
RAM                            512 MB                512 MB
Flash                          0 MB                  512 MB
Power                          5V USB or DC power    3.3V DC
Video                          DVI/LCD/S-Video       DVI/LCD
DSP                            800 MHz               220 MHz
JTAG                           ✓                     ✓
UART                           ✓                     ✓
USB OTG                        ✓                     ✓
USB HOST                       ✓                     ✓
Ethernet                       ✓                     ✗
Audio I/O                      ✓                     ✓
Camera connector               ✓                     ✓
microSD                        ✓                     ✓
Bluetooth                      ✗                     ✓
Wi-Fi                          ✗                     ✓
Expansion boards               ✗                     ✓
I2C - SPI - ADC - PWM - GPIO   ✓                     ✓

Table 2.1: Comparison of BeagleBoard xM and Gumstix FE

BeagleBoard has the advantage of integrated Ethernet; however, it comes at the price of a rather large motherboard type PCB, similar to that of a standalone PC, whereas the Gumstix provides WiFi and a processor-only board, meant for use on expansion boards. Thus, the design of the Gumstix is more in line with the design criteria of this system - allowing for module based hardware.
Gumstix has been deemed the optimal compromise between size, interfaces and computational power. Gumstix offers small self-contained computers, capable of running higher abstraction operating systems, such as Linux. The form-factor allows for easy installation in custom designed boards. Furthermore, Gumstix has a vibrant open source based user community, using OpenEmbedded2 and BitBake. This allows for quick access to support on numerous hardware and software related issues.

2.4 Conclusion

In this Chapter, various aspects of automated flight have been examined, with emphasis on modularity and independence of specific fixed wing airframes. By splitting the executive autopilot into a Flight Management System and a controller layer, airframe specific tunings only need to be handled at the lowest layer. Abstract tasks, such as tool interfacing, trajectory smoothing and flight

2http://www.openembedded.org/


control can be handled by the generic Flight Management System. Although such a system has not been implemented, the lower control layer it relies on has been. The PID controller scheme was found useful for this task, which inherently means that well known methods can be used to tune the controller for a specific airframe. In Section 5.6.2 on page 65, it will be verified by field tests that the implemented parts of the controller are indeed capable of controlling the airframe. A Gumstix has been chosen as the main flight computer, such that interfacing with various tools can be done with minor effort, utilizing Linux and ROS. The lack of realtime capabilities of the Linux operating system is dealt with by letting a small coprocessor interface with the R/C components. This coprocessor also makes sure that the operator can reclaim manual control of the airframe, in case of an operating system crash.


Chapter 3

State Feedback

To enable control of the airframe, state feedback is essential. Together with setpoints, this feedback will enable real time control of the airframe, as discussed in Chapter 2. Several state variables have to be monitored, to enable low- as well as higher level control. The airframe's bank, climb and heading need to be known. Rotation from world to plane body frame is done using Euler angles, in the order Z-Y-X. This de facto standard does intuitively relate ψ to heading, θ to elevation and φ to bank, as seen in Figure 3.1b on page 28. Combined, these three will be referred to as the pose of the airframe. A subset of the pose, excluding heading information, will be referred to as the attitude of the airframe.

The airframe coordinate frame is positioned (xa, ya, za) = Nose, Starboard, Down. The global coordinate frame used is Universal Transverse Mercator (UTM). The Cartesian UTM coordinate system is chosen over the spherical latitude/longitude system, as it eases calculation. The UTM frame is positioned (x0, y0, z0) = North, East, Down. However, as it is unintuitive that negative movement on the z0-axis maps to positive climb, the altitude is referred to as positive above ground or sea level, as seen in Figure 3.1a on page 28. This figure also shows the relation of the position of the airframe, (Pn, Pe), to the global UTM frame. It is assumed that the UTM frame is a valid inertial reference frame. Although this is not exactly true, due to the rotation of the Earth and the solar system, the massive scale of these motions reduces their impact on our comparatively much faster moving system, and they are thus neglected.

It is vital to know the velocity of the airframe for control purposes, but also for attitude estimation, as will be discussed in Section 3.2 on page 33. In aeronautics, velocity is a more complicated issue compared to ground based vehicles, as a total of three different velocities must be considered:

1. The Indicated airspeed, Vi, is the velocity of the airframe with respect tothe surrounding air.

2. The True airspeed, Va, is the total 3D velocity of the airframe, with respect to the inertial reference frame. On a completely windless day, Va and Vi are identical.


3. The Speed over ground or Course speed, Vg, is the total 2D (x-y) velocityof the airframe, with respect to the inertial reference frame.

Please refer to Figures 3.2a and 3.2b on page 29 for a graphical visualisation of the different velocities and their relations. Especially Vi and Va must be evaluated separately, as Vi is important with respect to flight dynamics (e.g. stall speed, control surface response) and Va with respect to inertial measurements.

For control purposes, it is convenient to know the direction and magnitude of the wind. E.g. this will allow for better dead reckoning navigation in the case of a lost GPS fix. Likewise, the Flight Management System will be capable of controlling the course heading, χ, rather than the airframe heading, ψ.

A complete list of variables used to describe the state of the airframe is given in Table 3.1.

Variable   Description               Type             Unit

Vi         Indicated Airspeed        Direct           [m/s]
Va         True Airspeed             Derived          [m/s]
Vg         Speed over Ground         Direct           [m/s]
φ          Roll/bank angle           Fused            [rad]
θ          Pitch/elevation angle     Fused            [rad]
ψ          Yaw/heading angle         Fused            [rad]
Pn         Position north            Fused / Direct   [m]
Pe         Position east             Fused / Direct   [m]
Alt        Altitude                  Fused / Direct   [m]
Wn         Wind in north direction   Unobservable     [m/s]
We         Wind in east direction    Unobservable     [m/s]
α          Angle of Attack           Unobservable     [rad]
β          Side slip angle           Derived          [rad]
χ          Course heading            Derived          [rad]
γ          Course climb              Derived          [rad]

Table 3.1: Table of state variables to be determined. Quantities marked Direct are directly measurable; Fused are deduced by fusing various measurements; Unobservable quantities are not measurable and must be estimated; Derived quantities are redundant and can be calculated from other variables.

The remainder of this Chapter is structured as follows. Section 3.1 on the following page will examine sensors applicable for measuring the quantities listed in Table 3.1. In Section 3.2 on page 33, methods and kinematic equations for fusing noisy data and estimating unobserved quantities will be explored. Noise modelling will briefly be discussed in Section 3.3 on page 42. Finally, Section 3.4 on page 43 will assess the overall performance of the implemented methods.


[Figure: two coordinate frame illustrations]

(a) The airframe positioned in the world frame.

(b) ZYX Euler rotation. First ψ around the world frame z-axis, then θ around the new intermediate y-axis and finally φ around the body frame x-axis.

Figure 3.1: Illustrations of position and orientation variables, relating the body frame to the world frame.

3.1 Sensors

A number of different sensors must be combined to provide the required information. This section will review the different sensor types capable of measuring the variables. Later sections will review methods for extracting the information from the noisy sensor data streams.

3.1.1 Velocities and course parameters

Vi can be measured using a pitot tube. This sensor is a specialized anemometer and operates by comparing the static air pressure with the dynamic pressure induced by the airframe's velocity through the surrounding air. A variety of other types of anemometers can be used to measure the airspeed, but the pitot tube is the most commonly used in aeronautics.

If the airframe pose, angle-of-attack and the wind components are known, the true airspeed, Va, can be calculated from the indicated airspeed, Vi:

\[ V_a = \sqrt{(V_i \cdot c\gamma \cdot c\psi + W_n)^2 + (V_i \cdot c\gamma \cdot s\psi + W_e)^2 + (V_i \cdot s\gamma)^2} \tag{3.1} \]

where the course climb angle γ = θ − α, and c and s abbreviate cosine and sine. Similarly, the speed over ground, Vg, can be determined from the indicated airspeed:

\[ V_g = \sqrt{(V_i \cdot c\gamma \cdot c\psi + W_n)^2 + (V_i \cdot c\gamma \cdot s\psi + W_e)^2} \tag{3.2} \]

GPS based sensors can also be used to determine Vg directly, and from this measurement Va can be determined, if the airframe's attitude is known:

\[ V_a = \frac{V_g}{c\gamma} \tag{3.3} \]

Which also implies that:

\[ V_g = V_a \cdot c\gamma \tag{3.4} \]


From the wind components, indicated airspeed and attitude, the course over ground can be determined [19]:

\[ \chi = \tan^{-1}\left(\frac{V_i \cdot s\psi \cdot c\gamma + W_e}{V_i \cdot c\psi \cdot c\gamma + W_n}\right) \tag{3.5} \]

\[ \chi = \psi - \beta \tag{3.6} \]

Which also implies that β = ψ − χ.

As the aforementioned velocities refer to the inertial reference frame, they can also be estimated over short time spans by integration of the airframe acceleration, but long term integration will soon lose precision due to various noise sources.

[Figure: two velocity-triangle illustrations]

(a) Top view, showing the relation between yaw ψ, wind components Wn and We, side slip angle β, course heading χ, indicated airspeed Vi and true airspeed Va.

(b) Side view, showing the relation between wind W, angle of attack α, course climb angle γ, indicated airspeed Vi, true airspeed Va, and speed over ground Vg.

Figure 3.2: Illustration of angles and velocities related to the airframe movement.

Pitot model

The pitot tube is connected to a differential pressure sensor, such that Bernoulli's equation (3.7) can be applied:

\[ P_t = P_s + \frac{\rho_{air} \cdot V_i^2}{2} \tag{3.7} \]


Where Pt is the total pressure, Ps is the static pressure (the two pressures given by the pitot tube) and ρair is the air density. Thus, all variables are accounted for, as the differential pressure sensor measures the dynamic pressure Pd = Pt − Ps. Therefore:

\[ V_i = \sqrt{\frac{2 \cdot P_d}{\rho_{air}}} \tag{3.8} \]

The air density, ρair, varies with temperature, humidity and pressure, but is in the vicinity of 1.22 kg/m³ under 'normal' conditions.
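Equation (3.8) is straightforward to implement; the density below is the nominal value from the text:

```python
import math

RHO_AIR = 1.22  # nominal air density [kg/m^3], varies with conditions

def indicated_airspeed(p_dynamic, rho=RHO_AIR):
    """Eq. (3.8): Vi = sqrt(2 * Pd / rho), with Pd from the differential
    pressure sensor behind the pitot tube."""
    return math.sqrt(2.0 * p_dynamic / rho)
```

A dynamic pressure of about 137 Pa thus corresponds to roughly 15 m/s of indicated airspeed.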

3.1.2 Position

The position of the airframe within the reference frame can be measured by GPS (Global Positioning System). This system uses global landmarks in the form of satellites. Satellites with known positions and synchronised clocks frequently transmit radio signals containing a timestamp. By receiving timestamps from multiple satellites, the differences in time-of-flight of the radio signals can be determined, and thus the position of the receiver. Integrated GPS receivers are available in small modules and can be interfaced via a serial communication line. The update rate is typically 1 to 10 Hz. The GPS outputs the position in spherical longitude/latitude coordinates. These coordinates need to be converted to UTM northing and easting coordinates. This conversion is somewhat complex, but is already an integrated part of FroboMind and thus available without further effort.

3.1.3 Altitude

The altitude of the airframe can be measured using GPS. However, the resolution of the altitude from this sensor is somewhat limited, due to satellite versus receiver geometry. The static air pressure, Ps, can also be used, as the pressure decreases up through the atmosphere [19].

\[ P_s = \rho_{air} \cdot Alt \cdot g \iff Alt = \frac{P_s}{\rho_{air} \cdot g} \tag{3.9} \]

Specialized barometric altimeters are available in IC sized packages. These sensors precisely measure air pressure and temperature, from which the altitude can be calculated. However, as atmospheric pressure varies with the weather, the pressure at sea level must be known to calculate the absolute altitude above sea level. If, on the other hand, the altitude above the ground station is adequate, a series of measurements can be conducted on the ground before take-off and used as reference. During long flights or changes in weather, the pressure at the surface will vary and this reference will not be accurate.
In certain situations, the barometric altimeter is not precise enough. For instance, in take-off and landing situations, the distance to the runway must be known on cm-scale. An ultrasonic range finder is useful at this scale and is mounted on the belly of the airframe. The sensor operates on the time-of-flight principle: an ultrasonic chirp is transmitted and detected when it is reflected by the surface of the runway. The time of travel is proportional to the distance travelled, through the speed of sound. The range of this type of sensor is however limited to approximately 7 meters.
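Applying the relation in Eq. (3.9) to the pressure difference against a ground reference gives the relative altitude; the constant-density assumption limits this sketch to modest altitude spans:

```python
RHO_AIR = 1.22  # nominal air density [kg/m^3]
G = 9.81        # gravitational acceleration [m/s^2]

def altitude_above_reference(p_static, p_ref, rho=RHO_AIR, g=G):
    """Altitude above the point where p_ref was sampled (e.g. the ground
    station before take-off), treating the air column as constant density.
    Drifts as surface pressure changes with the weather, as noted above."""
    return (p_ref - p_static) / (rho * g)
```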


3.1.4 AHRS

The Attitude and Heading Reference System (AHRS for short) serves, as thename suggests, as a reference for the airframe’s orientation in 3D.

Historically, mechanical gimbal-mounted spinning mass gyroscopes have been used for this type of instrumentation, as they maintain their orientation with reference to the inertial reference frame as the airframe is maneuvered. While these mechanical systems are bulky, expensive and heavy, MEMS (Micro Electronic Mechanical System) technology has introduced much cheaper and smaller rate gyroscopes. These devices are based on vibrating micro structures and are available in IC-sized packages for direct PCB mounting. As the airframe rotates, the vibrating mass tends to maintain its direction of travel and will therefore exert a force (the Coriolis force) on its supports. This force can be measured and is proportional to the rate of rotation [53]. These devices do not sense the orientation of the airframe, but the rate of rotation. Integration of the rate of rotation will yield the actual orientation. The integration must however be done in the Euler sequence (3.11), as rotation is not cumulative.

\[
\begin{bmatrix} \dot{\phi}_k \\ \dot{\theta}_k \\ \dot{\psi}_k \end{bmatrix}
=
\begin{bmatrix}
1 & s\phi_k \cdot t\theta_k & c\phi_k \cdot t\theta_k \\
0 & c\phi_k & -s\phi_k \\
0 & \frac{s\phi_k}{c\theta_k} & \frac{c\phi_k}{c\theta_k}
\end{bmatrix}
\cdot \omega_k \tag{3.10}
\]

\[
\begin{bmatrix} \phi_n \\ \theta_n \\ \psi_n \end{bmatrix}
= \int_0^n \begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} dt \tag{3.11}
\]

\[
\approx \sum_{k=0}^{n} \begin{bmatrix} \dot{\phi}_k \\ \dot{\theta}_k \\ \dot{\psi}_k \end{bmatrix} \cdot \Delta t_k \tag{3.12}
\]

Where ωk is the current angular rotation rate at time k and Δtk is the sampling period at time k. Sensor imperfections, discrete time resolution, vibrations and integration errors will accumulate over time, and eventually render the information useless. Hence, a way of measuring the attitude and heading with an absolute reference is necessary and will be reviewed here.
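A forward-Euler implementation of Eqs. (3.10) and (3.12) could look as follows; the function and variable names are illustrative:

```python
import math

def euler_rates(phi, theta, p, q, r):
    """Eq. (3.10): body rotation rates (p, q, r) to Z-Y-X Euler angle
    rates. Singular at theta = +/-90 deg (gimbal lock), like Eq. (3.10)."""
    s, c, t = math.sin, math.cos, math.tan
    phi_dot = p + s(phi) * t(theta) * q + c(phi) * t(theta) * r
    theta_dot = c(phi) * q - s(phi) * r
    psi_dot = (s(phi) / c(theta)) * q + (c(phi) / c(theta)) * r
    return phi_dot, theta_dot, psi_dot

def integrate_gyro(phi, theta, psi, omega, dt):
    """One forward-Euler step of the sum in Eq. (3.12); omega = (p, q, r)
    is the sampled gyro rate vector in rad/s."""
    dphi, dtheta, dpsi = euler_rates(phi, theta, *omega)
    return phi + dphi * dt, theta + dtheta * dt, psi + dpsi * dt
```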

Attitude sensing

The direction of Earth’s gravitational field, relative to the airframe, can usedto determine the attitude (pitch and roll). This acceleration can be measuredusing MEMS accelerometers, like rate gyroscopes available in IC-sized pack-ages. These devices consist of micro machined masses, suspended by cantileverbeams, acting as spring elements. Accelerations acting on the mass will deflectthe suspending beams and move the proof mass, relative to the base. Thisdisplacement is typically measured by the changing capacitance to the mass,with respect to end walls. During stable, level flight, the created lift is equal togravity, such that the acceleration is exactly −g. Maneuvering will change thedirection of the airframe, by using the control surfaces to produce centripetalacceleration in the desired direction, according to Newtons 3rd law of motion.


As these sources of acceleration are indistinguishable, the sensor is useless if the centripetal accelerations are not estimated. Additionally, acceleration from various noise sources, such as vibrations, wind gusts etc., will affect the airframe. These noise sources must be filtered out.
Alternatively, infrared thermopile sensors can be used to detect the horizon and thus the attitude of the airframe. This method works by a pair of thermopiles facing opposite directions in a collinear configuration (see Figure 3.3). As the airframe banks, one thermopile 'sees' more sky and the other more Earth. As the Earth is relatively warmer than the sky, the attitude angle can be estimated from this temperature difference. However, it is not trivial to extract the exact magnitude of the roll and pitch angles from this information, as the temperature of the Earth and the sky varies. Hilly terrain will likewise distort the information provided by the sensors. Apart from these imperfections, the system is simple and intuitive. Some issues might arise in the implementation: the sensors need to be mounted externally on the airframe, with clear sight of the Earth and sky. This will constrain the mounting possibilities. While the system is attractively simple, it is dismissed on the basis of the aforementioned imperfections.

Figure 3.3: Two thermopiles on opposite sides of the airframe can be used to determine the roll angle. Reprinted from http://paparazzi.enac.fr/wiki/Sensors/IR [14].
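Returning to the accelerometer option: when gravity dominates the measurement (no centripetal component), roll and pitch follow directly from the measured specific force. The sign convention assumed below is a NED body frame where a level airframe at rest measures roughly (0, 0, −g); other sensors may differ:

```python
import math

def attitude_from_accel(fx, fy, fz):
    """Roll and pitch from the measured specific force (fx, fy, fz),
    assuming gravity is the only acceleration acting. NED body frame,
    z down, level at rest ~ (0, 0, -g); conventions vary by sensor."""
    roll = math.atan2(-fy, -fz)
    pitch = math.atan2(fx, math.hypot(fy, fz))
    return roll, pitch
```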

Heading sensing

As the gravitational field points towards the center of the Earth, it does not contribute any information about the airframe's heading. However, Earth's magnetic field is useful for determining the airframe's heading. A digital magnetometer can be used to measure the field strength on three orthogonal axes and thus provide a three dimensional vector pointing towards the North Magnetic Pole. A number of constraints must be considered when using Earth's magnetic field for heading determination:

1. Earth’s magnetic field is not uniform across the planet.

2. Other (high-power) electrical equipment and soft iron masses affect the magnetic field around the airframe.

3. As the airframe pitches and rolls, the sensed field rotates, as it is measured with respect to the base frame on all axes, not only heading.


If the location is known, the first problem is easily overcome by a look-up table based on the World Magnetic Model [16]. Ignoring the declination of the magnetic field will in some areas of the world be a reasonable approximation. In other areas, especially near Earth's magnetic poles, the magnetometer will be useless if the exact location is not known and the specifications of the magnetic field cannot be looked up. Electromagnetic noise from on-board power components is harder to eliminate. Shielding the sensor is clearly not a possibility, as such a shield would not discriminate signal from noise. Shielding the power system will probably prove cumbersome. However, by positioning the magnetometer as far as possible from the power system, the magnitude of this issue can be reduced.

Alternatively, a GPS based sensor can be used to measure the heading. As this sensor calculates the heading from changes in position, it will obviously only work when the airframe is in motion. Also, the sensor measures the course over ground, rather than the actual orientation of the airframe. These two headings are not necessarily the same, due to wind and thus the side slip angle, as described in Section 3.1.1 on page 28. Alternatively, two GPS receivers can be used in a differential configuration. This would provide heading information even when standing still.

3.2 Sensor fusion

In order to obtain a precise and fast-responding state estimate, suitable as feedback for the control loops, some of the aforementioned sensors must be fused. Sensor fusion is the discipline of combining multiple noise-infested sensor data streams into a combined estimate of the actual state. E.g. both accelerometers and gyroscopes are indeed capable of measuring the attitude of the airframe, but neither of them is capable of doing so very well. Vibrations and un-modelled accelerations impact the accelerometer measurements, such that short-term extraction of the state is unsuitable. Similar sources of noise apply to magnetometer measurements. Integration errors, on the other hand, render the gyroscopes' long-term precision worthless. The authors have conducted a survey of suitable sensor fusion methods for UAVs (see Appendix H, 'Sensor Fusion for Miniature Aerial Vehicles' on page 105). This section will describe the major findings of this survey and derive the mathematical formulation of an applicable state estimator.

3.2.1 Methods

Several methods of varying complexity are capable of sensor fusion. The simplest method is complementary filtering [35, 49]. Complementary filtering is useful for combining fast but biased data with slow, noise-disturbed but absolute data. This is done by a pair of complementary filters, one high-pass and the other low-pass, as seen in Figure 3.4 on the following page. Pilot experiments have shown that this filter type is indeed capable of fusing gyroscope and accelerometer data into a viable attitude estimate. The complementary filter is, however, not a state estimator, and is as such not capable of incorporating unobserved variables, such as the wind components, angle of attack, gyroscope biases etc.
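As a concrete illustration of the principle, for a single roll axis the high-pass/low-pass pair reduces to the well-known one-line complementary filter. The sketch below is illustrative only; the time constant `tau` and the signal names are assumptions, not taken from the thesis:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, tau=0.5):
    """Fuse a fast but drifting gyro integral (high-pass path) with a slow,
    noisy but absolute accelerometer angle (low-pass path).
    alpha = tau / (tau + dt) sets the crossover between the two paths."""
    alpha = tau / (tau + dt)
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Illustrative run: a biased gyro (0.05 rad/s drift, true rate zero) fused with
# an accelerometer reading the true roll angle of 0.3 rad.
estimate = 0.0
for _ in range(1000):
    estimate = complementary_filter(estimate, gyro_rate=0.05,
                                    accel_angle=0.3, dt=0.01)
```

The low-pass path continuously pulls the estimate towards the absolute accelerometer angle, so the gyro bias causes only a small steady-state offset instead of unbounded drift.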


Figure 3.4: Basic complementary filter. The high-pass filter (top) and low-pass filter (bottom) filter out the weakness of each sensor. As the sensors have opposite weaknesses, their strengths can be combined.

Bayes' filters [18, 21, 71], on the other hand, are recursive state estimators. Bayes' filters work under the assumption that the true state, x, is not directly observable; it is a hidden Markov process. However, the sensor measurements, z, are derived from the true state. Because of the Markov assumption, the probability distribution function (PDF) of the current state, given the previous state, p(xk|xk−1), is independent of the history of states, such that

p(xk|xk−1,xk−2 · · ·x0) = p(xk|xk−1) (3.13)

As the observation, z, is derived from the unobserved true state, it is likewise fair to conclude that it depends only on the current state and not the history of states.

p(zk|xk,xk−1 · · ·x0) = p(zk|xk) (3.14)

Bayes' theorem [51, 71] provides the mathematical foundation for describing the probability of the state given a measurement, based on the probability of the measurement given the state. According to (3.15), it is necessary to describe the probabilities of the state and the measurement.

p(x|z) = p(z|x) · p(x) / p(z) (3.15)

Implementations of Bayes' filters work in a two-step recursion. First, the PDF is propagated, using the current system input, uk, and a model of the system in question. As the PDF is propagated, it is smeared a bit in order to reflect the uncertainty associated with the state transition, wk, referred to as the state transition noise. The projected state estimate is referred to as the a priori state estimate, x−k, as this is the estimate prior to incorporation of a measurement. The second step is to incorporate a measurement, zk. Based on Bayes' theorem, this measurement describes a PDF of the state. As the measurement-based state estimate is incorporated, the PDF is focused to reflect that knowledge has been acquired. The new estimate is referred to as the a posteriori state estimate, x+k.
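The two-step recursion can be illustrated with a one-dimensional histogram filter over a small circular grid. The grid size, smearing kernel and likelihood below are illustrative toys, not part of the thesis's estimator:

```python
def predict(belief, kernel):
    """Time update: smear the belief with the transition-noise kernel
    (circular convolution), reflecting uncertainty added by the motion."""
    n = len(belief)
    prior = [0.0] * n
    offset = len(kernel) // 2
    for i, b in enumerate(belief):
        for k, w in enumerate(kernel):
            prior[(i + k - offset) % n] += b * w
    return prior

def update(prior, likelihood):
    """Measurement update: Bayes' rule p(x|z) is proportional to
    p(z|x) * p(x); multiply and renormalize."""
    posterior = [p * l for p, l in zip(prior, likelihood)]
    s = sum(posterior)
    return [p / s for p in posterior]

belief = [0.2] * 5                                    # uniform a priori belief
prior = predict(belief, [0.25, 0.5, 0.25])            # smear with noise kernel
posterior = update(prior, [0.1, 0.1, 0.6, 0.1, 0.1])  # measurement favours cell 2
```

After the update, the probability mass concentrates on the cell the measurement favours, which is exactly the "focusing" described above.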

Many different implementations of Bayes' filters exist. They mainly differ in the way they represent the PDF of the state estimate. Generally speaking, the more detailed the description of the state estimate needs to be, the more computation is required. E.g. particle filters [68] are capable of representing an arbitrary PDF, but in limited resolution. The Kalman filter [27, 43] represents the PDF as a Gaussian distribution and can thus use a parametric description, containing the


mean and covariance. This representation does limit the PDF to be symmetric and unimodal (i.e. only one hypothesis of the state is kept). The original formulation [43] uses the state space model to propagate the state estimate forward in time. The filter can, however, be extended to work for nonlinear systems as well. The Kalman filter is chosen, as the complexity of multimodal PDFs is not required in this application. The Kalman filter will be reviewed here.

As mentioned, the Kalman filter uses the linear state space model to projectthe state estimate and covariance forward in time:

x−k = x+k−1 + (A · x+k−1 + B · uk) · ∆tk (3.16)

P−k = P+k−1 + (A · P+k−1 · A⊤ + Q) · ∆tk (3.17)

Where the state transition model, A, is used to project the state estimate, x, and the state covariance matrix, P, forward in time, by the time step ∆tk = tk − tk−1. The system input, u, is projected into the state vector by the input model, B, and the transition noise covariance, Q, is added to the state covariance matrix. Note that A and B are not step-dependent, as the system is linear. A measurement is incorporated in the measurement update step:

Kk = P−k · H⊤ · (H · P−k · H⊤ + R)⁻¹ (3.18)

x+k = x−k + Kk · (zk − H · x−k) (3.19)

P+k = (I − Kk · H) · P−k (3.20)

Where K is the Kalman gain, which is calculated from the state covariance matrix, the observer model, H, and the observation covariance matrix, R. Based on the Kalman gain and the innovation term, zk − H · x−k, the a posteriori state estimate is calculated. Finally, the state covariance matrix is corrected to reflect the information gain introduced by the measurement. As mentioned, the Kalman filter is limited to linear systems by the linear state space model. However, it can be extended [63] to work for non-linear systems as well. This is done by using a pair of non-linear models, f(x,u,w) and h(x,v):

xk = f(xk, uk, wk) (3.21)
   = Ak · xk + Bk · uk + Wk · wk (3.22)

zk = h(xk, vk) (3.23)
   = Hk · xk + Vk · vk (3.24)

f(x,u,w) models the state transition. It takes the current state vector, xk, the current system input vector, uk, and the noise vector, wk, as inputs. wk is a random noise vector, drawn from a zero-mean Gaussian distribution with covariance Q. f(x,u,w) need not be formulated in the linear form (3.22), but the linear models Ak and Wk must be calculated for each time update. These can be calculated as the Jacobians, i.e. by linearising around the state and noise vectors respectively, such that:

Ak[i,j] = ∂f[i]/∂x[j] (xk, uk) (3.25)


and

Wk[i,j] = ∂f[i]/∂w[j] (xk, uk) (3.26)

h(x,v) models the sensor output, based on the current state estimate, xk, and the measurement noise vector, vk, which is drawn from a zero-mean Gaussian distribution with covariance R. h(x,v) need not be in the linear form (3.24), but the linear models Hk and Vk need to be known. They can be found by linearising the model around the state and noise vectors respectively, such that:

Hk[i,j] = ∂h[i]/∂x[j] (xk) (3.27)

and

Vk[i,j] = ∂h[i]/∂v[j] (xk) (3.28)

The time update step of the extended Kalman filter [18, 21, 71] (EKF) is formulated as:

x−k = x+k−1 + f(x+k−1, uk, 0) · ∆tk (3.29)

P−k = P+k−1 + (Ak · P+k−1 · A⊤k + Wk · Q · W⊤k) · ∆tk (3.30)

Where the state is propagated using the state transition function, f(x,u,w). Note that the noise input is set to zero, as no intentional noise is added. The state transition noise covariance matrix, Q, is projected by the state transition noise model, Wk, before it is added to the state covariance matrix. The measurement update step is formulated as:

Kk = P−k · H⊤k · (Hk · P−k · H⊤k + Vk · R · V⊤k)⁻¹ (3.31)

x+k = x−k + Kk · (zk − h(x−k, 0)) (3.32)

P+k = (I − Kk · Hk) · P−k (3.33)

Where the measurement noise model, Vk, is used to project the measurement noise covariance matrix, R, and the measurement model, h(x,v), is used in the innovation term. Also here, no intentional noise is injected, so vk = 0.

This concludes the theoretical description of the EKF used in the system. It was found that the EKF was the most suitable for state estimation, and the formulation of this filter has been reviewed and explained. The state estimator can be configured in various ways; this will be investigated further in the next subsection. After this, the kinematic models needed for the filtering are derived.
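To make the recursion concrete, the sketch below instantiates (3.29) to (3.33) for a toy scalar system: the state is an angle driven by a measured rate, so f(x, u, 0) = u and the Jacobian A = 0, observed through the nonlinear measurement h(x) = sin(x), with W = V = 1. All numbers are illustrative, and this is not the thesis's estimator:

```python
import math

def ekf_step(x, P, u, z, dt, Q, R):
    """One predict/update cycle of a scalar EKF with f(x, u) = u, h(x) = sin(x)."""
    # Time update (3.29)-(3.30): with A = 0 and W = 1, covariance grows by Q*dt
    x_pred = x + u * dt
    P_pred = P + Q * dt
    # Measurement update (3.31)-(3.33): the Jacobian of h is H = cos(x)
    H = math.cos(x_pred)
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - math.sin(x_pred))
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Illustrative run: start with a wrong initial estimate and track the true angle.
x, P, truth = 0.5, 1.0, 0.0
for _ in range(200):
    truth += 0.1 * 0.01                       # true angle driven at 0.1 rad/s
    x, P = ekf_step(x, P, u=0.1, z=math.sin(truth), dt=0.01, Q=1e-3, R=1e-2)
```

Because the measurement is nonlinear, the gain is computed around the predicted state at every step; the covariance P shrinks from its pessimistic initial value towards a small steady-state set by the Q/R ratio.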

3.2.2 Estimator architecture

The architecture of the state estimator is vital to performance, in terms of both calculation efficiency and precision. Also from a debugging and development point of view, some constructions are easier to work with than others. Three different architectures have been considered.


The first scheme is the obvious single 9-state filter [61], with the measurement vector composed of acceleration, α, magnetic field strength, β, and UTM position and altitude, utm, as depicted in Figure 3.5a on the next page. However, as the four sensors (accelerometer, magnetometer, GPS receiver and barometric altimeter) needed to compose the measurement vector have very different update rates, the lowest frequency must be used; in this case the GPS receiver, with an update frequency of approximately 5 Hz. This measurement frequency is not acceptable for attitude estimation.

It is, however, possible to use separate measurement vectors [56] within the same estimator, by defining different H, V and R matrices for each measurement vector, as illustrated in Figure 3.5b on the following page. With this arrangement, the measurement rate dependency between the sensors can be eliminated. To simplify the problem even more, a third approach can be taken: splitting the estimator into smaller sub-estimators [28, 60], responsible for coherent parts of the state vector. Such a scheme is depicted in Figure 3.5c on the next page. The advantage of this decoupling is that debugging and development can be done in a segmented fashion. This structure also reduces the size of the matrices remarkably, which is advantageous as it reduces the need for computational resources. However, this segmentation results in lost coupling between measurement and state, i.e. the magnetometer cannot aid attitude estimation, and the GPS cannot aid heading estimation. But as the primary and intended sensor data is still available to the respective estimators, this is not considered a major setback. On this basis, the cascaded scheme has been chosen for the state estimator.

3.2.3 Kinematic models

The Extended Kalman filter, which was chosen for state estimation earlier in this section, requires kinematic models of the system. These models are used to propagate the estimate and its covariance forward in time, as discussed in subsection 3.2.1 on page 33. A total of three different estimators are needed, as explained in the previous subsection. The outlines of the models for each estimator will be given in the following subsections. Appendices A to C on pages 80–88 describe the derivations in more detail.

Attitude kinematic model

The attitude is estimated by fusing gyroscope and accelerometer data. The gyroscope readings are used as the system input vector, propagating the estimate. Accelerometer data is used to correct the estimate. Up to this point, the airframe pose has been described in Euler angles, as these naturally map directly to the flight parameters bank, climb and heading. Thus they provide an intuitive representation of the airframe's rotation. However, there is a snake in paradise, as the Euler angle representation is numerically unstable near the limits of definition: as the pitch angle approaches θ ≈ ±π/2, the roll and yaw axes are collinear and a degree of freedom is lost. This situation is referred to as a gimbal lock, as the mathematical gimbal is locked from rotating freely in any direction. Also, when the pitch angle is zero and the roll angle approaches φ ≈ ±π/2, the pitch and yaw axes align and a similar situation arises. In order to cope with this


(a) 9-state estimator. (b) 9-state estimator with three different measurement vectors. (c) Cascaded 2+1+6 state estimator.

Figure 3.5: Different estimator architectures considered. Vi is the indicated airspeed, ω is the vector of angular rotation rates, α is the acceleration vector, and β is the magnetic field strength vector. All are measured in the plane body frame. utm is the measured position vector in the inertial reference frame.


issue, the quaternion rotation representation is used in the attitude estimator instead. This representation is composed of a hypercomplex number with a total of four parameters. The representation is thus redundant, as a rotation in three dimensions strictly needs only three parameters.

q = w + x · i+ y · j + z · k (3.34)

i2 = j2 = k2 = i · j · k = −1 (3.35)

By constraining the quaternion to have unit magnitude, a total of three degrees of freedom remains:

√(w2 + x2 + y2 + z2) = 1 (3.36)

Rotation of one quaternion by another can be expressed by quaternion multiplication:

qb0 = qa0 · qba (3.37)

Where qyx is the rotation from frame x to frame y. Quaternion multiplication can be done in matrix form:

[wb0]   [wa0 −xa0 −ya0 −za0]   [wba]
[xb0] = [xa0  wa0 −za0  ya0] · [xba]
[yb0]   [ya0  za0  wa0 −xa0]   [yba]
[zb0]   [za0 −ya0  xa0  wa0]   [zba]    (3.38)

Letting qa0 be the state vector, it can be forward projected in the time update by the angular velocities in the airframe, measured by the rate gyroscope. Under the small angle approximation, the infinitesimal rotation can be written in quaternion form as:

∆q = [1   ωx/2 · ∆t   ωy/2 · ∆t   ωz/2 · ∆t]⊤ (3.39)

Equations (3.38) and (3.39) form the basis of the state transition function, f(x,u,w). The function is fully derived in (A.1) to (A.9) on page 80. The final formulation is given here:

f(x,u,w) = ẋ (3.40)

    [−x −y −z]   [ωx + ηωx]
  = [ w −z  y] · [ωy + ηωy] · 1/2 (3.41)
    [ z  w −x]   [ωz + ηωz]
    [−y  x  w]

Where x is the state vector, composed of the quaternion components w, x, y, z; the system input vector, u, is composed of the angular rotation rates, ωx, ωy, ωz, measured by the gyroscope; and the state transition noise vector, w, is the noise on the gyroscope measurements, ηωx, ηωy, ηωz. As the small angle approximation deforms the quaternion, such that over time it no longer has unit magnitude, it needs to be normalized frequently. This normalization can be performed by (A.10).


In the measurement update step, the accelerometer measurement model, h(x,v), models the measurement based on the state vector, x, and the measurement noise vector, v. The acceleration in the airframe is the sum of gravity, the centripetal forces and measurement noise. The measured gravity is the gravity vector [0 0 −g]⊤, rotated into the airframe by a rotation matrix, Rq(x), calculated from the quaternion state vector. The centripetal forces, which the airframe is subject to, are the cross product of the vectors of velocities and angular velocities in the plane body frame. The angular velocities are measured by the gyroscope. The velocities in the plane body frame are not directly known, and must be computed from the true airspeed, Va, the angle of attack, α, and the side slip angle, β. The measurement noise vector, v, is simply added to the equation:

h(x,v) = z (3.42)

            [ 0]   [ωx]   [cα · cβ]
  = Rq(x) · [ 0] + [ωy] × [sβ     ] · Va + v (3.43)
            [−g]   [ωz]   [sα · cβ]

Appendix A, 'Attitude kinematics' on page 80 describes the mathematical derivations in detail. The derivation of the linear models, A(x,u), W(x,u), H(x) and V(x), based on (3.25) to (3.28) on page 35, can also be found in this appendix. As the remaining system uses the Euler angle representation, conversion back and forth between the two representations is needed. This can be done according to (A.23) and (A.24) on page 83, respectively.

Heading kinematic model

Based on gyroscope and magnetometer data, the heading of the airframe can be estimated. Equation (3.44) [67, eq. (1.4-15)] forms the basis for the estimator's time update step. It relates the changes in Euler angles to the angular velocities in the airframe, by introducing the Euler angle derivatives in the appropriate intermediate frames of the rotation.

u + w = [φ̇ 0 0]⊤ + Rx(φ) · [0 θ̇ 0]⊤ + Ry(θ) · [0 0 ψ̇]⊤ (3.44)

Where Rx(φ) is the rotation matrix by φ around the x-axis and Ry(θ) is the rotation matrix by θ around the y-axis. u is the vector of angular rotation velocities in the plane body frame and w is the noise vector on these measurements. Through Equations B.1 to B.5 on page 85, the state transition model is derived on this foundation, such that:

f(x,u,w) = ψ̇ = [sφ/cθ  cφ/cθ] · ([ωy ωz]⊤ + [ηωy ηωz]⊤) (3.45)

Where the state vector, x, is the heading, ψ. The system input vector, u, consists of the angular velocities in the plane body frame around the y and z axes, ωy and ωz respectively. The x-axis angular velocity is not needed, as it maps directly to


the roll angle. The state transition noise vector, w, is the noise on the gyroscope readings, ηωy and ηωz respectively. The airframe's attitude, φ and θ, is presumed to be known from the attitude estimator, described in Section 3.2.3 on page 37.

The measurement update step is based on the measurement model, h(x,v), which models the magnetometer's three-dimensional reading of Earth's magnetic field. Earth's magnetic field varies across the planet, but can be determined for a given position from the World Magnetic Model [16]. The field strengths, in north, east, down components, at the test site are given in Table B.1 on page 84. These components will be referred to as:

β0 = [βn βe βd]⊤ (3.46)

To model the magnetometer measurement, these components are rotated into the airframe. This is done by three rotation matrices, Rx(φ), Ry(θ) and Rz(ψ). The measurement noise vector, v, is added to the expression.

h(x,v) = z (3.47)
       = R⊤x(φ) · R⊤y(θ) · R⊤z(ψ) · β0 + v (3.48)

Appendix B, 'Heading kinematics' on page 84 describes the mathematical derivations in detail. The derivation of the linear models, A(x,u), W(x,u), H(x) and V(x), based on Equations 3.25 to 3.28 on pages 35–36, can also be found in this appendix.
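The rotation chain in (3.48) can be sketched directly. Right-handed rotation matrices and NED (north, east, down) axes are assumed; the function names are illustrative:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def h_mag(phi, theta, psi, beta0):
    """Expected body-frame magnetometer reading (3.48): rotate the local
    field beta0 = (bn, be, bd) from the inertial NED frame into the airframe."""
    v = matvec(transpose(rot_z(psi)), beta0)
    v = matvec(transpose(rot_y(theta)), v)
    return matvec(transpose(rot_x(phi)), v)

# Illustrative check: level flight heading east (psi = 90 deg); the northward
# field component should appear along the negative body y-axis (left wing).
b = h_mag(0.0, 0.0, math.pi / 2, [1.0, 0.0, 0.5])
```

The transposes apply the inverse rotations, taking the field from the inertial frame into the body frame, which is why the chain runs yaw, then pitch, then roll.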

Position and Wind kinematic models

The final stage of the cascaded filter estimates the position, Pn and Pe, and altitude, Alt, along with the angle of attack, α, and the wind components in the north and east directions, Wn and We respectively. The system input is the indicated airspeed, Vi, and the measurement vector is composed of GPS and barometric altimeter readings, converted into UTM coordinates utmn, utme and utmd respectively. The wind components and angle of attack are not directly measurable and are thus unobserved state variables. The forward kinematic formulation used in the time update is:

f(x,u,w) = ẋ (3.49)

  [Ṗn ]   [Vi · cγ · cψ + Wn + ηPn]
  [Ṗe ]   [Vi · cγ · sψ + We + ηPe]
  [Ȧlt] = [Vi · sγ + ηAlt         ]
  [Ẇn ]   [ηWn                    ]
  [Ẇe ]   [ηWe                    ]
  [α̇  ]   [ηα                     ]    (3.50)

Where the course climb angle is given by


γ = θ − α (3.51)

Please refer to Figures 3.2a and 3.2b on page 29 for an illustrative description of the relations between the various angles and velocities.

The measurement model is simply composed by picking elements from the state vector:

h(x,v) = z (3.52)
       = [Pn Pe −Alt]⊤ + v (3.53)

Appendix C, 'Position and Wind kinematics' on page 88 describes the mathematical derivations in detail.

3.3 Noise considerations

The state transition noise covariance matrix, Q, and the measurement noise covariance matrix, R, need to be modelled for each state estimator. These matrices describe the variances of the state transition and observation, such that the Kalman estimator can effectively weight the a priori state covariance matrix, P−, and the innovation term. In other terms, the estimators need an idea of the certainty of the observations, used to correct the state estimate in the measurement step, and an idea of how much certainty is lost as the state estimate is projected forward in the time update step. The covariance matrices are defined as:

Q = E[(w − E[w]) · (w − E[w])⊤] (3.54)
  = E[(w − 0) · (w − 0)⊤] (3.55)

R = E[(v − E[v]) · (v − E[v])⊤] (3.56)
  = E[(v − 0) · (v − 0)⊤] (3.57)

Where E[x] is the expected value of x (similar to the mean value of x). The numerical values are hard to determine, as many different sources of noise may impact the exact values. Therefore, it is convenient to think of the Q and R matrices as uncertainty levels, i.e. the less confidence we have in the state transition, the greater the values in Q should be. Similarly, the less confidence we have in the observations, the greater the values placed in the R matrix should be. It should be noted that the ratio between the two matrices is vital for the filter performance: large confidence in the state transition and low confidence in the measurements tends to smooth the state estimate at the cost of slower convergence, and vice versa.


The uncertainties are expressed as variances, ν = σ2. The values on the diagonal represent the variance of the measure itself, and the off-diagonal elements represent coherent deviations. For all measurements, we do not expect any co-variation and will only place values on the diagonal.
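In practice this means Q and R are built as diagonal matrices from per-axis standard deviations. A trivial sketch; the numbers are illustrative tuning values, not the thesis's:

```python
def diag_cov(stddevs):
    """Diagonal covariance matrix: variances sigma^2 on the diagonal,
    zeros elsewhere (no co-variation assumed between the axes)."""
    n = len(stddevs)
    return [[stddevs[i] ** 2 if i == j else 0.0 for j in range(n)]
            for i in range(n)]

R = diag_cov([0.1, 0.1, 0.2])   # e.g. per-axis measurement uncertainty levels
```

Scaling these values up or down then directly implements the confidence trade-off described above.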

3.4 Conclusion

In this Chapter, the variables describing the state of the airframe have been described, and the mathematical formulation of their relationships has been deduced. It has been concluded that not all variables are directly measurable and must thus be estimated. The theoretical and practical foundation for such state estimation, using the Extended Kalman Filter, has been described in the context of fixed wing aircraft. Various architectures for the state estimator have been suggested and a cascaded scheme has been found appropriate. Finally, the models for state transition and sensor estimation have been deduced, and it has been discussed how noise models influence the performance of the state estimators.


Chapter 4
Visualization and Simulation

When using a MAV as an ATC, it is essential to be able to monitor the status of the aircraft. This is typically done on a Ground Control Station (GCS), as seen in a number of open source projects. When the ATC becomes fully autonomous, the main operator control interface changes from the radio to the GCS. When it is no longer a concern to keep the ATC in the air, focus moves to the task at hand: What is the status of the aircraft? Are there any errors? Where is the aircraft? Is the tool behaving correctly? Is the plan executing correctly? Many other questions may arise as well. It is the job of the GCS to answer all these questions in a user-friendly way. Furthermore, it should be possible for the operator to take action if the need for changes arises. Therefore, a bidirectional link must be provided, allowing the operator to send corrective commands to the ATC in order to change the plan, control or other aspects of the mission.

The remainder of this Chapter is structured as follows. First, the requirements and proposed solutions for telemetry are outlined in Section 4.1. Open source projects for visualization and simulation are surveyed in Sections 4.2 on the next page and 4.3 on page 46. The latter two sections select the open source projects which will be used as the base for the GCS and simulator, respectively.

4.1 Telemetry

In order for the operator to have a satisfactory overview, information such as airframe, system and mission status must be provided wirelessly. This enables remote visualization of the mission on the GCS. By providing two-way communication, it is likewise possible to perform in-flight correction of the flight plan. When tuning the controller, it is also desirable to be able to change control parameters in-flight.

4.1.1 Link hardware

As stated in the Danish Civil Aviation Administration regulation BL 9-4 [48], one must always have a model plane in line of sight. It is estimated that a plane,


with the legal maximum weight of 7 kg, is not visible at ranges above 1 km. Thus this serves as the minimum required link range.

The necessary data rate of the link depends on the amount, resolution and frequency of the data transmissions. The feedback should come at a minimum frequency of 10 Hz. It is desired to have an airframe and system state feedback resolution in double precision, i.e. 64 bits per data field. With 11 data fields (roll, pitch, yaw, latitude, longitude, altitude, velocity, CPU, memory, battery and number of satellites) this yields a minimum data rate of 7 kbit/s. Message overhead and checksums add to this. The infrequent uplink user input is assumed to be well below this rate. Therefore, the minimum acceptable data rate is approximately 10 kbit/s. Several link candidates are available. Four of the most common technologies are presented and compared in Table 4.1.
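The payload arithmetic can be checked directly; note that framing overhead and checksums come on top of this figure:

```python
fields = 11          # roll, pitch, yaw, lat, lon, alt, velocity, CPU, memory, battery, satellites
bits_per_field = 64  # double precision
rate_hz = 10         # minimum feedback frequency
payload_bps = fields * bits_per_field * rate_hz   # 7040 bit/s, i.e. about 7 kbit/s
```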

Technology    Range [typ.]    Transfer rate       Power consumption
Bluetooth     1–100 m         1–24 Mbit/s         1–100 mW
Wi-Fi         250 m           Up to 300 Mbit/s    Typ. 1 W
ZigBee [26]   90 m – 40 km    10–250 kbit/s       1–100 mW
3G            Coverage dep.   Up to 30 Mbit/s     Typ. < 5 W

Table 4.1: Telemetry comparison overview

As with all wireless networks, range is limited by transmission power, antenna type, environment, etc. Considering standard equipment, both Bluetooth and Wi-Fi are inadequate for in-field telemetry due to their limited range. Wi-Fi can have extended range, but this requires non-standard and heavy equipment. Both 3G and ZigBee could be used as the link; however, as 3G requires network coverage, which is not available everywhere, ZigBee is deemed more generic, as it functions standalone.

4.1.2 Message passing

It is important that the messages passed between the GCS and the ATC are valid, especially on the up-link. It could be catastrophic if a corrupted way-point, desired altitude or control parameter was passed through to the airplane control. Therefore, steps must be taken to ensure that only valid data is passed through. To ensure that binary data on the ZigBee serial line is correctly transmitted and received, an implementation of Serial Line Internet Protocol (SLIP [62]) encapsulation and a checksum can be used. SLIP provides a means for knowing the start and stop of packages, whilst the checksum validates the data. Lastly, an acknowledgement must be sent if package transfer is to be guaranteed.
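A minimal sketch of SLIP framing as specified in RFC 1055; the checksum and acknowledgement mentioned above are separate layers and are not shown here:

```python
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(payload: bytes) -> bytes:
    """Delimit a packet with END bytes, escaping any END/ESC inside the payload."""
    out = bytearray([END])
    for b in payload:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

def slip_decode(frame: bytes) -> bytes:
    """Strip delimiters and undo the escape sequences."""
    out = bytearray()
    escaped = False
    for b in frame:
        if escaped:
            out.append(END if b == ESC_END else ESC)
            escaped = False
        elif b == ESC:
            escaped = True
        elif b != END:
            out.append(b)
    return bytes(out)

packet = bytes([0x01, 0xC0, 0xDB, 0x7F])   # payload containing both special bytes
frame = slip_encode(packet)
```

Because END can only ever appear as a frame delimiter, the receiver can resynchronise after a corrupted frame, which is exactly the package-boundary property needed on the ZigBee serial line.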

4.2 Visualization

In order to have a decent Human Computer Interface (HCI), usable by end users, it is necessary to display the abundance of airframe information graphically. As none of the authors are particularly skilled in graphical programming, it has been deemed unnecessary to reinvent the wheel; there is already a multitude of open source GCSs targeted at just this problem. Mentioning a few


includes QGroundControl [15], HappyKillmore [10], OpenPilot GCS [13] and CCNY ground station [9]. For a quick overview of the graphical quality of the four mentioned GCSs, please refer to Figure 4.1.

(a) QGroundControl (b) OpenPilot GCS

(c) Happykillmore GCS (d) CCNY Ground Station

Figure 4.1: Visual comparison of four different open source ground stations

All four GCSs feature similar capabilities; however, CCNY ground station stands out as being a bit more simplistic and is already an integrated part of ROS through the CityFlyer project, hosted by The City College of New York [55]. Therefore, it is the GCS of choice at this point in the project.

4.3 Simulator

In order to reduce development time and cost, a simulator was deemed useful. A simulator can be utilized for rapid testing of new control paradigms, state estimators and control parameters, without the risks involved in a real test flight. Also, pure simulation experiments can be conducted independent of wind, weather and other inconvenient phenomena. Lastly, pilot skills can also be practiced.

Furthermore, it is desirable that the simulator is open source and that it can be ported to ROS. By doing so, work can be contributed to the community in the form of a new feature not seen in ROS before. For the simulator to be ROS compatible, it is essential that it runs under Linux. In order for a simulator to accommodate these needs, it has to fulfill a number of criteria:

• Controllable via an external input device, such as a joystick or mouse.


• Include radio-controlled airplanes, such that practice flights and controller tests give useful results and experience.

• Provide or give access to state information, such that sensor output can be simulated.

• Be open source and free, such that it can be ported to ROS and shared.

• Run natively on Linux.

With these requirements in mind, three potential candidates, all fulfilling the requirements, have been found:

Name            Size [MB]   URL
CRRCsim         7.4         crrcsim.berlios.de
FlightGear      300         flightgear.org
Slope Soaring   35          rowlhouse.co.uk/sss

Although any of the three candidates could potentially do the job, it has been chosen to go with CRRCsim, as it is the smallest of the three, very easy to overview and has pre-implemented sensor models, making it rather convenient to simulate the sensors available in hardware. It could be argued that FlightGear is more sophisticated and accurate, but it is mainly focused on full-scale airplanes and requires significant graphical hardware. Maintaining a high frame rate is essential in order for the simulated sensor data to mimic real hardware.

4.4 Conclusion

It has been decided to use ZigBee as the radio link for the GCS. This is based on the fact that it features the best combination of range, weight, power consumption and transfer speed for the needs stated in the requirements. It has been discussed how it can be guaranteed that data packages are correctly transferred. The CCNY ground station has been selected. It is not the most complete of the available GCSs, but for the current stage of the project it fulfills the needs. For future implementation of more complicated tasks, CCNY ground station can be expanded, or a more complete GCS can be integrated into a ROS node. The future requirements of the ground station have been stated, and there is room for expansion.

Chapter 5

Implementation

Figure 5.1: Overview of the entire system: Autopilot Control, State Estimator, Flight Plan Tool, Telemetry, Ground Control Station, Simulator, and the Airframe with its Sensors and Actuators. Green boxes have been implemented while red ones have not.

This Chapter summarises the implementations done, based on the subsystems described in the previous chapters. As not all of the subsystems discussed earlier have been implemented, Figure 5.1 provides an overview of the extent of the current implementations.

Everything has been designed with modularity in mind. An effort has been made to build the entire system in the spirit of ROS, ranging all the way from hardware over software to the simulator. Initially, the project was branched out from the Frobomind [41] project, and it still incorporates the system structure defined there.

The remainder of this Chapter is structured as follows: Section 5.1 describes the selection of the model airframe, which forms the base for the flying prototype. The developed and prototyped PCB, which forms the base for the

Figure 5.2: Photograph of the Multiplex Mentor, used as the implementation base for this thesis.

autopilot, is described in Section 5.2 on the next page. Section 5.3 on page 57 explains the software structure and its individual sub-components. Aspects of porting the open source projects, which form the bases for the GCS and simulator, are described in Sections 5.4 on page 60 and 5.5 on page 61 respectively. Finally, Section 5.6 on page 62 describes the conducted flight tests and the results hereof.

5.1 Airframe

Under guidance from an R/C reseller, it was decided to use the Multiplex Mentor airframe as the base for the prototype of this project. The airframe is depicted in Figure 5.2. The Mentor is a high winged foam model with a moderate wing aspect ratio. It comes at a very reasonable price, has a wingspan of 1.6 m and weighs around 2.0 kg when fitted with standard equipment.

The high winged design gives the airframe inherent stability, as the wing is positioned above the center of gravity. The downside of this design is reduced manoeuvrability compared to a low winged design. As the airframe is not meant to perform swift manoeuvres, this is a fair trade-off.

The construction material plays a vital role for parameters such as price, weight and durability. Generally, three alternatives are available: balsa wood, foam and composites. The composite and balsa wood constructions are lightweight hollow bodies and rather expensive. The hollow body designs result in spacious fuselages, which allow for easy fitting of additional equipment. The foam models are not as rigid, but come at a much lower price. By embedding carbon fibre stiffeners in the foam to support the wings, the low rigidity of the foam is somewhat compensated. The foam models are far less spacious, as the low material strength is compensated by increased wall thickness of the fuselage.

It is expected that the model might endure crashes during development, and hence repairability is a property that must be considered. The foam models can simply be glued together, whereas repairing balsa and composite models is at best more complicated. As several modifications will be needed on the fuselage, the workability of the foam material is a great advantage. The airframe has one of the largest and most spacious fuselages of any small scale aircraft available, leaving adequate room for large batteries, autopilot and payload. Furthermore, spare parts are readily available for the Multiplex Mentor.

The Mentor has a total of five control channels: one for throttle and one for each control surface, namely ailerons, rudder and elevator. In addition, an extra channel is needed to accommodate in-flight switching between automated and manual remote control. In case of error or potential collision, the manual overrule can be activated and the plane brought down safely. Furthermore, being able to switch to manual allows for manual take off and landing if necessary.

The flight ready Multiplex Mentor is composed of:

• Motor: Brushless outrunner motor, 870 rpm/V, capable of producing 720 W.

• Servos: Common hobby servos control ailerons, rudder and elevator.

• Electronic Speed Controller: 60 A AC controller with PWM input enables easy control of the motor.

• Battery: A 4 cell lithium-polymer battery provides 6150 mAh at 14.8 V at a maximum discharge rate of 150 A.

• Receiver: A 2.4 GHz receiver decodes the radio signal and outputs it as servo PWM pulses.

• Propeller: A propeller with a diameter of 13 inches and a pitch of 6.5 inches, yielding a theoretical maximum speed of 870 rpm/V · 14.8 V · 6.5 in ≈ 126 km/h.
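The quoted maximum speed is the propeller pitch speed: the motor's rpm/V rating times the battery voltage gives the no-load rpm, and each revolution advances the plane by the propeller pitch. A quick sanity check of this arithmetic (a sketch; the function name is chosen for illustration):

```python
def pitch_speed_kmh(kv_rpm_per_volt, volts, pitch_inches):
    """Theoretical no-load pitch speed: motor rpm times advance per revolution."""
    rpm = kv_rpm_per_volt * volts             # no-load motor speed [rpm]
    m_per_min = rpm * pitch_inches * 0.0254   # advance per minute [m]
    return m_per_min * 60.0 / 1000.0          # [km/h]
```

With the Mentor's numbers (870 rpm/V, 14.8 V, 6.5 in) this lands at roughly the 126 km/h stated above.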

5.2 Autopilot PCB

A printed circuit board (PCB) has been designed to accommodate the flight computer and its associated sensors and I/O units. The PCB is designed with compactness and integration in mind, leading to most devices being mounted directly on the board rather than remotely on breakout boards. The width is constrained to 74 mm in order to fit the desired mounting face in the Mentor. A maximum length of 140 mm is available in this location; however, only 118 mm was required. As the Mentor fuselage surrounds both long sides of the PCB, access is restricted to the short sides and the top/bottom of the PCB. This constrains the locations of the RJ45, USB-A and differential pressure gauge, as these need to be located at the edge of the PCB for connectivity. Also, the Xbee module needs to be located at the edge of the PCB, in the interest of antenna performance. The remaining antennas (GPS, WiFi and Bluetooth) are not mounted directly on the PCB, and can thus be mounted remotely, for

Figure 5.3: Block diagram of the developed autopilot board

Figure 5.4: Image of the final developed autopilot board

the power planes not to interfere with antenna performance.

For full schematics and PCB layout, please refer to Appendix F on page 94.

5.2.1 Processor and data buses

The OpenEmbedded file system environment available for the Gumstix processor is based on the Angstrom distribution. However, due to lacking support for cross-compilation of ROS, ROS is not easily installed on the Angstrom file system. Therefore, it has been decided to install a ROS supported Ubuntu distribution on the Gumstix. In order not to lose the highly useful customizability of OpenEmbedded, a precompiled Ubuntu distribution could not be used. Thus, OpenEmbedded was utilized to compile a Linux kernel matched against a minimal Ubuntu file system generated by Rootstock [1], thereby substituting OpenEmbedded's use of Angstrom with Ubuntu, while still maintaining a fully customizable operating system. Ubuntu does have a slightly larger memory usage compared to Angstrom; however, the 512 MB RAM available to the Gumstix is more than adequate when using it as a non-graphical/console system. For a complete

[1] https://launchpad.net/project-rootstock

guide to installing Ubuntu and ROS on the Overo Gumstix board, please refer to Appendix E.1 on page 92.

As can be seen in Figure 5.3, every sensor except the GPS is connected to the Gumstix via the I2C bus. The analog sensors, i.e. the differential pressure from the pitot tube, the battery voltage and the ultrasonic range finder, have all been connected to the I2C bus via the on-board AVR.

For future tool connections, USB, SPI and Ethernet support has been added to the system. Ethernet is obtained with the use of the LAN9221 [66], a parallel interface LAN chip also used in the Tobi expansion board sold by Gumstix. USB and SPI are integrated parts of the processor; SPI has been wired as pin headers, and USB has been added with the required power management. Lastly, some General Purpose Input/Outputs (GPIOs) of the OMAP processor have been wired for future use. None of these have been used yet, but they are provided by the board for future tool and development support.

5.2.2 Airspeed sensor

Signal Conditioning

According to the datasheet of the differential pressure sensor MPXV7002DP [64], the voltage output is given by (5.1).

V_out = 5 V · (0.2 · p_d [kPa] + 0.5) ± 6.25% · V_FSS    (5.1)

where V_FSS is defined as the Full Scale Span, typically 4 V at V_Supply = 5 V.

A differential amplifier, based on an operational amplifier, is used for signal conditioning. The designed amplifier subtracts 2.5 V, as only positive pressure, and thus velocity, is deemed relevant. The amplifier then scales the output to match the 3.3 V level of the ADC (see Section 5.2.10, 'R/C interface & Failsafe operation' on page 55). A low-pass filter is built in to remove various high frequency noise components. Please refer to Appendix F.2 on page 95 for detailed schematics.

Calibration

According to (3.8) on page 30, the full range of the differential pressure sensor, assuming an air density ρ of 1.25 kg/m³, then spans from zero to V = √(2 · 2·10³ Pa / 1.25 kg/m³) ≈ 56.6 m/s ≈ 204 km/h. A field test was conducted in a car, comparing the pitot model output with the GPS indicated speed. From the test it was concluded that the model fits, but with a factor of 1.22. After applying the factor, as seen in Figure 5.5, the model is reliable.
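The pitot relation above is straightforward to sketch in code. The function below assumes the 1.22 factor scales the computed velocity (the thesis does not state exactly where the factor is applied); names are chosen for illustration:

```python
import math

RHO = 1.25  # air density [kg/m^3], as assumed in the thesis

def pitot_airspeed(dp_pa, factor=1.22):
    """Airspeed [m/s] from differential pressure [Pa]: v = sqrt(2*dp/rho),
    scaled by the empirically determined correction factor."""
    if dp_pa <= 0.0:
        return 0.0  # only positive pressure (forward flight) is relevant
    return factor * math.sqrt(2.0 * dp_pa / RHO)
```

At the sensor's full range of 2 kPa and without the correction factor, this reproduces the roughly 204 km/h upper bound derived above.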

5.2.3 Accelerometer

The accelerometer used is the triple axis ADXL345 [25]. It can measure up to ±16 g with a resolution of 3.9 mg/LSB. To calibrate the sensor, the PCB has been leveled on a still, flat surface to measure the g force in all directions. That way

Ref. vel. Meas. mean[m/s] [m/s]

7.6 7.610.3 10.212.9 12.615.3 15.6

Figure 5.5: Field test results, pitot output, up- and downwind, compared withGPS reference. Model output has been corrected with a factor 1.22

the nonlinearity of the sensor is accounted for. According to the datasheet, the maximum sampling frequency is 3,200 Hz. However, the acceleration is only sampled at approximately 50 Hz. In order to satisfy the Nyquist theorem, the device is set up to band limit its output to 25 Hz.

5.2.4 Gyroscope

The gyroscope used is the InvenSense ITG3200 [37]. It is a triple axis gyroscope with a resolution of 0.0696 (°/s)/LSB, saturating at ±2000 °/s. When dealing with MEMS gyroscopes, calibration considerations must be made. Unlike a traditional mechanical gyroscope, a MEMS gyroscope measures the angular rate of change rather than the angle. A number of factors must be considered when using a MEMS gyroscope; as explained by Analog Devices in [46], the errors of a gyroscope can be classified as:

Bias_error = ∫₀^t1 ω_bias · dt = ω_bias · t1    (5.2)

Thus, the bias error influences the gyro measurements even when the gyro is standing still. Nonlinearity in the construction of the sensor also results in skewed measurements when moving the gyro; this error is stated in the datasheet to be 0.2% typically.

In order to compensate for these errors, two methods are suggested in the paper. The bias can be estimated by sampling the gyroscope for a short time (3 seconds) while it is standing still, and permanently subtracting the average of these measurements when using the gyroscope; this is also what has been implemented in practice. The scaling error has been neglected in the setup, but it could be estimated by rotating the gyroscope a known angle and comparing the output to a known reference.

There are not a lot of 3-axis digital standalone gyroscopes available; one is, however, the L3G4200D [50], very similar to the ITG3200. Either could be used

as they share most specifications; however, the ITG3200 was in stock when components were ordered. As the gyroscope is sampled at approximately 100 Hz, the internal low pass filter is set to 50 Hz in order to satisfy the Nyquist theorem.
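The still-stand bias estimation described above amounts to averaging a few seconds of stationary samples and subtracting that vector from every subsequent reading. A minimal sketch of the idea (function names are illustrative, not from the thesis code):

```python
import numpy as np

def estimate_bias(stationary_samples):
    """Per-axis gyro bias from an (N x 3) batch of angular-rate readings
    [deg/s] captured while the sensor is standing still."""
    return np.asarray(stationary_samples, dtype=float).mean(axis=0)

def debias(reading, bias):
    """Subtract the estimated bias from a raw angular-rate reading."""
    return np.asarray(reading, dtype=float) - bias
```

At a 100 Hz sampling rate, the suggested 3 second window corresponds to averaging roughly 300 samples.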

5.2.5 Magnetometer

The initial digital magnetometer used was the triple axis HMC5883L. It can measure up to ±800 µT with a resolution of 0.5 µT/LSB. However, experiments soon showed that, due to its placement relatively close to the motor driving the plane, the measured magnetic field was highly distorted. It was decided to move the magnetometer away from the noise source; therefore a second magnetometer has been mounted further towards the back of the airframe.

5.2.6 Barometer

To complement the altitude estimate of the GPS, a Bosch BMP085 [65] barometric pressure sensor has been installed. It measures 300 to 1100 hPa at a resolution of 0.03 hPa/LSB, corresponding to 500 m below to 9000 m above sea level at a resolution of 36 cm.
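The hPa-to-metres conversion behind these figures is the international barometric formula, as given in the BMP085 datasheet; a sketch:

```python
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """Altitude [m] from barometric pressure [hPa] via the international
    barometric formula, relative to the sea-level pressure p0."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))
```

Evaluating it at the sensor limits confirms the stated range: 300 hPa falls near 9000 m above sea level, and 1100 hPa below sea level.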

5.2.7 GPS

For global positioning, a µBlox-5 based 50 channel D2523T GPS receiver has been used. It features a helical antenna and a 5 Hz update rate with a Circular Error Probability of 2.5 m. It was chosen mainly because of its low price and high update rate. The GPS does, however, only have volatile memory and does not come with a backup battery. Therefore, all configurations done to the GPS are lost after a power cycle. This presents a problem, as the software used to program the GPS, called U-Center, is Windows based and not intended for embedded Linux. U-Center can save a configuration file containing HEX codes for the configurations, like the desired baud rate, update frequency and other operation modes. The configuration protocol was reverse engineered, and a C++ based parser was created. This parser can read the configuration file, send the commands to the GPS, and receive the acknowledgement confirming correct configuration with checksums. The parser has been configured to run at every boot of the system; thus, the GPS will always be correctly configured.

The main configurations done are enabling only the GPGGA and GPVTG message types, as they provide all the information the state estimator needs, that being latitude, longitude, altitude, dilution of precision, number of satellites fixed, and course and speed over ground. Furthermore, the GPS is configured to update as fast as possible, i.e. at 5 Hz.
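Each u-blox configuration command is protected by the UBX protocol's 8-bit Fletcher checksum, computed over the class, id, length and payload bytes; validating acknowledgements amounts to recomputing it. A sketch:

```python
def ubx_checksum(body):
    """8-bit Fletcher checksum over a UBX message body (class, id,
    length and payload bytes); returns the (CK_A, CK_B) pair appended
    to every UBX frame."""
    ck_a = ck_b = 0
    for byte in body:
        ck_a = (ck_a + byte) & 0xFF
        ck_b = (ck_b + ck_a) & 0xFF
    return ck_a, ck_b
```

For example, the body of a UBX-CFG-RATE poll request (class 0x06, id 0x08, zero-length payload) checksums to (0x0E, 0x30).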

5.2.8 Zigbee

The Zigbee modem connects to the same serial port used by the default kernel terminal. This was done as it was the only free UART in hardware, and if care is taken, it can be used to advantage. When debugging the software, especially

Figure 5.6: Block diagram of the low-level co-processor. Gray boxes represent hardware pieces, green boxes AVR hardware modules and blue boxes software modules.

Wi-Fi, it is very useful to have a terminal providing full access to the system; by attaching a Zigbee modem, this terminal connection becomes wireless.

5.2.9 Ultrasonic proximity

An LV-EZ4 ultrasonic proximity sensor has been installed pointing out of the belly of the plane, intended for use during landing procedures, when the correct distance to the ground is essential. The sensor installed was the longest range and most narrow beam lightweight proximity sensor we could identify. It measures from 20 cm to 7 m with a resolution of 2.5 cm/LSB. The sensor has a multitude of interfaces, including PWM, analog and serial. It has been installed with the analog interface to the AVR. In the current state of the project it has not yet seen use.

5.2.10 R/C interface & Failsafe operation

In the R/C hobby world, the de facto standard interface used for controlling servos and motor controllers is a simple fixed frequency PWM signal. The high period of the signal is used to set the servo set-point and varies from approximately 1.0 ms to 2.0 ms, 1.5 ms being center. The radio system uses 5 channels for controlling the servos and the motor controller, and a 6th to switch between automatic and manual flight. It is of the utmost importance that the system is capable of switching into manual mode, even in the event of a crash of the main flight computer. This ensures that the pilot has the opportunity to avoid a crash. Thus, it is desirable to keep the switching between manual and automated flight at the lowest possible level. Meanwhile, the timing of the servo signals needs to be precise, as the resolution is approximately 1°/5.6 µs, which is unreachable from Linux user space.

An 8-bit AtMega328 microcontroller [24] is used to handle the low-level servo input and output, as it is capable of µs precision and offers the desired decoupling

from the Linux system, ensuring that manual to automatic switching is always available. The co-processor samples the pulse width from the radio receiver via hardware interrupts. On both rising and falling edges of each radio channel, the interrupt samples the value of a 1 MHz 16-bit timer. These two samples are then subtracted to find the pulse width. The signal is reconstructed by compare match interrupts on the same timer. The first channel's output is set high, and a compare match interrupt is scheduled to trigger the desired pulse width later. This interrupt then sets the current channel's output low, sets the next channel's output high and schedules the next channel's compare match interrupt based on the desired pulse width. This scheme continues until all channels have been served, and it restarts in a 50 Hz cycle. As the co-processor has built-in ADCs, these are utilized for sampling the battery voltage, the pitot pressure via the differential pressure sensor and the ultrasonic range to ground. Please refer to Figure 5.6 on the previous page for an overview of the hardware/software structure of the co-processor, and to the Doxygen documentation for detailed descriptions.
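The edge-capture subtraction relies on the timer's 16-bit wraparound: unsigned modular subtraction gives the correct width even if the timer overflows between the two edges. The arithmetic can be sketched as follows (in Python for illustration; the co-processor does this in AVR C inside the ISRs):

```python
TIMER_MASK = 0xFFFF  # 16-bit timer at 1 MHz: 1 tick = 1 us, wraps at 65536

def pulse_width_us(rise_tick, fall_tick):
    """Pulse width in microseconds from two 16-bit timer captures.
    Masking the difference handles a timer overflow between the edges."""
    return (fall_tick - rise_tick) & TIMER_MASK
```

A 1.5 ms center pulse thus measures 1500 ticks, regardless of where in the timer cycle the rising edge happened to fall.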

5.2.11 Power management

A total of four different supply voltages are present on the PCB: 14.8 V input from the 4-cell LiPo battery, and 5 V for the USB interfaces and the differential pressure gauge. The majority of devices operate at 3.3 V; this group includes the Gumstix, various sensor chips (BMP085, ADXL345, ITG3200, HMC5883), I/O muxing (AtMega328), the Ethernet interface (LAN9221) and various auxiliary devices, e.g. level shifters. As the Gumstix I/Os operate at 1.8 V, all interfaces to this chip need to be at this voltage level. Hence, a 1.8 V supply is needed for level conversion. An estimate of the maximum power consumption at each level has been conducted, to dimension the voltage regulators accordingly; the summary of this estimation can be seen in Table 5.1.

Consumer [mA]   1.8 V   3.3 V   5.0 V
I2C             20      20      -
LAN             10      190     -
AVR             -       25      -
Diff. press     -       -       5
USB Host        -       -       500
Misc. sensors   -       10      -
Gumstix         -       700     -
Zigbee          -       250     -
LEDs            -       70      -
Regulator       -       -       30
FTDI            -       8       -
Total [mA]      30      1,273   535

Table 5.1: Power consumption at the different voltage levels.

Based on the amount of current needed at 3.3 V and 5.0 V, it has been decided to maximize efficiency by using switch mode regulators for these levels. As only 30 mA are drawn at 1.8 V, a linear regulator is used there.

Figure 5.7: The software blocks and their communication: fmFusion and fmController exchange the state estimate, control effort, motion and servo commands with the airframe or simulator, while fmTeleAir and fmTeleGround bridge the Zigbee link to the GCS.

Apart from the voltage levels on the control board, the R/C servos, motor and receiver also need power. This power is supplied by the Electronic Speed Controller (ESC): the motor is driven by the AC regulator, while the servos and receiver are driven by the Battery Eliminating Circuit (BEC) in the ESC. To optimize efficiency, it is recommended to use either an ESC with a switch mode BEC or a separate power supply for the servos, as a linear regulator for the servos would waste up to 3 A · 9.8 V = 29.4 W.

5.2.12 Auxiliary components

A number of other components have been used to solve different needs. An FTDI chip connects the Linux system terminal UART to a USB plug on the board, such that the board can be interfaced directly with a USB cable. To interface the Gumstix, level conversion from 1.8 V to 3.3 V and vice versa was needed. This has been done with the SN74LVC16T245 [36] 16-bit dual supply bus transceiver.

5.3 Software building blocks

The software needed to accommodate the system blocks has been implemented as seen in Figure 5.7. It is based on five major nodes:

1. fmFusion: State estimation and hardware interfaces.

2. fmController: Low level PID controllers.

3. fmTeleAir & fmTeleGround: Wireless telemetry package handling.

4. GCS: Ground control station.

5. Simulator: Same interface as the actual hardware, for grounded development.

The following sections will discuss these nodes one by one.

5.3.1 fmFusion

fmFusion has two major roles: it is the only node in the system that has access to the I2C port, and as such is the only node that can interact with the low-level

sensors. This scheme is used to prevent concurrency issues on the bus. Its second responsibility is to execute the state estimator. The state estimator is run in the same node that has a handle on the I2C bus, such that data does not have to be passed around; internal pointers can be used. This structure allows for three different modes, as can be seen in Table 5.2.

Mode         Subscribes to   Publishes        Note
Autopilot    none            State estimate   Used for the finished autopilot and aided flight
Simulator    Sensor data     State estimate   Used to test new algorithms in the simulator
Data logger  none            Sensor data      Used to log raw sensor data

Table 5.2: fmFusion and its modes.

Also, by implementing the ability for fmFusion to subscribe to sensor data, rather than polling them from the I2C bus, it is possible to run the state estimator on a desktop computer without the actual sensors.

The open source KFilter [11] library has been used to implement the state estimator. The kinematic models are described in detail in Chapter 3, with the kinematics derived in Appendices A through C. The filter library provides filter classes as well as vector and matrix types, such that the implementation process can be focused on producing the right filter inputs, rather than on worrying whether low-level operations like matrix and vector multiplication function properly.

When instantiating the filter class, all the known elements of the filter are initialized in separate functions; i.e. makeA() is overloaded to contain the A matrix of the filter, while makeProcess() defines the process update function, etc.

In practice, this results in the instantiation of three EKF classes, where the output from one is used as input to the next. Please refer to the Doxygen documentation for details.
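The overload pattern can be illustrated with a stripped-down filter skeleton. This is a generic sketch in Python, not the thesis' KFilter C++ code; the class, the method names mirroring makeA()/makeProcess()/makeMeasure(), and the identity-Jacobian demo system are illustrative only:

```python
import numpy as np

class SimpleEKF:
    """Minimal EKF skeleton: subclasses override make_A (process Jacobian),
    make_process (state propagation) and make_measure (predicted
    measurement), mirroring the KFilter overloading pattern."""

    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def make_A(self):        # process Jacobian (identity by default)
        return np.eye(len(self.x))

    def make_process(self):  # propagate the state (identity by default)
        return self.x

    def make_measure(self):  # predicted measurement of the current state
        return self.x

    def step(self, z):
        # Prediction
        A = self.make_A()
        self.x = self.make_process()
        self.P = A @ self.P @ A.T + self.Q
        # Update (identity measurement Jacobian for this demo)
        H = np.eye(len(self.x))
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.make_measure())
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
        return self.x
```

Cascading then simply means feeding the state output of one such filter instance in as (part of) the measurement input of the next.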

5.3.2 fmController

fmController implements the low level controls in the system. It subscribes to the airframeState published by the state estimator (fmFusion); furthermore, it also subscribes to the remote control radio data, such that set-points can be input via the radio. In future versions, it will subscribe to the set-points published by the FMS.

The PID controllers used in the implementation were originally part of an Arduino library [2], but due to various shortcomings it was decided to expand this critical component. The source is documented via Doxygen. The implementation complies with the description given in Section 2.1.1 on page 16. Various

[2] http://arduino.cc/playground/Code/PIDLibrary

practical issues had to be dealt with, nevertheless. When switching from manual to automated flight, the PID's integral terms have to be initialized at 'rest', such that the output is the same as it was during manual flight. Derivative kick also needed to be dealt with; by letting the feedback, and not the error, feed the derivative term, this is overcome. By defining the minimum and maximum output of each controller, integral wind-up is easily dealt with inside the PID class. Tuning of the parameters was mainly done manually; first in the simulator, to get an idea of the magnitude of the values, then, during later field tests, fine tuning to the airframe was conducted iteratively. Tuning schemes such as Ziegler-Nichols could have been used, but it was found that hand-tuning was neither complicated nor time consuming, and adequate to obtain mediocre performance.
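The three fixes above (bumpless initialization of the integral term, derivative on measurement instead of error, and output clamping that doubles as integral anti-windup) can be sketched as follows. This is an illustrative rewrite, not the thesis' actual C++ controller; all names are chosen for the example:

```python
class PID:
    """PID with derivative-on-measurement (avoids derivative kick) and
    output clamping with integral anti-windup."""

    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.i_term = 0.0
        self.last_meas = None

    def arm(self, last_output, measurement):
        """Bumpless transfer: seed the integral term so the first automatic
        output equals the last manual output."""
        self.i_term = max(self.out_min, min(self.out_max, last_output))
        self.last_meas = measurement

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        # Integral with anti-windup: clamp the accumulated term itself.
        self.i_term += self.ki * error * dt
        self.i_term = max(self.out_min, min(self.out_max, self.i_term))
        # Derivative on the measurement, not the error: no kick on
        # set-point changes.
        d_meas = 0.0 if self.last_meas is None else (measurement - self.last_meas) / dt
        self.last_meas = measurement
        out = self.kp * error + self.i_term - self.kd * d_meas
        return max(self.out_min, min(self.out_max, out))
```

Clamping the integral term separately from the total output is what keeps the integrator from winding up while the actuator is saturated.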

5.3.3 GPS

The GPS device provides data in the NMEA format. This standard contains a long list of different messages; two of these message types, GGA and VTG, contain the data used in this system. Position data is provided in longitude/latitude format by the GGA message. The remainder of the system uses UTM coordinates, so a conversion must be conducted. An original Frobomind structure, consisting of fmCSP, fmSensors and fmExtractors, is used to obtain the desired information from the GPS. fmCSP has the low-level handle on the serial port, taking parameters for the actual port name, the baud rate and the desired publish topic name. fmCSP publishes the NMEA strings containing data from the GPS device. fmSensors subscribes to and parses the NMEA strings and assigns the values to appropriate message types; i.e. GGA and VTG type messages are published by fmSensors. However, as the UTM coordinates need to be extracted, the fmExtractor subscribes to the GGA data and converts longitude and latitude to UTM northings and eastings. A small overview can be seen in Table 5.3.

Node         Subscribes to   Publishes       Note
fmCSP        none            Serial string   Attaches directly to the serial port, publishing NMEA data as strings
fmSensors    Serial string   GGA and VTG     Parses the string data and puts it into appropriate message structures
fmExtractor  GGA             UTM             Converts latitude and longitude to UTM coordinates

Table 5.3: The node collection used to extract positional data from a GPS receiver connected to a UART.
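The GGA parsing step amounts to splitting the comma-separated sentence and converting the ddmm.mmmm coordinate fields to decimal degrees before any UTM conversion. A minimal sketch (checksum validation omitted; function names are illustrative, not the fmSensors code):

```python
def parse_gga(sentence):
    """Latitude and longitude in decimal degrees from a NMEA GGA sentence,
    whose coordinates are encoded as degrees and decimal minutes."""
    fields = sentence.split(',')

    def dm_to_deg(value, hemisphere):
        degrees, minutes = divmod(float(value), 100.0)
        deg = degrees + minutes / 60.0
        return -deg if hemisphere in ('S', 'W') else deg

    return dm_to_deg(fields[2], fields[3]), dm_to_deg(fields[4], fields[5])
```

For the textbook sentence "$GPGGA,123519,4807.038,N,01131.000,E,..." this yields roughly 48.117° N, 11.517° E.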

5.3.4 fmTeleAir and fmTeleGround

To accommodate the need for a wireless link between the aircraft and the ground station while the ATC is operating, a Zigbee modem is needed at either end of the system. The Zigbee modems are capable of a variety of network topologies,

and were originally designed for home automation, as seen in [32]. As there is only one ATC in this project, the modems are configured for simple point to point serial communication, effectively using the modems as a 1.5 km [26] wireless serial connection. With the hardware used, the project could, however, later be expanded with more ATCs, with the Zigbee modems allowing them to interconnect as well, enabling possibilities such as swarm flying and coordinated flight, as proposed by Floreano et al. [31].

Transporting this data raw via a serial line would deviate from the ROS design terminology; therefore, steps have been taken to make the serial link seem transparent. By having a ROS node at either end of the Zigbee connection, the link becomes transparent: using ROS' own serialization methods, predefined message types can be subscribed to by one Zigbee node, serialized and sent to the other Zigbee node, where they are deserialized and published as though they had always been part of ROS.

Utilizing this message transport protocol, the data already published on the autopilot board can also be published on the ground computer, for the ground station to visualize. All that is required is that the two nodes are compiled with the same predefined message types.

fmTeleAir executes on the autopilot and is responsible for the link to the GCS. Whatever this node subscribes to gets published at the other end by fmTeleGround, executing on the GCS. To ensure data packages are read properly, SLIP [62] has been implemented together with the ROS serializer, as can be seen in Figure 5.8.

Figure 5.8: The data structure transmitted over the Zigbee serial link, wrapped in SLIP to ensure valid packages. STA and END are SLIP definitions for start and end of package.
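Classic SLIP framing escapes any END or ESC bytes occurring in the payload and delimits the frame with END bytes. The sketch below follows RFC 1055; the thesis' variant with a distinct STA marker differs slightly, so this is an illustration of the principle rather than the exact on-wire format:

```python
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(payload):
    """Frame a byte payload with SLIP: escape END/ESC bytes occurring in
    the payload and terminate the frame with END."""
    out = bytearray([END])  # leading END flushes any line noise
    for b in payload:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

def slip_decode(frame):
    """Inverse of slip_encode for one complete frame."""
    out, escaped = bytearray(), False
    for b in frame:
        if escaped:
            out.append(END if b == ESC_END else ESC)
            escaped = False
        elif b == ESC:
            escaped = True
        elif b != END:
            out.append(b)
    return bytes(out)
```

Because the serialized ROS message bytes may contain the framing values themselves, the escaping is what lets the receiver resynchronize on a genuine END after a corrupted package.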

5.4 Ground Control system

The information needed by the operator on the ground consists of the airframe state, the signal quality of the GPS fix and various other information. The telemetry capability provided by fmTeleGround and fmTeleAir is used for this purpose. A ground station developed for the CityFlyer by the CCNY Robotics and Intelligent Systems Lab [55] has been converted and modified to suit the current needs of the project.

The ground station is ROS based and runs in the GTK+ graphical environment. The virtual cockpit, as seen in Figure 5.9a, is made up of a configurable number of 'instruments'. These instruments are drawn with vector graphics, such that the ground station can scale to any screen size. Furthermore, the individual instruments can be modified in scale, size and color, and their placement can also

(a) Virtual cockpit (b) Map and tracker

Figure 5.9: Ground Control Station views.

be rearranged.

The ground station also features a map tab, where the user can choose between a large array of online map providers like OpenStreetMap or Google Maps. Here, the airframe track can be visualized with a line, and the map can also be set to move with the airframe.

By having the ROS enabled ground station subscribe to the aforementioned Zigbee serial node topics, all the necessary information is available and ready to be visualized.

5.5 Simulator

CRRCsim is built directly from source, as this allows integration of sensor data publishers and steering input subscribers utilizing ROS. CRRCsim uses the GNU Autoconf build system, which proved difficult to mate with the ROS make system. Therefore, the dependencies were manually installed and a CMake system was configured to include ROS in the build, thereby enabling the whole ROS suite in the source. Having done that, one of the existing unused interfaces was modified into a ROS node subscribing to radio data and publishing sensor data.

Figure 5.10: Rxgraph of the simulator (crrcsim) running alongside the state estimator (fmFusion), the PID controller (flystixController), the Zigbee connected radio input (fmRemoteGround) and finally a ground station (ground station).

As can be seen in the overview in Fig. 5.10 on the preceding page, the simulator subscribes to servo data and publishes sensor data on a multitude of topics. This architecture allows replacement of the simulator node directly with the actual hardware, by changing the mode of fmFusion, as described in Section 5.3.1 on page 57.

5.6 Flight test

5.6.1 Vision based post flight roll verification

One of the major challenges of estimating the state of an airplane is the lack of ground truth, and hence the lack of a solid, correct reference. The GPS can be used to some extent; if the wind speed is known, the pitot tube can be calibrated, as seen in Section 3.1.1 on page 29. Also, the heading, deduced mainly from the magnetometer, can be verified by the GPS heading, measured when moving in a constant direction. This method can, however, not be used accurately during turns, as the internal filter of any GPS introduces a latency in the output. Furthermore, the GPS is of no help when estimating roll and pitch. Thus, we are faced with the problem of acquiring a reference. As has been shown, roll and pitch can be deduced from a video stream; therefore a test flight was conducted with a camera mounted on the airframe, facing out the front of the plane.

In itself, this video footage can be synchronized with the logged state estimate, visualized on a virtual cockpit. This task is, however, inaccurate, tedious and time consuming, as a person has to watch the video stream and visually determine whether the state estimate corresponds to the video. Therefore, an automated vision system has been developed in Matlab. The flow of this script is depicted in Figure 5.11 on the next page.

[Figure 5.11 flow: for each image, load image N as greyscale, apply Gaussian blur, Otsu two-level thresholding and Canny edge detection, perform a Hough-space transform with peak detection, and deduce the roll angle from the peak position; repeat until the last image has been processed.]

Figure 5.11: Horizon detection algorithm flow chart

The method is stable as long as there is a clear shade difference, enabling Otsu's method to distinguish sky from ground. If an image contains only sky, the most dominant line is typically the propeller, and if only ground is filmed, typically field boundaries are found. From a video section of 1:20 minutes, during which sky and ground are constantly visible, a data sample has been extracted. Figures 5.12 through 5.14 illustrate this particular 1:20 minute flight, and compare the vision estimate with the Extended Kalman Filter state estimate.
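The Otsu step picks the greyscale threshold that best separates the two classes (sky and ground) by maximizing the between-class variance of the histogram. The thesis' script is in Matlab; the following self-contained Python sketch shows the same computation:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method on a uint8 greyscale image: return the threshold
    maximizing the between-class variance of the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                   # grey-level probabilities
    omega = np.cumsum(p)                    # class-0 probability per threshold
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # undefined at empty classes
    return int(np.argmax(sigma_b))
```

On a sky/ground frame with two well-separated brightness modes, the returned threshold falls between the modes, which is exactly the condition under which the horizon detection is stable.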

Figure 5.12: Comparing vision (solid) and EKF (dashed) roll estimates

Figure 5.13: Roll error over time


Figure 5.14: Roll error distribution. The standard deviation is 5.21◦.

The EKF state estimate has been recorded at a frequency of 50 Hz, whereas the camera used had a framerate below 15 fps. Therefore Matlab has been used to resample the vision roll estimate, and to ensure synchronization, cross-correlation has also been conducted. It is worth noting that the pitch angle could also have been deduced from the video; this was, however, not tested. It is also estimated that the script, as presented in Figure 5.11 on page 63, at times deviates by up to 6 degrees in seemingly well-defined horizon images. Therefore the standard deviation of the error between the vision and EKF estimates, as seen in Figure 5.14, is just as likely to originate from the vision script as from the EKF. Thereby the test indicates that the EKF state estimate of roll is viable.
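The resample-and-correlate step can be sketched as follows. This is a minimal NumPy illustration, not the Matlab code used in the thesis; the signal names and the 0.2 s delay are hypothetical, injected only so the recovered lag can be checked.

```python
import numpy as np

def synchronize(t_ekf, ekf, t_vis, vis, dt=0.02):
    """Resample the slower vision signal onto the 50 Hz EKF time base, then
    use cross-correlation to find how far the vision estimate lags the EKF."""
    vis_50hz = np.interp(t_ekf, t_vis, vis)          # linear resampling
    a = ekf - np.mean(ekf)
    b = vis_50hz - np.mean(vis_50hz)
    xcorr = np.correlate(a, b, mode="full")
    lag_samples = (len(b) - 1) - np.argmax(xcorr)    # samples vision lags EKF
    return vis_50hz, lag_samples * dt

# hypothetical data: 50 Hz EKF roll and a ~12.5 fps vision roll delayed 0.2 s
t_ekf = np.arange(0.0, 10.0, 0.02)
t_vis = np.arange(0.0, 10.0, 0.08)
ekf_roll = 20.0 * np.sin(2 * np.pi * 0.3 * t_ekf)
vis_roll = 20.0 * np.sin(2 * np.pi * 0.3 * (t_vis - 0.2))
_, delay = synchronize(t_ekf, ekf_roll, t_vis, vis_roll)
print(delay)
```

Removing the mean before correlating matters: without it, the correlation peak is dominated by the signals' offsets rather than their shape.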

5.6.2 Aided Flight

Once the state estimate has been implemented and verified, the first step towards the autopilot described in Chapter 2 on page 14 is the ability to control the plane based on its state and user input.

Stabilized flight is characterized by the plane stabilizing itself, in roll and pitch, when the remote control sticks are released by the pilot. This can be used to teach aspiring pilots how to fly, with a reduced risk of crashing the plane. If panic occurs, a dangerous situation can typically be averted by releasing all control sticks and, if lift is desired, applying elevator action. However, this limited form of stabilization does not provide a means for good control, only correction when a situation is about to get, or already is, out of hand.

If, however, the user's control input could be used to guide the plane in the right direction, rather than directly controlling the actuators of the plane, a more intuitive way of flying could be realized. If a robust way of controlling the plane


can be implemented, an autopilot utilizing the same control input would be more intuitive to program. Therefore a control model has been implemented, where the right-hand control stick, in the horizontal direction, defines the roll angle set-point rather than the aileron deflection. Please refer to Figure 5.17 on the following page. If the controller is then limited to a reasonable set of maximum roll angles (i.e. no more than ±90◦), this input actually controls the roll angle and thereby the turn radius of the plane: the same ultimate goal as regular direct aileron control, but without the pilot having to do the actual control. Likewise, traditional control of the right-hand vertical stick maps to elevator deflection; see Figure 5.18 on the next page. Here the ultimate goal is to pitch at a certain rate or angle in order to change the plane's altitude. Therefore a control loop has been implemented where this stick input maps to a pitch angle rather than elevator deflection. By limiting the maximum and minimum pitch angle to safe values (i.e. ±45◦ for safe regular flight), the plane flies reasonably safely, no matter the user input, while still leaving great freedom of control to the pilot. The pitch controller currently maps to the elevator and does not account for roll. This results in the vertical control stick influencing the turn radius, rather than pitch, when the airframe is not horizontal. This effect can be seen in Figure 5.16 on the following page.
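The stick-to-set-point mapping described above can be sketched as follows. The gains, loop period and normalized stick range are illustrative assumptions, not the values tuned on the platform; the thesis uses simple PID loops, and a plain PID class stands in for them here.

```python
def stick_to_setpoint(stick, max_angle):
    """Map a normalized stick position in [-1, 1] to an attitude set-point,
    clamped to a safe limit (e.g. 90 deg for roll, 45 deg for pitch)."""
    return max(-1.0, min(1.0, stick)) * max_angle

class PID:
    """Plain PID loop; gains and loop period here are illustrative only."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# roll channel: half stick with a +/-90 deg limit requests a 45 deg bank;
# the PID output is the aileron command, not a direct stick passthrough
roll_pid = PID(kp=0.02, ki=0.0, kd=0.0, dt=0.02)
setpoint = stick_to_setpoint(0.5, 90.0)
aileron = roll_pid.update(setpoint, measured=10.0)  # state estimate: 10 deg
print(setpoint, aileron)
```

The clamp in `stick_to_setpoint` is what bounds the commanded attitude regardless of user input, which is the safety property argued for in the text.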

The aided flight procedure has been implemented and tested in the field. On a day with a moderate wind of about 5-6 m/s, the plane was hand launched in aided mode, flown for five minutes, and safely landed, never changing back to manual mode. The flight was mainly conducted in a large circle, turning left. Figure 5.15 shows the relation between the set-point from the radio and the state estimate of the roll.

Figure 5.15: Roll angle as a function of user input


Figure 5.16: Pitch angle as a function of user input (left: roll ≈ 0°; right: roll ≈ 40°)

(a) Radio aileron

Mode        Function
Manual      Aileron deflection
Stabilized  Stabilize horizontally if centered
Aided       Roll angle set-point

(b) Radio aileron function

Figure 5.17: Radio aileron input function dependent on flight mode

(a) Radio elevator

Mode        Function
Manual      Elevator deflection
Stabilized  Stabilize horizontally if centered
Aided       Pitch angle set-point

(b) Radio elevator function

Figure 5.18: Radio elevator input function dependent on flight mode


Chapter 6

Perspective

6.1 Future work

The ultimate goal of this project, to make an Autonomous Tool Carrier capable of interfacing various tools and flying autonomously, has not yet been fully realized. The work that has been done has, however, been conducted in such a way that it makes for a good starting point for future work. The most eminent topic of future work is the tool interfacing. Hardware interfaces have been made available, and the software has been designed to easily accommodate changes and modules. The interfacing protocol has, however, not been defined and implemented. Also, the effect of adding weight to the system has been investigated, but the impact on control response and the tuning of the controllers as a function of added weight should be explored.

Of the state estimate parameters, pitot vs. GPS speed and roll angle vs. video recording have been verified. However, the other state parameters have not been scientifically verified. The verification of parameters such as pitch, yaw, position etc. should also be conducted. It is proposed to extract the pitch using the same approach as has been used for the roll estimate. Heading and position are a bit more complex to verify, as the ground truth is not readily available. So far, the verification of the remaining parameters has been done by manually inspecting the state, visualized by the ground station, and comparing it to video recordings. Also, pitch has been verified in the sense that it works in aided flight mode, but again, a more scientific method should be used.

The last major field of future work lies in the development of the Flight Management System. Various concepts have been investigated, but none of them have been implemented. This would be a natural next step for the project, as the state estimator now provides all the data necessary for flying autonomously.


6.2 Project analysis

When the project was first defined, a number of time-consuming factors were underestimated. Initially, the idea was to create the entire prototype, starting at the hardware design level and ending with an ATC capable of autonomous flight and tool interactions. This is in itself a vast project, and even if everything had worked on the first try, it was ambitious to think that the flying prototype could have been finished within the time limits. Apart from this, a number of factors introduced unforeseen delays; dealing with aircraft automation presents a number of challenges in the transition from idea to test, as errors can potentially destroy the entire physical platform. Unlike the automation of ground vehicles, hitting an emergency stop is a very bad idea when airborne. Also, none of the authors could fly model planes, and they were thus faced with either hiring a pilot for every test flight or having to learn how to fly. Hiring a pilot would be the safest option, but it is expensive. Often, it is also desirable to be able to fly within the hour, if the weather conditions are just right. Therefore it was decided that one author would spend time learning to fly. This obviously delayed the initial flights and increased the risk of crashes due to lack of experience. The trade-off was worth it though, as many more flights were conducted, at the right times.

Initially, the project work was slightly unstructured. A time plan had been forged, but it did not become an integrated part of the day-to-day work. This was later realized, and the plan was revised. This resulted in a SCRUM-like process, where sub-deliveries were formed and logged for the supervisors to see. The generated log can be seen in Appendix I on page 124. This structure greatly focused the team and increased its productivity, resulting in an effective round-off of the project. Finally, in the last stages of the project, the usefulness of an on-line process log was discovered. Through the log, supervisors and students alike had a common forum for discussing the day-to-day progress and direction of the project. The log is attached in Appendix I on page 124.

6.3 Conclusion

Through a study of state-of-the-art autonomous unmanned aerial vehicles, it has been discovered that, while there are several robust systems available, they tend to focus on specific application areas and use specialized, integrated tools. Through a multitude of disciplines, ranging from hardware layout, embedded programming and software development to kinematics and digital filtering, a prototype foundation for a new concept in the area of autonomous aerial vehicles has been made. By building the system from the bottom up, with modularity in mind, it is anticipated that various tools can interface and interact with the same autopilot. The cornerstone has been laid for a continued project. A platform capable of stable state estimation and proven control of the longitudinal axis has been developed, implemented and tested. A combination of tests and practical verification has shown that a 9-state Extended Kalman Filter, based on cheap digital sensors, can be used to accurately determine the state of an aircraft in motion.


Furthermore, it has been demonstrated that the state estimate and the nestedPID loops are capable of maintaining control of the airframe under aided flight.

It has been shown that, by substituting direct control with attitude control, a novice pilot can fly a model aircraft, even in moderate winds. Nothing suggests that this should not be the case for stronger winds as well. It is postulated that, as this control is now in place, the next step towards an Autonomous Tool Carrier can be taken. By building an autopilot, as suggested in this thesis, and implementing a tool communication protocol on the provided hardware and software, the system can become a reality.


Bibliography

[1] Ardupilot. www.diydrones.com/.

[2] Avior 100 autopilot. http://www.atiak.com/products/ati-avior-100-specifications/.

[3] Gatewing x100. www.gatewing.com.

[4] Gluonpilot. www.gluonpilot.com.

[5] Micropilot. www.micropilot.com.

[6] Openpilot. www.openpilot.org.

[7] Piccolo ii. http://www.cloudcaptech.com/piccolo_II.shtm.

[8] Sensefly. www.sensefly.com.

[9] Ccny ground station. http://ros.org/wiki/ccny_ground_station.

[10] Happy killmore ground control station. http://code.google.com/p/happykillmore-gcs/.

[11] Kfilter - free c++ extended kalman filter library. www.kalman.sourceforge.net.

[12] Procerus technologies, kestrel autopilot. http://www.procerusuav.com.

[13] Openpilot ground control station. http://www.openpilot.org/product/openpilot-gcs/.

[14] Paparazzi. www.paparazzi.enac.fr.

[15] Q ground control. http://qgroundcontrol.org/.

[16] World magnetic model 2010. National Geophysical Data Center, National Oceanic and Atmospheric Administration. http://www.ngdc.noaa.gov/geomagmodels/IGRFWMM.jsp?defaultModel=WMM.

[17] E.P. Anderson, R.W. Beard, and T.W. McLain. Real-time dynamic trajectory smoothing for unmanned air vehicles. Control Systems Technology, IEEE Transactions on, 13(3):471–477, May 2005. ISSN 1063-6536. doi: 10.1109/TCST.2004.839555.


[18] A. L. Barker, D. E. Brown, and W. N. Martin. Bayesian estimation and the kalman filter. Computers Math. Applic, pages 55–77, 1994.

[19] R. Beard. State estimation for micro air vehicles. In Javaan Chahl, Lakhmi Jain, Akiko Mizutani, and Mika Sato-Ilic, editors, Innovations in Intelligent Machines - 1, volume 70 of Studies in Computational Intelligence, pages 173–199. Springer Berlin / Heidelberg, 2007. ISBN 978-3-540-72695-1. URL http://dx.doi.org/10.1007/978-3-540-72696-8_7.

[20] H. Bendea, P. Boccardo, S. Dequal, F. Giulio Tonolo, D. Marenchino, and M. Piras. Low cost uav for post-disaster assessment. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXVII(B8), 2008.

[21] W. Burgard, D. Fox, and S. Thrun. Probabilistic robotics. Cambridge, MA: MIT Press, 2005.

[22] Xavier P. Burgos-Artizzu, Angela Ribeiro, Maria Guijarro, and Gonzalo Pajares. Real-time image processing for crop/weed discrimination in maize fields. Computers and Electronics in Agriculture, 75(2):337–346, 2011. ISSN 0168-1699. doi: 10.1016/j.compag.2010.12.011. URL http://www.sciencedirect.com/science/article/pii/S0168169910002620.

[23] R. S. Christiansen. Design of an autopilot for small unmanned aerial vehicles. Master's thesis, Brigham Young University, Aug 2004.

[24] Atmel Corporation. Atmega328 datasheet, Dec 2009. 8271A–AVR–12/09.

[25] Analog Devices. Digital accelerometer adxl345. Technical report, AD, 2009.

[26] Digi. Xbee - rf family comparison matrix. Technical report, Digi, 2011.

[27] Roger M. du Plessis. Poor Man's Explanation of Kalman Filtering or How I Stopped Worrying and Learned to Love Matrix Inversion. June 1967.

[28] A. M. Eldredge. Improved state estimation for micro air vehicles. Master's thesis, Brigham Young University, Dec 2006.

[29] Vanessa Espinar and Dana Wiese. An extreme makeover - scientists upgrade a toy plane with robotic technologies. GPS World, pages 20–27, Feb 2006.

[30] Patrick Th. Eugster, Pascal A. Felber, Rachid Guerraoui, and Anne-Marie Kermarrec. The Many Faces of Publish/Subscribe. ACM Computing Surveys, 35(2):114–131, June 2003.

[31] Dario Floreano, Sabine Hauert, Severin Leven, and Jean-Christophe Zufferey. Evolutionary Swarms of Flying Robots. Laboratory of Intelligent Systems, Ecole Polytechnique Federale, Lausanne, Switzerland.

[32] K. Gill, Shuang-Hua Yang, Fang Yao, and Xin Lu. A zigbee-based home automation system. Consumer Electronics, IEEE Transactions on, 55(2):422–430, May 2009. ISSN 0098-3063. doi: 10.1109/TCE.2009.5174403.


[33] C. Gutjahr and R. Gerhards. Decision rules for site-specific weed management. In Erich-Christian Oerke, Roland Gerhards, Gunter Menz, and Richard A. Sikora, editors, Precision Crop Protection - the Challenge and Use of Heterogeneity, pages 223–239. Springer Netherlands, 2010. ISBN 978-90-481-9277-9. URL http://dx.doi.org/10.1007/978-90-481-9277-9_14.

[34] Dieter Hausamann, Werner Zirnig, Gunter Schreier, and Peter Strobl. Monitoring of gas pipelines - a civil uav application. Aircraft Engineering and Aerospace Technology, 77(5):352–360, 2005.

[35] W.T. Higgins. A comparison of complementary and kalman filtering. Aerospace and Electronic Systems, IEEE Transactions on, AES-11(3):321–325, May 1975. ISSN 0018-9251. doi: 10.1109/TAES.1975.308081.

[36] Texas Instruments. Sn74lvc16t245 16-bit dual-supply bus, 2005.

[37] InvenSense. ITG-3200 Product Specification, 1.4 edition, March 2010.

[38] Corey Ippolito. An autonomous autopilot control system design for small-scale uavs. Technical report, Carnegie Mellon University, 2005. NASA Ames Research Center.

[39] Jan Jacobi and Matthias Backes. Classification of weed patches in quickbird images: Verification by ground truth data. EARSeL European Association of Remote Sensing Laboratories, 5(2):173–179, July 2006. ISSN 1729-3782.

[40] Niels Jul Jacobsen. Den intelligente sprøjtebom (The intelligent spray boom). Syddansk Universitet, Mærsk Mc-Kinney Møller Instituttet.

[41] Kjeld Jensen. Frobomind, a conceptual architecture for field robot software. http://wiki.fieldrobot.dk/index.php/FroboMind:Architecture.

[42] Bent Vraae Jørgensen and Bjarne Siewertsen. Dmi wind statistics. http://www.dmi.dk/dmi/saadan_blaeser_det_i_danmark.

[43] R. E. Kalman. A New Approach to Linear Filtering and Prediction Problems. Transactions of the ASME – Journal of Basic Engineering, (82 (Series D)):35–45, 1960.

[44] Wajahat Kazmi, Morten Bisgaard, Francisco Garcia-Ruiz, Karl Damkjær Hansen, and Anders la Cour-Harbo. Adaptive surveying and early treatment of crops with a team of autonomous vehicles. Proceedings of the 5th European Conference on Mobile Robots ECMR 2011, pages 1–6, 2011.

[45] Derek Kingston, Randal Beard, Al Beard, Timothy McLain, Michael Larsen, and Wei Ren. Autonomous vehicle technologies for small fixed-wing uavs. In AIAA Journal of Aerospace Computing, Information, and Communication, pages 2003–6559, 2003.

[46] Mark Looney. A simple calibration for mems gyroscopes. Technical report, Analog Devices, 2010.


[47] Chang Boon Low. A trajectory tracking control design for fixed-wing unmanned aerial vehicles. In Control Applications (CCA), 2010 IEEE International Conference on, pages 2118–2123, Sept 2010. doi: 10.1109/CCA.2010.5611328.

[48] Statens Luftfartsvæsen. Bl 9-4 bestemmelser om luftfart med ubemandede luftfartøjer, som ikke vejer over 25 kg (Regulations on aviation with unmanned aircraft weighing no more than 25 kg). Bestemmelser for Civil Luftfart, (3), January 2004.

[49] R. Mahony, M. Euston, P. Coote, J. Kim, and T. Hamel. A complementary filter for attitude estimation of a fixed-wing uav. In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, pages 340–345, Sept 2008. doi: 10.1109/IROS.2008.4650766.

[50] ST Microelectronics. L3G4200D MEMS motion sensor: ultra-stable three-axis digital output gyroscope. Technical report, ST, 2010.

[51] E. C. Molina. Bayes' theorem - an expository presentation. Bell System Technical Journal, pages 273–283, 1931.

[52] M.S. Moran, Y. Inoue, and E.M. Barnes. Opportunities and limitations for image-based remote sensing in precision crop management. Remote Sensing of Environment, 61(3):319–346, 1997. ISSN 0034-4257. doi: 10.1016/S0034-4257(97)00045-X. URL http://www.sciencedirect.com/science/article/pii/S003442579700045X.

[53] S. Nasiri, M. Lim, and M. Housholder. A critical review of the market status and industry challenges of producing consumer grade mems gyroscopes. Technical report, InvenSense, 2009.

[54] D.R. Nelson, D.B. Barber, T.W. McLain, and R.W. Beard. Vector field path following for miniature air vehicles. Robotics, IEEE Transactions on, 23(3):519–529, June 2007. ISSN 1552-3098. doi: 10.1109/TRO.2007.898976.

[55] The City College of New York. City-flyer robotics lab. http://robotics.ccny.cuny.edu/blog/node/20.

[56] S. M. Oh. Multisensor fusion for autonomous uav navigation based on the unscented kalman filter with sequential measurement updates. In Multisensor Fusion and Integration for Intelligent Systems (MFI), 2010 IEEE Conference on, pages 217–222, Sept 2010. doi: 10.1109/MFI.2010.5604461.

[57] Sanghyuk Park, John Deyst, and Jonathan P. How. A new nonlinear guidance logic for trajectory tracking. In Proceedings of the AIAA Guidance, Navigation and Control Conference, pages 2004–4900, 2004.

[58] Morgan Quigley, Brian Gerkey, Ken Conley, Josh Faust, Tully Foote, Jeremy Leibs, Eric Berger, Rob Wheeler, and Andrew Ng. Ros: an open-source robot operating system. Willow Garage, 2009.

[59] J.D. Redding, T.W. McLain, R.W. Beard, and C.N. Taylor. Vision-based target localization from a fixed-wing miniature air vehicle. In American Control Conference, 2006, page 6 pp., June 2006. doi: 10.1109/ACC.2006.1657153.


[60] S. Riaz and A. B. Asghar. Ins/gps based state estimation of micro air vehicles using inertial sensors. In Innovative Systems Design and Engineering, Vol 2, No 5, 2011.

[61] S. Riaz and Dr. A. M. Malik. Single seven state discrete time extended kalman filter for micro air vehicle. In Proceedings of the World Congress on Engineering 2010, Vol II, 2010.

[62] J. Romkey. A nonstandard for transmission of ip datagrams over serial lines: Slip. Network Working Group, June 1988. http://tools.ietf.org/html/rfc1055.

[63] S. F. Schmidt. Kalman filter: Its recognition and development for aerospace applications. Journal of Guidance and Control, 4(1):4–7, 1981.

[64] Freescale Semiconductor. MPXV7002 - Integrated Silicon Pressure Sensor, On-Chip Signal Conditioned, Temperature Compensated and Calibrated, 2009. rev 2.

[65] Bosch Sensortec. Bmp085 digital pressure sensor - data sheet, 2009. BST-BMP085-DS000-05.

[66] SMSC. High-performance 16-bit non-pci 10/100 ethernet controller with variable voltage i/o, 2012.

[67] B. L. Stevens and F. L. Lewis. Aircraft Control and Simulation. John Wiley & Sons, 1st edition, 1992. ISBN 978-0471613978.

[68] S. Thrun, F. Dellaert, D. Fox, and W. Burgard. Monte carlo localization: Efficient position estimation for mobile robots. In Proceedings of the Sixteenth National Conference on Artificial Intelligence, pages 343–349, July 1999.

[69] Andres Villa-Henriksen. Automatic thermal based animal detection system for mowing operations. Master's thesis, University of Aarhus, August 2011.

[70] Martin Weis and Markus Sokefeld. Detection and identification of weeds. In Erich-Christian Oerke, Roland Gerhards, Gunter Menz, and Richard A. Sikora, editors, Precision Crop Protection - the Challenge and Use of Heterogeneity, pages 119–134. Springer Netherlands, 2010. ISBN 978-90-481-9277-9. URL http://dx.doi.org/10.1007/978-90-481-9277-9_8.

[71] G. Welch and G. Bishop. An introduction to the kalman filter, 2001.

[72] J. G. Ziegler and N. B. Nichols. Optimum settings for automatic controllers. Transactions of ASME, 64:759–768, 1942.


Nomenclature

Abbreviations

ADC Analog to Digital Converter

ATC Autonomous Tool Carrier

AVR Officially just a name of the Atmel© microcontroller line. Most likely it is derived from Alf and Vegard's RISC

BL Bestemmelser for Civil Luftfart. (Danish law on civilian aviation)

DMI Dansk Meteorologisk Institut (Danish Meteorological Institute)

EKF Extended Kalman Filter

FMS Flight Management System

FTDI Future Technology Devices International. Company specialized in USB to UART adapters.

GCS Ground Control Station

GPIO General Purpose Input / Output

GPS Global Positioning System

I2C Inter-Integrated circuit bus

IC Integrated Circuit

IMU Inertial Measurement Unit

IR Infrared

LAN Local Area Network

MAV Micro Aerial Vehicle

MEMS Micro Electro Mechanical System

NIR Near InfraRed colour channel


NDVI Normalized Difference Vegetation Index

PCB Printed Circuit Board

PDF Probability density function

PID Proportional, Integral & Derivative Controller

PWM Pulse Width Modulation

RAM Random Access Memory

R/C Radio Controlled

RGB Red, Green and Blue colour channels

SMD Surface mounted devices

SPI Serial Peripheral Interface

SSWM Site Specific Weed Management

UART Universal Asynchronous Receiver / Transmitter

UAS Unmanned Aerial System

UAV Unmanned Aerial Vehicle

USB Universal Serial Bus

USB-OTG USB On-The-Go

UTM Universal Transverse Mercator

WiFi Wireless Fidelity

Notations

a Scalar

a (boldface) Vector

A (boldface capital) Matrix

ak Value at kth time step

sα Sine of alpha, sin(α)

cα Cosine of alpha, cos(α)

tα Tangent of alpha, tan(α)

ȧ Time derivative of a

â Estimate of a

â⁻ A priori estimate

â⁺ A posteriori estimate


p(a) The probability of a

p(a|b) The probability of a given b

Symbols

Rk Euler rotation matrix, rotating around the k-axis, k being either the x, y or z axis

Rq Quaternion rotation matrix

αn Acceleration along the n-axis, not to be confused with the angle of attack, α

α Angle of attack, not to be confused with acceleration, αn

βn Magnetic field strength along the n-axis

β0 Magnetic field strength in reference frame

χ Course over ground

γ Course climb angle

K Kalman gain

λ Wavelength

R Measurement noise covariance matrix

H Measurement model. Relates x to z

V Measurement noise model. Relates v to z

h(x,v) Non-linear measurement model

f(x,u,w) Non-linear state transition model

z Measurement or observation

ωn Angular velocity around n-axis

φ Rotation around the longitudinal axis

Pe Position in east direction.

Pn Position in north direction.

ψ Rotation around the vertical axis

q Quaternion

P State covariance matrix

x State estimate

Q State transition noise covariance matrix

A State transition model


W State transition noise model. Relates w to x

w State transition noise. Mean zero Gaussian with covariance Q

u System input

v Measurement noise. Mean zero Gaussian with covariance R

θ Rotation around the lateral axis

We Wind in east direction.

Wn Wind in north direction.

Terminology

Ailerons Control surfaces, typically located on the outer half of the wing, towards the wing tip. Deflection upwards causes reduced lift; deflection downwards causes increased lift. Operated in contra mode to induce roll.

Course Direction of flight

Elevator Control surface, located at the trailing edge of the tailplane. Deflection causes movement around the lateral axis.

Lateral Axis going from wingtip to wingtip, through the center of gravity.

Longitudinal Axis going from plane nose to tail, through the center of gravity.

ROS Robot Operating System. A middleware for inter-task communication.

Rudder Control surface, located on the trailing edge of the vertical stabilizer. Deflection causes movement around the vertical axis in the direction of the deflection.

Throttle Controls the rotational speed of the propeller and thus the generated thrust. Usually used to generate a force out the nose of the plane. It can, however, be used as a brake for landing, as a slowly rotating propeller causes more drag than a fixated propeller.

Vertical Axis from top to bottom of the plane, through the center of gravity.


Appendix A

Attitude kinematics

[Figure shows the body axes: the x-axis with φ, ωx, αx; the y-axis with θ, ωy, αy; the z-axis with ψ, ωz, αz.]

Figure A.1: Illustration of symbols and axes relevant to this appendix.

\[
\mathbf{x} = \begin{bmatrix} w \\ x \\ y \\ z \end{bmatrix},\quad
\mathbf{u} = \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix},\quad
\mathbf{z} = \begin{bmatrix} \alpha_x \\ \alpha_y \\ \alpha_z \end{bmatrix},\quad
\mathbf{w} = \begin{bmatrix} \eta_{\omega_x} \\ \eta_{\omega_y} \\ \eta_{\omega_z} \end{bmatrix},\quad
\mathbf{v} = \begin{bmatrix} \eta_{\alpha_x} \\ \eta_{\alpha_y} \\ \eta_{\alpha_z} \end{bmatrix}
\]

A.1 Time update

Under the small angle approximation, the quaternion representation of an infinitesimal rotation is

\[
\Delta q(\mathbf{x}_k,\mathbf{u}_k) \approx
\begin{bmatrix} 1 \\ \dfrac{\mathbf{u}+\mathbf{w}}{2}\,\Delta t \end{bmatrix}
= \begin{bmatrix}
1 \\
(\omega_x + \eta_{\omega_x})\,\Delta t/2 \\
(\omega_y + \eta_{\omega_y})\,\Delta t/2 \\
(\omega_z + \eta_{\omega_z})\,\Delta t/2
\end{bmatrix} \tag{A.1}
\]

As a quaternion is rotated by another quaternion through multiplication, the state can be extrapolated by quaternion multiplication in matrix form:

\[
\mathbf{x}_{k+1} \approx \mathbf{x}_k \cdot \Delta q(\mathbf{x}_k,\mathbf{u}_k) \tag{A.2}
\]
\[
= \begin{bmatrix}
w_k & -x_k & -y_k & -z_k \\
x_k & w_k & -z_k & y_k \\
y_k & z_k & w_k & -x_k \\
z_k & -y_k & x_k & w_k
\end{bmatrix}
\cdot \begin{bmatrix}
1 \\
(\omega_x + \eta_{\omega_x})\,\Delta t/2 \\
(\omega_y + \eta_{\omega_y})\,\Delta t/2 \\
(\omega_z + \eta_{\omega_z})\,\Delta t/2
\end{bmatrix} \tag{A.3}
\]

However, the derivative of the state transition is wanted for the Kalman estimator, and is derived in (A.4) through (A.8). The derivative is approximated by the linear model \(\dot{y} \approx \Delta y / \Delta t\).

\[
\dot{\mathbf{x}}_k \approx \frac{\mathbf{x}_k \cdot \Delta q(\mathbf{x}_k,\mathbf{u}_k) - \mathbf{x}_k}{\Delta t} \tag{A.4}
\]
\[
= \left(\mathbf{x}_k \cdot \Delta q(\mathbf{x}_k,\mathbf{u}_k) - \mathbf{x}_k\right)\cdot\frac{1}{\Delta t} \tag{A.5}
\]
\[
= \left(
\begin{bmatrix}
w_k & -x_k & -y_k & -z_k \\
x_k & w_k & -z_k & y_k \\
y_k & z_k & w_k & -x_k \\
z_k & -y_k & x_k & w_k
\end{bmatrix}
\cdot \begin{bmatrix}
1 \\
(\omega_x + \eta_{\omega_x})\,\Delta t/2 \\
(\omega_y + \eta_{\omega_y})\,\Delta t/2 \\
(\omega_z + \eta_{\omega_z})\,\Delta t/2
\end{bmatrix}
- \begin{bmatrix} w_k \\ x_k \\ y_k \\ z_k \end{bmatrix}
\right)\cdot\frac{1}{\Delta t} \tag{A.6}
\]
\[
= \begin{bmatrix}
w_k & -x_k & -y_k & -z_k \\
x_k & w_k & -z_k & y_k \\
y_k & z_k & w_k & -x_k \\
z_k & -y_k & x_k & w_k
\end{bmatrix}
\cdot \begin{bmatrix}
0 \\
(\omega_x + \eta_{\omega_x})\,\Delta t/2 \\
(\omega_y + \eta_{\omega_y})\,\Delta t/2 \\
(\omega_z + \eta_{\omega_z})\,\Delta t/2
\end{bmatrix}
\cdot\frac{1}{\Delta t} \tag{A.7}
\]
\[
= \begin{bmatrix}
w_k & -x_k & -y_k & -z_k \\
x_k & w_k & -z_k & y_k \\
y_k & z_k & w_k & -x_k \\
z_k & -y_k & x_k & w_k
\end{bmatrix}
\cdot \begin{bmatrix}
0 \\ \omega_x + \eta_{\omega_x} \\ \omega_y + \eta_{\omega_y} \\ \omega_z + \eta_{\omega_z}
\end{bmatrix}
\cdot\frac{1}{2} \tag{A.8}
\]

Thus, the state transition function f(x, u, w) is given by (A.9).

\[
f(\mathbf{x}_k,\mathbf{u}_k,\mathbf{w}_k) = \frac{1}{2}
\begin{bmatrix}
-x_k(\omega_x + \eta_{\omega_x}) - y_k(\omega_y + \eta_{\omega_y}) - z_k(\omega_z + \eta_{\omega_z}) \\
\phantom{-}w_k(\omega_x + \eta_{\omega_x}) - z_k(\omega_y + \eta_{\omega_y}) + y_k(\omega_z + \eta_{\omega_z}) \\
\phantom{-}z_k(\omega_x + \eta_{\omega_x}) + w_k(\omega_y + \eta_{\omega_y}) - x_k(\omega_z + \eta_{\omega_z}) \\
-y_k(\omega_x + \eta_{\omega_x}) + x_k(\omega_y + \eta_{\omega_y}) + w_k(\omega_z + \eta_{\omega_z})
\end{bmatrix} \tag{A.9}
\]

The small angle approximation introduced in (A.1) deforms the unit quaternion, such that over time it will no longer hold that ||x|| = 1. To avoid this, the quaternion must be normalized frequently by the following equation:

\[
\mathbf{x}_n = \frac{\mathbf{x}}{\|\mathbf{x}\|}
= \frac{1}{\sqrt{w^2 + x^2 + y^2 + z^2}}
\begin{bmatrix} w \\ x \\ y \\ z \end{bmatrix} \tag{A.10}
\]
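The time update and the normalization can be sketched numerically. This is a minimal NumPy illustration of (A.9) and (A.10) with zero noise; the naive Euler integrator, step size and constant roll rate are assumptions of the sketch, not the implementation on the platform.

```python
import numpy as np

def f(x, u):
    """Quaternion derivative (A.9) with zero noise."""
    w, xq, y, z = x
    wx, wy, wz = u
    return 0.5 * np.array([
        -xq * wx - y * wy - z * wz,
         w * wx - z * wy + y * wz,
         z * wx + w * wy - xq * wz,
        -y * wx + xq * wy + w * wz,
    ])

def normalize(x):
    """(A.10): renormalize to counter drift from the small-angle steps."""
    return x / np.linalg.norm(x)

# integrate a pure roll rate of 0.1 rad/s for 1 s with Euler steps
x = np.array([1.0, 0.0, 0.0, 0.0])
u = np.array([0.1, 0.0, 0.0])
dt = 0.001
for _ in range(1000):
    x = normalize(x + f(x, u) * dt)

# recover the roll angle via the phi row of the quaternion-to-Euler map
roll = np.degrees(np.arctan2(2 * (x[0] * x[1] + x[2] * x[3]),
                             x[0]**2 - x[1]**2 - x[2]**2 + x[3]**2))
print(round(roll, 2))
```

Integrating 0.1 rad/s for one second should recover a roll of about 0.1 rad (5.73°), and the per-step renormalization keeps the quaternion on the unit sphere.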


The state transition matrix, A, is given by the partial derivative of f(x, u, 0) with respect to x:

\[
A_{[i,j]} = \frac{\partial f_{[i]}}{\partial x_{[j]}}(\mathbf{x},\mathbf{u},0) \tag{A.11}
\]
\[
= \frac{1}{2}
\begin{bmatrix}
0 & -\omega_x & -\omega_y & -\omega_z \\
\omega_x & 0 & \omega_z & -\omega_y \\
\omega_y & -\omega_z & 0 & \omega_x \\
\omega_z & \omega_y & -\omega_x & 0
\end{bmatrix} \tag{A.12}
\]

Likewise, the state transition noise model, W, is the partial derivative of f with respect to the noise vector w:

\[
W_{[i,j]} = \frac{\partial f_{[i]}}{\partial w_{[j]}}(\mathbf{x},\mathbf{u},\mathbf{w}) \tag{A.13}
\]
\[
= \frac{1}{2}
\begin{bmatrix}
-x & -y & -z \\
w & -z & y \\
z & w & -x \\
-y & x & w
\end{bmatrix} \tag{A.14}
\]

A.2 Measurement

The measurement, z, is the three-dimensional acceleration of the plane in the body frame, measured in m/s². This vector is the sum of the gravity field towards the center of Earth, the centripetal forces produced by manoeuvring the plane, and the sensor noise. The measurement model is given by:

\[
h_k(\mathbf{x}_k,\mathbf{v}_k) = \mathbf{z}_k \tag{A.15}
\]
\[
= R^\top(\mathbf{x}_k)\cdot\begin{bmatrix} 0 \\ 0 \\ -g \end{bmatrix}
+ \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}
\times \begin{bmatrix} c_\alpha c_\beta \\ s_\beta \\ s_\alpha c_\beta \end{bmatrix}\cdot V_a
+ \mathbf{v}_k \tag{A.16}
\]

where α is the angle of attack, β the side slip angle, Va the airspeed, and R(x) the orthogonal rotation matrix of the quaternion x = [w x y z]^T, defined as

\[
R(\mathbf{x}) = \begin{bmatrix}
w^2 + x^2 - y^2 - z^2 & 2(xy - wz) & 2(xz + wy) \\
2(xy + wz) & w^2 - x^2 + y^2 - z^2 & 2(yz - wx) \\
2(xz - wy) & 2(yz + wx) & w^2 - x^2 - y^2 + z^2
\end{bmatrix} \tag{A.17}
\]

Thus, expanding (A.16) gives the following expression:

\[
h(\mathbf{x},\mathbf{v}) = g\begin{bmatrix}
2(wy - xz) \\ -2(wx + yz) \\ -w^2 + x^2 + y^2 - z^2
\end{bmatrix}
+ V_a\begin{bmatrix}
\omega_y\, c_\beta s_\alpha - \omega_z\, s_\beta \\
\omega_z\, c_\alpha c_\beta - \omega_x\, c_\beta s_\alpha \\
\omega_x\, s_\beta - \omega_y\, c_\alpha c_\beta
\end{bmatrix}
+ \begin{bmatrix} \eta_{\alpha_x} \\ \eta_{\alpha_y} \\ \eta_{\alpha_z} \end{bmatrix} \tag{A.18}
\]

Note that level flight, with α and β both equal to zero, significantly reduces this expression. The observation matrix, H, is given by the partial derivative of h with respect to the state vector, x:


\[
H_{[i,j]} = \frac{\partial h_{[i]}}{\partial x_{[j]}}(\mathbf{x},0) \tag{A.19}
\]
\[
= 2g\begin{bmatrix}
y & -z & w & -x \\
-x & -w & -z & -y \\
-w & x & y & -z
\end{bmatrix} \tag{A.20}
\]

The observation noise model, V, is given by the partial derivative of h with respect to the measurement noise vector, v:

\[
V_{[i,j]} = \frac{\partial h_{[i]}}{\partial v_{[j]}}(\mathbf{x},\mathbf{v}) \tag{A.21}
\]
\[
= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{A.22}
\]
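The measurement model (A.18) can be sanity-checked numerically. The sketch below is an illustration only; it assumes noise-free sensors, g = 9.82 m/s², and an arbitrary airspeed.

```python
import numpy as np

g = 9.82  # gravitational acceleration, m/s^2 (assumed value)

def h(x, omega, Va, alpha=0.0, beta=0.0):
    """Expected accelerometer reading per (A.18), noise-free."""
    w, xq, y, z = x
    grav = g * np.array([2 * (w * y - xq * z),
                         -2 * (w * xq + y * z),
                         -w**2 + xq**2 + y**2 - z**2])
    # airspeed vector in the body frame, tilted by angle of attack and sideslip
    va = Va * np.array([np.cos(alpha) * np.cos(beta),
                        np.sin(beta),
                        np.sin(alpha) * np.cos(beta)])
    return grav + np.cross(omega, va)

# level, unaccelerated flight: only -g along the body z-axis remains
level = h(np.array([1.0, 0.0, 0.0, 0.0]), np.zeros(3), Va=15.0)
print(level)
```

For a pitch-up quaternion [cos(θ/2), 0, sin(θ/2), 0], the x-component reduces to g·sin θ, which is the expected forward specific force of a pitched, unaccelerated airframe.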

A.3 Conversion between Quaternion and Euler angles

The quaternion representation is only applied inside the attitude estimator. In the remainder of the system, the Euler angle representation is deemed more intuitive. The Euler angles can be computed from the unit quaternion by the following equation:

\[
\begin{bmatrix} \phi \\ \theta \\ \psi \end{bmatrix} =
\begin{bmatrix}
\tan^{-1}\!\left(\dfrac{2(wx + yz)}{w^2 - x^2 - y^2 + z^2}\right) \\[2ex]
\sin^{-1}\!\left(2(wy - xz)\right) \\[1ex]
\tan^{-1}\!\left(\dfrac{2(wz + xy)}{w^2 + x^2 - y^2 - z^2}\right)
\end{bmatrix} \tag{A.23}
\]

Likewise, converting Euler angles to a quaternion is needed for initialisation and can be done with the following equation:

\[
\begin{bmatrix} w \\ x \\ y \\ z \end{bmatrix} =
\begin{bmatrix}
c_{\phi/2}\, c_{\theta/2}\, c_{\psi/2} + s_{\phi/2}\, s_{\theta/2}\, s_{\psi/2} \\
s_{\phi/2}\, c_{\theta/2}\, c_{\psi/2} - c_{\phi/2}\, s_{\theta/2}\, s_{\psi/2} \\
c_{\phi/2}\, s_{\theta/2}\, c_{\psi/2} + s_{\phi/2}\, c_{\theta/2}\, s_{\psi/2} \\
c_{\phi/2}\, c_{\theta/2}\, s_{\psi/2} - s_{\phi/2}\, s_{\theta/2}\, c_{\psi/2}
\end{bmatrix} \tag{A.24}
\]
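The two conversions can be transcribed directly and round-tripped as a quick consistency check; the angles below are arbitrary illustrative values.

```python
import numpy as np

def quat_to_euler(q):
    """(A.23): unit quaternion [w, x, y, z] -> Euler angles [phi, theta, psi]."""
    w, x, y, z = q
    phi = np.arctan2(2 * (w * x + y * z), w**2 - x**2 - y**2 + z**2)
    theta = np.arcsin(2 * (w * y - x * z))
    psi = np.arctan2(2 * (w * z + x * y), w**2 + x**2 - y**2 - z**2)
    return np.array([phi, theta, psi])

def euler_to_quat(phi, theta, psi):
    """(A.24): Euler angles -> unit quaternion, used for initialisation."""
    cp, sp = np.cos(phi / 2), np.sin(phi / 2)
    ct, st = np.cos(theta / 2), np.sin(theta / 2)
    cs, ss = np.cos(psi / 2), np.sin(psi / 2)
    return np.array([cp * ct * cs + sp * st * ss,
                     sp * ct * cs - cp * st * ss,
                     cp * st * cs + sp * ct * ss,
                     cp * ct * ss - sp * st * cs])

angles = np.radians([10.0, -5.0, 30.0])   # roll, pitch, yaw in degrees
q = euler_to_quat(*angles)
print(np.degrees(quat_to_euler(q)))
```

The round trip recovers the original angles away from the θ = ±π/2 singularity of the arcsine, which is where the Euler representation breaks down.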


Appendix B

Heading kinematics

[Figure shows the body axes: the x-axis with φ, ωx, βx; the y-axis with θ, ωy, βy; the z-axis with ψ, ωz, βz.]

Figure B.1: Illustration of symbols and axes relevant to this appendix.

\[
\mathbf{x} = \begin{bmatrix} \psi \end{bmatrix},\quad
\mathbf{u} = \begin{bmatrix} \omega_y \\ \omega_z \end{bmatrix},\quad
\mathbf{z} = \begin{bmatrix} \beta_x \\ \beta_y \\ \beta_z \end{bmatrix},\quad
\mathbf{w} = \begin{bmatrix} \eta_{\omega_y} \\ \eta_{\omega_z} \end{bmatrix},\quad
\mathbf{v} = \begin{bmatrix} \eta_{\beta_x} \\ \eta_{\beta_y} \\ \eta_{\beta_z} \end{bmatrix}
\]

The heading estimator is the second stage of the cascaded state estimator. The measurement vector, z, is a three-dimensional vector of the measured magnetic field in the airframe. Extrapolation of the state estimate between measurements is based on the angular rates measured by the gyroscope and the knowledge of the pitch and roll angles from the attitude estimator (see Appendix A, 'Attitude kinematics' on page 80). The magnetic field vector around the surface of Earth varies with position and time. Table B.1 summarizes the field properties at the test site.

                            North      East       Down
Component [nT]          17,266.95    604.12  46,922.54
Rate-of-change [nT/yr]       7.85     39.66      30.55

Table B.1: Earth's magnetic field at 55° 23' N, 10° 23' E as of April 2012. Data courtesy of NOAA, World Magnetic Model 2010, http://www.ngdc.noaa.gov/geomag-web
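From the tabulated components, the local declination (horizontal angle between magnetic and true north) and inclination (dip of the field below the horizontal) follow directly. A small sketch, with the values copied from Table B.1:

```python
import math

# field components at the test site (Table B.1), in nT
beta_n, beta_e, beta_d = 17266.95, 604.12, 46922.54

# declination: angle of the horizontal field component east of true north
declination = math.degrees(math.atan2(beta_e, beta_n))
# inclination: dip of the field vector below the horizontal plane
inclination = math.degrees(math.atan2(beta_d, math.hypot(beta_n, beta_e)))
print(round(declination, 2), round(inclination, 2))
```

The large downward component (steep inclination at this latitude) is why the magnetometer-based heading relies on the attitude estimate: the horizontal projection used for heading is a small fraction of the total field.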


B.1 Time update

The estimator operates with Euler angles, as opposed to the attitude estimator, which uses the quaternion representation. The Euler angle representation is chosen because it is more intuitive, and the complexity of the quaternion representation is not needed for the heading estimator; there are no singularity issues associated with this rotation, given that θ ≠ ±π/2. This special case is handled separately. The kinematics is based on [67]. The gist is to introduce the derivatives of the Euler angles in their respective intermediate frames:

(In the following, [a; b; c] denotes a column vector and semicolons separate matrix rows.)

[ωx + ηωx; ωy + ηωy; ωz + ηωz] = [φ̇; 0; 0] + Rx(φ)·([0; θ̇; 0] + Ry(θ)·[0; 0; ψ̇])   (B.1)

= [φ̇; 0; 0] + [1 0 0; 0 cφ sφ; 0 −sφ cφ]·([0; θ̇; 0] + [cθ 0 −sθ; 0 1 0; sθ 0 cθ]·[0; 0; ψ̇])   (B.2)

= [φ̇ − sθ·ψ̇; cφ·θ̇ + sφ·cθ·ψ̇; −sφ·θ̇ + cφ·cθ·ψ̇]   (B.3)

= [1 0 −sθ; 0 cφ sφ·cθ; 0 −sφ cφ·cθ]·[φ̇; θ̇; ψ̇]   (B.4)

By inverse transformation, the airframe rates of change ωx, ωy & ωz are mapped into the world frame via the Euler angle derivatives:

[φ̇; θ̇; ψ̇] = [1  sφ·tθ  cφ·tθ; 0  cφ  −sφ; 0  sφ/cθ  cφ/cθ]·[ωx + ηωx; ωy + ηωy; ωz + ηωz]   (B.5)
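As a quick numerical sanity check (our own sketch, not thesis code), the matrices in (B.4) and (B.5) should be inverses of each other away from θ = ±π/2:

```python
import numpy as np

def euler_to_body_rates(phi, theta):
    """Matrix of (B.4): maps Euler rates [phi., theta., psi.] to body rates."""
    s, c = np.sin, np.cos
    return np.array([[1, 0,        -s(theta)],
                     [0, c(phi),    s(phi) * c(theta)],
                     [0, -s(phi),   c(phi) * c(theta)]])

def body_to_euler_rates(phi, theta):
    """Matrix of (B.5): maps body rates back to Euler rates."""
    s, c, t = np.sin, np.cos, np.tan
    return np.array([[1, s(phi) * t(theta),  c(phi) * t(theta)],
                     [0, c(phi),            -s(phi)],
                     [0, s(phi) / c(theta),  c(phi) / c(theta)]])

phi, theta = 0.2, -0.4
M = euler_to_body_rates(phi, theta)
T = body_to_euler_rates(phi, theta)
assert np.allclose(T @ M, np.eye(3))  # mutual inverses while |theta| < pi/2
```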

The projection function f(xk−1, uk, wk) = ẋ propagates ψ only, and is thus given by

f(x,u,w) = [sφ/cθ  cφ/cθ]·[ωy + ηωy; ωz + ηωz]   (B.6)

= [sφ/cθ  cφ/cθ]·(u + w)   (B.7)

Note that ωx does not contribute to the yaw propagation, as it maps directly to φ̇. The state transition matrix, A, is given by the partial derivative of f with respect to x

A[i,j] = ∂f[i] / ∂x[j] (x, u, 0)   (B.8)

= [0]   (B.9)


The state transition noise Jacobian, W, is given by the partial derivative of the state transition function, f, with respect to the state transition noise vector, w.

W[i,j] = ∂f[i] / ∂w[j] (x, u, w)   (B.10)

= [sφ/cθ  cφ/cθ]   (B.11)
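In code, the heading time update reduces to a single scalar expression. The sketch below (our illustration, not the onboard implementation) propagates ψ from the gyro rates and the roll/pitch estimate of the first filter stage:

```python
import math

def yaw_rate(phi, theta, wy, wz):
    """psi-dot from (B.6): body rates wy, wz mapped through roll and pitch."""
    return (math.sin(phi) * wy + math.cos(phi) * wz) / math.cos(theta)

def noise_jacobian(phi, theta):
    """W from (B.11): how gyro noise on wy and wz enters the yaw rate."""
    return [math.sin(phi) / math.cos(theta),
            math.cos(phi) / math.cos(theta)]

# In level flight (phi = theta = 0) the yaw rate is simply the z-gyro rate:
assert abs(yaw_rate(0.0, 0.0, 0.1, 0.3) - 0.3) < 1e-12
# At 90 degrees of roll, the y-gyro measures the turn instead:
assert abs(yaw_rate(math.pi / 2, 0.0, 0.2, 0.0) - 0.2) < 1e-12
```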

B.2 Measurement

The measurement, z, is the three-dimensional magnetic field strength, measured in the plane body frame. This vector reflects Earth's magnetic field at the location of the plane, rotated into the body frame. The declination, inclination and strength of Earth's magnetic field vary with the location on Earth and change over time. In Odense, Denmark, where the tests were conducted, the components of Earth's magnetic field strength are given in Table B.1.

β0 = [βn; βe; βd] = [17.267; 0.604; 46.923] µT   (B.12)

h(x, v) = [βx; βy; βz]   (B.13)

= Rx(φ)·Ry(θ)·Rz(ψ)·β0 + v   (B.14)

= [1 0 0; 0 cφ sφ; 0 −sφ cφ]·[cθ 0 −sθ; 0 1 0; sθ 0 cθ]·[cψ sψ 0; −sψ cψ 0; 0 0 1]·[βn; βe; βd] + [ηβx; ηβy; ηβz]   (B.15)

= [cψ·cθ·βn + sψ·cθ·βe − sθ·βd + ηβx;
   (sφ·sθ·cψ − cφ·sψ)·βn + (sφ·sθ·sψ + cφ·cψ)·βe + sφ·cθ·βd + ηβy;
   (cφ·sθ·cψ + sφ·sψ)·βn + (cφ·sθ·sψ − sφ·cψ)·βe + cφ·cθ·βd + ηβz]   (B.16)
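The measurement model is easy to exercise numerically. The sketch below (ours, not thesis code; the rotation matrices follow the conventions of (B.2) and (B.15), and β0 is taken from Table B.1) predicts the body-frame magnetometer reading for a given attitude:

```python
import numpy as np

def Rx(a):  # rotation about x, matching the convention of (B.2)/(B.15)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

beta0 = np.array([17.267, 0.604, 46.923])  # local field [uT], Table B.1

def h(phi, theta, psi):
    """Noise-free expected body-frame magnetometer reading, (B.15)."""
    return Rx(phi) @ Ry(theta) @ Rz(psi) @ beta0

# Level flight, facing north: the sensor sees the field unrotated.
assert np.allclose(h(0, 0, 0), beta0)
# Level flight, facing east: the body x-axis now sees the east component.
assert np.allclose(h(0, 0, np.pi / 2), [0.604, -17.267, 46.923])
```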

The observation matrix, H, is given by the partial derivative of h with respect to the state vector, x

H[i,j] = ∂h[i] / ∂x[j] (x, 0)   (B.17)

= [−cθ·sψ·βn + cθ·cψ·βe;
   −(cφ·cψ + sφ·sθ·sψ)·βn − (cφ·sψ − sφ·sθ·cψ)·βe;
   (sφ·cψ − cφ·sθ·sψ)·βn + (sφ·sψ + cφ·sθ·cψ)·βe]   (B.18)

= [−cθ·sψ  cθ·cψ  0;
   −cφ·cψ − sφ·sθ·sψ  −cφ·sψ + sφ·sθ·cψ  0;
   sφ·cψ − cφ·sθ·sψ  sφ·sψ + cφ·sθ·cψ  0]·[βn; βe; βd]   (B.19)


Note that the last column of the matrix in (B.19) is zero, as βd does not contribute to the heading estimate. The observation noise Jacobian, V, is given by the partial derivative of h with respect to the measurement noise vector, v. As the measurement noise is in the same frame as the sensor, the noise maps directly to the measurements, as seen in (B.21).

V[i,j] = ∂h[i] / ∂v[j] (x, v)   (B.20)

= [1 0 0; 0 1 0; 0 0 1]   (B.21)


Appendix C
Position and Wind kinematics

This appendix serves to explain the kinematics governing the position and wind estimator. The estimated state vector, x, contains the UTM position of the airframe, Pn and Pe, and the wind components in the north, Wn, and east, We, directions. Also estimated are the angle-of-attack, α, and the altitude, Alt, of the airframe. The measurement vector, z, is composed of the measured UTM coordinates from the GPS receiver and the altitude measured by GPS or the barometric pressure sensor. The system input vector, u, is composed of the heading, ψ, and pitch, θ, of the airframe along with the indicated airspeed, Vi:

x = [Pn; Pe; Alt; Wn; We; α],  u = [θ; ψ; Vi],  z = [UTMn; UTMe; UTMd],
w = [ηPn; ηPe; ηAlt; ηWn; ηWe; ηα],  v = [ηUTMn; ηUTMe; ηUTMd]

C.1 Time update

The state transition function, f(xk−1, uk, wk) = ẋk, is the non-linear function which outputs the derivative of the state estimate at time k. Its inputs are the previous state estimate, xk−1, the system input, uk, and the state transition noise, wk. The state vector is composed of the UTM position northing, Pn, and easting, Pe, the altitude, Alt, the wind in the north, Wn, and east, We, directions, and the angle-of-attack, α. It should be noted that none of the measurements in vector z measures Wn, We or α directly; these state variables are unobserved. The system input is composed of the pitch, θ, the heading, ψ, and the indicated airspeed, Vi. The state transition noise vector is composed of the variances of the mean-zero random changes in the state vector. f(x, u, w) is given in (C.1)


f(xk−1, uk, wk) = ẋk = [Vi·cψ·c(θ−α) + Wn + ηPn;
                        Vi·sψ·c(θ−α) + We + ηPe;
                        Vi·s(θ−α) + ηAlt;
                        ηWn;
                        ηWe;
                        ηα]   (C.1)

The state transition matrix, Ak, is the partial derivative of f(x, u, w) with respect to xk

A[i,j] = ∂f(xk, uk, 0)[i] / ∂x[j] =

[0 0 0 1 0  Vi·s(θ−α)·cψ;
 0 0 0 0 1  Vi·s(θ−α)·sψ;
 0 0 0 0 0  −Vi·c(θ−α);
 0 0 0 0 0  0;
 0 0 0 0 0  0;
 0 0 0 0 0  0]   (C.2)
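The analytic Jacobian is easy to get wrong, so a finite-difference check is worthwhile. The sketch below (our illustration, not the onboard code) implements the noise-free (C.1) and (C.2) and compares them numerically:

```python
import numpy as np

def f(x, u):
    """State derivative (C.1), noise-free. x = [Pn, Pe, Alt, Wn, We, alpha],
    u = [theta, psi, Vi]."""
    Pn, Pe, Alt, Wn, We, alpha = x
    theta, psi, Vi = u
    return np.array([
        Vi * np.cos(psi) * np.cos(theta - alpha) + Wn,
        Vi * np.sin(psi) * np.cos(theta - alpha) + We,
        Vi * np.sin(theta - alpha),
        0.0, 0.0, 0.0,  # wind and angle-of-attack modelled as constant
    ])

def A(x, u):
    """Analytic Jacobian (C.2)."""
    alpha = x[5]
    theta, psi, Vi = u
    J = np.zeros((6, 6))
    J[0, 3] = 1.0                                     # dPn./dWn
    J[1, 4] = 1.0                                     # dPe./dWe
    J[0, 5] = Vi * np.sin(theta - alpha) * np.cos(psi)
    J[1, 5] = Vi * np.sin(theta - alpha) * np.sin(psi)
    J[2, 5] = -Vi * np.cos(theta - alpha)
    return J

# Central-difference check of the Jacobian at an arbitrary operating point.
x = np.array([0.0, 0.0, 100.0, 1.0, -2.0, 0.05])
u = np.array([0.1, 0.7, 15.0])
eps = 1e-6
num = np.column_stack([(f(x + eps * e, u) - f(x - eps * e, u)) / (2 * eps)
                       for e in np.eye(6)])
assert np.allclose(num, A(x, u), atol=1e-6)
```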

The state transition noise Jacobian, W, is given by the partial derivative of f(x, u, w) with respect to wk

W[i,j] = ∂f(xk, uk, 0)[i] / ∂w[j] =

[1 0 0 0 0 0;
 0 1 0 0 0 0;
 0 0 1 0 0 0;
 0 0 0 1 0 0;
 0 0 0 0 1 0;
 0 0 0 0 0 1]   (C.3)

C.2 Measurement

The measurement function, h(xk, vk), outputs the expected measurement based on the current state vector and the measurement noise vector, v, and is described in (C.5)

h(xk, vk) = z   (C.4)

= [Pn + ηUTMn; Pe + ηUTMe; −Alt + ηUTMd]   (C.5)

Note the sign inconsistency between Alt and UTMd, due to the differing conventions: altitude is positive upwards, while down is positive downwards in the North-East-Down right-handed reference frame.

The observation matrix, Hk, is given by the partial derivative of h(xk, vk) with respect to the state vector, xk.


H[i,j] = ∂h(x, 0)[i] / ∂x[j]   (C.6)

= [1 0 0 0 0 0;
   0 1 0 0 0 0;
   0 0 −1 0 0 0]   (C.7)

The observation noise Jacobian, V, is given by the partial derivative of h(xk, vk) with respect to the measurement noise vector, v.

V[i,j] = ∂h(x, v)[i] / ∂v[j]   (C.8)

= [1 0 0; 0 1 0; 0 0 1]   (C.9)


Appendix D
Wind statistics

Based on the graphical data obtained from DMI [42], Table D.1 has been deduced. The table illustrates the distribution of wind from 12 different directions.

Wind direction  Amount       0.3-5.4 m/s  5.5-10.7 m/s  >=10.8 m/s
[°]             [Days/year]  [Days]       [Days]        [Days]

345-15          13           9            4             0
15-45           15           10           5             0
45-75           21           14           6             1
75-105          29           19           10            0
105-135         36           22           13            1
135-165         19           14           5             0
165-195         33           23           10            0
195-225         46           28           16            1
225-255         57           27           25            5
255-285         52           24           23            5
285-315         29           14           13            2
315-345         15           10           4             0
Total           365          213          135           17

Table D.1: Wind statistics deduced from [42]


Appendix E
Wikis and how-tos

E.1 Installing Ubuntu and ROS on Overo Gumstix

1. Make a minimal headless Ubuntu file system with rootstock (use rootstock-0.1.99.4 on Ubuntu 10.04, as the 0.1.99.3 version provided in the repositories is buggy)

$ sudo ./rootstock --fqdn overo --login ubuntu --password tmppwd --imagesize 2G --serial ttyS2 -d lucid --seed build-essential,openssh-server

This takes a few hours.

2. Download wireless-tools.deb and libiw29.deb for armel from Debian.

3. Create MLO, u-boot.bin and uImage using OpenEmbedded. Remember to match the kernel version to that produced by Rootstock:

Add PREFERRED_VERSION_linux-omap3 = "2.6.34" after

PREFERRED_PROVIDER_virtual/kernel = "linux-omap3"

in /overo-oe/org.openembedded.dev/conf/machine/overo.conf

4. Format an SD card as per usual for Overo, and extract MLO, u-boot.bin, uImage, the file system, the modules and the downloaded .debs.

5. Boot the Overo, log in (user: ubuntu, pwd: tmppwd), and consider activating the root user:

sudo passwd root

6. Install the debs:

dpkg -i libiw29.deb wireless-tools.deb

7. Now you should be able to connect to any unprotected network.

91

Page 94: ATC Digital

8. Add "deb http://ports.ubuntu. something lucid main restricted multiverse universe" to /etc/apt/sources.list (no quotes).

9. Add ”deb http://packages.ros.org/ros/ubuntu lucid main” to /etc/apt/sources.list.d/ros-latest.list (no quotes)

10. Run the commands:

apt-get install wget

apt-get install ros-electric-ros-base

echo "source /opt/ros/electric/setup.bash" >> ~/.bashrc

source ~/.bashrc

11. You now have Ubuntu and ROS on your Overo Gumstix.


Appendix F
PCB design

F.1 Power management


F.2 Servo interface, i2c-bus, pitot signal conditioning


F.3 Gumstix, Ethernet


F.4 Level conversion


F.5 USB interfaces


F.6 PCB Layout


Appendix G
Thesis proposal


Thesis proposal

Development of a modular autonomous airborne tool-carrier for automated data acquisition

Peter Emil Madsen
University of Southern Denmark

[email protected]

Hjalte Bording Lerche Nygaard
University of Southern Denmark

[email protected]

October 3, 2011

1 Introduction

This project concerns the development of an avionic tool carrier, capable of autonomous navigation from a given set of waypoints. The platform will be capable of interfacing with a range of tools through a common hardware and software interface. The ability to exchange tools enables the platform to be used in a multitude of contexts. This project will focus on designing and applying the platform in an agricultural context.

2 Motivation

A major motivation for this project is decreasing the use of herbicides through optimized spraying. Today's common practice is broad spraying an entire field to cope with weeds. However, weeds tend to grow in patches, rather than uniformly across the field. Studies [11, 12, 18, 19] show that it is possible to reduce the use of sprays by selectively spraying these patches. In order to localize the weed patches, various vision-based systems have been proposed in the literature [18]. The proposed platform could be utilized to gather imagery data used to localize weed patches.

By determining the species and growth stage of weeds from imagery data [18, 14, 15], the selection of proper herbicides and dose can be optimized. A Danish system called Crop Protection Online (CPO) [17, 10] combines year-to-year knowledge of weed populations, derived from manual field inspections, with a knowledge base of various crops, weeds, diseases, pests and Plant Protection Products (PPP) to suggest efficient treatment, using a minimum of PPP. Studies [13, 10, 16, 17] have proven the system to be robust and to have considerable potential in reducing herbicide usage (approx. 40% in grain and 20% in other crops). However, the system needs to be fed with data gathered from labor-intense field inspections. This fact is a setback for the system's deployment. By automating the data collection process, the labor involved with using CPO could be reduced, and thus it would potentially be used more widely - resulting in a more efficient use of herbicides.

Apart from the potential improvement in an agricultural context, an autonomous avionic tool carrier system could also be used in a multitude of other situations. Inhospitable environments, like the nuclear disaster in Fukushima, Japan, could be inspected, and radiation and imagery data could be collected. Wildfires, as are currently seen in Texas, USA, could be inspected live to improve firefighting tactics, and other natural disasters could be monitored quickly and accurately. More tedious tasks, such as perimeter guarding, livestock counting etc., could also be automated, freeing labour resources to perform other tasks.

3 Related work

Several commercial systems are currently available from various vendors. Commonly, they consist of a lightweight fixed-wing airframe with onboard


GPS receivers and live radio links to a base station, which performs route planning and post-flight data processing and representation. The company SenseFly [8] focuses on aerial photography, computation and meteorological data acquisition, based on three different UAVs and associated base station software. The SmartPlanes [9] system is much like SenseFly's; the sensor and post-flight software is however more focused on orthophotographic mapping. Both SenseFly and SmartPlanes provide the hard- and software to be used by unskilled labourers, without any expert knowledge needed to gather data. GeoSense [3] uses a less easy-to-use system, based on the arduPilot [2] project. GeoSense does not provide ready-to-fly hard-/software, but merely a commercial mapping service. ASETA [1] (Adaptive Surveying and Early treatment of crops with a Team of Autonomous vehicles) is a research project at Aalborg and Copenhagen Universities, aiming to reduce the usage of pesticides through early automated treatment, utilizing coordinated airborne helicopters and ground vehicles. The helicopters provide relevant data for analysis on a ground station, which prompts ground vehicles to further inspection and appropriate treatment. The ASETA project uses two different helicopters for different types of sensor equipment. Several open source autopilot projects exist; the most noteworthy ones are OpenPilot [5], Paparazzi [6], arduPilot [2] and gluonpilot [4]. Both OpenPilot and Paparazzi utilize STM32 ARM processors for flight control, whereas the arduPilot is based on a much simpler (AVR-based) Arduino platform. The Gluonpilot is based on a 16-bit 80 MHz DSP microprocessor; a ground-station GUI is currently being developed. Common to the open source projects is a great flexibility in airframe choices, ranging from fixed-wing to quad-rotors and even in-between [7]. However, the setups are far from straightforward in most of the projects. The OpenPilot project has, however, developed a plug-in based system, which allows for easy setup of various airframes.

The mentioned state-of-the-art projects cover very specific problem areas. Whereas the open source pilots provide a way of getting small aircraft airborne, no specific use of the autonomy is covered. The companies that fly autonomously offer commercial products that focus on providing a specific service - and as such have to be profitable and thus expensive. The ASETA project operates in somewhat the same field as this project; however, this project focuses on a fixed-wing solution, which generally gives a longer flight time than hovering.

By providing an autonomous tool-carrying agent, capable of carrying a range of sensors, the same platform could be used in a multitude of areas. By making the project modularized and open source, a community could contribute new sensor modules and improve on the existing system. Thus, instead of solving a specific problem, the project provides a base for further development.

4 Learning Goals

At the end of this project, the student will be able to:

• Identify the problem domain through literature study.

• Evaluate existing commercial and research solutions related to the identified problem.

• Use this domain knowledge to propose a novel solution.

• Use an engineering approach to design and build a demonstrator based on the proposed novel solution.

• Apply acquired knowledge within mobile robotics to the proposed solution.

• Apply acquired knowledge within embedded autonomous robot software design to the proposed solution.

5 Delivery

At the end of this project, a system containing the following will be delivered:

• An autonomous flying tool-carrier capable of:

– Automatically follow a predefined path, based on waypoints.


– Aided take-off and landing.

– Carrying different tools in a mechanically, electrically and software-wise standardized interface.

– Interface with and interchange data with the mounted tool, e.g. data acquisition via a camera tool.

– A prototype proof-of-concept tool, demonstrating the capabilities of the system.

• A graphical user interface used to:

– Aid path planning.

– Represent data.

– Communicate live with the airborne unit.

5.1 Delimitation:

This project will not provide a tool range for mounting on the tool-carrier. Neither will it concern the utilization of the mounted tools. The project provides a tool-carrier capable of carrying tools designed for the carrier. How the tools are utilized is up to the developer/user. A proof-of-concept tool will be developed.

References

[1] Aseta. www.aseta.dk.

[2] Diy drones. www.diydrones.com.

[3] Geosense. www.geosense.com.my.

[4] Gluonpilot. www.gluonpilot.com.

[5] Openpilot. www.openpilot.org.

[6] Paparazzi. www.paparazzi.enac.fr.

[7] The quadshot. www.thequadshot.com.

[8] Sensefly. www.sensefly.com.

[9] Smartplanes. www.smartplanes.se.

[10] L. Hagelskjær and L. N. Jørgensen. Planteværn online - sygdomme og skadedyr. DJF rapport, Markbrug(98):39–50, January 2004.

[11] S.R. Herwitz, L.F. Johnson, S.E. Dunagan, R.G. Higgins, D.V. Sullivan, J. Zheng, B.M. Lobitz, J.G. Leung, B.A. Gallmeyer, M. Aoyagi, R.E. Slye, and J.A. Brass. Imaging from an unmanned aerial vehicle: agricultural surveillance and decision support. Computers and Electronics in Agriculture, 44(1):49–61, 2004.

[12] B. T. Jones. An evaluation of a low-cost UAV approach to noxious weed mapping. Master's thesis, Brigham Young University, September 2007.

[13] L. N. Jørgensen, E. Noe, A. M. Langvad, P. Rydahl, J. E. Jensen, J. E. Ørum, H. Pinnschmidt, and O. Q. Bøjer. Planteværn online – et værktøj til at reducere pesticidforbruget i landbruget. Bekæmpelsesmiddelforskning fra Miljøstyrelsen, page 115, 2007.

[14] P.J. Komi, M.R. Jackson, and R.M. Parkin. Plant classification combining colour and spectral cameras for weed control purposes. In Industrial Electronics, 2007. ISIE 2007. IEEE International Symposium on, pages 2039–2042, June 2007.

[15] Zhao Peng. Image-blur-based robust weed recognition. In Artificial Intelligence and Computational Intelligence (AICI), 2010 International Conference on, volume 1, pages 116–119, October 2010.

[16] Per Rydahl. PC-Planteværn - optimerede herbicidblandinger i bederoer, volume I of DJF rapport Markbrug, pages 185–196. DJF, 2001.

[17] Per Rydahl. Planteværn online - ukrudt. DJF rapport, Markbrug(98):27–38, January 2004.

[18] D. Slaughter, D. Giles, and D. Downey. Autonomous robotic weed control systems: A review. Computers and Electronics in Agriculture, 61(1):63–78, April 2008.

[19] A.M. Smith and R.E. Blackshaw. Crop/weed discrimination using remote sensing. In Geoscience and Remote Sensing Symposium, 2002. IGARSS '02. 2002 IEEE International, volume 4, pages 1962–1964, 2002.



Appendix H
Sensor Fusion for Miniature Aerial Vehicles


Sensor Fusion for Miniature Aerial Vehicles

Peter E. Madsen Hjalte B. L. Nygaard

University of Southern Denmark

January 9, 2012

Abstract

In the field of autonomous robotics, there is a desire to determine precise attitude and position utilizing cheap, and thus often noisy, sensors. This paper surveys a range of existing methods for Micro Air Vehicle sensor fusion aimed at this problem. Furthermore, a partial implementation of a previously proposed three-stage extended Kalman filter is conducted on a newly developed autopilot board. The board is based on MEMS technology, GPS and other sensors. Promising results have been obtained for pitch and roll estimation by fusing gyroscope and accelerometer measurements. It is concluded that by investing time into sensor fusion, it is indeed possible to gain high quality results, even using cheap noisy sensors.


Contents

1 Introduction
2 Theory & methods
  2.1 Filter types
  2.2 Bayes' Filters
  2.3 Kalman Filter
  2.4 Extended Kalman Filter
3 Implementation
  3.1 Hardware
  3.2 Three-stage Extended Kalman Filter
    3.2.1 Stage 1 - Pitch & Roll
    3.2.2 Stage 2 - Heading
    3.2.3 Stage 3 - Position & Wind
4 Visualization
  4.1 Octave
  4.2 3D
5 Validation
6 Conclusion
7 References
A Notations
B Pitch-roll kinematics
C Measurements


1 Introduction

This paper is the result of a 5 ECTS self study conducted by two master students at the University of Southern Denmark, Odense. The paper aims to survey methods for fusing noisy sensor data, and to implement a method deemed suitable to the task of estimating the attitude and position of a Micro Air Vehicle (MAV). The task of optimizing robot attitude and position estimation is of increasing interest to a growing community of roboticists. Turnkey solutions, in the form of Inertial Measurement Unit (IMU) sensors, with or without global positioning, are available commercially [3, 4]. However, these sensor packages are relatively expensive. In this paper, it is shown that reliable results can be obtained using cheaper individual sensors, by the use of sensor fusion.

Throughout this paper, the reader can refer to Figure 1 for a summary of the measured and estimated variables. α denotes accelerations measured by the onboard accelerometer, β denotes magnetic field strengths measured by the onboard magnetometer, and ω denotes angular velocities measured by the onboard rate gyroscope. For a more complete list of mathematical notation, Appendix A contains a full list of the notation and variables used throughout this paper. It should also be noted that the airframe is oriented nose-starboard-down, and that a right-handed coordinate system is used.

Figure 1: Illustration of symbols and axes.

The paper is split into four main parts. Section 2 covers the theory of the surveyed sensor fusion methods. Section 3 explains the method that has been implemented, and describes the mathematics in enough detail that reimplementation can be conducted. Section 4 gives a brief overview of the tools that have been used to visualize experimental results. Finally, Appendices A through C contain material such as derivations and graphs, considered too detailed to make it into the paper, but useful to the curious reader.

2 Theory & methods

A fundamental challenge of mobile robotics is the effect of inaccurate measurements on position and attitude. The robot's 'knowledge' of its pose is vital for interacting with and maneuvering in its environment. Two simple methods for localization and pose determination are feasible: the robot can either 1) use internal sensors, which keep track of its movements and thus its pose, or 2) use global landmarks to periodically determine its pose. However, problems arise when using either of the two methods, as


the former (referred to as dead reckoning) will integrate even the slightest disturbance into significant, irreversible errors. The latter can work excellently in known, engineered environments containing landmarks of adequate quantity and quality (e.g. RFID tags, coloured markings, ultrasonic beacons etc.). If, however, we want the robot to know its position in a less adapted environment, the position updates tend to have low update rates and imprecise measurements.

No matter how expensive and precise the sensing equipment built into the robot, these challenges may be reduced, but cannot be eliminated. However, by combining both approaches, the difficulties can be overcome. This chapter presents a number of methods for such sensor fusion.

2.1 Filter types

Four types of filtering have been found suitable for MAV sensor fusion:

1. The Kalman filter, which is a recursive estimator. It uses parametrized Gaussian distributions to model the uncertainties in the system and a state space model to project the estimate forward in time. The filter uses a least-squares curve fit to optimize the state estimate.

2. The Extended Kalman filter, which uses a non-linear function to project the estimate forward, rather than the linear state space model used in the Kalman filter. The linear models, needed to project the covariance parameters, are found by linearizing the non-linear functions around the state estimate.

3. Particle filters, which use a cloud of particles in the state space to represent the probability distribution, rather than the parametric Gaussians used by the Kalman filter. This effectively means that the filter is multi-modal, as it is not limited by the uni-modal nature of the Gaussian representation.

4. Complementary filters, the only non-Bayes' filter discussed in this text. They work in the frequency domain rather than the time domain and do not interpret the probability of states or outputs. A complementary filter uses two inputs describing the state, assuming that one responds fast and is accurate but biased, and that the other is bias-free but may be slower in response and noisy. By using a pair of complementary filters, one low-pass and the other high-pass, the two signals can be combined with maximum yield of both signals. See Figure 2 for a basic block diagram.

Figure 2: Basic complementary filter.
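A first-order discrete complementary filter can be written in a few lines. The sketch below is our own illustration (the time constant tau is a free tuning parameter, not a value from the paper); it blends the integrated gyro rate with the accelerometer-derived angle:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, tau=0.5):
    """One step: high-pass the integrated gyro path, low-pass the
    accelerometer-derived angle, and sum the two."""
    a = tau / (tau + dt)  # blending coefficient from the filter time constant
    return a * (angle + gyro_rate * dt) + (1 - a) * accel_angle

# Static case: the gyro reads zero while the accelerometer says 0.2 rad;
# the estimate converges toward the accelerometer angle (no drift).
angle = 0.0
for _ in range(1000):
    angle = complementary_filter(angle, 0.0, 0.2, dt=0.01)
assert abs(angle - 0.2) < 1e-3
```

With tau = 0.5 s and dt = 10 ms, roughly 98% of each step comes from the gyro path and 2% from the accelerometer, which suppresses accelerometer noise while still cancelling gyro drift over time.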

Particle filters have proven very strong in mobile robot localization (known in the field under the name 'Monte Carlo Localization' [17, 18]), where multiple hypotheses are needed. This is especially true for indoor simultaneous localization and mapping, as


no good source of absolute position estimates is available. These filters are however fairly complex and computationally heavy. As an absolute position estimate can be acquired from GPS in the outdoor habitat of MAVs, the complexity of particle filters is not needed.

Both the Kalman filter [6, 8, 13, 14] and complementary filters [10, 12] have been used for state estimation of MAVs. The complementary filter is computationally lighter than the Kalman filter, but seems to respond more slowly. When the gyroscope is at rest after a rapid change, it does not contribute to the state estimate; however, the accelerometer data does still, to some extent, reflect the old position, due to the phase delay introduced by the low-pass filter. The Kalman filter, on the other hand, does not operate on old data and hence introduces no phase delay. In addition, the Kalman filter is capable of rejecting statistical outliers in the data stream. The Extended Kalman filter (EKF) is chosen for the later implementation. The following sections give an overview of its theoretical background and formulation.

2.2 Bayes' Filters

The Kalman filter is a Bayes' filter, which means that it works under the assumption that each observation is independent of previous observations. I.e. the true state, xk, is assumed to be a hidden Markov process, and the observations, zk, are derived from this process. In other words, the true state can only be observed indirectly. Because of the Markov assumption, the probability of the current state, p(xk), can be derived from just the previous one, p(xk−1), and is independent of all prior states.

p(xk|xk−1, xk−2, · · · , x0) = p(xk|xk−1) (2.1)

Similarly, the probability of the current observation, p(zk), depends only on the current state, xk, and is independent of all prior states.

p(zk|xk, xk−1, · · · , x0) = p(zk|xk) (2.2)

This allows the Bayes' filters to work recursively, as there is no need to keep track of the history of state estimates and observations, along with their probability distributions.

The filters are implemented in a two-step recursion:

• First, the previous state probability distribution, p(xk−1), is projected into the next time step with the system input, uk. While this is done, uncertainty is added to the estimate to reflect the system noise. This new estimate is the a priori estimate, p(x̂k).

• The second step includes an observation, zk, which based on Bayes' Theorem (see eq. (2.3)) can provide a probability distribution of the state, if p(xk) and p(zk) are known. This sensor-based state estimate can then be combined with x̂k to give a better estimate, xk, of the state.

p(xk|zk) = p(zk|xk) · p(xk) / p(zk)   (2.3)
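Eq. (2.3) is all that is needed for a discrete measurement update. As a toy illustration (ours, not from the paper), consider a robot that may be in one of five cells and a sensor whose likelihood favours cell 2:

```python
def bayes_update(prior, likelihood):
    """Discrete Bayes measurement update: posterior proportional to
    likelihood * prior, normalized by the evidence p(z)."""
    posterior = [l * p for l, p in zip(likelihood, prior)]
    evidence = sum(posterior)  # p(z) in (2.3)
    return [p / evidence for p in posterior]

prior = [0.2, 0.2, 0.2, 0.2, 0.2]        # uniform belief over 5 cells
likelihood = [0.1, 0.1, 0.6, 0.1, 0.1]   # p(z | x): sensor favours cell 2
posterior = bayes_update(prior, likelihood)
assert abs(sum(posterior) - 1.0) < 1e-12  # still a probability distribution
assert posterior[2] == max(posterior)     # belief concentrates at cell 2
```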


We will not go deeper into the mathematical background of Bayes' filters, but refer curious readers to [11], [19], [7] and [5].

To get a conceptual idea of the Bayes' filters, imagine a simple robot capable of moving in a straight line, as illustrated in Figure 3. The robot can measure its position along the track by measuring the distance to a wall at the end of the track. However, these measurements are noisy and occur at a slow rate. As the robot moves, it utilizes dead reckoning to keep track of its position - but due to integration errors and external disturbances, this estimate soon grows unreliable. By accepting the fact that both measurements include uncertainties, and by modeling these, the two data sets can be combined in a beneficial way. As the movements of the robot project the state estimate forward, the uncertainty of the estimate is increased. This estimate is called the a priori, as it is an estimate prior to incorporation of an observation. When the robot receives an external measurement of its location (an observation), the certainty of this measurement is assessed and combined with the a priori to form the next location estimate. As the two are combined, the uncertainty is reduced, bringing forward a better estimate of the location than either of the two sensors can provide by itself.

Figure 3: Conceptual model of the Kalman filter.

2.3 Kalman Filter

Figure 4: Linear state space model.

Kalman filters, first proposed by R. E. Kálmán in 1960 [9], have proven very efficient in avionics when used in the extended version. The first use of the filter was in the Apollo


space program, where it was used for trajectory estimation on the lunar missions [15]. The Kalman filter uses the Gaussian distribution function for parametric probability representation and assumes Gaussian mean-zero noise models for both process and observation noise. The filter is based around a linear state-space model of the system (see Figure 4), which is used to project the old state estimate into the next one and to estimate the observation:

xk = A · xk−1 + B · uk + wk   (2.4)

zk = H · xk + vk   (2.5)

where w is the process noise and v is the observation noise. The matrix A projects the state estimate ahead, and B relates the system input to the state space.

At the time update, the a priori values of the state estimate ˆx and the estimate covariance Pk are calculated. This is done by projecting the previous values with A and adding the system input B · uk and the process noise covariance Q, respectively:

ˆxk = A · xk−1 + B · uk (2.6)

Pk = A · Pk−1 · AT + Q (2.7)

In the measurement update, ˆxk and Pk are corrected, based on an observation zk:

Kk = Pk · HT · (H · Pk · HT + R)−1 (2.8)

xk = ˆxk + Kk · (zk − H · ˆxk) (2.9)

Pk = (I − Kk · H) · Pk (2.10)

where R is the measurement noise covariance and H relates state to measurement. The matrix K is the Kalman gain, used to mix the a priori estimate with the observation-based correction.
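To make Equations (2.6) through (2.10) concrete, here is a minimal scalar sketch in Python, applied to the rail robot of Figure 3. The constants and the helper name `kalman_step` are our own illustrative choices; the thesis implementation is instead based on the KFilter C++ library.

```python
# A minimal scalar sketch of Eqs. (2.6)-(2.10): the state is the robot's
# position on the rail, u is the commanded movement per step, and z is a
# noisy position derived from the wall-distance measurement. All values
# here are hypothetical, chosen only to illustrate the update cycle.

def kalman_step(x, P, u, z, A=1.0, B=1.0, H=1.0, Q=0.01, R=0.25):
    # Time update (a priori), Eqs. (2.6)-(2.7)
    x_pri = A * x + B * u
    P_pri = A * P * A + Q
    # Measurement update, Eqs. (2.8)-(2.10)
    K = P_pri * H / (H * P_pri * H + R)     # Kalman gain, Eq. (2.8)
    x_post = x_pri + K * (z - H * x_pri)    # blend prediction and z
    P_post = (1.0 - K * H) * P_pri          # uncertainty shrinks
    return x_post, P_post

# Dead reckoning says the robot advances 1.0 per step; the noisy wall
# measurements keep the dead-reckoned estimate from drifting away.
x, P = 0.0, 1.0
for u, z in [(1.0, 1.2), (1.0, 1.9), (1.0, 3.1)]:
    x, P = kalman_step(x, P, u, z)
```

Note how P shrinks at every measurement update: after the observation is folded in, the combined estimate is more certain than either dead reckoning or the range measurement alone, which is exactly the behaviour described for Figure 3.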

2.4 Extended Kalman Filter

As the state-space model is linear, the Kalman filter is limited to linear systems. In real-world applications, very few such systems exist. The Kalman filter can, however, be extended [5, 7, 19] to work for non-linear systems. Rather than using the linear state-space model, the two non-linear functions f(x,u,w) and h(x, v) are used to project the state and estimate the sensor output, so that:

xk = f(xk−1,uk,wk) (2.11)

zk = h(xk, vk) (2.12)

The Kalman filter still needs the linear model matrices A and H. These can be derived by linearization of f(x,u,w) and h(x, v), as their partial derivatives with respect to the state vector:

A[i,j](x,u) = ∂f[i]/∂x[j] (x,u, 0) (2.13)

H[i,j](x) = ∂h[i]/∂x[j] (x, 0) (2.14)


In addition, two new Jacobians are needed to project the system noise w and the sensor noise v into the state space and the measurement space, respectively. These can similarly be derived by linearization of f(x,u,w) and h(x, v), as the partial derivatives with respect to the noise vectors:

W[i,j](x) = ∂f[i]/∂w[j] (x,u,w) (2.15)

V[i,j](x) = ∂h[i]/∂v[j] (x, v) (2.16)

In the time update of the EKF, the projection of the estimated state is given by f(x,u,w). The estimate covariance is projected much like in the original filter, with Wk terms added to project the process noise covariance Q into the state space:

ˆxk = xk−1 + f(xk−1, uk, 0) (2.17)

Pk = Ak · Pk−1 · AkT + Wk · Q · WkT (2.18)

Similarly, the measurement update is upgraded with a projection of R using Vk, and the sensor output is estimated by h(x, v):

Kk = Pk · HkT · (Hk · Pk · HkT + Vk · R · VkT)−1 (2.19)

xk = ˆxk + Kk · (zk − h(ˆxk, 0)) (2.20)

Pk = (I − Kk · Hk) · Pk (2.21)

This concludes the theory covered in this paper.

3 Implementation

This section discusses our implementation of a special 3-stage EKF, tailored for MAVs. An overview of the hardware developed to realize the sensor fusion is provided. This hardware includes both the sensors used in this project and other sensors, not yet utilized, but deemed useful for making a completely autonomous MAV.

3.1 Hardware

For this and another project, an autopilot board, see Figure 5 on the next page, has been developed. The board is centered around a 600 MHz OMAP3-based Gumstix with 512 MB RAM. The purpose of the board is to sense, process and control everything needed to keep a MAV in the air, independent of any ground links. To do so, it has been equipped with a range of sensors, listed in Table 1 on the following page. Some of the sensors work standalone, while others have to be fused to provide useful measurements. All of the sensors, except the GPS, are connected to a 400 kHz I2C bus of the Gumstix. The GPS has a serial interface and is thus connected to a UART of the Gumstix. An overview of the sensors fused in Section 3.2 on the next page can be seen in Figure 6 on page 10.
Linux drivers for the sensors had to be developed, as it was decided to run the Gumstix with a stripped-down Ubuntu distribution. This is however not the focus of this project, so interfacing details are not discussed in this paper.


Figure 5: The developed Gumstix autopilot expansion board

Sensor         Name       Notes
Gyroscope      ITG3200    Triple axis. ±2000 °/s @ 0.0696 (°/s)/LSB
Accelerometer  ADXL345    Triple axis. ±16 g @ 3.9 mg/LSB
Magnetometer   HMC5883L   Triple axis. ±8 G @ 5 mG/LSB
GPS            D2523T     50-ch. helical receiver, 4 Hz, u-blox chipset
Airspeed       MPXV7002   Differential pressure sensor w. pitot tube. ±2 kPa
Altitude       BMP085     Barometric pressure. 300-1100 hPa @ 0.03 hPa/LSB
Proximity      LV-EZ4     Ultrasonic ground distance. 0-7 m @ 2.5 cm/LSB

Table 1: Sensors available on the Autopilot board

3.2 Three-stage Extended Kalman Filter

In 2006, Andrew M. Eldredge wrote his Master's thesis 'Improved state estimation for miniature air vehicles' [8]. Here Eldredge proposed a new use of the EKF - a three-stage filter, designed especially for Micro Air Vehicles (MAVs). This approach, as opposed to other implementations [14], separates the complete position and orientation filter into three cascaded estimation steps: 1) Pitch & Roll, 2) Heading and 3) Position & Wind, as seen in Figure 7 on the next page.


Figure 6: Autopilot board block diagram

[Block diagram: Attitude Estimator → Heading Estimator → Navigation Estimator, fed by gyroscope, accelerometer, magnetometer, airspeed and GPS inputs]

Figure 7: Three-stage state estimation illustrating the three stages, inputs and outputs. The diagram is a replication of [8, Fig. 2.9, p. 21].

By separating the filter into three steps, the implementation is greatly simplified and the conceptual understanding is easier to grasp. A complete state-space model contains seven parameters: roll, pitch, yaw, north/east position and north/east wind speed - yielding rather large matrices if everything is handled at once. Furthermore, each step can be updated individually, which is very convenient, as our sensors have different update rates (the gyroscope has an internal sample rate of up to 8 kHz, whereas the GPS updates at a maximum of 4 Hz).
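The rate mismatch above is precisely what the cascade exploits: each stage can run its predict/correct cycle at its own sensor's pace. A minimal scheduling sketch, with the rates taken from the text and every function and variable name being our own hypothetical scaffolding:

```python
# Sketch of rate-decoupled cascade scheduling: the fast attitude stage
# runs every IMU tick, while the slow navigation stage runs only when a
# GPS fix arrives. Rates follow the text; names are hypothetical.

IMU_HZ = 100   # gyro/accelerometer processing loop
GPS_HZ = 4     # GPS fix rate

def run_cascade(seconds, attitude_step, navigation_step):
    """Call attitude_step every IMU tick and navigation_step only on
    ticks where a new GPS fix is available."""
    gps_every = IMU_HZ // GPS_HZ          # 25 IMU ticks per GPS fix
    for k in range(int(seconds * IMU_HZ)):
        attitude_step(k)                  # Stages 1-2: every tick
        if k % gps_every == 0:
            navigation_step(k)            # Stage 3: 4 Hz
```

Running the cascade for one simulated second would invoke the attitude stage 100 times but the navigation stage only 4 times, which is why the large seven-parameter matrices never need to be touched at the gyro rate.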

We have successfully implemented Stage 1 of the proposed filter. The following three sections explain our implementation and discuss how the last two steps will be implemented in the future.

3.2.1 Stage 1 - Pitch & Roll

In this first of three steps, the goal is to determine the state vector x, composed of pitch, θ, and roll, φ. This is done utilizing the gyroscope's angular rates as system input, u, and the accelerometer's linear accelerations as observations, z:


x = [φ θ]T    State vector

u = [ωx ωy ωz]T    System input vector

z = [αx αy αz]T    Observation vector

Figure 8: Stage 1 components

Note that compared to the original implementation by Eldredge, total airspeed vair has been neglected for now. With x, u and z in place, we are ready to define the first time update, or 'predict' stage.

Time-update of Pitch & Roll
Here we wish to propagate gyro angular rates to estimate the change in roll and pitch. This is done using the estimated current state and the kinematics proposed in [16, p. 27], derived for the reader's convenience in Appendix B on page 20. Thus:

f(x,u) = [φ̇ θ̇]T = [ωx + tθ · sφ · ωy + tθ · cφ · ωz,  cφ · ωy − sφ · ωz]T (3.1)

Linearizing by partial derivatives with respect to x, at the current estimate, yields the Jacobian matrix A(x,u); please refer to Equations (B.7) through (B.10) on page 21 for a complete derivation:

A[i,j](x,u) = ∂f[i]/∂x[j] (x,u) =
[ ωy · tθ · cφ − ωz · tθ · sφ    (ωy · sφ + ωz · cφ)/c²θ ]
[ −ωy · sφ − ωz · cφ             0                       ] (3.2)

That concludes the functions needed to compute the time update of an EKF, stated in Equations (2.17) and (2.18) on page 8. The next goal is to calculate the measurement update, or 'correction'.

Measurement-update of Pitch & Roll
In order to complete the measurement update, we are required to calculate the Kalman gain. The gain depends on the accelerometer output prediction model h(x) and its Jacobian matrix H(x). The sensor prediction model simply maps the gravity vector into the airframe, as derived in Equation (B.16):

h(x) = [ax ay az]T = [sθ,  −sφ · cθ,  −cφ · cθ]T (3.3)

The Jacobian matrix H(x) is composed of the partial derivatives of h(x) with respect to x; please refer to Equations (B.17) through (B.19) on page 22 for a complete derivation:

H[i,j](x) = ∂h[i]/∂x[j] (x) =
[ 0          cθ      ]
[ −cφ · cθ   sφ · sθ ]
[ sφ · cθ    cφ · sθ ] (3.4)

Finally we only need to consider noise before we can proceed to the next stage.

Noise considerations of Pitch & Roll
V relates the measurement (accelerometer) noise, v, to the sensor estimate z, as V is the partial derivative of h(x, v) (Eq. (3.3)) with respect to v. The complete derivation of this Jacobian matrix can be found in Equations (B.20) through (B.22) on page 22, where it is shown that V is given by the identity matrix:

V(x) =
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ] (3.5)

The derivation simply shows that the noise on the accelerometers maps directly to what we are measuring.

R denotes the measurement noise covariance. It is hard to model and is thus usually found by trial-and-error tuning.

W relates the system noise, w, to the state estimate x, as W is the matrix of partial derivatives of f(x,u,w) (Eq. (3.1)) with respect to w. The complete derivation of this Jacobian matrix can be found in Equations (B.11) through (B.14) on page 21, where it is shown that W is given by:

W(x) =
[ 1   tθ · sφ   tθ · cφ ]
[ 0   cφ        −sφ     ] (3.6)

This shows that our gyroscope noise needs to be mapped into the system model.

Q denotes the process noise covariance and is, like R, usually found by trial-and-error tuning.

This concludes the measures necessary to compute an estimate of the roll and pitch angles of the airframe. Simulation results and 3D visualization of the implementation can be viewed in Sections 4.1 and 4.2 on page 13.
The following two sections cover how heading, position and wind can be estimated in future work.
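For reference, the Stage 1 model of Equations (3.1) through (3.6) transcribes almost directly into code. The sketch below is our own plain Python transcription for checking the algebra, not the thesis's Octave/C++ implementation, using the shorthand s = sin, c = cos, t = tan:

```python
import math

# Direct transcription of the Stage 1 model, Eqs. (3.1), (3.2), (3.3),
# (3.4) and (3.6). x = [phi, theta] (roll, pitch); u = [wx, wy, wz]
# (body angular rates). Illustrative sketch only.

def f(x, u):
    """Euler-angle rates from body rates, Eq. (3.1)."""
    phi, th = x
    wx, wy, wz = u
    t, s, c = math.tan, math.sin, math.cos
    return [wx + t(th) * s(phi) * wy + t(th) * c(phi) * wz,
            c(phi) * wy - s(phi) * wz]

def A(x, u):
    """Jacobian of f with respect to x, Eq. (3.2)."""
    phi, th = x
    wx, wy, wz = u
    t, s, c = math.tan, math.sin, math.cos
    return [[wy * t(th) * c(phi) - wz * t(th) * s(phi),
             (wy * s(phi) + wz * c(phi)) / c(th) ** 2],
            [-wy * s(phi) - wz * c(phi), 0.0]]

def h(x):
    """Predicted accelerometer output (gravity in airframe), Eq. (3.3)."""
    phi, th = x
    s, c = math.sin, math.cos
    return [s(th), -s(phi) * c(th), -c(phi) * c(th)]

def H(x):
    """Jacobian of h with respect to x, Eq. (3.4)."""
    phi, th = x
    s, c = math.sin, math.cos
    return [[0.0, c(th)],
            [-c(phi) * c(th), s(phi) * s(th)],
            [s(phi) * c(th), c(phi) * s(th)]]

def W(x):
    """Jacobian of f with respect to the gyro noise, Eq. (3.6)."""
    phi, th = x
    t, s, c = math.tan, math.sin, math.cos
    return [[1.0, t(th) * s(phi), t(th) * c(phi)],
            [0.0, c(phi), -s(phi)]]
```

At level attitude (φ = θ = 0) this reduces to f = [ωx, ωy] and h = [0, 0, −1], which is a quick sanity check on the signs of the transcription.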

3.2.2 Stage 2 - Heading

In Stage 2, the goal is to determine heading, ψ, from the pitch and roll angles found in Stage 1 and the gyroscope measurements, combined with magnetometer measurements β, thus:

x = ψ    State

u = [φ θ ωy ωz]T    System input vector

z = [βx βy βz]T    Observation vector

Figure 9: Stage 2 components

The kinematics described in [8, Sec. 2.7] map ψ to the current pitch, roll and angular rates. Furthermore, a sensor model [8, Eq. 2.6] maps airframe magnetometer measurements into the Earth's magnetic field. With this data, and a map of the Earth's magnetic field, the procedure for obtaining the heading ψ through Kalman filtering is similar to that described for Stage 1 in Section 3.2.1 on page 10. A noteworthy property of this method for obtaining heading is that, as opposed to similar implementations, it does not rely on GPS to find ψ. This is desirable, as GPS updates are generally relatively infrequent and loss of satellite connection is not uncommon.


3.2.3 Stage 3 - Position & Wind

Once the complete attitude [φ θ ψ]T has been determined in Stages 1 and 2, it is time to estimate the absolute position, [pn pe]T, and wind, [µn µe]T. This is accomplished using the GPS as our sensor and the previously estimated pitch, heading and airspeed:

x = [pn pe µn µe]T    State vector

u = [ψ θ vair]T    System input vector

z = [gpsn gpse]T    Observation vector

Figure 10: Stage 3 components

The kinematic procedure suggested in [8, Sec. 2.8] deals with the dynamics, and again the EKF approach is applied to the kinematic model. Finally, after three cascaded steps, the complete state vector is obtained:

xcomplete = [φ θ ψ pn pe µn µe]T (3.7)

4 Visualization

This section gives a short introduction to the procedures used to test and visualize the sensor fusion algorithms.

4.1 Octave

All code is initially written in Octave. Octave allows for quick development and does not have the same 'stiffness' as a pure C/C++ implementation. Thus all experimentation and development is done in Octave, and once a desired filter performance is obtained, it is ported to C/C++. The C/C++ implementations are based on the open-source library KFilter [2], which provides a framework for the EKF algorithm.
Real-world sensor data has been logged to .csv files, which are imported and analyzed in Octave. An example of such an experiment is shown in Figures 13 and 14 on page 23.

4.2 3D

In order to visualize the filter performance live, much like with many commercial products [3, 4], live communication between the autopilot board and a computer with a monitor and 3D renderer is necessary. For this communication, a middleware called Robot Operating System (ROS) is utilized. ROS allows for link-transparent communication between two or more TCP/IP-enabled units. Thus we use the Gumstix WiFi to connect to a visualizing computer. ROS also has a built-in 3D renderer called RViz, which is used to visualize the filter output, as seen in Figure 11 on the next page.


Figure 11: The 3D model used in RViz for live visualization of filter output

RViz with ROS allows .dae graphics to be imported. Many 3D models are freely available in this format in the Google 3D Warehouse [1], so an airplane is easily found and imported into RViz. The visualization gives very intuitive feedback as to how well the filter performs. Furthermore, it allows for simultaneous comparison of different implementations, and real-world quantities such as vibrations are better conveyed than in static plots.

5 Validation

In order to assess the quality of the implemented filter, a reference experiment has been conducted. A commercial Attitude and Heading Reference System (AHRS) [3] was fixed to the autopilot board frame. Both systems estimated pitch and roll angles. In Figures 12a and 12b on the following page the two estimates are plotted together. As seen from the figures, the estimated angles are very similar, though it seems that the implemented filter responds a bit differently under g-forces (t = 15-20 s). This is expected to be a matter of filter tuning and g-force compensation.


(a) Pitch estimates.

(b) Roll estimates.

Figure 12: Estimated pitch and roll angles from conducted reference experiment.

6 Conclusion

Future work includes implementation of the last two stages of the cascaded Extended Kalman filter, mentioned in Sections 3.2.2 and 3.2.3. Furthermore, these stages would need to be validated. The global position and wind estimates are hard to evaluate. Position could be evaluated using a carefully defined path or another sensor known to provide a certain accuracy. The wind estimate is not as interesting as its impact on the position estimate; thus, if it can be approximated well enough to aid the position observer, that will suffice.
An Extended Kalman filter has been implemented. Simulation, visualization and validation reflect that the implemented EKF is indeed capable of fusing gyroscope and accelerometer data into a viable state estimate. The mathematical and theoretical background of the filter has been described. We would like to thank Lars-Peter Ellekilde for his patience and guidance throughout this project.


7 References

[1] Google 3D Warehouse, open source 3D models. http://sketchup.google.com/3dwarehouse/.

[2] KFilter - free C++ Extended Kalman Filter library. http://kalman.sourceforge.net/.

[3] VectorNav homepage, inertial measurement systems. http://www.vectornav.com.

[4] Xsens homepage, inertial measurement systems. http://www.xsens.com.

[5] A. L. Barker, D. E. Brown, and W. N. Martin. Bayesian estimation and the Kalman filter. Computers & Mathematics with Applications, pages 55-77, 1994.

[6] R. Beard. State estimation for micro air vehicles. In Javaan Chahl, Lakhmi Jain, Akiko Mizutani, and Mika Sato-Ilic, editors, Innovations in Intelligent Machines - 1, volume 70 of Studies in Computational Intelligence, pages 173-199. Springer Berlin/Heidelberg, 2007. 10.1007/978-3-540-72696-8_7.

[7] W. Burgard, D. Fox, and S. Thrun. Probabilistic Robotics. Cambridge, MA: MIT Press, 2005.

[8] A. M. Eldredge. Improved state estimation for micro air vehicles. Master's thesis, Brigham Young University, Dec 2006.

[9] R. E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME - Journal of Basic Engineering, 82 (Series D):35-45, 1960.

[10] R. Mahony, M. Euston, P. Coote, J. Kim, and T. Hamel. A complementary filter for attitude estimation of a fixed-wing UAV. In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, pages 340-345, Sept. 2008.

[11] E. C. Molina. Bayes' theorem - an expository presentation. Bell System Technical Journal, pages 273-283, 1931.

[12] W. Premerlani and P. Bizard. Direction cosine matrix IMU: Theory. DIY Drones community, May 2007.

[13] S. Riaz and A. B. Asghar. INS/GPS based state estimation of micro air vehicles using inertial sensors. In Innovative Systems Design and Engineering, Vol 2, No 5, 2011.

[14] S. Riaz and A. M. Malik. Single seven state discrete time extended Kalman filter for micro air vehicle. In Proceedings of the World Congress on Engineering 2010, Vol II, 2010.

[15] S. F. Schmidt. Kalman filter: Its recognition and development for aerospace applications. Journal of Guidance and Control, 4(1):4-7, 1981.

[16] B. L. Stevens and F. L. Lewis. Aircraft Control and Simulation. John Wiley & Sons, 2nd edition, 2003. ISBN 978-0-471-37145-8.

[17] S. Thrun, F. Dellaert, D. Fox, and W. Burgard. Monte Carlo localization: Efficient position estimation for mobile robots. In Proceedings of the Sixteenth National Conference on Artificial Intelligence, pages 343-349, July 1999.

[18] S. Thrun, F. Dellaert, D. Fox, and W. Burgard. Robust Monte Carlo localization for mobile robots. Artificial Intelligence, volume 128, pages 99-141, 2000.

[19] G. Welch and G. Bishop. An introduction to the Kalman filter, 2001.


Appendix I
Project Log


This is the aircraft group's log. It will be updated daily with a new to-do list and some reflection on how progress has gone.

01/05
We have now switched from practical work to report writing. A draft report structure has been made and put up for debate between the supervisors and us. Report writing begins.

25/04
Flight tests with aided flight are conducted. Two flights have so far been completed with aided flight enabled - and it works. Pitch and roll can now be controlled directly from the remote. The PID is tuned to respond 'softly'. The system is so stable that, while airborne, we could calmly hand the remote to Peter, who had never flown for real before. He steered the plane around like a professional and handed the remote back to Hjalte after roughly 6-10 minutes of good flying. We then landed the plane in aided mode, and a finer landing is rarely seen.
We are now tuning various details while reviewing this morning's data, and weather permitting, we will go on a new mission this afternoon/evening.

24/04
- Peter changes the pitot calibration procedure, so that offset and gradient are now adjusted by the Kalman filter via GPS.
- Hjalte is on a company visit all day.

23/04
Raw data has been analyzed and the state estimator tuned. Furthermore, the third stage of the Kalman filter has been implemented.

20/04
After a long silence there is some life in the log again. We were out flying in the lovely (= calm) weather this afternoon. From the last flight we learned that it was perhaps a bit naive to believe everything would work at once. We have taken that lesson to heart, and instead of running the state estimator we chose this time to log all sensor data (there is not enough CPU power to do both at once). It should thus now be possible to tune the state estimator on the ground from sensor data plus video out of the 'windshield'. The day offered two successful flights, filmed both from the plane and from the ground. This gives us a good starting point for moving on.
Next week we will look at these data, and a video is also coming so everyone can follow the status. Furthermore, in the quiet days since last time, we have completed five important tasks:
- The magnetometer placed on the PCB has turned out (as feared) to sit too close to the motor's magnetic field. We have made an interface for a new magnetometer, now placed further back in the plane -> so the plane no longer always flies north when the motor draws current! (The new (larger) motor is also mounted.)
- Besides the new magnetometer's yaw estimate, we now also use GPS measurements to compute heading. The GPS furthermore supplements the pitot tube in measuring speed.
- The GPS has no flash or backup battery, so we previously had to remember to configure the GPS after every power cycle. This configuration is now implemented as part of boot, so the GPS is always correctly configured - leaving less room for human errors and omissions.
- A structural change has been made in the code, so the same code can be run in different 'modes'. Thus only one place needs changing, whether we simulate, fly with the state estimate, or fly and log sensor data. All of this is controlled via a single parameter in our launch file.
- Last but not least, we have held mini-meetings with Ulrik and Rasmus (separately), discussing fulfillment of learning goals and deliverables. It has also been aired that we should soon begin a transition phase, slowly phasing out practical project work in favor of report writing -> neither of us wants the report written in a rush at the end. (We have probably had enough classic thesis blunders by now.) We have realized, and regret, that we will not get to everything we would like, but we remain in good spirits and hope to deliver a usable project that can be taken over by others - while properly rounding off what we have achieved, both practically and in the report.
Have a good weekend, everyone reading along.

4/4
We have prepared for flying tomorrow. The purpose of the trip is to test pitch and roll estimation as well as semi-automatic/aided flight, where the plane adjusts itself to a desired pitch and roll via PID. Setpoints are the stick positions on the remote. The PID parameters have been trimmed as well as currently possible in the simulator, but we will probably tune them further during tomorrow's flights.
In addition, a battery fixture has been made and a camera mounted on the plane, to be used for verifying the pitch/roll estimate by comparing the artificial/estimated horizon with the actual one in the video.

3/4
The day has been a drag. A compiler difference meant that code verified on x86 did not work on ARM. The error was eventually found, and state estimation now works properly with quaternions in Stage 1.

2/4
Bugfix of quaternions. Quaternions are now functional in Stage 1 (pitch and roll estimation). Thus we avoid the Euler angles' singularities and can estimate the plane's pitch and roll through 360 degrees.

30/3
Made a new motor flange, so we avoid resonance at 70%+ throttle.
Converted from Euler angles to quaternions.

29/3
Held a mini-meeting with Anders Fugl.

Wrote on the report.
Replaced the motor shaft.

28/3
Noise has been added to the simulator's sensor output; its magnitude is based on measurements taken during the first flight. The ground station can also be connected alongside the simulator, so the state estimate can be visualized live. A new battery box has been made for the plane, minimizing the risk of the battery smashing into the electronics in a possible hard landing.

27/3
Work has been done on the simulator and the state estimator. To validate our Kalman filters, code has been generated from the simulator to simulate sensor outputs (gyros, accelerometers, magnetometers). These sensor data are published via ROS to an external node containing the state estimators for attitude and heading - it appears to work. Further work involves adding noise (our flight with Henning can give a good hint of the noise magnitude) and fiddling with filter tuning. In addition, we need the simulator to publish GPS, pitot and altitude data for the last stage of the state estimator.

26/3
Worked further on PID loops. Mixing of elevator and rudder depending on roll. Altitude control ok, roll control ok, pitch control ok.

22/3
CRRCSim is now rossified -> a completely basic PID loop can hold the plane at a given altitude. The PID library must be modified and adapted for a nicer response.
Kjeld: Code for my mini web server (and thus a stable socket server) is now available at: http://wiki.kjen.dk/index.php/Open_Source_Software

20/3
- Work on the report. Aided flight and the PID control loops are described and illustrated.
- The estimator and I2C nodes have been merged, though still split into two classes -> 25-30% CPU freed.
- PID is being tuned, as well as can be done on the ground.
- Ordered motor shafts from Wheelspinmodels; they expect to ship today or tomorrow.
Kjeld :-) http://www.dr.dk/Nyheder/Udland/2012/03/20/175030.htm?rss=true

19/3
Peter optimizes the code. Sticking 100% to the ROS idea of separate nodes turned out to be too hard on the ARM. The Kalman and I2C/sensor nodes have therefore been merged into one, removing the biggest resource hog in terms of data rate and message size. Hjalte documents pitot hardware, theory and validation tests. We have furthermore agreed to meet with Andreas Fugl at the end of week 13 (26/3 or later).
[Kjeld] Regarding stickers for the plane, plain advertising for SDU-Tek is fine. I am sure Bo will find a good solution. He should, however, take into account that wings do break once in a while, so it probably pays off to print an extra set of foils.

16/3 Rasmus
See what they have at Hohenheim Uni, which I just visited ;-) http://sengis.uni-hohenheim.de/uas.en.php

- The motor has been disassembled and a new shaft ordered.
- A camera mount for the Nokia phone has been made, so we can film the horizon.
- Wrote to Andreas Rune Fugl (MMMI PhD), who should have experience with model aircraft, to hear if we can tap into his experience.
- Delivered templates for the plane's stickers to Bo from communications, so we can get our own stickers (open debate about rate/image(s)).

15/3-12:
Hjalte had visitors from Germany and took the day off.
Peter implemented PID for use in pitch and roll aided flight.

14/3-12: (Kjeld)
I just stumbled upon this article - I have not had time to check its quality, but it contains various data that might be useful for your report, and it describes a somewhat different application. See if you can use anything: http://kjen.dk/bib/kj/2008%20LOW%20COST%20UAV%20FOR%20POST-DISASTER%20ASSESSMENT.pdf (login and password are 'robotbil')

The first independent flight has been completed. It resulted in a bent motor shaft.

13/3-12:
We visited Kold College and talked with Stig Hansen, who has given us permission to use two fields (see below) for test flights.

We are allowed to fly on them whenever it suits us until the project's conclusion in June. Use is free, as long as we take reasonable care of the crops. Stig would like to hear about the project, and we have agreed that they can watch a flight once we have something more concrete.

New project management.
As agreed at the meeting on 12/3, we will now run sprints of about two weeks, where a predefined work task ends in a 'product' that can be demonstrated and closed (à la the SCRUM methodology). This structures the daily project work, and the supervisors can easily follow the project's progress (or lack thereof). Below is our first draft of such work tasks and their time priorities.

SPRINT:

Date   Sprint                    Sprint products
26/3   Sprint 1                  - Stage 1 (Kalman roll+pitch estimate, incl. airspeed). - Aided Bank. - Aided Climb. - Stage 2 (Kalman yaw estimate).
9/4    Sprint 2                  - Aided Heading. - Stage 3 (Kalman position and wind estimate).
23/4   Sprint 3                  - Literature study of navigation and control methods. - Selection and definition of navigation and control method. - Rossify simulator.
7/5    Sprint 4                  - Tool / showcase OR waypoint navigation.
21/5   Sprint 5 (half sprint)    Finish project.

------------------------------------------------------------- x ----------------------------------------------------
Sprint 1:
1) Sensor model for pitot - Done
   - Mathematical model - Done
   - Verification - wing out of the car window versus GPS speed - Done
   - Document - Done
2) Verify Stage 1, roll & pitch
   - Incorporate airspeed for estimating generated g in the sensor model - Done
   - Small, straight flights with known attitude (first level, then with small deflections etc.)
   - Generate some g-forces, see if the filter reacts as expected
   - Generate some more g, see if the filter reacts as expected
   - Field test: video from behind, overlay / PiP artificial horizon / video on the plane (old phone camera)
   - Document (YouTube link, something...)
3) Aided flight:
   - Aided bank/roll: - Done

     stick/roll_desired -> [PID] -> ailerons -> roll angle
            ^                                      |
            '---------------- (roll_actual) -------'

   - Tuning of PID parameters. First in the lab, then in the field.
   - Aided climb: - Done

     stick/pitch_desired -> [PID] -> elevator -> pitch angle
            ^                                       |
            '---------------- (pitch_actual) -------'

   - Tuning of PID parameters. First in the lab, then in the field.
   - Field test: stick position versus artificial horizon or video
   - Document - Started

------------------------------------------------------------- x ----------------------------------------------------
Sprint 2:
1) Verify Stage 2 (heading / yaw)
   - First straight, short stretches in different directions
   - Then small circle segments
   - Fly in a given yaw direction:

     yaw_desired --> [PID] --(roll_desired)--> [PID] ------> Ailerons --> BankAngle
          ^                       ^                              |
          |                       '-------- (roll_actual) -------'
          '------------------------------- (yaw_actual) ---------'

2) Stage 3 of the Kalman filter (position and wind estimate)
   - Rosbag: Fly nicely! Preferably short, but nice.
   - Implement, debug etc. on the workstation
   - Verify in the air
   - Ground truth..? Video
   - Document
3) Finish report chapter
   - Introduction
   - Sensor fusion theory
   - 3-stage model
   - Literature study (quaternions, 7-state model, DCM, etc.)
   - Partial conclusion

------------------------------------------------------------- x ----------------------------------------------------
Sprint 3:
1) Investigate and define flight plan
   - Literature study
   - Methods: primitives (line, circle etc.), splines, AIDL (possibly modified)
   - Assess flyability
   - Coupling with measurement points (not just a flight, but a flight with a purpose..)
   - Document
2) Choose / develop method for reading the flight plan
   - [Dx, Dy, Dz, Dh] = f(Px, Py, Ph) (desired (Pn, Pe, altitude, heading) as a function of current (Px, Py, heading))
   - Will give position, heading and altitude errors, which can serve as inputs to control loops
   - Ideas:
     - Vector field around "primitives"
     - ?
3) Prepare simulator - Done
   - Rossify simulator with AHRS out and servo in. - Done
4) Document
------------------------------------------------------------- x ----------------------------------------------------
Sprint 4:
Option 1: Autonomous navigation.
1) Implement navigation (found suitable in the previous sprint) in the simulator.
2) Validate navigation in a field test (ground truth...?)
Option 2: Tool.
1) Define interfaces (HW, SW; discuss whether mechanical should be standard)
   - Supply voltage, current, buses (parallel, Ethernet, USB, etc...)
   - SW interface (rostopics, trigger signals, data transfer etc.)
2) Implement an example tool.
3) Document.
4) Get nice grade.
------------------------------------------------------------- x ----------------------------------------------------

12/3-2012:
Meeting held with all supervisors. The project is being pulled together and new guidelines set. New scrum-ish approach with sprints. We have ordered a Model Flying Union membership for Hjalte, so he is insured: DKK 700 through MMMI. We have ordered high-risk spare parts from the Austrian www.der-schweighofer.at through MMMI / Richard Beck:
Nylon screws: 2x http://www.der-schweighofer.at/artikel/88737 at €3.60
Wing brackets: 2x http://www.der-schweighofer.at/artikel/63017 at €5.40
Wings: 1x http://www.der-schweighofer.at/artikel/75292 at €37.90
Tail set: 1x http://www.der-schweighofer.at/artikel/75294 at €18.90
Propellers: 3x http://www.der-schweighofer.at/artikel/44950 at €6.00
Postage: 1x at €6.00
Total: €98.80 = 741 DKK

5/3-12 - Todo:
● Prepare for the supervisor meeting on 12/3-12 (see the list below).
● Move the differential pressure sensor out into the wing, so we don't get problems with a bent air hose on future flights.
● Finish modelling pitot -> airspeed
● Implement the 3rd stage of the Kalman filter (GPS -> position + wind estimate)


● Think about how we define a route plan / waypoint set - Ideas:
   ○ AIDL - reduced (can we find a specification?)
   ○ Composition of primitives (straight line, arc.... etc?)
   ○ Splines through waypoints - possibly with some constraints
● How do we tie measurement data / measurement points into the route plan? Possibly criteria for whether a measurement can be completed, or whether the plane must turn around for a new pass over the measurement point.
● Consider the flyability of waypoints / paths. (We cannot bank instantaneously)
● New mounting for the PCB - possibly a re-design...
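The "pitot -> airspeed" model from the todo list above reduces to Bernoulli's relation dp = ½·rho·v², i.e. v = sqrt(2·dp/rho). A minimal sketch, assuming standard sea-level air density and omitting the sensor offset/scale calibration that the actual modelling work would cover:

```python
import math

RHO = 1.225  # standard sea-level air density [kg/m^3]

def airspeed_from_dp(dp_pa, rho=RHO):
    """Indicated airspeed [m/s] from differential pitot pressure [Pa]."""
    # Bernoulli: dp = 0.5 * rho * v^2  =>  v = sqrt(2 * dp / rho).
    # Negative readings (sensor noise around zero) are clamped to zero.
    return math.sqrt(max(0.0, 2.0 * dp_pa / rho))
```

For example, a differential pressure of 612.5 Pa corresponds to sqrt(1000) ≈ 31.6 m/s.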

For the meeting, Monday 12/3-12:
Synopsis
● Outline for the report - (started)
● Write a couple of sections - (Done)

Budget
● What we said we would spend:

Sum total - estimated before purchase
RC gear                                                          5,155 kr
Electronics                                                      4,843 kr
Show-case / proof of concept: camera tool (Gumstix + camera
+ gimbal + 3D ABS prototype + PCB) (rough estimate)              4,000 kr
Total sum                                                       13,998 kr

● What we have spent:

Sum total - actual purchases
RC gear                                                          6,676 kr
Electronics                                                      4,915 kr
Show-case / proof of concept: camera                               410 kr
Replacement parts (fried electronics): motor (665), receiver
(649), motor controller (475), faulty PCB (360)                  2,149 kr
Total sum                                                       14,150 kr

● Difference between the budget estimate and actual purchases: 14150 - 13998 = 152 kr.
   ○ Note that no showcase has been purchased. The 152 kroner (+ showcase) over budget is due to:
      ■ The RC equipment and electronics were 1,595 kr more expensive (better gear than originally budgeted, i.e. Gumstix FE, 2 batteries, a good charger.)
      ■ Unforeseen extra costs in the form of fried: motor, receiver, motor controller, LAN chip and a couple of other small things.
      ■ Furthermore, the budget did not account for the cost of pilot pay.
      ■ Pilot pay = 3x ??? kr/hour

● What will we need?
   ○ Plastic bolts
      ■ Plastic flange thingy
      ■ Can be bought from Multiplex - but a bit expensive and probably cumbersome.. Danish retailer?

Diary / Log

● Google Docs, so the supervisors can follow along (could start with this document)

Schedule
● Old schedule -> /dev/null
● Make a new / revised one
   ○ The old schedule must be revised because:
      ■ Sensor fusion takes longer than assumed.
      ■ Data collection is much more time-consuming than first assumed, due to coordination with Henning, his pay, and the weather.
      ■ Flying is not just something you go out and do; it requires more time/planning than assumed. Pilot / learn-to-fly-ourselves issue.
● MMMI money for a trainer plane?
● Club / union memberships?
● Airfield? (Kold College / Odense airport / the middle of nowhere?)

Agenda
1. Welcome.
2. Project status.
3. Budget:
   a. What we budgeted.
   b. What we have spent.
   c. What we will still need.
4. Data collection + flight tests -> fly ourselves, or pay an external pilot.
5. Review of the first flight with data collection.
6. Schedule:
   a. Revision of deliverables.
   b. The project's further course.
7. Open discussion:
   a. Are expectations being met?
   b. What can we do better?
   c. What is good?
