
Technical Documentation

Remotely Operated Underwater Vehicle

Version 1.0

Author: Niklas Sundholm

Date: December 19, 2016

Status

Reviewed: Isak Wiberg, 2016-12-18
Approved: Isak Nielsen, 2016-12-19


Project Identity

Group E-mail: [email protected]

Homepage: www.isy.liu.se/edu/projekt/tsrt10/2016/rov/

Client: Isak Nielsen, Linköping University
Phone: +46 13 28 28 04, E-mail: [email protected]

Customer: Rikard Hagman, Combine Control Systems AB
Phone: +46 72 964 70 59, E-mail: [email protected]

Contact at SAAB: Nicklas Johansson, SAAB Dynamics AB
Phone: +46 13 18 63 65, E-mail: [email protected]

Course Responsible: Daniel Axehill, Linköping University
Phone: +46 13 28 40 42, E-mail: [email protected]

Project Manager: Isak Wiberg
Phone: +46 73 542 49 59, E-mail: [email protected]

Advisors: Kristoffer Bergman, Linköping University
Phone: +46 73 847 31 51, E-mail: [email protected]

Group Members

Name                Responsibility    Phone              E-mail (@student.liu.se)
Elin Karlström      Tests             +46 76 233 06 32   elika149
Klas Lindsten       Hardware          +46 76 248 66 69   klali296
Fredrik Ljungberg   Simulation        +46 73 051 48 95   frelj431
Erik Sköld          Software          +46 73 905 43 43   erisk214
Niklas Sundholm     Documentation     +46 70 981 68 87   niksu379
Isak Wiberg         Project manager   +46 73 542 49 59   isawi527
Filip Östman        Design            +46 72 203 33 07   filos433


Document History

Version Date Changes made Sign Reviewer(s)

0.1   2016-12-08   First draft.                                  KL    KL
0.2   2016-12-13   Revisions based on feedback.                  All   IW
0.3   2016-12-15   Revisions based on feedback.                  All   EK, ES
1.0   2016-12-18   First version. Revisions based on feedback.   All   IW


Contents

1 Introduction
1.1 Purpose
1.2 Use

2 System Overview
2.1 System Hardware Components
2.2 System Software Components
2.3 System Modules
2.4 Sensors
2.4.1 Inertial Measurement Unit
2.4.2 Pressure Sensor
2.5 Limitations and exclusions
2.6 Design Philosophy

3 Coordinate Systems Introduction
3.1 Introduction to Quaternions
3.2 Local Coordinate System
3.3 Global Coordinate System
3.4 Marker Coordinate System
3.5 Pool Coordinate System
3.6 Camera Coordinate System

4 Vision System
4.1 Camera
4.1.1 Image Transport Pipeline
4.2 ArUco Markers
4.3 Camera Calibration
4.4 Vision System Output

5 Modelling the ROV
5.1 Euler Angles
5.2 Dynamics
5.2.1 Kinematics
5.2.2 Rigid-Body Kinetics
5.3 Hydrodynamics
5.4 Hydrostatics
5.5 Actuators
5.6 Model in Component Form
5.7 Simplified Model Structures

6 Parameter Estimation
6.1 Simplifying the Model Equations
6.2 Gathering Estimation Data
6.3 Parameter Estimation Using an Extended Kalman Filter
6.4 Parameter Estimation Result
6.5 Further Development

7 Control System
7.1 Decentralized Control
7.1.1 PI Controller
7.2 Decoupling Thrust Allocation
7.2.1 Cascade Control
7.3 General Structure
7.4 Combined Velocity Controller
7.5 Depth Velocity Controller
7.6 Position and Attitude Controller
7.7 Modes of Operation
7.7.1 Input Modes
7.7.2 Control Modes
7.8 Interface
7.9 Further Development
7.9.1 Feedback Linearization

8 Sensor Fusion
8.1 EKF
8.1.1 EKF Algorithm
8.2 Notation
8.3 Motion Model
8.4 Measurement Model
8.4.1 Gyroscope
8.4.2 Accelerometer
8.4.3 Pressure Sensor
8.4.4 Vision System
8.5 Conversion of Velocities from PCS to LCS
8.6 Further Development

9 Graphical User Interface
9.1 Front End Interface
9.2 Further Development

10 Simulator
10.1 Software Requirements
10.2 Simulator in Simulink
10.3 Further Development

A Telegraph Test

B Communication
B.1 Messages specification
B.2 Dynamic Reconfigure


Notations

ArUco Library for detection of markers in images [1].

CB Center of Buoyancy.

CCS Camera Coordinate System.

CG Center of Gravity.

CSI-2 Camera Serial Interface Type 2.

DoF Degrees of Freedom.

EKF Extended Kalman Filter.

ESC Electronic Speed Controller.

FOV Field of View.

GCS Global Coordinate System.

GUI Graphical User Interface.

I/O Input/Output.

IMU Inertial Measurement Unit.

LCS Local Coordinate System.

MCS Marker Coordinate System.

NED North East Down (coordinate system).

PCS Pool Coordinate System.

PWM Pulse Width Modulation.

ROS Robot Operating System.

ROV Remotely Operated Underwater Vehicle.

SLAM Simultaneous Localization and Mapping.

XBOX Video game console built by Microsoft.


1 Introduction

In both civilian and military applications there is an increasing interest in, and need for, autonomous vehicles that can carry out missions at sea, in the air and on land without contact with an operator. Examples of tasks are surveillance, rescue operations, surveying, mapping, repairs or tactical missions.

This project has continued the development of a remotely operated underwater vehicle (ROV). This document serves to present and discuss the results and future development of the project.

1.1 Purpose

The main purpose of this project was to develop a positioning system that can be used in a swimming pool. Further goals were to develop a robust control system, to build a simulation environment for speeding up the development process, and to improve the existing model of the ROV. The findings from this project can be used in future development projects to further develop the ROV.

1.2 Use

The result of this project will be used for further development, where the ultimate goal is to achieve a fully autonomous system. The ROV will also be used as a demo platform for Combine Control Systems AB.


2 System Overview

This section outlines the system, both regarding hardware and software. The system consists of the ROV itself and a PC, here referred to as the workstation, which is connected to the ROV via an Ethernet cable. Figure 1 shows a schematic diagram displaying the hardware components (and the software running on them), whilst Figure 2 shows the different modules and information flow during normal operation. The hardware components are denoted with sharp-edged boxes and software modules with round-edged boxes. Solid lines denote channels through which the components are powered and dashed lines denote channels through which components communicate.

Figure 1: An overview of the components in the system, how they are powered and how they communicate. Solid lines denote channels through which the components are powered and dashed lines denote channels through which components communicate. Sharp-edged boxes denote hardware components and round-edged boxes denote software modules.


Figure 2: Information flow between the different modules in the system during normal operation. Sharp-edged boxes denote hardware components and round-edged boxes denote software modules.

2.1 System Hardware Components

The system has two main hardware components: the ROV itself and the workstation to which it is connected. These, in turn, consist of several pieces of hardware, namely:

• A chassis and thrusters from Blue Robotics.

• A Raspberry Pi 2 Model B single board computer.

• An HKPilot Mega 2.7, an Arduino-based flight controller board with I/O and an IMU.

• Six ESCs, one for each motor.

• Battery and USB power regulator.

• Raspberry Pi camera module v.2.

• A PC running Linux and ROS (the workstation).

• Ethernet cable to connect the ROV to the workstation.

2.2 System Software Components

ROS is used as the main framework for the software running on the ROV and the workstation. ROS has a concept called nodes, which are separate units of code that can run more or less independently of each other. ROS nodes communicate with each other via ROS topics: channels that nodes can write messages to by publishing and read messages from by subscribing. A minimal sketch of this pattern is given below.
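The following is a minimal, hypothetical illustration of the publish/subscribe pattern in Python using rospy; the node and topic names are made up for the example and are not part of the project.

import rospy
from std_msgs.msg import String

def on_message(msg):
    # Called by rospy every time a message arrives on the subscribed topic.
    rospy.loginfo("received: %s", msg.data)

if __name__ == "__main__":
    rospy.init_node("example_node")                       # register with roscore
    pub = rospy.Publisher("/example_out", String, queue_size=10)
    rospy.Subscriber("/example_in", String, on_message)   # read by subscribing
    rate = rospy.Rate(10)                                 # loop at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="status ok"))             # write by publishing
        rate.sleep()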

These nodes run on the ROV:

roscore A node that is part of stock ROS. Handles back-end parts such as topics.

raspicam_node Camera node for streaming video from the ROV.


bluerov_arduino Node encapsulating the HKPilot, which has the IMU, water pressure sensor and barometer attached. The thrusters are controlled via this node. It is also referred to as the I/O module in some cases, since that is the only task it performs.

These nodes run on the workstation:

heartbeat Node for checking the connection between the workstation, the ROV and the HKPilot.

teleop_xbox Node for handling inputs from the XBOX controller.

joy A node for interacting with the OS USB inputs.

rqt A GUI for the ROV.

controller Node with the implementation of the controllers for the ROV.

sensorfusion The sensor fusion node.

The communication between the nodes is described in Appendix B.

2.3 System Modules

This section gives a brief introduction to the software modules in the system. The modules are the control module, the sensor fusion module, the I/O module, and a module for the graphical user interface (GUI).

The sensor fusion module consists of implementations of filters and state estimators. It receives raw sensor data as input and yields state estimates as output. The sensor fusion module is implemented on the workstation and is further described in Section 8.

The control module consists of the different controller implementations for the ROV. It reads state estimates from the sensor fusion module and outputs PWM control signals to the ESCs, which in turn control the rotational speed of the six thrusters. The control module is further described in Section 7.

The I/O module is located on the onboard computer and the HKPilot. The onboard computer runs a rosserial node and functions as master, while the HKPilot runs a rosserial-arduino node that sends sensor readings to the master over serial communication. The I/O module was implemented during the MSc thesis work [2] and its implementation will not be further described in this document.

The graphical user interface module is part of the software running on the workstation. It is used by the operator to ease the operation of the ROV. A guide on how to use the GUI is available in the User Manual [3]. A brief description of the implementation of the GUI and some ideas for further development are given in Section 9.

2.4 Sensors

To enable fusion of predicted states with observed states, the ROV needs to accumulate data from sensors. Here, the sensors of the ROV and their individual contributions to the sensor fusion module are described.


2.4.1 Inertial Measurement Unit

An inertial measurement unit (IMU) uses a gyroscope and an accelerometer to measure angular velocity and acceleration. The IMU in the ROV measures the acceleration along three axes and the angular velocities about the same axes. The sensor fusion module also uses the sensor data from the IMU to assist in the estimation of the ROV's position in one of the fixed coordinate systems.

2.4.2 Pressure Sensor

The ROV is also equipped with a pressure sensor and a barometer, which are used to estimate the depth of the ROV. The pressure sensor is placed in the stern of the ROV, which means that the attitude of the ROV could be taken into account when estimating the current depth; however, this is not implemented.

2.5 Limitations and exclusions

The ROV has only been operated in a swimming pool. The system only works if the water is clear and provides good visibility. The requirements regarding ROV operational performance are not guaranteed to hold in environments other than a swimming pool.

In order to simplify the sensor fusion, two additional assumptions regarding the ArUco markers are introduced. The first is that the first ArUco marker detected must be the marker defining the pool coordinate system (PCS). This makes the transformation between the marker coordinate system (MCS) and the PCS defined from the beginning of operation, which simplifies the sensor fusion. The other assumption is that the orientation of the MCS (and, combined with the former requirement, also the PCS) in the global coordinate system (GCS) is known from the beginning of an operation, i.e. has been measured beforehand. This excludes the estimation of these coordinate systems' relations to each other, but if such estimation is implemented in the future, the system will be less dependent on the initial conditions.

2.6 Design Philosophy

The ROV is module-based and needs to remain so for further development. It is important that code in software projects is well written and commented. All new C/C++ source code associated with ROS conforms to the ROS C++ code standard, and code that is not associated with ROS conforms to the Google C++ code standard.


3 Coordinate Systems Introduction

This section will serve as an introduction to the coordinate systems used in this project.

Four different coordinate systems are defined for the system: the Local Coordinate System (LCS), the Global Coordinate System (GCS), the Marker Coordinate System (MCS) and the Pool Coordinate System (PCS). The GCS is a North-East-Down (NED) coordinate system with its x-axis pointing north, y-axis pointing east and z-axis pointing down.

The four coordinate systems are illustrated in Figure 3.

Figure 3: Illustration of the different coordinate systems. The MCS is fixed in the first marker detected and the LCS is fixed in the ROV. The PCS is aligned with the pool with its z-axis pointing down, and the GCS is a NED system. The GCS and PCS both have their origin at the same point as the MCS.

3.1 Introduction to Quaternions

A quaternion is an extension of the complex number system. A quaternion can be written in the form

q = q_0 + q_1 i + q_2 j + q_3 k,    (1)

where q_0 is the real part and q_1, q_2 and q_3 are real coefficients of i, j and k, which are the fundamental quaternion units, similar to the imaginary unit of a complex number. The fundamental quaternion units have the following properties:

i^2 = j^2 = k^2 = ijk = -1,    (2a)
ij = k,  ji = -k,    (2b)
jk = i,  kj = -i,    (2c)
ki = j,  ik = -j.    (2d)

The transformations between the MCS, PCS, LCS and GCS are predetermined according to Figure 4. The transform from PCS to GCS contains the parameter α, which the user defines by manually measuring the magnetic heading, in degrees, of the PCS y-axis. The measured α can then be entered in the GUI, which converts α to radians. The rotations between the different coordinate systems are defined in (4).

Figure 4: Relationship between the different coordinate systems and the quaternions that describe their rotations relative to each other. Note that the camera coordinate system (CCS) is only a helper coordinate system, since the vision system gives the pose of the camera, not of the centre of gravity of the ROV.

Unit quaternions have the property of representing rotations in 3D space. The quaternion representation of a rotation by an angle θ about a unit axis (u_x, u_y, u_z) is

q = \cos\left(\frac{\theta}{2}\right) + (u_x i + u_y j + u_z k) \sin\left(\frac{\theta}{2}\right).    (3)

This representation is utilized for describing the relationships between the different coordinate systems. A quaternion ${}^{A}_{B}q$ is read as the rotation of B's coordinate axes in A's coordinate system. Given this notation, the following quaternions are defined:

{}^{PCS}_{GCS}q = \cos\left(\frac{\pi/2 + \alpha}{2}\right) + \sin\left(\frac{\pi/2 + \alpha}{2}\right) k,
{}^{MCS}_{PCS}q = 0.5 + 0.5i - 0.5j - 0.5k,
{}^{CCS}_{LCS}q = 0.5 - 0.5i - 0.5j - 0.5k.    (4)

These, along with ${}^{MCS}_{CCS}q$ (which is given as a measurement from the vision system, see Section 4), fully describe all the coordinate systems' relations to each other. If the relation between two coordinate systems that are not directly connected to each other is sought, for example the LCS rotation in the GCS, the following property is utilized:

{}^{GCS}_{LCS}q = {}^{GCS}_{PCS}q \otimes {}^{PCS}_{MCS}q \otimes {}^{MCS}_{CCS}q \otimes {}^{CCS}_{LCS}q    (5)
             = \left({}^{PCS}_{GCS}q\right)^{-1} \otimes \left({}^{MCS}_{PCS}q\right)^{-1} \otimes {}^{MCS}_{CCS}q \otimes {}^{CCS}_{LCS}q.    (6)

Finally, a quaternion can be converted to a rotation matrix representation [4] by

R(q) = \begin{bmatrix} q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2q_1q_2 - 2q_0q_3 & 2q_0q_2 + 2q_1q_3 \\ 2q_0q_3 + 2q_1q_2 & q_0^2 - q_1^2 + q_2^2 - q_3^2 & -2q_0q_1 + 2q_2q_3 \\ -2q_0q_2 + 2q_1q_3 & 2q_2q_3 + 2q_0q_1 & q_0^2 - q_1^2 - q_2^2 + q_3^2 \end{bmatrix}.    (7)

3.2 Local Coordinate System

The LCS is fixed in the ROV with the x-axis pointing towards the bow and the y-axis pointing starboard. The z-axis is perpendicular to both the x-axis and the y-axis, and since the LCS is a right-handed system, the z-axis points down. The origin is placed in the ROV's centre of gravity (CG).

3.3 Global Coordinate System

The GCS is a NED coordinate system with its x-axis pointing north, y-axis pointing east and z-axis pointing down. North refers to the direction of north on the WGS84 reference ellipsoid (i.e. pointing north along some meridian line), east points along the circles of latitude of the WGS84 reference ellipsoid, and down points toward the centre of the WGS84 reference ellipsoid. The (N,E) coordinates describe the position on a tangent plane and the (D) coordinate describes the offset from this plane. For an illustration of the GCS, see Figure 5.

The GCS origin is fixed relative to the WGS84 ellipsoid and placed somewhere on its surface. In this project the GCS origin is placed in the MCS origin. The GCS is appropriate to use when longitude and latitude remain relatively constant, as they do during operations in this project. In such circumstances, the GCS can be considered an inertial frame of reference, even though it rotates with the Earth.



Figure 5: The GCS is a North-East-Down (NED) system. The ellipsoid for which the GCS is defined is the WGS84 reference ellipsoid. The North axis points along the meridian lines of the WGS84 while the East axis points along the circles of latitude of the WGS84. The Down axis points toward CE, which is the centre of the WGS84 ellipsoid. The (N,E) coordinates define a position on the tangent plane in question, and the (D) coordinate defines the offset from this plane. The GCS origin is fixed on the WGS84 ellipsoid and is placed in the MCS origin.

3.4 Marker Coordinate System

The MCS is a coordinate system aligned with the first ArUco marker detected by the vision system. The MCS is oriented according to Figure 6. The reason for introducing this coordinate system is that the ROS package aruco_mapping, which is used in the vision system, delivers camera pose estimates in this coordinate system. All position measurements are delivered to the sensor fusion module in the MCS, but are trivially converted to PCS coordinates immediately. It is therefore convenient to estimate positions and velocities in PCS coordinates. These positions and velocities can then be transformed between the different coordinate systems using simple transformation operations.


Figure 6: The MCS positioned and oriented according to the first ArUco marker detected. The x-axis points up in the marker, the y-axis left and the z-axis away from the marker. The origin is located in the centre of the first marker detected. All ArUco markers have a well-defined up direction, but this direction has no visual cue easily interpreted by a human.

3.5 Pool Coordinate System

The PCS is a coordinate system fixed in the pool, intended to help the operator by providing coordinates that are easily interpreted. Once the MCS is defined, the PCS is defined by a rotation of the MCS so that the x/y-plane is aligned with the water surface. This means that the PCS is also undefined until the first marker has been located. The PCS is intended as a coordinate system for visualising states and simplifying operator input in a user-friendly way.

3.6 Camera Coordinate System

The Camera Coordinate System (CCS) has its origin in the Raspberry Pi Camera Module, with its x-axis pointing starboard, y-axis pointing towards the keel and z-axis pointing toward the bow of the ROV. This coordinate system exists because the aruco_mapping package outputs the orientation estimate as this coordinate system's orientation in the MCS, i.e. ${}^{MCS}_{CCS}q$. No calculations are done in this coordinate system, but it exists to define ${}^{CCS}_{LCS}q$, effectively providing ${}^{MCS}_{LCS}q$, which together with ${}^{PCS}_{GCS}q$ and ${}^{MCS}_{PCS}q$ provides the desired measurement of attitude.


4 Vision System

To estimate positions and linear velocities, a vision system is implemented, consisting of a camera, markers and software for processing images from the camera. The basic idea is that markers are placed on the walls and bottom of the swimming pool; based on which markers the camera is capturing, the position and attitude can be determined unambiguously. The different parts of the vision system are described thoroughly in Sections 4.1-4.3.

4.1 Camera

The ROV is equipped with a forward-facing Raspberry Pi Camera Module. The camera is connected to the Raspberry Pi with a camera serial interface type 2 (CSI-2). The onboard computer runs a ROS node, raspicam_node, which streams the video feed to the workstation. The camera has a horizontal field of view (FOV) of 54° and a vertical FOV of 41° [5].

Figure 7: Visualization of the ROV from the side (LCS x-axis pointing left and LCS z-axis pointing down), showing the acrylic tube and dome. The camera is placed in the front of the ROV inside the dome.

4.1.1 Image Transport Pipeline

There are four nodes involved in the image acquisition and processing for position estimates. The raspicam_node runs on the ROV, captures video from the Raspberry Pi Camera Module and publishes a compressed version of the video feed to the topic /camera/image. The ArUco package, however, requires an uncompressed video feed. The image_uncompressor node subscribes to the compressed video feed topic and republishes an uncompressed version to the /camera/image_raw topic. ArUco could use this video feed as input; however, marker detection can be improved by first filtering the video feed with the aruco_mapping_filter node, which performs histogram equalization and thresholding and republishes the video feed to the /camera/image_raw_filtered topic. This image transport pipeline is illustrated in Figure 8, and a sketch of the filtering step follows below.

Figure 8: Illustration of the image transport pipeline. The compressed image is sent from raspicam_node through the image_uncompressor node and the aruco_mapping_filter node to the aruco_mapping node.
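A minimal sketch of the filtering step, assuming rospy, OpenCV and cv_bridge; this illustrates the described histogram equalization and thresholding and is not the project's actual aruco_mapping_filter source.

import cv2
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import CompressedImage, Image

bridge = CvBridge()
pub = None

def on_compressed(msg):
    # Decode the compressed payload into a grayscale OpenCV image.
    gray = cv2.imdecode(np.frombuffer(msg.data, np.uint8), cv2.IMREAD_GRAYSCALE)
    # Histogram equalization improves marker contrast under water.
    gray = cv2.equalizeHist(gray)
    # Simple global threshold, standing in for the node's thresholding step.
    _, gray = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    pub.publish(bridge.cv2_to_imgmsg(gray, encoding="mono8"))

if __name__ == "__main__":
    rospy.init_node("image_filter_sketch")
    pub = rospy.Publisher("/camera/image_raw_filtered", Image, queue_size=1)
    rospy.Subscriber("/camera/image/compressed", CompressedImage, on_compressed)
    rospy.spin()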


4.2 ArUco Markers

ArUco markers work like QR codes, and every marker has a unique ID. By specifying the marker size, it is possible to calculate the ROV's position and orientation relative to the marker, based on the size and perspective of the marker in the image. Figure 9 shows an example of an ArUco marker.

Figure 9: Example of an ArUco marker.

The calculation of position and attitude is done in the ROS package aruco_mapping [6]. The first marker to be seen decides the origin and orientation of the MCS. In this project the ROV is always initiated with the same marker, which has a known relation to the PCS and GCS. If the initialization fails and another marker decides the MCS, the PCS and GCS will be placed accordingly and the initialization needs to be redone. Hence, it is critical that at least one marker is always visible to the camera, or else the vision system cannot function properly. A minimal sketch of the marker detection step is shown below.
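For illustration, marker detection itself can be sketched with OpenCV's aruco module (part of opencv-contrib; the exact API varies between OpenCV versions, and aruco_mapping adds the mapping between markers on top of this). The file name and dictionary below are placeholders.

import cv2

# Load one camera frame (placeholder file name) and detect ArUco markers in it.
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
assert frame is not None, "could not read the example frame"
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_250)
corners, ids, rejected = cv2.aruco.detectMarkers(frame, dictionary)
print("visible marker IDs:", None if ids is None else ids.ravel())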

4.3 Camera Calibration

To obtain high performance from the vision system, the camera needs to be calibrated. This is done in order to determine optical characteristics such as focal length, optical centre and distortion parameters.

For the camera calibration, the ROS package camera_calibration [7] was used. This package provides a method to determine the camera parameters by moving a checkerboard with known dimensions in front of the camera. The package captures a series of appropriate frames automatically and determines the camera parameters.

The calibration sequence is done with the ROV in the pool, so that the optical phenomena from the water are included in the camera calibration.

The aruco_mapping package expects camera parameters in the Videre INI format [8] shown in Figure 10. The meaning of the parameters is beyond the scope of this project; interested readers can read further in [9].


[image]

width
2448

height
2050

[camera name]

camera matrix
4827.93789 0.00000 1223.50000
0.00000 4835.62362 1024.50000
0.00000 0.00000 1.00000

distortion
-0.41527 0.31874 -0.00197 0.00071 0.00000

rectification
1.00000 0.00000 0.00000
0.00000 1.00000 0.00000
0.00000 0.00000 1.00000

projection
4827.93789 0.00000 1223.50000 0.00000
0.00000 4835.62362 1024.50000 0.00000
0.00000 0.00000 1.00000 0.00000

Figure 10: Example of the Videre INI format [8] used to specify camera parameters for aruco_mapping.

4.4 Vision System Output

The position and attitude estimates from the vision system are delivered via the ArucoMarker message on the /aruco_poses topic. The message definition of the ArucoMarker message is shown in Figure 11. The word global in Figure 11 stems from aruco_mapping naming conventions and is not related to the GCS. An example of this message's content in human-readable format is shown in Figure 12.


std_msgs/Header header
bool marker_visibile
int32 num_of_visible_markers
geometry_msgs/Pose global_camera_pose
int32[] marker_ids
geometry_msgs/Pose[] global_marker_poses

Figure 11: Message definition for ArucoMarker.msg. The important fields for the vision system are global_camera_pose, marker_ids and global_marker_poses. Note that the word global here stems from aruco_mapping naming conventions and is not related to the GCS.

---
header:
  seq: 2014
  stamp:
    secs: 1474899459
    nsecs: 750004907
  frame_id: world
marker_visibile: True
num_of_visible_markers: 3
global_camera_pose:
  position:
    x: 0.300396672687
    y: 0.152849431408
    z: 0.756938908218
  orientation:
    x: 0.729323299519
    y: -0.672495146656
    z: 0.100340602432
    w: 0.0759576593009
marker_ids: [1, 2, 9]
global_marker_poses:
- position:
    x: 0.0
    y: 0.0
    z: 0.0
  orientation:
    x: 0.0
    y: 0.0
    z: 0.0
    w: 1.0
- position:
    x: -0.0773362522034
    y: 0.274715276476
    z: 0.0
  orientation:
    x: 0.0
    y: 0.0
    z: -0.00661406409158
    w: 0.999978126839
- position:
    x: 0.356517452663
    y: 0.070867476618
    z: 0.0
  orientation:
    x: 0.0
    y: 0.0
    z: 0.0128297045158
    w: 0.999917695954
---

Figure 12: Dump of a single aruco_mapping/ArucoMarker message sent on the /aruco_poses topic while viewing three markers with IDs 1, 2 and 9. Note that the word global here stems from aruco_mapping naming conventions and is not related to the GCS.


5 Modelling the ROV

This chapter describes how the ROV is modelled using the framework of Fossen [10] and the master's thesis [2]. The notation developed by Fossen in [10] is used.

A dynamic model of the ROV describes how it is affected by forces and torques. Models can be derived using, for example, different physical laws. A model can often be made simpler by making assumptions about how the modelled object is affected by its physical properties.

An underwater vehicle with 6 DoF can be described by

\dot{\eta} = J_{\Theta}(\eta)\,\nu,    (8)
\tau = M\dot{\nu} + C(\nu)\nu + D(\nu)\nu + g(\eta),    (9)
\eta \triangleq [p^n_{b/n,x}\;\; p^n_{b/n,y}\;\; p^n_{b/n,z}\;\; \phi\;\; \theta\;\; \psi]^T,    (10)
\nu \triangleq [u\;\; v\;\; w\;\; p\;\; q\;\; r]^T,    (11)

where p^n_{b/n,i} denote the coordinates of the b coordinate system origin (the LCS origin) relative to the n coordinate system origin, expressed in the basis of n [10]. See Section 5.1 for further details. The angles φ, θ and ψ are Euler angles in the zyx-convention as defined in Section 5.1. The matrix J_Θ(η) is a transformation matrix from linear and angular velocities in the LCS to linear velocities in the GCS and Euler angle time derivatives. The vector ν contains generalized velocities (velocities and angular velocities along and about, respectively, the LCS axes) that are used to describe motion in 6 DoF. The matrices M, C(ν), D(ν) and the vector g(η) describe how inertia, Coriolis forces, damping forces, and gravity and buoyancy, respectively, affect the ROV. The vector τ contains the generalised forces caused by the ROV's actuators and by disturbances that are not modelled.

The naming convention for the following subdivisions will be

\eta_1 \triangleq p^n_{b/n} = [p^n_{b/n,x}\;\; p^n_{b/n,y}\;\; p^n_{b/n,z}]^T,    (12)
\eta_2 \triangleq \Theta = [\phi\;\; \theta\;\; \psi]^T,    (13)
\nu_1 \triangleq v^b_{b/n} = [u\;\; v\;\; w]^T,    (14)
\nu_2 \triangleq \omega^b_{b/n} = [p\;\; q\;\; r]^T.    (15)

5.1 Euler Angles

The orientation of the ROV can be represented by Euler angles, i.e. three rotations performed in a predefined order. The order used here follows the zyx-convention. Starting from an initial configuration where the LCS axes (x,y,z) are aligned with the GCS axes (N,E,D), the first rotation (yaw) is a rotation of the LCS about its z-axis by an angle ψ. The second rotation (pitch) is a rotation of the LCS about its y-axis by an angle θ. The final rotation (roll) is a rotation of the LCS about its x-axis by an angle φ. These three rotations (in the specified order) describe the attitude of the LCS relative to the GCS. For an illustration of these rotations, see Figure 13.



Figure 13: Let (x′′′,y′′′,z′′′) (solid black) be the coordinate system resulting from trans-lating the GCS (N,E,D) origin to coincide with the LCS origin. The transformation from(x′′′,y′′′,z′′′) to LCS consists of three rotations. First, a rotation of (x′′′,y′′′,z′′′) about z′′′

by an angle ψ to yield (x′′,y′′,z′′) (dashed red). This first rotation is called yaw. Then,a second rotation of (x′′,y′′,z′′) about y′′ by an angle θ to yield (x′,y′,z′) (dash-dottedgreen). This rotation is called pitch. Finally, a third rotation of (x′,y′,z′) about x′ by anangle φ to yield (x,y,z) (dotted blue), which is the LCS. The last rotation is called roll.

Let n denote the GCS and b denote the LCS. R^n_b denotes the transformation from b to n, and v^n_{b/n} denotes velocities measured in b relative to n, expressed in the basis of n. The transformation of linear velocities from the body-fixed to the global coordinate system is

v^n_{b/n} = R^n_b(\Theta)\, v^b_{b/n}.    (16)

Let C_\alpha, S_\alpha and T_\alpha denote \cos(\alpha), \sin(\alpha) and \tan(\alpha), respectively. The rotation matrix is then

R^n_b(\Theta) = \begin{bmatrix} C_\psi C_\theta & -S_\psi C_\phi + C_\psi S_\theta S_\phi & S_\psi S_\phi + C_\psi C_\phi S_\theta \\ S_\psi C_\theta & C_\psi C_\phi + S_\phi S_\theta S_\psi & -C_\psi S_\phi + S_\theta S_\psi C_\phi \\ -S_\theta & C_\theta S_\phi & C_\theta C_\phi \end{bmatrix}.    (17)

Since R^n_b(\Theta) is an orthogonal matrix (i.e. R^n_b(\Theta) R^n_b(\Theta)^T = R^n_b(\Theta)^T R^n_b(\Theta) = I), the inverse transformation matrix is simply the transpose:

R^b_n(\Theta) = R^n_b(\Theta)^{-1} = R^n_b(\Theta)^T.    (18)

Similarly, the Euler angle time derivatives can be expressed as

\dot{\Theta} = T_{\Theta}(\Theta)\, \omega^b_{b/n},    (19)

where

T_{\Theta}(\Theta) = \begin{bmatrix} 1 & T_\theta S_\phi & T_\theta C_\phi \\ 0 & C_\phi & -S_\phi \\ 0 & S_\phi / C_\theta & C_\phi / C_\theta \end{bmatrix}.    (20)


The index Θ denotes that the transformation is for Euler angles. This transformation matrix is not orthogonal; the inverse is instead given by

T_{\Theta}(\Theta)^{-1} = \begin{bmatrix} 1 & 0 & -S_\theta \\ 0 & C_\phi & C_\theta S_\phi \\ 0 & -S_\phi & C_\phi C_\theta \end{bmatrix}.    (21)

5.2 Dynamics

The study of dynamics can be divided into two parts: kinematics, which only treats motion due to geometrical aspects, and kinetics, which is the analysis of the forces that cause the motion.

5.2.1 Kinematics

The kinematic state equations in vector form can be expressed as

\dot{\eta} = \begin{bmatrix} \dot{p}^n_{b/n} \\ \dot{\Theta} \end{bmatrix} = J_{\Theta}(\eta)\nu = \begin{bmatrix} R^n_b(\Theta) & 0_{3\times3} \\ 0_{3\times3} & T_{\Theta}(\Theta) \end{bmatrix} \begin{bmatrix} v^b_{b/n} \\ \omega^b_{b/n} \end{bmatrix}.    (22)

Euler angles are easy to understand and good for decomposing rotations into individual degrees of freedom, but they have a disadvantage when it comes to ambiguity. Using the same sequence of rotations every time to represent a given orientation is very important, because altering the order may change the resulting attitude. Code that implements the convention "yaw, pitch, roll" will not co-operate with code that assumes "roll, pitch, yaw", and it may not be obvious, when looking at the code, which interpretation is being used.

Another problem when using Euler angles is that they may end up in so-called gimbal lock in some situations. This means that, for the convention "yaw, pitch, roll", the transformation matrix is singular for the two pitch angles θ = ±π/2. In this situation, which might occur for the ROV, the two remaining rotational axes align.

One way to avoid gimbal lock problems is to use a different convention for the rotation order near the singularities. Another approach is to use unit quaternions, since they are always well defined.

The theory of quaternions is summarized in Section 3.1. The notation in this chapter is, however, slightly different from Section 3.1 and the sensor fusion, in order to use the same notation as in [10] and [2]. The unit quaternions are defined as in [10], with one real part η and three imaginary parts ε = [ε_1 ε_2 ε_3]^T. In order to use quaternions, the vector η is instead defined as

\eta = \begin{bmatrix} p^n_{b/n} \\ q \end{bmatrix},    (23)
q = [\eta\;\; \varepsilon_1\;\; \varepsilon_2\;\; \varepsilon_3]^T.    (24)

To represent angles, quaternions need to be normalized, i.e. satisfy q^T q = 1.


In discrete time, the quaternion is normalized using the equation

\bar{q}(k+1) = \frac{q(k+1)}{\sqrt{q^T(k+1)\, q(k+1)}}.    (25)

With quaternions, the transformation from LCS to GCS coordinates is given by

R^n_b(q) = \begin{bmatrix} 1 - 2(\varepsilon_2^2 + \varepsilon_3^2) & 2(\varepsilon_1\varepsilon_2 - \varepsilon_3\eta) & 2(\varepsilon_1\varepsilon_3 + \varepsilon_2\eta) \\ 2(\varepsilon_1\varepsilon_2 + \varepsilon_3\eta) & 1 - 2(\varepsilon_1^2 + \varepsilon_3^2) & 2(\varepsilon_2\varepsilon_3 - \varepsilon_1\eta) \\ 2(\varepsilon_1\varepsilon_3 - \varepsilon_2\eta) & 2(\varepsilon_2\varepsilon_3 + \varepsilon_1\eta) & 1 - 2(\varepsilon_1^2 + \varepsilon_2^2) \end{bmatrix}.    (26)

The transformation matrix for angular velocities is

T_q(q) = \frac{1}{2} \begin{bmatrix} -\varepsilon_1 & -\varepsilon_2 & -\varepsilon_3 \\ \eta & -\varepsilon_3 & \varepsilon_2 \\ \varepsilon_3 & \eta & -\varepsilon_1 \\ -\varepsilon_2 & \varepsilon_1 & \eta \end{bmatrix},    (27)

where the index q denotes that the transformation matrix is for a quaternion representation of the attitude. Equation (22) can now instead be expressed as

\dot{\eta} = \begin{bmatrix} \dot{p}^n_{b/n} \\ \dot{q} \end{bmatrix} = J_q(\eta)\nu = \begin{bmatrix} R^n_b(q) & 0_{3\times3} \\ 0_{4\times3} & T_q(q) \end{bmatrix} \begin{bmatrix} v^b_{b/n} \\ \omega^b_{b/n} \end{bmatrix}.    (28)

5.2.2 Rigid-Body Kinetics

By using the Newton-Euler formulation, the rigid-body kinetic relations of the ROV can be expressed as

M_{RB}\dot{\nu} + C_{RB}(\nu)\nu = \tau_{RB},    (29)

where M_{RB} is the rigid-body inertia matrix, C_{RB}(\nu)\nu is a vector describing the centripetal and Coriolis effects, and \tau_{RB} contains the forces and moments acting on the rigid body [10]. The rigid-body inertia matrix

M_{RB} = \begin{bmatrix} m I_{3\times3} & -m S(r^b_g) \\ m S(r^b_g) & I_b \end{bmatrix}    (30)

describes the ROV's resistance to change in linear and angular velocity. The vector C_{RB}(\nu)\nu is defined as

C_{RB}(\nu)\nu = \begin{bmatrix} m S(\nu_2) & -m S(\nu_2) S(r^b_g) \\ m S(r^b_g) S(\nu_2) & -S(I_b \nu_2) \end{bmatrix} \nu = \begin{bmatrix} m(qw - rv) \\ m(ru - pw) \\ m(pv - qu) \\ qr(I_z - I_y) \\ rp(I_x - I_z) \\ qp(I_y - I_x) \end{bmatrix},    (31)

where it is assumed that the ROV is symmetric about the xy-, yz- and zx-planes to eliminate cross-terms.


5.3 Hydrodynamics

When the ROV interacts with water it experiences certain forces and effects. These so-called hydrodynamic effects can be modelled as

\tau_{Dyn} = -M_A\dot{\nu} - C_A(\nu)\nu - D(\nu)\nu,    (32)

where M_A models the added mass and moment of inertia, C_A(\nu)\nu models the Coriolis forces from the added mass and the added moment of inertia, and D(\nu)\nu models linear and quadratic damping effects [10]. The added mass and moment of inertia are defined as

M_A = -\begin{bmatrix} X_{\dot{u}} & 0 & 0 & 0 & 0 & 0 \\ 0 & Y_{\dot{v}} & 0 & 0 & 0 & 0 \\ 0 & 0 & Z_{\dot{w}} & 0 & 0 & 0 \\ 0 & 0 & 0 & K_{\dot{p}} & 0 & 0 \\ 0 & 0 & 0 & 0 & M_{\dot{q}} & 0 \\ 0 & 0 & 0 & 0 & 0 & N_{\dot{r}} \end{bmatrix},    (33)

under the assumption that the ROV moves at low speeds relative to the water. The Coriolis forces from the added mass and the added moment of inertia are described by

C_A(\nu)\nu = \begin{bmatrix} 0 & 0 & 0 & 0 & -Z_{\dot{w}}w & Y_{\dot{v}}v \\ 0 & 0 & 0 & Z_{\dot{w}}w & 0 & -X_{\dot{u}}u \\ 0 & 0 & 0 & -Y_{\dot{v}}v & X_{\dot{u}}u & 0 \\ 0 & -Z_{\dot{w}}w & Y_{\dot{v}}v & 0 & -N_{\dot{r}}r & M_{\dot{q}}q \\ Z_{\dot{w}}w & 0 & -X_{\dot{u}}u & N_{\dot{r}}r & 0 & -K_{\dot{p}}p \\ -Y_{\dot{v}}v & X_{\dot{u}}u & 0 & -M_{\dot{q}}q & K_{\dot{p}}p & 0 \end{bmatrix} \nu = \begin{bmatrix} Y_{\dot{v}}vr - Z_{\dot{w}}wq \\ Z_{\dot{w}}wp - X_{\dot{u}}ur \\ X_{\dot{u}}uq - Y_{\dot{v}}vp \\ (Y_{\dot{v}} - Z_{\dot{w}})vw + (M_{\dot{q}} - N_{\dot{r}})qr \\ (Z_{\dot{w}} - X_{\dot{u}})uw + (N_{\dot{r}} - K_{\dot{p}})pr \\ (X_{\dot{u}} - Y_{\dot{v}})uv + (K_{\dot{p}} - M_{\dot{q}})pq \end{bmatrix},    (34)

under the assumption that the ROV is moving slowly and has three planes of symmetry. The viscous damping matrix D(\nu) contains the linear and quadratic damping effects. Multiplying D(\nu) by \nu gives the following vector:

D(\nu)\nu = -\begin{bmatrix} (X_u + X_{u|u|}|u|)u \\ (Y_v + Y_{v|v|}|v|)v \\ (Z_w + Z_{w|w|}|w|)w \\ (K_p + K_{p|p|}|p|)p \\ (M_q + M_{q|q|}|q|)q \\ (N_r + N_{r|r|}|r|)r \end{bmatrix}.    (35)

5.4 Hydrostatics

The forces and moments caused by the Earth's gravitational pull and the buoyancy force are called restoring forces. These forces and moments can be modelled on the form

\tau_{Stat} = -g(\eta),    (36)


where \tau_{Stat} contains generalised forces. The restoring forces and moments that the ROV will experience are calculated using the mass m, the buoyancy B, and the coordinates of the ROV's CG and centre of buoyancy (CB). Their effects can be calculated by

g(\eta) = -\begin{bmatrix} f_g + f_b \\ r_b \times f_b + r_g \times f_g \end{bmatrix} = \begin{bmatrix} (W - B)\sin\theta \\ -(W - B)\cos\theta\sin\phi \\ -(W - B)\cos\theta\cos\phi \\ -z_B B\cos\theta\sin\phi \\ -z_B B\sin\theta \\ 0 \end{bmatrix},    (37)

where r_b = [0\;\; 0\;\; z_B]^T and r_g = [0\;\; 0\;\; 0]^T are lever arms, and f_g = R^n_b(\Theta)^T [0\;\; 0\;\; W]^T and f_b = -R^n_b(\Theta)^T [0\;\; 0\;\; B]^T are the restoring forces.

5.5 Actuators

To model the forces and torques that the six thrusters generate along and about, respectively, each of the LCS axes, a thrust geometry matrix T is used [10]. To relate control signals to the actual forces generated by the thrusters, a look-up table f(u) is used. The look-up table stems from thrust measurement data. This gives

\tau_{Act} \triangleq T f(u) = \begin{bmatrix} F_x \\ F_y \\ F_z \\ T_x \\ T_y \\ T_z \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & -1 & 0 & 0 & -1 & 0 \\ l_{y1} & -l_{y2} & 0 & 0 & 0 & l_{z6} \\ l_{x1} & l_{x2} & 0 & 0 & -l_{x5} & 0 \\ 0 & 0 & l_{y3} & -l_{y4} & 0 & 0 \end{bmatrix} \begin{bmatrix} f(u_1) \\ f(u_2) \\ f(u_3) \\ f(u_4) \\ f(u_5) \\ f(u_6) \end{bmatrix},    (38)

where l_{ai} denotes the a-axis offset between the LCS origin and thruster i, F_a denotes the force in the a direction and T_a denotes the torque about the a axis.
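A sketch of (38) in code, with the lever arms taken from Table 1; the piecewise-linear thrust curve f(u) below is an illustrative placeholder, not the measured data.

import numpy as np

ly1 = ly2 = ly3 = ly4 = 0.11            # lever arms from Table 1 [m]
lx1 = lx2 = 0.19
lx5, lz6 = 0.17, 0.0

# Thrust geometry matrix T of (38).
T = np.array([
    [  0,    0,    1,    1,    0,    0 ],
    [  0,    0,    0,    0,    0,   -1 ],
    [ -1,   -1,    0,    0,   -1,    0 ],
    [ ly1, -ly2,   0,    0,    0,   lz6],
    [ lx1,  lx2,   0,    0,  -lx5,   0 ],
    [  0,    0,   ly3, -ly4,   0,    0 ],
])

# Placeholder look-up table f(u): PWM command -> thrust [N].
pwm_points    = np.array([1100, 1500, 1900])
thrust_points = np.array([-25.0, 0.0, 25.0])

def f(u):
    return np.interp(u, pwm_points, thrust_points)

def tau_act(u):
    # Generalized actuator forces tau_Act = T f(u) for six thruster commands.
    return T @ f(np.asarray(u))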

5.6 Model in Component Form

Using the relation

\tau_{RB} = \tau_{Dyn} + \tau_{Stat} + \tau_{Act} + \tau_{Dist},    (39)

equations (29), (32) and (36) can be combined to form

M_{RB}\dot{\nu} + C_{RB}(\nu)\nu + M_A\dot{\nu} + C_A(\nu)\nu + D(\nu)\nu + g(\eta) = \tau_{Act} + \tau_{Dist}.    (40)

By assuming that the disturbance forces \tau_{Dist}, i.e. the generalized forces not caused by the ROV's actuators, are small, (40) can be rearranged to

M\dot{\nu} + C(\nu)\nu + D(\nu)\nu + g(\eta) = \tau_{Act} + \tau_{Dist} \approx \tau_{Act},    (41)

where M = M_{RB} + M_A and C(\nu) = C_{RB}(\nu) + C_A(\nu). The expression can then be solved for \dot{\nu} and split into the following components:


\dot{u} = \frac{f(u_3) + f(u_4)}{m - X_{\dot{u}}} + \frac{(X_u + X_{u|u|}|u|)u}{m - X_{\dot{u}}} + \frac{(B - W)\sin\theta}{m - X_{\dot{u}}} + \frac{m(rv - qw)}{m - X_{\dot{u}}} + \frac{-Y_{\dot{v}}rv}{m - X_{\dot{u}}} + \frac{Z_{\dot{w}}qw}{m - X_{\dot{u}}},    (42a)

\dot{v} = \frac{-f(u_6)}{m - Y_{\dot{v}}} + \frac{(Y_v + Y_{v|v|}|v|)v}{m - Y_{\dot{v}}} + \frac{-(B - W)\cos\theta\sin\phi}{m - Y_{\dot{v}}} + \frac{m(pw - ru)}{m - Y_{\dot{v}}} + \frac{X_{\dot{u}}ru}{m - Y_{\dot{v}}} + \frac{-Z_{\dot{w}}pw}{m - Y_{\dot{v}}},    (42b)

\dot{w} = \frac{-f(u_1) - f(u_2) - f(u_5)}{m - Z_{\dot{w}}} + \frac{(Z_w + Z_{w|w|}|w|)w}{m - Z_{\dot{w}}} + \frac{-(B - W)\cos\phi\cos\theta}{m - Z_{\dot{w}}} + \frac{m(qu - pv)}{m - Z_{\dot{w}}} + \frac{-X_{\dot{u}}qu}{m - Z_{\dot{w}}} + \frac{Y_{\dot{v}}pv}{m - Z_{\dot{w}}},    (42c)

\dot{p} = \frac{f(u_1)l_{y1} - f(u_2)l_{y2} + f(u_6)l_{z6}}{I_x - K_{\dot{p}}} + \frac{(K_p + K_{p|p|}|p|)p}{I_x - K_{\dot{p}}} + \frac{-M_{\dot{q}}qr}{I_x - K_{\dot{p}}} + \frac{N_{\dot{r}}qr}{I_x - K_{\dot{p}}} + \frac{qr(I_y - I_z)}{I_x - K_{\dot{p}}} + \frac{-Y_{\dot{v}}vw}{I_x - K_{\dot{p}}} + \frac{Z_{\dot{w}}vw}{I_x - K_{\dot{p}}} + \frac{Bz_B\cos\theta\sin\phi}{I_x - K_{\dot{p}}},    (42d)

\dot{q} = \frac{f(u_1)l_{x1} + f(u_2)l_{x2} - f(u_5)l_{x5}}{I_y - M_{\dot{q}}} + \frac{(M_q + M_{q|q|}|q|)q}{I_y - M_{\dot{q}}} + \frac{K_{\dot{p}}pr}{I_y - M_{\dot{q}}} + \frac{-N_{\dot{r}}pr}{I_y - M_{\dot{q}}} + \frac{pr(I_z - I_x)}{I_y - M_{\dot{q}}} + \frac{-Z_{\dot{w}}uw}{I_y - M_{\dot{q}}} + \frac{X_{\dot{u}}uw}{I_y - M_{\dot{q}}} + \frac{Bz_B\sin\theta}{I_y - M_{\dot{q}}},    (42e)

\dot{r} = \frac{f(u_3)l_{y3} - f(u_4)l_{y4}}{I_z - N_{\dot{r}}} + \frac{(N_r + N_{r|r|}|r|)r}{I_z - N_{\dot{r}}} + \frac{-K_{\dot{p}}pq}{I_z - N_{\dot{r}}} + \frac{M_{\dot{q}}pq}{I_z - N_{\dot{r}}} + \frac{pq(I_x - I_y)}{I_z - N_{\dot{r}}} + \frac{-X_{\dot{u}}uv}{I_z - N_{\dot{r}}} + \frac{Y_{\dot{v}}uv}{I_z - N_{\dot{r}}}.    (42f)

The remaining state derivatives are \dot{x} = u, \dot{y} = v, \dot{z} = w, \dot{\phi} = p, \dot{\theta} = q and \dot{\psi} = r.

5.7 Simplified Model Structures

For identifiability reasons it is necessary to reparametrize the model (42) by combining some of the original parameters before conducting estimation. As in [2], the reparametrization A_p = I_x - K_{\dot{p}}, B_q = I_y - M_{\dot{q}} and C_r = I_z - N_{\dot{r}} is used to reduce the number of unknown parameters. Furthermore, the remaining denominators are named D_u = m - X_{\dot{u}}, E_v = m - Y_{\dot{v}} and F_w = m - Z_{\dot{w}}. This gives the following model:


\dot{u} = \frac{f(u_3) + f(u_4)}{D_u} + \frac{(X_u + X_{u|u|}|u|)u}{D_u} + \frac{(B - W)\sin\theta}{D_u} + \frac{E_v rv}{D_u} - \frac{F_w qw}{D_u},    (43a)

\dot{v} = \frac{-f(u_6)}{E_v} + \frac{(Y_v + Y_{v|v|}|v|)v}{E_v} + \frac{-(B - W)\cos\theta\sin\phi}{E_v} + \frac{-D_u ru}{E_v} + \frac{F_w pw}{E_v},    (43b)

\dot{w} = \frac{-f(u_1) - f(u_2) - f(u_5)}{F_w} + \frac{(Z_w + Z_{w|w|}|w|)w}{F_w} + \frac{-(B - W)\cos\phi\cos\theta}{F_w} + \frac{D_u qu}{F_w} + \frac{-E_v pv}{F_w},    (43c)

\dot{p} = \frac{f(u_1)l_{y1} - f(u_2)l_{y2} + f(u_6)l_{z6}}{A_p} + \frac{(K_p + K_{p|p|}|p|)p}{A_p} + \frac{(B_q - C_r)qr}{A_p} + \frac{(E_v - F_w)vw}{A_p} + \frac{Bz_B\cos\theta\sin\phi}{A_p},    (43d)

\dot{q} = \frac{f(u_1)l_{x1} + f(u_2)l_{x2} - f(u_5)l_{x5}}{B_q} + \frac{(M_q + M_{q|q|}|q|)q}{B_q} + \frac{(-A_p + C_r)pr}{B_q} + \frac{(-D_u + F_w)uw}{B_q} + \frac{Bz_B\sin\theta}{B_q},    (43e)

\dot{r} = \frac{f(u_3)l_{y3} - f(u_4)l_{y4}}{C_r} + \frac{(N_r + N_{r|r|}|r|)r}{C_r} + \frac{(A_p - B_q)pq}{C_r} + \frac{(D_u - E_v)uv}{C_r}.    (43f)


6 Parameter Estimation

A central part of modelling is parameter estimation. In this project it is assumed that the model structure of the system is

x_{k+1} = f(x_k, u_k, v_k \mid \xi),    (44a)
y_k = h(x_k, u_k, v_k \mid \xi),    (44b)

where most of the parameters in ξ are unknown. Estimating parameters in a known model structure is called gray-box modelling. All parameters required in the model are presented in Table 1.


Table 1: The parameters used in the ROV model.

Parameter   Value/Source            Description
m           Measured [kg]           The ROV's mass.
g           9.82 [m/s^2]            Gravitational constant.
rho         1000 [kg/m^3]           Density of water.
l_{x1}      0.19 [m]                Distance from thruster 1 to CG in x-direction.
l_{y1}      0.11 [m]                Distance from thruster 1 to CG in y-direction.
l_{x2}      0.19 [m]                Distance from thruster 2 to CG in x-direction.
l_{y2}      0.11 [m]                Distance from thruster 2 to CG in y-direction.
l_{y3}      0.11 [m]                Distance from thruster 3 to CG in y-direction.
l_{y4}      0.11 [m]                Distance from thruster 4 to CG in y-direction.
l_{x5}      0.17 [m]                Distance from thruster 5 to CG in x-direction.
l_{z6}      0 [m]                   Distance from thruster 6 to CG in z-direction.
z_B         -0.0420 [m]             Distance from CG to CB.
V           Measured [m^3]          Displaced volume.
B           Computed [N]            The ROV's buoyancy (B = rho g V).
W           Computed [N]            Force of gravity (W = mg).
X_u         Estimated [kg/s]        Linear damping coefficient due to translation in the x-direction.
X_{u|u|}    Estimated [kg/m]        Quadratic damping coefficient due to translation in the x-direction.
Y_v         Estimated [kg/s]        Linear damping coefficient due to translation in the y-direction.
Y_{v|v|}    Estimated [kg/m]        Quadratic damping coefficient due to translation in the y-direction.
Z_w         Estimated [kg/s]        Linear damping coefficient due to translation in the z-direction.
Z_{w|w|}    Estimated [kg/m]        Quadratic damping coefficient due to translation in the z-direction.
K_p         Estimated [kg m^2/s]    Linear damping coefficient due to rotation in water about the x-axis.
K_{p|p|}    Estimated [kg m^2]      Quadratic damping coefficient due to rotation in water about the x-axis.
M_q         Estimated [kg m^2/s]    Linear damping coefficient due to rotation in water about the y-axis.
M_{q|q|}    Estimated [kg m^2]      Quadratic damping coefficient due to rotation in water about the y-axis.
N_r         Estimated [kg m^2/s]    Linear damping coefficient due to rotation in water about the z-axis.
N_{r|r|}    Estimated [kg m^2]      Quadratic damping coefficient due to rotation in water about the z-axis.
A_p         Estimated [kg m^2]      Inertia and increased inertia around the x-axis.
B_q         Estimated [kg m^2]      Inertia and increased inertia around the y-axis.
C_r         Estimated [kg m^2]      Inertia and increased inertia around the z-axis.
D_u         Estimated [kg]          Mass and mass added due to translation in the x-direction.
E_v         Estimated [kg]          Mass and mass added due to translation in the y-direction.
F_w         Estimated [kg]          Mass and mass added due to translation in the z-direction.


6.1 Simplifying the Model Equations

To simplify the parameter estimation it is convenient to divide the model into subcomponents and estimate the parameters of each part individually. From (43a)-(43c) it can be seen that u, v and w are not affected by the same thrusters. Hence, it is convenient to excite the ROV mainly in one of these directions at a time while keeping the attitude steady. When exciting the ROV in u, the effects of v and w can be assumed to be small, and similarly, when exciting the ROV in v or w, the effects of the two remaining velocities can be assumed to be small.

From (43d)-(43f) it can also be noted that r is not affected by the same thrusters as p and q. If the ROV's attitude is excited in r only, while keeping the linear velocities low, the effects of p and q can be assumed to be small. Similarly, the effects of r can be assumed small when the ROV is only excited in p and q.

Under all these assumptions (43) can be further reduced to

\begin{align}
\dot{u} &= \frac{f(u_3) + f(u_4)}{D_u} + \frac{(X_u + X_{u|u|}|u|)\,u}{D_u} + \frac{(B - W)\sin\theta}{D_u}, \tag{45a}\\
\dot{v} &= \frac{-f(u_6)}{E_v} + \frac{(Y_v + Y_{v|v|}|v|)\,v}{E_v} + \frac{-(B - W)\cos\theta\sin\phi}{E_v}, \tag{45b}\\
\dot{w} &= \frac{-f(u_1) - f(u_2) - f(u_5)}{F_w} + \frac{(Z_w + Z_{w|w|}|w|)\,w}{F_w} + \frac{-(B - W)\cos\phi\cos\theta}{F_w}, \tag{45c}\\
\dot{p} &= \frac{f(u_1)l_{y1} - f(u_2)l_{y2} + f(u_6)l_{z6}}{A_p} + \frac{(K_p + K_{p|p|}|p|)\,p}{A_p} + \frac{B z_B\cos\theta\sin\phi}{A_p}, \tag{45d}\\
\dot{q} &= \frac{f(u_1)l_{x1} + f(u_2)l_{x2} - f(u_5)l_{x5}}{B_q} + \frac{(M_q + M_{q|q|}|q|)\,q}{B_q} + \frac{B z_B\sin\theta}{B_q}, \tag{45e}\\
\dot{r} &= \frac{f(u_3)l_{y3} - f(u_4)l_{y4}}{C_r} + \frac{(N_r + N_{r|r|}|r|)\,r}{C_r}. \tag{45f}
\end{align}

The simplified model structure in (45) is useful when the unknown parameters are estimated. By only exciting the ROV's motion in one way at a time, just a few parameters need to be estimated simultaneously. The estimated parameter values can then be plugged in to the more complex model (43).

6.2 Gathering Estimation Data

Telegraph-signal tests, exciting the system mainly in one DoF at a time, were performed. The process is documented in Appendix A. Estimation data for the parameter estimation was gathered and stored using ROS.

6.3 Parameter Estimation Using an Extended Kalman Filter

Inspired by the master's thesis [2], an extended Kalman filter was used for parameter estimation. An EKF can be used to estimate parameters by extending the state vector of the EKF with the parameters that need to be estimated, which gives an augmented state vector

\begin{align}
\bar{x}_k = \begin{bmatrix} x_k \\ \xi_k \end{bmatrix}. \tag{46}
\end{align}

The motion model for the added parameters is chosen as a constant-position (random walk) model, which gives the new state-space form

\begin{align}
\bar{x}_{k+1} = \begin{bmatrix} f(x_k(\xi_k), u_k, \xi_k) \\ \xi_k \end{bmatrix} + v_k, \qquad
y_k = h(x_k(\xi_k), u_k) + e_k. \tag{47}
\end{align}

The same EKF as is used by the sensor fusion module is implemented in Matlab, with the ROV model as motion model in the time update.
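To make the augmentation idea concrete, a minimal Matlab sketch of the time update with an augmented state is given below. It assumes a placeholder function f_rov(x, u, xi, Ts) for the discretized ROV model; all names are illustrative, not the project's actual code.

    function [x_aug, P] = timeUpdateAugmented(x_aug, P, u, Q, Ts, nx)
    % Time update of an EKF whose state is augmented with the unknown
    % parameters xi, cf. (46)-(47). f_rov is a stand-in for the
    % discretized ROV motion model (43).
    x  = x_aug(1:nx);                 % original model states
    xi = x_aug(nx+1:end);             % appended parameters (random walk)
    g  = @(z) [f_rov(z(1:nx), u, z(nx+1:end), Ts); z(nx+1:end)];
    n  = numel(x_aug);                % finite-difference Jacobian of the
    F  = zeros(n);  h = 1e-6;         % augmented motion model
    for i = 1:n
        e = zeros(n, 1);  e(i) = h;
        F(:, i) = (g(x_aug + e) - g(x_aug - e)) / (2*h);
    end
    x_aug = [f_rov(x, u, xi, Ts); xi];  % propagate; parameters stay constant
    P = F*P*F' + Q;                     % covariance time update
    end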

6.4 Parameter Estimation Result

In practice it proved difficult to use the model subcomponents given by (45) individually. The main reason is that a parameter like Du, which initially only appears in (45a), also shows up in multiple cross terms of (43b)-(43f) when plugged into the more complex model (43), and it was often assigned very poor values. When exciting the ROV in surge (x-motion in the LCS) and using the collected data to estimate the parameters in (43a), Du would usually become very large. Similarly, for (45b) and (45c), Ev and Fw were assigned large values. In (43) this meant that the cross terms dominated the regular terms, which is of course not realistic. One possible explanation is that the telegraph signals in thruster forces used for collecting data were too high in frequency: the true physical system may not have had time to respond, and the best remaining option for the EKF was to cancel out all impact from the thrusters.

When these parameters were instead assigned values by hand, the simulator, using the model (43), could be run with reasonable results. However, the EKF never converged when too many parameters were estimated at once, and it never managed to estimate realistic values when restricted to modifying just a few. Better results could probably be obtained fairly easily, now that the EKF is up and running, by tuning the filter design parameters (P0, R and Q) and making informed initial guesses for the parameter values. The quality of the telegraph tests also limits how good the estimated values can be, but by the time the controller was tuned well enough to keep the ROV from running into the pool edges during a test run, the project was coming to an end.

6.5 Further Development

The first step in order to advance the ROV would be to succeed in estimating the model parameters. This would both enable the use of the simulator and make the control design more manageable, thanks to the feedback linearization. As previously mentioned, this could be achieved by performing new telegraph tests covering a wider time span and by tuning the filter design parameters.


7 Control System

There are two controllers implemented in this project: one controls the angular and linear velocities, and one controls the attitude and position of the ROV. The velocity controller is the inner loop of a cascade structure where the position and attitude controller is the outer loop. This means that the outer loop takes a position and Euler angle reference as input and produces a reference signal for the inner velocity controller as output. Furthermore, a decentralized structure with PI controllers is used in order to control the various inputs independently.

One of the main purposes of this project has been to introduce a positioning system in order to be able to fulfill the ultimate goal of a fully autonomous system in the future. This goal has been achieved with the positioning control, which enables the operator to specify a position to which the ROV then navigates.

7.1 Decentralized Control

When a system has input/output cross couplings, one method to deal with this is to simply disregard the cross couplings and establish pairs between input signals and output signals. This also means that this control strategy can only be used when the numbers of input and output signals are equal. Separate PI controllers are implemented for each output/input pair, which is justified by the weak or negligible cross couplings in the system. The cross couplings in a system can be studied by computing the Relative Gain Array (RGA) for the system [11].

7.1.1 PI Controller

PI stands for Proportional-Integral, which reflects the way a PI controller computes its control signals. As input, it takes the tracking error $\tilde{x}$ and computes a control signal according to

\begin{align}
u(t) = K_p\tilde{x}(t) + K_i\int_0^t \tilde{x}(\tau)\,d\tau. \tag{48}
\end{align}

The controller parameters Kp and Ki are tuned to achieve satisfactory performance.
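A minimal discrete-time sketch of (48), using forward-Euler integration of the tracking error (function and variable names are illustrative, not the project's actual code):

    function [u, I] = piStep(xTilde, I, Kp, Ki, Ts)
    % One step of a discrete PI controller, cf. (48). The integral of
    % the tracking error xTilde is accumulated with forward Euler.
    I = I + Ts*xTilde;        % integral of the tracking error
    u = Kp*xTilde + Ki*I;     % PI control law
    end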

7.2 Decoupling Thrust Allocation

By computing the inverse of the thrust geometry matrix T in (38), a decoupling is achieved. By controlling the different states in a decentralized manner, a vector $\tau_d$ of the demanded forces and torques to act along and about, respectively, the LCS axes is computed. This vector is multiplied with the inverse thrust geometry matrix to yield F, a vector containing the force demanded from each thruster. The actual control signals are found by using a look-up table from force to control signal, see Figure 14. The decoupling is approximate, meaning that a computed $\tau_d$ which should ideally only affect one state may still affect other states even though the decoupling is used.
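A sketch of this chain in Matlab; the thrust geometry matrix and the force-to-signal tables below are made-up stand-ins, not the project's actual values:

    % Decoupling thrust allocation: demanded forces/torques tau_d are mapped
    % to per-thruster forces F, which are mapped to control signals via f^-1.
    T = eye(6);                              % stand-in thrust geometry matrix
    forceTable  = linspace(-30, 30, 61);     % thruster force [N], illustrative
    signalTable = linspace(1100, 1900, 61);  % control signal, illustrative
    tau_d = [10; 0; 5; 0; 0; 1];             % demanded [X Y Z K M N]
    F     = T \ tau_d;                       % force demanded from each thruster
    u     = interp1(forceTable, signalTable, F)   % look-up table f^-1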

7.2.1 Cascade Control

Since the system contains several measurement signals, subject to different dynamics, a cascade control structure is used to calculate the control signal. The outer loop controller takes a position and an Euler angle reference as input, and produces a reference signal for the inner velocity controller as output. This control method works well since the controller in the inner circuit is fast enough. The tuning of this controller structure is done by first tuning the inner loop controller to achieve desirable performance. The outer loop controller is then tuned to get desirable outer input/output dynamics. This method is illustrated in Figure 15, which is taken from [12].

Figure 14: A vector of control errors $\tilde{\nu}$ enters a decentralized controller, in this case computing a vector $\tau_d$ of the demanded forces and torques to act along and about, respectively, the LCS axes. The decoupling is then done by multiplication with the inverse thrust geometry matrix to determine what force to demand from each of the six thrusters. The actual control signals are found by using a look-up table from force to control signal.

Figure 15: An illustration of the cascade control method. Based on a figure found in the compendium for the course Industriell Reglerteknik at Linköping University [12].

7.3 General Structure

The general structure used to control the ROV consists of a method to pass state references, a controller to compute control signals, measurement signals, and an observer to estimate the states for use in a feedback loop, see Figure 16.


Figure 16: The general structure of the feedback control system. The difference between the state references ($x_{ref}$) and the state estimates ($\hat{x}$) is computed. This difference is known as the tracking error ($\tilde{x}$). The controller takes the vector of tracking errors as input and computes a vector of control signals, u. The control signals u are then subject to actuator dynamics yielding $\tau_{Act}$, a vector of the forces and moments, resulting from the actuators, acting on the ROV. The ROV is subject both to these forces and moments from the actuators, and to external disturbances, $\tau_{dist}$. The measurement signals y are fed back through an observer (see Section 8 for details regarding the observer) which computes the state estimates used in the feedback loop for computing the tracking error and a feedback linearization in the controller.

7.4 Combined Velocity Controller

Linear and angular velocities are controlled using PI controllers. The system is approximately decoupled by the inverse thrust geometry matrix, $T^{-1}$.

The controller computes the desired forces and torques, τd, according to

\begin{align}
\tau_d = K_p\tilde{\nu} + K_i\int_0^t \tilde{\nu}\,dt. \tag{49}
\end{align}

Note that the controller parameters $K_p$ and $K_i$ are different for the six states. The control signals are then computed as

\begin{align}
u_\nu = f^{-1}(T^{-1}\tau_d). \tag{50}
\end{align}


Figure 17: The combined velocity controller. The reference $\nu_{ref}$ contains the desired velocities and angular velocities along and about the LCS axes. A feedback linearization, $G^{-1}(\cdot)$, can be enabled. The controllers compute the desired forces and torques (upper PID) or the desired linear and angular accelerations of the ROV in the LCS (lower PID). The demanded thruster forces are then determined by multiplication with the inverse thrust geometry matrix, $T^{-1}$. The control signals are then determined via a look-up table from force to control signal, $f^{-1}(\cdot)$.

7.5 Depth Velocity Controller

The depth velocity controller is implemented as a transformation channel that passes global z-axis velocity references to the combined velocity controller. If other linear velocity references are passed to the controller at the same time, it is important to make sure that these velocity references are zero along the global z-axis. Otherwise, the addition of several references may result in unexpected behaviour from the operator's point of view. The structure used to achieve this is displayed in Figure 18. The depth velocity reference is given as a GCS z-axis velocity

\begin{align}
v^n_{b/n,d,\mathrm{ref}} = \begin{bmatrix} 0 & 0 & v^n_{b/n,z,\mathrm{ref}} \end{bmatrix}^T, \tag{51}
\end{align}

and is then transformed to LCS velocities according to

\begin{align}
\nu_{d,\mathrm{ref}} = \begin{bmatrix} R^n_b(\Theta)^T v^n_{b/n,d,\mathrm{ref}} \\ 0_{3\times 1} \end{bmatrix}. \tag{52}
\end{align}

The vector is concatenated with $0_{3\times 1}$ to yield a reference vector with proper dimensions. While the depth velocity controller is engaged, it is still possible to give linear and angular LCS velocity references. These references, not related to direct depth velocity control, are denoted

\begin{align}
\nu_{\mathrm{ref}} = \begin{bmatrix} \nu_{1,\mathrm{ref}}^T & \nu_{2,\mathrm{ref}}^T \end{bmatrix}^T. \tag{53}
\end{align}

To make sure $\nu_{1,\mathrm{ref}}$ does not interfere with the depth velocity control, it is first transformed to reference velocities in the GCS:

\begin{align}
v^n_{b/n,1,\mathrm{ref}} = R^n_b(\Theta)\,\nu_{1,\mathrm{ref}}. \tag{54}
\end{align}

The linear velocity reference in the GCS z-axis direction resulting from $\nu_{1,\mathrm{ref}}$ is set to zero by redefining $v^n_{b/n,1,\mathrm{ref}}$ as

\begin{align}
v^n_{b/n,1,\mathrm{ref}} \triangleq v^n_{b/n,1,\mathrm{ref}} -
\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} v^n_{b/n,1,\mathrm{ref}} =
\begin{bmatrix} v^n_{b/n,x,\mathrm{ref}} \\ v^n_{b/n,y,\mathrm{ref}} \\ 0 \end{bmatrix}. \tag{55}
\end{align}

These references are then transformed back to the LCS using

\begin{align}
\nu_{1,\mathrm{proj,ref}} = R^n_b(\Theta)^T v^n_{b/n,1,\mathrm{ref}}. \tag{56}
\end{align}

Finally, $\nu_{1,\mathrm{proj,ref}}$ is concatenated with the LCS angular velocity references $\nu_{2,\mathrm{ref}}$, and the new reference vector $\nu_{\mathrm{ref}}$ is redefined as

\begin{align}
\nu_{\mathrm{ref}} \triangleq \begin{bmatrix} \nu_{1,\mathrm{proj,ref}}^T & \nu_{2,\mathrm{ref}}^T \end{bmatrix}^T. \tag{57}
\end{align}
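The projection logic in (51)-(57) condenses to a few lines of Matlab. A sketch, where Rnb stands for $R^n_b(\Theta)$ and all values are illustrative stand-ins:

    % Sketch of (51)-(57): depth reference channel and projection of LCS
    % velocity references onto the GCS xy-plane.
    Rnb     = eye(3);                   % stand-in LCS-to-GCS rotation matrix
    nu1_ref = [0.2; 0.1; 0.05];         % LCS linear velocity references
    nu2_ref = [0; 0; 0.1];              % LCS angular velocity references
    v_z_ref = -0.1;                     % GCS depth velocity reference
    v1_gcs       = Rnb * nu1_ref;       % (54): LCS -> GCS
    v1_gcs(3)    = 0;                   % (55): zero the GCS z-component
    nu1_proj_ref = Rnb' * v1_gcs;       % (56): back to LCS
    nu_ref       = [nu1_proj_ref; nu2_ref];               % (57)
    nu_d_ref     = [Rnb' * [0; 0; v_z_ref]; zeros(3,1)];  % (51)-(52)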


Figure 18: The depth velocity controller. The only difference between the combined velocity controller and the depth velocity controller is the way it handles reference signals. With depth velocity control engaged, a channel for direct GCS z-axis velocity is enabled, while simultaneous LCS velocity references are projected onto the GCS xy-plane, to ensure that the different reference signals do not result in a conflict. Before being passed to the combined velocity controller, all references are transformed back into the LCS basis.


7.6 Position and Attitude Controller

A block diagram of the position and attitude controller is presented in Figure 19, where the positional and angular references ($\eta$) are given in GCS coordinates (position) and Euler angles (attitude). The outputs of the outer circuit PI controllers are linear velocity references expressed in the GCS and Euler angle time derivative references. These are determined according to

\begin{align}
v^n_{b/n,\mathrm{ref}} &= K_p\tilde{\eta}_1 + K_i\int_0^t \tilde{\eta}_1\,dt, \tag{58}\\
\dot{\Theta}_{\mathrm{ref}} &= K_p\tilde{\eta}_2 + K_i\int_0^t \tilde{\eta}_2\,dt. \tag{59}
\end{align}

Note that the parameters $K_p$ and $K_i$ are different for each of the six states. The linear velocity references are transformed to the LCS basis (u, v, w) by the rotation matrix $R^n_b(\Theta)^T$, and the Euler angle time derivative references are transformed by the transformation matrix $T_\Theta(\Theta)^{-1}$ into angular velocity references in the LCS (p, q, r). This is done because the implementation of the combined velocity controller only allows references given in the LCS. Equation (19) implies that the transformation of Euler angle time derivative references to LCS angular velocity references is given by

\begin{align}
\nu_{2,\mathrm{ref}} = \omega^b_{b/n,\mathrm{ref}} = T_\Theta(\Theta)^{-1}\dot{\Theta}_{\mathrm{ref}}, \tag{60}
\end{align}

while the transformation of linear velocities expressed in the GCS to the LCS is implied by (16) to be

\begin{align}
\nu_{1,\mathrm{ref}} = v^b_{b/n,\mathrm{ref}} = R^n_b(\Theta)^T v^n_{b/n,\mathrm{ref}}. \tag{61}
\end{align}

The linear velocity references expressed in the LCS, together with the angular velocity references about the LCS axes, form the reference signal for the PI(D) controller in the inner circuit (the subsystem from $\tilde{\nu}$ to $u_\nu$), see Figure 19. This inner circuit is the combined velocity controller. With this control method, the inner circuit controller takes care of the disturbances and non-linearities between the linear and angular velocity references and the actual linear and angular velocities.


Figure 19: The position and attitude controller. The design is of cascade type, which means the controller in fact consists of two circuits with one controller each. The inner circuit (the subsystem from $\tilde{\nu}$ to $u_\nu$) is the combined velocity controller. The outer circuit PID controller takes position and attitude references, $\eta$, as GCS coordinates (position) and Euler angles (attitude), and computes linear velocity references expressed in the GCS and Euler angle time derivative references. These are then transformed to linear and angular LCS velocities to yield the reference vector $\nu_{ref}$, which is then handled by the combined velocity controller.

7.7 Modes of Operation

There are several different ways to operate the system, both in terms of input methods and control modes. Reference signals can be passed to the system either using an Xbox controller or using script files. The system can also be operated in several control modes, namely Manual Control, Linear and Angular Velocity Control, Position and Attitude Control, and Manual Linear and Depth Velocity with Attitude Control.

7.7.1 Input Modes

There are two modes for inputting reference signals to the system. One is the Xbox mode, which is enabled in the controller settings in Dynamic Reconfigure by selecting 'Yes' in the dropdown menu named 'xbox'.

The other mode is the References by Script mode. This mode is enabled in the controller settings in Dynamic Reconfigure by selecting 'No' in the dropdown menu named 'xbox'.


7.7.2 Control Modes

In the 'Manual Control' mode, the operator uses the Xbox controller to input references corresponding to a force or a moment along or about the LCS x, y and z axes, respectively.

The 'Linear and Angular Velocity Control' mode utilizes the combined velocity controller described in Section 7.4. References are input using either the Xbox controller or a script file. While in this mode, the depth velocity reference channel can be switched on and off with the Xbox controller.

The 'Position and Attitude Control' mode utilizes the control structure described in Section 7.6. In this mode it is only possible to set references by script; there is no support for passing references with the Xbox controller.

The last mode, 'Manual Linear and Depth Velocity with Attitude Control', is a combination of open and closed loop control. This mode only supports operation using the Xbox controller. The linear motion along the LCS x-axis and y-axis is controlled in the same way as in the 'Manual Control' mode, but the open loop control of linear motion along the LCS z-axis is swapped for open loop control of motion along the GCS z-axis. The attitude is controlled with the attitude controller part of the position and attitude controller described in Section 7.6.

Refer to [3] for further details on how to operate the system in the available combinations of input and control modes.

7.8 Interface

The control module consists of the controller ROS node running on the workstation.

The controller node subscribes to the topics /sensor_fusion/states, /cmd_vel, /cmd_depth_vel, /controller/cmd_references, /depth_vel_enable, /rovio/enable_thrusters and /controller/params.

It publishes on the topics /rovio/thrusters and /reference.

The communication is described further in Appendix B.

7.9 Further Development

In this section some methods are presented that were worked on but never finished due to lack of time. These methods could be developed further in order to improve the controller.

7.9.1 Feedback Linearization

Feedback linearization was worked on in this project, and most of the code structure needed to run the controller with the linearization exists. It was never tested due to the lack of estimated model parameters, as discussed in Section 6.4; the linearization depends on these parameters in order to work. With a non-linear model on the form

\begin{align}
\dot{x} &= f(x) + g(x)u, \tag{62a}\\
y &= h(x), \tag{62b}
\end{align}

a non-linear feedback can be used to select a control signal that compensates for the non-linearities. The success of this method is highly dependent on how well the model describes the true system. Using the model (9), the linearizing forces and torques should be selected as

\begin{align}
\tau_{\mathrm{Lin}} = G^{-1}(\eta, \nu, a_b) = M a_b + C(\nu)\nu + D(\nu)\nu + g(\eta), \tag{63}
\end{align}

where $a_b$ contains the desired linear and angular accelerations in the body-fixed frame LCS [2]. Choosing the demanded forces and torques this way, using the model (9) and assuming exact estimates and that M has full rank, yields

\begin{align}
M\dot{\nu} + C(\nu)\nu + D(\nu)\nu + g(\eta) &= M a_b + C(\nu)\nu + D(\nu)\nu + g(\eta) \tag{64}\\
\iff M\dot{\nu} &= M a_b \tag{65}\\
\iff \dot{\nu} &= a_b. \tag{66}
\end{align}

A controller is then designed to select $a_b$ in a way that yields satisfactory control performance. The linearizing input signal can then be selected as

\begin{align}
u_{\mathrm{Lin}} = f^{-1}(T^{-1}\tau_{\mathrm{Lin}}), \tag{67}
\end{align}

where $f^{-1}(\cdot)$ is the look-up table from force to control signal, and $T^{-1}$ is the inverse thrust geometry matrix.
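A Matlab sketch of how (63) fits together; the model terms below are made-up stand-ins for the matrices in (9), whose real values would come from the estimated parameters in Table 1:

    % Feedback linearization, cf. (63): cancel the modelled non-linearities
    % so that the closed inner loop behaves as nu_dot = a_b.
    M    = diag([30 30 30 2 2 2]);        % stand-in inertia matrix
    Cfun = @(nu) zeros(6);                % stand-in Coriolis matrix C(nu)
    Dfun = @(nu) diag([20 20 20 1 1 1]);  % stand-in damping matrix D(nu)
    gfun = @(eta) zeros(6, 1);            % stand-in restoring forces g(eta)
    nu  = zeros(6, 1);  eta = zeros(6, 1);
    a_b = [0.1; 0; 0; 0; 0; 0];           % desired accelerations in the LCS
    tau_lin = M*a_b + Cfun(nu)*nu + Dfun(nu)*nu + gfun(eta);   % (63)
    % tau_lin is then mapped to control signals as in (67):
    % u_lin = f^-1(T^-1 * tau_lin), using the thrust allocation of Section 7.2.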


8 Sensor Fusion

When the ROV is operating it is important to know the state of the ROV in terms of attitude and position. These are estimated by the sensor fusion module, which runs on the workstation. The input to the sensor fusion module consists of readings from the IMU, the pressure sensor and measurements from the vision system. The outputs from the sensor fusion module are state estimates, which are sent back to the onboard computer for control of the ROV and sent to the GUI for a graphical representation. The states are estimated using an observer with a motion model. The measurement models are nonlinear, hence the extended Kalman filter (EKF) is used as observer. The state estimation is split into two filters: one contains positions and linear velocities, and the other contains the remaining states. The split was made since the states in the two filters are separable, and updating two smaller filters requires fewer computations.

8.1 EKF

For a nonlinear time-discrete system consisting of a motion model and a measurement model, the EKF can be applied to estimate the states. The models are given by

\begin{align}
x_{k+1} &= f(x_k, u_k) + v_k, \notag\\
y_k &= h(x_k, u_k) + e_k, \tag{68}
\end{align}

where $f(x_k, u_k)$ is a time-discrete motion model, $v_k \sim N(0, Q_k)$ is the process noise with covariance $Q_k$ as a design parameter, $h(x_k, u_k)$ is the time-discrete measurement model, and $e_k \sim N(0, R_k)$ is the measurement noise with covariance $R_k$ as a design parameter.

8.1.1 EKF Algorithm

The EKF is split into a measurement update and a time update; the EKF1 algorithm described in Chapter 8 of [13] is used:

Initialization: The filter is initialized with $\hat{x}_{1|0} = E(x_0)$ and $P_{1|0} = \mathrm{Cov}(x_0)$, where $x_0$ is the initial state of the ROV.

Measurement update: In order to calculate the measurement update, a linearization of the function h around the point $\hat{x}_{k|k-1}$ needs to be performed:

\begin{align}
H_k = \nabla h\big|_{\hat{x}_{k|k-1}}. \tag{69}
\end{align}

This gives the following Kalman gain:

\begin{align}
K_k = P_{k|k-1}H_k^T\left(R_k + H_kP_{k|k-1}H_k^T\right)^{-1}, \tag{70}
\end{align}

which results in the following state and covariance matrix estimates:

\begin{align}
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k\left(y_k - h(\hat{x}_{k|k-1})\right), \notag\\
P_{k|k} &= P_{k|k-1} - P_{k|k-1}H_k^T\left(R_k + H_kP_{k|k-1}H_k^T\right)^{-1}H_kP_{k|k-1}. \tag{71}
\end{align}

TSRT10 Automatic control - Project course, Linkoping UniversityDocument responsible: Niklas Sundholm, [email protected]

Page 44: Technical Documentation - Linköping University€¦ · project. 1.1 Purpose The main purpose of this project was to develop a positioning system that can be used ... Here the sensors

R VRemotely Operated Underwater Vehicle

Technical Documentation 39

Time update: To perform the time update step, the function f needs to be linearized around the point $\hat{x}_{k|k}$:

\begin{align}
F_k = \nabla f\big|_{\hat{x}_{k|k}}. \tag{72}
\end{align}

This gives the following state and covariance matrix estimates:

\begin{align}
\hat{x}_{k+1|k} &= f(\hat{x}_{k|k}), \notag\\
P_{k+1|k} &= Q_k + F_kP_{k|k}F_k^T. \tag{73}
\end{align}

The measurement update is then performed again, and the recursion continues.
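A compact Matlab sketch of one recursion of (69)-(73); the functions f, h, the Jacobian functions Hjac, Fjac, the noise covariances Q, R and the predicted quantities xPred, Ppred are assumed to exist (illustrative names, not the project's actual code):

    % One EKF recursion: measurement update (69)-(71), then time update (72)-(73).
    H     = Hjac(xPred);                     % (69): Jacobian of h at x_{k|k-1}
    S     = R + H*Ppred*H';                  % innovation covariance
    K     = (Ppred*H')/S;                    % (70): Kalman gain
    x     = xPred + K*(y - h(xPred));        % (71): updated state estimate
    P     = Ppred - (Ppred*H')/S*(H*Ppred);  % (71): updated covariance
    F     = Fjac(x);                         % (72): Jacobian of f at x_{k|k}
    xPred = f(x);                            % (73): predicted state
    Ppred = Q + F*P*F';                      % (73): predicted covariance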

8.2 Notation

The notation used in the sensor fusion module is listed in (74). The position is denoted p, the linear velocities $\nu$, the attitude quaternion q and the angular velocities $\omega$. The scalar d is the depth, and the biases in the three gyro axes are denoted b:

\begin{align}
p &= \begin{bmatrix} x & y & z \end{bmatrix}^T, &
\nu &= \begin{bmatrix} u & v & w \end{bmatrix}^T, &
q &= \begin{bmatrix} \eta & \varepsilon_1 & \varepsilon_2 & \varepsilon_3 \end{bmatrix}^T, &
\omega &= \begin{bmatrix} p & q & r \end{bmatrix}^T, &
b &= \begin{bmatrix} b_p & b_q & b_r \end{bmatrix}^T. \tag{74}
\end{align}

8.3 Motion Model

The motion model is the central part of the sensor fusion module. With a motion model, the sensor fusion module can predict how the ROV's states will change, and can thus increase the precision of the estimates.

The master's thesis project [2] neglected all linear velocities in the motion model by assuming they were very small. In this project the positions and linear velocities are estimated using sensor data from the camera and are therefore also included in the new motion model. The master's thesis project used a reduced version of the model in [13] where positions (except for depth), linear velocities and linear accelerations were excluded. Inspired by the constant velocity model in [13], the model described in continuous time by

\begin{align}
\begin{bmatrix} \dot{p} \\ \dot{\nu} \end{bmatrix} =
\begin{bmatrix} 0 & I_{3\times 3} \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} p \\ \nu \end{bmatrix} +
\begin{bmatrix} 0 \\ I_{3\times 3} \end{bmatrix} v_p \tag{75}
\end{align}

is used to estimate the states for positions and linear velocities. The model described in continuous time by

\begin{align}
\begin{bmatrix} \dot{q} \\ \dot{\omega} \\ \dot{b}_{\mathrm{gyro}} \\ \dot{d} \end{bmatrix} =
\begin{bmatrix} -\tfrac{1}{2}S(\omega) & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} q \\ \omega \\ b_{\mathrm{gyro}} \\ d \end{bmatrix} +
\begin{bmatrix} v_q \\ v_{\mathrm{gyro}} \\ v_d \end{bmatrix}, \tag{76a}
\end{align}

\begin{align}
S(\omega) =
\begin{bmatrix} 0 & -p & -q & -r \\ p & 0 & -r & q \\ q & r & 0 & -p \\ r & -q & p & 0 \end{bmatrix}, \tag{76b}
\end{align}


is used to estimate the states for attitude, angular velocities, gyroscope bias and depth. Here $v_{(\cdot)}$ denotes Gaussian system noise.

Equations (75) and (76a) are expressed in discrete time in (77) and (78), using Euler forward with sampling time $T_s$ and the same definition of $S(\omega)$ as in (76b):

\begin{align}
\begin{bmatrix} p_{k+1} \\ \nu_{k+1} \end{bmatrix} =
\begin{bmatrix} I_{3\times 3} & T_sI_{3\times 3} \\ 0 & I_{3\times 3} \end{bmatrix}
\begin{bmatrix} p_k \\ \nu_k \end{bmatrix} +
\begin{bmatrix} \tfrac{T_s^2}{2}I_{3\times 3} \\ T_sI_{3\times 3} \end{bmatrix} v_p, \tag{77}
\end{align}

\begin{align}
\begin{bmatrix} q_{k+1} \\ \omega_{k+1} \\ b_{\mathrm{gyro},k+1} \\ d_{k+1} \end{bmatrix} =
\begin{bmatrix} I_{4\times 4} - \tfrac{1}{2}T_sS(\omega) & 0 & 0 & 0 \\ 0 & I_{3\times 3} & 0 & 0 \\ 0 & 0 & I_{3\times 3} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} q_k \\ \omega_k \\ b_{\mathrm{gyro},k} \\ d_k \end{bmatrix} +
\begin{bmatrix} v_q \\ v_{\mathrm{gyro}} \\ v_d \end{bmatrix}. \tag{78}
\end{align}
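A runnable Matlab sketch of the time update of the position/velocity filter (77); the sample time and noise covariance below are example values (design parameters):

    % Constant-velocity time update, cf. (77). x = [p; nu] with p, nu in R^3.
    Ts = 0.1;                                   % sample time [s], example value
    F  = [eye(3), Ts*eye(3); zeros(3), eye(3)]; % transition matrix in (77)
    G  = [Ts^2/2*eye(3); Ts*eye(3)];            % noise gain in (77)
    x  = zeros(6, 1);  P = eye(6);              % example state and covariance
    Qp = 0.01*eye(3);                           % covariance of v_p (design parameter)
    x  = F*x;                                   % predicted position and velocity
    P  = F*P*F' + G*Qp*G';                      % predicted covariance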

8.4 Measurement Model

The measurement model consists of measurements from the gyroscope, the accelerometer, the pressure sensor and the vision system.

8.4.1 Gyroscope

The measurement equation for the gyroscope is

\begin{align}
y_{\mathrm{gyro}} = \omega + b_{\mathrm{gyro}} + e_{\mathrm{gyro}}, \tag{79}
\end{align}

where $y_{\mathrm{gyro}}$ is the measurement from the gyroscope, $\omega$ is the angular velocity, $b_{\mathrm{gyro}}$ is the bias in the sensor and $e_{\mathrm{gyro}} \sim N(0, R_{\mathrm{gyro}})$ is Gaussian noise.

8.4.2 Accelerometer

The measurement equation for the accelerometer is

\begin{align}
y_{\mathrm{accelerometer}} = R(q)^T \begin{bmatrix} 0 \\ 0 \\ -g \end{bmatrix} + e_{\mathrm{accelerometer}}, \tag{80}
\end{align}

where $y_{\mathrm{accelerometer}}$ is the measurement from the accelerometer, $R(q)^T = R(q^{-1})$ is the rotation matrix representing the GCS in the LCS, g is the gravitational acceleration and $e_{\mathrm{accelerometer}} \sim N(0, R_{\mathrm{acc}})$ is Gaussian noise. Since the accelerometer is not located exactly at the ROV's centre of rotation, a contribution to the measured acceleration from the rotation of the ROV is expected, but it is assumed small enough to be neglected.

8.4.3 Pressure Sensor

The measurement equation for the pressure sensor is

\begin{align}
y_{\mathrm{pressure}} = \rho g\left(d + \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} R(q) \begin{bmatrix} x_{\mathrm{pressure,offset}} \\ 0 \\ 0 \end{bmatrix}\right) + p_{\mathrm{air}} + e_{\mathrm{pressure}}, \tag{81}
\end{align}


where $y_{\mathrm{pressure}}$ is the pressure measured by the pressure sensor, $\rho$ is the density of the water, g is the gravitational acceleration, d is the depth, $R(q)$ is the rotation matrix introduced in (26), $x_{\mathrm{pressure,offset}}$ is the offset of the pressure sensor from the centre of rotation (offsets in y and z are assumed to be zero), which is assumed to be the CG of the ROV, and $p_{\mathrm{air}}$ is the barometric pressure at the surface, measured before operation.
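For intuition, inverting (81) while neglecting the lever-arm term gives a direct depth estimate. A sketch with example numbers (the EKF instead uses (81) as a measurement equation):

    % Approximate depth from a pressure reading, neglecting the lever-arm
    % term in (81): y_pressure ~ rho*g*d + p_air. All numbers are examples.
    rho        = 1000;      % density of water [kg/m^3] (Table 1)
    g          = 9.82;      % gravitational acceleration [m/s^2] (Table 1)
    p_air      = 101300;    % surface pressure [Pa], measured before operation
    y_pressure = 111120;    % example pressure reading [Pa]
    d = (y_pressure - p_air) / (rho*g)   % depth [m], here about 1 m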

8.4.4 Vision System

The measurement equation for the vision system is shown in (82). The position given by the vision system is the camera position, so a translation needs to be performed to find the true position of the ROV's CG. This is however not currently performed, due to uncertainties about the location of the ROV's CG and because the position estimate is of acceptable quality without this feature:

\begin{align}
y_{\mathrm{position}} = \begin{bmatrix} 0 & 0 & -1 \\ -1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} p + e_{\mathrm{position}}, \tag{82}
\end{align}

where $y_{\mathrm{position}}$ is the position measurement from the vision system (in the MCS) and p is the ROV's position state in the PCS.

The raw attitude measurement from the vision system is given as a quaternion representing the orientation of the CCS in the MCS, i.e.

\begin{align}
y_{\mathrm{attitude,raw}} = {}^{\mathrm{MCS}}_{\mathrm{CCS}}q. \tag{83}
\end{align}

Using the constant and known coordinate system transformation quaternions from (4), this measurement can be transformed to represent the same rotation as the state by the following quaternion multiplications:

\begin{align}
y_{\mathrm{attitude,transformed}} = {}^{\mathrm{PCS}}_{\mathrm{GCS}}q^{-1} \otimes {}^{\mathrm{MCS}}_{\mathrm{PCS}}q^{-1} \otimes y_{\mathrm{attitude,raw}} \otimes {}^{\mathrm{CCS}}_{\mathrm{LCS}}q. \tag{84}
\end{align}

Now, one might be tempted to construct the expected attitude measurement as

\begin{align}
\hat{y}_{\mathrm{attitude}} = y_{\mathrm{attitude,transformed}} = q, \tag{85}
\end{align}

but that would lead to the innovation $\varepsilon_{\mathrm{attitude}}$ being the difference between two quaternions, which has no clear mathematical interpretation. It is then difficult to construct a meaningful Jacobian for calculating the Kalman gain. Instead, the measurement of a vector field is emulated, similar to the accelerometer measurement of the gravity field. In (80), the expected measurement from the accelerometer is a vector constructed by using the quaternion state representing the ROV's attitude to rotate the gravity vector into the ROV's coordinate system. The innovation is then the difference between two vectors, which is well defined. Thus the measurement is constructed as

\begin{align}
y_{\mathrm{attitude}} = R\left(y_{\mathrm{attitude,transformed}}\right)\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \tag{86}
\end{align}


and the expected measurement is given by

\begin{align}
\hat{y}_{\mathrm{attitude}} = R(q)\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} + e_{\mathrm{attitude}}, \tag{87}
\end{align}

where q is the quaternion state. The vector multiplied with the rotation matrix is somewhat arbitrary; the only requirement is that it has a non-zero component in each dimension.

This procedure ensures that the innovation is the difference between two vectors in $\mathbb{R}^3$, which is well defined, as opposed to the difference between two quaternions on vector format, which has no obvious interpretation.
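A runnable Matlab sketch of (86)-(87), with a hand-written rotation matrix for a unit quaternion q = [η ε1 ε2 ε3] and made-up example quaternions:

    % Emulated vector measurement, cf. (86)-(87): rotate a fixed vector with
    % the measured and the estimated quaternion, then difference the results.
    Rq = @(q) [1-2*(q(3)^2+q(4)^2),     2*(q(2)*q(3)-q(1)*q(4)), 2*(q(2)*q(4)+q(1)*q(3));
               2*(q(2)*q(3)+q(1)*q(4)), 1-2*(q(2)^2+q(4)^2),     2*(q(3)*q(4)-q(1)*q(2));
               2*(q(2)*q(4)-q(1)*q(3)), 2*(q(3)*q(4)+q(1)*q(2)), 1-2*(q(2)^2+q(3)^2)];
    qMeas  = [cos(0.1); sin(0.1); 0; 0];  % example transformed measurement, (84)
    qState = [1; 0; 0; 0];                % example quaternion state
    innov  = Rq(qMeas)*[1;1;1] - Rq(qState)*[1;1;1]   % innovation in R^3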

8.5 Conversion of Velocities from PCS to LCS

Since the velocities are estimated from the positions in the PCS, they are also expressed in the PCS. However, velocities are usually of most interest in the LCS. To perform this conversion, the velocity vector in the PCS is expressed in the LCS using the quaternion state q and the known quaternions defined in (4), according to

\begin{align}
{}^{\mathrm{LCS}}\nu = R\left(\left({}^{\mathrm{PCS}}_{\mathrm{GCS}}q \otimes q\right)^{-1}\right) {}^{\mathrm{PCS}}\nu. \tag{88}
\end{align}

8.6 Further Development

The measurement equation in (82) is not correct: at the moment, the position of the ROV is taken to be at the camera and not at the CG of the ROV. When the CG of the ROV has been determined, this translation can easily be added to (82).

The estimation of the gyroscope bias is currently not satisfactory. The process noise for these states is currently very low, and the bias states therefore converge slowly. The current solution is to force these states to a value measured while the ROV is stationary, which is not ideal.

A problem with the library is the naive implementation of the positioning. The first marker is trivial, since it is taken as the origin, but subsequent markers are positioned relative to the first marker upon first sighting. This position is then fixed throughout the mission. Even though evidence for this claim is difficult to find, we suspect that the positioning of subsequent tags can be affected by poor calibration in some parts of the image, effects of the water, or random disturbances. It is also possible to detect markers as reflections in the water surface, which is suspected to cause crashes of the vision system, and markers are prone to slight movement from water disturbances. This could be improved either by rewriting the aruco_mapping package or by writing a new one using only the aruco and aruco_ros dependencies. A simultaneous localization and mapping (SLAM) approach would be possible if the raw measurements of marker position and attitude were fed into the sensor fusion module. This would probably make the position estimate more robust against these types of disturbances.


9 Graphical User Interface

The GUI is based on rqt [14] and includes the basic functionality for operating and logging data from the ROV. Currently, the GUI consists of the standard rqt and the rqt plugins shipped with the ROS Indigo distribution.

9.1 Front End Interface

Through the GUI, the operator can set the active control mode to any of those described in Section 7.7.2, and set whether the state references for the ROV should be sent via script files or by using an Xbox controller, as described in Section 7.7.1. The GUI also offers functionality to tune parameters related to the sensor fusion module. It is possible to record and plot data by selecting a topic from a menu of available topics. Logging data is, however, usually done by using script files. For more information on the GUI and how to use it, see [3]. A brief overview of the GUI is shown in Figure 20.

9.2 Further Development

The current GUI setup works, but is cumbersome to use in some cases. In particular, the dynamic reconfigure plugin is slow and unreliable. Other plugins are functional, for example the plotting tools and the video feed. Since rqt is quite customizable, it is possible that a lot of improvement can be achieved by utilizing existing plugins. If this is not the case, custom plugins can be developed using C++ or Python and Qt.

Figure 20: An overview of the GUI for the system.


10 Simulator

A simulator has been developed in Matlab/Simulink, with the goal of speeding up the development process of the ROV. However, the simulator has not been used, due to the troubles with estimating the model parameters. The simulator is implemented in Simulink, but all settings are set in a Matlab script. The reason for this approach is to be able to set all simulator settings from one place, in order to keep things simple.

10.1 Software Requirements

The simulator requires Matlab R2016b or newer. A Simulink project file is used for managing file paths. The project is loaded by double clicking the file rov/simulink/Bluerov.prj inside Matlab.

10.2 Simulator in Simulink

An overview of the Simulink implementation of the simulator is presented in Figure 21. Variant subsystem blocks have been used since they make it possible to provide multiple implementations of a subsystem, of which only one is active during simulation. This makes it possible to programmatically select the active implementation without modifying the model. The structure of the simulator is simple, having

• one subsystem block that generates reference signals,

• one variant subsystem block with different options to generate control signals,

• one subsystem block containing the model of the ROV,

• one variant subsystem block with different noise model options.

The output ports are used to save the simulation results.

Figure 21: Overview of the simulator implemented in Simulink.

The subsystem ROV model contains the model of the ROV given by (45), the only difference being that quaternions have been used instead of Euler angles, to achieve faster calculations and avoid gimbal lock. The initial states and model parameters are set by running initMotionModel.m. The quaternions are transformed to Euler angles before the calculated states are output.

The subsystem reference signals outputs the reference signals that are used by the variant subsystem block controllers when the velocity or the position and attitude controller is activated. For each state it is possible to set a specific reference signal in the script initReferenceSignals.m. The user can choose between a constant signal, a smooth step (low-pass filtered step) or a sinusoidal signal. Start and end times, as well as specific settings for the smooth step and the sinusoid, can be set.

The variant subsystem block controllers contains four different subsystem blocks, shown in Figure 22, that generate the control signal u. Setting the variable variant_control, in runSimulator.m, to a number between 0 and 3 decides which subsystem is active. Below is a description of the different subsystems that can be activated.

• Zero - sets all control signals to 0.

• FromFile - reads control signals from a bag file. The control signals are saved as a timeseries object and read into Simulink.

• PosAtt - calculates control signals using the position and attitude controller.

• LinVelAngVel - calculates control signals using the velocity controller.

Figure 22: Overview of the different subsystems that have been implemented in the variant subsystem block controllers. The currently active subsystem (in this case FromFile) is set in the runSimulator.m script, which connects the input and output ports to the active controller.

The variant subsystem block noise models contains two different subsystem blocks with different noise models. The first subsystem does not add any noise, and the second subsystem adds white noise, with constant power, to all states. The active noise model is selected by setting the variable variant_noise to 0 or 1.
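As an illustration, a run configuration could look as follows. The variables variant_control and variant_noise and the scripts initMotionModel.m and initReferenceSignals.m are named in this section; the mapping of numbers to subsystems and the model name in the sim call are assumptions:

    % Schematic configuration of a simulation run, cf. runSimulator.m.
    variant_control = 3;   % assumed mapping: 0 Zero, 1 FromFile, 2 PosAtt, 3 LinVelAngVel
    variant_noise   = 1;   % 0: no noise, 1: white noise on all states
    initMotionModel;       % set initial states and model parameters
    initReferenceSignals;  % configure per-state reference signals
    out = sim('Bluerov');  % model name assumed; runs the Simulink simulation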

10.3 Further Development

Since not all model parameters have been estimated, the simulator has not been tested. Therefore it cannot be concluded that the simulator is free from errors. When the model parameters have been estimated and the simulator has been tested, and possible errors corrected, it could be of interest to develop a simulator GUI. Running the simulator from a GUI, instead of using Matlab scripts, is a more user friendly solution. Another possible improvement could be to include a 3D visualization of the ROV in the GUI. This is not only appealing; it can also be used as a quick indicator of the simulation result.


References

[1] ArUco: a minimal library for Augmented Reality applications based on OpenCV. http://www.uco.es/investiga/grupos/ava/node/26.

[2] Adam Aili and Erik Ekelund. "Model-Based Design, Development and Control of an Underwater Vehicle". MSc Thesis. Sweden: Linköping University, 2016.

[3] Niklas Sundholm. User Manual, Remotely Operated Underwater Vehicle. 2016.

[4] Roberto G. Valenti, Ivan Dryanovski, and Jizhong Xiao. "Keeping a Good Attitude: A Quaternion-Based Orientation Filter for IMUs and MARGs". In: Sensors 15.8 (Aug. 2015).

[5] Camera Module - Raspberry Pi Camera. https://www.raspberrypi.org/documentation/hardware/camera/. Dec. 18, 2016.

[6] aruco_mapping. http://wiki.ros.org/aruco_mapping.

[7] camera_calibration. http://wiki.ros.org/camera_calibration.

[8] camera_calibration_parsers. http://wiki.ros.org/camera_calibration_parsers.

[9] Camera calibration and 3D reconstruction. http://docs.opencv.org/3.0-beta/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html.

[10] Thor I. Fossen. Handbook of Marine Craft Hydrodynamics and Motion Control. Wiley, 2011.

[11] Torkel Glad and Lennart Ljung. Control Theory - Multivariable and Nonlinear Methods. Taylor & Francis Ltd, 2000. ISBN: 9780748408788.

[12] Martin Enqvist et al. Industriell reglerteknik, Kurskompendium. 2014.

[13] Fredrik Gustafsson. Statistical Sensor Fusion. Studentlitteratur, 2012.

[14] rqt: a Qt-based framework for GUI development for ROS. http://wiki.ros.org/rqt.


A Telegraph Test

This section describes how to perform a telegraph test with the ROV. Before starting the telegraph tests, ArUco should be set up; how to set up ArUco is documented in the User Manual [3]. All scripts below are run, in separate terminal windows, from the rov/catkin_ws folder.

Start ROS nodes that run on the ROV by running

./startrov.sh

Start ROS nodes that run on the workstation by running

./startworkstationtest.sh

Note that this script starts the same nodes as ./startworkstation.sh, except for the controller node. After the script has been run, ArUco should be initialised by showing all the markers to the ROV.

Calibrate the gyro sensor by placing the ROV still on a flat surface (it does not matter if the surface is tilted; the important thing is that the ROV is completely still) and run

./calibratesensors.sh

The telegraph node is started by running

./startTelegraphNode.sh

This is done from a separate script since the node has to be killed between every test. The node is killed by pressing Ctrl+C. This is not very neat, but it was the only solution found that allows the user to set a new loop rate, which decides at what frequency the ROS node will run.

The telegraph test is set up by running

./startTelegraphTest.sh

An example of how a telegraph test is set up and run is shown in Figure 23. The setup options are described below.

• Choose test: Seven different telegraph tests are available, one for each individual DoF and one that excites the ROV in all DoF. More tests can easily be implemented in the file rov/catkin_ws/src/telegraph/src/telegraph.cpp.

• Set scaling of the test signal in the interval [0 - 1]: The maximum force and torque from the used thrusters, which depend on the chosen test, are scaled by the value set in this option.

• Set telegraph switch factor in the interval [0 - 100]: The telegraph switch factor sets the probability, in percent, that the signal will switch sign during a loop in the ROS node (see the sketch after this list).

• Set loop rate: The loop rate sets the frequency at which the ROS node will run.

• Enter the name you want for the log file: The name entered for the log file will be appended to the current date and time. The log file will be saved to the folder rov/testdata/bag.
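To make the scaling and switch-factor options concrete, here is a sketch (in Matlab, while the actual node is implemented in C++) of how such a telegraph signal can be generated; all values are examples:

    % Generate a telegraph excitation signal: each loop iteration the signal
    % switches sign with probability switchFactor percent.
    switchFactor = 20;                 % [%] switch probability per loop
    scaling      = 0.5;                % scaling of the maximum thruster force
    nLoops       = 500;                % number of ROS-node loop iterations
    s = zeros(1, nLoops);  s(1) = scaling;
    for k = 2:nLoops
        if 100*rand < switchFactor
            s(k) = -s(k-1);            % switch sign
        else
            s(k) = s(k-1);             % keep previous value
        end
    end
    stairs(s)                          % inspect the excitation signal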


The test starts after it has been set up, and it exits when the user presses Ctrl+C. The topics that are logged during the test can be set by modifying the last row of startTelegraphTest.sh.

In order to log informative data that can be used for parameter estimation, make sure that the ROV is able to see ArUco markers at all times. Reflections of ArUco markers in the water surface should also be avoided, either by positioning the ROV such that the reflections cannot be seen or by disturbing the water surface.

Figure 23: Example of how a telegraph test is set up and run using the script startTelegraphTest.sh.


B Communication

This section describes the communication between the different modules in the system. ROS nodes communicate by subscribing and publishing messages to topics. A topic can be seen as a bus where every message is available to everyone who subscribes. Therefore, the publisher does not need to know whether there are any subscribers to the messages sent on a specific topic.

B.1 Message Specification

Table 2, Table 3 and Table 4 show the different topics and the message structures. They also show the subscribers and publishers of each topic. Some messages are sent from scripts called by the user.


Table 2: Messages and topics (continued in Table 3).

/sensor_fusion/states
    Message type: std_msgs/Float64MultiArray
    Publishers:   sensorfusion
    Subscribers:  controller
    Description:  Estimated states for the ROV.

/cmd_vel
    Message type: geometry_msgs/Twist
    Publishers:   teleop_xbox
    Subscribers:  controller
    Description:  Scaled user input from the Xbox controller, essentially six
                  numbers, as of now scaled to lie in the interval [-1, 1].
                  Left stick up/down sets /cmd_vel/linear.x, left stick
                  left/right sets /cmd_vel/linear.y, left/right trigger sets
                  /cmd_vel/linear.z, left/right bumper sets /cmd_vel/angular.x
                  (this one is not analog and instead alternates between -0.5
                  and +0.5), right stick up/down sets /cmd_vel/angular.y and
                  right stick left/right sets /cmd_vel/angular.z.

/controller/params
    Message type: std_msgs/Float32MultiArray
    Publishers:   User (script), setControllerParams.sh
    Subscribers:  controller
    Description:  Controller parameters.

/controller/cmd_ref
    Message type: std_msgs/Float32MultiArray
    Publishers:   User (script), setReferences.sh and setPath.sh
    Subscribers:  controller
    Description:  Topic for reading references published by the user via
                  script. There are several situations where the data on
                  /controller/cmd_references and /reference will differ. For
                  example, if the Xbox mode is active,
                  /controller/cmd_references will not contain the references
                  set by the Xbox user input. Another example is when the
                  position and attitude controllers are active: the controller
                  node then internally calculates references for the linear
                  and angular velocity controllers, respectively. These
                  references will show on /reference, but not on
                  /controller/cmd_references. For logging purposes, always
                  use /reference.


Table 3: Messages and topics, continued (continued in Table 4).

/rovio/enable_thrusters
    Message type: std_msgs/Bool
    Publishers:   teleop_xbox
    Subscribers:  controller, bluerov_arduino
    Description:  Signal for enabling/disabling the thrusters.

/depth_vel_enable
    Message type: std_msgs/Bool
    Publishers:   teleop_xbox
    Subscribers:  controller
    Description:  Signal for enabling/disabling the depth velocity reference
                  channel. Should only be used when the linear and angular
                  velocity control mode is active. See [3] for details.

/cmd_depth_vel
    Message type: std_msgs/Float32
    Publishers:   teleop_xbox
    Subscribers:  controller
    Description:  Depth velocity reference set by the Xbox controller
                  left/right trigger. See [3] for details.

/rovio/thrusters
    Message type: std_msgs/UInt16MultiArray
    Publishers:   controller
    Subscribers:  bluerov_arduino
    Description:  Control signals to the thrusters.

/reference
    Message type: std_msgs/Float64MultiArray
    Publishers:   controller
    Subscribers:  -
    Description:  The reference signals used by the controllers.

/rovio/imu/data
    Message type: std_msgs/Float32MultiArray
    Publishers:   bluerov_arduino
    Subscribers:  sensorfusion
    Description:  Accelerometer and gyroscope measurements from the IMU.

/rovio/water_pressure/data
    Message type: std_msgs/Float32
    Publishers:   bluerov_arduino
    Subscribers:  sensorfusion
    Description:  Water pressure measurements from the HKPilot.

/rovio/magnetometer/data
    Message type: std_msgs/Float32MultiArray
    Publishers:   bluerov_arduino
    Subscribers:  sensorfusion
    Description:  Magnetometer measurements from the HKPilot.

/sensor_fusion/restart
    Message type: std_msgs/Bool
    Publishers:   - (sent from a console interface)
    Subscribers:  sensorfusion
    Description:  Command from the user to restart the filter.

/sensor_fusion/calibrate/mag
    Message type: std_msgs/Bool
    Publishers:   - (sent from a console interface)
    Subscribers:  sensorfusion
    Description:  Command from the user to calibrate the sensors.

/aruco_poses
    Message type: ArucoMarker.msg
    Publishers:   aruco_mapping
    Subscribers:  sensorfusion
    Description:  Information from the vision system. This message is
                  described in Section 4.4.


Table 4: Messages and topics, continued.

/rovio/heartbeat
    Message type: std_msgs/Bool
    Publishers:   heartbeat_rovio
    Subscribers:  bluerov_arduino
    Description:  Sent continuously to show that the communication between
                  the workstation and the ROV works.

/joy
    Message type: sensor_msgs/Joy
    Publishers:   joy_node
    Subscribers:  teleop_xbox
    Description:  Input from the Xbox controller.

/rovio/magnetometer/calibrate_offsets
    Message type: std_msgs/Bool
    Publishers:   - (sent from a console interface)
    Subscribers:  bluerov_arduino
    Description:  Command to calibrate the magnetometer.

/rovio/gyro/calibrate_offsets
    Message type: std_msgs/Bool
    Publishers:   - (sent from a console interface)
    Subscribers:  bluerov_arduino
    Description:  Command to calibrate the gyroscope.

/rovio/accelerometer/calibrate_offsets
    Message type: std_msgs/Bool
    Publishers:   - (sent from a console interface)
    Subscribers:  bluerov_arduino
    Description:  Command to calibrate the accelerometer.

/rovio/water_pressure/sample_time
    Message type: std_msgs/Float32
    Publishers:   -
    Subscribers:  bluerov_arduino
    Description:  Sets the sample time for the water pressure sensor.

/rovio/magnetometer/sample_time
    Message type: std_msgs/Float32
    Publishers:   -
    Subscribers:  bluerov_arduino
    Description:  Sets the sample time for the magnetometer.

/rovio/imu/sample_time
    Message type: std_msgs/Float32
    Publishers:   -
    Subscribers:  bluerov_arduino
    Description:  Sets the sample time for the IMU.

/camera/image/compressed
    Message type: sensor_msgs/CompressedImage
    Publishers:   raspicam_node
    Subscribers:  rqt_gui_cpp_node, aruco_mapping
    Description:  Video feed from the raspicam 2.

/camera/camera_info
    Message type: sensor_msgs/CameraInfo
    Publishers:   raspicam_node
    Subscribers:  -
    Description:  Contains information about the camera.


B.2 Dynamic Reconfigure

The sensor fusion node and the controller node use a ROS feature called Dynamic Reconfigure, which allows variables in the source code to be changed at run-time. By marking a variable as dynamically reconfigurable, it is displayed in the GUI and its value can be changed by the user. This is useful when tuning the controllers and the filters in the sensor fusion module. This feature has, however, shown poor performance, and therefore some of that functionality has been replaced by interactive scripts that publish directly on topics, e.g. the topics /controller/params and /controller/cmd_ref.
