
Journal of Autonomous Robots, ??, 1-27 (??). © ?? Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.

OTTER: The Design and Development of an Intelligent Underwater Robot

HOWARD H. WANG AND STEPHEN M. ROCK
[email protected], [email protected]

Aerospace Robotics Laboratory, Department of Aeronautics and Astronautics, Stanford University, Stanford CA 94305

MICHAEL J. LEE
[email protected]

Monterey Bay Aquarium Research Institute, Pacific Grove CA 93950

Received ??. Revised ??.

Editors: J. Yuh

Abstract.

Recent advances in sensing and intelligent control technologies open a whole new dimension in underwater autonomy. However, before truly capable, autonomous underwater robots can be created for subsea intervention and exploration, many research issues must first be investigated and developed experimentally on testbed platforms.

OTTER is an underwater robot designed to be used as a testbed for autonomous technologies. Both OTTER's hardware and software systems are configured to support simultaneous development and testing of different concepts for underwater robotics by independent researchers. A general control-software framework enables common access to all subsystems and avoids the duplication of basic robotic functionality jointly required by all projects. Additionally, the new autonomous technologies enabled by the results of individual research are mutually compatible and can be easily integrated into a single robotic system. Examples of new technologies demonstrated on the OTTER underwater robot include control from a real-time vision-sensing system, coordinated arm/vehicle control, and control from 3D graphical user interfaces.

Keywords: Autonomous underwater robot, task-level control, control/software architecture, vision-sensing system, coordinated arm/vehicle control, 3D graphical user interfaces

1. Introduction

The Stanford University Aerospace Robotics Laboratory (ARL) and the Monterey Bay Aquarium Research Institute (MBARI) are working together in a joint program on technologies that will enable underwater robotic systems to become more useful tools to both science and industry. In support of this program, several robotic submersibles have been built as testbed platforms on which to develop, test, and demonstrate new autonomous technologies. These testbed systems needed to be rapidly reconfigurable and accessible to individual researchers working simultaneously on independent topics.

This article discusses the design and development of the latest testbed, called OTTER, an Ocean Technology Testbed for Engineering Research. First, some background on autonomous underwater robots and on the technologies being investigated using OTTER is given. Then, a discussion of the philosophy driving OTTER's design follows. Next, OTTER is described in detail at the system and component levels. Finally, a few of the autonomous technologies that have been experimentally demonstrated to date with OTTER are discussed.


Fig. 1. The OTTER Underwater Robot.

1.1. Background

Many autonomous underwater vehicles (AUVs) have been constructed by research institutions around the world, but they are far from being common tools for daily use by science and industry in subsea operations. Fundamental issues in intelligence, control, and power remain open research, as well as engineering, topics. Earlier efforts focused on large AUVs such as the DARPA (ARPA) vehicle (Brancart, 1994), which can carry large payloads and dive to great depths. Smaller AUVs have been designed to cruise for long distances, like the Odyssey (Bellingham et al., 1992), or to stay submerged for long durations, like ABE (Yoerger et al., 1991). Others have been built to support various research objectives such as control from sonars, cooperating multiple vehicles, or adaptive non-linear vehicle controllers (e.g. Phoenix (Healey et al., 1994), EAVE III (Blidberg et al., 1990), and ODIN (Choi et al., 1995)). There are also notable international AUV research programs in Europe and Japan as well (e.g. VORTEX (Perrier et al., 1993), MARIUS (Pascoal, 1994), Aqua Explorer 1000 (Kato et al., 1993), and TwinBurger (Fujii et al., 1993)).

Working underwater robots with intervention and station-keeping capabilities have largely been limited to remotely-operated vehicles (ROVs) like MBARI's Ventana. These robots accomplish their missions under tight manual control provided by skilled and experienced pilots communicating through a high-bandwidth tether. By adding on-board intelligence and automatic guidance to human-controlled underwater robots, the level at which a person interacts with these systems can be elevated. Since higher-level communication reduces the requirements on data transfer rates and has greater tolerance for time delay, bandwidth-limited acoustic modems (Catipovic et al., 1994) can replace tethers as links to control the remote systems. Eliminating the tether not only removes a major source of operating costs, but also increases the portability and maneuverability of the overall system. Maintaining human intelligence in the control loop preserves much of the functionality of ROVs for untethered systems. The technologies being addressed by the research in the MBARI/ARL program apply to this class of semi-autonomous underwater robots (SAUVs).

1.2. Motivation

The OTTER testbed has been designed and built to support a variety of research in technologies for small underwater robots (see Figure 1). Individuals working in each research area share OTTER as a common platform to test and develop their concepts experimentally. OTTER's design parameters and specifications have evolved directly from the combined requirements of the different technical areas under investigation. These areas are briefly described below to help motivate the decisions made in the design and construction of OTTER.

Control from Local Sensors  Underwater robots working in the open ocean may not have access to global sensors to find their inertial state or the states of the environment in which they are operating. However, local sensors carried on board are often important sources of robot-relative state information.


Table 1. General Characteristics of OTTER.

Dimensions: 2.1 m long, 1 m wide, 0.5 m high
Displacement: 150 kg dry (approx. 450 kg wet)
Drive thrusters: 746 W (1 HP) brushless VR motors, 26 cm propellers, 5:1 gearing
Maneuvering thrusters: 186 W (1/4 HP) brushless VR motors, 15 cm propellers, 3.5:1 gearing
Arm motor: 373 W (1/2 HP) brushless VR motor, 60:1 harmonic drive
Maximum depth: 1000 m (estimated)
Maximum speed: 4 kts (estimated)
Power: Nickel-Cadmium batteries, 750 W-hrs on board
Control: full 6 DOF (roll and pitch statically stable)

One local sensor used extensively by humans, but to a limited extent so far by robots, is vision. Recent advances in vision-processing hardware make it feasible to reduce the vast amount of data inherent in video signals to usable information at rates fast enough for feedback control. Along with new algorithms to distill relative-state information from raw vision-hardware outputs, it is now possible for underwater robots to servo directly off of signals from on-board video cameras.
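As a hedged illustration of the idea, servoing directly off camera signals amounts to closing a feedback loop on a relative-state estimate distilled from the image. The sketch below is not OTTER's vision system; the scale factor, gain, and function name are hypothetical.

```python
# Hypothetical sketch (not the OTTER vision code): servoing a vehicle axis
# directly off a relative-position estimate distilled from on-board video.

def vision_servo_command(target_px, image_center_px, meters_per_px, kp=4.0):
    """Convert a tracked feature's pixel offset into a thrust command [N]."""
    error_m = (target_px - image_center_px) * meters_per_px  # relative state
    return -kp * error_m  # proportional feedback toward the target

# Example: feature tracked 40 px right of image center, assumed 0.01 m/px scale
cmd = vision_servo_command(360, 320, 0.01)
```

In practice the pixel-to-meters scale would come from the stereo pair and camera calibration, and the relative-state estimate would be filtered before feedback.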

"Smart" Sensors and Actuators  The revolutions in the computer and electronics industries have not been fully utilized by underwater robots. In particular, Very Large-Scale Integrated circuit (VLSI) technology can be used to miniaturize digital electronic components and reduce power consumption. Entire microcontroller boards are now small enough to be packaged with each sensor and actuator to create intelligent components.

"Smart-sensor" systems with their own microprocessors can return filtered measurements of the sensed states as well as estimates of their rates of change. "Smart actuators" provide command access at higher levels of control and are able to respond to desired force, velocity, or position commands instead of the usual current or voltage commands. Microprocessors help segment subsystems into individual black boxes such that internal changes within each box do not severely impact the rest of the system. Smart components help separate functionality from implementation, allowing the sensor or actuator expert to work independently from the systems integrator.
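The smart-sensor idea above can be sketched as a component that filters its own raw readings and reports a rate estimate alongside the measurement. This is an illustrative model only; the class name, filter form, and gains are assumptions, not OTTER's firmware.

```python
# Illustrative sketch of the "smart sensor" concept: the component's own
# microprocessor returns a filtered measurement plus a rate estimate,
# hiding the raw hardware behind a small interface. Names are hypothetical.

class SmartDepthSensor:
    def __init__(self, alpha=0.2, dt=0.05):
        self.alpha, self.dt = alpha, dt  # filter gain, sample period [s]
        self.depth = None
        self.rate = 0.0

    def update(self, raw_depth):
        """Low-pass filter the raw reading and estimate its rate of change."""
        if self.depth is None:
            self.depth = raw_depth       # initialize on first sample
        prev = self.depth
        self.depth += self.alpha * (raw_depth - self.depth)
        self.rate = (self.depth - prev) / self.dt
        return self.depth, self.rate
```

The systems integrator then consumes (state, rate) pairs without knowing whether the sensor is a pressure transducer, an inclinometer, or something else.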

Hydrodynamic Interactions  Submersible hydrodynamics has been widely studied, and there exist proven methods to develop good models useful for the control of streamlined underwater vehicles. Many researchers have also worked on the dynamics and control of robotic manipulators.

Currently, most robotic arms mounted on underwater vehicles are teleoperated by humans, reducing the need to understand and quantify the complex hydrodynamic interactions between the arm and vehicle for control. However, for small, untethered submersibles, the impact of unknown hydrodynamic forces acting on the arm in response to its motion can degrade the performance of the combined arm/vehicle system. Models of the hydrodynamic interactions between the arm and vehicle need to be developed and integrated with an automatic control system to produce a robot that truly coordinates arm control with vehicle control.

Human Interfaces  For semi-autonomous robots, the interface through which a human interacts with the remote mechanism is an important component of the system. As much information as possible about the current state of the system must be made available to the operator in order to make goal-oriented decisions.

Since the communication link for untethered, underwater robots is, at best, an acoustic modem, issues in presenting low-bandwidth and time-delayed data become important. Humans can interpret visual information more easily than textual information, so computer-generated graphics can be used to provide the operator with an environment in "estimated reality" that is augmented with stored information accumulated during a mission or known a priori. Possible user interfaces for remote systems span from simple 2D-graphics solutions to full immersion in 3D virtual reality.

Table 2. Summary of OTTER Subsystems.

Structure: aluminum pressure housings; welded steel frame; free-flooding fiberglass shell; removable 1 DOF manipulator; foam and fiberglassed redwood for flotation
Actuation: 8 ducted, oil-filled thrusters; 2 DOF pan/tilt device for underwater cameras; arm motor with harmonic drive
Power: Nickel-Cadmium batteries; DC-DC power converters; relay bank
Sensing: stereo CCD cameras; real-time image processing; fluxgate compass; two-axis inclinometer; Systron Donner MotionPak (3 linear accelerometers / 3 rate gyros); pressure transducer (depth sensor); Sonic High-Accuracy Ranging and Positioning System (SHARPS); manipulator torque and acceleration sensors; leak detectors and battery monitors
Computation (hardware): on-board real-time VME computer; on-board ADC and DIO boards; off-board real-time VME computer; off-board UNIX workstations
Computation (software): VxWorks real-time operating system; ControlShell software framework; virtual-reality user interface; X-Window user interface
Communications: Ethernet or serial link through tether; NDDS communications protocol; serial daisy-chain motor communications network

Robotic Programming  Programming AUVs, or autonomous vehicles in general, is the subject of study for many people working on control/software architectures. Custom software tends to work only with very specific hardware. For robots, hardware consists of both the computer system that is executing the software and the devices that the software is controlling (e.g. motors, sensors, etc.). Even when written modularly, it is almost impossible to exchange software between different hardware platforms without arduous modification.

Different programming languages may be more appropriate at different levels of control. High-level programming languages are evolving away from standard, text-based coding methods toward the paradigm of visual programming, where one creates graphical diagrams of information flowing into and out of reusable processing blocks.

Distributed computation with many processors interconnected by a communications network will inevitably become the backbone of many computer architectures for underwater robots. The challenge of managing the added complexity is the burden that balances the increase in computing power provided by distributed architectures. What has nominally been a research topic in computer-software engineering is now another aspect of robotic design and development.

1.3. Design Philosophy and Principles

To support the various research areas of interest described above, MBARI and ARL have built OTTER, a multi-purpose, submersible research testbed. The overriding goal governing its hardware and software design was to create a flexible framework upon which students can integrate, develop, and test new technologies in a single robotic system. In addition, a goal of the project was to be compatible with systems which had been developed at Stanford University for space robotics and with a new ROV, Tiburon, which was being developed at MBARI.

OTTER's main function as a laboratory test platform, operated by students in a test-tank environment, is reflected in its design. While some components may withstand depths of up to several thousand feet, OTTER was neither designed nor constructed for operations in the harsh environment of the open ocean, nor to be used in actual science and industrial applications. Without the constraint of building ocean-worthy, operational hardware, rapid prototyping of new subsystems results in a faster development cycle to keep pace with the quick evolution of technology inherent in the research environment. In a sense, the design and development of OTTER never ceases. Each new student may add advanced capabilities and hardware, and in the process, disable and remove older ones belonging to prior students and research projects.

Thus, the key attributes that will allow OTTER to survive and evolve through generations of students are the generality, flexibility, and extensibility of its fundamental software/hardware architecture. A generic foundation is key to future extensibility. In a developmental testbed, this is especially important since it is often impossible to predict what may or may not be required by future users. While certain subsystems may be custom built, e.g. sensor arrays and manipulator packages, the framework that interconnects them should be generic. This philosophy is reflected in the choice of the VME bus for the hardware backplane of the computer system and of VxWorks for the real-time operating system. Because both VxWorks and VME are popular in the robotics and other industries, many products and much support exist and are constantly created in response to new technological advances in areas such as microprocessor chips. With these choices, OTTER's core architecture is not in danger of becoming obsolete, and new technologies developed outside the domain of the underwater robotics program can be quickly and easily integrated.

In terms of the overall physical configuration of the system, OTTER was designed to be handled by two people, which is desirable from a logistical viewpoint. Many of the science applications driving the technology research require stationary in situ operations, necessitating precision position control and the ability to maneuver at low speeds. The general mission scenario conceived for this class of SAUVs consists of a transit phase from a home base to the site of operations, where the SAUV would perform some specific task under human direction, and finally, a transit home. Thus, the need to control six degrees-of-freedom during station-keeping operations drove the chosen thruster configuration, while the general geometry of OTTER and the shape of the hydrodynamic fairing were selected to optimize motion along one degree-of-freedom during transit.

What have not been major concerns in OTTER's design, so far, are power storage and consumption. Increased energy density of high-technology batteries and other forms of on-board energy production, such as fuel cells, have not been a focus of the research. Advances in that arena are being pushed by other industries, like electric vehicles, which have invested heavily in their development. The decision was made to use the most powerful, easily accessible, off-the-shelf computers and software systems available, even if they used more energy than optimal, because computers and electronics are advancing rapidly. It is assumed that by the time the technologies developed on OTTER are ready to be moved to operational systems, high levels of computational capability will be available at low power.

2. OTTER Testbed

The OTTER testbed was designed and built at MBARI's facility in Moss Landing, California. OTTER is the third-generation testbed designed by the program. Many design decisions on OTTER resulted from lessons learned during the operation of the first two platforms. Even though OTTER mainly operates in a test-tank environment, OTTER's major structural components, e.g. housings for electronics, batteries, and thrusters, have been designed and tested to withstand depths up to 1000 meters. In addition to providing power to trickle charge the on-board batteries, a tether is used to facilitate development and testing. Although an acoustic modem is scheduled to be installed on OTTER, the maturity level of high-speed acoustic communications is such that an acoustic modem cannot be used reliably without further development by experts in the technology. However, the high-level control of OTTER was designed to accommodate a limited-bandwidth and time-delayed link. OTTER has been operated with a serial link running at 38.4 kBaud (state-of-the-art acoustic modems have operated at rates up to 20 kBaud). A test tank at MBARI's Moss Landing facility is used for testing and operation of the OTTER system. The tank is 11 meters in diameter and 3 meters high and provides a benign environment to demonstrate and develop many of the advanced technologies being studied.

Fig. 2. Components of the OTTER Robotic System: on-board MVME147 and MVME167 VME computers with A2D, disk, and Datacube/Teleos vision hardware; eight thrusters (6 x 186 W, 2 x 746 W) with integral HC11s; a stereo pan/tilt unit and 1 DOF arm; pitch/roll/heading, depth, MotionPak, and SHARPS sensing; 750 W-hrs of NiCads (0.25 A, 3 A max, at 165 V through DC/DC converters); and off-board HP 735, Sun Sparc2, and SGI Indy workstations linked by Ethernet and RS232 through a tether carrying Ethernet, power, and one video channel.

Fig. 3. OTTER without top shell, showing the redwood flotation, stainless-steel frame, main housing, PVC thruster ducts, computer chassis, and battery housings.

Fig. 4. Removable rear propulsion unit.

Fig. 5. (a) shows the computer chassis attached to the endcap; (b) shows the endcap with tether, junction box, and video connectors.

Fig. 6. (a) shows the oil-filled, electrical junction box and (b) shows the oil pressurization and compensation device.

Table 1 contains some general system parameters and characteristics, and the major subsystems that comprise OTTER are listed in Table 2. Figure 2 shows a schematic of the subsystems.

2.1. Structure

OTTER's main structural element is a cylindrical pressure housing, 0.36 m in diameter by 1.25 m long, made of aluminum (Figure 3). The main housing holds the on-board computer systems and many of the sensors. Two smaller aluminum cylinders, 0.12 m in diameter by 1.25 m long, hold the NiCad batteries and are mounted underneath the main housing. Surrounding the pressure housings are eight ducted thrusters attached to a welded stainless-steel frame which runs the length of the vehicle. The ducts are made from PVC piping. Covering the vehicle is a two-piece fiberglass shell that provides a fairing for improved hydrodynamics. Fiberglass-covered redwood pieces and foam mounted within the shell are used to trim the vehicle's buoyancy. Because the shell is free-flooding, the effective mass and inertia of the robot underwater are significantly higher than in air.

The main-drive thrusters and aft-lateral thruster form a removable unit that provides access to the main housing (Figure 4). The main housing consists of a hollow cylinder with two endcaps. Two 6U VME cages are mounted in-line to form a computer chassis attached to the rear endcap (Figure 5). The entire endcap/chassis unit can be removed for benchtop operation. The tether attaches to the endcap through an underwater connector, providing power connections for battery charging, two twisted pairs used for serial communications or video, and a BNC for video or Ethernet. Connectors for two underwater video cameras also penetrate through the rear endcap.

Analog and digital signals and power lines of various voltages also pass from the main housing to an electrical "junction" or breakout box mounted externally (Figure 6). Signal and power wires are connected to each individual actuator as well as to the batteries. The wire bundle connecting the box to the main housing and all of the actuator wire bundles are encased in Tygon tubing filled with mineral oil. An oil-compensation system pressurizes all of the tubes, actuators, and the junction box. The non-conductive oil acts to relieve the external pressure on the wire bundles and motor housings while underwater. Even though operating with a pressurized-oil system can be rather messy, the benefits of such a system, should there be a leak, are reflected in its widespread use on operational underwater systems.

2.2. Actuation

In performing basic research in "smart" actuators, the MBARI/ARL program has designed and constructed an actuator with custom motor-control electronics integrated with a brushless DC motor. All of the actuators on OTTER (thrusters, pan/tilt, manipulator) use the same motor electronics with motors of different sizes. The motors themselves are based on variable-reluctance (VR) technology. These low-cost motors are brushless motors that do not have any permanent magnets and provide high power-to-weight and high torque-to-inertia ratios.

Fig. 7. Components of the motor electronics stack. In (a), four individual electronics boards and a motor make up an actuator. (b) shows an assembled motor stack. A completed thruster is shown in (c). The same motor electronics are also used for the pan/tilt and manipulator actuators.

Integrated Microprocessor Stack  The actuator modules integrate the motor, the motor control, and the communication logic into one unit. Four components form each motor electronics stack (Figure 7): a microprocessor/digital electronics board, an analog-amplifier electronics board, a DC-DC power-converter board, and a power-amplifier board. All of the electronics have been pressure-rated to 1000 meters because the electronics operate immersed in mineral oil for pressure compensation of the housing. Since optical encoders do not work well in oil, a custom capacitive encoder is used to determine motor shaft position, both to commutate the motor and to provide position and velocity information for closed-loop control. Pressure compensation also allows a low-pressure seal to be used, resulting in much lower seal friction at the shaft.

A Motorola HC11 microprocessor and an Altera Application-Specific Integrated Circuit (ASIC) are used in combination to control the VR motors. Altera's ASIC chip is an example of VLSI user-programmable chip technology newly available to robotics researchers. All of the logic necessary to decode position and velocity of the motor and to perform commutation is implemented using the ASIC. ASICs are programmable logic devices that can implement the logic from hundreds of standard integrated circuits in one user-programmable chip. One can submit a digital-circuit diagram, boolean-logic equations, basic TTL circuits created with a graphical interface, or even a matrix of desired outputs for given inputs as specifications to a compiler that automatically creates the logic burned into an ASIC.

For control of the actuator, the Altera is used to convert the encoder signals from the motor into position and velocity values that are passed on to the HC11. The HC11 runs various closed-loop controllers and sends torque commands back to the Altera, which does the actual commutation. The HC11 is programmed to control the motors in one of three modes: open-loop torque, closed-loop velocity, and closed-loop position. A fourth controller based on force feedback has been used in experiments on a test stand but has not yet been implemented on OTTER.
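The three HC11 controller modes can be summarized in a small dispatch sketch. The structure follows the description above (open-loop torque, closed-loop velocity, closed-loop position), but the gains and the PD form of the position loop are illustrative assumptions, not the actual HC11 firmware.

```python
# Hedged sketch of the three motor control modes described above.
# Gains kp, kv and the PD structure are illustrative, not the real code.

def motor_torque_command(mode, setpoint, pos, vel, kp=2.0, kv=0.5):
    """Return a torque command for one of the three controller modes."""
    if mode == "torque":     # open-loop: pass the torque straight through
        return setpoint
    if mode == "velocity":   # closed-loop velocity
        return kv * (setpoint - vel)
    if mode == "position":   # closed-loop position with rate damping (PD)
        return kp * (setpoint - pos) - kv * vel
    raise ValueError(mode)
```

In the real system the resulting torque command is handed to the Altera logic, which performs the commutation.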

The version of the HC11 that is used has only 2 kBytes of on-chip EEPROM to store its programs, and thus software efficiency is extremely important. Therefore a simple, but effective, custom serial protocol was created for the control computer to communicate with each thruster at 31.25 kBaud. Three daisy-chained, full-duplex serial loops are used. Two of the loops each connect four thruster motors with the control computer, and the pan/tilt and arm motors are linked by the third. The control computer acts as the loop master and sends commands sequentially to each motor on the loop. Every HC11, and therefore actuator, has a unique identifying number burned into EEPROM that can be considered its "address" on the loop. As soon as an actuator has determined that a packet is being sent on the serial loop for itself, the actuator immediately sends back a packet of data containing information such as motor velocity and position. Thus, an actuator only talks on the serial line when it is spoken to. In the custom communications protocol, the first byte of each packet is the id of the destination actuator. With this scheme, an actuator can start its reply to the loop master delayed only by the length of time it takes to transmit one character. Packets can be of arbitrary length, but are typically only six bytes long.

Fig. 8. The port-drive thruster is shown in (a), and the aft-lateral thruster is shown in (b). The propeller mounted in (a) is modified from a trolling motor propeller, while the one in (b) was originally designed for model aircraft. Also, the SHARPS transceiver is visible in (a).

Theoretically, 390 six-byte packets per second can be sent by the master of the loop at 31.25 kBaud, with a two-character time delay to delineate packets. The thrusters are controlled at 20 Hz, the pan/tilt at 30 Hz, and the manipulator motor at 230 Hz, with the increasing sample rates indicative of the increasing bandwidth of the dynamics of each system.
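The addressing rule and the packet-rate arithmetic above can be modeled in a few lines. Assuming 10 bit-times per character (8 data bits plus start and stop bits, an assumption not stated in the text), 31.25 kBaud carries 3125 characters per second; a six-byte packet plus a two-character gap occupies eight character-times, which reproduces the roughly 390 packets per second cited. The helper names are hypothetical.

```python
# Illustrative model of the daisy-chain protocol: the first byte of each
# packet is the destination actuator's id, and an actuator replies only
# when addressed. Packet layout beyond the id byte is assumed.

def addressed_to(packet: bytes, my_id: int) -> bool:
    """An actuator speaks only when the packet's first byte matches its id."""
    return len(packet) > 0 and packet[0] == my_id

def packets_per_second(baud=31250, packet_bytes=6, gap_chars=2,
                       bits_per_char=10):
    """Loop throughput: chars/sec divided by packet length plus framing gap."""
    chars_per_sec = baud / bits_per_char          # 3125 chars/sec at 31.25 kBaud
    return chars_per_sec / (packet_bytes + gap_chars)
```

As a sanity check against the cited sample rates: two loops of four thrusters at 20 Hz use 80 packets/sec each, and the third loop's two pan/tilt motors at 30 Hz plus the arm at 230 Hz use 290 packets/sec, all within the roughly 390 packets/sec budget.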

Thrusters  Propulsion is provided to the vehicle by two drive thrusters and six maneuvering thrusters (Figure 8). The drive thrusters consist of 746 W (1 HP) VR motors with 5:1 planetary-gear reducers. The maneuvering thrusters use smaller 186 W (1/4 HP) VR motors with 3.5:1 planetary-gear reducers. The drive thrusters are mounted in the horizontal plane at the rear of the vehicle and are intended to provide the high-powered propulsion required to move rapidly in the forward direction during transit operations. Also mounted in the horizontal plane are the fore-lateral and aft-lateral thrusters, which together with the drive thrusters are used to control the yaw (Ψ) and the x, y positions of the vehicle (see Figure 9). The remaining four maneuvering thrusters are mounted in the vertical direction with one on each "corner" of the vehicle. These thrusters provide control of the pitch (θ), roll (Φ), and vertical (z) degrees-of-freedom of the vehicle.
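A minimal sketch of how such a thruster layout allocates commands: the four vertical corner thrusters jointly provide heave force plus pitch and roll moments. The lever arms, sign conventions, and function name below are illustrative placeholders, not OTTER's actual geometry or control code.

```python
# Hypothetical allocation sketch for four vertical corner thrusters at
# (+-lx, +-ly) from the vehicle center: split a heave force and pitch/roll
# moments across them. Geometry values are illustrative, not OTTER's.

def allocate_vertical(fz, pitch_moment, roll_moment, lx=0.8, ly=0.4):
    """Return forces for the four corners: (fore-port, fore-stbd,
    aft-port, aft-stbd), under an assumed sign convention."""
    out = []
    for sx, sy in [(+1, -1), (+1, +1), (-1, -1), (-1, +1)]:
        out.append(fz / 4.0
                   + sx * pitch_moment / (4.0 * lx)   # fore/aft lever arm
                   + sy * roll_moment / (4.0 * ly))   # port/stbd lever arm
    return tuple(out)
```

A pure heave command splits evenly across the four thrusters, while a pure pitch moment loads the fore pair against the aft pair.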

Experiments have shown good correlation between thrust produced and propeller angular-velocity squared in steady conditions (see Figure 10). Taking advantage of this relationship, a calibration curve for each thruster describing the relationship between thrust and the square of the output-shaft rotation rate is generated. These thrust/velocity relationships are used in the vehicle control to map commanded thrusts to commanded motor velocities sent to the thrusters. Currently, the resolution of the velocity command is eight bits.

Fig. 9. The OTTER Coordinate System. (X, Y, Z) are inertial linear coordinates with the X vector pointing North and Z pointing down. (Ψ, θ, Φ) are the ZYX Euler angles that represent OTTER's orientation with respect to the global system. (x, y, z) are body-fixed coordinates.

Fig. 10. Typical Thrust vs. Velocity Curve for the forward lateral thruster (thrust T in N, velocity command V in revs/sec): T(pos) = 0.1255 V^2, V(pos) = 2.815 T^0.5; T(neg) = -0.04922 V^2, V(neg) = -4.495 T^0.5. Note the high asymmetry of the thrust curve for the positive and negative directions and the low saturation levels. For each thruster, a calibration map is used to convert commanded thrust to commanded velocities.
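Using the Figure 10 coefficients, the calibration map from commanded thrust to commanded velocity can be sketched directly; the asymmetric positive and negative fits are inverted separately. The function name is illustrative, and only the forward lateral thruster's published coefficients are used.

```python
import math

# Sketch of the thrust-to-velocity calibration mapping, using the forward
# lateral thruster coefficients from Figure 10 (T in newtons, V in revs/sec).
# The asymmetric positive/negative fits are handled separately.

def thrust_to_velocity(T):
    """Invert T = k*V^2 per direction, i.e. V = c * |T|^0.5, signed."""
    if T >= 0:
        return 2.815 * math.sqrt(T)    # V(pos) = 2.815 T^0.5
    return -4.495 * math.sqrt(-T)      # V(neg) = -4.495 |T|^0.5
```

In the vehicle control loop, the result would then be quantized to the eight-bit velocity command resolution mentioned above.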


Fig. 11. Underwater cameras on a pan-tilt mechanism.

Experiments have also been conducted in using duct-mounted strain gages to provide feedback for local, closed-loop force control. Although preliminary results show great promise for this method of control, a new mechanical design will be required to provide a robust and clean force signal for feedback control.

OTTER uses two types of commercial, off-the-shelf propellers: one designed for small marine trolling motors, and the other for model aircraft (see Figure 8). These propellers have been shaved to fit within the thruster ducts. Unfortunately, these propellers are highly asymmetric and optimized to provide thrust in a single direction. Although taking the thrust asymmetry into account through a mapping function helps the performance of the control system, the combination of these propellers and the thruster motors is suboptimal both in terms of efficiency and maximum producible thrust.

Pan/Tilt The pan/tilt device controls the direction of the optical axes of OTTER's stereo camera pair. The unique mechanism uses two 186 W (1/4 HP) VR motors that cooperatively drive, through 60:1 harmonic-drive reducers, a single cable that controls both the pan and tilt degrees-of-freedom (Figure 11). When the motors turn in the same direction, tilt motion is achieved. Driven differentially at the same speed, the motors produce pure pan motion. The pan/tilt motors can be controlled in a closed-loop position mode by commanding 16-bit positions or in a velocity mode.
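The cooperative drive amounts to a sum/difference mapping between the two axis rates and the two motor rates. A sketch of that idea (the sign convention and the placement of the 60:1 reduction are assumptions for illustration, not taken from the paper):

```python
GEAR_RATIO = 60.0  # harmonic-drive reduction (60:1)

def pantilt_to_motors(pan_rate, tilt_rate):
    """Map pan/tilt axis rates to the two motor rates.

    Common-mode motor motion produces tilt; differential motion
    produces pan (sign convention assumed for illustration).
    """
    m1 = GEAR_RATIO * (tilt_rate + pan_rate)
    m2 = GEAR_RATIO * (tilt_rate - pan_rate)
    return m1, m2

def motors_to_pantilt(m1, m2):
    """Inverse map: recover axis rates from the motor rates."""
    tilt_rate = (m1 + m2) / (2.0 * GEAR_RATIO)
    pan_rate = (m1 - m2) / (2.0 * GEAR_RATIO)
    return pan_rate, tilt_rate
```

The two maps are exact inverses, so a controller can command axis rates while telemetry reports motor rates.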

Fig. 12. Single-link manipulator mounted on OTTER.

The pan/tilt device is integrated as part of the front-lateral thruster duct to keep the vehicle as compact as possible. Because of this, the tilt degree-of-freedom is accomplished by rotating the camera around the duct. This leads to more of a "nod" effect than a true tilt, resulting in a translation of the image planes when tilting the cameras. Because the stereo arrangement offsets each camera from the pan axis, the image planes also translate slightly while panning. These translations must be taken into account by the vision system (Section 2.4).

Manipulator To support coordinated-arm/vehicle-control experiments, a single-link arm can be mounted at the front of OTTER (Figure 12). The arm is 7.1 centimeters in diameter and 1 meter long. This arm, being used for initial experiments, has the same general physical characteristics as a prototype, all-electric, 4-DOF manipulator that has been designed for OTTER for use in future research in automated manipulation. The single-link arm is mounted at the fore-port corner of the vehicle frame and tilted down at an angle of 60 degrees from the horizontal plane. This configuration was chosen because it places the arm in the region most likely to be the workspace of the 4-DOF manipulator. With the arm mounted in this way, all of the vehicle's degrees-of-freedom are affected by the hydrodynamic forces generated by the arm as it moves. The actuator used by the arm is a 373 W (1/2 HP) VR motor that drives the arm through a 60:1 harmonic-drive reducer.


Fig. 13. On-board NiCad batteries. The 60 D-sized NiCad cells in each canister provide a total of 750 W-hrs of on-board power.

Measurements of output-shaft torque and lift moment at the arm hub are sensed using a specially-designed beam element connecting the output shaft to the base of the arm. This beam element is outfitted with two full strain-gage bridges, one sensitive to moments about the axis of the drive shaft, and the other sensitive to bending moments about an axis through the hub and normal to both the arm and drive-shaft axes. These sensors measure the output-shaft torque due to in-line hydrodynamic forces and the lift moment due to transverse forces generated from the shedding of vortices.

2.3. Power

Two battery canisters, each holding six packs of ten D-sized nickel cadmium (NiCad) cells, are mounted on board OTTER, providing 750 watt-hours of power at 160 volts DC (see Figure 13). Each of the 120 D-cells stores 5 amp-hrs at 1.33 volts. Mechanical relays are switched on to engage the battery system, and under-voltage and over-temperature sensors built into each battery pack are monitored by the control system.

The tether is used to trickle charge the NiCads at 165 volts and 0.25 amps. In the current configuration, the tether is also directly connected to the DC-DC converters powering the main computer cage. The main computer controls the battery relays as well as a bank of solid-state relays that provide power to the rest of the vehicle. When the main computer is booted, the control program can switch individual relays on and off through digital I/O. This configuration provides a large margin of


Fig. 14. Schematic diagram of OTTER's power distribution system. In the tethered configuration, power from the tether directly drives the control computer cage. Once the control computer is booted, the relays that engage the batteries and send power to the other computer cage and sensor systems can be switched on. If tether power should fail, all relays switch off and the batteries disengage automatically.

safety during tethered experiments. When power to the tether is cut off, the main computer shuts off automatically, causing all of the relays to open and disconnecting the batteries. To remove the tether, a slight modification will be made to the design so that the batteries can be engaged by an external switch.

Internally, Vicor high-efficiency, solid-state DC-DC converters are used to change the 160 volts provided by the batteries/tether to ±5, 12, and 15 volts for on-board computers and electronics (see Figure 14). In addition to converting voltages, the Vicors electrically isolate different subsystems from one another. The increased safety provided by the electrical isolation of subsystems cannot be overstated. All signals through the tether are isolated either through isolation power supplies, video isolation transformers, or Ethernet transceivers. By isolating all sources of power from each other and from earth ground, it takes two faults or shorts to create a potentially dangerous current path. Ground-fault detectors are used to monitor the integrity of the electrical system.


[Diagram: video feeds a filter/correlator stage producing stereo disparities, optical flow, and optical displacements, which a robot-state calculator turns into a scene-relative or object-relative robot state.]

Fig. 15. Schematic diagram of OTTER's real-time vision system.

2.4. Real-Time Vision System

In the real-time vision system, video signals are digitized and stored in frame buffers. Buffered images are then filtered and correlated. The vision hardware provides correlations between sections of any mix of two live images or stored images at 30 Hz. From the correlations, algorithms have been developed to find positions and velocities of objects or the background relative to the cameras (Marks, 1995). Figure 15 shows the information flow for the vision-sensing system.

The real-time vision-sensing system physically consists of two CCD video cameras, a mechanical pan/tilt unit, several image-processing boards, a real-time computer, and a UNIX workstation. The UNIX workstation acts as a user interface to the vision system. The image-processing boards and real-time computer reside on board OTTER in a VME card cage and perform all of the vision-sensing computation.

Cameras and Video Digitization Hardware Two Pulnix 840-N, low-light CCD cameras are housed in pressure vessels mounted in a stereo configuration on the pan/tilt device at the front of OTTER (see Section 2.2 and Figure 11). These cameras are gen-locked and produce RS-170 video streams (black-and-white, interlaced, 30-Hz frame rate). The horizontal field-of-view of each camera is 52 degrees in air and is reduced to approximately 36 degrees during underwater operations due to the change in refractive index.
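The in-water field-of-view reduction follows from Snell's law applied at a flat port. A quick check, assuming a refractive index of about 1.33 for water and an idealized flat-port geometry:

```python
import math

def underwater_fov_deg(fov_air_deg, n_water=1.33):
    """Reduce an in-air horizontal FOV using Snell's law at a flat port.

    A ray at half-angle theta_a in air refracts to a half-angle
    theta_w in water with sin(theta_w) = sin(theta_a) / n_water.
    """
    half_air = math.radians(fov_air_deg / 2.0)
    half_water = math.asin(math.sin(half_air) / n_water)
    return 2.0 * math.degrees(half_water)
```

For the 52-degree cameras this idealized model predicts a little over 38 degrees; the approximately 36 degrees observed also reflects the actual port and housing geometry.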

Image digitization is accomplished using a Datacube DIGICOLOR color digitizer and display device. The DIGICOLOR is used to simultaneously digitize two RS-170 (NTSC) monochrome, analog-video streams. The resulting digital-image frames are 512 by 484 pixels with an 8-bit intensity level. The DIGICOLOR is also used to display processed video with the vision-processing results overlaid on top of the image. Images produced by the DIGICOLOR can be sent up the tether to the operator via a twisted pair.

Digital-image frames are stored using a Datacube FRAMESTORE board. The FRAMESTORE is capable of simultaneously storing and outputting three digital-image frames. The FRAMESTORE is used during station keeping and mosaicking to hold the reference images against which live video is correlated (see Section 3.1).

Special Image Processing Hardware The two image-processing boards, known as the PRISM3 system and made by Teleos Research, Inc., are used for filtering and correlation. The first board, the Convolver, filters two digital-video streams at frame rate (30 Hz) using a signum of the Laplacian-of-Gaussian filter (Marks, 1995), (Marks et al., 1994a).

The second board, the Correlator, computes correlations between filtered-image frames. The Correlator is extremely fast: 36 correlations of a 32-pixel-square area require less than 100 microseconds. Correlations from the Correlator are processed using a Motorola MVME-167, a 68040-based, single-board computer. The disparity, or change in position of identical features between two images, is used in algorithms to calculate positions or velocities of objects or the displacement of the cameras, and therefore of the robot, from a previous location.

When the two images are from a stereo pair, the disparity between the images is a measure of range to what the cameras see. If one image is held static, then the disparity between that image and a live image corresponds to the displacement of the robot from where the static image was recorded. The disparity between two images from a single camera taken successively in time relates to relative-velocity information.
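For the stereo case, disparity maps to range through the standard pinhole-camera relation. A minimal sketch (the baseline and focal-length values in the example are illustrative placeholders, not OTTER's actual calibration):

```python
def range_from_disparity(disparity_px, baseline_m, focal_px):
    """Estimate range from stereo disparity with the pinhole model.

    Z = f * B / d, with focal length f in pixels, baseline B in
    meters, and disparity d in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_px * baseline_m / disparity_px
```

For example, a 20-pixel disparity with a 0.3 m baseline and a 600-pixel focal length gives a range of 9 m; the inverse relationship means range resolution degrades quadratically with distance.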

This vision hardware is another example of VLSI technology enabling capabilities previously achievable only in non-real-time software simulations.

Graphical Interface to Vision System Key parameters that affect vision-sensing-system functionality and performance are viewed and can be modified at run time using a graphical interface


running on an HP-735 UNIX workstation. Parameters that can be changed include the vision-sensing mode, Laplacian-of-Gaussian filter width, correlation thresholds, display characteristics, and tracking and mosaicking controls.

2.5. Motion, Position and Attitude Sensors

In order to sense and control the position and attitude of the vehicle, a variety of devices are used. Integration of the redundant information provided by the various sensors to produce a best estimate of vehicle state is an on-going topic of research.

Heading (Yaw) A KVH-Industries ROV-1000 fluxgate compass provides measurements of heading. The compass has an accuracy of 1 degree and a resolution of 0.1 degrees in a magnetically-clean environment. However, the heading signal is corrupted slightly (about ±2 degrees of noise) by electromagnetic noise from the power electronics and cooling-fan motors within the main pressure housing. The ROV-1000 is calibrated by running a built-in program that maps the local magnetic environment. The heading is calculated internally at 10 Hz and can be provided digitally or through an analog signal.

The analog signal wraps around between the maximum and minimum output voltages when the compass passes through 0 degrees heading. Because this signal is continuous and the wrap-around takes some time to complete, an automatic control system can interpret the wrap-around as a rapid change in heading and erroneously pulse the thrusters controlling yaw. However, "de-glitching" logic built into the software filters the yaw state to provide a smooth transition through 0 degrees.
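One common form such de-glitching logic can take is to unwrap successive heading samples so the yaw state never sees a near-360-degree jump. A sketch of that idea (not the paper's actual implementation):

```python
def unwrap_heading(prev_unwrapped_deg, new_measured_deg):
    """Accumulate heading continuously across the 0/360 boundary.

    Chooses the representation of the new measurement closest to the
    previous unwrapped value, so a 359 -> 1 degree transition reads
    as +2 degrees rather than -358 degrees.
    """
    delta = (new_measured_deg - prev_unwrapped_deg) % 360.0
    if delta > 180.0:
        delta -= 360.0
    return prev_unwrapped_deg + delta
```

The controller then regulates the unwrapped angle, and the thrusters see only the small true heading change at the crossing.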

Pitch/Roll The pitch and roll of OTTER are measured using a two-axis inclinometer made by Spectron Systems Technology, Inc. This unit provides analog measurements of pitch and roll with a range of ±45 degrees from horizontal and a resolution of 0.01 degree. The inclinometers produce clean pitch and roll angles, but introduce their own dynamics at higher frequencies (> 5 Hz).

Linear Acceleration and Angular Rates The MotionPak sensor package from the Systron Donner Inertial Division of BEI Electronics Company uses three quartz-flexure accelerometers and three solid-state gyros to provide inertial measurements of linear acceleration and angular rate. The linear accelerations are not being used at this time, although future research may examine the possibility of fusing these measurements with those from the vision-sensing system or a long-baseline, acoustic position system to provide linear-positioning information at high bandwidths. The angular rates from the MotionPak are used to provide damping when controlling attitude.

The accelerometers have a resolution of 1 micro-g with a range of ±5 g's. The rate gyros have a resolution of 0.002 deg/sec and a range of ±100 deg/sec. However, the biases of the signals are highly temperature dependent and thus drift over time. The accelerometers are more adversely affected than the rate gyros. Rate-gyro calibration is done periodically during experiments by measuring the biases of the signals with the vehicle motionless.
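A bias calibration of the kind described, averaging rate-gyro samples while the vehicle is held motionless and subtracting the estimate from subsequent readings, can be sketched as follows (the sampling window length and update policy are assumptions):

```python
def estimate_gyro_bias(samples_deg_s):
    """Average rate-gyro readings taken while the vehicle is motionless."""
    if not samples_deg_s:
        raise ValueError("need at least one sample")
    return sum(samples_deg_s) / len(samples_deg_s)

def correct_rate(raw_deg_s, bias_deg_s):
    """Subtract the most recent bias estimate from a raw gyro reading."""
    return raw_deg_s - bias_deg_s
```

Because the bias drifts with temperature, the estimate must be refreshed periodically rather than measured once per deployment.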

Depth OTTER's depth can be inferred from hydrostatic-pressure measurements from a pressure transducer. The pressure measured is directly proportional to the depth below the surface. This value from the sensor is used instead of the depth value from the SHARPS position-measurement system (see below) for two reasons. First, because of its simplicity, the pressure transducer is more reliable. Also, the sensor is more accurate, especially when the vehicle is operating close to the plane of the SHARPS transponders. Since the depth sensor is an analog device, measurements can be obtained at the full 100 Hz sample rate of the vehicle controller rather than the 3 Hz update rate available from SHARPS.

Position The inertial X, Y position of OTTER is measured using SHARPS (Sonic High Accuracy Ranging and Positioning System) made by the Marquest Group, Inc.2 SHARPS is a long-baseline acoustic system which uses three inertially-fixed acoustic transducers to find the position (X, Y, Z) of a fourth. Three transducers are mounted around the perimeter of the test tank, and a fourth


[Diagram: three-layer hierarchy. Organizational level: graphical user interface. Task level: task controller and task engine running finite-state machines (FSMs), linked to the other levels by network communication. Servo level: a fixed-rate sample loop of sensors, filters/control laws (e.g. LQG), and actuators through A2D and D2A conversion.]

Fig. 16. OTTER Control Architecture.

is mounted on OTTER. SHARPS can be deployed in open waters over a 100 meter range.

The SHARPS transducers are connected to an IBM PC compatible through separate umbilicals.3 The position of the transducer on OTTER is calculated by the PC and sent via a serial line to a real-time computer system that broadcasts the position over Ethernet to the on-board computers and off-board graphical user interfaces. SHARPS provides position updates at about 3 Hz with two centimeters of accuracy. However, because the Z signal from the pressure sensor is continuous and more robust, only the X and Y values from the SHARPS system are used by the control system.

2.6. Control Architecture

OTTER's control architecture allows it to be controlled at the task level using high-level commands such as "go there" or "track that object", which the robot is able to execute automatically. The control architecture developed for OTTER, called "Task-Level Control" (TLC), is based on similar architectures used in ARL's other experimental robotic platforms (Schneider and Cannon, 1993), (Ullman et al., 1993).

This architecture, discussed in greater detail in (Wang et al., 1993), divides the control of a remote system into a three-layer hierarchy (Fig-


Fig. 17. On-board VME cages.

ure 16) similar in organization to the one described by Saridis (Saridis, 1979). At the lowest layer, or servo level, classical control theory is employed to design control laws that take in sensor signals and output motor control at a fixed sample rate. The encoding of logical and sequential actions that define a specific task takes place at the middle layer, or task level. Tasks are "programmed" using finite-state machines (FSMs). FSMs generally modify the control laws and signal-flow paths of the servo layer to achieve different robot behaviors or control modes.

The top level, or organizational level, is where the human or other higher intelligent agents sequence and monitor task execution to achieve mission goals. Typically, the entities at the top layer must interpret processed sensor data to form a judgment of the progress of the robot during mission execution and adjust robot actions to accommodate in-situ anomalies as they occur. In the OTTER control architecture, the human interacts with the remote system at the organizational level via graphical displays and virtual-reality interfaces (see Section 2.7).

Computer Hardware The control of OTTER is distributed among a network of on-board, real-time computers. The user interacts with the on-board computers through a variety of UNIX workstations including Suns, HPs, and SGIs. An off-board, real-time computer serves both as a


communications relay between the robot and the workstations and as a port for analog manual inputs such as joystick controllers.

Two independent, on-board VME card cages are used for robot control and vision processing (see Figure 17). Each cage has a Motorola MVME-167 (68040, 25MHz) single-board computer with an attached hard disk (200MB). In addition, OTTER's control VME cage has an analog/digital board with 16-bit analog-to-digital converters and digital I/O. A serial-interface board for actuator communications and an analog-filter board for sensor signals also reside on the same bus. The processor in the second on-board VME cage controls both the manipulator and pan/tilt motors and also runs the vision-processing algorithms mentioned in Section 2.4. The computers in the two cages are connected via Ethernet.

A third real-time processor is used in an off-board VME cage. During operations when communication with OTTER is performed serially, this processor acts as the gateway between topside and on-board computers running Serial Line Internet Protocol (SLIP) at 38.4 kBaud. This processor, an MVME-147 (68030, 20MHz), also uses a 12-bit analog-to-digital converter board to read joystick inputs for sending manual commands to OTTER when it is running in manual mode. However, the main function of this processor is to take serial data from the SHARPS PC (see Section 2.5) and broadcast vehicle-position information over the network.

A Sun Sparc 2, an HP 735, and an SGI Indy comprise the suite of UNIX workstations that are used both for developing software and for running the graphical interfaces through which users command OTTER. Real-time code can be developed on any of the machines using cross compilers. Although 3D graphical user interfaces (GUIs) exist for all of the workstations, the implementation of each GUI is quite different due to the incompatibility of the accelerated graphics hardware/software provided by the different computer manufacturers (see Section 2.7).

By using a variety of computer architectures, the program remains compatible with the development environments used by various research organizations, including the ones forming this program. The interoperability of the software produced by

this program makes the results more readily transferable to other research groups. There are also reasons for choosing one type of platform over another. For example, advanced graphics research is typically done with Silicon Graphics computers because they are optimized for that purpose.

Control Software To implement the Task-Level Control architecture described earlier, a set of software tools has been assembled to create a TLC system. If one views the definition of the control/software architecture as a blueprint of the robot control system, then the toolset serves not only as the instruments with which the system will be created, but also as the framework that connects and supports the different software components which make up the system.

Developers using a set of well-designed development tools benefit from the corporate knowledge that the tools can provide. When a person, perhaps a new member of the research team, wants to learn and possibly create or modify an existing robot control system, the typical approach is to read and interpret massive amounts of code that someone else has already written. Alternatively, with a set of software tools, a person first learns how to use the tools and how they are applied to create software components that conform to the blueprint (control architecture). In the beginning, the details of the components themselves, e.g. a Butterworth filter or a PD control law, are not as important as how to create the components and how to interconnect them. Later, new components are added that are usually a part of the results of a person's fundamental research (e.g. adaptive, neural-net controllers, vision-processing blocks, etc.).

By using a common software framework, independent researchers working on the same robotic platform share the components that implement the basic robotic functions. When a hardware and/or software change is made to the fundamental configuration, this change is automatically propagated to all of the users of the robot just by modifying the shared components. In addition, the new capabilities added by individuals through their research are easily merged together to create more complex applications, since the software components that implement each capability already fit together in the same framework.


[Diagram: desired and actual body-fixed states feed individual PID controllers (xPID, yPID, zPID, rollPID, pitchPID, yawPID); their feedback commands are summed with a feedforward command, passed through a thruster map and configuration matrix, and converted into individual thruster commands using the per-thruster calibration gains.]

Fig. 18. Data flow diagram for OTTER thruster controllers.

Real-time Operating System The most basic piece of the toolset is a real-time operating system. Simple robotic systems may be controlled using only a single program that implements all of the logic, sensor processing, safety monitoring, control laws, and motor drivers. However, as systems become more complex, concurrently executing processes running on one or more processors need to communicate with each other, synchronize their execution, and share data. Device drivers are required to separate what services a specific piece of hardware provides (e.g. serial data inputs and outputs) from how that piece of hardware provides them (e.g. control registers, interrupt service routines). Since a physical system is to be controlled, real-time constraints must be satisfied, and the control software must be efficient and execute deterministically. A real-time operating system provides many of these services that robot control programs need.

OTTER uses VxWorks, a popular multi-tasking, real-time operating system within the general robotics community, as the foundation upon which low-level control loops and high-level

coordination logic are built. VxWorks has a good network interface compatible with UNIX workstations and thus can support distributed control applications like the OTTER controller. Using VxWorks, the user can literally "login" to the on-board computer systems. Programs can be loaded off of local SCSI hard disks or off of a workstation's disk that has been NFS mounted.

ControlShell Framework On top of VxWorks, a large set of associated tools, based on past work done by students at ARL, is used as the basic framework to create TLC controllers for OTTER. Real-Time Innovations, Inc., is a company that was created to continue the development of software tools first written to support Ph.D. research (RTI, 1995). ControlShell is a Computer-Aided Software Environment (CASE) for real-time system-software development (Schneider et al., 1994), (Schneider et al., 1991). It provides graphical tools to help create and link software components in an object-oriented framework for real-time applications. ControlShell also provides data man-


[Diagram: raw A2D signals, SHARPS positions, and MotionPak motion signals are filtered and bias-corrected, merged with the depth measurement, and transformed between inertial and body-fixed frames via direction cosines from the attitude filter to produce the position, velocity, attitude, and angular-rate states used by the controllers.]

Fig. 19. Data flow diagram for OTTER sensors.

"ABORT_TRAJ"

desAngRate

desVel_bf

desPos_bf

desVel_if

desPos_if

desVehPos_if

desVehVel_if

desAttitude

"END_TRAJ"

"NEW_TRAJ""otterData.dif"

maxVel_bfmaxAcc_bf

vehAcc_ifvehVel_if

vehPos_if

viaPointTrajectory

desYawRatedesPitchRatedesRollRate

desAngRateBreakout

desVelZ_bfdesVelY_bfdesVelX_bf

desVelBreakout

desYawdesPitchdesRoll

desAttBreakout

desPosZ_bfdesPosY_bfdesPosX_bf

desPosBreakout

splitPos

splitVel

dircos_if2bf

desPos2bf

desVel2bf

vehVel_ifvehPos_if

maxAcc

fifthOrderTrajectory

Fig. 20. Data flow diagram for OTTER's trajectory generator.

agement and flow control during execution of the applications.

With ControlShell, software components are developed in C++ that represent the most basic modules of a control system. Once the inputs and outputs of a component are defined, ControlShell automatically generates skeleton code that becomes a reusable component when the programmer fills in the working details. Examples of components are digital filters, analog-to-digital device drivers, PID and LQR controllers, estimators, and basically any piece of a control-system block diagram that is executed periodically in a sampled-data digital system.

With a graphical editor, one connects the inputs and outputs of each component to define how the data flows through the components (see Figures 18, 19, and 20). Then various configurations are defined by grouping different components together. These configurations are, in essence, different control modes for the system. Later, when the system is running, one can switch between predefined configurations and thus change the control mode of the system. Several configurations


Table 3. Various control configurations for OTTER.

SENSORS ONLY: Sensor processing only, no control to thrusters.
MANUAL FORCE: Manual force input converted to individual thruster commands.
MANUAL THRUST: Manual input to each thruster.
ATTITUDE DEPTH: Automatic control of attitude and depth only.
POSITION: Position control using SHARPS and attitude sensors.
COORDINATED ARM VEHICLE: Control using feedforward from arm controller and position feedback.
VISION STATIONKEEP: Position control using vision feedback.
VISION MOSAIC: Control to create video mosaics using vision feedback.
VISION TRACK: Control to track objects using vision feedback.

defined for OTTER are shown in Table 3. Independent researchers sharing the system create custom configurations from the existing components and from ones that they have added themselves through their work.

An important aspect of control-system development and "tuning" is the ability to change control parameters, such as gains or locations of poles and zeros, while the robot is running to observe the difference in performance. ControlShell provides a menu system that gives immediate access to all of the variables in the control system. Using the menu to modify control-system parameters, one can quickly tune the values for maximum performance.

Finite-State Machines The coordination level of the TLC control architecture (see Figure 16) is programmed using finite-state machines (FSMs) created graphically using another ControlShell graphical editor. An FSM is a visual language that relates transitions between states of a system to events that trigger the transitions. High-level tasks can be encoded as FSMs by identifying discrete, intermediate states that lead to the completion of the task and by defining events that trigger the transitions from state to state. In essence, FSMs are used by the robot as maps to follow to reach a goal based on what state the robot is in and what event has just occurred. As there may be many possible paths to reach point B from point A, the FSM can encode many different sequences of actions to complete the same task.
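The state/event/transition structure described here can be captured compactly as a transition table. A toy sketch of the idea (the state, event, and action names are simplified for illustration and are not OTTER's actual FSM definitions):

```python
# Transition table: (state, event) -> (next_state, action_name).
TRANSITIONS = {
    ("Idle", "ACTIVATE"): ("Active", "activate_systems"),
    ("Active", "SHUTDOWN"): ("Shutdown", "power_down"),
    ("Active", "SAFE"): ("Safe", "enter_safe_mode"),
    ("Safe", "ACTIVATE"): ("Active", "activate_systems"),
}

def step(state, event):
    """Advance the FSM one transition; unhandled events leave it unchanged."""
    next_state, action = TRANSITIONS.get((state, event), (state, None))
    return next_state, action
```

Each action would typically activate a different servo-level configuration, which is how the task layer changes the robot's control mode.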

Events can be generated in different ways. A program can monitor the progress of the robot doing a task and generate events when different stages of task completion are reached. Safety monitors can run in the background generating emergency events such as ones that indicate low batteries or water leakage, or events may be directly generated by a user causing the robot to change control modes.

A diagram of the main FSM controlling OTTER appears in Figure 21. Additional tasks can be added to the main FSM by adding FSM "functions" or "subroutines" like the RetrieveObject FSM subroutine shown in the figure. Each FSM subroutine is itself a conglomeration of states and transitions encoding a procedure to accomplish a subgoal.

Real-Time Communications Concurrently running programs are fundamental elements that make up a single application in control of a complex robotic system. These programs may reside on the same computer, on different computers connected by a local network, or even across computers located miles apart. Sharing data about the system state, sending commands from one program to another, and synchronizing the actions of independent modules require the elements to communicate with each other.

RTI's Network Data Delivery Service (NDDS) (Pardo-Castellote and Schneider, 1994) is used for communications between the distributed programs running on both real-time and UNIX computers that make up the robotic controller network. NDDS makes it easy to share data and event information among systems in a computer network. It uses a subscription-based paradigm to send packets between producers and consumers of data. Processes running on OTTER produce information (e.g. vehicle position and attitude) that is consumed by the graphical user interfaces (GUIs) running on the workstations. On



Fig. 21. OTTER's FSM controller. Note that the icons for "states" like Startup actually represent multiple states and can be considered a finite-state machine subroutine.

the other hand, the GUIs produce user commands that are consumed by OTTER. NDDS supports multiple consumers and producers of the same data, thus allowing the seamless integration of robot dynamic simulators and multiple command interfaces within the same control network. Because neither consumers nor producers of NDDS data care from whom they receive data or to whom they send data, it is easy to connect a new program module to the robotic application without modifying the existing modules.
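The anonymous producer/consumer pattern can be illustrated with a toy in-process broker. NDDS itself is networked middleware with a very different API; the sketch below shows only the subscription paradigm, with invented topic names.

```python
# Toy in-process publish/subscribe broker illustrating the NDDS-style
# anonymous producer/consumer pattern (not the actual NDDS API).
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, sample):
        # Producers never name their consumers: deliver to every subscriber.
        for callback in self.subscribers[topic]:
            callback(sample)

broker = Broker()
gui_view, logger = [], []
# Two independent consumers of the same topic (e.g., a GUI and a logger).
broker.subscribe("vehicle_state", gui_view.append)
broker.subscribe("vehicle_state", logger.append)
# A producer (e.g., the vehicle process) publishes without knowing who listens.
broker.publish("vehicle_state", {"depth_m": 3.2, "yaw_deg": 90.0})
```

Because neither side names the other, adding a third consumer (say, a dynamic simulator) requires no change to the producer, which is the property the text highlights.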

OTTER Low-Level Control System Diagrams of the components and data flow that make up the core of the OTTER control system appear in Figures 18, 19, and 20 (not shown are the diagrams that integrate the components implementing vehicle control from vision or those implementing coordinated arm/vehicle control).

The desired and actual states in the body-fixed coordinate system (see Figure 9) are fed into individual PID controllers for each degree of freedom. OTTER uses aircraft/spacecraft body-fixed coordinates, i.e., x out the nose, y off to the right, and z pointing down, and ZYX Euler angles (ψ, θ, φ) to represent orientation. In the inertial coordinate system, X points North, Y points East, and Z is positive downward. Thus, when submerged, depth is positive.

From the controllers, the vector of commanded body forces and torques is mapped into commanded forces for each of the eight thrusters. The eight thruster forces are then adjusted for the characteristics of each thruster and transformed into velocity commands.
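A common way to perform such a wrench-to-thruster mapping is least-squares allocation through the pseudoinverse of the thruster geometry matrix. The planar two-thruster geometry below is a made-up example for illustration, not OTTER's actual eight-thruster layout.

```python
# Least-squares thrust allocation: commanded body wrench -> individual
# thruster forces via the pseudoinverse of the thruster geometry matrix.
# The simple planar two-thruster geometry here is illustrative only.
import numpy as np

# Columns map each thruster's unit force into (surge force, yaw torque):
# two aft thrusters 0.5 m either side of centerline, both pointing forward.
B = np.array([[1.0,  1.0],
              [0.5, -0.5]])

def allocate(wrench):
    """Return per-thruster forces that produce the commanded wrench."""
    return np.linalg.pinv(B) @ wrench

# Pure 10 N surge command: both thrusters share the load equally.
f = allocate(np.array([10.0, 0.0]))
```

In a real system the allocated forces would then pass through each thruster's force-to-speed characteristic, as the text describes.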

Sensor data, read from analog-to-digital converters or from serial ports, are converted from raw signals such as voltages into processed data such as angular rates or depth, and digitally filtered to remove high-frequency noise. Signal vectors, e.g., attitude, are separated to be passed to the rest of the control system as individual values, e.g., roll, pitch, and yaw angles. Sensor-processing components also create derived signals, such as the direction-cosine matrix representing vehicle orientation, and rates of signals for which there is no direct rate measurement, such as position.
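A typical choice for this kind of noise removal is a first-order IIR low-pass filter. The cutoff frequency and sample rate below are arbitrary illustrative values, not OTTER's actual filter design.

```python
# First-order IIR low-pass filter of the kind commonly used to remove
# high-frequency sensor noise (cutoff and sample rate are illustrative).
import math

def lowpass(samples, cutoff_hz, sample_hz):
    dt = 1.0 / sample_hz
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)          # smoothing factor in (0, 1)
    out, y = [], samples[0]
    for x in samples:
        y = y + alpha * (x - y)     # y[k] = y[k-1] + alpha*(x[k] - y[k-1])
        out.append(y)
    return out

# A constant depth reading passes through unchanged; a noisy spike is
# attenuated rather than passed straight to the controllers.
flat = lowpass([5.0] * 10, cutoff_hz=1.0, sample_hz=50.0)
spiky = lowpass([0.0, 0.0, 10.0, 0.0, 0.0], cutoff_hz=1.0, sample_hz=50.0)
```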

Under automatic position control, OTTER can be moved by sending it a MOVE command with a desired final inertial state or with a set of inertial waypoints through which to pass. Using the components shown in Figure 20, either a smooth 5th-order trajectory or a splined viapoint trajectory is generated beginning at the current position. At each time step after the trajectory has begun, the desired inertial angles and positions for that time are transformed into the body-fixed frame and fed into the PID controllers. As a result, OTTER can be commanded to move smoothly and in a coordinated fashion.

Fig. 22. A 2D graphical interface based on Tcl/Tk. The dotted vehicle is the ghost icon of OTTER that a user would drag to command motions.
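A 5th-order (quintic) point-to-point trajectory with zero velocity and acceleration at both endpoints has a standard closed form; the sketch below uses that textbook polynomial, which may differ in detail from the formulation actually used on OTTER.

```python
# Standard quintic (5th-order) point-to-point trajectory with zero
# velocity and acceleration at both endpoints:
#   x(t) = x0 + (xf - x0) * (10 tau^3 - 15 tau^4 + 6 tau^5),  tau = t/T.
def quintic(x0, xf, T, t):
    tau = min(max(t / T, 0.0), 1.0)          # normalized time, clamped
    s = 10*tau**3 - 15*tau**4 + 6*tau**5     # smooth 0 -> 1 blend
    return x0 + (xf - x0) * s

# A 2 m depth change over 10 s: starts at rest, ends at rest.
start = quintic(0.0, 2.0, 10.0, 0.0)
mid = quintic(0.0, 2.0, 10.0, 5.0)   # halfway in time -> halfway in space
end = quintic(0.0, 2.0, 10.0, 10.0)
```

Evaluating this polynomial at each control tick yields the desired inertial setpoint that is then transformed into the body-fixed frame for the PID loops.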

2.7. Graphical User Interface

The user interface is an important component in the TLC architecture. With telemetry from the robot arriving in limited quantities and perhaps greatly delayed, techniques must be developed that intelligently combine the telemetry with information known a priori about the robot and its operating environment, presenting to the operator a best estimate of the current status of the remote system. The user can then draw conclusions, make plans, and issue task commands based on what he or she sees in this presentation of "estimated reality".

Several user interfaces have been created for OTTER on many different types of computers. Some offer full 3D virtual environments and others give only an iconic representation of vehicle state. Depending on the speed of the computer, especially in displaying complex graphics, it was often found that a simple interface could be more intuitive to use than a 3D interface. While the 3D graphics provided more realistic scenes of OTTER in its work setting, the lagging response to user commands of the graphics running on slower computers hindered the intuitive feel extended by 3D environments. However, when a faster computer with dedicated graphics hardware is available to run the same 3D interface, the natural advantages of the realism offered by computer-generated graphics are restored.

The earlier, simple 2D graphical interfaces were created in C using basic X-Window graphics and Motif4 widgets. GUI builders (software applications that provide tools to create, place, and otherwise manipulate widgets such as buttons and sliders) helped to lay out the interface and generated the basic code automatically. Then, additional code to connect the graphical interface and OTTER was added using the NDDS communications library (see Section 2.6).

Currently, these types of interfaces are being generated using the Tcl/Tk scripting language and graphics toolkit, which have been developed recently to make writing GUI applications easier (Ousterhout, 1994). Because Tcl/Tk uses scripts stored in plain-text files to configure the GUI, after one creates the basic functions required to connect to the underwater robot, changes to the GUI can be made simply by modifying the script. No recompiling of code is required, thus making changes easier and faster to implement. Figure 22 shows a 2D interface written using Tcl/Tk.

Fig. 23. The VEVI 3D Virtual Environment. OTTER is shown in the test tank with a menu of interface commands.

Several 3D virtual environments have also been developed for controlling OTTER. For the HP and Sun workstations, libraries based on the PHIGS graphics standard were used to generate 3D models embedded in an X-Window environment. While PHIGS is adequate to produce realistic models of robots, it does not support some advanced graphics features such as texture mapping or ray tracing. For research into developing a user interface that is able to integrate information known a priori about the operating environment with data accumulated during mission execution, the team uses the advanced graphics capability provided by Silicon Graphics computer systems. For example, if the benthic bathymetry of the area where OTTER is operating has already been mapped during a previous visit, then as OTTER wanders about creating mosaics of the surroundings, the image mosaics can be returned to the user and integrated with the virtual environment by using texture mapping.

Currently, a sophisticated interface based on NASA Ames Research Center's Virtual-Environment Vehicle Interface (VEVI) (Fong, 1993) running on an SGI Indy is used to control OTTER (Fleischer et al., 1995). The VEVI-based interface is a full virtual-reality interface that can be displayed on head-mounted units or on a special monitor to be viewed in stereo using LCD glasses (see Figure 23). Versions of this interface have been used to control NASA's TROV experiments in Antarctica, the Dante walking robot in Alaska, and the Marsokhod, Russia's Mars land-rover prototype, during trials in Hawaii. Being compatible with NASA allows the team to run experiments cooperatively with NASA in controlling remote, semi-autonomous systems like OTTER across long distances to accomplish real science missions. NASA hopes to understand the ramifications of using autonomous robots for scientific exploration of space by first investigating their use in underwater analogs of these missions.

Other types of telemetry information can also be displayed by user interfaces. When the vision-sensing system is operated in an object-tracking mode and it locks onto an object, a graphical icon representing the object is placed in the interface, indicating to the user in an intuitive manner where the object is located relative to OTTER. Conceivably, additional sensor information can be stored with the icon, such as actual video footage of the object.

Fig. 24. Experimental tasks using vision feedback: object tracking, station keeping, mosaicking, and navigation.

Commanding OTTER to move to new positions is accomplished by dragging a "ghost" image of the robot in the interface using a mouse or a 6-DOF input device called a spaceball. As the ghost is moved about, waypoints for the path can be recorded as well as the final desired position. These waypoints are sent down to OTTER, where an on-board trajectory generator creates a smooth path passing through the waypoints for the vehicle to follow.

3. Advanced Underwater Technologies

Using the OTTER underwater robot as a test platform, several experiments have been completed to demonstrate the autonomous-control technologies being investigated by the MBARI/ARL program. In the following section, some results of the control-from-local-vision and coordinated arm-vehicle control experiments are presented. More details of the work presented in this section can be found in the dissertations of the two ARL students responsible for the research (Marks, 1995), (McLain, 1995).

3.1. Visual Sensing and Control

A large effort has been invested in studying optical-vision systems as local sensors for the automatic control of underwater robots (Marks, 1995), (Marks et al., 1994a). Using the vision-sensing system described in Section 2.4, a number of vision-dependent autonomous tasks (see Figure 24) have been experimentally demonstrated on OTTER.

Object Tracking Scientists using MBARI's ROV Ventana often observe marine animals for hours at a time while they move or drift about in the Monterey Bay. Piloting the ROV is a task that requires high concentration and takes exceptional skill, gained through long-term experience, to do proficiently. Often, two or more pilots must trade off controlling the ROV every hour or so to rest. By enabling the ROV to track these creatures autonomously, the workload on the pilots can be significantly decreased.

Using the stereo cameras with the vision-sensing system, OTTER has demonstrated the ability to track arbitrary objects underwater, maintaining both relative position and orientation (Marks et al., 1993), (Marks et al., 1992), (Marks et al., 1991). Objects are segmented from the background by using stereo disparity to calculate their positions relative to the vehicle. Higher object-tracking performance is achieved by combining vehicle control with control of a high-speed camera pan/tilt system (Wang et al., 1992). The high bandwidth and limited range of the pan/tilt system complement the unlimited range but slower dynamics of the vehicle.
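The range computation underlying this disparity-based segmentation follows the standard pinhole stereo relation Z = fB/d. The focal length and baseline values below are invented for illustration, not OTTER's camera calibration.

```python
# Standard pinhole stereo: range from disparity, Z = f * B / d, where
# f is focal length in pixels, B the camera baseline, d the disparity.
# The camera parameters here are invented for illustration.
def stereo_depth(disparity_px, focal_px, baseline_m):
    if disparity_px <= 0:
        raise ValueError("object at infinity or invalid match")
    return focal_px * baseline_m / disparity_px

# f = 800 px, baseline = 0.2 m, disparity = 40 px.
z_far = stereo_depth(40.0, focal_px=800.0, baseline_m=0.2)
z_near = stereo_depth(80.0, focal_px=800.0, baseline_m=0.2)
```

Points whose disparity differs sharply from the background's can then be grouped as a foreground object and its relative position fed to the tracking controller.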

Station Keeping Scientists also like to view static scenes with the ROV for long periods for both biological and geological research. During this time, the pilot must fight constantly changing disturbances from unseen currents or the tether to keep the same scene in view of the cameras. Thus, automatic station keeping is another desirable ability for underwater robots. In most operating environments, there is no global sensor system, such as an acoustic long-baseline system, producing the inertial position of the robot for use in a feedback controller. At best, underwater robots can use their on-board attitude and pressure sensors to hold orientation and depth automatically.

However, the live images recorded by the on-board cameras contain sufficient information to determine camera, and therefore robot, displacement from a desired static scene. With the vision-sensing system, displacement information can be extracted and fed into the underwater-robot controller for automatic station keeping. This is accomplished by storing an image of the area relative to which the robot is to hold station. A number of algorithms have been developed that compare the live image with the stored image to determine the robot's positional error in the three linear degrees of freedom and a single rotation error in the plane of the image. These errors are used by the automatic control system to provide drift-free station keeping (Marks et al., 1994b). In Figure 25, the reaction of OTTER to large disturbances while using vision for station keeping is shown.

Fig. 25. OTTER station keeping using vision. Plots of the δX, δY, and δψ motions of OTTER while station keeping using the vision sensor. Large disturbances are imparted to each degree of freedom and are then rejected by the control system.
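One of the simplest ways to estimate such a displacement is to correlate the live image against the stored reference and take the best-matching shift. The brute-force search below sketches the idea only; it is not one of the actual ARL algorithms.

```python
# Estimate image displacement by brute-force correlation: slide the live
# image over the stored reference and keep the offset with the highest
# correlation score. A sketch of the idea, not the actual ARL algorithms.
import numpy as np

def estimate_shift(reference, live, max_shift=3):
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(live, dy, axis=0), dx, axis=1)
            score = np.sum(reference * shifted)  # correlation score
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best  # (row, col) shift that best re-aligns live with reference

rng = np.random.default_rng(0)
ref = rng.random((16, 16))
# Simulate the scene drifting by (-2, +1) pixels between frames.
live = np.roll(np.roll(ref, -2, axis=0), 1, axis=1)
shift = estimate_shift(ref, live)
```

The recovered pixel shift, scaled by range, becomes the positional error signal fed to the station-keeping controller.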

Creating Video Mosaics Another application of the vision system is to generate mosaics of the ocean floor automatically from individual frames of video (Marks et al., 1995). Large-area mosaics can already be produced from video or still shots through post-processing techniques. However, with a vision system controlling the robot to move to the next spot where a frame should be recorded, mosaics can be generated "on-the-fly" and displayed to the user in real time. This mosaic is not only helpful to the pilots in determining the position of the robot relative to its surroundings, but can also aid the scientist in deciding what to do or where to go next to accomplish the mission. A mosaic generated by OTTER of the bottom of a test tank (dressed with rocks and pipes) using 18 individual images can be seen in Figure 26. To create this mosaic, OTTER was driven by the vision-sensing system back and forth across the tank to each position where an image was required.

Fig. 26. 3x6 mosaic created automatically by OTTER.
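Driving the robot back and forth to each imaging station amounts to generating a boustrophedon ("snake") pattern of waypoints over the mosaic grid. The grid dimensions match Figure 26, but the 1 m spacing below is an illustrative value.

```python
# Boustrophedon ("snake") waypoint pattern for covering a rows x cols
# mosaic grid, reversing direction on alternate rows to minimize travel.
def mosaic_waypoints(rows, cols, spacing):
    points = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            points.append((r * spacing, c * spacing))
    return points

# A 3x6 mosaic like the one in Figure 26, at an illustrative 1 m spacing.
path = mosaic_waypoints(3, 6, 1.0)
```

Each waypoint becomes a position at which the vision system confirms alignment before the next frame is captured and stitched into the mosaic.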

3.2. Coordinated Arm-Vehicle Control

Robots without intervention capabilities are mainly sensing and transportation platforms. However, once a manipulator is added to the robot, entirely new capabilities are opened for investigation. Current underwater manipulators are typically teleoperated using "hand-in-glove" methods with the robot fixed relative to the object that they are manipulating, e.g., sitting on the ocean floor.

Ultimately, the scientist or pilot should be able to point at objects presented on the live video image or in a computer-generated user interface and have the arm/vehicle system automatically pick up or otherwise manipulate them. To do this, the robot must actively manage the hydrodynamic and hydrostatic interactions between the manipulator(s) and vehicle in order to be able to place its end effector precisely. These interactions are complex, with both kinematic and dynamic parameters varying as the manipulator changes configuration. Understanding these interactions is the first step in creating a system that can automatically pick up artifacts off of the ocean floor.


Fig. 27. Manipulator hydrodynamic test facility at Stanford ARL.

By accurately predicting the forces on the vehicle caused by moving the manipulator underwater, counter forces can be applied by the thrusters to keep the vehicle motionless during manipulation. A test-tank facility at Stanford is used to study experimentally the hydrodynamics of a simple single-link underwater arm (Figure 27). From basic fluid theory, and using both visualization of flow patterns and direct measurements of forces and torques acting on the arm, a hydrodynamic model of the single-link arm has been developed (McLain, 1995). This model is used to help control the arm/vehicle system in a coordinated manner during manipulator motion.
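A simple lumped model of this kind predicts a quadratic-drag reaction torque from the arm's angular rate by integrating the drag force along the link. The drag coefficient, fluid density, and link dimensions below are invented stand-ins, not the identified parameters from (McLain, 1995).

```python
# Quadratic-drag reaction torque for a single rigid link rotating about
# one end. Integrating dF = 0.5*rho*cd*w*(qdot*r)^2 dr weighted by the
# moment arm r over the span gives tau = 0.5*rho*cd*w*|qdot|*qdot*L^4/4.
# All parameter values are invented stand-ins for illustration.
def arm_drag_torque(qdot, length_m=1.0, width_m=0.1, rho=1000.0, cd=1.2):
    return 0.5 * rho * cd * width_m * abs(qdot) * qdot * length_m**4 / 4.0

tau_pos = arm_drag_torque(0.5)    # arm swinging one way
tau_neg = arm_drag_torque(-0.5)   # same speed, opposite direction
```

The |q̇|q̇ term keeps the torque quadratic in speed while preserving its sign, which is the behavior a feedforward compensator must reproduce. A full model would also include added-mass and buoyancy terms.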

The severity of the hydrodynamic interaction of a manipulator with the vehicle, and its correction through the use of new algorithms that coordinate the control of the arm and vehicle, have been demonstrated with OTTER. For these experiments, a single-link, one-DOF arm is mounted on OTTER (see Section 2.2 and Figure 12).

As the arm is swung through 90 degrees multiple times, the effect of the arm motion on the vehicle roll and yaw degrees of freedom is significant. Without any control on the vehicle, the generated forces can cause the vehicle to roll over ±18 degrees and to drift over 15 degrees in yaw. Even with the vehicle under feedback control to maintain both position and orientation, the disturbance of the arm produced over ±9 degrees of error in roll and 8 degrees of drift in yaw.


Fig. 28. Performance of coordinated arm/vehicle control. As the arm is rotated 90 degrees several times, the arm joint angle is plotted in (a) and OTTER's roll motion is plotted in (b) through (d). In (b), the vehicle's thrusters are turned off. In (c), the vehicle is using only feedback control on attitude and position. The dramatic improvement in performance using feedforward control, based on a model of arm hydrodynamics, together with feedback control can be seen in (d).

However, by using the hydrodynamic model to calculate and send feedforward force/torques to the vehicle's thrusters that counteract the hydrodynamic disturbance force/torques produced by the arm, the overall performance of the system is significantly improved. In Figure 28, OTTER's roll motion while moving the arm is shown with no control, with only vehicle feedback control, and with the addition of feedforward control taking hydrodynamic disturbance forces into account. Although not shown, the addition of feedforward control improved the performance of all of the other degrees of freedom as well.
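The structure of such a controller is a feedback term acting on the attitude error plus a feedforward term cancelling the predicted arm disturbance. The PD gains and the idea of a perfect disturbance prediction below are illustrative, not the tuned OTTER controller.

```python
# Feedforward-plus-feedback attitude control: the PD feedback term acts
# on the roll error, while the feedforward term cancels the predicted
# arm disturbance torque. Gains and the scenario are illustrative only.
def control_torque(roll_err, roll_rate, predicted_arm_torque,
                   kp=20.0, kd=5.0, use_feedforward=True):
    feedback = kp * roll_err - kd * roll_rate
    feedforward = -predicted_arm_torque if use_feedforward else 0.0
    return feedback + feedforward

# With zero attitude error and a perfect disturbance prediction, the
# commanded torque exactly opposes the arm disturbance; feedback alone
# commands nothing until an error has already developed.
tau_ff = control_torque(0.0, 0.0, predicted_arm_torque=3.75)
tau_fb_only = control_torque(0.0, 0.0, predicted_arm_torque=3.75,
                             use_feedforward=False)
```

This is why Figure 28(d) outperforms 28(c): the feedforward path rejects the disturbance before it produces an error, rather than reacting after the fact.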

4. Conclusions

The joint MBARI/ARL research program is investigating various technologies for small, semi-autonomous underwater robots. The OTTER underwater robot is used as a research platform to develop low-level algorithms and high-level strategies for control, as well as to test new, innovative actuators and sensors. OTTER's design capitalizes on the emergence of VLSI ASIC technology to build small, low-cost, low-power, custom digital electronics for intelligent sensors and actuators. OTTER uses a generic framework for the control/software architecture that provides general access to all robotic subsystems. Independent researchers are able to share common software components while developing custom code for individual research projects. The ease of reconfigurability of both hardware and software components is a key reason for OTTER's success as an engineering-research testbed.

Examples of the technologies demonstrated experimentally on OTTER are direct robotic control with a sophisticated vision-sensing system and coordinated arm/vehicle control using a model of manipulator hydrodynamics. In addition, control of OTTER through computer-generated graphical user interfaces shows the semi-autonomous ability of the OTTER system to carry out high-level human commands with its on-board intelligent controller.

Acknowledgements

The authors would like to acknowledge the contributions of all of the students, engineers, and researchers of the MBARI/ARL team, and especially Dick Burton, OTTER's mechanical designer. Funding for the research performed in the joint MBARI/ARL program has been provided by the Monterey Bay Aquarium Research Institute, the National Aeronautics and Space Administration, the Office of Naval Research, and the National Science Foundation.

Notes

1. UNIX is a registered trademark of AT&T. VxWorks is a trademark of Wind River Systems, Inc. ControlShell and NDDS are trademarks of Real Time Innovations, Inc. X Window System is a trademark of the Massachusetts Institute of Technology.

2. Marquest Group, Inc., has become Imetrix, Inc., andnow markets SNAP as a SHARPS compatible system.

3. A tetherless version of SHARPS is available.

4. Motif is a trademark of Open Software Foundation, Inc.

References

J.G. Bellingham, C.A. Goudey, T.R. Consi, and C. Chryssostomidis. A small long-range autonomous vehicle for deep ocean exploration. In Proceedings of the Second International Offshore and Polar Conference, pages 461-467. International Society of Offshore and Polar Engineers, June 1992.

D. R. Blidberg, S. Chappell, J. Jalbert, R. Turner, G. Sedor, and P. Eaton. The EAVE AUV program at the marine systems engineering laboratory. In The First Workshop on: Mobile Robots for Subsea Environments, pages 33-42. International Advanced Robotics Programme, October 1990.

C. P. Brancart. The evolution of the AUV: The DARPA/ARPA UUV program. In Proceedings of IARP Second Workshop on Mobile Robots for Subsea Environments, pages 181-192, Monterey, May 1994. IARP.

J. Catipovic, D. Brady, and S. Etchemendy. Development of underwater acoustic modems and networks. Oceanography, 6(3), 1994.

S.K. Choi, J. Yuh, and G.Y. Takashige. Development of the omni-directional intelligent navigator. IEEE Robotics and Automation Magazine, 2(1):44-53, March 1995.

S. D. Fleischer, S. M. Rock, and M. J. Lee. Underwater vehicle control from a virtual environment interface. In Proceedings of the Symposium on Interactive 3D Graphics, Monterey, CA, April 1995. SIGGRAPH.

T. W. Fong. A computational architecture for semi-autonomous robotic vehicles. In Proceedings of AIAA Computing in Aerospace 9 Conference, San Diego, CA, October 1993. AIAA.

T. Fujii, T. Ura, and U. Kuroda. Mission execution experiments with a newly developed AUV, the "Twin Burger". In Proceedings of the 8th International Symposium on Unmanned Untethered Submersible Technology, pages 92-105. University of New Hampshire, Marine Systems Engineering Laboratory, September 1993.

A.J. Healey, D.D. Marco, R.B. McGhee, D.P. Brutzman, R. Cristi, F.A. Papoulias, and S.H. Kwak. Tactical/execution level coordination for hover control of the NPS AUV II using onboard sonar servoing. In Proceedings of the IEEE Symposium on Autonomous Underwater Vehicle Technology, pages 129-138, Cambridge, MA, July 1994. IEEE.

N. Kato, Y. Ito, J. Kojima, K. Asakawa, and Y. Shirasaki. Guidance and control of autonomous underwater vehicle Aqua Explorer 1000 for inspection of underwater cables. In Proceedings of the 8th International Symposium on Unmanned Untethered Submersible Technology, pages 195-211. University of New Hampshire, Marine Systems Engineering Laboratory, September 1993.

R. L. Marks, H. H. Wang, R. C. Burton, and M. J. Lee. SPOTTER: A submersible platform and ocean technology testbed for engineering research. Project report, Stanford Aerospace Robotics Laboratory and Monterey Bay Aquarium Research Institute, 1991.

R. L. Marks, T. W. McLain, D. W. Miles, S. M. Rock, G. A. Sapilewski, H. H. Wang, R. C. Burton, and M. J. Lee. Monterey Bay Aquarium Research Institute/Stanford Aerospace Robotics Laboratory joint research program summer report 1992. Project report, Stanford Aerospace Robotics Laboratory and Monterey Bay Aquarium Research Institute, 1992.

R. L. Marks, S. M. Rock, and M. J. Lee. Automatic object tracking for an unmanned underwater vehicle using real-time image filtering and correlation. In Proceedings of IEEE Systems, Man, and Cybernetics, France, October 1993. IEEE.

R. L. Marks, M. J. Lee, and S. M. Rock. Visual sensing for control of an underwater robotic vehicle. In Proceedings of IARP Second Workshop on Mobile Robots for Subsea Environments, Monterey, May 1994. IARP.

R. L. Marks, H. H. Wang, M. J. Lee, and S. M. Rock. Automatic visual station keeping of an underwater robot. In Proceedings of IEEE Oceans 94 Osates, Brest, France, September 1994. IEEE.

R.L. Marks, S.M. Rock, and M.J. Lee. Real-time video mosaicking of the ocean floor. IEEE Journal of Oceanic Engineering, 20(3):229-241, July 1995.

Richard L. Marks. Experiments in Visual Sensing for Automatic Control of an Underwater Robot. PhD thesis, Stanford University, Stanford, CA 94305, June 1995.

Timothy W. McLain. Modelling of Underwater Manipulator Hydrodynamics with Application to the Coordinated Control of an Arm/Vehicle System. PhD thesis, Stanford University, August 1995. Also published as SUDAAR 670.

John K. Ousterhout. Tcl and the Tk Toolkit. Addison-Wesley Publishing Company, Inc., Menlo Park, CA, 1994.

G. Pardo-Castellote and S. A. Schneider. The Network Data Delivery Service: Real-time data connectivity for distributed control applications. In Proceedings of the International Conference on Robotics and Automation, San Diego, CA, May 1994. IEEE, IEEE Computer Society.

A.M. Pascoal. The AUV MARIUS: Mission scenarios, vehicle design, construction and testing. In The Second Workshop on: Mobile Robots for Subsea Environments, pages 127-140. International Advanced Robotics Programme, May 1994.

M. Perrier, V. Rigaud, E. Coste-Manière, D. Simon, and A. Peuch. VORTEX: A versatile testbed vehicle for control algorithm evaluation. In Proceedings of the 8th International Symposium on Unmanned Untethered Submersible Technology, pages 29-36. University of New Hampshire, Marine Systems Engineering Laboratory, September 1993.

RTI. Real-Time Innovations, Inc., 1995. 954 Aster, Sunnyvale, CA. email: [email protected].

G. N. Saridis. Toward the realization of intelligent controls. In Proceedings of the IEEE, volume 67:8. IEEE, 1979.

S. A. Schneider and R. H. Cannon. Experimental object-level strategic control with cooperating manipulators. The International Journal of Robotics Research, 12(4):338-350, August 1993.

S. A. Schneider, M. A. Ullman, and V. W. Chen. ControlShell: A real-time software framework. In Proceedings of the 1991 IEEE International Conference on Systems Engineering, Dayton, OH, August 1991.

S. A. Schneider, V. W. Chen, and G. Pardo-Castellote. ControlShell: A real-time software framework. In Proceedings of the AIAA/NASA Conference on Intelligent Robots in Field, Factory, Service and Space, volume II, pages 870-7, Houston, TX, March 1994. AIAA.

M. A. Ullman, R. H. Cannon, and S. M. Rock. A modular system architecture for multi-manipulator, free-flying space robots. In Proceedings of the International Symposium on Intelligent Control, pages 90-97, Chicago, IL, August 1993.

H. H. Wang, R. L. Marks, S. M. Rock, M. J. Lee, and R. C. Burton. Combined camera and vehicle tracking of underwater objects. In Proceedings of Intervention/ROV '92, San Diego, CA, June 1992.

H. H. Wang, R. L. Marks, S. M. Rock, and M. J. Lee. Task-based control architecture for an untethered, unmanned submersible. In Proceedings of the 8th Annual Symposium on Unmanned Untethered Submersible Technology, pages 137-147. Marine Systems Engineering Laboratory, Northeastern University, September 1993.

D.R. Yoerger, A.M. Bradley, and B.B. Walden. The autonomous benthic explorer. Unmanned Systems, 9(2):17-23, Spring 1991.

Howard H. Wang received his B.S. degree in Aerospace Engineering from the University of Michigan in 1989 and the M.S. degree in Aeronautics and Astronautics from Stanford University in 1990. Currently, he is a Ph.D. candidate at Stanford University working on task-level control of semi-autonomous underwater robots. This work is part of a joint collaboration between the Stanford Aerospace Robotics Laboratory and the Monterey Bay Aquarium Research Institute to study advanced underwater-robot technologies.

Stephen M. Rock received his S.B. and S.M. degrees in Mechanical Engineering from the Massachusetts Institute of Technology in 1972, and the Ph.D. degree in Applied Mechanics from Stanford University in 1978. Dr. Rock joined the Stanford faculty in 1988, and is now a Professor in the Department of Aeronautics and Astronautics. Prior to joining the Stanford faculty, Dr. Rock led the Controls and Instrumentation Department of Systems Control Technology, Inc. In his eleven years at SCT, he performed and led research in integrated control; fault detection, isolation, and accommodation; turbine engine modeling and control; and parameter identification. Dr. Rock's current research interests include the development and experimental validation of control approaches for robotic systems and for vehicle applications. A major focus is both the high-level and low-level control of underwater robotic vehicles.

Michael J. Lee received his S.B. degree in Electrical Engineering from the Massachusetts Institute of Technology in 1975 and the M.S. degree in Electrical Engineering from Stanford University in 1979. Mr. Lee joined the Hewlett Packard Co. in 1976 and held several engineering and management positions, including Department Manager of the Control Systems Department at HP Laboratories. In 1987, Mr. Lee joined MBARI as one of its founders with the position of Technical Director. Mr. Lee was an Adjunct Professor in Electrical Engineering at Boise State University from 1977 through 1978, and is currently a Consulting Associate Professor at Stanford University.