Field testing of a 3D Automatic Target Recognition and Pose Estimation Algorithm

S. Ruel* a, C. English a, L. Melo a, A. Berube a, D. Aikman a, A. Deslauriers a, P. Church a and J. Maheux** b

a Neptec Design Group Ltd., 302 Legget Drive, Ottawa, Ont, Canada
b Defence Research and Development Canada – Valcartier, 2459 Pie-XI Blvd. N, Val-Bélair, QC, Canada

ABSTRACT

Neptec Design Group Ltd. has developed a 3D Automatic Target Recognition (ATR) and pose estimation technology demonstrator in partnership with the Canadian DND. The system prototype was deployed for field testing at Defence Research and Development Canada (DRDC) - Valcartier. This paper discusses the performance of the developed algorithm using 3D scans acquired with an imaging LIDAR. 3D models of civilian and military vehicles were built using scans acquired with a triangulation laser scanner. The models were then used to generate a knowledge base for the recognition algorithm. A commercial imaging LIDAR was used to acquire test scans of the target vehicles with varying range, pose and degree of occlusion. Recognition and pose estimation results are presented for at least 4 different poses of each vehicle at each test range. Results obtained with targets partially occluded by an artificial plane, vegetation and military camouflage netting are also presented. Finally, future operational considerations are discussed.

Keywords: 3D, ATR, object recognition, range images, pose estimation, LIDAR

1. INTRODUCTION

Neptec Design Group Ltd. [1] has developed a 3D Automatic Target Recognition (ATR) and pose estimation technology demonstrator in partnership with DRDC - Valcartier. The objective of this project was to demonstrate the potential of using 3D data to perform ATR in ground-to-ground tactical scenarios. The developed algorithms were initially characterized using synthetically generated data [2]. The system prototype was then deployed at DRDC - Valcartier. This paper presents the performance of the system using 3D scans acquired in the field with an imaging LIDAR.

2. 3D ATR SYSTEM OVERVIEW

The 3D ATR system was designed to operate on a three-dimensional range image obtained from a scene. Upon target designation by an operator or a target detection system, the system is capable of (see the sketch after this list):

1. Segmenting the 3D points on the unknown target from the scene clutter
2. Comparing the segmented 3D points to a set of known potential target vehicles (the system knowledge base)
3. Reporting the probability of occurrence for every known target; the target with the highest probability is selected, or an unknown target is reported if no valid match was found
4. Computing the pose (position and orientation) of the identified target if a valid match was found
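The following Python sketch illustrates this four-step flow under stated assumptions: the segmentation is a simple radius gate around a designated seed point, the match score is an ad-hoc pseudo-probability derived from nearest-neighbour distances, and all names are illustrative rather than the actual system interface.

    import numpy as np

    def segment_target(scene: np.ndarray, seed: np.ndarray,
                       radius: float = 5.0) -> np.ndarray:
        """Step 1 (crude stand-in): keep scene points within a fixed
        radius of the operator-designated seed point to reject clutter."""
        return scene[np.linalg.norm(scene - seed, axis=1) < radius]

    def view_score(points: np.ndarray, view: np.ndarray) -> float:
        """Steps 2-3: score one stored appearance against the segmented
        data with a mean nearest-neighbour distance, mapped into (0, 1]
        as an ad-hoc pseudo-probability."""
        d = np.linalg.norm(points[:, None, :] - view[None, :, :], axis=2)
        return float(np.exp(-d.min(axis=1).mean()))

    def recognize(scene, seed, knowledge_base, unknown_threshold=0.5):
        points = segment_target(scene, seed)
        # Probability of occurrence for every known target (best view kept).
        scores = {name: max(view_score(points, v) for v in views)
                  for name, views in knowledge_base.items()}
        best = max(scores, key=scores.get)
        if scores[best] < unknown_threshold:
            return "unknown", scores      # step 3: no valid match found
        return best, scores               # step 4 (pose refinement) omitted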

* [email protected]; phone (613) 599-7602; fax (613) 599-7604; neptec.com
** [email protected]; phone (418) 844-4000; drev.dnd.ca

Automatic Target Recognition XIV, edited by Firooz A. Sadjadi, Proceedings of SPIE Vol. 5426 (SPIE, Bellingham, WA, 2004) · 0277-786X/04/$15 · doi: 10.1117/12.541390


Figure 1 shows a high-level overview of the ATR system. The system is trained by using a set of 3D models of the targets to be included in the knowledge base. The 3D models can be built using a 3D sensor or obtained directly from CAD models. Using the 3D model of each target, a set of views (appearances) is generated at known object poses. A data reduction algorithm is then used to extract relevant information from each target view, and this information is stored in the system's knowledge base. The system can then perform recognition by comparing a view of an unknown target acquired with a 3D sensor to its knowledge base.

3. FIELD SETUP

Field testing of the ATR system took place at the Canadian Forces Base (CFB) in Valcartier, Québec. The objective of the field testing was to evaluate the system performance when using data acquired with an active sensor.

3.1 Test Knowledge Base

The ATR system used for field testing was trained to detect 10 potential targets, of which 4 are military vehicles and 6 are civilian (Figure 2). 72 ground views of each target were used to generate the ATR field testing database.

The 3D models were generated by scanning the target vehicles at close range with Neptec's Laser Camera System (LCS) [3]. The LCS uses a laser-based auto-synchronous triangulation scanning technique [4] to provide highly accurate 3D data at close range (<5 m). Several LCS scans of each target vehicle were acquired and then merged into a full 3D model. These models were then used to generate the ATR recognition database.
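A minimal sketch of this offline training step follows, assuming the 72 ground views are taken at uniform 5° azimuth increments (72 × 5° = 360°); the rendering and data-reduction stages are placeholders, since the paper does not detail them.

    import numpy as np

    def azimuth_rotation(theta: float) -> np.ndarray:
        """Rotation matrix about the vertical (z) axis by theta radians."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def reduce_view(points: np.ndarray, keep: int = 500) -> np.ndarray:
        """Stand-in for the paper's data-reduction step: uniform subsample."""
        idx = np.linspace(0, len(points) - 1, min(keep, len(points))).astype(int)
        return points[idx]

    def build_knowledge_base(models: dict, n_views: int = 72) -> dict:
        """models: target name -> (N, 3) array of 3D model points."""
        kb = {}
        for name, model_points in models.items():
            views = []
            for k in range(n_views):
                theta = 2.0 * np.pi * k / n_views  # 5 deg steps when n_views=72
                rotated = model_points @ azimuth_rotation(theta).T
                # Placeholder: the real system renders a synthetic range
                # image here before reducing it; we just store reduced points.
                views.append(reduce_view(rotated))
            kb[name] = views
        return kb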

[Figure 1: ATR system overview — block diagram linking: vehicle → 3D model → view generation software → synthetic range image → data reduction → knowledge base (training path); and 3D sensor → range image → target segmentation → silhouette analysis / 3D geometry analysis → consistency checking → object & pose selection → identified object & object pose (recognition path).]

Figure 2 Targets included in the ATR field testing knowledge base: Coyote, LAV-III, MLVW, Iltis, Ford F350, Chevy pickup, Honda Civic, Nissan Quest, Ford Mustang, Toyota Tercel


3.2 Test Setup

Five of the vehicles included in the knowledge base were used for field testing: F350, Iltis, LAV-III, Coyote and MLVW (2.5-ton truck). For each target vehicle, nominal scans (i.e. the entire object can be seen) were acquired at different ranges and in various poses. In addition, the natural surrounding vegetation was used to perform vegetation obscuration testing. A military camouflage net was also installed at 75 and 175 meters from the sensor (Figure 3a). It was set up as a curtain so that the target vehicles could easily be moved behind it (Figure 3b).

The 3D sensor used for field testing is a Riegl LPM-800i scanning LIDAR: a time-of-flight laser sensor mounted on a high-precision pan-tilt unit. This sensor could not be used tactically because it is relatively slow and cannot discriminate objects that are close together in range. On the other hand, it is perfectly adequate for demonstrating the ATR algorithm performance using real-world sensor data at a reasonable cost. The specifications of the LPM-800i are shown in Table 1 [5].

4. TEST RESULTS

This section presents results from the field testing performed at CFB Valcartier.

4.1 Nominal Testing

In nominal tests, the entire target could be seen (i.e. no obstruction). The test variables were target range and pose. The LIDAR has a finite angular resolution when pointing its laser source, which causes the spatial resolution to decrease with increasing range; the number of points on target is therefore lower in longer-range scans. Range also affects the energy return and the laser spot size (through beam divergence).
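Both effects scale linearly with range, as this back-of-envelope check shows. The 1.3 mrad divergence comes from Table 1; the angular step between samples is an assumed illustrative value, since the actual scan settings are not listed.

    import math

    DIVERGENCE_RAD = 1.3e-3      # beam divergence from Table 1: 1.3 mrad
    ANGULAR_STEP_DEG = 0.05      # assumed scan step between samples

    for range_m in (50, 100, 200, 400):
        spot_cm = DIVERGENCE_RAD * range_m * 100.0            # laser footprint
        spacing_cm = math.radians(ANGULAR_STEP_DEG) * range_m * 100.0
        print(f"{range_m:4d} m: spot ~{spot_cm:5.1f} cm, "
              f"point spacing ~{spacing_cm:4.1f} cm")
    # At 200 m the spot is ~26 cm across, matching the smearing effect
    # discussed in Section 4.1.2; both quantities grow linearly with range.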

The five target vehicles were scanned at 8 different ranges: 50, 100, 150, 200, 250, 300, 350 and 400 meters. At each range, scans were acquired for 4 different target poses: front (0°), front-left corner (45°), left side (90°) and back-left corner (135°) views. The true pose of the targets was not measured because the pose estimation provided by the ATR system is better (based on synthetic data testing) than our ability to map the model coordinate system back onto the vehicle.

[Figure 3: (a) Field configuration — sensors at one end, with bushes and the camouflage net ("CamNet") positioned along the 50 m to 400 m range. (b) Camouflage net configuration.]

[Figure 4: Nominal test results (field data) — average recognition rate (%) vs. range (m), 0 to 450 m; two curves: average recognition rate, and average recognition with correct pose rate.]

Table 1 Riegl LPM-800i specifications

Laser class: IIIb
Laser wavelength: 900 nm (near IR)
Laser pulse width: 11 ns
Beam divergence: 1.3 mrad
Accuracy: ±15 mm
Measurement rate: 1000 pts/sec
Range (good diffuse target): up to 800 m
Range (poor diffuse target): up to 250 m
Angular accuracy: ±0.009°


For this reason, a recognition test was defined as successful when the target vehicle is identified correctly and the pose estimate is within ±10° of an estimate made by the operator. The ATR software also provided a visual 3D view of the LIDAR data aligned with the matching information from the knowledge base, which allowed the operator to confirm successful pose estimation more accurately.
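A minimal sketch of this pass/fail criterion (function names are illustrative; the wrap-around handling is needed for headings near 0°):

    def pose_error_deg(estimated: float, reference: float) -> float:
        """Smallest absolute angular difference, handling 360-degree wrap."""
        return abs((estimated - reference + 180.0) % 360.0 - 180.0)

    def test_passed(output_id: str, true_id: str,
                    estimated_pose_deg: float, operator_pose_deg: float) -> bool:
        """Pass criterion: correct identity and pose within +/-10 degrees."""
        return (output_id == true_id
                and pose_error_deg(estimated_pose_deg, operator_pose_deg) <= 10.0)

    # Example: correct identity, 355 deg vs. the operator's 3 deg -> 8 deg error.
    assert test_passed("LAVIII", "LAVIII", 355.0, 3.0)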

The average recognition rate (i.e. all vehicles averaged together) obtained in nominal testing is presented in Figure 4. The average recognition rate independent of the pose estimation is plotted, as well as the recognition-with-correct-pose rate. The ATR system successfully identified all targets in all 4 poses up to 200 meters. The performance starts to drop past 200 meters and reaches 70% at 400 meters. This is due to rapidly decreasing sensor performance beyond 250 meters (the sensor limit for poor diffuse targets). These results demonstrate that the ATR system works very well with real-world data, even when comparing data from two different sensors (training models built using Neptec's LCS vs. field data acquired with the LIDAR).

Since the database is relatively small, it is important to evaluate how often the system guesses right. Comparing the two curves in Figure 4 demonstrates the low guess rate of the system: when the system chooses the right object, it selects the right pose 99.3% of the time (1 pose error out of 141 successful recognitions).

Table 2 presents the confusion matrix of the nominal test for all ranges combined. The left column represents the input target vehicle and the top row the system output. Analyzing the failure cases demonstrates that the system performance degrades gracefully with decreasing data quality. Even when the system does not select the right vehicle, it selects objects that are similar in shape most of the time.

4.1.1 False Positives

A false positive is defined as an incorrect recognition; an "unknown" output by the system does not contribute to false positives. Figure 5 presents the average recognition rate and false positive rate as a function of range for two different ATR system tunings. In the first case (solid line), the system was tuned to maximize the average recognition rate without considering false positives. These are the settings used for field testing. Figure 5 shows that this tuning is best for both recognition rate and false positive rate when operating within the sensor limit for poor diffuse targets. As the range increases past this limit, the false positive rate increases. Overall, this tuning yields an average recognition rate of 88.13% with a false positive rate of 11.5% (all ranges combined).

The system can also be tuned to minimize the average false positive rate. This is the second case presented in Figure 5 (dashed line). With these settings the average false positive rate drops to 3.75%, but at the cost of a lower recognition rate of 73.75%. Making the system more selective decreases the number of wrong choices, but it also generates more unknowns, thereby reducing the overall recognition rate. Figure 5 shows that, past the nominal sensor operating range, the second tuning will improve the overall performance.

Table 2 Nominal test confusion matrix (all ranges combined). Rows: input vehicle; columns: system output. The rows for Chevy, Civic, Mustang, Quest and Tercel are all zero (these vehicles were not scanned in the nominal tests) and are omitted.

Input   | MLVW Chevy Civic Coyote F350 Iltis LAVIII Mustang Quest Tercel Unknown | Pass Scans Rate (%)
MLVW    |   30     0     0      0    0     0      1       0     0      0       1 |   30    32    93.75
Coyote  |    0     0     0     29    0     0      3       0     0      0       0 |   29    32    90.63
F350    |    2     2     0      0   27     0      1       0     0      0       0 |   27    32    84.38
Iltis   |    0     3     1      0    0    26      0       0     0      0       0 |   26    32    81.25
LAVIII  |    0     0     0      2    0     0     30       0     0      0       0 |   29    32    90.63

(Rate = recognition rate with correct pose; the LAVIII row includes one correct identification with an incorrect pose estimate, hence 30 correct identifications but 29 passes.)
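As a quick consistency check, the per-vehicle rates can be recomputed from the row counts (values transcribed from Table 2; note that the table's pass criterion additionally requires a correct pose, which is why LAVIII's 30 identifications exceed its 29 passes):

    # Per-vehicle identification rate recomputed from the Table 2 rows.
    classes = ["MLVW", "Chevy", "Civic", "Coyote", "F350",
               "Iltis", "LAVIII", "Mustang", "Quest", "Tercel", "Unknown"]
    rows = {  # input vehicle -> output counts, transcribed from Table 2
        "MLVW":   [30, 0, 0,  0,  0,  0,  1, 0, 0, 0, 1],
        "Coyote": [ 0, 0, 0, 29,  0,  0,  3, 0, 0, 0, 0],
        "F350":   [ 2, 2, 0,  0, 27,  0,  1, 0, 0, 0, 0],
        "Iltis":  [ 0, 3, 1,  0,  0, 26,  0, 0, 0, 2, 0],
        "LAVIII": [ 0, 0, 0,  2,  0,  0, 30, 0, 0, 0, 0],
    }
    for vehicle, counts in rows.items():
        correct, total = counts[classes.index(vehicle)], sum(counts)
        # Identification only; the paper's pass criterion also checks pose.
        print(f"{vehicle:7s}: {correct}/{total} identified "
              f"({100.0 * correct / total:.2f}%)")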

[Figure 5: Average recognition and false positive rate for different ATR settings (field data) — recognition rate and false positive rate (%) vs. range (m), 0 to 450 m, for a system tuned for maximum recognition rate (solid) and a system tuned for minimum false positives (dashed).]


Future versions might therefore adapt these parameters based on range or data quality.
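The trade-off can be pictured as a single acceptance threshold on the match score, below which the system reports "unknown". This is an illustrative model only; the paper does not disclose the actual tuning parameters.

    def evaluate(results, threshold):
        """results: list of (true_id, best_id, score) tuples, one per scan.
        Returns (recognition rate %, false positive rate %)."""
        recognized = false_pos = 0
        for true_id, best_id, score in results:
            if score < threshold:
                continue                    # reported as "unknown"
            if best_id == true_id:
                recognized += 1
            else:
                false_pos += 1              # incorrect recognition
        n = len(results)
        return 100.0 * recognized / n, 100.0 * false_pos / n

    # Toy data: raising the threshold suppresses the false positive but
    # also turns a weak correct match into an "unknown".
    demo = [("F350", "F350", 0.9), ("Iltis", "Chevy", 0.6), ("MLVW", "MLVW", 0.4)]
    for t in (0.3, 0.5, 0.7):
        print(t, evaluate(demo, t))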

4.1.2 Decimation Testing

Testing with synthetic data has shown that the decreasing image resolution obtained by scanning at longer range should have very little impact on system performance [2]. During nominal field testing, however, a performance drop was observed at ranges exceeding 200 meters, at the limit of the sensor operating range for poor diffuse targets. Comparing synthetic scans to LIDAR scans makes it obvious that the LIDAR data becomes significantly distorted at ranges beyond 200 meters (Figure 6). The synthetic data reproduced the spatial resolution of the field sensor but did not simulate other artifacts caused by the increasing laser spot size and decreasing signal return at longer ranges. The increasing laser spot size (~26 cm diameter at 200 m) causes a smearing effect on the surface, washing out details. The weaker signal return from poorly diffuse surfaces increases the amount of missing data points.

To confirm this effect as the cause of the observed performance drop, LIDAR scans acquired at 50 m, where the distortions are smallest (small spot size, strong returns), were decimated to match the equivalent point spacing obtained at the other test ranges. The ATR system was then tested using the decimated scans. The test results are shown in Figure 7. These results clearly indicate that surface distortion is the dominant effect in the field results. When the distortions are minimal, the system performance follows the predictions made using synthetic data for the same 5 vehicles (10 poses per range) very closely. This confirms that the performance drop seen in field testing is caused by increasing surface distortion, not decreasing spatial resolution. These artifacts are related to the sensor characteristics (more specifically, laser power and beam divergence); a sensor with better characteristics would therefore provide similar performance at longer ranges for the same pointing accuracy. Also, the 50-200 meter LIDAR scans still contain some level of distortion and noise, yet performance using these scans matched the performance predicted using data generated in a virtual environment from the reference models. This shows that the system can tolerate a certain level of surface distortion (up to the equivalent LPM-800i distortion obtained at 200 meters) without any impact on system performance.
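The decimation itself reduces to subsampling the angular raster by the range ratio, since point spacing grows roughly linearly with range for a fixed angular step. A minimal sketch, assuming the scan is stored on its (rows, cols) angular grid:

    import numpy as np

    def decimate_to_range(grid: np.ndarray, scan_range_m: float,
                          simulated_range_m: float) -> np.ndarray:
        """grid: (rows, cols, 3) XYZ samples on the sensor's angular raster.
        Subsampling by the range ratio mimics the point spacing that the
        same angular step would give at the longer range."""
        stride = max(1, round(simulated_range_m / scan_range_m))
        return grid[::stride, ::stride]

    # e.g. a 50 m scan decimated with stride 4 mimics 200 m point spacing:
    # decimated = decimate_to_range(scan_grid, 50.0, 200.0)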

4.2 Occlusion Testing

Different types of tests were performed to evaluate the performance of the system when targets are partially obscured: camouflage net, vegetation and planar occlusion.

[Figure 6: Significant distortions appear in longer-range LIDAR scans — (a) 50 m synthetic, (b) 50 m LIDAR, (c) 250 m synthetic, (d) 250 m LIDAR.]

[Figure 7: Decimation test results — effect of changing resolution on recognition rate with correct pose (%) vs. range (m), 0 to 2000 m, comparing synthetic data, field data, and decimated 50 m field data.]


4.2.1 Camouflage Net

Scans were acquired through a military camouflage net for all vehicles at a range of 75 meters, using the same target poses as for the nominal tests. Longer-range scans were also acquired for the F350 and the MLVW at 175 meters. The laser pulse width and detector dynamics limit how close objects can be to each other and still be discriminated. For the LPM-800i, the minimum distance between objects was 11 feet; the camouflage net therefore had to be positioned at least this far from the target. Figure 8 shows a typical recognition performed through the camouflage net. The top image represents the intensity bitmap from the sensor. The middle picture shows the 3D sensor data rotated to reveal the object hidden behind the net. The bottom picture shows the segmented LIDAR data (yellow) aligned with the matching knowledge base information (red) according to the ATR system output (object & pose).
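As a plausibility check (our arithmetic, not a figure from the paper), the quoted 11 ft minimum separation is consistent with the spatial extent of the 11 ns pulse from Table 1:

    C = 299_792_458.0            # speed of light, m/s
    TAU = 11e-9                  # laser pulse width from Table 1: 11 ns
    pulse_length_m = C * TAU     # spatial extent of one pulse
    print(f"{pulse_length_m:.2f} m = {pulse_length_m / 0.3048:.1f} ft")
    # -> 3.30 m = 10.8 ft, roughly the 11 ft minimum separation quoted.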

Table 3 presents the confusion matrix for all camouflage net tests performed. Data acquired through the camouflage net shows an increased level of noise and surface distortion, and parts of the targets are missing. The system recognized the target correctly and in the right pose 27 times out of 28 (96% recognition rate). This demonstrates the capability of the system to operate with this type of obstruction.

4.2.2 Vegetation

The Coyote, LAV-III and MLVW vehicles were positioned behind the surrounding vegetation in various poses and at various ranges (175 to 260 m). Figure 9 shows a typical recognition performed with vegetation obscuration. Even though these scans cannot provide any direct quantitative performance evaluation, they are useful in demonstrating that the system can function under such conditions. Table 4 presents a summary of all vegetation scans acquired; darker pixels in the pictures represent target pixels. 9 of the 12 vegetation scans yielded successful recognition with correct pose. Typically, the system can still recognize the targets as long as there are enough features left in the scan for the system to lock onto.

Figure 8 Typical camouflage net recognition, F350 at 75 m. Intensity view (top), 3D view (middle) and system output (bottom)

Table 3 Confusion matrix of the camouflage net occlusion test. Rows: input vehicle; columns: system output. The rows for Chevy, Civic, Mustang, Quest and Tercel are all zero (these vehicles were not scanned through the net) and are omitted.

Input   | MLVW Chevy Civic Coyote F350 Iltis LAVIII Mustang Quest Tercel Unknown | Pass Scans Rate (%)
MLVW    |    8     0     0      0    0     0      0       0     0      0       0 |    8     8   100.00
Coyote  |    0     0     0      4    0     0      0       0     0      0       0 |    4     4   100.00
F350    |    0     1     0      0    7     0      0       0     0      0       0 |    7     8    87.50
Iltis   |    0     0     0      0    0     4      0       0     0      0       0 |    4     4   100.00
LAVIII  |    0     0     0      0    0     0      4       0     0      0       0 |    4     4   100.00


4.2.3 Planar Obscurations

This test was performed to measure the performance of the system when a target is obscured by a solid object. It is performed artificially by removing points from the nominal scans to simulate a wall hiding a predefined percentage of the target. The obscuring plane can be located on any side of the target, but the testing was limited to a plane hiding the bottom or the right side of the target. Occlusion levels of 30%, 50% and 70% were tested (a minimal sketch of the procedure follows).
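A minimal sketch of this artificial occlusion, under stated assumptions: the axis conventions (z up, x across) are ours, and "percentage of the target" is interpreted as a fraction of the target's extent along the cut axis, which may differ from the paper's exact definition.

    import numpy as np

    def occlude(points: np.ndarray, fraction: float,
                side: str = "bottom") -> np.ndarray:
        """Drop the part of the target hidden by a simulated plane."""
        axis = 2 if side == "bottom" else 0     # assumed: z up, x across
        lo, hi = points[:, axis].min(), points[:, axis].max()
        cut = lo + fraction * (hi - lo)         # position of the virtual wall
        return points[points[:, axis] >= cut]   # keep only the visible part

    # occlude(scan, 0.30, "bottom") simulates the 30% bottom-occlusion case.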

Figure 10 presents the results when covering the bottom and the right part of the target. The results obtained from planar occlusion testing demonstrate the ability of the system to work when only a subset of a target is visible. As seen with synthetic data, the recognition rate is still very good at the shorter ranges. The difference is that with real-world LIDAR data the system has to deal with extra surface distortions; for that reason, the performance drops quickly at the longer ranges, where the surface distortions are larger. As seen in synthetic data testing, occlusions from the right side produce slightly better results than occlusions from the bottom. The bottom part of vehicles tends to have more features for the system to lock onto; removing the bottom section therefore has more impact on the system's ability to recognize a target.

[Table 4: Vegetation scans summary — pictures of the 12 vegetation scans, with darker pixels representing target pixels.]

Figure 9 Typical vegetation recognition. Intensity view (top), 3D view (middle) and system output (bottom)

4.3 Vehicle Configurations

This test was performed to assess how the system performs when targets have articulated parts. For example, two of the military vehicles (LAV-III and Coyote) have a turret, and the ATR system was trained with the turrets pointing forward. 6 scans of the Coyote were acquired from 100 m with the vehicle and turret in different poses (Figure 11a). A scan was also acquired at 250 m with the backdoor of the Coyote opened (Figure 11b); note that the vehicle was also partly hidden by vegetation for this scan. The system properly identified the targets in all scans. The vehicle body being by far the largest feature, the pose estimation algorithm locked onto it to provide a correct pose estimate with respect to the vehicle body. If the vehicle body is removed from the image, the system will still correctly identify the vehicle and will lock onto the turret to define the target pose. This is very interesting because it demonstrates that this algorithm could be used to detect individual target parts. The approach would be to find the biggest feature first (the body) and then look for the other parts (the turret) using a priori knowledge of the articulation, as sketched below.
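A sketch of that staged search, with placeholders throughout: the centroid stands in for a real 6-DOF pose fit, and the gating radius and attachment offsets are illustrative values, not the demonstrator's code.

    import numpy as np

    def estimate_pose(points: np.ndarray) -> np.ndarray:
        """Placeholder 'pose': the centroid stands in for a full 6-DOF fit."""
        return points.mean(axis=0)

    def recognize_articulated(points: np.ndarray, attachments: dict,
                              gate: float = 2.0):
        """Stage 1: lock onto the dominant feature (the body). Stage 2:
        search for each articulated part only near its a-priori attachment
        point, expressed relative to the estimated body pose."""
        body_pose = estimate_pose(points)
        parts = {}
        for name, offset in attachments.items():
            centre = body_pose + offset
            near = points[np.linalg.norm(points - centre, axis=1) < gate]
            parts[name] = estimate_pose(near) if len(near) else None
        return body_pose, parts

    # e.g. recognize_articulated(scan, {"turret": np.array([0.0, 0.0, 1.5])})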

4.4 Tests with the DRDC - Valcartier Experimental LIDAR

A LIDAR prototype developed by DRDC - Valcartier was concurrently tested in the field. Although the objective of the field trials did not include using this sensor, a few target scans were acquired with it. This sensor has a better pointing accuracy, a higher laser power, a smaller beam divergence and a shorter laser pulse than the commercial Riegl LIDAR, so it is expected to provide less distorted, higher resolution data at longer ranges. A few scans of the MLVW were acquired at ranges up to 620 meters. As shown in Figure 12, successful recognition was obtained with 3D data from this sensor at 620 meters, even in the presence of occlusion from vegetation.

[Figure 10: Results from planar occlusion testing — recognition rate with correct pose (%) for no occlusion, 30%, 50% and 70% planar occlusion: (a) covering the left part, plotted vs. range (m), 0 to 400 m; (b) covering the bottom part, plotted vs. average points on target (100 to 100,000).]

[Figure 11: (a) Coyote with turret pointing left; (b) Coyote with backdoor opened, seen through vegetation.]


Surface distortions are fairly low, showing the effect of the smaller laser spot. This clearly demonstrates the ability of the ATR system to function at longer ranges when a sensor with better characteristics is used.

5. FUTURE WORK

Potential applications of 3D ATR technology point to areas that would benefit from further research. The most promising application is low-altitude airborne reconnaissance (UAVs, helicopters). Research is proceeding to extend the system to work in such a context. This will involve two main areas of research: fast 3D sensing technology and algorithm extensions.

In airborne scenarios, the 3D sensor will be moving at a relatively high speed and the ground target might also be moving. Using 3D ATR technology in such a scenario will require fast 3D sensing technology. Neptec is currently investigating the use of motion stereo and Flash (scannerless) LADAR technology as potential sensors for such a system. Research efforts have been initiated to adapt motion stereo technology to the developed ATR system.

The algorithms developed in the current research focused on using a single scan of an unknown target. The nature of an airborne ATR system would allow several views of the same target to be obtained, helping to improve both system performance and robustness [6]. The algorithms will also be augmented to allow the system to determine the pose of target articulations, which would provide an operator with a definite tactical advantage.

6. CONCLUSION

A 3D Automatic Target Recognition (ATR) demonstrator was developed under a Defence-Industry-Research (DIR) program with DRDC - Valcartier. After successful testing using synthetic data, the ATR system prototype was deployed in the field at CFB Valcartier to evaluate its performance. The system was trained using 3D models of 10 potential targets. Scans of 5 vehicles were acquired during field testing with a commercial imaging LIDAR. Nominal 3D images of the targets were acquired at ranges of 50 to 400 meters, in 4 different poses at every range. Test scans were also acquired through a military camouflage net, and the surrounding vegetation was used to scan targets partially obscured by foliage. Finally, articulated vehicles were scanned in configurations different from the one used for training to evaluate the impact on system performance. Analysis of the results observed during field testing has shown that the developed ATR system:

• Has a very high recognition rate with a low false positive rate within the nominal operating range of the sensor
• Is robust to very large spatial resolution changes compared to the training resolution (tested up to 3 orders of magnitude)
• Is robust to various types of obscuration (camouflage net, vegetation, planar occlusion)
• Is robust to changes in vehicle configuration (articulated parts)
• Is robust to noise and distortions (up to the equivalent of LIDAR data at 200 m)
• Is independent of the background scene
• Degrades gracefully in performance

Figure 12 Recognition of a MLVW at 620 meters using the DRDC - Valcartier experimental LIDAR

ACKNOWLEDGEMENTS

The Neptec ATR project team would like to thank the following individuals for their contribution in making this project a success: Jean Maheux, Yves Bouchard, Luc Dubé, Corp. Denise Gosselin and Sgt. Marc Grenier (DRDC Valcartier); Dr. Claire Samson (Carleton University), Dr. Frank P. Ferrie (McGill University), Dr. Xavier Maldague (Laval University), Graeme MacWilliam and Steven Miller (Neptec Design Group Ltd.).

REFERENCES

1. http://www.neptec.com
2. English, C., Ruel, S., Melo, L., Church, P. and Maheux, J., "Development of a Practical 3D Automatic Target Recognition and Pose Estimation Algorithm", SPIE Defense & Security Symposium, Proceedings on Automatic Target Recognition XIV, Orlando, Florida, April 2004.
3. Samson, C., English, C., Deslauriers, A., Christie, I. and Blais, F., "Imaging and tracking elements of the International Space Station using a 3D auto-synchronized scanner", SPIE 16th Annual International Symposium on Aerospace/Defence Sensing, Simulation, and Controls (AeroSense 2002), Orlando, Florida, April 1-5, 2002.
4. Blais, F. et al., "Development of a real-time tracking laser range scanner for space applications", Proceedings, Workshop on Computer Vision for Space Applications, Antibes, France, September 22-24, pp. 161-171, 1993.
5. http://www.rieglusa.com
6. Gelbart, A., Redman, B. C., Light, R. S., Schwartzlow, C. A. and Griffis, A. J., "Flash LIDAR based on multiple-slit streak tube imaging LIDAR", Proceedings on Laser Radar Technology and Applications VII, Orlando, Florida, April 3-4, 2002.
