
INTELLIGENT DISTRIBUTED SURVEILLANCE SYSTEMS

Dual camera intelligent sensor for high definition 360 degrees surveillance

G. Scotti, L. Marcenaro, C. Coelho, F. Selvaggi and C.S. Regazzoni

Abstract: A novel integrated multi-camera video-sensor (panoramic scene analysis – PSA) system is proposed for surveillance applications. In the proposed set-up, an omnidirectional imaging device is used in conjunction with a pan tilt zoom (PTZ) camera, leading to an innovative kind of sensor that is able to automatically track, at a higher zoom level, any moving object within the guarded area. In particular, the catadioptric sensor is calibrated and used in order to track every single moving object within its 360 degree field of view. Omnidirectional image portions are eventually rectified, and the pan, tilt and zoom parameters of the moving camera are automatically adjusted by the system in order to track detected objects. In addition, a co-operative strategy was developed for the selection of the object to be tracked by the PTZ sensor in the case of multiple targets.

1 Introduction

In the design of an architecture for video surveillance applications, many functionalities, such as detection, tracking and classification of objects acting within the guarded environment, have to be taken into account. In particular, achieving a robust and reliable behaviour of the system against corruption of data is an important issue. In order to enhance overall robustness and to extend the spatial coverage of the system, a multi-sensor approach has been chosen. The architecture proposed here is composed of a 360 degree catadioptric sensor for full range monitoring and a pan tilt zoom (PTZ) camera for high-resolution surveillance. The catadioptric sensor solution is adopted in video surveillance systems because of its advantages in terms of coverage and costs. In particular, Harabar [1] utilises this kind of optics to automatically pilot a small traffic surveillance helicopter or a robot. Nayar and Boult at Columbia University produced a system [2] capable of detecting activity in the monitored scene, which fuses catadioptric sensors with PTZ cameras. Boult, in [3], describes a video surveillance system based upon a catadioptric sensor and capable of detecting and tracking objects in complex environments.

The system proposed in this paper 'closes the loop', as it is a complete dual camera real-time system that actively tracks objects of interest with a high degree of independence. In fact, the PSA works like an embedded smart sensor capable of detecting and tracking multiple objects at either low or high resolution by using the mobile camera.

This approach greatly simplifies the problems of data fusion in multi-camera video surveillance systems, drastically reducing set-up and maintenance costs as well as computational complexity.

2 The catadioptric sensor

In the literature, different types of catadioptric sensors can be found. What characterises each catadioptric sensor is the shape of the adopted mirror (parabolic, hyperbolic, conical or ellipsoidal). In fact, the mirror must guarantee a single centre of projection in order to create a perspective image.

The mirror chosen for this work has a double parabolic envelope; this means the projection centre is in the parabola focus and the geometric mapping between the image and the real world is invariant to mirror translations (Fig. 1).

The chosen reflector is placed directly over a conventional camera sensor and its field of view extends from 4 degrees over the horizon to 56 degrees below. With such a sensor, the resolution of the obtained image is not constant (Fig. 2).

In the next Section, an intrinsic and extrinsic sensor calibration procedure will be explained for transforming the polar image into a Cartesian one and for locating targets on the ground plane. The extrinsic calibration is also required for pointing the PTZ camera via the omnidirectional sensor.

3 Sensor calibration

Sensor calibration is a procedure for obtaining sensor parameters and positioning. It is divided into three different phases:

1. Intrinsic calibration
2. Extrinsic calibration
3. Joint calibration

In the first step, image centre, eccentricity and radius are estimated in order to achieve an image rectification [4].

© IEE, 2005

IEE Proceedings online no. 20041302

doi: 10.1049/ip-vis:20041302

G. Scotti and C.S. Regazzoni are with the Department of Biophysical and Electronic Engineering, University of Genova, Via Opera Pia 11a, 16145 Genova, Italy

C. Coelho and F. Selvaggi are with Elsag S.p.A., Via G. Puccini 2, 16154 Genova, Italy

L. Marcenaro is with Technoaware S.r.L., Italy

E-mail: [email protected]

Paper first received 16th April and in revised form 7th December 2004



In the second phase, the relationship between the image plane and the world plane is computed, assuming the ground plane and the image plane are parallel. In the third phase, a methodology for pointing the PTZ camera at the target, by using information taken from the catadioptric sensor, is described.

3.1 Intrinsic calibration

Intrinsic calibration is the procedure required for image rectification. In particular, rectification is used for enhancing the comprehensibility of the polar image (Fig. 3). In the literature, more complex rectification algorithms [5], generating a perspective view from a rectified one, do exist. In our method, an image re-sampling is first performed to overcome the non-homogeneous resolution of the input polar image.

In Fig. 4 it is possible to see a schematic representation of this procedure, in which all points of domain 2 (rectified) are mapped into domain 1 (polar).

The mapping functions are:

$x_1 = G(y_2)\cos(x_2), \quad y_1 = G(y_2)\sin(x_2) \qquad (1)$

where $(x_2, y_2)$ is a point in domain 2, $(x_1, y_1)$ is a point in domain 1 and $G(\cdot)$ is a stretching function along the vertical dimension.

The values $(x_1, y_1)$ must be integer, so four integers are obtained as follows:

$X_I, X_F \text{ such that } X_I + \tfrac{1}{8}X_F \cong x_1 \qquad (2)$

$Y_I, Y_F \text{ such that } Y_I + \tfrac{1}{8}Y_F \cong y_1 \qquad (3)$

This procedure clearly introduces a certain aliasing, depending on the resolution chosen for the output image (domain 2).

The interpolating function is then:

$V = \sum_{k=0}^{3} A_k P(Y_F, k) \qquad (4)$

where:

$A_k = \sum_{n=0}^{3} IMM(X_I - 1 + n, Y_I - 1 + k) P(X_F, n) \quad \text{for } k = 0, 1, 2, 3 \qquad (5)$

$IMM$ is the input image (domain 1), $V$ is the pixel value in domain 2 and $P$ is an $8 \times 4$ matrix of interpolative filters.

In order to obtain a correct image re-sampling, five parameters have to be estimated:

$C(X_0, Y_0)$: the common centre of the circumferences
$K$: scale factor along $x$
$R(E)$: external radius
$R(I)$: internal radius
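As an illustration, the whole re-sampling of eqs. (1)–(5) can be sketched in Python as follows. The cubic kernel used to fill the 8 × 4 filter matrix P, the azimuth convention for $x_2$ and the shift of domain-1 coordinates to the centre $C(X_0, Y_0)$ are assumptions: the paper specifies the structure of the computation but not these details.

```python
import numpy as np

def interp_taps(phases=8):
    """Build the 8x4 matrix P of interpolative filters of eqs. (4)-(5).
    Each row holds 4 cubic (Catmull-Rom) weights for one quantised
    fractional offset t = i/8; the paper does not state which kernel
    it uses, so this choice is an assumption."""
    P = np.empty((phases, 4))
    for i in range(phases):
        t = i / phases
        P[i] = [(-t**3 + 2*t**2 - t) / 2,
                (3*t**3 - 5*t**2 + 2) / 2,
                (-3*t**3 + 4*t**2 + t) / 2,
                (t**3 - t**2) / 2]
    return P

def rectify(imm, centre, G, out_w, out_h, phases=8):
    """Re-sample the polar image `imm` (domain 1) into a rectified
    panorama (domain 2). `G` maps a rectified row y2 to a radius in
    pixels (eq. (7)); `centre` is C(X0, Y0)."""
    P = interp_taps(phases)
    h1, w1 = imm.shape
    out = np.zeros((out_h, out_w), dtype=np.float32)
    for y2 in range(out_h):
        r = G(y2)
        for x2 in range(out_w):
            az = 2.0 * np.pi * x2 / out_w          # column -> azimuth
            x1 = centre[0] + r * np.cos(az)        # eq. (1), shifted to C
            y1 = centre[1] + r * np.sin(az)
            XI, YI = int(x1), int(y1)              # integer parts
            if not (1 <= XI < w1 - 2 and 1 <= YI < h1 - 2):
                continue                           # border handling omitted
            XF = int(phases * (x1 - XI))           # eq. (2): XI + XF/8 ~ x1
            YF = int(phases * (y1 - YI))           # eq. (3)
            A = [imm[YI - 1 + k, XI - 1:XI + 3] @ P[XF]   # eq. (5)
                 for k in range(4)]
            out[y2, x2] = np.dot(A, P[YF])         # eq. (4)
    return out
```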

Fig. 1 Catadioptric sensor [10]

Fig. 2 Extrinsic sensor calibration image



The estimation procedure is based on an iterative least-mean-squares methodology and takes place by selecting N pairs of points along the two circumferences.

In particular, this procedure is based on the iterative minimisation of the following functional Q:

$Q = \sum_{n=1}^{N} (R_n(E) - R(E))^2 + \sum_{n=1}^{N} (R_n(I) - R(I))^2 \qquad (6)$

The last quantity to be evaluated is the stretching function G(r), which is strictly perspective dependent. For this reason, a vertical graded pattern bar is placed in the scene at a fixed distance L from the camera. Then, by selecting several points on the image belonging to the pattern bar, G(r) can be calculated using the least squares technique:

$G(r) = g_0 + g_1 r + g_2 r^2 + g_3 r^3 + g_4 r^4 \qquad (7)$
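Since eq. (7) is an ordinary polynomial model, the least-squares fit is direct. The following sketch assumes the operator-selected pattern-bar points are available as arrays; the variable names are illustrative, not taken from the paper.

```python
import numpy as np

def fit_stretching_function(ordinates, radii):
    """Least-squares fit of the 4th-order stretching polynomial of
    eq. (7): `ordinates` are rectified vertical coordinates of the
    selected pattern-bar points, `radii` the corresponding radii in
    pixels on the polar image."""
    return np.poly1d(np.polyfit(ordinates, radii, deg=4))  # callable G(.)
```

The returned callable can then serve directly as the function G used by the re-sampling sketch above.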

3.2 Extrinsic calibration

Generally, extrinsic calibration deals with the estimation of the mathematical relationship between the image plane and the world space. For a traditional camera, this means finding the tilt angle $\varphi$ and the sensor displacement.

Traditional calibration methodologies have to be modified in order to satisfy the sensor requirements. In the literature, several algorithms have been proposed [6, 7]. In our case, owing to the physical configuration of the sensor (i.e. the mirror is characterised by a field of view covering 4 degrees over the horizon to 56 degrees below), it is possible to identify a different angle value for each incoming ray of light. In order to solve the problem, a radioscopic chessboard grid is wrapped around the sensor (Fig. 2), obtaining an image that is particularly useful for estimating the $\varphi$ angle as a function of the distance r from the image centre.

It is then possible to find the relationship between the radius r in pixels and the angle value:

$\varphi = f(r) \qquad (8)$

Interpolating the experimental data obtained by applying the chessboard pattern to the sensor, it has been found that the dependence is linear (Fig. 5).

It is also important to underline that this procedure is totally application independent, depending only on the catadioptric sensor geometry. By using this dependence, the distance of an object from the camera on the ground plane can be evaluated once its distance in pixels from the centre of the image is known (Fig. 6):

$r_w = H \tan(\varphi) \qquad (9)$
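Combining eqs. (8) and (9), the ground-plane distance follows directly from the pixel radius. In this sketch the linear coefficients of $\varphi = f(r)$ are placeholders to be taken from the chessboard calibration of Fig. 5, not figures from the paper.

```python
import numpy as np

def ground_distance(r_pel, a, b, H=5.7):
    """Ground-plane distance r_w from the pixel radius r_pel.
    `a`, `b` are the linear coefficients of the extrinsic fit
    phi = a*r + b (eq. (8)); H is the sensor height in metres
    (5.7 m is the installation height quoted in Section 5)."""
    phi = np.deg2rad(a * r_pel + b)    # eq. (8): linear phi(r)
    return H * np.tan(phi)             # eq. (9): r_w = H tan(phi)
```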

Fig. 3 PSA functionalities [9]

Fig. 4 Image re-sampling

Fig. 5 Angular resolution



The object 'world position' in a polar coordinate system is finally estimated after evaluating the angle displacement on the image plane.

3.3 Joint calibration strategy

This strategy is fundamental for the optimal functioning of the proposed system. Vertical camera optical axis alignment and absolute PTZ camera positioning are utilised for this calibration phase (Fig. 7).

In particular, a reference zero position $Z(x_0, y_0)$ for the PTZ camera on the catadioptric 360 degree image is first defined. This position coincides with the default orientation of the PTZ camera and is chosen as the starting point for pan angle evaluation (Fig. 8). In fact, given a point P(x, y) and the image centre $C(X_0, Y_0)$, it is simple to find the absolute pan angle $\phi$:

$\phi = \widehat{ZCP} \qquad (10)$

The movable camera tilt angle $\omega$ is instead evaluated using the extrinsic calibration. In fact, knowing the mobile camera height $H_m$ and the object distance from the camera on the ground plane $r_w$, the tilt angle for each image point can be calculated simply by using the following expression:

$\omega = \arctan\left(\frac{H_m}{r_w}\right) \qquad (11)$

It is then possible for the user to point the moving camera simply by clicking on the omnidirectional image or directly on the rectified one.
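Putting eqs. (10) and (11) together, the pointing computation reduces to a few lines. The sign and zero conventions of the pan axis below are assumptions, and the ground distance is obtained from the extrinsic calibration of Section 3.2.

```python
import numpy as np

def ptz_angles(p, centre, zero, Hm, ground_distance):
    """Pan and tilt for the moving camera given a clicked point
    P(x, y) on the omnidirectional image; a sketch of eqs. (10)-(11).
    `zero` is the reference position Z(x0, y0), `Hm` the PTZ camera
    height, `ground_distance` a callable mapping a pixel radius to
    r_w (eqs. (8)-(9))."""
    vz = np.subtract(zero, centre)             # towards Z(x0, y0)
    vp = np.subtract(p, centre)                # towards P(x, y)
    # eq. (10): pan is the angle at C between Z and P
    pan = np.arctan2(vp[1], vp[0]) - np.arctan2(vz[1], vz[0])
    rw = ground_distance(np.linalg.norm(vp))   # radius in pixels -> metres
    tilt = np.arctan2(Hm, rw)                  # eq. (11): arctan(Hm / rw)
    return np.degrees(pan) % 360.0, np.degrees(tilt)

# Illustrative call (all values hypothetical):
# pan, tilt = ptz_angles(p=(420, 310), centre=(384, 288), zero=(384, 100),
#                        Hm=8.0, ground_distance=lambda r: 5.7 * np.tan(
#                            np.deg2rad(0.3 * r + 5.0)))
```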

4 Multi-target tracking system

The PSA sensor is able to track multiple objects directly on the panoramic image [8]. The software processing chain used to detect and track objects is depicted in Fig. 9. In the following, a brief description of the most important modules is given.

Object detection is performed by subtracting a reference image B (background image) from the current image I, obtaining as output another image containing the changes in the scene [8–10]:

$|I(j, i) - B(j, i)| > Th \Rightarrow \text{Foreground pixel}$

$|I(j, i) - B(j, i)| \le Th \Rightarrow \text{Background pixel}$

By using this image, a bounding box (blob) is drawn and some features (such as colour, shape and position) are extracted for each object by the feature extraction module (blob colouring), in order to track the same blob in the next image. In particular, each moving area (blob) detected in the scene is bounded by a rectangle to which a numerical label is assigned. Through the detection of temporal correspondences among bounding boxes, a graph-based temporal representation of the dynamics of the image primitives can be built. The temporal graph provides information on the current bounding boxes and their relations to the boxes detected in the previous frames. By using the temporal graph layer as a scene representation tool, tracking can be performed to preserve temporal coherence between blobs. An alarm is then raised every time an object enters a forbidden area.

Fig. 6 System resolution

Fig. 7 Sensors physical set-up

Fig. 8 PTZ camera pan and tilt angles

Fig. 9 Image processing chain
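The correspondence rule used to build the edges of the temporal graph is not specified in the paper; a minimal sketch based on bounding-box overlap, one plausible cue alongside the colour and shape features mentioned above, is:

```python
def iou(a, b):
    """Overlap ratio of two boxes given as (x, y, w, h)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / float(a[2] * a[3] + b[2] * b[3] - inter)

def link_blobs(prev_boxes, curr_boxes, thr=0.3):
    """One layer of the temporal graph: an edge joins a previous box
    to a current box when their overlap exceeds `thr` (an illustrative
    threshold, not a value from the paper)."""
    return [(i, j) for i, p in enumerate(prev_boxes)
                   for j, c in enumerate(curr_boxes) if iou(p, c) > thr]
```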

The PSA sensor is able to display the tracked object on the panoramic camera using rectification (low resolution), or using the pan tilt zoom camera (high resolution).

Many false alarms (i.e. false blob tracks) can appear in outdoor environments when meteorological conditions are adverse. In fact, shadows and illumination changes generate noisy artefacts, which lead to misdetection at the low level (i.e. change detection), so a background-updating algorithm is required. The updated background $B_{k+1}(x, y)$ is then obtained using an alpha filter as follows:

$B_{k+1}(x, y) = I_k(x, y) + \alpha [B_k(x, y) - I_k(x, y)] \qquad (12)$

where $I_k(x, y)$ is the current image, $B_k(x, y)$ is the background and $\alpha \in [0, 1]$ is an updating coefficient.
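The detection rule above and the update of eq. (12) fit in a few lines; the threshold and the value of α below are illustrative only, since the paper states just that α lies in [0, 1].

```python
import numpy as np

def detect_and_update(I, B, Th=30, alpha=0.95):
    """Change detection against background B followed by the
    alpha-filter background update of eq. (12). I and B are
    uint8 greyscale frames of equal size."""
    diff = np.abs(I.astype(np.int16) - B.astype(np.int16))
    foreground = diff > Th                              # |I - B| > Th
    B_next = I + alpha * (B.astype(np.float32) - I)     # eq. (12)
    return foreground, B_next.astype(np.uint8)
```

A value of α close to 1 makes the background adapt slowly, absorbing gradual illumination changes while preserving genuinely moving objects as foreground.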

4.1 Target selection

Target selection is performed:

1. Automatically: when an object enters an alarmed zone previously selected on the image by the user.
2. Manually: the target is selected by the operator on the panoramic image or on the rectified one.

In both cases the system is able to track the object at low resolution, redirecting the moving camera by using the information on the blob position in the image [11].

During automatic functioning, the system is also able to generate an alarm log file and to save a rectified image for each alarmed blob (Fig. 10).

In addition to the functionalities described above, this sensor is able to track several blobs at the same time (multi-target tracking mode), showing a rectified image for each one. In this way, it is possible to automatically monitor a wide area at low resolution.

5 Results

In order to test the proposed system, the panoramic camera has been installed at 5.7 m and the PTZ sensor at 8 m over a car park (Fig. 7).

For the implementation, two 1/3-inch sensor cameras have been used, ensuring an image resolution of 768 × 576 both for the panoramic 360 degree camera and for the PTZ. The maximum frame rate obtained with this configuration is about 15 fps on an Intel Pentium IV 2.4 GHz, permitting the tracking of fast-moving objects such as cars or running pedestrians. The maximum object speed for tracking has been found to be 60 km/h for blobs belonging to the green 'tracking' belt of Fig. 11. The panoramic image is otherwise characterised by a nonlinear resolution as a function of the distance from the centre (Fig. 6) and by a blind circular zone that can be a problem both for detection and for tracking. Consequently, working zones have been identified for each of the system functionalities, as depicted in Fig. 11.

Fig. 10 Multi-target tracking

Fig. 11 System working zones

From experimentation it has been found that:

$\psi_{oi} = 34°, \quad \psi_i = 39°, \quad \psi_f = 79°, \quad \psi_{of} = 94° \qquad (13)$

where $\psi = \varphi + 90$, $\psi_{oi}$ is the minimum theoretical (radial) visibility angle, $\psi_i$ is the real visibility angle, $\psi_f$ is the maximum angle of detection and $\psi_{of}$ is the maximum angle of visibility.

In particular, in the 'detection' zone only visual detection is allowed, because of the low resolution and the large distance from the camera, while in the 'tracking' field tracking is also granted, and in the 'classification' area blob classification is additionally possible, in order to determine whether a detected 'change' is a pedestrian or a car.

The sensor extrinsic calibration error S has also been evaluated, as the difference between the measured distance and the calculated one:

$S = |r_{w,meas} - r_{w,eval}| \qquad (14)$

as can be seen in Table 1. The values obtained for S demonstrate the validity of the methodology used for calibrating the sensor. In fact, the error remains quite small for points belonging to the detection field: as Table 1 shows, with the camera placed at 5.7 m, a measured $r_w$ of 24.5 m (at an angle $\psi$ of 76 degrees) corresponds to an error S of only 1.38 m.

In order to show the PSA capabilities, three image sequences (Figs. 12–14) are presented, corresponding to outdoor and indoor environments. In Figs. 12–14 it is possible to see how the target object is tracked over time. In particular, it is possible to see how the system monitors a large area directly by operating on the 360 degree image and is able to track objects at both low and high resolution, even in the presence of more than one target, granting full area coverage.

From 200 different tests it has been found that the PSA is characterised by a frequency of object loss (FL) equal to 5% in the 'tracking' field, by a false detection rate (FFD) lower than 2% and by a frequency of correct detection (FCD) greater than 95%.

Table 1: Calibration error

r (pel)   r_w (m) measured   r_w (m) evaluated   S (m)
  89            4.50               3.94          0.56
 157            9.50               8.90          0.60
 205           19.50              19.30          0.20
 217           24.50              25.88          1.38
 229           34.50              38.60          3.90

Fig. 12 Pedestrian tracking




It is also important to observe that the PSA greatly reduces installation and maintenance costs. In fact, this solution is equivalent to a system of four traditional fixed cameras (or three fixed cameras plus one moving camera) but does not require complex data fusion and calibration procedures.

Fig. 13 Car tracking

Fig. 14 Indoor functioning



One system limitation is the poor resolution at relatively large distances from the projection of the camera optical axis. This fact enlarges the 'horizon' and 'detection' zones, limiting the system functionalities. A possible solution to this problem is the use of mega-pixel cameras, obtaining the same frame rate together with a higher resolution and leading to a strong increase in tracking capabilities.

6 Applications

The proposed architecture can be used as part of a distributed video surveillance system composed of two video cameras and four processors (Fig. 15).

The described system is capable of detecting and classifying targets in order to reveal intrusions into protected areas. In particular, when an alarm occurs, the omnidirectional camera is able to trigger the PTZ camera, which begins to track the target autonomously. This feature is required whenever the object goes outside the tracking field of the panoramic camera. Furthermore, each processed video stream is also stored in a distributed database together with metadata for future queries. Remote 'intelligent' database retrieval is then possible: the user is able to look for an event of interest using several search keys, such as date, temporal interval, alarm typology or alarmed area crossing.

Therefore, the integration of the PSA system in a distributed architecture, such as the one depicted here, greatly reduces complexity and costs, granting complete high performance 360 degree surveillance.

7 Conclusions

In this paper, a novel 360 degree video surveillance sensor called PSA has been described. It has the advantages of a multi-camera system while maintaining the robustness of a single camera. The proposed results demonstrate that the system's tracking capabilities are equal to those of traditional multi-camera systems, while building costs and configuration procedures are reduced. This sensor has been designed as a multi-purpose video surveillance system, so possible applications include traffic monitoring, monitoring of sensitive sites, military applications and virtual reality. Possible future evolutions include an independent PTZ tracking camera, in order to track objects even when they move out of the 'tracking' field. In this way, the limits imposed by the reflector resolution can be easily overcome, permitting targets to be tracked well outside the tracking belt.

8 Acknowledgments

This work was performed under co-financing of the MIUR, within the project FIRB-VICOM, and of ELSAG S.p.A. G. Scotti has been funded under a grant by the County of Genoa and ELSAG S.p.A.

9 References

1 Harabar, S., and Sukhatme, G.: 'Omnidirectional vision for an autonomous helicopter'. Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), 2003

2 Nayar, K., and Boult, T.: 'Omni-directional vision systems: PI report'. Proc. DARPA Image Understanding Workshop, Monterey, Nov. 1998

3 Boult, T.E., Micheals, R.J., Gao, X., and Eckmann, M.: 'Into the woods: visual surveillance of non-cooperative and camouflaged targets in complex outdoor settings', Proc. IEEE, 2001, 89, pp. 1382–1402

4 Baker, S., and Nayar, S.K.: 'Catadioptric image formation'. Proc. DARPA Image Understanding Workshop, May 1997

5 Daniilidis, K., Makadia, A., and Bulow, T.: 'Image processing in catadioptric planes: spatiotemporal derivatives and optical flow computation'. Proc. 3rd Workshop on Omnidirectional Vision (OMNIVIS), IEEE, 2002, pp. 3–10

6 Paulino, A., Araujo, H., and Salvi, J.: 'Pose estimation for central catadioptric systems: an analytical approach'. Proc. ICPR 2002, Quebec, Canada

7 Sturm, P.: 'Mixing catadioptric and perspective cameras'. Proc. 3rd Workshop on Omnidirectional Vision (OMNIVIS), IEEE, 2002, pp. 37–44

8 Marcenaro, L., Gera, G., and Regazzoni, C.S.: 'Adaptive change detection approach for object detection in outdoor scenes under variable speed illumination changes'. Proc. EUSIPCO, Tampere, Finland, 2000, pp. 1025–1028

9 Regazzoni, C.S., Vernazza, G., and Fabri, G. (Eds.): 'Advanced video-based surveillance systems' (Kluwer Academic Publishers, Norwell, MA, 1999)

10 Skifstad, K., and Jain, R.: 'Illumination independent change detection for real world image sequences', Comput. Vis. Graph. Image Process., 1989, 46, pp. 387–399

11 Marchesotti, L., Messina, A., Marcenaro, L., and Regazzoni, C.: 'A cooperative multi-sensor system for face detection in video surveillance applications', Int. J. Chin. Autom., 2002, 5, (5)

Fig. 15 Distributed VSS architecture
