Reference Independent Moving Object Detection: An Edge Segment Based Approach

M. Ali Akber Dewan, M. Julius Hossain, and Oksam Chae*

Department of Computer Engineering, Kyung Hee University, 1 Seochun-ri, Kiheung-eup, Yongin-si, Kyunggi-do, South Korea, 449-701
dewankhu@gmail.com, [email protected], [email protected]

Abstract. Reference update to adapt to the dynamism of the environment is one of the most challenging tasks in moving object detection for video surveillance. Various background modeling techniques have been proposed; however, most of them suffer from high computational cost and from difficulties in determining the appropriate locations and pixel values for updating the background. In this paper, we present a new algorithm which utilizes the three most recent successive frames to isolate moving edges for moving object detection. It does not require any background model, and is therefore computationally fast and applicable to real-time processing. We also introduce a segment based representation of edges instead of the traditional pixel based representation, which makes it possible to incorporate an efficient edge-matching algorithm to solve the edge localization problem. This provides robustness against random noise, illumination variation and quantization error. Experimental results of the proposed method are included to compare it with other standard methods frequently used in video surveillance.

Keywords: Video surveillance, reference independent, chamfer matching, distance image, motion detection.

1 Introduction

Automatic detection of moving objects is a challenging and essential task in video surveillance. It has many applications in diverse disciplines such as automatic video monitoring systems, intelligent transportation systems, airport security systems and so on. A detailed review of moving object detection algorithms can be found in [1] and [2].
Background subtraction based methods are the most common approaches used for moving object detection. In these methods, background modeling is an important and unavoidable step for accumulating illumination and other changes in the background scene for proper detection [3]. However, most background modeling methods are computationally complex and time-consuming for real-time processing [4]. Moreover, they often suffer from poor performance due to the lack of compensation for the dynamism of the background scene [5].

* Corresponding author.

Edge based methods are robust against illumination change. In [6] and [7], edge based methods are proposed for moving object detection which utilize double edge maps. In [6], one edge map is generated from the difference image of the background and
current frame, I_n. Another edge map is generated from the difference image of I_n and I_{n+1}. Finally, moving edge points are detected by applying a logical OR operation on these two edge maps. However, due to illumination change and random noise in the background scene [6], false edges may appear in the first edge map and hence cause false detections in the final result. In [7], the first edge map is computed from the difference image of I_{n-1} and I_n, and similarly the second map is obtained from I_n and I_{n+1}. Finally, the moving edges of I_n are extracted by applying a logical AND operation on these two edge maps. However, because of noise and illumination change, the edge pixels of an edge map may be slightly displaced compared to the previous one. Exact matching through the AND operation therefore extracts scattered edge pixels, which fail to represent the reliable shape of moving objects. Moreover, pixel based processing for moving edge detection is not feasible in terms of computation. A pseudo-gradient based moving edge extraction method is proposed in [8]. Though this method is computationally fast, its background is not updated to handle the situation when a moving object stops in the scene; the stopped object is then continuously detected as a moving object. As no background update is adopted, the method is also not very robust against illumination change. Additionally, it suffers from scattered edge pixels of moving objects.
Fig. 1. Difference between pixel based and segment based matching. (a) Edge image at time t; (b) Edge image of the same scene at time t+1; (c) Result obtained by pixel based matching; (d) Result obtained by segment based matching.
Considering the above-mentioned problems, we present an edge segment based approach which utilizes three successive frames for moving object detection. In our proposed method, two difference image edge maps computed from three successive frames are utilized to extract moving edges, instead of an edge differencing approach. This makes the system robust against random noise as well as illumination variation. Since the proposed method does not require any background model for detection, it is computationally fast and efficient. Moreover, the use of the most recent frames, embodying the updated information, helps to reduce false detection effectively. In our proposed method, the difference image edge maps are represented as segments instead of pixels using an efficiently designed edge class [9]. An edge segment consists of a number of consecutive edge pixels. This representation allows decisions in matching or any other operation to be made on an entire edge segment rather than on an individual pixel. It provides the following benefits:
8/7/2019 Reference Independent Moving Object Detection An
Reference Independent Moving Object Detection: An Edge Segment Based Approach 503
a) It makes it possible to incorporate an efficient and flexible edge-matching algorithm [10] into our proposed method, which reduces the computation time significantly.
b) It enables our method to make a decision about a complete edge segment at a time, instead of an individual edge pixel, when keeping or discarding it from the edge list during matching.

Fig. 1 illustrates the advantages of segment based matching over pixel based matching. Here, pixel based matching missed 20% of the edge pixels due to the variation of edge localization in different frames. Segment based matching does not suffer from this problem as it considers all the points of a segment together. As a result, it reduces the occurrence of scattered edge pixels in the detection result.
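The contrast Fig. 1 illustrates can be sketched numerically. In this hypothetical example, the same ten-pixel edge segment appears one row lower in the next frame (a typical localization error); pixel based AND matching keeps nothing, while a segment level decision based on the average distance to the other segment keeps the whole segment. The coordinates and tolerance value below are illustrative, not taken from the paper.

```python
import numpy as np

# The same 10-pixel horizontal edge segment, displaced by one row in
# the next frame (a typical edge localization error between frames).
seg_t  = [(5, c) for c in range(5, 15)]   # segment at time t
seg_t1 = [(6, c) for c in range(5, 15)]   # same segment at time t+1

# Pixel based matching: keep only exactly coinciding pixels (logical AND).
pixel_match = set(seg_t) & set(seg_t1)    # empty: no pixel coincides exactly

# Segment based matching: keep or discard the WHOLE segment based on its
# average distance to the nearest pixel of the other segment.
pts_t1 = np.array(seg_t1)
avg_dist = np.mean([np.abs(pts_t1 - p).sum(axis=1).min() for p in seg_t])
segment_match = seg_t if avg_dist <= 1.5 else []   # tolerance is illustrative

print(len(pixel_match), len(segment_match))   # 0 vs 10
```

The one-pixel displacement makes exact pixel matching fail completely, whereas the segment level decision tolerates it, which is exactly the scattered-edge problem described above.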
Since moving object segmentation is a problem separate from detection in video surveillance, we have not considered it in our proposed method. However, because of the segment based representation of edges, our proposed method is able to extract reliable shape information of moving objects. By incorporating this shape information into an image segmentation algorithm, it is possible to segment out moving objects from the current image efficiently. The segment based representation also makes it possible to attach knowledge to edge segments, which can facilitate higher level processing in video surveillance such as tracking, recognition, human activity recognition and so on.
2 Description of the Proposed Method
The overall procedure of the proposed method is illustrated in Fig. 2. A detailed description of our method is given in the following subsections.
Fig. 2. Flow diagram of the proposed method: the input frames I_{n-1}, I_n and I_{n+1} yield the difference images D_{n-1} and D_n, from which the difference image edge maps DE_{n-1} and DE_n are extracted.
2.1 Computation of Difference Image Edge Maps
The simple edge differencing approach suffers considerably from random noise. This is due to the fact that the appearance of noise in one frame differs from that in its successive frames, which shifts edge locations to some extent between successive frames. Hence, instead of using the simple edge differencing approach, we utilize difference images for moving edge detection. Edges extracted from a difference image are robust to noise and comparatively stable, and hence partially solve the edge localization problem. Two difference image edge maps are utilized in our proposed method for moving object detection. To compute them, we compute two difference images, D_{n-1} and D_n, utilizing three successive frames I_{n-1}, I_n and I_{n+1} as follows:
D_{n-1} = |I_n - I_{n-1}|,   D_n = |I_{n+1} - I_n|    (1)
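As a minimal NumPy sketch of this step (assuming absolute differencing of 8-bit grayscale frames; the function name is ours, not the authors'):

```python
import numpy as np

def difference_images(i_prev, i_cur, i_next):
    """Difference images of three successive frames:
    D_{n-1} from I_{n-1} and I_n, and D_n from I_n and I_{n+1}.
    Absolute differencing of 8-bit grayscale frames is assumed."""
    # Widen to a signed type so the subtraction cannot wrap around.
    f0, f1, f2 = (f.astype(np.int16) for f in (i_prev, i_cur, i_next))
    d_prev = np.abs(f1 - f0).astype(np.uint8)   # D_{n-1}
    d_cur  = np.abs(f2 - f1).astype(np.uint8)   # D_n
    return d_prev, d_cur
```

Applying a Canny detector [11] to D_{n-1} and D_n then yields the edge maps DE_{n-1} and DE_n described next.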
After computing D_{n-1} and D_n, the Canny edge detection algorithm [11] is applied to generate the difference image edge maps DE_{n-1} and DE_n, respectively. In the difference image edge maps, edge pixels are grouped together and represented as segments using an efficiently designed edge class [9]. To make the edge segments more suitable for the moving edge detection procedure, we maintain the following constraints during edge segment generation:

a) If an edge segment contains multiple branches, the branches are broken into multiple edge segments at the branching point.
b) If an edge segment bends more than a certain limit at an edge point, the edge is broken into two edge segments at that particular position.
c) If the length of a particular edge segment exceeds a certain limit, the edge segment is divided into a number of small edge segments of the permitted length.
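Constraints (b) and (c) can be sketched on a single ordered pixel chain as follows; constraint (a), branch splitting, is assumed to have happened during edge linking, and the parameter values and function name are illustrative rather than the paper's:

```python
import math

def split_segment(chain, max_len=30, max_bend_deg=45, step=3):
    """Split one ordered edge-pixel chain of (row, col) points:
    break where the local direction turns more than max_bend_deg
    (constraint b) and cap segment length at max_len (constraint c).
    Parameter values are illustrative."""
    segments, start = [], 0
    for i in range(step, len(chain) - step):
        (y0, x0), (y1, x1), (y2, x2) = chain[i - step], chain[i], chain[i + step]
        a1 = math.atan2(y1 - y0, x1 - x0)        # incoming direction
        a2 = math.atan2(y2 - y1, x2 - x1)        # outgoing direction
        turn = abs(math.degrees(a2 - a1))
        turn = min(turn, 360 - turn)             # wrap angle to [0, 180]
        # Require a few pixels since the last cut to avoid tiny fragments.
        if i - start >= step and (turn > max_bend_deg or i - start >= max_len):
            segments.append(chain[start:i])
            start = i
    segments.append(chain[start:])
    return segments
```

For example, a 70-pixel straight chain is cut into pieces of at most 30 pixels, while an L-shaped chain is cut in two near its corner.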
The segment based representation helps the proposed system to use the geometric shape of edges during matching for moving edge detection. It also helps to extract solid edge segments of moving objects instead of scattered or very small edges. No edge pixel is processed independently; rather, all the edge pixels in an edge segment are processed together during matching or any other operation. Fig. 3(d) shows the difference image edge map generated from Fig. 3(a) and Fig. 3(b). Similarly, the edge map in Fig. 3(e) is obtained from Fig. 3(b) and Fig. 3(c).
Fig. 3. DT image generation and matching. (a) I_{n-1}; (b) I_n; (c) I_{n+1}; (d) DE_{n-1}; (e) DE_n; (f) DT image of DE_{n-1}; (g) Edge matching using the DT image. Here, Matching_confidence = 0.91287.
2.2 Moving Object Detection
The edge maps DE_{n-1} and DE_n are used in this step to extract moving edges for moving object detection in the video sequence. DE_{n-1} contains the moving edges of I_{n-1} and I_n, and DE_n contains the moving edges of I_n and I_{n+1}. Thus, the moving edges of I_n are common to both edge maps. Therefore, to find the moving edges, we superimpose one edge map on the other and compute the matching between them. If two edge segments are of almost similar size and shape, and situated at almost the same positions in the two edge maps, they are considered moving edges of I_n. However, noise may slightly change these parameters as well. Hence, instead of exact matching, introducing some flexibility reduces the localization problem and yields better results. Considering these issues, we have adopted an efficient edge-matching algorithm known as chamfer ¾ matching [10]. According to the chamfer matching procedure, a distance transform (DT) image is generated from one difference image edge map; edge segments from the other are then superimposed on it to compute a matching confidence. If the matching confidence is less than a certain threshold, the edge segment is enlisted as a moving edge. This threshold provides the flexibility during matching. In our method, we utilize DE_{n-1} to generate the DT image, and thereafter the edge segments of DE_n are superimposed on it to compute the matching confidence.
To compute the DT image, we use an integer approximation of the exact Euclidean distance to minimize the computation time [10]. Each pixel in the DT image holds the distance to the nearest edge pixel in the edge map. In DT image generation, a two-pass algorithm calculates the distance values sequentially. Initially the edge pixels are set to zero and all other positions are set to infinity. The first pass (forward) modifies the distance image as follows:
v_{i,j} = min(v_{i-1,j-1} + 4, v_{i-1,j} + 3, v_{i-1,j+1} + 4, v_{i,j-1} + 3, v_{i,j})    (2)

and thereafter, the second pass (backward) works as follows:

v_{i,j} = min(v_{i,j}, v_{i,j+1} + 3, v_{i+1,j-1} + 4, v_{i+1,j} + 3, v_{i+1,j+1} + 4)    (3)
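The two passes, together with the matching confidence measure defined later in this section, can be sketched directly (an unoptimized version; segments are assumed to be lists of (row, col) pixel coordinates, and the variable names are ours):

```python
import numpy as np

def chamfer_dt(edge_map):
    """Two-pass chamfer 3/4 distance transform of a boolean edge map:
    edge pixels start at 0, all others at 'infinity'."""
    big = 10 ** 6                          # stand-in for infinity
    v = np.where(edge_map, 0, big).astype(np.int64)
    h, w = v.shape
    for i in range(h):                     # forward pass (Eq. 2)
        for j in range(w):
            if i > 0:
                v[i, j] = min(v[i, j], v[i - 1, j] + 3)
                if j > 0:     v[i, j] = min(v[i, j], v[i - 1, j - 1] + 4)
                if j < w - 1: v[i, j] = min(v[i, j], v[i - 1, j + 1] + 4)
            if j > 0:
                v[i, j] = min(v[i, j], v[i, j - 1] + 3)
    for i in range(h - 1, -1, -1):         # backward pass (Eq. 3)
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                v[i, j] = min(v[i, j], v[i + 1, j] + 3)
                if j > 0:     v[i, j] = min(v[i, j], v[i + 1, j - 1] + 4)
                if j < w - 1: v[i, j] = min(v[i, j], v[i + 1, j + 1] + 4)
            if j < w - 1:
                v[i, j] = min(v[i, j], v[i, j + 1] + 3)
    return v

def matching_confidence(segment, dt):
    """RMS of the DT values under the segment's pixels, divided by 3
    to compensate for the unit distance 3 of the chamfer transform."""
    d = np.array([dt[y, x] for (y, x) in segment], dtype=float)
    return np.sqrt(np.mean(d ** 2)) / 3.0
```

A segment of DE_n would then be kept as a moving edge when its matching confidence against the DT image of DE_{n-1} does not exceed the threshold τ (the paper uses τ = 1.3).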
where v_{i,j} is the distance value at pixel position (i, j). Fig. 3(f) illustrates a DT image computed from the difference image edge map shown in Fig. 3(d). In Fig. 3(f), the distance values of the DT image are normalized to the range 0 to 255 for better visualization.
During matching, an edge segment of DE_n is superimposed on the DT image of DE_{n-1} to accumulate the corresponding distance values. A normalized average of these values (root mean square) is the measure of matching confidence of the edge segment in DE_n, as shown in the following equation:
Matching_confidence[l] = (1/3) * sqrt( (1/k) * sum_{i=1..k} dist(l_i)^2 )    (4)
where k is the number of edge points in the lth edge segment of DE_n, and dist(l_i) is the distance value at position i of edge segment l. The root mean square is divided by 3 to compensate for the unit distance 3 in the chamfer ¾ distance transformation. Edge segments are removed from DE_n if their matching confidence is comparatively high. The existence of similar edge segments in DE_{n-1} and DE_n produces a low Matching_confidence value for such a segment. We allow some flexibility by introducing a disparity threshold τ; empirically, we set τ = 1.3 in our implementation. We consider that a match occurs between edge segments if Matching_confidence[l] ≤ τ. The corresponding
their method, the difference between the background and the current frame incorporates most of the noise pixels. Fig. 5(f) shows the result of applying the method proposed by Dailey and Cathey [7]. The result obtained from this method is quite robust against illumination changes, as it uses the most recent successive frame differences for moving edge detection. However, it suffers from scattered edge pixels because it uses a logical AND operation on the difference image edge maps for matching. Illumination variation and quantization error induce an edge localization problem in the difference image edge maps. As a result, some portions of the same edge segment are matched and some are not, producing scattered edges in the final detection result. Our method does not experience this problem because it applies flexible matching between difference image edge maps containing edge segments. The result obtained from our proposed method is shown in Fig. 5(g).
Fig. 5. (a) Background; (b) I_172; (c) I_173; (d) I_174; (e) Detected moving edges of I_173 using the Kim and Hwang method; (f) Detected moving edges of I_173 using the Dailey and Cathey method; (g) Detected moving edges of I_173 using our proposed method.
Table 1. Mean processing time (in ms) for each module

Processing step                                                Mean time (ms)
Computation of difference images                                     5
Edge map generation from difference images                          39
DT image generation                                                 11
Computation of matching confidence and moving edge detection        19
Total time required                                                 74
To put the computational efficiency of the algorithm in perspective, with the processing power and the processing steps described above, the execution time for moving object detection on grayscale images was approximately 74 ms per frame. The processing speed was therefore around 13 frames per second (1000/74 ≈ 13.5). With computers of higher CPU speed, available today and in the future, this frame rate can be improved further. Table 1 lists the approximate times required to execute the different modules of the proposed method.
4 Conclusions and Future Works
This paper presents a robust method for moving object detection which does not require any background model. The representation of edges as segments helps to reduce the effect of noise and makes it possible to incorporate a fast and flexible method for edge matching. The proposed method is therefore computationally efficient and suitable for real-time automated video surveillance systems. Our method is robust against illumination changes as it works on the most recent successive frames and utilizes edge information for moving object detection. However, the presented method is not very effective at detecting objects with very slow movement, as it uses three consecutive frames instead of a background model. The moving edge segments extracted by our proposed method represent very accurate shape information of the moving object, and these edge segments can be utilized for moving object segmentation. Currently we are pursuing moving object segmentation from moving edges utilizing the watershed algorithm. As the segment based representation provides shape information of moving objects, the proposed method can easily be extended to tracking, recognition and classification of moving objects. Experimental results and comparative studies with respect to other standard methods demonstrate that the proposed method is effective and encouraging for the moving object detection problem.
LNCS, vol. 3991, pp. 563–570. Springer, Heidelberg (2006)
5. Gutchess, D., Trajkovics, M., Cohen-Solal, E., Lyons, D., Jain, A.K.: A Background Model Initialization Algorithm for Video Surveillance. In: Proc. of IEEE Intl. Conf. on Computer Vision, vol. 1, pp. 733–740 (2001)
6. Kim, C., Hwang, J.N.: Fast and Automatic Video Object Segmentation and Tracking for Content-based Applications. IEEE Trans. on Circuits and Systems for Video Tech. 12, 122–129 (2002)
7. Dailey, D.J., Cathey, F.W., Pumrin, S.: An Algorithm to Estimate Mean Traffic Speed Using Un-calibrated Cameras. IEEE Trans. on Intelligent Transportation Sys. 1(2), 98–107 (2000)
8. Makarov, A., Vesin, J.M., Kunt, M.: Intrusion Detection Using Extraction of Moving Edges. In: Proc. of International Conf. on Pattern Recognition, vol. 1, pp. 804–807 (1994)
9. Ahn, K.O., Hwang, H.J., Chae, O.S.: Design and Implementation of Edge Class for Image Analysis Algorithm Development based on Standard Edge. In: Proc. of KISS Autumn