Fire Detection on Unconstrained Videos Using Color-Aware Spatial Modeling and Motion Flow
Letricia P. S. Avalhais, Jose Rodrigues-Jr., Agma J. M. Traina
University of Sao Paulo, Institute of Mathematics and Computer Science, Sao Carlos, Brazil
ICTAI 2016
Emergency context
Develop solutions to support emergency command centers using intelligent analysis of data provided by crowdsourcing.
OUTLINE
01 Introduction & Background
02 SPATFIRE Method
03 Experiments & Results
04 Conclusions
Automatic detection of fire on videos
‣ Motivation
o Take advantage of the many mobile devices with cameras, such as smartphones and tablets
o Low-cost and flexible alternative to fixed-location sensors
o Fast response to incidents such as fire and explosions
Goal
‣ Develop an effective solution to detect fire on unconstrained videos, focused on:
1. High coverage (recall)
2. Real-time response
Automatic detection of fire on video
‣ Methods from the literature
Static information only:
o Rely mainly on color-based models from different color spaces: RGB, YCbCr, CIE Lab, and HSV
o Take advantage of the yellow-reddish appearance of fire
o May also combine shape or texture
Automatic detection of fire on video
‣ Methods from the literature (cont.)
o Color-only models suffer high false positive rates due to ambiguity with non-fire objects presenting the same color
o Alternative: incorporate dynamic features
Automatic detection of fire on video
‣ Methods from the literature
Dynamic information:
o Generally combined with color models
o Temporal content: flickering patterns, background subtraction, shape variation
o Better performance than the works that use only static information
Automatic detection of fire on video
‣ Methods from the literature (cont.)
o Common assumptions: stationary cameras, controlled lighting conditions, short cropped video segments
o Such assumptions do not fit the requirements of a crowdsourcing emergency system
SPATFIRE
SPAtio-Temporal segmentation of FIRe Events
‣ MAIN CONTRIBUTIONS
1. FPD - Fire-like Pixel Detector: a color model for spatial segmentation, based on the HSV color space, specifically tailored for the detection of fire-like regions
2. Motion compensation: an efficient technique to compensate for the camera motion observed in videos acquired with non-stationary cameras
3. Event segmentation: temporal segmentation of fire events in adverse, uncontrolled situations
SPATFIRE
OVERVIEW [pipeline diagram; output: fire segments]
Spatial segmentation
FPD Color Model (Eqs. 1-2)
[figure: visualization of the fire pixels in the HSV color space]
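The FPD color model itself is defined by Eqs. (1)-(2) of the paper, which do not survive in this transcript. As a rough illustration of the general idea (an HSV rule keeping bright, saturated, yellow-reddish pixels), here is a minimal sketch with hypothetical thresholds; the actual FPD conditions differ:

```python
import colorsys

def is_fire_like(r, g, b):
    """Hypothetical HSV rule for yellow-reddish, bright, saturated pixels.

    The thresholds below are illustrative only; the FPD model in the
    paper defines its own conditions over the HSV components.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue_deg = h * 360.0
    # Keep hues in the red-to-yellow band, wrapping around 0 degrees.
    in_yellow_red = hue_deg <= 60.0 or hue_deg >= 350.0
    return in_yellow_red and s >= 0.4 and v >= 0.5

print(is_fire_like(255, 120, 0))    # True  (saturated orange)
print(is_fire_like(128, 128, 128))  # False (gray: zero saturation)
```

Any real fire-pixel detector would tune such thresholds on labeled data, which is precisely what a learned color model like FPD replaces.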
Motion estimation
OPTICAL FLOW
1. SPARSE FLOW ESTIMATION
‣ Match corner points from two consecutive frames on the regions of interest
‣ Harris corner detection
‣ Lucas-Kanade optical flow
2. DENSE FLOW ESTIMATION
‣ Match points sampled at uniform intervals in a grid
‣ Uses the "background" information
‣ Gunnar Farneback's optical flow
Non-stationary cameras
‣ Usually add an extra motion component from the camera movement.
‣ Why is this a problem?
Non-stationary cameras (cont.)
[figures: sparse flow from the entire frame vs. sparse flow from the interest region]
Block-based motion compensation
BLOCK DOMINANT ORIENTATION: divide the frame into non-overlapping regions of 32 x 32 pixels. For each block, the mean local flow is: [equation]
ESTIMATE THE BACKGROUND MOTION FLOW: calculate the average of the orientations of the block dominant flows at the peak of the histogram and define the approximated global background flow as: [equation]
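The two equations on this slide are lost in the transcript. The sketch below implements one plausible reading of the text: average the flow inside each 32x32 block, histogram the block orientations, and average the flows of the blocks falling in the peak bin. Function and variable names are illustrative, not the paper's:

```python
import numpy as np

def background_flow(flow, block=32, bins=16):
    """Approximate global background flow from a dense flow field.

    flow: (H, W, 2) array of per-pixel (dx, dy) vectors.
    One reading of the slide: mean flow per non-overlapping block ->
    histogram of block orientations -> average the flows of the blocks
    in the peak bin.
    """
    h, w, _ = flow.shape
    means = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            means.append(flow[y:y + block, x:x + block]
                         .reshape(-1, 2).mean(axis=0))
    means = np.array(means)
    angles = np.arctan2(means[:, 1], means[:, 0])  # dominant orientation
    hist, edges = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    peak = hist.argmax()
    in_peak = (angles >= edges[peak]) & (angles <= edges[peak + 1])
    return means[in_peak].mean(axis=0)  # approximated background flow

# A uniform flow field of (1, 0) yields a background flow of (1, 0).
field = np.zeros((64, 64, 2))
field[..., 0] = 1.0
print(background_flow(field))  # [1. 0.]
```

Using the histogram peak rather than a plain average makes the estimate robust to the minority of blocks dominated by the fire region's own motion.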
Feature vector representation and classification
The representation and classification are described in the following steps:
1. Calculate the new compensated set of flows so that, for each flow, the corresponding new flow is given by: [equation]
2. Calculate the histogram of oriented optical flow (32 bins) from the compensated set.
3. Use an SVM classifier to determine the class (fire, not fire) using the histogram as its input.
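The compensation equation is not legible in the transcript; the sketch below assumes it means subtracting the estimated background flow from each flow vector, followed by a magnitude-weighted 32-bin orientation histogram. The SVM stage (step 3) is only indicated in a comment; all names here are illustrative:

```python
import numpy as np

def hoof_features(flows, bg_flow, bins=32):
    """Compensated histogram of oriented optical flow (illustrative).

    flows: (N, 2) sparse flow vectors from the region of interest.
    bg_flow: (2,) estimated global background flow.
    """
    comp = flows - bg_flow  # step 1 (assumed): remove background motion
    angles = np.arctan2(comp[:, 1], comp[:, 0])
    mags = np.linalg.norm(comp, axis=1)
    # Step 2: 32-bin orientation histogram, magnitude-weighted and
    # normalized; this vector is what the SVM (fire / not fire) consumes.
    hist, _ = np.histogram(angles, bins=bins,
                           range=(-np.pi, np.pi), weights=mags)
    total = hist.sum()
    return hist / total if total > 0 else hist

flows = np.array([[2.0, 1.0], [1.0, 2.0], [1.0, 1.0]])
feat = hoof_features(flows, bg_flow=np.array([1.0, 1.0]))
print(feat.shape, feat.sum())  # (32,) 1.0
```

Normalizing the histogram makes the feature invariant to the number of tracked points, so clips with few corners and clips with many produce comparable vectors.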
Experiments
‣ Evaluating the FPD color model
o How accurately does the FPD model select fire pixels?