EFFICIENT MOVING OBJECT DETECTION BY
USING DECOLOR TECHNIQUE
A PROJECT REPORT
Submitted by
PL.MUTHUKARUPPAN - 105842132503
V.PANDI SELVAM - 105842132030
V.VIGNESH - 105842132046
in partial fulfillment for the award of the degree
of
BACHELOR OF ENGINEERING
in
COMPUTER SCIENCE & ENGINEERING
MADURAI INSTITUTE OF ENGINEERING AND TECHNOLOGY,
SIVAGANGAI.
ANNA UNIVERSITY: CHENNAI 600 025
APRIL 2014
ANNA UNIVERSITY: CHENNAI 600 025
BONAFIDE CERTIFICATE
Certified that this project report "EFFICIENT MOVING OBJECT DETECTION
BY USING DECOLOR TECHNIQUE" is the bonafide work of
"PL.MUTHUKARUPPAN (105842132503), V.PANDI SELVAM (105842132030),
V.VIGNESH (105842132046)", who carried out the project work under my supervision.
SIGNATURE
Mrs. A. Padma, M.E., Ph.D.
HEAD OF THE DEPARTMENT
Department of CSE,
Madurai Institute of Engg & Tech
Pottapalayam
Sivagangai-630611
SIGNATURE
Mr. R. Rubesh Selva Kumar, M.E.
SUPERVISOR
ASSISTANT PROFESSOR
Department of CSE,
Madurai Institute of Engg & Tech
Pottapalayam
Sivagangai-630611
Submitted for the Project Viva-Voce held on _____________
INTERNAL EXAMINER EXTERNAL EXAMINER
TABLE OF CONTENTS

ABSTRACT
LIST OF FIGURES
LIST OF ABBREVIATIONS
1 INTRODUCTION
   1.1 OVERVIEW OF THE PROJECT
   1.2 EXISTING SYSTEM
   1.3 PROPOSED SYSTEM
   1.4 SYSTEM SPECIFICATION
      1.4.1 Hardware Specification
      1.4.2 Software Specification
   1.5 SOFTWARE DESCRIPTION
      1.5.1 Introduction to JSP
      1.5.2 Introduction to JAVA
      1.5.3 Introduction to J2EE
      1.5.4 Introduction to Servlet
      1.5.5 Feasibility Study
2 LITERATURE REVIEW
   2.1 SYSTEM ARCHITECTURE
   2.2 MODULE DESCRIPTION
      2.2.1 Video Capturing
      2.2.2 Moving Object Detection
      2.2.3 Motion Segmentation
      2.2.4 SMS Alert System
   2.3 INDEX TERMS
      2.3.1 Background Subtraction
      2.3.2 Low Rank Representation
   2.4 SYSTEM DESIGN DIAGRAM
      2.4.1 Data Flow Diagram
      2.4.2 UML Diagram
   2.5 SYSTEM TESTING
   2.6 SOURCE CODE
   2.7 SCREEN SHOTS
3 CONCLUSION
   3.1 Future Enhancement
REFERENCES
1. INTRODUCTION
1.1 OVERVIEW OF THE PROJECT:
Automated video analysis is important for many vision applications,
such as surveillance, traffic monitoring, augmented reality, and vehicle
navigation. As pointed out in the literature, there are three key steps for
automated video analysis: object detection, object tracking, and behavior
recognition. As the first step, object detection aims to locate and segment
interesting objects in a video. Then, such objects can be tracked from frame
to frame, and the tracks can be analyzed to recognize object behavior. Thus,
object detection plays a critical role in practical applications.
Object detection is usually achieved by object detectors or background
subtraction. An object detector is often a classifier that scans the image with a
sliding window and labels each subimage defined by the window as either
object or background. Generally, the classifier is built by offline learning on
separate datasets or by online learning initialized with a manually labeled frame
at the start of a video. Alternatively, background subtraction compares images
with a background model and detects the changes as objects. It usually assumes
that no object appears in the images when the background model is built. Such
requirements of training examples for object or background modeling actually
limit the applicability of the above-mentioned methods in automated video
analysis.
Another category of object detection methods that can avoid training
phases is motion-based methods, which only use motion information to
separate objects from the background. The problem can be rephrased as follows:
Given a sequence of images in which foreground objects are present and
moving differently from the background, can we separate the objects from the
background automatically? A typical example is a walking lady who is
always present in the scene and recorded by a handheld camera. The goal is to take the
image sequence as input and directly output a mask sequence of the walking
lady.
The most natural way to perform motion-based object detection is to classify
pixels according to motion patterns, which is usually named motion
segmentation. These approaches achieve both segmentation and optical flow
computation accurately, and they can work in the presence of large camera
motion. However, they assume rigid motion or smooth motion in the respective
regions, which is not generally true in practice. In practice, the foreground
motion can be very complicated, with nonrigid shape changes. Also, the
background may be complex, including illumination changes and varying
textures such as waving trees and sea waves. A video may include an operating
escalator, for example, which should nevertheless be regarded as background for
human tracking purposes.
An alternative motion-based approach is background estimation. Different from
background subtraction, it estimates a background model directly from the
testing sequence. Generally, it tries to seek temporal intervals inside which the
pixel intensity is unchanged and uses image data from such intervals for
background estimation. However, this approach also relies on the assumption of
a static background. Hence, it is difficult to handle scenarios with complex
background or moving cameras.
In this paper, we propose a novel algorithm for moving object detection
which falls into the category of motion-based methods. It solves the challenges
mentioned above in a unified framework named DEtecting Contiguous Outliers
in the Low rank Representation (DECOLOR). We assume that the underlying
background images are linearly correlated. Thus, the matrix composed of
vectorized video frames can be approximated by a low-rank matrix, and the
moving objects can be detected as outliers in this low-rank representation.
Formulating the problem as outlier detection allows us to get rid of many
assumptions on the behavior of foreground. The low-rank representation of
background makes it flexible to accommodate the global variations in the
background. Moreover, DECOLOR performs object detection and background
estimation simultaneously without training sequences. The main contributions
can be summarized as follows:
1. We propose a new formulation of outlier detection in the low-rank
representation in which the outlier support and the low-rank matrix are
estimated simultaneously. We establish the link between our model and other
relevant models in the framework of Robust Principal Component Analysis
(RPCA). Unlike other formulations of RPCA, we model the outlier
support explicitly. DECOLOR can be interpreted as ℓ0-penalty regularized
RPCA, which is a more faithful model for the problem of moving object
segmentation. Following the novel formulation, an effective and efficient
algorithm is developed to solve the problem. We demonstrate that, although the
energy is nonconvex, DECOLOR achieves better accuracy in terms of both
object detection and background estimation compared against the state-of-the-art
algorithm of RPCA.
2. In other models of RPCA, no prior knowledge on the spatial
distribution of outliers has been considered. In real videos, the foreground
objects usually form small clusters. Thus, contiguous regions should be preferred
in detection. Since the outlier support is modeled explicitly in our
formulation, we can naturally incorporate such a contiguity prior using Markov
Random Fields (MRFs).
3. We use a parametric motion model to compensate for camera motion.
The compensation of camera motion is integrated into our unified framework
and computed in a batch manner for all frames during segmentation and
background estimation.
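To make this formulation concrete, the energy that DECOLOR minimizes can be sketched as follows. This is a simplified rendering of the published formulation, with D the matrix of vectorized frames, B the low-rank background, S the binary outlier support, and A the node-edge incidence matrix of the MRF graph:

\min_{B,\,S}\;\frac{1}{2}\sum_{(i,j)\,:\,S_{ij}=0}\bigl(D_{ij}-B_{ij}\bigr)^{2}
\;+\;\alpha\,\operatorname{rank}(B)\;+\;\beta\sum_{i,j}S_{ij}
\;+\;\gamma\,\bigl\lVert A\,\operatorname{vec}(S)\bigr\rVert_{1}

The first term fits the background only at pixels not marked as outliers, the rank term enforces the linearly correlated background, the beta term is the ℓ0 penalty on the outlier support, and the last term encodes the MRF contiguity prior; camera motion compensation enters as a parametric transformation of the frames inside the first term.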
1.4 SYSTEM SPECIFICATION:
1.4.1 HARDWARE SPECIFICATION:
SYSTEM : PENTIUM IV 2.5GHz
HARD DISK : 40GB
MONITOR : 15" VGA COLOUR
MODEM : SERIAL PORT GSM MODEM
CAMERA : 1.3 megapixel
RAM : 256 MB
KEYBOARD : 110 keys enhanced
1.4.2 SOFTWARE SPECIFICATION:
OPERATING SYSTEM : WINDOWS XP,7
FRONT END : NETBEANS IDE
BACK END : MICROSOFT ACCESS
CODING LANGUAGE : JAVA 1.7, JMF, JSP
SERVER : WEB LOGIC SERVER
1.2 EXISTING SYSTEM:
In the existing system, still images captured by a web camera are
given as input. An SVM classifier is used, and a comparison is made between
the background image and the foreground image.
Disadvantages:
Less efficient.
Only still images can be compared.
Limited computation capability while monitoring.
Does not keep track of previous surveillance operations.
1.3 PROPOSED SYSTEM
The proposed system presents moving object detection by Detecting
Contiguous Outliers in the Low-Rank Representation (DECOLOR), which is
used for efficient object detection. The DECOLOR algorithm takes video,
rather than still images, as input.
Advantages:
Very efficient
Low memory usage
Less power consumption
Low maintenance
1.5.1 JSP:
Java Server Pages (JSP) is a Java technology that allows software
developers to dynamically generate HTML, XML or other types of documents
in response to a Web client request. The technology allows Java code and
certain pre-defined actions to be embedded into static content.
The JSP syntax adds additional XML-like tags, called JSP actions, to be
used to invoke built-in functionality. Additionally, the technology allows for the
creation of JSP tag libraries that act as extensions to the standard HTML or
XML tags. Tag libraries provide a platform independent way of extending the
capabilities of a Web server.
JSPs are compiled into Java Servlets by a JSP compiler. A JSP compiler
may generate a Servlet in Java code that is then compiled by the Java compiler,
or it may generate byte code for the Servlet directly. JSPs can also be
interpreted on the fly, reducing the time taken to reload changes.
Java Server Pages (JSP) technology provides a simplified, fast way to
create dynamic web content and enables rapid development of web-based
applications.
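For illustration only (this page is not part of the project's source code), a minimal JSP that mixes static HTML with embedded Java might look as follows; the request parameter name is an assumption:

hello.jsp:
<html>
  <body>
    <% String visitor = request.getParameter("name"); %>
    <p>Hello, <%= (visitor == null) ? "world" : visitor %>!</p>
    <p>Served on <%= new java.util.Date() %></p>
  </body>
</html>

The scriptlet (<% ... %>) embeds Java statements, the expression tags (<%= ... %>) write their results directly into the generated HTML, and the container translates the whole page into a Servlet before executing it.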
Architecture of JSP:
[Figure: JSP request-processing architecture]
The Advantages of JSP:
Active Server Pages (ASP). ASP is a similar technology from Microsoft.
The advantages of JSP are twofold. First, the dynamic part is written in
Java, not Visual Basic or other MS-specific language, so it is more
powerful and easier to use. Second, it is portable to other operating
systems and non-Microsoft Web servers.
Pure Servlet. JSP doesn't give you anything that you couldn't in principle
do with a Servlet. But it is more convenient to write (and to modify!)
regular HTML than to have a zillion println statements that generate the
HTML. Plus, by separating the look from the content you can put
different people on different tasks: your Web page design experts can
build the HTML, leaving places for your Servlet programmers to insert
the dynamic content.
Server-Side Includes (SSI). SSI is a widely-supported technology for
including externally-defined pieces into a static Web page. JSP is better
because it lets you use servlets instead of a separate program to generate
that dynamic part. Besides, SSI is really only intended for simple
inclusions, not for "real" programs that use form data, make database
connections, and the like.
JavaScript. JavaScript can generate HTML dynamically on the client.
This is a useful capability, but only handles situations where the dynamic
information is based on the client's environment. With the exception of
cookies, HTTP and form submission data is not available to JavaScript.
And, since it runs on the client, JavaScript can't access server-side
resources like databases, catalogs, pricing information, and the like.
1.5.2 Java (programming language)
Java is a programming language originally developed by James
Gosling at Sun Microsystems (which is now a subsidiary of Oracle Corporation)
and released in 1995 as a core component of Sun Microsystems' Java platform.
The language derives much of its syntax from C and C++ but has a simpler
object model and fewer low-level facilities. Java applications are typically
compiled to byte code (class files) that can run on any Java Virtual Machine
(JVM) regardless of computer architecture. Java is general-purpose, concurrent,
class-based, and object-oriented, and is specifically designed to have as few
implementation dependencies as possible. It is intended to let application
developers "write once, run anywhere". Java is considered by many to be one of
the most influential programming languages of the 20th century, and it is widely
used from application software to web applications.
The original and reference implementation Java compilers, virtual
machines, and class libraries were developed by Sun from 1995. As of May
2007, in compliance with the specifications of the Java Community Process,
Sun relicensed most of their Java technologies under the GNU General Public
License. Others have also developed alternative implementations of these Sun
technologies, such as the GNU Compiler for Java and GNU Classpath.
1.5.3 J2EE:
A J2EE application can be a single J2EE module or a group of modules packaged
into an EAR file along with a J2EE application deployment descriptor. J2EE
applications are typically engineered to be distributed across multiple
computing tiers.
Enterprise applications can consist of the following:
EJB modules (packaged in JAR files);
Web modules (packaged in WAR files);
connector modules or resource adapters (packaged in RAR files);
Session Initiation Protocol (SIP) modules (packaged in SAR files);
application client modules;
Additional JAR files containing dependent classes or other components
required by the application;
Any combination of the above.
1.5.4 Servlet
Java Servlet technology provides Web developers with a simple,
consistent mechanism for extending the functionality of a Web server and for
accessing existing business systems. Servlets are server-side Java EE
components that generate responses (typically HTML pages) to requests
(typically HTTP requests) from clients. A Servlet can almost be thought of as an
applet that runs on the server side, without a face.
// Hello.java
import java.io.*;
import javax.servlet.*;

public class Hello extends GenericServlet
{
    public void service(ServletRequest request, ServletResponse response)
        throws ServletException, IOException
    {
        response.setContentType("text/html");
        final PrintWriter pw = response.getWriter();
        pw.println("Hello, world!");
        pw.close();
    }
}
The import statements direct the Java compiler to include all of the public
classes and interfaces from the java.io and javax.servlet packages in the
compilation.
The Hello class extends the GenericServlet class; the GenericServlet class
provides the interface for the server to forward requests to the servlet and
control the servlet's lifecycle.
The Hello class overrides the service(ServletRequest, ServletResponse) method
defined by the Servlet interface to provide the code for the service request
handler. The service() method is passed a ServletRequest object that contains
the request from the client and a ServletResponse object used to create the
response returned to the client. The service() method declares that it throws the
exceptions ServletException and IOException if a problem prevents it from
responding to the request.
The setContentType(String) method in the response object is called to set the
MIME content type of the returned data to "text/html". The getWriter()
method in the response returns a PrintWriter object that is used to write the data
that is sent to the client. The println(String) method is called to write the
"Hello, world!" string to the response and then the close() method is called to
close the print writer, which causes stream to be returned to the client.
Life Cycle OF Servlet:
The Servlet lifecycle consists of the following steps:
1. The Servlet class is loaded by the container during start-up.
2. The container calls the init() method. This method initializes the
Servlet and must be called before the servlet can service any requests.
In the entire life of a servlet, the init() method is called only once.
3. After initialization, the servlet can service client-requests. Each
request is serviced in its own separate thread. The container calls the
service() method of the servlet for every request. The service() method
determines the kind of request being made and dispatches it to an
appropriate method to handle the request. The developer of the servlet
must provide an implementation for these methods. If a request for a
method that is not implemented by the servlet is made, the method of
the parent class is called, typically resulting in an error being returned
to the requester.
4. Finally, the container calls the destroy() method, which takes the
servlet out of service. The destroy() method, like init(), is called only
once in the lifecycle of a servlet.
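As a sketch of these lifecycle hooks (not taken from the project's source; the class name and log messages are illustrative):

LifecycleDemoServlet.java:
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class LifecycleDemoServlet extends HttpServlet
{
    public void init() throws ServletException
    {
        // Step 2: called once by the container after the class is loaded.
        log("init: servlet ready to service requests");
    }

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException
    {
        // Step 3: HttpServlet.service() dispatches each GET request here;
        // every request runs in its own container-managed thread.
        resp.setContentType("text/plain");
        resp.getWriter().println("Serviced at " + new java.util.Date());
    }

    public void destroy()
    {
        // Step 4: called once when the servlet is taken out of service.
        log("destroy: servlet taken out of service");
    }
}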
1.5.5 Feasibility Study
A feasibility study is the test of a system proposal according to its
workability, impact on the organization, ability to meet user needs, and effective
use of resources. It focuses on the evaluation of the existing system and
procedures, the analysis of alternative candidate systems, and cost estimates.
Feasibility analysis was done to determine whether the system would be feasible.
The development of a computer-based system or product is often
plagued by scarce resources and tight delivery dates. A feasibility study helps the
analyst decide whether to proceed, amend, postpone, or cancel the project;
this is particularly important when the project is large, complex, and costly. Once the
analysis of the user requirements is complete, the system has to be checked for the
compatibility and feasibility of the software package that is aimed at. An
important outcome of the preliminary investigation is the determination that the
system requested is feasible.
Technical Feasibility:
The technology used can be developed with the current equipment
and has the technical capacity to hold the data required by the new system.
This technology supports the modern trends of technology.
It offers easily accessible, more secure technologies.
Technical feasibility considers the existing system and the extent to which it can
support the proposed addition. We can add new modules easily without affecting
the core program. Most parts run on the server using the concept of stored
procedures.
Operational Feasibility:
The proposed system can be easily implemented, as it is based on JSP
(Java) and HTML coding. The database is created with MySQL server, which is
secure and easy to handle. The resources required to implement and install it
are available, and the personnel of the organization already have enough
exposure to computers. So the project is operationally feasible.
Economical Feasibility:
Economic analysis is the most frequently used method for evaluating the
effectiveness of a new system. More commonly known as cost/benefit analysis,
the procedure is to determine the benefits and savings that are expected from a
candidate system and compare them with the costs. If the benefits outweigh the
costs, then the decision is made to design and implement the system. An
entrepreneur must accurately weigh the costs versus the benefits before taking
action. As the expected benefits of this system outweigh its costs, it is
economically a good project.
2 LITERATURE REVIEW:
1) A General Framework for Object Detection
This paper presents a general trainable framework for object detection in
static images of cluttered scenes. The detection technique we develop is based
on a wavelet representation of an object class derived from a statistical analysis
of the class instances. By learning an object class in terms of a subset of an over
complete dictionary of wavelet basis functions, we derive a compact
representation of an object class which is used as an input to a support vector
machine classifier.
2) Wallflower: Principles and Practice of Background Maintenance
Background maintenance, though frequently used for video surveillance
applications, is often implemented ad hoc with little thought given to the
formulation of realistic, yet useful goals. The authors present Wallflower, a
system that attempts to solve many of the common problems with background
maintenance.
3) Motion-Based Background Subtraction using Adaptive Kernel Density
Estimation
In this paper, the authors propose a technique for modelling dynamic
scenes for the purpose of background/foreground differentiation and change
detection. The method relies on the utilization of optical flow as a feature for
change detection. In order to properly utilize the uncertainties in the features,
they propose a novel kernel-based multivariate density estimation technique that
adapts the bandwidth according to the uncertainties in the test and sample
measurements.
4) Face Recognition With Contiguous Occlusion Using Markov Random
Fields
In this paper, we propose a more principled and general method for face
recognition with contiguous occlusion. We do not assume any explicit prior
knowledge about the location, size, shape, colour, or number of the occluded
regions; the only prior information we have about the occlusion is that the
corrupted pixels are likely to be adjacent to each other in the image plane.
5) Robust Object Tracking with Online Multiple Instance Learning
In this paper, we address the problem of tracking an object in a video
given its location in the first frame and no other information. Recently, a class
of tracking techniques called "tracking by detection" has been shown to give
promising results at real-time speeds. These methods train a discriminative
classifier in an online manner to separate the object from the background. This
classifier bootstraps itself by using the current tracker state to extract positive
and negative examples from the current frame.
6) Motion Competition: A Variational Approach to Piecewise Parametric
Motion Segmentation
The authors propose two different representations of the motion boundary: an
explicit implementation which can be applied to the motion-based
tracking of a single moving object, and an implicit multiphase level set
implementation which allows for the segmentation of an arbitrary number of
multiply connected moving objects. Numerical results both for simulated
ground truth experiments and for real world sequences demonstrate the capacity
of our approach to segment objects based exclusively on their relative motion.
2.1 SYSTEM ARCHITECTURE:
[Figure: System architecture diagram]
2.2 MODULE DESCRIPTION:
2.2.1 Video capturing:
Digital video refers to the capturing, manipulation, and storage of
moving images that can be displayed on computer screens. First, a camera and a
microphone capture the picture and sound of a video session and send analog
signals to a video-capture adapter board.
2.2.2 Moving object detection:
In an open area the objects will be able to move in any direction,
and with a camera setup typical of surveillance systems, this will give
movement in all directions of the surveillance video, and objects will enter and
leave the field of view on all its boundaries. Furthermore, the video will show
some perspective, i.e., the size of an object will change when it moves towards
or away from the camera. The objects' freedom of movement also implies that
they can move in a way where they occlude each other, or they may stop
moving for a while. In the case of people the occlusion and stopping will be
very likely when they are interacting, e.g. two people stopping and talking to
each other and then shaking hands or hugging before departure. People may
also be moving in groups or form and leave groups in an arbitrary fashion.
These challenges could be solved by restricting the movement of the objects,
but this would limit the system from being applied in many situations. Different
types of objects: In some open areas many different types of objects will be
present. A surveillance video of a parking lot for example will contain vehicles,
persons, and maybe birds or dogs. People may also leave or pick up other
objects in the scene. The most general surveillance system would be able to
distinguish between these objects, and treat them in the way most appropriate to
that type of object. Constraints in this respect would limit the system to areas
with only a certain type of objects.
2.2.3 Motion segmentation:
Background subtraction is the first step in the process of
segmenting and tracking people. Distinguishing between foreground and
background in a very dynamic and unconstrained outdoor environment over
several hours is a challenging task. The background model is kept in the data
storage and four individual modules do training of the model, updating of the
model, foreground/background classification and post processing. The first k
video frames are used to train the background model to achieve a model that
represents the variation in the background during this period. The following
frames (from k + 1 and onwards) are each processed by the background
subtraction module to produce a mask that describes the foreground regions
identified by comparing the incoming frame with the background model.
Information from frames k + 1 and onwards is used to update the background
model, either by the continuous update mechanism, by layered updating, or
both. The mask obtained from the background subtraction is processed further
in the post-processing module, which minimizes the effect of noise in the mask.
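A minimal sketch of such a continuously updated background model is given below. The class name, the learning rate alpha, and the 8-bit grey-level input are illustrative assumptions, not the project's actual implementation:

RunningAverageBackground.java (illustrative):
public class RunningAverageBackground
{
    private final float[] model;   // per-pixel background estimate
    private final float alpha;     // learning rate, e.g. 0.05f
    private final int threshold;   // foreground decision threshold

    public RunningAverageBackground(int numPixels, float alpha, int threshold)
    {
        this.model = new float[numPixels];
        this.alpha = alpha;
        this.threshold = threshold;
    }

    // Train on one grey-level frame (used for the first k frames).
    public void train(byte[] frame)
    {
        for (int i = 0; i < frame.length; i++)
            model[i] = (1 - alpha) * model[i] + alpha * (frame[i] & 0xff);
    }

    // Classify a frame against the model, updating the model only at
    // pixels classified as background (continuous update).
    public boolean[] subtract(byte[] frame)
    {
        boolean[] mask = new boolean[frame.length];
        for (int i = 0; i < frame.length; i++)
        {
            int v = frame[i] & 0xff;
            mask[i] = Math.abs(v - model[i]) > threshold;  // foreground?
            if (!mask[i])
                model[i] = (1 - alpha) * model[i] + alpha * v;
        }
        return mask;
    }
}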
[Figure: Motion segmentation pipeline: web camera → moving object detection → image stored on server]
2.2.4 SMS Alert System (Short Message Service):
After detecting the changes in video frames, we are alerting the
central control unit or the user through SMS using the GSM Modem. A GSM
modem is a wireless modem that works with a GSM wireless network. A
wireless modem behaves like a dial-up modem. The main difference
between them is that a dial-up modem sends and receives data through a
fixed telephone line while a wireless modem sends and receives data through
radio waves. Typically, an external GSM modem is connected to a computer
through a serial cable or a USB cable. Like a GSM mobile phone, a GSM
modem requires a SIM card from a wireless carrier in order to operate.
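For illustration, the fragment below shows how such an alert could be pushed through a serial-attached GSM modem with the standard AT commands (AT+CMGF=1 selects text mode, AT+CMGS addresses the recipient). It is a sketch: how the OutputStream for the serial port is obtained depends on the serial library in use (javax.comm, RXTX, and the like), which is assumed here, and responses from the modem are not read back:

SmsAlert.java (illustrative):
import java.io.IOException;
import java.io.OutputStream;

public class SmsAlert
{
    private static final char CTRL_Z = 0x1A;  // terminates the message body

    public static void send(OutputStream out, String phone, String text)
        throws IOException, InterruptedException
    {
        write(out, "AT\r");                        // modem sanity check
        write(out, "AT+CMGF=1\r");                 // text mode
        write(out, "AT+CMGS=\"" + phone + "\"\r"); // recipient; modem replies '>'
        write(out, text + CTRL_Z);                 // body; Ctrl-Z sends it
    }

    private static void write(OutputStream out, String s)
        throws IOException, InterruptedException
    {
        out.write(s.getBytes("US-ASCII"));
        out.flush();
        Thread.sleep(500);  // crude pacing instead of parsing modem replies
    }
}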
[Figure: Alert workflow: collect the client and server images → k-means cluster operation → comparing process → find out object movement → SMS alert]
2.3.1 BACKGROUND SUBTRACTION:
Background subtraction is used in different applications to detect the
moving objects in the scene like in video surveillance, optical motion capture
and multimedia.
Background subtraction presents the following steps:
Background modelling
Background initialization
Background maintenance
Foreground detection
Background subtraction presents the following issues:
Choice of the feature size: a pixel, a block, or a cluster
Choice of the feature type: colour features, edge features, stereo features,
motion features, and texture features
Background subtraction is a computational vision process of extracting
foreground objects in a particular scene. A foreground object can be described
as an object of attention which helps in reducing the amount of data to be
processed as well as provide important information to the task under
consideration. Often, the foreground object can be thought of as a coherently
moving object in a scene. We must emphasize the word coherent here because,
if a person is walking in front of moving leaves, the person forms the foreground
object, while the leaves, though having motion associated with them, are considered
background due to their repetitive behaviour. In some cases, the distance of the
moving object also forms a basis for it to be considered background, e.g., if in
a scene one person is close to the camera while another person is far away in the
background, the nearby person is considered foreground while
the person far away is ignored due to its small size and the lack of information
that it provides. Identifying moving objects from a video sequence is a
fundamental and critical task in many computer-vision applications. A common
approach is to perform background subtraction, which identifies moving objects
from the portion of video frame that differs from background.
Background subtraction is a class of techniques for segmenting out
objects of interest in a scene for applications such as surveillance. There are
many challenges in developing a good background subtraction algorithm. First,
it must be robust against changes in illumination. Second, it should avoid
detecting non-stationary background objects and shadows cast by moving
objects. A good background model should also react quickly to changes in
background and adapt itself to accommodate changes occurring in the
background such as moving of a stationary chair from one place to another. It
should also have a good foreground detection rate and the processing time for
background subtraction should be real-time.
The purpose of our work is to obtain a real-time system which works well in
indoor workspace kind of environment and is independent of camera
placements, reflection, illumination, shadows, opening of doors and other
similar scenarios which lead to errors in foreground extraction. The system
should be robust to whatever it is presented with in its field of vision and should
be able to cope with all the factors contributing to erroneous results.
Much work has been done towards obtaining the best possible background
model that works in real time. The most primitive of these algorithms would be to
use a static frame without any foreground object as a base background model
and use a simple threshold-based frame subtraction to obtain the foreground.
This is not suited for real-life situations where normally there is a lot of
movement through cluttered areas, objects overlapping in the visual field,
shadows, lighting changes, effects of moving elements in the scene (e.g.
swaying trees), slow-moving objects, and objects being introduced or removed
from the scene.
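The primitive static-frame scheme described above reduces to a per-pixel absolute difference followed by a threshold; a minimal sketch is given below (the class name and threshold value are assumptions):

FrameDifference.java (illustrative):
public final class FrameDifference
{
    private FrameDifference() {}

    // Returns a binary foreground mask for an 8-bit grey frame, given a
    // background frame captured with no foreground object present.
    public static boolean[] mask(byte[] background, byte[] frame, int threshold)
    {
        boolean[] fg = new boolean[frame.length];
        for (int i = 0; i < frame.length; i++)
        {
            int diff = Math.abs((frame[i] & 0xff) - (background[i] & 0xff));
            fg[i] = diff > threshold;  // e.g. threshold around 30 for 8-bit data
        }
        return fg;
    }
}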
2.3.2 Low Rank Modelling:
The matrix completion problem is to recover a low-rank matrix from a
subset of its entries. The main solution strategy for this problem has been based
on nuclear-norm minimization which requires computing singular value
decompositions – a task that is increasingly costly as matrix sizes and ranks
increase. To improve the capacity of solving large-scale problems, we propose a
low-rank factorization model and construct nonlinear successive over-relaxation
(SOR) algorithm that only requires solving a linear least squares problem per
iteration. Convergence of this nonlinear SOR algorithm is analyzed. Numerical
results show that the algorithm can reliably solve a wide range of problems at a
speed at least Several times faster than many nuclear-norm minimization
algorithms.
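Up to notation, the factorization model referred to here replaces the nuclear-norm objective with a bilinear fit; the symbols X, Y, Z, and r below follow common usage for this method rather than this report's notation:

\min_{X\in\mathbb{R}^{m\times r},\;Y\in\mathbb{R}^{r\times n},\;Z\in\mathbb{R}^{m\times n}}
\ \tfrac{1}{2}\,\lVert XY - Z\rVert_F^2
\quad\text{s.t.}\quad Z_{ij}=M_{ij}\ \ \forall (i,j)\in\Omega.

Each SOR sweep then updates X, Y, and Z through linear least squares only, so no singular value decomposition is required per iteration.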
The problem of minimizing the rank of a matrix arises in many
applications, for example, control and systems theory, model reduction and
minimum-order control synthesis, recovering shape and motion from image
streams, data mining and pattern recognition [6], and machine learning such as
latent semantic indexing, collaborative prediction, and low-dimensional
embedding. In this paper, we consider the Matrix Completion (MC) problem of
finding a lowest-rank matrix given a subset of its entries, that is,

min rank(W) over W ∈ R^(m×n), s.t. W_ij = M_ij, ∀(i, j) ∈ Ω,

where rank(W) denotes the rank of W, and M_ij ∈ R are given for (i, j)
∈ Ω ⊂ {(i, j) : 1 ≤ i ≤ m, 1 ≤ j ≤ n}. Since rank minimization is intractable
in general, it is commonly relaxed to the convex optimization problem

min ‖W‖∗ over W ∈ R^(m×n), s.t. W_ij = M_ij, ∀(i, j) ∈ Ω,

where the nuclear or trace norm ‖W‖∗ is the summation of the singular values
of W. In particular, Candès and Recht proved that a given rank-r matrix M
satisfying certain incoherence conditions can be recovered exactly, with high
probability, from a subset Ω of uniformly sampled entries whose cardinality |Ω|
is of the order O(r(m + n) polylog(m + n)). A related unconstrained
formulation is

min µ‖W‖∗ + ‖P_Ω(W − M)‖_F² over W ∈ R^(m×n).
Assume that the underlying background images are linearly
correlated. Thus, the matrix composed of vectorized video frames can be
approximated by a low-rank matrix, and the moving objects can be detected as
outliers in this low-rank representation. Formulating the problem as outlier
detection allows us to get rid of many assumptions on the behaviour of
foreground. The low-rank representation of background makes it flexible to
accommodate the global variations in the background. Moreover, DECOLOR
performs object detection and background estimation simultaneously without
training sequences.
Basic Low-Rank Approximation:
The basic (unweighted) low-rank approximation problem has an analytic
solution in terms of the singular value decomposition of the data matrix. The
result is referred to as the matrix approximation lemma or the
Eckart–Young–Mirsky theorem. Let

D = U Σ Vᵀ

be the singular value decomposition of the data matrix D ∈ R^(m×n), and
partition U, Σ, and V as follows:

U = [U₁ U₂], Σ = diag(Σ₁, Σ₂), V = [V₁ V₂],

where U₁ is m×r, Σ₁ is r×r, and V₁ is n×r. Then the rank-r
matrix, obtained from the truncated singular value decomposition,

D̂* = U₁ Σ₁ V₁ᵀ,

is such that ‖D − D̂*‖_F ≤ ‖D − D̂‖_F for every matrix D̂ of rank at most r.
The minimizer is unique if and only if the rth and (r+1)th singular values of D
differ.

The Frobenius norm weights uniformly all elements of the approximation
error D − D̂. Prior knowledge about the distribution of the errors can be taken into
account by considering the weighted low-rank approximation problem

min vec(D − D̂)ᵀ W vec(D − D̂) over D̂ with rank(D̂) ≤ r,

where vec(·) vectorizes the matrix column-wise and W is a
given positive (semi)definite weight matrix.

The general weighted low-rank approximation problem does not
admit an analytic solution in terms of the singular value decomposition and
is solved by local optimization methods.
Kernel Representation:
Using the image representation of the rank constraint,

rank(D̂) ≤ r ⟺ D̂ = P L for some P ∈ R^(m×r) and L ∈ R^(r×n),

and the kernel representation,

rank(D̂) ≤ r ⟺ R D̂ = 0 for some full-row-rank R ∈ R^((m−r)×m),

the weighted low-rank approximation problem becomes equivalent to the
parameter optimization problems

min vec(D − P L)ᵀ W vec(D − P L) over P and L

and

min vec(D − D̂)ᵀ W vec(D − D̂) over R and D̂, s.t. R D̂ = 0 and R Rᵀ = I_(m−r),

where I_(m−r) is the identity matrix of size m − r.
Alternative Algorithm:
The image representation of the rank constraint suggests a
parameter optimization method in which the cost function is minimized
alternately over one of the variables (P or L) with the other one fixed.
Although simultaneous minimization over both P and L is a difficult nonconvex
optimization problem, minimization over one of the variables alone is
a linear least squares problem and can be solved globally and efficiently.
The resulting optimization algorithm (called alternating projections)
is globally convergent with a linear convergence rate to a locally optimal
solution of the weighted low-rank approximation problem. A starting value for
the P (or L) parameter should be given. The iteration is stopped when a
user-defined convergence condition is satisfied.
[Listing: MATLAB implementation of the alternating projections algorithm for
weighted low-rank approximation]
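For the unweighted case (W = I), the two alternating least-squares steps take the closed forms below (a sketch; the weighted case solves weighted least squares instead):

L^{(k+1)} = \bigl(P^{(k)\top}P^{(k)}\bigr)^{-1} P^{(k)\top} D,
\qquad
P^{(k+1)} = D\,L^{(k+1)\top}\bigl(L^{(k+1)}L^{(k+1)\top}\bigr)^{-1},

so that D \approx P^{(k+1)}L^{(k+1)} with P \in \mathbb{R}^{m\times r} and L \in \mathbb{R}^{r\times n}, each step decreasing the approximation error.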
2.4 SYSTEM DESIGN DIAGRAM
2.4.1 Dataflow Diagram
DFD-0:
[Figure: Level-0 DFD: Web Cam → Video Capture → Dataset]
DFD-1:
[Figure: Level-1 DFD: Web Cam → Video Capture → Foreground Video → Dataset]
DFD-2:
[Figure: Level-2 DFD: Web Cam → Video Capture → Foreground Video → Video Motion Object Detection → Image Comparison → SMS Alert, with the Dataset as data store]
2.4.2 UML Diagram
Use case Diagram:
[Figure: Use case diagram of the moving object detection system. Actors: User, Webcam, GSM. Use cases: background image, foreground image, image difference, moving object detection, SMS alert.]
Sequence Diagram:
[Figure: Sequence diagram among admin, server, object, and GSM: 1. webcam, 2. background image, 3. moving object, 4. foreground image, 5. image difference, k-means process, 6. GSM communication, 7. alert msg, 8. view object movements.]
Collaboration Diagram:
[Figure: Collaboration diagram among admin, server, object, and GSM, with the same numbered interactions as the sequence diagram: webcam, background image, moving object, foreground image, image difference, k-means process, GSM communication, alert msg, view object movements.]
Activity Diagram:
[Figure: Activity diagram: webcam → background image → image difference → moving object? If yes: foreground image and GSM msg alert; if no: continue monitoring.]
2.5 SYSTEM TESTING:
White Box Testing
Execution of every path in the program.
Black Box Testing
Exhaustive input testing is required to find all errors.
Unit Testing
Unit testing, also known as Module Testing, focuses verification
efforts on the module. The module is tested separately and this is
carried out at the programming stage itself.
Unit testing comprises the set of tests performed by an individual
programmer before integration of the unit into the system.
Unit test focuses on the smallest unit of software design- the
software component or module.
Using component level design, important control paths are tested to
uncover errors within the boundary of the module.
Unit test is white box oriented and the step can be conducted in
parallel for multiple components.
Functional Testing:
Functional test cases involve exercising the code with normal input
values for which the expected results are known, as well as with the boundary
values.
Objective:
The objective is to take unit-tested modules and build a program structure
that has been dictated by design.
Performance Testing:
Performance testing determines the amount of execution time spent in
various parts of the unit, program throughput, and response time and
device utilization of the program unit. It occurs throughout all steps in the
testing process.
Integration Testing:
It is a systematic technique for constructing the program structure while
at the same time conducting tests to uncover errors associated with the
interface.
It takes the unit tested modules and builds a program structure.
All the modules are combined and tested as a whole.
Integration of all the components to form the entire system, and an overall
testing is executed.
Validation Testing:
Validation test succeeds when the software functions in a manner that can
be reasonably expected by the client.
Software validation is achieved through a series of black box tests
which conform to the requirements.
Black box testing is conducted at the software interface.
The tests are designed to uncover interface errors and are also used to
demonstrate that software functions are operational, input is properly
accepted, outputs are produced, and the integrity of external
information is maintained.
System Testing:
Tests to find the discrepancies between the system and its original
objective, current specifications and system documentation.
Structure Testing:
It is concerned with exercising the internal logic of a program and
traversing particular execution paths.
Output Testing:
The output of test cases is compared with the expected results created
during the design of the test cases.
The output generated or displayed by the system under consideration is
tested by asking the users about the format they require.
Here, the output format is considered in two ways: one is the on-screen
format and the other is the printed format.
The output on the screen is found to be correct, as the format was
designed in the system design phase according to user needs.
The printed output also conforms to the specified requirements of the
user's hard copy.
User acceptance Testing:
The final stage before handing over to the customer; it is usually carried
out by the customer, where the test cases are executed with actual data.
The system under consideration is tested for user acceptance by
constantly keeping in touch with the prospective system users at the time of
developing and making changes whenever required.
It involves planning and execution of various types of tests in order to
demonstrate that the implemented software system satisfies the
requirements stated in the requirements document.
Two set of acceptance test to be run:
1. Those developed by quality assurance group.
2. Those developed by customer.
Methodology
Waterfall Approach
While the Waterfall Model presents a straightforward view of the software life
cycle, this view is only appropriate for certain classes of software development.
Specifically, the Waterfall Model works well when the software requirements
are well understood (e.g., software such as compilers or operating systems) and
the nature of the software development involves contractual agreements. The
Waterfall Model is a natural fit for contract-based software development since
this model is document driven; that is, many of the products such as the
requirements specification and the design are documents. These documents then
become the basis for the software development contract.
There have been many waterfall variations since the initial model was
introduced by Winston Royce in 1970 in a paper entitled "Managing the
Development of Large Software Systems: Concepts and Techniques". Barry
Boehm, developer of the spiral model, modified the waterfall model
in his book Software Engineering Economics (Prentice-Hall, 1987). The basic
differences in the various models are in the naming and/or order of the phases.
The basic waterfall approach looks like the illustration below. Each phase is
done in a specific order with its own entry and exit criteria and provides the
maximum in separation of skills, an important factor in government contracting.
[Figure: Example of a typical waterfall approach]
While some variations on the waterfall theme allow for iterations back to the
previous phase, "In practice most waterfall projects are managed with the
assumption that once the phase is completed, the result of that activity is cast in
concrete. For example, at the end of the design phase, a design document is
delivered. It is expected that this document will not be updated throughout the
rest of the development. You cannot climb up a waterfall." (Murray Cantor,
Object-Oriented Project Management with UML, John Wiley, 1998)
The waterfall is the easiest of the approaches for a business analyst to
understand and work with and it is still, in its various forms, the operational
SLC in the majority of US IT shops. The business analyst is directly involved
in the requirements definition and/or analysis phases and peripherally involved
in the succeeding phases until the end of the testing phase. The business analyst
is heavily involved in the last stages of testing when the product is determined
to solve the business problem. The solution is defined by the business analyst in
the business case and requirements documents. The business analyst is also
involved in the integration or transition phase assisting the business community
to accept and incorporate the new system and processes.
V Model
The "V" model (sometimes known as the "U" model) reflects the
approach to systems development wherein the definition side of the model is
linked directly to the confirmation side. It specifies early testing and preparation
of testing scenarios and cases before the build stage to simultaneously validate
the definitions and prepare for the test stages.
It is the standard for German federal government projects and is
considered as much a project management method as a software development
approach.
"The V Model, while admittedly obscure, gives equal weight to testing
rather than treating it as an afterthought. Initially defined by the late Paul Rook
in the late 1980s, the V was included in the U.K.'s National Computing Centre
publications in the 1990s with the aim of improving the efficiency and
effectiveness of software development. It's accepted in Europe and the U.K. as a
superior alternative to the waterfall model; yet in the U.S., the V Model is often
mistaken for the waterfall...
"In fact, the V Model emerged in reaction to some waterfall models that showed
testing as a single phase following the traditional development phases of
requirements analysis, high-level design, detailed design and coding. The
waterfall model did considerable damage by supporting the common
impression that testing is merely a brief detour after most of the mileage has
been gained by mainline development activities. Many managers still believe
this, even though testing usually takes up half of the project time." (Goldsmith
and Graham, "The Forgotten Phase", Software Development, July 2002)
As shown below, the model is the shape of the development cycle (a waterfall
wrapped around) and the concept of flow down and across the phases. The V
shows the typical sequence of development activities on the left-hand
(downhill) side and the corresponding sequence of test execution activities on
the right-hand (uphill) side.
[Figure: Example of a typical V Model (IEEE)]
The primary contribution the V Model makes is this alignment of testing and
specification. This is also an advantage to the business analyst who can use the
model and approach to enforce early consideration of later testing. The V
Model emphasizes that testing is done throughout the SDLC rather than just at
the end of the cycle and reminds the business analyst to prepare the test cases
and scenarios in advance while the solution is being defined.
[Figure: V Model structure: on the left (downhill) side, concept of operations, system requirements, subsystem requirements, subsystem design, unit design, and code; on the right (uphill) side, the corresponding unit test, integration test, subsystem test, system test, and acceptance test.]
The business analyst's role in the V Model is essentially the same as in the
waterfall. The business analyst is involved full time in the specification of the
business problem and the confirmation and validation that the business problem
has been solved, which is done at the acceptance test. The business analyst is also
involved in the requirements phases and advises the system test stage which is
typically performed by independent testers – the quality assurance group or
someone other than the development team. The primary business analyst
involvement in the system test stage is keeping the requirements updated as
changes occur and providing the "voice of the customer" to the testers and
development team. The rest of the test stages on the right side of the model are
done by the development team to ensure they have developed the product
correctly. It is the business analyst's job to ensure they have developed the
correct product.
2.6 SOURCE CODE
CLIENT SIDE PROGRAM CODING:
Webcam.java:
import java.awt.*;
import javax.swing.*;
import java.io.Serializable;
import java.awt.event.*;
import java.net.*;
import java.io.*;
import java.util.*;
import java.rmi.server.*;
import java.rmi.*;

public class Webcam extends JFrame implements ActionListener //, Runnable
{
    TestMotionDetection t1;
    long lastPlay = 0;
    int imagem = 1;
    int imagec = 1;
    int imc = 1;
    int v;
    int tRegions = 0;
    boolean boo;
    JPanel pan;
    JButton stcbut;
    JButton srtcbut;
    JButton qubut;
    JLabel head;
    Thread th;
    //remote remob = null;
    //String serverip = "192.168.1.7";

    public static void main(String[] args)
    {
        Webcam al = new Webcam();
        al.setDefaultCloseOperation(EXIT_ON_CLOSE);
        al.setSize(300, 150);
        al.setTitle("FAST MOTION DETECTION");
        al.setResizable(false);
        al.setLocationRelativeTo(null);
        al.setVisible(true);
    }

    public Webcam()
    {
        //th = new Thread(this);
        pan = new JPanel();
        pan.setLayout(null);
        srtcbut = new JButton("Start CAM");
        stcbut = new JButton("Stop CAM");
        qubut = new JButton("Quit");
        head = new JLabel("REMOTE MONITORING");
        getContentPane().add(pan);
        //head.setBounds(150, 25, 150, 30);
        srtcbut.setBounds(40, 40, 100, 30);
        stcbut.setBounds(170, 40, 100, 30);
        qubut.setBounds(150, 150, 150, 30);
        pan.add(stcbut);
        pan.add(srtcbut);
        pan.add(head);
        //pan.add(qubut);
        //pan.setSize(500, 250);
        pan.setSize(300, 100);
        stcbut.addActionListener(this);
        srtcbut.addActionListener(this);
        qubut.addActionListener(this);
    }

    public void actionPerformed(ActionEvent e)
    {
        if (e.getSource() == stcbut)
        {
            System.exit(0);
            System.out.println("stop");
            //Vision.stopViewer();
        }
    }
}
TestMotionDetection.java:
import java.awt.*;
import java.awt.event.*;
import javax.media.*;
import javax.media.control.TrackControl;
import javax.media.Format;
import javax.media.format.*;
import javax.media.protocol.*;
import javax.media.datasink.*;
import javax.media.control.*;

public class TestMotionDetection extends Frame implements ControllerListener
{
    Processor p;
    DataSink fileW = null;
    Object waitSync = new Object();
    boolean stateTransitionOK = true;

    public TestMotionDetection()
    {
        super("Test Motion Detection");
    }

    public boolean open(MediaLocator ds)
    {
        try
        {
            p = Manager.createProcessor(ds);
        }
        catch (Exception e)
        {
            System.err.println("Failed to create a processor from the given datasource: " + e);
            return false;
        }
        p.addControllerListener(this);
        p.configure();
        if (!waitForState(p.Configured))
        {
            System.err.println("Failed to configure the processor.");
            return false;
        }
        p.setContentDescriptor(null);
        TrackControl tc[] = p.getTrackControls();
        if (tc == null)
        {
            System.err.println("Failed to obtain track controls from the processor.");
            return false;
        }
        TrackControl videoTrack = null;
        for (int i = 0; i < tc.length; i++)
        {
            if (tc[i].getFormat() instanceof VideoFormat)
            {
                videoTrack = tc[i];
                break;
            }
        }
        if (videoTrack == null)
        {
            System.err.println("The input media does not contain a video track.");
            return false;
        }
        setLayout(new BorderLayout());
        Component cc;
        Component vc;
        if ((vc = p.getVisualComponent()) != null)
            add("Center", vc);
        if ((cc = p.getControlPanelComponent()) != null)
            add("South", cc);
        p.start();
        setVisible(true);
        addWindowListener(new WindowAdapter()
        {
            public void windowClosing(WindowEvent we)
            {
                p.close();
                System.exit(0);
            }
        });
        p.start();
        return true;
    }

    public void addNotify()
    {
        super.addNotify();
        pack();
    }

    boolean waitForState(int state)
    {
        synchronized (waitSync)
        {
            try
            {
                while (p.getState() != state && stateTransitionOK)
                    System.out.println("fagg");
            }
            catch (Exception e)
            {
            }
        }
        return stateTransitionOK;
    }

    public void controllerUpdate(ControllerEvent evt)
    {
        System.out.println(this.getClass().getName() + evt);
        if (evt instanceof ConfigureCompleteEvent ||
            evt instanceof RealizeCompleteEvent ||
            evt instanceof PrefetchCompleteEvent)
        {
            synchronized (waitSync)
            {
                stateTransitionOK = true;
                waitSync.notifyAll();
            }
        }
        else if (evt instanceof ResourceUnavailableEvent)
        {
            synchronized (waitSync)
            {
                stateTransitionOK = false;
                waitSync.notifyAll();
            }
        }
        else if (evt instanceof EndOfMediaEvent)
        {
            p.close();
            System.exit(0);
        }
    }
}
TimeStampEffect.java:
import javax.media.*;
import javax.media.format.*;
import java.awt.*;
import com.sun.image.codec.jpeg.JPEGCodec;
import com.sun.image.codec.jpeg.JPEGImageEncoder;
import java.awt.image.BufferedImage;
import java.io.*;
import java.util.*;
import java.rmi.server.*;
import java.net.*;
import java.rmi.*;

public class TimeStampEffect implements Effect
{
    Format inputFormat;
    Format outputFormat;
    Format[] inputFormats;
    Format[] outputFormats;
    java.text.SimpleDateFormat sdf;
    static int d = 0;

    public TimeStampEffect()
    {
        sdf = new java.text.SimpleDateFormat("hh:mm:ss MM/dd/yy");
        inputFormats = new Format[]
        {
            new RGBFormat(null,
                Format.NOT_SPECIFIED,
                Format.byteArray,
                Format.NOT_SPECIFIED,
                24,
                3, 2, 1,
                3, Format.NOT_SPECIFIED,
                Format.TRUE,
                Format.NOT_SPECIFIED)
        };
        outputFormats = new Format[]
        {
            new RGBFormat(null,
                Format.NOT_SPECIFIED,
                Format.byteArray,
                Format.NOT_SPECIFIED,
                24,
                3, 2, 1,
                3, Format.NOT_SPECIFIED,
                Format.TRUE,
                Format.NOT_SPECIFIED)
        };
    }

    public Format[] getSupportedInputFormats()
    {
        return inputFormats;
    }

    public Format[] getSupportedOutputFormats(Format input)
    {
        if (input == null)
            return outputFormats;
        if (matches(input, inputFormats) != null)
            return new Format[] { outputFormats[0].intersects(input) };
        else
            return new Format[0];
    }

    public Format setInputFormat(Format input)
    {
        inputFormat = input;
        return input;
    }

    public Format setOutputFormat(Format output)
    {
        if (output == null || matches(output, outputFormats) == null)
            return null;
        RGBFormat incoming = (RGBFormat) output;
        Dimension size = incoming.getSize();
        int maxDataLength = incoming.getMaxDataLength();
        int lineStride = incoming.getLineStride();
        float frameRate = incoming.getFrameRate();
        int flipped = incoming.getFlipped();
        int endian = incoming.getEndian();
        if (size == null)
            return null;
        if (maxDataLength < size.width * size.height * 3)
            maxDataLength = size.width * size.height * 3;
        if (lineStride < size.width * 3)
            lineStride = size.width * 3;
        if (flipped != Format.FALSE)
            flipped = Format.FALSE;
        outputFormat = outputFormats[0].intersects(new RGBFormat(size,
            maxDataLength,
            null,
            frameRate,
            Format.NOT_SPECIFIED,
            Format.NOT_SPECIFIED,
            Format.NOT_SPECIFIED,
            Format.NOT_SPECIFIED,
            Format.NOT_SPECIFIED,
            lineStride,
            Format.NOT_SPECIFIED,
            Format.NOT_SPECIFIED));
        return outputFormat;
    }

    public int process(Buffer inBuffer, Buffer outBuffer)
    {
        int outputDataLength = ((VideoFormat) outputFormat).getMaxDataLength();
        validateByteArraySize(outBuffer, outputDataLength);
        outBuffer.setLength(outputDataLength);
        outBuffer.setFormat(outputFormat);
        outBuffer.setFlags(inBuffer.getFlags());
        byte[] inData = (byte[]) inBuffer.getData();
        byte[] outData = (byte[]) outBuffer.getData();
        RGBFormat vfIn = (RGBFormat) inBuffer.getFormat();
        Dimension sizeIn = vfIn.getSize();
        int pixStrideIn = vfIn.getPixelStride();
        int lineStrideIn = vfIn.getLineStride();
        if (outData.length < sizeIn.width * sizeIn.height * 3)
        {
            System.out.println("the buffer is not full");
            return BUFFER_PROCESSED_FAILED;
        }
        Calendar cal = new GregorianCalendar();
        String tt;
        int hour12 = cal.get(Calendar.HOUR);
        int min = cal.get(Calendar.MINUTE);
        int sec = cal.get(Calendar.SECOND);
        int ampm = cal.get(Calendar.AM_PM);
        if (ampm == 0)
            tt = "AM";
        else
            tt = "PM";
        String time = Integer.toString(hour12) + "." + Integer.toString(min) + "."
            + Integer.toString(sec) + "." + tt;
        Font.println(sdf.format(new java.util.Date()).toString() + " (math room 205)",
            Font.FONT_6x11, 10, 20, (byte) 255, (byte) 255, (byte) 255, outBuffer);
        System.arraycopy(inData, 0, outData, 0, inData.length);
        try
        {
            writeImage("MotDetImages/MD@" + time + ".gif", inData, 320, 240);
        }
        catch (Exception es)
        {
        }
        try
        {
            remote remob;
            Thread.sleep(3000);
            FileInputStream fin = new FileInputStream("MotDetImages/MD@" + time + ".gif");
            int size = fin.available();
            byte img[] = new byte[size];
            fin.read(img);
            fin.close();
            remob = (remote) Naming.lookup("//127.0.0.1/rob");
            remob.writeImageFile(img, "MD@" + time + ".gif");
            System.out.println("Image File send to server ");
        }
        catch (Exception err)
        {
            System.out.println("Exception remote " + err);
        }
        System.out.println(" The Image created for comparing on region ");
        return BUFFER_PROCESSED_OK;
    }

    public String getName()
    {
        return "TimeStamp Effect";
    }

    public void open()
    {
    }

    public void close()
    {
    }

    public void reset()
    {
    }

    public Object getControl(String controlType)
    {
        return null;
    }

    public Object[] getControls()
    {
        return null;
    }

    Format matches(Format in, Format outs[])
    {
        for (int i = 0; i < outs.length; i++)
        {
            if (in.matches(outs[i]))
                return outs[i];
        }
        return null;
    }

    byte[] validateByteArraySize(Buffer buffer, int newSize)
    {
        Object objectArray = buffer.getData();
        byte[] typedArray;
        if (objectArray instanceof byte[]) // is correct type AND not null
        {
            typedArray = (byte[]) objectArray;
            if (typedArray.length >= newSize) // is sufficient capacity
                return typedArray;
            byte[] tempArray = new byte[newSize]; // re-alloc array
            System.arraycopy(typedArray, 0, tempArray, 0, typedArray.length);
            typedArray = tempArray;
        }
        else
        {
            typedArray = new byte[newSize];
        }
        buffer.setData(typedArray);
        return typedArray;
    }

    public void writeImage(String s, byte abyte0[], int i, int j)
        throws FileNotFoundException, IOException
    {
        FileOutputStream fileoutputstream = new FileOutputStream(s);
        JPEGImageEncoder jpegimageencoder = JPEGCodec.createJPEGEncoder(fileoutputstream);
        int ai[] = new int[abyte0.length / 3];
        int k = 0;
        for (int l = j - 1; l > 0; l--)
        {
            for (int i1 = 0; i1 < i; i1++)
            {
                ai[k++] = 0xff000000 | (abyte0[l * i * 3 + i1 * 3 + 2] & 0xff) << 16
                    | (abyte0[l * i * 3 + i1 * 3 + 1] & 0xff) << 8
                    | abyte0[l * i * 3 + i1 * 3] & 0xff;
            }
        }
        BufferedImage bufferedimage = new BufferedImage(i, j, 1);
        bufferedimage.setRGB(0, 0, i, j, ai, 0, i);
        jpegimageencoder.encode(bufferedimage);
        fileoutputstream.close();
    }
}
MotionDetectionEffect.java:
import javax.media.*;
import javax.media.format.*;
import java.awt.*;

public class MotionDetectionEffect implements Effect {

    public int OPTIMIZATION = 0;
    public int THRESHOLD_MAX = 10000;
    public int THRESHOLD_INC = 1000;
    public int THRESHOLD_INIT = 5000;
    private Format inputFormat;
    private Format outputFormat;
    private Format[] inputFormats;
    private Format[] outputFormats;
    private byte[] refData;       // reference (background) frame
    private byte[] bwData;        // black/white difference image
    private int avg_ref_intensity;
    private int avg_img_intensity;
    public int threshold = 30;    // per-pixel difference threshold
    public int blob_threshold = THRESHOLD_INIT; // changed pixels needed to signal motion
    public boolean debug = false;

    public MotionDetectionEffect() {
        inputFormats = new Format[] {
            new RGBFormat(null,
                    Format.NOT_SPECIFIED,
                    Format.byteArray,
                    Format.NOT_SPECIFIED,
                    24,
                    3, 2, 1,
                    3, Format.NOT_SPECIFIED,
                    Format.TRUE,
                    Format.NOT_SPECIFIED)
        };
        outputFormats = new Format[] {
            new RGBFormat(null,
                    Format.NOT_SPECIFIED,
                    Format.byteArray,
                    Format.NOT_SPECIFIED,
                    24,
                    3, 2, 1,
                    3, Format.NOT_SPECIFIED,
                    Format.TRUE,
                    Format.NOT_SPECIFIED)
        };
    }
    public Format[] getSupportedInputFormats() {
        return inputFormats;
    }

    public Format[] getSupportedOutputFormats(Format input) {
        if (input == null)
            return outputFormats;
        if (matches(input, inputFormats) != null)
            return new Format[] { outputFormats[0].intersects(input) };
        else
            return new Format[0];
    }
    public Format setInputFormat(Format input) {
        inputFormat = input;
        return input;
    }

    public Format setOutputFormat(Format output) {
        if (output == null || matches(output, outputFormats) == null)
            return null;
        RGBFormat incoming = (RGBFormat) output;
        Dimension size = incoming.getSize();
        int maxDataLength = incoming.getMaxDataLength();
        int lineStride = incoming.getLineStride();
        float frameRate = incoming.getFrameRate();
        int flipped = incoming.getFlipped();
        int endian = incoming.getEndian();
        if (size == null)
            return null;
        if (maxDataLength < size.width * size.height * 3)
            maxDataLength = size.width * size.height * 3;
        if (lineStride < size.width * 3)
            lineStride = size.width * 3;
        if (flipped != Format.FALSE)
            flipped = Format.FALSE;
        outputFormat = outputFormats[0].intersects(new RGBFormat(size,
                maxDataLength,
                null,
                frameRate,
                Format.NOT_SPECIFIED,
                Format.NOT_SPECIFIED,
                Format.NOT_SPECIFIED,
                Format.NOT_SPECIFIED,
                Format.NOT_SPECIFIED,
                lineStride,
                Format.NOT_SPECIFIED,
                Format.NOT_SPECIFIED));
        return outputFormat;
    }
    public int process(Buffer inBuffer, Buffer outBuffer) {
        int outputDataLength = ((VideoFormat) outputFormat).getMaxDataLength();
        validateByteArraySize(outBuffer, outputDataLength);
        outBuffer.setLength(outputDataLength);
        outBuffer.setFormat(outputFormat);
        outBuffer.setFlags(inBuffer.getFlags());
        byte[] inData = (byte[]) inBuffer.getData();
        byte[] outData = (byte[]) outBuffer.getData();
        RGBFormat vfIn = (RGBFormat) inBuffer.getFormat();
        Dimension sizeIn = vfIn.getSize();
        int pixStrideIn = vfIn.getPixelStride();
        int lineStrideIn = vfIn.getLineStride();
        int y, x;
        int width = sizeIn.width;
        int height = sizeIn.height;
        int r, g, b;
        int ip, op;
        byte result;
        int avg = 0;
        int refDataInt = 0;
        int inDataInt = 0;
        int correction;
        int blob_cnt = 0;
        // First frame: remember it as the reference (background) image.
        if (refData == null) {
            refData = new byte[outputDataLength];
            bwData = new byte[outputDataLength];
            System.arraycopy(inData, 0, refData, 0, inData.length);
            System.arraycopy(inData, 0, outData, 0, inData.length);
            for (ip = 0; ip < outputDataLength; ip++)
                avg += (int) (refData[ip] & 0xFF);
            avg_ref_intensity = avg / outputDataLength;
            return BUFFER_PROCESSED_OK;
        }
        if (outData.length < sizeIn.width * sizeIn.height * 3) {
            System.out.println("the buffer is not full");
            return BUFFER_PROCESSED_FAILED;
        }
        // Global intensity correction compensates for overall lighting
        // changes between the reference frame and the current frame.
        for (ip = 0; ip < outputDataLength; ip++)
            avg += (int) (inData[ip] & 0xFF);
        avg_img_intensity = avg / outputDataLength;
        correction = (avg_ref_intensity < avg_img_intensity)
                ? avg_img_intensity - avg_ref_intensity
                : avg_ref_intensity - avg_img_intensity;
        avg_ref_intensity = avg_img_intensity;
        // Per-pixel differencing: pixels whose corrected RGB distance from
        // the reference exceeds 'threshold' become white in bwData.
        ip = op = 0;
        for (int ii = 0; ii < outputDataLength / pixStrideIn; ii++) {
            refDataInt = (int) refData[ip] & 0xFF;
            inDataInt = (int) inData[ip++] & 0xFF;
            r = (refDataInt > inDataInt) ? refDataInt - inDataInt
                                         : inDataInt - refDataInt;
            refDataInt = (int) refData[ip] & 0xFF;
            inDataInt = (int) inData[ip++] & 0xFF;
            g = (refDataInt > inDataInt) ? refDataInt - inDataInt
                                         : inDataInt - refDataInt;
            refDataInt = (int) refData[ip] & 0xFF;
            inDataInt = (int) inData[ip++] & 0xFF;
            b = (refDataInt > inDataInt) ? refDataInt - inDataInt
                                         : inDataInt - refDataInt;
            r -= (r < correction) ? r : correction;
            g -= (g < correction) ? g : correction;
            b -= (b < correction) ? b : correction;
            result = (byte) (java.lang.Math.sqrt((double) ((r * r) + (g * g) + (b * b)) / 3.0));
            if (result > (byte) threshold) {
                bwData[op++] = (byte) 255;
                bwData[op++] = (byte) 255;
                bwData[op++] = (byte) 255;
            } else {
                bwData[op++] = (byte) result;
                bwData[op++] = (byte) result;
                bwData[op++] = (byte) result;
            }
        }
        // Count "blob" pixels: a white pixel counts only if all eight of its
        // neighbours are white too, which suppresses isolated noise pixels.
        for (op = lineStrideIn + 3; op < outputDataLength - lineStrideIn - 3; op += 3) {
            for (int i = 0; i < 1; i++) {
                if (((int) bwData[op + 2] & 0xFF) < 255) break;
                if (((int) bwData[op + 2 - lineStrideIn] & 0xFF) < 255) break;
                if (((int) bwData[op + 2 + lineStrideIn] & 0xFF) < 255) break;
                if (((int) bwData[op + 2 - 3] & 0xFF) < 255) break;
                if (((int) bwData[op + 2 + 3] & 0xFF) < 255) break;
                if (((int) bwData[op + 2 - lineStrideIn + 3] & 0xFF) < 255) break;
                if (((int) bwData[op + 2 - lineStrideIn - 3] & 0xFF) < 255) break;
                if (((int) bwData[op + 2 + lineStrideIn - 3] & 0xFF) < 255) break;
                if (((int) bwData[op + 2 + lineStrideIn + 3] & 0xFF) < 255) break;
                bwData[op] = (byte) 0;
                bwData[op + 1] = (byte) 0;
                blob_cnt++;
            }
        }
        // Enough changed pixels means motion: show the debug mosaic or pass
        // the frame through, then refresh the reference frame.
        if (blob_cnt > blob_threshold) {
            if (debug) {
                sample_down(inData, outData, 0, 0, sizeIn.width, sizeIn.height,
                        lineStrideIn, pixStrideIn);
                Font.println("original picture", Font.FONT_8x8, 0, 0,
                        (byte) 255, (byte) 255, (byte) 255, outBuffer);
                sample_down(refData, outData, 0, sizeIn.height / 2, sizeIn.width,
                        sizeIn.height, lineStrideIn, pixStrideIn);
                Font.println("reference picture", Font.FONT_8x8, 0,
                        sizeIn.height, (byte) 255, (byte) 255, (byte) 255, outBuffer);
                sample_down(bwData, outData, sizeIn.width / 2, 0, sizeIn.width,
                        sizeIn.height, lineStrideIn, pixStrideIn);
                Font.println("motion detection pic", Font.FONT_8x8,
                        sizeIn.width / 2, 0, (byte) 255, (byte) 255, (byte) 255, outBuffer);
            } else {
                System.arraycopy(inData, 0, outData, 0, inData.length);
            }
            System.arraycopy(inData, 0, refData, 0, inData.length);
            return BUFFER_PROCESSED_OK;
        }
        // No significant motion in this frame.
        return BUFFER_PROCESSED_FAILED;
    }
    public String getName() {
        return "Motion Detection Codec";
    }

    public void open() {
    }

    public void close() {
    }

    public void reset() {
    }

    public Object getControl(String controlType) {
        System.out.println(controlType);
        return null;
    }

    private Control[] controls;

    public Object[] getControls() {
        if (controls == null) {
            controls = new Control[1];
            controls[0] = new MotionDetectionControl(this);
        }
        return (Object[]) controls;
    }

    // Utility methods.
    Format matches(Format in, Format outs[]) {
        for (int i = 0; i < outs.length; i++) {
            if (in.matches(outs[i]))
                return outs[i];
        }
        return null;
    }
    void sample_down(byte[] inData, byte[] outData, int X, int Y, int width,
            int height, int lineStrideIn, int pixStrideIn) {
        int p1, p2, p3, p4, op, x, y;
        // Average each 2x2 block of the source into one destination pixel,
        // writing the half-size image at offset (X, Y).
        for (y = 0; y < (height / 2); y++) {
            p1 = (y * 2) * lineStrideIn;   // upper left cell
            p2 = p1 + pixStrideIn;         // upper right cell
            p3 = p1 + lineStrideIn;        // lower left cell
            p4 = p3 + pixStrideIn;         // lower right cell
            op = lineStrideIn * y + (lineStrideIn * Y) + (X * pixStrideIn);
            for (int i = 0; i < (width / 2); i++) {
                outData[op++] = (byte) (((int) (inData[p1++] & 0xFF) + ((int) inData[p2++] & 0xFF)
                        + ((int) inData[p3++] & 0xFF) + ((int) inData[p4++] & 0xFF)) / 4); // blue cells avg
                outData[op++] = (byte) (((int) (inData[p1++] & 0xFF) + ((int) inData[p2++] & 0xFF)
                        + ((int) inData[p3++] & 0xFF) + ((int) inData[p4++] & 0xFF)) / 4); // green cells avg
                outData[op++] = (byte) (((int) (inData[p1++] & 0xFF) + ((int) inData[p2++] & 0xFF)
                        + ((int) inData[p3++] & 0xFF) + ((int) inData[p4++] & 0xFF)) / 4); // red cells avg
                p1 += 3; p2 += 3; p3 += 3; p4 += 3; // skip the odd source pixel
            }
        }
    }
    byte[] validateByteArraySize(Buffer buffer, int newSize) {
        Object objectArray = buffer.getData();
        byte[] typedArray;
        if (objectArray instanceof byte[]) { // is correct type AND not null
            typedArray = (byte[]) objectArray;
            if (typedArray.length >= newSize) // is sufficient capacity
                return typedArray;
            byte[] tempArray = new byte[newSize]; // re-alloc array
            System.arraycopy(typedArray, 0, tempArray, 0, typedArray.length);
            typedArray = tempArray;
        } else {
            typedArray = new byte[newSize];
        }
        buffer.setData(typedArray);
        return typedArray;
    }
}
MotionDetectionControl.java:
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.event.*;
import javax.media.Control;

public class MotionDetectionControl implements Control, ActionListener,
        ChangeListener {

    Component component;
    JButton button;
    JSlider threshold;
    MotionDetectionEffect motion;

    public MotionDetectionControl(MotionDetectionEffect motion) {
        this.motion = motion;
    }

    public Component getControlComponent() {
        if (component == null) {
            button = new JButton("Debug");
            button.addActionListener(this);
            button.setToolTipText("Click to turn debugging mode on/off");
            threshold = new JSlider(JSlider.HORIZONTAL,
                    0,
                    motion.THRESHOLD_MAX,
                    motion.THRESHOLD_INIT);
            threshold.setMajorTickSpacing(motion.THRESHOLD_INC);
            threshold.setPaintLabels(true);
            threshold.addChangeListener(this);
            Panel componentPanel = new Panel();
            componentPanel.setLayout(new BorderLayout());
            componentPanel.add("East", button);
            componentPanel.add("West", threshold);
            componentPanel.invalidate();
            component = componentPanel;
        }
        return component;
    }

    public void actionPerformed(ActionEvent e) {
        Object o = e.getSource();
        if (o == button)
            motion.debug = !motion.debug; // toggle the debug overlay
    }

    public void stateChanged(ChangeEvent e) {
        Object o = e.getSource();
        if (o == threshold)
            motion.blob_threshold = threshold.getValue() * 1000;
    }
}
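These JMF effects only run once they are inserted into a Processor's codec
chain. The listing below is a minimal wiring sketch, not part of the report's
source: the class name TimeStampEffect (for the time-stamping effect listed
earlier) and the vfw://0 capture locator are assumptions, and a real
application would wait on ControllerListener events instead of polling.

import javax.media.*;
import javax.media.control.TrackControl;
import javax.media.format.VideoFormat;

public class EffectWiring {
    public static void main(String[] args) throws Exception {
        // Assumed capture locator; any video DataSource would do.
        Processor p = Manager.createProcessor(new MediaLocator("vfw://0"));
        p.configure();
        while (p.getState() < Processor.Configured)
            Thread.sleep(50); // simplified state wait
        p.setContentDescriptor(null); // render locally instead of re-encoding
        for (TrackControl t : p.getTrackControls()) {
            if (t.getFormat() instanceof VideoFormat) {
                // Run motion detection first, then stamp the frame.
                t.setCodecChain(new Codec[] { new MotionDetectionEffect(),
                                              new TimeStampEffect() });
            }
        }
        p.realize();
        while (p.getState() < Processor.Realized)
            Thread.sleep(50);
        p.start();
    }
}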
SERVER SIDE CODING
servermanager.java:
import java.io.*;
import java.util.*;
import java.net.*;
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.sql.*;
import java.rmi.*;
import java.rmi.server.*;
import java.awt.Image;
import java.awt.image.BufferedImage;
import com.gif4j.ImageUtils;
import com.gif4j.GifEncoder;
import com.gif4j.GifDecoder;
import com.gif4j.GifImage;
import com.gif4j.GifTransformer;

public class servermanager extends UnicastRemoteObject implements remote {

    static int c = 0;
    Connection con, con1;
    Statement st, st1;
    int i, i1;
    ResultSet r1;

    public servermanager() throws RemoteException {
        try {
            System.out.println("server");
            Naming.rebind("rob", this);
        } catch (Exception ee) {
        }
    }

    // Called remotely by the client-side effect: stores the snapshot,
    // records its path in the database, and triggers the SMS alert.
    public String writeImageFile(byte[] img, String filen) throws RemoteException {
        String reply = "";
        if (c == 0)
            close(); // clear old image records on the first upload
        try {
            String filepath = "Images From Client/" + filen;
            String imagepath = "Server/Images From Client/" + filen;
            FileOutputStream fos = new FileOutputStream(filepath);
            fos.write(img);
            fos.flush();
            fos.close();
            System.out.println(" File created success ");
            SmsSend sms = new SmsSend(); // alert the owner by SMS
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            con1 = DriverManager.getConnection("jdbc:odbc:remds", "", "");
            st1 = con1.createStatement();
            System.out.println("statement established");
            System.out.println(filepath);
            System.out.println(imagepath);
            i = st1.executeUpdate("insert into T_image values('" + imagepath + "')");
            Image image = Toolkit.getDefaultToolkit().createImage(img);
            BufferedImage bufferedImage = ImageUtils.toBufferedImage(image);
            File fil = new File(filepath);
            // save image
            GifEncoder.encode(bufferedImage, fil);
            // resize the stored GIF to 160x120 for display on the server UI
            File gifImageFileToTransform = new File(filepath);
            GifImage gifImage = GifDecoder.decode(gifImageFileToTransform);
            GifImage resizeGifImage = GifTransformer.resize(gifImage, 160, 120, false);
            GifEncoder.encode(resizeGifImage, new File(filepath));
            c++;
        } catch (Exception ee) {
            System.out.println(" Exception server file " + ee);
        }
        return reply;
    }

    public void close() {
        try {
            System.out.println("close");
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            con = DriverManager.getConnection("jdbc:odbc:remds", "", "");
            st = con.createStatement();
            i1 = st.executeUpdate("delete * from T_image"); // Access-style DELETE
            System.out.println("del");
        } catch (Exception e) {
            System.out.println("error" + e.toString());
        }
    }

    public static void main(String tt[]) {
        try {
            servermanager s1 = new servermanager();
        } catch (Exception ee) {
        }
    }
}
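The remote interface that servermanager implements and the client-side effect
looks up is not reproduced in this report. The sketch below is inferred from
the calls made above rather than taken from the report's source:

import java.rmi.Remote;
import java.rmi.RemoteException;

// Inferred from remob.writeImageFile(img, name) on the client side and the
// implementation in servermanager above.
public interface remote extends Remote {
    String writeImageFile(byte[] img, String filen) throws RemoteException;
}

Note that Naming.rebind("rob", this) assumes an RMI registry is already
listening on the default port 1099, for example one started beforehand with
the rmiregistry tool or with LocateRegistry.createRegistry(1099).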
SmsSend.java:
import java.io.*;
import java.util.*;
import javax.comm.*;

public class SmsSend {

    static Enumeration portList;
    static CommPortIdentifier portId;
    static String messageString = "5";
    static SerialPort serialPort;
    static OutputStream outputStream;
    static boolean outputBufferEmptyFlag = false;

    public SmsSend() {
        boolean portFound = false;
        String defaultPort = "COM1";
        String driverName = "com.sun.comm.Win32Driver";
        try {
            CommDriver commdriver = (CommDriver) Class.forName(driverName).newInstance();
            commdriver.initialize();
        } catch (Exception e2) {
            e2.printStackTrace();
        }
        portList = CommPortIdentifier.getPortIdentifiers();
        while (portList.hasMoreElements()) {
            portId = (CommPortIdentifier) portList.nextElement();
            if (portId.getPortType() == CommPortIdentifier.PORT_SERIAL) {
                if (portId.getName().equals(defaultPort)) {
                    System.out.println("Found port " + defaultPort);
                    portFound = true;
                    try {
                        serialPort = (SerialPort) portId.open("SimpleWrite", 2000);
                    } catch (PortInUseException e) {
                        System.out.println("Port in use.");
                        continue;
                    }
                    try {
                        outputStream = serialPort.getOutputStream();
                    } catch (IOException e) {
                    }
                    try {
                        // 9600 baud, 8N1: typical settings for a GSM modem.
                        serialPort.setSerialPortParams(9600,
                                SerialPort.DATABITS_8,
                                SerialPort.STOPBITS_1,
                                SerialPort.PARITY_NONE);
                    } catch (UnsupportedCommOperationException e) {
                    }
                    try {
                        serialPort.notifyOnOutputEmpty(true);
                    } catch (Exception e) {
                        System.out.println("Error setting event notification");
                        System.out.println(e.toString());
                        System.exit(-1);
                    }
                    System.out.println("Writing \"" + messageString + "\" to "
                            + serialPort.getName());
                    try {
                        // AT+CMGS sends the SMS; Ctrl-Z (26) terminates the message text.
                        final char controlZ = 26;
                        outputStream.write(("AT+CMGS=\"8807791833\"\rIntruder Found"
                                + controlZ).getBytes());
                    } catch (IOException e) {
                    }
                    serialPort.close();
                }
            }
        }
        if (!portFound)
            System.out.println("port " + defaultPort + " not found.");
    }

    public static void main(String arg[]) {
        SmsSend sms = new SmsSend();
    }
}
3. CONCLUSION:
In this project, we presented a framework named DECOLOR to segment moving
objects from image sequences. It avoids complicated motion computation by
formulating the problem as outlier detection, and it uses low-rank modelling
to handle complex backgrounds. We established the link between DECOLOR and
PCP. Compared with PCP, DECOLOR uses a nonconvex penalty and MRFs for outlier
detection, which makes it more aggressive in detecting outlier regions that
are relatively dense and contiguous. Despite its satisfactory performance in
our experiments, DECOLOR also has some disadvantages. Since DECOLOR minimizes
a nonconvex energy via alternating optimization, it converges to a local
optimum whose result depends on the initialization of Ŝ, whereas PCP always
minimizes its energy globally. In all our experiments, we simply start from
Ŝ = 0. We have also tested other random initializations of Ŝ, and DECOLOR
generally converges to a satisfactory result, because the SOFT-IMPUTE step
outputs similar results for each randomly generated S as long as S is not
too dense.
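For reference, the energy that DECOLOR minimizes can be written compactly as
below. This is a sketch following the original DECOLOR formulation; the
weights α, β, γ and the node-edge incidence matrix A of the MRF smoothness
term follow that paper's notation and are not defined elsewhere in this
report:

    \min_{B,\, S_{ij} \in \{0,1\}} \;
    \frac{1}{2} \sum_{ij:\, S_{ij}=0} \left( D_{ij} - B_{ij} \right)^{2}
    + \alpha \, \operatorname{rank}(B)
    + \beta \sum_{ij} S_{ij}
    + \gamma \, \lVert A \, \operatorname{vec}(S) \rVert_{1}

Here D stacks the vectorized frames as columns, B is the low-rank background,
and S is the binary foreground support. Alternating optimization first
estimates B with S fixed, relaxing the rank penalty to a nuclear norm so that
the SOFT-IMPUTE update B ← Θ_α(P_{S=0}(D) + P_{S=1}(B)) applies, where Θ_α
soft-thresholds the singular values of its argument; it then estimates S with
B fixed, a first-order MRF energy that graph cuts solve exactly.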
3.1 FUTURE ENHANCEMENT:
Currently, DECOLOR works in a batch mode and is therefore not suitable for
real-time object detection. In the future, we plan to develop an online
version of DECOLOR that works incrementally; for example, the low-rank model
extracted from the beginning frames could be updated online as new frames
arrive. DECOLOR may also misclassify unmoved objects or large textureless
regions as background, since they are prone to entering the low-rank model.
To address these problems, incorporating additional models such as object
appearance or shape priors to improve the power of DECOLOR can be explored
in future work.