Jan 17, 2016
SWE 423: Multimedia Systems
Chapter 4: Graphics and Images (4)
Image Segmentation
- Assigning a unique number to object pixels based on different intensities or colors in the foreground and background regions of an image.
- Can be used in the object recognition process, but it is not object recognition on its own.
- Segmentation methods:
  - Pixel-oriented methods
  - Edge-oriented methods
  - Region-oriented methods
Pixel-Oriented Segmentation
- Gray values of pixels are studied in isolation.
- Looks at the gray-level histogram of the image and finds one or more thresholds in it.
- Ideally, the histogram is bimodal, with an empty region between the two modes; the threshold is placed in that region, dividing the image into foreground and background.
- Major drawbacks: object and background histograms usually overlap, and bimodal distributions rarely occur in nature.
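One standard way to pick the threshold automatically from the histogram is Otsu's method (not named on the slide, but the classic pixel-oriented technique); a minimal NumPy sketch:

```python
import numpy as np

def otsu_threshold(image):
    """Pick the threshold separating a (roughly bimodal) gray-level
    histogram into background and foreground by maximizing the
    between-class variance (Otsu's method)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()    # class weights
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0   # class means
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t  # pixels < best_t: background, >= best_t: foreground

# Example: synthetic two-level image (background 50, foreground 200)
img = np.zeros((20, 20), dtype=np.uint8)
img[:, :10] = 50
img[:, 10:] = 200
t = otsu_threshold(img)
```

On a truly bimodal image like this one, the threshold lands between the two modes; on natural images with overlapping histograms it is only a compromise, which is the drawback the slide points out.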
Edge-Oriented Segmentation
- Segmentation is carried out as follows:
  - Edges of the image are extracted (e.g., using the Canny operator).
  - Edges are connected to form closed contours around the objects.
- Hough Transform:
  - Usually very expensive computationally.
  - Works well with regular curves (applications in manufactured parts).
  - Can work in the presence of noise.
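The voting step of the Hough transform for straight lines can be sketched as follows; this is a minimal NumPy version for illustration (production systems use optimized implementations):

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180):
    """Accumulate votes in (rho, theta) space for a set of edge pixels,
    using the line parameterization rho = x*cos(theta) + y*sin(theta).
    Each edge point votes for every line that could pass through it."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))           # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for y, x in edge_points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # one vote per theta bin
    return acc, thetas, diag

# Example: 20 edge points on the horizontal line y = 5
pts = [(5, x) for x in range(20)]
acc, thetas, diag = hough_lines(pts, (20, 20))
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
# peak lands near theta = pi/2 (a horizontal line) with rho = 5
```

The accumulator makes the cost visible: every edge point votes in every theta bin, which is why the method is expensive, but because votes from collinear points pile up in one cell, a few noisy points cannot erase the peak.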
Region-Oriented Segmentation
- A major disadvantage of the previous approaches is that they ignore the spatial relationships between pixels, yet neighboring pixels normally have similar properties.
- The segmentation (region growing) is carried out as follows:
  - Start with a seed pixel.
  - A neighboring pixel is included if it satisfies a homogeneity condition (i.e., it is sufficiently similar to the region grown so far); otherwise it is excluded.
  - Uses an eight-neighborhood (8-nbd) model.
Region-Oriented Segmentation
- Homogeneity criterion: the gray-level mean value of the region is usually used, together with its standard deviation (e.g., a pixel is added only if its gray value lies within a chosen multiple of the standard deviation from the region mean).
- Drawback: computationally expensive.
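The region-growing procedure above, with an 8-neighborhood and a mean-based homogeneity test, can be sketched as follows (the fixed tolerance against the running region mean is one possible homogeneity condition, chosen here for simplicity):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 8-connected neighbours whose
    gray value stays within `tol` of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    frontier = deque([seed])
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]       # 8-neighborhood
    while frontier:
        y, x = frontier.popleft()
        for dy, dx in nbrs:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                # homogeneity condition: close to the region mean
                if abs(float(image[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(image[ny, nx])
                    count += 1
                    frontier.append((ny, nx))
    return mask

# Example: a 4x4 bright square on a dark background
img = np.zeros((10, 10), dtype=np.uint8)
img[2:6, 2:6] = 100
mask = region_grow(img, (3, 3), tol=10)
```

Growing stops exactly at the object boundary because the background pixels fail the homogeneity test; the repeated neighbor checks over every accepted pixel are where the computational expense comes from.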
Water Inflow Segmentation
- Gradually fill a gray-level image with water, taking the gray level of each pixel as its height.
- The higher the water rises, the more pixels are flooded.
- The image is thus divided into "lands" and "waters"; the lands correspond to objects.
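A simplified sketch of the land/water idea: flood every pixel at or below a chosen water level and label the remaining 4-connected "lands" as objects. (A full watershed algorithm raises the level gradually and tracks basins as they merge; this sketch shows a single level only.)

```python
import numpy as np
from collections import deque

def land_components(height, water_level):
    """Flood all pixels at or below `water_level`; label the remaining
    4-connected 'lands' - these correspond to objects."""
    land = height > water_level
    labels = np.zeros(height.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(land)):
        if labels[y, x]:
            continue                       # already part of a labeled land
        current += 1
        labels[y, x] = current
        queue = deque([(y, x)])
        while queue:                       # BFS over one connected land
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                           (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < land.shape[0] and 0 <= nx < land.shape[1]
                        and land[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

# Example: two separate "hills" of height 10 on flat ground
height = np.zeros((8, 8))
height[1:3, 1:3] = 10
height[5:7, 5:7] = 10
labels, n = land_components(height, water_level=5)
```

With the water at level 5, the flat ground is flooded and the two hills survive as two distinct lands, i.e. two objects.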
Object Recognition Layer
- Features are analyzed to recognize objects and faces in an image database.
- Features are matched against object models stored in a knowledge base; each template is inspected to find the closest match.
- Exact matches are usually impossible, and matching is generally computationally expensive.
- Occlusion of objects and the presence of spurious features in the image can further diminish the success of matching strategies.
Template Matching Techniques
- Fixed template matching: useful if object shapes do not change with the viewing angle of the camera.
- Deformable template matching: more suitable for cases where objects in the database may vary due to rigid and non-rigid deformations.
Fixed Template Matching
- Image subtraction: the difference in intensity levels between the image and the template is used for recognition. Performs well in restricted environments where imaging conditions (such as image intensity) are the same for the image and the template.
- Matching by correlation: uses the position of the normalized cross-correlation peak between template and image. Generally insensitive to noise and illumination effects in the image, but suffers from high computational complexity caused by summations over the entire template at every position.
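Matching by correlation can be sketched as an exhaustive search for the zero-mean normalized cross-correlation peak; the nested loops make the computational cost mentioned above explicit:

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image` and return the position and value
    of the zero-mean normalized cross-correlation (NCC) peak."""
    th, tw = template.shape
    t = template.astype(float) - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    H, W = image.shape
    for y in range(H - th + 1):            # every window position ...
        for x in range(W - tw + 1):
            win = image[y:y + th, x:x + tw].astype(float)
            w = win - win.mean()
            denom = t_norm * np.sqrt((w ** 2).sum())
            if denom == 0:
                continue                   # flat window: NCC undefined
            score = (w * t).sum() / denom  # NCC, in [-1, 1]
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# Example: plant a distinctive 3x3 patch and search for it
img = np.zeros((20, 20))
patch = np.arange(9, dtype=float).reshape(3, 3)
img[5:8, 7:10] = patch
pos, score = match_template(img, patch)
```

Subtracting the means before correlating is what buys the robustness to global illumination shifts; the double loop over all positions, each summing over the whole template, is the cost the slide warns about.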
Deformable Template Matching
- The template is represented as a bitmap describing the characteristic contour/edges of an object shape.
- An objective function is formulated over transformation parameters that alter the shape of the template, reflecting the cost of such transformations.
- The objective function is minimized by iteratively updating the transformation parameters to best match the object.
- Applications include handwritten character recognition and motion detection of objects in video frames.
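A toy version of the idea, where the "deformation" is restricted to scale plus translation and the iterative minimization is replaced by a grid search over the transformation parameters; the objective combines the data mismatch with a deformation cost, as described above (the weight 10.0 on the deformation term is an arbitrary choice for illustration):

```python
import numpy as np

def fit_template(edge_points, template_pts,
                 scales=(0.8, 1.0, 1.2), shifts=range(-3, 4)):
    """Transform a contour template (scale + translation only) and pick
    the parameters minimizing: data mismatch + deformation cost."""
    edges = np.asarray(edge_points, dtype=float)
    best_cost, best_params = np.inf, None
    for s in scales:
        for dy in shifts:
            for dx in shifts:
                pts = np.asarray(template_pts) * s + np.array([dy, dx])
                # mismatch: distance from each template point to the
                # nearest observed edge point
                d = np.sqrt(((pts[:, None, :] - edges[None, :, :]) ** 2).sum(-1))
                mismatch = d.min(axis=1).sum()
                deform_cost = 10.0 * abs(s - 1.0)   # penalize deformation
                cost = mismatch + deform_cost
                if cost < best_cost:
                    best_cost, best_params = cost, (s, dy, dx)
    return best_params, best_cost

# Example: a square template, observed shifted by (2, 1)
square = [(0, 0), (0, 4), (4, 0), (4, 4)]
edges = [(2, 1), (2, 5), (6, 1), (6, 5)]
params, cost = fit_template(edges, square)
```

Real deformable matchers use far richer deformation models and gradient-based updates instead of a grid, but the structure of the objective (fit term plus deformation penalty) is the same.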
Prototype System: KMeD
- Only medical objects belonging to patients in a small age group are identified automatically in KMeD.
- Such objects have high contrast with respect to their background and have relatively simple shapes, large sizes, and little or no overlap with other objects.
- Otherwise, KMeD resorts to a human-assisted object recognition process.
Demo
http://www.cs.washington.edu/research/imagedatabase/demo/cars/ (check car214)
Spatial Modeling and Knowledge Representation Layer (1)
- Maintains the domain knowledge for representing spatial semantics associated with image databases.
- At this level, queries are generally descriptive in nature and focus mostly on the semantics and concepts present in image databases.
- Semantics at this level are based on "spatial events" describing the relative locations of multiple objects.
- An example is a range query involving spatial concepts such as "close by", "in the vicinity", or "larger than" (e.g., retrieve all images that contain a large tumor in the brain).
Spatial Modeling and Knowledge Representation Layer (2)
- Identifies spatial relationships among objects once they have been recognized and marked by the lower layer using bounding boxes or volumes.
- Several techniques have been proposed to formally represent spatial knowledge at this layer:
  - Semantic networks
  - Mathematical logic
  - Constraints
  - Inclusion hierarchies
  - Frames
Semantic Networks
- First introduced to represent the meanings of English sentences in terms of words and the relationships between them.
- Semantic networks are graphs whose nodes represent concepts, linked by arcs representing relationships between those concepts.
- Efficiency is gained by representing each concept or object once and using pointers for cross-references, rather than naming an object explicitly every time it is involved in a relation.
- Example: Type Abstraction Hierarchies (KMeD).
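The pointer-based sharing of concepts can be illustrated with a tiny node-and-arc structure (a hypothetical sketch for this slide; KMeD's actual representation differs):

```python
# Minimal semantic network: each concept is a single Node object, and
# relations are arcs holding references to (not copies of) other nodes.
class Node:
    def __init__(self, name):
        self.name = name
        self.arcs = []                 # outgoing (relation, target) pairs

    def relate(self, relation, target):
        self.arcs.append((relation, target))

brain = Node("brain")
lesion = Node("lesion")
lesion.relate("inside", brain)         # spatial relationship as an arc
lesion.relate("is_a", Node("abnormality"))
# Every relation involving "brain" points at the same node object,
# so the concept is represented exactly once.
```

Because arcs store references, any number of relations can involve `brain` without duplicating the concept, which is the efficiency argument on the slide.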
Brain Lesions Representation
Constraints
- Domain knowledge is represented using a set of constraints in conjunction with formal expressions such as predicate calculus or graphs.
- A constraint is a relationship between two or more objects that needs to be satisfied.
Example: The PICTION System
- Its architecture consists of a natural language processing (NLP) module, an image understanding (IU) module, and a control module.
- The NLP module derives a set of constraints from the picture captions. These constraints (called "visual semantics" by the author) are combined with the faces recognized in the picture by the IU module to identify the spatial relationships among people.
- The control module maintains the constraints generated by the NLP module and acts as a knowledge base for the IU module when performing face recognition.
Mathematical Logic
- Iconic indexing by 2D strings: uses projections of salient objects onto a coordinate system.
  - These projections are expressed in the form of 2D strings that form a partial ordering of object projections in 2D.
  - For query processing, 2D subsequence matching is performed to allow similarity-based retrieval.
- Binary spatial relations: uses Allen's 13 temporal interval relations to represent spatial relationships.
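Building the two projection strings from object centroids can be sketched as follows (the scene objects and coordinates are invented for illustration):

```python
def two_d_string(objects):
    """Build a 2D-string representation: object names ordered by their
    x-projection and by their y-projection (iconic indexing)."""
    by_x = [n for n, (x, y) in sorted(objects.items(), key=lambda o: o[1][0])]
    by_y = [n for n, (x, y) in sorted(objects.items(), key=lambda o: o[1][1])]
    return " < ".join(by_x), " < ".join(by_y)

# A scene with three objects and their (x, y) centroids
scene = {"tree": (1, 3), "house": (4, 2), "sun": (6, 8)}
u, v = two_d_string(scene)
# u: "tree < house < sun"   (ordering along x)
# v: "house < tree < sun"   (ordering along y)
```

A query such as "a tree to the left of a house" then reduces to checking whether "tree < house" appears as a subsequence of the x-string, which is what makes similarity-based retrieval cheap.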
Inclusion Hierarchies
- The approach is object-oriented and uses concept classes and attributes to represent domain knowledge.
- These concepts may represent image features, high-level semantics, semantic operators, and conditions.
Frames
- A frame usually consists of a name and a list of attribute-value pairs; a frame can be associated with a class of objects or with a class of concepts.
- Frame abstractions allow encapsulation of file names, features, and relevant attributes of image objects.
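A frame as a name plus attribute-value pairs can be illustrated with a small hypothetical example (the frame name, file path, and feature values here are all invented):

```python
# A frame: a name plus attribute-value pairs, encapsulating the file
# name, features, and relevant attributes of one image object.
def make_frame(name, **slots):
    return {"name": name, **slots}

xray = make_frame(
    "chest_xray_0042",                     # hypothetical frame name
    file="img/xr0042.png",                 # hypothetical file path
    modality="x-ray",
    features={"mean_gray": 112, "edge_density": 0.31},
)
```

Keeping the file name and the extracted features in one frame is what lets higher layers answer descriptive queries without reopening the image itself.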