Feature detection

Jan 30, 2016

Transcript
Page 1: Feature detection

Feature detection

An Important Aspect of Image Processing

Hasna O. & Mentor: Dr. Man

Page 2: Feature detection

A picture is worth more than a thousand words

Image analysis (a.k.a. image understanding), image processing, and computer vision play an important role in society today because:

A picture gives a much clearer impression of a situation or an object.

Having an accurate visual perspective of things has high social, technical, and economic value.

Page 3: Feature detection

Digital image processing is used for:

Improving pictorial information for human perception

Processing of image data for storage, transmission, and representation for autonomous machine perception.

Page 4: Feature detection

Image Processing

Image processing performs numerical operations on higher-dimensional signals such as images and video sequences.

The objectives of image processing include:
– Improving the appearance of the visual data: image enhancement, image restoration
– Extracting useful information: image analysis, reconstruction from projections
– Representing the image in an alternate and possibly more efficient form: transformation, image compression

Page 5: Feature detection

Visible Human Project

The Visible Human Project was an effort of the National Library of Medicine (NLM) to build a digital image library of volumetric data representing a complete, normal adult male and female. The data sets were released in 1994 and 1995.

Page 6: Feature detection

Visible Human Project

Photo MRI CT

Page 7: Feature detection

What is Digital Image Processing?

Page 8: Feature detection

The field of digital image processing refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value; these elements are referred to as picture elements, image elements, pels, or pixels.

Digital image: a two-dimensional array of pixels.
– Size (or resolution) of an image: width N pixels, height M pixels.
– Precision of pixels: 2^n amplitude levels, i.e. n bits per pixel.
– Overall data file size: N × M × n bits.
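As a quick illustration of the N × M × n formula above, here is a minimal Python sketch (the function name is ours, chosen for illustration) that computes the raw size of an uncompressed image:

```python
def image_size_bits(width_n: int, height_m: int, bits_per_pixel_n: int) -> int:
    """Raw (uncompressed) storage for an N x M image with n bits per pixel."""
    return width_n * height_m * bits_per_pixel_n

# Example: a 512 x 512 true-color image (24 bits per pixel)
print(image_size_bits(512, 512, 24) // 8, "bytes")  # 786432 bytes
```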

Page 9: Feature detection

Image Example

LENA, 512×512, true color (24 b/p), 786,432 bytes.
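This is consistent with the N × M × n formula: 512 × 512 pixels × 3 bytes per pixel = 786,432 bytes.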

Page 10: Feature detection

Images refer to more than just the projections generated by the visible band of the electromagnetic (EM) spectrum apparent to humans. Imaging machines can capture images generated across nearly the entire EM spectrum, from gamma rays to radio waves. Other image sources include ultrasound, electron microscopy, and computer-generated images.

Page 11: Feature detection

Image Examples

Size: 2048 × 2560

Page 12: Feature detection

Graphic Example

Page 13: Feature detection

Video Example

AKIYO, 352x288 (CIF), 24 b/p, 30 f/s, 72.99 Mbits/s.
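This raw rate follows directly from the frame format: 352 × 288 pixels × 24 bits per pixel × 30 frames per second = 72,990,720 bits per second ≈ 72.99 Mbit/s.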

Page 14: Feature detection

Animation Example

Page 15: Feature detection

There are three computerized processing levels:

Low-level processes are characterized by the fact that both the inputs and outputs are images. They involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening.

Mid-level processes are characterized by the fact that their inputs generally are images, but their outputs are attributes extracted from those images, such as edges, contours, and the identity of individual objects. Mid-level processing involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects.

High-level processes involve making sense of an ensemble of recognized objects, from image analysis to performing the cognitive functions usually associated with vision.

Page 16: Feature detection

Origins of Digital Image Processing

Page 17: Feature detection

In the early 1920s the Bartlane cable picture transmission system was introduced, reducing the time needed to send a picture across the Atlantic from weeks to a few hours. Digital images were first applied in the newspaper industry, when pictures were first sent by submarine cable between London and New York. There were several phases of technological improvement: in 1921 the method of reproducing images from coded tape on a telegraph printer was abandoned in favor of a technique based on photographic reproduction made from tapes perforated at the telegraph receiving terminal, with evident improvement in both tonal quality and resolution.

Page 18: Feature detection

The history of digital image processing is intimately tied to the development of the digital computer. The first computers powerful enough to carry out meaningful image processing tasks appeared in the early 1960s. This was when there was significant development of the high-level programming languages COBOL (common business-oriented language) and FORTRAN (formula translator) and the development of operating systems. The birth of digital image processing can be traced to the availability of advanced computers and the onset of the space program during that period. Work on using computer techniques for improving images from a space probe began at the Jet Propulsion Laboratory in Pasadena, California in 1964 when pictures of the moon transmitted by RANGER 7 were processed by a computer to correct various types of image distortion inherent in the on-board television camera.

Page 19: Feature detection

Fields that use Digital Image Processing

In parallel with space applications, DIP is used in:

* Medical Imaging

* Earth Resources Observation

* Astronomy

Page 20: Feature detection

Medical Imaging (extracted from Dr. Man's Presentation)

Medical imaging is a rich, multidisciplinary field involving nuclear physics, quantum mechanics, fluid dynamics, advanced mathematics, biology, chemistry, computer science, and computer engineering.

It has become a primary component of modern medicine.

It is still a relatively new field with many unknown effects and unanswered questions.

The technologies are evolving, and new equipment, modalities, and study methodologies are constantly being developed.

There are excellent opportunities for research and career development.

Page 21: Feature detection

Medical Imaging

The invention of computerized axial tomography (CAT) in the early 1970s is one of the most important events in the application of image processing to medical diagnosis. Tomography consists of algorithms that use the sensed projection data to construct an image representing a slice through the object; a stack of such slices composes a three-dimensional (3-D) view of the inside of the object.
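To make the slice-reconstruction idea concrete, here is a minimal NumPy sketch of unfiltered backprojection (our own illustration, not the authors' code; practical CT uses filtered backprojection with a ramp filter). It assumes a hypothetical `sinogram` array indexed as `[angle, detector]`:

```python
import numpy as np

def backproject(sinogram: np.ndarray, angles_deg: np.ndarray) -> np.ndarray:
    """Unfiltered backprojection: smear each 1-D projection back across the
    image plane along its acquisition angle and sum the results."""
    n_angles, n_det = sinogram.shape
    size = n_det
    recon = np.zeros((size, size))
    # Pixel coordinates centered on the image
    coords = np.arange(size) - size / 2.0
    x, y = np.meshgrid(coords, coords)
    det_axis = np.arange(n_det) - n_det / 2.0
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        # Detector coordinate s = x*cos(theta) + y*sin(theta) for every pixel
        s = x * np.cos(theta) + y * np.sin(theta)
        # Interpolate the projection value for each pixel's s and accumulate
        recon += np.interp(s.ravel(), det_axis, sinogram[i]).reshape(s.shape)
    return recon / n_angles
```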

Page 22: Feature detection

X-ray CT

Page 23: Feature detection

X-ray CT (continued)

Tomography was invented independently by Sir Godfrey N. Hounsfield and Professor Allan M. Cormack, who shared the 1979 Nobel Prize in Medicine for their invention. X-rays were discovered in 1895 by Wilhelm Conrad Roentgen, who received the 1901 Nobel Prize in Physics. These two inventions, nearly 100 years apart, led to some of the most active application areas of image processing today.

Computer procedures are also used to enhance the contrast or to code the intensity levels into color for easier interpretation of X-rays and other images used in industry, medicine, and the biological sciences.

Page 24: Feature detection

Computed Tomography

Besides the natural images acquired from conventional optical cameras, computer-synthesized images are becoming more and more important in many application fields.

Non-invasive imaging modalities allow people to view objects that cannot be seen by the human eye or a camera:
– Internal organs of the human body,
– Damaged parts inside an airplane wing,
– A cloud-covered city,
– A dark night battlefield,
– An underground oil field…

Page 25: Feature detection

Projection X-Ray

Page 26: Feature detection

First X-ray

The hand of Mrs. Wilhelm Roentgen: the first X-ray image, 1895 (http://www.nlm.nih.gov/exhibition/dreamanatomy/da_g_Z-1.html)

Page 27: Feature detection

For Mammography

Detection and diagnosis of breast cancer is based in part on signs made visible by Image Processing:

Architectural distortions of normal tissue patterns

Asymmetry between corresponding regions of the images of the right and left breast.

Page 28: Feature detection

Mammography

Page 29: Feature detection

Remote Earth Resources and Observations

Geographers use the same or similar techniques to study pollution patterns from aerial and satellite imagery. Image enhancement and restoration procedures are used to process degraded images of unrecoverable objects or experimental results too expensive to duplicate. In archaeology, image processing methods have successfully restored blurred pictures that were the only available records of rare artifacts lost or damaged after being photographed.

Page 30: Feature detection

In physics and related fields, computer techniques routinely enhance images of experiments in areas such as high-energy plasmas and electron microscopy. Similarly successful applications of image processing concepts can be found in astronomy, biology, nuclear medicine, law enforcement, defense, and industry. These examples illustrate processing results intended for human interpretation.

The second major area of application of digital image processing techniques deals with machine perception. In this case, interest focuses on procedures for extracting from an image information in a form suitable for computer processing. Examples of the type of information used in machine perception are statistical moments, Fourier transform coefficients, and multidimensional distance measures. Typical problems in machine perception that routinely utilize image processing techniques are automatic character recognition, industrial machine vision for product assembly and inspection, military reconnaissance, automatic processing of fingerprints, screening of X-rays and blood samples, and machine processing of aerial and satellite imagery for weather prediction and environmental assessment.

Page 31: Feature detection

Fundamental Steps in DIP

Image acquisition – the first process; it may involve preprocessing such as scaling.

Image enhancement – bringing out obscured detail or highlighting certain features of interest in an image. This technique uses a number of mathematical tools such as the Fourier transform.

Image restoration – also improves the appearance of an image, but is objective in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation.

Color image processing – color is used as a basis for extracting features of interest in an image.

Page 32: Feature detection

Wavelets – the foundation for representing images at various degrees of resolution.

Compression – techniques for reducing the storage required to save an image, or the bandwidth required to transmit it.

Morphological processing – tools for extracting image components that are useful in the representation and description of shape.

Segmentation – partitions an image into its constituent parts or objects.

Representation and description – representation is necessary for transforming raw data into a form suitable for subsequent computer processing. Description, also known as feature selection, deals with extracting attributes that yield some quantitative information of interest.

Recognition – assigns a label to an object based on its descriptors.

Feature extraction – an area of image processing that uses algorithms to detect and isolate various desired portions of a digitized image or video stream.

Page 33: Feature detection

Image Enhancement

Page 34: Feature detection

Histogram Example

Original

Page 35: Feature detection

Histogram Example (cont.)

Poor contrast

Page 36: Feature detection

Histogram Example (cont.)

Poor contrast

Page 37: Feature detection

Histogram Example (cont.)

Enhanced contrast
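The kind of contrast enhancement shown in these histogram examples is commonly achieved with histogram equalization. A minimal NumPy sketch, assuming an 8-bit grayscale image in a NumPy array (our own illustration, not code from the slides):

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Histogram equalization for an 8-bit grayscale image: remap intensities
    so that their cumulative distribution becomes roughly uniform."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first nonzero value of the CDF
    # Map each original gray level to [0, 255] in proportion to its CDF
    # (assumes the image is not a single constant value)
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0), 0, 255)
    return lut.astype(np.uint8)[img]
```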

Page 38: Feature detection

Smoothing and Sharpening Examples

Smoothing Sharpening

Page 39: Feature detection

Smoothing and Sharpening Examples

Smoothing Sharpening
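Smoothing and sharpening of this kind are typically done by filtering the image with small kernels. A rough NumPy sketch (our own, assuming a grayscale image; the kernels shown are the common 3×3 box filter and a Laplacian-based sharpening kernel):

```python
import numpy as np

def filter2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 2-D filtering with zero padding (cross-correlation; identical to
    convolution for the symmetric kernels used here)."""
    kh, kw = kernel.shape
    padded = np.pad(img.astype(float), ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

smooth_kernel = np.ones((3, 3)) / 9.0             # box filter: averaging smooths
sharpen_kernel = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], float)  # emphasizes local differences
```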

Page 40: Feature detection

Image Analysis

Page 41: Feature detection

Image analysis aims to identify and extract useful information from an image or a video scene, typically with the ultimate goal of forming a decision.

Image analysis is the centerpiece of many applications such as remote sensing, robotic vision, and medical imaging.

Image analysis generally involves these basic operations:
– Pre-processing,
– Object representation,
– Feature detection,
– Classification and interpretation.

Page 42: Feature detection

Image Segmentation

Page 43: Feature detection

Image segmentation is an important pre-processing tool. It produces a binary representation of the object with features of interest such as shapes and edges.

Common operations include the following (sketched in code after this slide):

Thresholding: segmenting an object from its background through a simple decision based on pixel amplitude. More complicated thresholding methods may be used when the background is not homogeneous.

Edge detection: identifying the edges of an object through a set of high-pass filters. Directional filters and adaptive filters are frequently used to achieve reliable results.
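A minimal NumPy sketch of these two operations (our own illustration; the threshold rule and the Sobel kernels are standard choices, not taken from the slides):

```python
import numpy as np

def threshold(img: np.ndarray, t: float) -> np.ndarray:
    """Global thresholding: 1 where the pixel amplitude exceeds t, else 0."""
    return (img > t).astype(np.uint8)

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge map using the Sobel high-pass filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    img = img.astype(float)
    padded = np.pad(img, 1)
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(window * kx)
            gy[i, j] = np.sum(window * ky)
    return np.hypot(gx, gy)   # edge strength at each pixel
```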

Page 44: Feature detection

Segmentation Examples

Thresholding Edge detection

Page 45: Feature detection

Feature Extraction

This is an area of image processing that uses algorithms to detect and isolate various desired portions of a digitized image.

Page 46: Feature detection

What is a Feature?

A feature is a significant piece of information extracted from an image which provides a more detailed understanding of the image.

Page 47: Feature detection

Examples of Feature Detection

Detecting faces in an image filled with people and other objects

Detecting facial features such as eyes, nose, and mouth

Detecting edges, so that a feature can be extracted and compared with another

Page 48: Feature detection

Feature Detection and Classification

Page 49: Feature detection

Feature Detection and Classification

• Feature detection identifies the presence of a certain type of feature or object in an image.

• Feature detection is usually achieved by studying the statistical variations of certain regions and their backgrounds to locate unusual activity.

• Once an interesting feature has been detected, its representation is compared with all possible features known to the processor. A statistical classifier produces the feature type that has the closest similarity (or maximum likelihood) to the test feature.

• Data collection and analysis (the training process) have to be performed on the classifier before any classification, as sketched below.
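As a rough illustration of this train-then-classify idea, here is a minimal nearest-centroid sketch (the toy feature vectors, class names, and function names are ours, for illustration only; they are not the classifier used by the authors):

```python
import numpy as np

def train_centroids(features: np.ndarray, labels: list) -> dict:
    """Training: group the feature vectors by class and store each class mean."""
    labels = np.array(labels)
    return {c: features[labels == c].mean(axis=0) for c in set(labels.tolist())}

def classify(feature: np.ndarray, centroids: dict) -> str:
    """Classification: return the class whose centroid is closest (most similar)."""
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))

# Toy usage with 2-D feature vectors
X = np.array([[0.1, 0.2], [0.0, 0.3], [0.9, 0.8], [1.0, 0.7]])
y = ["background", "background", "face", "face"]
print(classify(np.array([0.95, 0.75]), train_centroids(X, y)))  # -> "face"
```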

Page 50: Feature detection

Feature Extraction Techniques

HOUGH TRANSFORM

Page 51: Feature detection

Each line is represented by two parameters, commonly called r and θ, which represent the length and angle, measured from the origin, of a normal to the line in question. In other words, the line is perpendicular to the direction θ and lies r units from the origin at its closest point. For a given image point (x, y), calculating r = x·cos θ + y·sin θ for every possible value of θ produces a sinusoidal curve that is unique to that point. This (r, θ) parameter plane is sometimes referred to as Hough space.

Page 52: Feature detection

Thus the points to be transformed are those likely to lie on an "edge" in the image. The transform itself is quantized into an arbitrary number of bins, each representing an approximate definition of a possible line. Each significant point (or feature) in the edge-detected image is said to vote for the set of bins corresponding to the lines that pass through it. By simply incrementing the value stored in each bin for every feature lying on that line, an accumulator array is built up that shows which lines fit most closely to the data in the image.
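A compact NumPy sketch of this voting scheme for straight lines (the function and parameter names are ours, for illustration):

```python
import numpy as np

def hough_lines(edges: np.ndarray, n_theta: int = 180):
    """Classical Hough transform for lines on a binary edge map.
    Every edge pixel votes for all (r, theta) bins of lines passing through it."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # Accumulator: rows index r in [-diag, diag], columns index theta
    accumulator = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # r = x*cos(theta) + y*sin(theta) traces a sinusoid for this point
        rs = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        accumulator[rs + diag, np.arange(n_theta)] += 1
    return accumulator, thetas, np.arange(-diag, diag + 1)

# The bins with the highest counts correspond to the most likely lines.
```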

Page 53: Feature detection

Hough Transform of Curves, and Generalized Hough Transform

The transform described above applies to finding straight lines. A circle, for instance, can be transformed into a set of three parameters representing its center and radius, so that the Hough space becomes three-dimensional. Arbitrary ellipses, curves, and shapes expressed as a set of parameters can be found this way. For more complicated shapes, the Generalized Hough transform is used, which allows a feature to vote for a particular position, orientation, and/or scaling of the shape using a predefined look-up table.

Page 54: Feature detection

Using Weighted Features

The Hough transform accounts for uncertainty in the underlying detection of edges by allowing features to vote with varying weight.

Page 55: Feature detection

Hierarchical Hough Transform

A final enhancement that is sometimes effective is to perform a hierarchical set of Hough transforms on the same image, using progressively smaller bins. If the image is first analyzed using a small number of bins, each representing a large range of potential lines, the most likely of these can then be analyzed in more detail. That is, finding the bins with the highest count in one stage can be used to constrain the range of values searched in the next, as in the sketch below.
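A coarse-to-fine sketch of that idea, reusing the `hough_lines` function from the earlier sketch (again our own illustration, refining over θ only):

```python
import numpy as np

def hierarchical_hough(edges: np.ndarray, coarse_bins: int = 18, fine_bins: int = 180):
    """Coarse pass with few theta bins, then a finer pass restricted to the
    neighborhood of the strongest coarse bin."""
    acc, thetas, _ = hough_lines(edges, n_theta=coarse_bins)
    best_theta = thetas[np.argmax(acc.max(axis=0))]     # theta of the strongest bin
    half_width = np.pi / coarse_bins                    # one coarse bin on either side
    fine_thetas = np.linspace(best_theta - half_width, best_theta + half_width, fine_bins)
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    acc_fine = np.zeros((2 * diag + 1, fine_bins), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rs = np.round(x * np.cos(fine_thetas) + y * np.sin(fine_thetas)).astype(int)
        acc_fine[rs + diag, np.arange(fine_bins)] += 1
    return acc_fine, fine_thetas
```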

Page 56: Feature detection

REFERENCES

The data in this documentation has been quoted, cited, and extracted from the following sources:

Introduction to Medical Imaging & Instrumentation – Hong Man

The Art of Image Processing (Operators and Applications) – Hong Man

Digital Image Processing – Rafael C. Gonzalez, Richard E. Woods

Wikipedia.org

Page 57: Feature detection

REFERENCES (cont'd)

Handbook of Image and Video Processing – Al Bovik

A New Approach to Image Feature Detection with Applications – B. S. Manjunath