  • Chapter 2

    Digital Image Fundamentals

  • 2.1 Human Visual System

    Purpose of study : Improvement of images for use by a human observer

    Physical Structure of the Eye

    Cornea : convex lens, refracting the rays

    Aqueous humor

    Iris : a variable aperture to control the amount of light

  • Lens : Controls the focal length to focus light on the retina

    Vitreous humor

    Retina : Composed of photoreceptors to convert the intensity and color of light to neural signals (~10^8 elements)

    Types of Photoreceptors

    Rods : respond to broad-spectrum light, used for low-light vision, and therefore cannot discriminate color.

    Cones : for day-light vision. Three different types of cones for color vision (trichromacy)

  • Distribution of rods and cones

    - Color perception is best for objects that we are viewing directly forward
    - Relative insensitivity of cones also accounts for our inability to perceive color under low-light conditions

  • Optic nerve and visual cortex

  • 2.2 Image Processing in the Eye

    Weber's law

  • - The difference in perceived brightness of the steps does not appear equal: the eye cannot detect the same intensity increments in the bright regions that it detects in the dark regions

  • Note) In DIP, simple darkening of bright regions can make undetectably minute intensity changes perceptible
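    A compact way to state this relationship (the standard formulation of Weber's law, spelled out here since the transcript only names it):

```latex
% Weber's law: the just-noticeable intensity increment \Delta I scales with
% the background intensity I, so equal ratios (not equal differences)
% look like equal brightness steps.
\frac{\Delta I_{\mathrm{JND}}}{I} \approx k
\qquad\Longrightarrow\qquad
\text{perceived brightness} \;\propto\; \log I
```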

    Lateral Inhibition

    Simultaneous Contrast

    The square on the left side appears brighter than the one on the right side

  • Mach band effect

    The visual system accentuates sharp intensity changes; the human eye is sensitive to changes of intensity (edges). Note : second-order system

    Frequency Response of HVS

  • Size of image : M x M

    f_max = (M/2) / θ  (cycles/degree)

    θ = 2 tan⁻¹(0.5x / (6H))  (degrees)

  • Sensitivity curve

    Other properties of HVS : Digital Picture Processing, Volume 1, Chapter 3, Azriel Rosenfeld and Avinash C. Kak

  • 2.3 Sampling and Quantization

    Uniform sampling and quantization

    Size of digital image : N x M, with N = 2^n, M = 2^k

    Gray levels : G = 2^m

    # of bits = N x M x m
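    A quick numeric check of this bookkeeping (a minimal sketch; the 512 x 512, m = 8 numbers are only an illustrative choice):

```python
# Storage needed for an N x M digital image with m bits per pixel.
def image_bits(N, M, m):
    """Total number of bits b = N * M * m."""
    return N * M * m

# Example: a 512 x 512 image with G = 2**8 = 256 gray levels.
N, M, m = 512, 512, 8
b = image_bits(N, M, m)
print(f"G = {2 ** m} gray levels, b = {b} bits = {b // 8} bytes")
```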

  • Effect of Image Resolution

  • Effect of Quantization Levels

  • Isopreference curves

    The quality of the images tends to increase as N and m are increased. Note : exceptional case - for a fixed N, the perceived quality is sometimes improved by decreasing m (higher apparent contrast).

  • The curves tend to become more vertical. Note : for images with a large amount of detail, only a few gray levels are needed

    The curves depart markedly from the curves of constant b = N^2 m

    Non-uniform Sampling / Quantization

    Fine sampling in the neighborhood of sharp gray-level transitions, and coarse sampling in relatively smooth regions. Quantization according to the sensitivity of the HVS (subband coding).

  • 2.4 Some Basic Relationships Between Pixels

    Neighbors of a pixel

    N8(p) = N4(p) ∪ ND(p)
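    A small sketch of these neighbor sets (the (x, y) coordinate convention and the lack of image-border checks are my own simplifications):

```python
def N4(p):
    """4-neighbors of pixel p = (x, y): the horizontal and vertical neighbors."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def ND(p):
    """Diagonal neighbors of p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def N8(p):
    """8-neighbors: N8(p) = N4(p) U ND(p)."""
    return N4(p) | ND(p)

print(sorted(N8((1, 1))))   # the 8 pixels surrounding (1, 1)
```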

  • Connectivity

    V = Set of gray-level values used to define connectivity

  • - m-connectivity : q is in N4(p), or q is in ND(p) and the set N4(p) ∩ N4(q) is empty

    A pixel p is adjacent to a pixel q if they are connected

    Path / length : (x0, y0), (x1, y1), …, (xn, yn)

  • Connected

    If p and q are pixels of an image subset S, then p is connected to q in S if there is a path from p to q consisting entirely of pixels in S.

    Connected component of S : for any pixel p in S, the set of pixels in S that are connected to p

  • Labeling of Connected Components

    - With 4-connected components (r and t denote the previously scanned upper and left-hand neighbors of p):

    If p = 0, move to the next scanning position. Otherwise (p = 1):

    if r = t = 0, then assign a new label;
    if only one of r, t is equal to 1, then assign its label;
    if r = t = 1 and they have the same label, then assign that label;
    if r = t = 1 and they have different labels, then assign one of the labels and make a note that they are equivalent;

    merge the equivalent labels (second pass).

  • - With 8-connected components:

    If p = 0, then move to the next position.
    If p = 1, then:
    if none of the neighbors is equal to 1, then assign a new label;
    if only one of the neighbors is equal to 1, then assign its label;
    if two or more neighbors are equal to 1 and they have different labels, then assign one of them and make a note that the labels are equivalent.

    Merge the equivalent labels
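    The two passes above can be sketched as follows (a minimal 4-connected version; the raster-scan order, the union-find bookkeeping for the equivalence notes, and treating value 1 as foreground are my assumptions):

```python
def label_4_connected(img):
    """Two-pass labeling of a binary image (list of lists of 0/1)."""
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = {}                      # union-find over label equivalences

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        parent[max(ra, rb)] = min(ra, rb)

    next_label = 1
    # First pass: provisional labels from the upper (r) and left (t) neighbors.
    for i in range(rows):
        for j in range(cols):
            if img[i][j] == 0:
                continue
            r = labels[i - 1][j] if i > 0 else 0
            t = labels[i][j - 1] if j > 0 else 0
            if r == 0 and t == 0:            # new component
                labels[i][j] = next_label
                parent[next_label] = next_label
                next_label += 1
            elif r and t:                    # both labeled: keep one, note equivalence
                labels[i][j] = min(r, t)
                union(r, t)
            else:                            # exactly one neighbor labeled
                labels[i][j] = r or t
    # Second pass: merge the equivalent labels.
    for i in range(rows):
        for j in range(cols):
            if labels[i][j]:
                labels[i][j] = find(labels[i][j])
    return labels

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
for row in label_4_connected(img):
    print(row)
```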

  • Relations, Equivalence, and Transitive Closure

    Binary relation R on a set A. Ex. R = the relation of being 4-connected

    Definitions : Equivalence relations
    (a) reflexive : for each a ∈ A, aRa
    (b) symmetric : for each a, b ∈ A, aRb ⇒ bRa
    (c) transitive : for a, b, c ∈ A, aRb and bRc ⇒ aRc

    Property : If R is an equivalence relation on a set A, then A can be divided into disjoint subsets, called equivalence classes.

  • Ex. Adjacency matrix

    Note : reflexive relation ⇔ every diagonal element = 1; symmetric relation ⇔ symmetric matrix

  • - Transitive closure of R : R+

    Ex. Note :
    1) bRd and dRb ⇒ bRb; dRb and bRd ⇒ dRd
    2) B+ = B + BB + BBB + … + B^n, with multiplication = AND and addition = OR
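    A sketch of the B+ computation on a Boolean adjacency matrix, with multiplication = AND and addition = OR exactly as in the note (the 4 x 4 example relation is made up to mirror the bRd / dRb remark):

```python
def bool_matmul(A, B):
    """Boolean matrix product: multiplication = AND, addition = OR."""
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bool_add(A, B):
    return [[A[i][j] or B[i][j] for j in range(len(A))] for i in range(len(A))]

def transitive_closure(B):
    """B+ = B + BB + BBB + ... + B^n."""
    n = len(B)
    closure, power = B, B
    for _ in range(n - 1):
        power = bool_matmul(power, B)
        closure = bool_add(closure, power)
    return closure

# Hypothetical relation: aRb, bRd, dRb  (elements a, b, c, d -> indices 0..3)
B = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
for row in transitive_closure(B):
    print([int(v) for v in row])     # note bRb and dRd appear in the closure
```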

  • Distance Measures

    D is a distance function (metric) if:
    (a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q
    (b) D(p, q) = D(q, p)
    (c) D(p, z) ≤ D(p, q) + D(q, z)

    Ex) Euclidean distance : De(p, q) = [(x − s)^2 + (y − t)^2]^(1/2)

  • The length of the shortest path = the distance, for D4 and D8; the length of a path under m-connectivity is not unique

    Ex) m-distance from p to p4 = 2, or m-distance from p to p4 = 3, depending on the values of the intermediate pixels
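    A sketch of the three pixel distances usually defined at this point: Euclidean De, city-block D4, and chessboard D8 (the D4/D8 names follow the path-length remark above; the example points are illustrative):

```python
import math

def D_e(p, q):
    """Euclidean distance between pixels p = (x, y) and q = (s, t)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def D_4(p, q):
    """City-block distance: length of the shortest 4-connected path."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def D_8(p, q):
    """Chessboard distance: length of the shortest 8-connected path."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (2, 2)
print(D_e(p, q), D_4(p, q), D_8(p, q))   # 2.828..., 4, 2
```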


  • Arithmetic and Logic Operations

    - Pixel-by-pixel operations : p + q, p − q, p * q, p / q; p AND q, p OR q

    Multiplication : pixel (gray-level image) × binary image

    Note : quantization of the gray-level image

  • Ex. Some Examples of Logic Operations on Binary Images

  • - Neighborhood-oriented operation : mask operation

    Computationally expensive : for a 512 x 512 image and a 3 x 3 mask,
    multiplications : 9 x 512 x 512 operations
    additions : 8 x 512 x 512 operations
    so parallel operation is needed.

    - Object-oriented operation

    Note : the complexity and the parallel architecture depend on the operation (look-up table / SIMD / MIMD)

    3 x 3 mask weights:
    w1 w2 w3
    w4 w5 w6
    w7 w8 w9
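    A sketch of the mask operation with the weights w1..w9 (pure Python, skipping the image border; per pixel this is exactly the 9-multiplication, 8-addition cost quoted above):

```python
def apply_3x3_mask(img, w):
    """Replace each interior pixel by the weighted sum of its 3x3 neighborhood.
    img: list of lists; w: flat list [w1, ..., w9] in row-major order."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]                 # border pixels left unchanged
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            s = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    s += w[(di + 1) * 3 + (dj + 1)] * img[i + di][j + dj]
            out[i][j] = s
    return out

# Example: 3 x 3 averaging mask, w1 = ... = w9 = 1/9.
avg = [1 / 9] * 9
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
print(apply_3x3_mask(img, avg)[1][1])   # 4 of the 9 neighbors are 9 -> 4.0
```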

  • Imaging Geometry

    Some basic transforms

    - Translation : v* = T v, in the world coordinate system (X, Y, Z)

  • Note : movement of an object point within the same coordinate system (Ex.), as opposed to a change of coordinate systems for an object point (Ex.)

  • - Scaling : by factors Sx, Sy, Sz along the X, Y, Z axes

    - Rotation : rotation about the z-axis

  • Rotation about the x-axis; rotation about the y-axis

    - Concatenation and inverse transforms: the combined transform is the product of the individual 4 x 4 matrices (see the sketch below)
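    A numpy sketch of the 4 x 4 homogeneous translation, scaling, and z-rotation matrices and of their concatenation into a single matrix (the particular offsets, factors, angle, and the counterclockwise sign convention are my assumptions):

```python
import numpy as np

def translation(x0, y0, z0):
    """4x4 homogeneous translation by (x0, y0, z0)."""
    T = np.eye(4)
    T[:3, 3] = [x0, y0, z0]
    return T

def scaling(sx, sy, sz):
    """4x4 homogeneous scaling by factors sx, sy, sz."""
    return np.diag([sx, sy, sz, 1.0])

def rotation_z(theta):
    """Counterclockwise rotation about the z-axis (one common sign convention)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[0, 0], R[0, 1] = c, -s
    R[1, 0], R[1, 1] = s, c
    return R

# Concatenation: translate, then rotate, then scale a homogeneous point v.
v = np.array([1.0, 0.0, 0.0, 1.0])
A = scaling(2, 2, 2) @ rotation_z(np.pi / 2) @ translation(0, 1, 0)
print(A @ v)        # transformed point v* = A v
```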

  • - Inverse transform

    Perspective transform

    [Figure: a world point (X, Y, Z) projected to an image point (x, y); image plane axes x, y aligned with the world axes X, Y, the optical axis along z, Z, and the lens center at the origin]

  • - Homogeneous coordinates
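    The equations themselves did not survive in the transcript; a sketch of the standard homogeneous-coordinate form (lens center at the origin, focal length λ, as developed in Gonzalez and Woods's text) is:

```latex
% Perspective transformation in homogeneous coordinates,
% with the lens center at the origin and focal length \lambda:
P \;=\;
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & -\tfrac{1}{\lambda} & 1
\end{pmatrix},
\qquad
c_h = P\, w_h
\;\Longrightarrow\;
x = \frac{\lambda X}{\lambda - Z},\qquad
y = \frac{\lambda Y}{\lambda - Z}
```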

  • - Inverse perspective transform

    Note : applied directly it gives a useless result; it should be written as follows.

  • Free variable

    Note : the inverse does not exist uniquely (a single image point corresponds to a whole line of world points), therefore we introduce the free variable z

  • Camera Model

    W : point in world coordinates (X, Y, Z); w : the same point in camera coordinates (x, y, z); c : image coordinates (x, y, 0)

    The world coordinate system must be brought into alignment with the camera coordinate system:
    - displacement of the gimbal center from the origin

    Fig 2.18 Imaging geometry with two coordinate systems

  • - Pan the x-axis with respect to the z-axis
    - Tilt the z-axis with respect to the x-axis
    - Displacement of the image plane with respect to the gimbal center

  • Example :

  • Camera Calibration

    Letting k = 1 in the homogeneous representation yields

  • Note : there are 12 unknown parameters in Eq. (1), and each point pair gives two equations of the form of Eq. (2). That means we need at least 6 world points and their 6 image points to solve for the 12 parameters.
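    Since Eq. (1) and Eq. (2) are not reproduced in the transcript, here is a sketch of one standard way to solve for the 12 parameters: stack two linear equations per correspondence and take a least-squares solution (the exact row layout is my assumption):

```python
import numpy as np

def calibrate(world_pts, image_pts):
    """Estimate the 3x4 camera matrix A (12 parameters, up to scale)
    from >= 6 correspondences (X, Y, Z) <-> (x, y)."""
    rows = []
    for (X, Y, Z), (x, y) in zip(world_pts, image_pts):
        # Each correspondence contributes two linear equations in the 12 unknowns.
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    M = np.array(rows, dtype=float)
    # Solve M a = 0 up to scale: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1].reshape(3, 4)

# Usage sketch (with at least 6 hypothetical correspondences):
# A = calibrate(world_pts, image_pts)
# x_h = A @ np.array([X, Y, Z, 1.0]); x, y = x_h[:2] / x_h[2]
```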

  • Stereo Imaging

    When the first camera coincides with the world coordinate system; when the second camera coincides with the world coordinate system.

    Fig 2.22 Top view of Fig 2.21 with the first camera brought into coincidence with the world coordinate system.

    Note :
    1. Correspondence problem, to find the depth map : area-based matching, feature-based matching
    2. Constraints to find the corresponding points : epipolar constraints, ordering constraints
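    The depth equation is also missing from the transcript; a sketch of the basic parallel-camera pinhole relation, depth from disparity, is below (sign and offset conventions vary between derivations, so treat this as illustrative):

```python
def depth_from_disparity(x1, x2, B, lam):
    """Parallel-camera pinhole relation: Z ~ lam * B / (x1 - x2).
    x1, x2: image x-coordinates of the same world point in the two cameras,
    B: baseline between the lens centers, lam: focal length."""
    d = x1 - x2                       # disparity
    if d == 0:
        return float("inf")           # zero disparity -> point at infinity
    return lam * B / d

# Illustrative numbers: lam = 50 mm, B = 120 mm, disparity = 2 mm.
print(depth_from_disparity(3.0, 1.0, B=120.0, lam=50.0))   # 3000.0 (mm)
```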

  • Photographic Film

    - Film structure and exposure:
    (1) Supercoat for protection
    (2) Emulsion layer of minute silver halide crystals
    (3) Substrate layer to promote adhesion of the emulsion to the film base
    (4) Film base made of cellulose triacetate
    (5) Backing layer to prevent curling

  • - Film characteristics

    Contrast

    Exposure : E = I T (energy per unit area), where I = incident intensity, T = duration of exposure

    Note : as the slope of the linear region of the H & D curve increases, the contrast of the film increases.

    Fig 2.24 A typical H & D curve.

  • Speed :

    Determines how much light is needed to produce a certain amount of silver on development. The lower the speed, the longer the film must be exposed to record a given image.

    Note : ASA scale. An ASA 200 film is twice as fast as (and for a given subject requires half as much exposure as) a film of ASA 100.

    ASA           Purpose
    ASA 80~160    General purpose outdoor and some indoor photography
    ASA 20~64     Fine grain film for maximum image definition
    ASA 200~600   High speed films for poor light and indoor photography
    ASA 650~      Ultra-speed films for very poor light

  • Graininess

    - Fast film (large ASA number) has large graininess.
    - Slow film (small ASA number) is preferable where fine detail is desired or where enlargement of the negatives is necessary.

    Resolving power

    - Depends on the graininess, on the light-scattering properties of the emulsion, and on the contrast with which the film reproduces fine details.

    Note : fine-grain films with thin emulsions yield the highest resolving power.

  • Diaphragm and Shutter Speed

    Diaphragm : f-number or stop number

    Note : (1) The f-number is inversely related to the amount of light admitted. (2) Each setting admits twice as much light as the next higher f-number (thus giving twice as much exposure).

    Shutter speed :

    Note : (1) The faster the shutter speed, the shorter the exposure time obtained. (2) For the same exposure, the f-number and the shutter speed can be traded against each other (see the sketch below).
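    A small numeric illustration of the two notes (the f-number sequence and base shutter time are illustrative): moving to the next higher f-number roughly halves the admitted light, so keeping E = I T constant requires doubling the exposure time.

```python
# Standard full-stop f-numbers; admitted light is proportional to 1 / N**2.
f_stops = [2.8, 4, 5.6, 8, 11, 16]
base_time = 1 / 250                          # shutter time at f/2.8 (illustrative)

for i, N in enumerate(f_stops):
    relative_light = (f_stops[0] / N) ** 2   # roughly halves at each successive stop
    shutter = base_time * 2 ** i             # double the time to keep E = I * T constant
    print(f"f/{N:<4} relative light {relative_light:5.2f}  shutter {shutter:.5f} s")
```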

  • Field of View

    Ex. FOV

  • Depth of Field

    [Figure: object, lens, and image plane; object at distance -z from the lens, image plane at the camera-constant distance from the optical origin, focal length = f, DOF marked along the object distance]

    Lens equation : relates the focal length f, the object distance, and the camera constant (the distance from the optical origin to the image plane)

  • i)

    ii)

    iii) If an object is focused at distance (-z), then light from objects at other distances may not be focused at the image plane; it will sensitize several pixels. DOF is defined as the margin of distance in (-z) that keeps the sensitized region within one CCD element.

    Note :
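    A sketch of the thin-lens bookkeeping behind iii) (using the common form 1/f = 1/z_img + 1/z_obj with positive distances; the focal length and object distances are illustrative, and a full DOF number would additionally need the CCD element size):

```python
def image_distance(f, z_obj):
    """Thin-lens equation 1/f = 1/z_img + 1/z_obj  ->  z_img = f * z_obj / (z_obj - f).
    Distances are positive magnitudes on either side of the lens."""
    return f * z_obj / (z_obj - f)

f = 50.0                                   # focal length in mm (illustrative)
for z_obj in (1000.0, 2000.0, 5000.0):
    print(f"object at {z_obj:6.0f} mm -> in focus at {image_distance(f, z_obj):6.2f} mm behind the lens")

# Objects at other distances focus slightly in front of or behind the fixed image
# plane and therefore spread over several CCD elements; the DOF is the range of
# object distances for which that spread stays within one element.
```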

  • View Volume

    Homework : 2.4, 2.5, 2.10, 2.13, 2.16, 2.17

    Due :