Computer Graphics & Image Processing
Lecture 3: Image Enhancement in Spatial Domain
Dec 28, 2015

ALI JAVED, Lecturer
Software Engineering Department, U.E.T Taxila
Email: [email protected]
Room #: 7
3
Image Enhancement
Process an image to make the result more suitable than the original image for a specific application
– Image enhancement is subjective (problem/application oriented)
Image enhancement methods:
– Spatial domain: direct manipulation of pixels in an image (on the image plane)
– Frequency domain: processing the image by modifying the Fourier transform of the image
Many techniques are based on various combinations of methods from these two categories
4
Image Enhancement
5
Basic Concepts
Spatial domain enhancement methods can be generalized as
g(x,y) = T[f(x,y)]
f(x,y): input image
g(x,y): processed (output) image
T[·]: an operator on f (or a set of input images), defined over a neighborhood of (x,y)
Neighborhood of (x,y): a square or rectangular sub-image area centered at (x,y)
6
Basic Concepts
7
Basic Concepts
g(x,y) = T[f(x,y)]
Pixel/point operation:
– Neighborhood of size 1×1: g depends only on f at (x,y)
– T: a gray-level/intensity transformation/mapping function
– Let r = f(x,y) and s = g(x,y) denote the gray levels of f and g at (x,y); then s = T(r)
Local operations:
– g depends on a predefined number of neighbors of f at (x,y)
– Implemented using mask processing or filtering
– Masks (filters, windows, kernels, templates): a small (e.g. 3×3) 2-D array in which the values of the coefficients determine the nature of the process
8
3 basic gray-level transformation functions
• Linear functions – negative and identity transformations
• Logarithmic functions – log and inverse-log transformations
• Power-law functions – nth power and nth root transformations
9
Identity Function
• Output intensities are identical to input intensities.
• Included in the graph only for completeness
10
Image Negatives
Reverses the gray-level order. For L gray levels the transformation function is
s = T(r) = (L - 1) - r
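A minimal sketch of this transformation, assuming NumPy and 8-bit images (the function name `negative` is illustrative):

```python
import numpy as np

def negative(img, L=256):
    """Image negative: s = (L - 1) - r, reversing the gray-level order."""
    return (L - 1) - img

# dark pixels become light and vice versa
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
out = negative(img)   # [[255, 191], [127, 0]]
```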
11
Log Transformations
Transformation function: s = c·log(1 + r)
12
Log Transformations
Properties of log transformations
– For lower amplitudes of the input image, the range of gray levels is expanded
– For higher amplitudes of the input image, the range of gray levels is compressed
Application:
– Suitable when the dynamic range of a processed image far exceeds the capability of the display device (e.g. display of the Fourier spectrum of an image)
– Also called “dynamic-range compression/expansion”
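A sketch of the log transformation, assuming NumPy; choosing c so the output spans the full range is an assumption added here, not part of the slide's formula:

```python
import numpy as np

def log_transform(img, L=256):
    """s = c * log(1 + r); c maps the maximum input L-1 to the maximum output L-1."""
    c = (L - 1) / np.log(L)
    return c * np.log1p(img.astype(np.float64))

r = np.array([0, 10, 100, 255])
s = log_transform(r)   # low values are expanded, high values compressed
```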
13
Log Transformations
14
Inverse Log Transformations
• Does the opposite of the log transformation
• Used to expand the values of bright pixels in an image while compressing the darker-level values
15
Power-Law Transformation
16
Power-Law Transformation
The power-law transformation has the form s = c·r^γ
For γ < 1: expands values of dark pixels, compresses values of brighter pixels
For γ > 1: compresses values of dark pixels, expands values of brighter pixels
If γ = 1 and c = 1: the identity transformation (s = r)
A variety of devices (image capture, printing, display) respond according to a power law and need to be corrected
Gamma (γ) correction: the process used to correct this power-law response phenomenon
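The power-law transformation can be sketched as follows (NumPy assumed; `power_law` is an illustrative name, and intensities are normalized to [0, 1] before applying the power):

```python
import numpy as np

def power_law(img, gamma, c=1.0, L=256):
    """s = c * r**gamma, computed on intensities normalized to [0, 1]."""
    r = img.astype(np.float64) / (L - 1)
    return np.clip(c * r ** gamma * (L - 1), 0, L - 1)

mid = np.array([128])
bright = power_law(mid, 0.4)   # gamma < 1 expands dark values (output > 128)
dark = power_law(mid, 2.5)     # gamma > 1 compresses dark values (output < 128)
```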
17
Power-Law Transformation
18
Gamma correction
• Cathode ray tube (CRT) devices have an intensity-to-voltage response that is a power function, with γ varying from 1.8 to 2.5
• Without correction, the displayed picture becomes darker than intended
• Gamma correction is done by preprocessing the image with s = c·r^(1/γ) before inputting it to the monitor
• Example: for a monitor with γ = 2.5, the correction exponent is 1/2.5 = 0.4
19
Power-Law Transformation: Example
20
Power-Law Transformation: Example
21
Piecewise-Linear Transformation
Contrast Stretching
Goal:
– Increase the dynamic range of the gray levels in low-contrast images
Low-contrast images can result from
– poor illumination
– lack of dynamic range in the imaging sensor
– wrong setting of a lens aperture during image acquisition
22
Contrast Stretching Example
23
Piecewise-Linear Transformation: Gray-level slicing
• Highlights a specific range of gray levels in an image
– Display a high value for all gray levels in the range of interest and a low value for all other gray levels
• (a) transformation highlights range [A, B] of gray levels and reduces all others to a constant level
• (b) transformation highlights range [A, B] but preserves all other levels
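Both slicing variants can be sketched as one function (NumPy assumed; the name `gray_slice` and the high/low output levels are illustrative):

```python
import numpy as np

def gray_slice(img, a, b, high=255, low=0, preserve=False):
    """Highlight gray levels in [a, b].
    preserve=False: variant (a) - all other levels reduced to `low`.
    preserve=True:  variant (b) - all other levels kept unchanged."""
    mask = (img >= a) & (img <= b)
    out = img.copy() if preserve else np.full_like(img, low)
    out[mask] = high
    return out

img = np.array([10, 120, 200, 250])
va = gray_slice(img, 100, 220)                  # [0, 255, 255, 0]
vb = gray_slice(img, 100, 220, preserve=True)   # [10, 255, 255, 250]
```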
24
Piecewise-Linear Transformation: Bit Plane slicing
• Highlighting the contribution made to total image appearance by specific bits
• Suppose each pixel is represented by 8 bits
• Higher-order bits contain the majority of the visually significant data
• Useful for analyzing the relative importance of each bit in the image
(Figure: one 8-bit byte, from bit-plane 7, the most significant, down to bit-plane 0, the least significant.)
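Extracting a single bit plane is a shift and mask (NumPy assumed; `bit_plane` is an illustrative name):

```python
import numpy as np

def bit_plane(img, k):
    """Return bit plane k of an 8-bit image (0 = least significant, 7 = most)."""
    return (img >> k) & 1

img = np.array([[131]], dtype=np.uint8)  # 131 = 1000 0011 in binary
planes = [int(bit_plane(img, k)[0, 0]) for k in range(8)]  # [1, 1, 0, 0, 0, 0, 0, 1]
```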
25
8 bit planes
(Figure: the eight bit-planes of an image, from bit-plane 7 down to bit-plane 0.)
26
Histograms
04/19/23 27
Example Histogram
28
Example Histogram
29
Histogram Examples
30
Contrast Stretching through Histogram
If r_max and r_min are the maximum and minimum gray levels of the input image and L is the total number of gray levels of the output image, the transformation function for contrast stretching is
s = (r - r_min) · (L - 1) / (r_max - r_min)
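This stretching can be sketched as (NumPy assumed; `stretch` is an illustrative name):

```python
import numpy as np

def stretch(img, L=256):
    """Contrast stretching: s = (r - r_min) * (L - 1) / (r_max - r_min)."""
    r = img.astype(np.float64)
    return (r - r.min()) * (L - 1) / (r.max() - r.min())

img = np.array([52, 100, 154])
s = stretch(img)   # [0.0, 120.0, 255.0] - min maps to 0, max to 255
```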
31
Histogram Equalization
32
Histogram Equalization
33
Histogram Equalization
34
Histogram Equalization
• Spreading out the frequencies in an image (or equalising the image) is a simple way to improve dark or washed-out images
• The formula for histogram equalisation is
  s_k = T(r_k) = Σ_{j=1..k} p_r(r_j) = Σ_{j=1..k} n_j / n
where
– r_k: input intensity
– s_k: processed intensity
– k: the intensity level index (intensities in the range e.g. 0.0 – 1.0)
– n_j: the frequency of intensity j
– n: the sum of all frequencies
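The formula maps each input level through the cumulative histogram; a sketch assuming NumPy and 8-bit intensities (rather than the normalized 0.0–1.0 range):

```python
import numpy as np

def equalize(img, L=256):
    """s_k = (L - 1) * sum_{j<=k} n_j / n, applied as a lookup table."""
    hist = np.bincount(img.ravel(), minlength=L)   # n_j
    cdf = np.cumsum(hist) / img.size               # cumulative n_j / n
    mapping = np.round((L - 1) * cdf).astype(np.uint8)
    return mapping[img]

# a low-contrast image confined to [52, 154] is spread toward the full range
img = np.arange(52, 155, dtype=np.uint8).reshape(1, -1)
out = equalize(img)
```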
35
Histogram Equalization: Example
36
Notice that the minimum value (52) is now 0 and the maximum value (154) is now 255.
Initial Image Image After Equalization
Histogram Equalization: Example
37
Histogram Equalization: Example
38
Histogram Equalization: Example
4×4 image (gray scale = [0, 9]):
2 3 3 2
4 2 4 3
3 2 3 5
2 4 2 4
Histogram (no. of pixels vs. gray level): levels 2, 3, 4 and 5 occur 6, 5, 4 and 1 times; all other levels occur 0 times.
39
Histogram Equalization: Example
Gray level (j):       0    1    2     3      4      5      6      7      8      9
No. of pixels (n_j):  0    0    6     5      4      1      0      0      0      0
Running sum of n_j:   0    0    6     11     15     16     16     16     16     16
s = Σ n_j / n:        0    0    6/16  11/16  15/16  16/16  16/16  16/16  16/16  16/16
s × 9 (rounded):      0    0    3     6      8      9      9      9      9      9

(3.33 rounds to 3, 6.19 to 6, 8.44 to 8.)
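The worked example above can be reproduced directly (NumPy assumed):

```python
import numpy as np

# 4x4 input image from the example, gray scale [0, 9]
img = np.array([[2, 3, 3, 2],
                [4, 2, 4, 3],
                [3, 2, 3, 5],
                [2, 4, 2, 4]])
L = 10
hist = np.bincount(img.ravel(), minlength=L)   # [0, 0, 6, 5, 4, 1, 0, 0, 0, 0]
cdf = np.cumsum(hist) / img.size               # running sum of n_j / 16
s = np.round((L - 1) * cdf).astype(int)        # s x 9, rounded
out = s[img]                                   # 2->3, 3->6, 4->8, 5->9
```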
40
Histogram Equalization: Example
Output image (gray scale = [0, 9]) after histogram equalization:
3 6 6 3
8 3 8 6
6 3 6 9
3 8 3 8
Histogram (no. of pixels vs. gray level): levels 3, 6, 8 and 9 now occur 6, 5, 4 and 1 times.
04/19/23 41
Mathematical/Logical Operations on Images
Addition
– Averaging images for noise removal
Subtraction
– Removal of background from images
– Image enhancement
– Image matching
– Moving/displaced object tracking
Multiplication
– Superimposing texture on an image
– Convolution and correlation of images
AND and OR operations
– Removing unnecessary areas of an image through mask operations
04/19/23 42
Image Averaging for Noise Reduction
04/19/23 43
Image Averaging for Noise Reduction
04/19/23 44
Image Averaging for Noise Reduction
04/19/23 45
Image Subtraction
• Takes two images as input and produces a third image whose pixel values are those of the first image minus the corresponding pixel values of the second image
• Variants
– A single image can be used as input, subtracting a constant value from all the pixels
– The output can be the absolute difference between pixel values, rather than the straightforward signed output
04/19/23 46
Image Subtraction
• The subtraction of two images is performed in a single pass
• If the operator computes the absolute difference between the two input images, then:
  Q(i, j) = |P1(i, j) - P2(i, j)|
• If it is simply desired to subtract a constant value C from a single image, then:
  Q(i, j) = P1(i, j) - C
04/19/23 47
Image Subtraction
• If the operator calculates absolute differences, the output pixel values cannot fall outside the valid range
• In the other two cases, the pixel values may become negative
• This is one good reason for using absolute differences
• How do we solve the problem of negative pixels?
04/19/23 48
Image Subtraction
• How do we solve the problem of negative pixels?
• Suppose we have an 8-bit grayscale image (value range 0 to 255)
• The result of image subtraction then lies in the range -255 to +255
• One scheme is to add 255 to every pixel and then divide by 2
• The method is easy and fast
• Limitations
– Truncation errors can cause loss of accuracy
– The full range of the display may not be utilized
04/19/23 49
Image Subtraction
• How do we solve the problem of negative pixels?
• Another scheme:
– first, find the minimum gray value of the subtracted image
– second, find the maximum gray value of the subtracted image
– set the minimum value to zero and the maximum to 255
– adjust the remaining values to the interval [0, 255] by multiplying each value by 255/max
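This min/max rescaling scheme can be sketched as (NumPy assumed; `rescale_difference` is an illustrative name):

```python
import numpy as np

def rescale_difference(a, b):
    """Subtract b from a, then map the result onto [0, 255]:
    shift so the minimum becomes 0, then scale by 255/max."""
    diff = a.astype(np.int64) - b.astype(np.int64)
    diff -= diff.min()                    # minimum -> 0
    if diff.max() > 0:
        diff = diff * 255 // diff.max()   # maximum -> 255
    return diff.astype(np.uint8)

a = np.array([10, 200], dtype=np.uint8)
b = np.array([200, 10], dtype=np.uint8)
out = rescale_difference(a, b)   # [0, 255]
```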
04/19/23 50
Examples of Image Subtraction
04/19/23 51
Examples of Image Subtraction
04/19/23 52
Example: Background Removal Using Image Subtraction
04/19/23 53
Example: Background Removal Using Image Subtraction
04/19/23 54
Image Multiplication
• Like other image arithmetic operators, multiplication comes in two main forms.
– The first form takes two input images and produces an output image in which the pixel values are those of the first image multiplied by the corresponding pixel values of the second image.
– The second form takes a single input image and produces an output in which each pixel value is multiplied by a specified constant. This latter form is probably the more widely used and is generally called scaling.
• How it works
– The multiplication of two images is performed in a single pass: Q(i, j) = P1(i, j) × P2(i, j)
– Scaling by a constant C is performed as: Q(i, j) = P(i, j) × C
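A sketch of the scaling form, clipping rather than letting 8-bit values wrap around (NumPy assumed; `scale` is an illustrative name):

```python
import numpy as np

def scale(img, c):
    """Multiply every pixel by a constant c, clipping the result to [0, 255]."""
    return np.clip(img.astype(np.float64) * c, 0, 255).astype(np.uint8)

img = np.array([20, 60, 100], dtype=np.uint8)
out = scale(img, 3)   # [60, 180, 255] - the 100 would reach 300 and is clipped
```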
04/19/23 55
Image MultiplicationGuidelines for Use
There are many specialist uses for scaling. In general, given a scaling factor greater than one, scaling will brighten an image; given a factor less than one, it will darken it. Scaling generally produces a much more natural brightening/darkening effect than simply adding an offset to the pixels, since it better preserves the relative contrast of the image. For instance, consider a picture of a model robot taken under low lighting conditions: simply scaling every pixel by a factor of 3 yields a much clearer image. However, when using pixel multiplication, we should make sure that the calculated pixel values do not exceed the maximum possible value. If we, for example, scale the same image by a factor of 5 using an 8-bit representation, all pixels which in the original image have a value greater than 51 exceed the maximum value and are (in this implementation) wrapped around from 255 back to 0.
04/19/23 56
Examples of Image Multiplication
04/19/23 57
Examples of Image Multiplication
• Multiplication also provides a good way of "shading" artwork. You can use it to introduce a sense of diffuse lighting into your paintings or 3D CG objects. Notice in the example below how the warm and cool grays not only darken the final 3D rendering, but also influence its color temperature
04/19/23 58
Examples of Image Multiplication
• Multiplication provides a good way to color line drawings. Here you can really see the "black times anything is black, white times anything is that thing unchanged" rule in action.
04/19/23 59
Logic Operations
• Logic operations are performed on gray-level images; the pixel values are processed as binary numbers
• Light represents a binary 1 and dark represents a binary 0
• The NOT operation is equivalent to the negative transformation
• Useful for image analysis, e.g. in texture analysis and classification
04/19/23 60
Example of Logical Operations using Masks
04/19/23 61
Neighbourhood Operations

• Neighbourhood operations simply operate on a larger neighbourhood of pixels than point operations
• Neighbourhoods are mostly a rectangle around a central pixel
• Any size of rectangle and any shape of filter are possible
(Figure: a neighbourhood around (x, y) within image f(x, y).)
04/19/23 62
Local Enhancement through Spatial Filtering
The output intensity value at (x,y) depends not only on the input intensity value at (x,y) but also on the specified number of neighboring intensity values around (x,y)
Spatial masks (also called windows, filters, kernels, or templates) are used and convolved over the entire image for local enhancement (spatial filtering)
The size of the mask determines the number of neighboring pixels which influence the output value at (x,y)
The values (coefficients) of the mask determine the nature and properties of enhancing technique
04/19/23 63
Simple Neighbourhood Operations
• Some simple neighbourhood operations include:
– Min: set the pixel value to the minimum in the neighbourhood
– Max: set the pixel value to the maximum in the neighbourhood
– Median: the median value of a set of numbers is the midpoint value in that set (e.g. from the set [1, 7, 15, 18, 24], 15 is the median). Sometimes the median works better than the average
04/19/23 64
Simple Neighbourhood Operations Example
123 127 128 119 115 130
140 145 148 153 167 172
133 154 183 192 194 191
194 199 207 210 198 195
164 170 175 162 173 151
Original Image → Enhanced Image
04/19/23 65
The Spatial Filtering Process
Simple 3×3 filter:
r s t
u v w
x y z

Original image pixels (3×3 neighbourhood of e):
a b c
d e f
g h i

e_processed = v*e + r*a + s*b + t*c + u*d + w*f + x*g + y*h + z*i

• The above is repeated for every pixel in the original image to generate the filtered image
04/19/23 66
Local Enhancement through Spatial Filtering
04/19/23 67
Basics of Spatial Filtering
Given a 3×3 mask with coefficients w1, w2, …, w9 covering the pixels with gray levels z1, z2, …, z9, the response
z = w1·z1 + w2·z2 + … + w9·z9
gives the output intensity value for the processed image (to be stored in a new array) at the location of z5 in the input image
04/19/23 68
Basics of Spatial Filtering
Mask operation near the image border
A problem arises when part of the mask lies outside the image plane. To handle it:
1. Discard the problem pixels (e.g. a 512×512 input gives a 510×510 output if the mask size is 3×3)
2. Zero padding: expand the input image by padding with zeros (512×512 input → 514×514 padded image)
– Zero padding is not good: it creates artificial lines or edges on the border
3. We normally use the gray levels of the border pixels to fill the expanded region (for a 3×3 mask). For larger masks, a border region equal to half of the mask size is mirrored into the expanded region.
04/19/23 69
Mask Operation Near the Image Border
04/19/23 70
Types of Spatial Filtering
04/19/23 71
Spatial Filtering for Smoothing
For blurring/noise reduction; blurring is usually used in preprocessing steps, e.g. to remove small details from an image prior to object extraction, or to bridge small gaps in lines or curves
Equivalent to low-pass spatial filtering in the frequency domain, because smaller (high-frequency) details are removed by neighborhood averaging (averaging filters)
Implementation: the simplest spatial filter for averaging is a square mask (assume m×m) with identical coefficients 1/m² to preserve the overall gray level (averaging)
Applications: reduce noise; smooth false contours
Side effect: edge blurring
04/19/23 72
Smoothing Filters
04/19/23 73
Smoothing Spatial Filters
• One of the simplest spatial filtering operations we can perform is a smoothing operation
– Simply average all of the pixels in a neighbourhood around a central value
– Especially useful in removing noise from images
– Also useful for highlighting gross detail

Simple averaging filter:
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9
04/19/23 74
Smoothing Spatial Filtering

Simple 3×3 smoothing filter:
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

Original image pixels (3×3 neighbourhood):
104 100 108
 99 106  98
 95  90  85

e = 1/9*106 + 1/9*104 + 1/9*100 + 1/9*108 + 1/9*99 + 1/9*98 + 1/9*95 + 1/9*90 + 1/9*85
  = 98.3333

• The above is repeated for every pixel in the original image to generate the smoothed image
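The same computation over a whole image can be sketched as (NumPy assumed; borders are handled here by replicating edge pixels, one of the strategies discussed later):

```python
import numpy as np

def mean_filter3(img):
    """3x3 averaging: each output pixel is the mean of its 3x3 neighbourhood."""
    padded = np.pad(img.astype(np.float64), 1, mode='edge')  # replicate borders
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

patch = np.array([[104, 100, 108],
                  [99, 106, 98],
                  [95, 90, 85]])
center = mean_filter3(patch)[1, 1]   # 98.3333..., as computed above
```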
04/19/23 75
Spatial Filtering for Smoothing (Example)
04/19/23 76
Spatial Filtering for Smoothing (Example)
04/19/23 77
Weighted Smoothing Filters
• More effective smoothing filters can be generated by giving different pixels in the neighbourhood different weights in the averaging function
– Pixels closer to the central pixel are more important
– Often referred to as weighted averaging

Weighted averaging filter:
1/16 2/16 1/16
2/16 4/16 2/16
1/16 2/16 1/16
04/19/23 78
Order-Statistics Filtering
Nonlinear spatial filters
– Output is based on the order of gray levels in the masked area (sub-image)
– Examples: median filtering, max and min filtering
Median filtering
– Assigns the middle value of all the gray levels in the mask to the center of the mask
– Particularly effective when
  – the noise pattern consists of strong, spiky components (impulse noise, salt-and-pepper)
  – edges are to be preserved
– Forces points with distinct gray levels to be more like their neighbors
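A sketch of median filtering (NumPy assumed; borders replicated):

```python
import numpy as np

def median_filter3(img):
    """Replace each pixel with the median of its 3x3 neighbourhood."""
    padded = np.pad(img, 1, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

# a single salt-and-pepper impulse in a flat region is removed entirely
img = np.array([[10, 10, 10],
                [10, 255, 10],
                [10, 10, 10]])
out = median_filter3(img)   # all pixels become 10
```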
04/19/23 79
Median Filtering
04/19/23 80
Median Filtering (Example)
04/19/23 81
Strange Things Happen At The Edges!
• At the edges of an image we are missing pixels to form a neighbourhood
(Figure: the centre pixel e shown at several positions along the borders of image f(x, y).)
04/19/23 82
Strange Things Happen At The Edges!
• There are a few approaches to dealing with missing edge pixels:– Pad the image
• Typically with either all white or all black pixels
– Replicate border pixels
– Truncate the image
04/19/23 83
Simple Neighbourhood Operations Example
123 127 128 119 115 130
140 145 148 153 167 172
133 154 183 192 194 191
194 199 207 210 198 195
164 170 175 162 173 151
x
y
04/19/23 84
Correlation & Convolution
• The filtering we have been talking about so far is referred to as correlation, with the filter itself referred to as the correlation kernel
• Convolution is a similar operation, with just one subtle difference: the kernel is rotated by 180° before being applied
• For symmetric filters it makes no difference

Filter:
r s t
u v w
x y z

Original image pixels:
a b c
d e f
g h i

e_processed = v*e + z*a + y*b + x*c + w*d + u*f + t*g + s*h + r*i
04/19/23 85
Spatial Filtering for Image Sharpening
Background: to highlight fine detail in an image or to enhance blurred detail
Applications: electronic printing, medical imaging, industrial inspection, autonomous target detection (smart weapons), …
Foundation (blurring vs. sharpening):
– Blurring/smoothing is performed by spatial averaging (equivalent to integration)
– Sharpening is performed by noting only the gray-level changes in the image, that is, by differentiation
04/19/23 86
Spatial Filtering for Image Sharpening
Operation of image differentiation
– Enhances edges and discontinuities (magnitude of output gray level >> 0)
– De-emphasizes areas with slowly varying gray-level values (output gray level ≈ 0)
Mathematical basis of filtering for image sharpening
– First-order and second-order derivatives
– Gradients
– Implementation by mask filtering
04/19/23 87
Edge Detection
What is an edge?
– An edge is a change, but not every change is an edge
– An edge is a noticeable or abrupt change; e.g. a change of 2 is not noticeable in the range 0 to 255
– We define a threshold: if the change exceeds the specified threshold, we mark the point as an edge point
– Where the change is gradual you cannot pinpoint where the edge lies, so the change must be abrupt
– For each pixel we look in the horizontal, vertical and diagonal directions: dx/ds for the horizontal direction, dy/ds for the vertical direction
5 7
50 55 60 65 70
50 150 75
10 200 20
25 195 30
04/19/23 88
Edge Detection
04/19/23 89
Derivatives
First-order derivative
– A basic definition of the first-order derivative of a one-dimensional function f(x) is the difference
  ∂f/∂x = f(x+1) − f(x)
Second-order derivative
– Similarly, we define the second-order derivative of a one-dimensional function f(x) as the difference
  ∂²f/∂x² = f(x+1) + f(x−1) − 2f(x)
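The two differences on a small 1-D signal (NumPy assumed); note how the constant region gives 0, the ramp gives a constant first derivative, and the isolated bright point produces the second derivative's double response:

```python
import numpy as np

f = np.array([5.0, 5, 4, 3, 2, 1, 1, 1, 6, 1])

d1 = f[1:] - f[:-1]                # f(x+1) - f(x)
d2 = f[2:] + f[:-2] - 2 * f[1:-1]  # f(x+1) + f(x-1) - 2 f(x)

# d1: [0, -1, -1, -1, -1, 0, 0, 5, -5]
# d2: [-1, 0, 0, 0, 1, 0, 5, -10]
```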
04/19/23 90
First and Second Order Derivatives
04/19/23 91
Example for Discrete Derivatives
04/19/23 92
Comparison between f″ and f′
– f′ generally produces thicker edges in an image
– f″ has a stronger response to fine detail
– f′ generally has a stronger response to a gray-level step
– f″ produces a double response at step changes in gray level
– f″ is generally better suited than f′ for image enhancement
– The major application of f′ is edge extraction; f′ used together with f″ results in an impressive enhancement effect
04/19/23 93
Laplacian for Image Enhancement
04/19/23 94
Laplacian for Image Enhancement
04/19/23 95
Laplacian for Image Enhancement
• The image background is removed by Laplacian filtering.
• The background can be recovered simply by adding the original image to the Laplacian output.
04/19/23 96
Laplacian for Image Enhancement (Example)
04/19/23 97
Laplacian for Image Enhancement (Example)
04/19/23 98
Image Sharpening Based on Unsharp Masking
04/19/23 99
High Boost Filtering
Principal application:
High-boost filtering is used when the input image is darker than desired; the high-boost filter makes the image lighter and more natural
04/19/23 100
High Boost Filtering Masks
04/19/23 101
High Boost Filtering Masks
1st Derivative Filtering – The Gradient
• Implementing 1st derivative filters is difficult in practice
• For a function f(x, y), the gradient of f at coordinates (x, y) is given by the column vector:
  ∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ
1st Derivative Filtering (cont…)
• The magnitude of this vector is given by:
  ∇f = mag(∇f) = (Gx² + Gy²)^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)
• For practical reasons this can be simplified as:
  ∇f ≈ |Gx| + |Gy|
1st Derivative Filtering (cont…)
• Now we want to define digital approximations and their filter masks
• For simplicity we use a 3×3 region
• For example, z5 denotes f(x, y) and z1 denotes f(x-1, y-1)
• A simple approximation for the first derivative is Gx = z8 − z5 and Gy = z6 − z5

z1 z2 z3
z4 z5 z6
z7 z8 z9
1st Derivative Filtering (cont…)
A simple approximation for the first derivative is Gx = z8 − z5 and Gy = z6 − z5

z1 z2 z3
z4 z5 z6
z7 z8 z9

Two other definitions, proposed by Roberts, use cross-differences:
Gx = z9 − z5 and Gy = z8 − z6
If we use squares and square roots: ∇f = [(z9 − z5)² + (z8 − z6)²]^(1/2)
1st Derivative Filtering (cont…)
z1 z2 z3
z4 z5 z6
z7 z8 z9
If we use absolute values, then ∇f ≈ |z9 − z5| + |z8 − z6|
The masks corresponding to these equations are:
Roberts Cross-Gradient Operators
04/19/23 107
Gradient Operators
04/19/23 108
Gradient Operators
Normally the smallest mask used is of size 3×3. Based on the concept of approximating the gradient, several spatial masks have been proposed:
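As an example of such masks, the Sobel operators can be applied directly (NumPy assumed; a plain loop for clarity, with borders replicated; `sobel_gradient` is an illustrative name):

```python
import numpy as np

def sobel_gradient(img):
    """Approximate the gradient magnitude as |Gx| + |Gy| using Sobel masks."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    padded = np.pad(img.astype(np.float64), 1, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            out[i, j] = abs((win * kx).sum()) + abs((win * ky).sum())
    return out

# a vertical step edge responds strongly along the boundary, 0 in flat areas
img = np.zeros((5, 5)); img[:, 3:] = 100
g = sobel_gradient(img)
```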
04/19/23 109
Gradient Operators
04/19/23 110
Gradient Processing (Example)
04/19/23 111
NOTE
The summation of coefficients in all masks equals 0, indicating that they would give a response of 0 in an area of constant gray level.
04/19/23 112
Canny Edge Detection
• The Canny edge detection operator was developed by John F. Canny in 1986 and uses a multi-stage algorithm to detect a wide range of edges in images.
Algorithm Steps
– Image smoothing– Gradient computation– Edge direction computation– Non-maximum suppression– Hysteresis Thresholding
04/19/23 113
Image Smoothing
• Reduces image noise that can lead to erroneous output
• Performed by convolving the input image with a Gaussian filter, e.g. this 5×5 kernel (σ = 1.4):

1/159 × | 2  4  5  4  2 |
        | 4  9 12  9  4 |
        | 5 12 15 12  5 |
        | 4  9 12  9  4 |
        | 2  4  5  4  2 |
Image Smoothing
04/19/23 114
• Determines intensity changes
• High intensity changes indicate edges
• Performed by convolving the smoothed image with masks that determine the horizontal (x) and vertical (y) derivatives:

x:            y:
-1  0  1      -1 -2 -1
-2  0  2       0  0  0
-1  0  1       1  2  1
Gradient Computation
04/19/23 115
• The gradient magnitude is determined by adding the X and Y gradient images: |G| = |Gx| + |Gy|
Gradient Computation
04/19/23 116
• Edge directions are determined from the X and Y gradient images by computing Θ(x, y) = tan⁻¹(Gy / Gx)
• Edge directions are then classified by their nearest 45° angle
Edge Direction Computation
04/19/23 117
0°   45°   90°   135°
Edge Direction Computation
04/19/23 118
• Given estimates of the image gradients, a search is carried out to determine whether the gradient magnitude assumes a local maximum in the gradient direction. For example:
– if the rounded angle is 0 degrees, the point is considered to be on the edge if its intensity is greater than the intensities in the north and south directions;
– if the rounded angle is 90 degrees, the point is considered to be on the edge if its intensity is greater than the intensities in the west and east directions;
– if the rounded angle is 135 degrees, the point is considered to be on the edge if its intensity is greater than the intensities in the north-east and south-west directions;
– if the rounded angle is 45 degrees, the point is considered to be on the edge if its intensity is greater than the intensities in the north-west and south-east directions.
• This is worked out by passing a 3×3 grid over the intensity map.
• From this stage, referred to as non-maximum suppression, a set of edge points is obtained in the form of a binary image.
Non-Maximum Suppression
04/19/23 119
Non-Maximum Suppression
04/19/23 120
121
Thresholding
• Reduce the number of false edges by applying a threshold T
– all values below T are changed to 0
– selecting a good value for T is difficult
– some false edges will remain if T is too low
– some edges will disappear if T is too high
– some edges will disappear due to softening of the edge contrast by shadows
04/19/23
122
• Thresholding with hysteresis requires two thresholds - high and low.
• We begin by applying a high threshold. This marks out the edges we can be fairly sure are genuine. Starting from these, using the directional information derived earlier, edges can be traced through the image.
• While tracing an edge, we apply the lower threshold, allowing us to trace faint sections of edges as long as we find a starting point.
• Once this process is complete we have a binary image where each pixel is marked as either an edge pixel or a non-edge pixel.
Hysteresis Thresholding
04/19/23
123
• Apply two thresholds to the suppressed image
– T2 = 2·T1
– two images in the output
– the image from T2 contains fewer edges but has gaps in the contours
– the image from T1 has many false edges
– combine the results from T1 and T2
– link the edges of T2 into contours until we reach a gap
– link the edge from T2 with edge pixels from a T1 contour until a T2 edge is found again
Hysteresis Thresholding
04/19/23
Hysteresis Thresholding

Thresholding at T2 (the high threshold TH) gives strong edge pixels; thresholding at T1 (the low threshold TL) gives weak edge pixels. Link the edge pixels of T2 into contours until a gap is reached, then link with edge pixels from a T1 contour until a T2 edge is found again.
125
T2 = 2, T1 = 1

Before linking:      After linking (gaps filled from T1):
0 2 0 0 0 0 3 0      0 2 0 1 0 0 3 0
0 0 0 0 0 0 3 0      1 0 1 0 0 0 3 0
0 0 0 0 0 2 3 0      0 0 0 0 0 2 3 0
0 0 0 0 0 3 0 0      0 0 0 0 0 3 0 0
0 0 0 2 0 2 0 0      0 0 0 2 1 2 0 0
0 3 0 2 0 0 0 0      0 3 1 2 0 0 0 0
3 0 0 0 0 0 0 0      3 0 0 0 0 0 0 0

• A T2 contour has pixels along the green arrows
• Linking: search a 3×3 neighbourhood of each pixel and connect the pixel at the center with the one having the greater value
• Search in the direction of the edge (direction of the gradient)
Hysteresis Thresholding
• Determines the final edge pixels using a high and a low threshold
• The image is scanned for pixels with a gradient magnitude higher than the high threshold
• Pixels above the high threshold are added to the edge output
• All neighbors of a newly added pixel are recursively scanned and added if they exceed the low threshold
Hysteresis Thresholding
04/19/23 126
Hysteresis Thresholding
04/19/23 127
Mask used to estimate the Gradient
04/19/23 128
Combining Spatial Enhancement Methods (cont…)
(a) Bone scan image
(b) Laplacian filter of bone scan (a)
(c) Sharpened version of the bone scan, obtained by adding (a) and (b)
(d) Sobel filter of bone scan (a)
Combining Spatial Enhancement Methods (cont…)
(e) Image (d) smoothed with a 5×5 averaging filter
(f) The product of (c) and (e), which will be used as a mask
(g) Sharpened image: the sum of (a) and (f)
(h) Result of applying a power-law transformation to (g)
04/19/23 130
Combining Spatial Enhancement Methods (cont…)
• Compare the Original and Final Images
04/19/23 131
Any questions?