Speckle Reduction and Tumor Detection in Medical Ultrasound Images

Engineers: Rand F. Al-Fayez and Dou’a S. Al-Qhaiwi
Supervised by: Prof. Mahmoud Hassan

Submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Electronic Engineering and Computer Engineering at Princess Sumaya University for Technology, Amman, Jordan.

First Semester, 2011
24- Figure 4.7: Processes done on the original image to detect tumors.
25- Figure 4.8: The result of our algorithm.
INTRODUCTION
Medical images are often deteriorated by noise due to various sources of interference and other phenomena that affect the measurement process in imaging and acquisition systems.
Speckle noise is a random mottling of the image with bright and dark spots, which
obscures fine details and degrades the detectability of low-contrast lesions. Speckle noise
occurrence is often undesirable, since it affects the tasks of human interpretation and
diagnosis. On the other hand, its texture carries important information about the tissue being
imaged. Speckle filtering is thus a critical pre-processing step in medical ultrasound imagery,
provided that the features of interest for diagnosis are not lost. In ultrasound images, the
speckle energy is comparable to the signal energy in a wide range of frequency bands.
Several speckle reduction techniques have been proposed and applied to ultrasound images in order to reduce the noise level and improve the visual quality for a better diagnosis.
We chose to enhance the ultrasound image using statistical models for both the noise and the signal. Other methods use an adaptive technique, a statistical approach based on the wavelet transform, or contrast enhancement to treat the multiplicative noise. Wavelet speckle reduction in ultrasound has recently been tackled, but the approaches used in these methods are based on statistical models, which are costly from a computational and model-estimation viewpoint.
This project has two parts. The first part presents a despeckling method for medical ultrasound images based on taking the mean of the eight neighbors of a pixel. The proposed method has been compared with the median filter and the wavelet filter; experimental results show that it yields far better results than the other two. For the image-quality performance measure we used the mean absolute error (MAE) and the signal-to-noise ratio (SNR), as they are well suited to measuring speckle noise. Because of the limited capability of a display system, optical imaging noise, and many other factors, acquired medical images usually have poor quality. Image enhancement is the procedure used to alter the appearance of an image, or a subset of it, for better contrast or visualization of certain features, and to facilitate subsequent image-based medical diagnosis.
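The first part's pipeline — the eight-neighbor mean despeckling filter and the MAE/SNR quality measures — can be sketched as follows. This is a minimal illustration, not the thesis code: the function names and the choice to leave border pixels untouched are our own assumptions.

```python
import math

def despeckle_mean8(img):
    """Replace each interior pixel by the mean of its eight neighbors.
    Border pixels are left unchanged in this sketch."""
    H, W = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            s = sum(img[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if not (di == 0 and dj == 0))
            out[i][j] = s / 8.0
    return out

def mae(ref, img):
    """Mean absolute error between a reference and a processed image."""
    H, W = len(ref), len(ref[0])
    return sum(abs(ref[i][j] - img[i][j])
               for i in range(H) for j in range(W)) / (H * W)

def snr_db(ref, img):
    """Signal-to-noise ratio in dB: signal power over error power."""
    H, W = len(ref), len(ref[0])
    sig = sum(ref[i][j] ** 2 for i in range(H) for j in range(W))
    err = sum((ref[i][j] - img[i][j]) ** 2 for i in range(H) for j in range(W))
    return 10 * math.log10(sig / err) if err else float("inf")
```

The SNR here is defined as signal power over error power in decibels; the thesis may use a different normalization.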
The second part concerns diagnostic ultrasound, one of the newer modalities available to the clinician for evaluating patients for tumor staging. We developed our own method to detect tumors in an ultrasound (US) image. First, the presence or absence of a mass can be determined, and multiple or solitary masses can be localized. A differential diagnosis can then be projected based on these findings, which helps streamline the diagnostic workup toward prompt and appropriate treatment.
Chapter 1
Introduction to digital images
1.1 Introduction
This chapter introduces the fundamentals of image processing techniques, and also provides a window onto the major medical imaging modalities, to help understand the diagnostic tasks for which images are needed and the typical distortions and artifacts associated with each modality.
1.1.2 Digital Image Processing

We are in the middle of an exciting period of time in the field of image processing. Much of the information we handle today is designed for visual consumption in the form of text, graphics, and
pictures, or integrated multimedia presentations. Digital images are pictures that have been
converted into a computer readable binary format consisting of logical 0s and 1s.
Digital images are usually obtained by converting continuous signals into digital
format, although “direct digital” systems are becoming more prevalent. Likewise, digital images are viewed using diverse display media, including digital printers, computer monitors,
and digital projection devices. The frequency with which information is transmitted, stored,
processed, and displayed in a digital visual format is increasing rapidly, and as such, the
design of engineering methods for efficiently transmitting, maintaining, and even improving
the visual integrity of this information is of heightened interest.
Another rich aspect of digital imaging is the diversity of image types that arise, which can derive from nearly every type of radiation. There is an amazing availability of radiation
to be sensed, recorded as images, and viewed, analyzed, transmitted, or stored. In our daily
experience, we think of “what we see” as being “what is there,” but in truth, our eyes record
very little of the information that is available at any given moment. As with any sensor, the
human eye has a limited bandwidth. The band of electromagnetic (EM) radiation that we are
able to see, or “visible light,” is quite small, as can be seen from the plot of the EM band in
Fig. 1.1.
The human eye is receptive to only a very narrow range of frequencies within the electromagnetic spectrum. Nowadays there are sensors capable of detecting electromagnetic radiation outside this narrow range of “visible” frequencies, ranging from γ-rays and x-rays, through ultraviolet and infrared, to radio waves.
Figure 1.1: The electromagnetic spectrum.
A complete digital image processing system (Fig. 1.2) is a collection of hardware
(equipment) and software (computer programs) that can:
(1) Acquire an image, using appropriate sensors to detect the radiation or field and capture
the features of interest from the object in the best possible way. If the detected image is
continuous, i.e. analog, it will need to be digitized by an analog-to-digital converter (ADC).
(2) Store the image, either temporarily in a working image store using read/write memory
devices known as random access memory (RAM) or, more permanently, using
magnetic media (e.g. floppy disks or the computer hard disk memory), optical media
(e.g. CD-ROMs or DVDs) or semiconductor technology (e.g. flash memory devices).
(3) Manipulate, i.e. process, the image; and
(4) Display the image, ideally on a television or computer monitor, which comprises
lines of continuously varying, i.e. analog, intensity. This requires the production of
an analog video display signal by a digital-to-analog converter (DAC).
Figure 1.2 : A digital image processing system.
Once the image signal is sensed, it must be converted into a computer-readable,
digital format. By digital we also mean two things: that the signal is defined on a discrete
(space/time) domain, and that it takes values from a discrete set of possibilities. Before digital
processing can commence, a process of analog-to-digital conversion (A/D conversion) must
occur. A/D conversion consists of two distinct subprocesses: sampling and quantization.
Figure 1.3 : The relationship between an analog image and a digitized image.
Sampling is the process of converting a continuous-space (or continuous-space/time)
signal into a discrete-space (or discrete-space/time) signal. The sampling of continuous
signals is a rich topic that is effectively approached using the tools of linear systems theory.
The other part of image digitization is quantization. Quantization is the process of converting
a continuous valued image that has a continuous range (set of values that it can take) into a
discrete-valued image that has a discrete range.
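The two sub-processes can be illustrated with a short sketch. A uniform quantizer is an assumption here (real ADCs may use different level mappings), and the function name is ours:

```python
import math

def quantize(sample, levels=256, lo=0.0, hi=1.0):
    """Map a continuous value in [lo, hi] to one of `levels` integer
    codes -- e.g. 256 levels for an 8-bit converter."""
    x = min(max(sample, lo), hi)               # clamp out-of-range inputs
    code = int((x - lo) / (hi - lo) * levels)  # scale and truncate
    return min(code, levels - 1)               # `hi` itself maps to the top code

# Sampling: evaluate the continuous signal at 16 discrete instants,
# then quantize each sample -- together this is A/D conversion.
samples = [quantize(0.5 + 0.5 * math.sin(2 * math.pi * k / 16))
           for k in range(16)]
```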
Table 1.1 : Examples of image processing applications within various fields.
Medical diagnostic imaging: Projection radiography and x-ray computed tomography (CT) using transmission of x-rays through the body; digital subtraction angiography (DSA), which produces enhanced images of the blood vessels by subtracting “pre-contrast” and “post-contrast” images; mammography, which produces images of the soft tissue in the breast. Nuclear medicine using emission of gamma rays from radiotracers injected into the body, including planar scintigraphy and emission computed tomography (SPECT and PET). Ultrasound imaging using reflection of ultrasonic waves within the body. Magnetic resonance imaging (MRI) using the precession of spin systems in a large magnetic field, including functional MRI (fMRI). Registration of multi-modal images.

Biological imaging: Analysis, classification and matching of 3-D genome topology; automatic counting and classification of cell types and morphology; growth-rate measurements using time-lapse image sequences; motility assays for motion analysis of motor proteins.

Automation and robotics: Vision systems for automatic part recognition, quality inspection and process monitoring; virtual and augmented reality.

Document processing: Scanning, archiving, compression and transmission in order to store documents in large relational databases; optical character recognition (OCR) to convert scanned documents, e.g. bank cheques, into editable text files.

Remote sensing: Land-cover analysis of multi-spectral images to analyze crop yields and assess environmental damage; weather observation and prediction using images taken in the visible and infrared bands of the spectrum.
Diagnostic medical ultrasound uses high-frequency sound and a simple pulse–echo
technique. When an ultrasound beam is swept across a volume of interest, a cross-sectional image can be formed from a mapping of echo intensities. Current medical ultrasound
imaging systems are based on envelope detection, and therefore only display intensity
information. Despite this shortcoming, ultrasound imaging has become an important and
widely accepted modality for non-invasive imaging of the human body because of its ability
to produce real-time images, its low cost and its low risk to the patient.
Ultrasonic imaging uses high-frequency (~1–10 MHz) sound waves and their echoes
to produce images that can demonstrate organ movement in real time. Unlike
electromagnetic waves, such as x-rays and γ-rays, ultrasound is non-ionizing and, as such, is
considered safe at the intensities used in clinical imaging systems. Ultrasound images are
constructed by calculating the time taken for ultrasound pulses to travel into the body and
return, after reflection off a tissue surface.
Ultrasound pulses, generally 1–5 μs long, are generated from an ultrasound transducer
comprising a piezoelectric crystal, such as lead zirconate titanate (PZT), sandwiched
between a pair of electrodes. A small sinusoidal voltage applied to the crystal causes it to
resonate, producing sound waves as its surfaces move backwards and forwards; the crystal
also detects ultrasound waves by producing a varying electrical signal. A typical pulse, just
over 1 μs in duration with a frequency of 5 MHz, contains 5 cycles of the wave (Fig. 1.4). Pulses might be separated by perhaps 1 ms, resulting in a pulse repetition rate of 1000 Hz; they travel through soft tissue at a speed of about 1540 m s−1, compared with the speed of sound in air of about 330 m s−1.
Figure 1.4: Schematic diagram of a typical clinical ultrasound beam. (After Wolbarst, 1993, p. 408.)
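The pulse parameters quoted above are mutually consistent, as a quick calculation shows. This is a sketch using only the numbers stated in the text; the variable names are ours:

```python
c_tissue = 1540.0   # speed of sound in soft tissue, m/s (from the text)
c_air = 330.0       # speed of sound in air, m/s
f = 5e6             # transducer frequency: 5 MHz
cycles = 5          # cycles contained in one pulse (Fig. 1.4)

period = 1.0 / f                          # 0.2 us per cycle
pulse_duration = cycles * period          # 5 cycles -> 1 us, as stated
wavelength = c_tissue / f                 # ~0.308 mm in soft tissue
pulse_length = c_tissue * pulse_duration  # spatial extent of the pulse: ~1.54 mm
prf = 1.0 / 1e-3                          # pulses 1 ms apart -> 1000 Hz
```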
A pulse of ultrasound, which is what is often used in medical ultrasound rather than a
continuous wave, actually comprises a range of frequencies: the shorter the pulse the larger
the range of frequencies comprising it. Luckily, the velocity of sound in a medium is nearly
independent of frequency or wavelength, otherwise the pulse would spread out as it traveled
leading to pulse blurring. This behavior is different from that of light: the speed of light in a
medium depends on wavelength, which is why prisms split sunlight into its constituent
colors.
When an ultrasound wave encounters a tissue surface, separating tissues with different
acoustical properties, a fraction of the wave is backscattered and detected by the transducer
on its return. Generally, only those waves that reflect back through about 180° can contribute
to an ultrasound image. By measuring the delay between pulse transmission and pulse
reception, and knowing the speed of propagation, the depth of the feature can be calculated.
For example, if the time delay is 160 μs and the pulse is passing through soft tissue at a speed of 1540 m s−1, the round-trip path is 24.6 cm and the tissue depth is 12.3 cm.
The intensity of the echo is used to determine the brightness of the image at the reflecting
tissue surface (Fig. 1.5).
Figure 1.5 : The returning echo pulse suffers continuous attenuation along its path, and an abrupt change in intensity on reflection at the interface. (After Wolbarst, 1993, p. 408.)
Figure 1.6: Ultrasonic imaging system
How does it work:
A thin layer of jelly is placed between the probe and the skin to make sure all the
sound enters the body.
The probe contains a transmitter and a receiver.
A pulse of ultrasound is sent out by the transmitter.
The pulse is reflected from a surface and returns to the receiver.
The ultrasound machine measures how long it takes for the pulse to return.
Millions of sound waves are transmitted every second.
As the waves are reflected at different times, the computer in the ultrasound machine calculates how far each wave travelled before being reflected (using distance = velocity × time).
Using this information the computer builds up an image of the inside of the patient.
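The distance calculation above, combined with the 160 μs delay example from earlier in the chapter, can be checked in a few lines (a sketch; the function name is ours):

```python
def echo_depth(delay_s, c=1540.0):
    """One-way depth of a reflecting interface from the round-trip
    echo delay: the pulse travels down and back, so depth = c*t/2."""
    return c * delay_s / 2.0

round_trip = 1540.0 * 160e-6   # 0.2464 m = 24.6 cm, as in the text
depth = echo_depth(160e-6)     # 0.1232 m = 12.3 cm
```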
1.2 Types of Noise in an Image
Here we present some types of noise in digital images:
1.2.1 Additive White Gaussian Noise:
Each pixel in an image is disturbed by a Gaussian random variable with zero mean and variance σ²:

Y(i, j) = X(i, j) + N(i, j),   N(i, j) ~ N(0, σ²),   1 ≤ i ≤ H, 1 ≤ j ≤ W

where X is the noise-free image and Y is the noisy image.
Fig 1.7 : Gaussian noise
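The additive model above can be simulated directly. This is a sketch; clipping to the 8-bit range is our own assumption, since pure additive noise can push values outside the display range:

```python
import random

def add_gaussian_noise(img, sigma):
    """Y(i,j) = X(i,j) + N(i,j), with N drawn from N(0, sigma^2);
    results are clipped to the 8-bit range [0, 255]."""
    return [[min(255.0, max(0.0, x + random.gauss(0.0, sigma)))
             for x in row] for row in img]
```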
1.2.2 Salt-and-pepper noise

Each pixel in an image has a probability of p/2 (0 < p < 1) of being contaminated by either a white dot (salt) or a black dot (pepper):

Y(i, j) = 255        with probability p/2
Y(i, j) = 0          with probability p/2
Y(i, j) = X(i, j)    with probability 1 − p

for 1 ≤ i ≤ H, 1 ≤ j ≤ W, where X is the noise-free image and Y is the noisy image.

Fig 1.8 : salt and pepper noise
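The contamination model can be simulated as follows (a sketch; the function name is an assumption of ours):

```python
import random

def add_salt_pepper(img, p):
    """With probability p/2 set a pixel to 255 (salt), with p/2 to 0
    (pepper); otherwise keep X(i,j) unchanged (probability 1 - p)."""
    out = []
    for row in img:
        new_row = []
        for x in row:
            u = random.random()
            if u < p / 2:
                new_row.append(255)   # salt
            elif u < p:
                new_row.append(0)     # pepper
            else:
                new_row.append(x)     # clean pixel
        out.append(new_row)
    return out
```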
1.2.3 Speckle noise:
This noise is, in fact, caused by errors in data transmission. The corrupted pixels are either set to the maximum value, which appears as snow in the image, or have single bits flipped over. This kind of noise affects ultrasound images. Speckle noise has the characteristic of multiplicative noise. It follows a gamma distribution, given as

F(g) = g^(α−1) e^(−g/a) / ((α − 1)! a^α)

where the variance is a²α and g is the gray level.
Fig 1.9 : speckle noise
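The multiplicative character of speckle can be simulated by multiplying each pixel by a unit-mean gamma-distributed factor. This is a common modeling choice but an assumption on our part; the shape parameter below is purely illustrative:

```python
import random

def add_speckle(img, alpha=4.0):
    """Multiplicative speckle: Y = X * n, with n drawn from a gamma
    distribution of shape alpha and scale 1/alpha (unit mean,
    variance 1/alpha), clipped to [0, 255]."""
    return [[min(255.0, x * random.gammavariate(alpha, 1.0 / alpha))
             for x in row] for row in img]
```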
1.2.4 Periodic noise:
This arises typically from electrical interference, especially in the presence of a strong mains power signal during image acquisition. It is spatially dependent and generally sinusoidal at multiples of a specific frequency. It is recognizable as pairs of conjugate spots in the frequency domain, and can be conveniently removed either manually or by using a notch (narrow band-reject) filter.
1.2.5 Quantization noise :
It is inherent in the amplitude quantization process and occurs in the analog-to-digital converter (ADC), when sampled values are fitted to a finite number of levels. The noise is additive and independent of the signal when the number of bits is n ≥ 4.
1.3 Noise-Reduction Filters
To reduce the effect of noise on digital images, many types of filters are used, depending on the type of the noise. If the noise is periodic, giving rise to particular frequencies in the Fourier domain, it can be removed by using a narrow band-reject filter, known as a notch filter. Median masks are efficient at removing impulse or salt-and-pepper noise.
1.3.1 Median filter:
It is normally used to reduce noise in an image. This filter considers each pixel in the image in turn and looks at its nearby neighbors to decide whether or not it is representative of its surroundings; if not, the pixel value is replaced by the median of those values. The median is calculated by first sorting all the pixel values from the surrounding neighborhood into numerical order and then replacing the pixel being considered with the middle pixel value. The figure below shows a worked example.
Fig 1.10 : example on median filter
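The procedure described above, for a 3×3 neighborhood, can be sketched as follows (border handling is our own simplification):

```python
def median_filter3x3(img):
    """Replace each interior pixel by the median of its 3x3 neighborhood.
    Border pixels are kept as-is in this sketch."""
    H, W = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            # Sort the nine neighborhood values and take the middle one.
            window = sorted(img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = window[4]
    return out
```

Because the median ignores extreme values instead of averaging them in, a single impulse (e.g. a 255 in a neighborhood of 10s) is removed completely.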
1.3.2 Wavelet Filter
We use the wavelet transform in order to view and process digital images at multiple resolutions. In addition to being an efficient, highly intuitive framework for the representation and storage of multiresolution images, the discrete wavelet transform (DWT) provides powerful insight into an image’s spatial and frequency characteristics. It is more suitable for image processing than other transforms, such as the Fourier transform, because the wavelet transform dissects the signal into its component frequencies and varies the scale at which those frequencies are analyzed. Wavelet denoising involves three stages: