  • *[email protected]; phone 49 7243 992-252; fax 49 7243 992-299; www.fom.fgan.de

    Blind Deconvolution Algorithms for the Restoration of atmospherically

    degraded Imagery: a comparative Analysis

    Claudia S. Huebner*, Mario Greco

    Dept. of Signatorics, FGAN-FOM, Gutleuthausstrasse 1, 76275 Ettlingen, Germany;

    ABSTRACT

    Suggestions from the field of image processing to compensate for turbulence effects and restore degraded images include

    motion-compensated image integration after which the image can be considered as a non-distorted image that has been

    blurred with a point spread function (PSF) the same size as the pixel motions due to the turbulence. Since this PSF is

    unknown, a blind deconvolution is still necessary to restore the image. By utilising different blind deconvolution

    algorithms along with the motion-compensated image integration, several variants of this turbulence compensation

    method are created. In this paper we discuss the differences of the various blind deconvolution algorithms employed and

    give a qualitative analysis of the turbulence compensation variants by comparing their respective restoration results. This

    is done by visual inspection as well as by means of different image quality metrics that analyse the high frequency

    components.

    Keywords: Image restoration, atmospheric turbulence, blind deconvolution, image quality metrics

1. INTRODUCTION

    In imaging applications the prevalent effects of atmospheric turbulence comprise image dancing as well as image

    blurring. These image-degradation effects arise from random inhomogeneities in the temperature distribution of the

    atmosphere, producing small but significant fluctuations in the index of refraction which are the most pronounced close

    to the ground. Light waves propagating through the atmosphere will sustain cumulative phase distortions as they pass

    through these turbulence-induced fluctuations. When imaging over horizontal paths, as opposed to vertical imaging, the

    degree of image degradation is particularly severe. As a consequence, image resolution is generally limited by

    atmospheric turbulence rather than by design and quality of the optical system being used.

    A number of correction methods have been proposed over the years, prominent among them the hardware-based

    Adaptive Optics (AO) systems. Their underlying principle is to measure the phase aberration of an incoming wavefront

and correct it directly. Usually, only the lower-order Zernike modes, like tip/tilt and defocus, are corrected, by use of a

    deformable mirror. Although they have proven invaluable for imaging point sources in a variety of applications, e. g. in

    astronomical or medical imaging, their effectiveness is somewhat limited where extended targets are concerned. Efforts

have been made to remedy some of that drawback by using a hybrid approach, i. e. by using software-based methods in

    addition to the hardware-based correction, e. g. in [1], either directly "in-the-loop" or in post-processing. Nevertheless,

    the work involved in the build-up of such a system is comparatively complex [2], and the necessary equipment still quite

    expensive, not to mention bulky and immobile. On this account, the relative simplicity of a software-based approach

    which can be put into effect anytime and anywhere, at low cost, holds great appeal for many applications, such as ours.

    Our intention is, ultimately, the implementation of a mobile turbulence-compensation system, which is able to correct

    atmospheric image degradation effects in real-time on a live-stream. Ideally, this correction should be put into effect

without requiring the operator to undergo a protracted initialization procedure while trying to find the optimal setting

    for a multitude of parameters. The work presented in this paper could be considered as a step forward to that end.

    1.1 Outline

    Suggestions from the field of Image Processing to compensate for turbulence effects and restore such degraded images include Motion-Compensated Averaging (MCA), based on an idea of [3], after which the image can be considered as a non-distorted image that has been blurred with a PSF (Point Spread Function) the same size as the pixel motions due to

    Optics in Atmospheric Propagation and Adaptive Systems XI, edited by Anton Kohnle, Karin Stein, John D. Gonglewski, Proc. of SPIE Vol. 7108

71080M · © 2008 SPIE · CCC code: 0277-786X/08/$18 · doi: 10.1117/12.800124

Proc. of SPIE Vol. 7108 71080M-1 · 2008 SPIE Digital Library -- Subscriber Archive Copy


    the turbulence. Since this PSF is unknown, a (mostly) blind deconvolution is still necessary to restore the image. We are utilising four different algorithms in combination with this motion-compensated image integration, namely the linear Inverse Wiener Filter (IWF) method, the non-linear Lucy-Richardson Deconvolution (LRD) along with its Iterative Blind Deconvolution (IBD) variant which both use maximum likelihood estimation and, last but not least, a deconvolution technique based on Principal Component Analysis (PCA). The specifics of our implementation of the motion compensated image integration are given in Section 2 along with some notions on the concept of image quality. In Section 3 the basic principle behind the blind deconvolution algorithms we employed is briefly described and their respective advantages and disadvantages are discussed. Our efforts to automate deconvolution parameter estimation wherever possible, at least partially, are elaborated on in Section 4 and in Section 5 the respective restoration results of the turbulence compensation variants are presented and evaluated.

Fig. 1. Downsized sample images (a)-(c) from the sequence pairs with different test patterns; for (a)-(c) left: 1.0 ms exposure time, right:


    1.3 Approach

    It should be noted that during the development of the algorithm presented in this paper, we did not adhere too rigidly to

    the "real-time" objective in view of the continuing increase in computational power available and with an additional code

    optimisation in mind which will undoubtedly reduce computation time yet a bit further. Nevertheless, we tried to keep

    computation time per frame within reasonable limits, especially considering that we are intending to process a stream of

images where the output rate should be at least 30 fps (frames per second) even if the input rate may comprise up to

1000 fps. Naturally, such a constraint will come at the expense of at least some quality in the results.

    Consequently, we set out to find the best solution for our application in terms of deciding which deconvolution method

    yields the best results for a given choice of parameters within reasonable time.

2. MOTION-COMPENSATED AVERAGING

    Motion Compensated Averaging (MCA) is in essence the same as normal image integration, the main difference being,

    that before integrating the next frame of the input sequence it is shifted slightly within a given search space (see Fig. 2)

    of a few pixels in every direction such that the input frame better matches the running average. A sliding window of

    length W is used for calculating this running average or, alternatively, a temporal median as was proposed by [5]. The window length ought to be increased with growing turbulence strength. The maximum window length depends, of

course, on available memory space. Keeping 100 frames with a 240 × 256 resolution in memory posed no problem,

    as this was the setting we used, but keeping 1000 will (at present) become problematic on most PCs.

    Fig. 2. Illustration of the MCA algorithm
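The MCA update, including the search for the best shift, can be sketched as follows (a minimal NumPy sketch; the function name, the blending weights and the use of the absolute error as match criterion stand in for our actual implementation):

```python
import numpy as np

def mca_step(prev_out, frame, delta=2, a=0.9, b=0.9):
    """One motion-compensated averaging step: shift `frame` by up to
    `delta` pixels in every direction, pick the shift that best matches
    the previous output, and blend it into the running average."""
    best_err, best = np.inf, frame
    for dy in range(-delta, delta + 1):
        for dx in range(-delta, delta + 1):
            shifted = np.roll(frame, (dy, dx), axis=(0, 1))
            err = np.abs(shifted - prev_out).sum()   # absolute-error criterion
            if err < best_err:
                best_err, best = err, shifted
    # Blend of motion-compensated and raw input, cf. Fig. 2:
    # S_{n+1} = b [a S_n + (1 - a) I_hat_n] + (1 - b) I_n
    return b * (a * prev_out + (1 - a) * best) + (1 - b) * frame
```

For a search space of delta = 2 this evaluates the (2·2+1)² = 25 candidate shifts discussed above.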

Increasing the search space automatically also increases the computational complexity by a factor of (2Δ+1)², where Δ denotes the maximum pixel shift. Accordingly, the shift operation results in (2Δ+1)² shifted images from which the one that matches the previous output image best needs to be determined. There are a number of different approaches

    to making that decision by employing quality metrics which will be addressed in the following section. To keep calculation time down, the size of the search space was limited to no more than two or three pixels. Nevertheless,

calculation time per frame is still on the order of 0.1 sec for a search space of 2 pixels and about 0.15 sec for a search

    space of 3 pixels and therefore much too high to be real-time capable. But given that hardware based implementations of

    image shifting and stacking operations are comparatively easy to realise, e. g. by using an FPGA (Field Programmable

    Gate Array), the somewhat extended calculation time of our software implementation shall be of no consequence here.

    The idea of using MCA at all, apart from noise reduction which would obviously also result from normal averaging, is

    the reduction of image dancing between one frame and the next. Unfortunately, in highly anisoplanatic conditions, such

as we are wont to face, the coherence length r0 of the turbulence is much smaller than our aperture, so that the logic doesn't quite apply. Nevertheless, when compared to the results from normal averaging, the stabilization effect for the

    results from the MCA is generally better.

2.1 IQM – Image Quality Metrics

    The problem of giving a qualitative measure about the degree of similarity between two given images is closely related

to the concept of image quality and its many interpretations. As such, there is no simple answer to the question about image quality. For one, it strongly depends on the application at hand. In image or video compression, for instance, it is

Fig. 2 notation: I_n – input frame; M(x) – shift; Î_n(x) = I_n(x + M(x)) – motion-compensated (MC) input; S_n – previous output; S_{n+1} – current output.

Normal average: S_{n+1} = a S_n + (1 − a) I_n

MC average: S_{n+1} = b [ a S_n + (1 − a) Î_n ] + (1 − b) I_n


• comparatively easy to decide on an image's quality. Here, a higher compression rate is equivalent to lower image quality

    and so-called full reference Image Quality Metrics (IQM) are employed to compare the perfect, i.e. uncompressed, reference image to its compressed instance. As our task is to decide upon the closest match between two images, such an

    IQM is the logical choice. But again calculation time is an obvious issue and for this reason simple error summation

    methods as the Mean Square Error (MSE), Root Mean Square Error (RMSE) or Mean Absolute Error (MAE) are very

    attractive. In [6] another universal IQM was suggested that makes use of the statistical values of two images x and y measuring structural distortion rather than the error energy. It is defined by:

IQM = 4 σxy x̄ ȳ / [ (σx² + σy²)(x̄² + ȳ²) ]   (2.1)

while x̄ and ȳ denote the respective mean values, σx and σy the standard deviations and σxy the cross-covariance. The basic idea seemed promising, but so far we could detect no discernible improvement in our application over simply

using the Absolute Error (AE), therefore we settled once more for the fastest solution.
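For reference, the universal IQM of Eq. (2.1) and the plain absolute error compare as follows in code (a minimal NumPy sketch; the function names are ours):

```python
import numpy as np

def universal_iqm(x, y):
    """Universal IQM of Eq. (2.1):
    4 * cov_xy * mean_x * mean_y /
    ((var_x + var_y) * (mean_x^2 + mean_y^2)); equals 1 for x == y."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()   # cross-covariance
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

def absolute_error(x, y):
    """Absolute error (AE), the fastest match criterion."""
    return np.abs(x - y).sum()
```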

    However, given that we are also interested in evaluating the respective results of our restoration variants, there is another

class of measures we need to consider, the so-called no-reference IQMs where no reference image is required. In the case of evaluating restoration results such an ideal image obviously doesn't exist. Considering that our aim is to improve the

    resolution of image data, good image quality will consequently refer to high contrast of fine detail. Since the low-pass

    filter effect of the atmosphere removes the high frequency components, it is a logical conclusion to consider a metric that

exploits this information. One such metric was proposed by [7] and has been employed by us before in the context of a

    different project concerning a synthetic imaging technique. This metric is the mathematical equivalent of an existing

    physical system and is defined as:

IQM = ∫ |FT{ exp[ i I(r) ] }|⁴ d²r   (2.2) where FT{·} denotes the Fourier transform and I(r) the intensity values of the image I at position r = [r1, r2].
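Eq. (2.2) can be sketched as follows (discretised with an FFT; the normalisation and the assumption that the grey values are scaled to radians are ours):

```python
import numpy as np

def sharpness_iqm(img):
    """No-reference IQM of Eq. (2.2): the integral of
    |FT{exp(i * I(r))}|^4, approximated by a sum over the FFT grid."""
    phase = np.exp(1j * img.astype(float))          # exp(i * I(r))
    spectrum = np.fft.fft2(phase) / phase.size      # normalised FFT
    return (np.abs(spectrum) ** 4).sum()            # scalar quality score
```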

3. BLIND DECONVOLUTION

    Essentially, a deconvolution describes the procedure of separating two convolved signals f and h. In the spatial domain the blind deconvolution problem takes the general form:

g(x, y) = h(x, y) * f(x, y) + n(x, y)   (3.1) where g denotes the blurred image, h the blurring or point spread function (PSF), f the true image, * the convolution operator and n an additive noise component. To simplify further steps, it is common practice to transfer the problem into the Fourier domain where the relatively complicated convolution operation becomes a simple multiplication:

G(u, v) = H(u, v) F(u, v) + N(u, v)   (3.2) with G, H, F and N denoting the Fourier transforms of g, h, f and n, respectively. Many attempts at solving this deceptively simple equation for a wide variety of applications can be found throughout the literature. An overview of the

    most popular of these blind image deconvolution algorithms is detailed in [8].

3.1 IWF – Inverse Wiener Filtering

The simplest approach to a solution of Eq. (3.2) is to use direct inverse filtering, i. e. to form an estimate F̂ of the undegraded image:

F̂(u, v) = G(u, v) / H(u, v) = F(u, v) + N(u, v) / H(u, v)   (3.3)

    It becomes obvious that blind deconvolution is, as such, an ill-posed problem. Even if the exact blurring function was

    known, due to the random noise component, the true image can never be fully recovered. In addition, the filter function

H is likely to contain numerous zeros or at least near-vanishing values, such that the quotient on the right-hand side of Eq. (3.3) will produce significant errors in the restoration estimate, even if noise as such were negligible. For increasing spatial frequencies the ratio will also increase and thus disproportionately enhance the relative effect of the noise.


  • Deconvolution with an inverse Wiener filter uses the least mean square error between the estimate and the true image:

err = E{ (f − f̂)² }   (3.4) where E{·} denotes the expectation operator. The Fourier transform of the estimate F̂ can now be expressed as [9]:

F̂(u, v) = [ |H(u, v)|² / ( |H(u, v)|² + |N(u, v)|² / |F(u, v)|² ) ] · G(u, v) / H(u, v)   (3.5)

If the noise-to-signal power ratio |N(u, v)|² / |F(u, v)|² becomes zero, meaning the absence of additive noise, Eq. (3.5) will reduce to the ideal inverse filter in Eq. (3.3).
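Eq. (3.5) translates almost directly into code when the noise-to-signal power ratio is approximated by a constant K (a common simplification; the sketch below assumes a PSF centred at the origin):

```python
import numpy as np

def wiener_deconvolve(g, h, K=0.01):
    """Inverse Wiener filter, Eq. (3.5), with the noise-to-signal
    power ratio |N|^2/|F|^2 replaced by a constant K."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)      # zero-pad PSF to image size
    H2 = np.abs(H) ** 2
    # conj(H)/(|H|^2 + K) * G is algebraically equivalent to Eq. (3.5)
    # and avoids the division by near-zero values of H.
    F_est = np.conj(H) / (H2 + K) * G
    return np.real(np.fft.ifft2(F_est))
```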

3.2 LRD – Lucy-Richardson Deconvolution

    The Lucy-Richardson (LRD) algorithm was developed independently by [10] and [11] and is a nonlinear and basically

non-blind method, meaning the PSF, or at least a very good estimate, must be known a priori, as is the case for the IWF.

    The LRD has been derived from Bayesian probability theory where image data are considered to be random quantities

    that are assumed to have a certain likelihood of being produced from a family of other possible random quantities. The

    problem regarding the likelihood that the estimated true image, after convolution with the PSF, is in fact identical with

    the blurred input image, except for noise, is formulated as a so-called likelihood function, which is iteratively maximized. The solution of this maximization requires the convergence of [9]:

f_{k+1}(x, y) = f_k(x, y) · [ h(−x, −y) * ( g(x, y) / ( h(x, y) * f_k(x, y) ) ) ]   (3.6)

where k denotes the k-th iteration. It is the division by the blurred estimate h * f_k that constitutes the algorithm's nonlinear nature. The image estimate is assumed to contain Poisson distributed noise which is appropriate for photon noise in the data whereas

    additive Gaussian noise, typical for sensor read-out, is ignored. In order to reduce noise amplification, which is a general

problem of maximum likelihood methods, it is common practice to introduce a damping threshold below which further

    iterations are (locally) suppressed. Otherwise high iteration numbers introduce artefacts to originally smooth image

    regions.
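A bare-bones version of the iteration in Eq. (3.6), without the damping threshold, might look like this (FFT-based circular convolution and the flat initial estimate are our implementation choices):

```python
import numpy as np

def lucy_richardson(g, h, iterations=10, eps=1e-12):
    """Lucy-Richardson deconvolution, Eq. (3.6). g: blurred image,
    h: known PSF (same shape as g, centred at [0, 0])."""
    H = np.fft.fft2(h)
    conv = lambda x, Hf: np.real(np.fft.ifft2(np.fft.fft2(x) * Hf))
    f = np.full_like(g, g.mean(), dtype=float)   # flat initial estimate
    for _ in range(iterations):
        ratio = g / (conv(f, H) + eps)           # g / (h * f_k)
        f = f * conv(ratio, np.conj(H))          # correlate with h(-x, -y)
        # (a damping threshold would locally suppress further updates)
    return f
```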

3.3 IBD – Iterative Blind Deconvolution

    The iterative blind deconvolution (IBD) algorithm, proposed by [12], is mainly a blind version of the LRD algorithm

where the PSF need not be known, only its support. The IBD is a so-called Expectation Maximization (EM) algorithm, which is an optimization strategy for estimating random quantities corrupted by noise. In the case of blind

    deconvolution this means that the likelihood function from the LRD algorithm is again maximized iteratively but with

    specified constraints until an estimate for the blurring PSF is retrieved from the data along with the estimate for the true

image. The IBD algorithm is characterized by a computational complexity on the order of O(N log₂ N) per iteration, where N is the total number of pixels in a single frame, while normally more than one iteration is required for its convergence.
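The alternation at the heart of the IBD can be sketched as follows (a schematic outline only, with positivity and normalisation constraints on the PSF; this is not the exact formulation of [12]):

```python
import numpy as np

def ibd_step(g, f, h, eps=1e-12):
    """One IBD iteration: a Lucy-Richardson update of the image
    estimate f with h held fixed, followed by the symmetric update
    of the PSF estimate h with f held fixed."""
    conv = lambda x, y: np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(y)))
    corr = lambda x, y: np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(y))))
    f = f * corr(g / (conv(f, h) + eps), h)   # image update
    h = h * corr(g / (conv(f, h) + eps), f)   # PSF update (roles swapped)
    h = np.clip(h, 0, None)                   # enforce positivity
    return f, h / (h.sum() + eps)             # keep PSF normalised
```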

3.4 PCA – Principal Component Analysis

    Due to the iterative nature of most blind deconvolution algorithms, calculation time per frame is relatively high, thus

    rendering any real-time applications virtually impossible. But like the Inverse Wiener Filter, the deconvolution algorithm

    based on Principal Component Analysis (PCA), which has been proposed in literature only recently [13], is an explicit,

i.e. non-iterative, algorithm and executes very fast. The PCA algorithm aims at removing the correlation between

image matrix columns that has been caused by atmospheric (and system) blurring. There exist two versions of it, a

    "multiple-observation" version that operates on a sequence of images, yielding a single output image, and functions as a

    truly blind deconvolution algorithm and another "single-observation" version that operates on single images and needs

    additional information about the filter support size.

If a number of M blurred observations of the same scene are given, they can be modelled as random vectors {Y1, Y2, ..., YM} with mean value Ȳ. The PCA-based algorithm looks for a linear transformation or filter that maximizes the variance of its output, i. e. its estimate for the true image, by exploiting the M blurred input images as follows:

X_est = [ Y1 − Ȳ, Y2 − Ȳ, ..., YM − Ȳ ] v + Ȳ   (3.7)


• where X_est is a random vector representing the estimated true image obtained by this linear combination. The solution s to the maximization problem is a vector of length M and can be written as:

s = argmax_v v{X_est} = argmax_v E{ (X_est − Ȳ)ᵗ (X_est − Ȳ) }   (3.8)

with E{·} denoting the expectation operator as before and v{·} the variance of its argument. This solution allows us to

extract the maximum spatial variance within the M blurred input images, thus recovering some of the high spatial frequency components that have been removed previously by the blur.
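The maximisation in Eqs. (3.7) and (3.8) can be sketched via the eigendecomposition of the M × M covariance matrix of the mean-subtracted observations (a minimal NumPy sketch; the whitening details of [13] are omitted):

```python
import numpy as np

def pca_deconvolve(frames):
    """Multiple-observation PCA deconvolution sketch, Eqs. (3.7)-(3.8):
    find the direction s maximising the output variance of the linear
    combination of the mean-subtracted observations."""
    shape = frames[0].shape
    Y = np.stack([f.ravel() for f in frames])   # M x N observation matrix
    Y_mean = Y.mean(axis=0)
    Yc = Y - Y_mean                             # mean-subtracted observations
    cov = Yc @ Yc.T / Yc.shape[1]               # M x M covariance matrix
    w, V = np.linalg.eigh(cov)
    s = V[:, -1]                                # max-variance direction
    X_est = s @ Yc + Y_mean                     # Eq. (3.7)
    return X_est.reshape(shape)
```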

    The most obvious advantage of the PCA-based algorithm, apart from being robust to white noise, is that according to

    [13] its computational complexity is generally lower than that of the iterative deconvolution techniques discussed in this

    context.

    3.5 Discussion

    The four blind deconvolution algorithms discussed in this section were chosen for a number of reasons. The IWF, for

instance, is a classic method which can be calculated directly and therefore executes very fast, i. e. between 0.05 and 0.09

seconds for a filter size of 7 × 7 pixels (which was used for all calculation time measurements). The results generally

    improve if the process is repeated at least 2-3 times but with the real-time application in mind that theoretically only

    allows for a maximum of 0.03 sec for the complete processing of a single frame, one calculation must suffice.

    The IBD was mostly chosen for its general reliability and as a kind of benchmark with which to compare the respective

    results of the other methods. Since it usually takes several iterations to converge, execution time for the 5 to 10 iterations

    necessary to get a decent result is relatively long with 0.5 sec up to 1.1 sec. The main reason to include the non-blind

LRD as well was to evaluate the effectiveness of our estimate for the point spread function in comparison

    to the results yielded by passing only filter support size to the IBD algorithm. Calculation time per frame for the LRD

was in the range of 0.36 sec to 0.45 sec for 5 and 10 iterations, respectively, whereas 20 iterations already

    took 0.97 sec.

    The algorithm based on Principal Component Analysis was chosen because it was claimed to be very fast and the

    qualitative analysis in [14] which was done on synthetic data, had given rise to hope that the algorithm would achieve

    similarly good results when applied to real data. Calculation time in the single-observation version proved to be about

    0.18 sec per frame regardless of the number of principal components to be determined. It took the multiple-observation

    version 0.45 sec for 100 frames to calculate the first component, containing the highest variance, 0.19 sec for 50 frames

    and 0.03 sec for 10 frames. Calculation of all four principal components took 0.62 sec for 100 frames, and for 50 and 10

    frames calculation times were almost identical. Thus far, the algorithm was not quite as fast as we had hoped for large

    filter support sizes of 7 pixels and more.

    It should be noted that all computations were done using Matlab such that the given calculation times are relative and

    might very well be sped up by an efficient C/C++-Implementation.

4. PARAMETER ESTIMATION

    We tried automating the parameter estimation wherever possible by reducing the input parameters for external conditions

to turbulence strength, characterized by Cn², the structure parameter of the refractive-index fluctuations, and approximate path length. The input parameters for the optical system, like frame rate and image resolution, ought ideally to be

communicated directly from the sensor without requiring manual input; as of now they are simply put down in a script

and read in.

4.1 Automatic Noise Estimation

    For the automatic noise estimation we decided upon using the temporal variance of a pixel region with homogeneous

    grey values. In order to find such a homogeneous region, we first determined the image entropy of a single frame. Since

we were interested in local information content, we performed the calculations in blocks of 16 × 16 pixels. We then

    picked the block with the lowest entropy and calculated the temporal variance for the corresponding pixels over an

interval of 100 frames (in pre-processing) and chose the mean value of this variance as noise parameter. The resulting

    values for the individual sequences have been listed in Table 1. The intrinsic snag in this method is, of course, that the

    variance caused by light which is randomly scattered due to the turbulence fluctuations, becomes an inseparable part of


    the result. It is also interesting to note, if perhaps not all that surprising, that the values we derived this way differed

significantly from the value, namely 1.7519, that we derived from a sequence recorded recently using the same camera

directed at a sheet of white paper, with the same image resolution and exposure time but different optics.
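The noise-estimation procedure of this section can be summarised in code (a minimal NumPy sketch; the histogram-based entropy and the bin count are our implementation choices):

```python
import numpy as np

def estimate_noise(frames, block=16):
    """Find the lowest-entropy 16 x 16 block in the first frame, then
    return the mean temporal variance of that block over all frames."""
    def entropy(patch):
        hist, _ = np.histogram(patch, bins=32, range=(0, 256))
        p = hist[hist > 0] / patch.size
        return -(p * np.log2(p)).sum()

    first = frames[0]
    h, w = first.shape
    best, pos = np.inf, (0, 0)
    for i in range(0, h - block + 1, block):        # block-wise scan
        for j in range(0, w - block + 1, block):
            e = entropy(first[i:i + block, j:j + block])
            if e < best:
                best, pos = e, (i, j)
    i, j = pos
    stack = np.stack([f[i:i + block, j:j + block] for f in frames])
    return stack.var(axis=0).mean()                 # mean temporal variance
```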

    4.2 Estimation of the Point Spread Function

While the Optical Transfer Function (OTF) describes the response of an optical system to a Dirac impulse in the

    frequency domain, its inverse Fourier transform, the so-called Point Spread Function (PSF), characterises the degree to

which the optical system spreads, or rather blurs, the light from a point source in the spatial domain. The approach to

image restoration involves the prior determination of the atmospheric point spread function. The simplest

    approximation to an atmospheric blurring function is a Gaussian filter. This is quite a reasonable assumption since the

    optical turbulence basically acts as a low-pass filter by filtering out the high spatial frequencies and thereby blurring

    sharp edges and point-like objects.

The main problem is to guess the correct filter size in terms of mean μ and standard deviation σ: Choosing σ too big will introduce ringing effects and choosing it too small will retain too much blurring. For each of our sequences, statistics

    regarding the turbulence conditions had been compiled and evaluated with special focus on edge width broadening [15],

so we were able to use the edge width information that was correlated with the Cn² values that had been measured for our respective sequences. The resulting estimates are listed in Table 1.
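Given an estimated standard deviation, the Gaussian approximation to the atmospheric PSF is straightforward to construct (a minimal NumPy sketch; the support size and the sample value of sigma are illustrative):

```python
import numpy as np

def gaussian_psf(sigma, size=7):
    """Gaussian approximation to the atmospheric blurring function.
    sigma: standard deviation in pixels; size: filter support."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()          # normalise to unit volume

# Usage: a 7 x 7 kernel, matching the filter size used for the
# calculation time measurements in Section 3.5
psf = gaussian_psf(sigma=1.5)
```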

Fig. 3 panels – Seq. 1: median 100, IBD median, LRD median, IWF median; Seq. 2: median 100, IBD median, LRD median, IWF median; Seq. 1 & 2: PCA single C 1234 for 100, PCA multi C 1234 for 100.

    Fig. 3: Some characteristic deconvolution results from Seq. 1 & 2 (7.15 a.m. & 7.20 a.m.), beginning with median of 100

    frames as reference, followed by the resp. IBD, LRD and IWF results (Seq. 1 top, Seq. 2 centre row) and concluding

    with single and multiple version PCA results for components 1, 2, 3 and 4 (bottom row: Seq. 1 left, Seq. 2 right)



    5. RESULTS

For easier reference, the sequences will be numbered by the time of their recording, meaning Seq. 1 will refer to the earliest recording from 7.15 a.m., Seq. 2 to the next, etc. Due to lack of space, all images had to be reduced to 60% of their original size. For Seq. 1-6 the respective average and median images of 100 frames look almost the same as the sample frame, merely a little less noisy, so samples and averages were omitted and only the median image, which was also input for the various deconvolution algorithms, is displayed. Results are presented for sequence pairs, so the image labels have been coloured to group results by sequence. Fig. 3, 4 and 5 present a basic set of results for sequences 1 & 2 (7.15 a.m. & 7.20 a.m., low turbulence), 3 & 4 (7.45 a.m. & 7.48 a.m., medium turbulence) and 5 & 6 (8.12 a.m. & 8.22 a.m., medium to strong turbulence), respectively. This set includes the temporal median of 100 frames (the same 100 frames in every case) as reference to compare to the restoration results, the respective IBD, LRD and IWF results, followed by PCA single and multiple (only Seq. 1 & 2) version results for various principal components, where "C 14", for instance, means that components 1 and 4 have been evaluated by exploiting the generalisation made in [14].

[Fig. 4 image panels. Top row: Seq. 3, median 100; IBD median; LRD median; IWF median. Centre row: Seq. 4, median 100; IBD median; LRD median; IWF median. Bottom row: Seq. 3, PCA single C 14 for 100; PCA single C 1234 for 100; Seq. 4, PCA single C 23 for 100; PCA single C 1234 for 100.]

Fig. 4: Some characteristic deconvolution results from Seq. 3 & 4 (7.45 a.m. & 7.48 a.m.), beginning with the median of 100 frames as reference, followed by the respective IBD, LRD and IWF results (Seq. 3 top, Seq. 4 centre row) and concluding with various single-version PCA results (bottom row: Seq. 3 left, Seq. 4 right).

Due to lack of space, only one comprehensive set of results could be included. As an example, the results for Seq. 7, i. e. the "worst case scenario", were chosen, since they were the most significant. They are presented in Fig. 6, which spreads over two pages, beginning with a sample frame, the median and average of 100 frames and the total average of the sequence as frames of reference. To illustrate the turbulence effect on the edges, the standard deviation of the total sequence is also included. Subsequently, the respective IBD, LRD and IWF results for both the average and the median of 100 frames are displayed; following next are results from the single observation PCA deconvolution applied to the average of 100 and the median of 20 and 10 frames, and finally results from the multi observation PCA applied to 10, 20 and 100 frames.



[Fig. 5 image panels. Top row: Seq. 5, median 100; IBD median; LRD median; IWF median. Centre row: Seq. 6, median 100; IBD median; LRD median; IWF median. Bottom row: Seq. 5, PCA single C 1 for 100; PCA single C 1234 for 100; Seq. 6, PCA single C 2 for 100; PCA single C 1234 for 100.]

Fig. 5: Some characteristic deconvolution results from Seq. 5 & 6 (8.12 a.m. & 8.22 a.m.), beginning with the median of 100 frames as reference, followed by the respective IBD, LRD and IWF results (Seq. 5 top, Seq. 6 centre row) and concluding with various single-version PCA results (bottom row: Seq. 5 left, Seq. 6 right).

    5.1 Evaluation

    5.1.1 Short exposure vs. long exposure

When imaging through turbulence, exposure time becomes an issue. For longer exposures, i. e. if the exposure time outlasts the duration of the turbulence, turbulence cells of all sizes are averaged over. The largest cells are predominantly responsible for the image dancing as they move across the aperture of the optics, and for the resulting "smoke" effect which arises from light that is deflected by these larger cells. For short exposures, on the other hand, these large cells are "frozen" and only the small ones, which are responsible for the blurring, remain. Essentially, that means that the geometry of a scene is preserved, i. e. straight lines remain straight, but details are blurred. Consequently, the results from the sequences with exposure times below 0.1 ms were generally more accurate but also blurrier than those with 1 ms exposure time, which in the case of sequences 2, 3, 5 and 7 definitely qualified for the long exposure case. The long exposure results looked sharper, had higher contrast and contained more details.

    5.1.2 Visual inspection

The deblurring efforts of IBD, LRD and IWF are barely distinguishable and offer comparatively little improvement over the median for sequences 1, 2, 5 and 6, whereas in Seq. 3 and 4 the improvement becomes more apparent. Here, the IWF yields the highest resolution of all, judging from its ability to resolve the first line of text ("A BEAR"). In that regard it even outperforms the single observation PCA algorithm, in quality as well as in speed. Otherwise, the IBD yields the sharpest result for Seq. 3 and the LRD the smoothest without being blurry, unlike the PCA result.



[Fig. 6 image panels: sample image; average 100; median 100; average 1000; IBD average; IBD median; LRD average; LRD median; standard deviation 1000; IWF average; IWF median; PCA single C1234 for average 100; PCA single C1 for average 100; PCA single C2 for average 100; PCA single C3 for average 100; PCA single C4 for average 100; PCA single C1 for median 20; PCA single C1234 for median 20; PCA single C1 for median 10; PCA single C1234 for median 10; PCA multi C1 for 10; PCA multi C1234 for 10; PCA multi C1 for 20; PCA multi C1234 for 20; PCA multi C1 for 100; PCA multi C4 for 100; PCA multi C34 for 100; PCA multi C1234 for 100.]

Fig. 6 (spreads over two pages): Comprehensive deconvolution results from Seq. 7 (1.07 p.m.), recorded in strong turbulence conditions, beginning with a sample frame and averages as reference, with the standard deviation of the total sequence illustrating the turbulence effect on edges, followed by the respective IBD, LRD and IWF results, each for the average and median of 100 frames; next are results from the PCA single version applied to the average of 100 and the median of 20 and 10 frames, and last are results from the PCA multi version applied to 10, 20 and 100 frames.

For all other sequences, including Seq. 7, IBD and LRD look essentially the same. This, in conjunction with the passable IWF results, means that at least our PSF estimates are quite satisfactory. As regards the noise parameter, we could discern neither improvement nor worsening in the results compared to those created under the zero-noise assumption.

The PCA single observation algorithm has proven its ability to resolve vertical and horizontal lines. Even in Seq. 7 it manages to distinctly resolve the vertical lines. The first principal component, corresponding to the greatest variance, yields the best result. In Fig. 6 (previous page) all four components are depicted individually in order to illustrate their respective characteristics, in particular the false 3D effect, which is inherent to the first two components and evidently increases with the filter support, which in turn increases with growing turbulence strength. (It should be mentioned that the black borders around the results of the single observation PCA version are caused by the algorithm and depend on the size of the corresponding filter support.)

As to the multi observation PCA version, for 100 input frames it fails completely, and increasingly so with growing turbulence strength. Concerning its performance for only 10 or 20 input frames, the results for Seq. 7 indicate that it meets neither the standard established by the single version, nor that of the other deconvolution algorithms, nor even that of the simple average or median operation.
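To make the component notation above concrete, the following sketch reconstructs an image from a chosen subset of principal components of a frame stack (e.g. "C 14" for components 1 and 4). This is a generic SVD-based illustration of component selection under our own naming conventions, not a reimplementation of the PCA deconvolution of [13] or its generalisation in [14]:

```python
import numpy as np

def pca_component_image(frames, components):
    """Reconstruct a frame stack from selected principal components
    (1-based indices, e.g. (1, 4) for "C 14") and return the temporal
    mean plus the averaged reconstruction of the centred stack."""
    X = np.asarray(frames, dtype=np.float64)
    n, h, w = X.shape
    flat = X.reshape(n, h * w)
    mean = flat.mean(axis=0)
    centred = flat - mean
    # SVD of the centred stack: rows are observations,
    # rows of Vt are the principal components (eigen-images).
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    idx = [c - 1 for c in components]        # 1-based -> 0-based
    recon = (U[:, idx] * s[idx]) @ Vt[idx, :]  # keep chosen components only
    return (mean + recon.mean(axis=0)).reshape(h, w)

rng = np.random.default_rng(1)
stack = rng.standard_normal((20, 6, 6))
img_c14 = pca_component_image(stack, (1, 4))        # "C 14"
img_all = pca_component_image(stack, (1, 2, 3, 4))  # "C 1234"
```

Selecting all components reproduces the temporal mean of the stack, while restricting the set isolates the structure captured by the chosen eigen-images, which is the sense in which "C 1" carries the greatest variance.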

    5.1.3 Image Quality Metrics

For lack of a better alternative, we used the averages and medians, and also the input frame, as reference for metric IQM1 in Eq. (2.1), meaning the results were correlated to the deconvolution input (i. e. average or median).

As was to be expected, the values for IBD and LRD were almost identical for both metrics; those for the IWF from metric IQM2, Eq. (2.2), were generally a bit better but in the same range as the other two. Due to the (at times extreme) contrast enhancement of the PCA algorithms, both versions received high quality marks from both metrics, which were not always justified.

On the whole, the results were rather consistent with our initial assessment, but especially the "high quality" attributed to the multiple observation PCA results for 100 frames signifies that deconvolution results are prone to noise amplification, resulting in unnaturally high frequency components and high contrast and thus rendering both metrics somewhat unreliable.


6. CONCLUSIONS

We have described and discussed the various steps in our algorithm for the restoration of images degraded by atmospheric turbulence. This includes a comparative analysis of four different deconvolution methods and a presentation of the respective results for a number of real-data image sequences, both long and short exposure. The partially automated PSF estimation has produced satisfactory results. The PCA approach has yielded interesting results and proven worthy of being pursued further. The general feasibility of the real-time objective of our project has been confirmed, even if it has not yet been fully achieved.

Future work includes replacing the simple Gaussian filter by a more accurate estimate of the average atmospheric PSF and finding an analytical description of the correspondence between Cn² and edge width. We also intend to apply MCA locally, by using block matching, in order to improve the compensation for image dancing in anisoplanatic conditions. Furthermore, the use of thresholded edge images rather than greyscale images might help improve the reliability of the quality metrics.

It should also be mentioned that the full stabilizing effect of the restoring algorithm(s) can best be demonstrated by directly comparing the original to the restored images in a video sequence.

ACKNOWLEDGMENTS

    The authors would like to thank their colleagues, Gabriele Marchi, Endre Repasi and Wolfgang Schuberth, for supplying last minute measurements, helpful insights and an abundance of literature on atmospheric transfer functions.

REFERENCES

[1] V. I. Polejaev, P. R. Barbier, G. W. Carhart, M. L. Plett, D. W. Rush, M. A. Vorontsov, "Adaptive compensation of dynamic wavefront aberrations based on blind optimization technique", Proc. of SPIE, Vol. 3760, pp. 88-95, (1999).

[2] G. Marchi, R. Wei, "Evaluation and progress in the development of an adaptive optics system for ground object observation", Proc. of SPIE, Vol. 6747, (2007).

[3] E. Mauer, "Investigation of atmospheric turbulence effects on extended objects in high-speed image sequences applying automatic image analysis", Proc. of SPIE, Vol. 5237, pp. 39-48, (2004).

[4] E. Repasi, "Image Catalogue of Video Sequences recorded by FGAN-FOM during the RTG-40 Field Trial", distributed to group members, (2006).

[5] J. Gilles, "Restoration algorithms and system performance evaluation for active imagers", Proc. of SPIE, Vol. 6739, 6739B, (2007).

[6] Z. Wang, A. C. Bovik, "A universal image quality index", IEEE Signal Processing Letters, Vol. 9, No. 3, pp. 81-84, (2002).

[7] M. A. Vorontsov, G. W. Carhart, D. V. Pruidze, J. C. Ricklin, D. G. Voelz, "Image quality criteria for an adaptive imaging system based on statistical analysis of the speckle field", J. Opt. Soc. Am. A, Vol. 13, No. 7, (1996).

[8] D. Kundur, D. Hatzinakos, "Blind Image Deconvolution", IEEE Signal Processing Magazine, 1053-5888/96, pp. 43-64, (1996).

[9] R. C. Gonzalez, R. E. Woods, "Digital Image Processing", 2nd ed., Prentice Hall, NJ, (2002).

[10] W. H. Richardson, "Bayesian-Based Iterative Method of Image Restoration", J. Opt. Soc. Am. 62 (1), pp. 55-60, (1972).

[11] L. Lucy, "An iterative technique for the rectification of observed distributions", Astron. J. 79, p. 745, (1974).

[12] G. R. Ayers, J. C. Dainty, "Iterative blind deconvolution method and its applications", Opt. Letters, Vol. 13, No. 7, pp. 547-549, (1988).

[13] D. Li, M. Mersereau, S. Simske, "Atmospheric Turbulence-Degraded Image Restoration Using Principal Component Analysis", IEEE Geoscience and Remote Sensing Letters, Vol. 4, No. 3, pp. 340-344, (2007).

[14] M. Greco, C. S. Huebner, G. Marchi, "Quantitative performance evaluation of a blurring restoration algorithm based on principal component analysis", to be published in Proc. of SPIE, Optics in Atmospheric Propagation and Adaptive Systems XI, (2008).

[15] E. Repasi, R. Weiss, "Analysis of Image Distortions by Atmospheric Turbulence and Computer Simulation of Turbulence Effects", Proc. of SPIE, Vol. 6941, 69410S, (2008).
