RECONSTRUCTION OF UNDERWATER IMAGE BY BISPECTRUM

ABSTRACT: Reconstruction of an underwater object from a sequence of images distorted by moving water waves is a challenging task. A new approach is presented in this paper. We use the bispectrum technique to analyze the raw image sequence and recover the phase information of the true object. We test our approach on both simulated and real-world data. Results show that our algorithm is very promising. Such a technique has wide applications in areas such as ocean study and submarine observation.

INTRODUCTION

Suppose we look through a moving water surface and observe an underwater object (for example, a coral reef). We would notice that the object is distorted due to the effects of light refraction and motion blur. Such phenomena can be recorded by a stationary video camera above the water. Each image in the video stream may be degraded and blurred. Figure 1 shows a sample sequence of distorted images. Recovering the target image from the video stream of distorted images is a challenging, cross-disciplinary task that involves physics, optics, computer vision and image processing. Such technology has wide applications, for example underwater surveillance and submarine observation. In past decades, researchers from different areas have addressed the problem of recovering underwater images [1, 2, 3]. Some focused on reconstructing the surface of the water, some applied statistical theory to recover the target, some studied light refraction, and some applied image processing techniques. One simple method is the average-based method, which temporally averages the data stream [1]. Another method is to locate the minimally distorted regions and form the final image from these regions [2, 3]. Both methods work well in many situations, and the latter works better under severely disturbed conditions. In this paper, an original approach to obtaining the true image from the sequence of distorted images is proposed. We estimate the Fourier phase of the target image by analyzing the averaged bispectrum of the image ensemble. Results of our experiments show that our algorithm is promising.

Fig. 1. A sample sequence of distorted images.

PROBLEMS IN UNDERWATER WAVES

In this section, we briefly discuss a few problems pertaining to underwater images, such as light absorption and the inherent structure of the sea. We also discuss the effects of color in underwater images. With respect to light reflection, Church describes that the reflection of light varies greatly depending on the structure of the sea. Another main concern is that the water bends the light, either producing crinkle patterns or diffusing it, as shown in Figure 1. Most importantly, the quality of the water controls and influences its filtering properties, for example through dust suspended in the water. According to Anthony, part of the light is reflected and polarized horizontally, while part enters the water vertically. An important characteristic of the vertical polarization is that it makes the object less shiny and therefore helps to capture deep colors that may not be possible to capture otherwise.

Fig: Water surface effects.

Another well-known problem concerning underwater images is the density of the water in the sea, which is considered to be about 800 times denser than air. Therefore, when light moves from the air to the water, it is partly reflected back and partly enters the water. The amount of light that enters the water also decreases as we go deeper into the sea, and the water molecules absorb a certain amount of light as well. As a result, underwater images get darker and darker as the depth increases. Not only is the amount of light reduced as we go deeper, but the colors also drop off one by one depending on their wavelengths. Red disappears first, at a depth of about 3 m. Orange starts to fade as we go further and is lost at a depth of about 5 m. Most of the yellow is gone at a depth of about 10 m, and finally green and purple disappear at greater depths. This is shown diagrammatically in Figure 2. Blue light travels the farthest in the water due to its short wavelength, which is why underwater images are dominated by blue. In addition to an excessive amount of blue, the blurred images have low brightness, low contrast and so forth.

Fig: Color appearance underwater.

This section presents related literature concerning underwater image processing. Gasparini and Schettini have developed a tunable cast remover for digital photographs based on a modified version of the white balance algorithm. This approach first detects the presence of a cast using a detector and then removes it. The approach has been applied to a set of images downloaded from personal web pages. Garcia et al. have presented a significant body of literature addressing the lighting problems in underwater images. They have reviewed several techniques related to image enhancement, including the illumination-reflectance model, local histogram equalization, homomorphic filtering and subtraction of the illumination. Their approach addresses the correction of lighting inhomogeneities with a homomorphic filter, and they attempt to reduce the amount of noise using a histogram equalization technique.

Chambah and Semani have proposed an approach to underwater color constancy enhancement for automatic live fish recognition based on Gray World Automatic Color Equalization. They use a combined algorithm based on GW (Gray World), ACE (Automatic Color Equalization) and WP (Retinex White Patch) for underwater image recognition in real time. The WP method is based on the mean of the image and does not have any effect on the image, while ACE enhances the image without supervision. They carried out several steps to apply the proposed approach to underwater image recognition. For segmentation, they subtract the background in order to recognize the object (e.g., a fish); small false detections found in this process are discarded using a threshold. This approach makes it possible to remotely select a fish from the fish tank and display it on the screen, so that recognition can be performed in real time.

Andreas et al. have developed an approach for underwater image enhancement using several algorithms, including histogram equalization, Gaussian blur and Log-Gabor wavelets. They first apply histogram equalization to remove backscattering, attenuation and lighting effects. Since the histogram methods do not guarantee the removal of noise in the images, they further use Gaussian blur, a low-pass filtering method, and derive two images from the original image using division and subtraction. After fusion, the remaining noise is removed using a multi-scale de-noising algorithm based on complex-valued Log-Gabor wavelets.

Cufi et al. have proposed a vision-based system using a motion detection algorithm. This approach is used to automatically maintain the position of the vehicle when the reference of the corresponding image is lost; in this way, it addresses the issue of image orientation caused by vehicle movement. The approach is twofold. Firstly, it selects a set of candidate matches for a particular interest point in the images. Secondly, it uses a texture characterization of the points to select the best correspondence. The contrast operator performs a grey-scale differentiation in the region. Similarly, Fairweather et al. have used techniques such as contrast stretching and Markov random fields. They applied a bimodal histogram model to the images in order to enhance the underwater image, first applying contrast stretching techniques.
Secondly, they divided the image into two parts, object and background, and then applied a Markov random field segmentation method. You et al. have used a physics-based model: they developed a scene recovery algorithm to clear underwater images and scenes through a polarizing filter. This approach addresses the issue of backscatter rather than blur and mainly focuses on the recovery of the object. It has been applied to analyze and remove the physical effects of visibility degradation, which can be associated with partial polarization of light.

THE PROPOSED APPROACH FOR UNDERWATER IMAGE ENHANCEMENT

In the previous sections, we have discussed some issues concerning image processing, particularly in the context of underwater image enhancement. It has been highlighted that researchers in marine research in general, and computer science in particular, face problems regarding the quality of underwater images. Such problems need to be addressed in order to perform effective and rigorous analysis of underwater images, and most importantly they need to be addressed in the pre-processing stage of the computer vision system. Given the theoretical and technological importance of marine research, the problem of image enhancement is gaining increasing importance. One of the most significant issues is how to improve the quality of underwater images in order to streamline image processing and analysis. The problems with underwater images come from light absorption and scattering by the marine environment. To mitigate these problems, researchers use state-of-the-art technology such as autonomous underwater vehicles, sensors, optical cameras and visually guided swimming robots. However, the technology has not yet reached an appropriate level of success, and each option has its own merits and demerits. For example, the movement of autonomous underwater vehicles generates shadows in the scene, while optical cameras provide limited visibility when used to capture underwater images. To overcome the limitations of the technology, some researchers annotate images manually; however, this process is labour intensive and also requires significant agreement amongst the annotators. To address the issues discussed above, we propose an approach based on slide stretching. Firstly, we use contrast stretching of the RGB channels to equalize the color contrast in the images. Secondly, we apply saturation and intensity stretching in the HSI color model to increase the true color and solve the lighting problem. The proposed approach is shown in Figure 3. The HSI model provides a wider color range by controlling the color elements of the image; the saturation (S) and intensity (I) components are the elements that generate this wider range. When the image contains a blue color element, for instance, it is controlled by the S and I values to create a range from pale blue to deep blue. Using this technique, we can control the contrast ratio in underwater images by decreasing or increasing the values. This is carried out by building a histogram of the digital values of an image and redistributing the stretched values over the maximum range of possible values. Furthermore, linear stretching of the S values can assign stronger values to each range by examining the lower output values; here, the percentage of saturation can be controlled to obtain a better visual display.
The contrast stretching algorithm is used to enhance the contrast of the image. This is carried out by stretching the range of the color values so that all possible values are used. The algorithm applies a linear scaling function to the pixel values. Each pixel is scaled using the following function:

Po = (Pi - c) * (b - a) / (d - c) + a

where
- Po is the normalized pixel value;
- Pi is the considered pixel value;
- a is the minimum value of the desired range;
- b is the maximum value of the desired range;
- c is the lowest pixel value currently present in the image;
- d is the highest pixel value currently present in the image.

When the contrast stretching algorithm is applied to color images, each channel is stretched to the same desired range to maintain the correct color ratio. The first step is to balance the red and green channels so that they are roughly comparable to the blue channel; this is done by stretching each histogram to both sides to obtain a well-spread histogram. In the second step, we transform the RGB image into the HSI model and apply saturation and intensity transfer functions to increase the true color and brightness of the underwater image. Using these transfer functions, we stretch the saturation and intensity values of the HSI color model. The saturation parameter lets us recover the true color of underwater images; brightness is also important, and the HSI model helps to solve the lighting problem through the intensity parameter.
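
A minimal sketch of this slide-stretching pipeline is given below, assuming numpy and scikit-image; HSV is used as a stand-in for the HSI model, and the function names and the desired range [0, 1] are our own choices rather than part of the original tool.

import numpy as np
from skimage.color import rgb2hsv, hsv2rgb   # assumed library choice; HSV stands in for HSI

def stretch(channel, a=0.0, b=1.0):
    # Linear scaling of the equation above: map the observed range [c, d]
    # of the channel onto the desired range [a, b].
    c, d = channel.min(), channel.max()
    if d == c:                               # flat channel: nothing to stretch
        return np.full_like(channel, a)
    return (channel - c) * (b - a) / (d - c) + a

def slide_stretch(rgb):
    # Step 1: stretch each RGB channel to the same desired range so the red
    # and green channels are brought in line with the blue channel.
    rgb = np.asarray(rgb, dtype=np.float64) / 255.0
    balanced = np.dstack([stretch(rgb[..., k]) for k in range(3)])
    # Step 2: stretch the saturation and intensity components (here the S and
    # V channels of HSV) to recover the true color and correct the lighting.
    hsv = rgb2hsv(balanced)
    hsv[..., 1] = stretch(hsv[..., 1])
    hsv[..., 2] = stretch(hsv[..., 2])
    return (hsv2rgb(hsv) * 255.0).clip(0, 255).astype(np.uint8)

The sketch expects an 8-bit RGB image as an (H, W, 3) array and returns the enhanced image in the same format.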

IMAGE ENHANCEMENT TOOL AND RESULTS

Based on our methodology, we have developed a software tool for underwater images. The tool is implemented in an object-oriented programming language and contains the different stages discussed above, as shown in Fig. 3.

PROBLEM STATEMENT

Before constructing an algorithm, it is worth introducing the physical background of our problem. Figure 2 illustrates the refraction law. Assume that light passes straight up from the planar ground under the water. There is no refraction if the water surface is still, and under this condition an observer at point a sees the object O. However, when water waves exist, the normal to the water surface N is tilted by an angle, so an observer at point b sees O' instead of O. According to Snell's law, the refractive index of a material is determined by the angles c and d; the index of refraction of water is 1.33. When a particular object under the moving water surface is recorded over time, the object appears distorted and moves around its correct position.

Fig. 2. A simple illustration of the refraction law.

These movements are Gaussian distributed, with their center around the correct position, in accordance with the Cox-Munk law [4]. The law states that, given a large surface area and stationary waves, the distribution of the normals of the water surface is approximately Gaussian. Efros et al. confirmed this statement experimentally [2]. Therefore the properties of a Gaussian random process can be exploited when processing the images. The most common approach to such a problem is the average-based method, which computes the temporal average over the image ensemble [1]. It works well under comparatively smooth conditions, but it fails when the target is too fine and has many details. Recently, several researchers proposed to form the target image by locating and combining the minimally distorted regions over the sequence of raw images [2, 3, 5]. The result of such a method is much sharper than that of the average-based method. In this paper, we treat the task as a phase recovery problem for images captured under disturbed conditions, so that the bispectrum technique can be used to recover the correct phase of the target. Bispectrum techniques are primarily used in astronomical imaging to reconstruct targets affected by atmospheric turbulence [6]. To apply the technique to our problem, a sequence of raw images is required. We applied this technique to our problem and achieved promising results.
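
For reference, the average-based baseline of [1] amounts to a single temporal mean over the frame stack; a minimal sketch, assuming numpy and a stack of shape (T, H, W):

import numpy as np

def average_based_reconstruction(frames):
    # Temporal mean of the distorted frame ensemble [1]. Because the
    # refraction-induced displacements are roughly zero-mean Gaussian
    # (Cox-Munk), averaging pulls every point back toward its correct
    # position, at the cost of blurring fine detail.
    return np.asarray(frames, dtype=np.float64).mean(axis=0)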

BISPECTRUM TECHNIQUE

In this section, we give a brief review of the main technique of our algorithm. Assume that I(u, v) is the Fourier transform of a 2-D signal i(x, y) (for example, an image), which can also be expressed in magnitude-and-phase form:

I(u, v) = |I(u, v)| exp{j phi(u, v)}   (1)

From Eq. (1), a signal can be recovered given that its magnitude and phase in the Fourier domain are known. In this paper, the bispectrum technique is used to recover the signal phase. The bispectrum has two main properties: (1) the phase information of a signal is preserved, and (2) the bispectrum is insensitive to translation of the signal; these properties allow us to recover the correct phase of a signal using the bispectrum technique. The bispectrum is the Fourier transform of the triple correlation of a signal. For a 2-D signal it is given by

B(u1, u2; v1, v2) = I(u1, v1) I(u2, v2) I*(u1 + u2, v1 + v2)   (2)

where B(u1, u2; v1, v2) denotes the bispectrum and * indicates the complex conjugate. Note that the bispectrum of a 2-D signal is four dimensional. Since the normals of the water surface follow a Gaussian distribution, the phase distortion of a submerged object can also be considered Gaussian distributed. This allows us to use the properties of a Gaussian random process and overcome the phase corruption by averaging the bispectrum over the ensemble of raw images [6]. We thus relate the object phase spectrum phi_O(u, v) to the phase beta(u1, u2; v1, v2) of the mean bispectrum by

phi_O(u1 + u2, v1 + v2) = phi_O(u1, v1) + phi_O(u2, v2) - beta(u1, u2; v1, v2)   (3)
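
As a concrete illustration of Eq. (2) and of the averaging step, the sketch below computes one bispectrum sample from the 2-D FFT of each raw frame and averages it over the ensemble; the function names and the use of numpy are our own choices, and u, v are taken as integer frequency indices.

import numpy as np

def bispectrum_sample(I, u1, v1, u2, v2):
    # One sample of the bispectrum of Eq. (2), with I the 2-D FFT of a frame.
    return I[u1, v1] * I[u2, v2] * np.conj(I[u1 + u2, v1 + v2])

def mean_bispectrum_phase(frames, u1, v1, u2, v2):
    # Average the bispectrum over the raw frame ensemble at one frequency
    # pair and return its phase. Under the Gaussian-distortion assumption the
    # random phase errors cancel in the average, so this phase approximates
    # the bispectrum phase of the undistorted object.
    acc = 0.0 + 0.0j
    for frame in frames:
        I = np.fft.fft2(np.asarray(frame, dtype=np.float64))
        acc += bispectrum_sample(I, u1, v1, u2, v2)
    return np.angle(acc / len(frames))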

From Eq. (3), one can estimate the phases at higher frequencies from the phases at lower frequencies and the mean bispectrum phases, given that the phases at (1, 0) and (0, 1) are known (the phase at (0, 0) is zero).

ALGORITHM

From the above analysis, we now construct our algorithm. The first step is to divide the raw images into smaller patches and to locate and discard the most distorted ones. This is similar to the lucky imaging technique in astronomical imaging [10, 11], which selects the good images from which to recover the target. For our problem, Efros et al. showed that a particular spot under the water surface suffers different degrees of distortion over time [2]: the local image patch is more distorted under a higher-energy water wave surface and less distorted when the water surface is nearly flat. We employ the image quality index proposed in [12] as the selection criterion:

Q = 4 sigma_xy x_bar y_bar / [(sigma_x^2 + sigma_y^2)(x_bar^2 + y_bar^2)]

where x is the clean image and y is the test image, x_bar and sigma_x^2 are the mean and variance of x, y_bar and sigma_y^2 are the mean and variance of y, and sigma_xy is the covariance of x and y. Because the clean image is unknown (it is what we are trying to find), we substitute the mean of the raw image ensemble for x. After discarding the most distorted patches, we estimate the Fourier magnitude and Fourier phase of each patch of the target image. The Fourier magnitude is estimated from the averaged power spectrum of the image ensemble in our experiments. The phase of the object in the Fourier domain is recovered using Eq. (3), employing the recursive algorithm of [7]. Since the exact values of the phases at (1, 0) and (0, 1) are unknown, we again use the mean image of the ensemble: phi_O(1, 0) = phi_mean(1, 0) and phi_O(0, 1) = phi_mean(0, 1). The final step is to reconstruct each patch of the target from the Fourier magnitude and phase obtained in the previous steps, and to combine the patches at their correct positions to form the final image.
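
A rough sketch of these steps, assuming numpy: the quality index follows [12], the keep fraction of the patch-selection rule is an assumed parameter, and the phase recursion of Eq. (3) is shown in 1-D for clarity rather than over the full 4-D bispectrum.

import numpy as np

def quality_index(x, y):
    # Universal image quality index of Wang and Bovik [12] between a
    # reference patch x and a test patch y.
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

def select_patches(patches, keep=0.8):
    # Rank the co-located patches of the frame ensemble against their
    # temporal mean (standing in for the unknown clean image) and keep the
    # best fraction; 'keep' is an assumed parameter, not from the paper.
    patches = np.asarray(patches, dtype=np.float64)      # shape (T, h, w)
    reference = patches.mean(axis=0)
    scores = [quality_index(reference, p) for p in patches]
    order = np.argsort(scores)[::-1]
    return patches[order[: max(1, int(keep * len(patches)))]]

def recover_phase_1d(beta, phi1, n):
    # 1-D illustration of the recursive phase recovery of Eq. (3):
    # phi(u + 1) = phi(u) + phi(1) - beta(u), with phi(0) = 0, phi(1) taken
    # from the mean image, and beta(u) the phase of the mean bispectrum at
    # the frequency pair (u, 1).
    phi = np.zeros(n)
    phi[1] = phi1
    for u in range(1, n - 1):
        phi[u + 1] = phi[u] + phi[1] - beta(u)
    return phi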

EXPERIMENTS

We test our algorithm on both simulated and real-world data. Results show that our algorithm is promising.

Simulation

Fig. 3. The simulation results. The image reconstructed by our algorithm (c) is much sharper than the output estimated by the average-based method (b).

Fig. 4. Comparison of seeing-through-water reconstruction results of different methods.

To simulate the water waves, we apply the model proposed in [13]. The model assumes that the pattern under the water is static and flat, and that the average slant of the moving water surface at any particular point is zero when observed over a long time. These assumptions fit our problem. In the model, the shift of a point away from its correct position due to the moving water surface is determined by h, the distance between the water surface and the planar ground under the water, n, the refractive index of water, and N, white Gaussian noise. p and q are the superposed patterns of water waves with different speeds and lengths; they are expressed in terms of the wave numbers ui and vi, the angular frequencies wi and the amplitudes ai.
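
As a rough illustration of this kind of simulation, the sketch below builds the water surface as a superposition of sinusoids and shifts each image point in proportion to the local surface slope, scaled by the depth h and the refraction factor (1 - 1/n). The wave parameters, the slope-proportional displacement and the use of scipy for warping are our own assumptions, not the exact equations of [13].

import numpy as np
from scipy.ndimage import map_coordinates

def simulate_frames(image, n_frames=100, h=20.0, n=1.33, noise=0.2, seed=0):
    # Warp a 2-D gray image with a time-varying, sinusoidal water surface.
    rng = np.random.default_rng(seed)
    image = np.asarray(image, dtype=np.float64)
    H, W = image.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(np.float64)
    waves = [(0.15, 0.05, 0.6, 1.0), (0.04, 0.12, 1.1, 0.7)]   # (ui, vi, wi, ai), assumed values
    frames = []
    for t in range(n_frames):
        dx = np.zeros((H, W))
        dy = np.zeros((H, W))
        for ui, vi, wi, ai in waves:
            phase = ui * xx + vi * yy + wi * t
            dx += h * (1.0 - 1.0 / n) * ai * ui * np.cos(phase)   # surface slope in x
            dy += h * (1.0 - 1.0 / n) * ai * vi * np.cos(phase)   # surface slope in y
        dx += noise * rng.standard_normal((H, W))                 # white Gaussian noise N
        dy += noise * rng.standard_normal((H, W))
        frames.append(map_coordinates(image, [yy + dy, xx + dx], order=1, mode='nearest'))
    return np.stack(frames)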

By adjusting h, ui and vi, we can obtain different types of waves and therefore simulate both smooth and deep water waves. We generate a sequence of 100 gray-level images of size 256x96; the size of each image tile used in our experiments is 32x32. Figure 3(a) is a sample distorted image simulated by the model, (b) is the average of the data sequence over time, (c) is the output of our algorithm, and (d) is the true image. Note that the image estimated by our algorithm is much sharper than that produced by the average-based method, and represents the true image more precisely.

Real-world Results

In an experiment with a real water surface, the data are captured by a video camera fixed above the water surface, with the object lying on the planar ground under the water. The moving water surface is driven by natural factors (e.g., wind produced by a fan and an object dropped into the water). The water is clear, and no account has been taken of attenuation of the image by a turbid medium. Figure 1 shows a sample sequence of the real-world data. The image size is 288x192; in total 120 images are used, and the size of the image tiles is again 32x32. The reconstruction results of different methods are compared in Figure 4. The result of our algorithm is promising, and the fine details of the object can be clearly seen.

CONCLUSION

We propose an original idea to reconstruct a submerged object distorted by a moving water surface. We assume the normals of the water surface are Gaussian distributed, and the bispectrum technique is employed to recover the phase of the true object. Although experiments show that our approach is promising, there are some limitations. One is that our algorithm needs a large amount of computer memory and heavy computation, because the bispectrum of an image is four dimensional. Another is that the recursive phase recovery method uses only a subset of the phase information in the averaged bispectrum, which may reduce the resolution of the output. Overcoming these limitations is the next step in our research.

REFERENCES

[1] R. Shefer, M. Malhi, and A. Shenhar, "Waves distortion correction using cross correlation," http://visl.technion.ac.il/projects/2000maor/, 2001.
[2] A. Efros, V. Isler, J. Shi, and M. Visontai, "Seeing through water," NIPS, 2005.
[3] A. Donate and E. Ribeiro, "Improved reconstruction of images distorted by water waves," International Conference on Computer Vision Theory and Applications, 2006.
[4] C. Cox and W. Munk, "Slopes of the sea surface deduced from photographs of sun glitter," Scripps Inst. of Oceanogr., vol. 6, pp. 401-479, 1956.
[5] A. Donate, G. Dahme, and E. Ribeiro, "Classification of textures distorted by water waves," ICPR, 2006.
[6] M. Roggemann and B. Welch, Imaging Through Turbulence, CRC Press LLC, 1996.
[7] A. Lohmann and B. Wirnitzer, "Triple correlations," Proc. of the IEEE, vol. 72, pp. 889-901, July 1984.
[8] C. Matson, "Weighted-least-squares phase reconstruction from the bispectrum," J. Opt. Soc. Am. A, vol. 8, pp. 1905-1913, 1991.
[9] H. Bartelt, A. Lohmann, and B. Wirnitzer, "Phase and amplitude recovery from bispectra," J. Opt. Soc. Am., vol. 23, pp. 3121-3129, Sept. 1984.
[10] D. Fried, "Probability of getting a lucky short-exposure image through turbulence," J. Opt. Soc. Am. A, vol. 68, pp. 1651-1658, 1978.
[11] S. Weddell and R. Webb, "Data preprocessing on sequential data for improved astronomical imaging," Image and Vision Computing, Nov. 2005.
[12] Z. Wang and A. Bovik, "A universal image quality index," IEEE Signal Processing Letters, vol. 9, pp. 81-84, 2002.
[13] H. Murase, "Surface shape reconstruction of a nonrigid transparent object using refraction and motion," IEEE Transactions on Pattern Analysis and Machine Intelligence, Oct. 1992.