INTERNATIONAL JOURNAL FOR TRENDS IN ENGINEERING & TECHNOLOGY, VOLUME 4, ISSUE 1, APRIL 2015 - ISSN: 2349-9303

Multiscale Gradient Based Directional CFA Interpolation with Refinement

Aarthy Poornila. A
Mepco Schlenk Engineering College, ECE Department
[email protected]

R. Mercy Kingsta, Assistant Professor
Mepco Schlenk Engineering College, ECE Department
[email protected]

Abstract: Single sensor digital cameras capture only one color value for every pixel location. The process of reconstructing a full color image from these incomplete color samples, output from an image sensor overlaid with a color filter array (CFA), is called demosaicing or Color Filter Array (CFA) interpolation. The most commonly used CFA configuration is the Bayer filter. The proposed demosaicing method makes use of multiscale color gradients to adaptively combine color difference estimates from the horizontal and vertical directions and determine the contribution of each direction to the green channel interpolation. The method does not require any thresholds and is non-iterative. The red and blue channels are then refined using structural approximation.

Index Terms: Multiscale color gradients, Color Filter Array (CFA) interpolation, demosaicing, directional interpolation.

1. INTRODUCTION

Demosaicing is a digital image process used to reconstruct a full color image from the incomplete color samples obtained from an image sensor overlaid with a color filter array (CFA); it is also known as CFA interpolation or color reconstruction [21]. The reconstructed image is typically accurate in uniform-colored areas, but it suffers a loss of resolution and exhibits edge artifacts in non-uniform areas.

A color filter array is a mosaic of color filters placed in front of the image sensor. The most commonly used CFA configuration is the Bayer filter shown in Fig 1.1. It has alternating red (R) and green (G) filters for odd rows and alternating green (G) and blue (B) filters for even rows. There are twice as many green filters as red or blue ones, exploiting the human eye's higher sensitivity to green light.

Figure 1.1: Bayer mosaic of a color image

1.1 Existing Algorithms

Nearest neighbor interpolation simply copies an adjacent pixel of the same color channel (2x2 neighborhood). It is unsuitable for any application where quality matters, but can be used for generating previews under limited computational resources [25]. In bilinear interpolation, the red value of a non-red pixel is computed as the average of the two or four adjacent red pixels; the blue and green values are computed in the same way. Bilinear interpolation generates significant artifacts, especially across edges and other high-frequency content, as it does not take the correlation between the RGB values into account [22]. A rough sketch of this baseline is given below.
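As a point of reference for the baseline just described, the following is a minimal sketch of bilinear demosaicing, assuming a GRBG Bayer layout, a floating-point 2-D mosaic array and SciPy for the convolutions; it is illustrative only and is not the method proposed in this paper.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    """Toy bilinear demosaicing of a GRBG Bayer mosaic (2-D float array) into an RGB image."""
    h, w = mosaic.shape
    R, G, B = np.zeros((h, w)), np.zeros((h, w)), np.zeros((h, w))
    # Sampling masks for the GRBG pattern: G at (0,0) and (1,1), R at (0,1), B at (1,0).
    gm = np.zeros((h, w), bool); gm[0::2, 0::2] = True; gm[1::2, 1::2] = True
    rm = np.zeros((h, w), bool); rm[0::2, 1::2] = True
    bm = np.zeros((h, w), bool); bm[1::2, 0::2] = True
    R[rm], G[gm], B[bm] = mosaic[rm], mosaic[gm], mosaic[bm]
    # Bilinear kernels: green uses its four nearest neighbors, red/blue also use the diagonals.
    kg = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    krb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    return np.dstack([convolve(R, krb), convolve(G, kg), convolve(B, krb)])
```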

Cubic interpolation takes more neighbors into account than bilinear interpolation [22] (e.g., a 7x7 neighborhood), giving lower weight to pixels that are far from the current pixel. Gradient-corrected bilinear interpolation assumes that, in a luminance/chrominance decomposition, the chrominance components do not vary much across pixels. It exploits the correlations between the different color channels and uses the gradients within one color channel to correct the bilinearly interpolated value [23].

Smooth hue transition interpolation assumes that hue changes smoothly across an object's surface; simple equations for the missing colours can be obtained by using the ratios between the known colours and the interpolated green values at each pixel [22]. Problems can occur when the green value is 0, so simple normalization methods have been proposed [24]. In order to prevent flaws when estimating colours on or around edges, pattern recognition interpolation [3] describes a way to classify and interpolate three different patterns (edge, corner and stripe) in the green color plane, shown in Fig 1.2. The first step in this procedure is to find the average of the four neighboring green pixels and classify the neighbors as either high or low in comparison to this average.



Figure 1.2: (a) is a high edge pattern, (b) is a low edge pattern, (c) is a corner pattern, and (d) is a stripe pattern.

Adaptive color plane interpolation assumes that the color planes are perfectly correlated in small enough neighborhoods [25]. That is, in a small enough neighborhood, the equations

G = B + k
G = R + j

hold for constants k and j. In order to expand the edge detection power of the adaptive color plane method, it is prudent to consider more than two directions (i.e., not only the horizontal and vertical directions). Thus, directionally weighted gradient based interpolation uses information from four directions (N, S, W and E, as shown in Figure 1.3).

Figure 1.3: Neighborhood of a B pixel

A weight is assigned to each direction using the known information about the differences between the B and G values [25]; a rough sketch of this weighting idea follows.
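The sketch below illustrates the directional-weighting idea in simplified form. The exact gradient and weight definitions of [25] are not reproduced here, so the formulas inside the function are assumptions chosen only to show the structure; it expects a float-valued mosaic and a blue location (i, j) away from the image border.

```python
import numpy as np

def green_at_blue(mosaic, i, j, eps=1.0):
    """Estimate G at a blue location (i, j) by weighting the four directions
    inversely to their local gradients (illustrative, not the exact method of [25])."""
    # Directional gradients combine the nearest green difference and the
    # same-color (blue) difference two pixels away in that direction.
    grads = {
        'N': abs(mosaic[i - 1, j] - mosaic[i + 1, j]) + abs(mosaic[i, j] - mosaic[i - 2, j]),
        'S': abs(mosaic[i + 1, j] - mosaic[i - 1, j]) + abs(mosaic[i, j] - mosaic[i + 2, j]),
        'W': abs(mosaic[i, j - 1] - mosaic[i, j + 1]) + abs(mosaic[i, j] - mosaic[i, j - 2]),
        'E': abs(mosaic[i, j + 1] - mosaic[i, j - 1]) + abs(mosaic[i, j] - mosaic[i, j + 2]),
    }
    green = {'N': mosaic[i - 1, j], 'S': mosaic[i + 1, j],
             'W': mosaic[i, j - 1], 'E': mosaic[i, j + 1]}
    # Directions that cross an edge get large gradients and therefore small weights.
    weights = {d: 1.0 / (eps + g) for d, g in grads.items()}
    return sum(weights[d] * green[d] for d in weights) / sum(weights.values())
```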

2. PROPOSED SYSTEM DESIGN

2.1. System Description

The first step of the algorithm is to obtain initial directional color channel estimates. Quality can be improved by applying the interpolation over color differences, taking advantage of the correlation between the color channels. Every pixel location then has a true color channel value and two directional estimates; by taking their difference, the directional color differences are estimated.

The next step of the algorithm is to reconstruct the green image along the horizontal and vertical directions. Once a missing green component is interpolated, the same process is performed for the next missing green component in a raster scan manner. After all missing green components of the image have been interpolated, the missing red and blue components at the green CFA sampling positions are estimated. Next, the directional color difference estimates from the different directions are combined.

The directional CFA interpolation method is based on multiscale color gradients. Gradients are useful for extracting directional data from digital images. In this method, the horizontal and vertical color difference estimates are blended based on the ratio of the total absolute values of the vertical and horizontal color difference gradients over a local window. For red and green rows and columns in the input mosaic image, the directional estimates for the missing red and green pixel values are obtained from the initial directional color channel estimates.

The color difference gradients are used to find weights for each direction. In order to avoid repetitive weight calculations, the directional weights are reused.

Then the artifacts are removed and the red and blue channels are refined by the structural approximation method. The modules of the proposed system framework are illustrated in Fig 2.1.

    Fig 2.1 System Framework

2.1.1. Initial Directional Color Channel Estimation

To obtain a full color image, various demosaicing algorithms can be used to interpolate a set of complete red, green, and blue values for each point. The directional estimates for the missing red and green pixel values, for red and green rows and columns in the input mosaic image, are calculated. The directional estimates for the missing blue and green pixel values, for blue and green rows and columns in the input mosaic image, are calculated in the same way. The resulting horizontal and vertical color channel estimates serve as the directional color channel estimates.

The directional color channel estimates for the missing green pixel values are

$$\tilde{G}^{H}_{i,j} = \frac{G_{i,j-1} + G_{i,j+1}}{2} + \frac{2R_{i,j} - R_{i,j-2} - R_{i,j+2}}{4} \qquad (1)$$

$$\tilde{G}^{V}_{i,j} = \frac{G_{i-1,j} + G_{i+1,j}}{2} + \frac{2R_{i,j} - R_{i-2,j} - R_{i+2,j}}{4} \qquad (2)$$


Here, $\tilde{G}^{H}_{i,j}$ is the horizontal and $\tilde{G}^{V}_{i,j}$ the vertical green color channel estimate at a red pixel. The color channel estimates are calculated from the Bayer pattern, where H and V denote the horizontal and vertical directions and (i,j) denotes the pixel location.
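A minimal sketch of Eqs. (1)-(2), computing the horizontal and vertical green estimates at a single red location; it assumes mosaic is a 2-D float array holding the Bayer CFA samples and that (i, j) indexes a red sample at least two pixels from the border.

```python
def directional_green_at_red(mosaic, i, j):
    """Horizontal and vertical green estimates at a red pixel, per Eqs. (1)-(2)."""
    gh = (mosaic[i, j - 1] + mosaic[i, j + 1]) / 2.0 \
         + (2.0 * mosaic[i, j] - mosaic[i, j - 2] - mosaic[i, j + 2]) / 4.0
    gv = (mosaic[i - 1, j] + mosaic[i + 1, j]) / 2.0 \
         + (2.0 * mosaic[i, j] - mosaic[i - 2, j] - mosaic[i + 2, j]) / 4.0
    return gh, gv
```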

2.1.2. Directional Color Difference Estimation

Quality can be improved by applying the interpolation over color differences to take advantage of the correlation between the color channels. This is an important technique employed in the reconstruction of full color images, obtained by interpolation along the horizontal and vertical directions. Every pixel coordinate has a true color channel value and two directional estimates; by taking their difference, the directional color differences are estimated:

$$\tilde{C}^{H}_{g,r}(i,j) = \begin{cases} \tilde{g}^{H}_{i,j} - R_{i,j}, & \text{if } G \text{ is interpolated} \\ G_{i,j} - \tilde{r}^{H}_{i,j}, & \text{if } R \text{ is interpolated} \end{cases} \qquad (3)$$

$$\tilde{C}^{V}_{g,r}(i,j) = \begin{cases} \tilde{g}^{V}_{i,j} - R_{i,j}, & \text{if } G \text{ is interpolated} \\ G_{i,j} - \tilde{r}^{V}_{i,j}, & \text{if } R \text{ is interpolated} \end{cases} \qquad (4)$$

Here $\tilde{C}^{H}_{g,r}$ and $\tilde{C}^{V}_{g,r}$ are the horizontal and vertical difference estimates between the green and red channels.
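At a red location, the color differences of Eqs. (3)-(4) then follow directly from the estimates above; the short sketch below reuses the directional_green_at_red helper from the previous sketch.

```python
def color_difference_at_red(mosaic, i, j):
    """Horizontal and vertical green-red difference estimates at a red pixel (Eqs. 3-4)."""
    gh, gv = directional_green_at_red(mosaic, i, j)   # from the sketch above
    return gh - mosaic[i, j], gv - mosaic[i, j]
```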

2.1.3. Multiscale Gradient Calculation

A full-color image is usually composed of three color planes, so three separate sensors would be required for a camera to measure an image directly. To reduce cost, many cameras use a single sensor overlaid with a color filter array; the most commonly used CFA nowadays is the Bayer CFA. In a single sensor digital camera, only one color is measured at each pixel and the other two missing color values are estimated. This estimation process is known as color demosaicing.

The Bayer pattern comprises blue-and-green and red-and-green rows and columns, as shown in Fig 2.2. To obtain a full-color image, various demosaicing algorithms can be used to interpolate a set of complete red, green, and blue values for each point. For red and green rows and columns in the input mosaic image, the directional estimates for the missing red and green pixel values are calculated.

    Fig 2.2 Bayer pattern

Quality can be improved by applying the interpolation over color differences to take advantage of the correlation between the color channels; this technique is important in the reconstruction of full color images obtained by interpolation along the horizontal and vertical directions. Every pixel coordinate has a true color channel value and two directional estimates.

The multiscale gradient equations determine the difference between the available color channel values one pixel (instead of two pixels) away from the target pixel, perform the same operation on the other channel using its closest samples, and then take the difference between these two, as shown in Fig 2.3. Observe that the first part of each equation is the green channel gradient, and the second part is the red channel gradient at twice the scale, normalized by the distance between its operands.

    Fig 2.3: Multiscale Gradient Equation

The multiscale gradient equations for red and green rows and columns are

$$M^{H}_{i,j} = \frac{G_{i,j+1} - G_{i,j-1}}{2} - \frac{R_{i,j+2} - R_{i,j-2}}{N_1} + \frac{G_{i,j+3} - G_{i,j-3}}{N_2} - \frac{R_{i,j+4} - R_{i,j-4}}{N_3} \qquad (5)$$

$$M^{V}_{i,j} = \frac{G_{i+1,j} - G_{i-1,j}}{2} - \frac{R_{i+2,j} - R_{i-2,j}}{N_1} + \frac{G_{i+3,j} - G_{i-3,j}}{N_2} - \frac{R_{i+4,j} - R_{i-4,j}}{N_3} \qquad (6)$$

where $M^{H}_{i,j}$ and $M^{V}_{i,j}$ denote the multiscale gradient at each pixel coordinate in the horizontal and vertical directions and $N_1$, $N_2$, $N_3$ denote normalizers. The normalizer values are $N_1 = 2$, $N_2 = 4$, $N_3 = 6$.

The color difference gradient is calculated by taking the difference between the available color channel values that are two pixels away from the target pixel, performing the same operation for the other color channel by simple averaging, and then taking the difference between these two results.
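A sketch of Eqs. (5)-(6) as reconstructed above, evaluated at a single red location of a float-valued mosaic; the directional weights later use the absolute values of these gradients accumulated over a local window.

```python
def multiscale_gradient_at_red(mosaic, i, j, n1=2.0, n2=4.0, n3=6.0):
    """Horizontal and vertical multiscale gradients at a red pixel, following Eqs. (5)-(6)."""
    mh = (mosaic[i, j + 1] - mosaic[i, j - 1]) / 2.0 \
         - (mosaic[i, j + 2] - mosaic[i, j - 2]) / n1 \
         + (mosaic[i, j + 3] - mosaic[i, j - 3]) / n2 \
         - (mosaic[i, j + 4] - mosaic[i, j - 4]) / n3
    mv = (mosaic[i + 1, j] - mosaic[i - 1, j]) / 2.0 \
         - (mosaic[i + 2, j] - mosaic[i - 2, j]) / n1 \
         + (mosaic[i + 3, j] - mosaic[i - 3, j]) / n2 \
         - (mosaic[i + 4, j] - mosaic[i - 4, j]) / n3
    return mh, mv
```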

2.1.4. Initial Green Channel Interpolation

The next step of the algorithm is to reconstruct the green image along the horizontal and vertical directions. Initial green channel interpolation concentrates on estimating missing green pixels from the known green and red pixel values using the green-red rows of the Bayer pattern; the same technique is used to estimate missing green pixels from the known green and blue pixels. For this, the directional color difference estimates around every green pixel to be interpolated have to be computed. For the multiscale gradient, a smaller scale is more desirable because it allows the local color dynamics to be captured at a better resolution; the available color channel samples are substituted at this scale while the same operations are performed. The combined color difference estimate used for the green channel interpolation is

$$\tilde{C}_{g,r}(i,j) = \frac{w_V \,\big(f \cdot \tilde{C}^{V}_{g,r}(i-1\!:\!i+1,\, j)\big) + w_H \,\big(\tilde{C}^{H}_{g,r}(i,\, j-1\!:\!j+1) \cdot f'\big)}{w_C} \qquad (7)$$

Here $w_C = w_V + w_H$ and $f = [1/4 \;\; 2/4 \;\; 1/4]$, where $\tilde{C}_{g,r}(i,j)$ indicates the initial green channel interpolation estimate at red pixel locations.
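A small sketch of the blend in Eq. (7): the three vertical color-difference estimates in the column i-1..i+1 and the three horizontal estimates in the row j-1..j+1 are smoothed with f = [1/4, 2/4, 1/4] and mixed by the directional weights. The arrays cdv_col and cdh_row and the scalar weights w_v, w_h are assumed to be computed beforehand.

```python
import numpy as np

F = np.array([0.25, 0.5, 0.25])  # the smoothing filter f = [1/4 2/4 1/4]

def blend_color_difference(cdv_col, cdh_row, w_v, w_h):
    """Combine vertical and horizontal color-difference estimates as in Eq. (7).
    cdv_col: vertical estimates at rows i-1..i+1 (length-3 array)
    cdh_row: horizontal estimates at columns j-1..j+1 (length-3 array)"""
    return (w_v * float(F @ cdv_col) + w_h * float(F @ cdh_row)) / (w_v + w_h)
```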

2.1.5. Green Channel Update

After all missing green components of the image have been interpolated, the missing red and blue components at the green CFA sampling positions are estimated. After the directional color difference estimates are combined as explained in the previous section, the green channel can be calculated directly and then the other channels completed. However, it is possible to improve the green channel results by updating the initial color difference estimates. Consider the closest four neighbors of the target pixel, each with its own weight:

$$\tilde{C}'_{g,r}(i,j) = \frac{\tilde{C}_{g,r}(i,j) + w_N\,\tilde{C}_{g,r}(i-2,j) + w_S\,\tilde{C}_{g,r}(i+2,j) + w_W\,\tilde{C}_{g,r}(i,j-2) + w_E\,\tilde{C}_{g,r}(i,j+2)}{w_T} \qquad (8)$$

Here the four neighbors of the target pixel are taken along the north, south, east and west directions. The weights $(w_N, w_S, w_E, w_W)$ are calculated by finding the total multiscale color gradients over a local window, and $w_T$ normalizes the weighted sum. Once a missing green component is interpolated, the same process is performed for the next missing green component in a raster scan manner. Once the color difference estimate is finalized, we add it to the available target pixel value to obtain the estimated green channel value:

$$G'(i,j) = \tilde{C}'_{g,r}(i,j) + R(i,j) \qquad (9)$$

$$G'(i,j) = \tilde{C}'_{g,b}(i,j) + B(i,j) \qquad (10)$$

2.1.6. Red and Blue Channel Interpolation

After the green channel has been reconstructed, the red and blue components are interpolated. The most common approach for red and blue estimation is to interpolate the color differences R-G and B-G instead of R and B directly. Finally, the missing blue (red) components at the red (blue) sampling positions are interpolated. For red and blue channel interpolation, the missing diagonal samples, i.e. red pixel values at blue locations and blue pixel values at red locations, are completed first. These pixels are interpolated using the proposed 7x7 filter.

Referring to the estimation of the red component (the same strategy is applied for the blue one), all of the green positions are then interpolated. Therefore, we choose to perform an interpolation using the estimated red samples at the green locations:

$$R'(i,j) = G'(i,j) - \tilde{C}_{g,r}(i-3\!:\!i+3,\, j-3\!:\!j+3) \ast P_{rb} \qquad (11)$$

$$B'(i,j) = G'(i,j) - \tilde{C}_{g,b}(i-3\!:\!i+3,\, j-3\!:\!j+3) \ast P_{rb} \qquad (12)$$

With the completion of the red and blue pixel values at the green coordinates, the full color image can be generated.
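The sketch below illustrates the color-difference approach to red (or blue) interpolation. Since the coefficients of the 7x7 filter P_rb are not reproduced in this text, a simple bilinear spreading of the sparse R-G differences is used as an illustrative stand-in rather than the filter actually proposed; green_full is the reconstructed green plane (float array) and red_mask marks the red CFA positions.

```python
import numpy as np
from scipy.ndimage import convolve

def interpolate_red(green_full, mosaic, red_mask):
    """Illustrative red-channel interpolation over color differences (R - G)."""
    diff = np.zeros_like(green_full)
    diff[red_mask] = mosaic[red_mask] - green_full[red_mask]    # known R - G samples
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0
    filled = convolve(diff, k)                                  # spread differences to missing sites
    return green_full + np.where(red_mask, diff, filled)        # keep measured R where available
```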

2.1.7. Red and Blue Channel Refinement

The final step of the proposed method is to refine the interpolated red and blue values. The equations for performing this refinement using the structural approximation method [11] are given below.

Let Q(k, l) be either a red or a blue sample, as shown in Fig 2.4, and let

$$D(k, l) = G(k, l) - Q(k, l). \qquad (13)$$

Fig 2.4 Reference Bayer pattern

Here, G is a green sample, and P and Q represent either a red or a blue sample respectively. If P is red, then Q is blue, and vice versa. At the four green neighbors of a Q sample at (i, j), the missing Q value is approximated by the mean of its two nearest Q samples:

$$D(i-1, j) = G(i-1, j) - \frac{Q(i-1, j-1) + Q(i-1, j+1)}{2}$$
$$D(i, j-1) = G(i, j-1) - \frac{Q(i-1, j-1) + Q(i+1, j-1)}{2}$$
$$D(i+1, j) = G(i+1, j) - \frac{Q(i+1, j-1) + Q(i+1, j+1)}{2}$$
$$D(i, j+1) = G(i, j+1) - \frac{Q(i-1, j+1) + Q(i+1, j+1)}{2}$$

The final interpolation after the above refinements is given by

$$Q(i,j) = G(i,j) - \frac{D(i-1,j) + D(i,j-1) + D(i+1,j) + D(i,j+1)}{4} \qquad (14)$$

With this refinement, the proposed method produces superior image quality compared with other demosaicing algorithms.
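A compact sketch of the structural approximation refinement of Eqs. (13)-(14) at a single red or blue location (i, j), assuming full G and Q planes stored as float arrays.

```python
def refine_chroma(G, Q, i, j):
    """Refine a red or blue value Q(i, j) using Eqs. (13)-(14)."""
    # D = G - Q at the four green neighbors; the missing Q there is taken as the
    # mean of its two nearest Q samples, as in the equations above.
    d_n = G[i - 1, j] - (Q[i - 1, j - 1] + Q[i - 1, j + 1]) / 2.0
    d_s = G[i + 1, j] - (Q[i + 1, j - 1] + Q[i + 1, j + 1]) / 2.0
    d_w = G[i, j - 1] - (Q[i - 1, j - 1] + Q[i + 1, j - 1]) / 2.0
    d_e = G[i, j + 1] - (Q[i - 1, j + 1] + Q[i + 1, j + 1]) / 2.0
    return G[i, j] - (d_n + d_s + d_w + d_e) / 4.0              # Eq. (14)
```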

2.2. Special Features

This method produces better results in terms of image quality. It does not require any thresholds, as it does not make any hard decisions, and it is non-iterative. Gradients at different scales are used as features. The method is applicable to digital cameras.

3. RESULTS

A set of twenty-four images from the Kodak test set, shown in Fig 3.1, is used for the experimental verification of the proposed algorithm. These images are captured using a single sensor digital camera that uses a Color Filter Array (CFA) in which the color filters are arranged in the Bayer pattern. The sensor alignment of this single sensor digital camera follows the GRBG pattern shown in Fig 2.2.

Fig: 3.1 Kodak Image Test Set

One of the 24 images of the Kodak test set, shown in Fig 3.2, is taken as the input for the demosaicing process.

    Fig: 3.2 Input Kodak Image

A mosaic image is a picture that has been divided into (usually equal sized) rectangular sections, each of which carries a single color value, red, green or blue, according to the Bayer pattern, as shown in Fig 3.3.

Fig: 3.3 Mosaic Image

The horizontal estimates for the missing red and green pixel values of the red and green rows and columns in the input mosaic image, and for the missing blue and green pixel values of the blue and green rows and columns, are calculated.

    Fig: 3.4 Horizontal color channel estimation

The vertical estimates for the missing red and green pixel values of the red and green rows and columns in the input mosaic image, and for the missing blue and green pixel values of the blue and green rows and columns, are calculated.


    Fig: 3.5 Vertical color channel estimation

    Fig: 3.6 Horizontal color difference

The image quality can be improved by applying the interpolation over color differences. This is an important technique in the reconstruction of full color images, obtained by interpolation along the horizontal and vertical directions as in Fig 3.6 and Fig 3.7.

    Fig: 3.7 Vertical color difference

Initial green channel interpolation concentrates on estimating missing green pixels from known green and red pixel values using the green and red rows of the Bayer pattern, and missing green pixels from known green and blue pixel values using the green and blue rows of the Bayer pattern, as shown in Fig 3.8.

    Fig: 3.8 Initial Green channel Interpolation

    Fig: 3.9 Green channel update

The green channel results are improved by updating the initial color difference estimates, as shown in Fig 3.9. Here the four neighbors of the target pixel are taken along the north, south, east and west directions.

    Fig: 3.10 Before Refinement

After the green channel has been reconstructed, the red and blue components are interpolated. The most common approach for red and blue estimation is to interpolate the color differences. The image can then be reconstructed with these interpolated color channel values, as shown in Fig 3.10.


    Fig: 3.11 Red plane Refinement

After interpolating the red and blue channels, the red channel is further refined using the structural approximation method, as shown in Fig 3.11.

    Fig: 3.12 Blue Plane Refinement

After interpolating the red and blue channels, the blue channel is further refined using the structural approximation method, as shown in Fig 3.12.

    Fig: 3.13 Reconstructed image

Fig 3.13 shows the reconstruction of the whole image. After the interpolation, red and blue channel refinement takes place using the structural approximation method. We conclude that the proposed method outperforms the other methods in the tests in terms of PSNR.

4. IMAGE QUALITY METRICS

Objective measures of quality require a distortion-free reference image to be compared with the image whose quality is to be measured. The dimensions of the reference image and of the degraded image must be identical. The quality of the images can be measured in terms of:

    4.1. PSNR

The peak signal-to-noise ratio is a measure of quality that is determined by first calculating the mean squared error (MSE) and then relating the squared maximum value of the data type to the MSE. This measure is simple to calculate but sometimes does not align well with quality as perceived by humans; for example, the PSNR of a blurred image compared to an unblurred image can be quite high even though the perceived quality is low.

$$\mathrm{PSNR} = 10 \cdot \log_{10}\!\left(\frac{MAX_I^2}{\mathrm{MSE}}\right) = 20 \cdot \log_{10}(MAX_I) - 10 \cdot \log_{10}(\mathrm{MSE})$$
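The CPSNR reported in Section 4.1.1 follows from this formula by averaging the MSE over all pixels and all three color channels; a minimal NumPy sketch for 8-bit images (MAX_I = 255):

```python
import numpy as np

def cpsnr(reference, test, max_i=255.0):
    """Color PSNR: MSE averaged over all pixels and channels of two 8-bit images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(max_i ** 2 / mse)
```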

    4.2. SSIM

The Structural Similarity (SSIM) index measures quality by comparing local patterns of pixel intensities that have been normalized for luminance and contrast. This quality metric is based on the principle that the human visual system is good at extracting structural information.

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$

where $\mu_x$ and $\mu_y$ are the local means, $\sigma_x$ and $\sigma_y$ the standard deviations, and $\sigma_{xy}$ the cross-covariance of images $x$ and $y$.
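Rather than re-implementing the windowed statistics, an existing SSIM implementation can be used; the sketch below averages per-channel scores using scikit-image's structural_similarity, assuming 8-bit RGB inputs.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mean_ssim(reference, test):
    """Average SSIM over the R, G and B channels of two 8-bit images."""
    scores = [structural_similarity(reference[..., c], test[..., c], data_range=255)
              for c in range(3)]
    return float(np.mean(scores))
```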

4.1.1. Performance Comparison in terms of CPSNR

The performance of the proposed method in terms of CPSNR is compared with Local Polynomial Approximation (LPA), Gradient Based Threshold Free demosaicing (GBTF) and Multiscale Gradient Based Demosaicing (MGBD). The proposed method outperforms the existing methods.


Table 4.1.1: Comparison of CPSNR Error Measure for Different Demosaicing Methods on the Bayer Pattern

No    LPA     GBTF    MGBD    Proposed
1     40.46   36.19   39.87   40.61
2     41.33   41.99   41.77   46.18
3     43.47   43.66   43.72   47.86
4     40.86   42.38   41.13   45.86
5     37.54   37.86   39.05   42.47
6     40.93   37.74   41.38   42.87
7     43.02   43.16   43.51   47.89
8     37.13   34.94   37.56   39.99
9     43.49   42.01   43.96   47.89
10    42.67   42.67   43.20   47.72
11    40.53   39.09   41.36   43.62
12    43.98   42.43   44.45   48.26
13    36.09   35.22   36.00   37.72
14    36.97   39.19   37.97   42.29
15    40.09   41.86   40.30   45.00
16    43.99   40.12   44.86   46.33
17    41.80   42.43   42.32   46.76
18    37.42   38.97   38.22   41.97
19    41.51   38.42   42.17   44.71
20    41.44   41.86   42.16   45.96
21    39.63   38.76   40.31   42.44
22    38.49   40.15   39.05   43.68
23    43.89   44.08   44.02   47.46
24    35.37   38.32   35.69   41.38
Avg   40.50   40.15   41.00   44.46

Fig: 4.1.1. Performance comparisons after refinement (CPSNR per image for LPA, GBTF, MGBD and the proposed method)

4.2.1. Performance Comparison in terms of SSIM

The performance of the proposed method in terms of SSIM is compared with Multiscale Gradient Based Demosaicing (MGBD). The proposed method outperforms the existing method.

Table 4.2.1: Comparison of SSIM before and after refinement

No    MGBD     Proposed
1     0.9186   0.9523
2     0.9227   0.9711
3     0.9110   0.9595
4     0.9135   0.9616
5     0.9352   0.9621
6     0.8887   0.9586
7     0.9204   0.9615
8     0.9249   0.9540
9     0.9116   0.9488
10    0.9169   0.9529
11    0.8917   0.9526
12    0.8801   0.9600
13    0.9167   0.9473
14    0.9255   0.9579
15    0.9288   0.9668
16    0.9142   0.9544
17    0.9422   0.9589
18    0.9368   0.9638
19    0.9182   0.9553
20    0.9201   0.9523
21    0.9193   0.9561
22    0.9250   0.9571
23    0.9267   0.9635
24    0.9297   0.9550
Avg   0.9183   0.9576

Fig: 4.2.1. Performance comparisons after refinement (SSIM per image for MGBD and the proposed method)

    5. CONCLUSION AND FUTURE WORK



The proposed demosaicing method uses multiscale color gradients to adaptively combine color difference estimates from different directions, and the red and blue channels are then refined using the structural approximation method. The proposed solution does not require any thresholds since it does not make any hard decisions, and it is non-iterative. The relationship between color gradients at different scales can be used to develop high quality CFA interpolation, and the method is easy to implement. Experimental results show the effectiveness of the proposed method, as it clearly outperforms the other available algorithms in terms of CPSNR and SSIM. Further research efforts can focus on improving the results and applying the multiscale gradients idea to other image processing problems.

6. REFERENCES

[1] I. Pekkucuksen and Y. Altunbasak, "Multiscale Gradients-Based Color Filter Array Interpolation," IEEE Trans. Image Process., vol. 22, no. 1, Jan. 2013.
[2] B. E. Bayer, "Color imaging array," U.S. Patent 3 971 065, July 1976.
[3] D. R. Cok, "Signal processing method and apparatus for producing interpolated chrominance values in a sampled color image signal," U.S. Patent 4 642 678, Feb. 1987.
[4] K. L. Chung, W. J. Yang, W. M. Yan, and C. C. Wang, "Demosaicing of color filter array captured images using gradient edge detection masks and adaptive heterogeneity-projection," IEEE Trans. Image Process., vol. 17, no. 12, pp. 2356-2367, Dec. 2008.
[5] B. Gunturk, Y. Altunbasak, and R. Mersereau, "Color plane interpolation using alternating projections," IEEE Trans. Image Process., vol. 11, no. 9, pp. 997-1013, Sept. 2002.
[6] J. W. Glotzbach, R. W. Schafer, and K. Illgner, "A method of color filter array interpolation with alias cancellation properties," in Proc. IEEE Int. Conf. Image Process., vol. 1, 2001, pp. 141-144.
[7] J. W. Glotzbach, R. W. Schafer, and K. Illgner, "A method of color filter array interpolation with alias cancellation properties," Proc. IEEE Int. Conf. Image Process., vol. 1, pp. 141-144, Oct. 2001.
[8] K. Hirakawa and T. W. Parks, "Adaptive homogeneity-directed demosaicing algorithm," IEEE Trans. Image Process., vol. 14, no. 3, pp. 360-369, Mar. 2005.
[9] J. F. Hamilton Jr. and J. E. Adams, "Adaptive color plane interpolation in single sensor color electronic camera," U.S. Patent 5 629 734, May 1997.
[10] Y. Itoh, "CFA Interpolation using Unified Geometry Map," Proc. FIT2008, RI-002, Sept. 2008.
[11] T. Kuno and H. Sugiura, "Practical Color Filter Array Interpolation Part 2 with Non-linear Filter," IEEE Trans. Consumer Electron., vol. 52, no. 4, pp. 1409-1417, Nov. 2006.
[12] X. Li, "Demosaicing by successive approximation," IEEE Trans. Image Process., vol. 14, no. 3, pp. 370-379, Mar. 2005.
[13] R. Lukac and K. N. Plataniotis, "Data adaptive filters for demosaicing: A framework," IEEE Trans. Consumer Electron., vol. 51, no. 2, pp. 560-570, May 2005.
[14] W. Lu and Y.-P. Tan, "Color filter array demosaicing: New method and performance measures," IEEE Trans. Image Process., vol. 12, no. 10, pp. 1194-1210, Oct. 2003.
[15] N.-X. Lian, L. Chang, Y.-P. Tan, and V. Zagorodnov, "Adaptive filtering for color filter array demosaicing," IEEE Trans. Image Process., vol. 16, no. 10, pp. 2515-2525, Oct. 2007.
[16] B. Leung, G. Jeon, and E. Dubois, "Least-squares luma-chroma demultiplexing algorithm for Bayer demosaicing," IEEE Trans. Image Process., vol. 20, no. 7, pp. 1885-1894, Jul. 2011.
[17] D. Menon, S. Andriani, and G. Calvagno, "Demosaicing with directional filtering and a posteriori decision," IEEE Trans. Image Process., vol. 16, no. 1, pp. 132-141, Jan. 2007.
[18] D. Menon and G. Calvagno, "Regularization approaches to demosaicing," IEEE Trans. Image Process., vol. 18, no. 10, pp. 2209-2220, Oct. 2009.
[19] C.-Y. Su and W.-C. Kao, "Effective demosaicing using subband correlation," IEEE Trans. Consumer Electron., vol. 55, no. 1, pp. 199-204, Feb. 2009.
[20] I. Pekkucuksen and Y. Altunbasak, "Edge oriented directional color filter array interpolation," in Proc. IEEE Int. Conf. Acoust. Speech Signal Process., May 2011, pp. 993-996.
[21] Wikipedia, the free encyclopedia, "Demosaicing," August 2010.
[22] Maschal et al., "Review of Bayer pattern color filter array (CFA) demosaicing with new quality assessment algorithms," Technical report, U.S. Army Research Laboratory, 2010.
[23] H. S. Malvar, L.-W. He, and R. Cutler, "High-quality linear interpolation for demosaicing of Bayer-patterned color images," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, 2004.
[24] R. Lukac and K. N. Plataniotis, "Normalized color-ratio modeling for CFA interpolation," IEEE Trans. Consumer Electron., 2004.
[25] R. Cohen, "Demosaicing Algorithms," August 30, 2010.