EE 5359 MULTIMEDIA PROCESSING

SPRING 2012

PROJECT PROPOSAL

IMPLEMENTATION AND STUDY OF UNIFIED LOOP FILTER IN H.264

Under guidance of

DR K R RAO

DEPARTMENT OF ELECTRICAL ENGINEERING

UNIVERSITY OF TEXAS AT ARLINGTON

Presented by

PAVAN KUMAR REDDY GAJJALA

1000769393

[email protected]

ACRONYMS AND ABBREVIATIONS

ALF: Adaptive Loop Filter

AVC: Advanced Video Coding

CABAC: Context-based Adaptive Binary Arithmetic Coding

CAVLC: Context-based Adaptive Variable Length Coding

DLF: De-blocking Loop Filter

DPB: Decoded Picture Buffer

DVB: Digital Video Broadcasting

FMO: Flexible Macro block Ordering

GOP: Group of Pictures

ISO: International Organization for Standardization

ITU: International Telecommunication Union

JVT: Joint Video Team

LMSE: Least Mean Square Error

MC: Motion Compensation

MDCT: Modified Discrete Cosine Transform

ME: Motion Estimation

MPEG: Moving Picture Experts Group

PSNR: Peak Signal to Noise Ratio

RDO: Rate Distortion Optimization

SI: Switching I

SP: Switching P

SSIM: Structural Similarity Index Metric

ULF: Unified Loop Filter

VCEG: Video Coding Experts Group

ABSTRACT

The project is based on the implementation and study of the unified loop filter for video coding in H.264 [1], which suppresses quantization noise optimally and improves the objective and subjective quality of the reconstructed picture simultaneously. The unified loop filter unifies a nonlinear enhancement filter and a linear restoration filter within the classical optimization framework of least mean square error. Experimental results show that the unified loop filter achieves superior objective coding gain and better visual quality improvement around edges and textures, when compared with the loop de-blocking filter in the H.264/AVC high profile. [2]

MOTIVATION

The motivation to unify de-blocking filtering and Wiener filtering into one filtering framework is that, if the number of sources causing information loss is reduced, the capability of picture restoration can be further improved. [2]

INTRODUCTION

1. OVERVIEW OF H.264

H.264 or MPEG-4 Part 10: AVC [12] is the next generation video codec developed by

MPEG (moving picture experts group) of ISO/IEC and VCEG (video coding experts group) of

ITU-T, together known as the JVT (joint video team). The H.264/MPEG-4 AVC standard, like

previous standards [14], is based on motion compensated transform coding.

H.264 also uses hybrid block based video compression techniques such as transformation

for reduction of spatial correlation, quantization for bit-rate control, motion compensated

prediction for reduction of temporal correlation and entropy coding for reduction in statistical

correlation. The important changes in H.264 occur in the details of each functional element. It

includes adaptive intra-picture prediction, a new 4x4 integer transform, multiple reference

pictures, variable block sizes, a quarter pel precision for motion compensation, an in-loop de-

blocking filter, and improved entropy coding.

Fig 1.1 shows the H.264 encoder block diagram and Fig 1.2 shows the H.264 decoder

block diagram.

Fig. 1.1: H.264 encoder block diagram [12]

Fig. 1.2: H.264 decoder block diagram [12]

The functions of different blocks of the H.264 encoder are described below:

Transform: A 4x4 integer transform is used, and the transform coefficients are explicitly specified in AVC, which allows the transform to be perfectly invertible. In AVC, transform coding always uses predictions to construct the residuals, even in the case of intra macro blocks. [10]

Quantization and scan: The standard specifies the mathematical formulae of the quantization

process. The scale factor for each element in each sub-block varies as a function of the

quantization parameter associated with the macro block that contains the sub block, and as a

function of the position of the element within the sub-block. The rate-control algorithm in the

encoder controls the value of quantization parameter. [10]

CAVLC and CABAC entropy coders: VLC encoding of syntax elements for the compressed

stream is performed using Exp-Golomb codes. For transform coefficient coding AVC includes

two different entropy coding methods for coding quantized coefficients of the transform. The

entropy coding method can change as often as every picture. [10]
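The Exp-Golomb codes mentioned above can be sketched in a few lines; the function names below mirror the H.264 syntax element names ue(v) and se(v), but the sketch itself is illustrative, not taken from any codec source.

```python
def ue(v):
    """Unsigned Exp-Golomb: write v + 1 in binary, prefixed with as many
    zeros as there are bits after its leading one."""
    bits = bin(v + 1)[2:]
    return "0" * (len(bits) - 1) + bits

def se(s):
    """Signed Exp-Golomb: map s to the code number 0, 1, -1, 2, -2, ...
    and encode that code number with ue()."""
    return ue(2 * s - 1 if s > 0 else -2 * s)
```

For example, ue(0) gives "1", ue(3) gives "00100", and se(-1) gives "011"; small, frequent values get the shortest codewords.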

De-blocking filter: This filter operates on a macro block after motion compensation and

residual coding, or on a macro block after intra-prediction and residual coding, depending on

whether the macro block is inter-coded or intra-coded. The result of the loop filtering operation

is stored as a reference picture. The loop filter operation is adaptive in response to several factors

such as the quantization parameter of the current and neighboring macro blocks, the magnitude

of the motion vector and the macro block coding type. [10] (Fig 1.3)

Fig 1.3 shows the block diagram of the de-blocking filter in the H.264 encoder [10]

Mode decision: It determines the coding mode for each macro block. Mode decision to achieve

high efficiency may use rate distortion optimization. Mode decision works with rate control

algorithm and the outcome is the best-selected coding mode for a macro block. [10]

Intra prediction: Prediction for intra macro blocks is called intra-prediction and is done in

pixel-domain in this standard. The standard describes intra-prediction as linear interpolations of

pixels from the adjacent edges of neighboring macro blocks that are decoded before the current

macro block. The interpolations are directional in nature, with multiple modes, each implying a

spatial direction of prediction. For luminance pixels with 4x4 partitions, 9 intra-prediction modes

are defined. Four intra-prediction modes are defined when a 16x16 partition is used: mode 0, mode 1, mode 2 and mode 3 [10]. Table 2.1 shows the different prediction block sizes like

16×16, 8×8, 4×4 and their possible prediction modes.

Table 2.1 Different intra prediction block sizes with possible prediction modes [10]

4 × 4 luma prediction modes: Fig 2.1 shows a sample 4×4 luma block (P) to be predicted. The

samples above and to the left, labeled A-M in Fig 2.1, have previously been encoded and

reconstructed and are therefore available in the encoder and decoder to form a prediction

reference. The samples a, b, c . . . . p of the prediction block P (Fig 2.1) are calculated based on

the samples A-M. Table 2.2 shows the possible prediction modes for a 4×4 luma block; the

arrows in Fig 2.2 indicate the direction of prediction in each mode. For modes 3-8, the predicted

samples are formed from a weighted average of the prediction samples A-M. For example, if

mode 4 is selected, the top-right sample of P, labeled ‘d’, is predicted by:

d = round (B/4+C/2+D/4).
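This rounding can be carried out in integer arithmetic, as the standard does; a minimal sketch (the helper name is ours):

```python
def filt3(a, b, c):
    """[1 2 1]/4 weighted average with rounding, as used by the diagonal
    4x4 intra prediction modes; for sample d of mode 4 this is
    filt3(B, C, D) = round(B/4 + C/2 + D/4)."""
    return (a + 2 * b + c + 2) >> 2
```

For instance, with B = 60, C = 80 and D = 100, filt3(60, 80, 100) returns 80.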

Fig 2.1 4 × 4 luma block to be predicted [10]

Table 2.2 Labeling of prediction samples, 4 × 4 prediction [10]

Fig 2.2 4 × 4 intra prediction modes [10]

16 × 16 luma prediction modes: As an alternative to the 4×4 luma modes described, the entire

16×16 luma component of a macro block may be predicted in one operation. Four modes are

available, shown in diagram form in Fig 2.3 and in tabular form in Table 2.3. [10]

Fig 2.3 Intra 16 × 16 prediction modes [10]

Table 2.3 Labeling of prediction samples, 16×16 prediction [10]

Inter prediction: Inter prediction is the process of predicting a block of luma and chroma

samples from a picture that has previously been coded and transmitted, a reference picture. This

involves selecting a prediction region, generating a prediction block and subtracting this from the

original block of samples to form a residual that is then coded and transmitted. The block of

samples to be predicted, a macro block partition or sub-macro block partition, can range in size

from a complete macro block, i.e. 16 × 16 luma samples and corresponding chroma samples,

down to a 4 × 4 block of luma samples and corresponding chroma samples. The reference picture

is chosen from a list of previously coded pictures, stored in a decoded picture buffer (DPB),

which may include pictures before and after the current picture in display order. The offset

between the position of the current partition and the prediction region in the reference picture is a

motion vector. The motion vector may point to integer, half- or quarter-sample positions in the

luma component of the reference picture. Half- or quarter-sample positions are generated by

interpolating the samples of the reference picture. Each motion vector is differentially coded

from the motion vectors of neighboring blocks. The prediction block may be generated from a

single prediction region in a reference picture, for a P or B macro block, or from two prediction

regions in reference pictures, for a B macro block [10]. Following are the steps involved in inter

prediction.

Interpolate the picture(s) in the decoded picture buffer (DPB), to generate 1/4-sample

positions in the luma component and 1/8-sample positions in the chroma components.

Fig 3.1 explains the interpolation of luma half pel pixels.

Luma Component: The half-pel samples in the luma component of the reference picture are generated first (grey markers in Fig 3.1). Each half-pel sample that is adjacent to two integer samples, e.g. b, h, m, s in Fig 3.1, is interpolated from integer-pel samples using a 6-tap finite impulse response (FIR) filter with weights (1/32, −5/32, 5/8, 5/8, −5/32, 1/32). For example, half-pel sample b is calculated from the 6 horizontal integer samples E, F, G, H, I and J using a process equivalent to:

b = round ((E − 5F + 20G + 20H − 5I + J)/32)

Similarly, h is interpolated by filtering A, C, G, M, R and T. Once all of

the samples adjacent to integer samples have been calculated, the remaining half-pel positions are calculated by interpolating between six horizontal or vertical half-pel samples from the first set of operations. For example, j is generated by filtering cc, dd, h, m, ee and ff. Note that the result is the same whether j is interpolated horizontally or vertically. The 6-tap interpolation filter is relatively complex but produces an accurate fit to the integer-sample data and hence good motion compensation performance.

Fig 3.1 Interpolation of luma half-pel positions [10]
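The 6-tap filtering above can be expressed in integer arithmetic; the sketch below computes one half-pel sample (the function name and the 8-bit clipping range are our assumptions):

```python
def half_pel(e, f, g, h, i, j):
    """One luma half-pel sample from six integer samples using the
    (1, -5, 20, 20, -5, 1)/32 filter, with rounding (+16 before the
    shift by 5) and clipping to the 8-bit sample range."""
    val = (e - 5 * f + 20 * g + 20 * h - 5 * i + j + 16) >> 5
    return max(0, min(255, val))
```

On a flat region of value 100, half_pel(100, 100, 100, 100, 100, 100) returns 100, as expected of an interpolation filter whose weights sum to one.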

Choose an inter prediction mode from the following options:

(a) Choice of reference picture(s), previously-coded pictures available as sources

for prediction. (Table 3.1)

(b) Choice of macro block partitions and sub-macro block partitions, i.e.

prediction block sizes. (Fig 3.2)

(c) Choice of prediction types:

(i) Prediction from one reference picture in list 0 for P or B macro blocks

or list 1 for B macro blocks only.

(ii) Bi-prediction from two reference pictures, one in list 0 and one in list

1, B macro blocks only, optionally using weighted prediction.

Table 3.1 Reference picture sources [10]

Fig 3.2 Macro block partitions and sub-macro block partitions [10]

Choose motion vector(s) for each macro block partition or sub-macro block partition, one

or two vectors depending on whether one or two reference pictures are used. (Table 3.2)

Table 3.2 Reference frames and motion vectors for P and B macro blocks [10]

Predict the motion vector(s) from previously-transmitted vector(s) and generate motion

vector difference(s). Optionally, use direct mode prediction, B macro blocks only.

Code the macro block type, choice of prediction reference(s), motion vector difference(s)

and residual.

Apply a de-blocking filter prior to storing the reconstructed picture as a prediction

reference for further coded pictures.

H.264/AVC profiles

The H.264 standard is defined with a large variety of coding tools. This is done to make sure that the

standard caters to all classes of applications. However, not all tools are required for a particular

application. So, the coding tools are segregated into different groups called profiles. The basic

profiles defined in the standard are shown in Fig. 4.1

Fig. 4.1: Profile structure in H.264 [12]

Some common features to all profiles are:

1. Intra-coded slices (I slice): These slices are coded using prediction only from decoded

samples within the same slice.

2. Predictive-coded slices (P slice): These slices are usually coded using inter prediction

from previously decoded reference pictures, except for some macro blocks in P slices that

are intra coded. Sample values of each block are predicted using at most one motion vector and reference picture, optionally with weighted prediction.

3. 4×4 modified integer DCT.

4. CAVLC for entropy encoding.

5. Exponential Golomb encoding for headers and associated slice data.

Baseline profile: I- and P-slice coding, enhanced error resilience tools (flexible macro block

ordering (FMO), arbitrary slice ordering and redundant slices), and CAVLC; it offers the least coding

efficiency. [12]

Extended profile: A superset of the baseline profile; in addition to the baseline tools it includes B-, SP- and SI-slices, data partitioning, and interlace coding tools, and provides better coding efficiency. [12]

Main profile: I-, P- and B-slices, interlace coding, CAVLC and CABAC; provides the highest

possible coding efficiency, designed to best suit the digital storage media, television broadcasting

and set-top box applications. [12]

PRINCIPLE OF UNIFIED LOOP FILTER:

The unified loop filter [2] unifies a nonlinear enhancement filter (for removing blocking and ringing artifacts) and a linear restoration filter (for improving coding efficiency) within the classical optimization framework of least mean square error (LMSE). The joint use of the DLF and the ALF [5] can be replaced by the ULF [2]. The unified loop filter unifies the nonlinear bilateral filter and the linear Wiener filter, which are explained in the following sections.

Suppose that X = (x_1, x_2, ..., x_N)^T is a support column vector containing N pixels of the reconstructed picture, arranged in spatial order around the central pixel x_c, as shown in Fig. 4.2.

Fig. 4.2: Support vector X [2].

1. Nonlinear Bilateral Filter

The nonlinear bilateral filter [13] is designed to address the limitation of low-pass de-blocking filters, and has been shown to be effective in de-noising both blocking and ringing artifacts while retaining the sharpness of real edges. The output of the bilateral filter is given by

y_b = ( Σ_{i=1..N} ω(x_c, x_i) · x_i ) / ( Σ_{i=1..N} ω(x_c, x_i) )        (1)

where ω is the weighting function that is designed to smooth in regions of similar intensity while keeping edges intact, by heavily weighting those pixels that have both geometric closeness and photometric similarity to the central pixel:

ω(x_c, x_i) = exp(−d(x_c, x_i)^2 / (2σ_g^2)) · exp(−(x_c − x_i)^2 / (2σ_p^2))        (2)

where d(x_c, x_i) is the Euclidean distance between x_c and x_i, and σ_g and σ_p denote the geometric spread parameter and the photometric spread parameter, respectively.
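The weighting above can be sketched in one dimension as follows; this is a toy illustration, not the codec implementation, and index distance stands in for the Euclidean distance:

```python
import math

def bilateral_1d(samples, c, sigma_g, sigma_p):
    """Bilateral-filter the sample at index c: each neighbour is weighted
    by a Gaussian of its spatial distance (spread sigma_g) times a
    Gaussian of its intensity difference (spread sigma_p), then the
    weighted sum is normalised."""
    xc = samples[c]
    num = den = 0.0
    for i, xi in enumerate(samples):
        w = math.exp(-((i - c) ** 2) / (2 * sigma_g ** 2)) \
          * math.exp(-((xi - xc) ** 2) / (2 * sigma_p ** 2))
        num += w * xi
        den += w
    return num / den
```

On a flat signal the output equals the input; across a sharp edge, a small sigma_p gives the dissimilar pixels negligible weight, so the edge is preserved rather than smeared.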

2. Linear Wiener Filter

The Wiener filter [14] is a well-known optimal linear filter for coping with pictures degraded by Gaussian noise, blurring and distortion caused by compression. The output of the Wiener filter is given by

y_l = Σ_{i=1..N} w_i · x_i = W_l^T X        (3)

where W_l is a column vector of N optimal filter coefficients, which can be obtained by the least mean square error (LMSE) algorithm [16], and X is the support vector shown in Fig 4.2.
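A minimal sketch of obtaining W_l by LMSE over training pairs of support vectors and original pixels, via the normal equations R·W = p (pure-Python solver; all names are ours):

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def wiener_coefficients(supports, originals):
    """LMSE fit: minimise the squared error between original pixels and
    W^T X over training pairs, i.e. solve R W = p where R is the
    autocorrelation of the supports and p the cross-correlation with
    the originals."""
    n = len(supports[0])
    R = [[sum(X[i] * X[j] for X in supports) for j in range(n)] for i in range(n)]
    p = [sum(X[i] * o for X, o in zip(supports, originals)) for i in range(n)]
    return solve_linear(R, p)
```

In practice the encoder accumulates R and p over a frame and transmits the resulting coefficients as side information.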

3. Unified loop filter

In order to achieve both objective and subjective quality optimization, the nonlinear similarity-ordered statistics filter is concatenated with the linear spatially ordered statistics filter, a.k.a. the Wiener filter [14], to form the proposed unified loop filter [2]. Therefore, the output of the unified loop filter becomes

y_u = W_u^T X_u        (4)

where W_u is a column vector of M + N optimal filter coefficients (N coefficients for the nonlinear part and M coefficients for the linear part), and

X_u = (x'_(1), x'_(2), ..., x'_(N), x_1, x_2, ..., x_M)^T = (x_u(1), x_u(2), ..., x_u(M+N))^T

is the support vector for W_u. The advantages of this filter are that, in the presence of singularities, the weights on

the similarity ordered pixels (nonlinear part) that are similar to the central pixel value can be

increased to better preserve edges; in cases in which frequency selectivity is most advantageous,

the weights on the spatially ordered pixels (linear part) can be set accordingly [2]. The block

diagram of H.264 encoder with unified loop filter is shown in Fig 4.3.

Fig 4.3: H.264 encoder block diagram with unified loop filter.[2]
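The concatenated support and the resulting inner product can be sketched as follows; this is a simplified illustration of the idea in [2], and the function names and the toy similarity ordering are our assumptions:

```python
def unified_support(center, neighbours, n_sim):
    """Build a toy X_u: the n_sim neighbours most similar in intensity to
    the central pixel (nonlinear, similarity-ordered part), followed by
    all neighbours in spatial order (linear, Wiener part)."""
    ordered = sorted(neighbours, key=lambda x: abs(x - center))
    return ordered[:n_sim] + list(neighbours)

def ulf_output(weights, center, neighbours, n_sim):
    """y_u = W_u^T X_u: a single inner product over the unified support."""
    xu = unified_support(center, neighbours, n_sim)
    return sum(w * x for w, x in zip(weights, xu))
```

Placing more weight on the leading (similarity-ordered) entries preserves edges; placing it on the trailing (spatially ordered) entries recovers the frequency selectivity of a Wiener filter.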

4. Unified loop filter design

For each nonlinear and linear group, the unified loop filter should be carefully designed to match different quantization error characteristics. Here, a classification-based filter design strategy is used. For the enhancement pixels of the luma component, the unified loop filter should not only emphasize de-blocking enhancement, but also take restoration into consideration. [2] Therefore, the nonlinear part consists of one 12-tap diamond filter, whereas the linear part consists of four kinds of taps (1-tap, 13-tap, 25-tap, and 41-tap) diamond filters with quadrant symmetry, as shown in Fig. 4.4(a). For the

restoration pixels of the luma component, the unified loop filter should not only emphasize restoration, but also take de-ringing enhancement into consideration. Therefore, the nonlinear part consists of one 8-tap cross filter, whereas the linear part consists of three kinds of taps (13-tap, 25-tap, and 41-tap) diamond filters with central point symmetry, as shown in Fig. 4.4(b). The reason for only one filter type in the nonlinear part is that a shorter filter cannot efficiently remove artifacts, whereas a longer filter will increase the side information significantly. For the chroma components, the nonlinear part consists of one 4-tap diamond filter, whereas the linear part consists of two kinds of taps (1-tap and 13-tap) diamond filters with quadrant symmetry, as shown in Fig. 4.5(a) and Fig. 4.5(b).

Fig. 4.4: Construction of classification-based unified loop filter for (a) enhancement pixels and (b) restoration pixels of luma component [2]

Fig. 4.5: Construction of classification-based unified loop filter for (a) enhancement pixels and (b) restoration pixels of chroma component [2]

Since the unified loop filter for each group has different combinations of the nonlinear part and the linear part, the best combination for each group is decided by RDO selection on a frame basis:

J_filter = D + λ·R

where D is the distortion between the filtered frame and the original frame, and R is the number of bits for the filter side information, which includes the filter tap type and the filter coefficient quantization bits. The filter coefficients can be encoded in two modes: 10-bit fixed length coding or temporal prediction coding with exp-Golomb code. [2]
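The per-frame selection amounts to minimising J over the candidate configurations; a sketch (the candidate names, distortions and bit counts below are placeholders):

```python
def select_filter(candidates, lam):
    """Pick the filter configuration minimising J = D + lambda * R.

    candidates -- iterable of (name, distortion, side_info_bits) tuples
    lam        -- Lagrange multiplier trading distortion against rate
    """
    return min(candidates, key=lambda c: c[1] + lam * c[2])
```

When rate is expensive (large lambda) a short filter with little side information wins; when distortion dominates (small lambda) a longer filter can be selected despite its extra bits.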

Goal:

The main aim of the project is to take advantage of the order statistics of the filter [4] and unify the nonlinear bilateral filter [13] and the linear Wiener filter [14] into a unified loop filter which improves the reconstructed image (removes ringing and blocking artifacts and reduces quantization noise) and thus improves the prediction, which overall increases the compression efficiency of the video. The project also compares the performance of H.264 with and without the unified loop filter (ULF) on various test sequences at different bit rates in terms of MSE, PSNR and SSIM [15].
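The MSE and PSNR comparisons can be computed as below; SSIM follows [15] and is omitted for brevity, and an 8-bit peak value of 255 is assumed:

```python
import math

def mse(ref, rec):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(ref, rec)) / len(ref)

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical inputs."""
    m = mse(ref, rec)
    return float("inf") if m == 0 else 10.0 * math.log10(peak ** 2 / m)
```

These are computed per frame and averaged over each test sequence at each bit rate.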

References

[1] S. Kwon, A. Tamhankar and K. R. Rao, “Overview of H.264/MPEG-4 Part 10”, 4th EURASIP Conference on Video/Image Processing and Multimedia Communications, vol. 1, pp. 1-51, July 2003.

[2] Y. Liu, “Unified Loop Filter for Video Compression”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 10, pp. 1378 – 1382, Oct 2010.

[3] T. Wedi “Adaptive interpolation filter for motion compensated prediction”, IEEE International Conference on Image Processing (ICIP2002), New York, USA, vol. 2, pp. II-509 - II-512, Sept. 2002.

[4] A. Bovik , T. Huang and D. Munson, “A generalization of median filtering using linear combinations of order statistics”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 31,no. 6, pp. 1342 – 1350, Dec. 1983.

[5] Y. Liu and Y. Huo, “Unified loop filter for high-performance video coding”, IEEE International Conference on Multimedia and Expo (ICME), pp. 1271-1276, July 2010.

[6] S. Wittmann and T. Wedi, “Transmission of post-filter hints for video coding schemes”, IEEE International Conference on Image Processing (ICIP), vol. 1, pp. 81-84, Sept.-Oct. 2007.

[7] P. List, A. Joch, J. Lainema, G. Bjontegaard and M. Karczewicz, “Adaptive deblocking filter”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7, pp. 614-619, July 2003.

[8] C. Qian, Z. Yunfei, Y. Peng, L. Xiaoan, J. Sole, X. Qian, E. Francois and W. Dapeng, “Classified quad tree-based adaptive loop filter”, 2011 IEEE International Conference on Multimedia and Expo (ICME), pp. 1-6, 15 July 2011.

[9] A. Bovik, T. Huang and D. Munson , “Nonlinear filtering using linear combinations of order statistics”, IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 2067-2070, May 1982.

[10] I. E. Richardson, “The H.264 advanced video compression standard”, 2nd Edition, Wiley, 2010.

[11] H.264/AVC JM reference software. Website: http://iphome.hhi.de/suehring/tml/download

[12] T. Wiegand et al, “Overview of the H.264/AVC video coding standard”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, pp. 560-576, July 2003.

[13] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images”, Sixth International Conference on Computer Vision (ICCV), pp. 839-846, Jan. 1998.

[14] Y. Chiu and L. Xu, “Adaptive (Wiener) filter for video compression”, ITU-T SG16 Contribution, C437, Geneva, April 2008.

[15] Z. Wang et al, “Image quality assessment: from error visibility to structural similarity”, IEEE Transactions on Image Processing, vol. 13, pp. 600-612, April 2004.

[16] X. Wang, “Recursive algorithms for linear LMSE estimators under uncertain observations”, IEEE Transactions on Automatic Control, vol. 29, pp. 853-854, Sept. 1984.