3108 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 21, NO. 6, JUNE 2012
Abstract—This paper proposes a novel scheme of scalable coding for encrypted images. In the encryption phase, the original pixel values are masked by a modulo-256 addition with pseudorandom numbers that are derived from a secret key. After decomposing the encrypted data into a downsampled subimage and several data sets with a multiple-resolution construction, an encoder quantizes the subimage and the Hadamard coefficients of each data set to reduce the data amount. The data of the quantized subimage and coefficients are then regarded as a set of bitstreams. At the receiver side, while the subimage is decrypted to provide rough information about the original content, the quantized coefficients can be used to reconstruct the detailed content with an iterative updating procedure. Because of the hierarchical coding mechanism, the principal original content with higher resolution can be reconstructed when more bitstreams are received.
Index Terms—Hadamard transform, image compression, image encryption, scalable coding.
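As a concrete illustration of the encryption phase described in the abstract, the following Python/NumPy sketch masks each pixel with a key-derived pseudorandom byte modulo 256. The use of NumPy's seeded generator as the key-to-mask derivation, and all function names, are our assumptions for illustration; the paper only requires that the pseudorandom numbers be derived from a secret key.

```python
import numpy as np

def encrypt_image(img, key):
    # Mask every pixel with a pseudorandom byte, modulo 256.
    # Assumption: NumPy's seeded PRNG stands in for the paper's
    # unspecified key-derived pseudorandom number generator.
    rng = np.random.default_rng(key)
    mask = rng.integers(0, 256, size=img.shape, dtype=np.uint16)
    return ((img.astype(np.uint16) + mask) % 256).astype(np.uint8)

def decrypt_image(enc, key):
    # Regenerate the same mask from the key and subtract it modulo 256.
    rng = np.random.default_rng(key)
    mask = rng.integers(0, 256, size=enc.shape, dtype=np.uint16)
    return ((enc.astype(np.uint16) + 256 - mask) % 256).astype(np.uint8)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
enc = encrypt_image(img, key=42)
assert np.array_equal(decrypt_image(enc, key=42), img)
```

Because addition modulo 256 is invertible for any mask byte, decryption recovers the pixels exactly; the downsampling and Hadamard quantization steps of the scheme are not modeled here.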
I. INTRODUCTION
In recent years, encrypted signal processing has attracted considerable research interest [1]. The discrete Fourier transform and adaptive filtering can be implemented in the encrypted domain based on the homomorphic properties of a cryptosystem [2], [3], and a composite signal representation method can be used to reduce the size of encrypted data and the computational complexity [4]. In joint encryption and data hiding, a part of the significant data of a plain signal is encrypted for content protection, and the remaining data are used to carry the additional message for copyright protection [5], [6]. With some
Manuscript received July 26, 2011; revised October 29, 2011 and December 15, 2011; accepted January 26, 2012. Date of publication February 13, 2012; date of current version May 11, 2012. This work was supported in part by the National Natural Science Foundation of China under Grant 61073190, Grant 61103181, and Grant 60832010, and in part by the Alexander von Humboldt Foundation. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Anthony Vetro.
The authors are with the School of Communication and Information Engineering, Shanghai University, Shanghai 200072, China (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIP.2012.2187671
Fig. 2. Reconstructed Lena using BS , BS BS , BS BS BS , and BS BS BS BS . The values of PSNR in (a), (b), (c), and (d), when regarding the corresponding downsampled versions of the original Lena as references, are 38.4, 34, 37.1, and 38.4 dB.
TABLE I
COMPRESSION RATIOS, PSNR OF RECONSTRUCTED RESULTS, AND ITERATION NUMBERS WITH DIFFERENT  WHEN , , , AND  WERE USED FOR LENA AND MAN
error accumulation in a group, so that in (25) may not be close to . To avoid this case, we let decrease with an increasing , since the spatial correlation in a subimage with lower resolution is weaker. For instance, , , and for . Furthermore, in Step 3, the value of each pixel is assigned as the average of its four neighbors to further approach its original value. Although the estimate of a certain pixel may be very different from its original value, the updating operation in Step 3 can effectively lower the error on that pixel, since its neighbors are probably modified well. Finally, we terminate the iterative procedure when the reconstruction quality is not improved further. Here, the small threshold of 0.10 ensures the convergence of the iterative procedure.
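A minimal Python/NumPy sketch of the Step-3 neighbor-averaging update and a stopping rule follows. The projection back onto the decoded Hadamard data is omitted, the mean-absolute-change test is our stand-in for the paper's "quality no longer improves by more than 0.10" criterion, and all function names are ours.

```python
import numpy as np

def four_neighbor_average(x):
    # Replace each interior pixel with the mean of its four neighbors
    # (up, down, left, right); boundary pixels are left untouched.
    y = x.astype(np.float64).copy()
    y[1:-1, 1:-1] = (x[:-2, 1:-1] + x[2:, 1:-1]
                     + x[1:-1, :-2] + x[1:-1, 2:]) / 4.0
    return y

def iterative_update(x0, tol=0.10, max_iter=500):
    # Iterate the averaging step until the mean absolute change per
    # pixel falls below `tol` -- a proxy (our assumption) for the
    # paper's "reconstruction quality no longer improves" criterion.
    x = x0.astype(np.float64)
    for _ in range(max_iter):
        nxt = four_neighbor_average(x)
        if np.mean(np.abs(nxt - x)) < tol:
            return nxt
        x = nxt
    return x
```

A constant image is a fixed point of the update, which is why repeated averaging pulls badly estimated pixels toward their well-estimated neighbors.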
III. EXPERIMENTAL RESULTS AND DISCUSSION
Two test images, Lena and Man, each sized 512 × 512, were used as the original images in the experiment. We let and encoded the encrypted images using , , , and to produce the bitstreams BG, BS , BS , and BS . In this case, the total compression ratio . Fig. 2 shows the reconstructed Lena using BG , BG BS , BG BS BS , and BG BS BS BS , respectively. Reconstructed results with higher resolution were obtained when more bitstreams were used.
When regarding the corresponding downsampled versions of the original images as references, the PSNR values of the reconstructed results are denoted as PSNR , PSNR , PSNR , and PSNR . While the PSNR values for Lena are 38.4, 34, 37.1, and 38.4 dB, those for Man are 38.4, 31.9, 33.9, and 37.1 dB. In addition, the iterative updating procedure significantly improved the reconstruction quality. For example, while the PSNR of an interpolated 512 × 512 Lena is 23.9 dB, this value in the final reconstructed image is 38.4 dB, a gain of 14.5 dB.
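The PSNR figures quoted here follow the standard definition for 8-bit images, PSNR = 10 log10(255² / MSE). A small helper (the function name is ours):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    ref = reference.astype(np.float64)
    tst = test.astype(np.float64)
    mse = np.mean((ref - tst) ** 2)
    if mse == 0:
        return float("inf")      # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For instance, a reconstruction that is off by the full dynamic range at every pixel scores 0 dB, and the score rises as the mean squared error shrinks.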
Table I lists the compression ratios, the PSNR of the reconstructed results, and the numbers of iterations with respect to different when , , , and were used for the images Lena and Man. All the encryption, encoding, and reconstruction procedures were finished within several seconds on a personal computer. When the value of is larger, the compression ratio is higher, and the reconstruction quality is better since provide more detailed information. As there is less texture/edge content in Lena than in Man, the quality of the reconstructed Lena is better than that of Man. In addition, a larger corresponds to a lower compression ratio and more detailed . When we changed from (4, 8, 12) to (4, 12, 32), the compression ratio decreased from 0.318 to 0.283, and the PSNR values of the reconstructed Lena and Man were 37.8 and 35.2 dB, respectively. Compared with the results in Table I, the new -PSNR performance of Lena is better, whereas that of Man is worse. The reason is that Lena is smoother than Man. For Lena, the
larger was helpful to uniformly distribute the errors on pixels into the Hadamard coefficients, and most of still fell into [−128, 128], so that the quality of the reconstructed result was better. For Man, the excessively large caused more with absolute values bigger than 128, leading to a lower reconstruction quality.

Fig. 3. Performance of the proposed scheme with different .

Fig. 4. Performance comparison of several compression methods.

Fig. 3 gives
the -PSNR curves with different values of . When an encrypted image is decomposed into more levels, more data are involved in quantization and compression; therefore, the -PSNR performance is better, and more iterations are required for image reconstruction. It is also shown that the performance improvement is not significant when using a higher than 3.

We also compare the proposed scheme with the previous methods
and unencrypted JPEG compression in Fig. 4. Because it is difficult
to completely remove the spatial data redundancy by the operations in
the encrypted domain, the rate–distortion performance of the proposed
scheme is significantly lower than that of JPEG compression. On the
other hand, the proposed scheme outperforms the method in [15]. With the method in [15], the original image is encrypted by pixel permutation, which implies that an attacker without knowledge of the secret key can obtain the original histogram from an encrypted image. In the proposed scheme, the original values of all pixels are encrypted by a modulo-256 addition with pseudorandom numbers, leading to semantic security; that is, the attacker cannot obtain the original histogram from an encrypted image. In addition, the method in [15] does not support the function of scalable coding. Liu et al. [12] proposed a lossless compression method for encrypted images in a bit-plane-based fashion. By discarding the encrypted data in the lowest bit planes, the method in [12] can be extended to achieve lossy compression. The performance of the extended method, which is also given in Fig. 4, is better than that of the proposed scheme. However, the method extended from [12] requires a decoder with higher computational complexity as well as the decoder's feedback for the sending rate of each bit plane. That means the proposed scheme is more suitable for real-time decompression and for scenarios without a feedback channel.
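The histogram argument above is easy to demonstrate: pixel permutation leaves the histogram intact, while modulo-256 addition of a pseudorandom mask flattens it. The toy image, seed, and variable names below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# A toy "image" with a very peaked two-value histogram.
img = rng.choice([10, 200], size=(64, 64), p=[0.9, 0.1]).astype(np.uint8)

# Pixel permutation: the histogram is unchanged, so it leaks to an attacker.
perm = rng.permutation(img.size)
permuted = img.flatten()[perm].reshape(img.shape)
assert np.array_equal(np.bincount(img.ravel(), minlength=256),
                      np.bincount(permuted.ravel(), minlength=256))

# Modulo-256 addition of a pseudorandom mask: the histogram becomes
# approximately uniform, hiding the original pixel statistics.
mask = rng.integers(0, 256, size=img.shape, dtype=np.uint16)
masked = ((img.astype(np.uint16) + mask) % 256).astype(np.uint8)
hist = np.bincount(masked.ravel(), minlength=256)
```

After masking, essentially every one of the 256 bins is populated at roughly img.size / 256, whereas the permuted image still shows only the two original spikes.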
IV. CONCLUSION
This paper has proposed a novel scheme of scalable coding for
encrypted images. The original image is encrypted by a modulo-256
addition with pseudorandom numbers, and the encoded bitstreams
are made up of a quantized encrypted subimage and the quantized
remainders of Hadamard coefficients. At the receiver side, while the
subimage is decrypted to produce an approximate image, the quantized
data of Hadamard coefficients can provide more detailed information
for image reconstruction. Since the bitstreams are generated with a
multiple-resolution construction, the principal content with higher
resolution can be obtained when more bitstreams are received. The
lossy compression and scalable coding of encrypted images with better performance deserve further investigation in the future.
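The quantized Hadamard coefficients used by the encoder rely on the Hadamard transform being orthogonal and hence exactly invertible. A minimal Sylvester-construction sketch (standard textbook material, not the paper's code):

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of the n x n Hadamard matrix
    # (n must be a power of two).
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Orthogonality: H H^T = n I, so the transform is exactly invertible,
# which is what lets the decoder recover data from the coefficients
# up to quantization error.
H = hadamard(4)
x = np.array([3.0, 1.0, 4.0, 1.0])
coeffs = H @ x
assert np.allclose((H @ coeffs) / 4.0, x)
```

Because the entries are ±1, the forward transform needs only additions and subtractions, which keeps the encoder lightweight.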
ACKNOWLEDGMENT
The authors would like to thank the anonymous reviewers for their
valuable comments.
REFERENCES
[1] Z. Erkin, A. Piva, S. Katzenbeisser, R. L. Lagendijk, J. Shokrollahi, G. Neven, and M. Barni, "Protection and retrieval of encrypted multimedia content: When cryptography meets signal processing," EURASIP J. Inf. Security, vol. 2007, pp. 1–20, Jan. 2007.
[2] T. Bianchi, A. Piva, and M. Barni, "On the implementation of the discrete Fourier transform in the encrypted domain," IEEE Trans. Inf. Forensics Security, vol. 4, no. 1, pp. 86–97, Mar. 2009.
[3] J. R. Troncoso-Pastoriza and F. Pérez-González, "Secure adaptive filtering,"
[4] T. Bianchi, A. Piva, and M. Barni, "Composite signal representation for fast and storage-efficient processing of encrypted signals," IEEE Trans. Inf. Forensics Security, vol. 5, no. 1, pp. 180–187, Mar. 2010.
[5] S. Lian, Z. Liu, Z. Ren, and H. Wang, "Commutative encryption and watermarking in video compression," IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 6, pp. 774–778, Jun. 2007.
[6] M. Cancellaro, F. Battisti, M. Carli, G. Boato, F. G. B. Natale, and A. Neri, "A commutative digital image watermarking and encryption method in the tree structured Haar transform domain," Signal Process. Image Commun., vol. 26, no. 1, pp. 1–12, Jan. 2011.
[7] N. Memon and P. W. Wong, "A buyer-seller watermarking protocol," IEEE Trans. Image Process., vol. 10, no. 4, pp. 643–649, Apr. 2001.
[8] M. Kuribayashi and H. Tanaka, "Fingerprinting protocol for images based on additive homomorphic property," IEEE Trans. Image Process., vol. 14, no. 12, pp. 2129–2139, Dec. 2005.
[9] M. Johnson, P. Ishwar, V. M. Prabhakaran, D. Schonberg, and K. Ramchandran, "On compressing encrypted data," IEEE Trans. Signal Process., vol. 52, no. 10, pp. 2992–3006, Oct. 2004.
[10] D. Schonberg, S. C. Draper, and K. Ramchandran, "On blind compression of encrypted correlated data approaching the source entropy rate," in Proc. 43rd Annu. Allerton Conf., Allerton, IL, 2005.
[11] R. Lazzeretti and M. Barni, "Lossless compression of encrypted grey-level and color images," in Proc. 16th EUSIPCO, Lausanne, Switzerland, Aug. 2008 [Online]. Available: http://www.eurasip.org/Proceedings/Eusipco/Eusipco2008/papers/1569105134.pdf
[12] W. Liu, W. Zeng, L. Dong, and Q. Yao, "Efficient compression of encrypted grayscale images," IEEE Trans. Signal Process., vol. 19, no. 4, pp. 1097–1102, Apr. 2010.
Shuang Wang, Lijuan Cui, Samuel Cheng, Lina Stankovic, andVladimir Stankovic
Abstract—We propose an adaptive distributed compression solution using particle filtering that tracks correlation, as well as performing disparity estimation, at the decoder side. The proposed algorithm is tested on stereo solar images captured by the twin-satellite system of NASA's Solar TErrestrial RElations Observatory (STEREO) project. Our experimental results show improved compression performance with respect to a benchmark compression scheme, accurate correlation estimation by our proposed particle-based belief propagation algorithm, and significant peak signal-to-noise ratio improvement over traditional separate bit-plane decoding without dynamic correlation and disparity estimation.
Index Terms—Distributed source coding, image compression, multiview
imaging, remote sensing.
I. INTRODUCTION
Onboard data processing has been a challenging task in remote
sensing applications due to severe computational limitations of
onboard equipment. This is especially the case in deep-space applications, where mission spacecraft collect a vast number of images. In such emerging applications, efficient low-complexity image
compression is a must. While conventional solutions such as JPEG
have been used in many prior missions, the demand for increasing
image volume and resolution, as well as increased space resolution
Manuscript received July 17, 2011; revised November 28, 2011; accepted January 17, 2012. Date of publication February 13, 2012; date of current version May 11, 2012. This work was supported in part by the National Science Foundation under Grant CCF 1117886. This paper was presented in part at the IEEE International Conference on Image Processing (ICIP-2011), Brussels, Belgium, September 2011. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Brian D. Rigling.
S. Wang, L. Cui, and S. Cheng are with the School of Electrical and Computer Engineering, The University of Oklahoma at Tulsa, Tulsa, OK 74135-2512 USA (e-mail: [email protected]; [email protected]; [email protected]).
L. Stankovic and V. Stankovic are with the Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow G1 1XW, U.K. (e-mail: [email protected]; [email protected]).
Color versions of one or more of the figures in this paper are available onlineat http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIP.2012.2187669
and wide-swath imaging, calls for higher coding efficiency at reduced encoding complexity.
NASA's Solar TErrestrial RElations Observatory (STEREO) is providing groundbreaking images of the Sun using two space-based observatories.1 These images aim to reveal the processes in the solar surface
(photosphere) through the transition region into the corona and provide
the 3-D structure of coronal mass ejections (CMEs).
A variety of image compression tools are currently used in deep-space missions, ranging from Rice and lossy wavelet-based compression tools (used in the PICARD mission by CNES, 2009), discrete cosine