4 Experiments
(a) "lena" with salt-and-pepper noise; (b) denoising result of the sparse-representation model for Gaussian noise; (c) our denoising result
Figure 5: "lena" corrupted with salt-and-pepper noise, denoised over the DCT dictionary with the classic model and with the improved model
Table 4-2: PSNR comparison of the salt-and-pepper-noisy images and the two models' denoising results

PSNR (dB)   Noisy input   Classic Gaussian model   Improved model
boat        40.0886       33.0430                  43.7256
lena        40.0765       34.2431                  44.1064
The data in Table 4-2 show that when the two salt-and-pepper-corrupted test images "boat" and "lena" are denoised over the DCT dictionary with the classic model and with the improved model, the classic model leaves many residual noise points and its results are unsatisfactory, whereas the improved model produces clearly better, satisfactory results. This confirms that our modification is effective.
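The PSNR values reported in Table 4-2 follow the standard definition. For reference, a minimal sketch of the computation (assuming 8-bit images, so the peak value is 255):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and a denoised estimate."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a constant error of 1 gray level gives about 48.13 dB.
ref = np.zeros((8, 8))
est = ref + 1.0
print(round(psnr(ref, est), 2))  # 48.13
```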
Xi'an Jiaotong University Undergraduate Thesis
5 Conclusions and Outlook
In this thesis we systematically studied sparse linear representation over learned dictionaries and its application to image denoising. Targeting the characteristics of Gaussian and salt-and-pepper noise, we studied and built the classic denoising model suited to Gaussian noise and an improved model suited to salt-and-pepper noise. In the improvement, we introduced a weight function measuring how likely each pixel is to be noise and formulated a weighted sparse-representation model, reducing the influence of noisy pixels on the representation.
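The weighted data-fit idea can be illustrated with a small sketch. The particular weight function and helper names below are hypothetical choices; the model only requires that pixels likely to be salt-and-pepper noise receive near-zero weight when fitting the sparse coefficients:

```python
import numpy as np

def saltpepper_weights(patch, eps=1e-6):
    """Assign near-zero weight to pixels at the extreme gray levels
    (likely salt/pepper outliers) and weight 1 elsewhere. The exact
    form of the weight function here is a hypothetical choice."""
    w = np.ones_like(patch, dtype=np.float64)
    w[(patch <= 0) | (patch >= 255)] = eps
    return w

def weighted_fit(D, y, w):
    """Least-squares fit of coefficients a minimizing
    || diag(sqrt(w)) (y - D a) ||_2 for a fixed dictionary D (n x k)."""
    sw = np.sqrt(w)
    a, *_ = np.linalg.lstsq(D * sw[:, None], y * sw, rcond=None)
    return a

# A corrupted pixel (set to 255) barely affects the fitted coefficients.
rng = np.random.default_rng(0)
D = rng.uniform(0.1, 1.0, size=(8, 3))
a_true = np.array([10.0, 5.0, 2.0])
y = D @ a_true
y_noisy = y.copy()
y_noisy[0] = 255.0
a_hat = weighted_fit(D, y_noisy, saltpepper_weights(y_noisy))
```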
On the algorithmic side, we use orthogonal matching pursuit (OMP) in the sparse-coding stage and the K-SVD algorithm to iteratively update the dictionary D. Experiments show that the overcomplete DCT dictionary, a dictionary trained on patches from a set of high-quality images, and an adaptive dictionary trained on patches of the noisy image itself all denoise very well. Where the Gaussian denoising model fails on salt-and-pepper noise, the improved weighted sparse-representation model achieves the desired results.
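For illustration, a minimal (unoptimized) version of the OMP sparse-coding step might look as follows; the full pipeline would alternate this with K-SVD dictionary updates, which are omitted here:

```python
import numpy as np

def omp(D, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the dictionary atom
    most correlated with the residual, then re-fit all selected
    coefficients by least squares. Assumes the columns of D are
    l2-normalized."""
    residual = y.astype(np.float64).copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(sparsity):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares re-fit on the chosen support.
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coeffs[:] = 0.0
        coeffs[support] = sol
        residual = y - D @ coeffs
    return coeffs
```

With an orthonormal dictionary such as the identity, OMP simply selects the largest entries of the signal, which is a convenient sanity check.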
Further work includes the following:
(1) adding a dictionary-learning stage for D to the numerical solution of the weighted sparse-representation model for salt-and-pepper noise, as is done when solving the Gaussian denoising model, in the hope of further gains;
(2) testing and comparing on more images and at more noise levels, so as to obtain objective and reliable comparisons;
(3) further study of image denoising and sparse representation, seeking more reasonable denoising models and better optimization methods.
References
[1] K. Engan, S. O. Aase, and J. H. Hakon-Husoy. Method of optimal directions for frame design[C]. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, 1999.
[2] K. Kreutz-Delgado and B. D. Rao. FOCUSS-based dictionary learning algorithms[J]. Electrical & Computer Engineering, 2002, 4119: 459-473.
[3] K. Kreutz-Delgado, J. F. Murray, B. D. Rao, et al. Dictionary learning algorithms for sparse representation[J]. Neural Computation, 2003, 15(2): 349-396.
[4] M. S. Lewicki and T. J. Sejnowski. Learning overcomplete representations[J]. Neural Computation, 2000, 12(2): 337-365.
[5] L. Lesage, R. Gribonval, F. Bimbot, et al. Learning unions of orthonormal bases with thresholded singular value decomposition[C]. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, 2005.
[6] M. Aharon, M. Elad, and A. M. Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation[J]. IEEE Trans. Signal Process., 2006, 54(11): 4311-4322.
[7] M. Aharon, M. Elad, and A. M. Bruckstein. On the uniqueness of overcomplete dictionaries, and a practical way to retrieve them[J]. Linear Algebra and Its Applications (special issue devoted to the Haifa 2005 conference on matrix theory), 2006.
[8] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition[C]. Proceedings of the 27th Annual Asilomar Conference on Signals, Systems, and Computers, 1993.
[9] J. Portilla, V. Strela, M. J. Wainwright, et al. Image denoising using scale mixtures of Gaussians in the wavelet domain[J]. IEEE Trans. Image Process., 2003, 12(11): 1338-1351.
[10] J. L. Starck, E. J. Candes, and D. L. Donoho. The curvelet transform for image denoising[J]. IEEE Trans. Image Process., 2002, 11(6): 670-684.
[11] R. Eslami and H. Radha. Translation-invariant contourlet transform and its application to image denoising[J]. IEEE Trans. Image Process., 2006, 15(11): 3362-3374.
[12] B. Matalon, M. Elad, and M. Zibulevsky. Improved denoising of images using modeling of the redundant contourlet transform[C]. The SPIE Conf. Wavelets, 2005.
[13] O. G. Guleryuz. Weighted overcomplete denoising[C]. The Asilomar Conf. Signals and Systems, Pacific Grove, CA, 2003.
[14] O. G. Guleryuz. Nonlinear approximation based image recovery using adaptive sparse reconstructions and iterated denoising: Part I, Theory[J]. IEEE Trans. Image Process., 2006, 15(3): 539-553.
[15] O. G. Guleryuz. Nonlinear approximation based image recovery using adaptive sparse reconstructions and iterated denoising: Part II, Adaptive algorithms[J]. IEEE Trans. Image Process., 2006, 15(3): 554-571.
Appendix
Translation of foreign literature:
Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries
Michael Elad and Michal Aharon
Abstract—We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm. This leads to a state-of-the-art denoising performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
Index Terms—Bayesian reconstruction, dictionary learning, discrete cosine transform (DCT), image denoising, K-SVD, matching pursuit, maximum a posteriori (MAP) estimation, redundancy, sparse representations.
I. INTRODUCTION
In this paper, we address the classic image denoising problem: An ideal image is measured in the presence of an additive zero-mean white and homogeneous Gaussian noise, v, with standard deviation σ. The measured image y is, thus,
y = x + v.    (1)

We desire to design an algorithm that can remove the noise from y, getting as close as possible to the original image, x. The image denoising problem is important, not only because of the evident applications it serves. Being the simplest possible inverse problem, it provides a convenient platform over which image processing ideas and techniques can be assessed. Indeed, numerous contributions in the past 50 years or so addressed this problem from many and diverse points of view. Statistical estimators of all sorts, spatial adaptive filters, stochastic analysis, partial differential equations, transform-domain methods, splines and other approximation theory methods, morphological analysis, order statistics, and more, are some of the many directions explored in studying this problem. In this paper, we have no intention to provide a survey of this vast activity. Instead, we intend to concentrate on one specific approach towards the image denoising problem that we find to be highly effective and promising: the use of sparse and redundant representations over trained dictionaries.
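The measurement model in (1) is straightforward to simulate; a minimal sketch (assuming an 8-bit intensity range and an arbitrary noise level σ = 20):

```python
import numpy as np

# Simulate the measurement model y = x + v of (1).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 255.0, size=(64, 64))   # stand-in for the ideal image x
sigma = 20.0                                  # noise standard deviation
v = rng.normal(0.0, sigma, size=x.shape)      # zero-mean white Gaussian noise v
y = x + v                                     # measured image y
```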
Using redundant representations and sparsity as driving forces for denoising of signals has drawn a lot of research attention in the past decade or so. At first, sparsity of the unitary wavelet coefficients was considered, leading to the celebrated shrinkage algorithm [1]–[9]. One reason to turn to redundant representations was the desire to have the shift invariance
property [10]. Also, with the growing realization that regular separable 1-D wavelets are inappropriate for handling images, several new tailored multiscale and directional redundant transforms were introduced, including the curvelet [11], [12], contourlet [13], [14], wedgelet [15], bandlet [16], [17], and the steerable wavelet [18], [19]. In parallel, the introduction of the matching pursuit [20], [21] and the basis pursuit denoising [22] gave rise to the ability to address the image denoising problem as a direct sparse decomposition technique over redundant dictionaries. All these lead to what is considered today as some of the best available image denoising methods (see [23]–[26] for a few representative works).
While the work reported here is also built on the very same sparsity and redundancy concepts, it is adopting a different point of view, drawing from yet another recent line of work that studies example-based restoration. In addressing general inverse problems in image processing using the Bayesian approach, an image prior is necessary. Traditionally, this has been handled by choosing a prior based on some simplifying assumptions, such as spatial smoothness, low/max-entropy, or sparsity in some transform domain. While these common approaches lean on a guess of a mathematical expression for the image prior, the example-based techniques suggest to learn the prior from images somehow. For example, assuming a spatial smoothness-based Markov random field prior of a specific structure, one can still question (and, thus, train) the derivative filters to apply on the image, and the robust function to use in weighting these filters’ outcome [27]–[29].
When this prior-learning idea is merged with sparsity and redundancy, it is the dictionary to be used that we target as the learned set of parameters. Instead of the deployment of a prechosen set of basis functions as the curvelet or contourlet would do, we propose to learn the dictionary from examples. In this work we consider two training options: 1) training the dictionary using patches from the corrupted image itself or 2) training on a corpus of patches taken from a high-quality set of images.
This idea of learning a dictionary that yields sparse representations for a set of training image-patches has been studied in a sequence of works [30]–[37]. In this paper, we propose the K-SVD algorithm [36], [37] because of its simplicity and efficiency for this task. Also, due to its structure, we shall see how the training and the denoising fuse together naturally into one coherent and iterated process, when training is done on the given image directly.
Since dictionary learning is limited in handling small image patches, a natural difficulty arises: How can we use it for general images of arbitrary size? In this work, we propose a global image prior that forces sparsity over patches in every location in the image (with overlaps). This aligns with a similar idea, appearing in [29], for turning a local MRF-based prior into a global one. We define a maximum a posteriori probability (MAP) estimator as the minimizer of a well-defined global penalty term. Its numerical solution leads to a simple iterated patch-by-patch sparse coding and averaging algorithm that is closely related to the ideas explored in [38]–[40] and generalizes them.
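The averaging step of that patch-by-patch algorithm can be sketched as follows (illustrative only; the function name and raster-order convention are assumptions, and the sparse-coding of each patch is taken as already done):

```python
import numpy as np

def average_patches(patches, image_shape, patch_size):
    """Place denoised overlapping patches back at every location and
    average the contributions per pixel, as in the patch-averaging
    step of the global MAP solution. Patches are assumed to be given
    in raster order, one per pixel location (with overlaps)."""
    acc = np.zeros(image_shape)   # accumulated patch values
    cnt = np.zeros(image_shape)   # how many patches cover each pixel
    n = patch_size
    idx = 0
    for i in range(image_shape[0] - n + 1):
        for j in range(image_shape[1] - n + 1):
            acc[i:i + n, j:j + n] += patches[idx]
            cnt[i:i + n, j:j + n] += 1.0
            idx += 1
    return acc / cnt

# Sanity check: averaging the exact patches of an image reproduces it.
img = np.arange(16.0).reshape(4, 4)
patches = [img[i:i + 2, j:j + 2] for i in range(3) for j in range(3)]
out = average_patches(patches, (4, 4), 2)
```

In the actual algorithm the noisy image itself also enters this average with its own weight; the sketch keeps only the patch-averaging core.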
When considering the available global and multiscale alternative denoising schemes (e.g., based on curvelet, contourlet, and steerable wavelet), it looks like there is much to be lost in working on small patches. Is there any chance of getting a comparable denoising performance with a local-sparsity based method? In that respect, the image denoising work reported in [23] is of great importance. Beyond the specific novel and highly effective algorithm described in that paper, Portilla and his coauthors posed a clear set of comparative experiments that standardize how image denoising algorithms should be assessed and compared one versus the other. We make use of these exact experiments and show that the newly proposed algorithm performs similarly, and, often, better, compared to the denoising performance reported in their work.
To summarize, the novelty of this paper includes the way we use local sparsity and