University of Technology Graduation Thesis
Conclusion
Image segmentation underlies higher-level image understanding tasks such as feature extraction and recognition, and research on it has long been a focus of digital image processing. This thesis first introduced the basic principles and main methods of image segmentation within digital image processing, and then studied segmentation methods based on edges, on regions, and on the morphological watershed. Among the edge-based methods, those built on edge operators and on the Hough transform were discussed; among the region-based methods, region growing, region splitting and merging, and thresholding were discussed. Each method was then simulated in MATLAB to obtain segmented images, and finally the simulation results were analyzed.
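As a simple illustration of the edge-operator family of methods discussed above, the following sketch computes a Sobel gradient magnitude in plain Python. It is illustrative only: the thesis experiments were carried out in MATLAB, and the function name and image representation here are our own choices, not the code used in the experiments.

```python
# Minimal Sobel edge-magnitude sketch. The grayscale image is a
# list of lists of intensities (0-255); border pixels are left 0.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Convolve the 3x3 neighborhood with both kernels.
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

A vertical step edge yields a strong response exactly along the gray-level transition, which is the behavior the simulations in this thesis rely on.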
The experimental results show that edge-based segmentation achieves good results on relatively simple images whose edges have sharp gray-level transitions and little noise, being only mildly affected by noise and breaks in curves; region-based segmentation exploits local spatial information and can effectively overcome the spatial discontinuities that other methods leave in the segmentation; threshold segmentation is intuitive and simple to implement, and is therefore widely used; and the watershed method responds well to weak edges and can cleanly separate target objects that touch one another.
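The way region-based segmentation exploits local spatial information can be sketched as a simple seeded region-growing pass. This is a minimal Python sketch under our own assumptions (single seed, fixed tolerance against the seed value, 4-connectivity), not the MATLAB implementation used in the experiments.

```python
from collections import deque

# Grow a region from one seed pixel: a pixel joins if its gray
# value differs from the seed's by at most `tol`. 4-connected.
def region_grow(img, seed, tol):
    h, w = len(img), len(img[0])
    sy, sx = seed
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and (ny, nx) not in region
                    and abs(img[ny][nx] - img[sy][sx]) <= tol):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region
```

Because membership is decided by spatial adjacency as well as gray value, the grown region is connected by construction, which is exactly the spatial-continuity advantage noted above.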
Each method also has shortcomings. Edge-based segmentation handles images with complex edges poorly, producing blurred, missing, or discontinuous edges; the stability, accuracy, and speed of region-based segmentation all suffer greatly if its threshold is chosen badly; thresholding can separate the main foreground and background regions but offers little discrimination in fine image detail; and the watershed algorithm is prone to over-segmentation.
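Since the sensitivity of threshold choice is raised above, it is worth sketching one standard automatic choice, Otsu's method, which picks the threshold maximizing between-class variance of the histogram. This is a generic textbook sketch in plain Python, not the specific thresholding code used in the thesis experiments.

```python
# Otsu's threshold over a flat list of gray values (0-255):
# scan all thresholds, keep the one maximizing the
# between-class variance w_b * w_f * (m_b - m_f)^2.
def otsu_threshold(pixels):
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(256):
        w_b += hist[t]          # background weight
        if w_b == 0:
            continue
        w_f = total - w_b       # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b       # background mean
        m_f = (total_sum - sum_b) / w_f  # foreground mean
        between = w_b * w_f * (m_b - m_f) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

On a clearly bimodal histogram the maximizing threshold falls between the two modes, which is why such global methods separate the main foreground and background well while still failing on fine detail.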
In practical applications, better segmentation results can be obtained by combining edge-based methods with morphological methods, or by replacing region growing with region splitting and merging; the experimental results could be improved along these lines.
Appendix I: Translation of a Foreign-Language Paper
(1) Original text of the foreign-language paper
Robust Analysis of Feature Spaces: Color Image Segmentation
Abstract
A general technique for the recovery of significant image features is presented. The technique is based on the mean shift algorithm, a simple nonparametric procedure for estimating density gradients. Drawbacks of the current methods (including robust clustering) are avoided. Feature space of any nature can be processed, and as an example, color image segmentation is discussed. The segmentation is completely autonomous, only its class is chosen by the user. Thus, the same program can produce a high quality edge image, or provide, by extracting all the significant colors, a preprocessor for content-based query systems. A 512×512 color image is analyzed in less than 10 seconds on a standard workstation. Gray level images are handled as color images having only the lightness coordinate.
Keywords: robust pattern analysis, low-level vision, content-based indexing
1 Introduction
Feature space analysis is a widely used tool for solving low-level image understanding tasks. Given an image, feature vectors are extracted from local neighborhoods and mapped into the space spanned by their components. Significant features in the image then correspond to high density regions in this space. Feature space analysis is the procedure of recovering the centers of the high density regions, i.e., the representations of the significant image features. Histogram-based techniques and the Hough transform are examples of this approach.
When the number of distinct feature vectors is large, the size of the feature space is reduced by grouping nearby vectors into a single cell. A discretized feature space is called an accumulator. Whenever the size of the accumulator cell is not adequate for the data, serious artifacts can appear. The problem was extensively studied in the context of the Hough transform. Thus, for satisfactory results a feature space should have a continuous coordinate system. The content of a continuous feature space can be modeled as a sample from a multivariate, multimodal probability distribution. Note that for real images the number of modes can be very large, of the order of tens.
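The accumulator idea above amounts to quantizing feature vectors into fixed-size cells and counting hits per cell. The following sketch (our own illustration, with an assumed cell side length, not code from the paper) shows this for 3-D color vectors:

```python
from collections import Counter

# Discretize a feature space into an accumulator: each vector is
# mapped to the cell of side `cell` containing it, and counts
# accumulate per cell. A cell size ill-matched to the data is the
# source of the quantization artifacts discussed in the text.
def accumulate(vectors, cell):
    acc = Counter()
    for v in vectors:
        acc[tuple(int(c // cell) for c in v)] += 1
    return acc
```

Two nearby colors land in the same cell and reinforce each other, while an isolated color occupies its own cell; shrinking `cell` trades this grouping effect for sparser, noisier counts.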
The highest density regions correspond to clusters centered on the modes of the underlying probability distribution. Traditional clustering techniques can be used for feature space analysis, but
they are reliable only if the number of clusters is small and known a priori. Estimating the number of clusters from the data is computationally expensive and not guaranteed to produce a satisfactory result.
An all too frequently used assumption is that the individual clusters obey multivariate normal distributions, i.e., that the feature space can be modeled as a mixture of Gaussians. The parameters of the mixture are then estimated by minimizing an error criterion. For example, a large class of thresholding algorithms is based on a Gaussian mixture model of the histogram. However, there is no theoretical evidence that an extracted normal cluster necessarily corresponds to a significant image feature. On the contrary, a strong artifact cluster may appear when several features are mapped into partially overlapping regions.
Nonparametric density estimation avoids the use of the normality assumption. The two families of methods, Parzen windows and k-nearest neighbors, both require additional input information (the type of the kernel, the number of neighbors). This information must be provided by the user, and for multimodal distributions it is difficult to guess the optimal setting.
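The user-supplied input the text refers to is easy to see in a minimal one-dimensional Parzen-window estimate. The sketch below (our own illustration, not from the paper) uses the simplest kernel, a boxcar of half-width `h`; both the kernel type and `h` are exactly the choices left to the user:

```python
# 1-D Parzen-window density estimate with a boxcar kernel of
# half-width h: the fraction of samples within h of x, divided
# by the window width 2h.
def parzen_density(samples, x, h):
    n = len(samples)
    inside = sum(1 for s in samples if abs(s - x) <= h)
    return inside / (n * 2 * h)
```

A small `h` gives a spiky, noisy estimate, a large `h` blurs modes together; for a multimodal distribution no single value serves all modes well, which is the difficulty the text notes.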
Nevertheless, a reliable general technique for feature space analysis can be developed using a simple nonparametric density estimation algorithm. In this paper we propose such a technique whose robust behavior is superior to methods employing robust estimators from statistics.
2 Requirements for Robustness
Estimation of a cluster center is called in statistics the multivariate location problem. To be robust, an estimator must tolerate a percentage of outliers, i.e., data points not obeying the underlying distribution of the cluster. Numerous robust techniques were proposed, and in computer vision the most widely used is the minimum volume ellipsoid (MVE) estimator proposed by Rousseeuw.
The MVE estimator is affine equivariant (an affine transformation of the input is passed on to the estimate) and has a high breakdown point (it tolerates up to half the data being outliers). The estimator finds the center of the highest density region by searching for the minimal volume ellipsoid containing at least h data points. The multivariate location estimate is the center of this ellipsoid. To avoid combinatorial explosion, a probabilistic search is employed. Let the dimension of the data be p. A small number of (p+1)-tuples of points are randomly chosen. For each (p+1)-tuple the mean vector and covariance matrix are computed, defining an ellipsoid. The ellipsoid is inflated to include h points, and the one having the minimum volume provides the MVE estimate.
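The geometry of the MVE search is easiest to see in one dimension, where an ellipsoid degenerates to an interval: the estimate becomes the midpoint of the shortest interval containing at least h points. The sketch below shows only this degenerate special case (our own simplification; the full estimator works with random (p+1)-tuples, covariance matrices, and inflated ellipsoids as described above):

```python
# 1-D analogue of the MVE location estimate: slide a window of
# h consecutive sorted points and keep the shortest one; its
# midpoint is the robust location estimate.
def shortest_interval_center(data, h):
    xs = sorted(data)
    best = min(range(len(xs) - h + 1),
               key=lambda i: xs[i + h - 1] - xs[i])
    return (xs[best] + xs[best + h - 1]) / 2
```

Because only the h tightest points matter, a gross outlier cannot drag the estimate, which is the breakdown-point property the text describes.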
Based on the MVE, a robust clustering technique with applications in computer vision has been proposed. The data is analyzed at several "resolutions" by applying the MVE estimator repeatedly with h values representing fixed percentages of the data points. The best cluster then corresponds to the h value yielding the highest density inside the minimum volume ellipsoid. That cluster is removed from the feature space, and the whole procedure is repeated until the space is empty. The robustness of the MVE should ensure that each cluster is associated with only one mode of the underlying distribution. The number of significant clusters is not needed a priori.
The robust clustering method was successfully employed for the analysis of a large variety of feature spaces, but was found to become less reliable once the number of modes exceeded ten. This is mainly due to the normality assumption embedded in the method. The ellipsoid defining a cluster can also be viewed as the high confidence region of a multivariate normal distribution. Arbitrary feature spaces are not mixtures of Gaussians, and constraining the shape of the removed clusters to be elliptical can introduce serious artifacts. The effect of these artifacts propagates as more and more clusters are removed. Furthermore, the estimated covariance matrices are not reliable, since they are based on only p+1 points. Subsequent post-processing based on all the points declared inliers cannot fully compensate for an initial error.
To be able to correctly recover a large number of significant features, the problem of feature space analysis must be solved in context. In image understanding tasks the data to be analyzed originates in the image domain. That is, the feature vectors satisfy additional, spatial constraints. While these constraints are indeed used in the current techniques, their role is mostly limited to compensating for feature allocation errors made during the independent analysis of the feature space. To be robust the feature space analysis must fully exploit the image domain information.
As a consequence of the increased role of image domain information, the burden on the feature space analysis can be reduced. First all the significant features are extracted, and only then are the clusters containing the instances of these features recovered. The latter procedure uses image domain information and avoids the normality assumption.
Significant features correspond to high density regions, and to locate these regions a search window must be employed. The number of parameters defining the shape and size of the window should be minimal, and therefore whenever possible the feature space should be isotropic. A space is isotropic if the distance between two points is independent of the location of the point pair. The most widely used isotropic space is the Euclidean space, where a sphere, having only one