Human Brain MR Image Segmentation Based on Level Set Method
Abstract: Traditional level set methods are not well suited to segmenting non-uniformly distributed human brain images with complex anatomical structures and shapes. To address this, the Otsu technique is introduced into the level-set Chan-Vese model, yielding a level-set-based segmentation method for human brain magnetic resonance (MR) images. The method uses the Heaviside function to describe the intraregional image distribution and the maximum between-class variance to capture the interregional distribution variance; the two kinds of information are fused into a new energy function that guides the segmentation process and produces the desired brain segmentation result. Experiments on human brain images from two datasets show that, compared with other methods, the proposed approach has clear advantages in similarity and error-rate metrics and segments human brain images well.

Keywords:
- image segmentation
- Otsu method
- Chan-Vese model
- human brain MR image
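For orientation, the two building blocks named in the abstract have well-known conventional forms, sketched below in LaTeX. The Chan-Vese fitting energy written with the Heaviside function H(φ) and the Otsu between-class variance σ_B² are standard; the fused energy E_CVO and the trade-off weight γ are only an assumed illustration of how the two parts might be combined, not the paper's exact formulation.

```latex
% Chan-Vese region-fitting energy, written with the Heaviside function H(\phi)
E_{\mathrm{CV}}(c_1, c_2, \phi) =
      \mu \int_{\Omega} \lvert \nabla H(\phi) \rvert \, dx
    + \lambda_1 \int_{\Omega} \lvert I(x) - c_1 \rvert^2 \, H(\phi) \, dx
    + \lambda_2 \int_{\Omega} \lvert I(x) - c_2 \rvert^2 \bigl(1 - H(\phi)\bigr) \, dx

% Otsu between-class variance of the two regions separated by \phi
% (\omega_i are region probabilities, m_i are region mean intensities)
\sigma_B^2(\phi) = \omega_1(\phi)\,\omega_2(\phi)\,\bigl(m_1(\phi) - m_2(\phi)\bigr)^2

% Assumed fused energy: minimize the fitting terms while rewarding a large
% between-class variance; \gamma > 0 is an illustrative trade-off weight
E_{\mathrm{CVO}}(c_1, c_2, \phi) = E_{\mathrm{CV}}(c_1, c_2, \phi) - \gamma\,\sigma_B^2(\phi)
```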
Table 1. White matter segmentation evaluation results of the five methods on five metrics
| Dataset  | Method | Dice coefficient | Jaccard coefficient | Sensitivity | Specificity | Time/s |
|----------|--------|------------------|---------------------|-------------|-------------|--------|
| Brainweb | CV     | 0.70             | 0.54                | 0.99        | 0.17        | 17.80  |
| Brainweb | Otsu   | 0.68             | 0.51                | 0.99        | 0.06        | 0.17   |
| Brainweb | Fclsm  | 0.80             | 0.66                | 0.84        | 0.90        | 9.04   |
| Brainweb | Lif    | 0.71             | 0.55                | 0.98        | 0.22        | 42.48  |
| Brainweb | CVO    | 0.83             | 0.71                | 0.84        | 0.97        | 26.37  |
| IBSR20   | CV     | 0.60             | 0.43                | 0.89        | 0.06        | 17.80  |
| IBSR20   | Otsu   | 0.60             | 0.43                | 0.90        | -0.03       | 0.17   |
| IBSR20   | Fclsm  | 0.46             | 0.30                | 0.50        | 0.82        | 8.01   |
| IBSR20   | Lif    | 0.60             | 0.42                | 0.88        | 0.06        | 42.48  |
| IBSR20   | CVO    | 0.65             | 0.48                | 0.70        | 0.85        | 26.37  |
| IBSR18   | CV     | 0.54             | 0.37                | 0.92        | -0.43       | 16.93  |
| IBSR18   | Otsu   | 0.20             | 0.11                | 0.25        | 0.50        | 0.18   |
| IBSR18   | Fclsm  | 0.66             | 0.49                | 0.66        | 0.99        | 9.45   |
| IBSR18   | Lif    | 0.53             | 0.36                | 0.89        | -0.32       | 47.21  |
| IBSR18   | CVO    | 0.68             | 0.51                | 0.72        | 0.88        | 21.45  |
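For reference, the Dice, Jaccard, sensitivity, and specificity values in Table 1 follow the standard definitions for binary masks; the minimal Python sketch below (function and variable names are illustrative, and the paper's exact evaluation code may differ) shows how they can be computed from a segmentation result and the ground truth.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Standard overlap metrics for binary masks (1 = white matter, 0 = background)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)

    tp = np.logical_and(pred, truth).sum()      # true positives
    fp = np.logical_and(pred, ~truth).sum()     # false positives
    fn = np.logical_and(~pred, truth).sum()     # false negatives
    tn = np.logical_and(~pred, ~truth).sum()    # true negatives

    return {
        "dice": 2 * tp / (2 * tp + fp + fn),    # Dice coefficient
        "jaccard": tp / (tp + fp + fn),         # Jaccard coefficient
        "sensitivity": tp / (tp + fn),          # true-positive rate
        "specificity": tn / (tn + fp),          # true-negative rate
    }
```

Note that Dice and Jaccard are monotonically related (Dice = 2J / (1 + J)), which is why the two columns in Table 1 rank the methods identically.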