Detection of white blood cells in microscopic leucorrhea images based on deep active learning
Abstract: The number of white blood cells in leucorrhea microscopic images can indicate the severity of vaginal inflammation. At present, the detection of white blood cells in leucorrhea relies mainly on manual microscopy by medical experts, which is time-consuming, expensive, and error-prone. In recent years, studies have proposed intelligent detection of white blood cells in leucorrhea based on deep learning, but such methods usually require a large number of manually annotated samples as the training set, so the annotation cost is high. Therefore, this study applies a deep active learning algorithm to the intelligent detection of white blood cells in leucorrhea microscopic images. Within the active learning framework, a small set of labeled samples is first used as the initial training set to train a detection model with the Faster Region-based Convolutional Neural Network (Faster R-CNN); the most valuable samples are then automatically selected for manual annotation, and the training set and the corresponding detection model are updated iteratively so that the model's performance improves continuously. Experimental results show that deep active learning achieves high detection accuracy with relatively few manually annotated samples, reaching an average precision of 90.6% for white blood cell detection, which meets the requirements of routine clinical examination.
Keywords:
- leucorrhea microscopic image /
- white blood cells /
- deep active learning /
- intelligent detection /
- Faster R-CNN
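The abstract above only outlines the iterative select-annotate-retrain procedure. The following is a minimal sketch of one common realization: pool-based active learning with least-confidence sampling on top of torchvision's Faster R-CNN. The uncertainty criterion, the torchvision backbone, and all function names and round/batch sizes are illustrative assumptions, not the authors' exact protocol.

```python
# A minimal sketch (not the authors' code) of pool-based deep active learning
# for white blood cell detection. Assumptions: least-confidence sampling,
# torchvision's Faster R-CNN, and externally supplied train/annotate callbacks.
import torch
import torchvision


def build_detector(num_classes: int = 2):
    # Faster R-CNN from torchvision; classes = background + white blood cell.
    return torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=num_classes)


@torch.no_grad()
def image_uncertainty(model, image: torch.Tensor) -> float:
    # Score one unlabeled image: the lower the best detection confidence,
    # the more "valuable" the image is assumed to be for annotation.
    model.eval()
    pred = model([image])[0]  # dict with 'boxes', 'labels', 'scores'
    if pred["scores"].numel() == 0:
        return 1.0            # no confident detection at all
    return float(1.0 - pred["scores"].max())


def select_for_annotation(model, pool, k: int):
    # Return indices of the k most uncertain images in the unlabeled pool.
    scores = [image_uncertainty(model, img) for img in pool]
    ranked = sorted(range(len(pool)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]


def active_learning_loop(labeled, unlabeled, train_fn, annotate_fn,
                         rounds: int = 10, k_per_round: int = 6):
    # Iteratively train, query the expert, and grow the training set.
    model = build_detector()
    for _ in range(rounds):
        train_fn(model, labeled)                 # fine-tune on current labels
        picked = select_for_annotation(model, unlabeled, k_per_round)
        labeled.extend(annotate_fn(unlabeled[i]) for i in picked)  # expert adds boxes
        unlabeled = [img for i, img in enumerate(unlabeled) if i not in picked]
    return model
```

The `rounds` and `k_per_round` values above are placeholders; Table 1 reports 10 iterations and 57 manually labeled images for the active learning setting.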
Table 1. Comparison of test results of different algorithms on the same data set
| Method | Training set | Test set | Iterations | Manually labeled images | Manual labeling time | Precision | mAP |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Active learning | 270 | 30 | 10 | 57 | 45 min | 91.8% | 90.6% |
| LeNet-5 | 270 | 30 | 0 | 270 | 250 min | 71.5% | 73.3% |
| Faster R-CNN | 270 | 30 | 0 | 270 | 250 min | 80.4% | 82.3% |
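For reference, Precision and mAP in Table 1 are standard object detection metrics. Below is a hedged sketch of one common way to compute average precision for the single white blood cell class; the IoU threshold of 0.5 and the helper names are assumptions, since the evaluation protocol is not detailed in this excerpt.

```python
# Hedged sketch of a standard average precision (AP) computation for a single
# detection class; the IoU threshold of 0.5 is an assumption, not a detail
# taken from the paper.
import numpy as np


def iou(a, b):
    # Boxes are (x1, y1, x2, y2); returns intersection over union.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def average_precision(detections, ground_truth, iou_thr=0.5):
    # detections: list of (image_id, score, box); ground_truth: image_id -> list of boxes.
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    matched = {img: [False] * len(boxes) for img, boxes in ground_truth.items()}
    n_gt = sum(len(boxes) for boxes in ground_truth.values())
    tp = np.zeros(len(detections))
    fp = np.zeros(len(detections))
    for i, (img, _, box) in enumerate(detections):
        best_j, best_iou = -1, iou_thr
        for j, gt_box in enumerate(ground_truth.get(img, [])):
            overlap = iou(box, gt_box)
            if not matched[img][j] and overlap >= best_iou:
                best_j, best_iou = j, overlap
        if best_j >= 0:
            tp[i] = 1
            matched[img][best_j] = True   # each ground-truth box matches at most once
        else:
            fp[i] = 1
    recall = np.cumsum(tp) / max(n_gt, 1)
    precision = np.cumsum(tp) / np.maximum(np.cumsum(tp) + np.cumsum(fp), 1e-9)
    # AP approximated as the area under the raw precision-recall curve.
    return float(np.trapz(precision, recall))
```

Since white blood cells are the only detection target described here, the reported mAP presumably reduces to this single-class AP.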