A Micro-Expression Apex Frame Spotting Method Based on Optical-Flow-Dual-Input Network

ZHENG Shuhua, CHEN Mengxin, WANG Xiangzhou, GONG Xueya

Citation: ZHENG Shuhua, CHEN Mengxin, WANG Xiangzhou, GONG Xueya. A Micro-Expression Apex Frame Spotting Method Based on Optical-Flow-Dual-Input Network[J]. Transactions of Beijing Institute of Technology, 2022, 42(7): 749-754. doi: 10.15918/j.tbit1001-0645.2021.135


doi: 10.15918/j.tbit1001-0645.2021.135
Funding: National Ministries Pre-research Project (5200-2020036147A-0-0-00)
Article information
    Author biography:

    ZHENG Shuhua (b. 1976), female, Ph.D., lecturer. E-mail: zhengshuhua@bit.edu.cn

    Corresponding author:

    WANG Xiangzhou (b. 1965), male, Ph.D., professor. E-mail: wangxiangzhou@bit.edu.cn

  • CLC number: TP391


  • Abstract: The apex frame of a micro-expression carries rich micro-expression information. To spot the apex frame accurately, this paper proposes a detection method that classifies optical-flow features with a neural network and then applies selection rules based on prior knowledge. The method extracts optical-flow information from the images within a fixed-size sliding window, uses a dual-input feature-extraction network to extract and classify spatio-temporal features from the x- and y-direction optical flow, and improves detection accuracy through post-processing with selection rules designed from micro-expression prior knowledge. Experimental results on the CASME II dataset show an apex spotting rate (ASR) of 0.945 and an F1-score of 0.925.
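    The pipeline summarized in the abstract (optical flow over a fixed sliding window, a dual-input network over the x- and y-direction flow fields, binary apex/non-apex classification) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the window length, layer sizes, and the Farneback optical flow (used here in place of the TV-L1 flow cited as ref. [13]) are all assumptions.

    import cv2                     # opencv-python
    import numpy as np
    import torch
    import torch.nn as nn

    WINDOW = 11                    # assumed sliding-window length (frames)

    def window_flow(frames):
        """Per-frame optical flow inside one window, split into x/y stacks.

        `frames` is a list of WINDOW grayscale images (H x W, uint8).
        Farneback flow is used for simplicity; the paper cites TV-L1 [13].
        """
        flows = []
        for prev, curr in zip(frames[:-1], frames[1:]):
            flows.append(cv2.calcOpticalFlowFarneback(
                prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0))       # H x W x 2
        flow = np.stack(flows)                                      # (T, H, W, 2), T = WINDOW - 1
        fx = torch.from_numpy(flow[..., 0]).float().unsqueeze(0)   # (1, T, H, W): x-direction flow
        fy = torch.from_numpy(flow[..., 1]).float().unsqueeze(0)   # (1, T, H, W): y-direction flow
        return fx, fy

    class DualInputNet(nn.Module):
        """Two convolutional branches (x flow, y flow) fused for classification."""
        def __init__(self, t=WINDOW - 1):
            super().__init__()
            def branch():
                return nn.Sequential(
                    nn.Conv2d(t, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.branch_x, self.branch_y = branch(), branch()
            self.head = nn.Linear(64, 2)     # apex window vs. non-apex window

        def forward(self, fx, fy):
            feat = torch.cat([self.branch_x(fx), self.branch_y(fy)], dim=1)
            return self.head(feat)

    Windows classified as apex-bearing would then be filtered by the prior-knowledge selection rules (the post-processing module evaluated in Table 2) before the final apex frame is reported; those rules are not reproduced here.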

     

  • Figure 1.  Overall flow block diagram

    Figure 2.  Video preprocessing block diagram

    Figure 3.  Overall framework of the optical-flow-dual-input network

    Figure 4.  Processing flow of the post-processing module

    Figure 5.  Experimental results

    Table 1.  Evaluation of network structure

    Method                   Detections   TP      FP      FN
    Single-input structure   100.32       51.97   48.32   0.12
    Dual-input structure     80.82        41.35   40.47   0.05

    Table 2.  Evaluation of post-processing

    Method                           Detections   TP       FP       FN
    Without post-processing module   80.82        41.350   40.470   0.05
    With post-processing module      0.95         0.875    0.074    0.05

    Table 3.  MAE and ASR evaluation

    Method              MAE     ASR
    LBP-$\chi^2$ [16]   27.21   0.8230
    CNN+SW-MAX [10]     22.36   0.8280
    DS-OF [5]           22.60   0.8470
    Proposed method     26.42   0.9450

    Table 4.  F1-score evaluation

    Method                      F1-score
    LBP-$\chi^2$ [16]           0.32
    MDMD [17]                   0.38
    HIGO-TOP Apex LSTM [9]      0.71
    HOG-TOP Apex LSTM [9]       0.77
    HIGO-TOP & HOG-TOP [18]     0.86
    TSW-FD                      0.32 (sequence)
    MesNet [19]                 0.943 (all)
    Proposed method             0.925
  • [1] XIE Lun, LU Yanan, JIANG Bo, et al. Expression automatic recognition based on facial action units and expression relationship model[J]. Transactions of Beijing Institute of Technology, 2016, 36(2): 163-169. (in Chinese)
    [2] YUE Lei, SHEN Tingzhi. 3D expression recognition based on automatically detected facial points[J]. Transactions of Beijing Institute of Technology, 2016, 36(5): 508-513. (in Chinese)
    [3] LE NGO A C, PHAN R C W, SEE J. Spontaneous subtle expression recognition: imbalanced databases and solutions[C]//Proceedings of Asian Conference on Computer Vision. Cham: Springer, 2014: 33-48.
    [4] LI X, HONG X, MOILANEN A, et al. Towards reading hidden emotions: a comparative study of spontaneous micro-expression spotting and recognition methods[J]. IEEE Transactions on Affective Computing, 2017, 9(4):563 − 577.
    [5] DAVISON A, MERGHANI W, LANSLEY C, et al. Objective micro-facial movement detection using facs-based regions and baseline evaluation[C]//Proceedings of 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018). [S. l. ]: IEEE, 2018: 642 − 649.
    [6] DING J, TIAN Z, LYU X, et al. Real-time micro-expression detection in unlabeled long videos using optical flow and lstm neural network[C]//Proceedings of International Conference on Computer Analysis of Images and Patterns. [S. l. ]: Springer, 2019: 622 − 634.
    [7] TAKALKAR M A, THUSEETHAN S, RAJASEGARAR S, et al. LGAttNet: automatic micro-expression detection using dual-stream local and global attentions[J]. Knowledge-Based Systems, 2021, 212:106566. doi: 10.1016/j.knosys.2020.106566
    [8] ESMAEILI V, FEGHHI M M, SHAHDI S O. Automatic micro-expression apex frame spotting using local binary pattern from six intersection planes[J/OL]. [2021-04-05]. https://arxiv.org/abs/2104.02149.
    [9] TRAN T K, VO Q N, HONG X, et al. Dense prediction for micro-expression spotting based on deep sequence model[J]. Electronic Imaging, 2019, 2019(8):401−1 − 401−6. doi: 10.2352/ISSN.2470-1173.2019.8.IMAWM-401
    [10] ZHANG Z, CHEN T, MENG H, et al. SMEConvNet: a convolutional neural network for spotting spontaneous facial micro-expression from long videos[J]. IEEE Access, 2018, 6:71143 − 71151. doi: 10.1109/ACCESS.2018.2879485
    [11] DONAHUE J, HENDRICKS A L, GUADARRAMA S, et al. Long-term recurrent convolutional networks for visual recognition and description[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. [S. l. ]: IEEE, 2015: 2625 − 2634.
    [12] SUN B, CAO S, HE J, et al. Two-stream attention-aware network for spontaneous micro-expression movement spotting[C]//Proceedings of 2019 IEEE 10th International Conference on Software Engineering and Service Science (ICSESS). [S. l. ]: IEEE, 2019: 702 − 705.
    [13] ZACH C, POCK T, BISCHOF H. A duality based approach for realtime tv-l 1 optical flow[C]//Proceedings of Joint Pattern Recognition Symposium. Berlin, Heidelberg: Springer, 2007: 214 − 223.
    [14] XIA B, WANG W, WANG S, et al. Learning from macro-expression: a micro-expression recognition framework[C]//Proceedings of the 28th ACM International Conference on Multimedia. [S. l. ]: ACM, 2020: 2936 − 2944.
    [15] YAN W J, LI X, WANG S J, et al. CASMEⅡ: an improved spontaneous micro-expression database and the baseline evaluation[J]. PloS one, 2014, 9(1):e86041. doi: 10.1371/journal.pone.0086041
    [16] PAN H, XIE L, WANG Z, et al. Review of micro-expression spotting and recognition in video sequences[J]. Virtual Reality & Intelligent Hardware, 2021, 3(1):1 − 17.
    [17] OH Y H, SEE J, LE NGO A C, et al. A survey of automatic facial micro-expression analysis: databases, methods, and challenges[J]. Frontiers in Psychology, 2018, 9:1128. doi: 10.3389/fpsyg.2018.01128
    [18] TRAN T K, VO Q N, HONG X, et al. Dense prediction for micro-expression spotting based on deep sequence model[J]. Electronic Imaging, 2019(8):401 − 406.
    [19] FU X, NIU L, HU Z, et al. Deep micro-expression spotting network training based on concept of transition frame[J]. Journal of Zhejiang University (Engineering Science), 2020, 54(11): 2128 − 2137.
Publication history
  • Received: 2021-05-17
  • Accepted: 2021-05-17
  • Published: 2022-08-17
