Spatial-Temporal Graph-Based AU Relationship Learning for Facial Action Unit Detection
- Resource Type
- Conference
- Authors
- Wang, Zihan; Song, Siyang; Luo, Cheng; Zhou, Yuzhi; Wu, Shiling; Xie, Weicheng; Shen, Linlin
- Source
- 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 5899-5907, Jun. 2023
- Subject
- Computing and Processing; Engineering Profession; Representation learning; Gold; Computer vision; Codes; Face recognition; Conferences; Predictive models
- Language
- ISSN
- 2160-7516
This paper presents our Facial Action Unit (AU) detection submission to the fifth Affective Behavior Analysis in-the-wild (ABAW) Competition. Our approach consists of three main modules: (i) a pre-trained facial representation encoder that produces a strong facial representation from each face image in the input sequence; (ii) an AU-specific feature generator that learns a set of AU features from each facial representation; and (iii) a spatio-temporal graph learning module that constructs a spatio-temporal graph representation. This graph representation describes the AUs contained in all frames and predicts the occurrence of each AU from both the spatial information modeled within the corresponding face and the temporal dynamics learned across frames. Experimental results show that our approach outperformed the baseline, and that spatio-temporal graph representation learning enabled our model to achieve the best results among all ablated systems. Our model ranked 4th in the AU recognition track of the 5th ABAW Competition. Our code is publicly available at https://github.com/wzh125/ABAW-5.
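The spatio-temporal graph idea in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the node features stand in for the AU-specific features of module (ii), the graph has dense spatial edges among AUs within a frame plus temporal edges linking the same AU across adjacent frames, and a single GCN-style propagation step with a sigmoid readout plays the role of the graph learning module (iii). All function names, dimensions, and the propagation rule are assumptions for the sketch.

```python
import numpy as np

def build_st_adjacency(T, N):
    """Adjacency over T*N nodes (one node per AU per frame).

    Spatial edges: every AU pair within the same frame (assumed dense).
    Temporal edges: the same AU in neighbouring frames.
    Self-loops are added so every node keeps its own feature.
    """
    A = np.zeros((T * N, T * N))
    for t in range(T):
        s = t * N
        A[s:s + N, s:s + N] = 1.0          # spatial: AU-AU within frame t
    for t in range(T - 1):
        for i in range(N):
            a, b = t * N + i, (t + 1) * N + i
            A[a, b] = A[b, a] = 1.0        # temporal: same AU, adjacent frames
    np.fill_diagonal(A, 1.0)               # self-loops
    return A

def gcn_layer(A, X, W):
    """One symmetric-normalised propagation step with ReLU."""
    d = A.sum(axis=1)                      # node degrees (>= 1 via self-loops)
    A_hat = A / np.sqrt(np.outer(d, d))    # D^{-1/2} A D^{-1/2}
    return np.maximum(A_hat @ X @ W, 0.0)

def predict_au_occurrence(X, A, W, w_out):
    """Per-node AU occurrence probability after graph propagation."""
    H = gcn_layer(A, X, W)
    logits = H @ w_out
    return 1.0 / (1.0 + np.exp(-logits))   # sigmoid readout per AU node
```

With T frames and N AUs, each of the T*N output probabilities corresponds to one AU in one frame, so a single forward pass scores every AU across the whole clip while mixing spatial (within-frame) and temporal (cross-frame) information.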