
    Expression Recognition Method Based on Convolutional Neural Network and Capsule Neural Network

2024-05-25 14:43:24  Zhanfeng Wang and Lisha Yao
    Computers, Materials & Continua, 2024, Issue 4

    Zhanfeng Wang and Lisha Yao

1 School of Computer Science and Artificial Intelligence, Chaohu University, Hefei, 238000, China

    2 School of Big Data and Artificial Intelligence, Anhui Xinhua University, Hefei, 230088, China

ABSTRACT Convolutional neural networks struggle to handle changes in the angle and orientation of images accurately, which limits their ability to capture relationships between internal feature levels. In contrast, CapsNet overcomes these limitations by vectorizing information, encoding both direction and magnitude so that spatial information is not lost. This study therefore proposes a novel expression recognition technique called CAPSULE-VGG, which combines the strengths of CapsNet and convolutional neural networks. By refining and integrating the features extracted by a convolutional neural network before passing them to CapsNet, our model enhances facial expression recognition. Compared to traditional neural network models, our approach trains faster, converges more quickly, and reaches a higher, more stable accuracy. Experimental results demonstrate that our method achieves recognition rates of 74.14% on the FER2013 expression dataset and 99.85% on the CK+ expression dataset. By contrasting these findings with those of conventional expression recognition techniques, we show that incorporating CapsNet addresses the shortcomings of convolutional neural networks while increasing expression recognition accuracy.

KEYWORDS Expression recognition; capsule neural network; convolutional neural network

    1 Introduction

Given the rapid pace of scientific and technological advancement, coupled with ongoing progress in computer science, artificial intelligence, and related fields, there is a constant need to improve human-computer interaction. In face-to-face communication, nonverbal cues such as facial expressions and body movements play a crucial role in conveying messages and helping the audience understand the speaker's intentions [1]. Facial expressions are manifestations of human thoughts, emotions, and states. As research advances, the ability of computers to recognize facial expressions accurately and efficiently holds great promise for making human-computer interaction more natural and harmonious. Facial expression recognition technology has numerous theoretical and practical applications.

According to Mehrabian's investigation [2], verbal content accounts for only 7% of emotional expression, vocal cues such as rhythm, tone, and speed of speech contribute 38%, and facial expressions alone account for the remaining 55% [3]. Valuable insights into human thoughts and emotions can therefore be derived from analyzing facial expressions. The primary objective of facial expression recognition technology is to develop an effective and efficient system capable of accurately identifying the emotions conveyed through expressions, including neutrality, surprise, disgust, anger, sadness, and happiness [4].

The rapid advancement of deep learning [5–9], artificial intelligence [10–13], and computer hardware has significantly influenced facial expression recognition technology. Facial expression recognition [14] is a representative interdisciplinary field, drawing on psychology, sociology, and related disciplines. Its continued development will attract more scholars' attention, and research across these fields will in turn advance many application areas.

Facial expression recognition algorithms can be divided into traditional machine learning algorithms and deep learning-based methods. Traditional algorithms typically employ feature extraction techniques designed for the specific research subject, and such methods were long the focus of facial expression recognition research. However, they rely on manually designed feature extraction, which is susceptible to interference from irrelevant factors and tends to yield relatively low-level semantic features for most expressions. With advances in computer hardware, researchers have increasingly turned to deep learning as the mainstream approach. Deep learning-based algorithms remove the need for manually designed expression features [15,16]; by increasing network depth and width, higher-level semantic features can be extracted. Currently, convolutional neural networks (CNNs) are widely used in facial expression recognition tasks. However, CNNs require a large amount of training data, struggle with small sample sizes, and are prone to overfitting or underfitting. They also perform poorly in complex scenes and fail to handle changes in image angle accurately or to capture relationships between internal feature levels [17–19].

The motivation of this paper is to address the limited generalization ability of convolutional neural network expression recognition methods on small-sample data, such as expression data, and to overcome the challenges posed by complex scenes, objects at different scales, and image transformations. Our objective is to optimize the model so as to enhance its recognition capability for small samples, improve its performance in complex scenes, enable better identification of facial images at varying scales, handle rotation, scaling, and translation of facial images effectively, and strengthen the model's robustness to different types of transformations.

This paper introduces the capsule neural network (CapsNet) and proposes an expression recognition method based on both convolutional and capsule neural networks. First, by leveraging the concept of capsules, the data dimension is reduced to enhance the model's ability to process small-sample data and to identify objects at varying scales. Second, the capsule neural network effectively handles the relationship between local and global features, improving recognition in complex scenes. Finally, spatial orientation information is stored and propagated through the capsule structure within the capsule neural network.

    2 Related Work

Scholars have conducted systematic analyses and research on facial expressions since the 19th century. In 1872, Charles Darwin argued in his book "The Expression of the Emotions in Man and Animals" that facial expressions are shared characteristics between humans and animals. In 1971, psychologists Ekman and Friesen developed a comprehensive classification system for facial expressions, encompassing emotions such as happiness, fear, sadness, surprise, anger, and disgust [20].

Facial expression recognition systems fall into two categories: machine learning algorithms and deep learning-based techniques. The deep learning approach, which surpasses machine learning algorithms and yields more abstract representations, has gained popularity in recent years. In 2016, Li et al. proposed a facial expression recognition method based on Gabor features and Conditional Random Fields (Gabor+PCA+CRF [21]); it used Gabor features extracted at five scales, reduced their dimensionality, and employed Conditional Random Fields (CRF) for expression identification and classification. In 2017, Lopes et al. presented an algorithm [22] that jointly processed images using convolutional neural networks and specific preprocessing steps. Yan et al.'s cooperative discriminant multi-metric learning algorithm (CDMML) [23] was proposed for video-based facial expression identification, integrating audio and video modalities to enhance recognition performance. In 2021, Pourmirzaei et al. [24] employed self-supervised learning with fine-grained auxiliary tasks on facial features to improve recognition performance and reduce error rates. Aouayeb et al. [25] proposed a Transformer model for facial expression recognition based on attention mechanisms; their study introduced a joint Squeeze-and-Excitation (SE) block for the vision Transformer to address the limited training data available to Transformer models in the FER task, and experimental results demonstrated competitive performance. Minaee et al. proposed a deep learning technique based on attentional convolutional networks (Deep Emotion [26]), which focuses on key facial regions and significantly improves on previous models across multiple datasets. Christopher Pramerdorfer assessed the state of the art in image-based facial expression recognition (Inception [27]), employing CNNs to highlight algorithmic differences and their effects on performance. Fard et al. proposed an adaptive correlation loss (AD-CORRE [28]) to guide the network toward embedding feature vectors with high correlation for within-class samples and reduced correlation for between-class samples. Khaireddin et al. adopted the VGGNet architecture [29], fine-tuned its hyperparameters, and applied various optimization strategies, achieving favorable results.

The aforementioned research demonstrates that although deep learning has made progress in facial expression recognition, convolutional neural networks (CNNs) still face challenges, such as a limited ability to handle changes in image angle and orientation accurately and to model the relationships between internal feature levels. In 2017, Sabour et al. [30] proposed a novel deep learning structure called the Capsule Neural Network (CapsNet), which initially outperformed CNNs and RNNs in image recognition. Although research is still at an early stage, several scholars have applied CapsNet in fields including natural language processing and computer vision, with competitive results. For instance, Kim et al. [31] introduced a CapsNet-based method for detecting abnormal driving at centerline intersections. One advantage of CapsNet is that it recognizes objects as vectors whose capsules contain all state information. Suri et al. [32] proposed a novel one-dimensional deep CapsNet architecture for continuous Indian sign language recognition, using signals collected by a specially designed wearable IMU system; compared to CNNs, CapsNet exhibited higher performance, demonstrating its applicability. Li et al. [33] employed CapsNet to determine whether natural text contains secret information, achieving both robustness and accuracy. By encoding the direction and magnitude of information, CapsNet vectorizes features without losing the spatial information of images, thereby overcoming limitations of traditional CNNs. To address the existing issues with CNNs and improve expression recognition performance, this paper proposes a model combining CNNs with CapsNet, exploiting the latter's superior ability to recognize spatial orientation.

    3 Method

The VGG16 model serves as the fundamental framework in this study, and the incorporation of a capsule layer significantly enhances facial expression recognition performance. Our proposed model, referred to as CAPSULE-VGG, is a fused CNN-CapsNet classification approach: specifically, we insert a capsule neural network before the fully connected layers of the existing VGG16 architecture.

    3.1 Convolutional Neural Network

A convolutional network [34–36] consists of convolutional layers, pooling layers, activation layers, fully connected layers, and an output layer; it shares this layered organization with other fundamental neural networks while keeping a relatively straightforward architecture. The convolutional operations in a typical CNN are paired with pooling layers that take the local mean or maximum of features [37–40], so a multi-layer feature extraction network progressively reduces the resolution of the feature maps. The output of the fully connected layer is fed into the softmax layer, enabling per-node feature extraction and comparison of the feature maps [41,42].

The convolution layer performs feature extraction by employing multiple convolution kernels, as expressed in Eq. (1):

    $$y_j = f\left(\sum_{i=1}^{N_j} x_i \ast \omega_i + b_j\right) \tag{1}$$

    Here, $x_i$ is the $i$-th feature map of the previous layer and serves as the input at this stage, $\omega_i$ is the convolution kernel associated with it, $y_j$ is the $j$-th output feature map, $b_j$ is the bias term, and $N_j$ is the number of input feature maps combined to form $y_j$. The activation function $f$ can take various forms such as Tanh or ReLU.

The primary objective of the pooling layer is to reduce the dimensions (width, height, and number of channels) of the preceding layer while minimizing computation, memory, and parameter usage. This provides a degree of scale and spatial invariance and discourages overfitting. For instance, given an output of the last convolutional layer with dimensions n × n × M, a 2 × 2 pooling window yields a pooled output of dimensions n/2 × n/2 × M. Common pooling methods include random pooling, maximum pooling, and average pooling.
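To make the two operations above concrete, the following PyTorch sketch applies one convolutional layer in the sense of Eq. (1) (with f = ReLU) followed by 2 × 2 max pooling, which halves the spatial resolution as described. The channel counts and input size are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One convolution (Eq. (1): y_j = f(sum_i x_i * w_i + b_j), with f = ReLU here)
# followed by 2x2 max pooling (n x n x M -> n/2 x n/2 x M).
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)

x = torch.randn(1, 1, 48, 48)        # one 48x48 grayscale face image (assumed size)
y = F.relu(conv(x))                  # feature maps: (1, 8, 48, 48)
p = F.max_pool2d(y, kernel_size=2)   # pooled maps:  (1, 8, 24, 24)
print(y.shape, p.shape)
```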

    3.2 CapsNet

The CapsNet model was developed based on the principles of human visual object recognition [43–46]. Hinton posits that during object recognition, humans tend to disregard irrelevant details and transmit information about key parts of the observed object from the visual central nervous system to higher centers for higher-level decision-making. Similarly, when image features are represented as capsules, upper-level capsules do not incorporate all information from each lower-level capsule but selectively integrate the salient information.

The CapsNet architecture, illustrated in Fig. 1, is a high-performance deep learning network structure in which each feature is represented by a vector known as a capsule [47–49]. Training this architecture yields effective image feature extraction: the characteristics of each spatial entity are encapsulated in vectors and gradually combined by clustering. To preserve the spatial information of the data, CapsNet dispenses with the pooling layer commonly found in CNNs, achieving an impressively low test error rate of only 0.25% on the MNIST dataset.

    Figure 1: Capsule neural network structure diagram

A capsule's feature vector has the property that its direction represents a specific feature, while its modulus length (L2 norm) indicates the likelihood of that feature's existence; the modulus length of every capsule is always less than 1. In contrast to conventional neural networks, a dynamic routing algorithm facilitates effective information transfer between the upper and lower layers of CapsNet.

The input and output vectors of a capsule represent the instantiation parameters of a specific entity type: the direction of the vector encodes attributes of the entity, while its length signifies the likelihood of the entity's existence. A transformation matrix predicts the instantiation parameters of capsules in the next layer, and the dynamic routing algorithm ensures consistency among the multiple predictions made within a layer, activating the appropriate capsules in the subsequent layer. Because the output vector length of each capsule reflects the probability of occurrence in the current input, it must fall within the range 0 to 1. Nonlinear compression through "squashing" guarantees that short vectors shrink toward zero length while long vectors approach unit length. At the capsule level, the activation function is realized by this discriminatively learned squashing, expressed in Eq. (2):

    $$v_j = \frac{\|s_j\|^2}{1 + \|s_j\|^2} \cdot \frac{s_j}{\|s_j\|} \tag{2}$$

Here, $s_j$ is the total input vector of capsule $j$ and $v_j$ is its output vector; $s_j$ is the weighted sum of the prediction vectors from all capsules $i$ in the preceding layer. This nonlinear compression preserves the original direction of the vector, keeping the attributes of the instantiated parameters unchanged, while restricting the vector's length to the range 0 to 1. The input vector is obtained in two steps, linear combination and routing, as expressed in Eq. (3):

    $$s_j = \sum_i c_{ij}\,\hat{u}_{j|i}, \qquad \hat{u}_{j|i} = W_{ij}\,u_i \tag{3}$$

    where $u_i$ is the output of capsule $i$ in the lower layer, $W_{ij}$ is the transformation matrix, $\hat{u}_{j|i}$ is the prediction vector, and $c_{ij}$ are the coupling coefficients determined by dynamic routing.
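As a concrete illustration of Eq. (2), here is a minimal PyTorch implementation of the squashing nonlinearity; the tensor shapes in the usage line are assumptions for the example.

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    """Squashing of Eq. (2): v_j = (|s_j|^2 / (1 + |s_j|^2)) * (s_j / |s_j|).
    Preserves each capsule vector's direction; scales its length into [0, 1)."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

v = squash(torch.randn(2, 16, 8))   # (batch, capsules, capsule dimension), assumed shape
print(v.norm(dim=-1).max())         # every output length is below 1
```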

    Figure 2: Execution process of dynamic routing

    Table 1 displays the steps of the dynamic routing algorithm.

    Table 1: Dynamic routing algorithm
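Table 1 is not reproduced in this text, but the routing-by-agreement procedure it describes follows Sabour et al. [30]. A compact PyTorch sketch of that standard algorithm is given below; the three routing iterations are an assumed default.

```python
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Eq. (2), repeated here so the sketch is self-contained.
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    """Routing-by-agreement (Sabour et al. [30]).
    u_hat: prediction vectors W_ij u_i from Eq. (3), shape (batch, n_lower, n_upper, dim)."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)   # routing logits b_ij
    for _ in range(num_iters):
        c = F.softmax(b, dim=2)                              # coupling coefficients c_ij
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)             # s_j = sum_i c_ij u_hat_{j|i}
        v = squash(s)                                        # Eq. (2)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)         # update logits by agreement
    return v                                                 # output capsules (batch, n_upper, dim)
```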

    3.3 Loss Function and Optimization Algorithm

The network's internal weight matrices are adjusted in accordance with the loss function, while the coupling coefficients are modified dynamically through routing.

The magnitude of a capsule's vector denotes the probability of the content it represents, and the output probabilities do not necessarily sum to 1. Therefore, this paper formulates the network's loss with an interval (margin) loss, as opposed to the cross-entropy loss commonly used in traditional classification tasks. The interval loss is expressed in Eq. (4):

    $$L_c = T_c \max(0,\, m^+ - \|v_c\|)^2 + \lambda (1 - T_c) \max(0,\, \|v_c\| - m^-)^2 \tag{4}$$

In Eq. (4), $c$ denotes the category and $T_c$ indicates the presence (1) or absence (0) of class $c$. Within the output layer, $\|v_c\|$ is the length of the class-$c$ capsule, representing the likelihood that a sample belongs to class $c$. The margin $m^+$ bounds the penalty for false positives, while $m^-$ bounds the penalty for false negatives, and the proportionality coefficient $\lambda$ adjusts their relative weight. In this study, we set $\lambda$, $m^+$, and $m^-$ to 0.25, 0.9, and 0.1, respectively. Although the dynamic routing algorithm only resolves the weight updates between capsules, backpropagation remains essential for the network to converge effectively. To achieve smooth convergence and iteratively minimize the loss while updating neuron weights, we employ the Adam optimization algorithm.
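A minimal sketch of Eq. (4) and the optimizer choice, using the paper's values λ = 0.25, m⁺ = 0.9, and m⁻ = 0.1; the commented model and learning rate are placeholders.

```python
import torch

def margin_loss(v_len, targets, m_pos=0.9, m_neg=0.1, lam=0.25):
    """Interval (margin) loss of Eq. (4).
    v_len: capsule lengths ||v_c||, shape (batch, classes);
    targets: one-hot labels T_c of the same shape."""
    present = targets * torch.clamp(m_pos - v_len, min=0.0) ** 2
    absent = lam * (1.0 - targets) * torch.clamp(v_len - m_neg, min=0.0) ** 2
    return (present + absent).sum(dim=1).mean()

# Adam, as used in this section (learning rate is an assumed placeholder):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```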

    3.4 CAPSULE-VGG

The VGG16 network is employed in this study for facial expression recognition. Fig. 3 illustrates the model structure built on VGG16, which consists of 13 convolutional layers accompanied by 5 max-pooling layers and 3 fully connected layers. At the architectural level, we adopt a combined approach that integrates CapsNet with the VGG16 network.

Owing to parameter sharing in convolution and the local operation of the pooling layer, CNNs exhibit translation invariance; however, when the position and orientation of an entity change at prediction time, the neurons that were active for the original entity may no longer respond, and the pooling layer significantly degrades spatial information. To better preserve the spatial properties of the data, we optimize the basic VGG16 network and propose the CAPSULE-VGG model: the features extracted by VGG16 are further processed by CapsNet, enhancing the representation of spatial information within those features. The structure is illustrated in Fig. 4.

    Figure 3: VGG-16 structure diagram

    Figure 4: CAPSULE-VGG structure diagram

The CAPSULE-VGG model fully mines the preprocessed expression data set, extracting features with the VGG module. These features are then aggregated by the capsule network layer, with the weights between capsules updated through the dynamic routing algorithm. The number of capsules is set to 16, and the final classification result is output by a Softmax classifier. Table 2 provides the detailed configuration of the CAPSULE-VGG model.

    Table 2: Detailed configuration of CAPSULE-VGG model
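Since Table 2 is not reproduced in this text, the following PyTorch sketch shows only the overall CAPSULE-VGG idea described above: VGG16 convolutional features regrouped into 16 capsules, squashed, and classified. The capsule dimension and the linear head are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

def squash(s, dim=-1, eps=1e-8):
    # Eq. (2), repeated for self-containment.
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

class CapsuleVGG(nn.Module):
    def __init__(self, num_classes=7, num_caps=16, caps_dim=8):
        super().__init__()
        self.features = vgg16(weights=None).features        # 13 conv + 5 max-pool layers
        self.to_caps = nn.Linear(512, num_caps * caps_dim)  # regroup features into capsules
        self.classifier = nn.Linear(num_caps * caps_dim, num_classes)
        self.num_caps, self.caps_dim = num_caps, caps_dim

    def forward(self, x):
        f = self.features(x).mean(dim=(2, 3))               # globally pooled VGG features
        caps = self.to_caps(f).view(-1, self.num_caps, self.caps_dim)
        caps = squash(caps)                                 # capsule activation, Eq. (2)
        return self.classifier(caps.flatten(1))             # logits; Softmax applied in the loss

logits = CapsuleVGG()(torch.randn(2, 3, 48, 48))            # (2, 7)
```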

    4 Experimental Results and Analysis

    4.1 Data Set

The performance of the model is evaluated using two facial expression datasets, FER2013 [50] and CK+ [51,52]. The CK+ dataset is a 2010 extension and supplementation of the original Cohn-Kanade data, collected indoors. It comprises 593 video sequences gathered from 123 subjects; frames extracted from these videos cover seven distinct expressions. A representative image is depicted in Fig. 5.

Figure 5: CK+ expression data set example

    The specific configuration of the original dataset is presented in Table 3.

Table 3: CK+ raw data set configuration

The FER2013 dataset is a comprehensive, freely available collection obtained using the Google image retrieval API. After removing defective frames and adjusting the cropping area, all images were standardized to a resolution of 48 × 48 pixels. The training set consists of 28,709 images, while the validation and test sets each contain 3589 images. The images carry seven distinct emotion labels: neutral, happy, sad, surprised, disgusted, angry, and fearful. An illustrative sample is presented in Fig. 6.

The distribution across classes is non-uniform, and the facial expression dataset is limited in size. This study therefore augments the data to improve the model's generalization, using techniques such as random horizontal flipping, random cropping, and random rotation.
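A plausible torchvision rendering of the augmentations just listed is shown below; the crop padding and rotation range are assumptions, since the paper does not state them.

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),         # random horizontal flip
    transforms.RandomCrop(48, padding=4),      # random cropping to 48x48 (assumed padding)
    transforms.RandomRotation(degrees=10),     # random rotation (assumed range)
    transforms.ToTensor(),
])
```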

    4.2 Experimental Parameters

The parameters of the experimental model are listed in Table 4, covering the learning rate, number of test iterations, learning rate decay, batch size, number of capsules in the capsule layer, number of dynamic routing iterations, and the training optimizer and loss function.

Table 4: Model parameters
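As Table 4's concrete values are not reproduced here, the following hypothetical configuration merely illustrates how its parameter categories map onto a training setup; every numeric value marked as a placeholder is an assumption.

```python
# Hypothetical mapping of Table 4's parameter categories; placeholder values are
# assumptions, the rest are stated elsewhere in the paper.
config = {
    "lr": 1e-3,            # learning rate (placeholder)
    "lr_decay": 0.9,       # learning rate decay (placeholder)
    "batch_size": 64,      # batch size (placeholder)
    "num_capsules": 16,    # from Section 3.4
    "routing_iters": 3,    # dynamic routing iterations (placeholder)
    "epochs": 50,          # test iterations, fixed at 50 in the preparatory experiment
    "optimizer": "Adam",   # from Section 3.3
    "loss": "margin",      # Eq. (4)
}
```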

    Figure 6: FER2013 expression data set example

    4.3 Experimental Analysis

    4.3.1 Preparatory Experiment

A preliminary experiment was conducted on the CK+ dataset to investigate the impact of CapsNet on the new network architecture and to determine the number of capsule layers that yields the best performance. The number of test iterations was fixed at 50, while the other parameters were set as specified in Table 4. The experimental parameters are presented in Table 5.

    Table 5: Preliminary experimental parameters

Fig. 7 presents a comparative diagram of the training and validation accuracies for various capsule layers in the preliminary experiment.

Figure 7: Training and validation accuracy for different numbers of capsule layers

The experiment compares the training accuracy, validation accuracy, test accuracy, and duration required for each epoch. The findings are presented in Table 6.

Table 6: Preliminary experiment results

The network structures with one capsule layer and with no capsule layer exhibit the highest training accuracy at 30~40 epochs, along with better stability, surpassing the structures with two or three capsule layers (Fig. 7a). Fig. 7b likewise shows that after 30 epochs, the network with a single capsule layer achieves superior validation accuracy first and then stabilizes. Although Table 6 indicates that the number of capsule layers influences experimental accuracy, increasing the number of capsule layers does not correspondingly improve classification accuracy in practice. Consequently, the model is designed with a single capsule layer.

    4.3.2 Ablation Experiment

First, this paper evaluates the rationality of the CAPSULE-VGG model; the experimental results are presented in Table 7. VGG denotes the baseline model without the capsule neural network, while CAPSULE-VGG is the composite model combining convolutional and capsule neural networks. The findings demonstrate that introducing the capsule network accelerates training convergence, stabilizes model accuracy, and improves recognition accuracy by 2.24% and 0.86% on the CK+ and FER2013 datasets, respectively. These results indicate that the capsule network compensates for limitations inherent to convolutional neural networks, facilitating better learning of facial expression features and improving overall network performance.

Table 7: Experimental comparison of the model on CK+ and FER2013

    Table 8: CK+ confusion matrix (%)

    Table 9: FER2013 confusion matrix (%)

Through analysis of the CK+ and FER2013 expressions, we obtained prediction results for the different facial expression classes using the CAPSULE-VGG network model; the predictions for each category are reported accordingly. Tables 8 and 9 present the confusion matrices of the CAPSULE-VGG model on the CK+ and FER2013 datasets.

Comparing the confusion matrix of the CK+ dataset with that of FER2013, it is evident that the image quality of CK+ surpasses that of FER2013, leading to a significantly higher recognition accuracy for facial expressions. FER2013 is more challenging than CK+: its images exhibit a wider range of pose angles and cover a broader age spectrum, more closely resembling real-world scenarios. The lower recognition rate achieved by CAPSULE-VGG on FER2013 (74.14%) is largely attributable to the substantial variation in facial poses within this dataset.

    4.3.3 Contrast Experiment

To further validate the proposed model, we compared the experimental results of the CAPSULE-VGG model on the CK+ expression dataset with the following approaches.

Model 1: Collaborative Discriminant Multi-Metric Learning (CDMML). Yan et al. [23] introduced CDMML, an algorithm that combines speech and video modalities to improve facial expression recognition.

    Model 2: Nonlinear eval on SL+SSL puzzling. Pourmirzaei et al. [24] utilized self-supervised learning for fine-grained tasks in facial feature expression recognition, improving performance and reducing the error rate.

    Model 3: ViT+SE. Aouayeb et al. [25] proposed a Transformer model based on the attention mechanism for recognizing facial expressions. To address the lack of training data for Transformer models, they incorporated Squeeze-and-Excitation (SE) blocks into the vision Transformer. Experimental results demonstrate the competitiveness of this approach.

This paper thus presents a comparative analysis of four models on the CK+ expression dataset: CAPSULE-VGG and the CDMML, Nonlinear eval on SL+SSL puzzling, and ViT+SE models.

The accuracy comparison on the CK+ dataset is illustrated in Fig. 8, which shows that the proposed model achieves the best recognition performance with an accuracy of 99.85%. Subsequently, we conducted experiments comparing the CAPSULE-VGG model's results on the FER2013 expression dataset.

Figure 8: The accuracy of the CK+ data set comparison experiment (unit: %)

Model 1: Deep Emotion. Minaee et al. presented a deep learning approach based on attentional convolutional networks (Deep Emotion [26]). This model effectively focuses attention on the primary facial regions and significantly improves on previous models across multiple datasets.

    Model 2: Inception. Christopher Pramerdorfer's comprehensive review of state-of-the-art techniques assessed image-based facial expression recognition (Inception [27]), employing CNNs to highlight algorithmic distinctions and their impact on performance.

    Model 3: Ad-Corre. Fard et al. developed an adaptive correlation loss (AD-CORRE [28]) that guides feature embedding toward high correlation for within-class samples and reduced correlation for between-class samples.

    Model 4: VGGNet. Khaireddin et al. adopted the VGGNet architecture [29] and achieved commendable results through precise parameter tuning and various optimization tests.

In the experiments, CAPSULE-VGG was compared against the Deep Emotion, Inception, Ad-Corre, and VGGNet models on the FER2013 expression dataset.

The accuracy comparison on the FER2013 dataset is illustrated in Fig. 9. The comparison shows that the proposed algorithm's accuracy is lower than that of the alternative approaches. Although the model's recognition performance on the FER2013 dataset has improved, further enhancement is still required, since its generalization on this dataset remains weaker than on other datasets.

Figure 9: The accuracy of the FER2013 data set comparison experiment (unit: %)

    5 Conclusion

This paper introduces the CAPSULE-VGG neural network model to address the limitation that convolutional neural networks neglect information such as the relative position and angle between image features, which leaves them with insufficient generalization ability on complex, small-sample data such as expression data. A novel expression recognition method combining convolutional and capsule neural networks is proposed: VGG16 serves as the feature extraction component during training, while an additional capsule layer strengthens attention to the direction and location of facial expression features, improving interpretability and network stability. Experimental results on the CK+ and FER2013 expression datasets show that the CAPSULE-VGG model achieves accuracy rates of 99.85% and 74.14%, respectively, outperforming the basic VGG16 by 2.24% and 0.86%. Furthermore, it converges faster during training and exhibits more stable accuracy.

Acknowledgement: None.

Funding Statement: This paper was supported by the following funds: the Key Scientific Research Project of the Anhui Provincial Research Preparation Plan in 2023 (Nos. 2023AH051806, 2023AH052097, 2023AH052103); the Anhui Province Quality Engineering Project (Nos. 2022sx099, 2022cxtd097); University-Level Teaching and Research Key Projects (Nos. ch21jxyj01, XLZ-202208, XLZ-202106); and the Special Support Plan for Innovation and Entrepreneurship Leaders in Anhui Province.

Author Contributions: Study conception and design: Z. Wang, L. Yao; data collection: Z. Wang; analysis and interpretation of results: Z. Wang, L. Yao; draft manuscript preparation: Z. Wang, L. Yao. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The corresponding author can provide some of the models or code created or utilized during the study upon request. The datasets generated and/or analysed during the current study are available, with corresponding permission, in the GitHub repository https://GitHub.com/kaiwang960112/Challengecondition-FER-dataset and at http://www.jeffcohn.net/Resources/.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
