
    Micro-Expression Recognition Based on Spatio-Temporal Feature Extraction of Key Regions

    2023-12-12 15:51:42 Wenqiu Zhu, Yongsheng Li, Qiang Liu and Zhigao Zeng
    Computers, Materials & Continua, 2023, No. 10

    Wenqiu Zhu, Yongsheng Li, Qiang Liu* and Zhigao Zeng

    1College of Computer Science, Hunan University of Technology, Zhuzhou, 412007, China

    2Intelligent Information Perception and Processing Technology, Hunan Province Key Laboratory, Zhuzhou, 412007, China

    ABSTRACT Aiming at the problems of short duration, low intensity, and difficult detection of micro-expressions (MEs), the global and local features of ME video frames are extracted by combining spatial and temporal feature extraction. Based on the traditional convolutional neural network (CNN) and long short-term memory (LSTM), a recognition method combining a global identification attention network (GIA), a block identification attention network (BIA) and a bi-directional long short-term memory network (Bi-LSTM) is proposed. In the BIA, the ME video frames are cropped into identification blocks (IBs), and training is carried out with 24 IBs, 10 IBs and uncropped frames. To alleviate overfitting during training, we first extract the basic features of the preprocessed sequence through a transfer learning layer, and then extract the global and local spatial features of the output data through the GIA layer and the BIA layer, respectively. In the BIA layer, the input data are cropped into local feature vectors with attention weights to extract the local features of the ME frames; in the GIA layer, the global features of the ME frames are extracted. Finally, after fusing the global and local feature vectors, the ME time-series information is extracted by Bi-LSTM. The experimental results show that using IBs significantly improves the model's ability to extract subtle facial features, and that the model works best with 10 IBs.

    KEYWORDS Micro-expression recognition; attention mechanism; long short-term memory network; transfer learning; identification block

    1 Introduction

    Compared with traditional expressions, MEs are expressions of short duration and small movement. As a spontaneous expression, an ME is produced when people try to cover up their genuine internal emotions. It is an expression that can neither be forged nor suppressed [1]. In 1966, Haggard et al. [2] discovered a facial expression that is fast and not easily detected by the human eye and first proposed the concept of MEs. At first, this small and transient facial change did not attract the attention of other researchers. It was not until 1969, when Ekman et al. [3] studied a video of a depressed patient, that it was found that patients with smiling expressions showed extremely brief painful expressions. The patient tried to hide his anxiety with a more positive expression, such as a smile. Unlike macro-expressions, MEs last for only 1/25 to 1/5 of a second. Recognition by the human eye alone therefore does not meet the need for accurate identification [4,5], and it is essential to use modern artificial intelligence methods.

    Research on micro-expression recognition (MER) has undergone a shift from traditional image feature extraction methods to deep learning feature extraction methods. Pfister et al. [6,7] extended feature extraction from the XY plane to the three orthogonal planes XY, XT and YT using the local binary patterns from three orthogonal planes (LBP-TOP) algorithm. LBP-TOP thereby extended static feature extraction to dynamic feature extraction that changes with time information. However, this recognition method is not ideal for MEs with small intensity changes. Xia et al. [8] found that facial details with minor changes can quickly disappear in deep models in MER. They demonstrated that lower-resolution input data and a shallower model structure could help alleviate this disappearance of detail, and further proposed a recurrent convolutional network (RCN) to reduce the model size and data resolution. However, compared with CNNs with attention mechanisms, this design does not perform well in deep models. Xie et al. [9] proposed an MER method based on action units (AUs). Exploiting the correlation between facial muscles and AUs, this method improves the recognition rate of MEs to a certain extent. Li et al. [10] proposed a model structure based on 3DCNN, an MER method combining an attention mechanism and feature fusion. This model extracts optical flow features and facial features through a deep CNN and adds transfer learning to alleviate model overfitting. Gan et al. [11] proposed the OFF-ApexNet framework, which feeds the optical flow features extracted between the onset, apex and offset frames into a CNN for recognition. However, ME change is a continuous process, and relying only on the onset, apex and offset frames may ignore the details between video frames. Huang et al. [12] proposed an MER method that uses the optical flow features of apex frames within the SHCFNet framework. SHCFNet combines spatial and temporal feature extraction, but it ignores the local detail features of MEs. Zhan et al. [13] proposed an MER method based on an evolutionary algorithm, named the GP (genetic programming) algorithm. The GP algorithm can select representative frames from ME video sequences and guide individuals to evolve toward higher recognition ability. This method efficiently extracts time-varying sequence features in MER, but it performs feature extraction only globally and does not consider that different parts of the face vary in importance for MER. Tang et al. [14] proposed a model based on the optical flow method and the Pseudo 3D Residual Network (P3D ResNet). This method uses optical flow to extract the feature information of the ME optical flow sequence, then extracts the spatial and temporal information of the ME sequence through the P3D ResNet model, and finally classifies the output. However, P3D ResNet operates mostly on the entire facial area and does not take into account minor local detail changes in MEs. Niu et al. [15] proposed the CBAM-DPN algorithm based on a convolutional attention module and a dual-channel network. The method fuses channel attention and spatial attention, enabling feature extraction of local ME details, while the DPN structure suppresses useless features and enhances the expressive ability of model features. But this method relies only on apex frames, ignoring the sequence correlation between ME video frames.

    To address the problems of low intensity, short duration and difficult detection of MEs, we propose an MER method based on key facial regions. This method extracts both spatial and temporal information from ME frames. The design of the local IBs in our experiments overcomes the shortcoming of purely global feature extraction in the SHCFNet [12] framework. Compared with the OFF-ApexNet [11] framework, our method utilizes all video frames from onset to apex, which extracts more detailed facial change information. After spatial feature extraction, we add a Bi-LSTM, which further extracts the sequence features of the video frames compared with the CBAM-DPN [15,16] algorithm, thereby improving recognition accuracy. In addition, to further extract the facial details of MEs, we crop the ME video frames into IBs and perform ablation experiments with uncropped frames, 24 IBs and 10 IBs. Finally, the different IB schemes are compared according to the experimental results.

    2 Related Work

    2.1 Facial Action Coding System (FACS)

    There are 42 muscles in the human face, and rich expression changes result from the joint action of a variety of muscles. Facial muscles that can be consciously controlled are called "voluntary muscles", while those that cannot be brought under conscious control are called "involuntary muscles". In 1976, Ekman et al. [3] proposed the facial action coding system (FACS) based on facial anatomy. FACS divides the human face into 44 AUs, with different AUs representing different local facial actions. For example, AU1 represents the inner brow raiser, while AU5 represents the upper lid raiser [17–19]. An ME is usually generated by the joint action of one or more AUs. For example, the ME representing happiness results from the joint action of AU6 and AU12, where AU6 represents the raising of the cheeks and AU12 represents the pulling up of the corners of the mouth. FACS is an essential basis for MER, and it also serves as an action record of facial key-point features such as the eyebrows, cheeks and corners of the mouth [20–22]. In our experiments, the face is divided into several ME IBs according to the AUs.

    2.2 Neural Network with Attention Mechanism

    To address the short duration and low action intensity of MEs, we add an attention mechanism to a CNN [23]. This design enables the CNN model not only to extract features of the whole face but also to focus on changes in local details, allowing it to capture more subtle facial features in MER. A CNN can extract the abstract features of MEs [24]. A CNN with a local attention network is used to extract the motion information of critical local units during ME changes, while a CNN with a global attention network extracts global change information. In our experiments, we combine the CNN with the local attention mechanism and the CNN with the global attention mechanism, so that the improved model can attend to both the global picture and the local details.

    2.3 Bi-Directional Long Short-Term Memory Network (Bi-LSTM)

    Traditional CNNs and fully connected (FC) layers share a common limitation: they cannot "memorize" relevant information across time steps when dealing with continuous sequences [25]. Compared with traditional neural networks, a recurrent neural network (RNN) adds a hidden layer that can save state information. This hidden layer holds historical information about the sequence and updates itself with the input sequence. However, the most significant disadvantage of the traditional RNN is that, as the training scale and the number of layers increase, it easily suffers from long-term dependency problems [26,27]; that is, it is prone to vanishing and exploding gradients when learning long sequences. To solve these problems of RNNs, Hochreiter et al. proposed LSTM in 1997. Each LSTM unit block includes an input gate, a forget gate and an output gate [28]. The input gate determines how much of the input at the current time step is saved to the current state unit; the forget gate determines how much of the previous state unit is carried over to the current state unit; the output gate controls how much of the current state unit is used for the output. Bi-LSTM adds a backward layer to the LSTM, which enables the model to use not only historical sequence information but also future information [29]. Bi-LSTM can therefore extract the feature and sequence information in MEs better than LSTM.
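    For reference, the three gates described above follow the standard LSTM cell formulation, where $\sigma$ is the logistic sigmoid and $\odot$ denotes element-wise multiplication:

$$
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c)\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(state update)}\\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden output)}
\end{aligned}
$$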

    3 Proposed Method

    3.1 Method Overview

    We propose a neural network structure combining a CNN with attention mechanisms and a Bi-LSTM. To accurately capture small-scale facial movements, we add global and local attention mechanisms [30] to the traditional CNN framework. The improved framework can extract different feature information from multiple facial regions while also processing global information. The improved model architecture is shown in Fig. 1. First, the network uses transfer learning to pass the preprocessed feature vector through a VGG16 model with pretrained weights and extract basic facial features [31]. Then, the facial features extracted from each frame are passed through the GIA and the BIA to extract global and local information. Afterward, we fuse the extracted global and local information and extract the sequence-related information through the Bi-LSTM. Finally, classification is carried out through three FC layers.

    Figure 1: The model combining GIA, BIA and Bi-LSTM. It includes a transfer learning layer, a GIA and BIA layer, a Bi-LSTM layer and FC layers

    To extract the global and local features of the face, we introduce the BIA and GIA frameworks. As shown in Fig. 1, the BIA is the upper part of the dashed box and the GIA is the lower part.
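    To make the data flow of Fig. 1 concrete, the following PyTorch sketch wires the components together. It is a minimal illustration, not the authors' implementation: the backbone cut point matches the 512 × 28 × 28 GIA input stated in Section 3.3 and the Bi-LSTM uses the 128 hidden nodes from Section 3.4, while the GIA/BIA modules (detailed in the following sections) default to simple pooling stubs and all other sizes are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

def _pool_stub(feat_dim=512):
    # Placeholder for GIA/BIA: global average pooling plus an FC projection.
    return nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(512, feat_dim), nn.ReLU())

class MERPipeline(nn.Module):
    """Transfer-learning backbone -> GIA + BIA -> fusion -> Bi-LSTM -> 3 FC layers."""
    def __init__(self, gia=None, bia=None, feat_dim=512, num_classes=3):
        super().__init__()
        # Frozen VGG16 layers up to conv4_1 yield 512x28x28 maps on 224x224
        # input, matching the GIA input size stated in Section 3.3.
        self.backbone = vgg16(weights="IMAGENET1K_V1").features[:19]
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.gia = gia if gia is not None else _pool_stub(feat_dim)
        self.bia = bia if bia is not None else _pool_stub(feat_dim)
        self.bilstm = nn.LSTM(2 * feat_dim, 128, batch_first=True,
                              bidirectional=True)   # 128 hidden nodes (Sec. 3.4)
        self.dropout = nn.Dropout(0.5)              # probability is an assumption
        self.fc = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                                nn.Linear(128, 64), nn.ReLU(),
                                nn.Linear(64, num_classes))

    def forward(self, frames):                      # frames: (B, T, 3, 224, 224)
        B, T = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1))           # (B*T, 512, 28, 28)
        fused = torch.cat([self.gia(feats), self.bia(feats)], dim=-1)
        seq, _ = self.bilstm(fused.view(B, T, -1))            # per-frame fusion
        return self.fc(self.dropout(seq[:, -1]))              # classify last step

logits = MERPipeline()(torch.randn(2, 10, 3, 224, 224))      # -> shape (2, 3)
```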

    3.2 BIA Mechanism

    The range of facial variation in MEs is small, which makes them challenging to recognize effectively. This experiment adopts a recognition method that adds attention-weighted blocks in the critical regions of the face: representative areas and their corresponding attention weights are added to the facial features to be recognized. In the experiments, we perform ablation experiments with uncropped frames and with frames cropped into 24 and 10 ME blocks, respectively.

    3.2.1 The Neural Network with Attention Mechanism

    The BIA is shown in the upper part of the dashed box in Fig. 1. After cropping in the BIA, the local IBs are obtained; each IB vector then passes through an FC layer and an attention network whose output is a weighting scalar. Finally, each IB yields a weighted feature vector as output.

    In the attention network (the upper half of the dashed box in Fig. 1), let $c_i$ denote the input feature vector of the $i$-th IB. As in Eq. (1), $\varphi(\cdot)$ is the operation of the attention network and $p_i$ is the attention weight scalar of the $i$-th IB:

$$p_i = \varphi(c_i) \tag{1}$$

    As in Eq. (2), $\tau(\cdot)$ represents the feature learning applied to the input feature vector, and $f_i$ represents the unweighted feature extracted from the $i$-th IB:

$$f_i = \tau(c_i) \tag{2}$$

    As in Eq. (3), $\alpha_i$ is the feature of the $i$-th IB with attention weight:

$$\alpha_i = p_i\, f_i \tag{3}$$

    Finally, the weighted feature vectors of all IBs are obtained after this calculation.
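    A minimal sketch of the per-block computation of Eqs. (1)–(3), assuming $\varphi(\cdot)$ is a small FC network with a sigmoid output and $\tau(\cdot)$ a single FC feature transform (both layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class BlockAttention(nn.Module):
    """Computes alpha_i = p_i * f_i for one identification block (Eqs. (1)-(3))."""
    def __init__(self, in_dim, feat_dim=512):
        super().__init__()
        self.tau = nn.Linear(in_dim, feat_dim)       # tau(.): feature learning
        self.phi = nn.Sequential(                    # phi(.): attention weight p_i
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, c_i):              # c_i: (batch, in_dim) flattened IB features
        p_i = self.phi(c_i)              # (batch, 1) scalar weight per block
        f_i = torch.relu(self.tau(c_i))  # unweighted block feature
        return p_i * f_i                 # attention-weighted feature alpha_i
```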

    3.2.2 Generation Method of 24 IBs

    To accurately recognize the local details of the face, we generate 24 detailed IBs based on facial key points. Face key points can be determined with the Dlib [32] method or the face_recognition [33] method. The Dlib method obtains 68 facial key points (see Fig. 2a), and the face_recognition method obtains 72 facial key points (see Fig. 2b). In experiments, we found that the face_recognition method obtains more accurate facial key-point information than the Dlib method. Therefore, we use the face_recognition method for precise positioning when determining the ME IBs. The 24 IBs are generated as follows:

    Figure 2: Comparison of ME key points and IPs. (a) 68 facial key points; (b) 72 facial key points; (c) 24 IPs; (d) 10 IPs. We select 24 and 10 IPs from the 72 facial key points for the experiments, respectively

    (1) Determine the identification points (IPs): We first extract 72 facial key points using the face_recognition method (see Fig. 2b). Then, based on the 72 facial key points, we convert them into 24 IPs. The IPs cover the cheeks, mouth, nose, eyes and eyebrows. The conversion process is as follows. First, 16 IPs covering the mouth, nose, eyes and eyebrows are selected directly from the 72 facial key points. The selected key-point numbers (see Fig. 2b) are 19, 22, 23, 26, 39, 37, 44, 46, 28, 30, 49, 51, 53, 55, 59 and 57, and the serial numbers of the generated IPs (see Fig. 2c) are 1 through 16. Second, for the eyes, eyebrows and cheeks, we generate IPs from the midpoint coordinates of key-point pairs. For the left eye, left eyebrow and left cheek, we take the midpoints of the pairs (20, 38), (41, 42) and (18, 59) of the 72 facial key points (see Fig. 2b); for the right eye, right eyebrow and right cheek, we take the midpoints of the pairs (25, 45), (47, 48) and (27, 57). The serial numbers of the generated IPs (see Fig. 2c) are 17, 19, 18, 20, 21 and 22. Finally, for the left and right corners of the mouth, we select key points 49 and 55 from the 72 facial key points (see Fig. 2b). According to the coordinates of these two points, relative offset points from the two mouth corners are taken as the basis for the coordinates of the IPs. The generated IPs at the left and right corners of the mouth (see Fig. 2c) are numbered 23 and 24. Eqs. (4) and (5) give the calculation of these IPs, where $(x_{49}, y_{49})$ and $(x_{55}, y_{55})$ are the abscissa and ordinate of the 49th and 55th of the 72 facial key points, and $(x'_{23}, y'_{23})$ and $(x'_{24}, y'_{24})$ are the coordinates of the 23rd and 24th of the 24 IPs.

    (2) Generate IBs: We thus obtain 24 IPs (see Fig. 2c). The selected 24 IPs generate 24 IBs of 48 × 48 pixels, each centered on its IP. To improve the robustness of the model, we perform feature extraction on the IBs after they pass through the transfer learning layer.
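    The IP construction above can be sketched as follows, assuming the 72 key points are given as a 1-indexed list of (x, y) pairs in the order of Fig. 2b. The exact mouth-corner offsets of Eqs. (4) and (5) were not recoverable from the text, so a placeholder shift is used there:

```python
import numpy as np

def midpoint(pts, a, b):
    """Midpoint of two 1-indexed facial key points."""
    return (np.asarray(pts[a - 1], float) + np.asarray(pts[b - 1], float)) / 2.0

def generate_24_ips(pts):
    """pts: list of 72 (x, y) key points, 1-indexed order as in Fig. 2b."""
    direct = [19, 22, 23, 26, 39, 37, 44, 46, 28, 30, 49, 51, 53, 55, 59, 57]
    ips = [np.asarray(pts[i - 1], float) for i in direct]     # IPs 1-16
    pairs = [(20, 38), (41, 42), (18, 59),                    # left eye/brow/cheek
             (25, 45), (47, 48), (27, 57)]                    # right eye/brow/cheek
    ips += [midpoint(pts, a, b) for a, b in pairs]            # IPs 17-22
    # IPs 23-24: offsets from the mouth corners (key points 49 and 55); the
    # exact offsets of Eqs. (4)-(5) were lost, so a fixed shift is assumed.
    shift = np.array([0.0, 10.0])
    ips += [np.asarray(pts[48], float) + shift,               # IP 23 (left corner)
            np.asarray(pts[54], float) + shift]               # IP 24 (right corner)
    return np.stack(ips)                                      # (24, 2) array

def crop_block(img, ip, size=48):
    """48x48 IB centered on an IP (clipped at the image borders)."""
    x, y = int(ip[0]) - size // 2, int(ip[1]) - size // 2
    return img[max(0, y): y + size, max(0, x): x + size]
```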

    3.2.3 Generation Method of 10 IBs

    The 24 IBs cover the facial area relatively completely, but in experiments we found that covering the face too finely may make the BIA learn redundant features. In subsequent experiments, we therefore derived 10 IBs based on FACS. The 10 IBs relatively completely cover the eyebrows, eyes, nose, mouth and chin of the human face. The detailed steps for obtaining the 10 IBs are as follows:

    (1) Determine the IPs: We obtain 72 facial key points through face_recognition and then convert them into 10 IPs. The conversion process is as follows. We first determine the side length of the IB area, selecting half of the abscissa distance between points 49 and 55 (see Fig. 2b) as the side length. For the eyebrow part, we select the midpoint of the 20th and 25th points (see Fig. 2b) of the 72 facial key points as the coordinates of the 8th IP (see Fig. 2d). The 9th and 10th IPs are generated from the existing 8th IP, as shown in Eqs. (6) and (7), where $(x'_9, y'_9)$ and $(x'_{10}, y'_{10})$ represent the coordinates of the 9th and 10th of the 10 IPs, $(x'_8, y'_8)$ represents the abscissa and ordinate of the 8th IP, and $width$ is the side length of the square IB area under the 10 IPs.

    For the eyes, we select the coordinates of the 37th and 46th points (see Fig. 2b) of the 72 facial key points as the basis for the coordinates of the 6th and 7th IPs (see Fig. 2d). The generation of IPs 6 and 7 is shown in Eqs. (8) and (9), where $(x'_6, y'_6)$ and $(x'_7, y'_7)$ represent the coordinates of the 6th and 7th of the 10 IPs, respectively; $(x_{37}, y_{37})$ and $(x_{46}, y_{46})$ represent the abscissa and ordinate of the 37th and 46th of the 72 facial key points; and $width$ is the side length of the square IB area under the 10 IPs.

    For the nose, we select the coordinates of the 32nd and 36th points (see Fig. 2b) of the 72 facial key points as the basis for the coordinates of the 4th and 5th IPs (see Fig. 2d). The generation of IPs 4 and 5 is shown in Eqs. (10) and (11), where $(x'_4, y'_4)$ and $(x'_5, y'_5)$ respectively represent the coordinates of the 4th and 5th of the 10 IPs; $(x_{32}, y_{32})$ and $(x_{36}, y_{36})$ represent the abscissa and ordinate of the 32nd and 36th of the 72 facial key points; and $width$ is the side length of the square IB area under the 10 IBs.

    For the lips, we directly select the 49th and 55th points (see Fig. 2b) of the 72 facial key points as the coordinates of the 1st and 2nd IPs (see Fig. 2d). Finally, for the chin, we select the 9th point (see Fig. 2b) of the 72 facial key points as the basis for the coordinates of the 3rd IP (see Fig. 2d). The generation of the 3rd IP is shown in Eq. (12), where $(x'_3, y'_3)$ represents the coordinates of the 3rd of the 10 IPs; $(x_9, y_9)$ represents the abscissa and ordinate of the 9th of the 72 facial key points; and $width$ is the side length of the square IB area under the 10 IBs.

    (2) Generate IBs: We thus obtain 10 IPs (see Fig. 2d). As above, we use half of the abscissa distance between points 49 and 55 (see Fig. 2b) as the side length of each IB. The 10 selected IPs generate 10 IBs centered on the IPs. To improve the robustness of the model, we perform feature extraction on the IBs after they pass through the transfer learning layer.
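    A sketch of the 10-IP construction under the same assumptions. The text fixes the anchor key points and the block side length ("width"), but the exact offsets of Eqs. (6)–(12) were not recoverable, so the shift directions below are plausible guesses only:

```python
import numpy as np

def generate_10_ips(pts):
    """pts: 72 (x, y) facial key points, 1-indexed as in Fig. 2b.
    Returns the 10 IPs and the square block side length ('width')."""
    p = lambda i: np.asarray(pts[i - 1], float)
    width = abs(p(55)[0] - p(49)[0]) / 2.0    # half the mouth-corner x-distance
    ip = {1: p(49), 2: p(55)}                 # lips: mouth corners directly
    ip[3] = p(9) + [0.0, -width / 2]          # chin, Eq. (12) offset assumed
    ip[4] = p(32) + [-width / 2, 0.0]         # nose sides, Eqs. (10)-(11) assumed
    ip[5] = p(36) + [width / 2, 0.0]
    ip[6] = p(37) + [0.0, -width / 2]         # eyes, Eqs. (8)-(9) assumed
    ip[7] = p(46) + [0.0, -width / 2]
    ip[8] = (p(20) + p(25)) / 2.0             # midpoint between the brow points
    ip[9] = ip[8] + [-width, 0.0]             # Eqs. (6)-(7): lateral shift assumed
    ip[10] = ip[8] + [width, 0.0]
    return np.stack([ip[i] for i in range(1, 11)]), width
```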

    3.3 GIA Mechanism

    The BIA can learn subtle changes in facial features, but we need to extract not only local facial features but also global ones. Integrating global features into recognition is therefore expected to improve the recognition of MEs.

    The detailed structure of the GIA is shown in the lower half of the dashed box in Fig. 1. The input feature vector of the GIA has size 512 × 28 × 28. In the GIA, we first pass the input through the conv4_2 to conv5_2 layers of the VGG16 network to obtain a feature vector of size 512 × 14 × 14. This feature vector is then passed through an FC layer and an attention network whose output is a weighting scalar, and finally a weighted global feature vector is output.
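    A sketch of the GIA under these stated sizes; in torchvision's VGG16, the conv4_2 through conv5_2 layers correspond to features[19:28], and the attention head mirrors the block attention above (its exact form is an assumption):

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class GlobalAttention(nn.Module):
    """GIA: conv4_2..conv5_2 of VGG16, then an FC layer and a weighting scalar."""
    def __init__(self, feat_dim=512):
        super().__init__()
        # features[19:28] span conv4_2 through conv5_2 (including pool4),
        # mapping 512x28x28 inputs to 512x14x14 outputs.
        self.mid_vgg = vgg16(weights="IMAGENET1K_V1").features[19:28]
        self.fc = nn.Linear(512 * 14 * 14, feat_dim)
        self.att = nn.Sequential(nn.Linear(feat_dim, 1), nn.Sigmoid())

    def forward(self, x):                # x: (batch, 512, 28, 28)
        v = self.mid_vgg(x).flatten(1)   # (batch, 512*14*14)
        g = torch.relu(self.fc(v))       # global feature vector
        return self.att(g) * g           # attention-weighted global feature
```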

    3.4 Bi-LSTM Mechanism

    The GIA and BIA extract the global and local information of an individual ME frame. However, ME video frames change dynamically over continuous time, so we also need to extract the temporal sequence information of the ME. LSTM is a structure designed to overcome the long-term dependency problem of traditional RNNs. Bi-LSTM adds a reverse layer to the LSTM, which enables the network to utilize not only historical information but also available future information [34,35].

    The Bi-LSTM is shown in Fig. 3. Bi-LSTM replaces each node of a bidirectional RNN with an LSTM unit. We define the input feature sequence of the Bi-LSTM network as $X = (x_1, \ldots, x_T)$, the hidden-layer sequence in the forward pass as $\overrightarrow{h} = (\overrightarrow{h}_1, \ldots, \overrightarrow{h}_T)$, the hidden-layer sequence in the backward pass as $\overleftarrow{h} = (\overleftarrow{h}_1, \ldots, \overleftarrow{h}_T)$, and the output sequence of the Bi-LSTM model as $y = (y_1, \ldots, y_T)$. We then have the following formulas:

$$\overrightarrow{h}_t = S\!\left(W_{x\overrightarrow{h}}\, x_t + W_{\overrightarrow{h}\overrightarrow{h}}\, \overrightarrow{h}_{t-1} + b_{\overrightarrow{h}}\right)$$

$$\overleftarrow{h}_t = S\!\left(W_{x\overleftarrow{h}}\, x_t + W_{\overleftarrow{h}\overleftarrow{h}}\, \overleftarrow{h}_{t+1} + b_{\overleftarrow{h}}\right)$$

$$y_t = W_{\overrightarrow{h}y}\, \overrightarrow{h}_t + W_{\overleftarrow{h}y}\, \overleftarrow{h}_t + b_y$$

    In the above formulas, $S(\cdot)$ is the activation function, $W$ denotes the weights of the Bi-LSTM, and $b$ is the bias. Each unit is computed using LSTM cells, as shown in Fig. 4.

    Figure 3:Bidirectional RNN model diagram

    The input of the Bi-LSTM layer is the feature vector produced by the BIA and GIA. The Bi-LSTM layer adopts a single-layer bidirectional LSTM structure containing a hidden layer with 128 nodes. To increase the robustness of the network and reduce complex co-adaptation between neurons, we add a dropout layer between the Bi-LSTM layer and the FC layer to randomly mask neurons with a certain probability.
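    In PyTorch terms, this layer might look as follows; the fused GIA+BIA feature dimension (1024) and the dropout probability are assumptions:

```python
import torch
import torch.nn as nn

# Single-layer bidirectional LSTM with 128 hidden nodes per direction, followed
# by dropout before the FC layers (dropout probability assumed).
bilstm = nn.LSTM(input_size=1024, hidden_size=128,
                 batch_first=True, bidirectional=True)
drop = nn.Dropout(p=0.5)

x = torch.randn(16, 10, 1024)    # (batch, 10 TIM frames, fused GIA+BIA feature)
out, _ = bilstm(x)               # out: (16, 10, 256), forward||backward states
features = drop(out[:, -1, :])   # last-step features passed to the FC layers
```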

    Figure 4:LSTM cell

    4 Experiments and Results

    We selected four datasets for the experiments. We preprocess each dataset and then select accuracy, unweighted F1-score and unweighted average recall as evaluation criteria. Finally, we conduct experiments without IBs, with 24 IBs and with 10 IBs, respectively, and compare the results with those of other algorithms.

    4.1 Selection of Datasets

    Four datasets, CASME II, SAMM, SMIC and MEGC, were selected for the experiments. In the experiments, we divided expressions into three categories: negative, positive and surprise.

    4.1.1 CASME II Dataset

    The CASME II [36] dataset was established by the team of Fu Xiaolan at the Institute of Psychology, Chinese Academy of Sciences. It was recorded with a 200 fps high-speed camera at a frame size of 640 × 480 pixels. There are 255 samples in the dataset, the average age of the participants is 22 years, and the total number of subjects is 24. The dataset includes emotion labels for each sample and video sequence annotations marking the onset, apex and offset frames [37–39]. The labels include repression, disgust, happiness, surprise, fear, sadness and others. In the experiments, we re-partitioned the CASME II dataset; the results of the division are shown in Table 1.

    Table 1:Dataset division on CASME II

    4.1.2 SAMM Dataset

    The SAMM [40] dataset has 149 video clips captured from 32 participants from 13 countries. Seventeen participants were White British, accounting for 53.1% of the total; the others included 3 Chinese, 2 Arab and 2 Malay participants, and one participant each of Spanish, Pakistani, Arab, African Caribbean, British African, African, Nepalese and Indian origin. The average age of the participants was 33.24 years, with a gender-balanced number of male and female participants. There were significant differences in the race and age of the participants, and the imbalance of the label classes was also evident. The SAMM dataset was recorded with a 200 fps high-frame-rate camera at a resolution of 960 × 650 per frame [41–43]. The dataset is annotated with the positions of the onset, offset and apex frames of the MEs, as well as emotion labels and action unit information. The labels include disgust, contempt, anger, sadness, fear, happiness, surprise and others. In the experiments, we re-partitioned the SAMM dataset; the results of the division are shown in Table 2.

    Table 2:Dataset division on SAMM

    4.1.3 SMIC Dataset

    The SMIC dataset consists of 16 participants and 164 ME clips. The volunteers included 8 Asians and 8 Caucasians. The SMIC dataset was recorded with a 100 fps camera at a resolution of 640 × 480 per frame [44,45]. The SMIC dataset includes three categories, negative, positive and surprise, which we do not re-partition in the experiments. The SMIC dataset classification is shown in Table 3.

    Table 3:Dataset division on SMIC

    4.1.4 MEGC Composite Dataset

    The MEGC composite dataset has 68 volunteers: 24 from the CASME II dataset, 28 from the SAMM dataset and 16 from the SMIC dataset. The classification of the composite dataset is shown in Table 4.

    Table 4:Dataset division on MEGC composite dataset

    4.2 Data Pre-Processing

    Apex frames are annotated in the CASME II and SAMM datasets. However, in the experiments we found that some apex-frame annotations are inaccurate or even mislabeled. In addition, there is no apex-frame information in the SMIC dataset, so it is necessary to re-label the apex frames [46]. In the experiments, we locate the apex frame by calculating the absolute grayscale pixel difference between the current frame and the onset and offset frames. To reduce the interference of image noise, we simultaneously calculate the absolute pixel difference between the current frame and its adjacent frame, and divide the former by the latter. This yields, for each frame, a difference value with respect to the onset and offset frames, and the frame with the largest difference value is selected as the apex frame.

    As in Eqs. (16) and (17), $x_i$ and $x_j$ represent the $i$-th and $j$-th frames of an ME video sequence, and $f(x_i, x_j)$ represents the difference between them. Adding 1 to the numerator and denominator ensures that the formula remains well defined for particular values. In Eq. (17), $x_i$ represents the current $i$-th frame, $x_{on}$ the onset frame and $x_{off}$ the offset frame, and $dif_i$ represents the difference value of the $i$-th frame with respect to the onset and offset frames. As shown in Fig. 5, the position with the largest difference value, marked by the red vertical line, is the position of the apex frame.
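    A sketch of this apex search, assuming $f(\cdot,\cdot)$ sums absolute grayscale differences with the +1 stabilizer of Eq. (16), and that $dif_i$ divides the onset/offset differences by the adjacent-frame difference; the exact combination in Eq. (17) was not recoverable, so this is one plausible reading:

```python
import numpy as np

def frame_diff(a, b):
    """Summed absolute grayscale difference, +1 as in Eq. (16) to avoid zeros."""
    return np.abs(a.astype(float) - b.astype(float)).sum() + 1.0

def find_apex(frames):
    """frames: grayscale arrays from onset to offset; returns the apex index."""
    onset, offset = frames[0], frames[-1]
    difs = []
    for i in range(1, len(frames) - 1):
        signal = frame_diff(frames[i], onset) + frame_diff(frames[i], offset)
        noise = frame_diff(frames[i], frames[i - 1])  # adjacent-frame difference
        difs.append(signal / noise)                   # dif_i of Eq. (17)
    return 1 + int(np.argmax(difs))
```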

    After determining the apex frame, we use the temporal interpolation model (TIM) [47] to process the video frames from the onset frame to the apex frame into a fixed input sequence of 10 frames. We apply Local Weighted Mean Transformation (LWMT) [48] to the 10-frame sequence. The faces are aligned and cropped at the eye positions of the first frame of each video, and the frames are normalized to 224 × 224 pixels by bilinear interpolation [49]. To determine the 24 facial IBs, we first use face_recognition to obtain the 72 facial key points; after analyzing them, we select 24 facial motion IPs and generate 24 IBs from them. Likewise, to determine the 10 facial IBs, we obtain the 72 facial key points, select 10 representative IPs and generate 10 IBs from them. Finally, we feed the preprocessed video frames and the corresponding IBs of each frame into the model for training.
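    The actual TIM of [47] interpolates along a learned manifold; as a rough stand-in that produces the same fixed-length output, simple linear interpolation over time looks like this:

```python
import numpy as np

def interpolate_sequence(frames, target_len=10):
    """Resample an onset-to-apex frame stack (T, H, W, C) to a fixed length by
    linear interpolation over time, approximating TIM [47]."""
    frames = np.asarray(frames, float)
    src = np.linspace(0.0, len(frames) - 1, target_len)  # fractional source index
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, len(frames) - 1)
    w = (src - lo).reshape(-1, 1, 1, 1)                  # blend weights
    return (1 - w) * frames[lo] + w * frames[hi]         # (target_len, H, W, C)
```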

    Figure 5: The change of the difference value across frames in an ME video. The position with the largest difference value, marked by the red vertical line, is the position of the apex frame

    4.3 Experimental Evaluation Criteria

    Because ME datasets have small sample sizes, we choose Leave-One-Subject-Out (LOSO) cross-validation to ensure the reliability of the experiments [50]. That is, the dataset is divided by subject: in each fold, all videos of one subject are held out for testing while the remaining subjects are used for training, and this is repeated until every subject has been used for testing. Finally, all test results are combined as the final experimental result.
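    The protocol amounts to one train/test split per subject, for example:

```python
from collections import defaultdict

def loso_splits(samples):
    """samples: iterable of (subject_id, video) pairs. Yields one
    (held-out subject, train videos, test videos) split per subject."""
    by_subject = defaultdict(list)
    for subject, video in samples:
        by_subject[subject].append(video)
    for held_out in by_subject:
        train = [v for s, vids in by_subject.items()
                 if s != held_out for v in vids]
        yield held_out, train, by_subject[held_out]
```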

    We adopt the evaluation metrics UF1 (unweighted F1-score), UAR (unweighted average recall) and Acc (accuracy) [46–51]. The calculation of UF1 is shown in Eq. (18), where $TP_i$, $FP_i$ and $FN_i$ are the numbers of true positives, false positives and false negatives in the $i$-th category, respectively, and $C$ is the number of categories:

$$UF1 = \frac{1}{C}\sum_{i=1}^{C}\frac{2\,TP_i}{2\,TP_i + FP_i + FN_i} \tag{18}$$

    The calculation of UAR is shown in Eq. (19), where $TP_i$ is the number of correct predictions in the $i$-th category and $N_i$ is the number of samples in the $i$-th category:

$$UAR = \frac{1}{C}\sum_{i=1}^{C}\frac{TP_i}{N_i} \tag{19}$$

    Acc is shown in Eq. (20), where $TP$ is the number of correct predictions over all categories and $FP$ is the number of incorrect predictions over all categories:

$$Acc = \frac{TP}{TP + FP} \tag{20}$$
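    These three metrics can be computed directly from a confusion matrix, for example:

```python
import numpy as np

def uf1_uar_acc(cm):
    """cm: C x C confusion matrix (rows = true class, columns = prediction).
    Returns (UF1, UAR, Acc) following Eqs. (18)-(20)."""
    cm = np.asarray(cm, float)
    tp = np.diag(cm)                     # correct predictions per category
    fp = cm.sum(axis=0) - tp             # false positives per category
    fn = cm.sum(axis=1) - tp             # false negatives per category
    uf1 = np.mean(2 * tp / (2 * tp + fp + fn))
    uar = np.mean(tp / cm.sum(axis=1))   # N_i is the row sum for category i
    acc = tp.sum() / cm.sum()            # TP / (TP + FP) over all categories
    return uf1, uar, acc
```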

    4.4 Experimental Results

    Training uses the Adam optimizer with a learning rate of 0.0001; the number of epochs is set to 100 and the batch size to 16. Because ME datasets are small, the model is prone to overfitting. To improve its robustness and generalization, we apply L2 regularization to the model parameters, adding $\lambda$ times the L2 parameter norm to the loss function. Repeated experiments showed that the model works best when $\lambda$ is set to 0.00001. In addition, we use random rotation between -8 and 8 degrees and random cropping for data augmentation in our experiments.
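    In PyTorch, this configuration corresponds roughly to the following; the crop size and padding are assumptions, since the paper does not state them:

```python
import torch
from torchvision import transforms

# Adam with lr = 0.0001; weight_decay applies the lambda-weighted L2 penalty
# with lambda = 0.00001 (100 epochs, batch size 16, as stated above).
def make_optimizer(model):
    return torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)

# Augmentation: random rotation in [-8, 8] degrees plus random cropping.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=8),
    transforms.RandomCrop(224, padding=8),
])
```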

    4.4.1 Experimental Results on CASME II Dataset

    The experimental results are shown in Table 5. On the CASME II dataset, the average LOSO accuracy without IBs is 0.7364, with UF1 of 0.6899 and UAR of 0.7122; with 24 IBs, the average accuracy is 0.8175, UF1 is 0.7779 and UAR is 0.7842; with 10 IBs, the average accuracy is 0.8513, UF1 is 0.8256 and UAR is 0.8570. Table 5 shows that on CASME II, the model with 24 IBs improves accuracy by 0.0811, UF1 by 0.0880 and UAR by 0.0720 over the model without IBs. The accuracy, UF1 and UAR of 10 IBs also improve over 24 IBs, by 0.0338, 0.0477 and 0.0728, respectively.

    Table 5:The training results of different IBs

    The confusion matrices without IBs, with 24 IBs and with 10 IBs are shown in Figs. 6a–6c. The confusion matrices of the three methods show a commonality on the CASME II dataset: the predictions of all three methods are concentrated near "negative" and "surprise", for which the accuracy is relatively high. This is mainly caused by the unbalanced distribution of the dataset. Because it is difficult to elicit "positive" MEs during the collection of the CASME II dataset, the number of samples labeled "negative" and "surprise" is much larger than that of "positive". This imbalance in the dataset distribution affects the training accuracy.

    4.4.2 Experimental Results on SAMM Dataset

    The experimental results are shown in Table 5. On the SAMM dataset, the average LOSO accuracy without IBs is 0.7235, with UF1 of 0.5624 and UAR of 0.5907; with 24 IBs, the average accuracy is 0.7580, UF1 is 0.6066 and UAR is 0.6258; with 10 IBs, the average accuracy is 0.7642, UF1 is 0.6850 and UAR is 0.7207. Table 5 shows that on SAMM, the model with 24 IBs improves accuracy by 0.0345, UF1 by 0.0442 and UAR by 0.0351 over the model without IBs. The accuracy, UF1 and UAR of 10 IBs also improve over 24 IBs, by 0.0062, 0.0784 and 0.0949, respectively.

    The confusion matrices without IBs, with 24 IBs and with 10 IBs are shown in Figs. 6d–6f. The confusion matrices show that the number of "surprise" samples in the SAMM dataset is very small, which is one reason why the UF1 and UAR scores in Table 5 are far lower than the accuracy. Comparing the confusion matrices of the experiments without IBs, with 24 IBs and with 10 IBs, we find that adding IBs improves the recognition performance of the model and reduces the number of misclassifications.


    Figure 6: Confusion matrix results on the CASME II, SAMM, SMIC and MEGC datasets. We experimented with 24 IBs, 10 IBs and without IBs on each dataset. The results show that using IBs effectively increases the robustness and recognition performance of the model, and that 10 IBs work best

    4.4.3 Experimental Results on SMIC Dataset

    The experimental results are shown in Table 5. On the SMIC dataset, the average LOSO accuracy without IBs is 0.6025, with UF1 of 0.5931 and UAR of 0.5995; with 24 IBs, the average accuracy is 0.6602, UF1 is 0.6430 and UAR is 0.6423; with 10 IBs, the average accuracy is 0.6858, UF1 is 0.6749 and UAR is 0.6735. Table 5 shows that on SMIC, the model with 24 IBs improves accuracy by 0.0577, UF1 by 0.0499 and UAR by 0.0428 over the model without IBs. The accuracy, UF1 and UAR of 10 IBs also improve over 24 IBs, by 0.0256, 0.0319 and 0.0312, respectively.

    The confusion matrices without IBs, with 24 IBs and with 10 IBs are shown in Figs. 6g–6i. The accuracy on the SMIC dataset is lower than on the CASME II and SAMM datasets, mainly because of the lower frame rate and resolution of SMIC. In addition, the recording environment of the SMIC dataset is relatively dark, and noise interference is greater than in the CASME II and SAMM datasets. On the SMIC dataset, comparing the confusion matrices of the experiments without IBs, with 24 IBs and with 10 IBs, we find that adding IBs increases recognition accuracy, especially for "positive" expressions. This is because the IBs with an attention mechanism increase the ability to extract the facial detail features of MEs.

    4.4.4 Experimental Results on MEGC Composite Dataset

    The experimental results are shown in Table 5. On the MEGC composite dataset, the average LOSO accuracy without IBs is 0.6674, with UF1 of 0.6126 and UAR of 0.6070; with 24 IBs, the average accuracy is 0.7197, UF1 is 0.6627 and UAR is 0.6421; with 10 IBs, the average accuracy is 0.7658, UF1 is 0.7364 and UAR is 0.7337. Table 5 shows that on the MEGC composite dataset, the model with 24 IBs improves accuracy by 0.0523, UF1 by 0.0501 and UAR by 0.0351 over the model without IBs. The scores of 10 IBs also improve over 24 IBs: accuracy increases by 0.0461, UF1 by 0.0737 and UAR by 0.0916.

    The confusion matrices without IBs, with 24 IBs and with 10 IBs are shown in Figs. 6j–6l. Because it fuses three quite different datasets, the MEGC composite dataset places high demands on the robustness of the model. Compared with the model without IBs, the confusion matrices with IBs show higher prediction accuracy for negative expressions. We also found that the prediction accuracy for negative expressions was highest with 24 blocks. This is because negative expressions mainly involve eyebrow and eye movements, and the 24 IBs place more points at the eyebrows and eyes, so more facial details are extracted there. However, paying too much attention to local details worsens the overall robustness of the model, which is also why the overall accuracy of 10 IBs is higher than that of 24 IBs.

    4.5 Data Analysis

    The recognition performance of different algorithms is compared in Table 6. The improved model performs best with 10 IBs. The data in Table 6 show that the 10-IB model improves on previous recognition algorithms. Compared with the P3D ResNet model, UF1 and UAR increase by 0.0067 and 0.0463 on the CASME II dataset; on the SAMM dataset, UF1 improves by 0.0447 and UAR by 0.0939; on the SMIC dataset, UF1 improves by 0.0219 and UAR by 0.0236; on the MEGC composite dataset, UF1 improves by 0.0011 and UAR by 0.0094. Compared with the GP model, UAR increases by 0.0174 on the CASME II dataset; on the SAMM dataset, UF1 increases by 0.0847 and UAR by 0.1253; on the SMIC dataset, UF1 increases by 0.0012 and UAR by 0.0075; on the MEGC composite dataset, accuracy improves by 0.0022, UF1 by 0.0160 and UAR by 0.0274. Compared with the CBAM-DPN model, UF1 improves by 0.0772 and UAR by 0.1054 on the CASME II dataset; on the SMIC dataset, UF1 improves by 0.0433 and UAR by 0.0174; on the MEGC composite dataset, UF1 improves by 0.0161 and UAR by 0.0044.

    Table 6:Comparison of recognition effects of different algorithms

    This is because the GP model is based on an evolutionary algorithm, which is effective at extracting features of ME sequences that change over time, but it extracts only global features and does not consider that different facial parts have different weights in MER. The CBAM-DPN model adds channel and spatial attention for extracting local ME details, but it relies only on the onset and apex frames and ignores the valuable ME information in the other consecutive frames. P3D ResNet uses optical flow to extract sequence information and considers the spatial and temporal information in consecutive frames, but it does not account for the variability of different facial parts.

    5 Conclusion

    Aiming at the short duration and small movement range of MEs, we propose a recognition method combining the GIA and BIA frameworks. In the BIA framework, the ME frames are cropped into blocks, and we perform ablation experiments with uncropped frames and with 24 and 10 blocks. Considering that ME datasets are small and prone to overfitting, we first extract the essential features from the preprocessed ME video frames through VGG16; the global and local features are then extracted by the GIA and BIA; next, the sequence information of each frame is extracted by the Bi-LSTM; finally, classification is performed by three FC layers. Experiments show that the combination of attention networks with IBs and Bi-LSTM can effectively extract useful spatial and sequence information from video frames with small action amplitudes, and it shows high accuracy in the experiments; the model performs best with 10 IBs. However, the small sample sizes of ME datasets, together with the generally short duration and low intensity of MEs, remain the main reasons for the low recognition rate, which is particularly evident in the confusion matrices. Although our method uses TIM to produce a fixed input sequence, the low efficiency of the model still needs to be addressed, because multiple video frames are used for feature extraction.

    In future research, the quality and quantity of ME datasets need to be further improved to address the small sample sizes. To address the low intensity of MEs, a next step is to maximize the use of the dataset sequence by applying TIM both between the onset frame and the apex frame and between the apex frame and the offset frame. In addition, the range of the IBs can be adjusted in future experiments; the selected IBs should be as representative and interference-resistant as possible.

    Acknowledgement: First, I would like to thank Mr. Zhu Wenqiu for his guidance and suggestions on the research direction of this paper. I am also very grateful to the reviewers for their useful opinions and suggestions, which have improved the article.

    Funding Statement: This work is partially supported by the National Natural Science Foundation of Hunan Province, China (Grant Nos. 2021JJ50058, 2022JJ50051), the Open Platform Innovation Foundation of the Hunan Provincial Education Department (Grant No. 20K046), and the Scientific Research Fund of the Hunan Provincial Education Department, China (Grant Nos. 21A0350, 21C0439, 19A133).

    Author Contributions: Conceptualization, Z.W.Q. and L.Y.S.; methodology, Z.W.Q.; validation, Z.W.Q., L.Y.S., Z.Z.G. and L.Q.; formal analysis, Z.W.Q. and Z.Z.G.; investigation, L.Y.S.; resources, Z.W.Q.; data curation, L.Q.; writing—original draft preparation, Z.W.Q. and L.Y.S.; writing—review and editing, Z.W.Q. and L.Y.S.; visualization, Z.Z.G.; supervision, L.Q.; project administration, Z.W.Q.; funding acquisition, Z.W.Q. and Z.Z.G. All authors have read and agreed to the published version of the manuscript.

    Availability of Data and Materials:The data used to support the findings of this study are included within the article.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
