
    Human-Computer Interaction Using Deep Fusion Model-Based Facial Expression Recognition System


Saiyed Umer, Ranjeet Kumar Rout, Shailendra Tiwari, Ahmad Ali AlZubi, Jazem Mutared Alanazi and Kulakov Yurii

1 Department of Computer Science & Engineering, Aliah University, Kolkata, 700156, India

2 Department of Computer Science and Engineering, National Institute of Technology, Srinagar, Jammu and Kashmir, 190006, India

3 Department of Computer Science & Engineering, Thapar University, Patiala, 147004, India

4 Computer Science Department, King Saud University, Riyadh, 11451, Saudi Arabia

5 Department of Computer Engineering, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", Kyiv, 03056, Ukraine

ABSTRACT A deep fusion model is proposed for a facial expression-based human-computer interaction system. First, image preprocessing, i.e., extraction of the facial region from the input image, is performed. More discriminative and distinctive deep features are then extracted from the facial regions. To prevent overfitting, in-depth features of the facial images are extracted and fed to the proposed convolutional neural network (CNN) models, and several CNN models are trained. Finally, the outputs of the CNN models are fused to obtain the final decision over the seven basic facial expression classes: fear, disgust, anger, surprise, sadness, happiness, and neutral. Three benchmark datasets, SFEW, CK+, and KDEF, are used for the experiments, and the proposed system is compared with state-of-the-art methods on each dataset. Extensive performance analysis reveals that the proposed system outperforms the competing methods across various performance metrics. Finally, the proposed deep fusion model is used to control a music player from the recognized emotions of the user.

KEYWORDS Deep learning; facial expression; emotions; recognition; CNN

    1 Introduction

Facial expressions are an important channel of communication for understanding human emotions. Human emotions can be identified from various traits such as text, electroencephalography (EEG) signals, speech, and the face, with facially expressed emotions being the most noticeable and observable [1]. Emotions have a wide range of applications in computer vision, such as sentiment analysis, pain assessment, security, criminal interrogation, patient communication, and psychological treatment. Among the various emotional traits, facial emotions play an essential role and yield the most expressive cues. According to Ekman et al. [2], there are seven basic expressions of the human face: fear, neutral, sadness, disgust, anger, happiness, and surprise, and facial expression recognition aims to identify them. Capturing these expressions is less invasive and more tangible than acquiring other emotional traits. Moreover, each expression produces its own intensity variations over the facial region. Some examples of the seven basic facial expressions are shown in Fig. 1.

Figure 1: Some basic human facial expression images

Facial features play an essential role in identifying emotions on the human face. Each of the seven basic emotions (expressions) has its own significance, depending on its intensity on the face. Moreover, it has been observed that there are mixed emotions [3], which are combinations of the seven basic ones. Capturing expressions in unconstrained environments is less invasive and more tangible, and requires no interruption while the person is distant or moving. Over the past few years, emotion recognition from facial expressions has therefore attracted much attention in affective computing and cognitive science research. Facial expression recognition (FER) models have many uses in human-computer interaction, augmented reality, driving-assistance systems, etc. In a categorical FER model, emotions are derived as discrete primary emotions [4].

Facial expressions arise mainly from the eye, mouth, and cheek portions of the face region, while the other parts of the face help enhance the expression level. Facial expression research falls within affective computing [5], an application area of computer vision. In affective computing, recognition of facial expressions is a categorical model, whereas analysis of facial action unit coding is a continuous model. We consider the categorical model for facial expression recognition (FER) in this work. FER includes both image-based and video-based recognition [6]. In the image-based FER model, spatial information is extracted as the feature representation, whereas both spatial and temporal features are considered in the video-based model. Spatial features have higher distinctiveness and discriminating power than temporal features [6]. Bi et al. [7] proposed using a small number of training instances in genetic programming for face image classification, and similarly proposed multi-objective genetic programming for feature learning in a face recognition system [8].

Initially, Ekman et al. [9] defined six facial expressions (fear, anger, disgust, happiness, sadness, and surprise) and performed emotion recognition for the FER model. Ekman et al. later proposed the Facial Action Coding System [10] to measure facial movement using facial action points. Recognition of facial expressions depends mainly on the type of feature extraction, which is classified as (i) appearance-based or (ii) geometric-based feature representation [11]. Much work has been done with such appearance and geometric features of facial images. For example, Castrillon et al. [12] designed a gender classification model by considering several ways of analyzing the texture patterns within the facial region. RGB color-channel features along with depth-informative features about the facial region were proposed for the FER model in [13]. In their FER model, Yan et al. [14] employed image-filtering-based feature representation for low-resolution image samples. Sadeghi et al. [15] built a histogram distance learning-based feature representation. Makhmudkhujaev et al. [16] presented various directional descriptors with prominent local patterns as features of facial images. These models and techniques follow local-to-global feature representation schemes, and most of the features are structural and statistical.

In computer vision research, the features discussed above have succeeded in solving object recognition, biometric identification, face recognition, instance-based recognition, and texture classification problems, but their performance is limited on current state-of-the-art problems. Learning robust and discriminative low-rank representations for face recognition with occlusion was proposed in [17]. On current cutting-edge problems, deep learning-based approaches have achieved great success, both in computer vision and in business research areas. A deep learning-based approach is a neural network with many layers and parameters, and several fundamental network architectures have been defined: unsupervised pre-trained networks [18] and convolutional [19], recurrent [20], and recursive neural networks [21]. Among these, convolutional neural networks [19] are used for the FER model. Ye et al. [22] proposed a region-based convolutional fusion network for facial expression recognition. Sun et al. [23] built a FER system by identifying relationships among different regions of a facial image. Lai et al. [24] developed CNN models to recognize facial expressions. A FER model based on local fine-grained temporal and global spatial appearance features using a global-local CNN network was built in [25]. Hence, drawing on the benefits of deep learning-based CNN architectures, this work proposes a facial expression recognition model that can predict challenging expressions in the facial region in both controlled and uncontrolled environments. Several image/video-based FER models exist, but several challenging issues remain [26]: during image acquisition, facial images suffer from motion blur, noise artifacts, occlusion by hair, illumination variations, and occlusion by accessories such as glasses, makeup, scarves, and marks. Accepting these challenges, we have developed a categorical, image-based facial expression recognition model in this work. The contributions of this paper are summarized as follows:

• A deep fusion-based facial expression recognition model is proposed for human-computer interaction.

• The proposed deep learning models extract more distinctive and discriminant features from facial images.

• Influential factors such as data augmentation, fine-tuning of the hyper-parameters, and multi-resolution with progressive image resizing are employed to improve the recognition performance of the proposed model.

• Different deep learning-based approaches are fused at the post-classification stage to obtain the final decision of the recognition model.

• The proposed model is tested on three benchmark datasets (SFEW, CK+, and KDEF), and its performance is demonstrated in comparison with existing state-of-the-art models on these datasets.

This paper is organized as follows: Section 2 describes each step of the proposed methodology; the experimental dataset descriptions, results with discussion, and comparisons are presented in Section 3; finally, the findings of this research are concluded in Section 4.

    2 Proposed Scheme

This section discusses the implementation of the proposed deep fusion-based facial expression recognition (FER) model. Given an input face, the proposed model predicts the expression type among the seven facial expression classes (anger, sadness, surprise, disgust, happiness, neutral, and fear). The proposed model is decomposed into four steps: (i) image preprocessing, where the face region F is detected from the input image I of size m × n; (ii) deep learning-based feature learning and classification; (iii) analysis of several parameters affecting the performance of the proposed model; and (iv) fusion of the scores from the different trained models to obtain the final decision for the facial expression class. The working principle of the proposed model is represented in Fig. 2.

Figure 2: Block diagram of the proposed system

    2.1 Image Preprocessing

In an unconstrained imaging environment, noise, illumination, pose variations, and cluttered backgrounds are the main problems, and they may give rise to irrelevant features. So, to extract more relevant and valuable features, the face region is detected as the region of interest in the input image. The extracted face region is then normalized to a common size so that feature vectors of the same dimension can be extracted. In this work, a tree-structured part model [27], which works for all variants of face pose, is employed for face detection. This model computes sixty-eight landmark points for a frontal face and thirty-nine landmark points for a profile face. These landmark points are then used to locate the face region in the input image. Bilinear image interpolation is applied to the detected face region for normalization. The face detection process of the proposed model is depicted in Fig. 3.
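As a minimal sketch of this step, assuming OpenCV and that the TSPM landmark points are already available as an N × 2 array (the TSPM detector itself is not shown here):

```python
import cv2
import numpy as np

def normalize_face(image: np.ndarray, landmarks: np.ndarray, size: int = 48) -> np.ndarray:
    """Crop the face region spanned by the landmark points and resize it to a
    fixed size x size patch using bilinear interpolation."""
    x, y, w, h = cv2.boundingRect(landmarks.astype(np.int32))  # tight box around the landmarks
    face = image[y:y + h, x:x + w]
    # cv2.INTER_LINEAR is OpenCV's bilinear interpolation, matching the paper's choice
    return cv2.resize(face, (size, size), interpolation=cv2.INTER_LINEAR)
```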

Figure 3: Face preprocessing for the proposed model

    2.2 Feature Learning Followed by Classification

The proposed facial expression recognition model addresses a pattern recognition problem. The objective is to extract distinctive and discriminative features as a feature vector from facial region images; classifiers then learn from these feature vectors to derive a model that predicts the facial expression class. Several structural and statistical approaches [28] exist for the FER problem, but deep learning-based approaches have recently achieved tremendous success on many computer vision problems. Deep learning-based approaches work in an encapsulated way, combining feature learning and classification. Among the various deep learning approaches, convolutional neural network (CNN)-based models [29] are employed in this work. CNN-based approaches are built from core building blocks: convolutional layers, pooling layers, fully connected layers, and dense layers [29]. In a convolutional layer, the input image is convolved with several distinct filters (kernels), and the convolved images are computed as feature maps with respect to the kernels. Computing these feature maps increases the complexity of the CNN as the image size and the number of kernels employed in the convolutional layer grow.

During feature learning, the weights in the kernels are adjusted as parameters. The benefits of the convolutional layer are that (i) it achieves local connectivity by capturing correlations between neighbouring pixels, (ii) weight sharing within the same feature map reduces the complexity of the network, and (iii) it maintains shift invariance with respect to the location of objects. The convolution layer thus maps an input F of size n × n × 3 (a 3-color-channel image) through k kernels w, each of size l × l, to k derived feature maps, each of size n × n. To extract more discriminating features from the feature maps, max-pooling layers [30] are employed. A max-pooling layer with a 2 × 2 filter downsamples each feature map to half its size: the filter strides over the feature map, first horizontally and then vertically, and outputs the maximum value of each region, passing the discriminant features on to the next layer. The benefits of max-pooling layers are that they (i) decrease the number of parameters, (ii) reduce computational overhead, (iii) make parameter fitting within the network faster, and (iv) help avoid overfitting.
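A minimal Keras sketch of this shape bookkeeping (using the tensorflow.keras API rather than the paper's Theano backend; n = 48, l = 3, and k = 32 are illustrative assumptions):

```python
from tensorflow.keras import Input, Model, layers

inp = Input(shape=(48, 48, 3))                       # F: an n x n x 3 color image (n = 48)
conv = layers.Conv2D(32, (3, 3), padding="same",     # k = 32 kernels of size l x l = 3 x 3
                     activation="relu")(inp)         # -> 32 feature maps, each 48 x 48
pool = layers.MaxPooling2D((2, 2))(conv)             # 2 x 2 max-pooling halves each side: 24 x 24
Model(inp, pool).summary()                           # prints the shapes stated in the comments
```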

A fully connected layer is added at the end of the network to perform classification on the features learned in the previous layers; it connects all neurons of the previous layer to the next layer in the form of a 1-dimensional feature map. Another layer is the dense layer [31], which is also a type of fully connected layer. The main differences between fully connected and dense layers are that (i) linear operations are performed in a dense layer, and (ii) at the end of the network, the dense layer computes matching scores for each input sample using the softmax activation function [32]. In addition, batch normalization [33] and dropout layers [34] are also adopted in this work. The batch normalization layer reduces computational overhead while maintaining homogeneity within each batch of data used for learning the network parameters. The dropout layer ignores randomly selected neurons during learning, i.e., the weights of those neurons are not updated during training. Using dropout layers prevents overfitting and effectively combines the predictions of various neural networks.

Using convolutional, max-pooling, fully connected, batch normalization, and dropout layers, we built several convolutional neural network (CNN) architectures; the proposed architectures are combinations of these layers. The first CNN architecture is shown in Fig. 4. It comprises five blocks, each a sequence of layers (Convolution + Activation + Max-pooling + Batch Normalization), followed by two fully connected layers (Dense + Dropout). For clarity, the number of convolutional layers with kernel sizes, the number of kernels, the number of max-pooling layers, batch normalizations, dropouts, the feature maps' output shapes, and the number of parameters of each layer are reported in Table 1. Similarly, the second CNN architecture is shown in Fig. 5, and its layers and parameters are reported in Table 2. From Tables 1 and 2, it can be seen that the ReLU (Rectified Linear Unit) and Softmax activation functions and the Adam optimizer are adopted for learning the network parameters. Both CNN architectures are trained for the seven-class FER problem in this work.
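A condensed Keras sketch of this block structure follows; the filter counts, dropout rates, and number of blocks here are illustrative assumptions (Tables 1 and 2 give the exact per-layer configuration of CNN1 and CNN2):

```python
from tensorflow.keras import layers, models

def conv_block(x, filters):
    """One block: Convolution + Activation + Max-pooling + Batch Normalization."""
    x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((2, 2))(x)
    return layers.BatchNormalization()(x)

inp = layers.Input(shape=(48, 48, 3))
x = inp
for f in (32, 64, 128):              # the paper's CNN1 uses five such blocks; three shown here
    x = conv_block(x, f)
x = layers.Flatten()(x)
for units in (1024, 512):            # two fully connected (Dense + Dropout) layers
    x = layers.Dense(units, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
out = layers.Dense(7, activation="softmax")(x)   # seven expression classes
model = models.Model(inp, out)
# Adam optimizer per Tables 1 and 2; binary cross-entropy is the loss the
# paper later selects in Section 3.1.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```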

Figure 4: Proposed CNN1 architecture for the FER model

Table 1: Description of parameters, layers, and output shapes for the CNN1 architecture

Figure 5: Proposed CNN2 architecture for the FER model

Table 2: Description of parameters, layers, and output shapes for the CNN2 architecture

Table 2 (continued)

| Layer | Output shape | Image size | Parameters |
| Block-6: Convolution2D (3×3@128) (Activation: ReLU) | (n2, n2, 128) | (24, 24, 128) | ((3×3×128)+1)×128 = 147,584 |
| Batch normalization | (n2, n2, 128) | (24, 24, 128) | 4×128 = 512 |
| Maxpooling2D (2×2) | (n3, n3, 128) | (12, 12, 128) | 0 |
| Dropout | (n3, n3, 128) | (12, 12, 128) | 0 |
| Fully connected: Flatten | 12×12×128 = 18432 | — | 0 |
| Dense + ReLU + Batch normalization + Dropout | 1024 | — | (18432+1)×1024 = 18,875,392 + (4×1024) = 18,879,488 |
| Dense + ReLU + Batch normalization + Dropout | 512 | — | (1024+1)×512 = 524,800 |
| Dense + ReLU + Batch normalization + Dropout | 256 | — | (512+1)×256 = 131,328 |
| Dense + ReLU | 7 | — | (256+1)×7 = 1,799 |
| Total parameters | | | 19,829,287 |
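The parameter counts in the table follow directly from the layer shapes; as a quick sanity check of the Block-6 and first dense rows:

```python
# Parameter counts recomputed from the shapes in Table 2 (continued).
conv = ((3 * 3 * 128) + 1) * 128       # 3x3 kernels over 128 channels, 128 filters (+1 bias each)
bn = 4 * 128                           # gamma, beta, moving mean, moving variance per channel
flat = 12 * 12 * 128                   # flattened feature map: 18432 values
dense1 = (flat + 1) * 1024 + 4 * 1024  # dense weights + biases, plus batch-norm terms
print(conv, bn, flat, dense1)          # -> 147584 512 18432 18879488
```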

    2.3 Factors Affecting the Recognition System’s Performance

    2.3.1 Image Augmentation

In machine learning, image augmentation is employed to increase the number of samples corresponding to each input image by applying several filtering and affine transformation techniques [35]. The benefits of image augmentation are (i) handling the overtraining of convolutional neural networks, (ii) reducing overfitting, and (iii) aiding the fine-tuning of hyper-parameters for better CNN performance. Image augmentation generates additional samples without changing the images' fidelity or visual quality [36]; the generated samples improve the learning of the CNN parameters, and the better-trained models can then be used for prediction. Among the many data augmentation techniques, we employ image filtering, i.e., bilateral filtering [37], the unsharp filter [38], and sharpening filters [39], and affine transformations [40]: reflection, rotation, scaling [41], shearing [42], zooming [43], filling [44], and horizontal flipping [45]. By applying these techniques, eighteen images (the original plus seventeen augmented) are generated for each training image. The image augmentation applied in the proposed model is demonstrated in Fig. 6, and Algorithm 1 shows the step-by-step computation (a code sketch follows the algorithm).

Figure 6: Demonstration of image augmentation applied to each image F in the proposed model

Algorithm 1: Image Augmentation
Input: Face region F
Output: Faug
1. Apply bilateral filtering [37] on F to get F1
2. Apply unsharp filtering [38] on F to get F2
3. Apply sharpening filters [39] with different filter masks {ω1, ω2, ω3, ..., ω9} on F to get F3, ..., F11
4. Apply image rotation [40] on F to get F12
5. Apply image scaling [41] on F to get F13
6. Apply image shearing [42] on F to get F14
7. Apply image zooming [43] on F to get F15
8. Apply image filling [44] on F to get F16
9. Apply image horizontal flipping [45] on F to get F17
10. The final augmented set for each F is Faug = {F1, ..., F17}
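A minimal sketch of Algorithm 1 in Python, assuming OpenCV; the exact filter masks ω1..ω9 and the affine parameters are assumptions chosen only to illustrate the pipeline (a few representative steps are shown, not all seventeen outputs):

```python
import cv2
import numpy as np

def augment(F: np.ndarray) -> list:
    """Return a list of augmented variants of the face region F."""
    h, w = F.shape[:2]
    aug = [cv2.bilateralFilter(F, d=9, sigmaColor=75, sigmaSpace=75)]  # F1: bilateral filter
    blur = cv2.GaussianBlur(F, (5, 5), 0)
    aug.append(cv2.addWeighted(F, 1.5, blur, -0.5, 0))                 # F2: unsharp mask
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])           # one sharpening mask
    aug.append(cv2.filter2D(F, -1, kernel))                            # F3 (F4..F11 vary the mask)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle=10, scale=1.0)
    aug.append(cv2.warpAffine(F, M, (w, h)))                           # F12: rotation
    aug.append(cv2.resize(F, None, fx=1.1, fy=1.1)[:h, :w])            # F13: scaling (then crop)
    S = np.float32([[1, 0.15, 0], [0, 1, 0]])
    aug.append(cv2.warpAffine(F, S, (w, h)))                           # F14: shearing
    aug.append(cv2.flip(F, 1))                                         # F17: horizontal flip
    return aug
```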

    2.3.2 Fine Tuning

In deep learning, CNN performance may be improved by fine-tuning the hyper-parameters of the trained model [46]. Fine-tuning takes a trained network, initializes it with its trained weights, and continues training on data from the same domain to obtain a new model. Fine-tuning speeds up the training process and helps overcome the small-dataset problem. In fine-tuning, either all layers of the trained network are retrained, or some layers are frozen and only the remaining layers are trained. The performance of the proposed CNN models can also be improved by tuning hyper-parameters such as the learning rate, L2 regularization, and batch size, and by increasing the model depth [47]. Moreover, increasing the image resolution, i.e., progressively resizing the face region, can also improve the performance of the proposed CNN model.
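A minimal sketch of the freeze-and-retrain variant, continuing the Keras sketch from Section 2.2 (the number of frozen layers, the reduced learning rate, and x_train/y_train are illustrative assumptions):

```python
from tensorflow.keras.optimizers import Adam

# Freeze the convolutional blocks; only the dense head is retrained.
for layer in model.layers[:-5]:
    layer.trainable = False

model.compile(optimizer=Adam(learning_rate=1e-4),  # reduced learning rate for fine-tuning
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=8, epochs=50,
          validation_data=(x_test, y_test))
```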

    2.3.3 Scores Fusion

The techniques in this category are the sum-rule and product-rule fusion models [48]. These fusion techniques operate on the scores obtained from the proposed trained CNN models for each test sample. Assume that for a test sample t_i, s1 and s2 are the two score vectors of length M (the number of classes) obtained from the trained CNN1 and CNN2 facial expression models. Then the final score vector for t_i is given by (i) the sum-rule fusion s = s1 + s2, or (ii) the product-rule fusion s = s1 × s2 (element-wise). The final score vector s is then used to find the predicted class label for the test sample t_i.
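A minimal sketch of the two fusion rules, assuming s1 and s2 are the softmax score vectors (length M = 7) produced by the trained CNN1 and CNN2 models for a single test sample:

```python
import numpy as np

def fuse_scores(s1: np.ndarray, s2: np.ndarray, rule: str = "product") -> int:
    """Fuse two score vectors with the sum rule or the product rule and
    return the predicted class label."""
    s = s1 + s2 if rule == "sum" else s1 * s2   # element-wise fusion
    return int(np.argmax(s))

s1 = np.array([0.10, 0.60, 0.05, 0.05, 0.10, 0.05, 0.05])  # CNN1 scores (illustrative)
s2 = np.array([0.20, 0.50, 0.05, 0.05, 0.10, 0.05, 0.05])  # CNN2 scores (illustrative)
print(fuse_scores(s1, s2))  # -> 1
```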

    3 Experimental Results

This section explains the experiments performed with the proposed facial expression recognition system (FERS). Three benchmark datasets are employed. The first is Cohn-Kanade Extended (CK+) [49], which is composed of 593 short videos of 123 subjects with different lighting and aging variations. For the experiments, 981 image samples of six facial expression classes (surprise, happiness, fear, disgust, sadness, and anger) were selected from the 123 subjects. Fig. 7a shows some image samples from this dataset. The Karolinska Directed Emotional Faces (KDEF) [50] is the second dataset, with seven facial expression classes. It comprises 4900 emotional images of human faces collected from 35 females and 35 males. In this work we downloaded 2447 images, of which 1222 are used for training and the remaining 1225 for testing. Fig. 7b shows some images from this dataset. The third dataset is Static Facial Expressions in the Wild (SFEW) [51], also with seven facial expression classes. Its frames are selected from the AFEW (Acted Facial Expressions in the Wild) dataset, a dynamic temporal facial expression dataset, and it covers several challenges of the FER problem: varied focus, different face resolutions, various head poses, significant variation in age, considerable variation in occlusion, etc. In total, 700 frames were extracted from the AFEW dataset, each labelled with the sadness, surprise, happiness, fear, disgust, anger, or neutral expression class. For the experiments, 346 images were selected for training and 354 for testing. Some images from this dataset are shown in Fig. 7c. Table 3 summarizes the employed datasets.


Figure 7: Some image samples from the (a) CK+, (b) KDEF, and (c) SFEW datasets

Table 3: Summary of the datasets employed for the proposed model

The CK+ and KDEF datasets are randomly partitioned, with 50% of the samples from each class forming the training set and the remaining 50% forming the testing set. For the SFEW dataset, the training-testing split is given in [51].
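A minimal sketch of this class-balanced 50/50 partitioning, assuming the images X and integer expression labels y are already loaded; scikit-learn's stratify option keeps the per-class proportions equal in both halves:

```python
from sklearn.model_selection import train_test_split

# stratify=y draws 50% of each class into each half; the seed is illustrative.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=42)
```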

    3.1 Results and Discussion

The proposed model is implemented in Python on Ubuntu 16.04 LTS with an Intel Core i7 3.20 GHz processor and 32 GB RAM. Several packages from Keras [52] are employed for the deep learning approaches, and the Theano Python library is used for building the CNN architectures. The performance of the proposed model is reported as the correct recognition rate, i.e., accuracy in %.

During face preprocessing, the face region F is detected from the given input image I using the TSPM model. The extracted face region F is then normalized to a fixed N × N size so that a fixed-dimensional feature vector can be extracted from each face. The extracted facial regions of the training samples are then fed to the proposed convolutional neural network architectures, CNN1 and CNN2. In the experiments, the face region size N × N is 48 × 48, while the batch size and the number of epochs vary. To improve the performance of the proposed model, the data augmentation techniques discussed in Section 2.3.1 are applied to each 48 × 48 face region using Algorithm 1, so that for each F the augmented images {F1, ..., F17}, plus the original, are obtained.

• Impact of different loss functions: First, the CNN1 architecture was trained on 48 × 48 input face images while varying the loss function used to minimize the network error. The mean squared error (MSE) [53], binary cross-entropy [54], and hinge loss [55] functions were considered, and their impact on the performance of the facial expression recognition (FER) system with the proposed CNN1 model was measured. The results, shown in Fig. 8, indicate that performance is best with the binary cross-entropy loss function, so it is used in the subsequent experiments (a minimal training sketch follows).
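A minimal sketch of this comparison, where build_cnn1() is a hypothetical builder returning a fresh copy of the CNN1 sketch from Section 2.2 so that each loss starts from newly initialized weights; the epoch count here is an illustrative assumption:

```python
# Train the same architecture once per candidate loss and record the best
# validation accuracy for each.
for loss in ("mean_squared_error", "binary_crossentropy", "hinge"):
    m = build_cnn1()
    m.compile(optimizer="adam", loss=loss, metrics=["accuracy"])
    hist = m.fit(x_train, y_train, batch_size=8, epochs=50,
                 validation_data=(x_test, y_test), verbose=0)
    print(loss, max(hist.history["val_accuracy"]))
```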

Figure 8: Effectiveness of different loss functions on the performance of the CNN1 model for the CK+ dataset

• Impact of batch size vs. epochs: The recognition performance of the proposed model also varies with batch sizes of {8, 16, 32} and corresponding epoch counts of {50, 100, 200, 500}. Fig. 9 demonstrates the effect of batch size and number of epochs on the performance of the CNN1 model for the CK+, KDEF, and SFEW datasets. The figure shows that performance improves as the number of training epochs increases, while the batch size has a smaller effect. In this work, a batch size of 8 gave the best FER performance on CK+, KDEF, and SFEW, so the further experiments use a batch size of 8 with 500 epochs for learning the parameters of the CNN1 and CNN2 architectures.

Figure 9: Effectiveness of the trade-off between batch size and number of epochs on the performance of the CNN1 model: (a) CK+, (b) KDEF, and (c) SFEW datasets

• Impact of data augmentation: The effect of data augmentation on the performance of the proposed model is depicted in Fig. 10. The data augmentation techniques are found to increase performance, so in the subsequent experiments data augmentation is applied to each training sample to enlarge the training set for better learning of the CNN models.

Figure 10: Effectiveness of data augmentation on the performance of the proposed model with the (a) CNN1 and (b) CNN2 models

• Impact of multiscaling and multiresolution: Table 4 reports the recognition performance of the proposed model, showing the usefulness of multiscaling and multiresolution (progressive image resizing) with image sizes of 48 × 48, 64 × 64, and 96 × 96. In this experiment, the Mini-Batch Gradient Descent optimization technique [56] was used with a batch size of 8 and 500 epochs. Table 4 shows that, on the CK+, KDEF, and SFEW datasets, the performance of both CNN architectures increases with image size, and CNN2 performs slightly better than CNN1. The proposed model attains its highest performance with CNN2, 95.89% for CK+, 78.27% for KDEF, and 35.31% for SFEW, while with CNN1 it attains 93.41% for CK+, 77.76% for KDEF, and 33.05% for SFEW (a resizing sketch follows).
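A minimal sketch of the progressive-resizing loop, where build_cnn1(input_size) is a hypothetical builder accepting the face-region size; the training images are resized with bilinear interpolation for each round:

```python
import cv2
import numpy as np

for size in (48, 64, 96):
    # Resize every training face region to the current resolution.
    X_resized = np.stack([cv2.resize(f, (size, size),
                                     interpolation=cv2.INTER_LINEAR)
                          for f in X_train])
    m = build_cnn1(input_size=size)
    m.fit(X_resized, y_train, batch_size=8, epochs=500)
```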

Table 4: Performance of the CNN1 and CNN2 models in terms of accuracy (%) with varying image sizes

• Impact of fine-tuning: The performance of the proposed CNN1 and CNN2 architectures is improved by fine-tuning the hyper-parameters of the trained models. The fine-tuning method takes the trained CNN1 and CNN2 networks, initializes them with their trained weights, and retrains them with some layers frozen to reduce the computational overhead of training the hyper-parameters. The impact of fine-tuning on the proposed FERS is shown in Fig. 11.

Figure 11: Impact of fine-tuning the hyper-parameters of the trained CNN1 and CNN2 models on the performance of the proposed FERS

• Impact of score fusion: To exploit both CNN models, the scores from the CNN1 and CNN2 models are fused to derive the final decision of the proposed model. Score-level fusion with the sum rule, s = s_i + s_j, and the product rule, s = s_i × s_j, is used, where s_i and s_j are the scores of a test sample from the CNN1 and CNN2 models, respectively. The fused performance of the proposed system is shown in Table 5 for each facial expression dataset. Every dataset attains better performance after fusion, and the product rule achieves better performance than sum-rule score-level fusion. With product-rule fusion, the proposed model obtains 96.89%, 82.35%, and 41.73% accuracy for the CK+, KDEF, and SFEW datasets, respectively. The corresponding confusion matrices for CK+, KDEF, and SFEW are shown in Fig. 12 for a better understanding of how each test sample is classified into its class.

Table 5: Effectiveness of score fusion on the performance of CNN1 and CNN2 in terms of accuracy (%)

Figure 12: Confusion matrices for the (a) CK+, (b) KDEF, and (c) SFEW datasets for the fused performance of the CNN1 and CNN2 models

    3.2 Comparisons

For comparison with other existing CNN models, the input to those models is the same facial region used by the proposed system, and the same data augmentation techniques are applied to all competing methods. Hence, the performance comparisons reported here are made under the same training-testing protocol used by the proposed methodology. Table 6 compares ResNet-50 [57], Inception-v3 [58], Sun et al. [59], and the proposed model on the CK+ dataset; the proposed model performs best, with 96.89% accuracy. Table 7 shows the analysis on the KDEF dataset, where the proposed model reaches 82.35% accuracy, outperforming the existing models. Table 8 shows the comparative analysis on the SFEW dataset, where the proposed model again outperforms the existing models, with 41.73% accuracy.

Table 6: Comparison of performance for the CK+ dataset (CV is cross-validation)

Table 7: Comparison of performance for the KDEF dataset (CV is cross-validation)

Table 8: Comparison of performance for the SFEW dataset (the competing models used the same training-testing protocols)

In addition, the proposed deep fusion model is used to control a music player: depending on the user's emotion, a song is selected from the corresponding class. The proposed model can thus help disabled persons change their mood. During real-time testing on a 2.4 GHz computer, the proposed model predicted 28 frames per second, so it can also be used in other human-computer interface applications.
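A minimal sketch of this emotion-driven control loop; the playlist mapping, class ordering, detect_landmarks() stand-in, and song filenames are all hypothetical, and normalize_face() refers to the Section 2.1 sketch:

```python
EMOTION_PLAYLISTS = {
    "happiness": "upbeat.mp3", "sadness": "calm.mp3", "anger": "soothing.mp3",
    "fear": "reassuring.mp3", "disgust": "neutral.mp3",
    "surprise": "energetic.mp3", "neutral": "ambient.mp3",
}
CLASS_NAMES = ["anger", "disgust", "fear", "happiness",
               "neutral", "sadness", "surprise"]  # assumed class order

def song_for_frame(frame, model, detect_landmarks):
    """Map one camera frame to a song via the fused expression prediction."""
    face = normalize_face(frame, detect_landmarks(frame))  # Section 2.1 sketch
    scores = model.predict(face[None, ...] / 255.0)[0]     # fused CNN scores
    return EMOTION_PLAYLISTS[CLASS_NAMES[int(scores.argmax())]]
```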

    4 Conclusion

A facial expression recognition model was proposed for controlled and uncontrolled imaging environments. The images considered here were captured in unconstrained conditions: motion-blurred, hazy, rotated, pose-variant, moving at a distance, and off-angle. The implementation of the proposed model was divided into three components: (i) image preprocessing, (ii) feature learning with classification, and (iii) performance fusion. The face region, the region of interest for the proposed model, was extracted during image preprocessing and then underwent feature learning with classification. For this task, two convolutional neural networks (CNNs) were proposed, each trained on the facial regions of the training samples; the trained CNN models were then used to obtain classification performance on the facial regions of the testing samples. Finally, the outputs of both CNN models were fused to build the final recognition model. Several factors affecting CNN performance, such as data augmentation, fine-tuning of the hyper-parameters, and multi-resolution with progressive image resizing, were also investigated during experimentation. The proposed model was verified on three well-known datasets: CK+, KDEF, and SFEW. Comparative analysis revealed that the proposed model outperforms state-of-the-art models on various performance metrics. Finally, the proposed deep fusion model was used to control a music player from the recognized emotions of the user.

Funding Statement: This work was supported by the Researchers Supporting Project (No. RSP-2021/395), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
