
    Automated Facial Expression Recognition and Age Estimation Using Deep Learning

2022-08-23
Computers, Materials & Continua, Issue 6, 2022

Syeda Amna Rizwan, Yazeed Yasin Ghadi, Ahmad Jalal and Kibum Kim

1 Department of Computer Science, Air University, Islamabad, 44000, Pakistan

2 Department of Computer Science and Software Engineering, Al Ain University, Abu Dhabi, 122612, UAE

3 Department of Human-Computer Interaction, Hanyang University, Ansan, 15588, Korea

Abstract: With the advancement of computer vision techniques in surveillance systems, the need for more proficient, intelligent, and sustainable facial expression and age recognition has grown. The main purpose of this study is to develop an accurate facial expression and age recognition system capable of error-free recognition of human expression and age in both indoor and outdoor environments. The proposed system first takes an input image, pre-processes it, and then detects faces in the entire image. Landmark localization then guides the formation of a synthetic face mask prediction. A novel set of features is extracted and passed to a classifier for the accurate classification of expressions and age group. The proposed system is tested over two benchmark datasets, namely, the Gallagher collection person dataset and the Images of Groups dataset. The system achieved remarkable results over these benchmark datasets in terms of recognition accuracy and computational time. The proposed system is also applicable in consumer application domains such as online business negotiations, consumer behavior analysis, E-learning environments, and emotion robotics.

Keywords: Feature extraction; face expression model; local transform features; recurrent neural network (RNN)

    1 Introduction

Recognition of human age and expressions has engaged many researchers in various fields including sustainable security [1], forensics [2], biometrics [2], and cognitive psychology. Interest in this field is spreading fast, fuelled by scientific advances that provide a better understanding of personal identity, attitudes, and intentions based on facial expressions and age. Facial expressions have a great impact on interpersonal communication. Human emotional responses are very complex and are most directly expressed in facial expressions. According to Mehrabian's model of oral communication, when people speak, 7% of the message is carried by the words themselves, 38% by intonation, and 55% by body language including facial expressions. Over the past few decades, researchers have developed human facial expression recognition and age estimation (FERAE) systems that use advanced sensors such as video cameras, eye trackers, thermal cameras, human vision component sensors [3–5], and stereo cameras [6,7] to intelligently recognize human behaviours, gestures [8–10], and emotions and to predict the age of an individual. Problems that arise in automatic FERAE systems include pose variations, uncontrolled lighting, complex backgrounds, and partial occlusions. Researchers face many challenges in attempting to overcome these problems.

Human subjects present various expressions all the time in daily life. To develop a sustainable expression recognition and age estimation system, we need to determine whether age estimation is influenced by changes in facial expression, how significant that influence is, and whether a solution can be developed for the problems facial expressions cause. Existing work on age estimation is mostly founded on expressionless faces. Most age estimation and expression recognition systems contain mainly frontal-view, neutral expressions, although some use variations in illumination, pose, and expression. To perform a systematic study of age estimation under various expressions, we need databases with clear ground truth labels for both age and expression.

The overall flow of our proposed model is as follows. First, face detection is done using the YCbCr skin color segmentation model. Second, landmark points are plotted on the face based on the connected components technique. Third, a synthetic face mask is mapped on the face based on landmark point localization. Fourth, features are extracted and subdivided into two categories: for age estimation, the anthropometric model, energy-based point clouds, and wrinkles are used for feature extraction; for expression recognition, HOG-based symmetry identification, energy-based point clouds, and geodesic distances between landmark points are extracted. Finally, a Recurrent Neural Network (RNN) is used for the correct recognition of facial expressions and age.

    The main contributions of the proposed system are:

· Synthetic face mask mapping increases multi-face expression and age recognition accuracy.

· Our local transform features for both age and expression recognition provide far better accuracy than other state-of-the-art methods.

· A Recurrent Neural Network (RNN) classifier is used for accurate age prediction and expression recognition of individuals.

Our proposed sustainable FERAE model is evaluated using different performance measures over two multi-face benchmark datasets, namely, the Gallagher collection person dataset and the Images of Groups dataset, which fully validates our system's efficacy and shows that it outperforms other state-of-the-art methods.

This article is structured as follows: Section 2 describes related work for both facial expression and age recognition. Section 3 gives a detailed overview of the proposed model that intelligently recognizes multiple facial expressions and age. In Section 4, the proposed model's performance is experimentally assessed on two publicly available benchmark datasets. Lastly, in Section 5 we sum up the paper and outline future directions.

    2 Related Work

Over the past few years, many researchers have done remarkable work on both single and multi-face expression recognition and age estimation. In this section, a comprehensive review of recent related studies on facial expression recognition and age estimation models is given in Sections 2.1 and 2.2, respectively.

    2.1 Multi-facial Expressions Recognition Systems

In recent years, many RGB-based facial expression recognition systems have been proposed. In [11], the authors first detected facial features using a Multi-task Cascaded Convolutional Neural Network. After that, a CNN and a VGG-16 model were used to classify facial expressions as Neutral, Positive, or Negative; the facial expression recognition accuracy on a public dataset was 74%. In [12], the authors developed a system to recognize facial expressions in a variety of social events. Seetaface was used to detect the faces and align them. Visual facial features, i.e., PHOG, CENTRIST, DCNN features, and VGG features using VGGFace-LSTM and DCNN-LSTM, were then extracted. The system was tested on the Group Affect Database 2.0 and achieved a recognition accuracy of 79.78%. In [13], a hybrid network was developed in which a CNN was used to pretrain the faces and extract scene features, skeleton features, and local features. These fused features were used to predict emotions. The system was tested on a public dataset and achieved validation and testing accuracies of 80.05% and 80.61%, respectively. In [14], the authors developed a mood recognition system by first capturing images from a webcam and training two machine learning algorithms, i.e., a Gradient Boosting classifier and K-Nearest Neighbors (KNN). The recognition accuracies achieved were 81% and 73%, respectively.

    2.2 Multi-facial Age Estimation Systems

In recent years, different methodologies have been adopted by researchers for the estimation of age or age group. In [15], the authors developed a system to estimate the age of real-life persons. Features were extracted via Local Binary Pattern (LBP) and Gabor techniques, and SVM was used for classification. The system was tested on the Images of Groups dataset and achieved an accuracy of 87.7%. In [16], the authors extracted features using LBP and FPLBP techniques; SVM was used for age group classification and achieved an accuracy of 66.6%. In [17], the authors developed a system for automatic classification of age and gender. Features were extracted via Multi-Level Local Binary Pattern (MLLBP), and an SVM with a non-linear RBF kernel was used to classify the correct age groups and gender. The system was tested on the Images of Groups dataset and achieved an accuracy of 43.4%. In [18], the authors extracted features and classified the correct age group using a Convolutional Neural Network (CNN). The system was tested on the OUI Adience dataset and achieved an accuracy of 62.34%.

    3 Material and Methods

This section describes the proposed framework for facial expression recognition and age estimation. Fig. 1 shows the general architecture of the proposed system.

    3.1 Pre-processing and Face Detection

Our sustainable FERAE system starts with a preprocessing step that involves two stages: 1) background subtraction and 2) aligning the faces of both datasets at an angle of 180°. First, complex backgrounds are removed from the images so faces can be detected more accurately. This is done with a 5×5 median filter to remove noise and suppress undesirable distortion in the images, after which K-means clustering is used for background subtraction. Second, if the positions of most faces in the datasets are not aligned properly, face detection becomes problematic. Thus, we align the faces of both the Gallagher collection person dataset and the Images of Groups dataset using the code available on GitHub [19].
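The median-filtering stage can be sketched in plain Python; this is only an illustration of the idea (the MATLAB implementation and the K-means background subtraction are not shown), and the edge handling here is our own choice:

```python
from statistics import median

def median_filter(img, k=5):
    """Apply a k x k median filter to a 2-D list of gray values.
    Border pixels use whatever neighbours fall inside the image."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Gather the window clipped to the image bounds.
            win = [img[y][x]
                   for y in range(max(0, i - r), min(h, i + r + 1))
                   for x in range(max(0, j - r), min(w, j + r + 1))]
            out[i][j] = median(win)
    return out
```

A median filter removes impulse ("salt-and-pepper") noise because a single outlier pixel cannot move the median of its window, which is why it suits this denoising step better than a mean filter.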

For face detection, the YCbCr skin color segmentation technique is used. This model provides remarkable results for detecting faces in a scene using the YCbCr color space. Since skin color varies between individuals, the RGB images are converted to YCbCr color space to get full coverage of each skin pixel and easily distinguish skin from non-skin pixels. Fig. 2 shows examples of face detection in the Images of Groups dataset. This technique is not affected by the illumination condition, the Y (luma) factor; skin representation is based on two components, Cb (blue difference) and Cr (red difference). The skin color model is formulated as in Eqs. (1) and (2) [2].
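Since the exact thresholds of Eqs. (1) and (2) are not reproduced here, the sketch below uses the standard BT.601 RGB-to-YCbCr conversion with commonly cited Cb/Cr skin ranges as stand-in values; the thresholds are assumptions, not the paper's:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one RGB pixel (0-255 channels) to YCbCr (BT.601, full range)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin from Cb/Cr only, ignoring the luma Y,
    which makes the test largely illumination-invariant."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]

def skin_mask(image):
    """Binary skin mask for an image given as rows of (r, g, b) tuples."""
    return [[is_skin(*px) for px in row] for row in image]
```

Thresholding only chrominance (Cb, Cr) while discarding luma (Y) is what makes the method robust to lighting changes, as the text notes.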

Figure 2: Some results of pre-processing and face detection over the Images of Groups dataset

    3.2 Landmarks Tracking

Landmarks tracking is the primary step towards face mask mapping. The landmarks are plotted on the facial features to track the pixel positions. They will help us extract different point-based features for the accurate classification of multi-face expressions and age. This section is divided into two subsections: Section 3.2.1 explains landmarks tracking over the Gallagher benchmark dataset for multi-face expression recognition, and Section 3.2.2 describes landmarks tracking over the Images of Groups dataset for multi-face age estimation.

    3.2.1 Landmarks Tracking for Multi-face Expressions Recognition

To plot the landmarks over the Gallagher collection person dataset, the same procedure is used for marking the landmarks on the eyebrows, eyes, and lips as described in Section 3.2.2. For the localization of landmarks on the nose, the nose is first detected using a cascade algorithm. The two nostril points are obtained by applying the concept of connected components inside the bounding box. Then, 3 points are obtained: one on the nose tip and two on the nostrils. Therefore, a total of 23 landmarks are plotted on the entire face. Figs. 3a and 3b show the landmark point symmetry over both benchmark datasets, respectively.

    Figure 3: Landmark points symmetry over the (a) Gallagher collection person dataset and (b) the Images of groups dataset,respectively

    3.2.2 Landmarks Tracking for Multi-face Age Estimation

After detection of the face, 35 landmarks are plotted on the face (on the eyebrows, eyes, and lips) by converting the RGB image into a binary image and detecting the facial features using blob detection. The edges of each facial feature blob are marked with landmarks by taking the central point of each edge using Eq. (3). The nose is detected using the ridge contour method, and a total of seven landmark points are marked on the nose. To cover the chin, jawline, and forehead, the midpoints of the face blob or bounding box edges are marked; these are calculated using Eq. (4) [2]:

where a, b, c, d denote the edge lengths and e1, e2, e3, and e4 are the midpoints of the blob edges.

    3.3 Synthetic Face Mask Prediction

Synthetic mask prediction is a robust technique for accurately predicting the age of individuals in multi-face images and recognizing a person's expressions or emotions. It is widely used for face detection, face recognition, face aging estimation, etc. To generate synthetic masks on the face, we use the 35 landmark points for age estimation and the 23 landmark points for multi-face expression recognition. The technique is the same for both masks, i.e., three-sided polygon meshes and perpendicular bisection of a triangle are applied [15]. However, for multi-face expression recognition the synthetic mask is generated only on the facial features, using sub-triangle formation, since the main variations during changes in facial expression appear on the facial features. Algorithm 1 describes the overall procedure of synthetic face mask prediction over the Gallagher collection person dataset for multi-face expression recognition.

Given a face image with 35 or 23 landmark points (over the Images of Groups dataset and the Gallagher collection person dataset, respectively), a multivariate shape model is generated from the landmark points via polygon meshes and the perpendicular bisection of triangles; for age estimation and expression recognition, the large-triangle and sub-triangle formation rules are used. The perpendicular bisections help us distinguish the changes occurring from infancy to adulthood, while the triangular meshes further help to extract features for both multi-face expression recognition and age estimation. Figs. 4a and 4b show the synthetic mask prediction over the Images of Groups dataset and the Gallagher collection person dataset, respectively.

    Figure 4:Synthetic face mask prediction over(a)the Images of Groups dataset for age estimation and over(b)the Gallagher collection person dataset for expression recognition respectively

Algorithm 1: Multi-face expression recognition synthetic mask prediction
Input: X = positions of the 23 localized landmark points
Output: mesh of triangles of Y: TM(Y)
// initialize the feature descriptor matrix
begin
1   Calculate the pixel positions of the three outer corners of a triangle (c1, c2, c3);
2   for each face
3     TM(Y) := (c1, c2, c3);
4     /* initialize TM(Y) as a large triangle */
5     the sub-triangles formed inside TM(Y) are S := (s1, s2, s3);
6     do
7       c1A bisects ∠c1;
8       c2B bisects ∠c2;
9       c3C bisects ∠c3;
10    end for
11    TM(Y) ← S;
12  return TM(Y);
13 end
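The sub-triangle formation step in Algorithm 1 can be sketched as follows. Note this is a simplified stand-in that splits a triangle through its edge midpoints; the paper itself uses perpendicular bisection of the triangle, so the geometry here is illustrative only:

```python
def midpoint(p, q):
    """Midpoint of the segment between 2-D points p and q."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def subdivide(tri):
    """Split one large triangle (c1, c2, c3) into four sub-triangles
    through the midpoints of its edges (a stand-in for the mesh
    refinement step of Algorithm 1)."""
    c1, c2, c3 = tri
    m12, m23, m31 = midpoint(c1, c2), midpoint(c2, c3), midpoint(c3, c1)
    return [(c1, m12, m31), (m12, c2, m23), (m31, m23, c3), (m12, m23, m31)]
```

Applying `subdivide` recursively to each returned sub-triangle yields a progressively finer mesh over the facial features, which is the spirit of the TM(Y) → S refinement above.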

    3.4 Feature Descriptors

For the estimation of age and the accurate recognition of facial expressions, we extract age and expression features individually. For age group prediction, the feature extraction methods are: 1) the anthropometric model, 2) interior angle formulation, and 3) wrinkle detection (see Section 3.4.1). For expression recognition, the extracted features are: 1) geodesic distance, 2) energy-based point clouds, and 3) 0–180° intensity (see Section 3.4.2).

    3.4.1 Feature Extraction for Age Group Classification

The anthropometric model is the study of the human face and facial features in terms of dimensions and sizes [20]. The landmark points marked on the facial features are known by anatomical names, e.g., the lip corners are known as the left and right cheilion and are denoted by lch and rch; likewise, the inner corners of the eyebrows are known as the nasion and are denoted by n. Using this model, we take several distances between the facial features, calculated as Euclidean distances using Eq. (5) [21]:

where p1, p2, q1, and q2 are the pixel locations along the x and y coordinates, respectively. Fig. 5 shows the anatomical names and calculated dimensions.
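Eq. (5) is the standard Euclidean distance between two pixel locations. A minimal sketch with hypothetical landmark coordinates (the names follow the anatomical labels above, but the coordinate values are invented for illustration):

```python
from math import dist  # Python 3.8+

# Hypothetical landmark coordinates (x, y): lch/rch are the lip corners,
# n is the nasion; the values are made up for this example.
landmarks = {"lch": (120, 200), "rch": (180, 200), "n": (150, 120)}

def feature_distance(a, b, points=landmarks):
    """Euclidean distance between two named landmarks, in the sense of Eq. (5)."""
    return dist(points[a], points[b])
```

For instance, `feature_distance("lch", "rch")` gives the mouth width, one of the anthropometric dimensions fed to the classifier.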

For the calculation of the interior angles, the above-mentioned face mask is used. From infancy to adulthood the shape of the face mask changes, and this results in variations of the angles. We calculate the interior angles θ1, θ2, θ3 using the law of cosines in Eqs. (6), (7), and (8) [21]:

where p, q, and r are the sides of the triangles formed by the face mask. Measurements of the interior angles for two different age groups are shown in Fig. 6.
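The law-of-cosines computation behind Eqs. (6)–(8) can be sketched directly from the triangle's vertex coordinates; the vertex inputs here are arbitrary examples, not mask landmarks:

```python
from math import acos, degrees, dist

def interior_angles(a, b, c):
    """Interior angles (in degrees) at vertices a, b, c of a triangle,
    from side lengths via the law of cosines: cos(theta1) = (q^2 + r^2 - p^2) / (2qr)."""
    p, q, r = dist(b, c), dist(a, c), dist(a, b)  # sides opposite a, b, c
    t1 = degrees(acos((q * q + r * r - p * p) / (2 * q * r)))
    t2 = degrees(acos((p * p + r * r - q * q) / (2 * p * r)))
    t3 = 180.0 - t1 - t2  # angles of a triangle sum to 180 degrees
    return t1, t2, t3
```

As the mask deforms with age, these three angles shift, which is the signal the age classifier picks up.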

    Figure 5:Anatomical names of the given dimensions

    Figure 6:Interior angle formulations over the Images of groups dataset

With time, human skin texture changes due to environment, stress, health issues, and many other factors. This texture variation appears in the form of wrinkles, under-eye bags, sagging skin, etc. For wrinkle detection, the Canny edge detection method is used. In Fig. 7, the wrinkles are displayed in the form of edges, i.e., the white pixels in the binary image, over the Images of Groups dataset. The quantity of edges corresponds to the number of wrinkles on the face, which indicates the age of the person. These wrinkles are quantified using Eq. (9) [22]:

where F, LE, RE, UE, and AL are the white-pixel counts and T1, T2, T3, T4, and T5 are the total numbers of pixels on the forehead, left eyelid, right eyelid, under the eyes, and around the lips, respectively.
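Assuming Eq. (9) normalizes the white (edge) pixels in each region by that region's total pixel count, a per-region wrinkle score over a binary Canny edge map can be sketched as below; the region bounding boxes are hypothetical, and Canny itself is not reimplemented here:

```python
def wrinkle_ratio(edge_map, regions):
    """Wrinkle score per facial region: white (edge) pixels divided by the
    region's total pixels.  edge_map is a 2-D list of 0/1 values; regions
    maps a region name to a (row0, row1, col0, col1) bounding box
    (half-open ranges; the layout is a hypothetical example)."""
    scores = {}
    for name, (r0, r1, c0, c1) in regions.items():
        rows = [row[c0:c1] for row in edge_map[r0:r1]]
        total = sum(len(r) for r in rows)
        scores[name] = sum(map(sum, rows)) / total if total else 0.0
    return scores
```

Summing the five region scores (forehead, both eyelids, under-eyes, around the lips) gives a single wrinkle measure in the spirit of Eq. (9).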

Figure 7: The results of wrinkle formation over the Images of Groups dataset

    3.4.2 Feature Extraction for Facial Expressions Recognition

The geodesic distance on the surface of the face is the shortest distance between two points. To calculate the geodesic distance, Kimmel and Sethian proposed a method known as fast marching, using the Eikonal equation as in Eq. (10) [23]:

The fast-marching algorithm is based on Dijkstra's algorithm, which computes the shortest distance between two points. In this work, we calculate geodesic distances on the surface of the face using gradient values only. Img is an image with multiple landmark points; the distance between two landmark points is D = (d1, d2). The geodesic distance is taken over a parametric manifold, represented by a mapping F: R2 → R3 from the parameterization P to the manifold, as given in Eq. (11):

The metric tensor gij of the manifold is given as in Eq. (12):

The geodesic distance is calculated as the shortest distance between two points. We calculate the geodesic distance on the surface of the face between 15 landmark points. The geodesic distance δ(A, B) between two points A and B is calculated as in Eq. (13):

The distance element on the manifold is given as in Eq. (14):

where the values of c and d are 1 and 2. We compute the geodesic distances between the 15 landmark points and select the most significant distances for expression recognition; as a result, we obtain a total of 15 distances.

Energy-based point clouds are a technique that works on the principle of Dijkstra's algorithm. To the best of our knowledge, this is the first time the technique has been used for age estimation and expression recognition simultaneously. It is efficient, robust, and quite simple to implement. Using this technique, a central landmark point labeled f ∈ F is marked at the center of the face and its distance is fixed to zero, i.e., d(f) = 0. This value is then inserted into a priority queue Q, where priority is based on the smallest distance between landmark points. The remaining points are marked as d(q) = ∞. One point f is selected from the priority queue, and the shortest distances from that point to the other points are calculated with Dijkstra's algorithm. Based on those distances, energy-based point clouds are displayed on the face. The alignment of these point clouds changes with variations in the distances from the central point to the other landmark points; these distances are known as optimal distances [24]. Fig. 8 shows the hierarchical steps for energy-based point cloud extraction.
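The Dijkstra procedure described above (d(f) = 0, all other points at ∞, a priority queue keyed on the smallest distance) can be sketched as follows; the landmark graph here is a toy example, not the face mesh:

```python
import heapq

def dijkstra_distances(adj, source):
    """Shortest distances from a central landmark to every other landmark
    over a weighted graph.  adj maps node -> list of (neighbour, weight)."""
    d = {v: float("inf") for v in adj}   # d(q) = infinity initially
    d[source] = 0.0                      # d(f) = 0 for the central point
    pq = [(0.0, source)]                 # priority queue, smallest distance first
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:
            continue                     # stale queue entry
        for v, w in adj[u]:
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d
```

The returned distances from the central point f to each landmark are the "optimal distances" whose variation shapes the energy-based point cloud.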

    Figure 8:The hierarchical steps for energy-based point clouds extraction

In the 0–180° intensity feature extraction technique, the Radon transform calculates the projection of an image matrix along a specific axis. The specific axis is used to predict a 2D approximation of the facial expression through different parts of the face, using the intensity estimation q along a specific set of radial line angles θ, defined as in Eq. (15) [25]:

where I(q, θ) is the line integral of the image intensity and f(a, b) is the distance from the origin at angle θ of the line junction. All the points on a line satisfy Eq. (15), and the projection function can be rewritten as Eqs. (16) and (17) [25]:

Finally, we extract the top 180 levels of each pixel's intensity and combine them into a unified vector for the different facial expressions. Fig. 9 shows the different expression intensity levels (0–180°).
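The projection idea behind the Radon transform can be illustrated for the two axis-aligned angles; a full implementation interpolates image intensities along arbitrarily rotated lines, which is omitted here, so this sketch handles only θ = 0° and θ = 90°:

```python
def projection(img, theta):
    """Line-integral projection of a 2-D intensity grid at theta degrees.
    Only the two axis-aligned angles are implemented in this sketch."""
    if theta % 180 == 0:      # integrate along rows
        return [sum(row) for row in img]
    if theta % 180 == 90:     # integrate along columns
        return [sum(col) for col in zip(*img)]
    raise NotImplementedError("illustrative sketch: axis-aligned angles only")
```

Stacking such projections for angles 0–180° gives the per-angle intensity profile that is concatenated into the unified expression feature vector.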

    Our ball can compare favourably20 with the king s, he said, andturned with contempt towards the gazing crowd in the street. What hethought was sufficiently21 expressed in his features and movements: Miserable beggars, who are looking in, you are nothing incomparison to me.

Figure 9: 0–180° intensity levels for different expressions over the Gallagher collection person dataset

    3.5 Long Short-Term Memory Based Recurrent Neural Network(RNN-LSTM)

Variations in the facial features while expressions change can exhibit various positions of the facial features. For instance, in a state of sadness, an individual has drooping eyelids, the outer corners of the lips are pulled downward, and eye blinking is very slow. In a state of happiness, eye blinking is fast, the cheek muscles around the eyes move, and puffiness appears under the eyes. By comparison, in a state of anger the eyes open widely, the eyebrows are drawn together, and the lips are tightly closed, becoming narrower and thinner, or are opened to form a rectangle. Similarly, for accurate age group classification, changes in facial textures and features occur: in childhood, an individual has tighter skin, no wrinkles on the face, and no under-eye puffiness, whereas in adulthood more wrinkles form around the eyes, lips, and cheeks, the skin sags, and skin color varies. These feature and texture variations are extracted in the form of feature vectors, and the Recurrent Neural Network (RNN) takes advantage of them for accurate classification of multi-face expressions and age.

The feature vectors of expressions and age are fed to the RNN classifier after the feature extraction and optimization stage. Our RNN uses one hidden layer with 210 unidirectional, fully interconnected LSTM cells. The input layer comprises 5080 images of the Images of Groups dataset and 589 images of the Gallagher collection person dataset. The feature vector size is 28,231×550 for the Images of Groups dataset and 931×623 for the Gallagher collection person dataset. Each feature vector is a depiction of the participant's facial expression and age. At the output layer, a SoftMax function, which is responsible for a 1-out-of-K classification task, is used. The SoftMax outputs lie between 0 and 1 and sum to 1 at every time step. The RNN is trained using the Adam optimizer with a learning rate of 0.001 [26]. Fig. 12 depicts the hierarchy of the RNN for age and expression classification. Algorithm 2 defines the RNN-LSTM training for age estimation and expression recognition.
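The SoftMax output layer mentioned above has the defining property that its K outputs lie in (0, 1) and sum to 1; a minimal, numerically stable sketch:

```python
from math import exp

def softmax(logits):
    """SoftMax over K output units: exponentiate (shifted by the max for
    numerical stability) and normalize so outputs lie in (0, 1) and sum to 1."""
    m = max(logits)
    exps = [exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]
```

The class with the largest SoftMax probability is taken as the predicted expression or age group, which is the 1-out-of-K decision the text describes.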

Algorithm 2: RNN-LSTM Training
Input: Classes ← {"7", "6"}; Features ← {"Age Estimation", "Expression Recognition"}
Output: A ← dataset{n}.Values; B ← dataset{Features}.Values
1  Train_Data, Test_Data, Valid_Data ← Split_Data_Train_Test(A, B, 0.33, 0.25);
2  Size_of_Batch ← 4;
3  RNN_LSTM ← Sequential_Model({
4    Embedded_Layer(Train_Data.Length, Output_Data_Length, Train_Data.Columns),
5    RNN_LSTM_Layer(Output_Data_Length),
6    Dense_Layer(Output_Data_Length, activation_Function = 'Sigmoid')});
7  Optimizer ← Adam, Epochs ← 20;
8  RNN_LSTM.Compile(Optimizer);
9  RNN_LSTM.train(Train_Data, Epochs, Size_of_Batch, Valid_Data);

    4 Performance Evaluation

This section gives a brief description of the two datasets used for facial expression recognition and age estimation, the results of experiments conducted to evaluate the proposed FERAE system, and a comparison with other systems.

    4.1 Datasets Description

The description of each dataset used in the FERAE system is given in Sections 4.1.1 and 4.1.2.

    4.1.1 The Gallagher Collection Person Dataset for Expression Recognition

The first dataset used for multi-face expression recognition is the Gallagher Collection Person dataset [27]. The images in this dataset were shot in real life, at real events, of real people with real expressions. The dataset comprises 589 images with 931 faces. Each face in an image is labeled with one of the expressions Neutral, Happy, Sad, Fear, Angry, or Surprise. The dataset is publicly available. Some examples from this dataset are shown in Fig. 10.

    Figure 10:Some examples from the Gallagher collection person dataset

    4.1.2 Images of Groups Dataset for Age Estimation

The second dataset is the Images of Groups dataset, which is used for multi-face age group classification [28]. It is a large dataset comprising 5080 images containing 28,231 faces labeled with age and gender. The seven age group labels of this dataset are 0–2, 3–7, 8–12, 13–19, 20–36, 37–65, and 66+. This dataset is publicly available. Some examples are shown in Fig. 11.

    Figure 11:Some examples from the images of groups dataset

    4.2 Experimental Settings and Results

All processing and experimentation were performed in MATLAB (R2019). The hardware used is an Intel Core i5 with 64-bit Windows 10, 16 GB of RAM, and a 5 GHz CPU. To evaluate the performance of the proposed system, we used the Leave One Person Out (LOPO) [29] cross-validation method. Experiment 1 determined the facial feature detection accuracy rates over both benchmark datasets. Experiment 2 determined the multi-face expression recognition accuracy rates, shown in the form of a confusion matrix. Experiment 3 determined the multi-face age estimation accuracy rates over the Images of Groups dataset. Experiment 4 compares the proposed model with other state-of-the-art models in ROC curve graphs for both multi-face expression recognition and age estimation.
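The Leave One Person Out protocol holds out one subject at a time for testing while training on all the others; a minimal split generator, assuming each list element stands for one subject's data:

```python
def leave_one_out(items):
    """Leave-One-Person-Out splits: each subject in turn becomes the test
    set while all remaining subjects form the training set."""
    for i, test in enumerate(items):
        train = [x for j, x in enumerate(items) if j != i]
        yield train, test
```

Averaging the accuracy over all such splits gives the LOPO score; because the test subject never appears in training, the estimate is not inflated by subject identity.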

    4.2.1 Experiment 1:Facial Features Detection Accuracies

In this experiment, facial feature detection accuracies over the Images of Groups dataset and the Gallagher collection person dataset were determined, as shown in Fig. 12.

    4.2.2 Experiment 2:Multi-face Expressions Recognition Accuracy

For multi-face expression recognition, the RNN model is used for the accurate classification of expressions. The Leave One Subject Out (LOSO) cross-validation technique is used for the evaluation of the proposed system. Tab. 1 shows the confusion matrix of multi-face expression recognition.

    4.2.3 Experiment 3:Multi-face Age Estimation Accuracy

For multi-face age estimation, the RNN model was used for the accurate classification of age. The Leave One Subject Out (LOSO) cross-validation technique was used for the evaluation of the proposed system. Tab. 2 shows the confusion matrix for multi-face age estimation.

    Figure 12:Facial features detection accuracies over both benchmark datasets

    Table 1:Confusion matrix for multi-face expressions recognition over the Gallagher person collection dataset

    Table 2: Confusion matrix for multi-face age estimation over the images of groups dataset

4.2.4 Experiment 4: Results for Comparison of the Proposed Multi-expression Recognition and Age Estimation Model with Other State-of-the-Art Models

Figs. 13a–13f and 14a–14f show the ROC curve graphs for all multi-face expressions and age groups. The ROC curve is the relationship between the true positive rate and the false positive rate: the true positive rate corresponds to sensitivity, and the false positive rate to 1 − specificity. The true positive and false positive rates are calculated as in Eqs. (18) and (19), respectively:
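Assuming Eqs. (18) and (19) are the standard definitions, the two rates follow directly from raw confusion-matrix counts:

```python
def tpr_fpr(tp, fp, tn, fn):
    """True-positive rate (sensitivity) and false-positive rate
    (1 - specificity) from confusion-matrix counts:
    TPR = TP / (TP + FN),  FPR = FP / (FP + TN)."""
    return tp / (tp + fn), fp / (fp + tn)
```

Sweeping the classifier's decision threshold and plotting (FPR, TPR) at each setting traces out the ROC curves shown in Figs. 13 and 14.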

Figure 13: The ROC curve graphs for all multi-face expressions over the Gallagher collection person dataset. The lowest and highest values in the expression ROC curve graphs of both the true positive and false positive rates using RNN are: Neutral: (0.03, 0.00) and (0.80, 1.00); Happy: (0.02, 0.02) and (0.92, 0.98); Sad: (0.14, 0.027) and (0.80, 1.00); Fear: (0.10, 0.00) and (0.81, 0.98); Angry: (0.09, 0.01) and (0.77, 1.00); and Surprise: (0.12, 0.03) and (0.93, 0.98)

We have tested our multi-face expression recognition and age estimation (FERAE) model against the state-of-the-art methods, i.e., Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Deep Belief Neural Network (DBNN). The results of Experiment 4 show that the RNN, together with the salient feature descriptors of both expression and age, provides better results than CNN and DBNN.

Figure 14: The ROC curve graphs for all the age groups over the Images of Groups dataset. The lowest and highest values in the age group ROC curve graphs of both the true positive and false positive rates using RNN are: 0–2: (0.00, 0.00) and (0.90, 1.00); 3–7: (0.18, 0.01) and (0.92, 1.00); 8–12: (0.00, 0.00) and (0.89, 0.99); 13–19: (0.10, 0.00) and (0.90, 1.00); 20–36: (0.00, 0.00) and (0.98, 0.99); and 37–65: (0.09, 0.01) and (0.91, 0.97)

    5 Conclusion

In this paper, a fused model of multi-face expression recognition and age estimation is proposed. A synthetic face mask, formed by the localization of landmark points, is mapped onto the face. The novel point-based and texture-based features obtained using different feature extraction techniques are passed to the RNN classifier for the classification of expressions and age groups. The proposed system is tested using the Gallagher collection person dataset for expression recognition and the Images of Groups dataset for age estimation. Experimental results show that our approach produced superior classification accuracies, i.e., 85.5% over the Gallagher collection person dataset and 91.4% over the Images of Groups dataset. The proposed system applies to surveillance systems, video gaming, consumer applications, e-learning, audience analysis, and emotion robots. As for limitations, the system fails to detect detailed facial features of persons captured too far from the cameras. In the future, we will work on the computational time complexity of the system and also evaluate it on RGB-D datasets.

Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (No. 2018R1D1A1A02085645). This work was also supported by the Korea Medical Device Development Fund grant funded by the Korean government (the Ministry of Science and ICT; the Ministry of Trade, Industry and Energy; the Ministry of Health & Welfare; and the Ministry of Food and Drug Safety) (Project Number: 202012D05-02).

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
