
    Automated Facial Expression Recognition and Age Estimation Using Deep Learning

2022-08-23
    Computers, Materials & Continua, 2022, Issue 6

    Syeda Amna Rizwan,Yazeed Yasin Ghadi,Ahmad Jalal and Kibum Kim

    1Department of Computer Science,Air University,Islamabad,44000,Pakistan

    2Department of Computer Science and Software Engineering,Al Ain University,Abu Dhabi,122612,UAE

    3Department of Human-Computer Interaction,Hanyang University,Ansan,15588,Korea

Abstract: With the advancement of computer vision techniques in surveillance systems, the need for more proficient, intelligent, and sustainable facial expression and age recognition grows. The main purpose of this study is to develop an accurate facial expression and age recognition system that is capable of error-free recognition of human expression and age in both indoor and outdoor environments. The proposed system first takes an input image, pre-processes it, and then detects the faces in the entire image. After that, landmark localization helps in the formation of the synthetic face mask prediction. A novel set of features is extracted and passed to a classifier for the accurate classification of expressions and age group. The proposed system is tested over two benchmark datasets, namely, the Gallagher collection person dataset and the Images of Groups dataset. The system achieved remarkable results over these benchmark datasets in terms of recognition accuracy and computational time. The proposed system would also be applicable in different consumer application domains such as online business negotiations, consumer behavior analysis, E-learning environments, and emotion robotics.

Keywords: Feature extraction; face expression model; local transform features; recurrent neural network (RNN)

    1 Introduction

Recognition of human age and expressions has engaged many researchers in various fields including sustainable security [1], forensics [2], biometrics [2], and cognitive psychology. Interest in this field is spreading fast and is fuelled by scientific advances that provide a better understanding of personal identity, attitudes, and intentions based on facial expressions and age. Facial expressions have a great impact on interpersonal communication. Human emotional responses are very complex and are most directly expressed in facial expressions. Mehrabian's model of oral communication states that 7% of a message is conveyed by the spoken words, 38% by vocal tone, and 55% by body language, including facial expressions. Over the past few decades, researchers have developed human facial expression recognition and age estimation (FERAE) systems that use advanced sensors such as video cameras, eye trackers, thermal cameras, human vision component sensors [3–5], and stereo cameras [6,7] to intelligently recognize human behaviours, gestures [8–10], and emotions and to predict the age of an individual. Problems that arise in automatic FERAE systems include pose variations, uncontrolled lighting, complex backgrounds, partial occlusions, etc. Researchers face many challenges in attempting to overcome these problems.

Human subjects normally present various expressions all the time in daily life. To develop a sustainable expression recognition and age estimation system, we need to determine whether age estimation is influenced by changes in facial expression, how significant the influence is, and whether a solution can be developed to solve the problem caused by facial expressions. Existing works on age estimation are mostly founded on expressionless faces. Most age estimation and expression recognition systems contain mainly frontal-view, neutral expressions, although some used variations in illumination, pose, and expression. To perform a systematic study on age estimation with various expressions, we need to use databases with clear ground-truth labels for both age and expression.

The overall pipeline of our proposed model is as follows. First, face detection is done using the YCbCr skin color segmentation model. Second, landmark points are plotted on the face based on the connected components technique. Third, a synthetic face mask is mapped onto the face, based on landmark point localization. Fourth, features are extracted and subdivided into two categories: for age estimation, the anthropometric model, energy-based point clouds, and wrinkles are used for feature extraction; for expression recognition, HOG-based symmetry identification, energy-based point clouds, and geodesic distances between landmark points are extracted. Finally, a Recurrent Neural Network (RNN) is used for the correct recognition of facial expressions and age.

    The main contributions of the proposed system are:

    · Synthetic face mask mapping increases the multi-face expressions and age recognition accuracy.

    · Our local transform features of both age and expression recognition provide far better accuracy than other state-of-the-art methods.

    · A Recurrent Neural Network (RNN) classifier for the accurate age prediction and expression recognition of individuals.

Our proposed sustainable FERAE model is evaluated using different performance measures over two multi-face benchmark datasets, namely, the Gallagher collection person dataset and the Images of Groups dataset, which fully validated our system's efficacy, showing that it outperforms other state-of-the-art methods.

This article is structured as follows: Section 2 describes related work for both facial expression and age recognition. Section 3 gives a detailed overview of the proposed model that intelligently recognizes multi-face expressions and age. In Section 4, the proposed model's performance is experimentally assessed on two publicly available benchmark datasets. Lastly, in Section 5 we sum up the paper and outline future directions.

    2 Related Work

Over the past few years, many researchers have done remarkable work on both single- and multi-face expression recognition and age estimation. In this section, a comprehensive review of recent related studies of facial expression recognition and age estimation models is given in Sections 2.1 and 2.2, respectively.

    2.1 Multi-facial Expressions Recognition Systems

In recent years, many RGB-based facial expression recognition systems have been proposed. In [11], the authors first detected facial features using a Multi-task Cascaded Convolutional Neural Network. After that, a CNN and a VGG-16 model were used for the classification of facial expressions as Neutral, Positive, or Negative. The facial expression recognition accuracy on the public dataset was 74%. In [12], the authors developed a system to recognize facial expressions in a variety of social events. Seetaface was used to detect the faces and align them. Visual facial features, i.e., PHOG, CENTRIST, DCNN features, and VGG features using VGGFace-LSTM and DCNN-LSTM, were then extracted. The system was tested on the Group Affect Database 2.0 and achieved a recognition accuracy of 79.78%. In [13], a hybrid network was developed in which a CNN was used to pretrain the faces and extract scene features, skeleton features, and local features. These fused features were used to predict emotions. The system was tested on a public dataset and achieved validation and testing accuracies of 80.05% and 80.61%, respectively. In [14], the authors developed a mood recognition system by first capturing images from a webcam and then training two machine learning algorithms, i.e., a Gradient Boosting classifier and K-Nearest Neighbors (KNN). The recognition accuracies achieved were 81% and 73%, respectively.

    2.2 Multi-facial Age Estimation Systems

In recent years, different methodologies have been adopted by researchers for the estimation of age or age group. In [15], the authors developed a system to estimate the age of real-life persons. Features were extracted via Local Binary Pattern (LBP) and Gabor techniques. For classification, an SVM was used. The system was tested on the Images of Groups dataset and achieved an accuracy of 87.7%. In [16], the authors extracted features using LBP and FPLBP techniques. An SVM was used for age group classification and achieved an accuracy of 66.6%. In [17], the authors developed a system for automatic classification of age and gender. Features were extracted via Multi-Level Local Binary Pattern (MLLBP), whereas an SVM with a non-linear RBF kernel was used to classify the correct age groups and gender. The system was tested on the Images of Groups dataset and achieved an accuracy of 43.4%. In [18], the authors extracted features and classified the correct age group using a Convolutional Neural Network (CNN). The system was tested on the OUI Adience dataset and achieved an accuracy of 62.34%.

    3 Material and Methods

This section describes the proposed framework for facial expression recognition and age estimation. Fig. 1 shows the general architecture of the proposed system.

    3.1 Pre-processing and Face Detection

Our sustainable FERAE system starts with the preprocessing step, which involves two sub-steps: 1) background subtraction, and 2) aligning the faces of both datasets at an angle of 180°. First, complex backgrounds are removed from the images to detect the faces more accurately. This is done with a median filter using a 5×5 window to remove noise and suppress undesirable distortion in the images. Then, K-means clustering is used for background subtraction. Second, if the positions of most of the faces in both datasets are not aligned properly, this can be problematic for the detection of faces in the images. Thus, we align the faces of both the Gallagher collection person dataset and the Images of Groups dataset using the code available on GitHub [19].

For face detection, the YCbCr skin color segmentation technique is used. This skin color segmentation model provides remarkable results for detecting faces in a scene using the YCbCr color space. The skin color of each individual varies, so to get full coverage of each skin pixel, the RGB images are converted to the YCbCr color space to easily distinguish skin from non-skin pixels. Fig. 2 shows examples of face detection in the Images of Groups dataset. This technique is not affected by the illumination condition, the Y (luma) factor. Skin representation is based on two components, Cb (blue difference) and Cr (red difference). The skin color model is formulated as in Eqs. (1) and (2) [2].
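As a concrete illustration of this step, the following Python sketch classifies a pixel as skin or non-skin in YCbCr space. Since Eqs. (1) and (2) are not reproduced in this excerpt, the BT.601 conversion and the Cb/Cr thresholds (77–127 and 133–173) are common values from the skin-segmentation literature, assumed here rather than taken from the paper.

```python
# Hedged sketch: the Cb/Cr thresholds below are a commonly used assumption,
# not necessarily the authors' exact Eqs. (1)-(2).

def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to YCbCr (ITU-R BT.601 full-range form)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin_pixel(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin using Cb/Cr only, so luma (Y) plays no role."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

Because the decision ignores Y entirely, the classifier is insensitive to illumination, which matches the property claimed for the Y (luma) factor above.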

Figure 2: Some results of pre-processing and face detection over the Images of Groups dataset

    3.2 Landmarks Tracking

Landmark tracking is the primary step towards face mask mapping. The landmarks are plotted on the facial features to track the pixel positions. They will help us extract different point-based features for the accurate classification of multi-face expressions and age. This section is divided into two subsections: Section 3.2.1 explains landmark tracking over the Gallagher benchmark dataset for multi-face expression recognition and Section 3.2.2 describes landmark tracking over the Images of Groups dataset for multi-face age estimation.

    3.2.1 Landmarks Tracking for Multi-face Expressions Recognition

To plot the landmarks over the Gallagher collection person dataset, the same procedure is used for marking the landmarks on the eyebrows, eyes, and lips as described in Section 3.2.2. For the localization of landmarks on the nose, the nose is first detected using a cascade algorithm. The two nostril points are obtained by applying the concept of connected components inside the bounding box. Then, 3 points are obtained: one on the nose tip and two on the nostrils. Therefore, a total of 23 landmarks are plotted on the entire face. Figs. 3a and 3b show the landmark point symmetry over both benchmark datasets, respectively.

Figure 3: Landmark point symmetry over (a) the Gallagher collection person dataset and (b) the Images of Groups dataset, respectively

    3.2.2 Landmarks Tracking for Multi-face Age Estimation

After detection of the face, 35 landmarks are plotted on the face (on the eyebrows, eyes, and lips) by converting the RGB image into a binary image and detecting the facial features using blob detection. The edges of each facial feature blob are marked with landmarks by taking the central point of each edge using Eq. (3). The nose is detected using the ridge contour method and a total of seven landmark points are marked on the nose. To plot the area of the chin, jawline, and forehead, the midpoints of the face blob or bounding box edges are marked; these are calculated using Eq. (4) [2];

where a, b, c, and d denote the edge lengths and e1, e2, e3, and e4 are the midpoints of the blob edges.

    3.3 Synthetic Face Mask Prediction

Synthetic mask prediction is a robust technique for accurately predicting the age of an individual and recognizing the expressions or emotions of a person in multi-face images. This technique is widely used for face detection, face recognition, face aging estimation, etc. For the generation of synthetic masks on the face, we utilized the 35 landmark points for age estimation and the 23 landmarks for multi-face expression recognition. The technique used for both masks is the same, i.e., three-sided polygon meshes and perpendicular bisection of a triangle are applied [15]. However, for multi-face expression recognition the synthetic mask is only generated on the facial features, using sub-triangle formation, because the main variations during changes in facial expression appear on the facial features. Algorithm 1 describes the overall procedure of synthetic face mask prediction over the Gallagher collection person dataset for multi-face expression recognition.

Given a face image with 35 or 23 landmark points over the Images of Groups dataset and the Gallagher collection person dataset, respectively, a multivariate shape model is generated from the landmark points via polygon meshes and the perpendicular bisection of triangles; for age estimation and expression recognition, the large-triangle and sub-triangle formation rules are used. The perpendicular bisections help us distinguish the changes occurring from infancy to adulthood, while the triangular meshes further help to extract features for both multi-face expression recognition and age estimation. Figs. 4a and 4b show the synthetic mask prediction over the Images of Groups dataset and the Gallagher collection person dataset, respectively.

Figure 4: Synthetic face mask prediction over (a) the Images of Groups dataset for age estimation and (b) the Gallagher collection person dataset for expression recognition, respectively

Algorithm 1: Multi-face expression recognition synthetic mask prediction
    Input: X = positions of the 23 localized landmark points;
    Output: Mesh of triangles of Y: TM(Y); // initiating feature descriptors matrix
    begin
    1   Calculate the pixel positions of the three outer corners of a triangle (c1, c2, c3);
    2   for
    3     TM(Y) := (c1, c2, c3);
    4     /* Initialize TM(Y) as a large triangle */
    5     The sub-triangles formed inside TM(Y) are S := (s1, s2, s3);
    6     do
    7       c1A bisects ∠c1;
    8       c2B bisects ∠c2;
    9       c3C bisects ∠c3;
    10    end for
    11    TM(Y) ← S;
    12  return TM(Y);
    13  end
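Since Algorithm 1 is given only in outline, the Python sketch below illustrates the sub-triangle formation idea. It substitutes the common midpoint (1-to-4) subdivision for the paper's angle-bisection construction, so the `subdivide` rule is an assumption for illustration, not the authors' exact procedure.

```python
# Simplified stand-in for Algorithm 1's sub-triangle formation: a large
# landmark triangle is refined into sub-triangles. Midpoint subdivision is
# assumed here in place of the paper's bisection construction.

def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def subdivide(triangle):
    """Split one triangle (c1, c2, c3) into four sub-triangles."""
    c1, c2, c3 = triangle
    m12, m23, m31 = midpoint(c1, c2), midpoint(c2, c3), midpoint(c3, c1)
    return [(c1, m12, m31), (m12, c2, m23), (m31, m23, c3), (m12, m23, m31)]

def area(tri):
    """Triangle area via the cross product of two edge vectors."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

parent = ((0.0, 0.0), (4.0, 0.0), (0.0, 4.0))
subs = subdivide(parent)
```

Any valid subdivision must tile the parent triangle, so the sub-triangle areas sum to the parent's area.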

    3.4 Feature Descriptors

For the estimation of age and the accurate recognition of facial expressions, we extracted the age and expression features individually. For age group prediction, the feature extraction methods include: 1) the anthropometric model, 2) interior angle formulation, and 3) wrinkle detection (see Section 3.4.1). For expression recognition, the extracted features are: 1) geodesic distance, 2) energy-based point clouds, and 3) 0–180° intensity (see Section 3.4.2).

    3.4.1 Feature Extraction for Age Group Classification

The anthropometric model is the study of the human face and facial features by dimensions and sizes [20]. The landmark points marked on the facial features are known by anatomical names; e.g., the lip corners are known as the left and right cheilion and are denoted by lch and rch; likewise, the inner corners of the eyebrows are known as the nasion and are denoted by n. Using this model, we have taken several distances between the facial features, which are calculated as the Euclidean distance using Eq. (5) [21].

where p1, p2, q1, and q2 are the pixel locations along the x and y coordinates, respectively. Fig. 5 shows the anatomical names and calculated dimensions.
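The anthropometric distance of Eq. (5) is the plain Euclidean distance between two landmark pixels; a minimal sketch follows, where the `lch`/`rch` coordinates are invented for illustration.

```python
import math

# Sketch of Eq. (5): Euclidean distance between two landmark pixels
# p = (p1, p2) and q = (q1, q2).

def landmark_distance(p, q):
    """Euclidean distance between two landmark positions (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# e.g., the distance between the left and right cheilion (lip corners);
# these coordinates are hypothetical.
lch, rch = (120, 200), (160, 230)
mouth_width = landmark_distance(lch, rch)
```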

For the calculation of the interior angles, the above-mentioned face mask is used. From infancy to adulthood the shape of the face mask changes, and this results in variations of the angles. We calculated the interior angles θ1, θ2, θ3 using the law of cosines in Eqs. (6), (7), and (8) [21];

where p, q, and r are the sides of the triangles formed by the face mask. Different measurements of interior angles for two different age groups are shown in Fig. 6.
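The interior angles of Eqs. (6)–(8) follow directly from the law of cosines; a minimal sketch, assuming each equation recovers one angle from the three side lengths p, q, r of a mask triangle:

```python
import math

# Sketch of Eqs. (6)-(8): interior angles of a face-mask triangle with side
# lengths p, q, r, recovered with the law of cosines.

def interior_angles(p, q, r):
    """Return (theta1, theta2, theta3) in degrees; thetaN is opposite side N."""
    theta1 = math.degrees(math.acos((q * q + r * r - p * p) / (2 * q * r)))
    theta2 = math.degrees(math.acos((p * p + r * r - q * q) / (2 * p * r)))
    theta3 = 180.0 - theta1 - theta2  # angles of a triangle sum to 180 degrees
    return theta1, theta2, theta3
```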

    Figure 5:Anatomical names of the given dimensions

Figure 6: Interior angle formulations over the Images of Groups dataset

With time, human skin texture changes due to environment, stress, health issues, and many other factors. This texture variation appears in the form of wrinkles, under-eye bags, sagging skin, etc. For wrinkle detection, the Canny edge detection method is used. In Fig. 7, the wrinkles are displayed in the form of edges, i.e., the white pixels in the binary image, over the Images of Groups dataset. The quantity of edges corresponds to the number of wrinkles on the face, which indicates the age of the person. These wrinkles are quantified using Eq. (9) [22];

where F, LE, RE, UE, and AL are the white pixel counts and T1, T2, T3, T4, and T5 are the total numbers of pixels on the forehead, left eyelid, right eyelid, under the eyes, and around the lips, respectively.
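Eq. (9) is not reproduced in this excerpt, so the sketch below assumes one plausible reading: a wrinkle score formed by summing each region's white-pixel (edge) count normalized by that region's total pixel count. The region sizes and counts are invented for illustration.

```python
# Hedged reading of Eq. (9): combine the per-region edge-pixel counts
# (F, LE, RE, UE, AL) normalized by region sizes (T1..T5). A simple sum of
# ratios is assumed; the paper's exact combination may differ.

def wrinkle_score(white_counts, total_counts):
    """white_counts = (F, LE, RE, UE, AL); total_counts = (T1, ..., T5)."""
    return sum(w / t for w, t in zip(white_counts, total_counts))

# A face with denser Canny edges in each region scores higher ("older").
young = wrinkle_score((10, 2, 2, 4, 5), (1000, 200, 200, 300, 400))
old = wrinkle_score((220, 40, 42, 90, 100), (1000, 200, 200, 300, 400))
```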

Figure 7: The results of wrinkle detection over the Images of Groups dataset

    3.4.2 Feature Extraction for Facial Expressions Recognition

The geodesic distance on the surface of the face is the shortest distance between two points. To calculate the geodesic distance, Kimmel and Sethian proposed a method known as fast marching, using the Eikonal equation as in Eq. (10) [23];

The fast-marching algorithm is based on Dijkstra's algorithm, which computes the shortest distances between two points. In this work, we calculate geodesic distances on the surface of the face using only the gradient values. Img is an image having multiple landmark points. To calculate the geodesic distance between two landmark points, the distance is (D = d1, d2). The geodesic distance is taken on the parametric manifold, which can be represented by a mapping F: R2 → R3 from the parameterization P to the manifold, given as in Eq. (11);

The metric tensor gij of the manifold is given as in Eq. (12);

The geodesic distance is calculated as the shortest distance between the two points. We can calculate the geodesic distance on the surface of the face between 15 landmark points. The geodesic distance δ(A, B) between two points A and B is calculated as in Eq. (13);

    The distance element on the manifold is given as in Eq.(14);

where the values of c and d are 1 and 2. We can compute the geodesic distances between the 15 landmark points and select the most significant distances that help in expression recognition. As a result, we obtained a total of 15 distances.

Energy-based point clouds are a technique that works on the principle of Dijkstra's algorithm. To the best of our knowledge, this is the first time this technique has been used for age estimation and expression recognition simultaneously. The technique is efficient, robust, and quite simple to implement. Using this technique, a central landmark point labeled f ∈ F is marked at the center of the face. Its distance is fixed to zero, i.e., d(f) = 0. After that, this value is inserted into a priority queue Q, where the priority is based on the smallest distance between the landmark points. The remaining points are marked as d(q) = ∞. From the priority queue, the point f is selected and then the shortest distances between that point and the other points are calculated based on Dijkstra's algorithm. Based on those distances, energy-based point clouds are displayed on the face. The alignment of these point clouds changes with variations in the distances from the central point to the other landmark points. The distances from the central point to the other varying landmark points are known as optimal distances [24]. Fig. 8 shows the hierarchical steps for energy-based point cloud extraction.
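The priority-queue procedure described above is classic Dijkstra; a self-contained sketch on a small landmark graph follows. The node names and edge weights are invented for illustration and are not taken from the paper.

```python
import heapq

# Sketch of the energy-based point-cloud step: Dijkstra's algorithm from a
# central landmark f with d(f) = 0 and d(q) = infinity elsewhere, using a
# priority queue keyed on the smallest tentative distance.

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]}. Returns optimal distances."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist[u]:
            continue  # stale queue entry, already settled with a shorter path
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(queue, (d + w, v))
    return dist

# A hypothetical central face point "f" connected to a few landmark points.
landmarks = {
    "f": [("eye_l", 2.0), ("eye_r", 2.0), ("nose", 1.0)],
    "eye_l": [("f", 2.0), ("brow_l", 1.5)],
    "eye_r": [("f", 2.0)],
    "nose": [("f", 1.0), ("lip", 1.0)],
    "brow_l": [("eye_l", 1.5)],
    "lip": [("nose", 1.0)],
}
optimal = dijkstra(landmarks, "f")
```

The returned distances are exactly the "optimal distances" from the central point that drive the point-cloud alignment.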

    Figure 8:The hierarchical steps for energy-based point clouds extraction

In the 0–180° intensity feature extraction technique, the Radon transform calculates the projection of an image matrix along some specific axis. The specific axis is used to predict the 2D approximation of the facial expression through different parts of the face using the intensity estimation q along a specific set of radial line angles θ, defined as in Eq. (15) [25];

where I(q, θ) is the line integral of the image intensity and f(a, b) is the distance from the origin at angle θ of the line junction. All the points on a line satisfy Eq. (15) and the projection function can be rewritten as Eqs. (16) and (17) [25];

Finally, we extracted the top 180 levels of each pixel's intensity and combined them into a unified vector for the different facial expressions. Fig. 9 shows the different expression intensity levels (0–180°).
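A rough sketch of the Radon-style projection behind the 0–180° intensity feature follows, assuming a simple discrete approximation in which intensities are binned by the rounded projected coordinate q = x·cos θ + y·sin θ (the true transform integrates along lines, so this binning is a simplification).

```python
import math

# Hedged sketch of a discrete Radon-style projection: for each angle theta,
# pixel intensities are accumulated into bins indexed by the projected
# coordinate q = x*cos(theta) + y*sin(theta).

def radon_projection(image, theta_deg):
    """image: 2-D list of intensities. Returns {bin q: summed intensity}."""
    t = math.radians(theta_deg)
    proj = {}
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            q = round(x * math.cos(t) + y * math.sin(t))
            proj[q] = proj.get(q, 0) + value
    return proj

def intensity_vector(image, angles=range(0, 180, 15)):
    """Concatenate per-angle projections into one unified feature vector."""
    vec = []
    for theta in angles:
        vec.extend(v for _, v in sorted(radon_projection(image, theta).items()))
    return vec
```

At θ = 0° the projection collapses columns and at θ = 90° it collapses rows; every projection conserves the total image intensity.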


Figure 9: 0–180° intensity levels for different expressions over the Gallagher collection person dataset

    3.5 Long Short-Term Memory Based Recurrent Neural Network(RNN-LSTM)

Variations in the facial features while expressions are changing can exhibit various positions of the facial features. For instance, in the state of sadness, an individual has drooping eyelids, the outer corners of the lips are pulled in a downward direction, and very slow eye blinking occurs. In a state of happiness, fast eye blinking and movement of the cheek muscles around the eyes occur, and puffiness appears under the eyes. In the state of anger, by comparison, the eyes open widely, the eyebrows are drawn together, and the lips are either tightly closed, becoming narrower and thinner, or opened to form a rectangle. Similarly, for accurate age group classification, changes in facial textures and features matter. In childhood, an individual has tighter skin, no wrinkles on the face, and no under-eye puffiness, whereas in adulthood more wrinkles form around the eyes, lips, and cheeks, skin sags, and skin color varies. These feature and texture variations are extracted in the form of feature vectors, and the Recurrent Neural Network (RNN) takes advantage of them for the accurate classification of multi-face expressions and age.

The feature vectors of expressions and age are fed to the RNN classifier after the feature extraction and optimization stage. Our RNN uses one hidden layer with 210 unidirectional, fully interconnected LSTM cells. The input layer comprises 5080 images of the Images of Groups dataset and 589 images of the Gallagher collection person dataset. The feature vector size for the Images of Groups dataset is 28,231×550 and for the Gallagher collection person dataset it is 931×623. Each feature vector is a depiction of the participant's facial expression and age. At the output layer, a SoftMax function, which is responsible for a 1-out-of-K classification task, is used. The SoftMax function's outputs lie between 0 and 1 and sum to 1 at every time step. The RNN is trained using the Adam optimizer with a learning rate of 0.001 [26]. Fig. 12 depicts the hierarchy of the RNN for age and expression classification. Algorithm 2 defines the RNN-LSTM training for age estimation and expression recognition.
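The SoftMax output layer described above can be sketched as follows; the class labels passed to `predict_class` are illustrative, not the datasets' label sets.

```python
import math

# Sketch of the output layer: a numerically stable SoftMax whose outputs lie
# in (0, 1) and sum to 1, used for the 1-out-of-K expression/age decision.

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_class(logits, labels):
    """Pick the label whose SoftMax probability is highest."""
    probs = softmax(logits)
    return labels[probs.index(max(probs))]
```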

Algorithm 2: RNN-LSTM Training
    Input: Classes ← {“7”, “6”}; Features ← {“Age Estimation”, “Expression Recognition”};
    Output: A ← dataset{n}.Values; B ← dataset{Features}.Values;
    1  Train_Data, Test_Data, Valid_Data ← Split_Data_Train_Test(A, B, 0.33, 0.25);
    2  Size_of_Batch ← 4;
    3  RNN_LSTM ← Sequential_Model({
    4    Embedded_Layer(Train_Data.Length, Output_Data_Length, Train_Data.Columns),
    5    RNN_LSTM_Layer(Output_Data_Length),
    6    Dense_Layer(Output_Data_Length, activation_Function=‘Sigmoid’)});
    7  Optimizer ← Adam, Epochs ← 20;
    8  RNN_LSTM.Compile(Optimizer);
    9  RNN_LSTM.train(Train_Data, Epochs, Size_of_Batch, Valid_Data);

    4 Performance Evaluation

This section gives a brief description of the two datasets used for facial expression recognition and age estimation, the results of experiments conducted to evaluate the proposed FERAE system, and comparisons with other systems.

    4.1 Datasets Description

The description of each dataset used in the FERAE system is given in Sections 4.1.1 and 4.1.2.

    4.1.1 The Gallagher Collection Person Dataset for Expression Recognition

The first dataset used for multi-face expression recognition is the Gallagher Collection Person dataset [27]. The images in this dataset were shot in real life, at real events, of real people with real expressions. The dataset comprises 589 images with 931 faces. Each face in an image is labeled with an expression of Neutral, Happy, Sad, Fear, Angry, or Surprise. The dataset is publicly available. Some examples from this dataset are shown in Fig. 10.

    Figure 10:Some examples from the Gallagher collection person dataset

    4.1.2 Images of Groups Dataset for Age Estimation

The second dataset is the Images of Groups dataset, which is used for multi-face age group classification [28]. It is the largest dataset, comprising 5080 images containing 28,231 faces that are labeled with age and gender. The seven age group labels of this dataset are 0–2, 3–7, 8–12, 13–19, 20–36, 37–65, and 66+. This dataset is publicly available. Some examples from this dataset are shown in Fig. 11.

Figure 11: Some examples from the Images of Groups dataset

    4.2 Experimental Settings and Results

All processing and experimentation were performed in MATLAB (R2019). The hardware used was an Intel Core i5 machine with 64-bit Windows 10, 16 GB of RAM, and a 5 GHz CPU. To evaluate the performance of the proposed system, we used the Leave One Person Out (LOPO) [29] cross-validation method. Experiment 1 determined the facial feature detection accuracy rates over both benchmark datasets. Experiment 2 determined the multi-face expression recognition accuracy rates, shown in the form of a confusion matrix. Experiment 3 determined the multi-face age estimation accuracy rates over the Images of Groups dataset. Experiment 4 compares, in ROC curve graphs, the proposed model with other state-of-the-art models for both multi-face expression recognition and age estimation, respectively.
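The LOPO protocol can be sketched as a generator of train/test folds, one per held-out subject; the subject IDs and feature values below are invented for illustration.

```python
# Sketch of Leave-One-Person-Out (LOPO) cross-validation: each fold holds out
# every sample of one subject for testing and trains on all remaining
# subjects, so a person never appears in both splits of the same fold.

def lopo_splits(samples):
    """samples: list of (subject_id, features). Yields (train, test) folds."""
    subjects = sorted({sid for sid, _ in samples})
    for held_out in subjects:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield train, test

# Hypothetical samples from three subjects.
data = [("p1", [0.1]), ("p1", [0.2]), ("p2", [0.3]), ("p3", [0.4])]
folds = list(lopo_splits(data))
```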

    4.2.1 Experiment 1:Facial Features Detection Accuracies

In this experiment, the facial feature detection accuracies over the Images of Groups dataset and the Gallagher collection person dataset were determined, as shown in Fig. 12.

    4.2.2 Experiment 2:Multi-face Expressions Recognition Accuracy

For multi-face expression recognition, the RNN model is used for the accurate classification of expression. The Leave One Subject Out (LOSO) cross-validation technique is used for the evaluation of the proposed system. Tab. 1 shows the confusion matrix of multi-face expression recognition.

    4.2.3 Experiment 3:Multi-face Age Estimation Accuracy

For multi-face age estimation, the RNN model was used for the accurate classification of age. The Leave One Subject Out (LOSO) cross-validation technique was used for the evaluation of the proposed system. Tab. 2 shows the confusion matrix for multi-face age estimation.

    Figure 12:Facial features detection accuracies over both benchmark datasets

Table 1: Confusion matrix for multi-face expression recognition over the Gallagher collection person dataset

Table 2: Confusion matrix for multi-face age estimation over the Images of Groups dataset

4.2.4 Experiment 4: Results for Comparison of the Proposed Multi-expression Recognition and Age Estimation Model with Other State-of-the-Art Models

Figs. 13a–13f and 14a–14f show the ROC curve graphs for all multi-face expressions and age groups. The ROC curve shows the relationship between the true positive rate and the false positive rate. The true positive rate represents the sensitivity, and the false positive rate represents 1 − specificity. The true positive rate and false positive rate can be calculated as in Eqs. (18) and (19), respectively;
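Eqs. (18) and (19) are not reproduced in this excerpt; the sketch below assumes the standard ROC definitions, TPR = TP/(TP + FN) and FPR = FP/(FP + TN), which are consistent with the sensitivity and 1 − specificity reading above.

```python
# Assumed standard forms of Eqs. (18)-(19): the ROC coordinates computed
# from the confusion-matrix counts of one class.

def true_positive_rate(tp, fn):
    """Sensitivity: fraction of actual positives correctly detected."""
    return tp / (tp + fn)

def false_positive_rate(fp, tn):
    """1 - specificity: fraction of actual negatives wrongly flagged."""
    return fp / (fp + tn)
```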

Figure 13: The ROC curve graphs for all multi-face expressions over the Gallagher collection person dataset. The lowest and highest values of the true positive and false positive rates in the expression ROC curve graphs using the RNN are; Neutral: (0.03, 0.00) and (0.80, 1.00), Happy: (0.02, 0.02) and (0.92, 0.98), Sad: (0.14, 0.027) and (0.80, 1.00), Fear: (0.10, 0.00) and (0.81, 0.98), Angry: (0.09, 0.01) and (0.77, 1.00), and Surprise: (0.12, 0.03) and (0.93, 0.98)

We have tested our multi-face expression recognition and age estimation (FERAE) model against state-of-the-art methods, i.e., a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), and a Deep Belief Neural Network (DBNN). The results of Experiment 4 show that the RNN, along with the salient feature descriptors of both expression and age, provides better results than the CNN and DBNN.

Figure 14: The ROC curve graphs for all the age groups over the Images of Groups dataset. The lowest and highest values of the true positive and false positive rates in the age group ROC curve graphs using the RNN are; 0–2: (0.00, 0.00) and (0.90, 1.00), 3–7: (0.18, 0.01) and (0.92, 1.00), 8–12: (0.00, 0.00) and (0.89, 0.99), 13–19: (0.10, 0.00) and (0.90, 1.00), 20–36: (0.00, 0.00) and (0.98, 0.99), and 37–65: (0.09, 0.01) and (0.91, 0.97)

    5 Conclusion

In this paper, a fused model of multi-face expression recognition and age estimation is proposed. A synthetic face mask, formed by the localization of the landmark points, is mapped onto the face. The novel point-based and texture-based features obtained using different feature extraction techniques are passed to the RNN classifier for the classification of expressions and age groups. The proposed system is tested using the Gallagher collection person dataset for expression recognition and the Images of Groups dataset for age estimation. Experimental results show that our approach produced superior classification accuracies, i.e., 85.5% over the Gallagher collection person dataset and 91.4% over the Images of Groups dataset. The proposed system applies to surveillance systems, video gaming, consumer applications, e-learning, audience analysis, and emotion robots. As for limitations, the system fails to detect the detailed facial features of persons in images captured too far from the cameras. In the future, we will work on the computational time complexity of the system and also evaluate our system on RGB-D datasets.

Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (No. 2018R1D1A1A02085645). This work was also supported by the Korea Medical Device Development Fund grant funded by the Korean government (the Ministry of Science and ICT; the Ministry of Trade, Industry and Energy; the Ministry of Health & Welfare; the Ministry of Food and Drug Safety) (Project Number: 202012D05-02).

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
