
    A New Hybrid Feature Selection Method Using T-test and Fitness Function

    Computers, Materials & Continua, 2021, Issue 9

    Husam Ali Abdulmohsin, Hala Bahjat Abdul Wahab and Abdul Mohssen Jaber Abdul Hossen

    1Department of Computer Science, Faculty of Science, University of Baghdad, Baghdad, Iraq

    2Faculty of Computer Science, Technology University, Baghdad, Iraq

    3Department of Computer Science, Al-Turath University College, Baghdad, Iraq

    Abstract: Feature selection (FS), also called feature dimensionality reduction or feature optimization, is an essential process in pattern recognition and machine learning because it enhances classification speed and accuracy and reduces system complexity. FS reduces the number of features extracted in the feature extraction phase by removing highly correlated features, retaining features with high information gain, and discarding features with no weight in classification. In this work, a filter-type statistical FS method is designed and implemented. It utilizes a t-test to decrease the convergence between feature subsets by calculating a quality of performance value (QoPV), and a well-designed fitness function to calculate a strength of recognition value (SoRV). The two values are used to rank all features according to a final weight (FW), calculated for each feature subset by a function that prioritizes subsets with high SoRV values. An FW is assigned to each feature subset, and subsets with FWs below a predefined threshold are removed from the feature subset domain. Experiments are implemented on three datasets: the Ryerson Audio-Visual Database of Emotional Speech and Song, Berlin, and Surrey Audio-Visual Expressed Emotion. The performance of the proposed method is compared with those of the F-test and F-score FS methods, and tests are conducted on the system before and after deploying the FS methods. The results demonstrate the comparative efficiency of the proposed method. System complexity is measured as the time overhead required before and after FS, and the results show that the proposed method can reduce system complexity.

    Keywords: Feature selection; dimensional reduction; feature optimization; pattern recognition; classification; t-test

    1 Introduction

    Feature selection (FS) is a preprocessing step in machine learning [1] that enhances classification accuracy. It is the process of selecting a feature subset from a pool of correlated features for use in model construction [2]. This work aims to decrease the high correlation between features, which causes numerous drawbacks, including a failure to gain additional information or improve system performance, increased computational requirements during training, and instability in some systems [3].

    FS algorithms pursue goals such as consuming less time in learning, reducing the dimensionality and complexity of systems, reducing correlation, and increasing system accuracy [4]. FS methods have been said to decrease the data burden and avoid overfitting [5]. FS methods can be considered a combination of search methods that produce feature subsets scored by evaluation measures. The simplest FS method tests all possible subsets and finds the best accuracy. Although this approach is time-consuming, it can identify the feature subset with the highest clustering accuracy. An enhanced version of the simplest FS method was presented, and a parallel FS method was proposed, in which each feature subset is tested individually and a scoring function measures the relevance between features [6].

    Many FS methods have been developed, and they can be categorized in multiple ways. This work is concerned with statistics; hence, we classify FS methods according to the distance measures used to evaluate subsets. Distance measures distinguish redundant or irrelevant features from the main pool, and four types of FS methods can be identified according to their distance measures [7].

    · Wrapper methods assign scoring values to each feature subset after training and testing the model. This requires considerable time, but it obtains the subset with the highest accuracy. The three wrapper FS methods of optimization selection, sequential backward selection, and sequential forward selection (SFS), based on the ensemble algorithms bagging and AdaBoost, were used in [8]. Subset evaluations were performed using naïve Bayes and decision tree classifiers. Thirteen datasets with different numbers of attributes and dimensions were obtained from the UCI Machine Learning Repository. The search technique using SFS based on the bagging algorithm with decision trees gained the best average accuracy (89.60%).

    · Filter methods measure the relevance of features through univariate statistics. In tests of 32 FS methods on four gene expression datasets, filter methods were found to outperform wrapper and embedded methods [9].

    · Embedded methods differ in how learning interacts with the FS phase. Unlike filter methods, wrapper methods utilize learning to measure the quality of several feature subsets without knowledge of the structure of the classification or regression method used; therefore, they can work with any learning machine. Embedded methods, in contrast, do not separate the learning and FS phases, and the structure of the class of functions under consideration plays a crucial role. An example is the measurement of the value of a feature using a bound that is valid for the support vector machine (SVM) only and not for the decision tree method [10].

    · Hybrid methods utilize two or more FS methods. An efficient hybrid method consisting of principal component analysis and ReliefF was proposed in [11]. Ten benchmark disease datasets were used for testing. The approach eliminated 50% of the irrelevant and redundant features from the dataset and significantly reduced the computation time.

    FS methods employ strategies based on the types of feature subsets: redundant and weakly relevant, weakly relevant and non-redundant, noisy and irrelevant, and strongly relevant [12]. The current study aims to remove redundant and strongly correlated features by deploying a t-test, and to find coupled features with high dependency by deploying a fitness function. Although FS places a considerable burden on system performance, it is rarely omitted from pattern recognition systems.

    The main concepts of FS methods are as follows:

    · FS methods are employed either to reduce system complexity or to increase accuracy. A 2006 study employed two FS algorithms: the t-test method to filter irrelevant and noisy genes, and kernel partial least squares (KPLS) to extract features with high information content [13]. It was found that neither method achieved high classification results. FS methods do not necessarily increase the classification accuracy of pattern recognition systems; they can remove relevant features even when there is no conflict between the removed features [14,15].

    · There is no superior FS method. Research has shown that no specific group of FS filter methods consistently outperforms the others, although certain groups of FS filter methods have been observed to perform best on many datasets [3,16]. Many FS methods have been used in pattern recognition research and in different scientific fields, with widely varying results. Furthermore, each FS filter method performs differently on specific types of datasets; this is called FS algorithm instability [17].

    One drawback of statistical FS algorithms is that they do not consider the dependency of features on one another; a statistical FS algorithm can eliminate a feature whose absence negatively affects the performance of another selected feature because of their strong interrelationship [17]. This work avoids this drawback by calculating the dependency of each feature on the other features. State-of-the-art methods decide to remove highly correlated features without a basis in proper measurement. Two highly correlated features can be powerful in classifying two different attributes, so removing one can severely affect classification. To avoid this, we calculate the strength of recognition value (SoRV) and assign it a high weight through an exponential function. The proposed method outperforms the state of the art through a fitness function that calculates the SoRV for each feature and feature subset (pair of features). Removing a feature can also affect the performance of another feature. To avoid this, we group features in subsets of pairs to calculate the degree of dependence between each feature and all other features.

    In the proposed method, there is a maximum of two features in each tested subset. Using a combination of three or more features in each feature subset would exponentially increase the time consumption, and reaching the optimal solution would take months. Nevertheless, subsets of two features provide good results in a reasonable amount of time. Hence, we fix the number of features per subset at two. We focus on statistical filter FS methods because of their stability, scalability, and minimal time consumption.
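    The restriction to pairs can be motivated by a quick count. The sketch below (standard-library Python) shows how the number of candidate subsets grows with subset size for the k = 2,186 features used in this work; the subset counts are plain binomial coefficients, not figures from the paper.

```python
from math import comb

# With k features, grouping them into subsets of size r yields C(k, r)
# candidate subsets to evaluate. k = 2,186 as in this work.
k = 2186
pairs = comb(k, 2)    # two-feature subsets, as used by the proposed method
triples = comb(k, 3)  # three-feature subsets are far more numerous

print(pairs, triples)
```

    Evaluating triples would multiply the workload by (k-2)/3 ≈ 728, which is why the method fixes the subset size at two.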

    The remainder of this paper is organized as follows. Section 2 explores some recent FS methods that utilize the t-test and feature ranking approaches. Section 3 explains the proposed methodology. Section 4 presents the experimental setup and results. Section 5 discusses our conclusions and directions for future work.

    2 Related Work

    The t-test is deployed in many fields to measure the convergence relevance between samples. A proposed gene selection method utilized two FS methods: the t-test to remove noisy and irrelevant genes, and KPLS to select features with noticeable information content [13]. Three datasets were used in a performance experiment, and the results showed that neither method yielded satisfactory results. A modified hybrid ranking t-test measure was applied to genotype HapMap data [18]. Each single nucleotide polymorphism (SNP) was ranked relative to other feature importance measures, such as F-statistics and the informativeness for assignment. The highest-ranked SNPs, in different groups and different numbers, were selected as the input to an SVM classifier to find the best classification accuracy achieved by a specific feature subset. A two-class FS algorithm utilizing Student's t-test was used to extract statistically relevant features, and the-norm SVM and recursive feature elimination were used to determine the patients at risk of cancer spreading to their lymph nodes [19]. A proposed FS method used Student's t-test to measure the diversity of the term frequency distribution between one category and the entire dataset [20]. An FS approach based on a nested genetic algorithm (GA) utilized filter and wrapper FS methods [21]. For the filter FS, a t-test was used to rank the features according to convergence and redundancy; a nested neural network and SVM were used as the wrapper FS technique. A t-test was utilized to compare outcome measures pre- and post-ablation through an intraprocedural 18F-fluorodeoxyglucose positron emission tomography (PET) scan assessment before and after PET/contrast-enhanced guided microwave ablation [22]. A fatigue characteristic parameter optimization selection algorithm utilized the classification performance of an SVM as an evaluation criterion and applied the sequential forward floating selection algorithm as a search strategy [23]. The algorithm aimed to reach the optimal feature subset of fatigue motion by reducing the dimensionality of the domain set of fatigue feature parameters. Based on the t-test analysis of variance method, the algorithm was used to analyze the influence of individual athlete differences and fatigue exercises on sports behavior and eye movement characteristics.

    3 Proposed Method

    A filter FS method is proposed to improve the emotion classification accuracy on the datasets deployed in this work: the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), Berlin (Emo-DB), and Surrey Audio-Visual Expressed Emotion (SAVEE). The method uses the minimum number of features to achieve the highest accuracy in the least time.

    The structure of the features extracted from each dataset is shown in Tab. 1. As an example, we explain the structure of the features extracted from the RAVDESS dataset. First, 2,186 features are extracted from each of the 1,440 audio wave file samples. The same number is extracted from Emo-DB and SAVEE.

    Table 1: Structure of features extracted

    The number of features is k, as is the number of feature subsets. n is the number of samples in each feature subset, where n = 1,440 for RAVDESS, n = 535 for Emo-DB, and n = 480 for SAVEE. The feature number in a feature subset is denoted by i, and j is the sample number. Sections 3.1-3.3 discuss the procedures of the proposed FS method.

    3.1 QoPV Calculation

    The t-test value is calculated between each subset and all other subsets through Eq. (1):

    where k is the number of feature subsets; n is the number of samples in each feature subset; and i = 1, ..., k-1, m = 1, ..., n, and j = i+1, ..., k, to avoid calculating the quality of performance value (QoPV) for the same pair of feature subsets. The QoPV is obtained by calculating the t-test value between subset i and all other subsets. The QoPV of a subset decreases each time the t-test value is 0; otherwise, it increases. After the QoPV of each feature subset is calculated with respect to all other subsets, the feature subsets are ranked by their QoPVs in descending order.

    3.1.1 t-test

    This work uses a two-sample t-test, i.e., the so-called independent t-test, because the two groups of values being tested come from different features. The formula of the t-test function is shown in Eq. (2):

    where x̄1 and x̄2 are the means of the two feature subsets being compared, as in Eq. (1); S2 is the pooled standard error of the two subsets; and fe1 and fe2 are the numbers of samples in the two subsets, which are equal. The t-test indicates significant differences between pairs of feature subsets. A large t-test value indicates that the difference between the means of the two groups is higher than the pooled standard error of the two feature subsets [24]. Thus, the higher the t-test value, the better the results. Feature subsets with low t-test values should be removed because their values are highly similar to those of other feature subsets. However, the final decision is not made at this step, because a feature subset with a low QoPV might have a high SoRV, in which case its final weight (FW) may be higher than those of feature subsets with a high QoPV. This reflects the novel idea of our work.
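    As an illustration only, the QoPV step can be sketched in code. This is not the authors' implementation: the t statistic below uses the standard pooled-variance, equal-sample-size form consistent with the quantities named above, and the ±1 increments for nonzero/zero t values are a hypothetical reading of the increase/decrease rule stated in Section 3.1.

```python
import statistics

def t_value(a, b):
    """Independent two-sample t statistic with pooled variance,
    for equal sample sizes as in this setting."""
    n = len(a)
    mean_diff = statistics.mean(a) - statistics.mean(b)
    pooled_var = (statistics.variance(a) + statistics.variance(b)) / 2
    if pooled_var == 0:
        return 0.0
    return mean_diff / (pooled_var * (2 / n)) ** 0.5

def qopv(subsets):
    """QoPV sketch: +1 for every nonzero pairwise t value,
    -1 for every zero one (hypothetical increments)."""
    k = len(subsets)
    scores = [0] * k
    for i in range(k - 1):
        for j in range(i + 1, k):  # j = i+1..k avoids duplicate pairs
            t = t_value(subsets[i], subsets[j])
            delta = 1 if t != 0 else -1
            scores[i] += delta
            scores[j] += delta
    return scores
```

    The subsets would then be ranked by these scores in descending order, as described above.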

    3.2 SoRV Calculation

    The SoRV for each subset i is obtained using the neural network-based fitness function. The SoRV is calculated over pairs of subsets to observe the classification effect of each feature subset i on all other feature subsets, through Eq. (3):

    where k is the number of feature subsets; n is the number of samples in each feature subset; and i = 1, ..., k-1, m = 1, ..., n, and j = i+1, ..., k. After several experiments, a value of 37% was found to achieve the highest performance for the tested features.
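    A rough sketch of the SoRV computation is given below. The neural-network fitness function of Eq. (3) is not reproduced; `fitness(a, b)` is a hypothetical stand-in for any callable that returns a recognition-quality score for a pair of feature subsets, and averaging each subset's pair scores is an assumption, not the paper's exact aggregation.

```python
def sorv(subsets, fitness):
    """SoRV sketch: score each feature subset i by the recognition
    strength its pairings with the other subsets achieve, averaged
    over all pairs (i, j) with j > i or j < i."""
    k = len(subsets)
    scores = [0.0] * k
    counts = [0] * k
    for i in range(k - 1):
        for j in range(i + 1, k):
            f = fitness(subsets[i], subsets[j])  # stand-in for the NN fitness
            scores[i] += f
            counts[i] += 1
            scores[j] += f
            counts[j] += 1
    return [s / c for s, c in zip(scores, counts)]
```

    Any classifier-accuracy function over a pair of feature columns could be plugged in as `fitness`.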

    3.3 Final Weight (FW) Calculation

    Several experiments show that the SoRV is more important than the QoPV. Specifically, the SoRV indicates the recognition power of each feature subset, whereas the QoPV indicates the convergence of the feature subset with respect to other feature subsets. Nevertheless, we need the QoPV to determine the degree of convergence of each feature subset. Thus, we use Eq. (4) to assign a higher weight to the SoRV than to the QoPV.

    Using Eq. (4), we calculate the FW for all feature subsets i, i = 1, 2, ..., k, where k is the number of feature subsets. All feature subsets are sorted in descending order of their FWs. In the final phase of the proposed method, we select the features that gain the highest emotion recognition accuracy. The number of features selected at the beginning is 20, because shorter lengths result in low classification accuracy. Thus, the 20 features with the highest FW values are selected and evaluated through the fitness neural network function used in Eq. (3). Other features are then added according to the sorted list of FWs. The FS process stops when emotion recognition accuracy stops increasing and adding further features does not improve it. The final numbers of features selected by the proposed method from the 2,186 features extracted from the RAVDESS, Emo-DB, and SAVEE datasets are 333, 247, and 270, respectively. The pseudocode of the proposed method is shown in Fig. 1.
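    The FW ranking and the greedy stopping rule can be sketched as follows. Eq. (4) is not reproduced in this text, so `final_weights` uses a hypothetical exponential weighting of SoRV over QoPV that merely illustrates the stated intent; `evaluate` stands in for the neural-network accuracy of Eq. (3), and the initial length of 20 follows the text.

```python
import math

def final_weights(sorv, qopv):
    """Hypothetical stand-in for Eq. (4): weight SoRV exponentially
    so that it dominates QoPV (the exact formula is not given here)."""
    return [math.exp(s) + q for s, q in zip(sorv, qopv)]

def select_features(order, evaluate, start=20):
    """Greedy tail of the method: take the `start` highest-FW features,
    then keep appending the next-ranked feature while the evaluated
    accuracy keeps increasing."""
    chosen = list(order[:start])
    best = evaluate(chosen)
    for feat in order[start:]:
        acc = evaluate(chosen + [feat])
        if acc <= best:
            break  # accuracy stopped increasing: stop selecting
        chosen.append(feat)
        best = acc
    return chosen
```

    Here `order` is the list of feature indices sorted by descending FW, and `evaluate` returns the recognition accuracy of a candidate feature set.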

    Figure 1: Pseudocode of the proposed FS method

    4 Experimental Results

    Section 4.1 discusses the experimental setup, Section 4.2 describes the datasets used in the experiments, and Section 4.3 explains the experimental results.

    4.1 Experimental Setup

    All audio files were preprocessed prior to feature extraction. Silent parts at the beginning and end of each file were removed, the data were normalized to the interval (0, 100), and the files were grouped according to the emotions they represented. The number of features extracted from each audio file was 2,186. Audio file samples were selected randomly for evaluation, and 70%, 15%, and 15% of the samples of each dataset were selected for training, validation, and testing, respectively. To evaluate the proposed FS method, we used a one-layer, 10-node neural network classifier. Feature extraction was applied to each of the three datasets before the proposed method was applied.
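    The normalization and data split described above can be sketched as follows (standard-library Python; the function names are ours, not from the paper).

```python
import random

def normalize(values, lo=0.0, hi=100.0):
    """Min-max scale a feature column into (lo, hi), mirroring the
    normalization of the data to the interval (0, 100)."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0  # guard against a constant column
    return [lo + (v - vmin) * (hi - lo) / span for v in values]

def split(samples, seed=0):
    """Random 70/15/15 train/validation/test split, as in the setup.
    Integer arithmetic keeps the split sizes exact."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    n = len(samples)
    n_train = 70 * n // 100
    n_val = 15 * n // 100
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test
```

    For the 1,440 RAVDESS samples this yields 1,008 training, 216 validation, and 216 test samples.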

    4.2 Experimental Data

    The datasets used in this work were selected through an online search according to the following criteria.

    · This work proposes an FS method for use in speech emotion recognition; thus, the most important criterion is the emotions represented in a dataset. Selected datasets should represent the six basic emotions of fear, disgust, happiness, sadness, anger, and surprise, according to Paul Ekman's definition [25]. The three selected datasets intersect in representing fear, disgust, happiness, neutrality, sadness, and anger, which include five of the basic emotions. The RAVDESS dataset represents eight emotions through 1,440 audio files, and Emo-DB and SAVEE represent seven emotions through 535 and 480 audio files, respectively.

    · The selected datasets should be recorded at different frequencies to test the proposed method. The RAVDESS, Emo-DB, and SAVEE datasets were recorded at 48,000, 16,000, and 44,100 Hz, respectively, as shown in Tab. 2.

    · Datasets should show gender balance; this criterion was met in this work.

    The same feature extraction process was implemented on each of the datasets, and 2,186 features were produced for each audio file. These were established by a predefined feature extraction method that utilizes 15 features: entropy, zero crossing (ZC), deviation of ZC, energy, deviation of energy, harmonic ratio, Fourier function, Haar, MATLAB fitness function, pitch function, loudness function, Gammatone cepstral coefficients according to time and frequency, and the MFCC function according to time and frequency. The standard deviation (SD) of these features was calculated using 14 degrees on either side of the mean (i.e., 0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.25, 2.5, 2.75, 3, 3.5, and 4). All experiments were implemented separately on each dataset.

    4.3 Performance Analyses

    This section discusses the experimental results. The performance efficiency of the proposed FS method is evaluated through a neural network classifier. Three emotional datasets are used in the evaluation process, as shown in Tab. 2. The accuracy of the classifier is calculated using confusion matrices and receiver operating characteristic (ROC) curves. The confusion matrices represent emotions as numbers.

    Table 2: Datasets used in this work

    The confusion matrices for the RAVDESS dataset show the following emotions from left to right, which we denote as 1 to 8, in this order: neutrality, calm, happiness, sadness, anger, fear, disgust, and surprise. The confusion matrices for the Emo-DB dataset show the following emotions from left to right, denoted as 1 to 7, in this order: fear, disgust, happiness, boredom, neutrality, sadness, and anger. The confusion matrices for the SAVEE dataset show the following emotions from left to right, denoted as 1 to 7, in this order: anger, disgust, fear, happiness, sadness, surprise, and neutrality. The ROC line chart is one of the best techniques for testing the results of a classification system. It is a two-dimensional line chart; the x-axis shows the false-positive rate (FPR), and the y-axis shows the true-positive rate (TPR). The ROC shows the relationship between sensitivity and specificity and is generated by plotting the TPR value against the FPR value. The TPR is the ratio of cases correctly predicted as positive (true positives, TP) to all positive cases (the true positives plus the false negatives, FN), as shown in Eq. (5).

    The FPR is the ratio of cases incorrectly predicted as positive (false positives, FP) to all negative cases (the false positives plus the true negatives, TN), as shown in Eq. (6).
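    Eqs. (5) and (6) reduce to two one-line ratios, shown here for concreteness (the illustrative counts are ours, not results from the paper):

```python
def tpr(tp, fn):
    """Eq. (5): true-positive rate = TP / (TP + FN)."""
    return tp / (tp + fn)

def fpr(fp, tn):
    """Eq. (6): false-positive rate = FP / (FP + TN)."""
    return fp / (fp + tn)

# An ROC curve plots (FPR, TPR) points as the decision threshold varies;
# e.g., 90 TP, 10 FN, 5 FP, and 95 TN give the point (0.05, 0.9).
point = (fpr(5, 95), tpr(90, 10))
```

    A curve of such points hugging the top-left corner corresponds to high sensitivity at a low false-positive rate.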

    The ROC curve is a compromise between the TPR (or sensitivity) and 1-FPR (or specificity). The degree to which the curves approach the top-left corner of the ROC line chart indicates the performance of the classification process in making correct predictions. The closer the curve is to the 45° diagonal of the ROC space, the less accurate the classification is because of incorrect predictions [26]. The greatest advantage of the ROC in evaluating classifiers is that it does not depend on class distribution, but rather on classifier prediction. The results achieved from our experiments are presented in Tab. 3, which compares the proposed FS method to the widely used F-test and F-score methods. Tab. 3 shows that the proposed FS method achieves the highest classification accuracy among these methods.

    Tab. 3 and Figs. 2-4 present the classification accuracy results for the three datasets before deploying the FS methods (utilizing all 2,186 features), which are 93.05%, 95%, and 97.2% for the RAVDESS, Emo-DB, and SAVEE datasets, respectively.

    Figs. 5-7 show the classification accuracies after running the three FS methods on the RAVDESS, Emo-DB, and SAVEE datasets, respectively. The highest classification accuracy in this work was gained by running the proposed FS method on all three datasets. The highest classification accuracies achieved by running the proposed, F-test, and F-score FS methods on the RAVDESS dataset are 93.5%, 92.6%, and 92.1%, respectively, as shown in Figs. 5a-5c and Tab. 3. These values are lower than those obtained without using FS methods because many of the emotions represented in RAVDESS audio samples are similar and are thus difficult to distinguish. The same is true of realistic datasets. This similarity between audio samples produces similarity in the extracted features; hence, the proposed, F-test, and F-score FS methods yield poor outcomes.

    Table 3: Accuracy percentages achieved by implementing the experiments with and without the FS method

    Figure 2: Test confusion matrix before applying FS methods on RAVDESS

    Figure 3: Test confusion matrix before applying FS methods on Emo-DB

    Figure 4: Test confusion matrix before applying FS methods on SAVEE

    Figure 5: (a) Test confusion matrix after applying the proposed FS method on RAVDESS (b) Test confusion matrix after applying F-test FS method on RAVDESS (c) Test confusion matrix after applying F-score FS method on RAVDESS

    Figure 6: (a) Test confusion matrix after applying the proposed FS method on Emo-DB (b) Test confusion matrix after applying F-test on Emo-DB (c) Test confusion matrix after applying F-score on Emo-DB

    Tab. 3 and Fig. 6 show the classification accuracies after deploying the three FS methods on the Emo-DB dataset. The proposed FS method gains the highest classification accuracy; the F-test and F-score FS methods achieve accuracies of 97.5% and 96.3%, respectively. As observed in the confusion matrices, each FS method affects the recognition of a certain emotion: the proposed method affects the recognition of happiness, the F-test FS method affects the recognition of fear and anger, and the F-score FS method affects the recognition of boredom.

    Tab. 3 and Fig. 7 show the classification accuracies after deploying the three FS methods on the SAVEE dataset. The proposed FS method gains the highest accuracy among all compared methods. Specifically, the proposed FS method gains 100% classification accuracy, compared to 98.6% and 97.2% for the F-test and F-score methods, respectively. The F-score FS method achieves no improvement in the classification accuracy.

    Figure 7: (a) Test confusion matrix after applying the proposed FS method on SAVEE (b) Test confusion matrix after applying F-test on SAVEE (c) Test confusion matrix after applying F-score on SAVEE

    All the results shown in the confusion matrices are described by the legend charts shown in Fig. 8. The results highlight the superiority of the proposed FS method over the F-test and F-score FS methods.

    Figure 8: Legend chart of the results gained in this work on the three datasets utilized

    As mentioned above, the results are also analyzed using ROC line charts. Figs. 9-11 show the ROC curves for the classification processes on the RAVDESS, Emo-DB, and SAVEE datasets, respectively, before deploying the FS methods. Through the confusion matrices, we show numerically the superior performance of the proposed FS method over the other two FS methods. Through the ROC line charts, we show visually that the proposed method outperforms the F-test and F-score FS methods.

    A visual comparison of the ROC curves in Fig. 9 with those in Fig. 12 shows that all the ROC curves in Figs. 12a-12c are farther from the top-left corner than those in Fig. 9. This demonstrates that the FS methods failed to improve the results on this dataset, although the proposed method attained the highest results among them. The ROC curves in Fig. 12a are closer to the top-left corner than those in Figs. 12b and 12c, which demonstrates that the optimum performance is achieved by the proposed FS method.

    Figure 9: Test ROC line chart before applying FS methods on RAVDESS

    Figure 10: Test ROC line chart before applying FS methods on Berlin

    Figure 11: Test ROC line chart before applying FS methods on SAVEE

    We similarly compare the ROC curves in Fig. 10 with those in Figs. 13a-13c for the Emo-DB dataset.

    Figure 12: (a) ROC line chart after applying the proposed FS method on RAVDESS (b) ROC line chart after applying F-test on RAVDESS (c) ROC line chart after applying F-score on RAVDESS

    Figure 13: (a) ROC line chart after applying the proposed FS method on Emo-DB (b) ROC line chart after applying F-test on Emo-DB (c) ROC line chart after applying F-score on Emo-DB

    The ROC curves in Fig. 11 are also compared with those in Figs. 14a-14c for the SAVEE dataset. All the curves in the ROC line chart shown in Fig. 14a pass through the top-left corner of the ROC. Hence, the emotions represented in the SAVEE dataset are recognized with 100% accuracy using the proposed FS method.

    Figure 14: (a) ROC line chart after applying the proposed FS method on SAVEE (b) ROC line chart after applying F-test on SAVEE (c) ROC line chart after applying F-score on SAVEE

    Time consumption is one of the most important factors in classification; thus, the proposed FS method prioritizes time consumption, and it performs well in this respect after the number of features is decreased. As shown in Figs. 15-17, 2,186 features are used as input to the 10-node single-layer neural network.

    Figure 15: NN training window before applying FS methods on RAVDESS

    Figure 16: NN training window before applying FS methods on Emo-DB

    Figure 17: NN training window before applying FS methods on SAVEE

    For the RAVDESS dataset, eight epochs are needed to achieve 93.1% classification accuracy without using any FS method (Fig. 15). For the Emo-DB dataset, 67 epochs are needed to achieve 95% classification accuracy (Fig. 16). For the SAVEE dataset, six epochs are needed to achieve 97.2% classification accuracy (Fig. 17). Tab. 4 compares the numbers of epochs needed to classify the emotions in the RAVDESS, Emo-DB, and SAVEE datasets before deploying the FS methods (Figs. 15-17) and after deploying them (Figs. 18-20).

    Figure 18: (a) NN training window after applying the proposed FS method on RAVDESS (b) NN training window after applying F-test on RAVDESS (c) NN training window after applying F-score on RAVDESS

    Figure 19: (a) NN training window after applying the proposed FS method on Emo-DB (b) NN training window after applying F-test on Emo-DB (c) NN training window after applying F-score on Emo-DB

    Figure 20: (a) NN training window after applying the proposed FS method on SAVEE (b) NN training window after applying F-test on SAVEE (c) NN training window after applying F-score on SAVEE

    When the proposed, F-test, and F-score FS methods are applied on the RAVDESS dataset, classification takes 6 and 7 epochs, respectively (Fig. 18). Thus, the three FS methods have adequate classification times, but the proposed FS method is faster than the other two. When the proposed, F-test, and F-score FS methods are applied on the Emo-DB dataset, the classification process takes 9, 6, and 8 epochs, respectively (Fig. 19). Thus, the three FS methods have adequate classification times, and the F-test FS method is the fastest of the three. Although the F-test FS method achieves the fastest time, its classification accuracy is 1.3% less than that of the proposed FS method.

    Before and after using the FS methods, six epochs are needed to classify the seven emotions in the SAVEE dataset (Figs. 17 and 20). Hence, no improvement in classification time is achieved. Nevertheless, the classification accuracies are adequate, as discussed previously. Before the FS methods are applied, 2,186 features are extracted from each audio file in the three datasets, because the same feature extraction process is applied to all of them. The numbers of features selected by the three FS methods differ (Tab. 5). Although the proposed FS method uses the fewest features from the RAVDESS dataset, it records the highest classification accuracy; the same is true for the SAVEE dataset. For the Emo-DB dataset, the proposed method achieves the highest accuracy in recognizing the seven emotions while retaining the largest number of features.

    Table 4: Time required to implement the experiments with and without the FS methods

    Table 5: Numbers of features produced before and after implementing the FS methods

    5 Conclusion and Future Work

    The confusion matrices in this study reveal a strong relationship between each FS method and the recognized emotions. Each FS method affects the recognition of one or two emotions, and different methods affect different emotions. According to the results for the Emo-DB dataset, the proposed method negatively affects the accurate classification of happiness, the F-test FS method negatively affects the accurate classification of fear and anger, and the F-score FS method negatively affects the accurate classification of boredom. In summary, each FS method negatively affects the classification accuracy of a different emotion. Therefore, building a hierarchical or ranking FS method from the three FS methods utilized in this work would yield strong classification results, but it would consume more time. Ultimately, no fixed relationship exists among the number of features, speed, and classification accuracy: the highest accuracy can be obtained with the lowest number of features, and the highest speed can be achieved with the largest number of features. The variation depends on the SoRV factor utilized in selecting the features most powerful in recognizing different emotions. Thus, measuring the classification power of each feature is the key to the success of the proposed work. Specifically, many features that are highly convergent but have high classification power can be excluded from the main feature domain; such features are neglected by most FS methods. By contrast, our work assigns greater importance to the SoRV than to the QoPV because of its contribution to classification.

    Acknowledgement:We thank LetPub (www.letpub.com) for its linguistic assistance during the preparation of this manuscript.

    Funding Statement:The authors received no specific funding for this study.

    Conflict of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
