
    Dimensional emotion recognition in whispered speech signal based on cognitive performance evaluation


Wu Chenjian1 Huang Chengwei2 Chen Hong3

    (1School of Electronic and Information Engineering,Soochow University,Suzhou 215006,China)

    (2College of Physics,Optoelectronics and Energy,Soochow University,Suzhou 215006,China)

    (3School of Mathematical Sciences,Soochow University,Suzhou 215006,China)


The cognitive performance-based dimensional emotion recognition in whispered speech is studied. First, the whispered speech emotion databases and data collection methods are compared, and the character of emotion expression in whispered speech is studied, especially the basic types of emotions. Secondly, the emotion features for whispered speech are analyzed, and by reviewing the latest references, the related valence features and arousal features are provided. The effectiveness of valence and arousal features in whispered speech emotion classification is studied. Finally, the Gaussian mixture model is studied and applied to whispered speech emotion recognition. The cognitive performance is also considered in emotion recognition so that the recognition errors of whispered speech emotion can be corrected. Based on the cognitive scores, the emotion recognition results can be improved. The results show that the formant features are not significantly related to the arousal dimension, while the short-term energy features are related to the emotion changes in the arousal dimension. Using the cognitive scores, the recognition results can be improved.

    whispered speech;emotion recognition;emotion dimensional space

Many disabled people rely on hearing aids to communicate through normal speech[1]. Studies on whispered speech have drawn much attention. The study on speech emotion recognition is a subfield of whispered speech signal processing, and it is closely related to multiple disciplines, including signal processing, pattern recognition, phonetics, and psychology. Therefore, whispered speech emotion recognition has attracted researchers from various fields. However, understanding how human beings express emotions in whispered speech and how a machine recognizes these emotions is still a great challenge.

Schwartz et al.[14] studied the problem of gender recognition from the whispered speech signal. Tartter studied the listening perception of vowels in whispered speech and showed that some of the vowels, such as [a] and [o], may be confused in whispered speech. The early works mainly focused on whispered speech from the phonetic view and adopted the listening test as an experimental measure. In 2002, Morris[17] studied the enhancement of whispered speech from the signal processing aspect, and his work covered many areas, including speech conversion, speech coding, and speech recognition. Gao[18] studied tones in Chinese whispered speech in a perception experiment. Gong et al.[6,19] studied endpoint detection and formant estimation in the whispered speech signal. The recognition of tones in whispered speech was also studied in Ref.[7]. Jin et al.[20] studied the whispered speech emotion recognition problem and established a basic whispered speech emotion database. Gong et al.[5] used formant and speech rate features to classify whispered speech into three emotion categories: happiness, anger, and sadness. They also studied the emotional features in whispered speech and found that time duration and short-term energy may distinguish anger from sadness.

In this paper, emotion recognition in the whispered speech signal is studied. The collection of emotional speech is carried out in an eliciting experiment in which the cognitive performance is evaluated. Therefore, two kinds of information can be fused: the emotional dimension information and the dynamic change of cognitive ability. The 2D arousal-valence dimensional emotion space is a continuous space for emotions. It can be safely assumed that whispered speech emotion is also distributed in the same way, as the inner state of the subject is the same. Based on the dimensional emotion theory that emotions can be treated as continuous vectors in the 2D space, a system is developed which can not only recognize the whispered speech emotion but also model the relationship between the past emotional state and the current emotional state. The fundamental belief here is that the emotional state transfer probabilities differ among discrete emotion classes. For instance, rapid shifting between positive and negative emotional states is very likely to be a classification mistake.

1 Whispered Speech Database

    1.1 Overview

A high-quality emotion database is the foundation of emotion recognition research. There are many speech emotion databases available; however, there is still a lack of whispered speech emotion databases. The establishment of a whispered speech emotion database consists of five major steps: 1) a data collection plan; 2) whispered speech recording; 3) data validation and editing; 4) emotional sample annotation; 5) a listening test. Compared with normal emotional speech data, in view of recognition accuracy, the establishment of a whispered speech emotion database is a great challenge. Various naturalistic data in normal speech have been obtained, covering a wide range of emotion types, e.g., frustration, fidgetiness, and anxiety. However, in the previous studies on whispered speech, there were not enough types of emotions in the databases. Also, the expression and recognition of emotions in whispered speech is much more difficult than in normal speech. A few examples of whispered speech emotion databases are summarized in Tab.1.

Tab.1 Comparison of whispered speech emotion databases

1.2 Whispered emotional speech

The recording of whispered speech emotion is the first step of our research, and the quality of the data is essential to the recognition system performance. For normal speech recording, a quiet laboratory room may be sufficient. However, the recording of whispered speech requires a silent room to avoid noise.

During the data collection, normal speech with the same text is also recorded for comparison. The normal speech and whispered speech under a neutral state are shown in Fig.1. The pitch contour of normal speech is demonstrated in Fig.2(a). The formant frequency of whispered speech is demonstrated in Fig.2(b). Due to the missing pitch frequency in whispered speech, its formant frequency is especially important. Since the intensity of whispered speech is much lower than that of normal speech, noise influence becomes an important problem in emotion recognition. Schuller et al.[24] first studied noise influence on speech emotion recognition. Tawari et al.[25] applied noise reduction algorithms in in-vehicle emotion recognition applications. Their study showed that wavelet-based speech enhancement can improve the emotion recognition performance in normal speech.

Under different emotional states, the acoustic parameters of the whispered speech signal change. In Fig.3, the duration of the whispered speech under anger, happiness, and neutrality changes significantly.

    1.3 Emotional data analysis in a cognitive task

    In this section the relationship between speech emotion and cognitive performance is studied.

Emotions are closely related to cognitive tasks, and detecting the cognitive-related emotion states is particularly interesting in the work environment. Operations in an extreme environment may induce special types of emotions, such as anxiety, tiredness, and fidgetiness. Those emotions are closely related to the cognitive process and may threaten the success of a task. The detection of negative emotions is important for evaluating the operator's emotional well-being.

The data collection methods can be classified into three categories: naturalistic speech, induced speech, and acted speech. For whispered speech, the eliciting methods which have been successfully applied to normal speech can be used. Johnstone et al.[26] used a computer game to induce normal speech emotional data and established a high-quality emotion dataset. For negative emotions with a practical value, such as fidgetiness, eliciting methods such as sleep deprivation, noise stimulation, and repeated cognitive tasks can be used[23].

    In this paper,the noise stimulation and repeated math calculations are used to induce negative emotions.The time duration of one particular emotion usually lasts for 1 to 2 min.There is no unhealthy influence on the volunteer subjects.

    Fig.1 Normal and whispered speech under neutral state.(a)Waveform of normal speech signal;(b)Spectrum of normal speech signal;(c)Waveform of whispered speech signal;(d)Spectrum of whispered speech signal

    Fig.2 Speech parameters of normal speech and whispered speech.(a)Pitch contour of normal speech;(b)Formant frequency of whispered speech

Many environmental factors can induce negative emotions. Noise is a common cause of negative emotions in extreme environments. For example, in the Russian Mir space station, the noise level is between 59 and 72 dB, which can cause a series of stimulated emotions and hearing problems.

The repeated boring task is a commonly used technique to induce negative emotions in psychology experiments. The subject is required to do math calculations repeatedly and report the results orally. At the same time, the answers are recorded and evaluated. Correct answers add scores to the cognitive performance. In Fig.4, the relationship between negative emotions and false answers is analyzed, which reflects how the subject's cognitive working ability changes over time.

2 Annotation and Validation of Whispered Emotional Speech

After the recording of the original whispered speech data, a listening test is needed. The validation of the speech data relies on listening perception. For each utterance, the emotions at five different intensity levels may be labeled with the scales of 1, 3, 5, 7, and 9, corresponding to very weak, weak, ordinary, strong, and very strong.

Then an evaluation result E_i(j) on each utterance from each listener is obtained, where j is the index of emotional utterances, i denotes the listener, and e represents the evaluation score.

    Fig.3 Whispered speech under emotional states.(a)Waveform of speech of happiness;(b)Formant of speech of happiness;(c)Waveform of speech of anger;(d)Formant of speech of anger;(e)Waveform of speech of neutrality;(f)Formant of speech of neutrality

    Fig.4 Correlation between negative emotion and cognitive score in a cognitive experiment

In the multiple-listener case, to achieve the evaluation result on the j-th sample, the listening results need to be combined:

E_j = Σ(i=1 to M) a_i E_i(j)    (2)

where a_i is the fusion weight and M is the number of listeners. The weight represents the confidence in each listener, and it satisfies

Σ(i=1 to M) a_i = 1

For annotation results from different listeners, a fusion method[15] can be adopted to achieve the final result. For the j-th sample, the similarity between two listeners p and q can be computed over the K emotion classes, where K is the number of emotion classes[23]. For each evaluation, the agreement is accumulated over the data samples, where N is the total number of data samples. Based on the similarity between two listeners, the M×M agreement matrix ρ is achieved, in which each element ρ_pq represents the degree of agreement between listeners p and q, and M is the total number of listeners.

The averaged value represents the degree of agreement between the i-th listener and the others:

ρ̄_i = 1/(M-1) Σ(q=1 to M, q≠i) ρ_iq

The normalized value is adopted as the fusion weight a_i:

a_i = ρ̄_i / Σ(p=1 to M) ρ̄_p    (8)

Substituting Eq.(8) into Eq.(2), the final evaluation result E_j of each utterance is achieved.
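This agreement-based weighting can be sketched in a few lines. The similarity measure used below (the inverse of one plus the mean absolute score difference between two listeners) is an illustrative assumption, not the paper's exact similarity formula; the row-averaging and normalization steps follow the text.

```python
import numpy as np

def fusion_weights(E):
    """E: (M listeners, N samples) matrix of evaluation scores.
    Returns normalized fusion weights a_i that sum to 1.
    Assumed similarity: rho_pq = 1 / (1 + mean |E_p - E_q|)."""
    M, _ = E.shape
    rho = np.zeros((M, M))
    for p in range(M):
        for q in range(M):
            rho[p, q] = 1.0 / (1.0 + np.mean(np.abs(E[p] - E[q])))
    # average agreement of listener i with the others (self excluded)
    rho_bar = (rho.sum(axis=1) - np.diag(rho)) / (M - 1)
    return rho_bar / rho_bar.sum()

def fuse(E):
    """Final per-utterance result E_j = sum_i a_i * E_i(j)."""
    return fusion_weights(E) @ E

# three listeners, three utterances; listener 3 disagrees with the others
E = np.array([[5., 5., 7.],
              [5., 3., 7.],
              [9., 9., 1.]])
a = fusion_weights(E)
print(a)   # the dissenting listener receives the smallest weight
```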

    3 Emotional Feature Analysis for Whispered Speech

    3.1 Acoustic features of normal speech signal

The speech features that can reflect emotional changes in speech have been an essential question in emotion research for a long time. In past decades, researchers studied emotional features from phonetics and psychology. Emotional speech features can be classified into two groups: prosodic features and voice quality features. Prosodic features include intensity, duration, pitch, accent, tone, intonation, etc. In early research on normal speech emotion recognition, prosodic features were the most commonly used emotional features, and among them the pitch parameter is the most important. However, in whispered speech, pitch is missing. Voice quality features include formant frequency, harmonic-to-noise ratio, linear prediction coefficients, etc. Voice quality features can also be essential for classifying valence dimensional information in normal speech.

    3.2 Acoustic features of whispered speech signals

Emotional feature analysis is an essential part of emotion recognition. In this section, the characters of the whispered speech signal and the extracted basic emotional features are analyzed. In whispered speech, the vocal cords do not vibrate normally, since it is an unvoiced mode of phonation. In normal speech, air from the lungs causes the vocal folds of the larynx to vibrate, exciting the resonances of the vocal tract. In whispered speech, the glottis is opened, and the turbulent flow created by exhaled air passing through this glottal constriction provides the source of sound[17].

The acoustic features of normal speech and whispered speech are studied. Among the commonly used speech features, pitch is the most important feature for classifying emotions in normal speech signals. However, there is no pitch feature in the whispered speech signal. Therefore, this parameter cannot be applied to whispered speech emotion recognition. The formant frequency, Mel frequency cepstral coefficients (MFCC), and linear prediction coefficients (LPC) are important speech features, and they can be applied to emotion classification in the whispered speech signal. These features are generally related to the valence dimension. Short-term energy, speech rate, and time duration are related to the arousal level. Experimental results show that short-term energy and speech rate are effective for classifying emotions in whispered speech signals[5]. The Teager energy operator (TEO) has also been applied to whispered speech emotion analysis[21].

Based on the basic acoustic parameters of whispered speech, proper features for modeling can be constructed. Generally speaking, speech emotional features can be grouped into two categories: static features and temporal features. Since the temporal features largely rely on the phonetic changes in the text, the global static features are chosen to construct utterance-level features. Difference, mean, maximum, minimum, and variance are used to construct higher-level text-independent features.
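As a sketch of this construction, the global statistics named above can be collected over the frame-level features of one utterance. The (T, d) frame-matrix layout and the exact statistic set (mean, max, min, variance, and the mean absolute first-order difference) are illustrative assumptions.

```python
import numpy as np

def utterance_features(frames):
    """frames: (T, d) frame-level features (e.g. energy, formants, MFCC).
    Returns a text-independent utterance-level vector built from global
    statistics: mean, max, min, variance, mean absolute difference."""
    stats = [
        frames.mean(axis=0),
        frames.max(axis=0),
        frames.min(axis=0),
        frames.var(axis=0),
        np.abs(np.diff(frames, axis=0)).mean(axis=0),  # frame-to-frame change
    ]
    return np.concatenate(stats)

rng = np.random.default_rng(0)
x = rng.normal(size=(120, 3))   # 120 frames, 3 raw acoustic features
v = utterance_features(x)
print(v.shape)                  # 5 statistics x 3 features
```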

4 Recognition Methodology

    4.1 Overview of classification methods

In this section, the general speech emotion recognition methods are discussed, as shown in Tab.2. Several algorithms that have been successfully applied to whispered speech emotion recognition are also studied. Many pattern recognition algorithms, such as the hidden Markov model (HMM), the Gaussian mixture model (GMM), and the support vector machine (SVM), have been studied for emotion recognition.

    4.2 Emotion recognition for whispered speech

Hultsch et al.[2] discovered that the expressions of happiness and fear were more difficult in whispered speech. Shortly after, Cirillo et al.[4] studied this problem again in a listening perception experiment and came to a similar conclusion. In their research, happiness in whispered speech was easily confused with fear or neutrality. Further spectrum analysis showed that the confusion of these emotions might be caused by the quality decrease of tones in whispered speech. In a low-voice-quality listening experiment, the whispered speech signal was sent over a telephone line, and happiness was confused with sadness or neutrality. Cirillo and his colleagues[4] found that sadness and fear were easy to classify in whispered speech, and the acoustic analysis also supports this conclusion.

    Tab.2 Speech emotion recognition algorithms

Using a set of discriminant features on a simple dataset, many popular pattern recognition algorithms can succeed in speech emotion recognition. However, up to now, studies on whispered speech emotion recognition have been very rare, and which algorithm is suitable for whispered speech emotion recognition is still an open question. Whispered speech emotions have been successfully classified[6,15]. Quantum neural networks for whispered speech emotion recognition are discussed in Ref.[21]. By applying quantum genetic algorithms to back-propagation neural networks, the connection weights are optimized and the robustness of the neural network is improved.

The GMM is the state-of-the-art speaker and language identification algorithm[28], and theoretically, the GMM can be used to model any probability distribution. In practical terms, the GMM parameters need to be set empirically to achieve good performance. In this paper, GMM-based classifiers are applied to whispered speech emotion recognition.

The GMM is the weighted sum of M members,

p(X|λ) = Σ(i=1 to M) a_i b_i(X)

where X is a D-dimensional feature vector; b_i(X) (i=1,2,…,M) is the Gaussian density of each member, with mean vector μ_i and covariance matrix Σ_i; and a_i (i=1,2,…,M) is the mixture weight, where the mixture weights satisfy

Σ(i=1 to M) a_i = 1

The complete GMM parameters can be represented as

λ = {a_i, μ_i, Σ_i}, i = 1,2,…,M

According to the Bayes theory, the classification can be made by maximizing the posterior probability:

k* = arg max(1≤k≤K) P(λ_k|X) = arg max(1≤k≤K) p(X|λ_k)P(λ_k)/p(X)
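A minimal sketch of this per-class GMM decision rule, using scikit-learn's GaussianMixture on synthetic two-class data. The class names, cluster positions, and equal priors are assumptions for illustration; the mixture size of 6 follows the experimental setting mentioned in Section 5.2.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# One GMM per emotion class; classify X by the Bayes rule
# argmax_k  log p(X | lambda_k) + log P(k)  (p(X) is constant across k).
rng = np.random.default_rng(1)
train = {
    "anger":   rng.normal( 3.0, 1.0, size=(150, 2)),  # synthetic cluster
    "sadness": rng.normal(-3.0, 1.0, size=(150, 2)),  # synthetic cluster
}

models = {k: GaussianMixture(n_components=6, random_state=0).fit(v)
          for k, v in train.items()}
log_prior = np.log(1.0 / len(models))   # equal class priors assumed

def classify(x):
    scores = {k: m.score_samples(x.reshape(1, -1))[0] + log_prior
              for k, m in models.items()}
    return max(scores, key=scores.get)

print(classify(np.array([2.5, 3.5])))   # point near the "anger" cluster
```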

    4.3 Error correction based on context information

The current detection decisions are made by the fusion of the current outputs of the emotion classifiers, the previous outputs of the emotion classifiers, and the current cognitive performance score. The system block diagram is shown in Fig.5. The detection outputs are the likelihoods of the classifier, representing negative and non-negative emotions. The previous emotion states are used for inferring the current emotion states, since the emotion state is treated as a continuously changing variable in the time domain. Cognitive performance is modeled by the correctness of the answers the subjects made during the math calculation experiment. Cognitive performance information is presented in the system as a total test score dropping or rising, based on whether the current answer is correct or incorrect.

    Fig.5 System block diagram

The likelihoods of the GMM emotion classifiers can form an emotion vector E_i={p_1,p_2,…,p_m}. Here, i is the sampling time; m is the number of emotion classes; and p_i is the likelihood of the classifier. Considering the previous emotion states and the cognitive performance P, the emotion vector is extended to {E_i,E_i-1,E_i-2,…,E_i-n,P_i}. Error correction is then achieved by using a naive Bayes classifier trained on instances of the extended emotion vector.

    By using the context information,the emotional state transfer between neighboring utterances can be modeled.The affective state generally lasts for a certain period of time,and,therefore,it is safe to assume that the neighboring emotion recognition results are dependent on each other.Error correction is,therefore,believed to be effective.
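The extended-vector construction and the naive Bayes correction step can be sketched as follows. The window length n=2, the synthetic likelihoods, cognitive scores, and toy labels are all illustrative assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def extend(likelihoods, cog_scores, n=2):
    """likelihoods: (T, m) classifier outputs over time; cog_scores: (T,)
    running cognitive-performance score.  Builds the extended vector
    {E_i, E_i-1, ..., E_i-n, P_i} described in the text."""
    T, _ = likelihoods.shape
    rows = []
    for i in range(n, T):
        ctx = likelihoods[i - n:i + 1][::-1].ravel()   # E_i first, then past
        rows.append(np.append(ctx, cog_scores[i]))     # append P_i
    return np.array(rows)

rng = np.random.default_rng(2)
lik = rng.random((50, 2))                # negative vs non-negative likelihoods
cog = rng.random(50)                     # synthetic cognitive scores
labels = (lik[:, 0] > 0.5).astype(int)   # toy ground-truth states

X = extend(lik, cog)                     # 3 frames x 2 likelihoods + score = 7
corrector = GaussianNB().fit(X, labels[2:])
print(corrector.predict(X[:1]))
```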

    5 Experimental Results

    5.1 Experiments on arousal features and valence features recognition

    In the emotion dimensional model,it is generally believed that for normal speech the prosodic features are related to the arousal level and the voice quality features are related to the valence level.It is noted that for whispered speech this correlation has not been proved yet.Therefore,the GMM-based recognition experiment is carried out to demonstrate the possible relationship between arousal-valence dimension space and speech feature space in whispered speech.

In the whispered speech emotion database[20], sadness and anger are located on the negative side of the valence dimension, and happiness is located on the positive side of the valence dimension. In the arousal dimension, anger, happiness, and surprise are located on the positive side, while sadness is on the negative side. Based on the GMM classifiers, 200 utterances are chosen for each emotion category. The training-to-testing ratio is 3:1. A voice quality feature (formant frequency) and prosodic features (short-term energy, speech rate, duration) are used for the recognition test, and the cross-validation results are shown in Tab.3.

    Tab.3 Recognition rates using arousal features and valence features in whispered speech signal

    Surprise and sadness are not well classified,while anger can be easily recognized.When formant features are used alone,the recognition result is not satisfactory.On the other hand,short-term energy and other prosodic features are proved effective.In this experiment,it can be seen that voice quality features are not effective for classifying arousal level,while prosodic features are obviously related to both dimensions.

5.2 Recognition experiments comparison

In the recognition test on our whispered emotional speech database, three popular machine learning algorithms, the GMM, SVM, and K-nearest neighbor (KNN), are adopted. The GMM is a widely used tool for modeling the underlying probability density function. For the emotional data, the GMM-based classifiers are trained with the expectation-maximization (EM) algorithm. A GMM emotional model consists of several Gaussian members, and the mixture number is set experimentally; in our case, it is set to be 6 due to the limited instances. The SVM is a powerful learning algorithm for small sample sets. Since the basic SVM is designed for two-class classification, a decision tree structure is used to classify multiple emotion classes in the one-against-all fashion. However, in the decision tree, how to stop the error from accumulating is an important issue. The SVM classifiers are configured to form the tree structure according to the rank of error rates, so that the highest error rate appears at the bottom of the tree. In this way, the error can be prevented from spreading; the resulting emotion tree structure is shown in Fig.6. The KNN is a simple but powerful classification algorithm, and a basic KNN classifier is also adopted for comparison. The number of dimensions is set to be ten, and the radial basis function is chosen as the kernel of the SVM classifier. The GMM is initialized with K-means clustering, and K is set as the number of emotion classes. In the expectation-maximization algorithm for parameter estimation, the maximum number of iterations is set to be 50, which is enough for convergence in our application.
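The error-rate-ordered chain of one-against-all SVMs can be sketched as below. The synthetic 2-D clusters are an assumption, and ranking by training error stands in for the paper's error-rate ranking, whose exact evaluation protocol is not specified.

```python
import numpy as np
from sklearn.svm import SVC

# Toy 3-class data: well-separated clusters at -3, 0, and 3.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(c, 0.5, size=(60, 2)) for c in (-3, 0, 3)])
y = np.repeat([0, 1, 2], 60)

# Train one RBF SVM per class in one-against-all fashion and record
# each classifier's error rate.
svms, errs = {}, {}
for k in np.unique(y):
    clf = SVC(kernel="rbf").fit(X, (y == k).astype(int))
    svms[k] = clf
    errs[k] = 1.0 - clf.score(X, (y == k).astype(int))

# Chain ordered so the most reliable classifier decides first and the
# highest error rate sits at the bottom of the tree.
order = sorted(svms, key=lambda k: errs[k])

def classify(x):
    for k in order[:-1]:
        if svms[k].predict(x.reshape(1, -1))[0] == 1:
            return k
    return order[-1]    # last node: remaining class by elimination

print(classify(np.array([3.1, 2.9])))
```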

    Fig.6 A depiction of the decision tree for the SVM multi-class classifier

As shown in Tab.4, it can be seen that the GMM outperforms the other classifiers on average. The highest recognition rate occurs in the classification of anger using the GMM-based classifier. The GMM is a powerful modeling algorithm; given a proper setting of mixture numbers and sufficient training data, it can achieve an accurate performance compared with other state-of-the-art machine learning algorithms. Among different speakers, the expressions of anger in whispered speech are perhaps closer to each other. The lowest rate occurs in the classification of surprise. With the absence of the pitch frequency feature, the modeling of surprise is much more difficult. Theoretically, the GMM classifier can represent any probability distribution, and it has a strong ability to fit the training data.

Tab.4 Comparison of recognition results (%)

    The SVM classifier performs better than the KNN classifier in the acted data under a small training sample.The SVM classifier has a strong learning ability when the training data is lim ited.

The error correction method using context information proposed in this paper brings an improvement in recognition, as shown in Tab.5. Based on the previously detected emotional states, the current sample can be classified effectively. Using the cognitive performance scores as a correlated factor provides useful context information for the recognition of the subject's inner emotional state. The close relationship between the cognitive process and emotional states has long been accepted, and in this experiment the emotion recognition results can be corrected using the cognitive scores.

Tab.5 Emotion detection rates before and after error correction (%)

    6 Conclusion

Automatic detection of human emotion is important in many human-computer interaction applications. The detection of emotions is the first step in evaluating the human factor in a man-machine system or in a special operation. Based on the emotion monitor, psychological intervention can be adopted to help people cope with negative emotions.

Studies on whispered speech can lead to future applications in intelligent human-computer interaction, especially natural interaction and person-dependent interaction. In surveillance and security, studying whispered speech can help with detecting potentially dangerous situations and gathering valuable information. Future emotion recognition systems based on whispered speech will adopt multi-modal information, including acoustic features, linguistic features, and context features.

    [1]Liang R,Xi J,Zhao L,et al.Experimental study and improvement of frequency lowering algorithm in Chinese digital hearing aids[J].Acta Physica Sinica,2012,61(13):134305-1- 134305-11.

[2]Hultsch H,Todt D,Ziiblke K.Einsatz und soziale interpretation geflüsterter signale,umwelt und verhalten[M].Bern,Switzerland:Huber Verlag,1992:391- 406.

    [3]Tartter V C,Braun D.Hearing smiles and frowns in normal and whisper registers[J].Journal of Acoustic Society of America,1994,96(4):2101- 2107.

    [4]Cirillo J,Todt D.Decoding whispered vocalizations:Relationships between social and emotional variables[C]//Proceedings of the 9th International Conference on Neural Information Processing.Singapore,2002:1559- 1563.

[5]Gong C,Zhao H,Tao Z,et al.Feature analysis on emotional Chinese whispered speech[C]//2010 International Conference on Information Networking and Automation.Kunming,China,2010:137- 141.

[6]Gong C,Zhao H,Wang Y,et al.Development of Chinese whispered database for speaker verification[C]//2009 Asia Pacific Conference on Postgraduate Research in Microelectronics & Electronics.Shanghai,China,2009:197- 200.

    [7]Gong C,Zhao H.Tone recognition of Chinese whispered speech[C]//2008 Pacific-AsiaWorkshop on Computational Intelligence and Industrial Application.Wuhan,China,2008:418- 422.

    [8]Tartter V C.Identifiability of vowels and speakers from whispered syllables[J].Perception and Psychophysics,1991,49(4):365- 372.

[9]Itoh T,Takeda K,Itakura F.Acoustic analysis and recognition of whispered speech[C]//Proceedings of IEEE International Conference on Acoustics,Speech and Signal Processing.Orlando,FL,USA,2002:389- 392.

    [10]Yang L,Li Y,Xu B.The establishment of a Chinese whisper database and perceptual experiment[J].Journal of Nanjing University:Natural Science,2005,41(3):311 -317.

[11]Huang C,Jin Y,Zhao L,et al.Speech emotion recognition based on decomposition of feature space and information fusion[J].Signal Processing,2010,26(6):835- 842.

    [12]Huang C,Jin Y,Zhao Y,et al.Recognition of practical emotion from elicited speech[C]//Proceedings of ICISE.Nanjing,China,2009:639- 642.

[13]Huang C,Jin Y,Zhao Y,et al.Speech emotion recognition based on re-composition of two-class classifiers[C]//Proceedings of ACII.Amsterdam,Netherlands,2009:1- 3.

    [14]Schwartz M F,Rine M F.Identification of speaker sex from isolated,whispered vowels[J].Journal of Acoustical Society of America,1968,44(6):1736- 1737.

    [15]Tartter V C.Identifiability of vowels and speakers from whispered syllables[J].Perception and Psychophysics,1991,49(4):365- 372.

[16]Higashikawa M,Minifie F D.Acoustical-perceptual correlates of "whisper pitch" in synthetically generated vowels[J].Journal of Speech,Language,and Hearing Research,1999,42(3):583- 591.

    [17]Morris R W.Enhancement and recognition of whispered speech[D].Atlanta,USA:School of Electrical and Computer Engineering,Georgia Institute of Technology,2002.

    [18]Gao M.Tones in whispered Chinese:articulatory and perceptual Cues[D].Victoria,Canada:Department of Linguistics,University of Victoria,2002.

[19]Huang C,Jin Y,Bao Y,et al.Whispered speech emotion recognition embedded with Markov networks and multi-scale decision fusion[J].Signal Processing,2013,29(1):98- 106.

    [20]Jin Y,Zhao Y,Huang C,et al.The design and establishment of a Chinese whispered speech emotion database[J].Technical Acoustics,2010,29(1):63- 68.

    [21]Zhao Y.Research on several key technologies in speech emotion recognition and feature analysis[D].Nanjing:School of Information Science and Engineering,Southeast University,2010.

[22]Nwe T L,Foo S W,De Silva L C.Speech emotion recognition using hidden Markov models[J].Speech Communication,2003,41(4):603- 623.

    [23]Huang C,Jin Y,Zhao Y,et al.Design and establishment of practical speech emotion database[J].Technical Acoustics,2010,29(4):396- 399.

[24]Schuller B,Arsic D,Wallhoff F,et al.Emotion recognition in the noise applying large acoustic feature sets[C]//The 3rd International Conference on Speech Prosody.Dresden,Germany,2006:276- 289.

[25]Tawari A,Trivedi M M.Speech emotion analysis in noisy real-world environment[C]//Proceedings of the 20th International Conference on Pattern Recognition.Washington DC,USA,2010:4605- 4608.

    [26]Johnstone T,van Reekum C M,Hird K,et al.Affective speech elicited w ith a computer game[J].Emotion,2005,5(4):513- 518.

    [27]Zou C,Huang C,Han D,et al.Detecting practical speech emotion in a cognitive task[C]//20th International Conference on Computer Communications and Networks.Hawaii,USA,2011:1- 5.

[28]Kockmann M,Burget L,Cernocky J H.Application of speaker- and language identification state-of-the-art techniques for emotion recognition[J].Speech Communication,2011,53(9/10):1172- 1185.

    [29]Lin Y,Wei G.Speech emotion recognition based on HMM and SVM[C]//Proceedings of 2005 International Conference on Machine Learning and Cybernetics.Bonn,Germany,2005:4898- 4901.

[30]Jin Y,Huang C,Zhao L.A semi-supervised learning algorithm based on modified self-training SVM[J].Journal of Computers,2011,6(7):1438- 1443.

    [31]Dellaert F,Polzin T,Waibel A.Recognizing emotion in speech[C]//The Fourth International Conference on Spoken Language.Pittsburgh,PA,USA,1996:1970- 1973.

    [32]Lee C,Mower E,Busso C,et al.Emotion recognition using a hierarchical binary decision tree approach[J].Speech Communication,2011,53(9/10):1162- 1171.

    [33]Nicholson J,Takahashi K,Nakatsu R.Emotion recognition in speech using neural networks[J].Neural Computing&Applications,2000,9(4):290- 296.

    [34]Yu H,Huang C,Zhang X,etal.Shuffled frog-leaping algorithm based neural network and its application in speech emotion recognition[J].Journal of Nanjing University of Science and Technology,2011,35(5):659- 663.

[35]Wang Z.Feature analysis and emotion recognition in emotional speech[D].Nanjing:School of Information Science and Engineering,Southeast University,2004.

    [36]Yu H,Huang C,Jin Y,et al.Speech emotion recognition based on modified shuffled frog leaping algorithm neural network[J].Signal Processing,2010,26(9):1294- 1299.


    CLC number: TP391.4

    10.3969/j.issn.1003-7985.2015.03.003

    Received 2014-12-21.

    Biography: Wu Chenjian (1983—), male, Ph.D., lecturer, cjwu@suda.edu.cn.

    Foundation item: The National Natural Science Foundation of China (No. 11401412).

    Citation: Wu Chenjian, Huang Chengwei, Chen Hong. Dimensional emotion recognition in whispered speech signal based on cognitive performance evaluation[J]. Journal of Southeast University (English Edition), 2015, 31(3): 311-319.

