
    Multi-head attention-based long short-term memory model for speech emotion recognition


Zhao Yan, Zhao Li, Lu Cheng, Li Sunan, Tang Chuangao, Lian Hailun

(1 School of Information Science and Engineering, Southeast University, Nanjing 210096, China) (2 School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China)

Abstract: To fully make use of information from different representation subspaces, a multi-head attention-based long short-term memory (LSTM) model is proposed in this study for speech emotion recognition (SER). The proposed model uses frame-level features and takes the temporal information of emotional speech as the input of the LSTM layer. Here, a multi-head time-dimension attention (MHTA) layer was employed to linearly project the output of the LSTM layer into different subspaces to obtain reduced-dimension context vectors. To provide relatively vital information from other dimensions, the output of MHTA, the output of feature-dimension attention, and the last time-step output of the LSTM were utilized to form multiple context vectors as the input of the fully connected layer. To improve the performance of the multiple vectors, feature-dimension attention was employed on the all-time output of the first LSTM layer. The proposed model was evaluated on the eNTERFACE and GEMEP corpora, respectively. The results indicate that the proposed model outperforms LSTM by 14.6% and 10.5% on eNTERFACE and GEMEP, respectively, proving its effectiveness in SER tasks.

Key words: speech emotion recognition; long short-term memory (LSTM); multi-head attention mechanism; frame-level features; self-attention

Speech emotion recognition (SER) plays a significant role in many real-life applications, such as human-machine interaction[1] and computer-aided human communication. However, it is challenging to make machines fully interpret the emotions embedded in speech signals owing to the subtlety and vagueness of spontaneous emotional expressions[2-3]. Despite the wide use of SER in applications, its performance remains far less competitive than that of human beings, and the recognition process still suffers from the local optima trap. Therefore, it is essential to further enhance the performance of automatic SER systems.

Deep learning networks, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have shown great efficiency in dealing with SER tasks, bringing a great improvement in recognition accuracy[4]. The attention mechanism is also utilized in neural networks to dynamically focus on certain parts of the input. Mirsamadi et al.[5] introduced a local attention mechanism to an RNN to focus on the emotionally salient regions of speech signals; statistical features were used in that study. Tarantino et al.[6] proposed a new windowing system with the self-attention mechanism to improve SER performance. These studies[5-6] follow the traditional method of using low-level descriptors as the input. Recently, spectrograms have gained considerable attention as the input feature. For instance, Li et al.[7] adopted the self-attention mechanism for the salient periods of the speech spectrogram. Although researchers have paid considerable attention to deep networks, the input features are mainly extracted from the time dimension.

Many studies focus on exploring feature vectors from multiple dimensions. Xie et al.[8] proposed a weighting algorithm based on time- and feature-dimension attention for the long short-term memory (LSTM) output, which could significantly improve SER performance. Li et al.[9] combined a dilated residual network and multi-head self-attention to model inner dependencies. In the above algorithms[8-9], the last time-step output of the models is used as the input of the next layer. These studies indicate that parallel multiple feature vectors help improve SER performance. Moreover, the attention mechanism has demonstrated great performance in SER tasks[5-9] and has been used in combination with deep neural networks.

In this research, an improved multi-head attention LSTM model is proposed to overcome the above-mentioned barriers and improve SER performance. Multi-head time-dimension attention (MHTA) has the ability to jointly attend to information from different representation subspaces at different positions[10]. The deep network outputs are processed in parallel through the attention function and then concatenated to form the final values. Compared with a single-head attention output, the concatenated values contain various context vectors, which are weighted by different attention functions. The transformer, based on the multi-head attention mechanism, was introduced into the pre-training model named bidirectional encoder representations from transformers (BERT)[11], which has become one of the most successful models for natural language processing. The success of BERT pre-training has led to multi-head attention being widely used in various speech fields, such as speech recognition. Lian et al.[12] employed the multi-head attention layer to predict the Mel spectrum in unsupervised pre-training. Tian et al.[13] introduced multi-head attention into the RNN-transducer structure and achieved excellent results. However, the above-mentioned models did not use the mechanism to mine the temporal relations from the LSTM output. Previous studies mainly focused on directly utilizing multi-head attention layers for pre-training and improving SER performance. From this point of view, the values contain more information about the salient speech regions. The all-time output and the last time-step output are utilized for the MHTA calculation. Moreover, SER is not decided only by the output of the MHTA but also by different representations from other aspects. Therefore, the output of feature-dimension attention and the last time-step output of LSTM are introduced into the final context vector. Information loss always exists during the backpropagation of a traditional deep learning network. The residual neural network[14-15] helps solve this problem by connecting the output of a previous layer directly with that of a subsequent layer. Inspired by this idea, feature-dimension attention over the all-time output of the first LSTM layer is employed to select useful information. Finally, the context vector is fed to the fully connected layer. Experiments conducted on the eNTERFACE and GEMEP corpora demonstrate the effective performance of the proposed model.

    1 Proposed Method

    1.1 Frame-level feature extraction

The openSMILE[16] features proposed by Schuller et al.[17] are the most widely used speech features for SER. In this research, to keep uniformity and coherence with the previous work[8], the same frame-level features are used.
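As a concrete illustration, the following minimal sketch extracts frame-level low-level descriptors with the openSMILE Python wrapper. The ComParE_2016 descriptor set is only a stand-in assumption here; the actual 93-dimensional frame-level configuration follows Ref.[8] and is not reproduced in this paper.

```python
# Hedged sketch: frame-level LLD extraction with the openSMILE Python wrapper.
# The paper reuses the 93-dimensional frame-level set from Ref.[8]; its exact
# configuration is not listed here, so ComParE_2016 LLDs serve only as a stand-in.
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,               # stand-in feature set
    feature_level=opensmile.FeatureLevel.LowLevelDescriptors,    # frame-level output
)

# Returns a pandas DataFrame with one row per analysis frame.
llds = smile.process_file("example_utterance.wav")   # hypothetical file name
print(llds.shape)                                     # (num_frames, num_lld_features)
```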

    1.2 MHTA mechanism

In this structure, an LSTM layer is used to process time-series samples with variable length. The LSTM network, proposed by Hochreiter and Schmidhuber[18], models time-series sequences and generates a high-level representation. It has the ability to extract features automatically and hierarchically. To maintain continuity and consistency with the previous work[8], which demonstrates the effectiveness of the attention gate, the double-layer LSTM with the attention gate was used in the structure. The LSTM's output can be described as a matrix composed of time steps and feature data. Therefore, neural components are required to learn the hidden information between the time steps and features of the output representations. In this paper, improved attention is introduced as the core mechanism for computing the time and feature relationships of the representations.

Vaswani et al.[10] introduced an attention function as mapping a query and a set of key-value pairs to an output. The output is computed as a weighted sum of the values, where the weight is calculated from the query with the corresponding key. Instead of applying a single attention function, Vaswani et al. found it useful to apply multi-head attention functions to the queries, keys, and values. The neural network utilizes the multi-head attention algorithm to consume hidden information from different subspaces, which can significantly improve performance compared with single-head attention.

$\text{Multi-head}(Q,K,V)=\mathrm{Concat}(h_1,h_2,\ldots,h_n)W^O$  (1)

$h_i=\mathrm{Attention}(QW_{i,Q},\,KW_{i,K},\,VW_{i,V}),\quad i\in[1,n]$  (2)

where $Q$, $K$, and $V$ are the query, key, and value vectors; $W_i$ is the parameter matrix for mapping into subspaces; $W^O$ is the parameter matrix that maps the concatenation back; $n$ is the number of attention heads.
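For reference, a minimal PyTorch sketch of Eqs.(1) and (2) is given below. The per-head projection matrices $W_{i,Q}$, $W_{i,K}$, and $W_{i,V}$ are packed into single linear layers, the scaled dot-product attention of Ref.[10] is used inside each head, and all hyperparameter names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    """Minimal sketch of Eqs.(1)-(2): per-head projections W_{i,Q}, W_{i,K},
    W_{i,V}, attention in each subspace, concatenation, and W^O."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads            # reduced dimension per head
        self.w_q = nn.Linear(d_model, d_model)      # packs all W_{i,Q}
        self.w_k = nn.Linear(d_model, d_model)      # packs all W_{i,K}
        self.w_v = nn.Linear(d_model, d_model)      # packs all W_{i,V}
        self.w_o = nn.Linear(d_model, d_model)      # W^O, maps back to d_model

    def forward(self, q, k, v):
        # q: [B, Tq, d_model]; k, v: [B, Tk, d_model]
        B, Tq, _ = q.shape
        Tk = k.shape[1]
        # Split into n_heads subspaces: [B, n_heads, T, d_head]
        qh = self.w_q(q).view(B, Tq, self.n_heads, self.d_head).transpose(1, 2)
        kh = self.w_k(k).view(B, Tk, self.n_heads, self.d_head).transpose(1, 2)
        vh = self.w_v(v).view(B, Tk, self.n_heads, self.d_head).transpose(1, 2)
        # Scaled dot-product attention inside each subspace (Ref.[10])
        scores = torch.matmul(qh, kh.transpose(-2, -1)) / self.d_head ** 0.5
        attn = F.softmax(scores, dim=-1)
        heads = torch.matmul(attn, vh)                     # h_i, [B, n_heads, Tq, d_head]
        concat = heads.transpose(1, 2).reshape(B, Tq, -1)  # Concat(h_1, ..., h_n)
        return self.w_o(concat)                            # Eq.(1)
```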

The frame-level features were selected for SER. Because the frame-level features contain the time and feature relationships of the LSTM output, attention functions applied to the time and feature dimensions help the model improve performance. A previous work[8] used the attention mechanism on the time and feature dimensions to achieve state-of-the-art performance for emotion recognition. The attention weighting for the time dimension is calculated as

$S_t=\mathrm{softmax}\left(O_m(O_aW_t)^{\mathrm T}\right)$  (3)

$O_t=S_tO_a$  (4)

where $O_m\in\mathbb{R}^{B\times1\times Z}$ is the last time-step output and $O_a\in\mathbb{R}^{B\times F\times Z}$ is the all-time output; $B$ is the batch size; $F$ is the number of time steps; $Z$ is the feature dimension; $S_t\in\mathbb{R}^{B\times1\times F}$ is the attention score of the time dimension; $O_t$ is the output of the time-dimension attention layer, which is subsequently fed into the fully connected layer.
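The following minimal PyTorch sketch illustrates Eqs.(3) and (4); the shape of $W_t$ (assumed here to be $Z\times Z$) is not stated explicitly in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeDimensionAttention(nn.Module):
    """Sketch of Eqs.(3)-(4): the last time-step output O_m queries the
    all-time output O_a to produce one attention weight per time step."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.w_t = nn.Linear(feat_dim, feat_dim, bias=False)  # W_t, assumed Z x Z

    def forward(self, o_a: torch.Tensor, o_m: torch.Tensor) -> torch.Tensor:
        # o_a: [B, F, Z] all-time LSTM output; o_m: [B, 1, Z] last time-step output
        s_t = F.softmax(torch.bmm(o_m, self.w_t(o_a).transpose(1, 2)), dim=-1)  # Eq.(3), [B, 1, F]
        o_t = torch.bmm(s_t, o_a)                                               # Eq.(4), [B, 1, Z]
        return o_t
```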

Theoretically, the multi-head attention algorithm, which projects the LSTM output into different subspaces to extract hidden information with different dimensions, could achieve good performance. Moreover, for each head, the decreased dimensions help keep the computational cost similar to that of single-head attention. The last time-step output of the LSTM accumulates the greatest amount of information because of the memory ability of the LSTM network. Using the all-time output and the last time-step output of the LSTM, the keys, values, and queries are computed as

$K_i=W_{i,K}O_a+b_{i,K}$  (5)

$V_i=W_{i,V}O_a+b_{i,V}$  (6)

$Q_i=W_{i,Q}O_m+b_{i,Q}$  (7)

Next, the calculated keys, values, and queries are utilized to compute the corresponding attention scores and attention outputs. The calculations are as follows:

$s_i=\mathrm{softmax}\left(\frac{Q_iK_i^{\mathrm T}}{\sqrt{d_k}}\right)$  (8)

$o_{\mathrm{mt}i}=s_iV_i$  (9)

$O_{\mathrm{mt}}=\mathrm{Concat}(o_{\mathrm{mt}1},o_{\mathrm{mt}2},\ldots,o_{\mathrm{mt}n})$  (10)

where $s_i$ is the multi-head attention score on the time dimension, $d_k$ is the dimension of each subspace, and $o_{\mathrm{mt}i}$ is the attention output for each subspace; $O_{\mathrm{mt}}$ is the final output of the MHTA layer. After the outputs from all the subspaces are obtained, they are concatenated and fed into the fully connected layer.
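Below is a minimal PyTorch sketch of the MHTA computation in Eqs.(5) through (10); the scaled dot-product score inside each head follows Ref.[10], and packing all heads into single linear layers is an implementation convenience rather than the paper's exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MHTA(nn.Module):
    """Sketch of Eqs.(5)-(10): keys/values come from the all-time LSTM output O_a,
    queries from the last time-step output O_m; each head works in a reduced
    subspace and the per-head outputs o_mt_i are concatenated into O_mt."""

    def __init__(self, feat_dim: int, n_heads: int):
        super().__init__()
        assert feat_dim % n_heads == 0
        self.n_heads = n_heads
        self.d_head = feat_dim // n_heads
        self.w_k = nn.Linear(feat_dim, feat_dim)   # packs W_{i,K}, b_{i,K} for all heads
        self.w_v = nn.Linear(feat_dim, feat_dim)   # packs W_{i,V}, b_{i,V}
        self.w_q = nn.Linear(feat_dim, feat_dim)   # packs W_{i,Q}, b_{i,Q}

    def forward(self, o_a: torch.Tensor, o_m: torch.Tensor) -> torch.Tensor:
        # o_a: [B, F, Z] all-time output; o_m: [B, 1, Z] last time-step output
        B, F_steps, _ = o_a.shape
        k = self.w_k(o_a).view(B, F_steps, self.n_heads, self.d_head).transpose(1, 2)  # Eq.(5)
        v = self.w_v(o_a).view(B, F_steps, self.n_heads, self.d_head).transpose(1, 2)  # Eq.(6)
        q = self.w_q(o_m).view(B, 1, self.n_heads, self.d_head).transpose(1, 2)        # Eq.(7)
        # Eq.(8): per-head score over the time dimension; the sqrt(d_head) scaling
        # follows Ref.[10].
        s = F.softmax(torch.matmul(q, k.transpose(-2, -1)) / self.d_head ** 0.5, dim=-1)
        o_mt_i = torch.matmul(s, v)                          # Eq.(9), [B, n_heads, 1, d_head]
        o_mt = o_mt_i.transpose(1, 2).reshape(B, 1, -1)      # Eq.(10), Concat over heads
        return o_mt
```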

    1.3 Multiple-context-vector generation

For SER, different features exhibit different influences. To characterize these differences, the feature-dimension attention mechanism is applied in this work. The feature-dimension attention used in the model helps relieve the overfitting problem caused by the time-dimension multi-head attention algorithm. The feature weighting is calculated as follows:

$s_f=\mathrm{softmax}\left(\tanh(O_mw_f)u_f\right)$  (11)

$O_f=\sum s_fO_a$  (12)

where $w_f$ and $u_f$ are trainable parameters. The feature-dimension attention score $s_f$, which differs across features, indicates the effect of the different features. Next, the summation over the time frames is calculated. The output $O_f$ represents the statistical value of the time-dimension features.
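A minimal PyTorch sketch of Eqs.(11) and (12) follows. The shapes of $w_f$ and $u_f$ are not specified in the text, so square matrices over the feature dimension are assumed, and the summation over time frames keeps a singleton time axis.

```python
import torch
import torch.nn as nn

class FeatureDimensionAttention(nn.Module):
    """Sketch of Eqs.(11)-(12): trainable w_f and u_f score the feature
    dimension, and the weighted all-time output is summed over time frames.
    The shapes of w_f and u_f are assumptions (square over the feature dim)."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.w_f = nn.Parameter(torch.randn(feat_dim, feat_dim) * 0.01)
        self.u_f = nn.Parameter(torch.randn(feat_dim, feat_dim) * 0.01)

    def forward(self, o_a: torch.Tensor, o_m: torch.Tensor) -> torch.Tensor:
        # o_a: [B, F, Z] all-time output; o_m: [B, 1, Z] last time-step output
        s_f = torch.softmax(torch.tanh(o_m @ self.w_f) @ self.u_f, dim=-1)  # Eq.(11), [B, 1, Z]
        o_f = (s_f * o_a).sum(dim=1, keepdim=True)                          # Eq.(12), sum over time
        return o_f
```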

Finally, the last time-step output ($O_{\mathrm{ls}}$) is chosen, as it accumulates the greatest amount of information, to form part of the final output. The final output consists of three different parallel characterizations. After the context vectors are calculated, they are put through the unsqueeze function.

$O_{\mathrm{tfl}}=\mathrm{Concat}(O_{\mathrm{mt}},O_f,O_{\mathrm{ls}})$  (13)

$O_{\mathrm{ap}}=\mathrm{AveragePooling}(O_{\mathrm{tfl}})$  (14)

However, because a double-layer LSTM structure is used in this study, the LSTM layer may discard vital information during the process. Therefore, the all-time output of the first LSTM layer ($O_{\mathrm{af}}$) is taken into consideration. The modified output calculation is

$O_{\mathrm{mo}}=\mathrm{Concat}(O_{\mathrm{mt}},O_f,O_{\mathrm{ls}},O_{\mathrm{af}})$  (15)

Another problem arises because $O_{\mathrm{ls}}$ and $O_{\mathrm{af}}$ could contain the same information, which leads to information redundancy. This situation is undesirable because it may negatively affect the effectiveness and performance of the model. To avoid it, the feature-dimension attention mechanism is applied to the first LSTM layer's all-time output to screen for useful information.

$s_a=\mathrm{softmax}\left(\tanh(O_{\mathrm{af}}w_f)u_f\right)$  (16)

$O_{\mathrm{al}}=\sum s_aO_{\mathrm{af}}$  (17)

    Finally, the multiple context vectors are calculated and used as the input of the fully connected layer.

$O_c=\mathrm{Concat}(O_{\mathrm{mt}},O_f,O_{\mathrm{ls}},O_{\mathrm{al}})$  (18)

The new context vector ($O_c$) not only provides inherent information from the time and feature dimensions but also utilizes the last time-step output as auxiliary information for SER. It can strengthen the key information and ignore irrelevant information to generate a highly effective feature representation. Fig.1 shows the proposed multi-head attention-based LSTM structure.
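As an illustration of how the multiple context vectors of Eq.(18) feed the fully connected layer, a minimal sketch is given below; the per-vector dimensions and the classifier head are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Hedged sketch of Eq.(18): the MHTA output, the feature-dimension attention
# output, the last time-step output, and the attended first-layer output are
# concatenated into the final context vector O_c and fed to the classifier.
# All dimensions below are illustrative assumptions.

def build_context_vector(o_mt, o_f, o_ls, o_al):
    # Inputs are assumed to have shape [B, 1, D_k]; concatenate along the
    # feature axis, then squeeze away the singleton time axis.
    o_c = torch.cat([o_mt, o_f, o_ls, o_al], dim=-1)   # Eq.(18)
    return o_c.squeeze(1)                              # [B, D_mt + D_f + D_ls + D_al]

num_classes = 6                                        # e.g., eNTERFACE categories
context_dim = 256 + 256 + 256 + 512                    # assumed per-vector sizes
fc = nn.Linear(context_dim, num_classes)               # fully connected layer

o_mt = torch.randn(8, 1, 256)
o_f = torch.randn(8, 1, 256)
o_ls = torch.randn(8, 1, 256)
o_al = torch.randn(8, 1, 512)
logits = fc(build_context_vector(o_mt, o_f, o_ls, o_al))   # [8, num_classes]
```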

    Fig.1 Proposed improved multi-head attention structure

    2 Experiment and Analysis

    2.1 Database

The proposed model is evaluated on the eNTERFACE[19] and GEMEP[20] corpora. The eNTERFACE dataset contains 42 subjects (34 male and 8 female). The audio sample rate is 48 kHz, in an uncompressed stereo 16-bit format, with an average duration of 3.5 s. In this research, 1 260 valid speech samples are used for the evaluation, of which 260 are used as the test set.

GEMEP is a French-language corpus with 18 speech emotion categories, comprising 1 260 utterance samples. Twelve labeled classes are selected: relief, amusement, despair, pleasure, anger, panic, interest, joy, irritation, pride, anxiety, and sadness. Therefore, 1 080 samples from the chosen categories are used, of which 200 samples are randomly selected as the test set.

    2.2 Experimental setup

In this section, the proposed model is compared with several baselines: 1) LSTM; 2) LSTM with time-dimension attention; 3) LSTM with MHTA. For experiments performed on the same database, the parameters of the LSTM layer are kept the same.

The input dimension is [128, t, 93], where 128 is the batch size, t is the frame number, and 93 is the number of extracted features. The output dimension is [128, c], where c represents the number of emotion categories in the database. For the double-layer LSTM, the first layer has 512 hidden units, while the second layer has 256 hidden units. To ensure the dependability and reliability of the experiments, all other parameters are kept the same.
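The stated dimensions can be summarized in a minimal PyTorch sketch of the plain double-layer LSTM baseline; the attention gate of Ref.[8] and the attention and context-vector branches are omitted here.

```python
import torch
import torch.nn as nn

class DoubleLayerLSTM(nn.Module):
    """Plain double-layer LSTM baseline with the stated sizes: 93-dimensional
    frame-level inputs, 512 then 256 hidden units, and c output categories."""

    def __init__(self, num_classes: int, input_dim: int = 93):
        super().__init__()
        self.lstm1 = nn.LSTM(input_dim, 512, batch_first=True)
        self.lstm2 = nn.LSTM(512, 256, batch_first=True)
        self.fc = nn.Linear(256, num_classes)

    def forward(self, x):
        # x: [128, t, 93] -> logits: [128, c]
        o1, _ = self.lstm1(x)         # all-time output of the first layer (O_af)
        o2, _ = self.lstm2(o1)        # all-time output of the second layer (O_a)
        return self.fc(o2[:, -1, :])  # last time-step output only, as a baseline

x = torch.randn(128, 300, 93)         # batch of 128, t = 300 frames (example)
print(DoubleLayerLSTM(num_classes=6)(x).shape)   # torch.Size([128, 6])
```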

As SER is a classification task, the unweighted average recall (UAR) is used as the evaluation metric.
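UAR is the unweighted mean of the per-class recalls, so every emotion category contributes equally regardless of its number of test samples. It can be computed, for example, with scikit-learn's macro-averaged recall:

```python
import numpy as np
from sklearn.metrics import recall_score

# Illustrative labels for three classes with unbalanced sample counts.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0, 2])

# Macro-averaged recall = UAR: mean of per-class recalls.
uar = recall_score(y_true, y_pred, average="macro")
print(f"UAR = {uar:.3f}")   # (0.5 + 1.0 + 0.75) / 3 = 0.750
```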

    2.3 Results and discussion

Experiments are conducted to verify the effectiveness of the proposed multi-head attention mechanism. Tab.1 presents the results of the LSTM model and the time-dimension attention-based LSTM models. Compared with the LSTM model, the time-dimension attention LSTM obtains a recognition accuracy of 83.8%, an 8.0% improvement on the eNTERFACE corpus. For the GEMEP corpus, the LSTM with the attention mechanism also obtains an increase of 4.5%. Tab.2 exhibits the results of the models with three context vectors. The results show that the LT8 model outperforms the other models, with a tendency of first increasing and then decreasing, which indicates that the model reaches its boundary at eight heads.

    Tab.1 Results of the time-dimension attention LSTM models %

    Tab.2 Results of the models with three context vectors %

Furthermore, the recognition accuracy increases with the number of time-dimension attention heads (up to eight) and then decreases. The reason is that projecting into subspaces has its boundary; results cannot always be improved by simply increasing the number of heads. This tendency indicates that multi-head attention is effective for the eNTERFACE corpus. The accuracy of the proposed model is better than that of LSTM, and the curves increase together with the head number. When the head number is equal to eight, the proposed model achieves the best recognition accuracies of 89.6% and 58.0% on the eNTERFACE and GEMEP corpora, respectively.

Multitask learning[7] has proven its effectiveness for speech recognition. Analogously to multitask learning, multiple context vectors are used to determine the SER results. Although time-dimension attention complicates the model, the classification depends not only on the context vector of the multi-head attention layer but also on the other context vectors. Finally, the output of the feature-dimension attention layer and the last time-step output are combined with the context vector of MHTA to form the final context vector.

In this paper, multiple context vectors are proposed to analyze speech emotions. To evaluate the effectiveness of the proposed method, several experiments are conducted. As several studies have proven the effectiveness of the time-feature attention mechanism[8] and the skip-connection structure[21] for SER tasks, the performance of the proposed model is compared with that of the attention LSTM network. As the results of the multi-head LSTM models indicate that the best performance is achieved when the head number is equal to eight, this setting is applied to the attention-based models with different context vectors. Model performance comparisons against other techniques are presented in Tab.3. The experimental results indicate that the proposed model outperforms LSTM by 14.6% and 10.5% on eNTERFACE and GEMEP, respectively. The UARs decrease on the eNTERFACE and GEMEP databases when the all-time output of the first LSTM layer is employed as an additional context vector for the fully connected layer. This may be because the all-time output of the first LSTM layer provides too much redundant information for the model, making the input of the fully connected layer less effective. To solve this problem, the feature-dimension attention mechanism is employed on the all-time output of the first LSTM layer to select useful inherent information.

    Tab.3 Results of the LSTM models with various context vectors %

The proposed model is compared with other methods. The local attention mechanism[5] is re-implemented for SER. As CNN networks have been widely used for SER tasks, CNN networks[7, 22] are re-implemented as comparisons; for the CNN-based experiments, the audio samples are converted into spectrograms, which are used as the input features for the networks. Tab.4 shows the comparison of the experimental results in the literature and of the proposed model. The model parameters are shown in Tab.5. With a slight increase in the model parameters, the proposed model shows much better performance than the traditional LSTM model. Figs.2 and 3 present the confusion matrices of the proposed model (LC8).

    Tab.4 Comparison of the experiment results on the literature and the proposed model %

    Tab.5 Comparison of the model parameters in the traditional models and proposed models

Fig.2 Confusion matrix of the proposed model (LC8) on the eNTERFACE corpus/%

                Anger   Disgust   Fear     Happy    Sad      Surprise
    Anger       92.68    0.00      4.88     0.00     0.00     2.44
    Disgust      2.56   89.74      5.13     0.00     0.00     2.56
    Fear         0.00    6.98     90.70     0.00     2.33     0.00
    Happy        0.00    0.00      0.00   100.00     0.00     0.00
    Sad          2.22    0.00      0.00     0.00    97.78     0.00
    Surprise     5.26    7.02      7.02     3.51     0.00    77.19

Fig.3 Confusion matrix of the proposed model (LC8) on the GEMEP corpus/% (categories: amusement, anxiety, irritation, despair, joy, panic, anger, interest, pleasure, pride, relief, and sadness)

Based on the results, the LSTM models show better performance than the CNN networks on the GEMEP and eNTERFACE corpora. The UARs of the CNN models[7, 22] are at least 10% lower than those of the LSTM models. Among all the models, the best UARs (90.4% and 59.0%) are achieved by the proposed model. Therefore, combining the multi-head attention mechanism with multiple context vectors results in improvement, providing an effective method for SER tasks.

    3 Conclusions

1) In this research, an MHTA weighting method is proposed to distinguish the salient regions of emotional speech samples.

2) To form parts of the input of the fully connected layer, the output of the feature-dimension attention layer and the last time-step output are utilized. Moreover, feature-dimension attention is employed on the all-time output of the first LSTM layer to screen information for the fully connected layer input.

3) Evaluations are performed on the eNTERFACE and GEMEP corpora. The proposed model achieves the best performance for SER compared with the other models. The results demonstrate the effectiveness of the proposed attention-based LSTM model.
