
Emotion Analysis: Bimodal Fusion of Facial Expressions and EEG

Computers, Materials & Continua, 2021, Issue 8

Huiping Jiang, Rui Jiao, Demeng Wu and Wenbo Wu

1Brain Cognitive Computing Lab, School of Information Engineering, Minzu University of China, Beijing, 100081, China

2Case Western Reserve University, USA

Abstract: With the rapid development of deep learning and artificial intelligence, affective computing, as a branch field, has attracted increasing research attention. Human emotions are diverse and are directly reflected in both non-physiological indicators, such as facial expressions, and physiological indicators, such as electroencephalogram (EEG) signals. However, whether expression-based or EEG-based, these remain single modes of emotion recognition. Multi-mode fusion emotion recognition can improve accuracy by utilizing feature diversity and correlation. Therefore, three different models were established: the single-mode EEG-long short-term memory (LSTM) model, the Facial-LSTM model that uses facial expressions to crop the EEG data, and the multi-mode LSTM-convolutional neural network (CNN) model that combines expressions and EEG. Their average classification accuracies were 86.48%, 89.42%, and 93.13%, respectively. Compared with the EEG-LSTM model, the Facial-LSTM model improved by about 3%. This indicated that the expression mode helped eliminate EEG signals that contained few or no emotional features, enhancing emotion recognition accuracy. Compared with the Facial-LSTM model, the classification accuracy of the LSTM-CNN model improved by 3.7%, showing that the added facial expression features complemented the EEG features to a certain extent. Therefore, using various modal features for emotion recognition conforms to human emotional expression. Furthermore, it improves feature diversity to facilitate further emotion recognition research.

Keywords: Single-mode and multi-mode; expressions and EEG; deep learning; LSTM

    1 Introduction

Emotion can be described as a sudden response to external or internal events and occurs instinctively. Emotions have always played an essential role in human life, work, and decision-making. With the development of deep learning and artificial intelligence, the prospect of emotion recognition in the field of human-computer interaction is growing broader. Emotion recognition can be achieved using facial expressions, tone of voice, motion, and physiological signals [1,2].

Facial expressions are the most direct form of human emotional manifestation and are reflected in the mouth, cheeks, eyes, and other facial features. Therefore, most researchers use facial expressions as a starting point to analyze emotional changes [3]. Lu et al. [4] used principal component analysis (PCA) to reduce dimensionality with a support vector machine (SVM) as the classifier, producing a classification result of 78.37%. Qin et al. [5] proposed a method that combined the Gabor wavelet transform and CNN, resulting in a 96.81% accuracy on the CK+ data set. Rajan et al. [6] combined CNN with LSTM units for real-time facial expression recognition (FER), effectively utilizing temporal and spatial features.

Although facial expressions can directly reflect personal emotions and are readily obtainable, they are easy to conceal or fake and provide only a single kind of data. Therefore, facial expressions sometimes do not reliably reflect true emotions, a common defect of non-physiological signals. Consequently, researchers examine physiological signals instead. Neurophysiologists and psychologists have found that physiologically manifested EEG signals are closely related to most emotions [7]. Zhang et al. [8] combined wavelets and CNN to classify emotions, with the best effect reaching 88%. Zhang et al. [9] proposed a method using CNN for the emotion recognition of EEG signals, showing that CNN could autonomously extract features from signals. Alhagry et al. [10] used LSTM to learn and classify EEG signal features, obtaining 87.99% classification accuracy on the Database for Emotion Analysis using Physiological Signals (DEAP) data set.

Although the original EEG signals can provide useful emotional information, solely relying on them for emotion recognition is challenging due to their weak signal strength. Whether the facial expression modality or the EEG modality is used for recognition, the expression forms of these single-mode signals are relatively limited [11]. Expression and EEG signals have been extensively examined in non-physiological and physiological contexts, respectively, and can be effectively combined for multi-modal emotion recognition. This synergistic relationship allows the complementary information to improve the objectivity and accuracy of emotion recognition [12]. Shu et al. [13] proposed a fusion strategy based on a decision matrix in a multi-modality study to improve system accuracy. Huang [14] proposed two decision-level fusion methods for EEG and facial expression detection, with accuracy rates of 81.25% and 82.75%, respectively.

Combining facial expressions and EEG information for emotion recognition compensates for their shortcomings as single data sources [15,16]. This paper realizes emotion recognition via a modal fusion of facial expressions and EEG data. Since decision fusion does not make full use of the correlation between different modalities, the method used in this paper involves feature-level fusion. The work is as follows:

(a) This paper establishes a facial expression recognition system (Model-Facial) for multi-modal emotion recognition.

(b) Expression information is added to assist single-modal EEG emotion recognition. Compared with the original single-modal EEG emotion recognition results, the multi-modal method is found to be superior.

(c) This article proposes two different ways of combining facial expressions and EEG information for emotion analysis. The Facial-LSTM model crops the EEG data to the interval between the first and last facial-expression-change frames output by the Model-Facial recognition system. The LSTM-CNN model feeds the preprocessed facial expressions and EEG data into the LSTM for feature extraction; the output features are then fused and sent to the CNN for classification.

    2 Related Work

    2.1 LSTM

The recurrent neural network (RNN) is incapable of long-term memory due to vanishing or exploding gradients. Schmidhuber et al. [17] improved the traditional RNN and proposed the LSTM to solve this problem. The LSTM introduces an additional memory unit, C: a self-connecting unit that can store long-term signals and help the LSTM encode distant historical information. Fig. 1 shows the LSTM memory unit, where subscripts t and t−1 represent the current and previous moments.

Figure 1: An LSTM memory unit

The calculation process of the LSTM unit is as follows:

The forget gate: the previous state $h_{t-1}$ and the new input $x_t$ determine the portion of the information in $C$ that can be discarded. The forget gate $f_t$ acts on $C_{t-1}$ to remove part of its information. The $\sigma$ operator represents the sigmoid operation, outputting 0 to discard and 1 to save, which determines how $C_{t-1}$ is changed. The calculation formula is shown in Eq. (1):

$$f_t = \sigma\left(w_{fx}x_t + w_{fh}h_{t-1} + w_{fc}C_{t-1} + b_f\right) \tag{1}$$

Here, $w_{fx}$, $w_{fh}$, and $w_{fc}$ represent the weights the forget gate applies to the input, to the LSTM unit's output at the previous moment, and to the memory unit at the previous moment, respectively, while $b_f$ denotes the bias.

The input gate: the previous state $h_{t-1}$ and the new input $x_t$ determine the information to be saved in $C$. The calculation formula for the input gate $i_t$ is shown in Eq. (2):

$$i_t = \sigma\left(w_{xi}x_t + w_{ih}h_{t-1} + w_{ci}C_{t-1} + b_i\right) \tag{2}$$

Here, $i_t$ signifies the control parameter that scales the candidate value $\tilde{C}_t$ when new information is added, and it is used to update $C$. $w_{xi}$, $w_{ih}$, and $w_{ci}$ represent the weights the input gate applies to the input, to the LSTM unit's output at the previous moment, and to the memory unit at the previous moment, respectively, while $b_i$ denotes the bias.

Updating the memory unit: from the old state $C_{t-1}$, the newly generated candidate value $\tilde{C}_t$, and the two gates, the final state $C_t$ is generated, as shown in Eq. (3):

$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t \tag{3}$$

In the above equation, the status update of the memory unit depends on its own previous state $C_{t-1}$ and on the current candidate value $\tilde{C}_t$, adjusted by the forget and input gates, respectively.

The output gate: the new LSTM output is generated according to the updated $C_t$, as shown in Eq. (4):

$$o_t = \sigma\left(w_{xo}x_t + w_{ho}h_{t-1} + w_{co}C_t + b_o\right) \tag{4}$$

Here, $o_t$ represents the state value that controls the output of the memory unit. $w_{xo}$, $w_{ho}$, and $w_{co}$ represent the weights the output gate applies to the input, to the LSTM unit's output at the previous moment, and to the current memory unit, respectively, while $b_o$ denotes the bias. The new output $h_t$ is then computed as shown in Eq. (5):

$$h_t = o_t \odot \tanh\left(C_t\right) \tag{5}$$

By introducing this gating design, the LSTM effectively mitigates the RNN gradient vanishing problem, allowing the recurrent model to be applied to long-distance sequence information.
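To make the gate equations concrete, the following NumPy sketch implements a single forward step of the memory unit described by Eqs. (1)-(5). The gate weight names mirror those in the text; the candidate-value weights $w_{xc}$ and $w_{hc}$ are not named in the paper and are assumed here, and all dimensions are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following Eqs. (1)-(5); W and b hold per-gate parameters."""
    # Eq. (1): forget gate decides what to discard from the old cell state.
    f_t = sigmoid(W["fx"] @ x_t + W["fh"] @ h_prev + W["fc"] @ c_prev + b["f"])
    # Eq. (2): input gate decides how much new information to store.
    i_t = sigmoid(W["xi"] @ x_t + W["ih"] @ h_prev + W["ci"] @ c_prev + b["i"])
    # Candidate cell value built from the current input and previous output
    # (weights w_xc and w_hc are assumptions; not named in the text).
    c_tilde = np.tanh(W["xc"] @ x_t + W["hc"] @ h_prev + b["c"])
    # Eq. (3): blend the old state and the candidate under the two gates.
    c_t = f_t * c_prev + i_t * c_tilde
    # Eq. (4): output gate controls what part of the cell state is exposed.
    o_t = sigmoid(W["xo"] @ x_t + W["ho"] @ h_prev + W["co"] @ c_t + b["o"])
    # Eq. (5): new hidden state.
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

# Illustrative dimensions: 10 inputs per step, 32 hidden nodes (as in Sec. 3.3).
n_in, n_hid = 10, 32
rng = np.random.default_rng(0)
W = {k: rng.normal(scale=0.1,
                   size=(n_hid, n_in if k in ("fx", "xi", "xc", "xo") else n_hid))
     for k in ("fx", "fh", "fc", "xi", "ih", "ci", "xc", "hc", "xo", "ho", "co")}
b = {k: np.zeros(n_hid) for k in ("f", "i", "c", "o")}
h, c = lstm_cell_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, b)
```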

2.2 Facial Expression Recognition System (Model-Facial)

Emotional changes in the EEG signals do not occur continuously; therefore, there are periods when the EEG signals do not contain enough emotional information. Facial expression data is useful for locating emotional information, which motivated establishing a facial expression recognition system. The Model-Facial system is divided into two parts: the training module and the recognition module [18].

The training module process is shown in Fig. 2.

Figure 2: A flow chart of the Model-Facial training module

The recognition module process is shown in Fig. 3.

The videos collected during the experiment were segmented into pictures by frame. Here, 100 images of facial expressions depicting obvious calm, negative, and positive emotions were selected for each subject. The experiment relied on the facial keypoint detection model of the open-source Dlib library. The detection results involving 68 key facial points are shown in Fig. 4. Following image normalization and facial alignment, the horizontal and vertical coordinates of the 68 points were saved. A total of 136 values were entered into a text file, representing the extracted features [19].

Figure 3: A flow chart of the Model-Facial recognition module

Figure 4: An example of the 68 facial keypoint detection results
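A minimal sketch of this feature extraction step using the open-source Dlib library might look as follows. The predictor model filename is the one distributed by the Dlib project, and the exact normalization and alignment details are assumptions, since the paper does not list them.

```python
import dlib
import numpy as np

# Dlib's frontal face detector and its 68-point landmark predictor
# (the .dat model file is distributed separately by the Dlib project).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmark_features(image):
    """Return a 136-value feature vector (x, y of 68 keypoints), or None."""
    faces = detector(image, 1)          # upsample once to catch smaller faces
    if not faces:
        return None
    shape = predictor(image, faces[0])  # landmarks of the first detected face
    coords = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)],
                      dtype=np.float32)
    return coords.flatten()             # 68 x 2 = 136 values, as in the text

# Example usage on a frame exported from the experiment videos:
# features = extract_landmark_features(dlib.load_rgb_image("frame_0001.jpg"))
```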

SVM was used during the experiment to classify the extracted features. After model training, the expression videos of the subjects were read frame by frame, and each frame was classified. All the participants were fully aware of the purpose of the study, which was approved by the Local Ethics Committee (Minzu University of China, Beijing, ECMUC2019008CO).
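A hedged sketch of this training and per-frame classification step is shown below; scikit-learn is used for the SVM here, and the kernel choice, the scaling step, and the synthetic placeholder data are assumptions not specified in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: in the experiment these would be the 136-value landmark
# vectors from 100 selected images per subject, labeled calm/negative/positive.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 136))
y_train = rng.integers(0, 3, size=300)   # 0 = calm, 1 = negative, 2 = positive

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

# During recognition, each frame's feature vector is classified independently.
frame_features = rng.normal(size=136)
print(clf.predict(frame_features.reshape(1, -1))[0])
```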

    2.3 Multi-Modal Fusion

In the field of emotion recognition, emotions exhibit various modes. For example, gestures, expressions, words, and physiological signals (such as EEG and electrocardiogram) can all express the emotions of an individual. Although each of these modes can reflect feelings independently, humans generally express emotions through multiple modes simultaneously when interacting with others [20]. Multi-modality can provide more comprehensive and accurate information, enhancing the reliability and fault tolerance of the emotion recognition system. Unlike single-mode emotion recognition, the multi-modal form obtains each single-modal expression while better utilizing the correlation between the various modes to combine them. This is known as modal fusion.

Furthermore, multi-modal fusion methods can be divided into signal-level fusion, feature-level fusion, and decision-level fusion, according to how the different modal signals are processed.

As the name suggests, signal-level fusion directly combines and processes the originally collected signals and then performs feature extraction and recognition [21]. This fusion method retains the original signals, ensuring high accuracy. However, its anti-interference ability is low, as a substantial amount of data must be collected over a long time.

Feature-level fusion refers to the fusion and classification of features extracted from the single modes. This technique takes greater advantage of the correlation between features. However, feature-level fusion requires exceedingly high synchronization of the collected data.

Decision-level fusion refers to extracting features from the single modes and classifying them before the fusion judgment. The most significant advantage of the decision layer is that it simplifies merging the decisions acquired from each pattern. Flexibility is increased since each mode can learn its characteristics using the most appropriate classification model. However, decision-level fusion does not take advantage of the correlation between modal characteristics.

Given the differences in the modal characteristics of the expression and EEG signals, utilizing signal-level fusion is challenging [22]. Decision-level fusion does not consider the correlation between the two parts. Therefore, this paper selected feature-level fusion to realize the bimodal fusion of expression and EEG data, as shown in Fig. 5.

Figure 5: Feature-level fusion

    3 Experimental

    3.1 Stimulus Materials

The collection of the original data is mainly divided into two parts: emotion induction and data collection. Emotion induction refers to dynamically stimulating the subject to produce the target emotion. Data collection includes the acquisition of EEG signals and expressions.

Dynamic stimulation materials that combine visual and auditory characteristics have a better effect on emotion induction. Therefore, 12 videos with positive and negative tones were initially screened. Non-participants then completed questionnaires that screened the video material for the formal study. Pleasure and surprise were denoted as positive emotions according to the two-dimensional emotion model and the perspective of Paul Ekman et al., while several other emotions were designated as negative. Ultimately, six videos portraying high emotional arousal and induction intensity were identified among the 12: three positive and three negative.

The subjects selected for the experiment were all students aged between 18 and 25, right-handed, in good physical condition, fully rested, and free of brain diseases or mental disorders.

Before the formal experiment, the subjects were required to read the instructions on the display carefully, ensuring they fully understood the experimental process. During the experiment, the EEG data and the corresponding facial information of each subject were obtained and saved.

    3.2 Collection and Pretreatment

The facial expressions were primarily collected using a Logitech camera with a Carl Zeiss Tessar lens in conjunction with EV video recording. The resolution was 1920×1080, and the video acquisition frequency was 13 fps. The EEG acquisition and recording were performed using the NeuroScan system platform. E-Prime was employed to design and present the stimulus materials to the subjects, triggering emotional changes. During the experiment, a 64-electrode cap was used to collect the EEG information, which was amplified and recorded on a computer equipped with the Scan software.

The EEG signal is so weak that it is highly susceptible to the internal and external environment during measurement. This renders the collected signal unreliable due to disturbance by considerable electrical activity not originating from the brain, known as artifacts. Artifacts are commonly caused by electrooculograms, electrocardiograms, electromyograms, and electrode motion. Scan 4.5 was used to complete the EEG preprocessing, including removing bad blocks, ocular artifacts, and other artifacts, as well as digital filtering. Preprocessing primarily eliminated the noise components of the EEG signal, preparing it for feature analysis and the extraction of the emotional elements.
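The paper performs this preprocessing in the Scan 4.5 software; purely as an illustration of the digital filtering step, a zero-phase band-pass filter in SciPy could look like the sketch below. The 0.5-45 Hz pass band and the 1000 Hz sampling rate are assumptions, since the paper does not state them.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(signal, fs, low=0.5, high=45.0, order=4):
    """Zero-phase Butterworth band-pass filter, applied along the sample axis."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal, axis=-1)   # forward-backward: no phase shift

# eeg: (64 channels, n_samples) recording; values here are placeholders.
fs = 1000.0
eeg = np.random.default_rng(0).normal(size=(64, 10_000))
filtered = bandpass_eeg(eeg, fs)
```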

The original facial expression data was presented in the form of videos, which included the emotional changes detected in the subjects. Therefore, the first step in preprocessing the facial expression data was to cut each video into frames. The second step involved frame selection: the frames at the very beginning and end of each video contain no expression data, so these frames were excluded. The third step involved facial detection: regions other than human faces were removed to improve facial feature extraction and reduce noise.
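A hedged OpenCV sketch of these three steps is given below. The numbers of head and tail frames to drop and the Haar-cascade face detector are illustrative assumptions; the paper does not specify which detector or thresholds were used.

```python
import cv2

def video_to_face_frames(path, skip_head=30, skip_tail=30):
    """Cut a video into frames, drop expressionless head/tail frames,
    and keep only the detected face region of each remaining frame."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()

    faces_only = []
    for frame in frames[skip_head:len(frames) - skip_tail]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(boxes):
            x, y, w, h = boxes[0]          # keep only the face region
            faces_only.append(frame[y:y + h, x:x + w])
    return faces_only
```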

    3.3 Training

The emotion classification model based on LSTM consisted of four layers: the input layer, the LSTM layer, the fully connected layer, and the output layer. The LSTM layer extracted the relevant features, that is, the time-domain information, from the EEG input sequence. The fully connected layer integrated the outputs of the LSTM layer, obtaining the desired classification results.

While establishing the LSTM layers, it was necessary to select the appropriate number of layers and determine the number of hidden nodes in each. Generally, too many neurons cause overfitting during training, while too few may result in underfitting. This necessitated designing as few hidden-layer nodes and LSTM layers as possible under the premise of meeting the accuracy requirements. An experiment comparing single-layer and multi-layer LSTM structures revealed that the latter exhibited a higher classification effect. Finally, it was determined that the model consisted of four layers, each formed by 32 hidden nodes in series, as shown in Fig. 6.

The Adam algorithm was adopted for parameter optimization, with the learning rate of the model set to 0.005. The dropout method was used during neural network training to avoid overfitting, with the parameter value set to 0.5. Batch processing was used during training, with a batch size of 64 samples. Google's TensorFlow framework was employed to implement the network model.

Figure 6: LSTM structure

The specific parameter settings of the LSTM emotion classification model are shown in Tab. 1.

Table 1: Parameter settings of the LSTM model
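A minimal Keras sketch matching the stated configuration (four stacked LSTM layers of 32 hidden nodes, Adam with a 0.005 learning rate, dropout of 0.5, and a batch size of 64) might look as follows. The placement of the dropout layers and the binary positive/negative output are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Input: one EEG segment as a 64 x 10 matrix; per the text, rows are read
# as time steps and columns as the data read in one step.
model = models.Sequential([
    layers.Input(shape=(64, 10)),
    layers.LSTM(32, return_sequences=True),
    layers.Dropout(0.5),
    layers.LSTM(32, return_sequences=True),
    layers.Dropout(0.5),
    layers.LSTM(32, return_sequences=True),
    layers.Dropout(0.5),
    layers.LSTM(32),                        # fourth layer, 32 hidden nodes
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),  # assumed positive/negative output
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.005),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=64, ...)
```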

    3.3.1 EEG-LSTM

The EEG-LSTM model represents single-mode EEG emotion recognition. After preprocessing, the EEG signal is sent to the LSTM for classification.

The dataset in this section is the complete EEG data of eight subjects. After simple preprocessing, the EEG data of each subject were divided at intervals of 10 ms, so that each EEG segment collected by the 64-electrode cap had a matrix dimension of 64×10. According to the LSTM principle, the matrix columns represent the data read in one step, while the rows represent the time steps. This way of intercepting the EEG data produced a sufficiently large amount of information. The ratio of the train set to the test set was about 3:1. Tab. 2 shows the classification results for the eight subjects.
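The segmentation and the roughly 3:1 split can be sketched as follows; the 1000 Hz sampling rate implied by 10-sample segments over 10 ms intervals is an assumption, as is the random seed.

```python
import numpy as np

def segment_eeg(eeg, seg_len=10, train_ratio=0.75, seed=0):
    """Split a (64, n_samples) recording into 64 x seg_len segments, then
    randomly divide the segments into train and test sets (about 3:1)."""
    n_segments = eeg.shape[1] // seg_len
    segments = np.stack([eeg[:, i * seg_len:(i + 1) * seg_len]
                         for i in range(n_segments)])
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_segments)
    n_train = int(train_ratio * n_segments)
    return segments[idx[:n_train]], segments[idx[n_train:]]

eeg = np.random.default_rng(1).normal(size=(64, 5000))  # placeholder recording
train_segs, test_segs = segment_eeg(eeg)                # shapes (n, 64, 10)
```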

    3.3.2 Facial-LSTM

The Facial-LSTM model uses the Model-Facial expression recognition system to tailor the EEG data. The cropped EEG information is sent to the LSTM emotion classification model for emotion recognition. Since the expression of a subject does not remain static even in a calm state, the movement, relaxation, and twitching of the facial muscles in a natural state also cause expressional changes. Therefore, the first frame where the facial expression of the subject changed for more than five consecutive frames was set as the starting keyframe. Similarly, the last frame in which the facial expression of the subject changed for five successive frames or more was set as the ending keyframe.
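The keyframe rule can be expressed as a small sketch over the per-frame predictions from the Model-Facial recognition module; the label names are illustrative.

```python
def find_keyframes(labels, min_run=5, neutral="calm"):
    """Return (start, end) frame indices where the predicted expression
    differs from neutral for at least min_run consecutive frames."""
    start = end = None
    run = 0
    for i, label in enumerate(labels):
        run = run + 1 if label != neutral else 0
        if run >= min_run:
            if start is None:
                start = i - min_run + 1   # first frame of the first long run
            end = i                       # last frame of the latest long run
    return start, end

# Example: per-frame SVM predictions for one subject's video.
labels = (["calm"] * 8 + ["negative"] * 7 + ["calm"] * 4
          + ["negative"] * 6 + ["calm"] * 3)
print(find_keyframes(labels))   # -> (8, 24)
```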

Tab. 3 shows the first and last keyframes of specific subjects.

Table 2: Classification results of the EEG-LSTM model

Table 3: Keyframes of specific subjects

The EEG data of each subject were segmented at intervals of 10 ms to obtain matrices with a dimension of 64×10. After the EEG data were intercepted between the corresponding starting and ending times, 75% of the segments were randomly selected as the train set and 25% as the test set, so the data volume ratio of the train set to the test set was about 3:1. The final classification results are shown in Tab. 4.

    3.3.3 LSTM-CNN

Both the CNN and the LSTM are extensions of traditional neural networks with independent feature extraction and classification functions. The CNN aggregates local information in space, extracting hierarchical representations of the complete input. The LSTM, by contrast, operates on sequences in the time dimension and considers the previous input information, providing a prior-memory functionality. When the LSTM and CNN are connected, the feature fusion can consider the leads in both the spatial and temporal dimensions. Therefore, the LSTM-CNN represents the bimodal feature fusion model.

Table 4: Facial-LSTM model classification results

Figure 7: LSTM-CNN model

In the LSTM-CNN model, the LSTM produces an output representation as the signal passes through it. This representation contains both the original label information and the related context accumulated before the output. Subsequently, the CNN searches this feature-rich representation for local features to enhance the accuracy.

During the network structure design of the LSTM-CNN, the data input was considered the starting point. The expression and EEG features were extracted using the LSTM model. They were then concatenated into a feature vector in the output section and sent to the CNN model for classification, as shown in Fig. 7.
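A hedged Keras sketch of this structure is shown below: each modality passes through its own LSTM, the two 32-dimensional feature vectors are concatenated, and a small CNN classifies the fused vector. The expression sequence length, the Conv1D configuration, and the binary output are assumptions not detailed in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Two input streams: per-frame expression features (e.g., the 136 landmark
# values over an assumed 30-frame window) and 64 x 10 EEG segments.
expr_in = layers.Input(shape=(30, 136), name="expression")
eeg_in = layers.Input(shape=(64, 10), name="eeg")

# Each modality goes through its own LSTM to extract temporal features.
expr_feat = layers.LSTM(32)(expr_in)
eeg_feat = layers.LSTM(32)(eeg_in)

# Feature-level fusion: concatenate the two feature vectors...
fused = layers.Concatenate()([expr_feat, eeg_feat])

# ...then let a small CNN search the fused vector for local patterns.
x = layers.Reshape((64, 1))(fused)
x = layers.Conv1D(16, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling1D(2)(x)
x = layers.Flatten()(x)
out = layers.Dense(2, activation="softmax")(x)

model = models.Model([expr_in, eeg_in], out)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.005),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```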

This fusion method better utilized the temporal information of each mode while also obtaining the characteristic information of the spatial dimension. The classification accuracy of the final LSTM-CNN model was 93.13%.

    3.4 Results and Analysis

Tab. 5 compares the classification results of the EEG-LSTM, Facial-LSTM, and LSTM-CNN models:

Table 5: Comparison of the classification results

This comparison shows that the classification rates of the EEG-LSTM, Facial-LSTM, and LSTM-CNN models increase in that order. Therefore, it is feasible to use the expression modality to assist EEG signals in emotion recognition. The Facial-LSTM model intercepted the EEG signals via the keyframes of facial expression changes, achieving excellent classification. The LSTM-CNN model used the correlation between features for fusion, obtaining the best classification result of the three models.

    4 Discussion

This paper aimed to combine expression with EEG data to realize and improve the classification of emotion. Consequently, the Facial-LSTM and LSTM-CNN models were established.

The Facial-LSTM model cropped the EEG data to the interval between the first and last frames of facial expression change output by the Model-Facial expression recognition system. The LSTM-CNN model fed the preprocessed facial expressions and EEG data into the LSTM for feature extraction, after which the output features were fused and sent to the CNN for classification. The classification accuracy of the Facial-LSTM model was 89.42%, while that of the LSTM-CNN model reached 93.13%. The results indicated that the bimodal emotion recognition effect surpassed that of single-mode EEG recognition.

    5 Conclusion

In recent years, the requirements for human-computer interaction have been increasing. Therefore, accurate identification of human emotions via brain-computer interfaces is essential in providing a bridge for these exchanges [23].

Although current EEG research has become increasingly mature, the moment of emotion generation remains difficult to determine [24]. Expression is one of the modes that accurately represents emotion in daily life, being feature-rich and easy to obtain. Therefore, it is feasible to classify and identify emotions by combining expression and EEG data.

Moreover, there is a correlation between synchronous EEG and facial features, even though they belong to different modes. The research regarding bimodal emotion recognition based on EEG and expression indicates that an enhanced effect can be achieved if the feature correlation and the integration of EEG and facial features are better utilized. Therefore, multi-modal feature fusion requires further in-depth examination [25].

Acknowledgement: The authors thank all subjects who participated in this research, as well as FISTAR Technology Inc. for technical support.

Funding Statement: This work was supported by the National Natural Science Foundation of China (No. 61503423, H. P. Jiang). The URL is http://www.nsfc.gov.cn/.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
