
    Sentiment classification model for bullet screen based on self-attention mechanism

    2021-12-21 13:34:38

    ZHAO Shuxu, LIU Lijiao, MA Qinjing

    (1. School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China; 2. School of Information Engineering, Gansu Forestry Polytechnic, Tianshui 741020, China)

    Abstract: With the development of the short video industry, videos and bullet screens have become important channels for spreading public opinion. Public attitudes can be obtained in a timely manner through sentiment analysis of bullet screens, which also eases the management of online public opinion. A convolutional neural network model based on multi-head attention is proposed to address the difficulty of effectively modeling relations among words and identifying key words in sentiment classification tasks where texts are short and complete contextual information is lacking. Firstly, word positions are encoded so that the model can use the order information of input sequences. Secondly, a multi-head attention mechanism is used to obtain semantic representations in different subspaces, effectively capture internal relevance, strengthen dependency relationships among words, and highlight the emotional weights of key sentiment words. Then a dilated convolution is used to enlarge the receptive field and extract more features. On this basis, the multi-head attention mechanism is combined with a convolutional neural network to model and analyze seven emotional categories of bullet screens. Experiments from the perspectives of both model and dataset validate the effectiveness of the approach. Finally, the emotions of bullet screens are visualized to provide data support for hot event control and related fields.

    Key words: bullet screen; text sentiment classification; self-attention mechanism; visual analysis; hot event control

    0 Introduction

    With the advancement of online video technologies and the rapid popularization of the Internet, the online video industry has developed rapidly. The commenting behavior of film and television audiences often influences the development of communication trends and public opinion. In real life, hot events such as “Tianjin Uncle Touching Porcelain” and “Chongqing Bus Falling into Water” occur from time to time, and live footage may be posted to BILIBILI and other platforms by onlookers or passersby, leading audiences to comment on the events themselves, as well as the people and things involved in them, in the form of bullet screens. These comments reflect audiences' attention to and attitudes toward those events. Through sentiment analysis of bullet screens, relevant management departments can track audiences' attitudes and trends of emotional change, formulate corresponding measures in a timely manner to control the fermentation of such events, and thereby avoid continuous deterioration. Unlike bullet screens on film and television works, a simple positive/neutral/negative classification cannot accurately describe how bullet screen senders perceive hot events: coarse labels such as “happy” and “hate” cannot distinguish onlookers enjoying the spectacle, anti-social tendencies, or other attitudes that seriously affect normal public opinion. Therefore, fine-grained emotional classification of hot-event video bullet screens is necessary.

    On this basis, text mining technology is used to model and analyze the data, which can provide reliable data supports for fields of event managements and controls. Based on it, a sentiment classification model multi-head attention convolutional neural network (MH-ACNN) is proposed in this paper for bullet screens of hot events. In this model, position encoding information is added to network input layer to enhance positional relationships among words, and a self-attention mechanism is used to calculate positional relationships among words, which solves the problem that existing sentiment analysis models cannot capture associations among short text words in video barrages. And the multi-head attention mechanism is used to capture feature information of different subspaces and obtain deep semantic representations of barrage sentences, which provides a better input representation for emotion classifications.

    Video bullet screens are dense, fast, real-time comments that appear on video interfaces in the form of texts and other symbols, which differs from traditional comments in the message areas of movie sites[1]. As a key technology of text data mining, natural language processing (NLP) has been widely used in sentiment analysis of commodity reviews, microblog short texts and other domains[2]. Compared with microblog and e-commerce comments, video bullet screens are temporal and concentrated. Meanwhile, as a typical kind of short text, how to analyze the emotional information contained in video bullet screens has become a hot topic in both text sentiment analysis and video research.

    The most important steps of traditional sentiment analysis methods based on machine learning are feature selection and model training. Feature selection methods[3] mainly include term frequency-inverse document frequency (TF-IDF), chi-square test (CHI), information gain (IG) and mutual information (MI): words are sorted by these calculated values and filtered against a threshold. In practical applications, for short texts such as video bullet screens that lack grammatical structure, features extracted from single words are too sparse, and network phrases like “666” and “not bad” will be deleted as noise during data cleaning. Therefore, it is urgent to establish a corpus for video bullet screens and find a suitable text representation method for them. Within the framework of machine learning, deep learning methods do not require manually extracted features, and texts can be represented by vectors trained through a deep learning model[4]. Deep learning therefore provides theoretical and methodological support for sentiment analysis of video bullet screen texts.

    Kim et al.[5] used vectors trained by deep learning models to represent texts and fed them to a CNN[6], with its local perception and parameter sharing, and to a long short-term memory (LSTM) network[7], with its temporal modeling of strong front-and-back correlations, so as to distinguish the emotional categories of movie reviews. Kalchbrenner et al.[8] proposed a dynamic pooling strategy to model sentence semantics, addressing the problem that CNNs cannot capture associations among long-distance words in sentences. With limited contextual information, Santos[9] used two convolutional layers of a deep CNN to learn information from words to sentences and construct semantic representations.

    As a new type of comment, bullet screens have become an object of research on text sentiment analysis. Based on bullet-screen data of BILIBILI's films, Zheng et al.[10] analyzed sentence-level emotions through an emotion dictionary and visualized the experimental results to obtain distribution curves of bullet-screen emotions. Deng Yang et al.[11] constructed a barrage word classification algorithm based on latent Dirichlet allocation as a recommendation basis for video clips; however, its training dataset was traditional normative text that did not consider the characteristics of non-standard short texts. Hong Qing et al.[12] conducted an emotional classification of bullet screens by improving the k-means algorithm on a user basis, and used the classification results to analyze emotional differences among audiences of specific films. Zhuang Xuqiang et al.[13] used an attention mechanism to effectively identify emotional keywords in video barrages, and combined an LSTM model with the emotional dependence between preceding and following barrage comments to extract “highlighted” video clips. Wang Xiaoyan[14] classified the feelings of bullet screens through a deep-learning-based text sentiment analysis method, and marked the clustering results of video keyframes with emotions, so that the emotional information of those keyframes could be visually displayed and users could decide whether to watch the segments according to others' comments. However, current research on video barrages does not consider their incomplete grammatical structure, and existing studies only cover barrages of film and television works. For this reason, an emotional classification model of bullet screens oriented to hot events is proposed in this paper, so as to support decisions in event management and control.

    1 MH-ACNN model

    Aimed at the needs of fine-grained emotion classification in hot events and other fields, the MH-ACNN model is constructed. In this model, a self-attention mechanism models relationships among words, and a multi-head attention mechanism extends it to extract emotional expressions at different positions in bullet screens. In addition to word vector embedding, position and emotion-symbol embeddings are added to the input, so that the model can make full use of the input information in emotion modeling and analysis.

    1.1 Position encoding

    In a self-attention mechanism, sequence information cannot be captured because there is no iterative operation like that of a recurrent neural network[15]. Therefore, position information of each word must be provided in order to recognize order relationships in a language. In this paper, the positional embedding method[16] is used to label word position information, in which the position embedding dimension is [lmax, dmodel], where lmax represents the maximum length of the text and dmodel is the word vector dimension. Specifically, a linear transformation of sin and cos functions is used to provide position information.

    P(pos,2i)=sin(pos/10 000^(2i/dmodel)),

    (1)

    P(pos,2i+1)=cos(pos/10 000^(2i/dmodel)),

    (2)

    where pos refers to the position of a word in a sentence, with value range [0, lmax); i is the word vector dimension index, with value range [0, dmodel). Eqs.(1) and (2) correspond to even- and odd-numbered word vector dimensions, such as the pairs (0,1) and (2,3), which are processed with the sin and cos functions respectively, resulting in different periodic changes. The period of the position embedding function varies from 2π to 10 000·2π, and each position obtains a distinct combination of sin and cos values across the word vector dimensions, generating a unique positional signature from which the model can learn the positional dependencies and temporal characteristics of natural language.
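    The encoding in Eqs.(1)-(2) can be sketched in NumPy as follows. This is a minimal illustration assuming an even dmodel; the function and variable names are ours, not the paper's:

```python
import numpy as np

def position_encoding(l_max, d_model):
    """Sinusoidal position encoding per Eqs.(1)-(2):
    even dimensions use sin, odd dimensions use cos."""
    P = np.zeros((l_max, d_model))
    pos = np.arange(l_max)[:, None]            # word positions in [0, l_max)
    i = np.arange(0, d_model, 2)[None, :]      # even dimension indices 2i
    angle = pos / np.power(10000.0, i / d_model)
    P[:, 0::2] = np.sin(angle)                 # Eq.(1)
    P[:, 1::2] = np.cos(angle)                 # Eq.(2)
    return P
```

    Each row of the returned matrix is the position vector added to the word embedding at that position.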

    1.2 Self-attention mechanism

    The self-attention mechanism[17] is an encoding scheme for learning text representations proposed by the Google machine translation team in 2017. To learn multiple meaning expressions, three weight matrices, namely WQ, WK and WV, are applied to the input vector X through linear mappings. Specifically, as shown in Fig.1, where Lx is the input length, three linearly mapped matrices, namely Q, K and V, are obtained, whose mathematical expressions are given in Eqs.(3)-(5).

    Fig.1 Linear mapping of input vector

    Q=Linear(Xembedding)=XembeddingWQ,

    (3)

    K=Linear(Xembedding)=XembeddingWK,

    (4)

    V=Linear(Xembedding)=XembeddingWV.

    (5)

    Self-attention can capture syntactic or semantic features between words in the same sentence and calculate the correlation between any two words, shortening the distance between dependent features[16] and making long-distance interdependent features within a sentence easier to capture. The calculation process is divided into three steps, as shown in Fig.2.

    Fig.2 Calculation process of self-attention

    In Fig.2, Ki, Q and Vi represent the keys, the query and the values, respectively; f(·,·) represents the similarity function; si represents similarity; * means multiplication; ai represents the weight coefficient corresponding to each value, and A represents the attention value matrix of the input sentence.

    In the first phase, the weight coefficient of the value corresponding to each K is obtained by calculating the similarity between Q and each K. Commonly used similarity measurement functions are dot product (Eq.(6)), concat (Eq.(7)) and perceptron (Eq.(8)):

    f(Q,Ki)=QTKi,

    (6)

    f(Q,Ki)=concat(Wα[Q,Ki]),

    (7)

    f(Q,Ki)=Vαtanh(WαQ+UαKi).

    (8)

    In the second phase, in order to prevent the result from being too large, scaling is performed, and softmax function is used to normalize the weights. The specific calculation is

    ai=softmax(si/√dk).

    (9)

    In the third phase, the weights ai and the values V corresponding to each K are weighted and summed to obtain the final attention expression

    A=∑ai*Vi.

    (10)
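    The three phases of Fig.2 can be sketched directly in NumPy. This is an illustrative implementation using the dot-product similarity of Eq.(6) with the scaling of Eq.(9); names are ours:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Phase 1: dot-product similarity (Eq.(6));
    phase 2: scale by sqrt(d_k) and normalize with softmax (Eq.(9));
    phase 3: weighted sum of the values (Eq.(10))."""
    d_k = K.shape[-1]
    s = Q @ K.T / np.sqrt(d_k)   # similarities, scaled to moderate magnitude
    a = softmax(s, axis=-1)      # weight coefficients a_i, rows sum to 1
    return a @ V                 # attention value matrix A
```

    Because each output row is a convex combination of the rows of V, its entries always stay within the range of V's entries.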

    1.3 MH-ACNN network framework

    Barrage texts are highly colloquial and contain almost no complete contextual information. Compared with the LSTM model widely used in chapter-level sentiment analysis, the CNN model is preferred for sentiment analysis of video barrage texts. In this paper, based on a CNN combined with a multi-head attention mechanism, the MH-ACNN model is constructed to solve the emotional classification of video barrage texts. In order to preserve complete feature information of the sentence, a multi-head attention layer is added in front of the convolution layer. As shown in Fig.3, the MH-ACNN model includes an embedding layer, a multi-head attention layer, a convolutional layer, a pooling layer, a fully connected layer and a softmax layer.

    Fig.3 Structure diagram of MH-ACNN model

    Embedding layer: each word and emotional symbol in the text, together with its position, is mapped to a low-dimensional vector space. Word2vec pre-trained word vectors are loaded, a dictionary is constructed to encode words, and each token in the sample data is replaced with its dictionary ID, through which word vectors are obtained directly by ID mapping. According to Eqs.(1)-(2) given earlier, the position with ID pos is mapped to a dmodel-dimensional position vector, and text sequences are padded with zeros at the end to a fixed length, yielding a position vector matrix. The matrix combining word vectors and emotional symbol vectors Ex is spliced with the position vector matrix Ep as the input of the self-attention mechanism:

    Eembedding=Ex⊕Ep.

    (11)

    Multi-head attention layer: the self-attention mechanism linearly maps the input matrix to obtain three matrices Q, K and V of the same dimension. Similarity is calculated with the dot product; to prevent the inner products from being too large, a scaled dot product is used for adjustment. The weighted value of the scaled dot-product attention is calculated by

    Attention(Q,K,V)=softmax(QK^T/√dk)V,

    (12)

    Q=Linear(Eembedding)=EembeddingWQ,

    (13)

    K=Linear(Eembedding)=EembeddingWK,

    (14)

    V=Linear(Eembedding)=EembeddingWV.

    (15)

    Eight self-attention heads are used in the MH-ACNN model so that it can learn relevant information in different representation subspaces and obtain deep semantic expressions of the input. The Q, K and V matrices are linearly transformed as inputs to the scaled dot-product attention (Eq.(17)), whose weights are calculated eight times with a different linear transformation each time. The eight self-attention results are linearly transformed and taken as the result of multi-head attention.

    M(Q,K,V)=concat(Wo[h1,h2,…,h8]),

    (16)

    hi=Attention(QWi^Q,KWi^K,VWi^V), i=1,2,…,8.

    (17)
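    The multi-head layer of Eqs.(13)-(17) can be sketched as follows. This is a structural illustration only: the weight matrices are random placeholders standing in for trained parameters, and the sketch assumes the head count divides dmodel evenly:

```python
import numpy as np

rng = np.random.default_rng(0)  # placeholder weights, not trained parameters

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(E, num_heads=8):
    """Per-head linear maps of the embedding E (Eqs.(13)-(15)),
    scaled dot-product attention per head (Eq.(17)),
    then concatenation and an output projection (Eq.(16))."""
    L, d_model = E.shape
    d_k = d_model // num_heads          # subspace dimension per head
    heads = []
    for _ in range(num_heads):
        WQ, WK, WV = (rng.standard_normal((d_model, d_k)) for _ in range(3))
        Q, K, V = E @ WQ, E @ WK, E @ WV
        a = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)
        heads.append(a @ V)             # one subspace's attention result
    WO = rng.standard_normal((num_heads * d_k, d_model))
    return np.concatenate(heads, axis=-1) @ WO
```

    Each head attends in its own d_k-dimensional subspace, which is what lets the model capture different aspects of word relevance simultaneously.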

    Convolutional layer: a dilated convolution is used to expand the receptive field without increasing the number of convolution kernels, thereby expanding the range of feature extraction. If the expansion factor is n and the original convolution kernel size is N, the expanded convolution kernel size is N′=n(N-1)+1. For a one-dimensional input sequence X∈Rn and a convolution kernel f: {0,…,N-1}→R, the dilated convolution on element s is defined as

    F(s)=(X*fd)(s)=∑(i=0 to N-1) f(i)*x(s-d·i),

    (18)

    where fd denotes the dilated convolution kernel; * indicates the convolution operation; d is the dilation rate; s-d·i accounts for the direction of past information, in which - indicates the shift operation and d·i is equivalent to the step size of the convolution kernel. The dilated convolution is therefore equivalent to introducing a fixed hop interval between every two adjacent filter taps. When n=1, the dilated convolution reduces to an ordinary convolution. A larger expansion factor allows outputs of the top layer to represent a larger range of inputs, effectively expanding the receptive field. Feature maps are obtained by applying the dilated convolution to the weight matrix M produced by the multi-head attention mechanism.

    (19)
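    Eq.(18) can be read off directly as code. The sketch below is a deliberately unoptimized 1-D dilated convolution that keeps only positions with a full receptive field; the function name is ours:

```python
import numpy as np

def dilated_conv1d(x, f, d):
    """1-D dilated convolution per Eq.(18): output at position s sums
    f(i) * x[s - d*i] for i = 0..N-1, where d is the dilation rate.
    The effective kernel size is d*(N-1)+1, matching N' = n(N-1)+1."""
    N = len(f)
    span = d * (N - 1) + 1                  # expanded kernel size N'
    out = np.empty(len(x) - span + 1)
    for s in range(len(out)):
        start = s + span - 1                # rightmost input tap
        out[s] = sum(f[i] * x[start - d * i] for i in range(N))
    return out
```

    With d=1 this reduces to an ordinary convolution; larger d skips a fixed interval between adjacent taps, enlarging the receptive field without adding kernel weights.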

    Pooling layer: local features extracted through the dilated convolution are screened to obtain the most important emotional feature information. Max pooling is used to select features and extract C′, the most important feature information. The feature information after pooling is

    C′=max(C).

    (20)

    Fully connected layer: the most abstract features obtained through the pooling operation are integrated. To avoid overfitting, a dropout layer is integrated into the fully connected layer, randomly discarding neurons with a certain probability to reduce dependence among them and improve the generalization ability of the model. Finally, a softmax function is used to determine the emotional tendency of the barrage text.

    y=softmax(WfXr+Bf),

    (21)

    where Xr is the output of the pooling layer; Wf is the weight matrix of the fully connected layer, and Bf is the bias of the fully connected layer.

    The model is trained by minimizing the cross-entropy loss function, whose expression is

    L=-∑(i∈D)∑j y′ij*log(yij),

    (22)

    where D is the size of the dataset; j is the sentiment category label corresponding to review text i; y′ is the actual category, and y is the predicted category.
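    With one-hot actual categories y′ and softmax outputs y, the loss of Eq.(22) averaged over the dataset can be sketched as below (an assumed one-hot encoding over the seven emotion classes; the clipping constant is ours, added for numerical safety):

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-12):
    """Average cross-entropy over the dataset D (Eq.(22)):
    y_true holds one-hot actual categories y', y_pred the predicted
    softmax probabilities y, one row per review text i."""
    y_pred = np.clip(y_pred, eps, 1.0)   # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
```

    For example, a uniform prediction over seven classes against a one-hot label gives a loss of log(7).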

    2 Experiment and analysis

    2.1 Experimental data and evaluation indicators

    All bullet screens of hot events on the BILIBILI website, crawled with Python, are used in this paper as a bullet screen dataset (BSD) to verify the performance of the MH-ACNN model. Video bullet screen data contain noise, so punctuation marks, URL links and advertisements are removed before building the corpus.
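    A minimal cleaning step of the kind described above might look as follows. This is a rough heuristic sketch, not the paper's exact pipeline: it strips URL links and punctuation runs while keeping Chinese characters, letters and digits:

```python
import re

def clean_barrage(text):
    """Remove URL links and punctuation noise from a bullet screen
    before corpus building (heuristic, not the paper's exact rules)."""
    text = re.sub(r"https?://\S+|www\.\S+", "", text)   # strip URL links
    # collapse runs of non-word characters (Python's \w is Unicode-aware,
    # so CJK characters are preserved) into single spaces
    text = re.sub(r"[^\w\u4e00-\u9fff]+", " ", text)
    return text.strip()
```

    Advertisement filtering would additionally need a keyword or pattern list, which is dataset-specific and omitted here.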

    In order to verify the effectiveness of the MH-ACNN model, according to the fine-grained expression of emotions in barrage texts, samples in the BSD dataset are labeled with seven emotion tags, namely like, sadness, anger, disgust, happiness, fear and surprise. At the same time, in order to verify the generalization ability of the model, the public dataset MDS from the NLP2013 Chinese microblog emotion evaluation task is used for testing, as shown in Table 1.

    Table 1 Datasets

    Three measurement indicators based on a confusion matrix are selected as criteria to evaluate the classification effect of the model, including precision (p), recall (r) and F1 value, which are calculated as

    p=TP/(TP+FP),

    (23)

    r=TP/(TP+FN),

    (24)

    F1=2pr/(p+r),

    (25)

    where TP is the number of positive samples predicted as positive; FP is the number of negative samples predicted as positive; FN is the number of positive samples predicted as negative.
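    Eqs.(23)-(25) translate directly from the confusion-matrix counts:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 per Eqs.(23)-(25), from the counts
    TP, FP and FN of a (binary or per-class) confusion matrix."""
    p = tp / (tp + fp)            # Eq.(23)
    r = tp / (tp + fn)            # Eq.(24)
    f1 = 2 * p * r / (p + r)      # Eq.(25), harmonic mean of p and r
    return p, r, f1
```

    For the seven-class setting, these are computed per class and then macro- or weight-averaged, as in Tables 3-5.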

    2.2 Model parameters setting

    In this experiment, convolution kernels with different window sizes are used for feature extraction, and model hyperparameters are selected through 10-fold cross-validation. After cross-validation, the final weights of each layer of neuron connections are retrained on all training data. During training, a random search algorithm[17] is used to tune the remaining parameters so as to achieve the best classification performance. Parameter settings are shown in Table 2.

    Table 2 Parameters settings of model

    2.3 Analysis of results

    As shown in Table 3, the highest evaluation index over multiple training adjustments is taken as the final experimental result of the MH-ACNN model on the BSD dataset.

    Table 3 Experimental results of MH-ACNN model

    In order to better evaluate the performance of the MH-ACNN model, it is compared with the multinomial Naive Bayes model (MNB)[18], the multi-channel convolutional neural network (MCNN)[5], the dual attention model (DAM)[15] and the multi-head attention bidirectional LSTM model (MH-BiLSTM). Comparison results are shown in Table 4.

    Table 4 Experimental results of different models

    From the experimental results, the following conclusions are drawn. The MH-ACNN model achieves the best classification effect on F1. Compared with the MNB model, macro-average F1 and weighted-average F1 are increased by 12.7% and 24.31% respectively, which shows that deep learning models outperform traditional machine learning models in emotion classification tasks. Meanwhile, the MH-ACNN model shows a smaller improvement over the DAM model in macro-average F1 but a more obvious one in weighted-average F1, indicating that it learns emotional features better and thereby improves classification. The MH-BiLSTM model improves on the DAM model in both macro-average F1 and weighted-average F1, but its classification effect is slightly lower than that of the MH-ACNN model, showing that a CNN is more suitable for feature extraction from short texts such as bullet screens.

    In order to verify the generalization ability of the model, a microblog review dataset was used to test the classification performance of the MH-ACNN model. Experimental results are shown in Table 5.

    As can be seen from Table 5, the classification results of the MH-ACNN model on the video barrage dataset are slightly higher than those on the Weibo dataset, with a maximum gap of 11%. The model's results on the MDS dataset are still improved by 10%, indicating that the proposed model has strong generalization ability.

    Table 5 Experimental results of different datasets

    To evaluate the importance of different components of the MH-ACNN model, the parameters were varied in different ways, and the change in performance of the different models on the BSD dataset was measured. The results are presented in Figs.4 and 5.

    In Fig.4, the number of attention heads is varied while the other parameters are kept constant. Comparing the MH-ACNN model and the MH-BiLSTM model shows that single-head attention is worse than the four-head and eight-head settings, while quality also drops off with too many heads.

    Fig.4 Effect of number of heads on classification performance

    In Fig.5, the dimension of the model embedding is varied while the other parameters are kept constant. Comparing the MH-ACNN, MH-BiLSTM and DAM models, it can be observed that a larger embedding dimension performs better, but the classification effect stops improving significantly as the dimension keeps growing. Therefore, 256 is selected as the model embedding dimension in this paper.

    Fig.5 Effect of dimension of model embedding on classification performance

    2.4 Visual analysis of video barrage corpus

    In the process of experimental exploration, attention should be paid not only to classification performance but also to the regularities behind the data, so as to provide data support for practical applications.

    Taking the hot event “Female Passenger Sitting Past Her Station on the Yangtze River Bridge Line” as an example, its barrage comments are visually analyzed. Bullet screens were counted over the video timeline from the occurrence of the event to its development, as shown in Fig.6. The chart of barrage volume against video time shows that, compared with TV series with more plots, hot-event barrage volumes have a clearer sense of hierarchy and more prominent focuses. The climax of the whole incident was at around 20 s-70 s of the video, where a series of actions by the female passenger who missed her stop drew ridicule from the public.

    Fig.6 Amount of hot event barrage in video time

    Bullet screens carry natural time attributes thanks to a feature of the BILIBILI platform, so barrages are visualized in natural time from the event's occurrence until it faded from attention. Fig.7 shows the total number of bullet screens for this event counted on a monthly basis. The figure shows that bullet screens were concentrated in November and December 2018, and the number of bullet screens from March 2019 to 2020 indicates that the event received almost no attention in later periods.

    Fig.7 Variation curve of number of barrages of hot events in natural time

    Fig.8 shows the distribution of barrages from November to December 2018, the period with the most concentrated barrage volume, counted in days. The number of barrages fluctuated in November 2018, with most appearing from November 4th to November 9th, while the barrage volume tended to zero in December. Therefore, barrages in November 2018 were visually analyzed again in units of days. As shown in Fig.9, December 7 was the turning point from the peak to small fluctuations.

    Fig.8 Variation curve of number of natural time barrages in 2018

    Fig.9 Variation curve of number of barrages in natural time measured in days

    From the above analyses, the following conclusions are drawn. Firstly, the barrage volume reflects that the hot event experienced six stages: initiation, gestation, development, climax, processing and rest, which is consistent with the general law of hot events from occurrence through evolution to calming down. Secondly, as the saying goes, “the mouths of the people are harder to block than a river”, a simple but powerful expression of the importance of popular views. In ancient times, when information transmission was extremely inconvenient, public responses were equivalent to a large river; in the modern era of highly developed information, public attitudes have grown from a large river into a vast ocean, whose importance is self-evident. According to the development rules of hot events, the management of malignant events must begin before the climax period, and the key issues must be resolved in the development period, so as to prevent the situation from continually deteriorating.

    In order to explore public views and main emotions on the event, key words are extracted from the barrage corpus, and at the same time the MH-ACNN model mentioned in this article is used to classify and visualize emotions in the barrage corpus.

    As shown in Fig.10, the public believes that the female passenger's behavior had the potential to “murder other passengers”. She not only frightened other passengers and users watching the videos, but also seriously endangered public safety, which constitutes a crime for which she should bear corresponding legal responsibility or be punished.

    Fig.10 Barrage corpus keywords

    Fig.11 intuitively reflects the main emotions of the video audiences, covering all emotion tags. Among them, the emotions of “happiness” and “like” account for only a very small part, while “anger” and “disgust” account for the largest proportion. Looking back at the video, it shows a typical crime of endangering public safety, and the female passenger's behavior is deeply resented by video audiences. However, the barrage sentiment analysis shows that tendencies of “happiness” and “like” also appear, indicating that some barrage senders do not criticize such behavior but instead vent their own emotions out of a fun-seeking, rubbernecking mentality, which seriously affects the normal direction of public opinion and poses certain hidden dangers in communication. Platform managers need to pay more attention to such groups of people; if multiple comments under the same account run counter to the direction of public opinion, the account should receive high attention.

    Fig.11 Emotional distribution of hot events

    3 Conclusions

    In the field of text sentiment analysis on video barrages, existing research is mostly directed at personalized recommendation and similar applications, and has not been combined with public opinion communication despite the real-time nature of barrages. To this end, a sentiment classification model is built in this paper for the fine-grained analysis of public opinions and sentiments on the network. Drawing on the characteristics of the barrage language, features of barrage short texts are extracted through a CNN, while relationships among words are calculated through a self-attention mechanism. Because existing sentiment analysis models cannot capture relationships among short-text words in video barrages, a multi-head attention mechanism is used to capture feature information of different subspaces and obtain deep semantic representations of barrage sentences, providing better inputs for emotion classification. By comparing the advantages and disadvantages of each emotion classification model, the effectiveness and superiority of the MH-ACNN model in text emotion classification are verified. Finally, visual analyses of barrage volumes and emotional distributions in hot events provide data support for event management and control.

国产真人三级小视频在线观看| 亚洲欧美成人综合另类久久久| 久久久久久免费高清国产稀缺| 高清黄色对白视频在线免费看| 精品高清国产在线一区| 人人妻人人爽人人添夜夜欢视频| 91麻豆av在线| 亚洲 国产 在线| 午夜av观看不卡| 精品少妇一区二区三区视频日本电影| 黄频高清免费视频| 中文字幕高清在线视频| 99国产精品免费福利视频| 日本猛色少妇xxxxx猛交久久| 精品人妻1区二区| 美女主播在线视频| 一级黄色大片毛片| 狠狠婷婷综合久久久久久88av| 丰满迷人的少妇在线观看| 国产真人三级小视频在线观看| 国产精品秋霞免费鲁丝片| 亚洲精品国产区一区二| 日韩人妻精品一区2区三区| 亚洲欧美清纯卡通| 在线天堂中文资源库| 黑人欧美特级aaaaaa片| 一区二区日韩欧美中文字幕| 亚洲精品乱久久久久久| 日本五十路高清| 人人妻人人澡人人看| 秋霞在线观看毛片| 大型av网站在线播放| 丝袜美足系列| 女性生殖器流出的白浆| 精品一区二区三区av网在线观看 | 男女边摸边吃奶| 国产精品熟女久久久久浪| 欧美日韩综合久久久久久| 日本av手机在线免费观看| 91麻豆精品激情在线观看国产 | 老汉色av国产亚洲站长工具| √禁漫天堂资源中文www| 国产真人三级小视频在线观看| e午夜精品久久久久久久| 国产欧美日韩一区二区三区在线| 国产精品偷伦视频观看了| 亚洲精品一卡2卡三卡4卡5卡 | av在线老鸭窝| 少妇人妻久久综合中文| 啦啦啦啦在线视频资源| 亚洲一区二区三区欧美精品| 国产精品久久久久成人av| 看免费成人av毛片| 青青草视频在线视频观看| 亚洲中文av在线| 久久精品成人免费网站| 久久精品亚洲熟妇少妇任你| 99精国产麻豆久久婷婷| 91九色精品人成在线观看| 国产成人精品在线电影| xxxhd国产人妻xxx| 熟女av电影| 久久久久精品国产欧美久久久 | 国产一区二区 视频在线| 欧美老熟妇乱子伦牲交| 成人18禁高潮啪啪吃奶动态图| 精品视频人人做人人爽| av又黄又爽大尺度在线免费看| 亚洲国产av新网站| 欧美日韩亚洲高清精品| 制服诱惑二区| 美女视频免费永久观看网站| 久久九九热精品免费| 丰满饥渴人妻一区二区三| 亚洲精品久久久久久婷婷小说| 亚洲专区国产一区二区| 伊人亚洲综合成人网| 丝袜脚勾引网站| 纵有疾风起免费观看全集完整版| 亚洲欧美中文字幕日韩二区| 久久人妻熟女aⅴ| 中文字幕精品免费在线观看视频| 国产熟女欧美一区二区| 亚洲精品国产av成人精品| 久久中文字幕一级| 亚洲欧美一区二区三区久久| 国产精品.久久久| 亚洲国产欧美网| 成人亚洲精品一区在线观看| 亚洲,一卡二卡三卡| 日韩,欧美,国产一区二区三区| 校园人妻丝袜中文字幕| 在线天堂中文资源库| 黄色怎么调成土黄色| 捣出白浆h1v1| 精品一区二区三区av网在线观看 | cao死你这个sao货| 久久久久久久国产电影| 99香蕉大伊视频| 伊人久久大香线蕉亚洲五| 国产欧美日韩综合在线一区二区| 好男人电影高清在线观看| 日本wwww免费看| 日韩制服丝袜自拍偷拍| 中文字幕人妻熟女乱码| 欧美成人精品欧美一级黄| 免费在线观看黄色视频的| 亚洲专区中文字幕在线| 日韩一卡2卡3卡4卡2021年| 精品亚洲成a人片在线观看| 一级毛片 在线播放| 亚洲伊人色综图| 国产成人av教育| 成人国产一区最新在线观看 | 亚洲精品国产区一区二| 国产成人a∨麻豆精品| 五月开心婷婷网| 一级毛片我不卡| 大话2 男鬼变身卡| 丝袜脚勾引网站| 久久精品成人免费网站| 免费看av在线观看网站| 别揉我奶头~嗯~啊~动态视频 | 狠狠精品人妻久久久久久综合| 观看av在线不卡| 国产在线一区二区三区精| 欧美黄色淫秽网站| 精品视频人人做人人爽| 国产精品欧美亚洲77777| 另类亚洲欧美激情| 精品一区二区三区av网在线观看 | 免费不卡黄色视频| 男女午夜视频在线观看| 成人三级做爰电影| 精品一品国产午夜福利视频| av一本久久久久| 亚洲成av片中文字幕在线观看| 免费一级毛片在线播放高清视频 |