
    Long Text Classification Algorithm Using a Hybrid Model of Bidirectional Encoder Representation from Transformers-Hierarchical Attention Networks-Dilated Convolutions Network


    ZHAO Yuanyuan(趙媛媛), GAO Shining(高世寧), LIU Yang(劉 洋) , GONG Xiaohui(宮曉蕙) *

1 College of Information Science and Technology, Donghua University, Shanghai 201620, China

2 Engineering Research Center of Digitized Textile & Apparel Technology, Ministry of Education, Donghua University, Shanghai 201620, China

Abstract: Text accounts for most of the information resources on the Internet, which puts forward higher and higher requirements for the accuracy of text classification. Therefore, in this manuscript, firstly, we design a hybrid model of bidirectional encoder representation from transformers-hierarchical attention networks-dilated convolutions network (BERT_HAN_DCN), which is based on the BERT pre-trained model with its superior ability to extract features. The advantages of the HAN model and the DCN model are taken into account, which helps gain abundant semantic information, fusing context semantic features and hierarchical characteristics. Secondly, the traditional softmax algorithm increases the learning difficulty of samples of the same class, making it more difficult to distinguish similar features. Based on this, AM-softmax is introduced to replace the traditional softmax. Finally, the fused model is validated: it shows superior performance in accuracy rate and F1-score on two datasets, and the experimental analysis shows that it outperforms general single models such as HAN and DCN based on the BERT pre-trained model. Besides, the improved AM-softmax network model is superior to the general softmax network model.

    Key words: long text classification; dilated convolution; BERT; fusing context semantic features; hierarchical characteristics; BERT_HAN_DCN; AM-softmax

    Introduction

Text classification is aimed at simplifying messy text data and summarizing information from unstructured data[1]. It is a basic task in natural language processing (NLP) and can be applied to sentiment classification, web retrieval, and spam filtering systems[2]. Establishing specific classification rules is a necessary step in automatic text categorization, which mainly includes text feature extraction and word vector representation.

For text feature extraction, experts have proposed a variety of methods, which can be summarized into the following: expert systems, machine learning, and deep neural networks, which are also the three main stages of NLP development. Expert systems rely on experts with relevant field expertise and experience to summarize rules and extract features for classification, which makes it difficult to deal with the flexible and changeable characteristics of natural language, and the long-term dependence on manual feature extraction requires huge manpower. Machine learning algorithms[3-4] are shallow feature extractors, and this kind of feature engineering is based on manual extraction and is not able to automatically extract features from training sets.

However, in most of the above-mentioned feature extraction methods, high dimensionality and data sparseness result in poor performance[5]. With the rise and popularity of deep learning, neural networks have achieved excellent results in the field of image processing[6-7], and related scholars began to utilize deep learning[8-13] for NLP, where it serves as the feature extraction unit and has gained extraordinary accomplishments. The most representative neural network is the convolutional neural network (CNN)[8], which is strong in feature learning and improves its feature extraction ability by modifying hyperparameters or increasing the number of convolutional layers, but it faces the problems of a large amount of calculation and parameter tuning. The dilated convolution network (DCN) is a variant of CNN. DCN is able to extract more global features with less parameter-tuning work[14], but it often loses key information and contextual structural semantic information while obtaining global information. The attention mechanism can identify the key information in characters and sentences[9]. The traditional attention mechanism usually operates on characters, which is inadequate for acquiring semantic information, so Yang et al.[15] proposed the hierarchical attention network (HAN). HAN is composed of a two-level attention mechanism over characters and sentences, which can effectively identify features, structural information and key semantics. However, it sacrifices some global feature extraction and may cause partial semantic loss.

In the aspect of vector representation, unsupervised training is essential for the vector representation of text, and pre-trained CNNs[16-17] are widely used to fine-tune downstream tasks[18-19], gaining significantly enhanced abilities in feature extraction, transfer learning and dynamically fetching context semantics. Traditional models, such as fastText[20] and GloVe[21], intend to obtain the semantic information of each word while discarding the semantic relevance with preceding texts, and are prone to the problems of dimension explosion and data sparseness[22]. Bidirectional encoder representation from transformers (BERT) is a pre-trained word vector model constructed with n-layer transformer models with strong coding ability, and is able to calculate the semantic weight of each word with respect to the others in the sentence. Therefore, the pre-trained language model BERT is used for transfer learning to fine-tune downstream tasks.

With the explosive growth of the number of texts, a single classifier is not able to accomplish the tasks with high accuracy and precision, and many studies on mixed models have proven them more effective than single models in dealing with text classification problems[23-26]. A feature-fused HAN-DCN model is presented in this manuscript: the BERT model trains word vectors to initially understand the text semantics, the HAN network obtains the structural dependency between word vectors, and the DCN extracts global and edge semantics in parallel. The features obtained from the two channels are spliced, which is more efficient in increasing the weight of key information at the word and sentence levels and in extracting global semantic features as much as possible, so as to improve the accuracy. Since softmax aims to maximize the probability of categorization by optimizing the variance between different classes and is unable to minimize the differences within the same category, AM-softmax[27] is used to deepen the feature learning and improve the accuracy and efficiency of news text classification. The feasibility of the BERT_HAN_DCN model based on AM-softmax is verified through a series of experiments, and it shows certain advances in improving generalization ability and model convergence speed.

    1 Model Architecture

The entire architecture of this manuscript is shown in Fig. 1. Firstly, the data is processed by BERT to get a rudimentary understanding of the text, so that we can obtain a dynamic semantic representation. After receiving the vector of each individual word in the long sentence, the digital vectors are sent to a parallel network, composed of a three-layer DCN, which can acquire a larger receptive field with less computation, and a HAN hybrid model, to extract more abundant semantic information and contextual feature information. In related image processing work, a gridding effect appears in dilated convolution, resulting in the loss of characteristic information[28-29]. Therefore, in this network design, a three-layer dilated convolution is adopted to overcome the influence of the gridding effect, and the dilation rates of the layers are set to 1, 3, and 5, respectively. Thus, the feature representation of the text is formed by combining the feature information of these two parts. In the end, the softmax function is used to normalize the output probabilities and classify according to the probability size. The mixed model architecture is shown in Fig. 1.

    1.1 Input of representation layer

BERT can obtain dynamic and nearly comprehensive semantic information of the text. The BERT model uses a transformer with a bidirectional structure to fuse left and right characters to obtain contextual semantics, completes the two tasks of masked language model (MLM) and next sentence prediction (NSP) at the same time, and conducts joint training to obtain the vector representations of words and sentences. BERT's embedding layer consists of token embedding (vector representation of words), segment embedding (vector representation of the two sentences in a sentence pair, reflecting their similarity) and position embedding (learning the order properties of the sentence), converting Chinese characters into input vectors W_1, W_2, …, W_n. The model dynamically generates the context semantic representation of words through the bidirectional transformer structure[30] to perform the two tasks mentioned above (MLM and NSP), as shown in Fig. 2. The final transformer output, a hidden-layer vector carrying semantic information, is obtained from the self-attention layer, the residual connection and the normalization layer, and the output is the superposition of the character-level vectors. The output layer vectors processed by BERT are E_1, E_2, …, E_n, obtained by multi-layer transformers. In this experiment, the BERT_BASE_CHINESE model is used, which is composed of a 12-layer multi-head attention transformer.
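
As an illustration of this representation layer, the short sketch below obtains such character-level vectors E_1, E_2, …, E_n from the BERT_BASE_CHINESE checkpoint. It assumes the HuggingFace transformers package and the model identifier "bert-base-chinese", neither of which is specified in this manuscript, so it is only an approximate stand-in for the actual embedding step; the maximum length of 256 follows Section 1.2.

```python
# Hedged sketch: character-level BERT vectors for a Chinese sentence,
# assuming the HuggingFace `transformers` package.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese")

text = "这是一条新闻文本。"  # toy input sentence
inputs = tokenizer(text, return_tensors="pt",
                   padding="max_length", truncation=True, max_length=256)
with torch.no_grad():
    outputs = model(**inputs)

# Shape (1, 256, 768): one 768-dimensional vector per character position.
char_vectors = outputs.last_hidden_state
```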

    Fig. 1 Overall framework of BERT_HAN_DCN

    Fig. 2 BERT model structure

    1.2 HAN layer

As illustrated in Fig. 3, the HAN model is composed of a Chinese character-level and a sentence-level attention network, and this hierarchical structure is in line with people's habitual way of understanding articles. In essence, each attention layer in the network is composed of two layers of BiGRU, which has the advantage of learning text features sequentially, as shown in the dotted box in Fig. 3. Considering the hierarchical structure of the network, it is necessary to fix the length of each sentence when dividing sentence attention. Thus, in this manuscript, we set the maximum number of characters in an article to 256, and the text is divided into segments of 16 characters each. HAN is composed of four parts: word encoding, word attention, sentence encoding and sentence attention, whose calculation processes are explained in detail below.

    Fig. 3 HAN model structure

    (1) Word encode

    In this part, the embedding layer vector is word-encoded. The vectors are initialized and then used as the input of the two-layer BiGRU. The specific conversion method is shown as

x_{in} = E_e E_n, n ∈ [1, t],

(1)

h⃗_{1n} = GRU⃗(x_{in}), n ∈ [1, t],

(2)

h⃖_{1n} = GRU⃖(x_{in}), n ∈ [t, 1],

(3)

where GRU⃗ and GRU⃖ denote the forward and backward passes of the BiGRU.
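
A hedged Keras sketch of this word (character) encoder follows: a two-layer BiGRU applied to each 16-character segment of 768-dimensional BERT vectors via TimeDistributed. The segment length 16 and hidden size 64 follow Sections 1.2 and 2.3; the use of tensorflow.keras and the exact layer arrangement are assumptions, since the manuscript does not list implementation details.

```python
# Hedged sketch of the word-level encoder in Eqs. (1)-(3).
from tensorflow.keras import layers, Model

CHARS_PER_SENT, NUM_SENTS, DIM = 16, 16, 768  # values from Sections 1.2 and 2.3

# Two stacked BiGRU layers over one 16-character pseudo-sentence.
sent_in = layers.Input(shape=(CHARS_PER_SENT, DIM))
h = layers.Bidirectional(layers.GRU(64, return_sequences=True))(sent_in)
h = layers.Bidirectional(layers.GRU(64, return_sequences=True))(h)  # h_{1n}, 128-dim
word_encoder = Model(sent_in, h)

# The same encoder is applied to every pseudo-sentence of a document.
doc_in = layers.Input(shape=(NUM_SENTS, CHARS_PER_SENT, DIM))
word_states = layers.TimeDistributed(word_encoder)(doc_in)  # (batch, 16, 16, 128)
```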

    (2) Word attention

The spliced vectors h_{11}, h_{12}, ..., h_{1n} of the forward and backward hidden states are used as the overall representation of the words. In this part, the attention weight of each word in the sentence is calculated. The calculation method is

u_{1n} = tanh(w_s h_{1n} + b), n ∈ [1, t],

(4)

α_{1n} = exp(u_{1n}^T u_w) / ∑_n exp(u_{1n}^T u_w), n ∈ [1, t],

(5)

s = ∑_n α_{1n} h_{1n}, n ∈ [1, t],

(6)

where u_w is the word-level context vector, α_{1n} is the normalized attention weight of the n-th word, and s is the resulting sentence vector.

    (3) Sentence attention

The sentence vectors are encoded by the sentence-level BiGRU to obtain the hidden states h_{2n}, and the sentence-level attention weights are computed analogously to Eqs. (4) and (5):

u_{2n} = tanh(w_s h_{2n} + b), n ∈ [1, t],

(7)

α_{2n} = exp(u_{2n}^T u_s) / ∑_n exp(u_{2n}^T u_s), n ∈ [1, t],

(8)

where u_s is the sentence-level context vector.

The purpose of adding an attention mechanism in this step is to discover the sentences carrying significant meaning in the document. The final output of the HAN network is given in Eq. (9), in which the vector ω_1 is the significant local characteristic of the mixed neural network, defined as

ω_1 = ∑_n α_{2n} h_{2n}, n ∈ [1, t],

(9)

where ω_1 is the document vector as well as the final feature extracted by HAN, which sums up all the information of the sentences in the long text.
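
The same attention pooling pattern is used at both the word and sentence levels. The function below is a hedged sketch of Eqs. (4)-(9) for a generic sequence of hidden states; implementing the learned context vector u_w (or u_s) as a bias-free Dense layer is an assumption made for illustration.

```python
# Hedged sketch of attention pooling: score each hidden state, normalize with
# softmax, and return the attention-weighted sum (Eqs. (4)-(6) and (7)-(9)).
import tensorflow as tf
from tensorflow.keras import layers

def attention_pool(h, score_dim=128):
    # h: (batch, steps, dim) hidden states from a BiGRU
    u = layers.Dense(score_dim, activation="tanh")(h)   # u_n = tanh(w_s h_n + b)
    scores = layers.Dense(1, use_bias=False)(u)          # u_n^T u_w (context vector)
    alpha = tf.nn.softmax(scores, axis=1)                # attention weights α_n
    return tf.reduce_sum(alpha * h, axis=1)              # weighted sum of hidden states
```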

    1.3 Dilated convolutional networks layer

As shown in Fig. 1, the DCN layer is composed of three dilated convolutional blocks with the same structure, and the input of each dilated convolutional layer is the output of the previous layer. Changing the dilation rate of each layer allows the receptive field of the convolutional layer to quickly cover all input data. As the dilation rate of each layer increases, the obtained feature information grows exponentially.

DCN and HAN form parallel network structures, taking the output of the embedding layer initialized by BERT as input. The input for each word in the sentence is E_i ∈ R^{B×N×D}, where B is the batch size set to 64, N is the number of words, and D is the word vector dimension of the BERT output. The feature extraction of the input text sentence by dilated convolution is completed by setting the filter size. The convolution calculation is shown as

c_i = f(ω · E_{i:i+k+(k-1)(r-1)} + b),

(10)

where f is a non-linear function, ω is the randomly initialized weight matrix of the convolution kernel, k is the size of the convolution kernel, r is the dilation rate of the dilated convolution, E_{i:i+k+(k-1)(r-1)} is the sentence vector composed of words i to i+k+(k-1)(r-1), and b is the bias term.

Therefore, after the feature extraction of the dilated convolutional layer, the final vector obtained is C. The concrete vector representation of C is shown as

C = [c_1, c_2, …, c_{i+k+(k-1)(r-1)}].

(11)

The output of HAN is a serialized continuous vector, so the dimensions need to be kept consistent. The vectors obtained from the three-layer dilated convolution network are connected and converted into a feature matrix w_2, as shown in

w_2 = [C_1, C_2, ..., C_i], i ∈ [1, n],

(12)

where C_i is the feature matrix output by the dilated convolutional neural network.
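
The DCN branch can be sketched in Keras as three stacked 1-D convolutions with dilation rates 1, 3 and 5 (Section 1) and 128 filters (Section 2.3). The kernel size of 3 and the final max pooling used to obtain a fixed-length vector are assumptions for illustration, since the manuscript only states that the DCN output is reshaped to match the HAN output.

```python
# Hedged sketch of the dilated convolution branch (Eqs. (10)-(12)).
from tensorflow.keras import layers

def dcn_branch(x):
    # x: (batch, seq_len, dim) BERT output vectors E_1, ..., E_n
    for rate in (1, 3, 5):                       # dilation rates from Section 1
        x = layers.Conv1D(filters=128, kernel_size=3, dilation_rate=rate,
                          padding="same", activation="relu")(x)
    # Pooling to a fixed-length feature vector is an assumption for illustration.
    return layers.GlobalMaxPooling1D()(x)
```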

    1.4 Classification layer

The classification layer is composed of the following four parts: feature fusion layer, fully connected layer, dropout layer and softmax layer. A simple softmax classifier on top of HAN and DCN calculates the conditional probability distribution over the predefined classification tags. Using Keras's add function at the model fusion layer, we obtain the merged layer vector ω, shown as

ω = w_1 ⊕ w_2,

(13)

where w_1 and w_2 represent the output feature vectors of HAN and DCN respectively, and ⊕ represents a splicing operation. After the merge layer operation, the obtained feature vectors are combined. The feature vector is then extracted again, and each input unit of the fully connected layer represents the value of one feature. In order to avoid overfitting of the model, we use the dropout mechanism. The final feature representations are obtained from the dropout layer and are classified by the softmax classification algorithm, which calculates the probability that ω belongs to category z; the concrete calculation formula is shown as

p(z | ω) = exp(w_z · ω + b_z) / ∑_k exp(w_k · ω + b_k),

(14)

where w_z and b_z are the weight vector and bias of the output layer corresponding to category z.
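
Under these definitions, the classification layer can be sketched as below: the two feature vectors are merged with Keras's add function, passed through a fully connected layer and dropout, and classified with softmax. The dense width of 256, the 8 output classes and the assumption that w_1 and w_2 share the same dimension are illustrative choices, not values stated for this layer in the manuscript.

```python
# Hedged sketch of the classification layer (Eqs. (13)-(14)).
from tensorflow.keras import layers

def classification_head(w1, w2, num_classes=8, dropout_rate=0.6):
    # w1, w2: HAN and DCN feature vectors, assumed to have the same dimension
    merged = layers.add([w1, w2])                     # Keras add function, Eq. (13)
    x = layers.Dense(256, activation="relu")(merged)  # fully connected layer
    x = layers.Dropout(dropout_rate)(x)               # dropout against overfitting
    return layers.Dense(num_classes, activation="softmax")(x)  # softmax, Eq. (14)
```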

    2 Experiments and Results

In this section, to verify the effectiveness of the BERT_HAN_DCN model, we use two real-world experimental datasets, which are subsets extracted from the SogouCS and THCNews datasets. We explicate the details of the experiments, evaluate the performance of the hybrid model, and analyze the experimental results.

    2.1 Experimental datasets

    The datasets used in this experiment are Chinese text classification datasets launched by the NLP Laboratory of Tsinghua University and Sogou labs. The detailed data amount of the train group, the validation group and the test group are shown in Table 1.

    Table 1 Details of the text classification datasets

    2.2 Multi-classification evaluation index

During the training of the text classifier, it is indispensable to select appropriate criteria to evaluate the ability of the classifier. The confusion matrix is shown in Table 2, and there are four commonly used criteria in the field of NLP: precision (P), accuracy (A), recall (R), and F1-score (F1).

    Table 2 Confusion matrix

(1) Accuracy (A)

A = (TP + TN) / (TP + TN + FP + FN),

(15)

where A measures the ability of the classifier over the whole data set; the higher the value of A, the better the classification ability of the model.

(2) F1-score (F1)

F1 = 2PR / (P + R),

(16)

where P is shown in

P = TP / (TP + FP),

(17)

and R is shown in

R = TP / (TP + FN).

(18)

F1 is a comprehensive index, the harmonic mean of precision and recall. It can be seen that F1 combines the results of P and R, and the closer F1 gets to 1, the more effective the model is.
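
For reference, these four indices can be computed directly from predicted and true labels, for example with scikit-learn as in the short sketch below; macro averaging over the classes is an assumption, since the manuscript does not state how per-class values are aggregated.

```python
# Illustrative computation of A, P, R and F1 (Eqs. (15)-(18)) with scikit-learn.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 2, 1, 1, 0, 2]   # toy ground-truth labels
y_pred = [0, 2, 1, 0, 0, 1]   # toy predicted labels

A = accuracy_score(y_true, y_pred)
P, R, F1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(A, P, R, F1)
```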

    2.3 Main initialization hyperparameters

In order to train a better classification model, appropriate hyperparameter settings should be chosen. The hidden vector dimensions of the BiGRU and DCN models are set to 64 and 128 respectively, the batch size is 64, the dropout rate of BiGRU is set to 0.1, the maximum input length of the dataset is set to 256, and the learning rate is 0.000 05. The other main initialization hyperparameter settings of this experiment are shown in Table 3.

    Table 3 Main initialization hyperparameters

The Adam optimizer[31] is used to update the network weights, and the cross-entropy cost function is used to calculate the loss. In addition, early stopping is used to prevent overfitting; after multiple training runs of the models, a patience of 3 is found to be the most suitable value for all experimental models. Complicated neural networks trained on small datasets often result in overfitting[32-33]. Because of the relatively small datasets in this experiment, a certain dropout rate is adopted to prevent overfitting of the model. Consequently, five groups of experiments were designed to explore the influence of the dropout rate on the model and to find the optimal parameter for this fusion model; every 0.1 change in the dropout rate has an impact on the accuracy of the model. Finally, the most appropriate dropout rate is 0.6 for the SogouCS dataset and 0.8 for the THCNews dataset.
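
The training configuration described above can be expressed in Keras roughly as follows. The placeholder model and random data are purely illustrative stand-ins for the BERT_HAN_DCN network and the news datasets; the learning rate, batch size, number of epochs and early-stopping patience follow the values reported in this subsection.

```python
# Hedged sketch of the training setup: Adam (lr = 0.000 05), cross-entropy loss,
# batch size 64, 10 epochs, and early stopping with patience 3.
import numpy as np
import tensorflow as tf

# Placeholder model and data; the real model is the BERT_HAN_DCN network.
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(8, activation="softmax", input_shape=(768,))])
x_train, y_train = np.random.rand(256, 768), np.random.randint(0, 8, 256)
x_val, y_val = np.random.rand(64, 768), np.random.randint(0, 8, 64)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3)
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          batch_size=64, epochs=10, callbacks=[early_stop])
```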

    2.4 Analysis of experimental results

This manuscript focuses on two news datasets. A total of three groups of comparative experiments are designed: the standard BERT connected directly to the fully connected layer, and BERT as the embedding representation layer with HAN and DCN as the feature extraction layer respectively. The accuracy and loss rate on the two datasets during the training process are plotted separately, as shown in Fig. 4. The models are trained for 10 epochs on the two datasets. Over the course of training, it can be seen from Fig. 4 that the BERT_HAN model plays a significant role in improving accuracy. However, the model tends to be unstable in the first few iterations. The reason for this phenomenon is the complex network structure of the HAN model: in the early stage of learning, the error is relatively large, and some important features may be lost when focusing on local important features. With the constant updating of parameters and BERT_HAN's strong learning ability, the accuracy and stability of prediction constantly improve. The BERT_DCN model is more stable than the BERT model on both datasets. In addition, the accuracy on the SogouCS and THCNews datasets improved by 2.89% and 2.03% respectively compared with the BERT model.

From the experimental results, on SogouCS and THCNews the BERT_HAN_DCN model achieved accuracy values of 91.42% and 95.66% respectively, with loss rates of 39.95% and 17.83% respectively. The accuracy of the BERT_HAN_DCN model on the validation set is higher than that of the other models, and it is more stable during training. Compared with the other groups of models, it shows the best effect with a considerable improvement, which indicates that the designed model fusion is feasible and can extract deep characteristics of long text and improve the effect of the news text classification model.

Fig. 4 Training performance comparison between the presented model and other basic models: (a)-(b) training curves of verification accuracy and loss of SogouCS; (c)-(d) training curves of verification accuracy and loss of THCNews

2.4.2 Impact of AM-softmax

AM-softmax has achieved remarkable results in the field of face recognition. Unlike softmax, AM-softmax reduces the probability of the correct label and increases the effect of the loss, which is more helpful for the aggregation of samples of the same class. The specific AM-softmax is shown as

L_AMS = -(1/n) ∑_{j=1}^{n} log( e^{s(cos θ_{y_j} - m)} / ( e^{s(cos θ_{y_j} - m)} + ∑_{i≠y_j} e^{s cos θ_i} ) ),

(19)

where cos θ_{y_j} calculates x_j within the region of category y_j, and m is the margin between categories, which are kept at least m apart. The value of m here is set to 0.35, which needs to consider whether there is a clear boundary between the distributions of data in the real scene. The cosine value is between [0, 1], whose differences are too small to distinguish effectively, so it is scaled by a factor s to enlarge the differences in distribution, and s here is set to 30. With the increase of the number of training epochs, the accuracy on the validation sets of different models changes as shown in Fig. 5.
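
A hedged sketch of this loss computation follows. Normalizing the features and class weights before taking cosine similarities follows the original AM-softmax formulation[27], while the function and variable names are illustrative only.

```python
# Hedged sketch of AM-softmax (Eq. (19)) with scale s = 30 and margin m = 0.35.
import tensorflow as tf

def am_softmax_loss(features, class_weights, labels, s=30.0, m=0.35):
    # features: (batch, dim); class_weights: (dim, num_classes); labels: (batch,) int
    features = tf.math.l2_normalize(features, axis=1)
    class_weights = tf.math.l2_normalize(class_weights, axis=0)
    cos_theta = tf.matmul(features, class_weights)   # cos(theta) for every class
    one_hot = tf.one_hot(labels, depth=tf.shape(class_weights)[1])
    logits = s * (cos_theta - m * one_hot)            # subtract margin m on the true class
    return tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=one_hot, logits=logits))
```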

From the accuracy on the verification sets, it can be concluded that after 5-6 training epochs on the two datasets, the mixed model BERT_HAN_DCN based on AM-softmax tends to be stable and finally achieves higher accuracy.

Fig. 5 Training performance comparison between different models: (a) training accuracy curves of SogouCS based on AM-softmax models and the original models; (b) training accuracy curves of THCNews based on AM-softmax models and the original models

The trained models were verified on the validation set after 10 epochs. The precision, recall, F1 value and accuracy of the 8 categories of the two datasets are shown in Tables 4-5.

As shown in Tables 4-5, the models using AM-softmax as the loss function show a slight improvement in both accuracy rate and F1 value compared with the original models. Although the improvement is small, it proves that changing the way the loss is calculated is also a way to improve the feature extraction ability of the training model. Finally, for the SogouCS dataset, we find that the final F1-score and accuracy of the hybrid model are increased by 0.56% (from 91.42% to 91.98%) and 0.55% (from 91.42% to 91.97%), respectively. For the THCNews dataset, the F1-score and accuracy of the hybrid model are increased by 0.34% (from 95.69% to 99.06%) and 0.3% (from 95.66% to 95.69%), respectively.

    Table 4 Model comparison result on SogouCS dataset

    Table 5 Model comparison result on THCNews dataset

2.4.3 Time complexity comparison experiment

Under the same parameter settings, the running times of 8 different models were compared, and the effectiveness of the algorithm was verified. The time complexity comparison results are shown in Table 6.

    Table 6 Time complexity comparison

From Tables 4-6, the experimental results show that the mixed model has higher time complexity, but its accuracy and F1 are much better. For the SogouCS and THCNews datasets, the average calculation time per epoch of the hybrid model is 230 s and 244 s longer than that of BERT, respectively, but the accuracy is improved. It is obvious that the addition of the hierarchical attention mechanism increases the computational complexity but effectively improves the accuracy of the model. The calculation time of all AM-softmax-based models is less than that of the original models: for the SogouCS dataset, the average calculation time per epoch of the hybrid model is reduced by 6 s, and for the THCNews dataset, by 11 s. This proves that changing the loss calculation improves the convergence speed of the model to a certain extent and slightly reduces the complexity of the model.

    3 Conclusions

This manuscript adopts the BERT_HAN_DCN composite network model and applies it to the task of Chinese long text classification. Compared with the single BERT model, BERT_HAN and BERT_DCN, the accuracy and F1 value of this model are the highest. The results show that the fusion of HAN and DCN is effective and can learn deep features and contextual information in long text.

In addition, by improving the loss function, the accuracy and F1 of both the single models and the mixed model are improved while the training time is relatively reduced, which proves that the mixed model can be better applied to Chinese text classification tasks. It also shows that in the process of model training, attention should be paid not only to the ability of feature extraction and word vector transformation but also to the impact of the loss function on model accuracy.

    However, a more complex hybrid model requires more network parameters, which requires more computing power and longer training time. In the following research, we intend to further optimize and improve the details of the algorithm and we will improve this work by building a larger dataset.
