
ACLSTM: A Novel Method for CQA Answer Quality Prediction Based on Question-Answer Joint Learning

Computers, Materials & Continua, 2021, Issue 1

Weifeng Ma, Jiao Lou, Caoting Ji and Laibin Ma

School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou, 310023, China

Abstract: Given the limitations of community question answering (CQA) answer quality prediction methods in measuring the semantic information of the answer text, this paper proposes an answer quality prediction model based on question-answer joint learning (ACLSTM). The attention mechanism is used to obtain the dependency relationship between Question-and-Answer (Q&A) pairs. A Convolutional Neural Network (CNN) and a Long Short-Term Memory network (LSTM) are used to extract semantic features of Q&A pairs and calculate their matching degree. Besides, the answer semantic representation is combined with other effective extended features as the input representation of the fully connected layer. Compared with other quality prediction models, the ACLSTM model can effectively improve the prediction of answer quality, in particular for medium-quality answers, and its prediction effect improves further after adding effective extended features. Experiments prove that after ACLSTM model learning, the Q&A pairs can better measure the semantic match between each other, fully reflecting the model's superior performance in the semantic information processing of the answer text.

Keywords: Answer quality; semantic matching; attention mechanism; community question answering

    1 Introduction

People used to rely on traditional search engines to acquire knowledge. With the explosive growth of information nowadays, it is clear that traditional search engines have many shortcomings: with excessive numbers of search results, it is difficult to quickly locate the required information; they rely solely on keyword matching and do not involve semantics, resulting in poor retrieval; and so on. Consequently, a new mode of information sharing, community question answering (CQA), emerged. In particular, the emergence of vertical-domain CQA such as Stack Overflow, Brainly, and Auto Home not only satisfies the specific information needs of users, but also promotes the dissemination of high-quality information. The CQA launched by Auto Home is a professional automobile communication platform. Users have put forward a substantial number of real and effective questions and answers as user activity has increased. However, the content edited by responders varies greatly, and the quality of the answers is uneven. Auto Home has introduced an automatic Q&A service, but it fails to analyze customized issues and address user needs, so the user experience is undermined. Besides, CQA answer quality analysis indicates that more than 30% of answers are worthless [1]. Therefore, the basis for CQA success is how to detect high-quality answers from the content edited by responders. In response to these problems, this paper proposes an answer quality prediction model based on question-answer joint learning that uses the attention mechanism and the question text, together with the semantic representation of Q&A pairs in joint learning, to filter out high-quality answers that fit the question.

    2 Related Work

Mining the effective factors that affect the quality of answers is one of the keys to predicting high-quality answers. Fu et al. [2] found that reviews and user features among non-textual features are the most effective indicators for evaluating high-quality answers, while the validity of textual features varies across different knowledge domains. Shah et al. [3] found that the responder's personal information and the reciprocal rank of an answer to a given question could significantly predict high-quality answers. Calefato et al. [4] found that answers with links to external resources are positively related to the integrity of their content, and that the timeliness of the answer, the responder's emotions, and the responder's reputation score are all factors that influence the acceptance of the answer. Liu et al. [5] found that the quality of the question is also an important factor affecting the quality of the answer: low-quality questions induce low-quality answers, while high-quality questions contribute to high-quality answers, so the quality of the question to some extent determines the quality of the answer. Combining the quantitative and time-difference characteristics of answers, Xu et al. [6] proposed to use the relative positional sequence characteristics of the answers to predict answer quality. The results show that this feature can significantly improve the effect of answer quality prediction.

The key to predicting high-quality answers is not only to explore the effective factors that affect answer quality, but also to choose the appropriate model. Machine learning methods are widely used in classification and prediction tasks, such as the Random Forest classification model [7], the Latent Dirichlet Allocation (LDA) model [8], Support Vector Machines (SVM) [9-10], ensemble learning [11], etc. Because of their advantages in processing structured data, machine learning methods are also widely used in answer quality prediction. An Alternating Decision Tree (ADT) classification model with multi-dimensional extracted features was used to predict the quality of answers and achieved good results [12-13]. Using textual features, non-textual features and the combination of the two, Li et al. [14] established a binary classification model with logistic regression and found that the model's discriminative performance for high-quality answers is higher than for low-quality answers. Wang et al. [15] found that a Random Forest classification model including user social attributes has better classification performance after comparing the classification performance of logistic regression, Support Vector Machine and Random Forest. Wu et al. [16] proposed a new unsupervised classification model to detect low-quality answers; studies have shown that testing multiple answers under the same question can improve the detection of low-quality answers. Machine learning has achieved remarkable results in answer quality prediction. However, most of these methods rely on structural features, so they make little use of the text itself and cannot capture its semantic information.

With the development of deep learning, neural networks have achieved tremendous success in Computer Vision [17], Signal Processing [18-19] and Natural Language Processing. Neural networks can be used to capture text semantic information, and how to better measure the semantic information of the answer has gradually come into the spotlight. Sun et al. [20] used a topic model to extract the keywords of the answer, extended them with context and synonyms, and then trained the model with CNN. The results show that the model can effectively improve the accuracy of answer quality prediction. Suggu et al. [21] proposed two different Deep Feature Fusion Network (DFFN) answer quality prediction frameworks that model the Q&A pairs with CNN and with attention-based Bi-directional Long Short-Term Memory (Bi-LSTM) respectively, and extended features by leveraging various external resources. Experiments show that this method performs better on answer quality prediction tasks than other general deep learning models. Zhou et al. [22] combined CNN with a Recurrent Neural Network (RNN) to capture both the matching semantics of the Q&A pairs and the semantic association between the answers. Considering the interdependence of sentence pairs, Yin et al. [23] studied the dependency semantic relationship between sentence pairs through the attention mechanism and CNN. The results show that this method can better measure the semantic relationship between pairs of sentences than modeling the sentences separately.

    3 Method

Fig. 1 shows the architecture of the answer quality prediction model based on question-answer joint learning. First, Q&A pairs from the Auto Home CQA are used to jointly construct the attention text representation and learn the dependency relationship between the Q&A pairs. Second, the attention text representation is input into parallel CNNs to extract the local features of the Q&A pairs. Third, the CNN outputs of the Q&A pairs are input into parallel LSTMs to extract long-distance dependency features. Next, the semantic matching degree between the question representation and the answer representation is calculated and combined with the deep answer semantic representation and other effective extended features as the input representation of the fully connected layer. Finally, the SoftMax classifier is used to predict the quality of the answers to a given question.

    3.1 Input Layer

First, for a given Q&A pair, the question text and the answer text are padded with zeros to equal length, i.e., the text length is s, where the question text sq = {v1, v2, ..., vi, ..., vs}, vi ∈ Rd, and the answer text sa = {w1, w2, ..., wj, ..., ws}, wj ∈ Rd. The Word2vec model is used to pre-train d-dimensional word vectors [24]: vq,i = {x1, x2, ..., xk, ..., xd}, where xk is the value in the k-th dimension of the word vector of the i-th word of the question text sq; and wa,j = {y1, y2, ..., yk, ..., yd}, where yk is the value in the k-th dimension of the word vector of the j-th word of the answer text sa. Finally, the question text and the answer text are expressed as word vector matrices.
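As a concrete illustration of this input layer, the sketch below zero-pads token-id sequences to a common length s and looks the ids up in a toy embedding matrix. The vocabulary, ids and dimensions are invented for the example, not taken from the paper.

```python
import numpy as np

def pad_or_truncate(ids, s, pad_id=0):
    """Pad with the 0 id (or truncate) so every sequence has length s."""
    ids = ids[:s]
    return ids + [pad_id] * (s - len(ids))

def embed(ids, E):
    """Map token ids to rows of the embedding matrix E (|V| x d)."""
    return E[np.array(ids)]

# toy vocabulary of 5 words, word vector dimension d = 4
rng = np.random.default_rng(0)
E = rng.normal(size=(5, 4))
E[0] = 0.0                                        # row 0 is the zero padding vector

q = pad_or_truncate([1, 3, 2], s=6)               # short question: padded
a = pad_or_truncate([4, 2, 1, 3, 4, 2, 1], s=6)   # long answer: truncated
Sq, Sa = embed(q, E), embed(a, E)
print(Sq.shape, Sa.shape)                         # both (6, 4): s x d matrices
```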

    3.2 Convolution Layer Based on Attention

The attention matrix A ∈ Rs×s is introduced to weight the semantic similarity between the question text and the answer text. In the attention matrix A, the i-th row represents the attention distribution of the i-th word of the question text sq relative to the answer text sa, and the j-th column represents the attention distribution of the j-th word of the answer text sa relative to the question text sq. The specific calculation of the attention matrix A is as follows:

Figure 1: The architecture of the model

Given the above attention matrix, attention feature maps with dimensions consistent with the original question text sq and answer text sa need to be generated:

where W0 ∈ Rs×d and W1 ∈ Rs×d are weight matrices that are learned and updated during model training.
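The paper's actual formulas are not reproduced in this extracted text, so the following is only a plausible numpy sketch in the spirit of ABCNN [23]: it scores every question word against every answer word with the common match score 1/(1 + Euclidean distance) and derives s × d attention feature maps via the weight matrices W0 and W1.

```python
import numpy as np

def attention_matrix(Sq, Sa):
    """A[i, j] scores question word i against answer word j.
    The match score 1/(1 + ||x - y||) is one common ABCNN-style choice."""
    diff = Sq[:, None, :] - Sa[None, :, :]          # (s, s, d) pairwise diffs
    return 1.0 / (1.0 + np.linalg.norm(diff, axis=-1))

rng = np.random.default_rng(1)
s, d = 6, 4
Sq, Sa = rng.normal(size=(s, d)), rng.normal(size=(s, d))
A = attention_matrix(Sq, Sa)                        # A in R^{s x s}

# attention feature maps with the same s x d shape as the inputs
W0, W1 = rng.normal(size=(s, d)), rng.normal(size=(s, d))
Sq_t = A @ W0        # question attention representation
Sa_t = A.T @ W1      # answer attention representation
print(A.shape, Sq_t.shape, Sa_t.shape)
```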

The question text attention representation Sq,t and the answer text attention representation Sa,t are input into parallel CNNs, the local features of the attention representations are captured through the CNNs, and new feature representations Sq,c and Sa,c are obtained. The convolution operation for the attention representation of the question text is calculated as follows (the answer text is treated the same way):

where w represents the sliding window size; zi denotes the subsequence of dimension w × d covered by the convolution kernel at position i in Sq,t = (z1, z2, ..., zi, ..., zs); Wc represents the convolution parameter matrix, where d1 is the final output dimension after convolution; b represents the bias; f represents the non-linear activation function (ReLU is used here); and ci represents the final convolved feature at position i.
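A minimal sketch of this convolution step: each window of w consecutive rows of the s × d attention representation is flattened, mapped to d1 dimensions and passed through ReLU. All shapes and weights below are toy values, not the paper's.

```python
import numpy as np

def conv1d(S, Wc, b, w):
    """Slide a window of w rows over S (s x d); each flattened w*d window
    is mapped to d1 channels and passed through ReLU."""
    s, d = S.shape
    out = []
    for i in range(s - w + 1):
        z = S[i:i + w].reshape(-1)                 # w x d subsequence at i
        out.append(np.maximum(z @ Wc + b, 0.0))    # ReLU activation f
    return np.array(out)                           # (s - w + 1) x d1

rng = np.random.default_rng(2)
s, d, w, d1 = 6, 4, 3, 5
S = rng.normal(size=(s, d))          # attention representation Sq,t
Wc = rng.normal(size=(w * d, d1))    # convolution parameter matrix
b = np.zeros(d1)
Sc = conv1d(S, Wc, b, w)
print(Sc.shape)                      # (4, 5): s - w + 1 positions, d1 channels
```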

    3.3 Long Short-Term Memory Network Layer

CNN has limitations in dealing with time-series-related tasks, which RNN can compensate for. Compared with the traditional RNN, LSTM, as a variant of RNN, can use its unique gating units to learn the long-distance dependencies of a text sequence. Therefore, this paper inputs the feature representations Sq,c and Sa,c from the convolution layer into parallel LSTMs to extract contextual semantic features and obtain the long-distance dependency features Sq,l and Sa,l of the question and answer texts, with reference to Hochreiter's LSTM structure [25]. The long-distance dependency features of the question text are calculated as follows (the answer text is treated the same way):

where xt represents the input at time step t (0 < t ≤ s − w + 1); ht−1 represents the output of the previous time step; it, ft, ot and gt represent the input gate, forget gate, output gate and candidate memory at time step t; ct and ht represent the memory state and hidden state at time step t; W represents the weight matrix to be learned; b represents the bias; and σ represents the non-linear activation function Sigmoid.
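The gate equations above can be sketched as a single LSTM step in numpy. Packing the four gates into one stacked weight matrix is an implementation choice of this example, not something prescribed by the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step: input/forget/output gates and candidate memory."""
    z = W @ np.concatenate([x_t, h_prev]) + b
    d = h_prev.size
    i_t = sigmoid(z[0*d:1*d])          # input gate  i_t
    f_t = sigmoid(z[1*d:2*d])          # forget gate f_t
    o_t = sigmoid(z[2*d:3*d])          # output gate o_t
    g_t = np.tanh(z[3*d:4*d])          # candidate memory g_t
    c_t = f_t * c_prev + i_t * g_t     # memory state c_t
    h_t = o_t * np.tanh(c_t)           # hidden state h_t
    return h_t, c_t

rng = np.random.default_rng(3)
dx, dh = 5, 8                          # toy input / hidden sizes
W = rng.normal(size=(4 * dh, dx + dh)) * 0.1
b = np.zeros(4 * dh)
h, c = np.zeros(dh), np.zeros(dh)
for x_t in rng.normal(size=(4, dx)):   # run over a short sequence
    h, c = lstm_step(x_t, h, c, W, b)
print(h.shape, c.shape)
```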

    3.4 Output Layer

In this paper, the above fused feature representation is input into the SoftMax classifier for answer quality prediction. For a given training sample with feature F and quality category label y(i) ∈ {1, ..., K} (K is the number of quality category labels), after entering the SoftMax classifier, the predicted probability distribution over the answer quality categories corresponding to feature F is obtained, as shown in formula (13):

where Ws represents the weight parameter matrix and bs represents the bias.

In model training, cross-entropy is used as the loss function to measure the model loss, and regularization is used to prevent overfitting, as shown in formula (14). The ultimate goal of model training is to minimize the cross-entropy.

where i indexes the samples; j indexes the answer quality categories; yij represents the correct quality category of the answer text; ŷij represents the predicted quality category; the last term is the L2 regularization; and θ denotes the model parameters.
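Formulas (13) and (14) are not reproduced in this extracted text, so the sketch below shows the standard form they describe: a softmax over K quality classes and a cross-entropy loss with an L2 penalty on the classifier weights. All sizes and the regularization strength are illustrative.

```python
import numpy as np

def softmax(Z):
    e = np.exp(Z - Z.max(axis=1, keepdims=True))   # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

def loss(F, y, Ws, bs, lam):
    """Mean cross-entropy over K quality classes plus L2 penalty on Ws."""
    p = softmax(F @ Ws + bs)
    ce = -np.log(p[np.arange(len(y)), y]).mean()
    return ce + lam * np.sum(Ws ** 2)

rng = np.random.default_rng(4)
n, dim, K = 10, 6, 3                      # K labels: low / medium / high
F = rng.normal(size=(n, dim))             # fused feature representations
y = rng.integers(0, K, size=n)            # gold quality labels
Ws = rng.normal(size=(dim, K)) * 0.1      # classifier weights
bs = np.zeros(K)
L_val = loss(F, y, Ws, bs, lam=1e-3)
print(round(float(L_val), 4))
```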

    4 Feature Construction and Selection

Using relevant data obtained from the Auto Home CQA, this paper constructs a feature system of extended features, evaluates it with multiple indicators, and selects effective features as the extended features for answer quality prediction.

    4.1 Feature Construction

(1) Structural Features

Structural features are obtained by direct statistics, including question length (que_length), answer length (ans_length), number of question words (que_words_num), number of answer words (ans_words_num), number of question characters (que_ch_num), number of answer characters (ans_ch_num), answer time interval (time_interval), whether the answer is adopted (is_adopt), whether the question contains a picture (has_picture) and whether the answer includes external links (has_link).

(2) User Attributes

User attributes can reflect the activity and authority of users, including the number of answers (ans_num), the number of adopted answers (adopt_num), and the number of helpful answers (help_num).

(3) Textual Features

Textual features refer to features that are contained in the text but cannot be counted directly. This paper calculates the cosine similarity between the average word vector representation of the question text and that of the answer text as an indicator of the semantic match between them (similarity).
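This similarity feature can be sketched as follows. Ignoring zero padding rows when averaging is an assumption of this example, and the vectors are random toy data.

```python
import numpy as np

def avg_vec(S):
    """Average word vector over the non-padding rows of an s x d matrix."""
    mask = np.any(S != 0, axis=1)
    return S[mask].mean(axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(5)
Sq = rng.normal(size=(6, 4)); Sq[4:] = 0.0   # question matrix, 2 padding rows
Sa = rng.normal(size=(6, 4)); Sa[3:] = 0.0   # answer matrix, 3 padding rows
sim = cosine(avg_vec(Sq), avg_vec(Sa))       # the "similarity" feature
print(round(sim, 4))
```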

    4.2 Feature Selection

In order to give the model better performance, data is usually processed with strategies such as feature selection [26], dictionary learning, and compressed sensing [27-28]. Considering the characteristics of the data, this paper uses feature selection to obtain high-quality features. The various features differ in unit and dimension and cannot be compared directly, so the features must first be made dimensionless so that features of different attributes are comparable. This paper evaluates the above features with the Pearson correlation coefficient (Corr.), L1 regularization (Lasso), L2 regularization (Ridge), the maximal information coefficient (MIC), random forest mean decrease in impurity (MDI) and recursive feature elimination (RFE), and normalizes the final scores. Tab. 1 shows the detailed evaluation of the features.

Feature screening is required to obtain more stable features [29]. This paper uses the mean value combined with a threshold for feature selection [30]. Based on the evaluation, the question/answer length and the numbers of words and characters of the question/answer all reflect the richness of the question/answer text content, so they are redundant features. Chinese sentences are composed of successive single characters, and a word may consist of one or more characters; compared with characters, words can better measure whether Chinese sentences are misspelled [31]. Therefore, the number of question/answer words is selected as the feature. Among the other features, whether the question includes pictures and whether the answer includes external links have little impact on the model. Therefore, the threshold is set at 0.12, and the features whose average score is above the threshold are adopted, namely the following eight: number of question words, number of answer words, answer time interval, whether the answer is adopted, matching degree (similarity), number of answers, number of adopted answers and number of helpful answers.
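The mean-plus-threshold selection can be sketched with purely hypothetical scores (the real values are in Tab. 1): normalize each evaluator's score column to [0, 1], average the normalized scores per feature, and keep the features whose mean clears 0.12.

```python
# Hypothetical scores from the six evaluators (Corr., Lasso, Ridge, MIC, MDI, RFE)
scores = {
    "que_words_num": [0.30, 0.25, 0.28, 0.33, 0.27, 0.31],
    "has_picture":   [0.05, 0.02, 0.04, 0.06, 0.03, 0.05],
    "is_adopt":      [0.80, 0.75, 0.78, 0.82, 0.79, 0.81],
}

def minmax(col):
    """Min-max normalize one evaluator's column to [0, 1]."""
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in col]

names = list(scores)
cols = list(zip(*scores.values()))                  # one column per evaluator
norm = list(zip(*[minmax(list(c)) for c in cols]))  # back to one row per feature
mean = {n: sum(r) / len(r) for n, r in zip(names, norm)}
selected = [n for n, m in mean.items() if m >= 0.12]  # threshold from the paper
print(selected)
```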

Table 1: Feature evaluation

    5 Experiments

    5.1 Data Set

The dataset used in this paper is a customized benchmark dataset sourced from the Auto Home CQA. It contains a total of 7,853 questions and 48,815 Q&A pairs, with a time span from April 25, 2012 to December 9, 2019. Only 4.25% of the 48,815 answers were adopted. The reason answers were not adopted is not necessarily low quality; often the questioner simply took no action to mark an answer, so the majority of answers went unadopted. Therefore, taking only whether the answer is adopted as the assessment criterion of answer quality is unreliable. Based on the literature [15], this paper evaluates the quality of the answers along 13 dimensions and manually labels the answer quality as low, medium, or high.

To verify the validity of the proposed model, this paper randomly takes 20% of the total sample as the test set and, within the remaining 80% of the data, randomly selects 10% as the validation set and the rest as the training set. Tab. 2 shows the statistics after dividing the dataset.
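The split described above can be sketched as follows; the random seed is arbitrary, and the dataset size is the 48,815 Q&A pairs stated earlier.

```python
import random

def split(n, test_frac=0.2, val_frac=0.1, seed=42):
    """20% of all indices for test; of the remaining 80%, 10% validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_test = int(n * test_frac)
    test, rest = idx[:n_test], idx[n_test:]
    n_val = int(len(rest) * val_frac)
    return rest[n_val:], rest[:n_val], test   # train, val, test

train, val, test = split(48815)               # dataset size from the paper
print(len(train), len(val), len(test))        # 35147 3905 9763
```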

Table 2: Dataset statistics

    5.2 Evaluation Indicators

This paper takes precision (P), recall (R), F1 score, macro-average precision, macro-average recall and macro-average F1 score as the evaluation indexes of answer quality, with the following calculation formulas:

where TP represents the number of samples that are actually positive and predicted positive; FP represents the number of samples that are actually negative but predicted positive; FN represents the number of samples that are actually positive but predicted negative; and i represents the sample size of the test set.
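The macro-averaged indicators can be sketched in plain Python: per-class one-vs-rest counts of TP/FP/FN give per-class P, R and F1, and the macro values are their unweighted means over the three quality classes. The labels below are illustrative toy data.

```python
def prf(tp, fp, fn):
    """Precision, recall and F1 from one-vs-rest counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def macro(y_true, y_pred, labels):
    """Per-class one-vs-rest P/R/F1, then the unweighted mean over classes."""
    per_class = []
    for c in labels:
        tp = sum(t == c and q == c for t, q in zip(y_true, y_pred))
        fp = sum(t != c and q == c for t, q in zip(y_true, y_pred))
        fn = sum(t == c and q != c for t, q in zip(y_true, y_pred))
        per_class.append(prf(tp, fp, fn))
    k = len(labels)
    return tuple(sum(s[i] for s in per_class) / k for i in range(3))

y_true = ["low", "mid", "high", "mid", "low", "high"]
y_pred = ["low", "mid", "mid",  "mid", "high", "high"]
mP, mR, mF1 = macro(y_true, y_pred, ["low", "mid", "high"])
print(round(mP, 3), round(mR, 3), round(mF1, 3))  # 0.722 0.667 0.656
```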

    5.3 Comparative Experiment Selection

This paper compares different models on the customized dataset to verify the validity of the proposed model. The Skip-gram mode of Word2vec is used to obtain word vectors: the context window is set to 5, the word vector dimension is set to 100, and the question and answer texts of the Auto Home CQA are jointly trained to obtain the pre-trained word vector representations of the question/answer texts. The comparative experiments include:

1. SVM. This paper obtains the average word vector representation of the answer text from the pre-trained word vectors and predicts its quality with SVM.

2. Random Forest. This paper obtains the average word vector representation of the answer text from the pre-trained word vectors and predicts its quality with Random Forest.

3. CNN. With reference to the model architecture of [32], this paper uses CNN to learn the vector representation of the given question text and answer text and predicts the answer quality.

4. ABCNN. With reference to the model architecture of [23], this paper uses the attention mechanism to model the interaction between question and answer, integrates it into the CNN to learn the vector representation of the answer text, and predicts its quality.

5. LSTM. This paper uses LSTM to learn the vector representation of the given question text and answer text and predicts the quality of the answer.

6. ACLSTM. The model proposed in this paper uses the attention mechanism to weight the answer/question text with the question/answer text respectively, inputs the weighted text representations into CNN and LSTM to learn the deep semantic representations of the question/answer texts, and predicts the answer quality.

This paper not only uses different models to learn the answer text representation for quality prediction, but also takes structural features, user attribute features and textual features as extended features and adds them step by step to the answer text representations produced by the different models, so as to compare the influence of different extended features on answer quality prediction.

    5.4 Results and Analysis

The key question for this research is whether the model proposed in this paper performs better in semantic matching and improves answer quality prediction. In the following subsections, we discuss the advantages of the model and the task-specific setup.

    5.4.1 Evaluation and Comparison

Tab. 3 shows the evaluation values of the different models compared with the model proposed in this paper. From Tab. 3, it can be seen that the machine learning models SVM and Random Forest have no advantage in capturing text semantics, while the mainstream deep learning models have significant advantages. Compared with the mainstream deep learning models, the model proposed in this paper performs better at predicting answer quality in the Auto Home CQA, which verifies its validity. Experiments show that the introduction of the attention mechanism can effectively measure the semantic match between Q&A pairs and capture the deep semantic representation of the text. In addition, the predictive power of the model improves significantly after adding the extended features step by step, indicating that the extended features, as supplements to the answer text, contribute greatly to the answer quality prediction task.

Fig. 2 compares the F1 scores for the different answer quality categories with SVM, Random Forest, CNN, ABCNN, LSTM and the model proposed in this paper. From the figure, it can be seen that SVM and Random Forest have no advantage in capturing the text semantics of the answers compared to the deep learning models. CNN, ABCNN, LSTM and the proposed model have comparable predictive ability for low- and high-quality answers, and generally low predictive ability for medium-quality answers. However, the model presented in this paper is significantly better than the other models at predicting medium-quality answers. This shows that the attention mechanism can learn the semantic information between question and answer: by integrating the question text with the semantic information covered by the answer text, the model can better distinguish medium-quality answers whose features are not significant but which are consistent with the question text.

    5.4.2 The Choice of Different Sources of Q&A Pairs

To better measure the semantic matching between Q&A pairs and the validity of the model, this paper uses vector representations of Q&A pairs from three different sources to calculate the cosine similarity between them as a supplementary extended feature:

Table 3: Evaluation values of the comparative experiments

1. Sim1. As with the textual features of Section 4.1, the cosine similarity is calculated from the average word vectors of the Q&A pair.

2. Sim2. In the representation layer of the model, the cosine similarity is calculated from the max-pooled word vector representations of the Q&A pair.

3. Sim3. The cosine similarity is calculated from the vector representations of the Q&A pair after model learning.
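The three variants differ only in which representation feeds the cosine. A sketch with toy random vectors (the learned representations hq and ha stand in for the model outputs):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(6)
Sq = rng.normal(size=(6, 4))     # question word vectors (toy)
Sa = rng.normal(size=(6, 4))     # answer word vectors (toy)
hq = rng.normal(size=8)          # question representation after model learning
ha = rng.normal(size=8)          # answer representation after model learning

sim1 = cosine(Sq.mean(axis=0), Sa.mean(axis=0))  # Sim1: average word vectors
sim2 = cosine(Sq.max(axis=0), Sa.max(axis=0))    # Sim2: max-pooled word vectors
sim3 = cosine(hq, ha)                            # Sim3: learned representations
print(round(sim1, 3), round(sim2, 3), round(sim3, 3))
```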

Tab. 4 shows the comparative experimental results of CNN, ABCNN, LSTM and the proposed model with the different similarities added on top of the existing extended features. The results show that the models with Sim3 perform best, followed by Sim1, with Sim2 relatively poor. This indicates that vector representations of Q&A pairs without model learning cover less semantic information, while vector representations after model learning carry deeper semantic information. In particular, the model proposed in this paper is optimal: it can combine the semantic information covered by the question text with the answer text to better measure the match of the Q&A pair. In addition, the matching of Q&A pairs measured by average word vectors is better than that measured by max-pooling.

Figure 2: Comparison of F1 scores among different answer quality categories

Table 4: Experimental results with different similarities

5.4.3 The Effect of Different Answer Text Lengths

This paper also discusses the effect of different answer text lengths on the model, with a view to reducing computational complexity. The histogram in Fig. 3 shows the text length distribution of the answer texts after word segmentation for all samples. From Fig. 3, answers with a text length of less than 300 account for about 99.25% of the total sample, so answer text lengths of 10, 50, 100, 200 and 300 are selected for comparative experiments. Since samples with an answer text length of less than 1,000 account for about 99.98% of the total sample, a comparison experiment with a text length of 1,000 was also added. When the answer text is shorter than the fixed length, it is zero-padded; conversely, it is truncated when longer than the fixed length. Fig. 4 shows the effect of different answer text lengths on the model. As can be seen from Fig. 4, as the answer text length grows from 10 to 300, the evaluation values gradually rise; when the length exceeds 300, the evaluation values level off. Therefore, the fixed text length is set to 300.

Figure 3: Histogram of answer text lengths

Figure 4: The effect of different answer text lengths on the model

    5.4.4 Visualization

Fig. 5 is a visual display of the attention matrix of a test sample, and Tab. 5 shows the attention-based text representation of the Q&A. From Fig. 5 and Tab. 5, the model captures well the parts related to the semantics of "peculiar smell," such as "charcoal package," "grapefruit peel," "open the window" and "ventilation." This further shows that the proposed answer quality prediction model can better learn the dependency between the Q&A pair and better measure the semantic match between them.

Figure 5: Attention matrix visualization

Table 5: Attention-based text representation visualization

    6 Conclusions

Aiming at existing problems such as uneven answer quality, automatic question answering services being unable to specifically analyze questions and address individualized user needs, and the resulting undermined user experience, this paper takes the Auto Home CQA as the research object and proposes an answer quality prediction model based on question-answer joint learning to filter out high-quality answers that fit the question from the content edited by the respondents. This paper uses a CNN based on the attention mechanism to learn the semantic information of Q&A pairs and extract their local joint features, and uses LSTM to extract long-distance dependency features of the Q&A pairs. The resulting vector representations of Q&A pairs carry deep semantic information and can better measure the match between the Q&A pairs. The prediction performance of the model is further improved by constructing multiple valid extended features.

However, this paper still has limitations: the extended features constructed from the Auto Home CQA are not universal. In future work, we will consider introducing more generic extended features, such as the relative position order of the answers and the emotional polarity of the question/answer texts. Moreover, the model's ability to predict medium-quality answers still needs further improvement.

Acknowledgement: The authors are thankful to their colleagues, especially Z. Chen, Y. L. Zhang and Y. F. Cen, who provided expertise that greatly assisted the research.

Funding Statement: This research was supported by the Zhejiang Provincial Natural Science Foundation of China under Grant No. LGF18F020011.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

麻豆久久精品国产亚洲av| 18美女黄网站色大片免费观看| 日日摸夜夜添夜夜添av毛片 | 久久久精品欧美日韩精品| 成人美女网站在线观看视频| 少妇丰满av| 国产日本99.免费观看| 两个人的视频大全免费| 亚洲欧美清纯卡通| 国产乱人伦免费视频| 国产伦人伦偷精品视频| 成人永久免费在线观看视频| 最近最新免费中文字幕在线| 人妻夜夜爽99麻豆av| 久久99热6这里只有精品| 国产伦精品一区二区三区视频9| 国内精品久久久久精免费| 深爱激情五月婷婷| 女同久久另类99精品国产91| 中文字幕精品亚洲无线码一区| 午夜福利欧美成人| 亚洲三级黄色毛片| 日本a在线网址| 全区人妻精品视频| 精品无人区乱码1区二区| avwww免费| 免费观看精品视频网站| 日本a在线网址| 久久精品91蜜桃| aaaaa片日本免费| 九九久久精品国产亚洲av麻豆| 国产精品电影一区二区三区| 岛国在线免费视频观看| 国产精品一区二区性色av| 欧美黄色淫秽网站| 丰满人妻一区二区三区视频av| 99久久99久久久精品蜜桃| 最近最新免费中文字幕在线| 成人国产一区最新在线观看| 桃色一区二区三区在线观看| 我的女老师完整版在线观看| 亚洲国产日韩欧美精品在线观看| 久久久久国内视频| 精品人妻1区二区| 99久久成人亚洲精品观看| 天堂√8在线中文| 亚洲va日本ⅴa欧美va伊人久久| 免费观看的影片在线观看| 最好的美女福利视频网| 国产大屁股一区二区在线视频| 亚洲无线在线观看| 亚洲经典国产精华液单 | 婷婷色综合大香蕉| 人妻丰满熟妇av一区二区三区| 国产黄色小视频在线观看| 欧美三级亚洲精品| 国产精品伦人一区二区| 欧美精品国产亚洲| 成人高潮视频无遮挡免费网站| 好男人在线观看高清免费视频| 成人三级黄色视频| 99热这里只有是精品50| 精品免费久久久久久久清纯| 麻豆久久精品国产亚洲av| 如何舔出高潮| 亚洲精品粉嫩美女一区| 国产乱人伦免费视频| 欧美xxxx性猛交bbbb| 亚洲美女视频黄频| 日韩欧美在线乱码| 性色avwww在线观看| av在线观看视频网站免费| 老师上课跳d突然被开到最大视频 久久午夜综合久久蜜桃 | 久久伊人香网站| 九九热线精品视视频播放| 在线观看66精品国产| 亚洲人成伊人成综合网2020| 国产精品一及| 久久久久免费精品人妻一区二区| 国产亚洲av嫩草精品影院| 亚洲国产精品999在线| 天堂网av新在线| 成人午夜高清在线视频| 日韩人妻高清精品专区| 亚洲美女搞黄在线观看 | 宅男免费午夜| 特级一级黄色大片| 欧美成狂野欧美在线观看| 精华霜和精华液先用哪个| 成人一区二区视频在线观看| 99国产精品一区二区蜜桃av| 91字幕亚洲| 级片在线观看| 少妇裸体淫交视频免费看高清| 国产一区二区三区视频了| 成熟少妇高潮喷水视频| 色综合站精品国产| 九九在线视频观看精品| 成年免费大片在线观看| 88av欧美| 激情在线观看视频在线高清| 亚洲成人久久爱视频| 日韩中字成人| 啦啦啦观看免费观看视频高清| 制服丝袜大香蕉在线| 每晚都被弄得嗷嗷叫到高潮| 如何舔出高潮| 精品久久久久久久久av| 在线观看66精品国产| 成年免费大片在线观看| 国产欧美日韩精品亚洲av| 国产乱人伦免费视频| 日本撒尿小便嘘嘘汇集6| 他把我摸到了高潮在线观看| 国产精品99久久久久久久久| 简卡轻食公司| 99久久精品热视频| 一本综合久久免费| 一级作爱视频免费观看| 日韩欧美一区二区三区在线观看| 国产成+人综合+亚洲专区| 一级作爱视频免费观看| 他把我摸到了高潮在线观看| 欧美绝顶高潮抽搐喷水| 熟女人妻精品中文字幕| 国产91精品成人一区二区三区| 亚洲va日本ⅴa欧美va伊人久久| 可以在线观看的亚洲视频| 久久精品国产清高在天天线| 日本熟妇午夜| 国产中年淑女户外野战色| 国产成年人精品一区二区| 亚洲av五月六月丁香网| 少妇人妻精品综合一区二区 | 国产淫片久久久久久久久 | 99久国产av精品| 亚洲无线在线观看|