
    An Improved Method for Web Text Affective Cognition Computing Based on Knowledge Graph

Computers, Materials & Continua, 2019, Issue 4 (2019-04-29)

    Bohan Niu and Yongfeng Huang

Abstract: The goal of research on topics such as sentiment analysis and affective cognition is to analyze, from text, the opinions, emotions, evaluations and attitudes that people hold about entities and their attributes. Word-level affective cognition has become an important topic in sentiment analysis. This paper builds a binary-relationship knowledge base by extracting (attribute, opinion word) pairs through word segmentation and dependency parsing, and labeling them with an existing emotional dictionary combined with webpage information and manual annotation. Using a knowledge embedding method, each element of the (attribute, opinion, opinion word) triple is embedded as a word vector into the knowledge graph by TransG, and an algorithm is defined to discriminate the opinion between an attribute word vector and an opinion word vector. Compared with traditional methods, this engine has the advantages of high processing speed and low memory occupancy, which makes up for the long running time and high computational complexity of earlier methods.

    Keywords: Affective cognition, fine-grained, knowledge representation, knowledge graph.

    1 Introduction

Affective cognition, also known as sentiment analysis or opinion mining, aims to analyze the emotions, opinions, evaluations and attitudes that people express about entities and their attributes. The entities involved are very extensive and can be products, services, institutions, individuals, events, problems, topics, and so on. Viewpoint information is very important to people's actions and behaviors: whether as individuals or as collectives, people often seek opinions and suggestions from others when making decisions. Therefore, the analysis of viewpoint information has very wide practical significance.

The number of evaluations for a product is often very large, and each review can be long; it is almost impossible for an individual or a business to read them all carefully. Selecting only a few comments for analysis tends to miss details or introduce personal bias, so users and businesses cannot obtain objective and comprehensive feedback. Therefore, an intuitive and efficient network text sentiment analysis mechanism is needed to analyze the reviews. By analyzing the emotions expressed by the product attributes and emotional words in the review text, a user can intuitively and quickly understand the advantages and disadvantages of the various attributes of the goods without having to read all the reviews, and thereby form a more comprehensive view of the entire product. At the same time, merchants can more quickly redesign or improve the parts that are not highly evaluated according to the users' feedback on product attributes, and so better grasp the market.

The concept of sentiment analysis was first proposed by Hatzivassiloglou et al. in 1997 [Hatzivassiloglou and McKeown (1997)]. After that, the related technologies and applications of sentiment analysis developed rapidly. With the rise and popularity of social media in recent years, a number of domestic and international top conferences have included the sentiment analysis of web texts as a theme. In 2008, Blair-Goldensohn et al. [Blair-Goldensohn, Hannan, McDonald et al. (2008)] proposed a general model of attribute-opinion relationship extraction for service-oriented reviews; the model uses sentence/phrase level emotion classification, attribute-opinion extraction and clustering processes. Kim et al. [Kim and Hovy (2004)] proposed a 4-tuple model: [Topic, Holder, Claim, Sentiment], i.e., (subject, opinion holder, statement, opinion). Liu et al. [Liu and Zhang (2012)] proposed a 5-tuple model (entity/subject, feature/aspect/attribute, sentiment polarity, publisher, publication time). Jin et al. [Jin, Ho and Srihari (2009)] proposed a novel machine learning method based on a lexicalized HMM framework that integrates multiple important language features and can predict new potential product and opinion entities based on the learned patterns. Su et al. [Su, Xu, Guo et al. (2008)] proposed a mutually reinforcing method for the problem of extracting opinions, able to cluster and optimize product features and opinion words, construct a set of the words and product features, and combine polarity dictionaries to discriminate the opinion of the set. Brody et al. [Brody and Elhadad (2010)] proposed a simple and flexible unsupervised extraction algorithm, which extracts product features by setting a certain topic and discriminates emotional tendencies based on positive and negative emotional word seed sets. With the rise of deep learning and neural networks, many researchers in the field of sentiment analysis have also applied them to the sentiment analysis of online texts. In 2015, Liu et al. [Liu and Chen (2015)] obtained opinions and attitudes on hot topics among microblog users through a Convolutional Neural Network (CNN); by using a CNN, the problems of explicit feature extraction and implicit learning in the training data are solved. Socher et al. [Socher, Perelygin, Wu et al. (2013)] proposed the Recursive Neural Tensor Network (RNTN) to solve the problem that long sentences could not be effectively interpreted in the semantic space of previous models; the accuracy of RNTN in sentiment prediction reached 80.7%, surpassing previous models. Yang et al. [Yang, Tu, Wang et al. (2017)] proposed an attention-based Long Short-Term Memory model (Attention-based LSTM) to improve the accuracy of target-dependent sentiment classification; the accuracy of the algorithm is improved by learning the distance between the target entity and its most significant feature.

It can be seen from the current research situation that previous scholars have conducted a great deal of research on sentiment analysis in fields such as computational linguistics, cognitive psychology, natural language processing, and data mining. In this paper, we take an approach different from previous studies: we apply knowledge representation learning and the TransG model to the field of affective cognition, and establish a comprehensive template rule for extracting binary relationships with the structure (attribute, opinion word), which acts as the smallest unit of emotional expression. The knowledge representation learning method transforms attributes and opinion words into word vectors. The word vectors calculated by the model then constitute a binary-relationship knowledge graph, and the derived knowledge graph is made accessible to web pages. This method addresses the long running time and high complexity of previous sentiment analysis methods, which makes it possible to complete online, web-based sentiment analysis and processing tasks.

    2 Related works

At present, most word-level affective cognition systems are built mainly on an emotional dictionary, which divides words into positive emotional words and negative emotional words and stores them in a positive sentiment dictionary and a negative sentiment dictionary respectively. We can regard this representation as expressing a sentiment word and its collocation as a vector in which only one dimension is non-zero while all others are zero; this is usually called the one-hot representation. This representation is very simple, requires no learning process, and is widely used in information retrieval and natural language processing. However, the one-hot representation has an obvious drawback: all objects are assumed to be independent of each other, which means the vectors of all objects are orthogonal to each other [Turian, Ratinov and Bengio (2010)]. After surveying the main methods of knowledge representation, we found that the Translation models fit this study well while remaining simple. Therefore, the subsequent research mainly focuses on the Translation models.
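To make this drawback concrete, here is a minimal illustrative sketch (ours, not from the paper) showing that distinct one-hot vectors are pairwise orthogonal, so this representation cannot express that any two words are more related than any other two:

```python
import numpy as np

# Hypothetical three-word vocabulary; the indices are arbitrary.
vocab = {"price": 0, "high": 1, "low": 2}

def one_hot(word, size=len(vocab)):
    """Return the one-hot vector for a word."""
    v = np.zeros(size)
    v[vocab[word]] = 1.0
    return v

# Any two distinct one-hot vectors have zero dot product, so the
# representation cannot capture that "high" and "low" are more
# related to each other than either is to "price".
print(one_hot("high") @ one_hot("low"))    # 0.0
print(one_hot("high") @ one_hot("price"))  # 0.0
```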

In 2013, Mikolov et al. [Mikolov, Chen, Corrado et al. (2013)] used the word2vec representation learning model and found that a very common translation invariance exists in the word vector space:

$C(\text{king}) - C(\text{queen}) \approx C(\text{man}) - C(\text{woman})$

where $C(W)$ represents the word vector of the word $W$ obtained using the word2vec model. This phenomenon indicates that word vectors capture a similar implicit semantic relationship between king and queen and between man and woman, and such implicit, analogous semantic relationships exist widely across the vocabulary. Inspired by this phenomenon, researchers successively proposed the Translation models. Bordes et al. [Bordes, Usunier, Garcia-Duran et al. (2013)] proposed the TransE model in 2013, in which the vector $l_r$ of the relation $r$ in each triple $(h, r, t)$ is interpreted as a translation from the head entity vector $l_h$ to the tail entity vector $l_t$, that is, $l_h + l_r \approx l_t$ for each triple $(h, r, t)$. Compared with previous knowledge representation learning models, the TransE model has great advantages in terms of computational complexity and number of parameters. Especially on large-scale sparse knowledge graphs, the performance of TransE is remarkable: in a test with WordNet as the data set, TransE's accuracy (HITS@10) reached 75.4%, far exceeding the results of previous knowledge representation learning models.
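As a concrete reference, the following is a minimal numpy sketch (ours) of the TransE idea just described: a triple is scored by the distance $\lVert l_h + l_r - l_t \rVert$ and trained with a margin loss against a corrupted triple. The dimension, margin, and learning rate are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, margin, lr = 50, 1.0, 0.01

# Toy embeddings: 4 entities and 1 relation, randomly initialized.
entity = rng.normal(size=(4, dim))
relation = rng.normal(size=(1, dim))

def score(h, r, t):
    """TransE energy: smaller means the triple is more plausible."""
    return np.linalg.norm(entity[h] + relation[r] - entity[t])

def sgd_step(golden, corrupted):
    """One margin-based SGD update on a (golden, corrupted) triple pair."""
    (h, r, t), (h2, r2, t2) = golden, corrupted
    loss = margin + score(h, r, t) - score(h2, r2, t2)
    if loss > 0:  # update only when the margin is violated
        # Gradient of the L2 distance w.r.t. the embeddings.
        g = (entity[h] + relation[r] - entity[t]) / max(score(h, r, t), 1e-9)
        g2 = (entity[h2] + relation[r2] - entity[t2]) / max(score(h2, r2, t2), 1e-9)
        entity[h] -= lr * g; entity[t] += lr * g; relation[r] -= lr * g
        entity[h2] += lr * g2; entity[t2] -= lr * g2; relation[r2] += lr * g2

sgd_step(golden=(0, 0, 1), corrupted=(2, 0, 1))
```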

Although TransE has many advantages, it also has an obvious shortcoming: because the model is so simple, it often fails to handle the complex relations (one-to-many, many-to-one, many-to-many) in a knowledge base. This is easy to see: if the relation $r$ is many-to-one, with golden triples $(h_0, r, t), (h_1, r, t), \dots, (h_m, r, t)$, the translation assumption forces $l_{h_0} \approx l_{h_1} \approx \dots \approx l_{h_m}$, and the symmetric problem arises for the tail entities when $r$ is one-to-many. This deficiency has a very large impact on the accuracy of the model.

Wang et al. [Wang, Zhang, Feng et al. (2014)] proposed the TransH model in 2014 to address the shortcoming of the TransE model mentioned above. The main idea is to project the head and tail entity vectors onto different relation-specific hyperplanes for different relations. In addition, the numbers of head and tail entities corresponding to the same relation are not necessarily equal in complex relations. Therefore, when corrupted triples are generated, the head and tail entities are not replaced uniformly at random as in TransE, but with a probability determined by the numbers of head and tail entities. The test results in Wang et al. [Wang, Zhang, Feng et al. (2014)] show that on the Freebase15k dataset, where the relations are relatively complex, TransH's accuracy for link prediction (HITS@10) is as high as 64.4%, much higher than TransE's 58.5%, reflecting the superiority of the TransH model for complex relations.
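For comparison with the TransE sketch above, here is a small sketch (our illustration, not the authors' code) of TransH's projection step: entities are projected onto the relation-specific hyperplane with unit normal $w_r$ before the translation is applied.

```python
import numpy as np

def project_to_hyperplane(v, w_r):
    """Project v onto the hyperplane whose normal vector is w_r."""
    w_r = w_r / np.linalg.norm(w_r)   # TransH constrains ||w_r|| = 1
    return v - (w_r @ v) * w_r

def transh_score(h, r, t, w_r):
    """Distance between the projected head plus relation and the projected tail."""
    return np.linalg.norm(project_to_hyperplane(h, w_r) + r
                          - project_to_hyperplane(t, w_r))
```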

Lin et al. [Lin, Liu, Sun et al. (2015)] argue that the TransH model's hypothesis of placing entities and relations in the same semantic space limits its accuracy to some extent. In order to overcome this shortcoming, they propose the TransR model, in which different relations have different semantic spaces, and the entities of different relations are projected into those different semantic spaces.

Ji et al. [Ji, He, Xu et al. (2015)] note that although the TransR model makes up for some shortcomings of the TransE and TransH models, a limitation remains: within the same relation, the head and tail entities share the same projection matrix. However, the attributes or types of the head and tail entities of a relation are often different, sometimes hugely so. The projection from the entity semantic space to the relation semantic space is the result of the interaction between entities and relations, so it is unreasonable that the projection matrix in the previous model is related only to the relation.

Due to the introduction of spatial projection, TransR has a sharp increase in model parameters compared with TransE and TransH, which greatly increases the computational complexity of the algorithm. In order to solve these problems, Ji et al. [Ji, He, Xu et al. (2015)] proposed the TransD model. For a given triple $(h, r, t)$, the TransD model sets two projection matrices $M_{rh}$ and $M_{rt}$ to project the head and tail entities respectively into the relation space. Constructing each projection matrix from two projection vectors solves the problem of excessive parameters in the TransR model.

Xiao et al. [Xiao, Huang, Hao et al. (2015)] argue that the loss function of TransE and its improved models is too simple, treating every dimension of the entity and relation vectors identically, which reduces accuracy to some extent.

Their TransA model changes the distance metric in the loss function from the $L_1$ or $L_2$ distance to the Mahalanobis distance and sets a weight matrix $W_r$. Checking the accuracy of the model on Freebase15k, the TransA model's triple prediction accuracy reaches 80.4%, and its accuracy on the Wordnet18 dataset is 94.3%, much higher than all previous models.

The TransG model, proposed by Xiao et al. [Xiao, Huang and Zhu (2016)], is the first to take the multi-semantic problem of relations into consideration. The accuracy of the TransG model on the Freebase15k dataset is 88.2%, and its accuracy on the Wordnet18 dataset is 94.9%, a significant improvement over the previous models.

3 An improved method for Web text affective cognition computing

In order to simplify the model, this paper introduces the Translation method of the knowledge graph into word-level sentiment analysis, which greatly simplifies the model and the parameters required for training by classifying the relations between words while vectorizing the words. Taking into account the different semantic characteristics of the same opinion, this paper finally chooses the TransG model, replaces the input (entity, relation, entity) triples with (attribute, opinion, opinion word) triples, and generates the corrupted triples by a self-sampling method. After running TransG, we obtain the vector of each word, and these word vectors compose the knowledge graph. By analyzing the relationship between the word vector of an attribute and that of an opinion word, we can obtain the opinion they represent. The process of generating the knowledge graph is shown in Fig. 1.

    Figure 1: The generation of knowledge graph

    3.1 Binary relationship extraction

    The Language Technology Platform (LTP) is used as a word segmentation tool, which provides a Python library, called pyltp, that can be easily integrated into existing projects and provides fast and high-accuracy Chinese word segmentation.

In addition to the word segmentation function, the Language Technology Platform also provides a dependency syntax analysis module, which reveals the syntactic structure of a sentence by analyzing the dependencies between the components in a language unit. The main idea is to take the core verb of a sentence as the origin, with the other components of the sentence depending on the core verb through certain grammatical relations. The sentence is thus regarded as a dependency syntax tree, whose nodes represent the words and whose edges represent the dependencies, reflecting the dependence between words.
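The extraction step might look like the following usage sketch. It assumes the pyltp 0.2-style API (Segmentor/Postagger/Parser objects with load()), locally downloaded LTP model files at placeholder paths, and a simplified stand-in for the paper's template rules: here a single subject-verb (SBV) pattern, which is only one plausible pattern, not the authors' full rule set.

```python
# -*- coding: utf-8 -*-
from pyltp import Segmentor, Postagger, Parser

# Paths to the LTP model files are placeholders.
segmentor = Segmentor(); segmentor.load("ltp_data/cws.model")
postagger = Postagger(); postagger.load("ltp_data/pos.model")
parser = Parser(); parser.load("ltp_data/parser.model")

sentence = u"性价比很高"  # "the cost-effectiveness is high"
words = list(segmentor.segment(sentence))
postags = list(postagger.postag(words))
arcs = parser.parse(words, postags)

# One plausible template: a subject-verb (SBV) arc from a noun
# attribute to its adjectival/verbal opinion word.
pairs = []
for i, arc in enumerate(arcs):
    if arc.relation == "SBV" and postags[i] == "n":
        head = arc.head - 1                    # arc.head is 1-based
        pairs.append((words[i], words[head]))  # (attribute, opinion word)
print(pairs)
```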

    3.2 Improvement of self-sampling

Following TransE, in order to optimize accuracy on complex relations, self-sampling is used to generate corrupted triples for training:

$\Delta' = \{(h', r, t) \mid h' \in E\} \cup \{(h, r, t') \mid t' \in E\}$ (2)

In this project, however, the effect of the above method is greatly limited. Since the head and tail replacements in the original model are selected from the whole entity set, which includes all head and tail entities, the differences between head and tail entities are so large that such random replacement easily generates corrupted triples that contain no useful information. For example, one entity set with a positive relation in this paper contains (cost-effective, high, price, low). For the golden triple (cost-effective, positive, high), Eq. (2) can easily generate the corrupted triple (high, positive, low), which contains no useful information and reduces the utilization of the data. For this reason, we propose an improved self-sampling method:

$\Delta' = \{(h', r, t) \mid h' \in H\} \cup \{(h, r, t') \mid t' \in T\}$ (3)

where $H$ is the set of head entities, that is, all the attributes, and $T$ is the set of tail entities, all the opinion words, with $H \subset E$, $T \subset E$.

By Eq. (3), the above golden triple (cost-effective, positive, high) can only generate the corrupted triples (cost-effective, positive, low) or (price, positive, high). Whether the head or the tail is replaced is determined by the probability of Eq. (4), where $hpt$ denotes the average number of head entities per tail entity and $tph$ the average number of tail entities per head entity over all relations [Wang, Zhang, Feng et al. (2014)]:

$P(\text{replace head}) = \frac{tph}{tph + hpt}, \qquad P(\text{replace tail}) = \frac{hpt}{tph + hpt}$ (4)

In this way, the utilization of the data increases significantly.
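A sketch of this improved self-sampling under the assumptions above: heads are replaced only with other attributes (the set H) and tails only with other opinion words (the set T), and the side to corrupt is drawn with the Bernoulli probability of Eq. (4). The names and statistics in the example call are illustrative.

```python
import random

def corrupt(triple, attributes, opinion_words, tph, hpt):
    """Generate one corrupted triple per Eq. (3) and Eq. (4).

    triple        -- golden (attribute, opinion, opinion_word)
    attributes    -- the head-entity set H
    opinion_words -- the tail-entity set T
    tph, hpt      -- avg. tails per head / heads per tail over all relations
    """
    h, r, t = triple
    if random.random() < tph / (tph + hpt):   # replace the head (attribute)
        h = random.choice([a for a in attributes if a != h])
    else:                                      # replace the tail (opinion word)
        t = random.choice([w for w in opinion_words if w != t])
    return (h, r, t)

# e.g. corrupt(("cost-effective", "positive", "high"),
#              ["cost-effective", "price"], ["high", "low"],
#              tph=1.4, hpt=1.2)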

    3.3 Generating knowledge graph

With the golden triples and corrupted triples, this paper uses TransG to generate the knowledge graph. TransG uses a Bayesian non-parametric infinite mixture embedding model, whose generative process is as follows:

(1) For an entity $e \in E$, initialize the entity vector, whose mean vector follows a standard normal distribution: $u_e \sim N(0, 1)$.

(2) For a triple $(h, r, t)$:

(a) Draw a semantic component by the Chinese Restaurant Process: $\pi_{r,m} \sim CRP(\beta)$.

(b) Initialize the head vector from a normal distribution: $h \sim N(u_h, \sigma_h^2 E)$.

(c) Initialize the tail vector from a normal distribution: $t \sim N(u_t, \sigma_t^2 E)$.

(d) Get the opinion vector: $u_{r,m} = t - h \sim N(u_t - u_h, (\sigma_h^2 + \sigma_t^2) E)$.

where $u_h$ and $u_t$ are the mean embedded word vectors of the attribute and the opinion word respectively, $\sigma_h^2$ and $\sigma_t^2$ are the variances of the corresponding attribute and opinion word, and $u_{r,m}$ is the m-th semantic word vector. By using the Chinese Restaurant Process (CRP), TransG can automatically detect the different semantics of the same relation, i.e., the different uses of the same opinion in this paper. In this setting, we can define the score function:

$P\{(h, r, t)\} \propto \sum_{m=1}^{M_r} \pi_{r,m} \exp\left( -\frac{\lVert u_h + u_{r,m} - u_t \rVert_2^2}{\sigma_h^2 + \sigma_t^2} \right)$ (5)

where $\pi_{r,m}$ is a weight factor representing the weight of the m-th semantic component, and $M_r$ is the total number of semantic components of the opinion $r$ learned by the CRP.
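A numpy sketch (ours) of the mixture score of Eq. (5), treating the learned means, variances, and component weights as given arrays; it mirrors the formula rather than the authors' implementation.

```python
import numpy as np

def transg_score(u_h, u_t, u_rm, pi_rm, var_h, var_t):
    """Unnormalized mixture score of a triple under TransG (Eq. (5)).

    u_h, u_t     -- mean vectors of the attribute and the opinion word
    u_rm         -- array (M_r, dim) of semantic-component vectors of opinion r
    pi_rm        -- array (M_r,) of CRP component weights
    var_h, var_t -- scalar variances of the head and the tail
    """
    d2 = np.sum((u_h + u_rm - u_t) ** 2, axis=1)         # per-component distance
    return float(pi_rm @ np.exp(-d2 / (var_h + var_t)))  # weighted sum over m
```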

In the previous models, once the word vector of the relation $r$ was determined, the geometric representation of the triple $(h, r, t)$ was also fixed, in the form $l_h + l_r \approx l_t$. In TransG, the geometric representation of the triple $(h, r, t)$ is changed to:

$l_h + u_{r, m^*_{(h,r,t)}} \approx l_t, \qquad m^*_{(h,r,t)} = \arg\max_m \left( \pi_{r,m} \exp\left( -\frac{\lVert u_h + u_{r,m} - u_t \rVert_2^2}{\sigma_h^2 + \sigma_t^2} \right) \right)$ (7)

During the training process, the maximum data likelihood principle is used. For the non-parametric part, the weight vector $\pi_r$ is generated by the CRP. For a triple $(h, r, t)$, the probability of generating a new semantic component is defined as follows:

$p(m_{r,\text{new}}) = \frac{\beta\, e^{-\frac{\lVert h - t \rVert_2^2}{\sigma_h^2 + \sigma_t^2 + 2}}}{\beta\, e^{-\frac{\lVert h - t \rVert_2^2}{\sigma_h^2 + \sigma_t^2 + 2}} + P\{(h, r, t)\}}$

where $P\{(h, r, t)\}$ is the currently calculated posterior probability. In order to better distinguish correct triples from wrong triples, the model maximizes the likelihood ratio of the golden triples to the corrupted triples. Combining all the conditions mentioned above, the training objective function of the model is:

$\min \; -\sum_{(h,r,t) \in \Delta} \ln P\{(h,r,t)\} + \sum_{(h',r,t') \in \Delta'} \ln P\{(h',r,t')\} + C \left( \sum_{r \in R} \sum_{m=1}^{M_r} \lVert u_{r,m} \rVert_2^2 + \sum_{e \in E} \lVert u_e \rVert_2^2 \right)$

where $\Delta$ is the collection of golden triples, $\Delta'$ is the collection of corrupted triples, $C$ controls the degree of scaling, $E$ is the set of entities, and $R$ is the set of relations; the weights $\pi_{r,m}$ and the variances $\sigma$ are also learned by optimizing the objective function.

In this model, we use stochastic gradient descent (SGD) to solve the optimization problem. In addition, TransG uses a trick to control the parameter update process during training: for triple pairs that already satisfy the margin, the parameter updates are skipped. Therefore, a condition similar to that of TransE is introduced in TransG, and the training algorithm updates the embedded word vectors only if the following condition is met:

$\ln P\{(h,r,t)\} - \ln P\{(h',r,t')\} \le \gamma$

where $(h, r, t) \in \Delta$, $(h', r, t') \in \Delta'$, and $\gamma$ is the training threshold (margin).
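A sketch of this skip condition, assuming the unnormalized scores from the `transg_score` sketch above (the shared normalizer cancels in the difference only approximately, so this is an illustration of the rule, not the exact training code):

```python
import numpy as np

def should_update(score_golden, score_corrupted, gamma):
    """Skip the SGD update unless the golden triple fails to beat
    the corrupted one by the margin gamma (cf. the condition above)."""
    return np.log(score_golden) - np.log(score_corrupted) <= gamma

# During training, a (golden, corrupted) pair triggers an update only
# when should_update(...) is True; "easy" pairs are skipped, which
# shortens training.
```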

Although this trick can shorten the learning time by skipping such triples, for this paper, as mentioned in the previous section, a large number of triples in the data set would be skipped under the original self-sampling method, reducing data utilization. Therefore, we changed the self-sampling method to adapt the algorithm to the purpose of this paper.

    3.4 Opinion inference

After generating the knowledge graph consisting of attributes, opinions, and opinion words by the method shown in the previous section, the knowledge graph can be used to judge the opinion expressed by an input (attribute, opinion word) pair. The geometric meaning of a triple in TransG is expressed by Eq. (7), so it is easy to obtain the geometric expression for opinion inference:

$u_{r, m^*} \approx l_t - l_h$

In Eq. (5), a scoring function for TransG has been given. In opinion inference, it is only necessary to find the opinion $r$ with the highest score among the known word vectors, namely:

$r^* = \arg\max_{r \in R} \sum_{m=1}^{M_r} \pi_{r,m} \exp\left( -\frac{\lVert u_h + u_{r,m} - u_t \rVert_2^2}{\sigma_h^2 + \sigma_t^2} \right)$ (12)

All the elements in Eq. (12) are known, so the opinion expressed by the (attribute, opinion word) pair can be inferred simply. For a knowledge base containing m attributes and n opinion words, m > n, the time required by the method in this paper is only the time needed to look up the attribute word and the opinion word, so the time complexity of the algorithm is O(m+n). For the traditional dictionary method, the whole dictionary needs to be searched to obtain the emotion expressed by the binary relation; depending on the size of the dictionary, the time complexity ranges from O(m) to O(mn). Therefore, except in the extreme case of a minimal dictionary, the method proposed in this paper gives a significant improvement in computational efficiency over the traditional dictionary method.
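The inference step then reduces to two table lookups plus an argmax over the (three) opinions, as in this sketch; `transg_score` is the function sketched earlier, and the vector tables are assumed to come from training.

```python
def infer_opinion(attribute, opinion_word, attr_vecs, word_vecs, opinions):
    """Infer the opinion of an (attribute, opinion word) pair (Eq. (12)).

    attr_vecs / word_vecs -- dicts mapping words to (mean vector, variance)
    opinions -- dict mapping each opinion name to its learned (u_rm, pi_rm)
    Each lookup is O(1), so inference over the whole vocabulary is O(m + n).
    """
    u_h, var_h = attr_vecs[attribute]
    u_t, var_t = word_vecs[opinion_word]
    return max(opinions,
               key=lambda r: transg_score(u_h, u_t, *opinions[r], var_h, var_t))

# e.g. infer_opinion("cost-effective", "high",
#                    attr_vecs, word_vecs, opinions)  # -> "positive"
```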

    4 Results

The experimental environment of this paper is a 64-bit Windows 10 OS with an Intel Xeon E3-1230 v2 processor clocked at 3.30 GHz and 16 GB of memory; the implementation language is Python 2.7, and the development environment is PyCharm Community Edition 2016.

We crawled 12,902 pieces of data from the Pacific Auto Network. After the word segmentation and labeling method introduced in the previous chapter, we finally obtained 14,115 triples stored as (attribute, opinion, opinion word) as the data set. The data set was then segmented, with 10,812 triples as the training set, 2,703 triples as the validation set, and 600 triples as the test set for subsequent training and testing. In order to test the accuracy of the model more comprehensively, this paper carried out 10-fold cross-validation: the data set was divided into 10 parts, 9 of which were taken as training data and 1 as test data for each experiment. Each test yields a corresponding accuracy, and the average of the 10 results is taken as an estimate of the accuracy of the algorithm. This method can reduce the specificity of the data set and give a more accurate evaluation. In this experiment, the triples of positive emotion and negative emotion were each first divided into 10 parts; each test set was drawn from the data sets of the two emotions, and the rest was used as the training set. Since the 14,115 (attribute, opinion, opinion word) triples cannot be divided evenly into 10 parts, the number of triples in the training set in the first 9 experiments is 12,704 with 1,411 in the test set; in the last experiment, the training set contains 12,699 triples and the test set 1,416.
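The stratified 10-fold split described above could be reproduced with scikit-learn's StratifiedKFold, which keeps the positive/negative proportions in every fold; the label array and the training callback are assumptions about how the triples would be tagged and evaluated, not the paper's code.

```python
from sklearn.model_selection import StratifiedKFold

def ten_fold_accuracy(triples, labels, train_and_eval):
    """Average accuracy over a stratified 10-fold split.

    triples        -- list of (attribute, opinion, opinion word)
    labels         -- parallel list of sentiment tags used for stratification
    train_and_eval -- callable(train_idx, test_idx) -> accuracy for one fold
    """
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    accs = [train_and_eval(tr, te) for tr, te in skf.split(triples, labels)]
    return sum(accs) / len(accs)
```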

Two tasks were used to test the accuracy of the different models: the opinion prediction task and the triple classification task. The opinion prediction task inputs the (attribute, opinion word) binary relation into the trained model and predicts the opinion in the triple (attribute, opinion, opinion word); HITS@1 is the probability that the correct opinion ranks first. The triple classification task inputs an (attribute, opinion, opinion word) triple and lets the trained model compute its score to determine whether it is a correct triple. In terms of parameters, TransE, TransH, and TransA all use the improved sampling method; the knowledge graph generated by training is 50-dimensional, the learning rate λ of the models is 0.001, and the training threshold γ is 1. The original TransG and the TransG with the improved sampling method share the same parameters: the knowledge graph generated by training has 50 dimensions, the learning rate λ is 0.001, the training threshold γ is 3.5, and the CRP factor β is 0.025. The test results are shown in Tab. 1.

    Table 1: Experiment result of different models

As can be seen from Tab. 1, all the Translation models achieve good results on the opinion prediction task. Even TransE, the simplest model, does not fall far behind the other models. On analysis, we believe this is because the relation is only ternary and the computation is not very complex, so the disadvantage of the simple model is not fully exposed here: in the TransE model, even when the distance between $l_t - l_h$ and the correct $l_r$ is large, the distance to a wrong $l_{r'}$ is even larger, so TransE still chooses the correct relation according to the loss function. Compared with the TransE model, the accuracy improvements of the TransH and TransA models are only 1.7%, which is limited. The TransG model improves on the accuracy of TransH and TransA by 2% by considering multiple semantics, a larger gain than that of TransH and TransA over TransE. Among all models, the TransG model has the highest accuracy, and in the generated $l_t - l_h$ vector diagram, TransG also shows a good clustering effect, as shown in Fig. 2.

In Fig. 2, the red points are the differences between the attribute vectors and the opinion word vectors of the positive opinion, the gray points those of the neutral opinion, and the blue points those of the negative opinion. Although the vector graph generated by TransG with the original self-sampling in Fig. 2 shows a certain clustering effect, the lattices generated by the three opinions are closely attached to each other and cannot be properly separated. In order to further improve the accuracy of the algorithm, we propose the improved sampling method. The vector graph after improving the self-sampling in TransG is shown in Fig. 3.

    Figure 2: The vector graph generated by the original TransG

Figure 3: The vector graph generated by the improved TransG

Comparing Fig. 2 with Fig. 3, the latter shows a significant improvement in clustering from the improved self-sampling method, with less scattered data, resulting in multiple dense lattices. Moreover, it can be clearly seen from the figure that both the positive and the negative emotion have one larger and denser lattice, which can be inferred to be the semantics of the most commonly used (attribute, opinion word) collocations.

From Tab. 1 it can be seen that the improved self-sampling method enhances the accuracy by 1.2% compared with the original self-sampling method, indicating that the improved sampling method does have an effect. Although the accuracy difference is not large, analysis of the vector graphs suggests that the improved sampling method will have a more obvious advantage as the data set grows. Tab. 1 also shows that in triple classification, the improved self-sampling method increases the accuracy from 79.6% to 85.3%, an obvious gain of 5.7%. Since the previous test was performed only on the 600 held-out triples, there may be special cases where the random segmentation of the data set causes the high accuracy. In order to eliminate the specificity caused by data set segmentation and prove the stability of the model proposed in this paper, the model is verified by 10-fold cross-validation. The results are shown in Tab. 2.

    Table 2: 10-fold cross-validation result

It can be seen that the average accuracy of the 10-fold cross-validation reaches 91.1%, similar to the 92% result in Tab. 1, which shows that the model has good accuracy and stability.

In order to test the speed of our algorithm, this paper randomly extracts 1,000 (attribute, opinion word) pairs from the 14,115-triple data set and uses the traditional dictionary method and our method to predict the opinions. The time cost of discriminating the opinions that the 1,000 binary relations stand for is recorded separately for each method. The results are shown in Tab. 3.

    Table 3: Algorithm time consumption comparison

As can be seen from Tab. 3, the method in this paper computes about 8% faster than the traditional dictionary method (the promotion rate is computed as (dictionary time - our method's time) / dictionary time). Since there are only 14,115 triples in the knowledge base, which is not a huge number, the advantage of our method in Tab. 3 is not very obvious. Further investigation found that, faced with collocations that do not exist in the dictionary, the dictionary method takes a long time and cannot produce a result, while the knowledge graph method still takes roughly the time shown in the table and obtains the correct result.
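The timing comparison can be reproduced with a simple wall-clock harness such as the sketch below; the two predict functions are stand-ins for the dictionary lookup and the knowledge-graph inference, and their names are ours.

```python
import time

def time_method(predict, pairs):
    """Total wall-clock time to classify all (attribute, opinion word) pairs."""
    start = time.time()
    for attribute, opinion_word in pairs:
        predict(attribute, opinion_word)
    return time.time() - start

# e.g. time_method(dictionary_predict, sampled_pairs) vs.
#      time_method(knowledge_graph_predict, sampled_pairs)
```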

    5 Conclusion

This paper designs a crawler script to obtain evaluation statements from a car review website, establishes rule templates for word segmentation, extracts the (attribute, opinion word) binary relation, and uses it as the smallest unit of emotional expression. The annotation of the experimental data set is completed by combining an existing emotion dictionary, the emotional information in the webpages, and manual labeling. Knowledge representation learning is applied to the field of affective cognition, and TransG is used to generate the knowledge graph and complete the opinion discrimination. At the same time, the sampling method of the original model is improved, so that the data set obtained in this paper is utilized more fully and the clustering ability of the model is improved. At present, only a ternary opinion (positive, negative and neutral) is judged, but emotions in real life are much more complicated than this three-way grading of emotional intensity. Therefore, increasing the categories of emotional judgment is important follow-up work worthy of further study.

Acknowledgement: This research is supported by the Key Program of the National Natural Science Foundation of China (Grant Nos. U1536201 and U1405254) and the National Natural Science Foundation of China (Grant No. 61472092).
