
    Relation Extraction for Massive News Texts

Computers, Materials & Continua, July 2019

Libo Yin, Xiang Meng, Jianxun Li and Jianguo Sun

Abstract: With the development of information technology, including Internet technologies, the amount of textual information that people need to process daily keeps increasing. In order to automatically obtain valuable, user-relevant information from massive amounts of textual data, many researchers have conducted in-depth research on entity relation extraction. Building on existing research into word vectors and entity relation extraction methods, this paper designs and implements a method based on support vector machines (SVM) for extracting English entity relationships from massive news texts. The method converts natural-language sentences into a numerical matrix that computers can understand and process through word embedding and position embedding. Key features are then extracted, and feature vectors are constructed and sent to the SVM classifiers for relation classification. For the feature extraction step we use two different models, one based on Principal Component Analysis (PCA) and the other on Convolutional Neural Networks (CNN). We design experiments to evaluate both.

    Keywords: Entity relation extraction, relation classification, massive news texts.

    1 Introduction

Relation extraction is the task of predicting attributes and relations for entities in a sentence [Mooney and Bunescu (2005)]. For example, given the sentence "Liu works in Alibaba", a user may want to extract the semantic relationship between Liu and Alibaba. Relation extraction is the key component for building relation knowledge graphs, and it is of crucial significance to natural language processing applications such as structured search, sentiment analysis, question answering, and summarization [Liu, Shen, Wang et al. (2017)].

The relation extraction problem can be seen as a classification problem on data containing relational information. Initially, people used rule-based approaches, but implementing such approaches requires a great deal of labor from experts and linguists in specific fields [Qu, Ren, Zhang et al. (2017)]. To overcome the shortcomings of rule-based approaches, researchers began to apply machine learning methods, greatly reducing the cost of relation extraction. The machine learning methods of that period construct the training data in the form of feature vectors and then use a machine learning algorithm to build classifiers for the relationship types; hence this kind of method is also called feature-based [Lin, Shen, Liu et al. (2016)]. Later, kernel-based methods were introduced, originally in support vector machines and later extended to many other algorithms. Compared with feature-based methods, kernel-based methods do not use feature vectors as input but operate directly on the original strings, computing a kernel (similarity) function between the two target objects [Gormley, Mo and Dredze (2015)]. The fatal flaw of this type of method is that training and prediction are extremely slow, making it unsuitable for processing large amounts of textual data. Besides, among supervised machine learning approaches, CNN models have achieved state-of-the-art performance in the field of natural language processing.

In this paper, we investigate the effectiveness of PCA and a shallow CNN for relation extraction from a large corpus of news texts. We convert natural language texts without structured information into numerical representations, then extract text features on this basis. According to the extracted features, the relationship types of the entity pairs contained in the texts are classified to perform English entity relation extraction. In this process, training the model, testing it, and tuning parameters are repeated until the best extraction performance is reached.

    The contributions of this paper can be summarized as follows:

● We avoid the complex feature engineering of traditional feature extraction by using CNN and PCA.

● We avoid the vanishing gradient problem caused by deep CNNs by using a shallow CNN together with a support vector machine (SVM).

    2 Structure

In this section, we describe two SVM-based models for relation extraction: one uses PCA to obtain feature vectors, while the other uses a CNN.

    2.1 Vector representation

Let x_i be the i-th word in the sentence and e1, e2 be the two corresponding entities. Each word accesses two embedding look-up tables to get the word embedding WF_i and the position embedding PF_i. Then we concatenate the two embeddings and denote each word as a vector v_i = [WF_i, PF_i].

    2.1.1 Word embeddings

Each representation v_i corresponding to x_i is a real-valued vector. All of the vectors are encoded in an embedding matrix V_w ∈ ℝ^(d_w × |V|), where V is a fixed-sized vocabulary.

    2.1.2 Position embeddings

In relation classification, we focus on finding a relation for entity pairs. Following Zeng et al. [Zeng, Liu, Lai et al. (2014)], a PF is the combination of the relative distances of the current word to the first entity e1 and the second entity e2. For instance, in the sentence "Steve_Jobs is the founder of Apple.", the relative distances from founder to e1 (Steve_Jobs) and e2 (Apple) are 3 and -2, respectively. We then transform the relative distances into real-valued vectors by looking up a randomly initialized position embedding matrix V_p ∈ ℝ^(d_p × |P|), where P is a fixed-sized distance set. It should be noted that a word too far from the entities may be unrelated to the relation, so we clip the relative distances to a maximum value e_max and a minimum value e_min.

In the example shown in Fig. 1 it is assumed that d_w is 4 and d_p is 1. There are two position embeddings: one for e1, the other for e2. Finally, we concatenate the word embeddings and position embeddings of all words and denote a sentence of length n (padded where necessary) as a vector

v_{1:n} = v_1 ⊕ v_2 ⊕ … ⊕ v_n    (1)

where ⊕ is the concatenation operator and v_i ∈ ℝ^d (d = d_w + d_p × 2).
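To make the representation concrete, the sketch below builds the n × d sentence matrix from randomly initialized look-up tables, using the running example; the tiny vocabulary and the clipping range [e_min, e_max] are assumed values, not taken from the paper.

```python
import numpy as np

d_w, d_p = 4, 1                      # embedding sizes from the example above
e_min, e_max = -30, 30               # assumed clipping range for relative distances
vocab = {"Steve_Jobs": 0, "is": 1, "the": 2, "founder": 3, "of": 4, "Apple": 5}

rng = np.random.default_rng(0)
V_w = rng.normal(size=(d_w, len(vocab)))      # word embedding matrix V_w
n_pos = e_max - e_min + 1
V_p1 = rng.normal(size=(d_p, n_pos))          # position embeddings w.r.t. e1
V_p2 = rng.normal(size=(d_p, n_pos))          # position embeddings w.r.t. e2

def represent(tokens, e1, e2):
    """Map a tokenized sentence to its n x d matrix, d = d_w + 2 * d_p."""
    rows = []
    for i, tok in enumerate(tokens):
        wf = V_w[:, vocab[tok]]
        p1 = np.clip(i - e1, e_min, e_max) - e_min   # clipped distance to e1
        p2 = np.clip(i - e2, e_min, e_max) - e_min   # clipped distance to e2
        rows.append(np.concatenate([wf, V_p1[:, p1], V_p2[:, p2]]))
    return np.stack(rows)

sent = ["Steve_Jobs", "is", "the", "founder", "of", "Apple"]
print(represent(sent, e1=0, e2=5).shape)      # (6, 6): n = 6 words, d = 6
```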

    2.2 Extracting feature vectors by PCA

Principal Component Analysis (PCA) is a multivariate statistical analysis method that studies how to capture most of the information in data through a few linear combinations of the original variables.

PCA converts many variables into a few integrated variables by reducing the dimensionality of the original data while losing as little information as possible. The integrated variables obtained by this conversion are called principal components. Each principal component is a linear combination of the original variables, and the components are mutually uncorrelated, so they often behave better than the original variables. In this way, complex problems can be studied by considering only a few principal components, simplifying them without losing too much information and improving the efficiency of analysis.

The principle of PCA is to project the original sample data into a new space. If the sample data is represented by a matrix, this is equivalent to mapping the matrix into another coordinate system. In the new coordinate system, the original samples no longer need as many variables: the coordinates correspond to the largest linearly independent combinations of the original samples, i.e. the directions with the largest eigenvalues.

Therefore, PCA replaces the original n feature variables with a smaller number k of feature variables, projecting the n-dimensional features into a k-dimensional coordinate system. The k variables are linear combinations of the original variables, these combinations have the largest variance, and the k variables are mutually uncorrelated. PCA thus reduces n-dimensional data to k dimensions and can be used for data compression, but it must keep the information loss as small as possible. In other words, PCA seeks a projection of the original data space that maximizes the variance of the projected data, which helps retain as much valuable information as possible.

As shown in Fig. 1, after flattening the original input matrix, the PCA algorithm can compress the n × d vector into a smaller feature vector.

    Figure 1: The process of PCA for extracting features
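The projection just described can be sketched directly from the eigendecomposition of the covariance matrix; the helper below and its synthetic data are illustrative only, not the paper's implementation.

```python
import numpy as np

def pca_project(X, k):
    """Project (num_samples, n) data onto its k directions of largest variance."""
    X_centered = X - X.mean(axis=0)                    # center the data
    cov = np.cov(X_centered, rowvar=False)             # (n, n) covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    top_k = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # k largest-variance directions
    return X_centered @ top_k                          # (num_samples, k)

# e.g. 500 flattened sentence matrices of 100 words x 6 dimensions
X = np.random.default_rng(1).normal(size=(500, 600))
print(pca_project(X, k=300).shape)                     # (500, 300)
```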

    2.3 Extracting feature vectors by CNN

CNN, a common variant of traditional artificial neural networks, is an important feed-forward neural network that has been successfully applied in image recognition and speech processing. The main features of CNN are local connectivity, weight sharing, and pooling. Local connectivity is inspired by the working mechanisms of the biological visual system: the global information acquired by the human visual nerves is obtained by combining local information, a bottom-up feature extraction method. Weight sharing means that when a CNN makes local connections, different parts of the data share the same weights. On the one hand, this conforms to the process of human cognition; on the other, it removes a large number of network parameters and reduces the complexity of the model. The pooling operation discards redundant, non-critical information received during recognition without destroying the overall characteristics of the target object, further reducing computation.

    Fig.2 describes the architecture of our model with CNN to extract features.

    Figure 2: The architecture of CNN used for extracting features

    2.3.1 Convolution

Let v_{i:i+j} refer to the concatenation of words v_i, v_{i+1}, …, v_{i+j}. A convolution operation involves a filter w ∈ ℝ^(h × d), which is applied to a window of h words to produce a new feature. A feature c_i is generated from a window of words v_{i:i+h-1} by

c_i = f(w · v_{i:i+h-1} + b)    (2)

Here b ∈ ℝ is a bias term and f is a non-linear function. This filter is applied to each possible window of words from v_1 to v_n to produce a feature map c = [c_1, c_2, …, c_{n-h+1}] with c ∈ ℝ^s (s = n - h + 1).
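A minimal sketch of this convolution, with ReLU assumed as the non-linear function f and synthetic inputs:

```python
import numpy as np

def relu(a):
    return np.maximum(0.0, a)

def convolve(V, w, b, f=relu):
    """Slide filter w (h x d) over sentence matrix V (n x d); returns c (n-h+1,)."""
    n, _ = V.shape
    h = w.shape[0]
    return np.array([f(np.sum(w * V[i:i + h]) + b) for i in range(n - h + 1)])

rng = np.random.default_rng(2)
V = rng.normal(size=(100, 6))    # padded sentence: n = 100 words, d = 6
w = rng.normal(size=(3, 6))      # one filter over windows of h = 3 words
c = convolve(V, w, b=0.1)
print(c.shape)                   # (98,) = n - h + 1
```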

    2.3.2 Global pooling on feature maps

The output of the convolution operation contains both key information and redundant information. Through pooling, the redundant information can be discarded without destroying the main features of the data.

In our algorithm, we designed a global maximum pooling strategy over the feature maps. For each feature map obtained by the convolution operation, we apply a max-pooling filter of the same size as the feature map, so that each feature map finally yields a single feature value. By concatenating all these values, we obtain the feature vectors we need [Zheng, Lu, Bao et al. (2017)]. We apply a max-pooling operation on a feature map and take the maximum value ĉ = max{c}. This describes how one feature is extracted from one filter. We collect all such features into one high-level extracted feature vector z = [ĉ_1, ĉ_2, …, ĉ_m] (note that here we have m filters).
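The pooling step then collapses each feature map to its maximum; a sketch assuming m = 128 feature maps like the one produced above:

```python
import numpy as np

def global_max_pool(feature_maps):
    """Take the single maximum of each of the m feature maps and concatenate."""
    return np.array([c.max() for c in feature_maps])

rng = np.random.default_rng(3)
maps = [rng.normal(size=(98,)) for _ in range(128)]   # m = 128 filters
z = global_max_pool(maps)
print(z.shape)                                        # (128,): the vector z
```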

    2.4 SVM for relation classification

The SVM method is a traditional machine learning algorithm that plays an important role in artificial intelligence and performs excellently in many classification tasks, so it is widely used in industry. Compared with the naive Bayes, k-nearest neighbors, and decision tree algorithms, SVM not only classifies better but also offers stronger robustness, better generalization, and better handling of high-dimensional data.

The main idea of SVM is captured by Eq. (3). The vector x represents the input sample data. SVM determines a hyperplane in the space, where w = [w_1, w_2, w_3, …, w_d] is the normal vector of the hyperplane and b is its offset. For any input x_i, the hyperplane equation gives a positive or negative label y_i as the classification result:

y_i = sign(w^T x_i + b)    (3)

Fig. 3 shows the SVM classification effect in two-dimensional space. For inputs in high-dimensional space, the SVM finds high-dimensional hyperplanes with the same separating effect as the lines in Fig. 3. When the input data cannot be separated in the current dimension, the kernel method can map the data into a higher-dimensional space for classification. The kernel method makes SVM more powerful and able to solve more complex classification problems in high-dimensional space, which is one reason why SVM remains popular.

    Figure 3: SVM classification in two-dimensional space

In this paper, the corpus is divided into a training set and a testing set. Feature vectors are obtained after the texts are processed through word embedding, position embedding, and feature extraction. The feature vectors from the training set are used to train SVM classifiers constructed in a one-versus-rest or one-versus-one manner. The classifiers can perform multi-class relation classification; that is, they determine whether the type of relationship between the pair of entities in the current sample matches one of the predefined relationships. After that, the feature vectors from the testing set are sent to the trained classifiers to complete relation extraction on the testing set.
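A minimal sketch of this classification stage with Scikit-learn, assuming the feature vectors have already been extracted; the SVM parameter values follow Tab. 2, and the data here is synthetic.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-ins for extracted feature vectors and relation labels.
rng = np.random.default_rng(4)
Z_train = rng.normal(size=(1000, 128))
y_train = rng.integers(0, 52, size=1000)
Z_test = rng.normal(size=(200, 128))

# SVC trains one-versus-one pairs internally; decision_function_shape='ovr'
# exposes one-versus-rest scores, matching the two constructions above.
clf = SVC(C=500, kernel='rbf', gamma=1.3, tol=0.01,
          decision_function_shape='ovr')
clf.fit(Z_train, y_train)
print(clf.predict(Z_test)[:5])   # predicted relation type ids
```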

    Fig.4 shows the main process of our relation extraction algorithm.

    Figure 4: The steps for relation extraction with SVM

    3 Experiments

    3.1 Experimental settings

In this paper, we use the word embeddings released by Lin et al. [Lin, Shen, Liu et al. (2016)], which are trained on the NYT-Freebase corpus [Riedel, Yao and Mccallum (2010)]. We fine-tune our model using validation on the training data. The word embedding is of size 50. The input text is padded to a fixed length of 100.

3.1.1 Experimental settings for PCA-SVM

In this experiment, we used the open-source Scikit-learn machine learning package to implement the PCA and SVM we needed. We flatten the input matrix into a one-dimensional vector and then use PCA to extract features by dimensionality reduction. In this process, different dimensionality reduction ratios produce feature vectors of different sizes.
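As a sketch of this setup, the flattening, reduction, and classification steps can be chained in a Scikit-learn pipeline; the data is synthetic, and the SVM settings are borrowed from Tab. 2 since Tab. 1's values are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-ins for the embedded corpus: 2000 sentences of
# 100 words x 6 embedding dimensions, with 52 relation type labels.
rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 100, 6))
y = rng.integers(0, 52, size=2000)
X_flat = X.reshape(len(X), -1)                  # flatten to (2000, 600)

# Reduce the flattened vectors with PCA, then classify with the SVM.
model = make_pipeline(PCA(n_components=300),
                      SVC(C=500, kernel='rbf', gamma=1.3, tol=0.01))
model.fit(X_flat, y)
print(model.predict(X_flat[:5]))
```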

    Table 1: Parameter settings for PCA-SVM

    3.1.2 Experimental settings for CNN-SVM

Training is performed with the TensorFlow Adam optimizer, using a mini-batch size of 64 and an initial learning rate of 0.001. We initialize our convolutional layers following Glorot et al. [Glorot and Bengio (2010)]. The implementation uses TensorFlow 1.3. All experiments are performed on a single NVIDIA GeForce GTX 1080 GPU.

    After extracting feature vectors from the embedded matrix, we need to train our SVM model to process the feature vectors.We make use of the Scikit-learn package for machine learning to implement our SVM.
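The handoff between the two stages might be sketched as follows against the TensorFlow 1.x API; the softmax head used while training the CNN is our assumption, and only the pooled features are passed on to the SVM.

```python
import tensorflow as tf  # written against the TensorFlow 1.x API used in the paper

x = tf.placeholder(tf.float32, [None, 100, 6, 1])   # padded, embedded sentences
y = tf.placeholder(tf.int64, [None])                # relation type ids

# One layer of 128 ReLU filters spanning the full embedding width,
# Glorot-initialized, followed by global max pooling.
conv = tf.layers.conv2d(x, filters=128, kernel_size=(3, 6),
                        activation=tf.nn.relu,
                        kernel_initializer=tf.glorot_uniform_initializer())
features = tf.reduce_max(conv, axis=[1, 2])         # global max pool -> (batch, 128)

logits = tf.layers.dense(features, 52)              # 52 predefined relation types
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=y, logits=logits))
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
# Mini-batches of 64 are fed through train_op; afterwards, running `features`
# in a session yields the vectors that train the Scikit-learn SVM.
```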

    In Tab.2 we show all the parameters used in the experiments.

    Table 2: Parameter settings for CNN-SVM

SVM:
● Penalty parameter (C): 500
● Kernel type (kernel): 'rbf'
● Kernel coefficient (gamma): 1.3
● Tolerance for stopping criterion (tol): 0.01

    3.1.3 Experiments for evaluation

We compare our models against several state-of-the-art baselines.

    ● CNN-B: Our implementation of the CNN baseline [Zeng, Liu, Chen et al.(2015)] which contains one convolutional layer, and one fully connected layer.

    ● CNN+ATT: CNN-B with attention over instance learning [Lin, Shen, Liu et al.(2016)].

    ● PCNN+ATT: Piecewise CNN-B with attention over instance learning [Lin, Shen, Liu et al.(2016)].

    ● CNN: CNN model which includes one convolutional layer and three fully connected layers [Quirk and Poon (2017)].

    ● CNN-x: Deeper CNN model which has x convolutional layers.For example, CNN-9 is a model constructed with 9 convolutional layers and three fully connected layers [Feng, Guo, Qin et al.(2017)].

● PCA+SVM: Our proposed model combining PCA with an SVM.

● CNN+SVM: Our proposed model combining a CNN with an SVM.

We evaluate our models on the widely used NYT-Freebase dataset [Riedel, Yao and Mccallum (2010)]. It includes 522K training sentences, making it the largest dataset in relation extraction and the only one large enough to train our model.

    3.2 Corpus description

This dataset, containing a training set of 522,611 sentences and a testing set of 172,448 sentences, can be regarded as a large corpus of English news text, which we use to train and test our models. The dataset contains 52 predefined relationship types, meaning that we can extract these 52 types of relationships from English news texts.

    3.3 Experimental results and analysis

The feature extraction step is implemented with either the PCA method or the CNN method, and the relation classification stage uses SVMs as classifiers. In this process, the parameters of the SVM are set to their optimal values and kept fixed, while the layer settings of the CNN and the n_components of PCA are varied. We observe precision (P) and recall (R) on the training and testing sets.

In the PCA experiment, we vary the n_components value of the PCA algorithm over 100, 200, 300 and 400, thereby extracting feature vectors of dimension 100, 200, 300 and 400 from the original input data. Relation extraction is performed by using these feature vectors as inputs to the SVM. The results obtained on the training and testing sets are shown in Tab. 3.

    Table 3: Test results when n_components for PCA is different

According to Tab. 3, the features extracted by PCA serve as effective input for the SVM. Moreover, among the values from 100 to 400, n_components = 300 achieves relatively better relation extraction performance.
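The sweep over n_components might be reproduced along these lines; the data is synthetic, and macro-averaged P and R are our assumption about how Tab. 3's scores are computed.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.metrics import precision_score, recall_score

# Synthetic stand-ins for the flattened sentence embeddings and labels.
rng = np.random.default_rng(7)
X_flat = rng.normal(size=(1000, 600))
y = rng.integers(0, 52, size=1000)

for k in (100, 200, 300, 400):
    Z = PCA(n_components=k).fit_transform(X_flat)
    clf = SVC(C=500, kernel='rbf', gamma=1.3, tol=0.01).fit(Z, y)
    pred = clf.predict(Z)        # training-set predictions, one of Tab. 3's columns
    print(k,
          precision_score(y, pred, average='macro', zero_division=0),
          recall_score(y, pred, average='macro', zero_division=0))
```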

In the CNN-SVM experiment, the activation function is ReLU and the max pooling strategy is used. The experiment follows the work of Huang et al. [Huang and Wang (2017)]. The CNN is shallow, and all convolution filters have a height equal to that of the sentence matrix; the filters have different weights. Each convolutional layer in the CNN has 128 filters and thus contributes a 128-dimensional vector, so different numbers of convolution layers yield feature vectors of different dimensions. Several tests were carried out; the best results with 1 to 3 convolutional layers under the optimal SVM parameters are shown in Tab. 4.

    Table 4: Test results when the number of convolution layers for CNN is different

According to Tab. 4, as the number of convolution layers increases from 1 to 3, the P and R values also increase slightly. The best configuration reaches a P value of about 98.3% on the training set and about 61.3% on the testing set.

Table 5: P@N for relation extraction with different models. Top: models that select training data. Bottom: models without selective attention

In Tab. 5, we compare the performance of our models to the state-of-the-art baselines. Our CNN+SVM outperforms all of them, with performance comparable to the strongest baseline, the PCNN+ATT model. For a more practical evaluation, we compare the results for precision@N where N is small (1, 5, 10, 20, 50) in Tab. 6. We observe that our CNN+SVM model dominates when predicting relations in the range of higher probability. The features extracted by the CNN help the SVM classifiers better capture the meaning of the sentences and extract the relation. Our PCA+SVM model also achieves a reasonable score, but not one good enough to make it the preferred choice.
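Precision@N itself is straightforward to compute once predictions are ranked by classifier confidence; a sketch with synthetic scores:

```python
import numpy as np

# Rank predictions by confidence and measure precision over the top N,
# as in Tabs. 5 and 6; scores and correctness flags here are synthetic.
def precision_at_n(scores, correct, n):
    order = np.argsort(scores)[::-1]     # most confident predictions first
    return correct[order][:n].mean()

rng = np.random.default_rng(6)
scores = rng.random(1000)                # e.g. SVM decision-function margins
correct = rng.random(1000) < 0.6         # whether each predicted relation is right
for n in (1, 5, 10, 20, 50):
    print(n, precision_at_n(scores, correct, n))
```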

From our relation extraction experiments, we draw three important observations:

    ● The SVM classifiers can make use of the feature vectors extracted by the CNN to classify the relationships and achieve good results.

    ● The feature vectors from CNN are better than the vectors from PCA.

    ● The combination of CNN and SVM can avoid the occurrence of overfitting.

Table 6: P@N for relation extraction with different models where N is small. We obtain the result of PCNN+ATT using their public source code

    4 Conclusion

In this paper, we introduced two methods of supervised relation extraction and applied them to English news texts: one combines SVM with PCA, the other combines SVM with CNN. We showed that a CNN can extract high-quality feature vectors from the sentences for the SVM. PCA is also an acceptable way to extract the feature vectors, but it still leaves a large gap compared to CNN. With the support of feature vectors from the CNN, the performance of the SVM improved significantly. These results suggest that SVM combined with CNN has a positive effect on relation extraction problems.

Acknowledgement: This work is supported by the Fundamental Research Funds for the Central Universities (HEUCFG201827, HEUCFP201839) and the Natural Science Foundation of Heilongjiang Province of China (JJ2018ZR1437).

    References

Feng, X.; Guo, J.; Qin, B.; Liu, T.; Liu, Y. (2017): Effective deep memory networks for distant supervised relation extraction. Proceedings of the 26th International Joint Conference on Artificial Intelligence, pp. 4002-4008.

Glorot, X.; Bengio, Y. (2010): Understanding the difficulty of training deep feedforward neural networks. Journal of Machine Learning Research, vol. 9, pp. 249-256.

Gormley, M. R.; Mo, Y.; Dredze, M. (2015): Improved relation extraction with feature-rich compositional embedding models. Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1774-1784.

Huang, Y. Y.; Wang, W. Y. (2017): Deep residual learning for weakly-supervised relation extraction. http://www.techscience.com/books/mlpg_atluri.html.

Lin, Y.; Shen, S.; Liu, Z.; Luan, H.; Sun, M. (2016): Neural relation extraction with selective attention over instances. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 2124-2133.

Liu, S.; Shen, F.; Wang, Y.; Rastegar-Mojarad, M.; Elayavilli, R. et al. (2017): Attention-based neural networks for chemical protein relation extraction. Proceedings of the 4th BioCreative.

Mooney, R. J.; Bunescu, R. C. (2005): Subsequence kernels for relation extraction. Proceedings of the International Conference on Neural Information Processing Systems.

Qu, M.; Ren, X.; Zhang, Y.; Han, J. (2017): Weakly-supervised relation extraction by pattern-enhanced embedding learning. Proceedings of the World Wide Web Conference, pp. 1257-1266.

Quirk, C.; Poon, H. (2017): Distant supervision for relation extraction beyond the sentence boundary. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, vol. 1, pp. 1171-1182.

Riedel, S.; Yao, L.; Mccallum, A. (2010): Modeling relations and their mentions without labeled text. Lecture Notes in Computer Science, vol. 6323, no. 3, pp. 148-163.

Zeng, D.; Liu, K.; Chen, Y.; Zhao, J. (2015): Distant supervision for relation extraction via piecewise convolutional neural networks. Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1753-1762.

Zeng, D.; Liu, K.; Lai, S.; Zhou, G.; Zhao, J. (2014): Relation classification via convolutional deep neural network. Proceedings of COLING 2014: Technical Papers, pp. 2335-2344.

Zheng, S.; Hao, Y.; Lu, D.; Bao, H.; Xu, J. et al. (2017): Joint entity and relation extraction based on a hybrid neural network. Neurocomputing, vol. 257, pp. 59-66.
