
    Adversarial Learning for Distant Supervised Relation Extraction

Computers, Materials & Continua, 2018, Issue 4

Daojian Zeng, Yuan Dai, Feng Li, R. Simon Sherratt and Jin Wang

    1 Introduction

There have been many methods proposed for relation extraction. Among them, the supervised paradigm has been shown to be effective and yields relatively high performance [Kambhatla (2004); Zhou, Su, Zhang et al. (2005)]. However, supervision typically requires a large amount of labeled training data, and manually annotating such data is time-consuming and labor-intensive. In addition, since manually labeled training data is often domain dependent, the resulting model tends to be biased toward a specific domain.

Figure 1: A training example

To address the shortcomings of the supervised paradigm, the distantly supervised paradigm [Mintz, Bills, Snow et al. (2009)] was proposed to automatically generate training data. After obtaining the distant supervised labeled data, traditional methods applied supervised models to elaborately handcrafted features [Mintz, Bills, Snow et al. (2009); Riedel, Yao and McCallum (2010); Hoffmann, Zhang, Ling et al. (2011); Surdeanu, Tibshirani, Nallapati et al. (2012)]. These features are often derived from off-the-shelf Natural Language Processing (NLP) tools, which inevitably produce errors that have a negative impact on classification accuracy.

With the recent revival of interest in neural networks, many researchers have investigated the possibility of using neural networks to automatically learn features for relation classification [Zeng, Liu, Lai et al. (2014); Zeng, Liu, Chen et al. (2015); Jiang, Wang, Li et al. (2016); Lin, Shen, Liu et al. (2016)]. These approaches generally use a softmax classifier with cross-entropy loss, which inevitably brings the noise of the artificial class NA into the classification process. To address this shortcoming, a classifier with ranking loss has been applied to relation extraction [Zeng, Zeng and Dai (2017)]. Generally, a ranking loss function requires a negative class to train the model. Randomly selecting a relation, or selecting the relation with the highest score among all incorrect relations, are two common methods for generating the negative class.

Unfortunately, these approaches are not ideal: the sampled relation may be completely unrelated to the two given entities, so the majority of the generated negative classes can be easily discriminated from the positive class. The quality of the selected negative class is thus often poor, and it contributes little to training. Cai et al. [Cai and Wang (2017)] and Jia et al. [Jia and Liang (2017)] have found that the quality of the negative class (pattern) is very important in a discriminative architecture. For example, in Fig. 1 the positive relation between Obama and Hawaii is /people/person/place_of_birth (ID 1). Obviously, there can be no /business/company/founders relation between a Person and a Location, so it is a poor negative relation (ID 3). In contrast, the birthplace of a person is often also the place where he lives, so /people/person/place_lived is a high-quality negative relation that can be used to improve the model's discrimination (ID 2).

In this paper, we provide a generic solution to improve the training of ranking-based DSRE. Inspired by generative adversarial networks (GANs) [Goodfellow, Pouget-Abadie, Mirza et al. (2014)], we propose a novel adversarial learning framework for this task and use a neural network as the negative label generator to assist the training of our desired model, which acts as the discriminator in the GAN. More specifically, we use a two-layer fully connected neural network as the generator to supply higher-quality negative labels, and we adopt piecewise convolutional neural networks (PCNNs) as the discriminator to classify the final relation. Through the alternating optimization of the generator and the discriminator, the generator learns to produce more and more discriminable negative classes, and the discriminator has to become better as well. Since the generator has a discrete generation step, we cannot directly use a gradient-based approach to back-propagate the errors. We therefore consider a one-step reinforcement learning setting and use the REINFORCE method to achieve this goal [Williams (1992)].

In sum, the main contributions of this paper are threefold:

● We combine a GAN with the ranking-based approach and propose a new paradigm for DSRE. To the best of our knowledge, this is the first attempt to apply adversarial learning to this task.

● We show that the generator can consistently provide high-quality negative classes, which is crucial for the discriminator to improve DSRE.

● Empirically, we perform experiments on a widely used dataset to verify the adversarial learning approach. Experimental results show that our approach obtains state-of-the-art performance on the dataset.

    2 Related work

    2.1 Relation extraction

Relation extraction is one of the most important topics in NLP. Supervised approaches are the most commonly used methods for relation extraction and yield relatively high performance [Bunescu and Mooney (2005); Zelenko, Aone and Richardella (2003); Zhou, Su, Zhang et al. (2005)]. In the supervised paradigm, relation extraction is treated as a multi-class classification problem and may suffer from a lack of labeled data for training. To address this issue, Mintz et al. [Mintz, Bills, Snow et al. (2009)] adopt Freebase to perform distant supervision. The algorithm for generating training data sometimes suffers from the wrong label problem. To address this shortcoming, later work [Riedel, Yao and McCallum (2010); Hoffmann, Zhang, Ling et al. (2011); Surdeanu, Tibshirani, Nallapati et al. (2012)] develops relaxed distant supervision assumptions for multi-instance learning. Nguyen et al. [Nguyen and Moschitti (2011)] utilize relation definitions and Wikipedia documents to improve their systems.

The methods mentioned above have been shown to be effective for DSRE. However, their performance depends strongly on the quality of the designed features. Recently, deep learning has made great strides in many tasks [Gurusamy and Subramaniam (2017); Yuan, Li, Wu et al. (2017)], and many researchers have attempted to use deep neural networks to automatically learn features for DSRE. Zeng et al. [Zeng, Liu, Lai et al. (2014)] adopt CNNs to embed the semantics of sentences. Moreover, Santos et al. [Santos, Xiang and Zhou (2015)] propose a pairwise ranking loss function for CNNs to reduce the impact of the artificial class. These methods build classifiers on sentence-level annotated data, which cannot be directly applied to DSRE, since multiple sentences corresponding to a single fact may be retrieved in the data generating procedure. Therefore, Zeng et al. [Zeng, Liu, Chen et al. (2015)] incorporate multi-instance learning into a neural network model, which can build a relation extractor based on distant supervision data. Although this method achieves significant improvement in relation extraction, it only selects the most likely sentence for each entity pair in its multi-instance learning paradigm. To address this issue, Lin et al. [Lin, Shen, Liu et al. (2016)] propose sentence-level attention over multiple instances in order to utilize all informative sentences. Jiang et al. [Jiang, Wang, Li et al. (2016)] employ cross-sentence max-pooling to select features across different instances and then aggregate the most significant features for each entity pair.

The aforementioned works, especially the neural network approaches, have greatly promoted the development of relation extraction. However, they do not pay attention to the noise of the artificial class NA, which is unfortunately very common in DSRE. Zeng et al. [Zeng, Zeng and Dai (2017)] propose a ranking loss with cost-sensitive learning to address the noise of NA. They select the relation with the highest score among all incorrect relations as the negative label. This approach is not ideal because the quality of the selected label is often poor. In this paper, we propose a novel pair-wise ranking loss whose negative samples are provided by the generator of a GAN.

    2.2 Generative adversarial networks

GANs [Goodfellow, Pouget-Abadie, Mirza et al. (2014)] were originally proposed for generating samples in a continuous space such as images. A GAN consists of two parts, the generator and the discriminator. The generator accepts a noise input and outputs an image. The discriminator is a classifier that labels images as "true" (from the ground truth set) or "fake" (generated by the generator). When training a GAN, the generator and the discriminator play a minimax game, in which the generator tries to generate "real" images to deceive the discriminator, and the discriminator tries to tell them apart from ground truth images. GANs are also capable of generating samples satisfying certain requirements, as in the conditional GAN [Mirza and Osindero (2014)]. It is not possible to use GANs in their original form for generating discrete samples such as natural language sentences or knowledge graph triples, because the discrete sampling step prevents gradients from propagating back to the generator. SeqGAN [Yu, Zhang, Wang et al. (2017)] is one of the first successful solutions to this problem; it uses reinforcement learning, training the generator with policy gradient. Likewise, our framework relies on policy gradient to train the generator, which provides discrete negative labels.

    3 Task definition

DSRE is usually considered a multi-instance learning problem. In the multi-instance learning paradigm, all sentences labeled by a relation triplet constitute a bag, and each sentence is called an instance.

Suppose that there are $N$ bags $\{B_1, B_2, \dots, B_N\}$ in the training set and that the $i$-th bag $B_i$ contains $q_i$ instances. The objective of multi-instance learning is to predict the labels of unseen bags. We need to learn a relation extractor from the training data and then use it to predict relations for the test set.
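To make the multi-instance setting concrete, the following Python sketch shows one way such a training set of bags could be represented; the class and field names are illustrative, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Instance:
    """One sentence mentioning the entity pair, stored as word tokens."""
    tokens: List[str]
    e1_pos: int  # index of the first entity in the token list
    e2_pos: int  # index of the second entity in the token list

@dataclass
class Bag:
    """All instances aligned to one (e1, relation, e2) triplet."""
    e1: str
    e2: str
    relation: str               # the distantly supervised bag label r_i
    instances: List[Instance]   # the q_i instances

# A tiny training set of N = 1 bag, echoing the Fig. 1 example.
train_bags = [
    Bag(e1="Obama", e2="Hawaii",
        relation="/people/person/place_of_birth",
        instances=[Instance(["Obama", "was", "born", "in", "Hawaii"], 0, 4)])
]
```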

Figure 2: GAN-based framework. (a) Generator: calculates a probability distribution over a set of candidate negative relations, then samples one relation from the distribution as the output. (b) Discriminator: receives the generated negative triple as well as the ground-truth triple (in the hexagonal box) and calculates their scores. The generator maximizes the score of the generated negative class by policy gradient, and the discriminator minimizes the margin loss between the positive and negative classes by gradient descent

Specifically, for a bag $B_i$ in the training set, we need to extract features from the bag (from one or several valid instances) and then use them to train a classifier.

    4 Methodology

Fig. 2 shows the neural network architecture used in this work. It consists of two parts: the Discriminator Network and the Generator Network. In this section, we first introduce the discriminator network and the generator network used in this paper. We then define the objective functions for the discriminator and generator respectively and explain how to alternately train the proposed model.

    4.1 Discriminator network

The discriminator in our model, shown in Fig. 3, is similar to that of Zeng et al. [Zeng, Zeng and Dai (2017)]. Unlike their model, which selects the highest score among all incorrect relations as the negative class, our model uses the negative label produced by the generator, which is described in detail in Section 4.2. In this section, we give a detailed description of the discriminator.

    4.1.1 Vector representation

The inputs of our discriminator are raw word tokens. When using neural networks, we typically transform word tokens into low-dimensional vectors. In this paper, a "word token" refers to either a word or an entity; in the following we do not distinguish them and call both "words". In our method, each input word token is transformed into a vector by looking up pre-trained word embeddings. Moreover, we use Position Features (PFs) [Zeng, Liu, Lai et al. (2014); Zeng, Liu, Chen et al. (2015)] to specify entity pairs; these are also transformed into vectors by looking up position embeddings.

    Figure 3: Discriminator network

    4.1.2 Word embeddings

Word embeddings are distributed representations of words that map each word in a text to a $k$-dimensional real-valued vector. They have been shown to capture both semantic and syntactic information about words very well, setting performance records in several word similarity tasks [Mikolov, Chen, Corrado et al. (2013); Pennington, Socher and Manning (2014)]. Using word embeddings that have been trained a priori has become common practice for enhancing many other NLP tasks [Huang, Ahuja, Downey et al. (2014)]. Over the past years, many methods for training word embeddings have been proposed [Bengio, Ducharme, Vincent et al. (2003); Collobert, Weston, Bottou et al. (2011); Mikolov, Chen, Corrado et al. (2013)]. We employ the method of Mikolov et al. [Mikolov, Chen, Corrado et al. (2013)] to train word embeddings and denote the embedding table by E.
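As an illustration, Skip-gram embeddings of this kind can be trained with the gensim library; the sketch below assumes gensim 4.x, and the two-sentence corpus is a stand-in for the real training text.

```python
from gensim.models import Word2Vec

# Stand-in corpus: one tokenized sentence per list.
corpus = [
    ["obama", "was", "born", "in", "hawaii"],
    ["obama", "lived", "in", "hawaii"],
]

# sg=1 selects the Skip-gram architecture used in the paper.
model = Word2Vec(sentences=corpus, vector_size=50, window=5,
                 min_count=1, sg=1)

E = model.wv                 # the word embedding table, denoted E in the text
print(E["obama"].shape)      # (50,)
```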

    4.1.3 Position embeddings

Zeng et al. [Zeng, Liu, Chen et al. (2015)] have shown the importance of PFs in relation extraction. Similar to their work, we use PFs to specify entity pairs. A PF is defined as the combination of the relative distances from the current word to entity $e_1$ and entity $e_2$. We randomly initialize two position embedding matrices $\mathbf{PF}_1$ and $\mathbf{PF}_2$ and transform the relative distances into vectors by looking them up.

We concatenate the word representation and the position representation as the input of the network (shown in Fig. 3(a)). Assume that the size of the word representation is $k_w$ and that of the position representation is $k_d$; then the size of a word vector is $k = k_w + 2k_d$.
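The lookup-and-concatenate step can be sketched in a few lines of numpy; the dimensions, vocabulary and clipping distance below are illustrative.

```python
import numpy as np

kw, kd = 50, 5            # word and position embedding sizes
vocab = {"obama": 0, "was": 1, "born": 2, "in": 3, "hawaii": 4}
max_dist = 30             # relative distances clipped to [-30, 30]

rng = np.random.default_rng(0)
E   = rng.normal(size=(len(vocab), kw))           # pre-trained in practice
PF1 = rng.uniform(-1, 1, (2 * max_dist + 1, kd))  # distances to entity 1
PF2 = rng.uniform(-1, 1, (2 * max_dist + 1, kd))  # distances to entity 2

def token_vectors(tokens, e1_pos, e2_pos):
    """Return a (k, |S|) matrix with one column per token, k = kw + 2*kd."""
    cols = []
    for i, tok in enumerate(tokens):
        d1 = np.clip(i - e1_pos, -max_dist, max_dist) + max_dist
        d2 = np.clip(i - e2_pos, -max_dist, max_dist) + max_dist
        cols.append(np.concatenate([E[vocab[tok]], PF1[d1], PF2[d2]]))
    return np.stack(cols, axis=1)

S = token_vectors(["obama", "was", "born", "in", "hawaii"], 0, 4)
print(S.shape)  # (60, 5), i.e. k = 50 + 2*5
```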

    4.1.4 Convolution

Assume that $\mathbf{A}$ and $\mathbf{B}$ are two matrices of the same dimensions; then the convolution of $\mathbf{A}$ and $\mathbf{B}$ is defined as

$$\mathbf{A} \otimes \mathbf{B} = \sum_{i}\sum_{j} \mathbf{A}_{ij}\,\mathbf{B}_{ij} \qquad (1)$$

We denote the input sentence by $S = \{w_1, w_2, \dots, w_{|S|}\}$, where $w_i$ is the $i$-th word, and use $\mathbf{q}_i \in \mathbb{R}^{k}$ to represent its vector. We use $\mathbf{q}_{i:j}$ to represent the matrix concatenated by the sequence $\mathbf{q}_i, \dots, \mathbf{q}_j$ ($\mathbf{x}_1 \oplus \mathbf{x}_2$ denotes the horizontal concatenation of $\mathbf{x}_1$ and $\mathbf{x}_2$). We denote the length of the filter by $w$ (Fig. 3(a) shows an example with $w = 3$); then the weight matrix of the filter is $\mathbf{W} \in \mathbb{R}^{k \times w}$. The convolution operation between the filter and the sentence $S$ results in another vector $\mathbf{c} \in \mathbb{R}^{|S|+w-1}$:

$$c_j = \mathbf{W} \otimes \mathbf{q}_{(j-w+1):j}$$

where $1 \le j \le |S| + w - 1$ and out-of-range input vectors ($i < 1$ or $i > |S|$) are taken to be zero vectors.

In experiments, we use $n$ ($n > 1$) filters (or feature maps) to capture different features of an instance. Therefore, we also need $n$ weight matrices $\{\mathbf{W}_1, \dots, \mathbf{W}_n\}$, so that all the convolution operations can be expressed by

$$c_{ij} = \mathbf{W}_i \otimes \mathbf{q}_{(j-w+1):j} \qquad (2)$$

where $1 \le i \le n$ and $1 \le j \le |S| + w - 1$. Through the convolution layer, we obtain the result vectors $\{\mathbf{c}_1, \mathbf{c}_2, \dots, \mathbf{c}_n\}$.
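The following numpy sketch implements Eq. (2) directly, padding with zero vectors for the out-of-range positions; the filter count and sizes are illustrative.

```python
import numpy as np

def convolve(S, W):
    """c_j = W (x) q_{(j-w+1):j} for 1 <= j <= |S| + w - 1 (Eq. 2)."""
    k, s = S.shape
    _, w = W.shape
    # Pad w-1 zero columns on each side so every window is well defined.
    padded = np.concatenate(
        [np.zeros((k, w - 1)), S, np.zeros((k, w - 1))], axis=1)
    return np.array([np.sum(W * padded[:, j:j + w])
                     for j in range(s + w - 1)])

rng = np.random.default_rng(0)
k, s, w, n = 60, 5, 3, 4       # vector size, |S|, filter width, filter count
S = rng.normal(size=(k, s))
filters = [rng.normal(size=(k, w)) for _ in range(n)]
C = np.stack([convolve(S, Wi) for Wi in filters])  # result vectors c_1..c_n
print(C.shape)                 # (4, 7): n vectors of length |S| + w - 1
```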

    4.1.5 Piecewise max pooling

In order to capture structural information and fine-grained features, PCNNs divide an instance into three segments according to the given entity pair (the two entities cut the sentence into three parts) and perform a max-pooling operation on each segment. The result vector $\mathbf{c}_i$ of the convolution operations is divided into three parts $\{\mathbf{c}_{i1}, \mathbf{c}_{i2}, \mathbf{c}_{i3}\}$. The piecewise max-pooling procedure is then $p_{ij} = \max(\mathbf{c}_{ij})$, where $1 \le i \le n$ and $1 \le j \le 3$.

After that, we concatenate all the vectors $\mathbf{p}_i = \{p_{i1}, p_{i2}, p_{i3}\}$ to obtain a vector $\mathbf{p} \in \mathbb{R}^{3n}$. Fig. 3(a) displays an example, in which the gray circles mark the positions of the entities. Finally, we compute the feature vector $\mathbf{b} = \tanh(\mathbf{p})$ for sentence $S$.
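A numpy sketch of the piecewise pooling step, assuming the two entity positions within the convolution output are given:

```python
import numpy as np

def piecewise_max_pool(C, p1, p2):
    """Split each result vector at the entity positions p1 < p2,
    max-pool each segment (p_ij = max(c_ij)), concatenate into
    p in R^{3n}, and apply tanh to obtain the feature vector b."""
    pooled = []
    for c in C:                              # C has shape (n, |S| + w - 1)
        for seg in (c[:p1], c[p1:p2], c[p2:]):
            pooled.append(seg.max())
    return np.tanh(np.array(pooled))         # shape (3n,)

rng = np.random.default_rng(0)
C = rng.normal(size=(4, 7))                  # n = 4 filters
b = piecewise_max_pool(C, p1=2, p2=5)
print(b.shape)                               # (12,) = 3n
```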

    4.1.6 Classifier

To compute the score for each relation, the feature vector of each instance is fed into a classifier based on a pair-wise ranking loss. Given the distributed vector representation $\mathbf{b}$ of an instance, the network computes the score for a class label $t_i$ by using the dot product:

$$s_{t_i} = \mathbf{w}_{t_i} \mathbf{b} \qquad (3)$$

where $\mathbf{w}_{t_i} \in \mathbb{R}^{1 \times 3n}$ is the class embedding for class label $t_i$. All the class embeddings constitute the class embedding matrix $\mathbf{W}^T \in \mathbb{R}^{T \times 3n}$, whose rows encode the distributed vector representations of the different class labels. $T$ is the number of possible relation types for the relation extraction system. Note that the number of dimensions in each class embedding must be equal to the size of the distributed vector representation of the input bag, $3n$. The class embedding matrix $\mathbf{W}^T$ is a parameter to be learned by the network.
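The scoring step of Eq. (3) amounts to a matrix-vector product over all classes at once; a short sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
T, three_n = 10, 12                  # relation types, feature size 3n
W_T = rng.normal(size=(T, three_n))  # class embedding matrix (learned)

def scores(b, W_T):
    """s_{t_i} = w_{t_i} . b for every class label t_i (Eq. 3)."""
    return W_T @ b                   # shape (T,)

b = rng.normal(size=three_n)         # bag feature vector
print(scores(b, W_T))
```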

    4.1.7 Instance selection

Distant supervised relation extraction suffers from the wrong label problem [Riedel, Yao and McCallum (2010)]. The core problem to be solved in multi-instance learning is how to obtain the bag feature vector from all the instance feature vectors in the bag; in effect, this is an instance selection strategy. We employ an instance selection strategy borrowed from Zeng et al. [Zeng, Liu, Chen et al. (2015)]. Different from [Zeng, Liu, Chen et al. (2015)], we randomly select an instance from a bag with the NA label, since our model does not give a score for the NA class (see Section 4.1.8). For all other bags, we choose the instance that has the highest score for the bag label. The scores are computed using Eq. (3); therefore, our instance selection strategy is not disturbed by the noise in NA. Assume that a bag $B_i$ contains $q_i$ instances with feature vectors $\{\mathbf{b}_{i1}, \mathbf{b}_{i2}, \dots, \mathbf{b}_{iq_i}\}$ and that the bag label is $r_i$. The $j$-th instance is selected, where $j$ is constrained as follows:

$$j = \arg\max_{1 \le j' \le q_i} s_{r_i}(\mathbf{b}_{ij'}) \qquad (4)$$

where $s_{r_i}(\cdot)$ is computed using Eq. (3).
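A sketch of this selection rule, including the random pick for NA bags (the NA marker value is illustrative):

```python
import numpy as np

def select_instance(bag_features, bag_label, W_T, rng, na_label=-1):
    """Pick one feature vector to represent the bag.

    bag_features: array of shape (q_i, 3n), one row per instance.
    bag_label:    relation index r_i, or na_label for an NA bag.
    """
    if bag_label == na_label:
        # Our model gives no score for NA: choose an instance at random.
        j = rng.integers(len(bag_features))
    else:
        # Eq. (4): the instance whose score for r_i is highest.
        j = int(np.argmax(bag_features @ W_T[bag_label]))
    return bag_features[j]

rng = np.random.default_rng(0)
W_T = rng.normal(size=(10, 12))
feats = rng.normal(size=(3, 12))          # a bag with q_i = 3 instances
b = select_instance(feats, bag_label=4, W_T=W_T, rng=rng)
```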

    4.1.8 Pair-wise ranking loss

The cross-entropy loss brings the noise of the artificial class into the classification process; this is mainly due to the noise of the artificial class NA. To address this shortcoming, we propose a new pairwise ranking loss to replace the cross-entropy loss that is typically used with a softmax classifier.

In our model, the network can be stated as a tuple of its parameters $\theta = (\mathbf{E}, \mathbf{PF}_1, \mathbf{PF}_2, \{\mathbf{W}_i\}_{i=1}^{n}, \mathbf{W}^T)$. Assume that there are $N$ bags in the training set and that their labels are the relations $\{r_1, r_2, \dots, r_N\}$. After instance selection, we get a representative instance whose feature vector is treated as the bag feature vector. The input for each iteration round is a bag feature vector and its class label. In a pairwise loss, the loss function is defined on pairs of objects whose labels are different, so we can obtain the loss by selecting a class label that differs from the input one. In this work, we use the negative samples generated by the generator. For example, when the $i$-th bag with ground-truth label $r_i$ is fed into the network, we get a negative class $\tilde r_i$ provided by the generator. The pair-wise ranking loss function is defined as follows:

$$L = \log\left(1 + \exp\left(\lambda (m^+ - s_{r_i})\right)\right) + \log\left(1 + \exp\left(\lambda (m^- + s_{\tilde r_i})\right)\right) \qquad (5)$$

where $\lambda$ is a scaling factor that magnifies the difference between the scores, and $m^+$ and $m^-$ are the margins for the correct and incorrect classes, respectively. Since it is very difficult to learn patterns for the artificial class NA, a softmax classifier often brings noise into the classification process of the natural classes. By using a ranking classifier, we can avoid explicitly learning patterns for the artificial class. We omit the artificial class NA by setting the first term on the right-hand side of Eq. (5) to zero and do not learn a class embedding for NA. Thus, our model does not give a score for the artificial class NA, and the noise in NA is alleviated. At prediction time, an instance is classified as NA only if all actual classes have negative scores. A bag is positively labeled if and only if the output of the network on at least one of its instances is assigned a positive label, and we choose the class that has the highest score.
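A numpy sketch of Eq. (5); np.logaddexp(0, x) computes log(1 + e^x) stably, and the constants here are illustrative (Tab. 1 lists the values actually used):

```python
import numpy as np

def ranking_loss(s_pos, s_neg, lam=2.0, m_pos=2.5, m_neg=0.5):
    """Eq. (5): log(1+e^{lam(m+ - s_pos)}) + log(1+e^{lam(m- + s_neg)})."""
    return (np.logaddexp(0.0, lam * (m_pos - s_pos))
            + np.logaddexp(0.0, lam * (m_neg + s_neg)))

def ranking_loss_na(s_neg, lam=2.0, m_neg=0.5):
    """For NA bags the first term is set to zero: only the negative
    class score is pushed down, and no NA embedding is learned."""
    return np.logaddexp(0.0, lam * (m_neg + s_neg))

print(ranking_loss(s_pos=3.0, s_neg=-1.0))  # small when ranking is right
```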

    4.2 Generator network

The aim of our generator is to provide the discriminator with high-quality negative classes to improve DSRE. In this paper, we devise a two-layer fully-connected neural network as the generator. As shown in Fig. 2(a), the input of the generator is a vector $\mathbf{v}$ that combines the embeddings of the two entities ($e_1$ and $e_2$) and the embedding of the ground-truth relation $r_t$. The nonlinear function ReLU is applied after the first layer. The output of the network is as follows:

$$\mathbf{o} = \mathbf{W}_2\, \mathrm{ReLU}(\mathbf{W}_1 \mathbf{v} + \mathbf{b}_1) + \mathbf{b}_2 \qquad (6)$$

A softmax function is added after the second layer because it adequately models the "sampling from a probability distribution" step of discrete GANs. The probability distribution over the relation set $\mathcal{R}$ is defined as:

$$p(r \mid \mathbf{v}) = \mathrm{softmax}(\mathbf{o}) \qquad (7)$$

Finally, the generator samples one relation from the distribution $p(r \mid \mathbf{v})$ as the output.
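A PyTorch sketch of such a generator (layer sizes are illustrative); torch.distributions.Categorical applies the softmax to the logits, performs the sampling step, and exposes the log-probability needed for the policy gradient of Section 4.3:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Two-layer MLP over the [e1; e2; r_t] embedding (Eqs. 6-7)."""
    def __init__(self, in_dim=150, hidden=200, num_relations=52):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),                       # ReLU after the first layer
            nn.Linear(hidden, num_relations),
        )

    def forward(self, v):
        logits = self.net(v)                                    # Eq. (6)
        dist = torch.distributions.Categorical(logits=logits)  # Eq. (7)
        r_neg = dist.sample()                # one sampled negative relation
        return r_neg, dist.log_prob(r_neg)

gen = Generator()
v = torch.randn(1, 150)                      # combined [e1; e2; r_t] vector
r_neg, log_p = gen(v)
```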

    4.3 Generative adversarial training

Intuitively, the discriminator should assign a relatively large score to the positive class and a small score to the negative class. The objective of the discriminator can be formulated as minimizing the following ranking loss function:

$$L_D = \log\left(1 + \exp\left(\lambda (m^+ - s_{r_i})\right)\right) + \log\left(1 + \exp\left(\lambda (m^- + s_{r^-})\right)\right) \qquad (8)$$

The only difference between this loss function and Eq. (5) is that the negative class $r^-$ comes from the generator. In order to encourage the generator to generate useful negative classes, the objective of the generator is to maximize the scores of the negative classes. The objective function can be formulated as maximizing the following expectation of scores for negative classes:

$$L_G = \mathbb{E}_{r^- \sim p(r \mid \mathbf{v})}\left[ s_{r^-} \right] \qquad (9)$$

Since $L_G$ involves a discrete sampling step, it cannot be directly optimized by gradient-based algorithms. Following Cai et al. [Cai and Wang (2017)], we use one-step reinforcement learning to solve this problem: $\mathbf{v}$ is the state, $p(r \mid \mathbf{v})$ is the policy, sampling a negative relation $r^-$ is the action, and the discriminator's score $s_{r^-}$ is the reward. We use the policy gradient to optimize the generator. From Eqs. (8) and (9), we can observe that in order to achieve a higher reward, the policy used by the generator will punish trivial negative classes by lowering their corresponding probabilities. In adversarial training, the generator and the discriminator are alternately trained towards their respective objectives.
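A schematic PyTorch sketch of one alternating step, assuming a discriminator disc(bag) that returns the Eq. (3) scores for all relations of one bag (all names and constants are illustrative). The generator update is one-step REINFORCE: the gradient of E[s_{r^-}] is estimated by s_{r^-} times the gradient of log p(r^- | v):

```python
import torch
import torch.nn.functional as F

def discriminator_step(disc, opt_D, bag, r_pos, r_neg,
                       lam=2.0, m_pos=2.5, m_neg=0.5):
    """Minimize the ranking loss of Eq. (8) with the generated negative."""
    s = disc(bag)                             # scores for all relations
    loss_D = (F.softplus(lam * (m_pos - s[r_pos]))
              + F.softplus(lam * (m_neg + s[r_neg]))).mean()
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

def generator_step(gen, opt_G, disc, bag, v):
    """One-step REINFORCE on Eq. (9): the reward is the score s_{r^-}."""
    r_neg, log_p = gen(v)                     # action and its log-probability
    with torch.no_grad():
        reward = disc(bag)[r_neg]             # s_{r^-} from the discriminator
    loss_G = -(reward * log_p).mean()         # ascend E[s_{r^-}]
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return r_neg
```

In each training round, generator_step supplies the negative relation that discriminator_step then uses, so the two networks are optimized alternately toward their respective objectives.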

    5 Experimental results

In this section, we first introduce the dataset and evaluation metrics, then describe the parameter settings determined via cross-validation, and finally present the experimental results and analysis.

    5.1 Dataset and evaluation metrics

We evaluate our method on a widely used dataset (http://iesl.cs.umass.edu/riedel/ecml/) that was developed by Riedel et al. [Riedel, Yao and McCallum (2010)] and has also been used in [Hoffmann, Zhang, Ling et al. (2011); Surdeanu, Tibshirani, Nallapati et al. (2012); Zeng, Liu, Chen et al. (2015)]. This dataset was generated by aligning Freebase relations with the NYT corpus, with sentences from the years 2005-2006 used as the training corpus and sentences from 2007 used as the testing corpus. Following previous work [Mintz, Bills, Snow et al. (2009)], we evaluate our approach using held-out evaluation, which compares the extracted relation instances against Freebase relation data.

    Table 1: Parameters used in our experiments

    5.2 Experimental settings

In this work, we use the Skip-gram model (word2vec, https://code.google.com/p/word2vec/) to train the word embeddings on the NYT corpus. When an entity has multiple word tokens, the tokens are concatenated using the ## operator. Position features are randomly initialized with a uniform distribution over [-1, 1]. For the convenience of comparison with baseline methods, the PCNNs module uses the same parameter settings as Zeng et al. [Zeng, Liu, Chen et al. (2015)]. We use L2 regularization with regularization parameter β = 0.001. We tune the proposed model using three-fold cross-validation to study the effects of the constant terms used in the loss function, such as the scaling factor λ. We use a grid search over manually specified subsets of the parameter spaces to determine the optimal values. Tab. 1 shows all parameters used in the experiments.

    Figure 4: Performance comparison of proposed method and baseline methods

    5.3 Baselines

We select some previous works that use handcrafted features, as well as neural-network-based methods, as baselines. Mintz, proposed by Mintz et al. [Mintz, Bills, Snow et al. (2009)], extracts features from all instances; MultiR is a multi-instance learning method proposed by Hoffmann et al. [Hoffmann, Zhang, Ling et al. (2011)]; MIML is a multi-instance multi-label method proposed by Surdeanu et al. [Surdeanu, Tibshirani, Nallapati et al. (2012)]; PCNNs+MIL is the method proposed by Zeng et al. [Zeng, Liu, Chen et al. (2015)], which incorporates multi-instance learning with PCNNs to extract bag features; CrossMax, proposed by Jiang et al. [Jiang, Wang, Li et al. (2016)], exploits PCNNs and cross-sentence max-pooling to select features across different instances; Ranking Loss+Cost-Sensitive (RC), from Zeng et al. [Zeng, Zeng and Dai (2017)], addresses the problems of NA noise and class imbalance.

    5.4 Comparison with baseline methods

In this section, we present the experimental results and comparisons with the baseline methods. In the following experiments, we use Ours to refer to the proposed model that uses the GAN-based framework.

    The held-out evaluation provides an approximate measure of precision without requiring costly human evaluation. Half of the Freebase relations are used for testing. The relation instances discovered from the test articles are automatically compared with those in Freebase. For the convenience of comparing with baseline methods, the prediction results are sorted by the classification scores and a precision-recall curve is created for the positive classes.
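The precision-recall curve can be produced, for example, with scikit-learn, treating each extracted relation instance as a prediction scored by the classifier and labeled by whether it appears in Freebase (a sketch with toy data):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Toy data: 1 = the extracted instance is in Freebase, 0 = it is not.
y_true = np.array([1, 0, 1, 1, 0, 1])
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])  # classification scores

precision, recall, _ = precision_recall_curve(y_true, scores)
for p, r in zip(precision, recall):
    print(f"precision={p:.2f} recall={r:.2f}")
```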

    Table 2: Precision, recall and F1 score of some relations

Fig. 4 shows the precision-recall curves of our approach and all the baselines. We can observe that our model outperforms all the baseline systems and improves the results significantly. It is worth emphasizing that the best of all the baseline methods achieves a recall level of 38%. In contrast, our model is much better than previous approaches and enhances the recall to approximately 41%. This significant improvement can be attributed to our new pair-wise ranking loss function: the classifier avoids explicitly learning patterns for NA, so compared with a softmax classifier our model does not tend to classify samples as NA and recalls more positive samples.

Furthermore, our model achieves a large improvement in precision, especially at higher recall levels. From Fig. 4, we can see that our model achieves a precision of 43% when the recall is 41%. In contrast, when PCNNs+MIL, CrossMax and RC achieve that precision, their recalls drop to approximately 24%, 29% and 37%, respectively. Thus, our approach is advantageous from the point of view of precision. Also note that our model shows an advantage in precision even when the recall is very low, compared with RC. This is mainly because, with adversarial training, the generator consistently provides high-quality negative classes. Therefore, we can conclude that our model outperforms all the baseline systems and improves the results significantly in terms of both precision and recall.

    5.4.1 Effects of adversarial training

In order to validate the effects of adversarial training, we compute the confusion matrix and analyze some relations in detail in Tab. 2. From Tab. 2, we can see that: (1) the model achieves better results on the majority of relations when using the adversarial training framework; (2) the F1 score shows a significant improvement on /people/person/place_lived and /people/person/place_of_birth compared with Zeng et al. [Zeng, Zeng and Dai (2017)], mainly because the generator supplies high-quality negative classes; the GAN-based framework thus helps to improve the performance in this case. The precision-recall curves with and without the GAN are illustrated in Fig. 5, from which we can also observe that using the GAN brings better performance in DSRE. After removing the GAN, our model degrades to PCNNs+MIL with the traditional ranking loss. To further validate the effects of our model, the PCNNs+MIL result is also illustrated in Fig. 5. As expected, our method achieves better performance than PCNNs+MIL. The superiority of our approach indicates that incorporating the GAN framework can effectively improve DSRE.

    Figure 5: Effects of generative adversarial training

    6 Conclusions

In this paper, we exploit a novel GAN-based framework for DSRE. In traditional ranking-based classifiers, the majority of the generated negative classes can be easily discriminated from the positive class. To address this shortcoming, we design a neural network as the negative class generator to supply high-quality negative classes. The generator learns to produce more and more discriminable negative classes, while the discriminator has to become better as well. We performed experiments on a widely used dataset to verify the adversarial learning approach. Experimental results show that our approach obtains state-of-the-art performance on the dataset.

Acknowledgments: This research work is supported by the National Natural Science Foundation of China (Nos. 61772454, 6171101570, 61602059), the Hunan Provincial Natural Science Foundation of China (No. 2017JJ3334), the Research Foundation of the Education Bureau of Hunan Province, China (No. 16C0045), and the Open Project Program of the National Laboratory of Pattern Recognition (NLPR). Professor Jin Wang is the corresponding author.

References

Bengio, Y.; Ducharme, R.; Vincent, P.; Jauvin, C. (2003): A neural probabilistic language model. Journal of Machine Learning Research, vol. 3, pp. 1137-1155.

Bunescu, R. C.; Mooney, R. J. (2005): Subsequence kernels for relation extraction. Advances in Neural Information Processing Systems, pp. 171-178.

Cai, L.; Wang, W. (2017): KBGAN: Adversarial learning for knowledge graph embeddings. Computation and Language.

Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K. et al. (2011): Natural language processing (almost) from scratch. Journal of Machine Learning Research, vol. 12, pp. 2493-2537.

Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D. et al. (2014): Generative adversarial nets. Advances in Neural Information Processing Systems, pp. 2672-2680.

Gurusamy, R.; Subramaniam, V. (2017): A machine learning approach for MRI brain tumor classification. Computers, Materials & Continua, vol. 53, no. 2, pp. 91-108.

Hoffmann, R.; Zhang, C.; Ling, X.; Zettlemoyer, L.; Weld, D. S. (2011): Knowledge-based weak supervision for information extraction of overlapping relations. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, vol. 1, pp. 541-550.

Huang, F.; Ahuja, A.; Downey, D.; Yang, Y.; Guo, Y. et al. (2014): Learning representations for weakly supervised natural language processing tasks. Computational Linguistics, vol. 40, no. 1, pp. 85-120.

Jia, R.; Liang, P. (2017): Adversarial examples for evaluating reading comprehension systems. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2021-2031.

Jiang, X.; Wang, Q.; Li, P.; Wang, B. (2016): Relation extraction with multi-instance multi-label convolutional neural networks. COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, pp. 1471-1480.

Kambhatla, N. (2004): Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations. Proceedings of the ACL 2004 on Interactive Poster and Demonstration Sessions, p. 22.

Lin, Y.; Shen, S.; Liu, Z.; Luan, H.; Sun, M. (2016): Neural relation extraction with selective attention over instances. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, vol. 1, pp. 2124-2133.

Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. (2013): Efficient estimation of word representations in vector space. Computation and Language.

Mintz, M.; Bills, S.; Snow, R.; Jurafsky, D. (2009): Distant supervision for relation extraction without labeled data. Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, vol. 2, pp. 1003-1011.

Mirza, M.; Osindero, S. (2014): Conditional generative adversarial nets. Learning, pp. 1-7.

Nguyen, T. V. T.; Moschitti, A. (2011): End-to-end relation extraction using distant supervision from external semantic repositories. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, vol. 2, pp. 277-282.

Pennington, J.; Socher, R.; Manning, C. D. (2014): GloVe: Global vectors for word representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pp. 1746-1751.

Riedel, S.; Yao, L.; McCallum, A. (2010): Modeling relations and their mentions without labeled text. Proceedings of ECML PKDD, pp. 148-163.

Santos, C. N. D.; Xiang, B.; Zhou, B. (2015): Classifying relations by ranking with convolutional neural networks. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, vol. 1, pp. 626-634.

Surdeanu, M.; Tibshirani, J.; Nallapati, R.; Manning, C. D. (2012): Multi-instance multi-label learning for relation extraction. Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 455-465.

Williams, R. J. (1992): Simple statistical gradient-following algorithms for connectionist reinforcement learning. Reinforcement Learning, pp. 5-32.

Yu, L.; Zhang, W.; Wang, J.; Yu, Y. (2017): SeqGAN: Sequence generative adversarial nets with policy gradient. Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pp. 2852-2858.

Yuan, C.; Li, X.; Wu, Q. J.; Li, J.; Sun, X. (2017): Fingerprint liveness detection from different fingerprint materials using convolutional neural network and principal component analysis. Computers, Materials & Continua, vol. 53, no. 4, pp. 357-372.

Zelenko, D.; Aone, C.; Richardella, A. (2003): Kernel methods for relation extraction. Journal of Machine Learning Research, vol. 3, pp. 1083-1106.

Zeng, D.; Liu, K.; Chen, Y.; Zhao, J. (2015): Distant supervision for relation extraction via piecewise convolutional neural networks. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 1753-1762.

Zeng, D.; Liu, K.; Lai, S.; Zhou, G.; Zhao, J. (2014): Relation classification via convolutional deep neural network. Proceedings of COLING 2014, 25th International Conference on Computational Linguistics: Technical Papers, pp. 2335-2344.

Zeng, D.; Zeng, J.; Dai, Y. (2017): Using cost-sensitive ranking loss to improve distant supervised relation extraction. Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data, pp. 184-196.

Zhou, G.; Su, J.; Zhang, J.; Zhang, M. (2005): Exploring various knowledge in relation extraction. Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pp. 427-434.
