
The Application of Deep Learning Technology in Interdisciplinary Research

2021-01-09

Liu Xiaodong, Ni Haoran

    1. 中國科學(xué)院計算機網(wǎng)絡(luò)信息中心,信息化戰(zhàn)略發(fā)展與評估中心,北京 100190

2. Mathematics for Real-World Systems, University of Warwick, Coventry, UK

    1 Introduction/Overview

The number of published papers contributed by the Chinese Academy of Sciences (CAS) was around 70,000 in 2018 (data collected from the public database of Web of Science), a 21.8% year-on-year increase. Because the number of papers grows every year and disciplines increasingly inter-cross, manual classification of papers by discipline is becoming harder and harder. There are several difficulties with manual classification. First, research papers contain massive amounts of information, which makes the workload much heavier than in other classification tasks. Second, reading papers from specific disciplines requires professional knowledge, yet hiring experts for the classification task is expensive and impractical. Finally, the inter-crossing of disciplines makes papers hard to categorize even when they are evaluated by experts in the related fields. In the meantime, the demand for classifying papers and predicting trends is rapidly increasing. Against this background, we consider text classification from natural language processing (NLP) to solve this problem.

As an essential component of NLP applications, text classification has aroused the interest of many researchers. It has many classic application scenarios, such as public opinion monitoring (Bing & Lei, 2012) [1], intelligent recommendation systems, information filtering, and sentiment analysis (Aggarwal & Zhai, 2012) [2].

One of the main problems in text classification is word representation. The Bag-of-Words (BoW) model (Zhang et al., 2010) [3] is the pioneer in this area, where designed patterns such as uni-grams, bi-grams and n-grams are extracted as features. With higher-order n-grams and more complicated feature structures (Post & Bergsma, 2013) [4], such models can take more contextual information and word order into account. However, these models fail to capture the semantics of sentences, and the data sparsity problem remains unsolved. As a distributed representation of words, pre-trained word embeddings offer a way to capture meaningful syntactic and semantic regularities (Mikolov, Chen et al., 2013) [5] and mitigate the data sparsity problem (Bengio et al., 2003) [6]. As one of the state-of-the-art word embedding methods, word2vec (Rong, 2014) [7] includes two training models: continuous Bag-of-Words (CBoW) (Mikolov, Chen et al., 2013) [5] and skip-grams (Mikolov, Sutskever et al., 2013) [8]. In this paper, we apply the Global Vectors for Word Representation (GloVe) model proposed by Pennington et al. (2014) [9], which outperforms word2vec on the word representation task.
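To make the embedding step concrete, the following is a minimal sketch of loading pre-trained GloVe vectors from their published text format into a lookup table; the file name refers to the standard 50-dimensional release and is an assumption, not part of the paper:

```python
import numpy as np

def load_glove(path):
    """Parse a GloVe text file into a {word: vector} lookup table.

    Each line has the form: word v1 v2 ... vd (space-separated floats).
    """
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return embeddings

# Hypothetical usage with the 50-dimensional release used in this paper:
# vectors = load_glove("glove.6B.50d.txt")
# vectors["network"].shape  # -> (50,)
```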

Another problem is the classifier. Many machine-learning algorithms can be used as classifiers, such as logistic regression (LR), naive Bayes (NB) and support vector machines (SVM). However, these methods all suffer from the data sparsity problem and do not perform satisfactorily on NLP tasks. With the development of deep neural networks, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are widely used in NLP tasks as state-of-the-art classifiers due to their exceptional performance. In our research, we build a CNN (Kalchbrenner et al., 2014) [10] model and apply it to paper classification. The input of the model handles sentences of varying length. The layers include several one-dimensional convolutional layers and a k-max pooling layer. The convolutional layers apply one-dimensional filters across each row of features. The max pooling layer is a non-linear subsampling function that returns the maximum of a set of values. A convolutional layer followed by a max pooling layer and/or a non-linearity forms a basic block that computes a matrix called a feature map. In the input layer, we enrich the representation by computing multiple feature maps with disparate filters. Subsequent layers also have multiple feature maps, computed by convolving filters with all the maps of the layer below. The weights at these layers form an order-4 tensor. The resulting structure is known as a Convolutional Neural Network (Lecun et al., 1998) [11].
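A minimal PyTorch sketch of such an architecture is shown below; the layer sizes, filter widths and use of narrow (rather than wide) convolution are illustrative assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """1-D convolutions over word embeddings, max-over-time pooling,
    and a fully connected layer producing class scores."""

    def __init__(self, vocab_size, embed_dim=50, num_filters=100,
                 kernel_sizes=(3, 4, 5), num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One convolution per filter width; each output is a feature map.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes])
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, x):                      # x: (batch, seq_len) token ids
        e = self.embedding(x).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # Convolve, apply the non-linearity, then 1-max pool each map.
        pooled = [F.relu(c(e)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))  # logits over classes

# Example: scores for a batch of two padded 20-token sentences.
logits = TextCNN(vocab_size=10000)(torch.randint(0, 10000, (2, 20)))
print(logits.shape)  # torch.Size([2, 2])
```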

The networks are trained on 7,000 abstracts of research papers from 23 different subjects, labelled manually by experts at CAS. After training, the networks generally surpass 90% prediction accuracy on the hand-labelled test set. The well-trained networks are then applied to the published research papers contributed by CAS from 2012 to 2018. By labelling the papers with the classifiers of selected subjects, the results reveal the trend of interdisciplinary research.

The outline of the paper is as follows. Section 2 reviews related work, including word representation models, neural networks, optimization of hyper-parameters and related applications of text classification. Section 3 defines the classifier, the word representation model, the relevant operators and layers of the CNN, and the auto-optimization method we apply. Section 4 discusses the experiments and the results.

    2 Related Works

Text classification tasks mainly include three parts: feature engineering, feature selection and classifiers. Feature engineering was commonly based on the BoW model. Rong (2014) [7] proposed the word2vec method, which is widely accepted as the state of the art for word representation. However, it is limited in utilizing statistical information, since there are no global co-occurrence counts. Pennington et al. (2014) [9] introduced the GloVe model, which combines the advantages of latent semantic analysis (LSA) (Deerwester et al., 1990) [12] and word2vec. This word-representation method efficiently improved the accuracy of many basic NLP tasks, especially on limited training datasets. There are also other recent advanced methods, such as Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) [13]. Feature selection is applied to filter out noisy features and improve the performance of the classifiers. Common feature selection methods include removing stop words, statistical tests, mutual information and L1 regularization (Ng, 2004) [14]. For classifiers, there are several machine learning algorithms such as logistic regression (LR), naive Bayes (NB), and support vector machines (SVM). However, they all suffer from the data sparsity problem.
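For comparison, the classical sparse-feature pipeline mentioned here can be sketched with scikit-learn; the corpus and subject labels below are toy illustrations, not the paper's experimental setup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus with hypothetical subject labels (1 vs. 0) for illustration.
texts = [
    "deep learning for image classification",
    "spectral measurements of stellar atmospheres",
    "convolutional networks applied to nlp tasks",
    "sediment analysis of river basins",
]
labels = [1, 0, 1, 0]

# Uni-gram and bi-gram features; the resulting sparse matrix is exactly
# where the data sparsity problem discussed above shows up.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
baseline.fit(texts, labels)
print(baseline.predict(["neural network text classification"]))
```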

    2.1 Neural Networks in NLP

With the development of deep neural networks, the data sparsity problem has been largely solved. Many neural models have been proposed for word embedding and classification. The Recursive Neural Network (RecursiveNN) (Socher, Huang et al., 2011; Socher, Pennington et al., 2011; Socher et al., 2013) [15-17] captures the semantics of a sentence via a tree structure. However, the time complexity of constructing a textual tree is at least O(n²), which is time-consuming for long documents. The Recurrent Neural Network (RNN) (Elman, 1990) [18] analyzes the semantics of text in fixed-size hidden layers, with time complexity O(n). However, it is a biased model in which later contents are more dominant than earlier contents. To tackle this bias, the Long Short-Term Memory (LSTM) (Liu et al., 2016) [19] model was proposed, which performs better on long documents. The Convolutional Neural Network (CNN) (Kalchbrenner et al., 2014) [10] is another unbiased model introduced to NLP tasks. Although the time complexity of a CNN is also O(n), we found empirically that it outperforms the LSTM in both training time and accuracy. More advanced models include Attention Networks (Abreu et al., 2019) [1], Contextual LSTM (C-LSTM) (Ghosh et al., 2016) [20] and the Recurrent CNN (RCNN) (Lai et al., 2015) [21]. We do not discuss them further, since they are considerably more sophisticated without contributing noteworthy improvements in accuracy.

The performance of the model also depends on the choice of optimizer and hyper-parameters. There are many classic optimizers, such as Batch Gradient Descent (BGD), Stochastic Gradient Descent (SGD), Momentum, Adadelta and Adam. We apply SGD in our experiments because its sensitivity to hyper-parameters allows careful tuning to yield better model performance. Zhang and Wallace (2015) [22] present a deep sensitivity analysis and give practical advice on the selection of hyper-parameters. However, manual selection requires rich experience in NLP engineering and consumes a great deal of energy and time. Instead, we apply auto-optimization algorithms in our research, which offer decent performance while saving time and energy. The most direct algorithm is Grid Search, but it costs too many computing resources. Bergstra and Bengio (2012) [6] proposed Random Search, which improves the efficiency of optimization. The Bayesian algorithm (Snoek et al., 2012) [23] is more efficient than Random Search, but it cannot deal with continuous variables. To tackle this problem, we apply the Tree-structured Parzen Estimator (Bergstra et al., 2011; Bergstra et al., 2013) [24-25] to optimize the hyper-parameters. To exploit the parallelism of the operations, the network is trained on GPUs using PyTorch.
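The TPE algorithm is implemented in the hyperopt library; a minimal sketch follows, with an illustrative search space and a stand-in objective in place of full network training:

```python
from hyperopt import STATUS_OK, fmin, hp, tpe

# Mixed continuous and discrete search space, which TPE handles naturally.
space = {
    "lr": hp.loguniform("lr", -8, -2),            # learning rate ~ e^-8..e^-2
    "batch_size": hp.choice("batch_size", [16, 32, 64]),
    "kernel_size": hp.choice("kernel_size", [3, 4, 5]),
}

def objective(params):
    # In the real experiments this would train the CNN and return the
    # validation loss; a stand-in expression keeps the sketch runnable.
    loss = (params["lr"] - 0.01) ** 2 + 1.0 / params["batch_size"]
    return {"loss": loss, "status": STATUS_OK}

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
print(best)
```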

    3 Methods

    3.1 Classifier

For training sample sets $\{S_1, S_2, S_3, S_4\}$, where $S_n$ ($n = 1, 2, 3, 4$) is the sample set with category $c_n$, we define the classifier as a one-versus-rest model in our task.

(Table: training samples)

In our research, we train separate classifiers for different subjects and label each paper with these classifiers according to the above model. If there is more than one positive prediction for a paper, we consider that cross-discipline occurs between the labelled subjects.
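In code, this labelling rule reduces to a few lines; the sketch below assumes a hypothetical predict() interface on each per-subject binary classifier:

```python
def label_paper(abstract, classifiers):
    """Label a paper with every subject whose binary classifier fires.

    `classifiers` maps a subject name to a trained one-versus-rest
    classifier; predict() returning True for membership is a hypothetical
    interface used only for illustration.
    """
    subjects = [name for name, clf in classifiers.items()
                if clf.predict(abstract)]
    # More than one positive prediction marks the paper interdisciplinary.
    return subjects, len(subjects) > 1
```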

    3.2 Word Representation Learning

We introduce the GloVe model, proposed by Pennington et al. (2014) [9], in this section. We denote the matrix of word-word co-occurrence counts as $X$, where the entry $X_{ij}$ tabulates the number of times word $j$ occurs in the context of word $i$. The number of times any word appears in the context of word $i$ is denoted as $X_i = \sum_k X_{ik}$. We denote $w \in \mathbb{R}^d$ as the word vectors and $\tilde{w} \in \mathbb{R}^d$ as the separate context word vectors. The model is given as follows:

$$J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2$$

where $V$ is the size of the vocabulary, and $b_i$ and $\tilde{b}_j$ are bias terms. In practice, we choose the function

$$f(x) = \begin{cases} (x / x_{\max})^{\alpha} & \text{if } x < x_{\max} \\ 1 & \text{otherwise} \end{cases}$$

as the weighting function $f(X_{ij})$. We fix $x_{\max} = 100$ and $\alpha = 3/4$, as they give the best performance in our experiments (Mikolov, Chen et al., 2013) [5].
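The weighting function is simple to implement directly; a small sketch with the fixed values above:

```python
def glove_weight(x, x_max=100.0, alpha=0.75):
    """GloVe weighting function f: down-weights rare co-occurrences and
    caps the influence of very frequent pairs at 1."""
    return (x / x_max) ** alpha if x < x_max else 1.0

print(glove_weight(10))   # rare pair, weight < 1
print(glove_weight(500))  # frequent pair, weight capped at 1.0
```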

    3.3 Convolutional Neural Network

We build a model using a convolutional architecture that alternates 3 or 4 convolutional layers with pooling layers given by $k$-max pooling. The resulting architecture is the Convolutional Neural Network.

    3.3.1 Convolution

For a vector of weights $m \in \mathbb{R}^m$ and a vector of inputs $s \in \mathbb{R}^s$ representing the sentence, the one-dimensional convolution operation takes the dot product of $m$ with each $m$-gram of $s$ to obtain the vector $c$:

$$c_j = m^{\top} s_{j-m+1:j}$$

There are two types of convolution, depending on requirements. The narrow type requires $s \geq m$ and yields the sequence $c \in \mathbb{R}^{s-m+1}$, where $j$ ranges from $m$ to $s$. In the wide type of convolution, $s$ may be smaller than $m$; the inputs $s_i$ are taken to be zero when $i < 1$ or $i > s$. In this case, it yields the sequence $c \in \mathbb{R}^{s+m-1}$, where the index $j$ ranges from 1 to $s + m - 1$. Notice that the sequence $c$ of the narrow convolution is a sub-sequence of that of the wide convolution. Wide convolution has some advantages: it ensures that all weights in the filter reach the entire sentence, and it always produces a non-empty $c$, independent of the filter width $m$ and the sentence length $s$. We therefore apply the wide convolution in our model. For the sentence matrix $s \in \mathbb{R}^{d \times s}$, constructed from the embedding $w_i \in \mathbb{R}^d$ of each word, we obtain a convolutional layer by convolving a matrix of trained weights $m \in \mathbb{R}^{d \times m}$ with the activations of the layer below. The resulting matrix $c$ at each layer has dimensions $d \times (s + m - 1)$.
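The narrow and wide convolutions correspond to NumPy's "valid" and "full" modes, which makes the length relationship easy to verify (note that np.convolve flips the filter, which does not affect the output lengths illustrated here):

```python
import numpy as np

s = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # sentence vector, length s = 5
m = np.array([1.0, 0.0, -1.0])           # filter weights, width m = 3

narrow = np.convolve(s, m, mode="valid")  # length s - m + 1 = 3
wide = np.convolve(s, m, mode="full")     # length s + m - 1 = 7

print(len(narrow), len(wide))  # -> 3 7
print(narrow, wide)  # the narrow result appears inside the wide result
```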

    3.3.2 Non-Linear Activation Function

After each convolutional layer, a bias $b \in \mathbb{R}^d$ and a non-linear activation function $g$ are applied componentwise to the resulting matrix $c$ of the convolution. Define $M$ to be the matrix of diagonals:

$$M = [\,\operatorname{diag}(m_{:,1}), \ldots, \operatorname{diag}(m_{:,m})\,]$$

where $m$ are the weights of the $d$ filters of the wide convolution. Thus, we obtain:

$$a = g\left( M \begin{bmatrix} w_{j-m+1} \\ \vdots \\ w_j \end{bmatrix} + b \right)$$

This function ensures that the output of the network is not merely another linear combination of the inputs, and makes it possible to model more complicated functional relationships for different tasks. There are many different activation functions, such as Sigmoid, tanh (Gulcehre et al., 2016; Glorot & Bengio, 2010) [26-27], ReLU (Nair & Hinton, 2010; Glorot et al., 2010) [28-29] and maxout (Goodfellow et al., 2010) [30]. We apply the ReLU function due to its faster convergence rate and generally solid performance in practice.

    3.3.3 Max Pooling

Given a number $k$ and a sequence $p \in \mathbb{R}^p$ with $p \geq k$, $k$-max pooling is the pooling operation that extracts the $k$ largest values of $p$, keeping their original order, and constructs a sub-sequence of $p$. In our experiments, we set $k = 1$ in the $k$-max pooling layers:

$$z_j = \max_i \, y_{ij}$$

where the $j$-th entry of the matrix $z$ is the maximum over the $j$-th elements of the $y_i$.

By applying the $k$-max pooling operator after the convolutional layers, we can pool the $k$ most active features in $p$ and capture information throughout the entire text. The operation preserves the order of the features but ignores their specific positions. It also discerns precisely how many times a feature is highly activated in $p$ and how the feature activations change across $p$. There are other kinds of pooling operators, such as average pooling (Collobert et al., 2011) [31]. We do not apply them here, since only a few words in a text are meaningful for capturing its semantics and for the subsequent classification. After all the convolutional and max pooling layers, a fully connected layer followed by the softmax function converts the output numbers into probabilities, which predict the class of the input sentence.
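An order-preserving $k$-max pooling can be sketched in PyTorch as follows (a minimal illustration, not the authors' implementation):

```python
import torch

def k_max_pooling(x, k, dim=-1):
    """Keep the k largest values along `dim`, preserving their order."""
    # Indices of the top-k values, re-sorted into original sequence order.
    idx = x.topk(k, dim=dim).indices.sort(dim=dim).values
    return x.gather(dim, idx)

y = torch.tensor([[3.0, 1.0, 5.0, 2.0, 4.0]])
print(k_max_pooling(y, k=3))  # tensor([[3., 5., 4.]])
```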

For the binary classifier, we let $n = 2$ in both Equation 6 and Equation 7.

    4 Experiments

    4.1 Datasets

Our datasets are collected from the internal database of CAS and include 330,907 papers published by CAS from 2012 to 2018. We randomly take 7,000 abstracts as training samples and have experts label them with 23 subjects. For word representation, we use the 50-dimensional pre-trained GloVe vectors (Wikipedia 2014 + Gigaword 5) provided by the GloVe project (Pennington et al., 2014) [9]. After that, we classify the remaining papers into 8 main subjects, namely Astronomy, Atmospheric Science, Electronic Science and Technology, Geology, Mathematics, Nuclear Science and Technology, Physics, and Computer Science and Technology, using the well-trained models. We then investigate the cross-disciplines between Computer Science and Technology and the other 7 selected subjects over time.

    4.2 Hyper-Parameters and Training

In our experiments, the optimization goal is to minimize the difference between the prediction and the true distribution, plus an L2 regularization term over the parameters. The parameters in the optimization are the word embeddings, the filters and the fully connected weights. The network is trained with mini-batches by back-propagation, using stochastic gradient descent to optimize the training target. The hyper-parameters comprise the batch size, the learning rate, the number of negative samples, the number of layers, and the kernel number and kernel size of each convolutional layer. To optimize these hyper-parameters, we apply the Tree-structured Parzen Estimator method as an auto-optimization algorithm.
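A minimal PyTorch training step matching this description, with the L2 term supplied through the optimizer's weight_decay and a stand-in linear model so the sketch stays self-contained (all sizes are illustrative):

```python
import torch
import torch.nn as nn

# Stand-in linear model; in the paper this would be the CNN, with the
# word embeddings among the trained parameters.
model = nn.Linear(50, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()  # difference between prediction and truth

inputs = torch.randn(32, 50)           # one mini-batch of 32 examples
targets = torch.randint(0, 2, (32,))   # binary labels

optimizer.zero_grad()
loss = criterion(model(inputs), targets)  # weight_decay adds the L2 term
loss.backward()                           # back-propagation
optimizer.step()
print(float(loss))
```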

    4.3 Experiment Settings

We preprocess the datasets as follows. We use the Natural Language Toolkit (NLTK) to obtain tokens and stems; stop words and symbols are also removed with this toolkit. We split each dataset into training, validation and testing sets with the ratio 6:2:2. We use the F1 measure to evaluate model performance, and Precision, Recall and Accuracy to evaluate the classification results. The hyper-parameter settings of the neural networks depend on the datasets used.
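The preprocessing described here can be sketched with NLTK as follows (resource names may vary slightly across NLTK versions):

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

# One-time resource downloads.
nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

def preprocess(text):
    """Tokenize, drop stop words and non-alphabetic symbols, and stem."""
    stop = set(stopwords.words("english"))
    stemmer = PorterStemmer()
    tokens = word_tokenize(text.lower())
    return [stemmer.stem(t) for t in tokens if t.isalpha() and t not in stop]

print(preprocess("Convolutional networks are applied to the classification."))
```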

We use the CNN model to classify the abstracts of the papers, training networks with 3 or 4 convolutional layers and observing the effect. We choose the stochastic gradient descent algorithm for training because it is more sensitive to hyper-parameters than other optimization algorithms and can achieve better results through parameter tuning.

    4.4 Results and Discussion

    4.4.1 Interdisciplinary Design

According to the experimental research in this paper, with the rapid informatization of computer science, the development of information technologies represented by cloud computing, big data and deep learning has entered an explosive phase since around 2010, and a large number of cutting-edge computer science techniques have been adopted in every scientific research field. As shown in Figure 1, the field of physics ranks first, with 48,450 papers published in the last 6 years. Using the method proposed in Section 3.1, 5,254 papers in the field of physics are found to apply knowledge from the field of computer science and technology. The crossover rate is as high as 11.1%, ahead of the other seven areas in the dataset. As Figure 2 shows, the integration of the various fields with computer science and technology is getting closer and closer, and the correlation between the various scientific research endeavours and computer science and technology is getting stronger and stronger.

    Fig. 1 Total number of articles

    Fig. 2 Percentage of crosses

4.4.2 Disciplinary Development and Interdisciplinarity

Secondly, in recent years distributed file systems, machine learning, cloud storage and other technologies have been widely used in the field of physics. The number of papers published in this field has increased from 5,252 in 2012 to 9,204 in 2018, and the number of interdisciplinary papers in the field has increased from 578 to 1,072; both show a consistent overall rising trend. The same is true in the field of atmospheric science, which grew from 3,808 to 8,598 articles, while its cross-disciplinary papers with computer science and technology increased from 410 to 1,011, an obvious growth rate. The field of electronic science and technology grew from 4,152 papers in 2012 to 8,531 in 2018, and its interdisciplinary papers with computer science and technology grew from 457 to 1,072. Besides, the cross-disciplinary relationships of mathematics, astronomy and other fields are all growing, as shown in Figure 3.

    4.4.3 Research Ability and Subject Cross

Finally, according to our findings, the degree of information infrastructure construction at scientific research institutions has been steadily increasing year by year, and the scientific research capabilities of large research institutions have been greatly improved. This is consistent with the trend of integration between the various research fields and computer science and technology, from which we believe there is a positive correlation between scientific research ability and the degree of interdisciplinarity: the two promote and grow with each other.

    4.4.4 Research Extension

Fig. 3 Number of crosses

The results of the subject classification and fusion problem studied in this paper can be applied to the evaluation of subject development, talent cultivation, resource optimization, the design of college curricula, and the cultivation of interdisciplinary talents. The research results can better guide scientific research institutions in strategic planning and deployment. In addition, our research ideas can be applied to related extended problems, such as subject type prediction, attribute judgment, and the differentiation of low-frequency and easily confused subjects. Furthermore, introducing an attention mechanism would let the subject prediction models handle complex case classification based on the description of each case.

    5 Conclusion & Discussion

In this paper, we applied Convolutional Neural Networks and Recurrent Neural Networks to research paper classification tasks. We studied the papers published by research institutions over the years, labelled them, and selected eight subjects for training. We used the 1VO (one-versus-rest) + GloVe + CNN model to preprocess and train on the dataset. Classification and interdisciplinary statistics based on these eight subject models were completed to analyze the number, proportion and regularity of interdisciplinary papers. Through the experiments, we conclude that the intersection between scientific research in the various fields and computer science is becoming closer, and that the degree of interdisciplinarity grows consistently with the rising number of published scientific papers. In future work, the results can be applied to the development of disciplines, personnel training and resource optimization, helping scientific research institutions carry out strategic planning and deployment. We will also continue to study this issue in different areas.

    Acknowledgments

    This work was supported by the National Science Library of the Chinese Academy of Sciences. Thanks to the National Science Library.
