
    New Generation Model of Word Vector Representation Based on CBOW or Skip-Gram

    2019-07-18 01:59:46
    Computers, Materials & Continua, 2019, Issue 7

    Zeyu Xiong, Qiangqiang Shen, Yueshan Xiong, Yijie Wang and Weizi Li

    Abstract: Word vector representation is widely used in natural language processing tasks. Most word vectors are generated from probability models whose bag-of-words features have two major weaknesses: they lose the ordering of the words and they ignore the semantics of the words. Recently, the neural-network language models CBOW and Skip-Gram have been developed as continuous-space language models that represent words as high-dimensional real-valued vectors. These vector representations have demonstrated promising results in various NLP tasks because of their superiority in capturing syntactic and contextual regularities in language. In this paper, we propose a new strategy based on optimization over contiguous subsets of documents and a regression method for combining vectors, and establish two new models, CBOW-OR and SkipGram-OR, for word vector learning. Experimental results show that for some word pairs the cosine distance obtained by the CBOW-OR (or SkipGram-OR) model is generally larger and more reasonable than that of CBOW (or Skip-Gram), that the vector spaces of Skip-Gram and SkipGram-OR keep the same structural property under Euclidean distance, and that the SkipGram-OR model achieves higher performance for retrieving related word pairs as a whole. Both the CBOW-OR and SkipGram-OR models are inherently parallel and can be expected to apply to large-scale information processing.

    Keywords: Distributed word vector, continuous-space language model, hierarchical softmax.

    1 Introduction

    A word vector representation is a mathematical object associated with each word. Generating word representations is an essential task of natural language processing (NLP) [Bengio, Ducharme and Vincent (2001); Collobert and Weston (2008)]. Many NLP tasks, such as sentiment analysis and sentence or text classification, consider words as basic units. An important step is the introduction of continuous representations of words [Bengio, Ducharme, Vincent et al. (2003)]. When it comes to texts, one of the most commonly used fixed-length features is bag-of-words. Traditionally, the default word representation regards a word as a one-hot vector whose size equals that of the vocabulary. Despite its popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they ignore the semantics of the words. In order to address related issues, Cao et al. use the histogram of the bag-of-words (BOW) model to determine the number of sub-images in an image that convey secret information, for the purpose of improving retrieval efficiency [Cao, Zhou, Sun et al. (2018)].
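    The loss of word order mentioned above is easy to see concretely. The following minimal sketch (not from the paper; vocabulary and sentences are made up for illustration) shows two sentences with opposite meanings mapping to the same bag-of-words count vector:

    ```python
    # Bag-of-words discards word order: different sentences, identical vectors.
    from collections import Counter

    def bag_of_words(tokens, vocabulary):
        counts = Counter(tokens)
        return [counts[w] for w in vocabulary]

    vocab = ["dog", "bites", "man"]
    s1 = "dog bites man".split()
    s2 = "man bites dog".split()

    print(bag_of_words(s1, vocab))  # [1, 1, 1]
    print(bag_of_words(s2, vocab))  # [1, 1, 1]  -> the ordering information is gone
    ```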

    Continuous-space language models [Holger (2007); Bengio, Schwenk, Senécal et al. (2006)] are neural-network language models in which words are represented as high-dimensional real-valued vectors. These vector representations have recently demonstrated promising results in various tasks [Collobert and Weston (2008); Bengio, Schwenk, Senécal et al. (2006)] due to their superiority in capturing syntactic and contextual regularities in language.

    Recent works in learning vector representations of words use neural networks [Mnih and Hinton (2008); Turian, Ratinov and Bengio (2010); Mikolov, Sutskever, Chen et al. (2013)]. The outcome is that, after the neural network model is trained, the word vectors are mapped into a vector space such that semantically similar words have similar vector representations. Distributed word representations draw more attention for their better performance in a wide range of natural language processing tasks, ranging from speech tagging [Santos and Zadrozny (2014)], named entity recognition [Turian, Ratinov and Bengio (2010)], part-of-speech tagging, parsing [Socher, Lin, Manning et al. (2011)], semantic role labeling [Collobert, Weston, Bottou et al. (2011)], phrase recognition [Socher, Lin, Manning et al. (2011)], sentiment analysis [Socher, Pennington, Huang et al. (2011)], and paraphrase detection [Socher, Huang, Pennin et al. (2011)], to machine translation [Cho, Merriënboer, Gulcehre et al. (2014)]. Han et al. [Kim, Kim and Cho (2017)] proposed a method that creates concepts by clustering word vectors generated from word2vec, and uses the frequencies of these concept clusters to represent document vectors.

    The learning of distributed word vectors depends mainly on the words in the vocabulary and on the corpus, and a corpus is generally collected in time order and is topic-related or event-related. In this paper, we divide the documents into several subsets and, in order to preserve accurate proximity information among the subsets, construct a combination model based on an optimization strategy and regression as an extension of distributed word vectors.

    The rest of this paper is organized as follows: Section 2 introduces prior research related to the n-gram, CBOW and Skip-Gram models. Section 3 formally presents our approach, an integrated extension model for word vector representation, and proposes two novel models, CBOW-OR and SkipGram-OR. Section 4 describes the experimental settings and results. Finally, we conclude the paper and discuss future work in Section 5.

    2 Related works

    2.1 n-gram model

    The goal of statistical language modeling [Bengio, Ducharme, Vincent et al. (2003)] is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training.

    Curse of dimensionality: for example, if one wants to model the joint distribution of 10 consecutive words in a natural language with a vocabulary V of size 100,000, there are potentially $100000^{10} - 1 = 10^{50} - 1$ free parameters.
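    The parameter count quoted above can be verified directly; a tiny check (illustrative only):

    ```python
    # A full joint probability table over 10 consecutive words from a 100,000-word
    # vocabulary has |V|**10 entries, minus 1 because the probabilities sum to one.
    V = 100_000
    n = 10
    free_params = V ** n - 1
    print(free_params == 10 ** 50 - 1)  # True
    ```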

    A statistical model of language can be represented by the conditional probability of the next word given all the previous ones, since

    $$P(w_1^T) = \prod_{t=1}^{T} P(w_t \mid w_1^{t-1}), \qquad (1)$$

    where $w_t$ is the t-th word and $w_i^j = (w_i, w_{i+1}, \cdots, w_j)$ denotes the subsequence from position i to position j.

    Such statistical language models have already been found useful in many technological applications involving natural language, such as speech recognition, language translation, and information retrieval.

    Following the above, n-gram models construct tables of conditional probabilities for the next word for each of a large number of contexts, i.e., combinations of the last n−1 words:

    $$P(w_t \mid w_1^{t-1}) \approx P(w_t \mid w_{t-n+1}^{t-1}). \qquad (2)$$
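    As a concrete illustration of such a table (a minimal sketch, not from the paper; the toy corpus and n = 3 are assumptions), the conditional probability is estimated from raw counts of contexts and n-grams:

    ```python
    # Maximum-likelihood trigram table: P(c | a, b) = count(a, b, c) / count(a, b).
    from collections import defaultdict

    def train_trigram(tokens):
        context_counts = defaultdict(int)
        trigram_counts = defaultdict(int)
        for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
            context_counts[(a, b)] += 1
            trigram_counts[(a, b, c)] += 1
        return context_counts, trigram_counts

    def cond_prob(context_counts, trigram_counts, a, b, c):
        # Plain ML estimate; real systems add smoothing for unseen n-grams.
        if context_counts[(a, b)] == 0:
            return 0.0
        return trigram_counts[(a, b, c)] / context_counts[(a, b)]

    corpus = "the cat sat on the mat the cat sat on the rug".split()
    ctx, tri = train_trigram(corpus)
    print(cond_prob(ctx, tri, "the", "cat", "sat"))  # 1.0
    ```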

    Bengio et al. [Bengio, Ducharme, Vincent et al. (2003)] proposed a neural network model to calculate formula (2); feature vectors of words are learned based on their probability of co-occurring in the same documents.

    The training set is a sequence $w_1, w_2, \cdots, w_T$ of words belonging to V, where the vocabulary V is a large but finite set. The objective is to learn a good model $f(w_t, \cdots, w_{t-n+1}) = \hat{P}(w_t \mid w_1^{t-1})$ that gives high out-of-sample likelihood. The model is decomposed into the following two parts:

    1) A mapping C from any element i of V to a real vector $C(i) \in \mathbb{R}^m$. It represents the distributed feature vector associated with each word in the vocabulary. In practice, C is represented by a $|V| \times m$ matrix of free parameters.

    2) A function g that maps an input sequence of feature vectors of words in the context, $(C(w_{t-n+1}), \cdots, C(w_{t-1}))$, to a conditional probability distribution over words in V for the next word $w_t$. The output of g is a vector whose i-th element estimates the probability $\hat{P}(w_t = i \mid w_1^{t-1})$, as shown in Fig. 1.

    Figure 1: Neural architecture $f(i, w_{t-1}, \cdots, w_{t-n+1}) = g(i, C(w_{t-1}), \cdots, C(w_{t-n+1}))$

    Training is achieved by finding θ that maximizes the penalized log-likelihood over the training corpus:

    $$L = \frac{1}{T} \sum_{t} \log f(w_t, w_{t-1}, \cdots, w_{t-n+1}; \theta) + R(\theta),$$

    where R(θ) is a regularization term; R is a weight decay penalty applied to the weights of the neural network and to the matrix C.

    The softmax output layer is calculated as follows:

    $$\hat{P}(w_t \mid w_{t-1}, \cdots, w_{t-n+1}) = \frac{e^{y_{w_t}}}{\sum_{i} e^{y_i}},$$

    where $y_i$ is the unnormalized log-probability for each output word i, computed as follows with parameters b, W, U, d and H:

    $$y = b + Wx + U\tanh(d + Hx),$$

    where the hyperbolic tangent tanh is applied element by element, W can optionally be set to zero (no direct connections), and x is the word-features layer activation vector, which is the concatenation of the input word features from the matrix C:

    $$x = \big(C(w_{t-1}), C(w_{t-2}), \cdots, C(w_{t-n+1})\big).$$

    The parameters are updated by stochastic gradient ascent,

    $$\theta \leftarrow \theta + \varepsilon \, \frac{\partial \log \hat{P}(w_t \mid w_{t-1}, \cdots, w_{t-n+1})}{\partial \theta},$$

    where ε is the learning rate.
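    A small sketch of this forward pass may help make the shapes concrete. It is not the original implementation; the dimensions, the NumPy usage, and the toy context ids are assumptions, but the computation is exactly y = b + Wx + U tanh(d + Hx) followed by a softmax:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    V, m, n, h = 1000, 50, 4, 100          # vocabulary size, embedding dim, n-gram order, hidden units
    C = rng.normal(size=(V, m))            # embedding matrix C: |V| x m
    H = rng.normal(size=(h, (n - 1) * m))  # hidden-layer weights
    d = np.zeros(h)
    U = rng.normal(size=(V, h))            # hidden-to-output weights
    W = np.zeros((V, (n - 1) * m))         # optional direct connections (zero = disabled)
    b = np.zeros(V)

    def forward(context_ids):
        """Return P(w_t = i | context) for all i, given the n-1 context word ids."""
        x = C[context_ids].reshape(-1)             # concatenate the context embeddings
        y = b + W @ x + U @ np.tanh(d + H @ x)     # unnormalized log-probabilities
        y -= y.max()                               # numerical stability
        p = np.exp(y)
        return p / p.sum()

    probs = forward([3, 17, 42])   # an example context of n-1 = 3 word ids
    print(probs.shape, probs.sum())  # (1000,) ~1.0
    ```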

    2.2 The word vector model

    Mikolov et al. [Mikolov, Sutskever, Chen et al. (2013)] introduced the CBOW and Skip-Gram models. Both models include three layers: input, projection and output (Fig. 2 and Fig. 3). The training objective is to learn word vector representations that can predict the nearby words well.

    Figure 2: The CBOW model

    Figure 3: The Skip-gram model

    Given a sequence of training words $w_1, w_2, \cdots, w_T$, the CBOW model is asked to maximize the following average log probability,

    $$\frac{1}{T} \sum_{t=1}^{T} \log p\big(w_t \mid w_{t-c}, \cdots, w_{t-1}, w_{t+1}, \cdots, w_{t+c}\big),$$

    while the Skip-Gram model is asked to maximize the average log probability

    $$\frac{1}{T} \sum_{t=1}^{T} \sum_{-c \le j \le c,\, j \ne 0} \log p(w_{t+j} \mid w_t),$$

    where c is the size of the training context. The basic Skip-Gram formulation of $p(w_{t+j} \mid w_t)$ is defined using the softmax function as follows:

    $$p(w_{t+j} \mid w_t) = \frac{\exp\big({v'_{w_{t+j}}}^{\top} v_{w_t}\big)}{\sum_{i=1}^{W} \exp\big({v'_{w_i}}^{\top} v_{w_t}\big)}, \qquad (12)$$

    where $v_{w_t}$ is the input vector representation of word $w_t$, and $v'_{w_{t+j}}$, $v'_{w_i}$ are the output vector representations of words $w_{t+j}$ and $w_i$. W is the number of words in the vocabulary.
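    The full-softmax probability above can be computed directly for small vocabularies. The following is an illustrative sketch with toy random matrices (shapes and values are assumptions, not trained vectors):

    ```python
    # p(w_O | w_I) = exp(v'_{w_O} . v_{w_I}) / sum_i exp(v'_{w_i} . v_{w_I})
    import numpy as np

    rng = np.random.default_rng(1)
    W_vocab, dim = 500, 100
    V_in = rng.normal(scale=0.1, size=(W_vocab, dim))    # input vectors v_w
    V_out = rng.normal(scale=0.1, size=(W_vocab, dim))   # output vectors v'_w

    def skipgram_softmax(center_id, context_id):
        scores = V_out @ V_in[center_id]    # v'_{w_i}^T v_{w_I} for every word i
        scores -= scores.max()              # numerical stability
        probs = np.exp(scores)
        probs /= probs.sum()
        return probs[context_id]

    print(skipgram_softmax(center_id=7, context_id=42))
    ```

    The denominator sums over all W words, which is exactly the cost that the hierarchical softmax in Section 2.2.1 avoids.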

    There are many methods for visualizing the relationships among word vector representations. Fig. 4 shows one way to visualize term relevancy: a two-dimensional PCA projection of the 1000-dimensional Skip-Gram vectors of countries and their capital cities [Mikolov, Sutskever, Chen et al. (2013)]. It illustrates the ability of the model [Mikolov, Sutskever, Chen et al. (2013)] to organize concepts and learn relationships implicitly: no supervised information about what a capital city means is given during training, yet the relationships between countries and capitals are still recovered.

    Figure 4: Country and capital vectors projected by PCA [Kim, Kim and Cho (2017)]

    Fig. 5 shows another visualization method for clustering word associations, using t-SNE. t-SNE is a technique that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. It is a variation of Stochastic Neighbor Embedding [Geoffrey and Roweis (2002)] that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. Stochastic Neighbor Embedding (SNE) starts by converting the high-dimensional Euclidean distances between datapoints into conditional probabilities that represent similarities [Kim, Kim and Cho (2017)]. t-SNE [Maaten and Hinton (2008)] employs a Student t-distribution with one degree of freedom as the heavy-tailed distribution in the low-dimensional map.

    As Fig. 5 shows, representing words in a continuous embedded space is very important. Various conventional machine learning and data mining techniques can be applied in this space to solve various text mining tasks [Cui, Shi and Chen (2016); Bansal, Gimpel and Livescu (2014); Xue, Fu and Zhan (2014); Cao and Wang (2015); Ren, Kiros and Zemel (2015)]. Fig. 5 shows an example of such an embedded space visualized by t-SNE [Cui, Shi and Chen (2016)]. The embedded words located in one circle represent the names of baseball players; the names of soccer players and the names of countries fall into separate clusters; words with similar meanings are located close to each other, and words with different meanings are located far apart.
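    A projection like Fig. 5 can be produced with standard tools. The sketch below assumes scikit-learn and matplotlib are available and uses placeholder random vectors in place of trained word vectors (both are assumptions, purely for illustration):

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(2)
    words = [f"word{i}" for i in range(50)]     # placeholder labels
    vectors = rng.normal(size=(50, 200))        # placeholder 200-d word vectors

    # Map the high-dimensional vectors to 2-D with t-SNE.
    coords = TSNE(n_components=2, perplexity=15, init="pca", random_state=0).fit_transform(vectors)

    plt.figure(figsize=(8, 8))
    plt.scatter(coords[:, 0], coords[:, 1], s=10)
    for (x, y), w in zip(coords, words):
        plt.annotate(w, (x, y), fontsize=7)
    plt.title("t-SNE projection of word vectors")
    plt.savefig("tsne_words.png", dpi=150)
    ```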

    2.2.1 Hierarchical softmax

    Formula (12) is a full softmax model, which is impractical because the cost of computing it is proportional to W, which is often large ($10^5$–$10^7$ terms). The hierarchical softmax is a computationally efficient approximation of the full softmax. Its main advantage is that instead of evaluating W output nodes in the neural network to obtain the probability distribution, it evaluates only about $\log_2(W)$ nodes.

    Figure 5: Embedded space using t-SNE [Maaten and Hinton (2008); Kim, Kim and Cho (2017)]

    A hierarchical probabilistic neural network language model was first proposed by Morin et al. [Morin and Bengio (2005)]. Mnih and Hinton [Mnih and Hinton (2008)] explored a number of methods for constructing the tree structure and studied its effect on both the training time and the resulting model accuracy. Mikolov et al. [Mikolov, Sutskever, Chen et al. (2013); Mikolov (2012)] used a binary Huffman tree, as it assigns short codes to frequent words, which results in fast training.

    The hierarchical softmax uses a binary tree representation of the output layer with the W words as its leaves. For each word w located at a leaf, let n(w, j) be the j-th node on the path from the root to w, and let Len(w) be the length of this path, so n(w, 1) = root and n(w, Len(w)) = w. Let child(n) be an arbitrary fixed child of inner node n, and let ⟨x⟩ be 1 if x is true and −1 otherwise. Then the hierarchical softmax is defined as follows:

    $$p(w \mid w_I) = \prod_{j=1}^{\mathrm{Len}(w)-1} \sigma\Big( \big\langle n(w, j+1) = \mathrm{child}(n(w, j)) \big\rangle \cdot {v'_{n(w,j)}}^{\top} v_{w_I} \Big),$$

    where σ(x) = 1/(1 + exp(−x)). The cost of computing $\log p(w_O \mid w_I)$ and $\nabla \log p(w_O \mid w_I)$ is proportional to Len($w_O$), which on average is no greater than log W.
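    A schematic computation of this path product is sketched below (not the paper's code; the path vectors, turn signs and dimensions are random placeholders). It shows why the cost is O(Len(w)) ≈ O(log W): there is one sigmoid factor per internal node on the root-to-leaf path:

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hs_log_prob(v_wI, path_node_vectors, turn_signs):
        """log p(w_O | w_I) under hierarchical softmax.

        path_node_vectors: vectors v'_{n(w,j)} of the internal nodes on w_O's path.
        turn_signs:        +1 if the path goes to the designated child at that node,
                           -1 otherwise (the <x> indicator in the formula above).
        """
        logp = 0.0
        for v_node, sign in zip(path_node_vectors, turn_signs):
            logp += np.log(sigmoid(sign * np.dot(v_node, v_wI)))
        return logp

    rng = np.random.default_rng(3)
    dim = 100
    v_wI = rng.normal(scale=0.1, size=dim)
    path = [rng.normal(scale=0.1, size=dim) for _ in range(17)]  # ~log2(100k) internal nodes
    signs = rng.choice([-1, 1], size=len(path))
    print(hs_log_prob(v_wI, path, signs))
    ```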

    3 New learning model of word vector representation

    We first divide the training documents into several related subsets; the relatedness may be understood as collections of documents sharing contextual and semantic features.

    Let $Cp_1, \cdots, Cp_n$ be the n subsets and $V_1(w), \cdots, V_n(w)$ be the corresponding distributed word vectors for word w generated from the CBOW or Skip-Gram model. Let SAMT be a sampled set of topic words from the vocabulary. In order to preserve accurate proximity information among the subsets, we consider a regression model as an extension of distributed word vectors. The new learning model for distributed word vector representation is described by the following optimization problem and regression strategy.

    Step 2. Reconstruct the word vector for each word w.

    Fig.6 shows the integrated extension model for distributed word vector representation.

    Figure 6: The new generation model of word vector representation
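    The concrete optimization objective and regression formula (Eqs. (13)-(14)) are not reproduced in this text, so the following is only a heavily hedged illustration of the general combine-by-regression idea, not the authors' exact model: fit combination weights over the per-subset vectors on the sampled topic words SAMT (here against a hypothetical set of reference vectors), then rebuild every word's vector as the weighted combination.

    ```python
    import numpy as np

    def fit_combination_weights(subset_vecs, reference_vecs, sample_words):
        """Least-squares weights alpha so that sum_i alpha_i * V_i(w) ~= V_ref(w) on SAMT.

        subset_vecs:    list of dicts word -> vector, one dict per document subset.
        reference_vecs: dict word -> target vector (an assumption made for this sketch;
                        the paper's actual objective may differ).
        """
        A = np.stack([np.concatenate([vecs[w] for w in sample_words]) for vecs in subset_vecs], axis=1)
        b = np.concatenate([reference_vecs[w] for w in sample_words])
        alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
        return alpha

    def combine(subset_vecs, alpha, word):
        # Step 2: reconstruct the word vector as a weighted combination of V_1(w)..V_n(w).
        return sum(a * vecs[word] for a, vecs in zip(alpha, subset_vecs))

    rng = np.random.default_rng(4)
    vocab = ["cat", "dog", "school", "nurse"]
    subsets = [{w: rng.normal(size=50) for w in vocab} for _ in range(3)]   # toy V_1, V_2, V_3
    reference = {w: rng.normal(size=50) for w in vocab}                     # hypothetical targets
    alpha = fit_combination_weights(subsets, reference, vocab)
    print(alpha, combine(subsets, alpha, "cat").shape)
    ```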

    4 Experiments

    4.1 Task description

    Our task is to develop a new method for generating word vectors and to verify its effectiveness on an actual document dataset. The new generation model of word vectors includes two sub-models: one is based on the combination of CBOW [Mikolov (2012)] and our regression-with-optimization strategy, which we denote the CBOW-OR model; the other is based on the combination of Skip-Gram [Mikolov (2012)] and the same strategy, which we denote the SkipGram-OR model. In the following experiments, we divide the documents into three sub-documents and, on the premise of sharing the same vocabulary, train word vectors of the same dimension on each of the three subsets. The optimization and regression methods are then used to integrate the three vectors into one vector, which is regarded as the word vector of the word.

    4.2 Dataset description

    The dataset we use is text8, which is downloaded with Google word2vec. The size of the corpus is 100 MB, and the vocabulary size is 71,291 after removing some auxiliary words (for example, a, the, is) and some rare words. In total, the corpus contains 4,406,976 words. We divide the documents into three parts by size: 36.6 MB, 36.6 MB and 26.8 MB, named text8_1, text8_2 and text8_3. The same word is trained separately on the three sub-documents to obtain three vectors, and a single word vector is then obtained by using the optimization and regression described above.
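    The paper trains with the original word2vec tool; as a rough stand-in, the per-subset training step could look like the gensim sketch below. The file names follow the text above, but the hyperparameters are assumptions, and this simple version does not enforce the shared vocabulary that the paper assumes:

    ```python
    from gensim.models import Word2Vec
    from gensim.models.word2vec import LineSentence

    subset_files = ["text8_1", "text8_2", "text8_3"]
    models = []
    for path in subset_files:
        # sg=0 -> CBOW, sg=1 -> Skip-Gram; hs=1 enables hierarchical softmax.
        model = Word2Vec(LineSentence(path), vector_size=200, window=5,
                         sg=0, hs=1, min_count=5, workers=4)
        models.append(model)

    # Per-subset vectors V_1(w), V_2(w), V_3(w) for one word:
    vectors = [m.wv["school"] for m in models if "school" in m.wv]
    ```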

    4.3 Evaluation mode

    In order to test the effect of our method, we design two sets of comparative experiments: one measures the Euclidean distance between the vectors of the same word in two different vector spaces of the same dimension; the other measures the cosine distance between two word vectors within each vector space.

    Three groups of experiments are conducted according to different SAMT sets in Eq. (14). For Tabs. 1-3, SAMT = {cat, China, computer, dog, exam, hospital, Japan, nurse, school, software}; for Tabs. 4-6, SAMT = {car, children, country, driver, hospital, nation, nurse, parent, school, students}; for Tabs. 7-9, SAMT = {army, chef, friend, fruit, gentlemen, ladies, partner, restaurant, soldier, vegetables}. Four kinds of word vector space are generated by CBOW, Skip-Gram, CBOW-OR and SkipGram-OR, respectively; the last two models are proposed in this paper.

    In Tabs. 1, 2, 4, 5, 7 and 8, we compare the Euclidean distance between the vectors learned for the same word in different vector spaces. A vector can be represented as a point in vector space, and since the dimension of each vector space is the same, we can compare the Euclidean distance of a word across spaces. By examining the relation between the Euclidean distances of multiple key words in any two different vector spaces, we test the structural consistency of the different vector spaces.

    The cosine distance between two trained word vectors should be relatively large when the semantic relationship between the words is close in an article; in other words, the probability of the two words occurring together should be large, as for cats and dogs, hospitals and nurses, or schools and students. So in Tabs. 3, 6 and 9 we compare the cosine distance between the vector pairs of the same set of word pairs, obtained under the different learning mechanisms, as a criterion for evaluating the word vectors.
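    Both comparison measures are straightforward to compute; a small sketch with toy vectors (illustrative only, and note that "cosine distance" here follows the paper's usage, where a larger value means a more related pair, i.e., it is the cosine similarity):

    ```python
    import numpy as np

    def euclidean(u, v):
        return np.linalg.norm(u - v)

    def cosine_similarity(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    rng = np.random.default_rng(5)
    space_a = {w: rng.normal(size=200) for w in ["cat", "dog"]}   # e.g., Skip-Gram space
    space_b = {w: rng.normal(size=200) for w in ["cat", "dog"]}   # e.g., SkipGram-OR space

    print(euclidean(space_a["cat"], space_b["cat"]))              # same word, two spaces
    print(cosine_similarity(space_a["cat"], space_a["dog"]))      # word pair, one space
    ```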

    Tab. 1 to Tab. 9 show some interesting properties. Tabs. 1, 2, 4, 5, 7 and 8 show that the word vector spaces of Skip-Gram and SkipGram-OR keep the same structural property under Euclidean distance. Tab. 3 shows that the SkipGram-OR model yields more accurate cosine distances between word pairs than the CBOW, CBOW-OR and Skip-Gram models, while Tabs. 6 and 9 show that the CBOW-OR model yields more accurate cosine distances than the CBOW, SkipGram-OR and Skip-Gram models.

    Table 1: Euclidean distance of the vector learned from the same word under different methods

    Table 2: Euclidean distance of the vector learned from the same word under different methods

    Table 3: The cosine distance of the two word vectors

    Table 4: Euclidean distance of the vector learned from the same word under different methods

    Table 5: Euclidean distance of the vector learned from the same word under different methods

    Table 6: The cosine distance of the two word vectors

    Table 7: Euclidean distance of the vector learned from the same word under different methods

    Table 8: Euclidean distance of the vector learned from the same word under different methods

    Table 9: The cosine distance of the two word vectors

    By combining Tabs. 3, 6 and 9, we obtain a synopsis of 15 different word pairs across the four models. Fig. 7 shows that the SkipGram-OR model keeps higher performance for retrieving related word pairs as a whole.

    Figure 7: The comparative results of the four models for 15 different word pairs

    5 Conclusions

    We have developed two models for generating word vectors: CBOW-OR and SkipGram-OR. The key strategy of these models is to use optimization over contiguous training documents and a regression method for combining vectors. CBOW-OR and SkipGram-OR can be performed in parallel. Experimental results show that for some word pairs the cosine distance obtained by the CBOW-OR or SkipGram-OR model is generally larger and more reasonable than that of CBOW and Skip-Gram.

    We also obtained an encouraging result: the Euclidean distances between the vectors of the same word learned under different mechanisms are close, which shows that the vector spaces obtained by the different models have a degree of consistency. That is, the Euclidean distance of different word vectors across any two vector spaces is approximately the same. In particular, we find that the vector spaces of Skip-Gram and SkipGram-OR keep the same structural property under Euclidean distance.

    Because of the inherent parallelism in generating word vectors and the semantic validity of the resulting word pairs, the proposed models can be expected to apply to large-scale information processing.

    Acknowledgement: The authors would like to thank all anonymous reviewers for their suggestions and feedback. This work was supported by the National Natural Science Foundation of China (Nos. 61379103 and 61379052), the National Key Research and Development Program (2016YFB1000101), the Natural Science Foundation for Distinguished Young Scholars of Hunan Province (Grant No. 14JJ1026), and the Specialized Research Fund for the Doctoral Program of Higher Education (Grant No. 20124307110015).
