
    Similarity matrix-based K-means algorithm for text clustering

2015-04-22 02:33:20

CAO Qi-min(曹奇敏), GUO Qiao(郭巧), WU Xiang-hua(吳向華)

(School of Automation, Beijing Institute of Technology, Beijing 100081, China)


The K-means algorithm is one of the most widely used algorithms in clustering analysis. To deal with the problem caused by the random selection of initial center points in the traditional algorithm, this paper proposes an improved K-means algorithm based on a similarity matrix. The improved algorithm avoids the random selection of initial center points and thus provides effective initial points for the clustering process; this reduces the fluctuation of clustering results caused by initial point selection, so a better clustering quality can be obtained. The experimental results also show that the F-measure of the improved K-means algorithm is greatly improved and the clustering results are more stable.

text clustering; K-means algorithm; similarity matrix; F-measure

Text clustering is a typical unsupervised machine learning task. Its aim is to divide a document set into several clusters such that the similarity between documents within a cluster is higher than the similarity between documents in different clusters[1-4]. In recent years, text clustering has been widely used in information retrieval, multi-document automatic summarization, information processing of short texts, and so on[5].

Commonly used text clustering algorithms fall into two categories: hierarchical methods and partitioning methods. AGNES and DIANA are representatives of hierarchical clustering; they produce good clustering results, but their time consumption increases rapidly with the amount of data[6]. As a representative partitioning method, the K-means algorithm has linear time complexity and low computational cost, so it is applicable to clustering large text sets[7]. However, this algorithm often ends at a local optimum, mainly because the number of cluster centers and the selection of the initial cluster centers affect its results[8]. In particular, the selection of the initial center points can easily lead to fluctuation of the clustering results[9].

Some researchers use different distance measures to select centroid points, such as the Jaccard distance coefficient[10] and the shape similarity distance[11]; these improve the quality of clustering to some degree, but different distance measures suit different features of heterogeneous samples. Ref.[12] takes an incremental approach, and Ref.[13] starts from random points and tries to minimize the maximum intra-cluster variance; both eliminate the dependence on random initial conditions, but they are computationally more expensive. For the choice of initial points, Ref.[14] randomly selects the K center points from all the objects; its drawback is that the clustering results are not consistent. Refs.[15-16] use a modified differential evolution (DE) algorithm to obtain initial center points, but the initial points of DE are themselves randomly selected, and DE also increases the clustering time. Ref.[17] presents a K-means algorithm based on k-d trees, called the filtering algorithm, but its initial centers are randomly selected.

In order to obtain stable clustering results and better clustering accuracy, this paper designs a new initialization method to discover a reasonable set of centroids. Experiments verify that the improved algorithm outperforms the traditional K-means algorithm.

The rest of this paper is organized as follows. Section 1 introduces the background knowledge, including the vector space model, the definition of similarity, and the traditional K-means algorithm. Section 2 presents the improved K-means algorithm. Experimental results are shown in Section 3. Finally, Section 4 concludes the paper.

    1 Background knowledge

In the process of text clustering, the vector space model (VSM) proposed by Salton is generally used for text representation[18]. First, the documents are preprocessed, including Chinese word segmentation and stop-word removal; then the VSM is used to represent each text, where the feature words of the text form the vector and the weights of the features are its values; finally, the K-means algorithm is used to cluster the documents. The flowchart of text clustering is shown in Fig.1.

    Fig.1 Flowchart of text clustering

    1.1 VSM model and definition of similarity

The main idea of the VSM is to map each document to a vector, thereby transforming text from a linguistic issue into a mathematical problem that a computer can process. Let D represent a collection of documents, that is, D = (d_1, d_2, …, d_n), where d_i is the i-th document in the collection, i = 1, …, n. A document can be represented as d_i = (T_i1, T_i2, …, T_im), where T_ij is a feature item, 1 ≤ i ≤ n, 1 ≤ j ≤ m. A feature item is a word contained in the document. Usually each feature item is given a weight showing its degree of importance, i.e. d_i = (T_i1, W_i1; T_i2, W_i2; …; T_im, W_im), abbreviated as d_i = (W_i1, W_i2, …, W_im), where W_ij is the weight of T_ij, 1 ≤ i ≤ n, 1 ≤ j ≤ m.

In this paper, the TF-IDF weighting scheme, one of the best known schemes, is used, where TF represents term frequency and IDF represents inverse document frequency. The weight formula using TF-IDF is as follows:

w_{ij} = \frac{ft_{ij} \log(N/fd_i)}{\sqrt{\sum_{k=1}^{m} \left[ ft_{kj} \log(N/fd_k) \right]^2}}    (1)

where ft_ij is the frequency of term t_i in document d_j, fd_i is the number of documents containing t_i among all documents, N is the total number of documents, and the denominator is a normalization factor.
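As an illustration, the TF-IDF weighting step can be sketched in Python. The function name `tfidf_weights` and the cosine (L2) normalization in the denominator are assumptions here, since the exact normalization factor of Eq. (1) is not spelled out in the text:

```python
import math

def tfidf_weights(docs):
    """Compute length-normalized TF-IDF vectors for tokenized documents.

    `docs` is a list of token lists. This sketch assumes the common
    tf * log(N / df) scheme with cosine (L2) normalization.
    """
    n = len(docs)
    df = {}                                   # document frequency of each term
    for doc in docs:
        for t in set(doc):
            df[t] = df.get(t, 0) + 1
    vectors = []
    for doc in docs:
        w = {}
        for t in doc:
            w[t] = w.get(t, 0) + 1            # raw term frequency
        for t in w:
            w[t] *= math.log(n / df[t])       # scale by inverse document frequency
        norm = math.sqrt(sum(v * v for v in w.values()))
        if norm > 0:                          # normalization factor (denominator)
            w = {t: v / norm for t, v in w.items()}
        vectors.append(w)
    return vectors
```

A term appearing in every document gets IDF log(N/N) = 0, so it contributes nothing to the vector, which matches the intent of the IDF factor.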

In this article, we use the cosine distance to measure the similarity between two texts. The definition of similarity is as follows:

\mathrm{Sim}(d_i, d_j) = \frac{\sum_{k=1}^{m} W_{ik} W_{jk}}{\sqrt{\sum_{k=1}^{m} W_{ik}^2} \cdot \sqrt{\sum_{k=1}^{m} W_{jk}^2}}    (2)
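The cosine similarity of Eq. (2) can be sketched over sparse term-weight vectors; the dict-based representation and the function name `cosine_sim` are illustrative choices, not the paper's:

```python
import math

def cosine_sim(u, v):
    """Cosine similarity between two sparse term->weight dicts (Eq. (2))."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```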

1.2 K-means algorithm

The objective function of the K-means algorithm is the square error criterion, defined as follows:

E = \sum_{i=1}^{K} \sum_{d \in C_i} \left\| d - m_i \right\|^2    (3)

where C_i denotes the i-th cluster and m_i its mean vector. The algorithm proceeds as follows:

Step 1 Randomly select K objects as the cluster centers.

Step 2 Assign each document to its nearest cluster center according to the similarity between the document and the cluster center.

Step 3 Calculate the mean value of each changed cluster as the new cluster center.

Step 4 If the new cluster centers are the same as the original ones, exit the algorithm; otherwise go to Step 2.
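The four steps above can be sketched as a minimal Python implementation. The function name `kmeans`, the dense-list vector representation, and the `max_iter` safeguard are illustrative assumptions; convergence is checked on cluster assignments, which coincides with Step 4's check on centers because the centers are computed from the assignments:

```python
import math
import random

def kmeans(vectors, k, max_iter=100, seed=0):
    """Traditional K-means over dense vectors, following Steps 1-4.

    Random initial centers (Step 1) mean results vary with the seed --
    the weakness the improved algorithm in Section 2 addresses.
    """
    rng = random.Random(seed)
    centers = [list(v) for v in rng.sample(vectors, k)]       # Step 1

    def sim(u, v):                                            # cosine similarity
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    assign = None
    for _ in range(max_iter):
        # Step 2: assign each vector to its most similar center
        new_assign = [max(range(k), key=lambda c: sim(v, centers[c]))
                      for v in vectors]
        if new_assign == assign:                              # Step 4: converged
            break
        assign = new_assign
        # Step 3: recompute each center as the mean of its cluster
        for c in range(k):
            members = [vectors[i] for i, a in enumerate(assign) if a == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centers
```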

    2 Improved K-means algorithm

The K-means algorithm is used for text clustering. When the resulting clusters are compact and the differences between clusters are obvious, its results are good. However, because the initial center points are selected randomly, the algorithm often ends at a local optimum. In addition, it is sensitive to "noise" and outlier data; small amounts of such data can greatly affect the result.

Although there is no guarantee of achieving a global optimum, the K-means algorithm at least converges[19]. It is therefore very important to choose proper initial cluster centers for the K-means algorithm, which obtains better results when the initial centroid points are close to the final solution[20]. Therefore, this paper presents a similarity matrix-based K-means text clustering algorithm; the specific process is as follows.

Step 1 uses the VSM to represent each document as a vector and calculates the similarity between every two documents using the cosine distance. Assume there are n documents; then the similarity matrix S is S = (s_ij)_{n×n}, where s_ij = Sim(d_i, d_j), i, j = 1, …, n.

Step 2 sums each row of the matrix S according to the following formula:

s_i = \sum_{j=1}^{n} s_{ij}, \quad i = 1, \dots, n    (4)

This calculates the similarity between each document and the whole document set.

Step 3 ranks s_i (i = 1, …, n) in descending order; the maximum is denoted s_l, and the document d_l corresponding to s_l is selected as the first initial center point.

Step 4 ranks s_lj (j = 1, …, n) in ascending order. Without loss of generality, let the K-1 minimum values be s_lj (j = 1, …, K-1), and select the documents d_j (j = 1, …, K-1) corresponding to them as the remaining K-1 initial center points. Thus d_l and d_j (j = 1, …, K-1) constitute the K initial center points, where K

Step 5 assigns each of the remaining documents (those not chosen as initial cluster centers) to its nearest cluster center according to its similarity with the center, so that K clusters are obtained.

    Step 6 calculates the mean value of each changed cluster as the new cluster center.

Step 7: if the new cluster centers are the same as the previous ones, exit the algorithm; otherwise go to Step 5.

The logic workflow of this algorithm is shown in Fig.2.
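The initialization of Steps 1-4 can be sketched as follows, assuming the similarity matrix S has already been computed; the function name `initial_centers` is an illustrative choice:

```python
def initial_centers(sim_matrix, k):
    """Select K initial center indices from the similarity matrix S.

    The document most similar to the whole collection (largest row sum,
    Eq. (4)) becomes the first center; the K-1 documents least similar
    to it become the remaining centers.
    """
    n = len(sim_matrix)
    row_sums = [sum(row) for row in sim_matrix]           # Eq. (4)
    l = max(range(n), key=lambda i: row_sums[i])          # Step 3: first center
    # Step 4: the K-1 documents with the smallest similarity to d_l
    others = sorted((j for j in range(n) if j != l),
                    key=lambda j: sim_matrix[l][j])
    return [l] + others[:k - 1]
```

These indices replace the random draw of Step 1 in the traditional algorithm; Steps 5-7 then proceed as ordinary K-means iterations.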

From the K-means algorithm it is not difficult to see that K points are chosen as initial cluster centers at the initial stage, and the algorithm then iterates on the basis of them. If the initial center points differ, the clustering results may vary significantly; the results are highly dependent on the initial values, which may make them unstable. The algorithm proposed in this paper is based on the similarity matrix: it first selects the most central document among all documents, and then selects the K-1 documents at the greatest distance from it. Each iteration uses the mean value of each cluster as the judgment standard; if two consecutive iterations give the same result, the algorithm terminates, otherwise iteration continues.

    Fig.2 Logic workflow of improved algorithm

Theoretically, the K initial points selected by the proposed method are as close to the true cluster centers as possible while belonging to different clusters as far as possible, so the method can reduce the number of iterations and improve the accuracy of clustering. Even when the clusters are very sparse, the algorithm can still obtain good clustering results, reducing the impact of "noise" and outlier data. The improved K-means algorithm also avoids the empty-cluster problem that plagues the traditional K-means algorithm and is likely to lead to an unsuitable solution. In the next section, experiments validate this conclusion.

    3 Experiments and analysis

    3.1 Evaluation method of text clustering

In this paper, the F-measure is applied as the evaluation method for text clustering. The F-measure is an external evaluation: the clustering results are compared with previously specified reference categories, measuring how far the clustering results deviate from human judgment. For cluster j and correct category i, the formulas for the precision rate P, recall rate R, and F-measure F(i, j) are as follows:

P(i, j) = \frac{N_{ij}}{N_j}    (5)

R(i, j) = \frac{N_{ij}}{N_i}    (6)

F(i, j) = \frac{2\,P(i, j)\,R(i, j)}{P(i, j) + R(i, j)}    (7)

where N_i denotes the number of texts in the correct category i, N_j denotes the number of texts in cluster j, and N_ij denotes the number of texts in cluster j that originally belong to the correct category i.
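A minimal sketch of Eqs. (5)-(7), with the counts N_ij, N_i, N_j passed in directly; the function name `f_measure` is illustrative:

```python
def f_measure(n_ij, n_i, n_j):
    """Precision, recall and F for cluster j vs. category i (Eqs. (5)-(7))."""
    p = n_ij / n_j                           # Eq. (5): purity of cluster j
    r = n_ij / n_i                           # Eq. (6): coverage of category i
    f = 2 * p * r / (p + r) if p + r else 0.0  # Eq. (7): harmonic mean
    return p, r, f
```

For example, a cluster of 50 texts all drawn from a category of 100 texts has precision 1.0, recall 0.5, and F = 2/3.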

    3.2 Datasets

In the experiments, texts are downloaded from standard corpora. To evaluate performance in different languages, the classical Reuters-21578 corpus and the CCL (Peking University modern Chinese) corpus are used. Nine categories are chosen from each corpus: Politics, Economics, Sports, Society, Weather, Science, Military, Literature, and Culture. Each category contains 2000 documents.

    3.3 Experimental results

In order to evaluate the improvement of the improved K-means algorithm in terms of F-measure, experiments compare it with the traditional K-means algorithm, the DE-based K-means algorithm, and the filtering algorithm. The experimental results are shown in Tab.1 and Tab.2.

    Tab.1 F-measure comparison in English text clustering

    Tab.2 F-measure comparison in Chinese text clustering

Meanwhile, in order to visually illustrate the effectiveness of the initial centroids selected by the proposed method, two-dimensional points are sampled from three Gaussians. The experimental results are shown in Figs.3-6.

    Fig.3 K-means solution from random initial centers

    Fig.4 Proposed solution from proposed initial centers

    Fig.5 DE+K-means solution from its initial centers

    Fig.6 Filtering algorithm solution from its initial centers

Fig.3 shows random initial center points and the corresponding solution of the traditional K-means algorithm. Fig.4 shows the initial points chosen using the similarity matrix and the results of the algorithm proposed in this paper. Fig.5 shows the initial center points chosen by the DE module and the results of the DE-based K-means algorithm. Fig.6 shows the initial center points and the results of the filtering algorithm.

    3.4 Analysis of experimental results

It can be seen from Tab.1 and Tab.2 that, compared with the traditional K-means algorithm, the F-measure of the improved K-means algorithm is obviously improved, and the experimental results are relatively stable. Compared with the DE-based K-means algorithm and the filtering algorithm, the improved K-means algorithm also has a higher F-measure. This shows that the improved algorithm to a certain degree avoids the local-minimum problem encountered when the traditional K-means algorithm converges. The initial center points chosen by the improved algorithm are representative, so the improved K-means algorithm obtains a higher F-measure and more stable clustering results.

Figs.3-6 show that the proposed initial points are very close to the true solution, so it can be concluded that the improved algorithm has better performance.

    4 Conclusion

The K-means algorithm is sensitive to initial center points in document clustering, so this paper proposes an improved K-means algorithm based on the similarity matrix. The improved algorithm avoids the random selection of initial center points; it provides effective initial points for the clustering process that are very close to the true solution and reduces the fluctuation of clustering results caused by initial points, thus obtaining better clustering quality. The experimental results also show that the F-measure of the improved K-means algorithm is greatly improved and the clustering results are more stable. The effectiveness of the initial centroids selected by the proposed method has also been visualized through experiments.

However, there is still room to improve the algorithm by determining the number of clusters automatically. The traditional K-means algorithm has two flaws: first, it is easily influenced by outlier data, which can lead to poor clustering results; second, it is influenced by the number of clusters K, whose suitability has a great impact on the clustering results. Effective selection of initial points only solves the first flaw, not the second. Therefore, a direction for future research is how to adaptively determine the value of K.

    [1] Shi Z Z. Knowledge discovery[M]. Beijing: Tsinghua University Press, 2002.

    [2] Han J, Kamber M. Data mining: concepts and techniques[M]. San Francisco: Morgan Kaufmann Publishers, 2000.

    [3] Grabmeier J, Rudolph A. Techniques of cluster algorithms in data mining[J]. Data Mining and Knowledge Discovery, 2002, 6(4):303-360.

    [4] Meyer C D, Wessell C D. Stochastic data clustering[J]. SIAM Journal on Matrix Analysis and Applications, 2012, 33(4): 1214-1236.

    [5] Hammouda K M, Kamel M S. Efficient phrase-based document indexing for web document clustering[J]. IEEE Transactions on Knowledge and Data Engineering, 2004, 16(10):1279-1296.

    [6] Rousseeuw P J, Kaufman L. Finding groups in data: an introduction to cluster analysis[M].New York: John Wiley & Sons, 2009.

    [7] Gnanadesikan R. Methods for statistical data analysis of multivariate observations[M]. New York: John Wiley & Sons, 2011.

[8] Huang J Z, Ng M K, Rong H, et al. Automated variable weighting in K-means type clustering[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(5):657-668.

    [9] Celebi M E, Kingravi H A, Vela P A. A comparative study of efficient initialization methods for the k-means clustering algorithm[J]. Expert Systems with Applications, 2013, 40(1): 200-210.

    [10] Shameem M U S, Ferdous R. An efficient k-means algorithm integrated with Jaccard distance measure for document clustering[C]∥AH-ICI 2009, First Asian Himalayas International Conference on Internet, 2009: 1-6.

[11] Li D, Li X B. A modified version of the K-means algorithm based on the shape similarity distance[J]. Applied Mechanics and Materials, 2014, 457: 1064-1068.

    [12] Bagirov A M, Ugon J, Webb D. Fast modified global k-means algorithm for incremental cluster construction[J]. Pattern Recognition, 2011, 44(4): 866-876.

    [13] Tzortzis G, Likas A. The MinMax k-means clustering algorithm[J]. Pattern Recognition, 2014, 47(7): 2505-2516.

    [14] Khan S S, Ahmad A. A cluster center initialization algorithm for K-means clustering[J]. Pattern Recognition Letters, 2004, 25(11):1293-1302.

    [15] Aliguliyev R M. Clustering of document collection a weighting approach[J]. Expert Systems with Applications, 2009, 36(4):7904-7916.

    [16] Abraham A, Das S, Konar A. Document clustering using differential evolution[C]∥CEC 2006 IEEE Congress on Evolutionary Computation, 2006: 1784-1791.

    [17] Kanungo T, Mount D M, Netanyahu N S, et al. An efficient K-means clustering algorithm: analysis and implementation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002,24(7):881-892.

    [18] Salton G, Wong A, Yang C S. A vector space model for automatic indexing[J]. Communications of the ACM, 1975, 18(11): 613-620.

    [19] Selim S Z, Ismail M A. K-means-type algorithms: a generalized convergence theorem and characterization of local optimality[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1984(1): 81-87.

    [20] Jain A K, Dubes R C. Algorithms for clustering data[M]. Englewood Cliffs:Prentice Hall, 1988.

    (Edited by Wang Yuxia)

    10.15918/j.jbit1004-0579.201524.0421

TP 391.1    Document code: A    Article ID: 1004-0579(2015)04-0566-07

Received 2014-04-14

    E-mail: caoqiminisbest@163.com
