High Technology Letters, 2016, Issue 3

    MapReduce based computation of the diffusion method in recommender systems①

Peng Fei (彭 飛)* **, You Jiali*, Zeng Xuewen*, Deng Haojiang*

(To whom correspondence should be addressed. E-mail: pengf@dsp.ac.cn. Received on July 26, 2015)

(*National Network New Media Engineering Research Center, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, P.R. China)
(**University of Chinese Academy of Sciences, Beijing 100049, P.R. China)

The performance of existing diffusion-based algorithms in recommender systems is limited by the processing ability of a single computer. In order to conduct the diffusion computation on large data sets, a parallel implementation of the classic diffusion method on the MapReduce framework is proposed. First, the diffusion computation is transformed from a summation format into a cascade matrix multiplication format. Then, a parallel matrix multiplication algorithm based on dynamic vectors is proposed to reduce the CPU and I/O cost on the MapReduce framework; it can also be applied to other parallel matrix multiplication scenarios. Block partitioning is used to further improve the performance, and the order of matrix multiplication is also taken into consideration. Experiments on different kinds of data sets verify the efficiency of the proposed method.

Key words: MapReduce, recommender system, diffusion, parallel, matrix multiplication

    0 Introduction

Recommender systems[1] adopt knowledge discovery techniques to provide personalized recommendations. They are now considered the most promising way to efficiently filter out information overload. Thus far, recommender systems have been successfully applied in e-commerce, such as book recommendations on Amazon.com[2], movie recommendations on Netflix.com[3], and so on.

Collaborative filtering (CF) is currently the most successful technique in the design of recommender systems[4]: a user is recommended items that people with similar tastes liked in the past. As the CF technique evolved, diffusion-based algorithms were proposed for better prediction accuracy. Huang et al.[5] proposed a CF algorithm based on an iterative diffusion process. Considering the system as a user-item bipartite network, Zhou et al.[6] proposed an algorithm based on two-step diffusion. Zhang et al.[7] proposed an iterative opinion diffusion algorithm to predict ratings on Netflix.com.

As the size of data grows rapidly, many researchers have focused on the design of distributed recommender algorithms. Jiang et al.[8] and Zhao et al.[9] proposed an item-based CF algorithm and a user-based CF algorithm on Hadoop[10], respectively. Schelter et al.[11] proposed a KNN algorithm based on user similarity and implemented it on the MapReduce framework[12]. However, there is little research on diffusion-based recommender algorithms on the MapReduce framework.

Since diffusion-based recommender methods are based on graphs, matrix multiplication can be used to perform the computation task, which facilitates parallel processing as shown in the next sections. Li et al.[13] used a parallel matrix multiplication method for the similarity calculation in recommender systems. They proposed a single tuple method (STM) and a row divided method (RDM) to implement matrix multiplication on the MapReduce framework. STM is rather inefficient, as will be explained in the following sections. RDM requires the servers in the Hadoop cluster to keep the whole matrix in memory, so it cannot be used on large data sets. Zheng et al.[14] proposed a parallel matrix multiplication algorithm based on vector linear combination. However, it still needs the servers to keep the whole matrix in memory in the reduce step, so it suffers from the same problem as RDM.

In order to make diffusion-based recommender methods applicable to large data sets, a parallel cascade matrix multiplication algorithm is proposed to realize the classic diffusion method[6] on the MapReduce framework. The contributions of this work are as follows:

(1) The classic diffusion method in recommender systems is transformed from the summation format to the cascade matrix multiplication format, which facilitates parallel processing.

(2) A parallel matrix multiplication algorithm based on dynamic vectors is proposed on the MapReduce framework, which reduces the CPU and I/O cost effectively. In addition, the algorithm is improved by block partitioning, and the order of matrix multiplication is also taken into consideration to enhance performance.

(3) Experiments are conducted on different kinds of data sets, including MovieLens and Jester, to verify the effectiveness of the proposed method.

The rest of this paper is organized as follows. Section 1 gives the background of the study. Section 2 introduces the vectorization of the classic diffusion method and describes our dynamic vector based matrix multiplication algorithm. A performance analysis is presented in Section 3. The study is concluded in Section 4.

    1 Preliminaries

    1.1 The diffusion method on bipartite graphs

A bipartite graph can be used to represent the input of a recommender system. The vertices of the bipartite graph consist of two sets: users U={U1, U2, …, Um} and items I={I1, I2, …, In}. The user-item relation can be described by an adjacency matrix A: if Ui has used Ij, then A(i,j)=1; otherwise A(i,j)=0. Fig.1 shows an illustration consisting of three users and five items.

    Fig.1 Illustration of the diffusion process on a bipartite graph

Suppose that a kind of resource is initially located on the items. Each item distributes its resource evenly to all connected users, and then each user redistributes the received resource evenly to the connected items. Plot (a) shows the initial condition with U1 as the target user, and plot (b) describes the result after the first diffusion step, during which the resources are transferred from items to users. Eventually, the resources flow back to the items; the result is shown in plot (c).

Denote r(0)(Ui) as the initial resource vector on the items for target user Ui, where rj is the amount of resource located on Ij. The final resource vector after the two-step diffusion[6] is shown in

$$r_j^{(2)}(U_i)=\sum_{l=1}^{n}\sum_{s=1}^{m}\frac{A(s,l)\,A(s,j)}{k(I_l)\,k(U_s)}\,r_l^{(0)}(U_i) \qquad (1)$$

where k(Il) and k(Us) denote the degrees of item Il and user Us, and the initial resource is set from the target user's history records:

$$r_l^{(0)}(U_i)=A(i,l),\quad l=1,2,\ldots,n \qquad (2)$$

In this case, the initial resource can be understood as assigning a corresponding recommending capacity to each connected item based on the history records, and the different initial resource vectors for different users capture the personalized preferences. The final resource vector r(2)(Ui) is obtained by Eq.(1). Ui's unconnected items are sorted in descending order of their final resource values, and the items with the highest values are recommended. This algorithm was originally motivated by the resource-allocation process on graphs, and has been shown to be more accurate than traditional collaborative filtering on the MovieLens data set[15].
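To make the two-step process concrete, the following minimal NumPy sketch computes the summation-format diffusion for one target user. The 3×5 adjacency matrix is only illustrative; the exact edges of Fig.1 are not reproduced here.

```python
import numpy as np

# Illustrative 3-user x 5-item adjacency matrix (not the exact graph of Fig.1).
A = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 0],
              [1, 0, 1, 0, 1]], dtype=float)

k_U = A.sum(axis=1)   # user degrees k(U_i)
k_I = A.sum(axis=0)   # item degrees k(I_j)

def diffuse(i):
    """Two-step diffusion for target user U_i (Eqs.(1) and (2))."""
    r0 = A[i]                        # Eq.(2): each connected item gets one unit
    r1 = (A / k_I) @ r0              # step 1: items spread resource to users
    r2 = (A / k_U[:, None]).T @ r1   # step 2: users spread it back to items
    return r2

print(diffuse(0))   # rank U_1's unconnected items by these final values
```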

    1.2 MapReduce computing model

MapReduce is a parallel computing model[12]. It splits an input data set into several parts. Each mapper deals with one part, produces key/value pairs and writes them into intermediate files. The key/value pairs are partitioned and emitted to reducers to calculate the final result. In this work, all implementations are based on the Hadoop platform.
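As a conceptual illustration (not the Hadoop API), the following single-machine Python sketch mimics the map, shuffle and reduce flow described above:

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Single-machine sketch of the map -> shuffle -> reduce flow."""
    groups = defaultdict(list)
    for record in records:                 # map: emit key/value pairs
        for key, value in mapper(record):
            groups[key].append(value)      # shuffle: group values by key
    return {key: reducer(key, values)      # reduce: one call per distinct key
            for key, values in groups.items()}

# Example: count ratings per item from {user, item, value} three-tuples.
ratings = [(1, 'I1', 5), (2, 'I1', 3), (2, 'I2', 4)]
print(map_reduce(ratings, lambda r: [(r[1], 1)], lambda k, vs: sum(vs)))
# {'I1': 2, 'I2': 1}
```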

    2 MapReduce based computation of the diffusion method

    2.1 Vectorization of the diffusion method

The diffusion result is calculated by summation, as shown in Eq.(1). Such a format is not convenient for parallel processing. Vectorizing the diffusion process facilitates the parallel computation.

In the first step of diffusion, resources are transferred from items to users. The transition matrix is

$$T^{(1)}=A^{\mathrm{T}}\;./\;K_I \qquad (3)$$

where AT denotes the transpose of A, and "./" performs right-array division, dividing each element of AT by the corresponding element of KI. KI is an n×m matrix whose elements in row j are all initialized to k(Ij):

$$K_I=\begin{pmatrix}k(I_1)&\cdots&k(I_1)\\ \vdots&\ddots&\vdots\\ k(I_n)&\cdots&k(I_n)\end{pmatrix} \qquad (4)$$

After the first step, the resource vector on Ui is r(1)(Ui) = r(0)(Ui)·T(1).

Similarly, the transition matrix in the second step of diffusion is

$$T^{(2)}=A\;./\;K_U \qquad (5)$$

where KU is an m×n matrix whose elements in row i are all initialized to k(Ui):

$$K_U=\begin{pmatrix}k(U_1)&\cdots&k(U_1)\\ \vdots&\ddots&\vdots\\ k(U_m)&\cdots&k(U_m)\end{pmatrix} \qquad (6)$$

Finally, the resources on each item are given by Eq.(7), which is the vectorization of Eq.(1):

$$r^{(2)}(U_i)=r^{(1)}(U_i)\cdot T^{(2)}=r^{(0)}(U_i)\cdot T^{(1)}\cdot T^{(2)} \qquad (7)$$

The computation complexity of a recommendation process for a target user is O(mn) based on Eq.(1) or Eq.(7). Online computation is impossible when the data set is large, so offline computation should be employed to make recommendations for each user. In order to compute the diffusion results of all users, Eq.(7) can be extended to the cascade matrix multiplication format shown in Eq.(8), where F is the diffusion result of all users in matrix form.

$$F=A\cdot T^{(1)}\cdot T^{(2)} \qquad (8)$$

Therefore, a cascade matrix multiplication approach is adopted to perform the diffusion computation task. In the next section, the MapReduce framework is used to design an efficient matrix multiplication algorithm.
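As a sanity check on the vectorization, the sketch below builds T(1) and T(2) from Eqs.(3)-(6) on the same illustrative matrix used in Section 1.1 and verifies that the cascade product of Eq.(8) reproduces the summation of Eq.(1):

```python
import numpy as np

# Same illustrative 3-user x 5-item adjacency matrix as in Section 1.1.
A = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 0],
              [1, 0, 1, 0, 1]], dtype=float)
m, n = A.shape
k_U, k_I = A.sum(axis=1), A.sum(axis=0)

T1 = A.T / k_I[:, None]   # Eq.(3): row l of K_I holds k(I_l)
T2 = A / k_U[:, None]     # Eq.(5): row s of K_U holds k(U_s)
F = A @ T1 @ T2           # Eq.(8): diffusion results for all users at once

# Cross-check one user against the summation format of Eq.(1).
i = 0
r2 = np.array([sum(A[i, l] * A[s, l] * A[s, j] / (k_I[l] * k_U[s])
                   for l in range(n) for s in range(m))
               for j in range(n)])
assert np.allclose(F[i], r2)
```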

    2.2 Parallelism of matrix multiplication

Eq.(8) entails three matrix-matrix multiplications, so the core of the diffusion method is to perform the matrix multiplication in parallel. In this section, the single tuple method (STM)[13] based on the MapReduce framework is introduced first. Then a dynamic vector based method (DVBM) is proposed, which decreases the CPU and I/O cost effectively. DVBM is further improved by block partitioning. In addition, the order of matrix multiplication is also taken into consideration.

    2.2.1 Single Tuple Method

Suppose A is an m×t matrix and B is a t×n matrix. Then the elements of matrix C = A·B can be calculated as

$$C(i,j)=\sum_{k=1}^{t}A(i,k)\,B(k,j) \qquad (9)$$

So an element of matrix C can be obtained as the inner product of the corresponding row vector of matrix A and column vector of matrix B, as shown in Fig.2.

    Fig.2 Matrix multiplication in STM

In recommender systems, the user history records are usually in a three-tuple format {Ui, Ij, value}, which is the case for both the MovieLens and Jester data sets. Based on this input format, the MapReduce process of STM is shown in Table 1.

In the map procedure, {i,k,A(i,k)} and {k,j,B(k,j)} denote the original input formats of matrix A and matrix B respectively, while 〈{i,j},{0,k,A(i,k)}〉 and 〈{i,j},{1,k,B(k,j)}〉 denote the intermediate key/value pairs used for sort and shuffle. Each 〈{i,j},{0,k,A(i,k)}〉 is emitted n times, while each 〈{i,j},{1,k,B(k,j)}〉 is emitted m times. The 0 in {0,k,A(i,k)} indicates that the pair comes from matrix A, and the 1 in {1,k,B(k,j)} indicates that it comes from matrix B. In the reduce procedure, C(i,j) is obtained as described in Eq.(9).

    Table 1 MapReduce algorithm of STM
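Since Table 1's pseudocode is not reproduced in this excerpt, the following single-machine sketch shows the emission pattern that STM implies; the function name and dict-based shuffle are illustrative stand-ins for the MapReduce runtime.

```python
from collections import defaultdict

def stm(A_entries, B_entries, m, n):
    """Single-machine sketch of STM. A_entries maps (i, k) -> A(i,k) and
    B_entries maps (k, j) -> B(k,j); only nonzero entries are stored."""
    groups = defaultdict(list)
    # Map: every A entry is copied n times, every B entry m times.
    for (i, k), a in A_entries.items():
        for j in range(n):
            groups[(i, j)].append((0, k, a))      # tag 0: comes from A
    for (k, j), b in B_entries.items():
        for i in range(m):
            groups[(i, j)].append((1, k, b))      # tag 1: comes from B
    # Reduce: inner product per (i, j) cell, as in Eq.(9).
    C = {}
    for (i, j), vals in groups.items():
        a_row = {k: v for tag, k, v in vals if tag == 0}
        b_col = {k: v for tag, k, v in vals if tag == 1}
        C[(i, j)] = sum(v * b_col.get(k, 0) for k, v in a_row.items())
    return C
```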

    2.2.2 Dynamic vector based method

However, when STM is applied to the MovieLens-1M data set, it cannot be completed in reasonable time. It is time-consuming because the elements of the matrices are read and written one by one. There are d(A)·mt + d(B)·tn elements to read and (d(A)+d(B))·mtn elements to write in the map procedure, where d(·) denotes the density of the corresponding matrix. That also means there are (d(A)+d(B))·mtn intermediate elements to sort and shuffle, which is both CPU and I/O consuming.
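A back-of-the-envelope count shows the scale of the blow-up; the dimensions below are illustrative, not one of the paper's test cases.

```python
# Map-side reads/writes of STM for a 1000 x 1000 x 1000 multiplication
# at 5% density (illustrative numbers only).
m = t = n = 1000
dA = dB = 0.05
reads = dA * m * t + dB * t * n          # elements read one by one
writes = (dA + dB) * m * t * n           # intermediate key/value pairs
print(f"{reads:.0f} reads vs {writes:.0f} writes")   # 100000 vs 100000000
```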

If the elements could be read and written in batches, the CPU and I/O cost would decrease. Based on this idea, a dynamic vector based method is proposed to decrease the frequency of reads and writes, and thereby the CPU and I/O cost.

The elements in the same row or column are compressed into a single vector. The vectors are stored in different formats according to their density. Take a row vector as an example: if its density is larger than β, the vector is stored in an array format; otherwise, it is stored in a key/value format, as shown in Eq.(10). β is a parameter that controls the density threshold; in the following experiments, β=50%. Let i be the row index, and let Ac(i,:) denote the compressed format of row i in matrix A. The second element of a vector represents the compression type: 0 indicates the array format and 1 indicates the key/value format. kv (1≤v≤w) denotes the indices of the nonzero elements in the row.

$$A_c(i,:)=\begin{cases}\{\,i,\;0,\;A(i,1),\;A(i,2),\;\ldots,\;A(i,t)\,\}, & d(A(i,:))>\beta\\ \{\,i,\;1,\;k_1,\;A(i,k_1),\;\ldots,\;k_w,\;A(i,k_w)\,\}, & \text{otherwise}\end{cases} \qquad (10)$$
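A minimal sketch of this compression rule follows; the list layout mirrors Eq.(10), while the helper name and exact on-disk encoding are assumptions.

```python
def compress_row(i, row, beta=0.5):
    """Compress one matrix row into the dynamic vector format of Eq.(10)."""
    nonzero = [(k, v) for k, v in enumerate(row) if v != 0]
    if len(nonzero) / len(row) > beta:
        return [i, 0] + list(row)                    # type 0: array format
    flat = [x for k, v in nonzero for x in (k, v)]
    return [i, 1] + flat                             # type 1: key/value format

print(compress_row(0, [0, 3, 0, 0, 5]))   # -> [0, 1, 1, 3, 4, 5]
print(compress_row(1, [1, 2, 3, 0, 4]))   # -> [1, 0, 1, 2, 3, 0, 4]
```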

DVBM contains two MapReduce jobs, as shown in Table 2. The aim of the first MapReduce job is to compress the matrix from the original three-tuple format into the vector format. In the map procedure, each element is emitted according to its row or column number. In the reduce procedure, the reducer collects the elements that belong to the same row or column and compresses them into a row or column vector based on Eq.(10).

Table 2 MapReduce algorithm of DVBM

In the second job, the compressed vectors are taken as input to implement the matrix multiplication. In the map procedure, Ac(i,:) and Bc(:,j) are read and mapped to 〈{i,j},{0,Ac(i,:)}〉 and 〈{i,j},{1,Bc(:,j)}〉. Each 〈{i,j},{0,Ac(i,:)}〉 is emitted n times, while each 〈{i,j},{1,Bc(:,j)}〉 is emitted m times. In the reduce procedure, Ac(i,:) and Bc(:,j) are decompressed to A(i,:) and B(:,j), and C(i,j) is obtained as described in Eq.(9).
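The reduce-side logic of the second job can be sketched as follows, reusing the Eq.(10) layout from the previous snippet; `decompress` and `reduce_cell` are illustrative names.

```python
def decompress(vec, length):
    """Invert Eq.(10): recover a full row/column from a dynamic vector."""
    idx, ctype, payload = vec[0], vec[1], vec[2:]
    if ctype == 0:                          # array format: values are verbatim
        return idx, list(payload)
    full = [0] * length                     # key/value format: scatter nonzeros
    for k, v in zip(payload[0::2], payload[1::2]):
        full[k] = v
    return idx, full

def reduce_cell(ac, bc, t):
    """Reduce step for one (i, j) cell: decompress, then Eq.(9)'s inner product."""
    i, a_row = decompress(ac, t)
    j, b_col = decompress(bc, t)
    return (i, j), sum(a * b for a, b in zip(a_row, b_col))

print(reduce_cell([0, 1, 1, 3, 4, 5], [2, 0, 1, 0, 2, 0, 1], 5))
# -> ((0, 2), 5): row [0,3,0,0,5] dotted with column [1,0,2,0,1]
```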

    2.2.3 Dynamic vector based method with block

In DVBM, each reducer only involves two vectors, which does not make full use of the computing power of each worker. More vectors can be allocated to a reducer, which helps decrease the copy frequency in the map procedure. When the copy frequency is cut down, the CPU and I/O cost both decrease.

Based on the above idea, matrix A is partitioned into several sub-matrices by row, and matrix B is partitioned into several sub-matrices by column, as shown in Fig.3. Each sub-matrix is called a block. (The term block is also used in file systems to refer to a unit of storage space, which is not the meaning here.) The rows and columns in the same pair of blocks are mapped to a single reducer, which calculates the corresponding elements of matrix C in batch.

    Fig.3 Matrix multiplication in DVBMwB

Let S be the block size, BI denote the block index, and NB denote the number of blocks. The relations between row/column index and block index are expressed by Eq.(11) and Eq.(12), and the numbers of blocks are given by Eq.(13) and Eq.(14) (all divisions are integer divisions):

$$BI(A_c(i,:))=i/S \qquad (11)$$

$$BI(B_c(:,j))=j/S \qquad (12)$$

$$NB(A)=(m+1)/S,\quad S>1,\;m>0 \qquad (13)$$

$$NB(B)=(n+1)/S,\quad S>1,\;n>0 \qquad (14)$$
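A sketch of this block bookkeeping is given below. Ceiling division over 0-based indices is assumed here, which may differ slightly from the exact constants of Eq.(13) and Eq.(14), and the MovieLens-1M dimensions are the publicly documented figures (6040 users, 3952 item IDs), not values quoted from Table 4.

```python
S = 100                                    # block size used in the experiments

def block_index(idx, S=S):
    return idx // S                        # Eq.(11)/(12): integer division

def num_blocks(dim, S=S):
    return (dim + S - 1) // S              # ceiling division, 0-based reading

# Map-side write frequency of DVBMwB's second job: NB(B)*m + NB(A)*n copies,
# versus 2*m*n for plain DVBM.
m, n = 6040, 3952                          # MovieLens-1M (public figures)
print(num_blocks(n) * m + num_blocks(m) * n)   # ~4.8e5
print(2 * m * n)                               # ~4.8e7
```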

The dynamic vector based method with block (DVBMwB) modifies only the second job of DVBM, as shown in Table 3. In the map procedure, each Ac(i,:) is emitted NB(B) times, while each Bc(:,j) is emitted NB(A) times, which saves substantial copy and I/O cost compared with DVBM. In the reduce procedure, the Ac(i,:) and Bc(:,j) vectors are decompressed to A(i,:) and B(:,j) in batch, and C(i,j) is obtained in the same way as in DVBM.

    Table 3 MapReduce algorithm of DVBMwB

    2.2.4 The order of matrix multiplication

The sequence in which the multiplications of Eq.(8) are performed also affects performance. An in-order calculation yields O(|U|²|I|) time complexity, whereas a reversed order yields O(|U||I|²). The decision should be based on the relative sizes of the user set and the item set: when the number of users is larger, the O(|U||I|²) order is obviously more favorable, and vice versa.
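Counting scalar multiplications makes the asymmetry concrete; the sketch below treats the matrices as dense, which is the worst case.

```python
def cascade_cost(m, n):
    """Scalar-multiplication counts for the two evaluation orders of Eq.(8),
    where A is m x n, T1 is n x m and T2 is m x n (dense worst case)."""
    in_order = m * n * m + m * m * n      # (A*T1) is m x m, then times T2
    reversed_ = n * m * n + m * n * n     # (T1*T2) is n x n, then A times it
    return in_order, reversed_

# More users than items (as in MovieLens-1M and Jester): reversed order wins.
print(cascade_cost(m=6040, n=3952))       # in-order ~2.9e11, reversed ~1.9e11
```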

    3 Experiment

    3.1 Experimental environment

A Hadoop cluster with 3 PCs is constructed. Each machine has a 4-core 2.40GHz Xeon(R) processor and 4GB of memory. The Hadoop version is 1.2.1.

    3.2 Data description

In this study, two representative data sets, MovieLens-1M and Jester, are used to evaluate the proposed MapReduce algorithms. The MovieLens data sets are collected from the MovieLens web site. The Jester data set was released by Ken Goldberg of UC Berkeley from the Jester Joke Recommender System. The features of each data set, including the user number, item number, rating number and density, are shown in Table 4. The two data sets are of quite different density, which provides a more comprehensive verification of our methods.

    Table 4 Features of the data sets

    3.3 Comparison and Analysis

The performance of STM, DVBM and DVBMwB is compared, and the compression ratio of the dynamic vector is also measured. After that, the performance of DVBMwB is evaluated with different block sizes on the MovieLens-1M and Jester data sets, and experiments are conducted to verify that the order of matrix multiplication is very important.

3.3.1 STM vs DVBM vs DVBMwB

As STM cannot complete the diffusion-based recommendation for the MovieLens-1M and Jester data sets in a reasonable time, several smaller tasks with different user/item dimensions and matrix densities are generated to compare the performance of STM, DVBM and DVBMwB. The user/item dimensions include 200×200 and 400×400, and the matrix densities are 5% and 10%. The results are shown in Table 5. The "intermediate matrix density" is the density of the matrix produced by the first step of diffusion; it is usually much higher than that of the original input matrix, which leads to more time consumption in the second step of diffusion. The block size of DVBMwB is 100.

It can be seen that DVBM and DVBMwB perform much better than STM. The main factor that affects the CPU and I/O cost is the read/write frequency. On one hand, decreasing the read/write frequency improves I/O efficiency, since too many read/write operations lead to additional communication overhead in the MapReduce framework. On the other hand, as the number of intermediate elements is equal to the write frequency of the map procedure, less CPU time is required for sort and shuffle if the write frequency goes down. The frequency difference between STM, DVBM and DVBMwB mainly comes from the map procedure of the matrix multiplication job. As analyzed in Section 2.2.1, the read and write frequencies of the map procedure in STM are d(A)·mt + d(B)·tn and (d(A)+d(B))·mtn respectively. In DVBM, the read and write frequencies of the first job are both d(A)·mt + d(B)·tn, while in the second job the read frequency is reduced to m+n and the write frequency is reduced to 2mn. The difference between DVBM and DVBMwB comes only from the write frequency of the second job's map procedure, which is further reduced to NB(B)·m + NB(A)·n. Although it takes two jobs to complete the matrix multiplication in DVBM and DVBMwB, the total read/write frequencies are cut down compared with STM, and the number of intermediate elements also decreases dramatically, which saves a large amount of CPU time. Besides, the dynamic vector format eliminates some redundant information compared with the original three-tuple format, which further reduces the I/O cost. So it is quite worthwhile to take an extra step to compress the matrices into vectors. The performance of STM in Table 5 confirms the above analysis. Meanwhile, DVBMwB takes much less time than DVBM, which indicates that block partitioning effectively improves performance.

Table 5 Comparison of STM, DVBM and DVBMwB

    3.3.2 Compression ability of dynamic vector

The sizes of the MovieLens-1M and Jester data sets before and after compression are shown in Table 6, together with the compression ratios. It is clear that the dynamic vector proposed in this paper compresses the data effectively.

    Table 6 Compression results on MovieLens-1M and Jester

Compression experiments are also conducted on two manually generated data sets of different densities. The dimensions of the generated data sets are 1000×1000 and 2000×2000, and the density ranges from 10% to 100%. The relation between matrix density and compression ratio is illustrated in Fig.4. When the density exceeds 50%, the compression ratio becomes much better (smaller): the denser the data set, the better the compression ratio.

    Fig.4 Relation between density and compression ratio
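This trend follows from Eq.(10)'s storage rule. The rough model below is an assumption (headers ignored, all numbers equal size), but it reproduces the shape of Fig.4: below the threshold the key/value format stores two numbers per nonzero against three in the three-tuple input, and above it the fixed-length array format keeps shrinking relative to the denser input.

```python
def compression_ratio(d, t=1000, beta=0.5):
    """Idealized size ratio of Eq.(10)'s vector format vs. three-tuple input.
    Input: 3 numbers per nonzero; array format: t numbers per row;
    key/value format: 2 numbers per nonzero (vector headers ignored)."""
    original = 3 * d * t
    compressed = t if d > beta else 2 * d * t
    return compressed / original

for d in (0.1, 0.3, 0.5, 0.7, 1.0):
    print(f"density {d:.0%}: ratio {compression_ratio(d):.2f}")
# ratio stays ~0.67 up to 50%, then falls toward 0.33 at full density
```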

    3.3.3 Impact of block size

In DVBMwB, as the block size increases, the copy and I/O cost decrease. However, the block size cannot be too large because of the memory limitation of a single computer. Experiments are conducted on the MovieLens-1M and Jester data sets with different block sizes; the results are shown in Table 7. As the number of users in both MovieLens-1M and Jester is larger than the number of items, the last two matrices are multiplied first to get better performance.

The time used for matrix compression is the same for all block sizes; the differences come only from the matrix multiplication procedure. When the block size is small, the copy and I/O cost lead to a rise in completion time. As the block size gets larger, the calculation does not keep speeding up; it slows down or even gets worse because of memory shortage. So it is necessary to determine a proper block size based on the servers and data sets under production workloads.

Table 7 DVBMwB with different block sizes on MovieLens-1M and Jester

    3.3.4 Impact of the order of matrix

In the previous sections, the matrix multiplication is done in reversed order. In order to show the influence of the multiplication order, the order of matrix multiplication on the MovieLens-1M data set is changed, with a block size of 100. As shown in Fig.5 and Fig.6, the time used for matrix compression and matrix multiplication both rise when the diffusion computation is done in order. Besides, the intermediate matrix density increases from 72.44% to 95.8% in the experiment.

    Fig.5 Matrix compression time on MovieLens-1M

    4 Conclusion

In this study, a parallel version of a classic diffusion algorithm is proposed on the MapReduce framework. The diffusion method is transformed into the cascade matrix multiplication format so as to implement it in the MapReduce computing model.

    Fig.6 Matrix multiplication time on MovieLens-1M

A novel dynamic vector based matrix multiplication algorithm is designed on the MapReduce framework, which improves the performance effectively and can also be applied to other matrix multiplication scenarios. Comprehensive experiments are conducted to verify the effectiveness of the method. The block size and the order of matrix multiplication both influence the time cost of the diffusion computation.

In the future, the matrix multiplication method will be extended to other graph-based scenarios on the MapReduce framework. Besides, large-scale computation frameworks such as GraphLab[16] will also be studied.

[1] Bobadilla J, Ortega F, Hernando A, et al. Recommender systems survey. Knowledge-Based Systems, 2013, 46(1): 109-132

[2] Linden G, Smith B, York J. Amazon.com recommendations: item-to-item collaborative filtering. Internet Computing, 2003, 7(1): 76-90

[3] Bennett J, Lanning S. The Netflix prize. In: Proceedings of the 2007 KDD Cup and Workshop, California, USA, 2007. 35

[4] Shi Y, Larson M, Hanjalic A. Collaborative filtering beyond the user-item matrix: a survey of the state of the art and future challenges. ACM Computing Surveys, 2014, 47(1): 3

[5] Huang Z, Chen H, Zeng D. Applying associative retrieval techniques to alleviate the sparsity problem in collaborative filtering. ACM Transactions on Information Systems, 2004, 22(1): 116-142

[6] Zhou T, Ren J, Medo M, et al. Bipartite network projection and personal recommendation. Physical Review E, 2007, 76(4): 70-80

[7] Zhang Y C, Medo M, Ren J, et al. Recommendation model based on opinion diffusion. Europhysics Letters, 2007, 80(6): 417-429

[8] Long G, Zhang G, Lu J, et al. Scaling-up item-based collaborative filtering recommendation algorithm based on Hadoop. In: Proceedings of the 2011 IEEE World Congress on Services, Washington, USA, 2011. 490-497

[9] Zhao Z D, Shang M S. User-based collaborative-filtering recommendation algorithms on Hadoop. In: Proceedings of the Third International Conference on Knowledge Discovery and Data Mining, Phuket, Thailand, 2010. 478-481

[10] Shvachko K, Kuang H, Radia S, et al. The Hadoop distributed file system. In: Proceedings of the 26th Symposium on Mass Storage Systems and Technologies, Incline Village, USA, 2010. 1-10

[11] Schelter S, Boden C, Markl V. Scalable similarity-based neighborhood methods with MapReduce. In: Proceedings of the 6th ACM Conference on Recommender Systems, New York, USA, 2012. 163-170

[12] Dean J, Ghemawat S. MapReduce: simplified data processing on large clusters. Communications of the ACM, 2008, 51(1): 107-113

[13] Li L N, Li C P, Chen H, et al. MapReduce-based SimRank computation and its application in social recommender system. In: Proceedings of the 2013 IEEE International Congress on Big Data, Santa Clara, USA, 2013. 133-140

[14] Zheng J H, Zhang L J, Zhu R, et al. Parallel matrix multiplication algorithm based on vector linear combination using MapReduce. In: Proceedings of the IEEE 9th World Congress on Services, Santa Clara, USA, 2013. 193-200

[15] Zhang Z K, Zhou T, Zhang Y. Personalized recommendation via integrated diffusion on user-item-tag tripartite graphs. Physica A: Statistical Mechanics and Its Applications, 2010, 389(1): 179-186

[16] Kyrola A, Blelloch G, Guestrin C. GraphChi: large-scale graph computation on just a PC. In: Proceedings of the 10th USENIX Symposium on Operating Systems Design and Implementation, California, USA, 2012. 31-46

Peng Fei, born in 1988. He is currently pursuing his Ph.D. degree at the National Network New Media Engineering Research Center, Institute of Acoustics, Chinese Academy of Sciences, and the University of Chinese Academy of Sciences. He received his B.S. degree from the Department of Electronics and Information Engineering at the Huazhong University of Science and Technology in 2010. His research interests include new media technologies and recommender systems.

    10.3772/j.issn.1006-6748.2016.03.008

① Sponsored by the National High Technology Research and Development Program of China (No. 2011AA01A102) and the Key Program of the Chinese Academy of Sciences (No. KGZD-EW-103-2).
