
High Technology Letters, 2016, No. 3

    MapReduce based computation of the diffusion method in recommender systems①

Peng Fei (彭 飛)*, You Jiali*, Zeng Xuewen*, Deng Haojiang*

Received on July 26, 2015. To whom correspondence should be addressed: Peng Fei, E-mail: pengf@dsp.ac.cn

(*National Network New Media Engineering Research Center, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, P.R. China) (**University of Chinese Academy of Sciences, Beijing 100049, P.R. China)

The performance of existing diffusion-based algorithms in recommender systems is still limited by the processing ability of a single computer. In order to conduct the diffusion computation on large data sets, a parallel implementation of the classic diffusion method on the MapReduce framework is proposed. First, the diffusion computation is transformed from a summation format to a cascade matrix multiplication format. Then, a parallel matrix multiplication algorithm based on dynamic vectors is proposed to reduce the CPU and I/O cost on the MapReduce framework; it can also be applied to other parallel matrix multiplication scenarios. Block partitioning is used to further improve the performance, and the order of matrix multiplication is also taken into consideration. Experiments on different kinds of data sets have verified the efficiency of the proposed method.

Keywords: MapReduce, recommender system, diffusion, parallel, matrix multiplication

    0 Introduction

Recommender systems[1] adopt knowledge discovery techniques to provide personalized recommendations. They are now considered the most promising way to efficiently filter out information overload. Thus far, recommender systems have been successfully applied in e-commerce, such as book recommendations in Amazon.com[2], movie recommendations in Netflix.com[3], and so on.

Collaborative filtering (CF) is currently the most successful technique in the design of recommender systems[4], where a user is recommended items that people with similar tastes liked in the past. As the CF technique evolved, several diffusion-based algorithms were proposed for better prediction accuracy. Huang et al.[5] proposed a CF algorithm based on an iterative diffusion process. Considering the system as a user-item bipartite network, Zhou et al.[6] proposed an algorithm based on two-step diffusion. Zhang et al.[7] proposed an iterative opinion diffusion algorithm to predict ratings in Netflix.com.

As the size of data grows rapidly, many researchers have focused on the design of distributed recommender algorithms. Long et al.[8] and Zhao et al.[9] proposed an item-based CF algorithm and a user-based CF algorithm based on Hadoop[10], respectively. Schelter et al.[11] proposed a KNN algorithm based on user similarity and implemented it on the MapReduce framework[12]. However, there is little research on diffusion-based recommender algorithms on the MapReduce framework.

As diffusion-based recommender methods operate on graphs, matrix multiplication can be used to perform the computation task, which facilitates parallel processing as shown in the next sections. Li et al.[13] used a parallel matrix multiplication method for the similarity calculation in recommender systems. They proposed a single tuple method (STM) and a row divided method (RDM) to implement the matrix multiplication computation on the MapReduce framework. STM is rather inefficient, as will be explained in the following sections. RDM requires servers in the Hadoop cluster to hold the whole matrix in memory, so it cannot be used on large data sets. Zheng et al.[14] proposed a parallel matrix multiplication algorithm based on vector linear combination. However, it still needs servers to hold the whole matrix in memory in the reduce step, so it suffers from the same problem as RDM.

In order to make diffusion-based recommender methods applicable on large data sets, a parallel cascade matrix multiplication algorithm is proposed to realize the classic diffusion method[6] on the MapReduce framework. The contributions of this work are as follows:

(1) The classic diffusion method in recommender systems is transformed from the summation format to the cascade matrix multiplication format, which facilitates parallel processing.

(2) A parallel matrix multiplication algorithm based on dynamic vectors on the MapReduce framework is proposed, which can reduce the CPU and I/O cost effectively. In addition, the algorithm is improved by block partitioning. The order of matrix multiplication is also taken into consideration to enhance performance.

(3) Experiments are conducted on different kinds of data sets, including MovieLens and Jester, to verify the effectiveness of the proposed method.

The rest of this paper is organized as follows. Section 1 gives the background information of the study. Section 2 introduces the vectorization of the classic diffusion method and describes our dynamic vector based matrix multiplication algorithm. A performance analysis is presented in Section 3. The study is concluded in Section 4.

    1 Preliminaries

    1.1 The diffusion method on bipartite graphs

A bipartite graph can be used to represent the input of a recommender system. In the bipartite graph, the vertices consist of two sets: users U={U1,U2,…,Um} and items I={I1,I2,…,In}. The user-item relation can be described by an adjacency matrix A. If Ui has used Ij, set A(i,j)=1; otherwise A(i,j)=0. Fig.1 shows an illustration consisting of three users and five items.

    Fig.1 Illustration of the diffusion process on a bipartite graph

Suppose that a kind of resource is initially located on the items. Each item distributes its resource evenly to all connected users, and then each user redistributes the received resource to connected items. Plot (a) shows the initial condition with U1 as the target user, and plot (b) describes the result after the first diffusion step, during which the resources are transferred from items to users. Eventually, the resources flow back to items, and the result is shown in plot (c).

Denote r(0)(Ui) as the initial resource vector on items for the target user Ui, where its component rj is the amount of resource located on Ij. The final resource vector after the two-step diffusion[6] is

$$r_j^{(2)}(U_i)=\sum_{l=1}^{n} w_{jl}\, r_l^{(0)}(U_i) \qquad (1)$$

$$w_{jl}=\frac{1}{k(I_l)}\sum_{i=1}^{m}\frac{A(i,j)\,A(i,l)}{k(U_i)} \qquad (2)$$

where k(Il) denotes the degree of item Il and k(Ui) denotes the degree of user Ui.

In this case, the initial resource can be understood as giving a corresponding recommending capacity to each connected item based on history records, and the different initial resource vectors for different users capture the personalized preferences. The final resource vector r(2)(Ui) is obtained by Eq.(1). Ui's unconnected items are sorted in descending order of their final resource values, and the items with the highest values are recommended. This algorithm was originally motivated by the resource-allocation process on graphs, and has been shown to be more accurate than traditional collaborative filtering on the MovieLens data set[15].

    1.2 MapReduce computing model

MapReduce is a parallel computing model[12]. It splits an input data set into several parts. Each mapper deals with one part, produces key/value pairs and writes them into intermediate files. The key/value pairs are partitioned and emitted to reducers to calculate the final result. In this work, all implementations are based on the Hadoop platform.
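To make this data flow concrete, the following minimal in-memory Python sketch (our own illustration, not part of the paper; the actual implementations run on Hadoop) simulates the map, shuffle/sort and reduce phases. The example records are hypothetical:

    from itertools import groupby
    from operator import itemgetter

    def run_mapreduce(records, mapper, reducer):
        # Map phase: each input record yields zero or more (key, value) pairs.
        pairs = [kv for rec in records for kv in mapper(rec)]
        # Shuffle/sort phase: group the intermediate pairs by key.
        pairs.sort(key=itemgetter(0))
        # Reduce phase: one reducer call per key, with all values for that key.
        return {key: reducer(key, [v for _, v in grp])
                for key, grp in groupby(pairs, key=itemgetter(0))}

    # Example: count items per user from {Ui, Ij, value} history records.
    records = [("U1", "I1", 1), ("U1", "I2", 1), ("U2", "I1", 1)]
    counts = run_mapreduce(records,
                           mapper=lambda r: [(r[0], 1)],
                           reducer=lambda k, vs: sum(vs))
    print(counts)   # {'U1': 2, 'U2': 1}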

    2 MapReduce based computation of the diffusion method

    2.1 Vectorization of the diffusion method

The diffusion result is calculated by summation as shown in Eq.(1). Such a formulation is not convenient for parallel processing. Vectorizing the diffusion process facilitates the parallel computation.

In the first step of diffusion, resources are transferred from items to users. The transition matrix is

$$T^{(1)} = A^{\mathrm{T}} ./ K_I \qquad (3)$$

where Aᵀ denotes the transpose of A, and "./" performs right-array division, dividing each element of Aᵀ by the corresponding element of KI. KI is an n×m matrix in which all elements in row j are initialized to k(Ij):

$$K_I=\begin{bmatrix} k(I_1) & k(I_1) & \cdots & k(I_1)\\ k(I_2) & k(I_2) & \cdots & k(I_2)\\ \vdots & \vdots & \ddots & \vdots\\ k(I_n) & k(I_n) & \cdots & k(I_n) \end{bmatrix} \qquad (4)$$

After the first step, the resource vector on Ui is r(1)(Ui) = r(0)(Ui)·T(1).

Similarly, the transition matrix in the second step of diffusion is

$$T^{(2)} = A ./ K_U \qquad (5)$$

where KU is an m×n matrix in which all elements in row i are initialized to k(Ui):

$$K_U=\begin{bmatrix} k(U_1) & k(U_1) & \cdots & k(U_1)\\ k(U_2) & k(U_2) & \cdots & k(U_2)\\ \vdots & \vdots & \ddots & \vdots\\ k(U_m) & k(U_m) & \cdots & k(U_m) \end{bmatrix} \qquad (6)$$

Finally, the resources on each item are given by Eq.(7), which is the vectorization of Eq.(1):

$$r^{(2)}(U_i) = r^{(1)}(U_i)\cdot T^{(2)} = r^{(0)}(U_i)\cdot T^{(1)}\cdot T^{(2)} \qquad (7)$$

The computation complexity of a recommendation process for a target user is O(mn) based on Eq.(1) or Eq.(7). Online computation is therefore impossible when the data set is large, so recommendations for each user should be computed offline. In order to compute the diffusion results of all users, Eq.(7) can be extended to the cascade matrix multiplication format shown in Eq.(8), where F is the diffusion result of all users in matrix form.

$$F = A\cdot T^{(1)}\cdot T^{(2)} \qquad (8)$$

Therefore a cascade matrix multiplication approach is adopted to perform the diffusion computation task. In the next section, the MapReduce framework will be used to design an efficient matrix multiplication algorithm.
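As a sanity check of the vectorization, the following dense NumPy sketch (our illustration only; the toy adjacency matrix is assumed, since Fig.1 does not fully specify the links, and the real computation is performed on MapReduce) evaluates Eqs.(3), (5) and (8) directly:

    import numpy as np

    # Assumed 3-user x 5-item adjacency matrix in the spirit of Fig.1.
    A = np.array([[1, 1, 0, 1, 0],
                  [0, 1, 1, 0, 1],
                  [1, 0, 1, 1, 0]], dtype=float)

    k_I = A.sum(axis=0)          # item degrees k(I_j)
    k_U = A.sum(axis=1)          # user degrees k(U_i)

    T1 = A.T / k_I[:, None]      # Eq.(3): T(1) = A^T ./ K_I  (items -> users)
    T2 = A / k_U[:, None]        # Eq.(5): T(2) = A ./ K_U    (users -> items)

    F = A @ T1 @ T2              # Eq.(8): row i is the final vector r(2)(U_i)
    print(F.round(3))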

    2.2 Parallelism of matrix multiplication

Eq.(8) entails three matrix-matrix multiplications, so the core of the diffusion method is to do the matrix multiplication in parallel. In this section, the single tuple method (STM)[13] based on the MapReduce framework is introduced first. Then a dynamic vector based method (DVBM) is proposed, which can decrease the CPU and I/O cost effectively. A further improvement is made to DVBM by block partitioning. In addition, the order of matrix multiplication is also taken into consideration.

    2.2.1 Single Tuple Method

Suppose A is an m×t matrix and B is a t×n matrix. Then the elements of matrix C = A·B can be calculated as

$$C(i,j)=\sum_{k=1}^{t}A(i,k)\,B(k,j) \qquad (9)$$

So an element of matrix C is obtained as the inner product of the corresponding row vector in matrix A and column vector in matrix B, as shown in Fig.2.

    Fig.2 Matrix multiplication in STM

In recommender systems, the user history records are usually in a three-tuple format {Ui, Ij, value}, which is the case in both the MovieLens and Jester data sets. Based on this input format, the MapReduce process of STM is shown in Table 1.

In the map procedure, {i,k,A(i,k)} and {k,j,B(k,j)} denote the original input format of matrix A and matrix B respectively, and 〈{i,j},{0,k,A(i,k)}〉 and 〈{i,j},{1,k,B(k,j)}〉 denote the intermediate key/value pairs used for sort and shuffle. Each 〈{i,j},{0,k,A(i,k)}〉 is emitted n times, while each 〈{i,j},{1,k,B(k,j)}〉 is emitted m times. The 0 in {0,k,A(i,k)} indicates that the pair comes from matrix A, and the 1 in {1,k,B(k,j)} indicates that it comes from matrix B. In the reduce procedure, C(i,j) is obtained as described in Eq.(9).

    Table 1 MapReduce algorithm of STM
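The following standalone Python sketch is our reconstruction of the Table 1 logic (a real STM job would emit the intermediate pairs to Hadoop's shuffle rather than build an in-memory dictionary):

    from collections import defaultdict

    def stm_multiply(A_tuples, B_tuples, m, n):
        """C = A.B from three-tuple inputs, mimicking STM's map and reduce.

        A_tuples: iterable of (i, k, A(i,k)); B_tuples: iterable of (k, j, B(k,j)).
        """
        groups = defaultdict(list)
        # Map: each A-element is emitted n times and each B-element m times,
        # keyed by the output cell {i, j} exactly as in Table 1.
        for i, k, a in A_tuples:
            for j in range(n):
                groups[(i, j)].append((0, k, a))   # tag 0: from matrix A
        for k, j, b in B_tuples:
            for i in range(m):
                groups[(i, j)].append((1, k, b))   # tag 1: from matrix B
        # Reduce: inner product over the shared index k, per Eq.(9).
        C = {}
        for (i, j), vals in groups.items():
            a_row = {k: v for tag, k, v in vals if tag == 0}
            b_col = {k: v for tag, k, v in vals if tag == 1}
            C[(i, j)] = sum(a * b_col.get(k, 0) for k, a in a_row.items())
        return C

    # 2x2 example: A = [[1,2],[0,3]], B = [[4,0],[5,6]]
    C = stm_multiply([(0,0,1), (0,1,2), (1,1,3)],
                     [(0,0,4), (0,1,0), (1,0,5), (1,1,6)], 2, 2)
    print(C[(0,0)], C[(0,1)], C[(1,0)], C[(1,1)])   # 14 12 15 18

Note that the dictionary `groups` materializes all (d(A)+d(B))·mtn intermediate pairs, which is exactly the cost criticized in the next subsection.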

    2.2.2 Dynamic vector based method

However, when STM is applied to the MovieLens-1M data set, it cannot be completed in a reasonable time. It is time-consuming because the elements of the matrices are read and written one by one. There are d(A)·mt + d(B)·tn elements to read and (d(A)+d(B))·mtn elements to write in the map procedure, where d(·) denotes the density of the corresponding matrix. That also means there are (d(A)+d(B))·mtn intermediate elements to sort and shuffle, which is both CPU and I/O consuming.

If the elements could be read and written in batches, the CPU and I/O cost would decrease. Based on this idea, a dynamic vector based method is proposed to decrease the frequency of reads and writes, so as to reduce the CPU and I/O cost.

The elements in the same row or column are compressed into a single vector, and the vectors are stored in different formats according to their density. Take a row vector as an example. If the density is larger than β, the vector is stored in an array format; otherwise, it is stored in a key/value format, as shown in Eq.(10). β is a parameter that controls the density threshold; in the following experiments, β = 50%. Let i be the row index. Ac(i,:) denotes the compressed format of row i in matrix A. The second element in a vector represents the compression type: 0 indicates the array format, and 1 indicates the key/value format. kv (1≤v≤w) denotes the indices of the nonzero elements in the row.

$$A_c(i,:)=\begin{cases}\{i,\,0,\,A(i,1),\,A(i,2),\,\ldots,\,A(i,t)\} & \text{if } d(A(i,:))>\beta\\[2pt] \{i,\,1,\,k_1,\,A(i,k_1),\,\ldots,\,k_w,\,A(i,k_w)\} & \text{otherwise}\end{cases} \qquad (10)$$
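A small sketch of this compression scheme follows (our illustration with zero-based indices; in the paper the vectors are stored as HDFS records):

    BETA = 0.5   # density threshold beta (50% in the experiments)

    def compress_row(i, row):
        """Pack row i into the dynamic vector format of Eq.(10)."""
        nonzero = [(k, v) for k, v in enumerate(row) if v != 0]
        if len(nonzero) / len(row) > BETA:
            return [i, 0] + list(row)                        # type 0: array format
        return [i, 1] + [x for kv in nonzero for x in kv]    # type 1: key/value pairs

    def decompress_row(vec, t):
        """Unpack a dynamic vector back into a dense row of length t."""
        i, vtype, payload = vec[0], vec[1], vec[2:]
        if vtype == 0:
            return i, list(payload)
        row = [0] * t
        for k, v in zip(payload[0::2], payload[1::2]):
            row[k] = v
        return i, row

    print(compress_row(3, [0, 0, 7, 0, 0, 9]))   # sparse -> [3, 1, 2, 7, 5, 9]
    print(compress_row(3, [1, 2, 3, 0]))         # dense  -> [3, 0, 1, 2, 3, 0]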

DVBM contains two MapReduce jobs, as shown in Table 2. The aim of the first MapReduce job is to compress the matrix from the original three-tuple format to a vector format. In the map procedure, each element is emitted according to its row number or column number. In the reduce procedure, the reducer collects the elements that belong to the same row or column and compresses them into a row or column vector based on Eq.(10).

Table 2 MapReduce algorithm of DVBM

In the second job, the compressed vectors are taken as input to implement the matrix multiplication. In the map procedure, Ac(i,:) and Bc(:,j) are read and mapped to 〈{i,j},{0,Ac(i,:)}〉 and 〈{i,j},{1,Bc(:,j)}〉. Each 〈{i,j},{0,Ac(i,:)}〉 is emitted n times, while each 〈{i,j},{1,Bc(:,j)}〉 is emitted m times. In the reduce procedure, Ac(i,:) and Bc(:,j) are decompressed to A(i,:) and B(:,j), and C(i,j) is obtained as described in Eq.(9).

    2.2.3 Dynamic vector based method with block

In DVBM, each reducer only involves two vectors, which does not make full use of the computing power of each worker. More vectors can be allocated to a reducer, which helps decrease the copy frequency in the map procedure. When the copy frequency is cut down, the CPU and I/O cost both decrease.

Based on the above idea, matrix A is partitioned into several sub-matrices by row, and matrix B is partitioned into several sub-matrices by column, as shown in Fig.3. Each sub-matrix is called a block. (The term block is also used in file systems, where it refers to a unit of storage space; that is not the sense used in this paper.) The rows and columns in the same pair of blocks can be mapped to a single reducer, and the elements of matrix C calculated in batch.

    Fig.3 Matrix multiplication in DVBMwB

Let S be the block size, BI denote the block index, and NB denote the number of blocks. Then the relations between row/column index and block index can be expressed by Eq.(11) and Eq.(12), and the number of blocks can be calculated by Eq.(13) and Eq.(14).

BI(Ac(i,:)) = i/S    (11)

BI(Bc(:,j)) = j/S    (12)

NB(A) = (m+1)/S,  S > 1, m > 0    (13)

NB(B) = (n+1)/S,  S > 1, n > 0    (14)
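A minimal sketch of this mapping (reading the divisions in Eqs.(11)-(14) as integer division over zero-based indices, which is our assumption):

    def block_index(idx, S):
        # Eqs.(11)/(12): the block that a row (or column) vector belongs to.
        return idx // S

    def num_blocks(dim, S):
        # Eqs.(13)/(14): number of blocks along a dimension of size dim (S > 1).
        return (dim + 1) // S

    S = 100
    print(block_index(250, S))   # row 250 of A falls in block 2
    print(num_blocks(999, S))    # a 999-row matrix yields 10 blocks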

The dynamic vector based method with block (DVBMwB) modifies only the second job of DVBM, as shown in Table 3. In the map procedure, each Ac(i,:) is emitted NB(B) times, while each Bc(:,j) is emitted NB(A) times, which saves substantial copy and I/O cost compared to DVBM. In the reduce procedure, Ac(i,:) and Bc(:,j) are decompressed to A(i,:) and B(:,j) in batch, and C(i,j) is obtained in the same way as in DVBM.

    Table 3 MapReduce algorithm of DVBMwB

    2.2.4 The order of matrix multiplication

The sequence in which the multiplications are done also affects the performance of Eq.(8). An in-order calculation sequence yields O(|U|²|I|) time complexity, whereas a reversed order leads to O(|U||I|²). The decision should be based on the relative sizes of the user set and item set: when the number of users is larger, the O(|U||I|²) order is obviously more favorable, and vice versa.
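To see the asymmetry numerically, here is a back-of-the-envelope dense flop count for the two orders (our illustration; the MovieLens-1M dimensions are approximate). With A of size m×n, T(1) of size n×m and T(2) of size m×n:

    def flops_in_order(m, n):
        # (A.T1).T2: an (m x n)(n x m) product, then an (m x m)(m x n) product.
        return m * n * m + m * m * n          # O(|U|^2 |I|)

    def flops_reversed(m, n):
        # A.(T1.T2): an (n x m)(m x n) product, then an (m x n)(n x n) product.
        return n * m * n + m * n * n          # O(|U| |I|^2)

    m, n = 6040, 3952   # approximate MovieLens-1M users x items
    print(flops_in_order(m, n) / flops_reversed(m, n))   # ~1.53: reversed order wins

The ratio reduces to m/n, so whichever of the two dimensions is larger should be kept out of the squared term.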

    3 Experiment

    3.1 Experimental environment

A Hadoop cluster with 3 PCs is constructed. Each machine has a 4-core 2.40GHz Xeon(R) processor and 4GB of memory. The Hadoop version is 1.2.1.

    3.2 Data description

In this study, two representative data sets, MovieLens-1M and Jester, are used to evaluate the proposed MapReduce algorithm. The MovieLens data sets are collected from the MovieLens web site; Ken Goldberg from UC Berkeley released the Jester data set from the Jester Joke Recommender System. The features of each data set, including the number of users, number of items, number of ratings and density, are shown in Table 4. These two data sets have quite different densities, which provides a more comprehensive verification of our methods.

    Table 4 Features of the data sets

    3.3 Comparison and Analysis

The performance of STM, DVBM and DVBMwB is compared, and the compression ratio of the dynamic vector is also measured. After that, the performance of DVBMwB is evaluated with different block sizes on the MovieLens-1M and Jester data sets, and experiments are conducted to verify that the order of matrix multiplication is very important.

3.3.1 STM vs DVBM vs DVBMwB

As STM cannot complete the diffusion-based recommendation for the MovieLens-1M and Jester data sets in a reasonable time, several easier tasks of different user/item dimensions and matrix densities are generated to compare the performance of STM, DVBM and DVBMwB. The user/item dimensions include 200×200 and 400×400, while the matrix densities are 5% and 10%. The results are shown in Table 5. The "intermediate matrix density" is the density of the matrix produced by the first step of diffusion; it is usually much denser than the original input matrix, which leads to more time consumption in the second step of diffusion. The block size of DVBMwB is 100.

It is seen that DVBM and DVBMwB perform much better than STM. The main factor that affects the CPU and I/O cost is the read/write frequency. On one hand, decreasing the read/write frequency improves the I/O efficiency, since too many read/write operations lead to additional communication overhead in the MapReduce framework. On the other hand, as the number of intermediate elements is equal to the write frequency of the map procedure, less CPU time is required for sort and shuffle if the write frequency goes down. The frequency difference between STM, DVBM and DVBMwB mainly comes from the map procedure of the matrix multiplication job. As analyzed in Section 2.2.1, the read and write frequencies of the map procedure in STM are d(A)·mt + d(B)·tn and (d(A)+d(B))·mtn respectively. In DVBM, the read and write frequencies of the first job are both d(A)·mt + d(B)·tn, but in the second job the read frequency is reduced to m+n and the write frequency to 2mn. The difference between DVBM and DVBMwB comes only from the write frequency of the second job's map procedure, which is further reduced to NB(B)·m + NB(A)·n. Although it takes two jobs to complete the matrix multiplication in DVBM and DVBMwB, the total read/write frequencies are cut down compared with STM, and the number of intermediate elements also decreases dramatically, which saves a large amount of CPU time. Besides, the dynamic vector format eliminates some redundant information compared with the original three-tuple format, which further reduces the I/O cost. So it is quite worthwhile to take an extra step to compress the matrices into vectors. The performance of STM in Table 5 confirms the above analysis. Meanwhile, DVBMwB takes much less time than DVBM, which indicates that block partitioning helps improve the performance effectively.

Table 5 Comparison of STM, DVBM and DVBMwB

    3.3.2 Compression ability of dynamic vector

The sizes of the MovieLens-1M and Jester data sets before and after compression are illustrated in Table 6, together with the compression ratio. It is obvious that the dynamic vector proposed in this paper can compress the data effectively.

    Table 6 Compression results on MovieLens-1M and Jester

Compression experiments are also conducted on two manually generated data sets of different densities. The dimensions of the generated data sets are 1000×1000 and 2000×2000, and the density ranges from 10% to 100%. The relation between matrix density and compression ratio is illustrated in Fig.4. When the density exceeds 50%, the compression ratio becomes much better (i.e., smaller): the denser a data set is, the better its compression ratio.

    Fig.4 Relation between density and compression ratio

    3.3.3 Impact of block size

In DVBMwB, as the block size increases, the copy cost and I/O cost decrease. However, the block size cannot be too large because of the memory limitation of a single computer. Experiments are conducted on the MovieLens-1M and Jester data sets with different block sizes; the results are shown in Table 7. As the number of users in both MovieLens-1M and Jester is larger than the number of items, the last two matrices are multiplied first to get better performance.

The time used for matrix compression is the same for all block sizes, so the differences come only from the matrix multiplication procedure. When the block size is small, the copy and I/O cost drive up the completion time. As the block size gets larger, the calculation does not keep speeding up; it slows down or even gets worse because of memory shortage. So it is necessary to determine a proper block size based on the servers and data sets under production workloads.

Table 7 DVBMwB with different block sizes on MovieLens-1M and Jester

3.3.4 Impact of the order of matrix multiplication

In the previous sections, the matrix multiplication is done in reversed order. In order to show the influence of the order of matrix multiplication, the order is changed on the MovieLens-1M data set, with a block size of 100. The time used for matrix compression and matrix multiplication both rise when the diffusion computation is done in order, as shown in Fig.5 and Fig.6. Besides, the intermediate matrix density also increases from 72.44% to 95.8% in the experiment.

    Fig.5 Matrix compression time on MovieLens-1M

    4 Conclusion

In this study, a parallel version of a classic diffusion algorithm is proposed on the MapReduce framework. The diffusion method is transformed into the cascade matrix multiplication format so that it can be implemented in the MapReduce computing model.

    Fig.6 Matrix multiplication time on MovieLens-1M

A novel dynamic vector based matrix multiplication algorithm is designed on the MapReduce framework, which improves the performance effectively and can also be applied to other matrix multiplication scenarios. Comprehensive experiments are conducted to verify the effectiveness of our method. The block size and the order of matrix multiplication both influence the time cost of the diffusion computation.

In the future, our matrix multiplication method will be extended to other graph-based scenarios on the MapReduce framework. Besides, other large-scale computation frameworks, such as GraphChi[16], will also be studied.

References

[1] Bobadilla J, Ortega F, Hernando A, et al. Recommender systems survey. Knowledge-Based Systems, 2013, 46(1): 109-132

[2] Linden G, Smith B, York J. Amazon.com recommendations: item-to-item collaborative filtering. IEEE Internet Computing, 2003, 7(1): 76-80

[3] Bennett J, Lanning S. The Netflix Prize. In: Proceedings of the 2007 KDD Cup and Workshop, California, USA, 2007. 35

[4] Shi Y, Larson M, Hanjalic A. Collaborative filtering beyond the user-item matrix: a survey of the state of the art and future challenges. ACM Computing Surveys, 2014, 47(1): 3

[5] Huang Z, Chen H, Zeng D. Applying associative retrieval techniques to alleviate the sparsity problem in collaborative filtering. ACM Transactions on Information Systems, 2004, 22(1): 116-142

[6] Zhou T, Ren J, Medo M, et al. Bipartite network projection and personal recommendation. Physical Review E, 2007, 76(4): 70-80

[7] Zhang Y C, Medo M, Ren J, et al. Recommendation model based on opinion diffusion. Europhysics Letters, 2007, 80(6): 417-429

[8] Long G, Zhang G, Lu J, et al. Scaling-up item-based collaborative filtering recommendation algorithm based on Hadoop. In: Proceedings of the 2011 IEEE World Congress on Services, Washington, USA, 2011. 490-497

[9] Zhao Z D, Shang M S. User-based collaborative-filtering recommendation algorithms on Hadoop. In: Proceedings of the Third International Conference on Knowledge Discovery and Data Mining, Phuket, Thailand, 2010. 478-481

[10] Shvachko K, Kuang H, Radia S, et al. The Hadoop distributed file system. In: Proceedings of the 26th Symposium on Mass Storage Systems and Technologies, Incline Village, USA, 2010. 1-10

[11] Schelter S, Boden C, Markl V. Scalable similarity-based neighborhood methods with MapReduce. In: Proceedings of the 6th ACM Conference on Recommender Systems, New York, USA, 2012. 163-170

[12] Dean J, Ghemawat S. MapReduce: simplified data processing on large clusters. Communications of the ACM, 2008, 51(1): 107-113

[13] Li L N, Li C P, Chen H, et al. MapReduce-based SimRank computation and its application in social recommender system. In: Proceedings of the 2013 IEEE International Congress on Big Data, Santa Clara, USA, 2013. 133-140

[14] Zheng J H, Zhang L J, Zhu R, et al. Parallel matrix multiplication algorithm based on vector linear combination using MapReduce. In: Proceedings of the IEEE 9th World Congress on Services, Santa Clara, USA, 2013. 193-200

[15] Zhang Z K, Zhou T, Zhang Y. Personalized recommendation via integrated diffusion on user-item-tag tripartite graphs. Physica A: Statistical Mechanics and Its Applications, 2010, 389(1): 179-186

[16] Kyrola A, Blelloch G, Guestrin C. GraphChi: large-scale graph computation on just a PC. In: Proceedings of the 10th USENIX Symposium on Operating Systems Design and Implementation, California, USA, 2012. 31-46

Peng Fei, born in 1988. He is currently pursuing his Ph.D. degree at the National Network New Media Engineering Research Center, Institute of Acoustics, Chinese Academy of Sciences, and the University of Chinese Academy of Sciences. He received his B.S. degree from the Department of Electronics and Information Engineering at Huazhong University of Science and Technology in 2010. His research interests include new media technologies and recommender systems.

    10.3772/j.issn.1006-6748.2016.03.008

① Sponsored by the National High Technology Research and Development Program of China (No. 2011AA01A102) and the Key Program of the Chinese Academy of Sciences (No. KGZD-EW-103-2).
