
    Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning

IEEE/CAA Journal of Automatica Sinica, 2021, Issue 2

    Xin Luo, Senior Member, IEEE, Wen Qin, Ani Dong, Khaled Sedraoui, and MengChu Zhou, Fellow, IEEE

Abstract—A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. To address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into its training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that, owing to the MPSGD algorithm, an MLF model outperforms the existing state-of-the-art ones in both computational efficiency and scalability.

    I. INTRODUCTION

BIG data-related industrial applications like recommender systems (RSs) [1]–[5] have a major influence on our daily life. An RS commonly relies on a high-dimensional and sparse (HiDS) matrix that quantifies incomplete relationships among its users and items [6]–[11]. Despite its extreme sparsity and high dimensionality, an HiDS matrix contains rich knowledge regarding various patterns [6]–[11] that are vital for accurate recommendations. A latent factor (LF) model has proven to be highly efficient in extracting such knowledge from an HiDS matrix [6]–[11].

    In general, an LF model works as follows:

    1) Mapping the involved users and items into the same LF space;

    2) Training desired LF according to the known data of a target HiDS matrix only; and

    3) Estimating the target matrix’s unknown data based on the updated LF for generating high-quality recommendations.

Note that the achieved LFs can precisely represent each user and item's characteristics hidden in an HiDS matrix's observed data [6]–[8]. Hence, an LF model is highly efficient in predicting unobserved user-item preferences in an RS. Moreover, it achieves a fine balance among computational efficiency, storage cost, and representative learning ability on an HiDS matrix [10]–[16]. Therefore, it is also widely adopted in other HiDS data-related areas like network representation [17], Web-service QoS analysis [3], [4], [18], user track analysis [19], and bio-network analysis [12].

Owing to its efficiency in addressing HiDS data [1]–[12], an LF model attracts much attention from researchers. A pyramid of sophisticated LF models have been proposed, including a biased regularized incremental simultaneous model [20], a singular value decomposition plus-plus model [21], a probabilistic model [13], a non-negative LF model [6], [22]–[27], and a graph-regularized Lp-smooth non-negative matrix factorization model [28]. When constructing an LF model, a stochastic gradient descent (SGD) algorithm is often adopted as the learning algorithm, owing to its great efficiency in building a learning model via serial but fast-converging training [14], [20], [21]. Nevertheless, as an RS grows, its corresponding HiDS matrix explodes. For instance, Taobao contains billions of users and items. Although the data density of the corresponding HiDS matrix can be extremely low due to its extremely high dimension, it still has a huge amount of known data. When factorizing it [21]–[28], a standard SGD algorithm suffers from the following defects:

    1) It serially traverses its known data in each training iteration, which can result in considerable time cost when a target HiDS matrix is large; and

    2) It can take many iterations to make an LF model converge to a steady solution.

    Based on the above analyses, we see that the key to a highly scalable SGD-based LF model is also two-fold: 1) reducing time cost per iteration by replacing its serial data traversing procedure with a parallel one, i.e., implementing a parallel SGD algorithm, and 2) reducing iterations to make a model converge, i.e., accelerating its convergence rate.

Considering a parallel mechanism, it should be noted that an SGD algorithm is iterative, taking multiple iterations to train an LF model. In each iteration, it accomplishes the following tasks (a minimal sketch of this serial process follows the list):

1) Traversing the observed data of a target HiDS matrix, picking up user-item ratings one-by-one;

    2) Computing the stochastic gradient of the instant loss on the active rating with its connected user/item LF;

    3) Updating these user/item LF by moving them along the opposite direction of the achieved stochastic gradient with a pre-defined step size; and

    4) Repeating steps 1)–3) until completing traversing a target HiDS matrix’s known data.
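To make this serial procedure concrete, the following minimal Python sketch performs one SGD pass over the known ratings of an HiDS matrix; the names (ratings, P, Q, eta, lam) and the inclusion of a regularization term are illustrative assumptions rather than the paper's exact formulation.

import numpy as np

def sgd_epoch(ratings, P, Q, eta=0.01, lam=0.005):
    # One serial pass: ratings is a list of (m, n, r) triples from the known data set Λ.
    for m, n, r in ratings:                   # step 1): traverse known entries one-by-one
        err = r - P[m] @ Q[n]                 # instant loss on the active rating
        grad_p = -err * Q[n] + lam * P[m]     # step 2): stochastic gradient w.r.t. user LFs
        grad_q = -err * P[m] + lam * Q[n]     # step 2): stochastic gradient w.r.t. item LFs
        P[m] = P[m] - eta * grad_p            # step 3): move against the gradient
        Q[n] = Q[n] - eta * grad_q
    return P, Q

Because each update reads the latest values of P and Q, the updates within one iteration depend on each other, which is exactly the dependence that the parallel schemes discussed next must break.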

From the above analyses, we clearly see that an SGD algorithm makes the desired LFs depend on each other during a training iteration, and the learning task of each iteration also depends on those of the previously completed ones. To parallelize such a “single-pass” algorithm, researchers [29], [30] have proposed to decompose the learning task of each iteration such that the dependence of parameter updates can be eliminated with care.

A Hogwild! algorithm [29] splits the known data of an HiDS matrix into multiple subsets, and then dispatches them to multiple SGD-based training threads. Note that all training threads maintain a single, shared group of LFs. Thus, Hogwild! simply ignores the risk that a single LF can be updated by multiple training threads simultaneously, which leads to partial loss of the update information. However, as proven in [29], such information loss barely affects its convergence.

On the other hand, a distributed stochastic gradient descent (DSGD) algorithm [30] splits a target HiDS matrix into J segmentations, where each one consists of J data blocks with J being a positive integer. It makes the user and item LFs connected with different blocks in the same segmentation not affect each other's updates in a single iteration. Thus, when performing matrix factorization [31]–[39], a DSGD algorithm's parallelization is implemented in the following way: learning tasks on the J segmentations are performed serially, where the learning task on the jth segmentation is split into J subtasks that can be done in parallel.
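The splitting itself is easy to state in code. The sketch below partitions the row (user) and column (item) index ranges into J groups and forms J segmentations, where segmentation s consists of the J blocks (i, (i + s) mod J); blocks within one segmentation share no rows or columns, so they can be processed by independent threads. The function name and data layout are illustrative, not taken from [30].

def dsgd_segmentations(num_users, num_items, J):
    # Partition rows and columns into J contiguous groups each.
    row_groups = [range(i * num_users // J, (i + 1) * num_users // J) for i in range(J)]
    col_groups = [range(j * num_items // J, (j + 1) * num_items // J) for j in range(J)]
    segmentations = []
    for s in range(J):
        # Segmentation s: block i pairs row group i with column group (i + s) mod J,
        # so no two blocks in the segmentation touch the same row or column.
        segmentations.append([(row_groups[i], col_groups[(i + s) % J]) for i in range(J)])
    return segmentations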

An alternative stochastic gradient descent (ASGD) algorithm [40] decouples the update dependence among different LF categories to implement its parallelization. For instance, to build an LF-based model for an RS, it splits the training task of each iteration into two sub-tasks, where one updates the user LFs while the other updates the item LFs with SGD. As discussed in [40], the coupling dependences among different LF categories are eliminated with such a design, thereby making both subtasks dividable without any information loss.

    The parallel SGD algorithms mentioned above can implement a parallelized training process as well as maintain model performance. However, they cannot accelerate an LF model’s convergence rate, i.e., they consume as many training iterations as a standard SGD algorithm does despite their parallelization mechanisms. In other words, they all ignore the second factor of building a highly-scalable SGD-based LF model, i.e., accelerating its convergence rate.

From this point of view, this work aims at implementing a parallel SGD algorithm with a faster convergence rate than existing ones. To do so, we incorporate a momentum method into a DSGD algorithm to achieve a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm. Note that a momentum method is initially designed for batch gradient descent algorithms [34], [35]. Nonetheless, as discussed in [33], [34], it can be adapted to SGD by alternating the learning direction of each single LF according to its stochastic gradients achieved in consecutive learning updates. The reason why we choose a DSGD algorithm as the base algorithm is that its parallelization is implemented based on data splitting instead of reformulating SGD-based learning rules. Thus, it is expected to be as compatible with a momentum method as a standard SGD algorithm is [33]. The main contributions of this study include:

    1) An MPSGD algorithm that achieves faster convergence than existing parallel SGD algorithms when building an LF model for an RS;

    2) Algorithm design and analysis for an MPSGD-based LF model; and

    3) Empirical studies on four HiDS matrices from industrial applications.

Section II gives preliminaries. Section III presents the methods. Section IV provides the experimental results. Finally, Section V concludes this paper.

    II. PRELIMINARIES

    An LF model takes an HiDS matrix as its fundamental input, as defined in [3], [16].

Definition 1: Given two entity sets M and N, a matrix R^{|M|×|N|} has each of its entries r_{m,n} describe the connection between m ∈ M and n ∈ N. Let Λ and Γ respectively denote its known and unknown data sets; R is an HiDS matrix if |Λ| ≪ |Γ|.

    Note that the operator |·| computes the cardinality of an enclosed set. Thus, we define an LF model as in [3], [16].

Definition 2: Given R and Λ, an LF model builds a rank-d approximation R̂ = PQ^T to R, with P^{|M|×d} and Q^{|N|×d} being LF matrices and d ≪ min{|M|, |N|}.

To obtain P and Q, an objective function distinguishing R and R̂ is desired. Note that to achieve the highest efficiency, it should be defined on Λ only. With the Euclidean distance [16], it is formulated as follows.
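A standard regularized form of this objective, consistent with the regularization coefficients λ_P and λ_Q used in Section IV and with the per-entry operations in Algorithm 2, together with the corresponding SGD update at the tth update point, can be sketched as follows (a reconstruction under these assumptions rather than the paper's exact equations (1)–(3)):

ε(P, Q) = Σ_{r_{m,n} ∈ Λ} ε_{m,n},   ε_{m,n} = (r_{m,n} − Σ_{k=1}^{d} p_{m,k} q_{n,k})² + λ_P Σ_{k=1}^{d} p_{m,k}² + λ_Q Σ_{k=1}^{d} q_{n,k}²

p_{m,k}^{(t)} ← p_{m,k}^{(t−1)} − η ∂ε_{m,n}^{(t−1)}/∂p_{m,k}^{(t−1)},   q_{n,k}^{(t)} ← q_{n,k}^{(t−1)} − η ∂ε_{m,n}^{(t−1)}/∂q_{n,k}^{(t−1)}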

Note that in (3), t denotes the tth update point and η denotes the learning rate. Following the Robbins-Siegmund theorem [36], (3) ensures a solution to the bilinear problem (2) with a proper η.

    III. PROPOSED METHODS

    A. DSGD Algorithm

As mentioned before, a DSGD algorithm's parallelization relies on data segmentation. For instance, as depicted in Fig. 1, it splits the rating matrix into three segmentations, i.e., S1–S3. Each segmentation consists of three data blocks, e.g., Λ11, Λ22, and Λ33 belong to S1 as in Fig. 1. As proven in [30], in each iteration the LF updates inside a block do not affect those of other blocks from the same segmentation, because the blocks have no rows or columns in common, as shown in Fig. 1. Considering S1 in Fig. 1, we set three independent training threads, where the first traverses Λ11, the second Λ22, and the third Λ33. Thus, these training threads can run simultaneously.

    Fig.1. Splitting a rating matrix to achieve segmentations and blocks.

    B. Data Rearrangement Strategy

However, note that different data segmentations do have rows and columns in common, as depicted in Fig. 1. Therefore, each training iteration is actually divided into J tasks, where J is the segmentation count. These J tasks must be done sequentially, while each task can be further divided into J subtasks that can be done in parallel, as depicted in Fig. 2. Note that all J subtasks in a segmentation are executed synchronously, which causes bucket effects, i.e., the time cost of addressing each segmentation is decided by that of its largest subtask. From this perspective, when the data distribution is imbalanced, as in an HiDS matrix, a DSGD algorithm can only speed up each training iteration in a limited way. For example, the unevenly distributed data of the MovieLens 20M (ML20M) matrix is depicted in Fig. 3(a), where Λ11, Λ22, Λ33, and Λ44 are independent blocks in the first data segmentation while most of its data lie in Λ11. Thus, for threads n1–n4 handling Λ11–Λ44, their time cost is decided by the cost of n1.

    Fig.2. Handling segmentations and blocks in an HiDS matrix with DSGD.

To address this issue, the data in an HiDS matrix should be rearranged to balance their distribution, making a DSGD algorithm achieve satisfactory speedup [38]. As shown in Fig. 3(b), such a process is implemented by exchanging rows and columns in each segmentation at random [38].
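A minimal Python sketch of such a random rearrangement, assuming ratings are stored as (user, item, value) triples and that re-indexing users and items suffices to redistribute known entries across blocks:

import numpy as np

def rearrange(ratings, num_users, num_items, seed=0):
    # Randomly permute user and item indices so that known entries spread
    # more evenly over the J x J blocks before DSGD-style splitting.
    rng = np.random.default_rng(seed)
    row_map = rng.permutation(num_users)      # new row index for every user
    col_map = rng.permutation(num_items)      # new column index for every item
    return [(row_map[m], col_map[n], r) for m, n, r in ratings]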

C. MPSGD Algorithm

A momentum method is very efficient in accelerating the convergence rate of an SGD-based learning model [31], [33], [34]. It determines the learning update in the current iteration by building a linear combination of the current gradient and the learning update of the last iteration. With such a design, oscillations during a learning process decrease, making the resultant model converge faster. According to [33], with a momentum-incorporated SGD algorithm, the decision parameter θ of objective J(θ) is learnt as shown below.
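The update rule itself is presumably the common momentum form, with V_θ denoting the velocity, γ the balancing constant, and o^{(t)} the tth training instance (the symbols are explained in the paragraph after Fig. 3):

V_θ^{(t)} = γ V_θ^{(t−1)} + η ∇_θ J(θ^{(t−1)}; o^{(t)}),   θ^{(t)} = θ^{(t−1)} − V_θ^{(t)},   with V_θ^{(0)} = 0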

    Fig.3. Illustration of an MLF model.

In (4), V_θ^{(0)} denotes the initial value of the velocity, V_θ^{(t)} denotes the velocity at the tth iteration, γ denotes the balancing constant that tunes the effects of the current gradient and the previous update velocity, and o^{(t)} denotes the tth training instance.

To build an SGD-based LF model, the velocity vector is updated at each single training instance. We adopt a velocity parameter v^{(P)}_{m,k} for p_{m,k} to record its update velocity, and thus generate V^{(P)} of size |M|×d for P. According to (4), we update p_{m,k} for the single loss ε_{m,n} on training instance r_{m,n} as follows.
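Instantiating the momentum rule above for p_{m,k}, and matching the per-entry operations in Algorithm 2 below, the update presumably reads:

v^{(P)(t)}_{m,k} = γ v^{(P)(t−1)}_{m,k} + η ∂ε^{(t−1)}_{m,n}/∂p^{(t−1)}_{m,k},   p^{(t)}_{m,k} = p^{(t−1)}_{m,k} − v^{(P)(t)}_{m,k}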

Velocity constant γ in (5) adjusts the momentum effects. Similarly, we adopt a velocity parameter v^{(Q)}_{n,k} for q_{n,k} to record its update velocity, and thus V^{(Q)} of size |N|×d is adopted for Q. The momentum-incorporated update rules for q_{n,k} are given as follows.
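By symmetry with (5), a plausible form of these rules is:

v^{(Q)(t)}_{n,k} = γ v^{(Q)(t−1)}_{n,k} + η ∂ε^{(t−1)}_{m,n}/∂q^{(t−1)}_{n,k},   q^{(t)}_{n,k} = q^{(t−1)}_{n,k} − v^{(Q)(t)}_{n,k}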

As depicted in Figs. 3(c)–(d), with the momentum-incorporated learning rules presented in (5) and (6), the LF matrices P and Q can be trained with much fewer oscillations. Moreover, by integrating the principle of DSGD into the algorithm, we achieve an MPSGD algorithm that parallelizes the learning process of an LF model at a high convergence rate. After dividing Λ into J data segmentations with J × J data blocks, we obtain the following.
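The block-decomposed objective is presumably the original loss regrouped over the J × J blocks, i.e., something of the form

ε(P, Q) = Σ_{s=1}^{J} Σ_{j=1}^{J} Σ_{r_{m,n} ∈ Λ_{sj}} ε_{m,n}

with Λ_{sj} denoting the jth block of the sth segmentation; this grouping is what allows the J blocks of each segmentation to be optimized by independent threads.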

D. MLF Model

With an MPSGD algorithm, we design Algorithm 1 for an MPSGD-based LF (MLF) model. Note that algorithm MLF further depends on the procedure update shown in Algorithm 2. To implement its efficient parallelization, we first rearrange Λ according to the strategy mentioned in Section III-B to balance Λ, as in line 5 of Algorithm 1. Afterwards, the rearranged Λ is divided into J data segmentations with J × J data blocks, as in line 6 of Algorithm 1. Considering the ith data segmentation, its jth data block is assigned to the jth training thread, as shown in lines 8–10 of Algorithm 1. Then all J training threads are started simultaneously to execute procedure update, which addresses the parameter updates related to its assigned data block with MPSGD as discussed in Section III-C.
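A minimal Python sketch of this control flow is given below: segmentations are handled one after another, and the J blocks of the current segmentation are dispatched to J threads that each run the per-block procedure update. The names mpsgd_iteration and update_block are illustrative; a sketch of update_block is given after Algorithm 2.

from concurrent.futures import ThreadPoolExecutor

def mpsgd_iteration(segmentations, update_block, shared_state, J):
    # segmentations: list of J segmentations, each a list of J independent blocks.
    # update_block(block, *shared_state): momentum-incorporated SGD on one block.
    with ThreadPoolExecutor(max_workers=J) as pool:
        for blocks in segmentations:                   # segmentations run serially
            futures = [pool.submit(update_block, blk, *shared_state) for blk in blocks]
            for f in futures:                          # wait for all J threads (bucket effect)
                f.result()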

In Algorithm MLF, we introduce V^{(P)} and V^{(Q)} as auxiliary arrays for improving its computational efficiency. With J data segmentations and J training threads, each training thread actually takes 1/J of the whole data analysis task when the data distribution is balanced as shown in Fig. 3(b). Thus, its time cost on a single training thread in each iteration is as follows.
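Given that a thread then handles roughly |Λ|/J known entries per iteration and each entry costs Θ(d), as annotated in Algorithm 2, this per-thread cost is presumably of the order

Θ(|Λ| × d × J⁻¹)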

    Therefore, its time cost in t iterations is:
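Accordingly, over t training iterations the per-thread cost presumably scales as

Θ(|Λ| × d × t × J⁻¹)

which is linear in |Λ| with J⁻¹, d, and t as constant factors, as noted below.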

    * Note that all J training threads are started in parallel. Hence, the actual cost of this operation is decided by the thread consuming the most time.

Algorithm 2 Procedure update
1  for each r_{m,n} in Λ_{ij}  //Cost: ×|Λ|/J²
2    r̂_{m,n} = Σ_{k=1}^{d} p_{m,k} q_{n,k}  //Cost: Θ(d)
3    for k = 1 to d do  //Cost: ×d
4      v^{(P)(t)}_{m,k} = γ v^{(P)(t−1)}_{m,k} + η ∂ε^{(t−1)}_{m,n}/∂p^{(t−1)}_{m,k}  //Cost: Θ(1)
5      v^{(Q)(t)}_{n,k} = γ v^{(Q)(t−1)}_{n,k} + η ∂ε^{(t−1)}_{m,n}/∂q^{(t−1)}_{n,k}  //Cost: Θ(1)
6      p^{(t)}_{m,k} ← p^{(t−1)}_{m,k} − v^{(P)(t)}_{m,k}  //Cost: Θ(1)
7      q^{(t)}_{n,k} ← q^{(t−1)}_{n,k} − v^{(Q)(t)}_{n,k}  //Cost: Θ(1)
8    end for
9  end for
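A runnable Python counterpart of this procedure might look as follows; storing the velocities in arrays VP and VQ and folding a regularization term lam into the gradients are assumptions made for illustration.

import numpy as np

def update_block(block, P, Q, VP, VQ, eta=0.01, gamma=0.9, lam=0.005):
    # Momentum-incorporated SGD over one data block; block holds (m, n, r) triples.
    for m, n, r in block:
        err = r - P[m] @ Q[n]                  # instant prediction error on r_{m,n}
        grad_p = -err * Q[n] + lam * P[m]      # stochastic gradient w.r.t. p_{m,.}
        grad_q = -err * P[m] + lam * Q[n]      # stochastic gradient w.r.t. q_{n,.}
        VP[m] = gamma * VP[m] + eta * grad_p   # velocity update for P, cf. (5)
        VQ[n] = gamma * VQ[n] + eta * grad_q   # velocity update for Q, cf. (6)
        P[m] = P[m] - VP[m]                    # move against the accumulated velocity
        Q[n] = Q[n] - VQ[n]

This function can be passed as update_block to the mpsgd_iteration driver sketched earlier in this subsection.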

Note that J⁻¹, d and t in (8) and (9) are all positive constants, which results in a linear relationship between the computational cost of an MLF model and the number of known entries in the target HiDS matrix. However, owing to its parallel and fast-converging mechanism, J⁻¹ and t can be reduced significantly, thereby greatly reducing its time cost. Next, we validate its performance on several HiDS matrices generated by industrial applications.

    IV. EXPERIMENTAL RESULTS AND ANALYSIS

    A. General Settings

    1) Evaluation Protocol: When analyzing an HiDS matrix from real applications [1]–[5], [7]–[10], [16], [19], a major motivation is to predict its missing data for achieving a complete relationship among all involved entities. Hence, this paper selects missing data estimation of an HiDS matrix as the evaluation protocol. More specifically, given Λ, such a task makes a tested model predict data in Γ. The outcome is validated on a validation set Ψ disjoint with Λ. For validating the prediction accuracy of a model, the root mean squared error (RMSE) and mean absolute error (MAE) are chosen as the metrics [9]–[11], [16], [37]–[39]
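Their standard definitions on the validation set Ψ, with r̂_{m,n} denoting a model's prediction for r_{m,n}, are:

RMSE = √( (1/|Ψ|) Σ_{r_{m,n} ∈ Ψ} (r_{m,n} − r̂_{m,n})² ),   MAE = (1/|Ψ|) Σ_{r_{m,n} ∈ Ψ} |r_{m,n} − r̂_{m,n}|

Note that lower RMSE and MAE indicate higher prediction accuracy for missing data.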

    2) Datasets : Four HiDS matrices are adopted in the experiments, whose details are given below:

a) D1: Douban matrix. It is extracted from China's largest online music, book, and movie database, Douban [32]. It has 16 830 839 ratings in the range of [1, 5] by 129 490 users on 58 541 items. Its data density is 0.22% only.

b) D2: Dating Agency matrix. It is collected by the online dating site LibimSeTi, with 17 359 346 observed entries in the range of [1, 10]. It has 135 359 users and 168 791 profiles [11], [12]. Its data density is 0.076% only.

c) D3: MovieLens 20M matrix. It is collected by the MovieLens site maintained by the GroupLens research team [37]. It has 20 000 263 known entries in [0.5, 5] among 26 744 movies and 138 493 users. Its density is 0.54% only.

d) D4: Netflix matrix. It is collected by the Netflix business website. It contains 100 480 507 known entries in the range of [1, 5] by 2 649 429 users on 17 770 movies [11], [12]. Its density is 0.21% only.

    e) All matrices are high-dimensional, extremely sparse and collected by industrial applications. Meanwhile, their data distributions are all highly imbalanced. Hence, results on them are highly representative.

The known data set of each matrix is randomly divided into five equal-sized, disjoint subsets to comply with the five-fold cross-validation settings, i.e., each time we choose four subsets as the training set Λ to train a model that predicts the remaining subset, which serves as the testing set Ψ. This process is repeated five times sequentially to achieve the final results. The training process of a tested model terminates if i) the number of consumed iterations reaches the preset threshold, i.e., 1000, or ii) the error difference between two consecutive iterations is smaller than the preset threshold, i.e., 10⁻⁵.

    B. Comparison Results

    The following models are included in our experiments:

M1: A DSGD-based LF model proposed in [30]. Note that M1's parallelization is described in detail in Section III-A. However, it differs from an MLF model in two aspects: a) it does not adopt the data rearrangement illustrated in Fig. 3(b); and b) its learning algorithm is a standard SGD algorithm.

M2: An LF model adopting a modified DSGD scheme, where the data distribution of the target HiDS matrix is rearranged to improve its speedup with multiple training threads. However, its learning algorithm is a standard SGD algorithm.

    M3: An MLF model proposed in this work.

With such a design, we expect to see the cumulative effects of the acceleration strategies adopted by M3, i.e., the data rearrangement in Fig. 3(b) and the momentum effect in Fig. 3(c).

    To enable fair comparisons, we adopt the following settings:

1) For all models we adopt the same regularization coefficient, i.e., λ_P = λ_Q = 0.005, according to [12], [16]. Considering the learning rate η and the balancing constant γ, we tune them on one fold of each experiment to achieve the best performance of each model, and then adopt the same values on the remaining four folds to achieve the most objective results. Their values on each dataset are summarized in Table I.

    2) We adopt eight training threads for each model in all experiments following [29].

TABLE I PARAMETERS OF M1–M3 ON D1–D4

3) For M1–M3, on each dataset the same random arrays are adopted to initialize P and Q. Such a strategy works compatibly with the five-fold cross-validation settings to eliminate the biased results brought by the initial hypothesis of an LF model, as discussed in [3].

4) The LF space dimension d is set at 20 uniformly in all experiments. We adopt this value to achieve a good balance between the representative learning ability and the computational cost of an LF model, as in [3], [16], [29].

Training curves of M1–M3 on D1–D4 with respect to training iteration count and time cost are given in Figs. 4 and 5, respectively. Comparison results are recorded in Tables II and III. From them, we present our findings next.

a) Owing to the MPSGD algorithm, an MLF model converges much faster than DSGD-based LF models do. For instance, as recorded in Table II, M1 and M2 respectively take 461 and 463 iterations on average to achieve the lowest RMSE on D1. In comparison, M3 takes 112 iterations on average to converge on D1, which is less than one fourth of the iterations taken by M1 and M2. Meanwhile, M3 takes 110 iterations on average to converge in MAE, which is also much less than the 441 iterations by M1 and 448 iterations by M2. Similar results can also be observed on the other testing cases, as shown in Fig. 4 and Tables II and III.

Meanwhile, we observe an interesting phenomenon: M1 and M2 converge at the same rate. Their training curves almost overlap on all testing cases according to Fig. 4. Note that M2 adopts the data shuffling strategy mentioned in Section III-B, as in [30], to make the known data of an HiDS matrix distribute uniformly, while M1 does not. This phenomenon indicates that the data shuffling strategy barely affects the convergence rate or representative learning ability of an LF model.

b) With the MPSGD algorithm, an MLF model's time cost is significantly lower than those of its peers. For instance, as shown in Table II, M3 takes 89 s on average to converge in RMSE on D3. In comparison, M1 takes 1208 s, which is over 13 times M3's time, and M2 takes 308 s, which is still over three times M3's average time. The situation is the same with MAE as the metric, as recorded in Table III.

c) The prediction accuracy of an MLF model is comparable with or slightly higher than those of its peers. As recorded in Tables II and III, on all testing cases M3's prediction error is as low as or even slightly lower than those of M1 and M2. Hence, an MPSGD algorithm can slightly improve an MLF model's prediction accuracy for the missing data of an HiDS matrix in addition to greatly improving its computational efficiency.

d) The stabilities of M1–M3 are close. According to Tables II and III, we see that the standard deviations of M1–M3 in MAE and RMSE are very close on all testing cases. Considering their time cost, since M1 and M2 generally consume much more time than M3 does, their standard deviations in total time cost are generally larger than that of M3. However, this is also data-dependent. On D4, we see that M1–M3 have very similar standard deviations in total time. Hence, we reasonably conclude that the two acceleration strategies, i.e., data rearrangement and momentum incorporation, do not affect an MLF model's performance stability.

    C. Speedup Comparison

    A parallel model’s speedup measures its efficiency gain with the deployed core count, i.e.,
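Based on the description that follows, the displayed ratio is presumably the standard one:

speedup(J) = T_1 / T_J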

where T_1 and T_J denote the training time of a parallel model deployed on one and J training threads, respectively. A high speedup of a parallel model indicates its high scalability and feasibility for large-scale industrial applications.

Fig. 4. Training curves of M1–M3 in iteration count. All panels share the legend of panel (a).

Fig. 5. Training curves of M1–M3 in time cost. All panels share the legend of panel (a).

TABLE II PERFORMANCE COMPARISON AMONG M1–M3 ON D1–D4 WITH RMSE AS AN ACCURACY METRIC

TABLE III PERFORMANCE COMPARISON AMONG M1–M3 ON D1–D4 WITH MAE AS AN ACCURACY METRIC

Fig. 6. Parallel performance comparison among M1–M3 as core count increases. Both panels share the legend in panel (a).

The speedup of M1–M3 on D4 as J increases from two to eight is depicted in Fig. 6. Note that similar situations are found on D1–D3. From it, we clearly see that M3, i.e., the proposed MLF model, outperforms its peers by achieving higher speedup. As J increases, M3 always consumes less time than its peers do, and its speedup is always higher than those of its peers. For instance, from Fig. 6(b) we see that M3's speedup at J = 8 is 6.88, which is much higher than 4.61 by M1 and 4.44 by M2. Therefore, its scalability is higher than those of its peers, making it more feasible for real applications.

    D. Summary

    Based on the above results, we conclude that:

a) Owing to the MPSGD algorithm, an MLF model has significantly higher computational efficiency than its peers do; and

    b) An MLF model’s speedup is also significantly higher than that of its peers. Thus, it has higher scalability for large scale industrial applications than its peers do.

    V. CONCLUSIONS

This paper presents an MLF model able to perform LF analysis of an HiDS matrix with high computational efficiency and scalability. Its principle is two-fold: a) reducing its time cost per iteration through balanced data segmentation, and b) reducing its converging iteration count by incorporating momentum effects into its learning process. Empirical studies show that compared with state-of-the-art parallel LF models, it has obviously higher efficiency and scalability in handling an HiDS matrix.

Although an MLF model performs LF analysis on a static HiDS matrix with high efficiency, its performance on dynamic data [12] remains unknown. As discussed in [41], a GPU-based acceleration scheme is highly efficient when manipulating full matrices in the context of recommender systems and other applications [42]–[50]. Nonetheless, more efforts are required to adapt its fundamental matrix operations to be compatible with an HiDS matrix as concerned in this paper. We plan to address these issues in the future.
