
    Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning

    IEEE/CAA Journal of Automatica Sinica, 2021, Issue 2

    Xin Luo, Senior Member, IEEE, Wen Qin, Ani Dong, Khaled Sedraoui, and MengChu Zhou, Fellow, IEEE

    Abstract—A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. To address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into its training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that, owing to the MPSGD algorithm, an MLF model outperforms the existing state-of-the-art ones in both computational efficiency and scalability.

    I. INTRODUCTION

    BIG data-related industrial applications like recommender systems (RSs) [1]–[5] have a major influence on our daily life. An RS commonly relies on a high-dimensional and sparse (HiDS) matrix that quantifies incomplete relationships among its users and items [6]–[11]. Despite its extreme sparsity and high dimensionality, an HiDS matrix contains rich knowledge regarding various patterns [6]–[11] that are vital for accurate recommendations. A latent factor (LF) model has proven to be highly efficient in extracting such knowledge from an HiDS matrix [6]–[11].

    In general, an LF model works as follows:

    1) Mapping the involved users and items into the same LF space;

    2) Training desired LF according to the known data of a target HiDS matrix only; and

    3) Estimating the target matrix’s unknown data based on the updated LF for generating high-quality recommendations.

    Note that the achieved LF can precisely represent each user and item’s characteristics hidden in an HiDS matrix’s observed data [6]–[8]. Hence, an LF model is highly efficient in predicting unobserved user-item preferences in an RS. Moreover, it achieves a fine balance among computational efficiency, storage cost, and representative learning ability on an HiDS matrix [10]–[16]. Therefore, it is also widely adopted in other HiDS data-related areas like network representation [17], Web-service QoS analysis [3], [4], [18], user track analysis [19], and bio-network analysis [12].

    Owing to its efficiency in addressing HiDS data [1]–[12], an LF model attracts the attention of researchers. A pyramid of sophisticated LF models has been proposed, including a biased regularized incremental simultaneous model [20], a singular value decomposition plus-plus model [21], a probabilistic model [13], a non-negative LF model [6], [22]–[27], and a graph-regularized Lp-smooth non-negative matrix factorization model [28]. When constructing an LF model, a stochastic gradient descent (SGD) algorithm is often adopted as the learning algorithm, owing to its great efficiency in building a learning model via serial but fast-converging training [14], [20], [21]. Nevertheless, as an RS grows, its corresponding HiDS matrix explodes. For instance, Taobao contains billions of users and items. Although the data density of the corresponding HiDS matrix can be extremely low due to its extremely high dimension, it has a huge amount of known data. When factorizing it [21]–[28], a standard SGD algorithm suffers from the following defects:

    1) It serially traverses its known data in each training iteration, which can result in considerable time cost when a target HiDS matrix is large; and

    2) It can take many iterations to make an LF model converge to a steady solution.

    Based on the above analyses, we see that the key to a highly scalable SGD-based LF model is also two-fold: 1) reducing time cost per iteration by replacing its serial data traversing procedure with a parallel one, i.e., implementing a parallel SGD algorithm, and 2) reducing iterations to make a model converge, i.e., accelerating its convergence rate.

    Considering a parallel mechanism, note that an SGD algorithm is iterative, taking multiple iterations to train an LF model. In each iteration, it accomplishes the following tasks (a minimal code sketch follows the list):

    1) Traversing the observed data of a target HiDS matrix, picking up user-item ratings one-by-one;

    2) Computing the stochastic gradient of the instant loss on the active rating with its connected user/item LF;

    3) Updating these user/item LF by moving them along the opposite direction of the achieved stochastic gradient with a pre-defined step size; and

    4) Repeating steps 1)–3) until the traversal of a target HiDS matrix’s known data is completed.
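    To make the serial procedure above concrete, the following minimal Python sketch performs one such pass. It is illustrative only: the array layout, the plain squared instant loss, and the omission of regularization are our assumptions rather than the exact formulation used later in this paper.

        import numpy as np

        def sgd_epoch(known, P, Q, eta=0.01):
            """One serial SGD pass over the known ratings of an HiDS matrix.
            known: iterable of (m, n, r) triples; P, Q: NumPy LF matrices of sizes |M| x d and |N| x d."""
            for m, n, r in known:                          # 1) traverse ratings one-by-one
                err = r - P[m] @ Q[n]                      # 2) instant loss on the active rating
                grad_p, grad_q = -err * Q[n], -err * P[m]  # stochastic gradients w.r.t. the connected LFs
                P[m] -= eta * grad_p                       # 3) move against the gradient with step size eta
                Q[n] -= eta * grad_q

        # Tiny usage example with random data (dimensions are arbitrary):
        P = 0.1 * np.random.rand(100, 20)
        Q = 0.1 * np.random.rand(80, 20)
        sgd_epoch([(0, 5, 4.0), (12, 7, 3.5)], P, Q)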

    From the above analyses, we clearly see that an SGD algorithm makes the desired LF depend on each other during a training iteration, and the learning task of each iteration also depends on those of the previously completed ones. To parallelize such a “single-pass” algorithm, researchers [29], [30] have proposed to decompose the learning task of each iteration such that the dependence of parameter updates can be eliminated with care.

    A Hogwild! algorithm [29] splits the known data of an HiDS matrix into multiple subsets, and then dispatches them to multiple SGD-based training threads. Note that all threads share a single group of LFs. Thus, Hogwild! actually ignores the risk that a single LF can be updated by multiple training threads simultaneously, leading to partial loss of the update information. However, as proven in [29], such information loss barely affects its convergence.

    On the other hand, a distributed stochastic gradient descent (DSGD) algorithm [30] splits a target HiDS matrix into J segmentations, where each one consists of J data blocks with J being a positive integer. It makes user and item LF connected with different blocks in the same segmentation not affect each other’s update in a single iteration. Thus, when performing matrix factorization [31]–[39], a DSGD algorithm’s parallelization is implemented in the following way: learning tasks on J segmentations are taken serially, where the learning task on the jth segmentation is split into J subtasks that can be done in parallel.
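    As an illustration of this splitting, the sketch below builds J segmentations, each holding J blocks that share no rows or columns. The banding of rows and columns by simple integer division is our own simplification; [30] may partition the matrix differently.

        def split_into_segmentations(known, num_rows, num_cols, J):
            """known: (m, n, r) triples of an HiDS matrix. Returns J segmentations of J blocks each."""
            blocks = [[[] for _ in range(J)] for _ in range(J)]
            for m, n, r in known:
                i = min(m * J // num_rows, J - 1)          # row band of this rating
                j = min(n * J // num_cols, J - 1)          # column band of this rating
                blocks[i][j].append((m, n, r))
            # Segmentation s gathers blocks (i, (i + s) mod J): inside one segmentation every block
            # occupies a distinct row band and a distinct column band, so updates never collide.
            return [[blocks[i][(i + s) % J] for i in range(J)] for s in range(J)]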

    An alternative stochastic gradient descent (ASGD) algorithm [40] decouples the update dependence among different LF categories to implement its parallelization. For instance, to build an LF-based model for an RS, it splits the training task of each iteration into two sub-tasks, where one updates the user LFs while the other updates the item LFs with SGD. As discussed in [40], the coupling dependences among different LF categories are eliminated with such a design, thereby making both subtasks dividable without any information loss.

    The parallel SGD algorithms mentioned above can implement a parallelized training process as well as maintain model performance. However, they cannot accelerate an LF model’s convergence rate, i.e., they consume as many training iterations as a standard SGD algorithm does despite their parallelization mechanisms. In other words, they all ignore the second factor of building a highly-scalable SGD-based LF model, i.e., accelerating its convergence rate.

    From this point of view, this work aims at implementing a parallel SGD algorithm with a faster convergence rate than existing ones. To do so, we incorporate a momentum method into a DSGD algorithm to achieve a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm. Note that a momentum method is initially designed for batch gradient descent algorithms [34], [35]. Nonetheless, as discussed in [33], [34], it can be adapted to SGD by alternating the learning direction of each single LF according to its stochastic gradients achieved in consecutive learning updates. The reason why we choose a DSGD algorithm as the base algorithm is that its parallelization is implemented based on data splitting instead of reformulating SGD-based learning rules. Thus, it is expected to be as compatible with a momentum method as a standard SGD algorithm appears to be [33]. The main contributions of this study include:

    1) An MPSGD algorithm that achieves faster convergence than existing parallel SGD algorithms when building an LF model for an RS;

    2) Algorithm design and analysis for an MPSGD-based LF model; and

    3) Empirical studies on four HiDS matrices from industrial applications.

    Section II gives preliminaries. Section III presents the methods. Section IV provides the experimental results. Finally, Section V concludes this paper.

    II. PRELIMINARIES

    An LF model takes an HiDS matrix as its fundamental input, as defined in [3], [16].

    Definition 1: Given two entity sets M and N, let R^{|M|×|N|} be a matrix in which each entry r_{m,n} describes the connection between m ∈ M and n ∈ N. Let Λ and Γ respectively denote its known and unknown data sets; R is HiDS if |Λ| ≪ |Γ|.

    Note that the operator |·| computes the cardinality of an enclosed set. Thus, we define an LF model as in [3], [16].

    Definition 2: Given R and Λ, an LF model builds a rank-d approximation R̂ = PQ^T to R, with P^{|M|×d} and Q^{|N|×d} being LF matrices and d ≪ min{|M|, |N|}.

    To obtain P and Q, an objective function distinguishing R and R̂ is desired. Note that to achieve the highest efficiency, it should be defined on Λ only. With the Euclidean distance [16], it is formulated as follows.
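    In a standard form consistent with the surrounding notation (the exact regularization terms of the original may differ), the regularized bilinear objective (2) and its SGD update rule (3) read

        ε(P, Q) = Σ_{(m,n)∈Λ} [ (r_{m,n} − Σ_{k=1}^{d} p_{m,k} q_{n,k})² + λ_P Σ_{k=1}^{d} p_{m,k}² + λ_Q Σ_{k=1}^{d} q_{n,k}² ]        (2)

        for each r_{m,n} ∈ Λ, k = 1, …, d:
        p_{m,k}^{(t)} = p_{m,k}^{(t−1)} − η ∂ε_{m,n}/∂p_{m,k}^{(t−1)},    q_{n,k}^{(t)} = q_{n,k}^{(t−1)} − η ∂ε_{m,n}/∂q_{n,k}^{(t−1)}        (3)

    where ε_{m,n} denotes the instant loss on a single known entry r_{m,n}.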

    Note that in (3), t denotes the tth update point and η denotes the learning rate. Following the Robbins-Siegmund theorem [36], (3) ensures a solution to the bilinear problem (2) with a proper η.

    III. PROPOSED METHODS

    A. DSGD Algorithm

    As mentioned before, a DSGD algorithm’s parallelization relies on data segmentation. For instance, as depicted in Fig.1, it splits the rating matrix into three segmentations, i.e., S1–S3. Each segmentation consists of three data blocks, e.g., Λ11, Λ22, and Λ33 belong to S1 as in Fig.1. As proven in [30], in each iteration the LF updates inside a block do not affect those of other blocks from the same segmentation, because they have no rows or columns in common, as shown in Fig.1. Considering S1 in Fig.1, we set three independent training threads, where the first traverses Λ11, the second Λ22, and the third Λ33. Thus, these training threads can run simultaneously.

    Fig.1. Splitting a rating matrix to achieve segmentations and blocks.

    B. Data Rearrangement Strategy

    However, note that different data segmentations do have rows and columns in common, as depicted in Fig.1. Therefore, each training iteration is actually divided into J tasks, where J is the segmentation count. These J tasks should be done sequentially, where each task can be further divided into J subtasks that can be done in parallel, as depicted in Fig.2. Note that all J subtasks in a segmentation are executed synchronously, which results in bucket effects, i.e., the time cost of addressing each segmentation is decided by that of its largest subtask. From this perspective, when the data distribution is imbalanced as in an HiDS matrix, a DSGD algorithm can only speed up each training iteration in a limited way. For example, the unevenly distributed data of the MovieLens 20M (ML20M) matrix are depicted in Fig.3(a), where Λ11, Λ22, Λ33, and Λ44 are independent blocks in the first data segmentation while most of its data are in Λ11. Thus, for threads n1–n4 handling Λ11–Λ44, their time cost is decided by that of n1.

    Fig.2. Handling segmentations and blocks in an HiDS matrix with DSGD.

    To address this issue, the data in an HiDS matrix should be rearranged to balance their distribution, enabling a DSGD algorithm to achieve satisfactory speedup [38]. As shown in Fig.3(b), such a process is implemented by exchanging rows and columns in each segmentation at random [38].
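    A minimal sketch of such a rearrangement follows. It is our own illustration of the idea in [38]: the row and column indices are randomly permuted before the blocks are formed, so the known entries spread more evenly over them.

        import random

        def rearrange(known, num_rows, num_cols, seed=0):
            """Randomly relabel rows and columns of an HiDS matrix given as (m, n, r) triples."""
            rng = random.Random(seed)
            row_map = list(range(num_rows))
            col_map = list(range(num_cols))
            rng.shuffle(row_map)
            rng.shuffle(col_map)
            # Each known entry keeps its rating but moves to its permuted row/column position.
            return [(row_map[m], col_map[n], r) for m, n, r in known]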

    C. MPSGD Algorithm

    A momentum method is very efficient in accelerating the convergence rate of an SGD-based learning model [31], [33], [34]. It determines the learning update in the current iteration by building a linear combination of the current gradient and the learning update in the last iteration. With such a design, oscillations during the learning process decrease, making the resultant model converge faster. According to [33], with a momentum-incorporated SGD algorithm, the decision parameter θ of objective J(θ) is learnt as in (4).
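    Assuming the standard momentum form, which we take to correspond to (4):

        V_θ^{(t)} = γ V_θ^{(t−1)} + η ∇_θ J(θ^{(t−1)}; o^{(t)}),    θ^{(t)} = θ^{(t−1)} − V_θ^{(t)},    V_θ^{(0)} = 0        (4)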

    Fig.3. Illustration of an MLF model.

    In (4), V_θ^{(0)} denotes the initial value of the velocity, V_θ^{(t)} denotes the tth iterative value of the velocity, γ denotes the balancing constant that tunes the effects of the current gradient and the previous update velocity, and o^{(t)} denotes the tth training instance.

    To build an SGD-based LF model, the velocity vector is updated at each single training instance. We adopt a velocity parameter v^{(P)}_{m,k} for p_{m,k} to record its update velocity, and thus generate V^{(P)}_{|M|×d} for P. According to (4), we update p_{m,k} for the single loss ε_{m,n} on training instance r_{m,n} as follows.
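    Based on (4) and the costs listed in Algorithm 2, (5) presumably takes the form

        v^{(P)(t)}_{m,k} = γ v^{(P)(t−1)}_{m,k} + η ∂ε_{m,n}/∂p^{(t−1)}_{m,k},    p^{(t)}_{m,k} = p^{(t−1)}_{m,k} − v^{(P)(t)}_{m,k}        (5)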

    Velocity constant γ in (5) adjusts the momentum effects. Similarly, we adopt a velocity parameter v^{(Q)}_{n,k} for q_{n,k} to record its update velocity, and thus V^{(Q)}_{|N|×d} is adopted for Q. The momentum-incorporated update rules for q_{n,k} are given as follows.
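    Analogously, again as inferred from Algorithm 2, (6) presumably reads

        v^{(Q)(t)}_{n,k} = γ v^{(Q)(t−1)}_{n,k} + η ∂ε_{m,n}/∂q^{(t−1)}_{n,k},    q^{(t)}_{n,k} = q^{(t−1)}_{n,k} − v^{(Q)(t)}_{n,k}        (6)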

    As depicted in Figs. 3(c)–(d), with the momentum-incorporated learning rules presented in (5) and (6), LF matrices P and Q can be trained with much fewer oscillations. Moreover, by integrating the principle of DSGD into the algorithm, we achieve an MPSGD algorithm that parallelizes the learning process of an LF model at a high convergence rate. After dividing Λ into J data segmentations with J × J data blocks, we obtain a block-wise form of the objective.
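    The exact form of (7) is our reconstruction; a block-wise decomposition along the following lines makes the parallel scheme explicit:

        ε = Σ_{i=1}^{J} Σ_{j=1}^{J} ε_{ij},    ε_{ij} = Σ_{(m,n)∈Λ_{ij}} ε_{m,n}        (7)

    so that the J blocks inside one segmentation can be addressed by J training threads simultaneously.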

    D. Algorithm Design and Analysis

    With an MPSGD algorithm, we design Algorithm 1 for an MPSGD-based LF (MLF) model. Note that algorithm MLF further depends on the procedure update shown in Algorithm 2. To implement its efficient parallelization, we first rearrange Λ according to the strategy mentioned in Section III-B to balance Λ, as in line 5 of Algorithm 1. Afterwards, the rearranged Λ is divided into J data segmentations with J × J data blocks, as in line 6 of Algorithm 1. Considering the ith data segmentation, its jth data block is assigned to the jth training thread, as shown in lines 8–10 of Algorithm 1. Then all J training threads can be started simultaneously to execute procedure update, which addresses the parameter updates related to its assigned data block with MPSGD discussed in Section III-C.
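    The dispatch in lines 8–10 of Algorithm 1 can be pictured with the following minimal Python sketch; it is our illustration only, assuming a thread pool and an update_block callable that applies the momentum-incorporated rules (5) and (6) to one data block.

        from concurrent.futures import ThreadPoolExecutor

        def mpsgd_iteration(segmentations, update_block):
            """One MPSGD training iteration over J segmentations of J mutually independent blocks."""
            J = len(segmentations)
            for segment in segmentations:                  # the J segmentations are handled serially
                with ThreadPoolExecutor(max_workers=J) as pool:
                    # Blocks of one segmentation share no users or items, so their LF and
                    # velocity updates can proceed concurrently without conflicts.
                    list(pool.map(update_block, segment))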

    In Algorithm MLF, we introduce V^{(P)} and V^{(Q)} as auxiliary arrays for improving its computational efficiency. With J data segmentations and J training threads, each training thread actually takes 1/J of the whole data analysis task when the data distribution is balanced as shown in Fig.3(b). Thus, its time cost on a single training thread in each iteration is
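    presumably of the following order (our reconstruction; the remark on (8) and (9) below treats J⁻¹, d, and t as constants):

        T_iteration ≈ Θ(|Λ| × d / J)        (8)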

    Therefore, its time cost in t iterations is as follows.
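    Again given as an order-of-magnitude reconstruction:

        T_total ≈ Θ(t × |Λ| × d / J)        (9)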

    * Note that all J training threads are started in parallel. Hence, the actual cost of this operation is decided by the thread consuming the most time.

    Algorithm 2 Procedure Update
    1  for each r_{m,n} in Λ_{ij} do                                                          // Cost: × |Λ|/J²
    2      r̂_{m,n} = Σ_{k=1}^{d} p_{m,k} q_{n,k}                                              // Cost: Θ(d)
    3      for k = 1 to d do                                                                  // Cost: × d
    4          v^{(P)(t)}_{m,k} = γ v^{(P)(t−1)}_{m,k} + η ∂ε^{(t−1)}_{m,n}/∂p^{(t−1)}_{m,k}   // Cost: Θ(1)
    5          v^{(Q)(t)}_{n,k} = γ v^{(Q)(t−1)}_{n,k} + η ∂ε^{(t−1)}_{m,n}/∂q^{(t−1)}_{n,k}   // Cost: Θ(1)
    6          p^{(t)}_{m,k} ← p^{(t−1)}_{m,k} − v^{(P)(t)}_{m,k}                              // Cost: Θ(1)
    7          q^{(t)}_{n,k} ← q^{(t−1)}_{n,k} − v^{(Q)(t)}_{n,k}                              // Cost: Θ(1)
    8      end for
    9  end for

    Note that J⁻¹, d, and t in (8) and (9) are all positive constants, which result in a linear relationship between the computational cost of an MLF model and the number of known entries in the target HiDS matrix. However, owing to its parallel and fast-converging mechanism, J⁻¹ and t can be reduced significantly, thereby greatly reducing its time cost. Next we validate its performance on several HiDS matrices generated by industrial applications.

    IV. EXPERIMENTAL RESULTS AND ANALYSIS

    A. General Settings

    1) Evaluation Protocol: When analyzing an HiDS matrix from real applications [1]–[5], [7]–[10], [16], [19], a major motivation is to predict its missing data for achieving a complete relationship among all involved entities. Hence, this paper selects missing data estimation of an HiDS matrix as the evaluation protocol. More specifically, given Λ, such a task makes a tested model predict data in Γ. The outcome is validated on a validation set Ψ disjoint from Λ. To validate the prediction accuracy of a model, the root mean squared error (RMSE) and mean absolute error (MAE) are chosen as the metrics [9]–[11], [16], [37]–[39].
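    We assume the standard definitions over Ψ, with r̂_{m,n} denoting the estimate generated by a tested model:

        RMSE = √( Σ_{(m,n)∈Ψ} (r_{m,n} − r̂_{m,n})² / |Ψ| ),    MAE = Σ_{(m,n)∈Ψ} |r_{m,n} − r̂_{m,n}| / |Ψ|

    Lower RMSE and MAE indicate higher prediction accuracy.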

    2) Datasets: Four HiDS matrices are adopted in the experiments, whose details are given below:

    a) D1: Douban matrix. It is extracted from Douban [32], China’s largest online music, book, and movie database. It has 16 830 839 ratings in the range of [1, 5] by 129 490 users on 58 541 items. Its data density is 0.22% only.

    b) D2: Dating Agency matrix. It is collected by the online dating site LibimSeTi, with 17 359 346 observed entries in the range of [1, 10]. It has 135 359 users and 168 791 profiles [11], [12]. Its data density is 0.076% only.

    c) D3: MovieLens 20M matrix. It is collected by the MovieLens site maintained by the GroupLens research team [37]. It has 20 000 263 known entries in [0.5, 5] among 26 744 movies and 138 493 users. Its density is 0.54% only.

    d) D4: Netflix matrix. It is collected by the Netflix business website. It contains 100 480 507 known entries in the range of [1, 5] by 2 649 429 users on 17 770 movies [11], [12]. Its density is 0.21% only.

    e) All matrices are high-dimensional, extremely sparse, and collected from industrial applications. Meanwhile, their data distributions are all highly imbalanced. Hence, results on them are highly representative.

    The known data set of each matrix is randomly divided into five equal-sized, disjoint subsets to comply with the five-fold cross-validation settings, i.e., each time we choose four subsets as the training set Λ to train a model, which predicts the remaining subset as the testing set Ψ. This process is sequentially repeated five times to achieve the final results. The training process of a tested model terminates if i) the number of consumed iterations reaches the preset threshold, i.e., 1000, or ii) the error difference between two sequential iterations is smaller than the preset threshold, i.e., 10⁻⁵.

    B. Comparison Results

    The following models are included in our experiments:

    M1: A DSGD-based LF model proposed in [30]. Note that M1’s parallelization is described in detail in Section III-A. However, it differs from an MLF model in two aspects: a) it does not adopt the data rearrangement illustrated in Fig.3(b); and b) its learning algorithm is a standard SGD algorithm.

    M2: An LF model adopting a modified DSGD scheme, where the data distribution of the target HiDS matrix is rearranged for improving its speedup with multiple training threads. However, its learning algorithm is a standard SGD algorithm.

    M3: An MLF model proposed in this work.

    With such a design, we expect to see the cumulative effects of the acceleration strategies adopted by M3, i.e., the data rearrangement in Fig.3(b) and the momentum effects in Fig.3(c).

    To enable fair comparisons, we adopt the following settings:

    1) For all models we adopt the same regularization coefficient, i.e., λ_P = λ_Q = 0.005, according to [12], [16]. Considering the learning rate η and balancing constant γ, we tune them on one fold of each experiment to achieve the best performance of each model, and then adopt the same values on the remaining four folds to achieve the most objective results. Their values on each dataset are summarized in Table I.

    2) We adopt eight training threads for each model in all experiments following [29].

    TABLE I PARAMETERS OF M1–M3 ON D1–D4

    3) For M1–M3, on each dataset the same random arrays are adopted to initialize P and Q. Such a strategy can work compatibly with the five-fold cross-validation settings to eliminate the biased results brought by the initial hypothesis of an LF model, as discussed in [3].

    4) The LF space dimension d is set at 20 uniformly in all experiments. We adopt this value to enable good balance between the representative learning ability and computational cost of an LF model, as in [3], [16], [29].

    Training curves of M1–M3 on D1–D4 with training iteration count and time cost are respectively given in Figs. 4 and 5. Comparison results are recorded in Tables II and III. From them, we present our findings next.

    a) Owing to an MPSGD algorithm, an MLF model converges much faster than DSGD-based LF models do. For instance, as recorded in Table II, M1 and M2 respectively take 461 and 463 iterations on average to achieve the lowest RMSE on D1. In comparison, M3 takes 112 iterations on average to converge on D1, which is less than one fourth of that by M1 and M2. Meanwhile, M3 takes 110 iterations on average to converge in MAE, which is also much less than the 441 iterations by M1 and 448 iterations by M2. Similar results can also be observed on the other testing cases, as shown in Fig.4 and Tables II and III.

    Meanwhile, we observe an interesting phenomenon that M1 and M2 converge at the same rate. Their training curves almost overlap on all testing cases according to Fig.4. Note that M2 adopts the data shuffling strategy mentioned in Section III-B, as in [30], to make the known data of an HiDS matrix distribute uniformly, while M1 does not. This phenomenon indicates that the data shuffling strategy barely affects the convergence rate or representative learning ability of an LF model.

    b) With an MPSGD algorithm, an MLF model’s time cost is significantly lower than those of its peers. For instance, as shown in Table II, M3 takes 89 s on average to converge in RMSE on D3. In comparison, M1 takes 1208 s, which is over 13 times M3’s time. M2 takes 308 s, which is still over three times M3’s average time. The situation is the same with MAE as the metric, as recorded in Table III.

    c) Prediction accuracy of an MLF model is comparable with or slightly higher than those of its peers. As recorded in Tables II and III, on all testing cases M3’s prediction error is as low as or even slightly lower than those of M1 and M2. Hence, besides its high computational efficiency, an MPSGD algorithm can slightly improve an MLF model’s prediction accuracy for the missing data of an HiDS matrix.

    d) The stability of M1–M3 is close. According to Tables II and III, we see that the standard deviations of M1–M3 in MAE and RMSE are very close on all testing cases. Considering their time cost, since M1 and M2 generally consume much more time than M3 does, their standard deviations in total time cost are generally larger than that of M3. However, it is also data-dependent. On D4, we see that M1–M3 have very similar standard deviations in total time. Hence, we reasonably conclude that the two acceleration strategies, i.e., data rearrangement and momentum incorporation, do not affect the performance stability.

    C. Speedup Comparison

    A parallel model’s speedup measures its efficiency gain with the deployed core count, i.e.,
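    (in its standard form, which we assume here)

        Speedup(J) = T_1 / T_J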

    where T_1 and T_J denote the training time of a parallel model deployed on one and J training threads, respectively. High speedup of a parallel model indicates its high scalability and feasibility for large-scale industrial applications.

    Fig.4. Training curves of M1–M3 in iteration count. All panels share the legend of panel (a).

    Fig.5. Training curves of M1–M3 in time cost. All panels share the legend of panel (a).

    TABLE II PERFORMANCE COMPARISON AMONG M1–M3 ON D1–D4 WITH RMSE AS AN ACCURACY METRIC

    TABLE III PERFORMANCE COMPARISON AMONG M1–M3 ON D1–D4 WITH MAE AS AN ACCURACY METRIC

    Fig.6. Parallel performance comparison among M1–M3 as core count increases. Both panels share the legend in panel (a).

    The speedup of M1–M3 on D4 as J increases from two to eight is depicted in Fig.6. Note that similar situations are found on D1–D3. From it, we clearly see that M3, i.e., the proposed MLF model, outperforms its peers in achieving higher speedup. As J increases, M3 always consumes less time than its peers do, and its speedup is always higher than those of its peers. For instance, from Fig.6(b) we see that M3’s speedup at J = 8 is 6.88, which is much higher than 4.61 by M1 and 4.44 by M2. Therefore, its scalability is higher than those of its peers, making it more feasible for real applications.

    D. Summary

    Based on the above results, we conclude that:

    a) Owing to an MPSGD algorithm, an MLF model has significantly higher computational efficiency than its peers do; and

    b) An MLF model’s speedup is also significantly higher than that of its peers. Thus, it has higher scalability for large-scale industrial applications than its peers do.

    V. CONCLUSIONS

    This paper presents an MLF model able to perform LF analysis of an HiDS matrix with high computational efficiency and scalability. Its principle is two-fold: a) reducing its time cost per iteration through balanced data segmentation, and b) reducing its converging iteration count by incorporating momentum effects into its learning process. Empirical studies show that compared with state-of-the-art parallel LF models, it has obviously higher efficiency and scalability in handling an HiDS matrix.

    Although an MLF model performs LF analysis on a static HiDS matrix with high efficiency, its performance on dynamic data [12] remains unknown. As discussed in [41], a GPU-based acceleration scheme is highly efficient when manipulating full matrices in the context of recommender systems and other applications [42]–[50]. Nonetheless, more efforts are required to adjust its fundamental matrix operations to be compatible with an HiDS matrix, as concerned in this paper. We plan to address these issues in the future.
