
    Robust Latent Factor Analysis for Precise Representation of High-Dimensional and Sparse Data

    IEEE/CAA Journal of Automatica Sinica, 2021, Issue 4 (2021-04-13)

    Di Wu, Member, IEEE and Xin Luo, Senior Member, IEEE

    Abstract—High-dimensional and sparse (HiDS) matrices commonly arise in various industrial applications, e.g., recommender systems (RSs), social networks, and wireless sensor networks. Since they contain rich information, how to accurately represent them is of great significance. A latent factor (LF) model is one of the most popular and successful ways to address this issue. Current LF models mostly adopt L2-norm-oriented Loss to represent an HiDS matrix, i.e., they sum the errors between observed data and predicted ones with L2-norm. Yet L2-norm is sensitive to outlier data. Unfortunately, outlier data usually exist in such matrices. For example, an HiDS matrix from RSs commonly contains many outlier ratings due to some heedless/malicious users. To address this issue, this work proposes a smooth L1-norm-oriented latent factor (SL-LF) model. Its main idea is to adopt smooth L1-norm rather than L2-norm to form its Loss, making it have both strong robustness and high accuracy in predicting the missing data of an HiDS matrix. Experimental results on eight HiDS matrices generated by industrial applications verify that the proposed SL-LF model not only is robust to the outlier data but also has significantly higher prediction accuracy than state-of-the-art models when they are used to predict the missing data of HiDS matrices.

    I. INTRODUCTION

    IN this era of information explosion, people are frequently inundated by big data from various industrial applications [1]–[5], e.g., recommender systems (RSs), social networks, and wireless sensor networks. Among these applications, matrices are commonly adopted to represent the relationship between two types of entities. For example, a user-item rating matrix is frequently seen in RSs [6]–[9], where each row indicates a specific user, each column indicates a specific item (e.g., movie, electronic product, and music), and each entry indicates a user's preference on an item.

    In big data-related applications like Amazon [10], since the relation among numerous entities is unlikely to be fully observed in practice, matrices from these applications are usually high-dimensional and sparse (HiDS) [7], [11]. Yet these HiDS matrices contain rich information regarding various valuable knowledge, e.g., users' potential preferences on items in RSs. Hence, how to precisely extract useful information from an HiDS matrix becomes a hot yet thorny issue in industrial applications.

    Up to now, various approaches and models have been proposed to address this issue [6]–[9]. Among them, a latent factor (LF) model, which originates from matrix factorization techniques [11], [12], is becoming increasingly popular due to its high accuracy and scalability in industrial applications [13], [14]. Given an HiDS matrix, an LF model represents it by training two low-dimensional LF matrices based on its observed data only [13], [15]. To build an accurate LF model, the design of its Loss function is crucial, where Loss denotes the sum of errors, computed by a specific norm, between observed data (ground truths) and the predicted ones on an HiDS matrix [13], [14].
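The mechanism just described fits in a few lines: two low-dimensional LF matrices, an inner-product prediction, and a Loss accumulated over observed entries only. The toy sizes and ratings below are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical toy setup: |U| = 4 users, |I| = 5 items, f = 2 latent factors.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 2)) * 0.1   # user latent factor matrix
Y = rng.standard_normal((5, 2)) * 0.1   # item latent factor matrix

# The prediction for entry (u, i) is the inner product of the two LF vectors.
def predict(u, i):
    return X[u] @ Y[i]

# Loss is accumulated over observed entries only, here with L2-norm
# (the formulation most current LF models adopt, per the text).
observed = {(0, 1): 4.0, (2, 3): 1.0}   # (user, item) -> rating
loss = sum((z - predict(u, i)) ** 2 for (u, i), z in observed.items())
print(loss)
```

Training would then adjust X and Y to shrink this Loss, touching only the rows and columns that appear in the observed set.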

    Currently, most LF models adopt L2-norm-oriented Loss [12]–[18] while few adopt L1-norm-oriented Loss [19]. Fig.1(a) illustrates the differences between L1-norm-oriented and L2-norm-oriented Losses [19]–[21], where Error denotes the error between predicted results and ground truths: 1) the former is less sensitive to Error than the latter, thereby enhancing the robustness of a resultant model [19], [21], [22], and 2) the latter is smoother than the former when the absolute value of Error is small (less than 1), thereby enhancing the stability of a resultant model [23].

    Hence, although an LF model with L2-norm-oriented Loss can achieve a steady and accurate prediction for the missing data of an HiDS matrix [23], its robustness cannot be guaranteed when such a matrix is mixed with outlier data. Unfortunately, outlier data usually exist in an HiDS matrix. For example, an HiDS matrix from RSs commonly contains many outlier ratings due to some heedless/malicious users (e.g., a user who rates an item randomly in the feedback or bad-mouths a specific item) [24], [25].

    On the other hand, although an LF model with L1-norm-oriented Loss has intrinsic robustness, its solution space for predicting the missing data of an HiDS matrix is multimodal. The reason is that L1-norm-oriented Loss is not smooth when the predicted results and ground truths are close to each other. As a result, an LF model with L1-norm-oriented Loss may get stuck in some "bad" solutions, making it unable to guarantee high prediction accuracy.

    From the aforementioned discussions, we see that neither L1-norm-oriented nor L2-norm-oriented Loss is the best choice for modeling an LF model. Then, do we have an alternative? This work aims to answer this question. Fig.1(b) illustrates the smooth L1-norm-oriented Loss, where we observe that Loss not only is robust to Error but also has a smooth gradient when the absolute value of Error is smaller than 1. Motivated by this observation, we propose a smooth L1-norm-oriented latent factor (SL-LF) model. Its main idea is to adopt smooth L1-norm to form its Loss, making it have both strong robustness and high accuracy in predicting the missing data of an HiDS matrix.
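The three Losses in Fig.1 can be written down directly. A common smooth-L1 form (a Huber-style loss) is quadratic for |Error| < 1 and linear beyond; whether SL-LF uses exactly this parameterization is an assumption here:

```python
import numpy as np

def l2_loss(e):
    # Quadratic everywhere: smooth at 0, but outliers dominate the sum.
    return 0.5 * e ** 2

def l1_loss(e):
    # Linear everywhere: robust to outliers, but non-smooth at 0.
    return np.abs(e)

def smooth_l1_loss(e, delta=1.0):
    """Quadratic near zero (smooth gradient), linear for large errors
    (robust to outliers). delta = 1 matches the |Error| < 1 threshold
    in the text; the paper's exact form may differ."""
    a = np.abs(e)
    return np.where(a < delta, 0.5 * e ** 2 / delta, a - 0.5 * delta)

# A large error contributes linearly, like L1, far less than under L2.
print(l2_loss(10.0), smooth_l1_loss(np.array(10.0)))   # 50.0 vs 9.5
```

Near zero the smooth-L1 curve coincides with a scaled L2 curve, which is exactly why its gradient stays smooth where L1's does not.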

    Fig.1. The relationship between Loss and Error with different norms to solve a regression problem.

    Main contributions of this work include:

    1) Proposing an SL-LF model with strong robustness and high accuracy in predicting the missing data of an HiDS matrix.

    2) Performing a suite of theoretical analyses and algorithm designs for the proposed SL-LF model.

    3) Conducting extensive empirical studies on eight HiDS matrices generated by industrial applications to evaluate the proposed model and other state-of-the-art ones.

    To the authors’ best knowledge, this is the first study to employ smooth L1-norm to implement an LF model for predicting the missing data of an HiDS matrix. Experimental results demonstrate that, compared with state-of-the-art models, SL-LF achieves significant accuracy gains when predicting the missing data of an HiDS matrix. Its computational efficiency is also highly competitive with that of the most efficient LF models.

    The rest of the paper is organized as follows. Section II states preliminaries. Section III presents the SL-LF model. Section IV reveals experimental results. Section V discusses related work. Finally, Section VI concludes this paper.

    II. PRELIMINARIES

    IV. EXPERIMENTS AND RESULTS

    A. General Settings

    Datasets: Eight benchmark datasets are selected to conduct the experiments. Table I summarizes their properties. They are real HiDS datasets generated by industrial applications and frequently adopted by prior studies [13], [29]. Dating is collected by the online dating website LibimSeTi [30], Douban is collected by Douban.com [13], [29], Eachmovie is collected by the EachMovie system of the DEC Systems Research Center [1], [31], Epinion is collected by the Trustlet website [29], Flixter is collected by the Flixter website [32], Jester is collected by the joke recommender Jester [32], and MovieLens_10M and MovieLens_20M are collected by the MovieLens system [33].

    Algorithm 1 SL-LF_b̄
    Input: Z_K
    Output: X, Y
      Operation                                                Cost
      initializing f, λ, η, N_mtr (max-training-round count)   Θ(1)
      initializing X randomly                                  Θ(|U|×f)
      initializing Y randomly                                  Θ(|I|×f)
      while t ≤ N_mtr && not converged                         ×N_mtr
        for each entry z_u,i in Z_K                            ×|Z_K|
          for k = 1 to f                                       ×f
            computing x_u,k according to (7) and (11)          Θ(1)
            computing y_i,k according to (7) and (11)          Θ(1)
          end for
        end for
        t = t + 1                                              Θ(1)
      end while

    Algorithm 2 SL-LF_b
    Input: Z_K
    Output: X, Y
      Operation                                                Cost
      initializing f, λ, η, N_mtr (max-training-round count)   Θ(1)
      initializing X randomly                                  Θ(|U|×f)
      initializing Y randomly                                  Θ(|I|×f)
      computing μ according to (13)                            Θ(|Z_K|)
      for each entry z_u,i in Z_K                              ×|Z_K|
        computing b_i according to (14)                        Θ(1)
      end for
      for each entry z_u,i in Z_K                              ×|Z_K|
        computing b_u according to (14)                        Θ(1)
      end for
      while t ≤ N_mtr && not converged                         ×N_mtr
        for each entry z_u,i in Z_K                            ×|Z_K|
          for k = 1 to f                                       ×f
            computing x_u,k according to (11) and (15)         Θ(1)
            computing y_i,k according to (11) and (15)         Θ(1)
          end for
        end for
        t = t + 1                                              Θ(1)
      end while
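The update rules (7), (11), and (13)–(15) that the algorithms reference are not reproduced in this excerpt, so the following is only a structural sketch of Algorithm 1: stochastic gradient descent over the known entries of Z_K, using a generic regularized update with the clipped gradient of a smooth-L1 loss:

```python
import numpy as np

def smooth_l1_grad(e, delta=1.0):
    # Gradient of a smooth-L1 loss w.r.t. the error e: linear inside
    # |e| < delta, clipped to ±1 outside -- this clipping is what bounds
    # an outlier entry's influence on the update.
    return float(np.clip(e / delta, -1.0, 1.0))

def train_sl_lf(Z_K, n_users, n_items, f=20, lam=0.01, eta=0.001,
                n_rounds=100):
    """Structural sketch of Algorithm 1 (not the paper's exact rules):
    SGD on observed entries only, with L2 regularization weight lam
    and learning rate eta."""
    rng = np.random.default_rng(0)
    X = rng.standard_normal((n_users, f)) * 0.1
    Y = rng.standard_normal((n_items, f)) * 0.1
    for _ in range(n_rounds):                       # while t <= N_mtr
        for (u, i), z in Z_K.items():               # each entry in Z_K
            e = z - X[u] @ Y[i]                     # prediction error
            g = smooth_l1_grad(e)
            X[u] += eta * (g * Y[i] - lam * X[u])   # update x_{u,k}, k=1..f
            Y[i] += eta * (g * X[u] - lam * Y[i])   # update y_{i,k}, k=1..f
    return X, Y
```

Per training round the cost is Θ(|Z_K|×f), matching the cost column above: each known entry touches only one row of X and one row of Y.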

    Evaluation Metrics: Missing data prediction is a common but important task in representing an HiDS matrix [34]. To evaluate prediction accuracy, mean absolute error (MAE) and root mean squared error (RMSE) are widely adopted:

    MAE = ( Σ_{z_u,i ∈ Γ} |z_u,i − ẑ_u,i|_abs ) / |Γ|,
    RMSE = ( ( Σ_{z_u,i ∈ Γ} (z_u,i − ẑ_u,i)² ) / |Γ| )^(1/2),

    where Γ denotes the testing set, ẑ_u,i denotes the prediction for z_u,i, and |·|_abs denotes the absolute value of a given number. Lower MAE and RMSE indicate higher missing data prediction accuracy. Besides, to evaluate the computational efficiency of missing data prediction, we measure CPU running time.
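The two metrics translate directly to code; here `y_true` and `y_pred` stand for the observed and predicted values of the entries in Γ:

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error over the testing set Γ.
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    # Root mean squared error over the testing set Γ.
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

print(mae([1, 2, 3], [1, 2, 5]), rmse([1, 2, 3], [1, 2, 5]))  # ≈ 0.667, ≈ 1.155
```

Note that RMSE squares each error before averaging, so a single large (outlier-driven) error inflates RMSE far more than MAE.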

    TABLE I PROPERTIES OF ALL THE DATASETS

    Experimental Designs: For each dataset, 80% of its known data are used as the training set and the remaining 20% as the testing set. Five-fold cross-validation is adopted. All the experiments are run on a PC with a 3.4 GHz i7 CPU and 64 GB RAM. In the following experiments, we aim at answering the research questions:
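The 80%/20% five-fold protocol can be sketched as follows; `entries` stands in for the list of known entries of a dataset, and the function yields index arrays into it:

```python
import numpy as np

def five_fold_splits(entries, seed=0):
    """Shuffle the known entries, cut them into five folds, and in each
    of the five runs use four folds (80%) for training and one fold
    (20%) for testing -- the protocol described in the text."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(entries))
    folds = np.array_split(idx, 5)
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, test

for k, (train, test) in enumerate(five_fold_splits(list(range(100)))):
    print(k, len(train), len(test))   # each run: 80 train, 20 test
```

Averaging a metric over the five runs gives every known entry exactly one turn in the testing set.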

    1) How do the hyper-parameters of an SL-LF model impact its prediction performance?

    2) How do the outlier data impact the prediction performance of an SL-LF model?

    3) Does the proposed SL-LF model outperform related state-of-the-art models?

    B. Hyper-Parameter Sensitivity Tests

    1https://pan.baidu.com/s/1o_8sKP0HRluNH1a4IWHW8w, Code: t3sw

    Fig.2. The training process of SL-LF_b̄ with different f on D8, where λ = 0.01 and η = 0.001.

    Fig.3. The training process of SL-LF_b with different f on D8, where λ = 0.01 and η = 0.001.

    2) Impacts of λ and η

    In this set of experiments, we increase λ from 0.01 to 0.1 and η from 0.0001 to 0.01 by performing a grid-based search [36]. Figs. 4 and 5 show the results on D8. The complete results on all the datasets are presented in Supplementary File 1. From them, we conclude that:
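A grid-based search of this kind is a double loop over candidate (λ, η) pairs; `train_eval` below is a hypothetical callback standing in for one full train/test run, and the grid points are illustrative:

```python
import itertools

# Grid spanning the ranges in the text: λ in [0.01, 0.1], η in [0.0001, 0.01].
lambdas = [0.01, 0.02, 0.05, 0.1]   # illustrative grid points
etas = [0.0001, 0.001, 0.01]

def grid_search(train_eval):
    """train_eval(lam, eta) -> RMSE is a hypothetical callback that trains
    the model with the given hyper-parameters and returns its test RMSE;
    the pair minimizing RMSE wins."""
    return min(itertools.product(lambdas, etas),
               key=lambda p: train_eval(*p))

# Toy stand-in objective whose minimum sits at (0.05, 0.001).
best = grid_search(lambda lam, eta: (lam - 0.05) ** 2 + (eta - 0.001) ** 2)
print(best)   # (0.05, 0.001)
```

Each callback invocation is a full training run, so the search costs (grid size) × (training cost); coarse grids followed by refinement keep this manageable.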

    Fig.4. The experimental results of SL-LF_b̄ with respect to λ and η on D8, where f = 20.

    Fig.5. The experimental results of SL-LF_b with respect to λ and η on D8, where f = 20.

    TABLE II DESCRIPTIONS OF ALL THE COMPARISON MODELS

    a) Both λ and η have a significant impact on the prediction accuracy of both SL-LF_b̄ and SL-LF_b. As λ and η increase, MAE and RMSE decrease at first and then increase in general. For example, on D8, RMSE of SL-LF_b̄ decreases from 0.8116 to 0.7729 at the beginning. Then, it increases up to 0.8451 as λ and η continue to increase.

    b) The optimal values of λ and η behave differently across the tested datasets. On all the datasets, the optimal value of η is a small value like 0.001. However, the optimal value of λ differs from dataset to dataset, distributed in the range from 0.02 to 0.09. Hence, λ should be carefully tuned on the target dataset.

    C. Outlier Data Sensitivity Tests

    In this section, we compare an SL-LF model with a basic LF (BLF) model when outlier data are added to the datasets. BLF is modeled with the L2-norm-oriented Loss while SL-LF is modeled with the smooth L1-norm-oriented Loss. The specific method of adding outlier data is: 1) randomly selecting an unknown entry between two known entries of the input HiDS matrix Z as an outlier entry, 2) assigning a value (the maximum or minimum known value) to the outlier entry, 3) increasing the percentage that outlier entries account for of the known entries from 0% to 100% with an interval of 10%, and 4) adding the outlier entries into the training set only. To illustrate this method, an example is given in Fig.6.
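The four steps above can be sketched as follows. For simplicity, step 1 is relaxed to picking any unknown entry rather than one lying between two known entries; `shape` and the toy ratings are illustrative:

```python
import numpy as np

def add_outliers(Z_train, shape, ratio, seed=0):
    """Inject outliers into the training set only (step 4): pick unknown
    entries (simplified step 1), assign each the max or min known value
    (step 2), until ratio x |known entries| outliers exist (step 3)."""
    rng = np.random.default_rng(seed)
    lo, hi = min(Z_train.values()), max(Z_train.values())
    n_out = int(ratio * len(Z_train))
    Z_noisy = dict(Z_train)                       # known entries kept intact
    while n_out > 0:
        u, i = rng.integers(shape[0]), rng.integers(shape[1])
        if (u, i) not in Z_noisy:                 # unknown entries only
            Z_noisy[u, i] = rng.choice([lo, hi])  # max or min known value
            n_out -= 1
    return Z_noisy
```

Because the testing set is untouched, any accuracy drop measured afterwards isolates how the injected training noise perturbs each model.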

    Fig.6. An example of adding outlier data.

    Fig.7 records the experimental results on D8. Supplementary File 1 records the complete results on all the datasets. Since smooth L1-norm is less sensitive to outlier data than L2-norm, both SL-LF_b̄ and SL-LF_b are much more robust than BLF as the percentage of outlier data increases. For example, on D8, the RMSEs of SL-LF_b̄ and BLF are 0.7767 and 0.7761, respectively, when there are no outlier data, and become 0.8536 and 1.1244, respectively, when the percentage of outlier data is 100%. The RMSE increase of BLF is 0.3483, which is about 4.53 times as large as that of SL-LF_b̄ at 0.0769. Therefore, we conclude that an SL-LF model is robust to the outlier data.

    Fig.7. The outlier data sensitivity test results of BLF, SL-LF_b̄, and SL-LF_b on D8, where λ = 0.01, η = 0.001, and f = 20.

    D. Comparison Between SL-LF and State-of-the-Art Models

    We compare an SL-LF model with five related state-of-the-art models, including three LF-based models (basic latent factor (BLF), non-negative latent factor (NLF), and fast non-negative latent factor (FNLF)) and two deep neural network (DNN)-based models (AutoRec and deep collaborative conjunctive recommender (DCCR)). Table II gives a brief introduction to these models. To make a fair comparison, f is set to 20 for all the LF-based models and the proposed SL-LF model. Besides, we tune the other hyper-parameters of all the involved models so that each achieves its highest prediction accuracy.

    1) Comparison of Prediction Accuracy

    Table III presents the detailed comparison results, on which statistical analysis is conducted. First, the win/loss counts of SL-LF_b̄/SL-LF_b versus the other models are summarized in the third/second-to-last row of Table III. Second, we perform the Friedman test [40] on these comparison results. The result is recorded in the last row of Table III, where it accepts the hypothesis that these comparison models have significant differences at a significance level of 0.05. From these comparisons and statistical results, we find that a) both SL-LF_b̄ and SL-LF_b achieve lower RMSE/MAE than the other models on most testing cases, and b) SL-LF_b achieves the lowest F-rank value among all the models. Hence, we conclude that SL-LF_b has the highest prediction accuracy among all the models.

    TABLE III THE COMPARISON RESULTS ON PREDICTION ACCURACY, INCLUDING WIN/LOSS COUNT STATISTICS AND THE FRIEDMAN TEST, WHERE ● AND ○ RESPECTIVELY INDICATE THAT SL-LF_b̄ AND SL-LF_b HAVE HIGHER PREDICTION ACCURACY THAN THE COMPARISON MODELS

    Next, we check whether SL-LF_b achieves significantly higher prediction accuracy than each single model. To do so, we conduct the Wilcoxon signed-ranks test [41], [42] on the comparison results of Table III. The Wilcoxon signed-ranks test is a nonparametric pairwise comparison procedure with three indicators – R+, R−, and p-value. A larger R+ value indicates higher performance, and the p-value indicates the significance level. Table IV records the test results, where we see that SL-LF_b has significantly higher prediction accuracy than all the comparison models at a significance level of 0.05, except for SL-LF_b̄. However, SL-LF_b achieves a much larger R+ value than SL-LF_b̄, which verifies that linear biases can boost an SL-LF model's prediction accuracy.
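With per-dataset error values in hand, the Wilcoxon signed-ranks test is a single SciPy call; the RMSE numbers below are made up for illustration and are not the paper's results:

```python
from scipy import stats

# Hypothetical per-dataset RMSEs (D1-D8) for SL-LF_b and one comparison model.
rmse_sl_lf_b = [0.772, 0.691, 0.801, 0.745, 0.688, 0.702, 0.760, 0.773]
rmse_baseline = [0.781, 0.699, 0.823, 0.752, 0.701, 0.716, 0.771, 0.776]

# Wilcoxon signed-ranks test: nonparametric, pairwise, based on the ranks
# of the per-dataset differences. alternative="greater" asks whether the
# baseline's errors are systematically larger, i.e., SL-LF_b is better.
res = stats.wilcoxon(rmse_baseline, rmse_sl_lf_b, alternative="greater")
print(res.statistic, res.pvalue)   # p < 0.05 here: the gap is significant
```

Because the test ranks differences instead of averaging them, one dataset with an unusually large gap cannot dominate the conclusion, which suits cross-dataset comparisons like Table IV.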

    2) Comparison of Computational Efficiency

    To compare the computational efficiency of all the tested models, we measure their CPU running times on all the datasets. Fig.8 presents the results. From it, we observe that:

    a) DNN-based models (AutoRec and DCCR) cost much more CPU running time than the other models due to their time-consuming DNN-based learning strategy [43].

    b) SL-LF costs slightly more CPU running time than BLF. The reason is that SL-LF performs the additional discrimination procedure in (11) while BLF does not.

    c) SL-LF costs less CPU running time than NLF and FNLF on some datasets and more on others.

    Therefore, these results verify that SL-LF’s computational efficiency is higher than those of DNN-based models and comparable to those of other LF-based models.

    TABLE IV STATISTICAL RESULTS ON TABLE III BY CONDUCTING THE WILCOXON SIGNED-RANKS TEST

    Fig.8. The comparison CPU running time of involved models on D1–D8.

    E. Summary of Experiments

    Based on the above experimental results and analyses, we have the following conclusions:

    1) An SL-LF model’s prediction accuracy is closely connected with λ and η. As a rule of thumb, we can set η=0.001 while λ should be fine-tuned according to a specific target dataset.

    2) SL-LF has significantly higher prediction accuracy than state-of-the-art models for the missing data of an HiDS matrix.

    3) SL-LF’s computational efficiency is much higher than those of DNN-based models and comparable to those of the most efficient LF-based models.

    4) Linear biases have positive effects on improving SL-LF’s prediction accuracy.

    V. RELATED WORK

    An LF model is one of the most popular and successful ways to efficiently predict the missing data of an HiDS matrix [13], [14]. Up to now, various approaches have been proposed to implement an LF model, including a bias-based one [14], non-negativity-constrained one [15], randomized one [17], probabilistic one [44], dual-regularization-based one [45], posterior-neighborhood-regularized one [16], graph-regularized one [18], neighborhood-and-location integrated one [6], data characteristic-aware one [35], confidence-driven one [46], deep-latent-factor-based one [47], and nonparametric one [48]. Although they differ from one another in model design or learning algorithms, they all adopt an L2-norm-oriented Loss, making them sensitive to outlier data [20]. Since outlier data are frequently found in an HiDS matrix [24], [25], their robustness cannot be guaranteed.

    To make an LF model less sensitive to outlier data, Zhu et al. [19] proposed to adopt an L1-norm-oriented Loss. However, such an LF model is multimodal because L1-norm is less smooth than L2-norm, as shown in Fig.1(a). Hence, an LF model with an L1-norm-oriented Loss tends to get stuck in some "bad" solutions, resulting in its failure to achieve high prediction accuracy. Different from these approaches, the proposed SL-LF model adopts smooth L1-norm-oriented Loss, making its solution space smoother and less multimodal than that of an LF model with L1-norm-oriented Loss. Meanwhile, its robustness is also higher than that of an LF model with L2-norm-oriented Loss.

    Recently, DNN-based approaches to represent an HiDS matrix have attracted extensive attention [49]. According to a recent review of DNN-based studies [34], various models have been proposed to address the task of missing data prediction for an HiDS matrix. Representative models include an autoencoder-based model [37], hybrid autoencoder-based model [39], multitask learning framework [50], neural factorization machine [51], attentional factorization machine [52], deep cooperative neural network [53], and convolutional matrix factorization model [54]. However, DNN-based models suffer from high computational cost caused by their learning strategies. For example, they take the complete data rather than only the known data of an HiDS matrix as input, yet an HiDS matrix generated by RSs commonly has a very low rating density. In comparison, SL-LF trains only on the known data of an HiDS matrix, thereby achieving high computational efficiency.

    As analyzed in [13], [16], an LF model can not only predict the missing data of an HiDS matrix but also be used as a data representation approach. Hence, SL-LF has some potential applications in representation learning, such as community detection, autonomous vehicles [5], and medical image analysis [55]–[57]. Besides, some researchers incorporate non-negative constraints into an LF model to improve its performance [17]. Similarly, we plan to improve SL-LF by considering non-negative constraints [58] in the future.

    VI. CONCLUSIONS

    This study proposes, for the first time, a smooth L1-norm-oriented latent factor (SL-LF) model to robustly and accurately predict the missing data of a high-dimensional and sparse (HiDS) matrix. Its main idea is to employ smooth L1-norm rather than L2-norm to form its Loss (the error between observed data and predicted ones), making it achieve highly robust and accurate prediction of the missing data in such a matrix. Extensive experiments on eight HiDS matrices from industrial applications are conducted to evaluate the proposed model. The experimental results verify that 1) it is robust to the outlier data, 2) it significantly outperforms state-of-the-art models in terms of prediction accuracy for the missing data of an HiDS matrix, and 3) its computational efficiency is much higher than those of DNN-based models and comparable to those of the most efficient LF models. Although it has shown promising prospects, how to make its hyper-parameter λ self-adaptive and improve its performance by considering non-negative constraints remains open. We plan to fully investigate these issues in the future.
