
    Graph-enhanced neural interactive collaborative filtering


Xie Chengyan, Dong Lu

(1 School of Automation, Southeast University, Nanjing 210096, China)(2 School of Cyber Science and Engineering, Southeast University, Nanjing 211189, China)

Abstract: To improve the training efficiency and recommendation accuracy of cold-start interactive recommendation systems, a new graph structure called the item similarity graph is proposed on the basis of real data from a public dataset. The proposed graph is built from collaborative interactions, and a deep reinforcement learning-based graph-enhanced neural interactive collaborative filtering (GE-ICF) model is developed on top of it. The GE-ICF framework adopts a deep reinforcement learning framework and comprises an embedding propagation layer designed with graph neural networks. Extensive experiments are conducted to investigate the efficiency of the proposed graph structure and the superiority of the proposed GE-ICF framework. Results show that in cold-start interactive recommendation systems, the proposed item similarity graph performs well in data relationship modeling, with the training efficiency showing significant improvement. The proposed GE-ICF framework also demonstrates superiority in decision modeling, thereby increasing the recommendation accuracy remarkably.

Key words: interactive recommendation systems; cold-start; graph neural network; deep reinforcement learning

Personalized recommendation systems have become ubiquitous in the information industry, and they have been applied to classic online services. Traditional recommendation systems have been widely studied under the assumption of a stationary environment, where user preferences are assumed to be static [1-2]. However, such models fail to explore users' interests when few reliable user-item interactions are available, as in a cold-start scenario. They also fail to model the dynamics of user preferences, thus leading to poor performance. Therefore, research into interactive recommendation systems (IRSs) has flourished in recent years. IRSs consider recommendation as a sequence of interactions between systems and users. The main idea in modeling IRSs is to capture the dynamic nature of user preferences and achieve optimal recommendations over a time period T [3]. IRS research has two directions: contextual bandits and reinforcement learning (RL). Although contextual bandit algorithms have been used in different recommendation scenarios, such as collaborative filtering [4-5] and e-commerce recommendation [6], they are usually invalid for nonlinear models and demonstrate too much pessimism toward recommendations. RL is a suitable learning framework for interactive recommendation tasks as it does not suffer from such issues. In studies applying RL to IRSs, the themes include large action spaces, off-policy training [7-8], and online model frameworks [9].

The interactive recommendation problem in the current study is set in a cold-start scenario, which provides nothing about items or users other than insufficient observations of user-item ratings. A deep RL framework [10], which can be regarded as a generalized neural Q-network, is adopted to tackle the above problem. For the representation of items, an embedding lookup table X ∈ R^{N×d_e} is adopted, with each item e represented as a vector x_e ∈ R^{d_e}. The embedding lookup table is trained end to end in the framework. However, because such an embedding layer is optimized only by user-item interactions in interactive recommendation and lacks an explicit encoding of crucial collaborative signals, an item similarity graph is proposed, and an embedding propagation layer constructed by graph neural networks (GNNs) is devised in this work to refine items' embeddings by aggregating the embeddings of similar items.

Given the graph structures inherent in recommendation data, designing a proper graph and utilizing GNNs in recommendation systems are appealing.

User-item bipartite graphs are constructed in traditional recommendation methods for improved performance in rating prediction tasks [11], while sequence graphs are built in sequential recommendation methods to capture sequential knowledge [12]. Knowledge graphs [13] are utilized for additional information. A user-item bipartite graph has also been suggested in an RL framework for interactive recommendation [14]. By introducing an item similarity bipartite graph into the recommendation framework, we make interactive recommendation effective through the deep exploitation of structural item similarity information inferred from user-item interactions.

In sum, a new graph called the item similarity graph is built in this study to alleviate the computational burden while conveying structural information comparable to that of a user-item bipartite graph. Then, a graph-enhanced neural interactive collaborative filtering (GE-ICF) framework, which integrates an embedding propagation layer into an RL framework, is proposed for interactive recommendation tasks. Empirical studies on a real-world benchmark dataset are conducted, and the results show that the proposed GE-ICF framework outperforms baseline methods.

1 GE-ICF Framework

    1.1 Preliminaries

A typical recommendation system has a set of m users U = {1, 2, …, m} and n items I = {1, 2, …, n} with an observed feedback matrix Y ∈ R^{m×n}, where y_{ij} represents the feedback from user i to item j. Here, feedback can be explicit (e.g., a rating or a like/dislike choice) or implicitly inferred from watching time, clicks, reviews, etc. For cold-start users in this study, an interactive recommendation process is conducted over a time period T between the recommender agent and a certain user. At each time step t in the interaction period {0, 1, 2, …, T}, the recommendation agent calculates the item i_t to be recommended by policy π: s_t → I and suggests it to user u, where s_t represents the user state at time step t. Then, the user gives feedback y_{u,i_t} on the recommended item to the recommender agent, and this feedback guides the agent in updating the user's state and making the next-round recommendation. The goal of designing an interactive recommendation system is to design a policy π that maximizes G_π(T) as

$G_\pi(T)=\mathbb{E}\Big[\sum_{t=0}^{T} y_{u,i_t}\Big]$    (1)

where G_π(T) is the expected cumulative feedback in a time period T. Although exploiting the user state at the current time step facilitates the derivation of accurate recommendations and the maximization of the immediate user feedback y_{u,i_t}, the exploration of items for recommendation is necessary for completing user profiles and maximizing the cumulative user feedback G(T), which is regarded as the delayed reward for a recommendation. RL is a sequential decision learning framework aimed at maximizing the sum of delayed rewards from an overall perspective [10]. Therefore, RL is applied in our system to balance exploitation and exploration during interactive recommendations.

The essential underlying model of RL is a Markov decision process (MDP). An MDP occurs between an agent and the environment. In this study, the agent is the proposed recommendation system, and the environment is equivalent to the users of the system, as well as all the movies recorded in the system. The MDP is defined with five factors (S, A, P, D, γ). These factors are introduced and instantiated below for the IRS with cold-start users. Fig.1 illustrates the interactive recommendation in the MDP formulation.

    Fig.1 Interactive recommendation process in MDP formulation

State space S contains a set of states s_t. In this study, a state at time t, s_t = {i_0, y_{u,i_0}, …, i_{t-1}, y_{u,i_{t-1}}}, denotes the browsing history and corresponding feedback of a user u before time t. To reflect the change of user interests over time, the items in s_t are sorted in chronological order.

Action space A is equivalent to the item set I in a recommendation. An action at time t, a_t ∈ A, denotes the item recommended to a user by the recommender system according to the current user state s_t.

Reward D is the set of rewards the recommender system receives depending on user feedback.

Feedback y_{u,i_t} on the recommended item i_t is returned by user u, and it may be explicit or implicit depending on the system. The recommendation system receives an immediate reward r_{s_t,a_t} according to the feedback. Rewards may not be the same as feedback; that is, reward shaping technology may be used to improve algorithm performance.

Transition probability p(s_{t+1}|s_t, a_t) defines the probability of the state transition from s_t to s_{t+1} after an item is recommended as an action. An MDP is assumed to have the Markov property; that is, it satisfies p(s_{t+1}|s_t, a_t, …, s_1, a_1) = p(s_{t+1}|s_t, a_t). We set p(s_{t+1}|s_t, a_t) = 1 at every time step, which means that the state at the next time step t+1 is determined once state s_t and action a_t are fixed. In this work, the state at t+1 is updated by appending action a_t and the corresponding feedback y_{u,i_t} to state s_t; that is, the state is accumulative.

Discount factor γ ∈ [0, 1] measures the importance of future rewards in the present state value. Specifically, γ = 0 means that the recommender agent only considers the immediate reward, while γ = 1 means that all future rewards are considered as important as the immediate reward.
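To make this formulation concrete, the following minimal Python sketch runs one interaction episode under the deterministic, accumulative state transition described above. The helper names recommend and get_feedback are hypothetical stand-ins for the agent's policy and the user's response, and the binary reward rule anticipates the shaping used in the experiments (see Section 2.1).

```python
def run_episode(recommend, get_feedback, T):
    """One interaction episode with a cold-start user.
    `recommend` and `get_feedback` are hypothetical placeholders for
    the agent's policy pi(s_t) and the user's response y_{u,i_t}."""
    state = []                 # s_0: empty browsing history
    cumulative_feedback = 0.0  # G(T), the delayed-reward objective
    for t in range(T):
        item = recommend(state)        # a_t = pi(s_t)
        rating = get_feedback(item)    # y_{u,i_t}, an explicit rating here
        reward = 1.0 if rating >= 4 else 0.0  # shaping rule used in Sec. 2.1
        cumulative_feedback += reward
        # Deterministic transition p(s_{t+1} | s_t, a_t) = 1:
        # append (i_t, y_{u,i_t}) to the accumulative state.
        state = state + [(item, rating)]
    return cumulative_feedback
```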

Solving the RL task means finding an optimal policy π_θ: S → A that maximizes the expected cumulative rewards from a global view. The expected cumulative rewards can be represented by a value function Q_{π_θ}(s, a) = E_{π_θ}[Σ_{k=0}^{∞} γ^k r_{t+k} | s_t = s, a_t = a]. Note that E_{π_θ} is the expectation under policy π_θ, t is the current time step, and r_{t+k} represents the immediate reward at a future time step t+k. A variant of neural network Q(s, a; θ) (i.e., a Q-network) [15] is adopted to estimate the policy π_θ. A Q-network adopts the experience replay mechanism and a periodically updated target network to ensure the convergence of the model. A finite-size memory called a replay buffer is applied, and transition samples represented by (s_t, a_t, r_t, s_{t+1}) are stored there for sampling and model training.
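As an illustration of the replay mechanism, a minimal replay buffer might look as follows. This is a generic sketch of the standard technique from [15], not the authors' implementation; the default capacity and batch size match the experimental settings in Section 2.1.

```python
import random
from collections import deque

class ReplayBuffer:
    """Finite-size memory holding (s_t, a_t, r_t, s_{t+1}) transitions."""

    def __init__(self, capacity=int(1e6)):
        # Oldest transitions are discarded automatically once full.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size=128):
        # Uniform sampling breaks temporal correlation between samples.
        return random.sample(self.buffer, batch_size)
```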

In the recommendation procedure, the state space and action space are represented by item vectors. In practice, building item vectors by one-hot encoding is not efficient because of one-hot encoding's extremely high dimension and sparsity, especially in problems with a large action space. Instead, we train dense, low-dimensional item vectors end to end in the RL framework. GNNs are integrated into the embedding process because of their superiority in representation learning.

    1.2 Item similarity bipartite graph construction

Although a user-item interaction bipartite graph is widely used in collaborative filtering, it suffers from a huge data size and a high computational burden. Therefore, we propose to build an item similarity bipartite graph under the assumption that a user's interests do not change frequently. On the basis of this assumption, we count the frequency with which two items simultaneously exist in one user's history. If two items exist together in n users' histories, they are considered similar if n ≥ g, where g denotes the item similarity coefficient. An edge exists between two similar item nodes in the item similarity graph. We set all edges to have equal weights initially and learn the contribution of each neighbor to the central node with an attention network. A toy sample of a user-item interaction bipartite graph and an item similarity bipartite graph is illustrated in Fig.2.

Fig.2 Illustration of a user-item interaction bipartite graph and an item similarity bipartite graph. (a) User-item interaction bipartite graph; (b) Item similarity bipartite graph

Through the design of the item similarity graph, structural information among items is modeled while the graph size decreases sharply because user nodes are no longer built into it.
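As a concrete illustration, the following Python sketch builds the edge set of the item similarity graph from co-occurrence counts thresholded by the similarity coefficient g. Function and variable names are illustrative, and the default g = 10 follows the experimental setting in Section 2.1.

```python
from itertools import combinations
from collections import Counter

def build_item_similarity_graph(user_histories, g=10):
    """user_histories: list of item-id lists, one per training user.
    Returns the edge set of the item similarity graph."""
    cooccurrence = Counter()
    for history in user_histories:
        # Count each unordered item pair once per user history.
        for i, j in combinations(sorted(set(history)), 2):
            cooccurrence[(i, j)] += 1
    # Two items are similar if they co-occur in at least g users' histories.
    edges = {pair for pair, count in cooccurrence.items() if count >= g}
    return edges
```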

    1.3 Model architecture

We now present details of the proposed GE-ICF framework, the architecture of which is illustrated in Fig.3. The framework is structured with four components: 1) an embedding layer that initializes all item embeddings in the system; 2) an embedding propagation layer that refines the item embeddings by injecting structural item similarity relations; 3) a stacking self-attention block that takes item embeddings and a user's corresponding feedback as input to generate a user profile; 4) a policy layer that predicts the most preferable item for the user. The framework is trained end to end with Q-learning [15].

    Fig.3 Architecture of the GE-ICF framework

    1.3.1 Embedding layer

Given a user state s_t = {i_0, y_{u,i_0}, …, i_{t-1}, y_{u,i_{t-1}}}, we first represent the items i_t with embedding vectors. We build an embedding lookup table X ∈ R^{N×d_e} for the initialization of all N items' embeddings in the system, with d_e denoting the embedding size. The embedding lookup table is initialized randomly and optimized in an end-to-end style. In contrast to traditional collaborative filtering methods, which take these ID embeddings as the items' final embeddings, the GE-ICF framework refines them by propagating the information of similar items over the item similarity graph, thus leading to effective item representations.
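In, for example, PyTorch, such a lookup table corresponds to a randomly initialized nn.Embedding trained end to end. A minimal sketch follows, where the embedding size d_e = 50 is an illustrative assumption (N = 1682 matches the MovieLens 100K item count used later).

```python
import torch
import torch.nn as nn

N, d_e = 1682, 50                        # N: MovieLens 100K item count; d_e is assumed
embedding_table = nn.Embedding(N, d_e)   # lookup table X, randomly initialized

item_ids = torch.tensor([0, 42, 7])      # items appearing in a user state
x = embedding_table(item_ids)            # ID embeddings of shape (3, d_e)
```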

    1.3.2 Embedding propagation layer

We develop an embedding propagation layer based on the idea of graph attention networks (GATs). This layer aims to aggregate similar items' features to refine the central nodes' embedding vectors. It takes the embedding lookup table X ∈ R^{N×d_e} and the item similarity bipartite graph as input and outputs a graph-aware embedding lookup table X′ ∈ R^{N×d′_e}, thus transforming an item i's embedding vector from x_i ∈ R^{d_e} to x′_i ∈ R^{d′_e}.

A shared weight matrix W ∈ R^{d_e×d′_e} is applied in the first step to transform the input embedding vectors into higher-order features. This step allows the framework to obtain sufficient expressive power. Then, an attention mechanism attention: R^{d′_e} × R^{d′_e} → R is adopted to measure the different importance levels of neighbor nodes for central nodes in the form of attention coefficients:

$e_{ij}=\mathrm{attention}(\mathbf{W}x_i,\mathbf{W}x_j)$    (2)

where e_{ij} is an attention coefficient calculated to measure the contribution of a neighbor node j ∈ N_i to the central node i, and N_i denotes all the one-hop neighbors of node i in the graph as well as node i itself. A softmax function is then applied to all attention coefficients e_{i*} as

$\alpha_{ij}=\mathrm{softmax}_j(e_{ij})=\dfrac{\exp(e_{ij})}{\sum_{k\in N_i}\exp(e_{ik})}$    (3)

where α_{ij} is a normalized attention coefficient that makes the importance levels of all nodes in N_i comparable.

We adopt a single-layer feedforward neural network for the attention mechanism, in which the normalized attention coefficient can be expanded as

$\alpha_{ij}=\mathrm{softmax}_j(e_{ij})=\dfrac{\exp\big(\mathrm{LeakyReLU}\big(\mathbf{a}^{\mathrm{T}}[\mathbf{W}x_i \,\Vert\, \mathbf{W}x_j]\big)\big)}{\sum_{k\in N_i}\exp\big(\mathrm{LeakyReLU}\big(\mathbf{a}^{\mathrm{T}}[\mathbf{W}x_i \,\Vert\, \mathbf{W}x_k]\big)\big)}$    (4)

where a ∈ R^{2d′_e} is a parameter vector for linear transformation, ‖ is the concatenation operation, and LeakyReLU(·) is a function for nonlinearity modeling.

As the central node i is already contained in the node set N_i, the message propagation process and the message aggregation process can be regarded as being conducted simultaneously through a linear combination of the features of the related nodes followed by a nonlinear transformation of the combined embedding vector:

$x'_i=\sigma\Big(\sum_{j\in N_i}\alpha_{ij}\mathbf{W}x_j\Big)$    (5)

where x′_i is the graph-aware embedding vector of item i, and σ(·) is the nonlinear transformation.

We employ multihead attention to stabilize the learning process of self-attention. The final item vectors can be represented by the concatenation or the average of K independent attention outputs. We find that concatenation is more effective for capturing graph-aware item representations in this work:

$x'_i=\big\Vert_{k=1}^{K}\,\sigma\Big(\sum_{j\in N_i}\alpha^{k}_{ij}\mathbf{W}^{k}x_j\Big)$    (6)

where k is the serial number of each attention head.
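The following PyTorch sketch puts Eqs. (2) to (6) together in a single multihead embedding propagation layer over a dense adjacency matrix with self-loops (so that N_i contains node i itself). It is an illustrative reimplementation in the spirit of GAT, not the authors' code; the head count, dimensions, LeakyReLU slope of 0.2, and choice of ELU for σ are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingPropagationLayer(nn.Module):
    """Multihead graph attention over the item similarity graph,
    combining Eqs. (2)-(6). `adj` is a dense (N, N) 0/1 matrix that
    already includes self-loops, so N_i contains node i itself."""

    def __init__(self, d_in, d_out, n_heads=4):
        super().__init__()
        self.W = nn.Parameter(torch.empty(n_heads, d_in, d_out))  # shared W per head
        self.a = nn.Parameter(torch.empty(n_heads, 2 * d_out))    # attention vector a
        nn.init.xavier_uniform_(self.W)
        nn.init.xavier_uniform_(self.a)
        self.d_out = d_out

    def forward(self, x, adj):
        # x: (N, d_in) item embeddings from the lookup table
        h = torch.einsum('nd,hdo->hno', x, self.W)            # W x_i for every head
        # Eqs. (2)/(4): e_ij = LeakyReLU(a^T [W x_i || W x_j]),
        # decomposed as (a_left . W x_i) + (a_right . W x_j).
        src = torch.einsum('hno,ho->hn', h, self.a[:, :self.d_out])
        dst = torch.einsum('hno,ho->hn', h, self.a[:, self.d_out:])
        e = F.leaky_relu(src.unsqueeze(2) + dst.unsqueeze(1), negative_slope=0.2)
        e = e.masked_fill(adj.unsqueeze(0) == 0, float('-inf'))  # restrict to N_i
        alpha = torch.softmax(e, dim=-1)                      # Eq. (3)
        out = F.elu(torch.einsum('hij,hjo->hio', alpha, h))   # Eq. (5); ELU as sigma
        # Eq. (6): concatenate the K head outputs -> (N, K * d_out)
        return out.permute(1, 0, 2).reshape(x.size(0), -1)
```

Stacking two such layers corresponds to the best-performing depth reported in Section 2.2.3.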

    1.3.3 Stacking self-attention block

A user profile is then generated by stacking self-attention blocks over the user history and the corresponding feedback, with the user history represented by the refined item embeddings.

The numbers of items with different user feedback in a user's history are extremely imbalanced; that is, positive feedback items are much fewer than negative feedback items under the assumption that unexposed items are negative samples for users. As we use a dataset with explicit ratings in this work, the items in the user history are classified by the ratings y_{u,i_t} in the user state, and items with different ratings are processed independently by stacked self-attentive neural networks [10].

    1.3.4 Policy layer

With the generated user profile, we apply a two-layer multilayer perceptron (MLP) to extract useful information and model the corresponding action-value function Q_θ(s_t, ·) for all items under the current state:

$Q_\theta(s_t,\cdot)=\mathbf{W}^{(2)}\,\mathrm{ReLU}\big(\mathbf{W}^{(1)}u_t+b^{(1)}\big)+b^{(2)}$    (7)

where u_t is the user profile vector at timestamp t; W^{(1)}, W^{(2)} are the weight matrices of the two perceptron layers; b^{(1)}, b^{(2)} are the corresponding biases; and ReLU(·) is a function for nonlinearity modeling.

The policy π_θ(s_t) is to recommend the item i with the maximal Q-value for user u at time t:

$\pi_\theta(s_t)=\mathop{\arg\max}_{i} Q_\theta(s_t,i)$    (8)
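A minimal PyTorch sketch of the policy layer described by Eqs. (7) and (8); the hidden width is an illustrative assumption.

```python
import torch
import torch.nn as nn

class PolicyLayer(nn.Module):
    """Maps the user profile u_t to Q_theta(s_t, .) over all N items."""

    def __init__(self, d_profile, n_items, d_hidden=256):  # d_hidden is assumed
        super().__init__()
        self.fc1 = nn.Linear(d_profile, d_hidden)   # W^(1), b^(1)
        self.fc2 = nn.Linear(d_hidden, n_items)     # W^(2), b^(2)

    def forward(self, u_t):
        return self.fc2(torch.relu(self.fc1(u_t)))  # Eq. (7)

def act(policy_layer, u_t):
    # Eq. (8): greedily recommend the item with the maximal Q-value.
    return policy_layer(u_t).argmax(dim=-1)
```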

    1.4 Model training

We utilize Q-learning [15] to train the whole GE-ICF framework (see Fig.3). The adopted dataset is divided into a training set Γ_train and a test set Γ_test by users. Before the interaction, an item similarity graph is constructed with the training users' interaction data Γ_train and the item similarity coefficient g and is applied to the framework. In the t-th trial, a user state s_t = {i_0, y_{u,i_0}, …, i_{t-1}, y_{u,i_{t-1}}} is observed, and the item with the largest value calculated by the approximated value function Q_θ(s_t, ·) is chosen as the corresponding recommendation i_t. The ζ-greedy policy is used for exploration during training to enrich the learning samples. Then, the recommender agent receives the user's feedback y_{u,i_t} on i_t and maps it into the reward r_{s_t,i_t}. At the same time, the user state is updated into s_{t+1} = {i_0, y_{u,i_0}, …, i_t, y_{u,i_t}}. Therefore, a new transition sample (s_t, i_t, r_{s_t,i_t}, s_{t+1}) is generated and stored in the memory buffer for batch learning.

    We train the weights in the framework in each episode by minimizing the mean squared error:

$\mathrm{error}(\theta)=\mathbb{E}_{(s_t,i_t,r_{s_t,i_t},s_{t+1})\sim M}\big[\big(y_t-Q_\theta(s_t,i_t)\big)^2\big]$    (9)

    (9)

where

$y_t=r_{s_t,i_t}+\gamma\max_{i_{t+1}}Q_{\theta^-}(s_{t+1},i_{t+1})$    (10)

is the target value derived from the optimized Bellman equation, and the target network [15] is applied to improve system robustness. γ is the discount factor, and Q_{θ^-}(s_{t+1}, i_{t+1}) is the Q-value calculated by the target network. Efficient learning [10] is adopted in this study, with γ set to be dynamic for improved model training. M is the transition sample set stored in the memory buffer.
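A minimal PyTorch sketch of the training objective of Eqs. (9) and (10): the Q-values of the taken actions are regressed onto the bootstrapped target computed by the frozen target network. The fixed γ here is a simplification (the paper sets γ dynamically), and the tensor layouts are assumptions.

```python
import torch
import torch.nn.functional as F

def td_loss(q_net, target_net, batch, gamma=0.9):
    """Mean squared TD error of Eqs. (9)-(10) on a sampled mini-batch.
    `batch` holds tensors (states, actions, rewards, next_states)."""
    states, actions, rewards, next_states = batch
    # Q_theta(s_t, i_t) for the actions actually taken
    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # y_t = r_{s_t,i_t} + gamma * max_i Q_{theta^-}(s_{t+1}, i)
        y = rewards + gamma * target_net(next_states).max(dim=1).values
    return F.mse_loss(q, y)
```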

    2 Experiments

    We conduct extensive experiments to answer the following questions:

1) Does the application of GNNs refine the item embeddings and improve the recommendation efficiency?

2) Does the designed item similarity graph achieve results comparable to those of user-item bipartite graphs while sharply decreasing the training time?

3) How does the depth of the GNNs influence the final recommendation efficiency?

The experimental settings are reviewed first in the following subsection. Thereafter, the questions are discussed in the results and analysis section.

    2.1 Experimental setting

    2.1.1 Datasets

Experiments on recommendation systems should ideally be conducted online to determine their interactive performance. However, online experiments are not always possible as they require a platform and could possibly sacrifice user experience. Therefore, a stable benchmark dataset, MovieLens 100K, is adopted for the experiments in this work. The statistics of the dataset are summarized in Tab.1.

Tab.1 Summary statistics of the dataset

To make the experiments reasonable, we assume that each item in a user's history in the dataset reflects the user's instinctive action and is not biased by recommendations. In addition, following existing studies, the ratings from users for items not in their records are assumed to be 0.

    2.1.2 Comparison methods

To verify the efficiency of our proposed GE-ICF framework, we select five baselines from different types of recommendation methods for comparison.

1) Random: A policy that uniformly samples items to recommend to users. It is a baseline that indicates the worst-case performance, in which no algorithm is used for recommendation.

2) Popular: An algorithm that ranks items by the number of ratings and recommends items accordingly. Before personalized recommendation became popular, Popular was the most widely adopted policy because of its surprisingly good recommendation performance.

3) Thompson sampling (TS) [5]: An interactive collaborative filtering algorithm achieved by combining probabilistic matrix factorization (PMF) with Thompson sampling. Thompson sampling can be replaced with other exploration techniques, such as GLM-UCB. We choose PMF with Thompson sampling as a representative of such techniques to compare against our algorithm, whose goal is to balance exploitation and exploration in recommendations.

4) NICF [10]: A state-of-the-art algorithm that applies RL to interactive collaborative filtering. We refer to its idea for the construction of the DQN-based framework and compare our work with it to verify whether the devised GNNs are beneficial.

5) GCQN [14]: A DQN-based recommendation method that applies a user-item bipartite graph to detect the collaborative signal and uses GRU layers to generate the user profile.

6) GE-ICF: The proposed approach to interactive recommendation, in which the item similarity bipartite graph is devised.

7) GE-ICF-β: The same architecture as GE-ICF, except that a user-item bipartite graph is devised in the framework.

We compare GE-ICF and GE-ICF-β to investigate whether the proposed item similarity graph achieves performance comparable to that of the user-item bipartite graph in abstracting collaborative signals while sharply reducing the computational burden.

We adopt the cumulative precision p_T during T interactions to evaluate the accuracy of recommendations:

$p_T=\dfrac{1}{n_{\mathrm{users}}}\sum_{u=1}^{n_{\mathrm{users}}}\sum_{t=1}^{T}b_t$    (11)

where b_t is an indicator of whether the recommendation is satisfactory (b_t = 1 if y_{u,i_t} ≥ 4, and b_t = 0 otherwise), and n_users is the number of users. As we set the reward r_{s_t,i_t} under the same rule, the cumulative precision is equivalent to the cumulative reward over T interactions.
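A minimal sketch of Eq. (11), assuming the ratings collected during evaluation are arranged as an n_users × T array (the layout and names are illustrative):

```python
import numpy as np

def cumulative_precision(ratings, T=40):
    """Eq. (11): average number of satisfactory recommendations per user.
    `ratings` is an (n_users, T) array holding y_{u,i_t} for each of the
    T items recommended to every test user."""
    ratings = np.asarray(ratings)[:, :T]
    b = (ratings >= 4).astype(float)     # b_t = 1 iff y_{u,i_t} >= 4
    return b.sum() / ratings.shape[0]    # p_T
```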

The dataset is divided into three disjoint sets by users: 85% of the users and their interactions are set as the training set, 5% of the users and their interactions comprise the validation set, and the remaining 10% of the users are set as the test set. In our approach, the batch size is set to 128, and the learning rate is fixed at 0.001. The memory buffer for replaying training samples is set as large as 1×10^6 for sufficient learning, and the exploration factor ζ decays from 1 to 0 during training. The Adam optimizer is chosen. The item similarity coefficient g is set to 10. The experiments are conducted on the same machine with a 4-core 8-thread CPU (i5-8300H, 2.30 GHz), an Nvidia GeForce GTX 1050 Ti GPU, and 64 GB of RAM. We run each model separately five times under five different seeds and average the outputs for the final results.

    2.2 Results and analysis

    2.2.1 Influence of GNNs

The results of p_T for the different models on the MovieLens 100K dataset are reported in Tab.2, where T = 10, 20, 40.

Tab.2 p_T of different models on MovieLens 100K

We compare our proposed framework with the five baselines and find that for T = 10, 20, and 40, the proposed framework remarkably outperforms the baselines in terms of recommendation accuracy. This result verifies that the proposed embedding propagation layer indeed improves the model's capability of detecting collaborative signals and improves the recommendation accuracy in a cold-start scenario.

    2.2.2 Efficiency of the proposed item similarity graph

The algorithms GE-ICF and GE-ICF-β are further compared on p_T and seconds per training step (SPT) with T = 40 in Tab.3. Although the precision of GE-ICF-β is slightly higher than that of GE-ICF when T is small, the training time of GE-ICF-β is more than one and a half times that of GE-ICF. This result means that the item similarity bipartite graph achieves results comparable to those of user-item bipartite graphs while improving the training efficiency remarkably.

    Tab.3 Performance comparison between GE-ICF and GE-ICF-β on MovieLens 100K

    2.2.3 Influence of GNN depth

To investigate the influence of the GNN layers in the proposed framework, we vary the depth of the GNN layers in the range {1, 2, 3}. Tab.4 summarizes the experimental results, and the results of the framework without GNN layers are presented for reference.

Tab.4 p_T of the GE-ICF framework with different GNN depths on MovieLens 100K

The results in Fig.4 indicate that although the application of GNN layers improves the recommendation precision during the time period T, the recommendation performance worsens as the depth of the GNN layers increases. p_10 achieves the best performance when the GNN layer depth is equal to 1, and the GE-ICF framework with two GNN layers works best for the time periods T = 20 and 40. When the layer depth reaches 3, the recommendation efficiency decreases more sharply, even becoming worse than that of the framework without GNN layers. The reason might be that an excessively deep architecture introduces noise into representation learning. Moreover, stacking many GNN layers might bring about an over-smoothing issue.


    3 Conclusions

1) A GE-ICF framework is proposed in this work to enhance neural interactive collaborative filtering performance by introducing GNNs to capture collaborative signals. Extensive experiments are conducted on a benchmark dataset. The results indicate that the introduced GNNs indeed benefit the training of item embeddings and that the proposed GE-ICF framework outperforms the other methods in interactive recommendation tasks.

2) The proposed item similarity graph is of great significance because it contains as much collaborative information as a user-item bipartite graph while sharply decreasing the graph size and shortening the training time.

3) Our future work involves several possible directions. Firstly, we would like to investigate how to extend the model by incorporating rich user information (e.g., age, gender, nationality, occupation) and context information (e.g., location, dwell time, device) in a heuristic way. Secondly, we are interested in the effective utilization of RL in IRSs under the guidance of the diversity of recommendations, which is a key indicator of a model's exploration degree.
