
    Predictive Model of Live Shopping Interest Degree Based on Eye Movement Characteristics and Deep Factorization Machine


    SHI Xiujin(石秀金), LI Hao(李 昊), SHI Hang(史 航), WANG Shaoyu(王紹宇), SUN Guohao (孫國豪)

    School of Computer Science and Technology, Donghua University, Shanghai 201620, China

Abstract: In the live broadcast process, eye movement characteristics can reflect people's attention to the product. However, existing research on interest degree prediction models does not consider eye movement characteristics. In order to capture users' interest in the product more effectively, we consider key eye movement indicators. We first collect eye movement characteristics with the self-developed data processing algorithm, fast discriminative model prediction for tracking (FDIMP), and then add data dimensions to the original data set through information filling. In addition, we apply the deep factorization machine (DeepFM) architecture to learn combinations of low-level and high-level features simultaneously. To learn important features effectively and emphasize the relatively important ones, a multi-head attention mechanism is applied in the interest model. Experimental results on the public data set Criteo show that, compared with the original DeepFM algorithm, the area under curve (AUC) value is improved by up to 9.32%.

Key words: eye movement; interest degree prediction; deep factorization machine (DeepFM); multi-head attention mechanism

    Introduction

Online live shopping has become a popular way for people to obtain information. Capturing users' interest during the live broadcast process can not only improve the merchant's live broadcast strategy and increase users' satisfaction with watching the broadcast, but also help designers develop more humanized live broadcast interaction methods to enhance user experience. Therefore, it is of great practical significance to capture users' interest when they watch live shopping broadcasts.

An eye movement characteristic is a data feature of the subject's eyes while watching the live broadcast. Traditionally, eye tracking is an analysis tool used in disciplines such as medicine, psychology, and marketing[1-3]. In visual evaluation, methods that combine eye tracking with data processing can obtain fine-grained information about individual cognition, and have achieved satisfactory results in a variety of scene detection tasks.

Current assessments of interest in live streaming are mostly based on "black box" research, which relies on viewers' self-reports to reflect their degree of interest. However, the interest obtained this way not only involves the viewer's subjective factors, but is also affected by many objective factors such as the environment and mood, which makes it difficult to truly reflect the viewer's interest in live shopping. With the development of neural networks, click-through rate (CTR) estimation is increasingly used in interest degree prediction models. However, CTR ignores much objective information, such as the level of detail of products in live shopping and important dynamic parameters. Figure 1 gives a simple example of the attributes of each dimension entity in the live broadcast process. Many data dimensions can be extracted from a live video, such as eye movement data, traditional interest model dimensions and other dimensions, where the eye movement dimensions are extracted by a video processing algorithm. It is necessary to consider the various factors shown in Fig. 1 in the live shopping interest model.

    Fig. 1 Example diagram of each entity attribute

    In this paper, we take the eye movement factor into account in the proposed model. At the same time, the base model also has room for improvement. We make innovations from these two aspects.

    1 Related Work

    1.1 Application of eye tracking technology

In recent years, eye tracking technology has been used more and more widely in visual evaluation research. Baazeem et al.[4] used eye movement data for machine learning to detect developmental dyslexia, applying random forest to select the most important eye movement features as input to a support vector machine classifier; this hybrid approach can reliably identify fluent readers. Bitkina et al.[5] used eye movement indicators to classify and predict driving perceptual workload, studied the ability of eye movement indicators to predict driving load, and concluded that some factors were correlated with gaze indicators. Rello et al.[6] studied the extent to which eye tracking improved the readability of Arabic texts, and used different regression algorithms to build several readability prediction models.

Eye tracking technology is used in many fields to complete recommendation or classification tasks. In recommendation tasks, the improvement of the area under curve (AUC) index is mostly between 2% and 10%, and specific conclusions or models have been obtained for the respective research problems. However, most of these models are based on machine learning methods, and the number of samples used is small, from tens to hundreds, which introduces accidental factors into the experiments; the learning ability of these models can be further improved.

    1.2 Interest prediction model

Existing interest degree prediction models are mainly divided into two categories: CTR prediction models based on machine learning and those based on deep learning. Interest prediction models based on machine learning are further divided into single-model and combined-model prediction. Among single models, logistic regression and decision trees are the more common; among model combinations, gradient boosting decision tree (GBDT) + logistic regression (LR) and field-weighted factorization machines (FwFM)[7-10] are the more common. However, interest degree prediction models based on machine learning rely heavily on manual feature processing, and a lot of manual feature engineering is required before applying the model. Interest degree prediction models based on deep learning have shown good results by exploring high-level combinations of features in the interest prediction field; among them, wide&deep and fast growing cascade neural network (FGCNN)[11-13] are the more common models.

In the research related to interest prediction in live broadcast, eye movement data has not been used as a data dimension in the model.

    2 Eye Tracking Data Obtaining Algorithm

Obtaining eye movement data is an automated process, but the eye tracker's supporting software does not compute the parameters describing the subject's attention to a single area. This paper therefore proposes the fast discriminative model prediction for tracking (FDIMP) algorithm for the live video processing task, which improves the tracking model's ability to discriminate targets from backgrounds and reduces the number of iterations. It provides an automated function to output the required data from the video. After the dimension-filling operation, the obtained dimensions are the CTR dimension and the eye movement dimension. The data set contains the data supplemented by the subjects and the characteristics of the large data set, and is used by the subsequent interest degree prediction model.

    2.1 Obtaining live video

All subjects in this study have normal or corrected vision and no eye problems such as color blindness. The subjects include both frequent and infrequent online shoppers across different occupations. We use the Noldus eye tracking glasses (ETG) eye tracker to collect users' eye movement data. The viewing distance is set as 60 cm, and the device is calibrated before the experiment. If there is obvious head movement, or eye movement drift is detected on the screen used by the researcher, the calibration is repeated[14]. The algorithm can generate related data such as the live video with the corresponding viewpoint trajectory. In addition to the video, the eye tracker can also produce a variety of visualizations, such as a heat map directly related to the number of gaze points and a path diagram indicating the transition direction of gaze. These images supplement the eye movement data and can intuitively show the subjects' viewing characteristics. In the experiment, the subject wears the eye tracker while watching a one-minute live shopping video. During this process, the data processing model captures the user's relative gaze time and pupil concentration for different plate areas. At the end of the experiment, we record the subject's degree of satisfaction with the items introduced in the video.

    2.2 Live shopping video processing model

The FDIMP processing model processes the subject's eye movement video into the user's relative gaze time and pupil concentration for different plate areas. As an end-to-end tracking architecture, it can make full use of target and background appearance information to predict the target model[15]. The process is shown in Fig. 2. We train on random samples from the video sequence: for a given frame, three frames at and before it are extracted as the training set and three frames after it as the test set, and the features of the extracted target area are pooled to obtain the initialized feature image. Model initialization generates a three-dimensional (4×4×n) feature filter from the features in the target area. The initialized filter is then optimized together with the background information of the target area, and the optimized filter is obtained iteratively.

    Fig. 2 Target tracking process
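As a minimal sketch of the sampling scheme just described (the helper name, frame count, and exact indexing are illustrative assumptions, not the paper's implementation):

```python
import random

def sample_fdimp_frames(num_frames, context=3):
    """Sample training/test frames around a random reference frame:
    the reference frame and the two before it form the training set,
    the three frames after it form the test set."""
    # Choose a reference frame with enough room on both sides.
    ref = random.randint(context - 1, num_frames - context - 1)
    train_ids = list(range(ref - context + 1, ref + 1))  # ref and 2 preceding frames
    test_ids = list(range(ref + 1, ref + context + 1))   # 3 following frames
    return train_ids, test_ids

train_ids, test_ids = sample_fdimp_frames(num_frames=300)
```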

In the loss function setting, $s$ is the target score map predicted on a training image, and $r$ is the residual function between the predicted scores and the target position. The common form is

$$r(s, q) = s - y_q, \tag{1}$$

where $y_q$ is the desired target score at each location, typically set to a Gaussian function centered at $q$. It is worth noting that this simple form is directly optimized with the mean-square error (MSE). Since there are many negative examples, all labeled 0, the model must be sufficiently complex; otherwise, optimizing on the negative examples biases the model towards learning negative examples instead of distinguishing negative from positive ones. To solve this problem, we add weights to the loss and, following the hinge loss[16] used in support vector machines (SVMs), filter out the large number of negative examples in the score map. For the positive sample area, the MSE loss is used, so the final residual function is

$$r(s, c) = v_c \cdot [m_c s + (1 - m_c)\max(0, s) - y_c], \tag{2}$$

where the subscript $c$ represents the degree of dependence on the center point; $v_c$ is the weight; $m_c \in [0, 1]$ is the mask, with $m_c \approx 0$ in the background area and $m_c \approx 1$ in the area of the object. In this way, the hinge loss is used in the background area and the MSE loss in the object area. In the design of this paper, the regression target $y_c$ can be learned.
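A minimal NumPy sketch of the residual in Eq. (2), with toy values and variable names mirroring the symbols above, illustrates how the mask switches between hinge-style and MSE-style behavior:

```python
import numpy as np

def residual(s, y_c, m_c, v_c):
    """Residual of Eq. (2): hinge loss in the background (m_c ~ 0),
    MSE-style residual in the object region (m_c ~ 1)."""
    return v_c * (m_c * s + (1.0 - m_c) * np.maximum(0.0, s) - y_c)

# Toy score map: negative background scores are clipped at 0 (hinge behavior),
# object-region scores are regressed towards the Gaussian label y_c.
s = np.array([-0.8, -0.2, 0.9])   # predicted target scores
y_c = np.array([0.0, 0.0, 1.0])   # Gaussian label centered on the target
m_c = np.array([0.0, 0.0, 1.0])   # mask: 0 = background, 1 = object
v_c = np.array([0.5, 0.5, 1.0])   # spatial weights
loss = np.mean(residual(s, y_c, m_c, v_c) ** 2)
```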

Obtaining eye movement parameters in different regions requires frequently switching to unlearned tracking objects. Compared with offline pre-training and similarity measurement models, FDIMP, with its online learning and iterative update strategy, can be used in live shopping broadcasts and performs better when tracking unclear objects.

    2.3 Data used in the interest model

When the packaged data processing algorithm is used to track live broadcast items, tracking frames must be established for the user's viewpoint and for each target area. When a target area covers the user's viewpoint, they are judged to coincide; that is, the user's viewpoint is attending to that area during the corresponding time. Data for regions such as the demonstrated sales items, the anchor, the background, the comment area, and the event coupon area are collected as described above.
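The coincidence judgment can be sketched as follows, assuming per-frame viewpoint coordinates and per-frame tracking boxes (the function name and data layout are illustrative, not the packaged algorithm's actual interface):

```python
def attention_time_rate(gaze_points, region_boxes):
    """Fraction of frames in which the user's viewpoint falls inside
    the tracked region box, i.e., the relative gaze time for that area."""
    covered = 0
    for (x, y), (x1, y1, x2, y2) in zip(gaze_points, region_boxes):
        if x1 <= x <= x2 and y1 <= y <= y2:  # region covers the viewpoint
            covered += 1
    return covered / len(gaze_points)
```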

In addition to eye movement data, it is also necessary to collect users' explicit feedback data, such as age, gender, and other customized information. We thus obtain users' basic information and eye movement information (average blink time, number of blinks, attention time rate of sold items, attention time rate of the anchor area, attention time rate of the discount area, and the number of attention points), which, after data filling, are used for the subsequent model training.

    3 Interest Recommendation Model Based on Eye Movement Characteristic Data

The DeepFM algorithm, an improvement on wide&deep, is selected as the base algorithm in this paper. It needs neither a pre-trained factorization machine (FM) to obtain hidden vectors nor manual feature engineering, and it can learn low-order and high-order combined features at the same time. The FM module and the deep module share the feature embedding part, which enables faster and more accurate training. It is therefore well suited to complex scenes such as interest prediction. Based on the DeepFM architecture, our model embeds and encodes the eye movement data after introducing a collaborative information graph, and adds a self-attention mechanism to the deep neural network (DNN) to improve the model's ability to learn key information.

    3.1 Embedded coding layer design

The original input features in interest degree prediction have various data types[17], and some dimensions are even incomplete. To normalize the mapping of different types of feature components and reduce the dimensionality of the input feature vector, one-hot vector mapping is first performed on the input features; the extremely sparse one-hot encoded input layer is then cascaded with an embedding layer. Like field-aware factorization machines (FFM), DeepFM[18] groups features with the same characteristics into a field, and its formula is

$$x = f(S, M), \tag{3}$$

where $x$ is the corresponding vector after embedding coding, $S$ is the one-hot coded sparse eigenvector, and $M$ is the parameter matrix whose elements are the weight parameters of the connecting lines in Fig. 3. These parameters are learned iteratively during the training of the CTR prediction model.

    As shown in Fig. 3, the embedded layer coding maps the one-hot code sparse vectors of different fields to low-dimensional vectors, which can compress the original data information and greatly reduce the input dimension.

    Fig. 3 Original input sparse feature vector to dense vector to embedding mapping
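Because each field's one-hot vector has exactly one active index, the product in Eq. (3) reduces to a row lookup in the parameter matrix. A minimal sketch (the field sizes and indices are made up; the embedding size 8 matches the setting in Section 4.3.1):

```python
import numpy as np

rng = np.random.default_rng(0)

field_sizes = [3, 5]   # vocabulary size of each field (illustrative)
k = 8                  # embedding dimension
# One parameter matrix M per field; its rows are the embedding vectors.
M = [rng.normal(size=(n, k)) for n in field_sizes]

# S @ M with a one-hot S is just indexing one row of M per field.
feature_ids = [2, 0]   # active one-hot index in each field
x = np.concatenate([M[f][i] for f, i in enumerate(feature_ids)])  # dense input
```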

As shown in Fig. 4, taking the particularity of the newly added data dimensions into account, user behavior and item knowledge are encoded into a unified relationship graph through the collaboration information graph. To build the graph, we first define a user-item bipartite graph $\{(e_u, y_{ui}, e_i) \mid e_u \in U, e_i \in I\}$, where $e_u$ is a user entity, $e_i$ is an item entity, $y_{ui}$ represents the link between user $u$ and item $i$, and $U$ and $I$ are the user and item sets, respectively. When there is an interaction between the two entities, $y_{ui}$ is set to 1. The collaboration information graph incorporates the new data dimensions, where each user behavior can be represented as a triple $(e_u, R, e_i)$; $R = 1$ indicates that there is an additional interaction between $e_u$ and $e_i$. In this way, the user information graph can be integrated with the newly added dimensions into a unified graph.

    Fig. 4 Collaboration information graph structure
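A minimal sketch of building the unified graph from triples. The text only fixes $R = 1$ for the additional interactions; encoding the base bipartite edges with relation 0 is an assumption made here for illustration:

```python
def build_collaboration_graph(interactions, extra_links):
    """Merge the user-item bipartite graph (y_ui = 1) and the newly
    added dimension into one set of triples (e_u, R, e_i)."""
    triples = set()
    for e_u, e_i in interactions:   # observed user-item interactions
        triples.add((e_u, 0, e_i))
    for e_u, e_i in extra_links:    # additional interactions, R = 1
        triples.add((e_u, 1, e_i))
    return triples

graph = build_collaboration_graph([("u1", "i1")], [("u1", "i2")])
```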

As shown in Fig. 5, the multi-modal information encoder takes the newly added dimension entities and the original information entities as input, and uses an entity encoder and an attention layer to learn a new representation for each entity. The new entity representation retains the entity's own information while aggregating information from neighboring entities. We use the new entity representations as embeddings in the interest prediction model.

    Fig. 5 Multi-modal information encoder

    3.2 Factorization machine

In CTR prediction, the input features are extremely sparse and correlated, so the factorization machine model aims to fully consider both the first-order features and the second-order combination features when predicting the user's CTR[19]. The regression prediction model of the factorization machine is

$$y_{\mathrm{FM}} = \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n}\sum_{j=i+1}^{n} w_{ij} x_i x_j, \tag{4}$$

where $y_{\mathrm{FM}}$ is the predicted output, $n$ is the dimension of the input feature vector, $x_i$ is the $i$th dimension of the feature vector, $w_i$ is the weight parameter of the first-order feature, and $w_{ij}$ is the weight parameter of the second-order combination feature. In the second term, the products over all feature pairs are accumulated. The second-order combination features require many parameters to learn, $n(n-1)/2$ in total, and because of the data sparsity in practical applications the model is difficult to train. Therefore, we decompose the matrix $w_{ij}$ into $V^{\mathrm{T}}V$, where the matrix $V$ is

$$V = [v_1, v_2, \ldots, v_i, \ldots, v_n], \tag{5}$$

where $v_i$ is the $k$-dimensional hidden vector associated with $x_i$.
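A minimal NumPy sketch of Eqs. (4) and (5), using the standard identity that evaluates the factorized pairwise term in $O(kn)$ (toy sizes; the $w_0$-free form matches the reconstruction above):

```python
import numpy as np

def fm_predict(x, w, V):
    """FM output of Eq. (4) with w_ij factorized via V (Eq. (5)):
    sum_{i<j} (v_i . v_j) x_i x_j
        = 0.5 * sum_f [ (sum_i V[f,i] x_i)^2 - sum_i V[f,i]^2 x_i^2 ]."""
    linear = w @ x                 # first-order term
    vx = V @ x                     # per-factor sums, shape (k,)
    v2x2 = (V ** 2) @ (x ** 2)     # per-factor squared sums, shape (k,)
    pairwise = 0.5 * np.sum(vx ** 2 - v2x2)
    return linear + pairwise

n, k = 6, 4
rng = np.random.default_rng(1)
y_fm = fm_predict(rng.normal(size=n), rng.normal(size=n), rng.normal(size=(k, n)))
```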

We encode different types of input data (images, texts, labels, etc.) into high-order hidden vectors, and then combine the multi-dimensional data using the multi-modal graph attention mechanism module (multi-modal knowledge-graph attention layer).

    3.3 DNN architecture

The DeepFM prediction model introduces a DNN[20] that cascades the embedded and encoded feature vectors through fully connected layers to establish a regression or classification model. The output of each neuron is a nonlinear mapping of the linearly weighted outputs of the neurons in the previous layer. That is, for the neurons in layer $l+1$, the corresponding output value is

$$a^{(l+1)} = \varphi\left(W^{(l)} a^{(l)} + b^{(l)}\right), \tag{6}$$

where $W^{(l)}$ is the weight matrix of layer $l$, $a^{(l)}$ is the output vector of the neurons in layer $l$, and $b^{(l)}$ is the bias vector connecting layer $l$ and layer $l+1$. For the nonlinear mapping function $\varphi$, the Sigmoid and ReLU functions below are commonly used. The corresponding expressions are

$$\varphi(d) = 1/[1 + \exp(-d)], \tag{7}$$

$$\varphi(d) = \max(0, d), \tag{8}$$

where $d$ represents the input from the previous layer.
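A minimal sketch of Eqs. (6)-(8): stacked fully connected layers with ReLU inside and a sigmoid output (the layer sizes are illustrative):

```python
import numpy as np

def relu(d):
    return np.maximum(0.0, d)        # Eq. (8)

def sigmoid(d):
    return 1.0 / (1.0 + np.exp(-d))  # Eq. (7)

def dnn_forward(a, weights, biases):
    """Apply a^(l+1) = phi(W^(l) a^(l) + b^(l)) layer by layer (Eq. (6))."""
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)                       # hidden layers
    return sigmoid(weights[-1] @ a + biases[-1])  # output layer

rng = np.random.default_rng(2)
sizes = [16, 32, 32, 1]  # input, two hidden layers, output
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [rng.normal(size=m) for m in sizes[1:]]
p_click = dnn_forward(rng.normal(size=16), weights, biases)
```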

    3.4 Self-attention mechanism

The self-attention mechanism was first proposed in the field of image processing[21] and later used in various fields[22-26]. Its purpose is to focus on certain feature information during model training. A conventional attention mechanism uses the state of the last hidden layer of the neural network, or aligns the hidden state output at a given moment with the hidden state of the current input. Self-attention instead weights the current input directly, as a special case of the attention mechanism: it uses the sequence itself as the key and value vectors, and the output vector can be aggregated from the previous hidden outputs of the neural network.

A single attention network is not enough to capture multiple aspects, whereas a multi-head attention network allows the model to focus on information from different positions and different representation spaces, and can simulate user preferences from multiple views of interest. Therefore, we adopt a multi-head attention module after the hidden layer, as shown in Fig. 6 and sketched after the figure. The data dimensions are processed and fed into the input layer.

    Fig. 6 DeepFM model of multi-head attention mechanism
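A minimal NumPy sketch of multi-head self-attention over the matrix of field embeddings (the projection matrices and head count are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def multi_head_self_attention(X, Wq, Wk, Wv, num_heads):
    """Scaled dot-product self-attention over field embeddings X
    (shape: fields x dim), split into num_heads heads and re-concatenated,
    so each head attends in its own representation subspace."""
    d = X.shape[1]
    dh = d // num_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(num_heads):
        q, k, v = (M[:, h * dh:(h + 1) * dh] for M in (Q, K, V))
        scores = q @ k.T / np.sqrt(dh)
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)  # row-wise softmax
        heads.append(w @ v)
    return np.concatenate(heads, axis=1)

rng = np.random.default_rng(3)
X = rng.normal(size=(10, 16))  # 10 fields, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
out = multi_head_self_attention(X, Wq, Wk, Wv, num_heads=4)  # shape (10, 16)
```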

    4 Experiment and Analysis

    4.1 Experiment preparation

The eye movement data set was collected manually from related industries and obtained from cooperating organizations, with a total of 673 records. It includes videos of subjects' gaze areas ranging from 30 s to 3 min, annotated pictures of each area of interest, personal information, operation history, and other related information. The eye tracking data are filled into and added to a public data set: to verify the performance of the proposed prediction model, the public data set Criteo is selected for evaluation, and its records are filled with eye movement data. The data set contains more than 450 million user click events and 7-dimensional eye movement parameters. The data types include numeric and hash values; the click events have 13 numerical and 26 categorical dimensions, and the proportions of positive and negative samples are 22.912 0% and 77.087 5%, respectively. The data set is divided into a training set and a test set at a ratio of 8∶2.

    4.2 Interest model performance evaluation index

The interest model is evaluated with the binary cross-entropy loss function (Logloss) and the AUC. Logloss is defined as

$$\mathrm{Logloss} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\right], \tag{9}$$

where $N$ is the number of samples, $y_i$ is the true label, and $\hat{y}_i$ is the predicted probability.

AUC is defined as the area enclosed by the coordinate axis under the receiver operating characteristic (ROC) curve:

$$A = \int_{0}^{1} \mathrm{tpr}\ \mathrm{d}(\mathrm{fpr}), \tag{10}$$

where $A$ represents the AUC, $\mathrm{fpr}$ is the false positive rate, and $\mathrm{tpr}$ is the true positive rate. Different classification thresholds give the true positive rate under different false positive rates, forming the ROC curve.
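Both metrics are available off the shelf; a minimal sketch with toy labels and predictions (scikit-learn assumed):

```python
from sklearn.metrics import log_loss, roc_auc_score

y_true = [0, 0, 1, 1, 0]            # click labels
y_pred = [0.1, 0.4, 0.8, 0.6, 0.3]  # predicted interest probabilities

print("Logloss:", log_loss(y_true, y_pred))   # Eq. (9)
print("AUC:", roc_auc_score(y_true, y_pred))  # Eq. (10)
```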

    4.3 Experimental results

4.3.1 Experimental setup

To reduce the impact of sample order on the final trained model, we first randomize the samples and divide the labeled data set $D$ into two parts, the training set $D_{\mathrm{train}}$ and the test set $D_{\mathrm{test}}$. In the experiment, the batch size is set to 512, the learning rate to 0.001, the scale of the embedding coding layer to 8, and the maximum supported dimension for one-hot coding mapping to 450. The effects of the random inactivation (dropout) probability $p$ and of the number of fully connected layers of the adaptive residual DNN are studied, the optimal parameters are selected, and the improvements designed in this paper are then compared with the DeepFM model.
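A minimal sketch of this setup: the stated hyperparameters plus a randomized 8∶2 split (the split helper is hypothetical):

```python
import numpy as np

# Hyperparameters from Section 4.3.1.
BATCH_SIZE = 512
LEARNING_RATE = 0.001
EMBEDDING_DIM = 8
MAX_ONE_HOT_DIM = 450

def shuffle_and_split(D, train_ratio=0.8, seed=42):
    """Randomize sample order, then split D into D_train / D_test (8:2)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(D))
    cut = int(train_ratio * len(D))
    return D[idx[:cut]], D[idx[cut:]]

D = np.arange(1000)  # placeholder for the labeled data set
D_train, D_test = shuffle_and_split(D)
```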

4.3.2 Hyperparameter impact research

Figure 7 shows the influence of the random inactivation probability $p$ and the number of fully connected layers of the adaptive residual DNN on the AUC. As the probability of random inactivation increases, the AUC on the test set gradually improves, but once the probability exceeds 0.6, the AUC on the test set begins to decrease. This is because when too many neurons are inactive, the number of effective neurons is insufficient to learn and represent the feature information of the interest model. It can also be seen from Fig. 7 that, as the number of DNN fully connected layers increases, the AUC of the test set gradually increases, reaching 0.856 6. The experimental results show that the random inactivation probability and the number of DNN fully connected layers have an important impact on the generalization performance of the model.

    Fig. 7 AUC value of the test set under different random inactivation probabilities and different fully connected layers

    4.4 Model performance evaluation

According to the experimental results in Table 1, after adding the eye movement data set, the AUC values of the logistic regression (LR), xDeepFM, and DeepFM models all improved, by 2.40%, 4.46%, and 5.91%, respectively, with DeepFM improving the most; combining the data dimensions yields better results. The proposed model shows the best performance: compared with the base model DeepFM, the AUC of the improved DeepFM on the data set Criteo increases by 5.91%, and the Logloss is reduced by 0.29%.

    Table 1 Performance results of different models and improvements on data set Criteo

With the eye movement data dimension added, the AUC of the proposed model on the data set Criteo is 5.91% higher than that of the base algorithm DeepFM, the AUC of xDeepFM is 4.46% higher than without the eye movement dimension, and the AUC of LR is only 2.40% higher than without it. That is, after adding the eye movement data dimension, the xDeepFM and DeepFM models improve more, while the LR model improves less.

In the experiments of Table 2, the number of DNN fully connected layers is set to 4. Table 2 shows the performance of the current mainstream interest degree prediction models after adding the eye movement data dimension and the self-attention mechanism. Improvement 1 (adding the eye movement data dimension) and improvement 2 (adding the eye movement data dimension and the self-attention mechanism) outperform DeepFM by 8.25% and 9.32%, respectively, which shows that eye movement data can serve as an important dimension of user interest, and that the self-attention mechanism also improves the accuracy of the interest model to a certain extent.

    Table 2 AUC value of the test set under different improvements

    5 Conclusions

This paper proposes a prediction model of interest in live shopping based on eye movement features and DeepFM. In this model, we develop eye movement indicators, process eye movement videos through the data processing algorithm, and add data dimensions to the original data set through information filling. We apply the DeepFM architecture in the proposed model, and, to effectively learn important features from different heads and emphasize the relatively important ones, introduce the multi-head attention mechanism into the interest model. Experiments on the public data set Criteo show that, compared with the DeepFM algorithm, the proposed model achieves lower Logloss and better AUC after adding the data dimension and introducing the multi-head attention mechanism.
