
    Deep Knowledge Tracing Embedding Neural Network for Individualized Learning

    2020-02-01

    HUANG Yongfeng, SHI Jie (施杰)

    College of Computer Science and Technology, Donghua University, Shanghai 201620, China

    Abstract: Knowledge tracing is the key component in online individualized learning, which is capable of assessing the users’ mastery of skills and predicting the probability that the users can solve specific problems. Existing knowledge tracing models share the problem that their assessments are not directly used in their predictions. To make full use of the assessments during predictions, a novel model, named deep knowledge tracing embedding neural network (DKTENN), is proposed in this work. DKTENN is a synthesis of deep knowledge tracing (DKT) and knowledge graph embedding (KGE). DKT utilizes long short-term memory (LSTM) to assess the users and track their mastery of skills according to the users’ interaction sequences with skill-level tags, and KGE is applied to predict the probability on the basis of both the embedded problems and DKT’s assessments. DKTENN outperforms performance factors analysis and other deep learning-based knowledge tracing models in the experiments.

    Key words: knowledge tracing; knowledge graph embedding (KGE); deep neural network; user assessment; personalized prediction

    Introduction

    Programming contests are thriving nowadays: they involve an increasing number of skills and have become dramatically more difficult. However, available online learning platforms lack the ability to present suitable exercises for practice.

    Knowledge tracing is a key component of individualized learning, owing to its ability to assess the mastery of skills for each user and to predict the probability that a user can correctly solve a specific problem. Introducing knowledge tracing into the learning of skills for programming contests can benefit both the teachers and the students.

    Nevertheless, there are some challenges. The traditional knowledge tracing models, such as Bayesian knowledge tracing (BKT)[1] (and its variants with additional individualized parameters[2-3], problem difficulty parameters[4] and the forget parameter[5]), learning factors analysis (LFA)[6] and performance factors analysis (PFA)[7], make the following two assumptions about the skills. First, the skills are properly partitioned into an ideal, hierarchical structure, i.e., once all the prerequisites are mastered, a skill can be mastered after repeated practice. Second, the skills are independent of each other, i.e., if p_A is the probability that a user correctly solves a problem requiring skill A, and p_B is the probability that the user can solve another problem requiring a different skill B, it is assumed that the probability that the user correctly solves a problem requiring both skills A and B is p_A × p_B, implying that the interconnections between skills are ignored. However, in individualized learning, a reasonable partition should make it easier for the users to master the skills instead of purely enhancing the models’ performance. In addition, according to the second assumption, if p_A = p_B = 1, then p_A × p_B = 1. This is not in line with the cognitive process, since a problem requiring two skills is sure to be far more difficult than problems each requiring only one skill.

    Recently, knowledge tracing models based on deep learning have been attracting tremendous research interest. Deep knowledge tracing (DKT)[8-11], which utilizes a recurrent neural network (RNN) or long short-term memory (LSTM)[12], is robust[5] and has the power to infer the users’ mastery of one skill from another[13]. Dynamic key-value memory network (DKVMN)[16], along with its variants[14-15] such as deep item response theory (Deep-IRT)[14], uses a memory network[17] (where richer information about the users’ knowledge states can be stored). Some researchers[18-19] used the attention mechanism[20] to prevent the models from discarding information that is important in the future. In particular, self-attentive knowledge tracing (SAKT)[21] uses the transformer[22] to enable a faster training process and better prediction performance. These deep learning-based models are superior to the traditional models in that they have no extra requirements on the partition of skills. However, they are used either to assess the mastery of skills or to predict the probability that a problem can be solved. Even though some models[18] can assess and predict at the same time, the assessments are not convincing enough, because they are not used in the predictions and are only evaluated against the experts’ experience. Other researchers[23-24] integrated the traditional methods into deep learning models, making simultaneous assessment and prediction possible. Yet, their proposed models still placed constraints on the partition of skills.

    In this paper, a novel knowledge tracing model, which makes use of both DKT and knowledge graph embedding (KGE)[25-29], is proposed. DKT has the capacity of assessing users and has no extra requirements on the partition of skills. KGE can be used to make inferences on whether two given entities (such as users and problems) have certain relation (such as a user is capable of solving a problem). A combination of both has the power to assess and predict simultaneously. Three datasets are used to evaluate the proposed model, compared with the state-of-the-art knowledge tracing models.

    1 Problem Formalization

    Individualized learning involves user assessment and personalized prediction. In an online learning platform, suppose there are K skills, Q problems, and N users. A user’s submissions (or attempts) s = {(e_1, a_1), (e_2, a_2), …, (e_t, a_t), …, (e_T, a_T)} form a sequence, where T is the length of the sequence, e_t is the problem identifier and a_t is the result of the user’s t-th attempt (1 ≤ t ≤ T). If the user correctly solves problem e_t, a_t = 1; otherwise a_t = 0.

    Definition 1 (User assessment): given a user’s submissions s, after the T-th attempt, assess the user’s mastery of all the skills y_T = (y_{T,1}, y_{T,2}, …, y_{T,j}, …, y_{T,K}) ∈ R^K, where y_{T,j} is the mastery of the j-th skill (0 ≤ y_{T,j} ≤ 1, 1 ≤ j ≤ K). y_{T,j} = 0 means the user knows nothing about the j-th skill; y_{T,j} = 1 means the user has already mastered the j-th skill.

    Definition 2 (Personalized prediction): given a user’s submissions s, after the T-th attempt, predict the probability p_{T+1} (0 ≤ p_{T+1} ≤ 1) that the user correctly solves problem e_{T+1} in the (T+1)-th attempt.

    2 Deep Knowledge Tracing Embedding Neural Network (DKTENN)

    Fig. 1 Projection from the entity space to the relation space

    2.1 Model architecture

    As shown in Fig. 2, DKTENN contains four components, i.e., user embedding, problem embedding, projection and normalization (proj. & norm.), and predictor.

    User embedding: a user’s submissions s are first encoded into {x_1, x_2, …, x_t, …, x_T}. x_t = (q_t, r_t) ∈ R^{2K} is a vector containing the required skills of problem e_t and the result a_t of the user’s t-th attempt, where q_t, r_t ∈ R^K (1 ≤ t ≤ T). If problem e_t requires the j-th skill (1 ≤ j ≤ K), q_t’s j-th entry is one and r_t’s j-th entry is a_t; otherwise both j-th entries are zero. DKT’s input is {x_1, x_2, …, x_t, …, x_T}, and its output is {y_1, y_2, …, y_t, …, y_T}, where y_t = (y_{t,1}, y_{t,2}, …, y_{t,j}, …, y_{t,K}) ∈ R^K, and y_{t,j} indicates the user’s mastery of the j-th skill after the t-th attempt (1 ≤ j ≤ K, 1 ≤ t ≤ T).
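The skill-level encoding above can be sketched as follows (a minimal illustration, not the authors’ code; the helper name and 0-based skill indices are our assumptions):

```python
import numpy as np

def encode_attempt(required_skills, a_t, K):
    """Encode one attempt (e_t, a_t) as x_t = (q_t, r_t) in R^{2K}.

    required_skills: 0-based indices of the skills problem e_t requires.
    a_t: 1 if the problem was solved correctly, else 0.
    """
    q = np.zeros(K)  # q_t: indicator of required skills
    r = np.zeros(K)  # r_t: a_t at the required skills, zero elsewhere
    q[list(required_skills)] = 1.0
    r[list(required_skills)] = float(a_t)
    return np.concatenate([q, r])  # x_t in R^{2K}
```

For example, with K = 4 skills, a correct attempt on a problem requiring skill 2 yields a vector whose q-part and r-part both have a single one at index 2.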

    Fig. 2 Network architecture of DKTENN

    In this paper, LSTM is used to implement DKT. The update equations of LSTM are:

    i_t = σ(W_{ix} x_t + W_{ih} h_{t-1} + b_i),    (1)

    f_t = σ(W_{fx} x_t + W_{fh} h_{t-1} + b_f),    (2)

    g_t = tanh(W_{gx} x_t + W_{gh} h_{t-1} + b_g),    (3)

    o_t = σ(W_{ox} x_t + W_{oh} h_{t-1} + b_o),    (4)

    c_t = f_t * c_{t-1} + i_t * g_t,    (5)

    h_t = o_t * tanh(c_t),    (6)

    where W_{ix}, W_{fx}, W_{gx}, W_{ox} ∈ R^{l×2K}; W_{ih}, W_{fh}, W_{gh}, W_{oh} ∈ R^{l×l}; b_i, b_f, b_g, b_o ∈ R^l; l is the size of the hidden states; * is the element-wise multiplication; and σ(·) is the sigmoid function:

    σ(x) = 1 / (1 + e^{-x}).    (7)

    h_t ∈ R^l and c_t ∈ R^l are the hidden states and the cell states, respectively. Initially, h_0 = c_0 = 0 = (0, 0, …, 0) ∈ R^l.

    The assessments are obtained by applying a fully-connected layer to the hidden states:

    y_t = σ(W_y · dropout(h_t) + b_y),    (8)

    where W_y ∈ R^{K×l}, b_y ∈ R^K, and dropout(·) is used during model training to prevent overfitting[30].
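Eqs. (1)-(8) can be sketched as a single NumPy step (an illustrative implementation, not the authors’ code; dropout is omitted since it is active only during training):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_assess_step(x_t, h_prev, c_prev, W, b, Wy, by):
    """One LSTM step (Eqs. (1)-(6)) followed by the assessment layer (Eq. (8)).

    W: dict of W_ix, W_ih, ... with shapes (l, 2K) and (l, l); b: dict of biases in R^l.
    """
    i = sigmoid(W['ix'] @ x_t + W['ih'] @ h_prev + b['i'])   # input gate, Eq. (1)
    f = sigmoid(W['fx'] @ x_t + W['fh'] @ h_prev + b['f'])   # forget gate, Eq. (2)
    g = np.tanh(W['gx'] @ x_t + W['gh'] @ h_prev + b['g'])   # candidate state, Eq. (3)
    o = sigmoid(W['ox'] @ x_t + W['oh'] @ h_prev + b['o'])   # output gate, Eq. (4)
    c = f * c_prev + i * g                                   # cell state, Eq. (5)
    h = o * np.tanh(c)                                       # hidden state, Eq. (6)
    y = sigmoid(Wy @ h + by)                                 # skill mastery y_t, Eq. (8)
    return h, c, y
```

Iterating this step over x_1, …, x_T with h_0 = c_0 = 0 produces the assessment sequence y_1, …, y_T.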

    DKTENN uses DKT’s output y_T as the user embedding, which lies in the user entity space and summarizes the user’s capabilities at the skill level. In this paper, the DKT used in user embedding differs from standard DKT (DKT-S) in its input and output. The input of DKT is a sequence of skill-level tags (x_t is encoded using e_t’s required skills) and the output is the mastery of each skill, while the input of DKT-S is a sequence of problem-level tags (x_t is encoded directly using e_t instead of its required skills) and the output is the probability that the user can correctly solve each problem.

    Projection and normalization: the projected vector of y_T is y_T M_1, where M_1 ∈ R^{K×d} is the projection matrix, and d is the dimension of the projected vector. The user vector u ∈ R^d is the L2-normalized projected vector:

    u = y_T M_1 / (‖y_T M_1‖_2 + ε),    (9)

    where ε is added to avoid a zero denominator, and ‖·‖_2 is the L2-norm of a vector:

    ‖y‖_2 = sqrt(|y_1|^2 + |y_2|^2 + … + |y_d|^2),    (10)

    where y = (y_1, y_2, …, y_d) ∈ R^d, and |y_i| is the absolute value of y_i.
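The projection and L2-normalization of Eqs. (9)-(10) amount to a short NumPy operation (a sketch; the function name and the default ε value are our assumptions):

```python
import numpy as np

def project_and_normalize(y, M, eps=1e-8):
    """Project a K-dim vector into the d-dim relation space and L2-normalize it.

    Implements Eqs. (9)-(10): u = yM / (||yM||_2 + eps), where eps guards
    against a zero denominator. M has shape (K, d).
    """
    p = y @ M                              # projected vector in R^d
    return p / (np.linalg.norm(p) + eps)   # unit-length (up to eps) vector
```

The same routine serves both the user vector u (with M_1) and the problem vector v (with M_2).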

    Similarly, the problem vector v ∈ R^d is:

    v = m_{T+1} M_2 / (‖m_{T+1} M_2‖_2 + ε),    (11)

    where m_{T+1} ∈ R^K is the embedding of problem e_{T+1} produced by the problem embedding component, and M_2 ∈ R^{K×d} is the projection matrix.

    During predictions, the projection matrices M_1 and M_2 are invariant and independent of the submissions.

    Predictor: the prediction is made based on the user vector u and the problem vector v. The concatenated vector (u, v) ∈ R^{2d} is used as the input of a feed-forward neural network (FNN):

    z = dropout(σ((u, v) W_1 + b_1)) W_2 + b_2,    (12)

    where W_1 ∈ R^{2d×h}, b_1 ∈ R^h, W_2 ∈ R^{h×2}, b_2 ∈ R^2, and z = (z_0, z_1) ∈ R^2.

    A final softmax layer is applied to z to get the final prediction:

    p_i = e^{z_i} / (e^{z_0} + e^{z_1}), i ∈ {0, 1},    (13)

    where p_0 is the probability that the user cannot solve the problem, p_1 is the probability that the user can correctly solve the problem, and p_0 + p_1 = 1.
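A minimal NumPy version of the predictor of Eqs. (12)-(13) follows (an illustration; dropout is omitted for inference, and the max-shift inside the softmax is a standard numerical-stability detail, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(u, v, W1, b1, W2, b2):
    """FNN over the concatenated (u, v), then softmax over z = (z0, z1).

    Returns (p0, p1) with p0 + p1 = 1, per Eqs. (12)-(13).
    """
    uv = np.concatenate([u, v])            # (u, v) in R^{2d}
    z = sigmoid(uv @ W1 + b1) @ W2 + b2    # Eq. (12)
    e = np.exp(z - z.max())                # numerically stable softmax, Eq. (13)
    p = e / e.sum()
    return p[0], p[1]
```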

    2.2 Model training

    There are two parts in training. The first part is user embedding. In order to assess the users’ mastery of skills, following Piech et al.[8], the loss function is defined by

    L = Σ_{t=1}^{T-1} ℓ(y_t · q_{t+1}, a_{t+1}),    (14)

    ℓ(p, a) = −a log p − (1 − a) log(1 − p),    (15)

    where ℓ(·, ·) is the binary cross entropy between the predicted and the actual result of the next attempt.

    Considering that DKT suffers from the reconstruction problem and the wavy transition problem, three regularization terms have been proposed by Yeung and Yeung[31]:

    r = (1/T) Σ_{t=1}^{T} ℓ(y_t · q_t, a_t),    (16)

    w_1 = Σ_{t=1}^{T-1} ‖y_{t+1} − y_t‖_1 / (K(T − 1)),    (17)

    w_2 = Σ_{t=1}^{T-1} ‖y_{t+1} − y_t‖_2^2 / (K(T − 1)),    (18)

    where ‖·‖_1 is the L1-norm of a vector:

    ‖y‖_1 = |y_1| + |y_2| + … + |y_K|,    (19)

    where y = (y_1, y_2, …, y_K) ∈ R^K.

    The regularized loss function of DKT is:

    L′ = L + λ_R r + λ_1 w_1 + λ_2 w_2,    (20)

    where λ_R, λ_1 and λ_2 are the regularization parameters.
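Given an assessment sequence, the regularized loss of Eq. (20) can be sketched as follows (an illustration; averaging multi-skill predictions over the required skills is our assumption, since the paper does not show how Eq. (14) handles problems requiring several skills):

```python
import numpy as np

def bce(p, a, eps=1e-12):
    """Binary cross entropy, Eq. (15)."""
    return -(a * np.log(p + eps) + (1 - a) * np.log(1 - p + eps))

def regularized_dkt_loss(Y, Q, A, lam_r, lam_1, lam_2):
    """Regularized DKT loss in the spirit of Eqs. (14)-(20).

    Y: (T, K) assessments y_1..y_T; Q: (T, K) skill indicators q_1..q_T;
    A: (T,) attempt results a_1..a_T.
    """
    T, K = Y.shape
    # Prediction for attempt t+1, averaged over its required skills.
    pred_next = (Y[:-1] * Q[1:]).sum(1) / Q[1:].sum(1)
    L = bce(pred_next, A[1:]).sum()                      # Eq. (14)
    pred_curr = (Y * Q).sum(1) / Q.sum(1)
    r = bce(pred_curr, A).mean()                         # reconstruction term, Eq. (16)
    diff = Y[1:] - Y[:-1]
    w1 = np.abs(diff).sum() / (K * (T - 1))              # waviness (L1), Eq. (17)
    w2 = (diff ** 2).sum() / (K * (T - 1))               # waviness (L2), Eq. (18)
    return L + lam_r * r + lam_1 * w1 + lam_2 * w2       # Eq. (20)
```

Setting all three λ to zero recovers the unregularized loss of Eq. (14).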

    The second part comprises problem embedding, proj. & norm., and predictor. The output of the predictor indicates the probability, so this part can be trained by minimizing the following binary cross entropy loss:

    L_p = −a_{T+1} log p_1 − (1 − a_{T+1}) log p_0,    (21)

    where a_{T+1} is the actual result of the (T+1)-th attempt.

    Adam[32] is used to minimize the loss functions. Gradient clipping is applied to deal with the exploding gradients[33].
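Gradient clipping by global norm can be sketched as follows (a generic implementation; the clipping threshold and the exact scheme used in the paper are not specified):

```python
import numpy as np

def clip_gradients(grads, max_norm):
    """Rescale a list of gradient arrays so their global L2 norm is at most
    max_norm — a common remedy for exploding gradients in RNN training."""
    total = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
    if total > max_norm:
        scale = max_norm / total
        grads = [g * scale for g in grads]
    return grads
```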

    3 Experiments

    Experiments are conducted to show that: (1) DKT’s assessments are reasonable; (2) DKTENN outperforms the state-of-the-art knowledge tracing models; (3) KGE is necessary.

    3.1 Datasets

    The datasets used in this paper include users, problems and their required skills. The skills are partitioned according to the teaching experience of domain experts. The data come from three public online judges.

    Codeforces (CF): CF regularly holds online contests, and all the problems on CF come from these contests. The problems (with their required skills labeled by the experts), the 500 top-rated users and their submissions constitute the CF dataset.

    Hangzhou Dianzi University online judge (HDU) & Peking University online judge (POJ): the problems on HDU and POJ carry no information about the required skills, so solutions from the Chinese software developer network (CSDN) were collected and used to label the problems. The users who have solved the most problems and their submissions constitute the HDU and POJ datasets.

    The details of the datasets are shown in Table 1, where the numbers of users, problems, skills and submissions are given. For each dataset, 20% of the users are randomly chosen as the test set, and the remaining users are left as the training set.

    Table 1 Dataset overview

    3.2 Evaluation methodology

    Evaluation metrics: the area under the ROC curve (AUC) and the root mean square error (RMSE) are used to measure the performance of the models. AUC ranges from 0.5 to 1.0, and RMSE is greater than or equal to 0; a model with a larger AUC and a smaller RMSE is better.
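Both metrics can be computed directly (a simple sketch using the rank-sum formulation of AUC, ignoring tied scores; library routines such as scikit-learn’s are typically used in practice):

```python
import numpy as np

def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability that
    a randomly chosen positive is scored above a randomly chosen negative.
    Tied scores are not handled specially in this sketch."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def rmse(labels, scores):
    """Root mean square error between actual results and predicted probabilities."""
    labels, scores = np.asarray(labels, float), np.asarray(scores, float)
    return float(np.sqrt(np.mean((labels - scores) ** 2)))
```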

    Fig. 3 Architecture of DKTML

    3.3 Results and discussion

    The results for user assessment are shown in Table 2. DKT outperforms BKT and BKT-F, achieving average gains of 35.6% and 17.3% in AUC, respectively. On the one hand, BKT can only model the acquisition of a single skill, while DKT takes all the skills into account and is capable of adjusting the users’ mastery of one skill based on another closely related skill according to the results of the attempts. On the other hand, as a probabilistic model relying on only four (or five) parameters, BKT (or BKT-F) has difficulty in modeling the relatively complicated learning process in programming contests. Benefiting from LSTM, DKT has more parameters and stronger learning ability. Nevertheless, the input of DKT does not contain information such as the difficulties of the problems, i.e., if two users have solved problems requiring similar skills, their assessments are also similar, though the problem difficulties may vary greatly. Thus, DKT’s assessments are only rough estimations of how well a user has mastered a skill.

    The results for personalized prediction are shown in Table 3. DKTENN outperforms the state-of-the-art knowledge tracing models on all three datasets, achieving an average gain of 0.9% in AUC and an average decrease of 0.6% in RMSE, which demonstrates the effectiveness of the proposed model.

    To predict whether a user can solve a problem, not only the required skills but also other information, such as the difficulties, should be considered. Both DKTML and DKTENN are based on DKT’s assessments, but the difference is that DKTML uses the required skills in a straightforward manner, while DKTENN uses a KGE-like method to make full use of information such as the users’ mastery of skills and the problems’ difficulties, beyond the required skills alone. Compared with DKTFNN (the best-performing DKTML variant), DKTENN achieves an average gain of 2.5% in AUC, which shows that KGE is an essential component.

    Since the prediction of DKTENN is based on the assessments of DKT, better performance of DKTENN shows that the assessments of DKT are reasonable.

    Table 2 Experimental results of user assessment

    Figure 4 is drawn by projecting the trained problem embeddings into the 2-dimensional space using t-SNE[36]. The correspondence between the 50 problems and their required skills can be found in the Appendix. To some extent, Fig. 4 reveals that DKTENN is able to cluster similar problems. For example, problems 9 and 32 are clustered possibly because they share the skill “data structures”; problems 17 and 49 are both “interactive” problems. So, it is believed that the embeddings can help to discover the connections between problems. Due to the complexity of the problems in programming contest, further research on the similarity between problems is still needed.

    Table 3 Experimental results of personalized prediction

    Fig. 4 Visualizing problem embeddings using t-SNE

    4 Conclusions

    A new knowledge tracing model, DKTENN, which makes predictions directly based on the assessments of DKT, has been proposed in this work. The problems, the users and their submissions from CF, HDU and POJ are used as datasets. Owing to the combination of DKT and KGE, DKTENN outperforms the existing models in the experiments.

    At present, the problem or skill difficulties are not incorporated into the assessments of DKT. In the future, to further improve the assessments and the prediction performance of the model, better embedding methods will be explored to encode the features of problems and skills.

    Table Ⅰ Selected CF problems and their required skills

