
    Design and analysis of traffic incident detection based on random forest

2014-09-17 06:00:44

    Liu Qingchao Lu Jian Chen Shuyan

    (Jiangsu Key Laboratory of Urban ITS, Southeast University, Nanjing 210096, China)

    (Jiangsu Province Collaborative Innovation Center of Modern Urban Traffic Technologies, Nanjing 210096, China)

Traffic incident detection is important in modern ITS. Here traffic incidents are defined as traffic congestion caused by occasional events, such as traffic accidents, car breakdowns, scattered goods, and natural disasters [1]. Freeway and arterial incidents often occur unexpectedly and cause undesirable congestion and mobility loss. If the abnormal condition cannot be detected and resolved in time, it may increase traffic delay and reduce road capacity, and it often causes secondary traffic accidents. Therefore, traffic incident detection plays an important role in most advanced freeway traffic management systems.

The artificial intelligence algorithm is one of the recently developed approaches to traffic incident detection; it can detect incidents by either a rule-based algorithm or a pattern-based algorithm. Traffic incident detection networks are usually multi-layer feed-forward (MLF) neural networks: the input signals, together with previous data, are weighted and propagated to an output signal that indicates either incident or incident-free conditions [2]. Several techniques based on artificial intelligence have been adopted to detect traffic incidents. Srinivasan et al. [3] evaluated the incident detection performance of three promising neural network models, the MLF, the basic probabilistic neural network (BPNN) and the constructive probabilistic neural network (CPNN), and concluded that the CPNN model had the highest potential for freeway incident detection systems.

Although artificial neural networks have achieved better performance than the classical detection algorithms, two defects limit their wide application: artificial neural networks cannot offer a clear explanation of how their parameters are adjusted, and it is difficult to obtain the optimal parameters of the networks. Payne et al. [4] used decision trees with states for traffic incident detection, where the states correspond to distinct traffic conditions. Chen et al. [5-6] used decision tree learning for freeway automatic incident detection in 2009, with the decision tree serving as a classifier. Compared with artificial neural networks, their method not only avoids the burden of adjusting parameters but also improves the average performance of traffic incident detection. However, the decision tree learning algorithm has two defects: the classification strength of a single decision tree is low, and the decision tree is prone to overfitting. To solve these two problems, we adopt random forest, an ensemble of decision trees, to detect traffic incidents.

    1 Random Forest for Traffic Incident Detection

    1.1 Principle of random forest

Breiman [7] proposed the random forest algorithm in 2001. Random forest is an ensemble of unpruned classification trees, induced from bootstrap samples of the training data, and it uses random feature selection in the tree induction process. Prediction is made by aggregating the predictions of the ensemble. The common element in all of these procedures is that for the k-th tree, a random vector Θ_k is generated, independent of the past random vectors Θ_1, …, Θ_{k-1} but with the same distribution, and a tree is grown using the training set and Θ_k, resulting in a classifier h(x, Θ_k), where x is an input vector. The decision trees in the random forest model are generated by the bagging algorithm. Bagging (bootstrap aggregating) is a classic ensemble method for multiple classifiers in machine learning; for more details, refer to Ref.[8]. For instance, in the bagging algorithm, the random vector Θ is generated as the counts in N boxes resulting from N darts thrown at random at the boxes, where N is the number of examples in the training set. In random split selection, Θ consists of a number of independent random integers between 1 and K. The nature and dimensionality of Θ depend on its use in tree construction. After a large number of trees are generated, they vote for the most popular class [7].
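As a minimal sketch (not the authors' original implementation), the construction described above can be reproduced with scikit-learn: bootstrap sampling plays the role of the random vector Θ_k, and `max_features` controls the random feature selection at each split. The data shapes and parameter values below are illustrative assumptions.

```python
# Minimal sketch of the random-forest construction described above
# (bootstrap samples + random feature selection at each split).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 7))        # 7 traffic features per instance (illustrative)
y_train = rng.choice([-1, 1], size=1000)    # -1 = non-incident, 1 = incident

forest = RandomForestClassifier(
    n_estimators=100,   # number of trees k = 1, ..., 100
    max_features=3,     # random features considered at each split
    bootstrap=True,     # each tree sees a bootstrap sample (the random vector Theta_k)
    random_state=0,
)
forest.fit(X_train, y_train)

# Prediction aggregates the unit votes of all trees (majority vote).
print(forest.predict(X_train[:5]))
```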

Random forest is a classifier consisting of a collection of tree-structured classifiers {h(x, Θ_k), k = 1, 2, …}, where the vectors {Θ_k} are independent, identically distributed random vectors, and each tree casts a unit vote for the most popular class at input x. Given an ensemble of classifiers h_1(x), h_2(x), …, h_K(x), and with the training set drawn randomly from the distribution of the random vector (X, Y), the margin function is defined as

mg(X, Y) = av_k I(h_k(X) = Y) − max_{j≠Y} av_k I(h_k(X) = j)    (1)

where I(·) is the indicator function and av_k denotes the average over the trees. The margin measures the extent to which the average number of votes at (X, Y) for the correct class exceeds the average vote for any other class. The larger the margin, the more confidence in the classification. The generalization error is given by

PE* = P_{X,Y}(mg(X, Y) < 0)    (2)

where the subscripts X, Y indicate that the probability is over the (X, Y) space. In random forest, h_k(X) = h(X, Θ_k). For a large number of trees, it follows from the strong law of large numbers and the tree structure that, as the number of trees increases, for almost all sequences Θ_1, Θ_2, …, PE* converges to

P_{X,Y}( P_Θ(h(X, Θ) = Y) − max_{j≠Y} P_Θ(h(X, Θ) = j) < 0 )    (3)

The result of Eq.(3) explains why random forest does not overfit as more trees are added, but instead produces a limiting value of the generalization error. That is to say, random forest can compensate for this defect of the decision tree. An upper bound for the generalization error is given by

PE* ≤ ρ̄(1 − s²)/s²    (4)

where ρ̄ is the mean value of the correlation between the trees and s is the strength of the set of classifiers {h(x, Θ)}.

This shows that the two ingredients in the generalization error of random forest are the strength of the individual classifiers in the forest and the correlation between them in terms of the raw margin functions. To obtain greater classification strength, the correlation between the decision tree classifiers must be smaller; and to obtain a smaller correlation, the differences between the decision trees must be larger.

Suppose that for an incident detection problem we define three different decision trees h(x, Θ_1), h(x, Θ_2) and h(x, Θ_3). We can combine these trees by voting to produce a classifier that is superior to any of the individual trees; in other words, x is assigned to the class that receives the largest number of votes. As shown in Fig.1, the predictor space is divided into three regions. In the first region, R1 and R2 classify correctly but R3 is incorrect; in the second region, R1 and R3 are correct but R2 is incorrect; and in the third region, R2 and R3 are correct but R1 is incorrect. If a test point is equally likely to be in any of the three regions, each individual tree will be incorrect one third of the time, yet the combined classifier will always give the correct classification. Of course, there is no guarantee that this will occur, and it is possible (though uncommon) for the combined classifier to perform worse. So random forest can largely compensate for the problem of classification strength and improve the classification accuracy.
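The voting argument can be made concrete with a small numerical sketch. The labels below are hypothetical, not data from the paper: each tree errs on exactly one of three equally likely regions, yet the majority vote is always correct.

```python
# Toy illustration of the voting argument: three classifiers, each wrong
# on exactly one of three regions, combined by majority vote.
import numpy as np

true_label = np.array([1, 1, 1])    # true class in regions 1, 2, 3 (hypothetical)
tree1 = np.array([1, 1, -1])        # wrong in region 3
tree2 = np.array([1, -1, 1])        # wrong in region 2
tree3 = np.array([-1, 1, 1])        # wrong in region 1

votes = tree1 + tree2 + tree3       # sign of the sum = majority vote for {-1, +1} labels
combined = np.sign(votes)

print("individual accuracies:",
      [np.mean(t == true_label) for t in (tree1, tree2, tree3)])  # each 2/3
print("combined accuracy:", np.mean(combined == true_label))      # 1.0
```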

    Fig.1 Vote procedure diagram

    1.2 Construction of data sets for training and testing

The incident is detected on a section basis, which means that the traffic data collected from two adjacent detectors, the up-stream detector and the down-stream detector, are used for calibration and testing. The traffic data consist of at least the following items:

· Time when the data are collected, t_i, i = 1, 2, …, n;

    · Speed, volume and density measured by the up-stream detector, s_up,i, v_up,i, d_up,i, i = 1, 2, …, n;

    · Speed, volume and density measured by the down-stream detector, s_dn,i, v_dn,i, d_dn,i, i = 1, 2, …, n;

    · Traffic state L_i, i = 1, 2, …, n,

where the traffic state item is a label whose value is −1 or 1, referring to non-incident or incident conditions, respectively, as determined from the incident dataset. Typically, the model is fitted to part of the data (the training set), and the quality of the fit is judged by how well it predicts the other part (the test set). The entire data set is therefore divided into two parts: a training set used to build the model and a test set used to assess the model's detection ability. The training set consists of 45 518 samples, including 43 418 non-incident instances and 2 100 incident instances (22 incident cases); the test set consists of 45 138 samples, including 43 102 non-incident instances and 2 036 incident instances (23 incident cases). The test set is kept separate from the training data and is not used to monitor the training process. This prevents the selected models from exploiting chance correlations with peculiarities of the test set measurements and reduces the risk of overfitting.

The number of X-variables (predictor variables) is 7, so the matrix X used in training the model has size 45 518 × 7 and the test matrix X has size 45 138 × 7. Formally, X is an n × 7 matrix in which each row is one observation, and Y is the corresponding n × 1 vector of labels with y_i ∈ {−1, 1}, where n is the number of instances. The data analysis problem is to predict Y (the traffic state) from X by some function y = f(x). The training set is used to develop the random forest model, which is in turn used to detect incidents for the test set samples; the outputs of the detection models are then compared with the actual labels, and the performance measures are calculated and compared, as sketched below.
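The sketch below assembles X and Y in the shape described above. The exact choice of the seven predictors is an assumption (here, time of day plus speed, volume and density from the up-stream and down-stream detectors); the field names and sample values are hypothetical.

```python
# Hedged sketch of building the (n x 7) feature matrix X and label vector Y.
import numpy as np

def build_dataset(records):
    """records: iterable of dicts with keys
       t, s_up, v_up, d_up, s_dn, v_dn, d_dn, label (label in {-1, +1})."""
    X = np.array([[r["t"], r["s_up"], r["v_up"], r["d_up"],
                   r["s_dn"], r["v_dn"], r["d_dn"]] for r in records])
    y = np.array([r["label"] for r in records])
    return X, y                      # X has shape (n, 7), y has shape (n,)

# Example with two hypothetical instances:
sample = [
    {"t": 8.25, "s_up": 95, "v_up": 1500, "d_up": 16,
     "s_dn": 40, "v_dn": 900, "d_dn": 45, "label": 1},
    {"t": 8.30, "s_up": 98, "v_up": 1480, "d_up": 15,
     "s_dn": 96, "v_dn": 1460, "d_dn": 15, "label": -1},
]
X, y = build_dataset(sample)
print(X.shape, y)
```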

    2 Performance Measures

2.1 Definition of DR, FAR, MTTD and CR

Four primary measures of performance, namely the detection rate (DR), the false alarm rate (FAR), the mean time to detection (MTTD) and the classification rate (CR), are used to evaluate traffic incident detection algorithms; we adopt the definitions given in Ref.[9].
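Since the cited formulas are not reproduced here, the following sketch uses commonly adopted incident-detection conventions (detected incident cases over all cases for DR, false alarms over non-incident instances for FAR, mean detection delay for MTTD, correctly classified instances for CR). These formulas are assumptions based on standard usage, not a verbatim copy of Ref.[9].

```python
# Sketch of the four performance measures under commonly used definitions
# (assumed here, since the cited formulas are not reproduced in the text).
import numpy as np

def detection_measures(y_true, y_pred, incident_ids, detect_delays_min):
    """y_true, y_pred: arrays of {-1, +1} instance labels.
       incident_ids: per-instance incident-case id (None for non-incident instances).
       detect_delays_min: detection delays (minutes) of the detected incident cases."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)

    cases = {i for i in incident_ids if i is not None}
    detected = {i for i, t, p in zip(incident_ids, y_true, y_pred)
                if i is not None and t == 1 and p == 1}
    dr = 100.0 * len(detected) / len(cases)                # detection rate, %

    non_incident = y_true == -1
    far = 100.0 * np.sum((y_pred == 1) & non_incident) / np.sum(non_incident)  # false alarm rate, %

    mttd = float(np.mean(detect_delays_min)) if detect_delays_min else float("nan")
    cr = 100.0 * np.mean(y_pred == y_true)                 # classification rate, %
    return dr, far, mttd, cr

# Hypothetical usage:
# dr, far, mttd, cr = detection_measures(y_true, y_pred, incident_ids, [1.2, 0.8])
```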

    2.2 ROC curves

Receiver operating characteristic (ROC) curves illustrate the relationship between the DR and the FAR. The comparison of two or more ROC curves often consists of either looking at the area under the ROC curve (AUC) or focusing on a particular part of the curves and identifying which curve dominates the other, in order to select the best-performing algorithm. The AUC is equivalent to the Wilcoxon rank statistic and is related to the Gini coefficient by G1 = 2AUC − 1.
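A short sketch of the AUC computation and its relation to the Gini coefficient, using scikit-learn's ROC utilities. The labels and scores below are illustrative only.

```python
# Sketch: ROC curve, AUC, and the Gini coefficient G1 = 2*AUC - 1.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([-1, -1, -1, 1, 1, -1, 1, 1])
scores = np.array([0.1, 0.3, 0.35, 0.4, 0.8, 0.2, 0.7, 0.9])  # P(incident) from the classifier

fpr, tpr, _ = roc_curve(y_true, scores, pos_label=1)  # FAR on the x-axis, DR on the y-axis
auc = roc_auc_score(y_true, scores)
gini = 2 * auc - 1

print(f"AUC = {auc:.3f}, Gini G1 = {gini:.3f}")
```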

    2.3 Statistics indicators

In statistics, the mean absolute error (MAE) measures how close forecasts or predictions are to the eventual outcomes. It is given by

MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i|

where y_i is the observed value and ŷ_i is the predicted value.

The root-mean-square error (RMSE) is a frequently used measure of the differences between the values predicted by a model or an estimator and the values actually observed. These individual differences are called residuals when the calculations are performed over the data sample used for estimation, and prediction errors when computed out-of-sample. The RMSE aggregates the magnitudes of the prediction errors at various times into a single measure of predictive power.

The equality coefficient (EC) is useful for comparing different forecasting methods, for example, to check whether an elaborate forecast is in fact any better than a naive forecast that repeats the last observed value. The closer the value of EC is to 1, the better the forecasting method; a value of zero means that the forecast is no better than a naive guess.
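The sketch below computes the three indicators. MAE and RMSE follow their standard definitions; the EC formula used here is a common Theil-style form, which is an assumption since the paper does not reproduce its exact formula.

```python
# Sketch of the three statistical indicators (EC form is an assumption).
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def ec(y_true, y_pred):
    # Assumed Theil-style equality coefficient: 1 - ||y - y_hat|| / (||y|| + ||y_hat||)
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 1.0 - np.linalg.norm(y_true - y_pred) / (
        np.linalg.norm(y_true) + np.linalg.norm(y_pred))

y_obs = [1, -1, 1, 1, -1]
y_hat = [1, -1, -1, 1, -1]
print(mae(y_obs, y_hat), rmse(y_obs, y_hat), ec(y_obs, y_hat))
```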

    3 Case Study

In this section, we perform three groups of experiments: the first compares random forest detection performance for different numbers of trees, the second compares random forest with decision trees, and the last compares random forest with the MLF. All three experiments are performed on real I-880 data to investigate the performance of the random forest method. The evaluation indicators include DR, FAR, MTTD, CR, ROC and AUC; compared with the first four indicators, ROC and AUC evaluate performance more comprehensively.

    3.1 Data description

The data were collected by Petty et al. on the I-880 Freeway in the San Francisco Bay Area, California, USA. This is the most recent and probably the best-known freeway incident data set collected, and it has been used in much research related to incident detection.

    3.2 Experiment 1

The number of trees in this group of experiments ranges from 10 to 100; we increase the number of trees in order to obtain a greater difference. Five performance measures, DR, FAR, MTTD, CR and AUC, are computed for the different numbers of trees and are shown in Tab.1. Different numbers of trees yield similar classification rates, and random forest obtains a good CR throughout. As far as FAR is concerned, the FAR of 0.87% yielded by 40 trees is the best. The forest with 10 trees obtains the lowest MTTD at 0.84 min, whereas the MTTD of 100 trees is noticeably longer. The DR of 100 trees is 92.09%, the best among the settings, and its AUC of 94.69% is also the best. Across the five measures, RF-100 outperforms the other RF configurations in DR, CR and AUC. To a certain extent, it can be concluded that a larger number of trees yields greater classification strength and slightly better incident detection ability; on the I-880 data set, 100 trees brings some improvement in every measure except FAR.

Next, we compare the performance of random forest by ROC curves. ROC graphs plot FAR on the x-axis and DR on the y-axis. Fig.2(a) illustrates DR vs. FAR for the full ROC curve, and Fig.2(b) shows the enlarged part corresponding to FAR from 0% to 0.1%. It is seen from Fig.2 that 100 trees is slightly superior to the other settings, since its curve lies above the others and is very close to the point (0, 1) at the far left of the figure, meaning that it achieves a higher detection rate at the same false alarm rate. We ran 50 replications of 10-fold cross-validation to assess the error rate for a range of tree numbers on the I-880 data, with tree numbers again from 10 to 100. In 10-fold cross-validation, the training set is split into 10 approximately equal partitions; each partition is used in turn for testing while the remainder is used for training, so that in the end every instance is used exactly once for testing.

    Tab.1 Comparison of different numbers of trees

Fig.2 Comparison of different numbers of trees. (a) Total ROC curve; (b) Enlarged part of the ROC curve
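The evaluation protocol described above (50 replications of 10-fold cross-validation over tree numbers from 10 to 100) can be sketched as follows. Synthetic, unbalanced data stand in for the I-880 set, and the reported metric is the simple error rate; all values are illustrative.

```python
# Sketch of the protocol: repeated 10-fold cross-validation (50 replications)
# over tree numbers from 10 to 100, on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 7))
y = rng.choice([-1, 1], size=500, p=[0.95, 0.05])   # unbalanced, like the incident data

# n_repeats=50 matches the protocol; reduce it for a quick run.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=50, random_state=0)
for n_trees in range(10, 101, 10):
    clf = RandomForestClassifier(n_estimators=n_trees, max_features=3, random_state=0)
    acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(n_trees, "mean error rate:", 1.0 - acc.mean())
```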

Figs.3(a) to (h) show box plots of the error rates; the horizontal lines inside the boxes are the median error rates. Figs.3(a) to (e) show the incident detection indicators, which grow to different degrees except for FAR: when the number of trees is fewer than 70, DR, CR and AUC grow relatively fast, while FAR fluctuates around its median value of 0.093. In Figs.3(f) to (h), MAE and RMSE decrease gradually and reach their lowest values when the tree number is 100, and the value of EC is very close to 100, which shows that random forest is highly effective.

Fig.3 Box plots of 10-fold cross-validation test error rates on the I-880 data set. (a) DR; (b) FAR; (c) MTTD; (d) CR; (e) AUC; (f) MAE; (g) RMSE; (h) EC

    3.3 Experiment 2

In this experiment, we compare the random forest classifier with decision tree classifiers, using C4.5 and CART as the decision tree classifiers. The random forests with 10, 40 and 100 trees (namely RF-10, RF-40 and RF-100) consider three random features when constructing each tree. The results of the random forest algorithm are compared with those of C4.5 and CART on the I-880 data set. Five performance measures, DR, FAR, MTTD, CR and AUC, are computed for the three algorithms and are shown in Tab.2. The two decision tree classifiers yield a similar detection rate, with C4.5 obtaining a slightly better DR of 69.49%; however, the DRs of the random forests (RF-10, RF-40, RF-100) all exceed 84%. Because the I-880 data set is unbalanced, the performance of C4.5 and CART is not ideal; that is to say, random forest can better deal with unbalanced data. As far as MTTD is concerned, the 0.84 min yielded by RF-10 is the best; RF-40 and RF-100 generate 40 and 100 trees, respectively, so they consume more time. RF-40 obtains the lowest FAR at 0.87%. Both CR and AUC of the random forests exceed 90%, which is superior to C4.5 and CART. The values of MAE, RMSE and EC of random forest are the best among the three algorithms, especially for RF-100. Across the five comparisons, it can be concluded, to a certain extent, that random forest obtains better incident detection ability than C4.5 and CART.

    Tab.2 Comparison of C4.5,CART and random forest
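A rough illustration of the single-tree versus forest comparison (not a reproduction of Tab.2) is sketched below. scikit-learn's DecisionTreeClassifier is a CART-style tree and stands in for both baselines, since C4.5 itself is not available in scikit-learn; the synthetic data and accuracy numbers are purely illustrative.

```python
# Sketch: single decision tree vs. random forests of 10, 40 and 100 trees.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 7))
# Hypothetical rule: incident when the down-stream/up-stream speed gap is large.
y = np.where(X[:, 3] - X[:, 0] + rng.normal(scale=0.5, size=2000) > 1.5, 1, -1)
X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

models = {
    "CART-style single tree": DecisionTreeClassifier(random_state=0),
    "RF-10": RandomForestClassifier(n_estimators=10, max_features=3, random_state=0),
    "RF-40": RandomForestClassifier(n_estimators=40, max_features=3, random_state=0),
    "RF-100": RandomForestClassifier(n_estimators=100, max_features=3, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "CR:", accuracy_score(y_test, model.predict(X_test)))
```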

Next, we compare the performance of the random forest algorithm by ROC curves. Here we use a variant of the ROC curve that plots DR against FAR at the incident-scenario level. A single incident scenario contains several incident instances; if any instance belonging to the scenario is classified as an incident, an alarm is declared and the scenario is counted as successfully detected. When multiple instances are classified as incidents, only the instance with the maximal probability is used for depicting the ROC curve, since its probability represents the probability of the incident scenario being detected. Such an ROC curve therefore emphasizes an algorithm's ability to detect an incident against its FAR, which makes DR and FAR more meaningful for evaluating incident detection algorithms. Figs.4(a) and (b) illustrate DR vs. FAR, where Fig.4(a) is the total ROC curve and Fig.4(b) is the enlarged part corresponding to FAR from 0 to 0.1. It is clearly seen from this figure that random forest is superior to C4.5 and CART, since its AUC is larger than theirs, including when the FAR range is restricted to 0.1. In Fig.4(b), when the FAR is less than 0.02, the random forests (RF-10, RF-40, RF-100) lie closer to the y-axis, so they achieve a higher DR at the same false alarm rate.

    3.4 Experiment 3

Fig.4 Comparison of C4.5, CART and random forest. (a) Total ROC curve; (b) Enlarged part of the ROC curve

Among existing traffic incident detection algorithms, the MLF has been investigated for freeway traffic incident detection and has achieved good results. The magnitude of weight adjustment and the convergence speed can be controlled by setting the learning and momentum rates; the learning rate is set to 0.3 and the momentum rate to 0.2, and the tree number of the random forest is set to 100. Five performance measures, DR, FAR, MTTD, CR and AUC, are computed for MLF and random forest and are shown in Tab.3. The two methods yield similar classification rates, with random forest obtaining a better CR, and the DR and AUC of random forest are also better than those of MLF. As far as FAR is concerned, the 0.95% yielded by random forest is the best. MLF has the higher MTTD at 4.73 min. The values of MAE, RMSE and EC of random forest are better than those of MLF, with the 100-tree forest giving the best performance.
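As a rough stand-in for the MLF baseline (the paper does not specify the network architecture), the sketch below trains a feed-forward network with SGD using the quoted learning rate (0.3) and momentum (0.2), alongside a 100-tree forest. The hidden-layer size and data are assumptions.

```python
# Rough stand-in for the MLF baseline vs. a 100-tree random forest.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 7))
y = np.where(X[:, 3] - X[:, 0] + rng.normal(scale=0.5, size=2000) > 1.5, 1, -1)

mlf = MLPClassifier(hidden_layer_sizes=(10,),       # assumed architecture
                    solver="sgd",
                    learning_rate_init=0.3,          # learning rate quoted in the text
                    momentum=0.2,                    # momentum rate quoted in the text
                    max_iter=500, random_state=0)
rf = RandomForestClassifier(n_estimators=100, max_features=3, random_state=0)

for name, model in [("MLF", mlf), ("RF-100", rf)]:
    model.fit(X[:1500], y[:1500])
    print(name, "test CR:", model.score(X[1500:], y[1500:]))
```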

Figs.5(a) and (b) illustrate DR vs. FAR, where Fig.5(a) is the total ROC curve and Fig.5(b) is the enlarged part corresponding to FAR from 0% to 0.6%. The performance of MLF is lower than that of random forest. The results show that random forest compares favorably with the MLF neural network, and our experiments demonstrate that random forest has great potential for traffic incident detection.

    Tab.3 Comparison of MLF and random forest

Fig.5 Comparison of MLF and random forest. (a) Total ROC curve; (b) Enlarged part of the ROC curve

    4 Conclusion

Based on the results of the three experiments, the following conclusions are drawn: 1) Random forest is effective in enhancing the classification strength. 2) Random forest is effective in avoiding overfitting. 3) Random forest has strong potential for traffic incident detection.

Random forest achieves satisfactory incident detection rates with acceptable false alarm rates and mean times to detection. As our experiments show, random forest can achieve better results if the number of trees is chosen appropriately with respect to MTTD. A decision tree is an individual classifier that needs to be trained only once, whereas random forest must train many individual tree classifiers to construct the ensemble; as a result, the random forest algorithm consumes more time than the decision tree algorithm. Our test results also show that random forest provides performance comparable to a neural network, so it has good potential for application in traffic incident detection.

If the number of trees is chosen appropriately, the running time of random forest is short, so there is great potential for real-time detection of traffic incidents. The MTTD problem should be noted when using random forest: the forest contains many trees, and the key question is how many trees are needed to achieve an ideal MTTD. Besides, like neural networks, random forest lacks transferability. How to produce a transferable incident detection algorithm that does not require explicit off-line retraining at a new site, that is to say, adaptive traffic incident detection based on random forest, needs further research.

    [1]Li L,Jiang R.Modern traffic flow theory and applicationⅠ:freeway traffic flow[M].Beijing:Tsinghua University Press,2011.(in Chinese)

    [2]Cheu R,Srinivasan D,Loo W.Training neural networks to detect freeway incidents by using particle swarm optimization [J].Transportation Research Record,2004,1867:11-18.

    [3]Srinivasan D,Jin X,Cheu R.Adaptive neural network models for automatic incident detection on freeways[J].Neurocomputing,2005,64:473-496.

    [4]Payne H J,Tignor S C.Freeway incident-detection algorithms based on decision trees with states[J].Transportation Research Record,1978,682:30-37.

[5]Chen S,Wang W.Decision tree learning for freeway automatic incident detection[J].Expert Systems with Applications,2009,36(2):4101-4105.

    [6]Bi J,Guan W.A genetic resampling particle filter for freeway traffic-state estimation[J].Chin Phys B,2012,21(6):068901-01-068901-05.

    [7]Breiman L.Random forests[J].Machine Learning,2001,45(1):5-32.

    [8]Breiman L.Bagging predictors[J].Machine Learning,1996,24(2):123-140.

    [9]Hand D J,Till R J.A simple generalization of the area under the ROC curve to multiple class classification problems[J].Machine Learning,2001,45(2):171-186.
