
    Road Safety Performance Function Analysis With Visual Feature Importance of Deep Neural Nets

IEEE/CAA Journal of Automatica Sinica, 2020, Issue 3 (published 2020-05-21)

    Guangyuan Pan, Liping Fu, Qili Chen, Ming Yu, and Matthew Muresan

Abstract—Road safety performance function (SPF) analysis using data-driven and nonparametric methods, especially recently developed deep learning approaches, has achieved increasing success. However, because the learning mechanisms of deep learning are hidden in a “black box”, traffic feature extraction and intelligent importance analysis remain unsolved. This paper addresses the problem with a deciphered version of deep neural networks (DNN), one of the most popular deep learning models. The approach builds on visualization, feature importance, and sensitivity analysis, and can evaluate the contributions of input variables to the model’s “black box” feature learning process and output decision. First, a visual feature importance (ViFI) method that describes the importance of input features is proposed, combining diagram-based and numerical analysis. Second, by observing the change of weights with ViFI during unsupervised training and fine-tuning of the DNN, the final contributions of the input features are calculated according to importance equations that we propose for both steps. A case study of road SPF analysis is then demonstrated, using data collected from a major Canadian highway, Highway 401. The proposed method allows effective deciphering of the model’s inner workings, so that significant features can be identified and bad features eliminated. Finally, the revised dataset is used for crash modeling and vehicle collision prediction, and the testing results verify that the deciphered and revised model achieves state-of-the-art performance.

    I. Introduction

EVALUATING the safety effects of countermeasures relies greatly on collision prediction models or safety performance functions (SPF), an important topic in road safety studies. SPFs are commonly developed separately for different types of highways or entities, using locally collected data from the study area that represent the specific highway types to be modeled. Traditionally, they are reflected in the highway safety manual (HSM), which documents example SPFs for various types of highways and intersections from different jurisdictions [1], [2]. One of the most commonly used methods is parametric modeling (e.g., the negative binomial (NB) model), which requires a series of trial-and-error steps before arriving at a final model structure with a set of significant variables [3]–[5]. Although such models are easy to understand and apply, their predictions have low accuracy due to the random nature of collision occurrences and the strong distributional assumptions. Another line of work uses non-parametric modeling (e.g., kernel regression (KR), support vector machines (SVM), artificial neural networks (ANN)), which has achieved satisfying prediction accuracy [6]–[9]. However, SPF parameters cannot be quantified with these methods, making the models difficult to generalize.

Recently developed artificial intelligence (AI) technology offers new potential solutions to this problem. AI has already revolutionized many industries, bringing in new ideas and exciting technologies, and it has brought changes to nearly every scientific field, with many more advancements to come. Among the most notable techniques, deep learning (also called deep neural networks) is often considered one of the most remarkable [10]–[12]. Since its proposal, it has been successfully applied to complex problems in a variety of fields, including but not limited to pattern recognition, game theory, computer vision, medical treatment, transportation logistics, and finance [13]–[18]. In our previous research, we applied the deep belief network, one of the most popular deep learning models, to establishing SPFs, and the trained model outperformed traditional methods [19], [20]. However, despite the seemingly endless benefits deep learning brings, its opacity often causes doubt and resistance from policy makers and scientists. Some findings highlight that although deep learning models are trained to solve tasks based on human knowledge, they see objects differently than humans do; as a result, AI is not fully trusted in scientific and industrial applications [21], [22]. One of the biggest reasons is that the black-box training process cannot be analyzed. To address these limitations, researchers have begun studying how to defend, strengthen, and decipher deep learning, and have proposed several detection methods for understanding the feature learning process, especially in convolutional neural networks [23]–[34]. However, to the best of our knowledge, a general method for effectively deciphering DNNs for SPF analysis and variable selection is still lacking. In the current literature, three classes of detection methods have been developed to study the black-box problem. The first analyzes the black box based on its external input [25]–[27]. The second uses transparent algorithms comparable to those implemented in deep learning to examine the model’s workings [28], [29]. The third uses other machine learning models, particularly those whose workings are more easily understood, as tools to study the problem [30]–[34].

This paper builds on work in feature importance and visualization [35], [36] and demonstrates a diagram- and numerical-analysis-based method, called visual feature importance (ViFI), to understand the black-box feature learning process and analyze the contributions of the various input features of SPFs. We focus specifically on the deep belief network (DBN). The method intuitively highlights which areas respond positively or negatively to the inputs, and shows how a DBN model, especially in unsupervised learning, learns differently from other methods. Our previous efforts have already shown significant progress in applying DBN to SPF development. In this study, we use the ViFI method as a tool to describe feature importance and establish a more reasonable road safety performance function.

This paper is organized as follows. Section II introduces our methodology, including the method used to generate a weights-based diagram and the calculation process used to identify each feature’s importance. In the unsupervised setting, a visualization diagram based on the change in the weights’ values is generated, and contrastive divergence is used to understand the knowledge learning process. The importance of each input feature is then calculated, including how knowledge is transferred between the input and hidden layers and between hidden layers. Similarly, in the supervised setting, another diagram is generated using the same method but investigated through stochastic gradient descent, and each feature’s contribution is then determined. Section III demonstrates a case study based on our previous research, explaining the implementation of ViFI in more detail; it uses data from Highway 401, Canada, for vehicle collision prediction and shows an improved and more convincing result from this more intelligent way of training a model. Section IV summarizes the study and directions for future research.

    II. Methodology of Visual Feature Importance (ViFI)

The deep belief network (DBN) is one of the most typical models in the DNN area. What makes the DBN distinctive is its training method, called greedy unsupervised training. By stacking several restricted Boltzmann machines (RBM, a two-layer stochastic ANN with one input layer and one output layer) one upon another in training, the DBN learns the features of the input signals without needing a supervisor and obtains a better distributed representation of the input data, without requiring extra labeled data as back propagation does.

The proposed ViFI method is divided into four steps: 1) initialize a DBN structure with its training parameters; 2) observe the change of the weights during unsupervised learning, focusing primarily on the magnitudes associated with each input feature; 3) after unsupervised training, generate a reconstructed input layer from each hidden layer; by observing the activated and non-activated areas, the exact knowledge that has been learned can be better understood; 4) continue with the supervised learning step and generate the weights diagram, using both visualization and a numerical analysis that calculates the contribution of each input feature (either accepted or rejected).

In Step 1, for a given deep belief net with the structure V-H1-H2-O (V input neurons, H1 and H2 hidden neurons in two hidden layers, and O output neurons for prediction), the weights are randomly pre-set.

In Step 2, train the first restricted Boltzmann machine (RBM, comprising V and H1) using greedy unsupervised learning; the feature learning and weight updating equations are given in (1)–(6). Equation (1) represents the starting state of the input data (values between 0 and 1), with the weights W between the two layers randomly given (all zeros are recommended here, for easier calculation in the following steps); v_i and h_j are neurons in V and H1. Equations (2)–(4) are the feature learning equations of an RBM; V0, H0, V1, and H1 are the four states recorded during the transformation, p(·) is the probability of a neuron being activated, w_ij is the weight between neuron i in V and neuron j in H1, and b_j and c_i are the biases. Finally, the weights are updated by applying (5) and (6). Because the weights are all zero at first, if the model senses during unsupervised learning that a feature is important, the weights between that feature's neuron and hidden layer 1 are strengthened, which mathematically leads to a negative ΔW in (5) because more neurons become 1 in V1 and H1. If the model itself judges a feature useless to learn, ΔW becomes positive because V1 and H1 are mostly 0, and W_{t+1} keeps increasing. An illustration is shown in Fig. 1. After unsupervised learning, a reconstructed input is generated from each layer (Fig. 1(b)). This step helps us understand what knowledge the hidden layers have learned, because the reconstructed data highlight the truly useful features [30].
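To make the update concrete, the following is a minimal NumPy sketch of one contrastive-divergence (CD-1) step for the first RBM. The paper's exact equations (1)–(6) are not reproduced in this extraction, so the learning rate and the sign bookkeeping here follow the standard Hinton CD-1 formulation (positive data correlations minus reconstruction correlations) and should be read as assumptions rather than the authors' exact form.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(X, W, b, c, lr=0.1, rng=np.random.default_rng(0)):
    """One CD-1 step for an RBM. X: (n_samples, V); W: (V, H1); b, c: biases."""
    # Positive phase: V0 -> H0
    V0 = X
    p_h0 = sigmoid(V0 @ W + b)                         # p(h_j = 1 | V0)
    H0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hidden state
    # Negative phase: reconstruct V1, then recompute hidden probabilities
    p_v1 = sigmoid(H0 @ W.T + c)                        # p(v_i = 1 | H0)
    V1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(V1 @ W + b)                          # p(h_j = 1 | V1)
    # Weight update: data correlations minus reconstruction correlations
    dW = (V0.T @ p_h0 - V1.T @ p_h1) / len(X)
    return (W + lr * dW,
            b + lr * (p_h0 - p_h1).mean(axis=0),
            c + lr * (V0 - V1).mean(axis=0))

# Toy usage: 6 input features, 10 hidden units, weights initialized to zero
X = (np.random.default_rng(1).random((32, 6)) > 0.5).astype(float)
W, b, c = np.zeros((6, 10)), np.zeros(10), np.zeros(6)
for epoch in range(60):
    W, b, c = cd1_update(X, W, b, c)
```

Recording W after each of the 60 epochs yields exactly the per-epoch snapshots that the ViFI diagram visualizes.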

Fig. 1. Illustration of visualization. (a) Visualization of the change of weights. (b) Visualization of layer reconstruction.

The mean value of the weights on each feature is then calculated, using the results of unsupervised learning together with those from supervised learning. As the weight updating equation (6) is linear, we define the contributions in feature learning using the linear functions shown in (11)–(13), in which FI_i is the overall importance of feature i, FI_i^u is its importance in unsupervised learning, FI_i^f is its importance after fine-tuning, W_i^n denotes the weights that connect to feature i in epoch n, V is the number of features, and H is the number of hidden units.
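Equations (11)–(13) themselves did not survive extraction. As described, though, the unsupervised importance is a linear function of the mean of the weights attached to each feature; a minimal sketch of one plausible reading, scoring a feature by the mean absolute change of its outgoing weights between the first and last epoch, is shown below (the normalization is an assumption).

```python
import numpy as np

def unsup_importance(W_start, W_end):
    """Score each input feature by how much its outgoing weights changed
    during unsupervised learning (mean absolute change over hidden units).

    W_start, W_end: (V, H) weight matrices; returns a length-V vector."""
    return np.abs(W_end - W_start).mean(axis=1)

# Toy example: 6 features x 10 hidden units, weights start at zero
rng = np.random.default_rng(0)
W_start = np.zeros((6, 10))
W_end = rng.normal(scale=[[1.0], [0.1], [0.3], [0.2], [0.2], [0.9]],
                   size=(6, 10))
scores = unsup_importance(W_start, W_end)
print(scores / scores.sum())   # normalized importance, one value per feature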

    III. Case Study

    A. Experimental Design

To evaluate the effect of ViFI, an empirical study is conducted using historical data from Highway 401, a multilane access-controlled highway in Ontario, Canada. This highway is one of the busiest in North America, connecting Quebec in the east with the Windsor-Detroit international border in the west. Its total length is 817.9 km, of which approximately 800 km was selected for this study. According to 2008 traffic volume data, the annual average daily traffic ranges from 14 500 to 442 900, indicating a very busy corridor. The processed crash and traffic data are integrated into a single dataset, with homogeneous sections and year as the mapping fields, resulting in a total of 3762 records. The six input features in this dataset are annual average daily commercial traffic (AADCT), median width, left shoulder width, right shoulder width, curve deflection, and exposure. Table I summarizes the continuous input features, including the sample sizes for training and testing. After training, the performance of each model is evaluated using the mean absolute error (MAE) and root mean square error (RMSE), as defined in (14) and (15).
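Equations (14) and (15) are the standard definitions of these two metrics; a direct NumPy implementation:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, as in (14)."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def rmse(y_true, y_pred):
    """Root mean square error, as in (15)."""
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

print(mae([3, 5, 2], [2.5, 5.5, 2.0]))    # 0.333...
print(rmse([3, 5, 2], [2.5, 5.5, 2.0]))   # 0.408...
```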

TABLE I Summary of the Dataset (Highway 401, Ontario)

In our previous research, we applied an improved version of the DBN (regularized DBN, R-DBN) to collision prediction; in comparison, it outperformed the negative binomial (NB) model, one of the most widely used techniques in road safety analysis and the one adopted by the highway safety manual [2], [9], [19]. The improved DBN uses a continuous version of the transfer function for unsupervised learning ((16)–(18)) and Bayesian regularization for fine-tuning ((19) and (20)). We keep using this model not only because ViFI generalizes to it, but also for easier comparison with the published results. In the equations, x_i and y_j are the continuous values of units i and j in the two layers; w_ij is the weight between them; N(0, 1) is a Gaussian random variable with mean 0 and variance 1; σ is a constant; φ(X) denotes a sigmoid-like function with asymptotes θ_H and θ_L; a is a variable that controls the noise; F_W is the new optimization function in fine-tuning; R_W is the Bayesian regularization term that reduces over-fitting by controlling the values of the weights; and α and β are performance parameters that can be calculated during iteration by adopting Bayesian regularization.
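A sketch of the continuous-unit transfer described for (16)–(18), following the stated definitions (the parameter values and the exact placement of a in the sigmoid are assumptions):

```python
import numpy as np

def continuous_activation(x, W, sigma=0.2, a=1.0, theta_L=0.0, theta_H=1.0,
                          rng=np.random.default_rng(0)):
    """Continuous unit: y_j = phi(sum_i x_i * w_ij + sigma * N(0, 1)),
    where phi is sigmoid-like with asymptotes theta_L and theta_H and
    a controls the slope (and hence the effective noise level)."""
    pre = x @ W + sigma * rng.standard_normal(W.shape[1])
    return theta_L + (theta_H - theta_L) / (1.0 + np.exp(-a * pre))

x = np.random.default_rng(1).random(6)                     # six inputs in [0, 1]
W = np.random.default_rng(2).normal(scale=0.1, size=(6, 10))
print(continuous_activation(x, W))                         # ten continuous outputs
```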

B. Applying ViFI in Unsupervised Learning

The model is initialized with six input neurons, one for each feature (exposure, AADCT, left shoulder width, median width, right shoulder width, curve deflection), two hidden layers with ten neurons each, and one output layer containing a single neuron for vehicle collision prediction. In Step 2, the weights between the input and hidden layer 1 are written as W1 = (w_11, w_12, ..., w_110, w_21, ..., w_ij, ..., w_61, ..., w_610), where i (from 1 to 6) and j (from 1 to 10) index the neurons in the two layers. Fig. 2 visualizes the structure, highlighting how the weights form the connections between layers and how they are updated. In Fig. 2(b), the top row is the first epoch and the bottom row is the sixtieth epoch. Because the weights are initially zero, the color starts entirely white; the vertical direction shows the change of the weights over epochs. During unsupervised learning, some weights become very dark along the vertical direction while others do not. According to the earlier numerical analysis, the more important a feature is, the more knowledge the hidden layer needs to learn from it, and thus the bigger the difference will be. We therefore infer that all features appear useful in unsupervised learning, especially exposure and curve deflection (input neurons 1 and 6). After Steps 1 and 2 are complete, the reconstruction step is applied to the hidden layers to reconstruct the input data. The patterns in the features reconstructed from the two hidden layers are similar, which can be a sign of equal feature learning ability.

    Fig. 2. Applying visualization in unsupervised learning. (a) The structure of the model being used in experiment. (b) The trained W1 after 60 epochs.
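As a concrete illustration, a diagram like Fig. 2(b) can be rendered with matplotlib by stacking the per-epoch snapshots of W1 into an image, one row per epoch. The snapshots below are synthetic placeholders standing in for the real training trace; the grayscale colormap is an assumption.

```python
import numpy as np
import matplotlib.pyplot as plt

# weight_history: one (6, 10) snapshot of W1 per epoch (synthetic here)
rng = np.random.default_rng(0)
weight_history = [rng.normal(scale=(e + 1) / 60.0, size=(6, 10))
                  for e in range(60)]

# One row per epoch, one column per weight; darker cells = larger |w_ij|
img = np.stack([np.abs(W.ravel()) for W in weight_history])
plt.imshow(img, cmap="gray_r", aspect="auto")
plt.xlabel("weight index (feature i x hidden unit j)")
plt.ylabel("epoch (top = first, bottom = last)")
plt.title("Change of W1 during unsupervised learning")
plt.show()
```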

    Fig. 3. Applying visualization in supervised learning. (a) The change of weights between input and hidden layer 1. (b) The change of weights on each feature.

    C. Applying ViFI in Fine-tuning

The process then moves on to supervised training (fine-tuning), in which the model is trained for 5000 iterations; the change of the weights between the input and hidden layer 1 is visualized (see Fig. 3(a)) and compared with the previous results from Step 2 in Fig. 2. This step studies how the black box uses the teacher's signal in supervised learning; it also acts as validation and assists the self-learning process. After fine-tuning, the weights joining each feature to the black box are drawn in Fig. 3(b). In each sub-figure of Fig. 3(b), the X axis shows the 5000 iterations and the Y axis shows the value of the weights. There are ten lines in each sub-figure, each representing one weight between the feature and a neuron in hidden layer 1. Applying the same analysis as in Section II, if the weights increase, the corresponding feature is considered more important than before; if they decrease, this may be a sign of a wrong judgement during self-learning. Moreover, the principle of sparse connections suggests that the weights should become dispersive, otherwise over-fitting may result. From Fig. 3 we find that the first feature's weights drop slightly at first and then increase again, the second feature's weights keep decreasing, the third and fourth features' weights increase at first and then fall a little, and the weights of features five and six increase throughout. We therefore conclude that the model has reduced the magnitude of Feature 2 and increased the importance of Features 5 and 6. The figure also shows that the model maintains sparse connections, indicating good training with no over-fitting.
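A sketch of how the per-feature trajectories of Fig. 3(b) can be plotted, assuming the W1 snapshots over fine-tuning are stored as a (5000, 6, 10) array (the random walks below are placeholders for the real trace):

```python
import numpy as np
import matplotlib.pyplot as plt

# W_history: W1 snapshots over fine-tuning, shape (n_iterations, 6, 10)
rng = np.random.default_rng(0)
W_history = np.cumsum(rng.normal(scale=0.01, size=(5000, 6, 10)), axis=0)

features = ["exposure", "AADCT", "left shoulder width",
            "median width", "right shoulder width", "curve deflection"]
fig, axes = plt.subplots(2, 3, figsize=(12, 6), sharex=True)
for i, ax in enumerate(axes.ravel()):
    ax.plot(W_history[:, i, :])   # ten lines: feature i to each hidden unit
    ax.set_title(features[i])
    ax.set_xlabel("iteration")
    ax.set_ylabel("weight value")
fig.tight_layout()
plt.show()
```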

Using these equations, the calculated importances in unsupervised learning are [0.428, 0.117, 0.143, 0.084, 0.087, 0.393] for the six features (exposure, AADCT, left shoulder width, median width, right shoulder width, curve deflection), respectively. After fine-tuning, since the weight updating is based on a nonlinear function, we define the changes of the contributions using a sigmoid function; the results are [0.928, -0.321, 0.688, 0.589, 0.635, 1.015]. These judgements (contributions of the features) in the two stages are compared in Fig. 4 and Table II. In Fig. 4, the features from left to right are exposure, annual average daily commercial traffic (AADCT), left shoulder width, median width, right shoulder width, and curve deflection. Initially, Features 1 and 6 are found to be significant by self-learning (blue bars). After fine-tuning, the result is modified: all the features become more important except the second feature, AADCT, which surprisingly is even considered a distraction (negative contribution) to the training.

    TABLE II Model Testing Comparison

    Fig. 4. The calculated feature importance in the two training stages.

D. Validation of the Findings

The experiments above highlight the information gained from each step of the ViFI process: Step 2 expresses the importance of features in unsupervised learning, Step 3 presents the understanding of the black box, and Step 4 provides a fine-tuning calculation of feature magnitudes. For this case study, Features 1 and 6 (exposure and curve deflection) were confirmed to be the most important, while Features 3, 4, and 5 (left shoulder width, median width, and right shoulder width) were also significant contributors. Feature 2 (AADCT) was identified as a distraction to the model. Step 3 also showed that the second hidden layer was as important as the first, having similar feature learning ability. To calibrate these findings, a sensitivity experiment is designed and implemented. A model of size 5-10-10-1 (input, hidden layer 1, hidden layer 2, output) with the same learning rate and epochs is used. Six models are trained, each without one specific feature (from Feature 1 to Feature 6). A model with all the features is trained as the baseline, and an additional model with one of the hidden layers removed is trained for comparison. The results are shown in Fig. 5 and Table II.
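This sensitivity experiment is a leave-one-feature-out ablation loop. A sketch follows, with a hypothetical `train_and_eval` helper standing in for the full R-DBN training pipeline; the toy data and the placeholder predictor inside it are assumptions, not the authors' code.

```python
import numpy as np

FEATURES = ["exposure", "AADCT", "left shoulder width",
            "median width", "right shoulder width", "curve deflection"]

def train_and_eval(X_train, y_train, X_test, y_test):
    """Hypothetical stand-in: train an R-DBN of size 5-10-10-1 on the given
    features and return the testing MAE (placeholder predictor here)."""
    pred = np.full(len(y_test), y_train.mean())
    return np.mean(np.abs(y_test - pred))

rng = np.random.default_rng(0)
X = rng.random((3762, 6))                       # toy stand-in for the dataset
y = rng.poisson(5, 3762).astype(float)          # toy collision counts
split = int(0.8 * len(X))

baseline = train_and_eval(X[:split], y[:split], X[split:], y[split:])
for i, name in enumerate(FEATURES):
    Xd = np.delete(X, i, axis=1)                # leave feature i out
    mae_i = train_and_eval(Xd[:split], y[:split], Xd[split:], y[split:])
    print(f"without {name}: MAE {mae_i:.3f} (baseline {baseline:.3f})")
```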

Fig. 5 indicates that when Feature 1 is excluded, the testing MAE increases dramatically, whereas the model trained without Feature 2 outperforms the baseline. The left sub-figure in the first row compares the minimum testing MAE when each feature is excluded: after eliminating the second feature (AADCT), the model outperforms all the others; if the first feature (exposure) is deleted, performance becomes much worse; and excluding the other features also worsens the results to varying degrees. The same trend appears in the second sub-figure of the first row for the average testing MAE. It should be noted that when hidden layer 2 is deleted, the testing result is better than that of the regular DBN when the training dataset is small; this is reasonable because the dataset used in this analysis is small and a large model can easily over-fit during training, which confirms the theory discussed previously. Fig. 6 compares the performance of R-DBN without Feature 2 against three other models (NB, KR, and R-DBN) as a function of data size. As reported in our previous paper [20], the performance of NB does not change substantially as training data increase. Similarly, KR shows some improvement but eventually reaches a limit. R-DBN clearly improves, especially as training data increase. In contrast, the decoded R-DBN, with the unwanted feature eliminated, achieves much better performance than the others: in Fig. 6(a) its minimum testing MAE is consistently the lowest, and in Fig. 6(c) it beats KR at a training data percentage of 40, much sooner than the previous R-DBN. The sub-figures in the second and third rows of Fig. 6 show the MAE by training dataset sample size under different model parameters; the central mark in each box is the median, and the box edges are the 25th and 75th percentiles. MAE is high at low data sizes but decreases quickly as data size increases. In general, the lower the box, the more accurate the prediction, and the narrower the box, the more robust the model, so these figures provide another way of verifying the model's effectiveness. Table II compares four models: the negative binomial (NB) model, one of the most popular in real-world applications; kernel regression (KR) and back propagation neural networks (BPNN), two popular traditional machine learning methods; and the regularized deep belief network (R-DBN), an improved version of DBN and one of the most significant models in deep learning. R-DBN demonstrates excellent performance compared with the traditional models, and the decoded R-DBN outperforms the original version, achieving a minimum MAE of 7.58 and a minimum RMSE of 15.03. Moreover, Table II compares the feature importance obtained by a traditional numerical method and by the deep neural network; similar trends can be observed, showing that the deep neural network not only correctly identifies unwanted features but also makes better use of the useful ones.

    Fig. 5. Testing results applying ViFI.

    Fig. 6. Results comparison. (a) Testing minimum MAE. (b) Testing maximum MAE. (c) Testing average MAE.

    Fig. 7. Testing result on extra rubbish features.

    E. Model Testing With Extra Noisy Features

The experiments above have shown that the deep neural network model is capable of distinguishing the contributions of features. Because Feature 1 (exposure) is already derived from Feature 2 (AADCT), it is reasonable to conclude that AADCT is a redundant feature. Still, more evidence is required: for example, can the model distinguish features that are totally unrelated to the dataset? To answer this question, in this experiment the model is further tested with deliberately designed extra rubbish features.

Two extra input features are designed: Feature 7 takes random values between 0 and 1, and Feature 8 is the constant value 0.5. The model size is set to 8 input units, two hidden layers with 10 units each, and 1 output unit; the other training parameters are kept as before. After training, the feature importance is analyzed in Fig. 7: Fig. 7(a) shows the result of ViFI in unsupervised learning, and Fig. 7(b) shows the conclusion after fine-tuning. According to Fig. 7(a), rubbish features are easily distinguished in unsupervised learning because their connections are mostly weak from the start. According to Fig. 7(b), the model confirms its previous judgement on the rubbish features and gives them large negative contribution scores. The figure also shows that the model still makes the same decisions on the original features as in Fig. 5. Table III compares the feature importance with and without the extra rubbish features: the model makes very similar judgements on the six original features, and the two extra unrelated features are easily identified, with quantified contributions that are interestingly close (-0.502 and -0.501).
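Constructing the two extra inputs is straightforward; a sketch (the array of original features is a toy stand-in for the real dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((3762, 6))              # the six original features (toy values)

feat7 = rng.random((len(X), 1))        # Feature 7: uniform noise in [0, 1]
feat8 = np.full((len(X), 1), 0.5)      # Feature 8: constant 0.5
X_aug = np.hstack([X, feat7, feat8])   # augmented input, 8 features wide
print(X_aug.shape)                     # (3762, 8): model now takes 8 inputs
```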

    TABLE III Feature Importance Comparison

    IV. Conclusion

In this paper we have proposed a visual feature importance (ViFI) method for deep neural networks to quantify the significance of features in road safety performance functions. The method combines numerical analysis and visualization to help developers understand, in a more intuitive and quantitative way, how feature learning by machines differs from that by humans, and to better understand and study the training process of deep learning. Our study highlights that in unsupervised learning, bigger changes in the weights correlate strongly with the apparent importance of a feature, while during fine-tuning, bigger increases in the weights highlight the significance of the features they connect to. It should be noted that these conclusions assume that the training maintains sparse connections and that no over-fitting occurs. Finally, the effects of six features identified as contributing factors to vehicle collisions on a highway were analyzed using the decoded model. The experiments also indicate that the model successfully distinguished the useful and useless features, and that by curating the training dataset, a more accurate and robust model is trained.

Despite the benefits and results achieved here, this research leaves several issues and uncertainties for further study. The first is the input dimension problem, which arises for a globally general model when the inputs have large dimensions: there are not only many input neurons, but these neurons may also lack specific physical meanings, because the features are abstract and distributed over a region rather than concentrated in a single neuron [37], [38]. To handle this, a generalized version (a convolutional DBN, for example) should be studied. The second issue is: even if we completely understand the learning process of the black box, should we re-teach the model if it does not look at the features we want it to learn, even when its output is perfectly correct? Would this change the learning process that we humans have used for years? While these questions may be answered in the future, for now the only conclusion we can draw is that machines learn differently from humans and, in some cases, differently from other machines.
