
    Detecting Driver Distraction Using Deep-Learning Approach

Computers, Materials & Continua, July 2021

Khalid A. AlShalfan and Mohammed Zakariah

1College of Computer and Information Sciences, Al-Imam Muhammad Ibn Saud Islamic University, Riyadh, 11564, Saudi Arabia

2College of Computer and Information Science, King Saud University, Riyadh, 11442, Saudi Arabia

Abstract: Distracted driving is currently among the most important causes of traffic accidents. Consequently, intelligent vehicle-driving systems have become increasingly important, and interest in driver-assistance systems that detect driver actions and help drivers drive safely has grown. Although such studies use several distinct data types, such as the driver's physical condition, audio and visual features, and vehicle information, the primary data source is images of the driver, including the face, arms, and hands, taken with a camera inside the car. In this study, an architecture based on a convolutional neural network (CNN) is proposed to detect and classify driver distraction. An efficient CNN with high accuracy is implemented: a new architecture is proposed based on the Visual Geometry Group (VGG-16) architecture, which was originally developed for large-scale image recognition with very deep convolutional networks. The proposed architecture was evaluated using the StateFarm driver-distraction dataset, which is publicly available on Kaggle and frequently used for this type of research, and achieved 96.95% accuracy.

Keywords: Deep learning; driver-distraction detection; convolutional neural networks; VGG-16

    1 Introduction

According to the World Health Organization (WHO) [1], approximately 1.35 million people die every year due to traffic accidents; on average, approximately 3,700 people die every day on the world's roads. A heartbreaking statistic in this report is that road-accident injuries are a leading cause of death among young people between 5 and 29 years of age [1]. As per the WHO report, the total number of fatalities increases each year, and the most common cause is driver distraction. In Saudi Arabia, a report by the Department of Statistics of King Abdul-Aziz City for Science and Technology (KACST) indicated that, in 2003, approximately 4,293 people were killed and more than 30,439 were injured due to traffic accidents [2]. Mobile-phone use is becoming common, and young people in particular talk on their phones while driving, which increases the possibility of fatal accidents. The KACST report indicates that mobile-phone usage while driving increases the risk of an accident by a factor of 4, and that texting while driving is approximately 23 times riskier than talking on a mobile phone. Driver reaction time is significantly reduced when talking on a mobile phone while driving; the KACST report states that reaction time is reduced by approximately 50% and further indicates that, for years, distracted driving has been considered one of the main causes of car accidents [3].

Apart from loss of life, traffic accidents cause other harms, such as property damage, much of which is attributed to distracted drivers. According to the American National Highway Traffic Safety Administration, approximately 20% of traffic accidents in the United States are due to driver distraction, and 90% of accidents are caused by human error [4]. Driver error is considered the major reason for the increasing number of accidents [5]. For example, in 2015, approximately 3,477 people died and nearly 391,000 were injured in traffic accidents caused by distracted drivers [6]. Drivers are distracted for various reasons, including talking on a mobile phone, changing the radio station, eating, drinking, and talking to fellow passengers. Therefore, to reduce the number of accidents, these actions should be monitored and checked [7,8]. A considerable number of studies have investigated these problems. For example, mobile-phone usage has been detected using acoustic or cell-phone sensing techniques to locate the phone; another technique monitors and tracks the driver's gaze [9,10]. Driver distraction can also be detected by capturing images with a camera placed in front of the driver or inside the car and transmitting the captured images to a classifier that detects driver actions. As noted above, drivers are distracted in many ways. Common distractions include eating and drinking while driving [11], communicating with fellow passengers [12–14], using in-vehicle electronic devices [15,16], observing roadside digital billboards/advertisements/logo signs [17], and using a mobile phone to call or text. An effective way to address distracted driving is to develop a system that monitors distractions and adapts the in-vehicle information-system functionalities according to the driver's state. In such systems, correctly identifying the driver's state is very important.

This study focuses on identifying the driver's state. In the future, this technology could be applied in smart cities to detect driver distraction automatically and send a warning message to the driver to prevent accidents. It could also allow law-enforcement authorities to identify distracted drivers, monitor them using radar and cameras, and penalize them once detected. Moreover, recently developed semi-autonomous commercial cars require drivers to pay attention to road and traffic conditions [18], and autonomous steering-control [19] systems require drivers to be ready to take control of the wheel [18]; thus, distracted-driver detection is an important system component in these cars. Distraction detection can also enable advanced driver-assistance system features [20], such as collision-avoidance systems that plan evasive maneuvers [21]. To reduce vehicle accidents and improve transportation safety, a system that can classify distracted driving is highly desirable and has attracted much research interest. In this study, a deep-learning technique is applied; deep learning is a family of machine-learning methods based on artificial intelligence and is used in many fields, e.g., computer vision, speech recognition, natural language processing, and audio recognition. Experiments were conducted on driver images to detect whether drivers are distracted while driving. As discussed above, the consequences of distracted driving can be grave. Driver distraction is classified into nine different classes based on driver actions while driving. In this study, the StateFarm dataset is used, and a convolutional-neural-network (CNN) technique is applied to train the model and classify real images.

The rest of this article is organized as follows. Section 2 reviews work related to driver-distraction detection, and Section 3 describes the dataset used in this study. Section 4 explains the overall distraction-detection methodology. The experimental setup is described in Section 5, and the experimental results are presented in Section 6. Finally, conclusions are presented in Section 7, and future scope is detailed in Section 8.

    2 Related Work

The authors of [22] developed a support-vector-machine-based (SVM-based) model to detect the use of a mobile phone while driving. The dataset used in that work consists of frontal images of the driver's face; the pictures used to develop the model showed both hands and the face and focused on two driver actions, i.e., a driver with a phone and one without. An SVM classifier was applied to these frontal images to detect the driver actions. In a similar study [23], SVM classification was used to detect mobile-phone use while driving, with a dataset collected using transportation imaging cameras placed on highways and at traffic lights. The authors of another study [24] collected data by having subjects sit on a chair and mimic a certain type of distraction (e.g., talking on a mobile phone); an AdaBoost classifier was applied along with hidden Markov models to classify Kinect RGB-D data. However, that study had two limitations, i.e., the effect of lighting and the distance between the Kinect device and the driver. In real-time applications, the effect of light is very important and should be taken into consideration for reliable results. The authors of [25] suggested using a hidden conditional random-fields model to detect mobile-phone usage. The database was created using a camera mounted above the dashboard, and the model considers the face, mouth, and hand features of the captured images. In another study related to phone usage [26], a Faster Region-Based Convolutional Neural Network (Faster R-CNN) model was designed to detect both phone use by the driver and hands on the wheel; the main contribution was the segmentation of hand and face images. The dataset used to train the Faster R-CNN was the same as that used in [27], in which a novel methodology called multiscale Faster R-CNN was proposed and achieved higher accuracy with low processing time. In [27], the authors created a dataset for hand detection in an automotive environment and achieved an average precision of 70.09% using the aggregate-channel-features object detector. Another study [28] discusses the detection of cell-phone usage by a driver using histogram-of-oriented-gradients (HOG) features and AdaBoost classifiers. Initially, the supervised descent method was applied to locate the landmarks on the face, followed by extraction of bounding boxes from the left-hand side of the face to the right-hand side. A classifier was trained on these two regions to detect the face, followed by left- and right-hand detection. Finally, after applying segmentation and training, 93.9% accuracy at 7.5 frames per second was obtained. Motivated by the same problem, several researchers have worked on detecting cell-phone usage while driving. In [29], the authors designed a more inclusive distracted-driving dataset with a side view of the driver, considering four activities: safe driving, operating the shift lever, eating, and talking on a cell phone. They achieved 90.5% accuracy using the contourlet transform and a random forest. The same authors also proposed a system using a pyramid of histograms of oriented gradients (PHOG) and a multilayer perceptron, which yields an accuracy of 94.75% [30], as well as a Faster R-CNN approach [31]. The study in [32] investigated three actions, i.e., hands on the wheel, interaction with the gears, and interaction with the radio, and attempted to classify them. Separate cameras were fixed to capture face and hand actions, and an SVM classifier was applied to determine the driver's actions; 90% and 94% accuracy were achieved for hand actions alone and for combined face- and hand-related information, respectively. In another study [33], the Southeast University dataset was used, focusing on hands on the wheel, phone usage, eating, and operating the shift lever; each image in the dataset is labeled with one of these four driver actions. The authors applied various classifiers and obtained 99% accuracy with a CNN classifier [33]. Another study [34] focused on the following seven actions: checking the left-hand mirror, checking the right-hand mirror, checking the rear mirror, handling in-hand radio devices, using the phone (receiving calls and texting), and normal driving. In a pre-processing step, the authors cropped the driver's body from the image, removed the background information, and applied a Gaussian mixture model; CNN classification was then applied to classify the actions. Using AlexNet [35], 91% accuracy was achieved for predicting driver distraction. In other studies [36,37], segmentation was performed to extract the drivers' actions by applying Speeded-Up Robust Features (SURF) [36] key points. After pre-processing, HOG features were extracted and K-NN classification applied, achieving a recognition rate of approximately 70%. In [37], a similar methodology was applied; however, two different CNN models were used to classify the actions, and in the first model a triplet loss was applied to increase the overall accuracy of the classification model, yielding a 98% accuracy rate. That work used 10 actions and the StateFarm dataset [38], which comprises labeled images for distraction detection. The detected actions included safe driving, texting with the right hand, talking on the phone with the right hand, texting with the left hand, talking on the phone with the left hand, changing the radio station, drinking, reaching behind to the back seat, checking hair and makeup, and talking to the passenger sitting beside the driver. In [39,40], the American University in Cairo (AUC) dataset, which is similar to the StateFarm dataset, was used to detect distraction. In [39], training was done using five different types of CNNs, and the experimental results were weighted by a genetic algorithm to further classify the actions. In another work by the same authors [40], a modified CNN was applied with numerous regularization techniques that helped reduce overfitting; that work achieved 96% accuracy for driver distraction. A deep-learning technique was applied in [41] to the video versions of the AUC and StateFarm datasets. The authors proposed a deep-neural-network approach called multi-stream long short-term memory (M-LSTM). The features used in this work were from [42], with contextual information retrieved from another pretrained CNN; 52.22% and 91.25% accuracy were achieved on the AUC and StateFarm distracted-driver datasets, respectively. In [43], a deep convolutional network was applied for large-scale image recognition. Similar work on face recognition was implemented in [44] in three dimensions by applying feature-extraction and classification methodologies, and another recognition technique [45] was applied to palmprints using a robust two-dimensional Cochlear transformation.

    3 Dataset

The StateFarm distraction-detection dataset, published on Kaggle [38] for a competition, was used in the present study. It is the most commonly used dataset for driver-distraction detection and has been applied in many studies. The StateFarm dataset includes 10 classes, and each image is assigned to one of them. The categories cover the following driver actions: driving safely, texting with the right hand, talking on the phone with the right hand, texting with the left hand, talking on the phone with the left hand, operating the radio, drinking, reaching behind, fixing hair and makeup, and talking to passengers, as shown in Fig. 1 and discussed in [46]. There are approximately 2,200 RGB images per class, each with a resolution of 640×480 pixels. The number of pictures for each class is listed in Tab. 1. The holdout split technique was applied to the original training set to produce 10% and 30% testing sets, creating new training and testing subsets. The number and percentage of samples for each class label in the training and testing sets are shown in Tab. 2.
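The holdout split described above can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' code: the function and variable names are my own, and it assumes the split is drawn per class from an array of integer labels.

```python
import numpy as np

def holdout_split(labels, test_fraction=0.3, seed=42):
    """Per-class holdout split, as used to carve a testing subset
    out of the original StateFarm training set."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)   # indices of this class
        rng.shuffle(idx)
        n_test = int(round(test_fraction * len(idx)))
        test_idx.extend(idx[:n_test])
        train_idx.extend(idx[n_test:])
    return np.array(train_idx), np.array(test_idx)

# Example: 10 classes with ~2,200 images each, as in the dataset description.
labels = np.repeat(np.arange(10), 2200)
train, test = holdout_split(labels, test_fraction=0.3)
```

Splitting per class keeps the class proportions identical in both subsets, which matters for the per-class accuracies reported later.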

Figure 1: Sample images representing different actions in the StateFarm dataset

Table 1: Dataset details with the number of images per class

Table 2: Distribution of images into training and testing samples

    4 Methodology

A deep CNN is a type of artificial neural network inspired by the animal visual cortex. In recent years, CNNs have demonstrated tremendous achievements in various applications, e.g., image classification, object and action detection, and natural language processing.

    4.1 Basic CNN Architecture

A CNN comprises convolutional (filter) layers, activation functions, pooling layers, and a fully connected layer, as shown in Fig. 2. A complete CNN is a collection of these layers in the proper form and sequence.
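To make the layer sequence concrete, here is a minimal single-channel forward pass in NumPy: a convolution, a ReLU activation, max pooling, and a dense layer. This is an illustrative sketch under assumed shapes (28×28 input, 3×3 kernel, 10 output classes), not the paper's implementation.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (really cross-correlation, as in CNNs)."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(z):
    """Element-wise rectification: negative values become zero."""
    return np.maximum(0.0, z)

def maxpool(x, s=2):
    """Non-overlapping s×s max pooling (trailing rows/cols dropped)."""
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.random((28, 28))
kernel = rng.standard_normal((3, 3))
features = maxpool(relu(conv2d(image, kernel)))          # 28 -> 26 -> 13
logits = features.reshape(-1) @ rng.standard_normal((169, 10))  # dense layer
```

The shapes illustrate why pooling matters: each 2×2 pool halves the spatial resolution, so the dense layer at the end sees far fewer values than the raw image contains.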

Figure 2: Basic CNN architecture

    4.2 Proposed Architecture

Standard CNNs have been enhanced by the availability of large amounts of labeled data and increased computational power. Several modified CNN architectures have been developed for computer-vision tasks, e.g., AlexNet, ZFNet, VGGNet, GoogleNet, and ResNet. In the work described in this study, the Visual Geometry Group (VGG-16) architecture was adopted and modified to detect distracted drivers, as shown in Fig. 3.

Figure 3: CNN architecture for distracted-driver detection

Original and Modified VGG-16 Architecture: According to the literature, VGGNet is among the most powerful CNN architectures; its main idea is a network that is deep yet simple. The VGG architecture, shown in Fig. 4, is an efficient tool for image-classification and localization tasks. The standard VGG architecture uses 3×3 filters in all convolution layers, ReLU as the activation function, 2×2 pooling with a stride of 2, and a cross-entropy loss function. The pre-trained ImageNet model weights were used for initialization, and all layers were then fine-tuned on our dataset. The images were pre-processed by resizing them to 224×224 pixels and subtracting the mean RGB value from each pixel, and the pre-processed data were fed to the network. The initial layers of the CNN perform filtering to extract features, and softmax is employed as the activation function in the network's final layer to classify images into one of the pre-defined categories. Here, parameter reduction is significant; thus, numerous regularization techniques were applied to handle generalization errors [40]. The fully connected layers were replaced with convolutional layers, because dense layers are computationally expensive and account for nearly all of the parameters. In addition, to control overfitting, batch normalization and dropout were applied between layers, with several neurons randomly dropped during the training phase, as shown in Fig. 5.
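The two overfitting controls mentioned above, dropout and batch normalization, can be sketched in NumPy as follows. This is an illustrative reconstruction, not the authors' code; the dropout rate and array shapes are assumptions for demonstration only.

```python
import numpy as np

def dropout(x, rate, rng, training=True):
    """Inverted dropout: zero a fraction `rate` of activations during
    training and rescale the survivors so the expected activation
    is unchanged at inference time."""
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def batch_norm(x, eps=1e-5):
    """Normalize each feature over the batch to zero mean, unit variance."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

rng = np.random.default_rng(1)
acts = rng.standard_normal((64, 128))        # a batch of 64 activation vectors
dropped = dropout(acts, rate=0.5, rng=rng)   # roughly half the units zeroed
normed = batch_norm(dropped)                 # per-feature zero mean, unit variance
```

Because the surviving activations are rescaled by 1/(1−rate), no change is needed at test time: dropout is simply switched off.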

Figure 4: Original VGG architecture

Figure 5: Modified VGG architecture

    4.3 Activation Function

The ReLU function is currently the most widely used activation function in deep learning, appearing in almost all CNNs. It is defined in Eq. (1):

f(z) = max(0, z)    (1)

As can be seen, the ReLU function is half-rectified (from below): f(z) is zero when z is less than zero and equal to z when z is greater than or equal to zero.

The dense layer's output is then passed through a dropout layer, followed by a dense layer whose number of neurons equals the number of class labels, with a softmax activation function, as shown in Eq. (2). The softmax activation function computes the probability of each class label and is calculated as

softmax(z_i) = e^{z_i} / Σ_j e^{z_j}    (2)
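Both activation functions can be written directly in NumPy. A minimal sketch of ReLU and softmax follows (the max-subtraction inside softmax is a standard numerical-stability trick added here; the example scores are hypothetical):

```python
import numpy as np

def relu(z):
    """f(z) = max(0, z): zero for negative inputs, identity otherwise."""
    return np.maximum(0.0, z)

def softmax(z):
    """Turn raw class scores into probabilities that sum to 1."""
    e = np.exp(z - z.max())   # subtracting the max avoids overflow
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])   # hypothetical final-layer outputs
probs = softmax(scores)              # highest score -> highest probability
```

The predicted class is simply the index of the largest probability, which is why softmax is placed on the final layer.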

    4.4 Performance Metrics

To evaluate the experimental results, three standard performance metrics were used, i.e., accuracy, precision, and sensitivity. These metrics are computed from the following quantities: TP is the number of true positives, i.e., positive samples in the testing set that were correctly detected; TN is the number of true negatives, i.e., negative samples that were correctly detected; FP is the number of false positives, i.e., samples incorrectly detected as positive; and FN is the number of false negatives, i.e., positive samples incorrectly detected as negative.

Accuracy is the portion of samples in the entire testing set that are detected correctly. Precision is the percentage of correctly detected samples among the total number of TP and FP samples. Sensitivity is the number of TP samples divided by the sum of TP and FN samples. These metrics are given in Eqs. (3)–(5):

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (3)

Precision = TP / (TP + FP)    (4)

Sensitivity = TP / (TP + FN)    (5)
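The three metrics can be computed directly from the four counts. A small illustrative helper follows; the example counts are hypothetical, not taken from the paper's results.

```python
def metrics_from_counts(tp, tn, fp, fn):
    """Accuracy, precision, and sensitivity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return accuracy, precision, sensitivity

# Hypothetical counts for one class, for illustration only.
acc, prec, sens = metrics_from_counts(tp=90, tn=95, fp=5, fn=10)
```

For multi-class problems such as this one, the counts are taken per class (one class as "positive", the rest as "negative"), which is how the class-wise accuracies in Tab. 4 arise.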

    5 Experimental Setup

A CNN was designed to detect distracted-driver behaviors. The weights were initialized from the ImageNet model, followed by transfer learning: depending on the dataset, the weights at each layer of the network were adjusted, and various hyperparameters were fine-tuned through trial and error. Stochastic gradient descent [42] was used for training with a learning rate of approximately 0.0001, a batch size of 64, and 100 epochs. The neural network was trained by optimizing an objective function, here the cross-entropy loss. The purpose of this function is to optimize the loss so that the model can handle large datasets; samples are selected at random and used to estimate the gradients at each layer and iteration. The proposed model was evaluated using both a standard stochastic-gradient-descent algorithm and the Adam optimizer. As mentioned previously, the learning rate was set to 0.0001. The learning rate governs how the weights are updated according to the expected errors and should be chosen carefully: if it is too small, the learning process will be slow, and if it is too large, the weights can become very large, which could lead to divergence. With these issues in mind, the model was tuned over prevalent learning rates of 0.1–0.0001. In addition, overfitting, i.e., the gap between training and validation accuracy, was handled using dropout and regularization techniques; this gap should be as low as possible for a model to predict test datasets accurately. Dropout works by dropping a few network nodes during the training process. After dropout, the batch size was tuned; the batch size defines the number of instances propagated before the model's parameters are updated. In addition, the pixel values of the input images were normalized to between 0 and 1 before training and testing by dividing them by the maximum intensity value of 255.
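The pixel normalization and the per-step weight update described above can be sketched as follows. This is illustrative NumPy using the learning rate stated in the text; the function names and example values are my own, not the authors' code.

```python
import numpy as np

def preprocess(images):
    """Scale 8-bit pixel values from [0, 255] into [0, 1]."""
    return np.asarray(images, dtype=np.float32) / 255.0

def sgd_step(weights, grad, lr=0.0001):
    """One stochastic-gradient-descent update with the stated learning rate."""
    return weights - lr * grad

pixels = preprocess([[0, 128, 255]])              # -> values in [0, 1]
w = sgd_step(np.array([1.0]), np.array([10.0]))   # 1.0 - 0.0001 * 10.0
```

With lr = 0.0001, a gradient of 10 moves the weight by only 0.001, which illustrates the slow-but-stable regime the text describes for small learning rates.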

Initially, the original VGG-16 architecture was applied to distracted-driver detection. Good results were obtained in the training phase; however, testing results were poor, indicating overfitting. To overcome this, dropout was applied to improve the model's performance, and weight regularization and batch normalization were also used, significantly enhancing the results and accuracy. The model was found to handle an average of 39 images per second. Tab. 3 presents the results of the system as a confusion matrix, and Tab. 4 shows the class-wise accuracies for each of the 10 classes in the dataset. Another issue with the core VGG-16 architecture is its number of parameters: because the parameter count was huge, memory problems were inevitable. To overcome this issue, the model was modified to reduce the parameters by 75%, which reduced memory costs and improved accuracy.

    6 Results and Discussion

As can be seen from the confusion matrix, the “Driving Normal” and “Talking to Passenger” classes are somewhat similar and were difficult to differentiate because, in both, the hands rest on the wheel; the lack of hand movement caused this misclassification. In addition, the “Talking with the Right” and “Texting with the Right” classes were also confused because, in both, the right hand performs the action. This misclassification stems from the limited temporal information available in a single image. As shown in Tab. 3, the “C0” and “C9” classes achieve nearly equal accuracy, and the remaining classes also show promising results. The “Driving Normal” class obtained an accuracy of 96.15% and the “Talking to Passenger” class 95.19%. The best accuracy (96.95%) was obtained for the “Operating the Radio” class (class “C5”), and the worst for the “Texting with Right” class (90.47%), as shown in Tab. 4. However, real-time performance must be validated with dynamic data. To achieve real-time prediction accuracy and apply the system to real images, the parameter count should be reduced; if the number of parameters is large, computational overhead increases, which may overwhelm the system. The loss and accuracy of the proposed model are shown in Fig. 6.

Table 3: Confusion matrix of the proposed model on the StateFarm dataset

Table 4: Accuracy of the proposed method for all ten classes

Figure 6: Loss and accuracy of the proposed method

    7 Conclusions

Driver distraction is a major issue leading to a consistent increase in the number of vehicle-related accidents. Drivers can become distracted by various activities, e.g., talking on the phone, texting, eating, drinking, and talking to passengers. A system that detects such actions and warns drivers is therefore required to reduce accidents. The StateFarm dataset was developed to facilitate such research and is publicly available on Kaggle. It comprises many images, each labeled with one of 10 actions (classes). The dataset was divided into training and testing data, with 70% dedicated to training and 30% to testing and validation. Deep learning was applied to these images to learn image features and train efficient models, which were then evaluated on the test images. In this study, the VGG architecture, which has shown promising results in the past, was used. However, VGG architectures include a vast number of parameters; this study addressed that issue, and our modified VGG architecture achieved an accuracy of approximately 96.95%.

    8 Future Scope

The current methodology works well, but in the future it is desirable to develop our own, larger dataset, to apply more intensive techniques, and to customize the current methodology further. Planned future work also includes real-time detection of driver distraction and the application of wireless techniques to issue tickets to drivers based on images of driver distraction: such a system would detect the distraction and send a traffic-violation ticket as a message to the driver's mobile phone.

Funding Statement: This work is a partial result of a research project supported by Grant Number 13-INF2456-08-R from King Abdul Aziz City for Sciences and Technology (KACST), Riyadh, Kingdom of Saudi Arabia.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
