
    Performance Comparison of Deep CNN Models for Detecting Driver’s Distraction

    Computers, Materials & Continua, 2021, Issue 9

    Kathiravan Srinivasan, Lalit Garg, Debajit Datta, Abdulellah A. Alaboudi, N. Z. Jhanjhi, Rishav Agarwal and Anmol George Thomas

    1School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, 632014, India

    2Faculty of Information and Communication Technology, University of Malta, Msida, MSD 2080, Malta

    3School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, 632014, India

    4College of Computer Science, Shaqra University, Kingdom of Saudi Arabia

    5School of Computer Science and Engineering, SCE, Taylor’s University, Subang Jaya, 47500, Malaysia

    Abstract: According to various worldwide statistics, most car accidents occur solely due to human error. The person driving a car needs to be alert, especially when travelling through high traffic volumes that permit high-speed transit, since a slight distraction can cause a fatal accident. Even though semi-automated checks, such as speed-detecting cameras and speed barriers, are deployed, controlling human errors is an arduous task. The key causes of driver’s distraction include drunken driving, conversing with co-passengers, fatigue, and operating gadgets while driving. If these distractions are accurately predicted, the drivers can be alerted through an alarm system. Further, this research develops deep convolutional neural network (deep CNN) models for predicting the reason behind the driver’s distraction. The deep CNN models are trained using numerous images of distracted drivers. The performance of the deep CNN models, namely the VGG16, ResNet, and Xception networks, is assessed based on evaluation metrics such as the precision score, the recall/sensitivity score, the F1 score, and the specificity score. The ResNet model outperformed all the other models as the best detection model for predicting and accurately determining the drivers’ activities.

    Keywords: Deep CNN; ResNet; Xception; VGG16; data; classification

    1 Introduction

    Various reports have indicated that several road accidents have occurred over the years due to the driver’s distraction. An inattentive driver is one of the main reasons behind the vast majority of accidents. Yearly statistics indicate that nearly half a million people are injured due to these accidents, and thousands of deaths occur each year [1-4]. There are several reasons for driver’s distraction, such as operating gadgets, conversing with co-passengers, drunken driving, and fatigue. There is a need for a reliable method that guarantees road safety. To this end, this research’s main objective is to develop a suitable solution to curb such occurrences and ensure road safety. Predicting the reasons for the driver’s distraction and alerting the driver could avoid such accidents. Further, this work devises the tools and methods to determine the most efficient deep Convolutional Neural Network (deep CNN) model for detecting the reason behind a driver’s distraction. Deep CNNs have proven to perform exceptionally well in classifying images; thus, they seem to be an excellent fit for resolving this problem.

    A deep CNN usually requires significantly less preprocessing than other classification algorithms [5-10]. The process of finding the best deep CNN model begins with comparing the models in terms of different evaluation metrics and selecting the best among them. The deep CNN models help classify the distracted driver dataset. Further, this system would ensure road safety on high-risk roads and highways, where speed is also a concern and the fatality rate is much higher. Even though external checks are essential for curbing accidents, predicting the driver’s distraction plays a significant role in saving lives and guaranteeing road safety.

    This research determines an optimized approach among different deep CNN models for detecting the driver’s distraction. The various models’ performances were compared using the evaluation metrics, and then the best-suited approach was determined based on these metrics. The materials and methods section deals with the background concepts and related works on this topic, and it briefly introduces the deep CNN models. The implementation section discusses the hardware and software requirements, the dataset utilized, and the individual deep CNN models’ parameter settings. Next, the results and discussions section provides the performance comparisons of the various deep CNN models. Finally, the conclusion section summarizes this work along with a brief discussion of possible future enhancements.

    2 Materials and Methods

    2.1 The Deep Convolutional Neural Network (Deep CNN)

    The concept of image recognition, classification, and processing has evolved through various architectures and algorithms, and deep CNN models are a branch of deep learning [11]. First, the images are converted into a two-dimensional matrix [12-15]. However, this reduces the quality of the image when it has pixel dependencies. The deep CNN algorithm ensures that the image quality and its spatial and temporal dependencies are preserved. A deep CNN model trained on a larger dataset usually generalizes much better than a model trained on a smaller dataset. Further, the deep CNN model processes the images with minimum computation and minimal damage to the pixel values. The entire process of deep CNN image classification can be broadly divided into three steps: the image passes through the convolutional layers, the pooling layers, and the fully connected layers [16]. Finally, a probabilistic function is applied to classify the images. Various deep CNN architectures such as LeNet, AlexNet, VGGNet, ResNet, and Xception can be deployed for image classification. This work focuses on three prominent deep CNN architectures: the ResNet, Xception, and VGG16 models.

    2.2 The ResNet Model

    Generally, in deep CNN models, the classification efficiency is expected to keep improving as the number of network layers increases. However, beyond a certain depth, the training and testing error rates instead increase; this phenomenon is attributed to vanishing or exploding gradients. This issue can be resolved using the Residual Network (ResNet) [17-19]. These networks deploy an approach known as skip connections: the network skips training in a few layers and connects directly to the output. ResNet’s basic architecture is inspired by the VGG network, where the convolutional layers use 3 × 3 filters. The architecture involves two concepts for model optimization: the layers possess the same number of filters for the same type of output feature map, and when the output feature map’s size is halved, the number of filters is doubled to preserve each layer’s time complexity [20-22]. In this work, the ResNet model was trained and tested on the Kaggle State Farm Distracted Driver Detection dataset, and it classifies the driver’s distraction efficiently. Fig. 1 portrays the architecture of the ResNet model, which consists of 152 layers. Each step is carried forward with four layers of similar behavioural pattern in a ResNet, and every subsequent segment follows the same pattern. A three-by-three convolution is performed with constant dimensions of 64, 128, 256, and 512, respectively, and the input bypasses every two convolutions. The width and height dimensions remain constant throughout each layer. Skip connections perform identity mapping, and their outputs are added to the outputs of the stacked layers. Furthermore, the ResNet model is less complicated and can be optimized more easily than the other networks. Besides, this model converges faster and generates better results than other peer-level networks.
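    The identity skip connection described above can be illustrated with a minimal sketch (the two convolutional layers are simplified to matrix multiplies here purely for illustration; this is not the paper's implementation):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Toy residual block: two weight layers plus an identity skip
    connection, as in ResNet. Real blocks use 3 x 3 convolutions;
    matrix multiplies stand in for them in this sketch."""
    out = relu(x @ w1)       # first weight layer + ReLU
    out = out @ w2           # second weight layer (no activation yet)
    return relu(out + x)     # add the skipped input, then activate

# With zero weights the block degenerates to relu(identity), which is
# why gradients still flow even if the stacked layers learn nothing.
x = np.array([[1.0, -2.0, 3.0]])
w_zero = np.zeros((3, 3))
print(residual_block(x, w_zero, w_zero))  # → [[1. 0. 3.]]
```

    This degeneration to the identity is what makes very deep ResNets (such as the 152-layer variant above) trainable despite their depth.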

    Figure 1: Architecture of the ResNet deep CNN model

    2.3 The Xception Model

    The Extreme Inception, or Xception, model is an ‘extreme’ improvement inspired by the Inception CNN model. The Inception model has deep convolutional layers and wider convolutional layers that work in parallel; it has two different levels, each with three convolutional layers. Unlike the Inception model, the Xception model has two levels, one of which has a single layer. This layer slices the output into three segments and passes them on to the next set of filters. The first level has a single convolutional level with a 1 * 1 filter, while the next level has three convolutional levels with 3 * 3 filters. The defining aspect of the Xception model is the depthwise separable convolution [23-25]. A general deep CNN model handles the spatial and channel dimensions together, but the Xception model splits the operation into a depthwise and a pointwise convolution. The work by Chollet [26] shows the improvement of Xception over the previous models. This research uses the Xception model to evaluate the distracted driver dataset for classifying the driver’s distraction. The architecture of the Xception network model is illustrated in Fig. 2. The Xception model is a 71-layer deep CNN inspired by Google’s Inception model and based on an extreme interpretation of it [27]. Its architecture is stacked with depthwise separable convolutional layers. The pre-trained version of the model is trained using millions of images from the ImageNet database. Moreover, this model can classify hundreds of object categories and has rich representations useful for a wide range of pictures. The Xception model has profound utilities in the domains of image identification and classification.
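    The parameter saving from splitting a convolution into depthwise and pointwise parts can be made concrete with a small counting sketch (the 128→256 channel example is illustrative, not a figure from the paper):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution mixing all channels."""
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    """Weights in a depthwise separable convolution as used in
    Xception: one k x k filter per input channel (depthwise),
    then a 1 x 1 convolution for cross-channel mixing (pointwise)."""
    depthwise = k * k * c_in   # spatial filtering, per channel
    pointwise = c_in * c_out   # channel mixing, 1 x 1
    return depthwise + pointwise

# e.g. a 3 x 3 layer mapping 128 -> 256 channels
print(conv_params(3, 128, 256))       # → 294912
print(separable_params(3, 128, 256))  # → 33920
```

    Roughly an order of magnitude fewer weights per layer is what lets Xception stack 71 layers of separable convolutions at reasonable cost.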

    Figure 2: Architecture of the Xception deep CNN model

    2.4 The VGG16 Model

    The VGG16 architecture is an improved version of the AlexNet deep CNN model. When this model was tested on the ImageNet dataset, it showed a top-5 test accuracy of 92.7%. The VGG16 model uses 16 layers with tunable parameters: 13 convolutional layers and three fully connected layers. It also contains five max-pooling layers in the middle and the Softmax activation function at the output [28-30]. The entire module’s architecture is divided into various sets of convolutional layers and max-pooling layers, followed by the fully connected layers and the activation function. In the VGG16 model, the image passes through two sets of two convolutional layers and one max-pooling layer. Subsequently, it is followed by three sets of three convolutional layers and one max-pooling layer. After this stage, the image passes through the three dense, fully connected layers, finally entering the Softmax activation function [31].

    The VGG16 model also has hidden layers with the Rectified Linear Unit (ReLU) as the activation function. This model is less computationally intensive than its predecessors due to the smaller kernels. Besides that, the convolutional layers preserve the image resolution, as they have a small receptive field of 3 * 3 and a stride of 1. Fig. 3 represents the architecture of the VGG16 model. The input to the first convolutional layer is an RGB image of a fixed size. The picture moves across many network layers, utilizing filters with a minimal 3 * 3-pixel receptive field. The convolution stride is fixed at one pixel, and the spatial resolution is preserved after the convolution [32].

    For the 3 * 3 convolutional layers, one layer of zeros is added to the borders for same padding. The max-pooling function is performed across a 2 * 2-pixel window with a stride of 2. Three fully connected layers follow the stack of convolutional layers, with the final layer being the Softmax layer. The fully connected layer configuration is similar in every network, and every hidden layer is provided with the ReLU activation function.
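    Because the same-padded 3 * 3 convolutions preserve spatial size and only the 2 * 2 stride-2 pools halve it, the resolution trace through VGG16's five blocks is easy to compute. A small sketch, assuming the canonical 224 × 224 ImageNet input size (this paper's own inputs are resized differently in Section 3.4):

```python
def vgg16_feature_sizes(input_size=224):
    """Trace the spatial resolution through VGG16's five blocks:
    3 x 3 'same' convolutions keep the size, and each 2 x 2
    max pool with stride 2 halves it."""
    sizes = [input_size]
    for _ in range(5):                  # five conv + pool blocks
        sizes.append(sizes[-1] // 2)    # only the pool changes size
    return sizes

print(vgg16_feature_sizes())  # → [224, 112, 56, 28, 14, 7]
```

    The final 7 × 7 feature map is what the three fully connected layers then flatten and classify.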

    2.5 Model Comparison

    Figure 3: Architecture of the VGG16 deep CNN model

    Figure 4: Methodological flow of the work

    This research presents an accurately trained model for classifying the driver’s distraction. The rate of fatal accidents due to the driver’s human error or negligence has been at a record high for the past few years. Accidents can be prevented by alerting drivers whenever they tend to get distracted. The input provided for training the system is images of distracted drivers, such as a driver using a mobile phone, adjusting radio channels, drinking, or engaged in other such activities [33]. This dataset trains the various deep CNN algorithms, and the best model for this task is determined. For increasing distraction levels, the model proportionately recognizes a wide range of distracted drivers better while eliminating the non-distracted ones. The deep CNN algorithms require minimal preprocessing of the data, and they can capture the spatial and temporal dependencies in images. However, basic preprocessing methods are still needed to ensure that the dataset does not provide irrelevant details. The RGB images are converted into grey-scale format, where a two-dimensional matrix structure represents each image. Thresholding of the images is necessary due to background noise from the car seats; it ensures the extraction of only the relevant part(s) of the image characterizing the driver’s distraction. These primary image processing methods guarantee the obtained image’s appropriateness and contribute to the dataset’s variety. Fig. 4 shows this work’s methodological flow. As mentioned earlier, the deep CNN architecture provides various image classification algorithms and models. We used three models: ResNet, Xception, and VGG16. These models were trained separately using the distracted driver dataset. Further, various evaluation metrics were employed to assess these models’ performance, and the best model was decided based on them. To this end, ResNet was observed to be the best model for performing successful driver’s distraction classification.

    3 Implementation

    3.1 Hardware Requirement

    The system was executed on a Hewlett-Packard (HP) Spectre x360 convertible workstation with a 64-bit Intel Core i7 processor and a GPU. It had 16 GB RAM and a 64-bit operating system with touch and pen input support. The camera used in this system was an HP TrueVision Full HD WVA webcam that comes built into the workstation, interspersed with dual digital microphones.

    3.2 Software Requirement

    The software applications used for this system included a Python platform and RStudio. The system was built primarily in the Python language, with secondary support from R programming. Several Python libraries, such as NumPy, Keras, TensorFlow, Pandas, and Matplotlib, were used to implement the deep CNN models. Further, these models were executed using open-source machine learning and deep learning libraries like Keras and TensorFlow.

    3.3 Dataset Description

    The State Farm Distracted Driver Detection dataset used in this work was obtained from Kaggle. This dataset comprises more than 20,000 images, totalling an overall size of approximately 8 GB. All the dataset images had the same dimensions, 480 * 480 pixels, and showed drivers in various driving postures. The pictures were classified into ten classes, as shown in Tab. 1. The different deep CNN models were trained to predict the likelihood of the driver’s distraction in each picture. Fig. 5 shows sample pictures from each of the ten classes of images. Further, this dataset distributes the more than 20,000 images into the ten distinguished classes. The histogram visualized in Fig. 6 shows that approximately 2,500 images are present under each class. However, one exception is the number of images in class C8, which consists of people talking to a passenger: this category has about 4,000 images, compared to an average frequency of around 2,350 for the other classes.

    Table 1: Classes of images in the dataset and their description

    Figure 5: Sample pictures from each of the ten classes of images: (a) class C0, safe driving; (b) class C1, texting with right hand; (c) class C2, talking on the phone with right hand; (d) class C3, texting with left hand; (e) class C4, talking on the phone with left hand; (f) class C5, operating radio; (g) class C6, drinking; (h) class C7, reaching back; (i) class C8, doing hair; (j) class C9, talking to a passenger

    Figure 6: Frequency of images in each class

    3.4 Data Preprocessing

    Certain observations were drawn after acquiring and evaluating information about the dataset. Not all pixel values contributed equally to the class value assigned to a particular image; in most cases, the positioning of the hands and head plays a vital role in determining the image class. The images were preprocessed to remove the background noise, which barely contributed as a prominent feature for the evaluation. The image data was converted to 64 * 64 pixels from its original resolution of 480 * 480 pixels. The images contained much background noise not required for the prediction, such as the windshield and the seats; the essential characteristics of an image are the positioning of the hands, head, and legs. Hence, unwanted information was removed using image processing techniques like grey-scaling and thresholding. The mean RGB values across the dataset were determined to be 95.124, 96.961, and 80.123. These mean values were subtracted from every image’s pixel values to retain only information valuable for the training model. The positions of the arms, head, legs, and any new object were still clearly identifiable, making the images appropriate for further processing by the deep CNN models.
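    The mean-subtraction and thresholding steps above can be sketched in NumPy using the channel means reported in the text (the threshold value and the simple channel-average grey-scale are illustrative assumptions, not parameters reported in the paper; the 480 → 64 pixel resize would additionally need an image library):

```python
import numpy as np

# Per-channel means (R, G, B) reported for the dataset.
MEAN_RGB = np.array([95.124, 96.961, 80.123])

def preprocess(img_rgb, threshold=120.0):
    """Centre an RGB image on the dataset means, build a simple
    grey-scale version, and threshold away low-intensity background
    such as seats. `threshold` is an illustrative value."""
    centred = img_rgb.astype(np.float64) - MEAN_RGB  # broadcast over channels
    grey = img_rgb.mean(axis=-1)                     # naive grey-scale
    mask = (grey > threshold).astype(np.float64)     # keep bright regions only
    return centred, grey * mask

# A flat test image with every pixel at RGB (100, 100, 100).
img = np.full((4, 4, 3), 100.0)
centred, thresholded = preprocess(img)
print(centred[0, 0])  # → [ 4.876  3.039 19.877]
```

    On this flat test image the grey value (100) falls below the threshold, so the thresholded output is entirely zero, mimicking background suppression.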

    3.5 Execution of ResNet Model

    The ResNet model used fivefold cross-validation to verify the stability and authenticity of the results. A checkpoint was created after each set of validations to avoid the loss of the stored weights. Further, each cross-validation was set to run for ten epochs, and the various performance evaluation metrics were determined. As shown in Fig. 7, the model was prepared using the ResNet50 layer with the ‘ImageNet’ weights available in the Keras library. Next, these values were flattened using a flatten layer. The ResNet deep CNN model was fine-tuned with a dense layer using the ‘Softmax’ function. Further, to utilize an adaptive learning rate, Adam optimization was used instead of gradient descent optimization.
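    The fivefold split underlying the cross-validation can be sketched in plain Python (a minimal index-based sketch; the paper's actual run uses Keras with shuffling and checkpointing on image batches):

```python
def kfold_indices(n_samples, k=5):
    """Split sample indices into k contiguous validation folds,
    mirroring fivefold cross-validation: each fold is held out
    once while the rest of the data trains the model."""
    fold_size = n_samples // k
    folds = []
    for i in range(k):
        start = i * fold_size
        # The last fold absorbs any remainder.
        end = (i + 1) * fold_size if i < k - 1 else n_samples
        val = list(range(start, end))
        train = [j for j in range(n_samples) if j < start or j >= end]
        folds.append((train, val))
    return folds

folds = kfold_indices(10, k=5)
print(folds[0][1])  # → [0, 1]
```

    Each of the five (train, validation) pairs would drive one ten-epoch training run, with a checkpoint saved after the fold's validation pass.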

    Figure 7: The ResNet model description (left: layer name; right: input-output size)

    3.6 Execution of Xception Model

    The Xception model was set up using transfer learning, utilizing a pre-trained Xception base. Like the ResNet model, each cross-validation of the Xception model was run for ten epochs, and the various evaluation metrics were determined. As shown in Fig. 8, the Xception model was prepared using the Xception layer with weights trained on the ‘ImageNet’ dataset. The shuffle parameter was set to true, and the verbose parameter was set to 1. Further, these values were flattened using a flatten layer. The Xception model, like the ResNet model, was fine-tuned with a dense layer using the ‘Softmax’ function. Adam optimization was used instead of gradient descent optimization, and the loss parameter was set to ‘categorical cross-entropy.’

    Figure 8: The Xception model description (left: layer name; right: input-output size)

    3.7 Execution of VGG16 Model

    The VGG16 model was set up with the Softmax function and the ReLU activation function. The ReLU activation function helped filter out negative values and pass only non-negative values to the next layer. The fully connected layers were initially added to the network with appropriate activation functions. Two dense layers with 1024 and 512 units, respectively, were used in the initial few layers, utilizing the ReLU activation function. After the two dense ReLU layers, a dense Softmax layer with ten units was added to the network; ten units were used to predict the occurrences of the ten distraction classes. The Softmax layer finally returned a value in the range of 0 to 1, based on the distracted driver’s image class (C0 to C9). Further, while training the model, Adam optimization was used rather than Stochastic Gradient Descent (SGD) to reach the global minimum. The learning rate was set to 1e-5; this learning rate was tweaked several times to reach the current results. The description of the VGG16 model network is shown in Fig. 9. The input data was passed through these different layers. The fully connected dense layers were included in the model, and finally, a ten-unit output was used to classify the images into the ten distraction classes.
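    The ten-unit Softmax output described above maps raw scores to class probabilities in [0, 1] that sum to 1; a minimal NumPy sketch (the example scores are arbitrary, not outputs of the trained model):

```python
import numpy as np

def softmax(scores):
    """Numerically stable softmax over the 10-unit output layer:
    yields one probability per distraction class C0-C9."""
    z = scores - np.max(scores)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Arbitrary example scores for the ten classes.
logits = np.array([2.0, 1.0, 0.1] + [0.0] * 7)
probs = softmax(logits)
print(probs.argmax())  # → 0  (predicted class C0)
```

    The argmax of the probability vector gives the predicted class label, which is compared against the ground truth when computing the evaluation metrics.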

    Figure 9: The VGG16 model description (left: layer name; right: input-output size)

    4 Results and Discussions

    The performance comparison was accomplished based on the evaluation metrics: precision score, recall/sensitivity score, F1 score, and specificity score [34]. True positive, true negative, false positive, and false negative values were used to compute the evaluation metrics [35-41]. The results were plotted using Python’s Matplotlib library for better interpretation and visualization. The results tabulated in Tab. 2 represent the evaluation metric scores for the ten classes of images obtained by the deep CNN ResNet model. The highest precision, recall/sensitivity, and F1 scores were observed for class label C7, and the lowest for class label C6. However, the specificity score was highest for class label C9 and lowest for class label C2.
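    The four metrics follow directly from the per-class confusion counts; a short sketch (the example counts are illustrative, not values from the paper's tables):

```python
def classification_metrics(tp, tn, fp, fn):
    """Per-class evaluation metrics from confusion counts:
    precision, recall (sensitivity), F1, and specificity."""
    precision = tp / (tp + fp)                       # of predicted positives, how many are right
    recall = tp / (tp + fn)                          # of actual positives, how many are found
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)                     # of actual negatives, how many are found
    return precision, recall, f1, specificity

# Illustrative counts for one class.
p, r, f1, spec = classification_metrics(tp=80, tn=90, fp=10, fn=20)
print(round(p, 3), round(r, 3), round(f1, 3), round(spec, 2))  # → 0.889 0.8 0.842 0.9
```

    Computing these per class, as in Tabs. 2-4, exposes which distraction categories each model confuses rather than hiding them in a single accuracy number.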

    Table 2: Evaluation metric scores for the ResNet model

    Figure 10: Evaluation metric scores for the ResNet model

    Table 3: Evaluation metric scores for the Xception model

    The visualization in Fig. 10 shows the precision, recall/sensitivity, F1 score, and specificity score for the ResNet model. Overall, this model performed well for all the class labels, especially the C7-C9 class labels. The evaluation metric scores obtained by the Xception model are tabulated in Tab. 3, and their visualization is shown in Fig. 11, where the precision, recall/sensitivity, F1 score, and specificity score are plotted. It can be observed that these scores are lower than those of the ResNet model. The highest precision, recall/sensitivity, and F1 scores were observed for class label C6, while the lowest precision and F1 scores were seen for class label C1, and the lowest recall/sensitivity for class label C4. The specificity score was highest for class label C0 and lowest for C5. The evaluation metric scores obtained by the VGG16 model are tabulated in Tab. 4. It can be observed that these scores are lower than those of both the ResNet and Xception models.

    Figure 11: Evaluation metric scores for the Xception model

    Table 4: Evaluation metric scores for the VGG16 model

    The graphical visualization of the evaluation metric scores for the VGG16 model is shown in Fig. 12. The highest precision score was observed for class label C6 and the lowest for C2. Similarly, the highest recall/sensitivity score was seen for C2 and the lowest for C7. The F1 score was maximum for C6 and minimum for C7, and the specificity score was maximum for C8 and minimum for C5. A comparison of the evaluation metric scores shows that the ResNet model provides the best performance, followed by the Xception model. Even though the VGG16 model yielded lower evaluation metric scores than the other two models, its results were satisfactory [42-45]. These models can be further optimized to prevent overfitting in the network. Fine-tuning the learning rates or the hyper-parameters and/or adding or removing layers can also optimize the models. Activation functions such as ReLU, Sigmoid, and Softmax could also be used more efficiently to achieve better results.

    Figure 12: Evaluation metric scores for the VGG16 model

    5 Conclusion

    After implementing all the deep CNN models (ResNet, Xception, and VGG16), it can be concluded that the ResNet model provides the best performance, followed by Xception and VGG16, respectively. The evaluation metrics used for comparing the models’ performances were the precision score, the recall/sensitivity score, the F1 score, and the specificity score. The dataset consisted of distracted driver images, and this work classified them into ten classes based on the distractions. Even though the VGG16 model is primitive compared to the other two models, it offers satisfactory results. However, as the complexity of the images and the dataset increases, the differences become more prominent, and the superior performance of the ResNet model becomes evident. The advantage of using the ResNet deep CNN architecture on the distracted driver dataset is that the layers are stacked better while having far fewer kernels than the VGG16 model. The ResNet model is less complicated and can be optimized more easily than the other networks; it also converges faster and generates better results. Furthermore, by using the ResNet deep CNN architecture for detecting the driver’s distraction, the system can also create various alerting prototypes in the future by integrating cloud technology, the Internet of Things, and other disciplines. Moreover, alarm systems can be installed to detect the driver’s distraction and ensure road safety. In conclusion, these systems help reduce accidents and promote self-awareness in drivers by continuously alerting them.

    Funding Statement: The authors received no specific funding for this study.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
