
    Traffic Sign Recognition for Autonomous Vehicle Using Optimized YOLOv7 and Convolutional Block Attention Module

Computers, Materials & Continua, 2023

P. Kuppusamy, M. Sanjay, P. V. Deepashree and C. Iwendi

1 School of Computer Science and Engineering, VIT-AP University, Andhra Pradesh, 522237, India

2 School of Creative Technology, University of Bolton, Manchester, BL3 5AB, UK

    ABSTRACT The infrastructure and construction of roads are crucial for the economic and social development of a region, but traffic-related challenges like accidents and congestion persist. Artificial Intelligence (AI) and Machine Learning (ML) have been used in road infrastructure and construction, particularly with Internet of Things (IoT) devices. Object detection in computer vision also plays a key role in improving road infrastructure and addressing traffic-related problems. This study uses You Only Look Once version 7 (YOLOv7), a highly optimized object-detection algorithm, together with the Convolutional Block Attention Module (CBAM) to detect and identify traffic signs, and analyzes effective combinations of adaptive optimizers such as Adaptive Moment estimation (Adam), Root Mean Squared Propagation (RMSprop), and Stochastic Gradient Descent (SGD) with YOLOv7. Using a portion of the German traffic sign dataset for training, the study investigates the feasibility of adopting smaller datasets while maintaining high accuracy. The model proposed in this study not only improves traffic safety by detecting traffic signs but also has the potential to contribute to the rapid development of autonomous vehicle systems. The results showed an impressive accuracy of 99.7% when using a batch size of 8 and the Adam optimizer, demonstrating the effectiveness of the proposed model for the image classification task of traffic sign recognition.

    KEYWORDS Object detection; traffic sign detection; YOLOv7; convolutional block attention module; road sign detection; Adam

    1 Introduction

    Infrastructure and construction of roads in any geographical area play a pivotal role in the economic and social development of the region, as they connect people to business and allow the movement of vehicles and services. Among the present-day primary challenges relating to road infrastructure are accidents and other traffic-related concerns like traffic congestion, restricted infrastructure capacity, and poor maintenance of roads [1,2]. Classically, human conception and past experience have guided the progress of road infrastructure. However, as technology has become ubiquitous, and owing to advancements in automobile-related technologies such as self-parking systems, self-driving cars, and fully autonomous systems, all essentially categorized under the umbrella of Autonomous Driving Systems (ADS), there has been a significant increase in the usage of AI and its sub-domains in accomplishing cardinal tasks in ADS. An evaluative study on Deep Neural Networks (DNN) for Traffic Sign Detection (TSD) throws some light on how the detection of traffic signs is an indispensable study, because these detection systems encompass anchor components required for safety and support in ADS [3]. IoT devices are utilized to gather data from the environment, and ML analyses the data to solve the challenges in traffic management systems. A traffic management system contains three layers: data acquisition, network transmission, and application. Data acquisition is done via sensors, cameras, video monitoring, and online monitoring. The collected data is transmitted over the network using Bluetooth, Wi-Fi (Wireless Fidelity), mobile networks, etc. Finally, AI and ML play a major role in analyzing the data, visualizing the analyzed outputs, and deriving systems based on the outputs, such as ADS [4]. A study using ML showcased the latest and most advanced techniques for monitoring construction progress, including methods for collecting data, retrieving information, estimating progress, and presenting the results visually. Along similar lines, AI and ML are used for many more traffic-related issues [5]. A review of traffic congestion prediction using AI described probabilistic reasoning models like fuzzy logic, the Hidden Markov Model (HMM), Bayesian networks, Support Vector Machines (SVM), Artificial Neural Networks (ANN), and Decision Trees; Deep Learning (DL) algorithms like Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) are used for short-term traffic congestion prediction [6]. AI and ML-based incident detectors in Road Transport Systems (RTS) discussed dire problems and plausible solutions for reducing traffic accidents that enhanced automatic incident detection systems [7].

    The development of computer vision technology plays a key role in goals revolving around improving road infrastructure, such as reducing road accidents and traffic congestion. Object detection is a subfield of computer vision that uses various DL architectures for recognizing and classifying objects. A comparative analysis of CNN-based object detection algorithms shows YOLOv3 is the fastest and performs best overall, outperforming the Single-Shot Detector (SSD) and Faster Region-based CNN (R-CNN) [8]. However, it is also highlighted that the choice of algorithm may depend on the specific situation or problem that needs to be solved. For instance, R-CNN works best for small datasets that do not require real-time video outputs, whereas YOLO works best for object detection in live environments. YOLOv4 runs twice as fast as EfficientNet, with an Average Precision (AP) 14% higher than YOLOv3. The YOLOv7 algorithm surpasses all well-known real-time object-detection algorithms in terms of AP, at 56.8%, and speed, with a maximum of 160 FPS [9]. So far, research on the detection of traffic signs has been done using several versions of YOLO and other object-detection algorithms.

    This study uses the latest version of the highly optimized object-detection algorithm, YOLOv7, to detect and identify traffic signs. It also dives deep into analyzing effective combinations of adaptive optimizers like Adam and SGD with YOLOv7. SGD has solid theoretical and mathematical support, along with enhanced stability and generality [10]. In most applications, the Adam optimizer is recommended as the default optimization method because it usually generates better results, is faster to compute, and requires fewer tuning parameters than conventional optimization methods [11]. Batch sizes of 8 and 16 are used for the task of TSD. A portion of the German traffic sign dataset is used for training. This study also explores the feasibility of adopting smaller datasets while keeping high accuracy, to broaden the application domain. Fig. 1 shows the various traffic sign classes, pointed out by red arrows for human reference.

    Traffic sign recognition is a primary factor in enabling autonomous cars to travel safely. However, traffic sign recognition systems face many challenges due to limitations exposed by recent incidents involving autonomous vehicles [12]. Conventional traffic sign recognition encounters numerous challenges, such as occlusion, lighting conditions, and the existence of several neighboring traffic signs [13].

    Figure 1: Input images of each class for prediction

    1.1 Motivation

    The motivation for this research work is as follows:

    • Traffic sign recognition is essential for autonomous cars to navigate safely and efficiently. However, there are serious concerns regarding the limitations of traffic sign recognition systems and the methods they employ, as shown by recent incidents involving autonomous vehicles and research connected to recognition system failures. Therefore, it becomes even more crucial to create powerful algorithms that can overcome these constraints and provide precise and trustworthy traffic sign detection to improve the performance and safety of autonomous cars. Effective traffic sign identification is essential for maximizing traffic flow and raising overall road safety, in addition to lowering the likelihood of accidents.

    • Traditional techniques of traffic sign recognition encounter various problems, such as occlusion, fluctuating lighting conditions, complicated backdrops, and the presence of multiple signs nearby. Due to these challenges, improved approaches must be proposed to manage these situations and provide precise and dependable traffic sign detection and identification.

    1.2 Contributions

    The contributions of this research work are as follows:

    • This research intends to improve the accuracy and speed of traffic sign detection by incorporating CBAM into the YOLOv7 framework. CBAM's potent attention mechanism enables the model to effectively acquire and highlight key spatial and channel-wise information, enabling reliable detection of traffic signs even under difficult conditions like occlusion or complicated backdrops.

    • Investigate and compare the effectiveness of the proposed model with different optimizers and batch sizes.

    • The enhanced model proposed in this study exhibits improved feature representation, higher detection accuracy, and resilience by combining the characteristics of YOLOv7 and three CBAM modules in a complementary way, advancing the development of autonomous driving technology.

    • The significance of this study is achieving high accuracy on a small real-time dataset, so that the model proposed here can then be applied to larger and more diverse datasets for real-time applications in autonomous vehicles.

    The remainder of this study is organized as follows: Section 2 gives an overview of existing literature on traffic sign recognition for autonomous vehicles, highlighting strengths, limitations, and extensions of current knowledge. Section 3 focuses on the description of YOLOv7 with the CBAM framework, including its working principle, architecture, and loss function. The dataset used for training and evaluation is described in Section 4. Section 5 describes the evaluation metrics, hyperparameters, and hardware/software configurations used in the experiments. Section 6 presents a detailed analysis of the results, including performance comparisons and visual representations of the model's capabilities. Finally, the conclusion and future scope are discussed in Section 7.

    2 Literature Review

    There has been a positive trend toward applications of computer vision, resulting in a substantial amount of research on TSD using various object-detection algorithms. Relevant to this study, an in-depth inspection and analysis of various machine-vision-based traffic detection models divided them into five categories, viz. color, shape, color and shape, ML, and Light Detection and Ranging (LiDAR) based models [14]. A TSD system based on novel DL architectures used the YOLOv3 and Xception models along with the Adam and RMSprop optimizers. These models were designed using a dataset with three classes: "yellow, diamond-shaped pedestrian crossing sign", "yellow, diamond-shaped other traffic signs", and "others". However, this study processed at a low frame rate of 4.5 fps, which could be increased to improve processing time and performance accuracy [15].

    A study specifically focused on the detection of Indian traffic signs using YOLOv3 and CNN over five classes, attaining an accuracy of 87%. However, the authors did not use a real-time traffic detection system to predict each frame in a video [16]. Another study proposed a cascaded R-CNN to obtain multi-scale features for TSD, resulting in an accuracy of 99.7%; it also proposed a multi-scale attention mechanism to improve the detection of true traffic signs and reduce false detections [17]. The YOLOv5 model was implemented on a dataset of eight classes, viz. "No U-turn", "Road bump", "Road works", "Watch for children crossing", "Crosswalk ahead", "Give way", "Stop", and "No entry", along with a thorough comparison between YOLOv5 and SSD. The custom dataset used for the model yielded an accuracy of 97.70%. The future scope of that study is to expand the existing dataset and apply newly developed models like Mask R-CNN, CapsNet, and Siamese Neural Networks [18]. An improved YOLOv5 model was implemented for real-time multi-scale TSD over a massive set of 182 classes. Data augmentation and Adaptive Feature Fusion Pyramid Network (AF-FPN) methods were implemented to increase the performance of the standard YOLOv5 model, which increased the accuracy from 60.18% to 62.67%. The performance of the model is low due to the blurring of images captured during the high-speed motion of a vehicle [19]. An indigenous CNN architecture was used for TSD with a dataset of 16 classes, viz. "green light", "speed limit", "no parking", "bicycle and pedestrians only", "crossroad 1", "red light", "crosswalk 1", "straight ahead or left turn permitted", "crossroads 2", "traffic division", "no overtaking", "no turns", "stop", "one-way street", and "yellow light". This approach outperforms YOLOv2 and Fast R-CNN, with an average accuracy of 90% in all types of weather conditions. However, the authors developed the model with limited training data, which could be increased to improve performance in more environments [20]. A combination of Faster R-CNN and Extreme Learning Machines (ELM) was used over three classes. The accuracy and performance of the model were not discussed quantitatively, but qualitatively it was stated that combining CNN with ELM increases accuracy [21].

    A study on TSD and classification in the wild constructed a benchmark dataset, "Tsinghua-Tencent 100K", covering real-world conditions. The study trained two models, CNN and Fast R-CNN, which resulted in accuracies of 88% and 50%, respectively. The study was implemented with a minimal number of traffic sign classes that rarely appear in benchmark datasets [22]. Another study presented the YOLOv3 model for detecting temporary traffic control devices for road construction projects. That study used a dataset containing eight classes, viz. "construction cones", "looper cones", "construction barrels", "construction barricades", "end construction signs", "road construction ahead signs", "right lane reduction signs", and "right lane closed ahead signs". The training resulted in a mean Average Precision (mAP) of 90.82%. The proposed model in that study recognized more than 98% of the temporary traffic signs correctly and approximately 81% of temporary traffic control devices correctly [23]. A design for real-time TSD was implemented with a CNN on 50,000 traffic-sign images and reached an accuracy of 97.3%. This model was designed considering more traffic sign classes and possible weather conditions affecting the visibility of the signs [24]. "WAF-LeNet" (an upgraded version of LeNet) was developed to recognize and identify traffic signs for autonomous vehicles; the accuracy attained was 96.4% among 43 classes [25]. Though there is a fairly small amount of research revolving around TSD using YOLOv7, research work was carried out to collect and label road damage data using Google Street View; the YOLOv7 model trained on the collected data achieved an F1 score of 81.7% [26]. A study focused on improving the performance of YOLOv5 for the detection of traffic signs in bad weather conditions made use of the Global Context (GC) block, which, combined with YOLOv5, results in an accuracy of 79.2% [27]. A study quantitatively demonstrated that combining YOLOv7 with a lightweight convolution-based Spatial Pyramidal Pooling Fusion (SPPF) module leads to a significant improvement in model accuracy, reporting a 6.7% increase in accuracy when incorporating the SPPF module into the YOLOv7 framework [28]. A portable image-based ADS system was developed using the YOLOv5 algorithm and a Tesla P100 Graphics Processing Unit (GPU) system; it achieved a remarkable speed of 43.59 frames per second [29]. Multiple studies have utilized a pre-trained model for TSD on large datasets and have fine-tuned the respective models using various optimizers [30-32]. Some studies have implemented multi-task learning to simultaneously detect objects like pedestrians and bicycles [33,34]. The use of LiDAR and radar sensors has emerged as one way to increase the accuracy of TSD models in challenging conditions like low lighting [35]. A unique method analyzes Global Positioning System (GPS) trajectory data to detect vehicle turns by converting the data to image-based data; post-conversion, a personalized CNN model is designed [36,37].

    Previous approaches to TSD have used models like YOLOv5 [27], YOLOv7 [28], and CNN [36], which are popular and efficient. However, these models do not have an attention mechanism, which can limit their performance. The model proposed in this study uses YOLOv7 with CBAM, an attention mechanism that helps to improve the model's performance. Specifically, CBAM helps to focus the model's attention on the most important features in an image, which can lead to better object detection, especially in cases where the objects in an image are small or have low contrast.

    3 Traffic Sign Detection Using YOLOv7 with CBAM

    YOLOv7 is the latest state-of-the-art object detection model in the family of YOLO single-shot object detection models. YOLOv7 is currently the fastest and best-performing object detection model. It significantly enhances real-time object detection accuracy while lowering inference costs. By cutting around 40% of the parameters and 50% of the computation, YOLOv7 effectively beats other well-known object detectors with faster inference speeds and higher recognition accuracy [38].

    3.1 Working Principle

    The four components that the YOLO algorithm uses are residual blocks, bounding box regression, Intersection Over Union (IOU), and Non-Maximum Suppression (NMS). The first component, the residual block, divides the original image (A) into N equal-sized grid cells, where N is a hyperparameter. Each grid cell is responsible for localizing and determining the object's class using a probability/confidence value. The second element, bounding box regression, identifies the bounding boxes corresponding to rectangles highlighting all the objects in the image. There can be as many bounding boxes as there are objects within a given image. YOLO uses a single regression module to compute the characteristics of these bounding boxes. The final vector Y of each bounding box is given in Eq. (1):

    $$Y = [P_c, \; b_x, \; b_y, \; b_h, \; b_w, \; C_1, \; C_2, \; C_3, \; C_4] \tag{1}$$

    where $P_c$ is the grid's probability score for the cell that contains the object; $b_x$, $b_y$ are the x and y coordinates of the bounding box's center relative to the surrounding grid cell; $b_h$, $b_w$ are the height and width of the bounding box, respectively; and $C_1$, $C_2$, $C_3$, $C_4$ represent the four classes, namely prohibitory, dangerous, mandatory, and others. Although not all of them are significant, a single object in an image might frequently have many candidate grid boxes for prediction. Such grid boxes are discarded in order to retain the relevant ones using the third component, IOU. IOU always ranges from 0 to 1. The IOU selection threshold is initially set at 0.5. Fig. 2 shows the intersection area divided by the union area, which is calculated for each grid cell by YOLO. Finally, YOLO considers grid cells with an IOU > threshold rather than those predicted to have an IOU ≤ threshold.

    Figure 2: Intersection over union
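    The IOU computation described above reduces to a few lines of Python. The sketch below is illustrative only; the [x1, y1, x2, y2] corner format is an assumption for this example, not the paper's exact box representation:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes in [x1, y1, x2, y2] form."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    # IOU always lies in [0, 1]; it is 0 when the boxes do not overlap
    return inter / union if union > 0 else 0.0
```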

    The final component, the NMS algorithm, is a post-processing technique to remove duplicate and overlapping detections of the same object. When an object is detected, the YOLO algorithm generates multiple bounding boxes with confidence scores indicating the likelihood of an object being present in each box. However, some of these boxes may overlap or contain the same object, resulting in multiple detections for the same object. To address this issue, NMS is used to suppress all but the most confident detection of each object. The algorithm works by first sorting the detected bounding boxes by their confidence scores. Then, for each box, it compares its overlap with all other boxes. If the overlap exceeds a certain threshold, the box with the lower confidence score is suppressed. The process is repeated until all boxes have been considered. The generated output helps to improve the overall performance and accuracy of the object detection algorithm. Establishing an IOU threshold alone is not always adequate, since an object may have several overlapping boxes; noise might be included if many boxes overlap with an IOU exceeding the threshold and all of those boxes are kept. NMS is used in these circumstances to keep only the boxes with the highest likelihood of being correct detections. Hence, the algorithm is designed by initializing the confidence threshold and IOU threshold values. The bounding boxes are then sorted by decreasing confidence, and any bounding box with a confidence below the confidence threshold is eliminated. The remaining bounding boxes are iterated through in a loop, beginning with the highest confidence, and the IOU of the current box with every remaining box belonging to the same class is calculated. If the IOU of the two boxes > IOU_Threshold, the box with the lower confidence is removed from the list of boxes. This operation is repeated until all the boxes in the list are processed. Table 1 shows the pseudocode of the steps involved in implementing YOLO.
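    Complementing the Table 1 pseudocode, the steps above can be sketched as a greedy per-class NMS in Python. This is an illustrative implementation reusing the `iou` helper from the previous sketch, not the exact code used in the study:

```python
def non_max_suppression(boxes, scores, classes, conf_thresh=0.25, iou_thresh=0.5):
    """Greedy per-class NMS following the steps described above."""
    # Step 1: eliminate boxes below the confidence threshold
    dets = [(b, s, c) for b, s, c in zip(boxes, scores, classes) if s >= conf_thresh]
    # Step 2: sort the remaining boxes by decreasing confidence
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    while dets:
        # Step 3: keep the most confident remaining box...
        best = dets.pop(0)
        kept.append(best)
        # ...and suppress same-class boxes whose IOU with it exceeds the threshold
        dets = [d for d in dets if d[2] != best[2] or iou(d[0], best[0]) < iou_thresh]
    return kept
```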

    3.2 Architecture

    YOLOv7 can be used in many applications other than object detection, like instance segmentation, pose estimation, etc. In comparison to YOLOv4, YOLOv7 utilizes 36% less processing, reduces the number of parameters by 75%, and achieves 1.5% higher AP. When compared to the edge-optimized version YOLOv4-tiny, YOLOv7-tiny reduces the number of parameters by 39% and computation by 49% while keeping the same AP. Hence, it can be stated that YOLOv7 is more optimized. A YOLO architecture is made up of various components, including a head, neck, and backbone. The effectiveness of the YOLO network's backbone is essential for inference speed. The full YOLOv7 architecture can be seen in Fig. 3.

    Figure 3: Proposed architecture of YOLOv7 with CBAM

    The Extended Efficient Layer Aggregation Network (E-ELAN) helps the model learn better while preserving its original gradient path. To increase the speed and accuracy of the model, E-ELAN considers several variables, including memory cost, input-output channel ratio, element-wise operations, activations, gradient routes, etc. [39]. CSPDarknet53 serves as the backbone network for the YOLOv7 architecture that makes up the Efficient Layer Aggregation Network (ELAN) model. CSPDarknet53 was created to increase the precision and effectiveness of object detection models. On the other hand, E-ELAN is another YOLOv7 architecture that uses EfficientNet as the backbone network. EfficientNet is a series of CNNs created to attain cutting-edge accuracy while keeping the model's computing cost to a minimum. The main difference between these two models is the backbone network, i.e., ELAN uses CSPDarknet53 and E-ELAN uses EfficientNet. EfficientNet is more computationally efficient, but it may sacrifice some accuracy compared to CSPDarknet53. YOLOv7 uses an optimized compound model scaling approach that modifies model characteristics to produce suitable models for various application requirements. For instance, model scaling can adjust the resolution (the size of the input image), the depth (the number of stages), and the width (the number of channels). The compound scaling technique preserves the model's original design characteristics.

    After training, one way to improve the model is by re-parameterizing it. The inference process takes longer, but the outcomes are more substantial. The two forms of ensemble re-parameterization used in models are model-level and module-level. Model-level re-parameterization can be done in two ways. In the first method, distinct sets of data are used to train several models with the same architecture, and their weights are then averaged to get the final model. The second method takes the average of a model's weights at different epochs. Recently, however, module-level re-parameterization has been used in many research works. YOLOv7 contains several heads, including the Lead Head, which is accountable for the final output, and the Auxiliary Head, which helps with training the middle layers. To enhance deep network training, a Label Assigner mechanism was created that assigns soft labels after considering ground truth and network prediction results. Reliable soft labels employ optimization techniques to raise the quality and distribution of the prediction output in addition to the accuracy of the prediction, whereas conventional label assignment generates hard labels based on predetermined rules by directly referencing the ground truth. The YOLOv7 architecture shown above uses kernel sizes of 3×3 and 1×1 in all its convolution layers, with padding of 1 and 2.

    Fig. 4 shows a crucial component, Cross-Branch Scalability (CBS). It is designed with a convolution layer, a Batch Normalization (BN) layer, and a Sigmoid Linear Unit (SiLU) activation function to extract image features at various scales. Based on the CBS module, which makes up its upper and lower branches, the MP1 module adds a max-pooling layer. Using max-pooling and the CBS module, the upper branch halves the image's length and width. The lower branch uses the first CBS module to halve the image channels, the second CBS layer halves the image's width and length, and finally the Concatenation (CAT) operation combines the features retrieved from the upper and lower branches, enhancing the network's feature extraction ability. The upsampling and CBS modules make up the UP module.

    Figure 4: CBS, MP1, and UP modules
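    Based on the description above (convolution, then BN, then SiLU), a minimal PyTorch sketch of a CBS block might look as follows. The kernel size and stride arguments are illustrative assumptions, since they vary by position in the network:

```python
import torch.nn as nn

class CBS(nn.Module):
    """Convolution + Batch Normalization + SiLU activation, as described above."""
    def __init__(self, c_in, c_out, kernel_size=3, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size, stride,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))
```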

    In Fig. 5, the ELAN module is made up of numerous CBS modules stacked on top of one another while maintaining the same input and output feature sizes. The learning capacity of the network is increased without deviating from the original gradient path by directing the computing units of various feature groups to learn more diverse features. The Spatial Pyramid Pooling Concat Spatial Convolutional (SPPCSPC) module shown in Fig. 6, the ELAN-H (Extreme Low-latency Architecture for Network Heads) module, and the UP module make up the majority of the Path Aggregation Feature Pyramid Network (PAFPN) structure that forms the Neck component of YOLOv7. The bottom-up approach makes it simple to move bottom-level data up to the top level, allowing for the effective fusion of features from different hierarchies. The CBS module, the CAT module, and max-pooling modules make up the majority of the SPPCSPC module. SPPCSPC uses different pooling kernel sizes, namely 5×5, 9×9, and 13×13; these modules obtain various perception fields through max-pooling. To predict confidence, category, and anchor frame, the Head uses a Re-parameterization Visual Geometry Group Block (RepVGG) structure to adjust the number of image channels for the output of the Neck at three distinct scales, and then passes through a 1×1 convolution. The model proposed in this study addresses the scale problem in TSD by utilizing the SPPCSPC module in the last layer of the proposed model. Spatial Pyramid Pooling (SPP) allows capturing features at different scales without reducing the input resolution, while Cross Stage Partial (CSP) connections reduce the number of parameters in the proposed model. By incorporating these modules, the model can effectively handle the large variations in object scale commonly encountered in TSD tasks, improving the accuracy of predictions and enhancing the overall performance of the proposed model.

    Figure 5: ELAN module

    Figure 6: SPPCSPC module
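    To make the multi-scale pooling in SPPCSPC concrete, here is a reduced PyTorch sketch of the SPP-style branch with the 5×5, 9×9, and 13×13 kernels named above. The surrounding CBS convolutions and CSP split of the full module are deliberately omitted; this is an illustrative reduction, not the complete SPPCSPC implementation:

```python
import torch
import torch.nn as nn

class SPPBranch(nn.Module):
    """Max-pooling at three perception fields (5, 9, 13), concatenated with the input."""
    def __init__(self):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in (5, 9, 13)
        )

    def forward(self, x):
        # Stride 1 with symmetric padding keeps the spatial resolution unchanged,
        # so features from different perception fields can be concatenated channel-wise
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)
```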

    CBAM is a module used to enhance the performance of CNNs by incorporating spatial and channel attention mechanisms. It focuses on capturing both local and global context information from input feature maps, allowing the network to prioritize relevant image regions while suppressing irrelevant ones. The module consists of two components: spatial attention and channel attention. The spatial attention module captures spatial dependencies by modeling interdependencies between spatial locations, enabling the network to focus on relevant regions and suppress background regions. The channel attention module captures interdependencies among channels by assessing the importance of each channel in conveying discriminative information, emphasizing informative channels while suppressing less informative ones. The spatial and channel attention maps are combined to generate an attention map that captures both spatial and channel-wise information. This attention map is used to weight the feature maps, allowing the network to selectively attend to relevant features. The YOLOv7 model is trained using the sum of the squared error between the predicted bounding boxes and the actual boxes, along with the cross-entropy loss for the class predictions. Its combination of a lightweight backbone network, effective neck, and multi-scale head makes it a powerful tool for a variety of computer vision applications. The technical contribution of this study lies in the integration of three CBAM units before the three outputs of YOLOv7, a model that already detects objects at three different scales. By incorporating the CBAM module, weights are assigned to the channel and spatial features of the feature map, which effectively increases the importance of useful features while suppressing irrelevant ones. This attention mechanism enables the proposed model to focus on target regions containing important information, improving accuracy in detecting objects of various sizes.
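    A minimal PyTorch sketch of a CBAM unit, following the standard formulation (channel attention applied first, then spatial attention), is shown below. This illustrates the mechanism described above, not the study's exact implementation:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Weights each channel by pooling spatially and passing through a shared MLP."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))  # global average pooling per channel
        mx = self.mlp(x.amax(dim=(2, 3)))   # global max pooling per channel
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    """Weights each spatial location using channel-pooled feature maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)   # average over channels
        mx = x.amax(dim=1, keepdim=True)    # max over channels
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Applies channel attention, then spatial attention, to a feature map."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)
```

    In the proposed architecture, one such unit would sit before each of YOLOv7's three output scales, so that every detection head receives an attention-refined feature map.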

    3.3 Loss Function

    The loss function used in YOLOv7 is a mixture of different components, including:

    3.3.1 Localization Loss (LL)

    This component of the loss measures the difference between the predicted bounding box coordinates and the actual bounding box coordinates. It uses the Mean Squared Error (MSE) loss function to calculate the loss.

    3.3.2 Confidence Loss

    This component of the loss measures how confident the model is in its predictions. It calculates the difference between the predicted confidence score and the actual confidence score. The confidence score indicates whether the bounding box contains an object or not. The binary cross-entropy loss function is used to calculate this confidence loss.

    3.3.3 Classification Loss

    This component of the loss measures the difference between the predicted class probabilities and the actual class probabilities. The cross-entropy loss function is used to calculate this classification loss.

    3.3.4 Total Loss

    The overall loss function is a weighted sum of these three components. The weights are hyperparameters that are tuned during training to balance the contributions of the different components. The loss function's ultimate goal is to reduce the difference between the predicted and ground truth bounding boxes, confidence scores, and class probabilities.
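    Putting the three components together, a hedged sketch of the weighted total loss might look as follows. The weight values and tensor shapes are illustrative placeholders, not the study's tuned hyperparameters:

```python
import torch.nn.functional as F

def yolo_total_loss(pred_box, true_box, pred_conf, true_conf, pred_cls, true_cls,
                    w_loc=1.0, w_conf=1.0, w_cls=1.0):
    """Weighted sum of localization (MSE), confidence (BCE), and classification (CE) losses."""
    loc_loss = F.mse_loss(pred_box, true_box)                             # box coordinates
    conf_loss = F.binary_cross_entropy_with_logits(pred_conf, true_conf)  # objectness
    cls_loss = F.cross_entropy(pred_cls, true_cls)                        # class probabilities
    return w_loc * loc_loss + w_conf * conf_loss + w_cls * cls_loss
```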

    4 Dataset Description

    The images in the dataset are part of the famous German traffic sign dataset and were pre-processed to ensure consistency in size, resolution, and color. The dataset consists of 741 images of traffic signs, divided into three subsets: a training set of 592 images (79.89%), a validation set of 99 images (13.36%), and a test set of 50 images (6.75%), split in a stratified manner to ensure that each subset has a proportional representation of each class. Table 2 shows the dataset's four classes, namely prohibitory, dangerous, mandatory, and others, with a total of 1,213 appearances of traffic signs. The prohibitory class (class 0) has 731 appearances of traffic signs, accounting for 45.89% of the dataset. This class includes traffic signs that prohibit certain actions, such as no trucks, speed limit, no traffic both ways, and no overtaking. The dangerous class (class 1) has 268 appearances, accounting for 18.04% of the dataset. This class includes traffic signs that warn drivers of potential hazards, such as construction, priority at next intersection, bend left, bend right, bend, uneven road, slippery road, road narrows, traffic signal, pedestrian crossing, school crossing, dangerous, cycles crossing, animals, and snow. The mandatory class (class 2) has 211 appearances, accounting for 13.44% of the dataset. This class includes traffic signs that indicate actions drivers must take, such as roundabout, go straight, go right, go left, go left or straight, go right or straight, keep right, and keep left. The other class (class 3) has 345 appearances, accounting for 22.63% of the dataset. This class includes traffic signs that do not fall into the prohibitory, dangerous, or mandatory categories, such as no entry, stop, give way, priority road, and restriction ends.

    Table 2: Description of dataset

    Class ID | Class       | Appearances | Share of dataset
    0        | Prohibitory | 731         | 45.89%
    1        | Dangerous   | 268         | 18.04%
    2        | Mandatory   | 211         | 13.44%
    3        | Other       | 345         | 22.63%
    Total    |             | 1,213       | 100.00%
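    The stratified split described above can be reproduced with scikit-learn. In the sketch below, the image indices and labels are placeholders standing in for the real per-image annotations:

```python
from sklearn.model_selection import train_test_split

# Placeholder indices and class labels for the 741 images; in practice these
# come from the dataset annotations (class ids 0-3)
images = list(range(741))
labels = [i % 4 for i in images]

# Carve off the 50-image test set first, then the 99-image validation set,
# stratifying on the class label at each step (592/99/50 overall)
trainval_x, test_x, trainval_y, test_y = train_test_split(
    images, labels, test_size=50, stratify=labels, random_state=0)
train_x, val_x, train_y, val_y = train_test_split(
    trainval_x, trainval_y, test_size=99, stratify=trainval_y, random_state=0)
```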

    5 Experimental Setup

    The objective of this study is to achieve high accuracy while keeping the model size and computational complexity low, making it suitable for deployment on embedded systems. The dataset contains 741 images of traffic signs with varying lighting conditions, occlusions, and backgrounds. A pre-processed version of the dataset is used, where images were cropped, resized to 416×416×3 pixels, and annotated properly. The dataset was divided into training (79.89%), validation (13.36%), and test (6.75%) sets. The latest version of the YOLOv7 object detection model is used. The model has three components that predict the class, location, and confidence of the detected traffic sign. It was first trained using the SGD optimization algorithm with batch sizes 8 and 16, with the following hyperparameter values: a learning rate of 0.001, weight decay of 0.0005, and momentum of 0.937. The model was trained for 100 epochs, and the total training time was 1.868 and 1.845 h for batch sizes 8 and 16, respectively. The model was then trained using the Adam optimization algorithm with batch sizes 8 and 16, with the same hyperparameter values: a learning rate of 0.001, weight decay of 0.0005, and momentum of 0.937. The model was trained for 100 epochs, and the total training time was 1.916 and 1.862 h for batch sizes 8 and 16, respectively. Finally, the model was trained using the AdamW optimization algorithm with batch sizes 8 and 16, with a learning rate of 0.001, weight decay of 0.0001, and momentum of 0.937. The model was trained for 100 epochs, and the total training time was 1.942 and 1.857 h for batch sizes 8 and 16, respectively.
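    The optimizer settings reported above map naturally onto PyTorch's built-in optimizers. The sketch below assumes `model` is the constructed YOLOv7+CBAM network, and it maps the reported momentum of 0.937 onto Adam's first-moment coefficient beta1, following the convention used in common YOLO training code; this is an illustration, not the study's exact script:

```python
import torch

def make_optimizer(model, name="adam"):
    params = model.parameters()
    if name == "sgd":
        # SGD runs: lr=0.001, momentum=0.937, weight_decay=0.0005
        return torch.optim.SGD(params, lr=1e-3, momentum=0.937, weight_decay=5e-4)
    if name == "adam":
        # Adam runs: lr=0.001, weight_decay=0.0005; momentum mapped to beta1
        return torch.optim.Adam(params, lr=1e-3, betas=(0.937, 0.999), weight_decay=5e-4)
    if name == "adamw":
        # AdamW runs used the smaller weight decay of 0.0001
        return torch.optim.AdamW(params, lr=1e-3, betas=(0.937, 0.999), weight_decay=1e-4)
    raise ValueError(f"unknown optimizer: {name}")
```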

    The evaluation metric mAP is used to measure the accuracy of the model in detecting traffic signs of different sizes and aspect ratios. Precision, recall, and F1 score are used as secondary evaluation metrics. The training was conducted on a single Tesla K80 GPU, which is available in the free version of Google Colab. The calculation of mAP involves calculating the AP for each class of detected object and then averaging those AP values across all classes. AP is the area under the precision vs. recall curve.

    The precision-recall curve shows how the precision and recall of the algorithm vary with the detection threshold. Precision is the fraction of detected objects that are correct. Recall is the fraction of true positive predictions among all the real positive cases. Precision and recall are calculated as follows:

    $$\text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN}$$

    where TP, FP, and FN denote the numbers of true positives, false positives, and false negatives, respectively.

    The F1 score is the harmonic mean of precision and recall, calculated as follows:

    $$F1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$

    A good model should have a high F1 score, high recall, and high precision. Precision and recall, however, typically trade off against one another; the F1 score is used to determine the ideal balance between them. The precision-recall curve shows how precision and recall are traded off at different thresholds. A low false negative rate corresponds to high recall, while a low false positive rate corresponds to high precision. A large area under the precision-recall curve indicates that both precision and recall are high.
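    The three formulas above reduce to a few lines of Python; a minimal sketch computing them from raw detection counts:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true positive, false positive, and false negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```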

    6 Results and Discussion

    The approach used in this study is evaluated using the standard mAP metric, which measures the accuracy of object detection by computing the AP over all possible levels of recall. Table 3 shows the YOLOv7 model's performance under different training configurations using four optimization algorithms, Adam, SGD, AdamW, and RMSProp, with batch sizes of 8 and 16. The outcomes demonstrate that, both during training and testing, the YOLOv7 model was able to obtain high precision and recall values. This proves that the model has low false positive (FP) and false negative (FN) rates when identifying objects in images. The mAP metric measures the model's ability to detect objects at different IOU thresholds. In particular, mAP50 evaluates the AP at an IOU threshold of 0.5 (50%), while mAP[50,95] measures the AP across all IOU thresholds from 0.5 to 0.95 in increments of 0.05 (5%). The YOLOv7 model achieves high values for both mAP50 and mAP[50,95], indicating that the model can accurately detect objects at different IOU thresholds.

    Table 3: Results of all the combinations of YOLOv7 for TSD

    The results show that Adam performs consistently better than SGD in terms of precision, recall, and mAP50, regardless of batch size. AdamW is similar to Adam in most cases but shows slightly lower mAP50 for batch size 8. Table 3 also shows that the batch size significantly impacts the performance of the YOLOv7 model. For instance, SGD shows lower precision, recall, and mAP values compared to the other configurations for batch size 16. This suggests that SGD is less effective at handling larger batch sizes, possibly due to its inherent instability in noisy, high-dimensional optimization spaces.

    Fig. 7 shows the predicted images for each class, including dangerous, other, prohibitory, and mandatory signs, with each image accompanied by the corresponding confidence level. Fig. 7a shows that the model accurately predicted dangerous-class signs with high confidence levels, indicating its ability to detect and classify potentially hazardous situations on the road. Fig. 7b shows that the model was able to predict other-class signs with high confidence levels, indicating its ability to accurately classify signs that do not fit into well-defined categories. Fig. 7c shows that the model accurately predicted prohibitory-class signs with high confidence levels, indicating its ability to detect and classify signs that restrict certain behaviors on the road. Similarly, Fig. 7d indicates that the model was able to predict mandatory-class signs with high confidence levels, indicating its ability to accurately detect and classify signs that require specific actions from drivers.

    Figure 7: Predicted image for each class

    Figs. 8a and 9a illustrate that the YOLOv7 model performs well: it has high recall at low confidence, the recall values decrease as the confidence increases, and confidence finally reaches 1 when recall becomes zero. Figs. 8b and 9b illustrate that the model performs well, with high precision at high confidence; as the confidence threshold decreases, the precision also decreases. Figs. 8c and 9c show that the curve lying near the upper right corner indicates that as recall increases, the reduction in precision is not immediately apparent, so the overall performance of the model is better. Figs. 8d and 9d show that the model has a high F1 score at high confidence, and as the confidence threshold decreases, the F1 score also decreases, showing that the model becomes less conservative and makes more predictions. Both Adam models, with batch sizes 8 and 16, perform well on the dataset. However, the model with batch size 8 performs slightly better than the one with batch size 16: the precision and mAP50 of Adam with batch size 8 are higher than those of Adam with batch size 16.

    Fig. 10a illustrates that the model performs well: it has high recall at low confidence, the recall values decrease as the confidence increases, and confidence finally reaches 1 when recall is zero. Fig. 10b shows that the model performs well, but its performance is lower than the Adam models', as the class-wise predictions deviate from the overall prediction. Comparing Fig. 10c with Figs. 8c and 9c shows that the Adam optimizer performs better than the SGD optimizer. Fig. 10d shows that the model has a high F1 score at high confidence, although not as high as the Adam models, and that the F1 score decreases as the confidence decreases, showing that the model becomes less conservative.

    Figure 8: Adam optimizer with batch size 8

    Figure 9: Adam optimizer with batch size 16

    Figure 10: SGD optimizer with batch size 8

    Fig. 11 shows that this model performs very poorly compared to the previous Adam models and to SGD with batch size 8. For instance, the precision vs. recall curve in Fig. 11c sits far below the upper right corner, indicating that the model has not learned properly and is struggling to predict the test images; in Fig. 11d, the model has a very low F1 score at high confidence, and the F1 score decreases further as the confidence decreases.

    Figure 11: SGD optimizer with batch size 16

    Figs. 12 and 13 indicate that the AdamW models with batch sizes 8 and 16 perform well, but not as well as the Adam models, although they perform better than the SGD optimizer on this particular dataset. AdamW with batch size 8 performs better than AdamW with batch size 16, indicating that the model with the smaller batch size performs better on this dataset.

    Figure 12: AdamW optimizer with batch size 8

    Figure 13: AdamW optimizer with batch size 16

    Figs. 14 and 15 illustrate that the RMSProp models with batch sizes 8 and 16 perform well, but not as well as the models with the other optimizers. Comparing the two batch sizes, RMSProp with batch size 16 performs better than RMSProp with batch size 8, indicating that for this optimizer the larger batch size performs better on this dataset. However, its performance is lower than that of all the other optimizers.

    Figure 14: RMSProp optimizer with batch size 8

    Figure 15: RMSProp optimizer with batch size 16

    The results prove that Adam is a more effective optimization algorithm for training the YOLOv7 model and that the performance of the model varies with batch size. This information can be useful for selecting the best configuration for a specific use case. Table 4 shows the comparison of the proposed model with existing models.

    Table 4: Comparison of results

    Overall, the results illustrate the effectiveness of the YOLOv7 model in object detection tasks and provide valuable insights into its performance under different training configurations. This study concludes that Adam with batch size 8 is the most effective of all the combinations above for the TSD use case.

    Considering the proposed model's effectiveness, it is important to acknowledge its potential limitations in extreme weather conditions and low-lighting scenarios. Since the model is trained on images captured under specific conditions, its performance may be compromised when exposed to diverse and challenging environments. To enhance the model's generalizability and improve its performance, a more diverse range of input images encompassing varied conditions can be included during the training process. This will enable the model to learn and adapt to different environmental factors, ultimately enhancing its robustness and reliability in real-world applications.

    7 Conclusion and Future Scope

    The proposed study is implemented with four optimization algorithms, namely Adam, SGD, AdamW, and RMSProp, with different batch sizes for TSD using YOLOv7. The integration of CBAM improved the model's performance by focusing on the spatial and channel regions of the input. The evaluation was performed on both the training and testing datasets using four metrics: precision, recall, mAP50, and mAP[50,95]. The experimental results show that the Adam optimizer with batch sizes 8 and 16 achieves the highest accuracy in terms of all four metrics for both training and testing datasets. Specifically, the precision and recall rates of the Adam optimizer are very high, proving that the model can correctly identify traffic signs in the input images with high accuracy. Moreover, the results indicate that the choice of batch size can also have a substantial effect on the accuracy of the model. In general, a smaller batch size can lead to better performance, but it also increases the training time. Hence, the trade-off between accuracy and training time should be considered when selecting the batch size.

    The results obtained in this research provide a strong basis for future work in the field of traffic sign detection. There are several directions in which this work can be extended. One prominent direction for future work is optimizing the performance of YOLOv7 specifically for real-time traffic sign detection on embedded systems like the Raspberry Pi and NVIDIA Jetson. By enhancing the efficiency and speed of the model, its applicability in resource-constrained environments can be significantly expanded, enabling practical implementation in various settings. Another crucial aspect that holds great potential is the interpretability of deep learning models. In recent years, there has been a growing interest in understanding and visualizing the decision-making processes of such models. Hence, future research can delve into exploring the interpretability of YOLOv7 for traffic sign detection. This can involve visualizing saliency maps or activation patterns of the model to gain insights into how it makes predictions. These directions hold immense potential for advancing traffic sign detection systems, paving the way for improved road safety, efficient traffic management, and safer autonomous vehicles in the future.

    Acknowledgement: The authors would like to thank the editors and reviewers.

    Funding Statement: The authors received no specific funding for this study.

    Author Contributions: Conceptualization, validation, review and editing: Kuppusamy Pothanaicker; supervision: Kuppusamy Pothanaicker and Celestine Iwendi; methodology, formal analysis, result analysis, writing: Sanjay Mythili and Deepashree Pradeep Vaideeswar. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: Data used in this study is available from the Traffic Signs Dataset in YOLO format, Version 1. Retrieved December 26, 2022 from https://www.kaggle.com/datasets/valentynsichkar/traffic-signs-dataset-in-yolo-format.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
