
    Detection of Safety Helmet-Wearing Based on the YOLO_CA Model

    Computers, Materials & Continua, 2023, Issue 12

    Xiaoqin Wu, Songrong Qian and Ming Yang

    State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China

    ABSTRACT Safety helmets reduce head injuries from object impacts, lower the probability of safety accidents, and are of great significance to construction safety. However, for a variety of reasons, construction workers today may not strictly follow the rules on wearing safety helmets. To strengthen construction-site safety, the traditional practice is to manage it through methods such as regular inspections by safety officers, but the cost is high and the effect is poor. With the popularization of construction-site video monitoring, manual video-based supervision has become possible, but the monitors need to be on duty at all times and are prone to negligence. Therefore, this study establishes a lightweight model, YOLO_CA, based on YOLOv5 for the automatic detection of construction workers' helmet wearing, which overcomes the inefficiency and expense of current manual monitoring methods. Adding coordinate attention (CA) to the YOLOv5 backbone strengthens detection accuracy in complex scenes by extracting critical information and suppressing non-critical information, and depthwise separable convolution (DWConv) further compresses the parameters. In addition, to improve the speed of feature representation, we swap out C3 for a Ghost module, which decreases the floating-point operations needed for feature-channel fusion, and CIOU_Loss is substituted with EIOU_Loss to enhance the algorithm's localization accuracy. The experimental results show that the YOLO_CA model achieves good results on all indicators compared with mainstream models. Compared with the original model, the mAP of the optimized model increases by 1.13%, GFLOPs are cut by 17.5%, total model parameters decrease by 6.84%, the weight size is cut by 4.26%, and FPS increases by 39.58%; the detection performance and model size of this model can meet the requirements of lightweight embedding.

    KEYWORDS Safety helmet; CA; YOLOv5; Ghost module

    1 Introduction

    Construction sites [1] and other building works are generally exposed to the outdoors, so the risk factors are much greater than in other industrial sectors, resulting in a higher accident rate. Wearing personal protective equipment [2] can protect workers from safety accidents and decrease injuries and even deaths [3]. In reality, however, there are many cases of helmets not being worn, or not being worn correctly, owing for example to sultry weather or a lack of safety awareness among employees. Moreover, it is often difficult for safety managers on construction sites to keep track of whether employees are wearing helmets, and this has led to several production-safety incidents. In China, based on data from the Ministry of Housing and Urban-Rural Development, there were 689 production-safety accidents in housing and municipal construction in 2020, and 794 workers died during production activities. Among them, there were 407 falls from height, accounting for 59.07% of the total, and 83 accidents involving object strikes, accounting for 12.05% [4]. Similarly, 123 people suffered fatal injuries on the job in the United Kingdom in 2021/2022, as reported by the Health and Safety Executive (HSE), with falls from height being the most common fatal accident at 23.6% [5] and being struck by an object accounting for 14.6%. Helmets protect workers by absorbing the impact of objects hitting their heads directly, and studies have shown that helmets are an effective way for construction workers to lessen the risk of skull fractures, neck sprains, and concussions in falls from height [6]. At the same time, helmets can also minimize the risk of serious brain injury from impacts [7].

    Helmet-wearing supervision is an important part of creating a safe environment for construction operations. Construction units usually rely on manual supervision, but because workers' activities cover too large an area, they cannot be managed promptly in real scenarios. Helmet-wearing detection based on intelligent technology is therefore gradually becoming a vital management measure for companies. Safety helmet-wearing detection methods can be divided into two groups: those that rely on sensors and those that rely on vision. Sensor-based detection approaches focus primarily on remote location and tracking technologies. Zhang et al. [8] suggested Cyber-Physical Systems (CPS) for real-time monitoring and detection. Zhang et al. used an Internet of Things (IoT) architecture to design a smart helmet system [9], which identifies whether the helmet is being used based on whether both the infrared light detector and the thermal infrared sensor in the helmet are activated. Nevertheless, detector-tracking technology is limited by the need to wear physical tags or sensors and requires a large investment in a significant number of extra devices, resulting in low scalability. Besides, with the present radio-frequency identification (RFID) solutions, connecting to the network requires workers to wear a terminal device, which is inconvenient for their work [10].

    Contrasted with sensor-based detection techniques, visual detection methods are gaining attention [11], since collecting rich images of building sites gives a faster and more comprehensive grasp of complex construction scenes [12]. Fang et al. suggested an improved target detection method for workers not wearing helmets, but the method is inefficient and does not meet real-time requirements [13]. In contrast, Wu et al. improved the performance of the Single Shot MultiBox Detector (SSD) for helmet-wearing detection by using a reverse progressive attention mechanism [14]. K-nearest neighbors (KNN) was used by Wu et al. to detect moving objects in video and classify pedestrians, heads, and helmets [15]. Xie et al. examined the performance of several detection techniques on the same dataset, and You Only Look Once (YOLO) had the best average accuracy and the fastest detection compared with SSD and the Faster region-based convolutional neural network (Faster R-CNN) [16]. Wang et al. enhanced the representation ability of target features by introducing convolutional block attention modules in the neck to assign weights and weaken feature extraction from complex backgrounds [17]. Wen et al. used the soft-NMS algorithm to optimize the YOLOv3 model; the improved YOLOv3 algorithm was able to detect occluded targets effectively, but detection was not satisfactory when the occlusion rate exceeded 60% [18]. Wang et al. proposed an improved helmet-wear detection algorithm, YOLOv4-P, which increases the mAP value by 2.15% compared with the YOLOv4 algorithm [19]. Another study proposed an improved lightweight YOLOv5 vehicle detection method that improves the model's performance by inserting C3Ghost and Ghost modules in the YOLOv5 neck network and adding a convolutional block attention module (CBAM) in the backbone. Head counting is gradually becoming an emerging research hotspot in the field of video surveillance: Khan et al. generated scale-aware head proposals based on scale maps and proposed a method for counting people in sports videos by detecting their heads, which solves the problem of differing scales and is clearly superior to state-of-the-art (SoA) methods [20]. In addition, an end-to-end scale-invariant head detection framework has been proposed that can handle a wide range of scales and is important for high-density crowd counting and crowd safety management [21].

    The remainder of this article is organized as follows: Section 2 describes the relevant materials and methods. Section 3 presents the experiments and analyses. Section 4 provides a detailed discussion. Section 5 concludes the article.

    2 Materials and Methods

    2.1 Environment and Data for Experimentation

    The hardware and software settings of the experimental platform are as follows: the operating system is Linux, the GPU is an NVIDIA Tesla A10 with 24 GB of memory, and the deep learning models are implemented in Python.

    Some existing datasets are mostly collected from advertisements, and some of their images have backgrounds other than construction sites [14]. Considering the robustness the model should have in practical applications, this research creates a new dataset for detecting construction workers wearing helmets, varied in building background, angle, and category. A total of 10,700 pictures were gathered as the dataset for model training by restructuring open-source datasets, and were randomly divided 1:1:8 into a validation set, a test set, and a training set. Using the graphical image annotation tool LabelImg [22], the images were labeled with two classes and saved in YOLO format. To increase the reliability of the experimental data, images in both the training and validation sets were uniformly scaled to 640 × 640 before being used to train the models. The Mosaic data augmentation technique cuts and stitches together four randomly chosen photos, increasing data variety; gradients are updated using the asynchronous stochastic gradient descent (SGD) approach. After training, model performance is assessed on the test set.
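    As a concrete illustration, a minimal Python sketch of the 1:1:8 split described above is given below; the folder layout and file names are illustrative assumptions, not the authors' actual pipeline.

    ```python
    # Hypothetical 1:1:8 validation/test/train split of an image folder.
    import random
    from pathlib import Path

    random.seed(0)                                         # reproducible split
    images = sorted(Path("dataset/images").glob("*.jpg"))  # assumed image location
    random.shuffle(images)

    n = len(images)
    n_val = n_test = n // 10                   # 10% validation, 10% test
    splits = {
        "val": images[:n_val],
        "test": images[n_val:n_val + n_test],
        "train": images[n_val + n_test:],      # remaining ~80% for training
    }
    for name, files in splits.items():         # write one image list per split
        Path(f"dataset/{name}.txt").write_text("\n".join(str(f) for f in files))
    ```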

    2.2 Performance Evaluation Metrics

    To validate the YOLO_CA model proposed in this study, evaluation criteria such as precision, recall, and mAP are applied to assess the performance of the network model. mAP is the most extensively used evaluation statistic for target detection algorithms; it is the area under the precision-recall (P-R) curve, i.e., the average precision achieved over varying recall levels [23]. The greater the mean average precision (mAP), the more effective the target detection model is on the dataset [24]. It is calculated as the mean over all categories at an IOU threshold of 0.5 [25]. Eqs. (1) to (4) show how precision, recall, and mAP are calculated:
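    Precision is the proportion of predicted positive targets that are truly positive:

    $$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{1}$$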

    Recall is the proportion of all true targets that are successfully detected. It is defined as:
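    $$\mathrm{Recall} = \frac{TP}{TP + FN} \tag{2}$$

    The average precision (AP) of one class is the area under its P-R curve, and mAP is the mean of the AP values over all N classes:

    $$AP = \int_{0}^{1} P(R)\, dR \tag{3}$$

    $$mAP = \frac{1}{N}\sum_{i=1}^{N} AP_i \tag{4}$$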

    where TP is the number of correctly classified positive instances, FP is the number of negative instances incorrectly classified as positive, FN is the number of positive instances that were missed, and TN is the number of correctly classified negative instances.

    The average detection time includes both the network inference time and the NMS post-processing time. Model size is the size of the weight file stored after training is complete.

    2.3 Building a Deep Learning Network

    2.3.1 YOLO_CA Network Structure

    YOLOv5 comprises four network architectures (YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x) of increasing depth and width [26]; the four model structures grow progressively in overall size and number of parameters [27]. Because a helmet-wearing detection algorithm for building construction scenarios has to meet real-time requirements, this research uses YOLOv5s, the version with the smallest size and fewest parameters, as the base detection network. Fig. 1 presents the proposed organizational framework of the YOLO_CA model. To compress the network parameters, minimize the network computation, and enhance model inference performance, the Ghost module is introduced into the backbone of the model to reduce the number of model parameters and GFLOPs [28]. In the bottleneck module, depthwise separable convolution (DWConv) is used in place of the original model's Conv to achieve parameter compression. Finally, the Coordinate Attention (CA) mechanism is added to the backbone to improve feature extraction [29], thereby enhancing helmet-detection precision in complicated circumstances.

    2.3.2 Coordinate Attention Mechanism

    The idea of an attention mechanism resembles the human nervous system: it distinguishes useful from useless information and focuses on the essentials of the target to be found [30]. To enable more accurate localization and detection of target regions in complex environments, a CA structure [29] is introduced, which has the following benefits. First, it captures position- and orientation-aware information in addition to cross-channel information, enabling the model to recognize and locate the object of interest more accurately. Second, CA is light and flexible, and easy to insert into classical modules. Finally, for a pre-trained model, the CA technique can be very helpful in downstream tasks on lightweight networks, particularly dense-prediction tasks such as semantic segmentation. Coordinate attention captures channel relationships and long-range dependencies with precise location information in two steps: coordinate information embedding and coordinate attention generation. Fig. 2 illustrates the specific principle.

    Figure 1: Structure of the proposed YOLO_CA model

    Figure 2: CA module structure

    Step 1: Coordinate information embedding. For the attention module to capture spatial long-range dependencies accurately, channel attention often uses global pooling to globally encode spatial information into channel descriptors:
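    $$z_c = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} x_c(i, j) \tag{5}$$

    Two one-dimensional feature encoding operations are then constructed from this initial global pooling.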

    For input X, the information of different channels is encoded along the horizontal and vertical directions using pooling kernels of size (H, 1) and (1, W). The output of channel c at height h can then be written as:
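    $$z_c^h(h) = \frac{1}{W}\sum_{0 \le i < W} x_c(h, i) \tag{6}$$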

    Similarly, the output of channel c at width w is:
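    $$z_c^w(w) = \frac{1}{H}\sum_{0 \le j < H} x_c(j, w) \tag{7}$$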

    A pair of direction-aware feature maps is generated by applying these two transformations, aggregating features along the two spatial directions [31]. Capturing long-range dependencies along one spatial direction while retaining precise location details along the other strengthens the network's capacity to pinpoint the location of the desired target [32].

    Step 2: Coordinate attention generation. The feature maps of the global receptive field's width and height are concatenated and passed to a shared 1 × 1 convolution module F1 that reduces the dimensionality to C/r. Feeding the batch-normalized result into a nonlinear activation function yields an intermediate feature map f of shape 1 × (W + H) × C/r, as in Eq. (8):
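    $$f = \delta\left(F_1\left(\left[z^h, z^w\right]\right)\right) \tag{8}$$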

    where δ represents the nonlinear activation function, f ∈ R^((C/r)×(W+H)) is the intermediate feature map encoding spatial information in the horizontal and vertical directions, and r is the downsampling ratio.

    The map f is then split along the spatial dimension into f^h and f^w, and two 1 × 1 convolutions F_h and F_w restore them to the original height and width with exactly as many channels as the input X. Applying the sigmoid activation function gives the attention weights g^h along the height axis and g^w along the width axis of the feature maps:
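    $$g^h = \sigma\left(F_h\left(f^h\right)\right) \tag{9}$$

    $$g^w = \sigma\left(F_w\left(f^w\right)\right) \tag{10}$$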

    Eventually, as shown in Eq. (11), the output feature map weighted by the width and height attention maps is obtained by multiplication:
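    $$y_c(i, j) = x_c(i, j) \times g_c^h(i) \times g_c^w(j) \tag{11}$$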

    The CA mechanism is a new mechanism that embeds location information into channel attention. Embedding it into the backbone network has been shown to be a lightweight change, and in the subsequent experiments it improves the performance of helmet detection in complex backgrounds.
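    For concreteness, the following PyTorch sketch implements a CA block along the lines of Eqs. (6)-(11); the reduction default, minimum width, and hard-swish activation follow the commonly used reference implementation of [29] and are assumptions rather than the authors' exact code.

    ```python
    import torch
    import torch.nn as nn

    class CoordinateAttention(nn.Module):
        """Coordinate Attention block: directional pooling, shared 1x1 conv,
        then per-axis attention weights multiplied back onto the input."""
        def __init__(self, channels: int, reduction: int = 32):
            super().__init__()
            mid = max(8, channels // reduction)            # C/r intermediate width
            self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (H,1) pooling -> z^h
            self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (1,W) pooling -> z^w
            self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)   # shared F1
            self.bn1 = nn.BatchNorm2d(mid)
            self.act = nn.Hardswish()                      # nonlinear activation delta
            self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)  # F_h
            self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)  # F_w

        def forward(self, x):
            b, c, h, w = x.shape
            x_h = self.pool_h(x)                       # B x C x H x 1
            x_w = self.pool_w(x).permute(0, 1, 3, 2)   # B x C x W x 1
            y = torch.cat([x_h, x_w], dim=2)           # concat along spatial dim
            y = self.act(self.bn1(self.conv1(y)))      # f = delta(BN(F1([z^h, z^w])))
            f_h, f_w = torch.split(y, [h, w], dim=2)   # split back into two maps
            f_w = f_w.permute(0, 1, 3, 2)
            g_h = torch.sigmoid(self.conv_h(f_h))      # height attention, Eq. (9)
            g_w = torch.sigmoid(self.conv_w(f_w))      # width attention, Eq. (10)
            return x * g_h * g_w                       # weighted output, Eq. (11)
    ```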

    2.3.3 Ghost Module

    The YOLOv5 model uses the C3 structure for backbone feature extraction, but this structure's vast array of parameters and sluggish detection speed limit its applications, making it difficult to deploy in some practical scenarios, such as mobile or embedded devices [33]. Therefore, this paper uses the Ghost module proposed by Han et al. [34] for building efficient neural networks to replace the original C3 structure, achieving a lightweight network model that balances speed and accuracy. The basic Ghost module splits the initial convolutional layer into two parts, first generating a set of intrinsic feature maps with fewer filters; a series of cheap linear transformations is then applied to these intrinsic maps to efficiently generate the remaining (ghost) feature maps. The underlying concept is shown in Fig. 3.

    Figure 3: Conventional convolution and Ghost module

    Assume that the input feature map size is h × w × c and the output feature map size is h′ × w′ × c′, where h and w are the input height and width and h′ and w′ are the output height and width. To produce n = c′ output maps, the Ghost module applies one identity mapping and m × (s − 1) = (n/s) × (s − 1) cheap linear operations, each with a convolution kernel of size d × d, whereas the regular convolution kernel size is k × k. The linear operations in a Ghost module are of similar size, with d × d of the same magnitude as k × k, and s ≪ c. The theoretical speed-up ratio of the Ghost module over standard convolution is:
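    $$r_s = \frac{n \cdot h' \cdot w' \cdot c \cdot k \cdot k}{\frac{n}{s} \cdot h' \cdot w' \cdot c \cdot k \cdot k + (s - 1) \cdot \frac{n}{s} \cdot h' \cdot w' \cdot d \cdot d} \approx \frac{s \cdot c}{c + s - 1} \approx s$$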

    In the same way, the compression ratio of the number of parameters is:
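    $$r_c = \frac{n \cdot c \cdot k \cdot k}{\frac{n}{s} \cdot c \cdot k \cdot k + (s - 1) \cdot \frac{n}{s} \cdot d \cdot d} \approx \frac{s \cdot c}{c + s - 1} \approx s$$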

    From the above, it is obvious that Ghost modules are superior in terms of computational cost. A Ghost Bottleneck can be formed by stacking two Ghost modules, and replacing the bottleneck in the C3 module with the Ghost Bottleneck yields a new C3Ghost, which reduces computing cost and model size.
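    A minimal PyTorch sketch of the Ghost module described above follows; the SiLU activation, kernel sizes, and the s = 2 default (half intrinsic maps, half ghost maps) are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    class GhostModule(nn.Module):
        """Ghost module: a primary convolution produces n/s intrinsic maps;
        cheap depthwise d x d operations generate the remaining ghost maps."""
        def __init__(self, in_ch: int, out_ch: int, k: int = 1, s: int = 2, d: int = 3):
            super().__init__()
            init_ch = out_ch // s                  # intrinsic feature maps (m)
            ghost_ch = out_ch - init_ch            # maps from cheap operations
            self.primary = nn.Sequential(          # ordinary k x k convolution
                nn.Conv2d(in_ch, init_ch, k, padding=k // 2, bias=False),
                nn.BatchNorm2d(init_ch), nn.SiLU())
            self.cheap = nn.Sequential(            # depthwise d x d linear ops
                nn.Conv2d(init_ch, ghost_ch, d, padding=d // 2,
                          groups=init_ch, bias=False),
                nn.BatchNorm2d(ghost_ch), nn.SiLU())

        def forward(self, x):
            y = self.primary(x)                    # intrinsic maps
            return torch.cat([y, self.cheap(y)], dim=1)  # intrinsic + ghosts
    ```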

    2.3.4 Depthwise Separable Convolution Module

    To further reduce the parameters in the network and create a lightweight model, the original Neck layer of YOLOv5s is updated: the Conv in the original PANet module is replaced by DWConv [35]. Unlike traditional convolution, the core idea of DWConv, as used in Xception [36] and MobileNet [37], is to split a convolution into two separate stages: a Depthwise Convolution (DW) layer and a Pointwise Convolution (PW) layer [38]. DW convolves each channel separately, i.e., each convolution kernel corresponds to one input channel, so the features of each channel are kept apart and effective information from different channels at the same spatial location cannot be combined; a second stage (PW) is therefore needed to produce a fresh feature map by combining the separate DW feature maps, as shown in Fig. 4b. To prevent vanishing gradients and ill-conditioned parameters, the BN algorithm adjusts the distribution of the data [39].
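    The DW + PW factorization can be sketched in PyTorch as follows; the kernel size and activation choice are illustrative assumptions.

    ```python
    import torch.nn as nn

    def dw_separable_conv(in_ch: int, out_ch: int, k: int = 3, stride: int = 1):
        """Depthwise separable convolution: per-channel DW conv followed by a
        1x1 PW conv that mixes channels, with BN after each stage."""
        return nn.Sequential(
            nn.Conv2d(in_ch, in_ch, k, stride, k // 2, groups=in_ch, bias=False),  # DW
            nn.BatchNorm2d(in_ch),
            nn.SiLU(),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),                               # PW
            nn.BatchNorm2d(out_ch),
            nn.SiLU(),
        )
    ```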

    Figure 4: Standard convolution and depthwise separable convolution

    2.3.5 EIOU Loss Function

    1. Limits of CIOU

    The Complete Intersection over Union (CIOU) loss takes three important geometric factors into account: overlap area, center distance, and aspect ratio [40]. Given the prediction box B and the target box Bgt, the CIOU loss is defined as:
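    $$L_{CIOU} = 1 - IoU + \frac{\rho^2\left(b, b^{gt}\right)}{c^2} + \alpha v$$

    where ρ(·) denotes the Euclidean distance between the centers b and b^gt of the two boxes, c is the diagonal length of the smallest box enclosing both, and the aspect-ratio term is

    $$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2, \qquad \alpha = \frac{v}{(1 - IoU) + v}$$

    However, v reflects only the difference in aspect ratio, not the true differences in width and height: if the predicted box and the target box have the same aspect ratio but different sizes, the penalty vanishes, and the width and height cannot be adjusted independently, which slows convergence and degrades localization.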

    2. Suggested Approach

    To deal with the above problem, the more efficient Efficient Intersection over Union (EIOU) loss is used instead of the network model's CIOU loss [41]. EIOU builds on CIOU: the loss consists of three components, the overlap area, the center distance, and the true width-height difference between the target box and the anchor box. The first two components follow the approach used in CIOU, while the width-height loss resolves CIOU's ambiguous aspect-ratio definition by directly minimizing the width and height differences between the target box and the anchor box, hastening convergence. It is defined as follows:
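    $$L_{EIOU} = L_{IOU} + L_{dis} + L_{asp} = 1 - IoU + \frac{\rho^2\left(b, b^{gt}\right)}{c^2} + \frac{\rho^2\left(w, w^{gt}\right)}{w_c^2} + \frac{\rho^2\left(h, h^{gt}\right)}{h_c^2}$$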

    where wc and hc are the width and height of the smallest enclosing box covering the two boxes. The loss function thus consists of three parts: the IOU loss L_IOU, the distance loss L_dis, and the aspect loss L_asp. In this way, the beneficial properties of the CIOU loss are kept, while the EIOU loss directly minimizes the disparity between the width and height of the target and anchor boxes, giving a faster convergence rate and better localization results.
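    For reference, a minimal PyTorch sketch of the EIOU loss for boxes in (x1, y1, x2, y2) format, following the three-part decomposition above; this is an illustrative implementation, not the authors' code.

    ```python
    import torch

    def eiou_loss(pred, target, eps: float = 1e-7):
        """EIOU = (1 - IoU) + center-distance term + width/height terms."""
        # Intersection and union -> IoU term
        iw = (torch.min(pred[..., 2], target[..., 2]) -
              torch.max(pred[..., 0], target[..., 0])).clamp(0)
        ih = (torch.min(pred[..., 3], target[..., 3]) -
              torch.max(pred[..., 1], target[..., 1])).clamp(0)
        inter = iw * ih
        area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
        area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
        iou = inter / (area_p + area_t - inter + eps)

        # Smallest enclosing box: width wc, height hc, squared diagonal c^2
        cw = torch.max(pred[..., 2], target[..., 2]) - torch.min(pred[..., 0], target[..., 0])
        ch = torch.max(pred[..., 3], target[..., 3]) - torch.min(pred[..., 1], target[..., 1])
        c2 = cw ** 2 + ch ** 2 + eps

        # Normalized distance between box centers
        dx = (pred[..., 0] + pred[..., 2] - target[..., 0] - target[..., 2]) / 2
        dy = (pred[..., 1] + pred[..., 3] - target[..., 1] - target[..., 3]) / 2
        dist = (dx ** 2 + dy ** 2) / c2

        # Direct width and height difference terms
        wp, hp = pred[..., 2] - pred[..., 0], pred[..., 3] - pred[..., 1]
        wt, ht = target[..., 2] - target[..., 0], target[..., 3] - target[..., 1]
        asp = (wp - wt) ** 2 / (cw ** 2 + eps) + (hp - ht) ** 2 / (ch ** 2 + eps)

        return 1 - iou + dist + asp
    ```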

    3 Experiment and Analysis

    3.1 The Outcomes of Model Training

    During training, the YOLO_CA network model uses the Adam optimizer with a batch size of 16, a momentum of 0.93, a weight decay of 0.0005, and a starting learning rate of 0.01. Each model is trained with the same parameters for 100 epochs, and the best weights obtained are used as the weights for helmet-wearing detection. As Fig. 5 shows, the model's training precision gradually increases with the number of iterations, and the loss value gradually decreases. Learning efficiency is higher in the initial training phase, and the training loss curve converges quickly. In the first 20 epochs, mAP, recall, and precision grow rapidly as the network converges; after about 50 epochs the curves flatten, stabilizing at around epoch 80. Finally, mAP, Precision, and Recall stabilize at 96.73%, 95.87%, and 95.31%, respectively. The Precision-Recall distributions over training are shown in Fig. 5. During testing, the thresholds for non-maximum suppression (NMS), confidence (C), and intersection over union (IoU) were set to 0.45, 0.25, and 0.5, respectively.

    To test the recognition performance of the model, the images of the divided test and validation sets were input into the trained network for detection. The validation results of the YOLO_CA model are shown in Table 1: the recognition accuracy on the validation set is 93.6%, the recall is 90.1%, the mAP is 94.8%, and the recognition speed is 134 FPS. Fig. 6 displays the recognition outcomes for part of the test set.

    Table 1: YOLO_CA model validation

    To validate the generality of the model proposed in this study, 600 and 402 images from the public datasets SHWD [42] and CHV [43], respectively, were used for validation; the results are shown in Table 1. The mAP on the two publicly available datasets is 93.0% and 93.6%, the accuracy is 96.0% and 95.7%, and the FPS on GPU is 117 and 119, respectively. Fig. 7 displays some of the test results on the public datasets. In summary, the model shows good robustness and generality.

    Figure 5: Precision and recall rate

    Figure 6: Partial test set identification results

    Figure 7: Partial public dataset test results

    3.2 Analysis of Model Improvement Performance

    3.2.1 Comparative Analysis of Similar Methods

    To further verify the performance of the proposed method, it was compared with network models including YOLOv3_tiny, YOLOv5s, YOLOv7, and the newly released YOLOv8. Fig. 8 compares Recall, Precision, detection speed on GPU, and mAP for each model; the weight parameters and other evaluation indicators are shown in Table 2; and Fig. 9 shows the detection rate (FPS) of each model.

    Table 2: Comparison of evaluation indexes of different models

    Figure 8: Comparison of Recall, Precision, detection speed on GPU, and mAP for each model

    From Table 2, it can be seen that, compared with the other models in the YOLO series, YOLO_CA has the best mAP, accuracy, and recall. In terms of model weights and parameters, both YOLOv7-tiny and YOLOv8n are smaller than the YOLO_CA model: YOLOv7-tiny's weights and parameters are 9.7% and 8.7% smaller, respectively, and, notably, YOLOv8n's weights and parameters are both about 54% smaller; however, its FPS and mAP are also 59.5% and 7.63% lower than YOLO_CA's, respectively. In terms of detection rate, as shown in Table 2 and Fig. 9, the FPS of YOLOv7-tiny and YOLOv8n are 20% and 28% higher than YOLO_CA's, respectively, but the Recall of YOLO_CA is 11.31% and 14.41% higher than that of YOLOv7-tiny and YOLOv8n, respectively. From the above, it is clear that the method proposed in this paper offers benefits over existing typical network models.

    Figure 9: FPS for each model

    3.2.2 Ablation Experiments

    Ablation tests were carried out to validate the performance of the various components and to assess the validity and feasibility of the proposed model. Because the model proposed in this study is based on YOLOv5s, the YOLOv5s model is used as the benchmark for the ablation experiments, comparing two aspects: CA attention mechanisms at different locations, and different lightweight models. They are as follows:

    1. Comparative analysis of CA attention mechanisms in different locations

    To further evaluate the impact of the CA mechanism's location on algorithm performance, attention mechanisms (CA) were added to the backbone and the neck of the model, respectively. The experimental results are shown in Table 3; Fig. 10's a-c and a+c correspond to CA addition locations 1-4 in Table 3. According to the experimental results, the CA addition position used in this study gives the most effective detection, and its accuracy is substantially enhanced.

    Table 3: Comparison of the results of the CA attention mechanism in different locations

    2. Comparative analysis of different lightweight models

    To make the model more convenient to apply in actual production practice, the model was subjected to lightweight processing. To achieve optimal experimental results, YOLOv5s was used as the experimental benchmark, and the other parameters of the model were kept consistent.

    The experimental results are shown in Table 4; the final model has good recognition accuracy and balances model weight size and running speed. From Table 4, it can be concluded that the fourth model has the best experimental results, with the fastest detection speed and the best mAP. Its FPS is 39.58%, 10.74%, and 14.53% faster than the other three models, respectively, and its mAP of 96.73% is 1.13%, 0.62%, and 0.41% higher than the other three. Compared with the original model, the number of parameters decreases by 20.54%, 3.86%, and 6.84%, the GFLOPs are lowered by 4.7, 1.2, and 2.8, and the weight sizes are reduced by 2.5, 0.9, and 0.6, respectively, relative to YOLOv5s. Therefore, the model lightweighting method used in this study is effective. In summary, the fourth model in the table is chosen as the final model.

    Table 4: Comparison of different lightweight models

    Figure 10: CA attention mechanism in different locations

    4 Discussion

    This paper discusses an automatic method for real-time detection of helmet wearing. To satisfy the requirement of construction enterprises to know the helmet-wearing status of construction workers at any time, this study develops the YOLO_CA model in depth. Detection performance in complex scenes is improved by adding an attention mechanism, and EIoU is used as the loss function, resulting in faster convergence and better localization. The method's feasibility is as follows:

    1. Regarding model weight size and detection precision, the dataset scenes used in this study are all based on images of construction sites with relatively complex background information, which matches actual building construction scenarios. To allow better deployment on devices, the model was optimized to decrease the number of parameters and the size of the model's weights. After optimization, the weight size and the number of parameters are reduced by 4.26% and 6.8%, respectively, while accuracy and mAP drop by only 0.37% and 0.49%, respectively, during the lightweighting of the model. Consequently, the model suggested in this study can be applied in practice and produces good results.

    2. As for detection speed, to satisfy the real-time requirements of supervision and management and to detect multiple objects, the proposed YOLO_CA model follows the single-stage object detection approach. In the same hardware environment, the one-stage model outperforms the two-stage method in processing speed. Compared with other one-stage methods (such as YOLOv3_tiny, YOLOv7, and YOLOv8 in the above experiments), this model has a higher mAP and accuracy as well as faster detection. Although the updated network structure is still complicated compared with the basic model, the improved model has better detection speed and accuracy, which can meet real-time requirements.

    3. Regarding the capacity of the model to generalize, validating on public datasets demonstrates the generalization ability and robustness of the model.

    Following the preceding discussion, the proposed method is believed to be an effective method for detecting construction personnel's helmet wearing, which could boost the effectiveness of firms' safety monitoring management, reduce the corresponding manual supervision cost, and promote the intelligent development of construction production safety management.

    5 Conclusion

    In this paper, we propose and build a lightweight model, YOLO_CA, for real-time helmet detection in construction scenarios, based on the CA mechanism and the YOLOv5 target detection algorithm. A dedicated dataset of building construction scenes is constructed for training the network model, overcoming the poor detection precision caused by complex backgrounds and uneven illumination of the site environment. The addition of the Ghost module, the CA attention mechanism, and DWConv decreases the model's overall size while enabling the model to handle redundant information better and faster, saving model parameters and operating costs and thus resulting in a lighter model. The comparison experiments show that the model proposed in this study achieves good results on all indicators, with Precision and Recall of 95.87% and 95.31%, mAP of 96.73%, and FPS of 134. Compared with YOLOv3_tiny, YOLOv5s, YOLOv7-tiny, YOLOv7, YOLOv8n, and YOLOv8s, Precision increases by 3.49%, 0.67%, 4.67%, 3.27%, 5.97%, and 4.77%, respectively, and mAP is up by 15.85%, 1.13%, 7.43%, 4.53%, 10.13%, and 6.63%, respectively. The improved method thus shows good superiority and effectiveness, and the proposed YOLO_CA model offers higher detection accuracy and recognition speed. Therefore, the YOLO_CA model can be applied to mobile and embedded devices, so that the supervision and inspection of helmet wearing can gradually shift from manual work to artificial intelligence, improving the efficiency of safety supervision.

    In future studies, to expand the uses of the algorithm and extend it to more detection devices, the following aspects can be addressed. First, data sources from different domains need to be expanded to further extend the application of helmet detection: more scenes of helmet-wearing samples can not only test the method's generalizability but also probe other types of complex scene-wearing features, which will help raise the detection level. Second, the enhanced dataset should be included in subsequent training to provide a more accurate and comprehensive helmet-detection algorithm, so that the proposed YOLO_CA model can be used not only for helmet-wearing detection in building construction scenes but also in other industrial scenes as well as traffic scenes. In addition, two-stage target detection methods or traditional deep learning methods can be included in follow-up work for comparison, making the model more convincing; attention should also be paid to the cutting edge of the technology, and the latest models or techniques can be considered as replacements for previous methods to better enhance the effectiveness of the application. Finally, the proposed model can be applied to embedded devices with limited computational power and real-time computing requirements, such as UAVs and smartphones.

    Acknowledgement: We thank the funders of this project, the Guizhou Optoelectronic Information and Intelligent Application International Joint Research Center, and all the teams and individuals who supported this research.

    Funding Statement: This research was funded by the Guizhou Optoelectronic Information and Intelligent Application International Joint Research Center (Qiankehe Platform Talents No. 5802 [2019]).

    Author Contributions: Conceptualization, X.Q.W. and M.Y.; methodology, X.Q.W.; software, X.Q.W. and M.Y.; validation, M.Y.; formal analysis, X.Q.W. and M.Y.; investigation, S.R.Q.; resources, S.R.Q.; data curation, X.Q.W.; writing—original draft preparation, X.Q.W.; writing—review and editing, M.Y. and S.R.Q.; visualization, X.Q.W. and M.Y.; supervision, S.R.Q.; project administration, M.Y. and S.R.Q.; funding acquisition, S.R.Q. All authors have read and agreed to the published version of the manuscript.

    Availability of Data and Materials: Two open datasets were used: SHWD [42] at https://github.com/njvisionpower/Safety-Helmet-Wearing-Dataset (accessed on 05/12/2022) and the CHV dataset [43] at https://github.com/ZijianWang-ZW/PPE_detection (accessed on 06/01/2023).

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
