
    A Novel Foreign Object Detection Method in Transmission Lines Based on Improved YOLOv8n

    2024-05-25 14:42:14
    Computers, Materials & Continua, 2024, Issue 4

    Yakui Liu, Xing Jiang, Ruikang Xu, Yihao Cui, Chenhui Yu, Jingqi Yang and Jishuai Zhou

    1 School of Mechanical and Automotive Engineering, Qingdao University of Technology, Qingdao, 266520, China

    2 State Key Laboratory of Electrical Insulation and Power Equipment, Xi’an Jiaotong University, Xi’an, 710049, China

    3 Key Lab of Industrial Fluid Energy Conservation and Pollution Control, Qingdao University of Technology, Ministry of Education, Qingdao, 266520, China

    ABSTRACT The rapid pace of urban development has resulted in the widespread presence of construction equipment and increasingly complex conditions in transmission corridors. These conditions pose a serious threat to the safe operation of the power grid. Machine vision technology, particularly object recognition, has been widely employed to identify foreign objects in transmission line images. Despite its wide application, the technique is limited by complex environmental backgrounds and other confounding factors. To address these challenges, this study introduces an improved YOLOv8n. The traditional strided convolution and pooling layers are replaced with a space-to-depth convolution (SPD-Conv) module, improving the algorithm’s efficacy in recognizing low-resolution and small-size objects. The feature extraction network is enhanced with a Large Selective Kernel (LSK) attention mechanism, which strengthens the extraction of relevant features. Additionally, the SIoU Loss is used instead of the Complete Intersection over Union (CIoU) Loss to accelerate convergence. Experimental verification shows that the improved YOLOv8n model achieves a detection accuracy of 88.8% on the test set. The recognition accuracy for cranes improves by 2.9%, a significant enhancement over the unimproved algorithm. These improvements effectively increase the accuracy of recognizing foreign objects on transmission lines and demonstrate the effectiveness of the new algorithm.

    KEYWORDS YOLOv8n; data enhancement; attention mechanism; SPD-Conv; Smoothed Intersection over Union (SIoU) Loss

    1 Introduction

    Due to increasing urbanization, construction machinery operations may damage transmission lines. Additionally, human activities have destroyed many natural habitats, causing birds to seek alternative nesting sites, such as transmission towers. This poses a threat to the safety of transmission lines. Power failures caused by external factors not only cause huge economic losses but also threaten people’s lives, and they have become a latent problem of the power system that urgently needs to be solved. Therefore, it is necessary to detect faults [1] in transmission lines to ensure the stable operation of the power system. Traditional inspection typically relies on manual patrols, which are influenced by the environment and subjective judgment. Therefore, it is essential to adopt more efficient and effective inspection methods for transmission lines. The rapid development of unmanned aerial vehicles (UAVs) and machine vision has made this possible: UAVs capture images of transmission lines, blockchain [2,3] technology ensures the security, transparency, and traceability of drone inspection data transmission, and machine vision is used to identify foreign objects in the images.

    Deep learning [4] is a branch of machine learning that is widely used in applications including image recognition, speech recognition, and target detection. It is built on artificial neural networks that simulate the neural structure of the human brain. The resulting multi-layer networks extract low-level features from data such as images, speech, and text, and then combine them into more abstract high-level features that better represent the distributional characteristics of the data. Traditional target detection methods depend on manually designed features, which are inefficient and struggle to exploit the extensive image data available. In recent years, deep learning has emerged as a fast and powerful tool for image classification and target detection and has gained popularity in agriculture, medicine, remote sensing, and other fields. Its impressive feature learning capability has transformed image processing and target detection. Compared with conventional image processing methods, deep learning-based target detection offers stronger fault tolerance and robustness, as well as more stable recognition accuracy. These techniques are also more economically viable and require lower labor costs.

    Deep learning-based target detection algorithms can be broadly categorized into two groups. The first comprises two-stage detection algorithms based on candidate regions, which involve separate detection and recognition phases. Prominent examples include R-CNN [5], Fast R-CNN [6], Faster R-CNN [7], and R-FCN [8]. These algorithms utilize feature information such as texture, color, and image detail: the image is first divided into region boxes of varying proportions and sizes to detect target presence, and these region boxes are then fed into the network for target detection. One-stage detection algorithms, such as YOLO [9–12], SSD [13–15], and OverFeat [16], determine the location and category of a detected object in a single step. Because they do not require separate screening of candidate boxes, they deliver faster detection.

    In transmission line scenarios, critical objects and large construction machinery that may cause damage must be detected and analyzed accurately and in real time. YOLO and convolutional neural network (CNN) algorithms offer fast detection, high accuracy, and strong feature extraction ability, so they are widely used to detect critical objects around transmission lines. Literature [1] proposes a genetic model that conditions the increase in the number and diversity of training images. Literature [17] designs a system based on edge-cloud collaboration and reconfiguration of convolutional neural networks, combining a pruned extraction network with a compressed feature fusion network to improve the efficiency of multi-scale prediction. However, CNN localization algorithms frequently involve varying parameters whose optimal values differ across scenarios, and they struggle with targets in dense areas. Literature [18] calculates the shape eigenvalues of insulators and backgrounds and designs classification decision conditions to recognize insulators accurately. Literature [19] uses techniques such as CoordConv, DropBlock, and Spatial Pyramid Pooling (SPP) to extract insulator features in complex backgrounds and trains the YOLO system with the dataset, greatly improving the accuracy of aerial insulator detection. Literature [20] enhances the pyramid network by employing the attention mechanism as a feature extraction network, leading to a significant improvement in prediction accuracy. The identification and detection of transmission lines has been a persistent issue; literature [21] improves the miniature target detection YOLO model by simplifying the feature extraction network (Darknet) and implementing a streamlined prediction anchor structure, resulting in effective transmission line detection.

    As the network structure of a target detection model deepens, its theoretical performance is expected to improve gradually. Nevertheless, experiments have revealed that adding layers beyond a certain depth does not enhance performance; instead, it slows training convergence and ultimately reduces detection effectiveness. Empirical evidence suggests that residual networks can effectively solve these issues. To address the low detection precision of small and medium-sized targets such as bird’s nests, a new feature pyramid in feature fusion, a path aggregation network, and a bidirectional feature pyramid [22] have been used to improve small-target detection precision. Facing the complexity of transmission line environments, the loss function in the YOLOX algorithm has been modified: one study adds the Convolutional Block Attention Module (CBAM) [23] attention mechanism to the network to improve feature extraction, modifies the strong feature extraction part, and introduces the MSR algorithm to further optimize the image, significantly improving recognition compared with the traditional YOLOX algorithm [24]. However, CBAM performs channel and spatial attention operations sequentially, ignoring channel-space interactions and thus losing cross-dimensional information. For fast and accurate identification and localization of dangerous objects around transmission lines, literature [25] takes the YOLOv3 detection model as a basis and improves the bounding-box non-maximum suppression algorithm with reference to the Soft-NMS and Generalized Intersection over Union (GIoU) algorithms, improving the detection model’s precision and recall. Tiny remote sensing objects may be detected incorrectly without reference to a sufficiently long range, and the range that needs to be referenced differs between objects; the introduction of LSKNet [26] allows the spatial field of view to be adjusted dynamically so that objects can be detected better in different scenarios.

    To overcome the limitations of Complete Intersection over Union (CIoU) Loss [27], literature [28] adopts Smoothed Intersection over Union (SIoU) Loss instead of the CIoU Loss function to improve detection precision and proposes the YOLOv8n-GCBlock-GSConv model, which reduces cost while completing target detection quickly and accurately. Literature [29] uses a regression loss combining Weighted Intersection over Union (WIoU) [30] with a distributed focusing loss to improve model convergence and performance. Literature [31] uses SIoU Loss instead of the original CIoU Loss in YOLOv7 to speed up convergence and applies SIoU-NMS to reduce detection omissions due to occlusion. In the actual detection of transmission line hazards, occlusions and external interference are common; literature [32] combines SPD-Conv with the CBAM attention mechanism so that the model can analyze the density in a specific region.

    According to previous studies, deep learning-based algorithms for transmission line detection have many limitations, including reduced performance on low-resolution images, small objects, and complex environmental backgrounds. In the present paper, these issues are addressed through the following improvements:

    (1) The SPD-Conv method is utilized to enhance the model’s scale and spatial invariance. This is achieved by combining spatial pyramid pooling with deep convolutional neural networks to construct a feature pyramid for parameter sharing and convolutional kernel size adaptation. As a result, the accuracy and robustness of target detection are improved.

    (2) The LSK module efficiently weights the features generated by a series of large deep convolutional kernels that are spatially merged through a spatially selective mechanism. The weights of these kernels are determined dynamically from the inputs, enabling the model to adaptively use different large convolutional kernels and adjust the receptive field for each target as needed.

    (3) The CIoU Loss is replaced with the SIoU Loss. The SIoU Loss accounts for angular loss, distance loss, and shape loss; it penalizes the size of the target frames and better reflects the true similarity between them. This replacement speeds up convergence and improves detection accuracy.

    The paper is organized as follows: Section 2 introduces the YOLOv8n model. Section 3 details the proposed model, including the LSK module, the SPD-Conv module, and the SIoU Loss. Section 4 presents the experimental results and analysis. Section 5 concludes the research.

    2 Basic Structure of YOLOv8n

    Ultralytics released YOLOv8 in January 2023, following the success of YOLOv5. YOLOv8 offers five official configurations, namely YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x, to cater to various scenario requirements. Building on the previous version, YOLOv8 introduces new features and improvements such as a new backbone network, decoupled detection heads, and a new loss function. YOLOv8n adopts a lightweight design that reduces the computational and storage requirements of the algorithm, enabling a UAV to process image and video data more effectively, improving the real-time performance and efficiency of inspection, and allowing the UAV to respond quickly and detect problems in a timely manner. Compared with YOLOv7, YOLOv8n improves small-target detection through a better network architecture and training strategy, which increases its usefulness in the inspection process. Overall, YOLOv8n offers higher accuracy, faster speed, and better adaptability and practicality in UAV inspection, and it is therefore selected as the basic training model in this paper.

    The YOLOv8n detection model comprises four main components: Input, Backbone, Neck, and Head.

    (1) Input. The Mosaic data augmentation technique [24] is utilized in Input, with an anchor-free mechanism employed to predict the object’s center directly in lieu of offsets from known anchor frames. This reduces the number of predicted anchor frames, thereby expediting non-maximum suppression (NMS) [33]. Mosaic augmentation is discontinued in the final ten epochs.
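The stitching step of Mosaic augmentation can be illustrated with a minimal sketch. This is a toy version assuming four equally sized images represented as 2-D lists; the real implementation also randomizes the mosaic center, crops, and remaps bounding boxes.

```python
def mosaic_2x2(imgs):
    """Stitch four equally sized images (2-D lists of pixel values)
    into one 2x2 mosaic, as in Mosaic data augmentation.

    Illustrative sketch only: real Mosaic augmentation also randomizes
    the center point, crops the result, and remaps bounding boxes.
    """
    assert len(imgs) == 4
    h = len(imgs[0])
    # concatenate rows of the top pair and the bottom pair side by side
    top = [imgs[0][r] + imgs[1][r] for r in range(h)]
    bottom = [imgs[2][r] + imgs[3][r] for r in range(h)]
    return top + bottom
```

Four 2×2 images thus yield one 4×4 training image, exposing the model to four contexts at once.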

    (2) Backbone. The primary purpose of the Backbone is to extract features, and it comprises modules such as Conv, C2f, and SPPF. The Conv module performs convolution, batch normalization (BN), and SiLU activation on the input image. Following YOLOv7’s ELAN module, YOLOv8n introduces a new C2f structure as the main module for learning residual features. The C2f structure enriches the gradient flow by connecting more branches across layers, yielding a network with superior feature representation capability. The SPPF module, a form of spatial pyramid pooling, expands the receptive field and captures feature information at various levels within the scene.

    (3) Neck. The primary function of the Neck is to merge multi-scale features into a feature pyramid. This is achieved with a path aggregation network, which uses the C2f module to combine the feature maps obtained from three distinct stages of the Backbone, aggregating shallow features into deeper ones.

    (4) Head. Head employs the now-prevalent decoupled head structure to separate the classification and detection heads, mitigating potential conflicts between the classification and localization tasks.

    3 YOLOv8n Algorithm Improvement Strategy

    In light of the lackluster performance of conventional neural networks on low-resolution images and small objects, the SPD-Conv [34] module is applied, which downsamples feature maps without discarding fine-grained information. The selective-attention LSK module is introduced to dynamically adapt the model’s large spatial receptive field to the range context of diverse targets in a scene, enhancing the precision of target detection. Meanwhile, CIoU Loss is replaced by SIoU Loss to accelerate convergence and improve detection precision. Based on the above work, the YOLOv8n network model has been improved as depicted in Fig. 1.

    Figure 1: Improved YOLOv8n network model

    3.1 LSK Module

    Current improvements to target detection algorithms often ignore the unique prior knowledge of a scene. Aerial imagery is usually captured in a high-resolution bird’s-eye view, and many objects in it may be small, making them difficult to identify from appearance alone; their recognition often relies on context. Tiny remotely sensed objects may be detected mistakenly without reference to a sufficiently long range, and the range required may vary across object types, but the surrounding background can provide valuable clues about shape, orientation, and other characteristics. Therefore, this paper introduces the Large Selective Kernel Network (LSKNet) depicted in Fig. 2 [26], which can adaptively modify its expansive spatial receptive field to more accurately represent the remote sensing context of the diverse objects within a scene.

    Figure 2: Conceptual drawing of the LSK module

    The LSK module is implemented as follows. First, two different feature maps are obtained by an ordinary convolution and a dilated convolution, respectively, and 1×1 convolutions bring both to the same number of channels; the two maps are then concatenated. Average pooling and maximum pooling are applied to the concatenated feature map, their outputs are stacked, convolved, and passed through a sigmoid to obtain selection weights for the different convolution kernel sizes. Finally, the weighted feature maps are summed and multiplied with the initial input X to produce the output Y.
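The selection step above can be sketched per spatial location. This is a deliberately simplified toy: the two branches stand in for the ordinary and dilated convolution outputs, and the learned 1×1 convolution is replaced by an identity mapping so the pooled values feed the sigmoid directly. It illustrates only the weighting-and-mixing idea, not the published implementation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lsk_mix(x, branch_a, branch_b):
    """Toy per-location sketch of the LSK selection step.

    x, branch_a, branch_b are flat lists of scalars; branch_a/branch_b
    stand in for feature maps from an ordinary and a dilated (large-
    kernel) convolution. The 1x1 convolutions of the real module are
    replaced by identity mappings -- an illustrative simplification.
    """
    out = []
    for xi, ai, bi in zip(x, branch_a, branch_b):
        avg = (ai + bi) / 2.0                 # average pooling across branches
        mx = max(ai, bi)                      # max pooling across branches
        wa, wb = sigmoid(avg), sigmoid(mx)    # sigmoid selection weights
        out.append((wa * ai + wb * bi) * xi)  # weighted sum, gated by input
    return out
```

The key property is that the mixing weights depend on the inputs themselves, so stronger branch responses are emphasized adaptively.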

    3.2 SPD-Conv Module

    Because of its advantages in processing low-resolution images and small target objects, SPD-Conv is introduced to replace the strided convolution and pooling layers in the traditional CNN architecture. The structure of SPD-Conv is shown in Fig. 3 [34]; it consists of a space-to-depth (SPD) layer and a non-strided convolution (Conv) layer. The input feature maps are first transformed through the SPD layer, and the convolution is then performed by the Conv layer. The combination of the SPD and Conv layers reduces the number of parameters without losing information.

    Figure 3: SPD-Conv structure

    The SPD-Conv process can be summarized as follows. For an intermediate feature map X of size S×S×C1, sub-maps are formed by sampling X at a stride of scale in each spatial direction, so each sub-map is a proportional downsampling of X. When scale = 2, four sub-maps are obtained, each with spatial size 1/scale of the original. The sub-maps are then concatenated along the channel dimension to obtain a feature map X′ of size S/2×S/2×4C1. A non-strided convolutional layer then maps X′ to size S/2×S/2×C2, where C2 < 4C1, preserving the key information as far as possible.
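The space-to-depth rearrangement can be sketched in plain Python over a nested-list feature map. The sub-map ordering is illustrative; only the SPD layer is shown, not the non-strided convolution that follows.

```python
def space_to_depth(x, scale=2):
    """Space-to-depth (SPD) rearrangement of a feature map.

    x is a 3-D nested list indexed [row][col][channel] with spatial
    size S x S (S divisible by scale) and C1 channels. Returns a map
    of size (S/scale) x (S/scale) with scale*scale*C1 channels: each
    output location concatenates the channels of the scale x scale
    sub-sampled positions, so no information is discarded.
    """
    s = len(x)
    out_s = s // scale
    out = []
    for r in range(out_s):
        row = []
        for c in range(out_s):
            channels = []
            # gather channels from each of the scale*scale sub-maps
            for dr in range(scale):
                for dc in range(scale):
                    channels.extend(x[r * scale + dr][c * scale + dc])
            row.append(channels)
        out.append(row)
    return out
```

With scale = 2, a 4×4×C1 map becomes 2×2×4C1, matching the S/2×S/2×4C1 shape described above.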

    3.3 SIoU Loss

    The CIoU Loss is a target detection loss function that integrates bounding box regression metrics, and it is used by the traditional YOLOv8n model as its regression loss. The loss function is given in Eq. (1):

    $$L_{CIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha\upsilon \tag{1}$$

    The coordinates of the center point of the prediction frame are denoted by b, while b^{gt} denotes the coordinates of the center point of the real frame. ρ² is the squared Euclidean distance between the center points of the prediction frame and the real frame, and c is the diagonal length of the smallest box enclosing both frames. The width and height of a frame are denoted by w and h, respectively. Additionally, υ represents the shape (aspect-ratio) loss, and α is its weight.
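A minimal sketch of the CIoU loss for axis-aligned boxes, following the standard definition with the terms named above; this is for clarity, not the framework implementation.

```python
import math

def ciou_loss(box_p, box_g):
    """CIoU loss for boxes given as (x1, y1, x2, y2).

    Computes 1 - IoU + center-distance term + alpha * v, where v is
    the aspect-ratio consistency term. Minimal sketch for clarity.
    """
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g
    # intersection and union
    ix = max(0.0, min(px2, gx2) - max(px1, gx1))
    iy = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = ix * iy
    area_p = (px2 - px1) * (py2 - py1)
    area_g = (gx2 - gx1) * (gy2 - gy1)
    iou = inter / (area_p + area_g - inter)
    # squared distance between box centers (rho^2)
    rho2 = ((px1 + px2) / 2 - (gx1 + gx2) / 2) ** 2 + \
           ((py1 + py2) / 2 - (gy1 + gy2) / 2) ** 2
    # squared diagonal of the smallest enclosing box (c^2)
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term v and its weight alpha
    v = (4 / math.pi ** 2) * (math.atan((gx2 - gx1) / (gy2 - gy1))
                              - math.atan((px2 - px1) / (py2 - py1))) ** 2
    alpha = v / ((1 - iou) + v) if v > 0 else 0.0
    return 1 - iou + rho2 / c2 + alpha * v
```

For identical boxes all three terms vanish and the loss is zero; any misalignment in position, size, or aspect ratio raises it.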

    However, the approach has three obvious disadvantages: low convergence speed and efficiency, due to its complex structure; high sensitivity to changes in target box scale, making it hard for the model to adapt to varying box sizes; and misleading results when the aspect ratios of different prediction boxes and real frames are equal. To address these issues, the SIoU Loss is applied as an alternative to CIoU Loss. SIoU Loss considers the vector angle between the actual frame and the predicted frame, redefines the penalty terms in the loss function, and resolves the direction mismatch that CIoU Loss exhibits for non-overlapping frames. Moreover, SIoU Loss prevents the predicted frame from drifting unstably during training, which improves the convergence speed of the model. SIoU Loss is calculated as:

    $$L_{SIoU} = 1 - IoU + \frac{\Delta + \Omega}{2} \tag{2}$$

    where Δ is the distance loss (which incorporates the angle loss), Ω is the shape loss, and IoU is the intersection over union.

    4 Result and Analysis

    4.1 Preparation before Calculation

    4.1.1 Experimental Environment Configuration

    The computations are conducted using Python and the PyTorch deep learning framework; the computing environment is listed in Table 1.

    Table 1: Experimental environment configuration

    4.1.2 Dataset Construction

    Currently, foreign hazards to transmission lines mainly stem from improper construction practices involving large machinery and from short circuits caused by avian nesting. This paper presents a six-category dataset of transmission line targets captured by UAV, featuring excavators, trucks, bulldozers, tower cranes, bird nests, and cranes. The problem of many near-identical images captured by the same camera in a single scene was addressed by varying the camera angle and distance; images with different poses, including close-up, wide-angle, and side views, were collected. A dataset of 7,425 unique images was ultimately obtained. The categories and sizes of the datasets are displayed in Fig. 4, and some representative images are shown in Fig. 5.

    Figure 4: Type and number of data sets

    Previous studies suggest that combined data enhancement strategies can effectively improve the performance of machine learning models on tasks requiring precise image recognition. In this research, mosaic data augmentation was combined with traditional data enhancement techniques to increase the diversity of the target sample dataset. This entailed a sequence of transformations applied to each image, including flipping, scaling, and color gamut adjustments, after which the altered images were merged with their corresponding bounding boxes. This notably enhanced the model’s generalization ability and robustness, as demonstrated in Fig. 6.
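One of the traditional transformations, a horizontal flip with its bounding-box remap, can be sketched as follows; the coordinate convention (x1, y1, x2, y2 in pixels) is an assumption for illustration.

```python
def hflip_with_boxes(img, boxes):
    """Horizontally flip an image (2-D list) and remap its boxes.

    Boxes are (x1, y1, x2, y2) in pixel coordinates. After a flip of
    an image of width W, x-coordinates map to W - x, with x1 and x2
    swapped so that x1 <= x2 still holds.
    """
    w = len(img[0])
    flipped = [row[::-1] for row in img]              # mirror each row
    new_boxes = [(w - x2, y1, w - x1, y2)             # remap and swap x1/x2
                 for (x1, y1, x2, y2) in boxes]
    return flipped, new_boxes
```

Keeping boxes consistent with the transformed pixels is what allows the augmented images to be "merged with their corresponding bounding boxes" as described above.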

    Figure 5: Partial images of transmission lines from UAV

    To refine the accuracy of the target detection model and help the neural network assimilate the attribute and location data of the targets, precise labeling of the objects within the images is imperative. For this purpose, the present study employed the Make Sense web platform to annotate the dataset, producing labels in the COCO format. These labels contain essential details, including the object name and its spatial coordinates within the image.

    Additionally,the dataset underwent a random partitioning in an 8:1:1 ratio,resulting in the formation of a training set comprising 5,940 samples,a validation set comprising 742 samples,and a test set comprising 743 samples.Given the substantial dimensions of tower cranes and the pronounced issues of occlusion they present,a deliberate emphasis was placed on augmenting the representation of tower crane samples within the dataset.This approach is aimed at enhancing the model’s capability to accurately identify and analyze such large-scale objects,despite the challenges posed by their size and potential for partial visibility in object detection.
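The 8:1:1 partition above can be reproduced with a short sketch; the seed is illustrative, and the remainder after integer truncation is assigned to the test set, which is one plausible way to arrive at the 5,940 / 742 / 743 counts reported.

```python
import random

def split_dataset(items, seed=0):
    """Randomly split a dataset into train/val/test at an 8:1:1 ratio.

    Integer truncation sends the remainder to the test set, so 7,425
    items yield 5,940 / 742 / 743 samples. The seed is illustrative.
    """
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```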

    4.2 Evaluation Indicators

    To evaluate the model’s performance objectively, we introduce several evaluation metrics, including precision, recall, F1, mAP50, mAP50-95, and frames per second (FPS). Precision, recall, and F1 are calculated as follows:

    $$Precision = \frac{TP}{TP + FP}, \quad Recall = \frac{TP}{TP + FN}, \quad F1 = \frac{2 \times Precision \times Recall}{Precision + Recall}$$

    where TP is the count of detection frames that match the actual labels and are predicted as positive samples, FP is the count of detection frames predicted as positive samples that do not match any real label, and FN is the count of real labels that are not detected.
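These three metrics follow directly from the counts, as a short sketch shows:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F1 from detection counts.

    tp: detections matching a ground-truth label; fp: detections with
    no matching label; fn: ground-truth labels with no detection.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```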

    The precision-recall curve plots Recall and Precision along the horizontal and vertical axes, respectively. Within this framework, the area enclosed by the curve gives the Average Precision (AP) value for a given category, and the mean Average Precision (mAP) is the mean of the AP values across all categories. Specifically, mAP50 denotes the mAP at an Intersection over Union (IoU) threshold of 50%, while mAP50-95 is the mean of the mAP values calculated at IoU thresholds ranging from 50% to 95%.

    Furthermore, the calculation of mAP is governed by the following formula, which averages the precision across recall levels and categories, providing a comprehensive evaluation of the model’s performance in object detection tasks:

    $$AP = \int_0^1 P(R)\, dR, \qquad mAP = \frac{1}{N}\sum_{i=1}^{N} AP_i$$

    where P is the proportion of prediction frames that correctly detect the target out of all prediction frames, and R is the proportion of true labeled frames whose targets are actually detected.
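The AP integral can be computed per category with all-point interpolation, sweeping the confidence threshold and integrating the precision envelope over recall; this is a generic sketch of the metric, not the evaluation code used in the experiments. mAP is then the plain mean of this value over categories.

```python
def average_precision(detections, n_gt):
    """All-point interpolated AP for one category.

    detections: list of (confidence, is_true_positive) pairs, one per
    predicted box; n_gt: number of ground-truth boxes for the class.
    """
    detections = sorted(detections, key=lambda d: -d[0])
    tp = fp = 0
    points = []  # (recall, precision) as the threshold sweeps down
    for _, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
        points.append((tp / n_gt, tp / (tp + fp)))
    # precision envelope from the right, then integrate over recall
    envelope, max_p = [], 0.0
    for r, p in reversed(points):
        max_p = max(max_p, p)
        envelope.append((r, max_p))
    envelope.reverse()
    ap, prev_r = 0.0, 0.0
    for r, p in envelope:
        ap += (r - prev_r) * p
        prev_r = r
    return ap
```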

    FPS indicates the number of frames processed per second and measures detection speed; the higher the value, the faster the detection and the better the detection performance.

    4.3 Comparison of the Effects of Improved Methods

    The effect of common loss functions, including CIoU, SIoU, DIoU, GIoU, and WIoU, is assessed by comparing models trained with each IoU loss function; the results are shown in Table 2.

    Table 2: Comparison of model performance using different IoU loss functions

    As illustrated in Table 2, the YOLOv8n+SIoU model achieves the highest mAP50, exceeding the original YOLOv8n+CIoU model. Although precision decreases slightly, mAP50, mAP50-95, recall, FPS, and F1 improve by 1.8%, 0.4%, 2%, 7.8%, and 0.2%, respectively. Compared with the traditional model, the proposed method (YOLOv8n+SIoU Loss) achieves higher detection speed and precision; the use of SIoU effectively improves model fitting and recognition accuracy. Among the models with different loss functions, the one incorporating SIoU demonstrates the best comprehensive performance and the greatest improvement over the original model.

    To evaluate the efficacy of the LSK attention mechanism, several other common attention mechanisms are compared. Specifically, the LSK, CBAM, SE [35], and EMA [36] attention mechanisms are each added to the final layer of the backbone. The results are shown in Table 3.

    Table 3: Performance comparison of models incorporating different attention mechanisms

    Table 3 shows that, with the exception of the SE attention mechanism, the models incorporating the other three attention mechanisms improved in accuracy to varying degrees compared with the original model. The models integrating the CBAM and LSK attention mechanisms improved accuracy by 1.4% and 1.3%, respectively. In terms of mAP50, the CBAM, EMA, and LSK attention mechanisms improved the model by 0.7%, 0.8%, and 0.8%, respectively. The models using CBAM and LSK both showed a 0.7% improvement in F1 score. Regarding recall, the models employing CBAM, SE, and LSK showed improvements of 0.2%, 0.12%, and 0.3%, respectively. Furthermore, the model using the LSK attention mechanism demonstrated the fastest detection speed, a 3.1% improvement over the original YOLOv8n model. In conclusion, the model incorporating the LSK attention mechanism provides the best trade-off between speed and accuracy and the best overall performance.

    To better illustrate the impact of integrating the attention mechanism on detection effectiveness, Grad-CAM [37] heat maps are used to visually compare the detection outcomes of the unimproved model and the model enhanced with the LSK attention mechanism. Fig. 7 displays the detection outcome before enhancement, and Fig. 8 shows the outcome after integrating the attention mechanism. Red areas highlight regions to which the model pays more attention, while lighter areas show the opposite. With the LSK attention mechanism, the model focuses more on the area near the target, which also helps suppress the computation spent on non-target regions.

    Figure 7: Test results of traditional YOLOv8n

    Figure 8: Test result of proposed YOLOv8n

    4.4 Ablation Study

    To further validate the efficacy of the various improvement methods applied to the YOLOv8n model, a series of ablation studies was conducted using different combinations of the enhancement modules, providing a comprehensive evaluation of each method’s impact on model performance.

    The ablation experiments used the same training and test sets. With YOLOv8n as the base framework, the different modules, attention mechanisms, and loss functions were adopted sequentially to obtain new models for training. The results are shown in Table 4.

    Table 4: Ablation study results

    Upon adding SPD-Conv to the original model, all indexes except mAP50 decreased to varying degrees. The reason for this is unclear; SPD-Conv is a space-to-depth convolution that alters the feature map representation, which may make it harder for the network to learn target boundaries or features accurately. To address this, this paper adds an attention mechanism to strengthen the model’s focus on important features and thereby improve detection accuracy. The table shows that the model with the LSK attention mechanism has significantly better comprehensive performance than the original model. Although detection speed and recall decrease slightly, detection accuracy, mAP50, mAP50-95, and F1 all improve: detection accuracy by 1.2%, mAP50 by 0.8%, mAP50-95 by 1%, and F1 by 0.3%. Compared with the original model, the model introducing SIoU Loss improves in all aspects except for a slight decrease in mAP50-95; detection accuracy reaches 91.7%, and mAP50 reaches 88.8%. The experimental results indicate that the combination of the SPD-Conv module, the LSK attention mechanism, and the SIoU Loss achieves the highest accuracy, demonstrating the effectiveness of the improved model.

    4.5 Algorithm Verification

    A comparative analysis of the detection performance of the traditional YOLOv8n model and its enhanced counterpart is presented in Fig. 9; the left column shows the traditional model and the right column the improved model. The figure highlights the challenges posed by small targets and low contrast against the background. The traditional YOLOv8n model exhibits varying degrees of detection failure, notably in the images of groups a, b, and d, including both missed detections and misdetections, especially evident in groups b and c. Conversely, the improved YOLOv8n model consistently detects all targets accurately, even at low clarity or small target size. The enhancement is particularly noticeable for targets with ambiguous outlines or reduced scale. This comparison solidly establishes the superior performance of the enhanced YOLOv8n model and affirms its efficacy in complex detection environments where precision is critical.

    Figure 9: Comparison of YOLOv8n model detection results

    5 Conclusion

    This study presents an improved version of the YOLOv8n algorithm specifically designed to detect foreign objects on transmission lines. The improved algorithm employs the SPD-Conv module, which replaces strided convolution and pooling with a space-to-depth operation followed by a non-strided convolution; the feature map is thus downsampled while discriminative feature information is preserved, enhancing the model's ability to handle objects of low image resolution or small size. The selective attention LSK module is also included, dynamically adjusting its large spatial receptive field to better model the ranging context of the various objects in the scene and improve the accuracy of small-target detection. Additionally, substituting the SIoU Loss function for the CIoU Loss function speeds up model convergence, and the experiments show that SIoU Loss produces superior detection results when target frames overlap significantly. The experimental results confirm the effectiveness of the improved model in object detection and a significant enhancement in detection accuracy: the improved algorithm attains an average detection accuracy of 88.8% and a detection speed of 59.17 frames per second (FPS), demonstrating its potential applicability to identifying foreign objects on transmission lines.
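    The SIoU Loss discussed above starts from the same plain IoU overlap term as CIoU and then adds angle, distance, and shape penalties between the predicted and ground-truth boxes. A minimal IoU reference for axis-aligned boxes (a generic sketch, not the paper's code) is:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (empty if the boxes are disjoint).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlapping boxes: 1/7
```

    On top of this overlap term, SIoU's angle cost steers the distance penalty so that a predicted box is first pulled onto the axis through the ground-truth center, a redirection commonly credited for the faster convergence reported here.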

    This paper reports progress in recognizing cranes at lower image resolutions or smaller object sizes, particularly in reducing missed and false detections. However, the algorithm still has limitations: the identification accuracy for tower cranes, a significant threat to transmission lines, needs improvement. Tower cranes typically present multiple intersecting lines and angles in images, which increases the difficulty of accurate bounding box localization. In addition, tower cranes often stand in cluttered scenes such as construction sites, where they are easily confused with surrounding objects, making it challenging for the algorithm to extract tower crane features from the background. Future research will concentrate on methodological enhancements to improve network performance; to address these shortcomings and enable practical application of the algorithm in complex real-world environments, we will explore larger datasets, increased sensitivity to boundary information, and an expanded receptive field for the model.

    Acknowledgement:We would like to express our sincere gratitude to the Natural Science Foundation of Shandong Province and the State Key Laboratory of Electrical Insulation and Power Equipment for providing the necessary financial support to carry out this research project.

    Funding Statement:This research was funded by the Natural Science Foundation of Shandong Province(ZR2021QE289)and the State Key Laboratory of Electrical Insulation and Power Equipment(EIPE22201).

    Author Contributions:The authors confirm contribution to the paper as follows: Study conception and design:Yakui Liu,Xing Jiang;data collection:Yakui Liu,Xing Jiang;analysis and interpretation of results: Ruikang Xu,Yihao Cui,Jingqi Yang;draft manuscript preparation: Yakui Liu,Chenhui Yu,Jishuai Zhou.All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:The data are not available due to commercial restrictions.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
