
    FIR-YOLACT: Fusion of ICIoU and Res2Net for YOLACT on Real-Time Vehicle Instance Segmentation

    Computers, Materials & Continua, December 2023

    Wen Dong1, Ziyan Liu1,2,*, Mo Yang1 and Ying Wu1

    1 College of Big Data and Information Engineering, Guizhou University, Guiyang 550025, China

    2 The State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China

    ABSTRACT Autonomous driving technology has made many outstanding achievements with deep learning, and the vehicle detection and classification algorithm has become one of the critical technologies of autonomous driving systems. Vehicle instance segmentation can perform instance-level semantic parsing of vehicle information, which is more accurate and reliable than object detection. However, existing instance segmentation algorithms still suffer from poor mask prediction accuracy and low detection speed. Therefore, this paper proposes an advanced real-time instance segmentation model named FIR-YOLACT, which fuses ICIoU (Improved Complete Intersection over Union) and Res2Net into the YOLACT algorithm. Specifically, the ICIoU function effectively solves the degradation problem of the original CIoU loss function and improves training convergence speed and detection accuracy. A Res2Net module fused with ECA (Efficient Channel Attention) Net is added to the model’s backbone network, which improves multi-scale detection capability and mask prediction accuracy. Furthermore, the Cluster NMS (Non-Maximum Suppression) algorithm is introduced into the model’s bounding box regression to enhance the performance of detecting similar occluded objects. The experimental results demonstrate the superiority of FIR-YOLACT over the baseline methods and the effectiveness of all components. The processing speed reaches 28 FPS, which meets the demands of real-time vehicle instance segmentation.

    KEYWORDS Instance segmentation; real-time vehicle detection; YOLACT; Res2Net; ICIoU

    1 Introduction

    The development of the global economy boosts the automotive industry, and the number of vehicles is increasing yearly. According to relevant data, the number of cars worldwide exceeded 1.446 billion by 2021. The growth of the automobile industry has brought many conveniences to people’s lives. On the other hand, it has also caused many social problems, such as traffic congestion and safety hazards, which have become pressing issues in society. Around 1.35 million people lose their lives in traffic accidents worldwide each year, according to the World Health Organization’s 2018 Global Status Report on Road Safety. In the face of increasingly serious traffic problems, enhancing vehicle driving safety and reducing road accidents are challenges for the entire automotive industry and related researchers.

    In recent years, with the development of computer vision technology, many countries have launched research on autonomous driving technology [1,2]. As a critical component of the transportation system, improved autonomous driving technology will increase driving safety and reduce the risk of human-caused road traffic accidents. Therefore, autonomous driving has become a research focus for the global automotive industry [3–6]. Excellent environmental perception technology is essential for self-driving cars. Current environmental perception technologies include LIDAR-based and camera-based ones. The former has not been popularized because of its high cost. On the contrary, due to its low cost, the latter is widely applied to process image information in computer vision [7].

    The rapid growth of computer hardware and deep learning technology stimulates the development of image processing. Convolutional neural networks (CNNs) have made excellent achievements in computer vision, and deep learning has become one of the most popular scientific research directions. As a comprehensive subject, computer vision covers various popular research directions, such as object detection, pattern recognition, instance segmentation, and object tracking. In autonomous driving technology, determining the category and location of objects in front of the vehicle is a complex task for the computer. Thus, instance segmentation, which combines object detection and image segmentation, can meet the requirements of autonomous driving.

    Autonomous driving is challenging because target objects of different scales, such as pedestrians and vehicles, may appear in front of the vehicle at the same time. It is therefore essential for a detection algorithm to extract information at different scales. However, a drawback of current instance segmentation networks is poor instance mask prediction, because the backbone network mainly concentrates on global features and ignores local features. In addition, existing models’ loss functions converge slowly, which prolongs the training time of segmentation models. Furthermore, the standard NMS method also suffers from low detection accuracy and efficiency.

    This paper proposes a real-time instance segmentation algorithm, FIR-YOLACT, by fusing ICIoU and Res2Net into YOLACT. FIR-YOLACT can represent multi-scale features at a granular level and enlarge each network layer’s range of receptive fields, which is practical and robust in various scenarios. In addition, this paper adds the Cluster NMS algorithm to the model’s bounding box regression to improve the performance of detecting similar occluded objects. The results demonstrate that FIR-YOLACT outperforms the base model both quantitatively and qualitatively.

    The main contributions of this paper are listed as follows:

    • The paper proposes a fusion of ICIoU and Res2Net for the YOLACT algorithm, named FIR-YOLACT. Compared with the original algorithm, the proposed algorithm achieves better performance and higher accuracy while meeting the requirements for real-time instance segmentation.

    • The paper updates the network’s loss function and Non-Maximum Suppression (NMS) algorithm to ICIoU and Cluster NMS, respectively, to improve the accuracy of predicting and detecting similar obscured objects.

    • The paper proposes a module named Res2nEt, which fuses Res2Net with ECA Net to represent multi-scale features at a granular level and enlarge each network layer’s range of receptive fields.

    This paper is organized as follows. Section 2 reviews related studies on vehicle detection, instance segmentation, and bounding box regression. Section 3 describes the algorithm’s overall structure and the work of each improved component. Section 4 presents the experimental study on the training and evaluation of the model. Section 5 summarizes the work and discusses future directions.

    2 Related Work

    2.1 Vehicle Detection

    As one of the hotspot research topics in Artificial Intelligence (AI), vehicle detection is an essential part of autonomous driving. Traditional vehicle detection techniques rely heavily on digital image processing, and the image information is digitized through image segmentation, image augmentation, and image transformation. However, with the development of transportation systems, today’s traffic environment has become complex and changeable [8,9]. In such a traffic system, traditional segmentation methods are inadequate, so this paper focuses on vehicle instance segmentation based on deep learning. Compared with conventional methods, the algorithm relies on CNNs to extract image features, which can achieve higher detection accuracy and satisfy the real-time requirements of vehicle detection and segmentation. The model is also robust and can adapt to complex situations and variable environments.

    2.2 Image Instance Segmentation

    Images are an essential medium for humans to acquire knowledge. Today, in the age of copious data, images are used in various fields, such as the medical, remote sensing, and industrial fields [10,11]. With the advancement of computer vision algorithms and hardware performance, image instance segmentation has emerged as one of the key computer vision technologies, and it has achieved many outstanding results in computer vision research [12,13]. Instance segmentation is the fusion of object detection and semantic segmentation tasks [14]. Object detection locates and recognizes the target object, but representing objects with a detection box is inaccurate: the box usually contains much background information, so accurate boundary information about the object cannot be obtained [15]. Instance segmentation, by contrast, separates the target objects at the pixel level and clusters them according to their instance classes, which more accurately separates the target objects in the scene [16]. Therefore, instance segmentation can obtain more detailed image information and has a broader application scope.

    Image instance segmentation methods are generally classified into two-stage methods and one-stage methods. The two-stage instance segmentation method consists of two steps: detection and segmentation. According to the sequential processing order, two-stage instance segmentation includes the top-down method based on detection and the bottom-up method based on segmentation. Mask R-CNN [17] is a classical two-stage detection framework extending Faster R-CNN [18]. One-stage methods obtain the results directly by coalescing detection and segmentation into a single network; representatives include PolarMask [19], YOLACT [20] and SOLO [21]. The current instance segmentation methods are summarized in Table 1.

    Table 1: An overview of instance segmentation methods

    2.3 Loss Function for Bounding Box Regression

    Bounding box regression is an essential task in instance segmentation, and the bounding box regression loss function evaluates the detection performance of the predicted box.

    Current bounding box regression loss functions include the following categories: Mean Square Error (MSE), which evaluates the degree of data variation; the Smooth L1 loss used in Faster R-CNN; the Intersection over Union (IoU) loss; and the GIoU [27] and DIoU [28] losses. The IoU, a standard metric, represents the coverage between the predicted and ground truth boxes to evaluate an algorithm’s accuracy. However, it is not suitable when the two boxes do not overlap. To solve this problem, Rezatofighi’s team proposed the GIoU function, but it still faces the issues of slow convergence and inaccurate regression. By introducing the central point distance, Zheng presented the DIoU loss with much faster convergence than GIoU loss. Based on DIoU, Zheng proposed CIoU [29], adding three important geometric metrics (overlap area, center point distance, and aspect ratio) to achieve faster convergence and better performance than GIoU and DIoU.
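As a concrete illustration (not code from the paper), the plain IoU between two axis-aligned boxes in (x1, y1, x2, y2) form can be computed as:

```python
def iou(box_a, box_b):
    """Plain IoU between two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # intersection area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)        # intersection over union
```

Note that for any pair of disjoint boxes this returns 0 regardless of their distance, which is exactly the shortcoming that GIoU, DIoU, and CIoU address.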

    3 Methodology

    This paper proposes an improved YOLACT algorithm fusing ICIoU and Res2Net to achieve higher accuracy and speed on real-time vehicle instance segmentation tasks. As illustrated in Fig. 1, first, the paper replaces the prediction head’s loss function with ICIoU. Then, the original NMS algorithm is replaced with Cluster NMS [29] to resolve the occlusion problem of similar objects. Third, the backbone network is strengthened with the Res2Net module combined with ECA (Res2nEt) to enhance its feature extraction capability and raise the mask AP score.

    Figure 1: Overall framework

    3.1 Backbone

    The network structure of the YOLACT algorithm is depicted in Fig. 2. The backbone network is constructed based on ResNet-101 and extracts the input image features to generate five feature maps. Three of these feature maps are used as the input layers of the feature pyramid, which generates five new feature maps by fusing features at multiple scales. These are then sent into two parallel branching tasks. The first branch passes the feature map through fully convolutional networks to generate prototype masks. The second branch not only predicts each prediction box’s class confidence and position but also generates mask coefficients for each instance. Fast NMS selects the region proposals after bounding box regression to obtain the final instance prediction boxes. Then, the prototype masks and the corresponding mask coefficients are combined linearly to generate the instance masks.
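The final linear-combination step can be sketched as follows. This is an illustrative NumPy version, not YOLACT’s actual implementation, and `assemble_masks` is a hypothetical helper name:

```python
import numpy as np

def assemble_masks(prototypes, coeffs):
    """Combine k prototype masks (k, H, W) with per-instance mask
    coefficients (n, k) into n soft instance masks (n, H, W)."""
    m = np.tensordot(coeffs, prototypes, axes=([1], [0]))  # linear combination over k
    return 1.0 / (1.0 + np.exp(-m))                        # sigmoid squashes to (0, 1)
```

Each instance’s mask is thus just a weighted sum of the shared prototypes, which is what lets YOLACT keep the mask branch cheap enough for real-time use.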

    YOLACT uses ResNet-101 as the backbone network, and the Bottleneck is the basic unit of ResNet-101. This paper chooses the improved Res2Net module as the Bottleneck to enhance multi-scale representation ability. Res2Net constructs hierarchical residual-like connections within a single residual block to increase the range of receptive fields for each network layer [30]. The structure is shown in Fig. 3. Res2Net replaces the 3×3 convolution with smaller groups of filters: the feature map is divided into four feature map subsets with the same spatial size, and the output results of the four subsets are gradually fused. Finally, the feature map is output by a 1×1 convolution. Res2Net improves the ability of the backbone to extract multi-scale information and the accuracy of mask prediction.
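The split-and-fuse flow of a Res2Net block can be sketched as below. This is a simplified illustration under stated assumptions: `conv3x3` stands in for the learned 3×3 convolution, and the surrounding 1×1 convolutions and residual connection are omitted.

```python
import numpy as np

def res2net_forward(x, conv3x3, scale=4):
    """Hierarchical split-and-fuse of a Res2Net block (sketch).
    x: (C, H, W) feature map after the input 1x1 conv, C divisible by `scale`.
    conv3x3: stand-in for the learned 3x3 convolution, applied per subset."""
    subsets = np.split(x, scale, axis=0)   # split channels into `scale` groups
    outs = [subsets[0]]                    # first subset passes through untouched
    y = None
    for i in range(1, scale):
        inp = subsets[i] if y is None else subsets[i] + y  # fuse previous output in
        y = conv3x3(inp)
        outs.append(y)
    return np.concatenate(outs, axis=0)    # the real block then applies a 1x1 conv
```

Because each later subset sees the (convolved) output of the previous one, the effective receptive field grows group by group within a single block, which is the multi-scale property the paper relies on.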

    Figure 2: Architecture of YOLACT

    Figure 3: Architectures of the Bottleneck and Res2Net

    The improved Res2Net (Res2nEt) incorporates the ECA attention module (ECA Net) [31]; the introduction of a channel attention module is exceptionally beneficial to the performance of a convolutional neural network model. ECA Net is an improvement on SE Net [32]: it effectively balances the performance and complexity of the model by avoiding dimensionality reduction, and it introduces appropriate cross-channel interaction to preserve performance while significantly decreasing model complexity. Fig. 4 shows the ECA module’s structure.

    Figure 4: Architectures of the Res2nEt and ECA Net

    The ECA Net procedure is as follows: the input feature layer is first processed by global average pooling, as given in the following equation:

    y = (1/(H × W)) Σ_{j=1}^{H} Σ_{k=1}^{W} x_i(j, k)  (1)

    In Eq. (1), x_i represents the i-th feature map with input size H × W, and y represents the global feature.

    Then, the number of cross-channel interactions k is calculated adaptively from the channel dimension C. The adaptive function is formulated as follows:

    k = ψ(C) = |log2(C)/γ + b/γ|_odd  (2)

    where |t|_odd denotes the nearest odd number to t; C is the channel dimension; b and γ are both constants, with b = 1 and γ = 2.
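With b = 1 and γ = 2 as above, one common concrete realization of this rounding-to-odd rule is the following small sketch (an assumption-labeled illustration, not the authors’ code):

```python
import math

def eca_kernel_size(C, gamma=2, b=1):
    """Adaptive 1-D conv kernel size k = |log2(C)/gamma + b/gamma|_odd."""
    t = int(abs((math.log2(C) + b) / gamma))
    return t if t % 2 else t + 1  # bump even values to the next odd number
```

For example, a 256-channel layer gets k = 5 while a 64-channel layer gets k = 3, so deeper (wider) layers interact across more neighboring channels.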

    Then, the channel weights are calculated by using a one-dimensional convolution with a convolution kernel of size k to obtain the interdependencies between channels. The 1D convolution is formulated as follows:

    ω = σ(C1D_k(y))  (3)

    where ω is the channel weight; σ is the sigmoid function; C1D is the one-dimensional convolution; y is the result of global average pooling, and k is the convolution kernel size. Finally, the original input features are multiplied element-wise by the channel weights to obtain features with channel attention, so that the network can selectively enhance valuable features and suppress useless ones.
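Putting Eqs. (1)–(3) together, the whole ECA pathway can be sketched in NumPy; the averaging kernel here is only a stand-in for the learned 1-D convolution weights, so this shows the data flow rather than a trained module:

```python
import numpy as np

def eca_attention(x, k=3):
    """ECA sketch. x: feature map of shape (C, H, W); returns reweighted x."""
    y = x.mean(axis=(1, 2))                     # Eq. (1): global average pooling -> (C,)
    kernel = np.ones(k) / k                     # stand-in for learned 1-D conv weights
    conv = np.convolve(y, kernel, mode="same")  # cross-channel interaction over k neighbors
    w = 1.0 / (1.0 + np.exp(-conv))             # Eq. (3): sigmoid -> channel weights
    return x * w[:, None, None]                 # scale each channel by its weight
```

Because the 1-D convolution only mixes each channel with its k neighbors, ECA avoids the fully connected bottleneck (and the dimensionality reduction) of SE Net.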

    3.2 ICIoU for Loss Function

    The bounding box regression loss function measures the position difference between the prediction and ground truth boxes. The loss function of YOLACT is Smooth L1, and the loss is calculated over the width, height, and the offsets of the horizontal and vertical coordinates of the center point of the prediction box [33]. The original loss function cannot accurately measure the location of the prediction box because it lacks the calculation of the intersection over union and the minimum bounding rectangle. CIoU considers three geometric factors: the Intersection over Union, the center point distance, and the aspect ratios of the prediction box and the ground truth box. Thus, CIoU measures the performance of bounding box regression more accurately than the original loss function. The equations of CIoU are given below:

    L_CIoU = 1 − IoU + ρ²(b, b^gt)/c² + αν

    ν = (4/π²)(arctan(w^gt/h^gt) − arctan(w/h))²

    α = ν/((1 − IoU) + ν)

    where ρ denotes the distance between the geometric centers of the predicted box b and the ground truth box b^gt; c denotes the diagonal length of the minimum bounding rectangle of the predicted box and the ground truth box; w^gt and h^gt are the width and height of the ground truth box, while w and h are the width and height of the predicted box, respectively.

    When w^gt/h^gt ≠ w/h, ν > 0 and αν > 0, so the penalty term αν plays an active role in the loss calculation. However, when w^gt/h^gt = w/h, then ν = 0 and αν = 0. In this case, L_CIoU degenerates to L_DIoU and the convergence speed drops, as Fig. 5 shows.
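The degeneration can be checked numerically with CIoU’s aspect-ratio term ν; this helper is illustrative only:

```python
import math

def aspect_ratio_penalty(w_gt, h_gt, w, h):
    """CIoU aspect term: v = (4/pi^2) * (arctan(w_gt/h_gt) - arctan(w/h))^2."""
    return (4 / math.pi ** 2) * (math.atan(w_gt / h_gt) - math.atan(w / h)) ** 2
```

A 4×2 prediction against a 2×1 ground truth has identical aspect ratios, so ν vanishes even though the boxes differ in size, and no αν gradient remains: exactly the degradation ICIoU is designed to avoid.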

    Inspired by the CIoU algorithm, Wang et al. proposed the ICIoU algorithm [34], which takes the ratios of the corresponding sides of the ground truth and predicted bounding boxes as the geometric factor. The penalty function is calculated based on the variance of the ratios between each side of the ground truth box and the predicted box.

    This method improves the comprehensiveness of the loss function calculation and effectively avoids the degradation of the CIoU algorithm to the DIoU algorithm when the aspect ratios of the ground truth and predicted boxes are equal. ICIoU also increases localization accuracy and enhances the robustness of the loss function across different box sizes.

    Figure 5: The degradation of CIoU

    3.3 Cluster NMS

    The YOLACT algorithm uses the Fast NMS algorithm to reduce redundant boxes. Although Fast NMS significantly increases the processing speed, it easily causes region proposals of different instances with a high overlap rate to be mistakenly removed, making some adjacent similar objects easily regarded as one instance. To address this issue, this paper introduces Cluster NMS, which proceeds as follows:

    (1) Assume there are eight prediction boxes arranged in descending order of confidence score, B = [B1, ..., B8]; initialize the one-dimensional tensor a0 = [1, 1, 1, 1, 1, 1, 1, 1] and t = 1; compute the IoU matrix A = (x_ij) with x_ij = IoU(Bi, Bj) and carry out upper triangulation:

    (2) The IoU matrix A is binarized according to Eq. (12), where ε takes the value 0.5, and the processed matrix is:

    (3) Expand the initial one-dimensional tensor a0 into the diagonal matrix P1 and left-multiply A by P1 to obtain C1:

    (4) Take the maximum value g by column; if g > ε, set g = 0, otherwise g = 1. The new one-dimensional tensor a1 consists of g; then a1 is expanded into the diagonal matrix P2, and A is left-multiplied by P2 to obtain C2, with t = 2:

    (5) Repeat the above operation, skipping the intermediate calculation process here, until t = 4, when the matrix C4 is obtained:

    (6) When t = 5, the maximum value is obtained for C4 by column, giving a4 = [1, 0, 1, 0, 1, 1, 1, 0] = a3. The iterative calculation then stops; prediction boxes B2, B4, B8 are suppressed, and B1, B3, B5, B6, B7 represent the final output. The algorithm flow is shown in Table 2.

    Table 2: The algorithm flow of Cluster-NMS

    If Fast NMS is used as the NMS algorithm instead, the resulting binarized one-dimensional tensor is a = [1, 0, 0, 0, 1, 0, 0, 0], and the prediction boxes B2, B3, B4, B6, B7, B8 are all suppressed, so the prediction results are inaccurate. The prediction results of the Cluster NMS algorithm, however, are the same as the traditional NMS results, with the operation time shortened and the detection speed improved.
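The matrix iteration above can be sketched in NumPy as follows. This is an illustrative reimplementation, not the authors’ code; boxes are (x1, y1, x2, y2), and the helper names are our own:

```python
import numpy as np

def pairwise_iou(b):
    """IoU matrix for boxes b of shape (N, 4) in (x1, y1, x2, y2) form."""
    x1 = np.maximum(b[:, None, 0], b[None, :, 0])
    y1 = np.maximum(b[:, None, 1], b[None, :, 1])
    x2 = np.minimum(b[:, None, 2], b[None, :, 2])
    y2 = np.minimum(b[:, None, 3], b[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area[:, None] + area[None, :] - inter)

def cluster_nms(boxes, scores, eps=0.5):
    """Cluster NMS: iterate a = binarize(colmax(diag(a) @ triu(IoU))) to a fixed point."""
    order = scores.argsort()[::-1]             # sort by descending confidence
    A = np.triu(pairwise_iou(boxes[order]), k=1)  # upper-triangular IoU matrix
    a = np.ones(len(boxes))
    for _ in range(len(boxes)):                # converges in at most N iterations
        C = a[:, None] * A                     # left-multiply by the diagonal matrix diag(a)
        a_new = (C.max(axis=0) <= eps).astype(float)  # column max, then binarize
        if np.array_equal(a_new, a):           # fixed point reached -> stop
            break
        a = a_new
    return order[a.astype(bool)]               # indices of the kept boxes
```

Zeroing a row via diag(a) means an already-suppressed box can no longer suppress others, which is how Cluster NMS recovers the traditional NMS result while staying matrix-parallel.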

    4 Experiment

    4.1 Datasets

    This paper’s experiments are mainly based on MS COCO, a large-scale dataset that comprises 80 common object classes for object detection and instance segmentation tasks. MS COCO consists of three subsets: the train set with about 115,000 images, the val set with 5,000 images, and the test-dev set with about 20,000 images.

    The experiments select 9,880 vehicle images containing five categories (cars, trucks, buses, motorcycles, and bicycles) with annotation information from COCO 2017 as the training set. Similarly, 870 vehicle images are selected as the val set. The COCO 2017 test-dev is used as the test set, excluding images with annotation information to distinguish them from training set images and ensure the model’s generalization capability. After training on the train set, the model’s performance is evaluated on the val set, and the model is also compared with state-of-the-art models on the test-dev set.

    4.2 Ablation Studies

    The ablation experiments are conducted on the train and val sets to validate the effectiveness of each component. The experiments measure the impact of progressively adding the different components, including the ICIoU loss function, Cluster NMS, and the Res2nEt module, to the baseline. The experiments are run on a computer with an AMD Ryzen 9 5900HX CPU and an Nvidia RTX 3080 GPU (16 GB). The software environment is Python 3.9, Conda, and PyTorch 1.11. The total number of iterations is 300,000, and the batch size is 8.

    4.2.1 ICIoU Loss Function and Cluster NMS

    In the ablation experiments, the YOLACT instance segmentation model is used to discuss the performance impact of the ICIoU loss and Cluster NMS on the prediction box. The base model and the modified models with the ICIoU loss function are trained separately.

    The ICIoU loss function and Cluster NMS perform better in the test results. The improved ICIoU loss function enables the prediction box to label the target object’s location more accurately and improves the confidence score, while Cluster NMS increases the detection rate for similar occluded objects. Fig. 6 shows the test results of our model and YOLACT.

    The results show that the ICIoU loss function performs better than the CIoU loss function, predicting the target object more accurately and improving the confidence score on the target. For example, the confidence score of the bus on the left of the first-row images is improved after adding ICIoU. In the second and third rows, the confidence scores of the targets on the right side are also improved compared to the left.

    The NMS algorithm has good results in detecting single target objects but has shortcomings with the occlusion problem of similar objects. NMS removes the target object with a lower confidence score when the overlap of two objects is high, while Cluster NMS improves the detection rate for similar occluded objects. In the first row of pictures, the improved model detects the target missed in the middle (the part circled by the red dotted line); in the second row, it detects the motorcycle obscured on the right; and in the third row, it detects the vehicle missed at the edge.

    To demonstrate their effectiveness, the paper also designed a quantitative experimental analysis, with evaluation metrics including average precision (AP), inference time, and FPS.

    Figure 6: Comparison of our model (with ICIoU and Cluster NMS) and YOLACT (with CIoU and Fast NMS). Our results are shown in the right column

    The curves of the box AP values against the number of iterations for the base model and the model with the ICIoU loss function are shown in Fig. 7. The dotted lines are the actual measured values, and the solid lines are fitted curves added for easier observation.

    As shown in Fig. 7, the improved model converges more rapidly than the original base model, and its AP values are consistently better, indicating that the ICIoU loss function can improve the model’s performance.

    Table 3 compares the CIoU loss function with the ICIoU loss function in terms of the box AP score under different NMS algorithms, and Table 4 shows the corresponding mask AP scores.

    Table 3: The box AP scores comparing CIoU loss and ICIoU loss on the val set

    Table 4: The mask AP scores comparing CIoU loss and ICIoU loss on the val set

    According to the experimental results, the average precision of the ICIoU loss function improves over the CIoU loss function in both the box AP score and the mask AP score under the same NMS conditions, demonstrating that the ICIoU loss predicts the target box more accurately. Also, with the same loss function, the box AP and mask AP scores of Cluster NMS are better than those of Fast NMS.

    Figure 7: The average accuracy of our model and YOLACT

    4.2.2 Res2nEt Model

    The improved Res2Net module fused with the ECA module enhances the ability to extract global and local information and to represent multi-scale features at a granular level. Thus, it can improve mask prediction accuracy. Fig. 8 compares the improved models with the baseline.

    Fig. 8 shows that the improved model with Res2nEt has higher accuracy for mask prediction in the instance segmentation task. For example, the mask of the bicycle on the right of the first row is more complete than that on the left, the mask prediction of the motorcycle in the second row is improved over the baseline, and the mask prediction of the bicycle in the third row is also more accurate and complete.

    Figure 8: Comparison of our model with the original YOLACT. Our results are shown in the right column

    To demonstrate the effectiveness of Res2nEt, Table 5 shows the ablation experiments comparing the box AP and mask AP values when adding the Res2Net module and the ECA module, with ICIoU as the loss function, on the val set.

    Table 5: The box AP and mask AP scores for adding the Res2Net and ECA modules on the val set

    Table 5 shows that with Res2Net alone, the model improves in both box AP and mask AP values. When the ECA module is added as well, the model performs best, which proves the effectiveness of the Res2Net module fused with ECA.

    Then, ablation experiments are performed on the val set using the ResNet-101 backbone to prove the effectiveness of integrating the different individual components, including Cluster NMS, CIoU, ICIoU, and Res2nEt. The detailed results are shown in Table 6.

    Table 6: Ablation study results on the val set

    Experiment 1 is the basic YOLACT; Experiment 2 changes Fast NMS to Cluster NMS; Experiment 3 replaces the loss function with CIoU based on Experiment 2; in Experiment 4, ICIoU is applied as the loss function; and Experiment 5 adds the Res2nEt module.

    The results of the experiments show that all the components contribute to improving the accuracy. Experiment 2 shows that the application of Cluster NMS improves the box AP score. Compared with Experiment 3, Experiment 4 improves the box AP, so ICIoU plays a more positive role in the model. The final results of Experiment 5 show that the mask AP score increases after integrating the Res2nEt module into the baseline, proving the effectiveness of Res2nEt. After combining all improvements into the baseline, our model obtains a box AP score of 42.56% and a mask AP score of 36.73%.

    4.3 Algorithm Comparison and Analysis

    In this subsection, experiments are conducted on the MS COCO train set to compare our method with some typical state-of-the-art methods on MS COCO test-dev in terms of accuracy (mask AP), speed (milliseconds and FPS), and model complexity (parameters P and FLOPs). The total number of iterations is 800,000, and the batch size is 8. The results are demonstrated in Table 7.

    Table 7: Comparison of our method to other instance segmentation frameworks on the MS COCO test-dev dataset

    Table 7 indicates that MNC and FCIS have mask AP scores of 24.6% and 29.2%, respectively. Moreover, the typical Mask R-CNN, MS R-CNN, PoinInst, and SOLO have mask AP scores of 35.7%, 38.3%, 38.3%, and 37.8%, respectively. Although they outperform our model in mask AP, they require more than 75 milliseconds (ms) to process an image during inference. Moreover, the QueryInst and SipMask algorithms only improve accuracy and ignore processing speed. Therefore, they are unsuitable for real-time image processing in autonomous driving scenarios. Conversely, our model meets the demands of real-time image processing while outperforming the original YOLACT by 3.0% in mask AP score, and according to the FLOPs and P, our model is not much more complex than the original algorithm.

    In addition, Table 8 compares our model with other instance segmentation methods on the Pascal SBD test set. The experimental results show that our method performs best on the Pascal SBD dataset.

    Table 8:Experimental results of different methods on the Pascal SBD test set

    Table 9: The box AP scores comparing our method and YOLACT on the MS COCO test-dev dataset

    Table 10: The mask AP scores comparing our method and YOLACT on the MS COCO test-dev dataset

    Meanwhile, the improved model also enhances the original model’s small-scale object detection capability, as shown in Fig. 9.

    From Fig. 9, the improved model is superior on small-scale objects. For example, the car, the bicycle, and the smaller car are all detected correctly.

    The proposed model is tested on the MS COCO test-dev dataset and compared with the original YOLACT algorithm on small, medium, and large targets. The results are listed in Tables 9 and 10.

    The data from Tables 9 and 10 indicate that the improved algorithm gains 4.2% in box AP and 3.0% in mask AP compared to the original YOLACT algorithm, while achieving 1.8% and 1.1% improvements in the small objects’ box AP and mask AP, respectively.

    Figure 9: Comparison of our model with the original YOLACT. Our results are shown in the right column

    5 Conclusions and Future Work

    This paper proposes the FIR-YOLACT vehicle instance segmentation algorithm to tackle current problems such as slow convergence and long training time. The proposed algorithm utilizes the Cluster NMS algorithm for bounding box regression to improve the accuracy of predicting and detecting similar obscured objects. Additionally, the original loss function is replaced with ICIoU to prevent the degradation of the CIoU algorithm and strengthen the model’s robustness. To extract richer image information and increase mask accuracy scores, this paper incorporates the Res2Net module fused with ECA Net into the backbone network. The experimental results demonstrate that FIR-YOLACT performs significantly better than the original model, with 4.2% and 3.0% increases in box AP and mask AP scores, respectively. Moreover, FIR-YOLACT achieves a processing speed of 28 FPS, indicating its excellent balance of accuracy and processing speed.

    However, the proposed method still has some shortcomings to be further improved. For example, the model does not consider the impact of complex weather scenarios and needs optimization for inter-frame timing information in video datasets. In the future, we plan to deploy the model on a low-cost mobile platform and employ TensorRT technology to improve the speed of model detection, and to extend the proposed approach to video instance segmentation. We will explore practical methods to optimize the model to improve the system’s real-time detection performance and find new application scenarios in the era of new-energy smart vehicles.

    Acknowledgement:Thanks are given for the computing support of the State Key Laboratory of Public Big Data,Guizhou University.

    Funding Statement:This work is supported by the Natural Science Foundation of Guizhou Province(Grant Number:20161054),Joint Natural Science Foundation of Guizhou Province(Grant Number:LH20177226),2017 Special Project of New Academic Talent Training and Innovation Exploration of Guizhou University(Grant Number:20175788),The National Natural Science Foundation of China under Grant No.12205062.

    Author Contributions:The authors confirm their contribution to the paper as follows: Wen Dong:Methodology: development or design of methodology;creation of models;software: programming,software development;designing computer programs;implementation of the computer code and supporting algorithms;testing of existing code components;writing-original draft: preparation,creation,and presentation of the published work,specifically writing the initial draft (including substantive translation);validation: verification,whether as a part of the activity or separate,of the overall replication/reproducibility of results/experiments and other research outputs.Ziyan Liu:Conceptualization: ideas;formulation or evolution of overarching research goals and aims;project administration: management and coordination responsibility for the research activity planning and execution;supervision:oversight and leadership responsibility for the research activity planning and execution,including mentorship external to the core team;writing-review &editing: preparation,creation and/or presentation of the published work by those from the original research group,specifically critical review,commentary or revision including pre-or post-publication stages;funding acquisition:acquisition of the financial support for the project leading to this publication.Mo Yang:Software;validation.Ying Wu: Software;validation.All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: Research data are not shared. Because the participants of this study did not consent to their data being shared publicly, supporting data are not available.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
