
    An infrared target intrusion detection method based on feature fusion and enhancement

Defence Technology, 2020, Issue 3

    Xiaodong Hu, Xinqing Wang, Xin Yang, Dong Wang, Peng Zhang, Yi Xiao

    College of Field Engineering, Army Engineering University of PLA, Nanjing, 210007, China

Keywords: Target intrusion detection; Convolutional neural network; Feature fusion; Infrared target

ABSTRACT Infrared target intrusion detection has significant applications in the fields of military defence and intelligent warning. In view of the characteristics of intrusion targets as well as inspection difficulties, an infrared target intrusion detection algorithm based on feature fusion and enhancement was proposed. This algorithm combines static target mode analysis and dynamic multi-frame correlation detection to extract infrared target features at different levels. Among them, LBP texture analysis can be used to effectively identify feature patterns already contained in the target library, while the motion frame difference method can detect the moving regions of the image and improve the integrity of target regions affected by camouflage, sheltering and deformation. In order to integrate the advantages of the two methods, an enhanced convolutional neural network was designed, and the feature images obtained by the two methods were fused and enhanced. The enhancement module of the network strengthened and screened the targets, and realized the background suppression of infrared images. Based on the experiments, the effect of the proposed method and the comparison methods on background suppression and detection performance was evaluated. The results showed that the SCRG and BSF values of the proposed method performed better on multiple data sets, and its detection performance was far better than that of the comparison algorithms. The experimental results indicated that, compared with traditional infrared target detection methods, the proposed method could detect infrared intrusion targets more accurately and suppress background noise more effectively.

    1. Introduction

Target intrusion detection is a significant technical means in the field of military defence [1], and it has been preliminarily applied in key monitoring areas, including prohibited military zones, border posts, airport perimeters and national defense engineering. Monitoring enemies, weapons and equipment with satellite imagery, unmanned aerial vehicle photography, video monitoring and other equipment can effectively reduce the cost and workload of alert tasks. Compared with optical target detection, infrared target detection is characterized by high accuracy, long operating distance and outstanding anti-interference, because investigators and equipment emit thermal radiation that is very difficult to hide. Therefore, infrared imaging equipment can reduce the difficulty of the detection algorithm.

At present, infrared target intrusion detection mostly aims at ground-air target detection [2-4]. Ground target detection technology is usually interfered with by a number of factors, such as the background environment, high-temperature areas, weak targets, deformation and sheltering, which make infrared intrusion target detection a challenging subject. In addition, military alert missions have some unique characteristics compared with usual target detection missions. On the one hand, the targets which need to be guarded against are usually the reconnaissance personnel and small ground equipment of the other party, which limits the targets of the monitoring mission. On the other hand, the intrusion targets are usually characterized by movement, which enables the detection task to be combined with dynamic target detection technology, thus effectively increasing the accuracy.

Currently, there are two kinds of target detection approaches: one uses a spatial convolution filter template for static detection on a single frame image, and the other uses the continuity and similarity between adjacent frames for motion detection. The former is susceptible to environmental interference, which leads to a high false alarm rate, so it is not competent for detection in complex backgrounds. The latter has good adaptability to the environment, but it cannot detect the target area very accurately, and it may not be able to extract the target boundary of slowly moving targets. At the same time, the target area extracted from fast-moving targets is too large, and the detection results are prone to "cavities" and "double shadows". Benefiting from the rapid growth of the deep convolutional neural network (DCNN) [5,6], the recently proposed target detection methods based on deep learning, such as R-CNN [7], Fast R-CNN [8], Faster R-CNN [9] and the YOLO series of algorithms [10-12], have made great breakthroughs in target detection performance. Deep learning can automatically learn useful features directly from abundant training samples; however, an infrared target only contains grayscale information and does not have the distinguishing features, such as obvious size, texture and color, of the targets to be detected in computer vision applications. Therefore, the existing object detection methods based on deep learning in the field of computer vision are not suitable for infrared target detection. Manual features have the advantages of fast calculation speed and no need to train a network, and they can intuitively express the characteristics of the target. Manually acquiring the visual features of the target and combining them with the deep learning method is beneficial for extracting the high-level features of the target.

In view of the above problems and the specific task background of infrared target intrusion detection, an infrared target intrusion detection method based on feature fusion and enhancement was proposed. It comprehensively used the gray-scale texture features of infrared images and the correlation between adjacent frames to efficiently extract target features as well as suppress background noise. Meanwhile, an enhanced convolutional neural network was designed to fuse the extracted feature images, further strengthen the target characteristics of infrared images, suppress false targets and backgrounds, and realize fast and accurate detection of infrared intrusion targets.

The remainder of this paper is arranged as follows. In the second section, related work is briefly discussed. The third section presents the infrared target intrusion detection framework. The fourth section contains the experimental analysis, in which the experimental results of the proposed method and other methods are discussed. Finally, our research is summarized in the fifth section.

    2. Related work

Traditional infrared target detection technology generally includes approaches based on single-frame images and approaches based on sequential images. The detection approaches based on single-frame images mainly carry out the detection on the basis of the basic features of the images, such as edge features and gray information; the general detection process is preprocessing first and then threshold detection. Traditional detection methods based on the single-frame spatial domain include max-mean filtering [13], morphological top-hat filtering [14], high-pass filtering [15] as well as the wavelet transform [16]. However, these methods usually lead to a lot of false alarms and poor detection performance when the signal-to-noise ratio of the target is relatively low. The detection approaches based on sequential images mainly carry out the detection on the basis of the continuity and similarity of target motion. When sequential images are used for detection, accumulated information can be obtained, so the signal-to-noise ratio can be effectively improved, thus highlighting the target. The main target detection approaches based on sequential multi-frame images include the difference method [17], the optical flow method [18], the 3D matched filter method [19] and the grayscale accumulation method [20], whose shortcoming is that when the inter-frame motion speed is fast, the target energy may not be effectively accumulated, making the detection performance of these methods decline.

In the past ten years, much research has been carried out on the detection of intrusion targets, and the proposed detection technologies detect potential intrusion targets by enhancing the characteristics of the intrusion targets and suppressing background noise and clutter. Literature [21] adopts a filter based on the hidden Markov model (HMM) to effectively detect intrusion targets. The HMM filter is the optimal filter for a discrete-time process, but a threshold needs to be preset before detection: a high preset threshold can reduce the incidence of false alarms, but it also reduces the probability of detecting the targets. Literature [22] puts forward the idea of combining tracking technology with the HMM filter before detection, and constructs a new HMM filter library. The research shows that the HMM filter library is more flexible than other HMM filters, and its target detection performance is also better, but neither the HMM filter system nor the HMM filter has a recognition function, so false alarms are inevitable. Literature [23] proposes a morphological filtering method combined with a trained classifier, which helps to identify real intrusion targets from images of "suspected targets" that may cause false alarms, thus reducing the false alarm rate. However, it must be emphasized that this type of method is strongly dependent on the training data set.

In recent years, many scholars have studied the detection of infrared targets. Literature [24] proposes to use the local steering kernel (LSK) to encode infrared images, but does not use deep network training. Literature [25] proposes a new learning framework to transfer knowledge from remote sensing image scene classification tasks to multiple types of geospatial target detection tasks; however, due to the dense distribution of objects and the complex background structure, the robustness of this method to noise is not strong. In the past few years, with the improvement of computer performance as well as the rapid growth of neural networks, many infrared target detection methods based on DCNN have achieved good results. In view of the problems of infrared ship images, such as low recognition rate and slow speed, Wang et al. [26] proposed a method which combines a marker-based watershed segmentation algorithm with DCNN, and experimental results show that the proposed method could identify infrared ship targets more quickly and accurately. Lin et al. [27] proposed an infrared point target detection approach based on DCNN, and designed two kinds of deep networks, regression and classification, to achieve the detection and classification of infrared point targets; the results indicate that the method is suitable for point target detection in infrared oversampling scanning systems. Wu et al. [28] proposed a new deep convolutional network which can address the issue of small target detection in infrared images. The network is composed of a fully convolutional network (FCN) and a classification network: the fully convolutional network carries out the enhancement and preliminary screening of infrared small targets, while the classification network classifies the position distribution of the small targets, and the experiments validate the advantages of the new detection network compared with traditional small target detection algorithms.

    3. Proposed method

    3.1. Overview of the proposed method

Image feature extraction methods based on single-frame analysis can extract target areas quickly, but they are generally limited to specific application environments. In the image preprocessing phase, the effectiveness of target segmentation depends heavily on prior knowledge of the targets and background. Image feature extraction methods based on multi-frame correlation can obtain accumulated information, so they can effectively improve the signal-to-noise ratio and highlight the movement of the target. However, when the inter-frame movement speed is relatively fast, they may not effectively accumulate the target energy, decreasing their detection performance. In addition, the intrusion targets in a military alert zone may be feature targets which are already contained in the knowledge base, or targets that cannot be recognized after deformation and camouflage. To overcome the above problems, this paper combines single-frame mode analysis with multi-frame correlation motion analysis, and carries out feature extraction and target detection through a feature enhancement convolutional network. The overall algorithm flow is shown in Fig. 1.

Fig. 1 contains two parts. The former part is feature extraction, while the latter part is target detection based on the enhancement neural network. The first part contains two modules: the upper module is image texture feature extraction based on the LBP model, and the lower module is motion feature extraction using the three-frame difference method. The two features are combined and then taken as the input of the enhancement network. Through the training of the network, on the one hand, the infrared texture feature patterns of existing targets can be recognized, and on the other hand, the positions of moving targets can be sensed. In the network, the features obtained in the two different ways are normalized and combined as the input, and the final output is a binary image containing the detection results.

    3.2. Feature extraction

    3.2.1. Image texture feature extraction based on LBP

The LBP (Local Binary Pattern) texture feature, proposed in 1994 [29], is an operator used to describe the local features of images. As LBP features are simple to calculate and effective, and have other obvious advantages such as gray-scale invariance and rotation invariance, they have been widely used in many fields of computer vision. In this paper, the LBP algorithm is adopted to represent texture features [30], because it can better express the target feature patterns already contained in the sample database, thus providing strong support for static or slow-moving target analysis.

The original LBP operator is defined in a 3×3 neighborhood window. Taking the center pixel of the window as the threshold, the gray values of the 8 pixels in the neighborhood are compared with it. If the value of a surrounding pixel is greater than or equal to the threshold, that pixel is marked as 1; otherwise, it is marked as 0. After the comparison, the 8 points in the 3×3 neighborhood generate an 8-bit binary number, and this number is converted to an LBP value that reflects the texture information of the region [31]. For a neighborhood (P, R), the above process can be expressed as:
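A standard formulation of the circular LBP operator, consistent with the symbol definitions below, is:

\mathrm{LBP}_{P,R} = \sum_{i=0}^{P-1} s(p_i - p_c)\, 2^{i}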

where P represents the number of sampling pixels on the circle, which determines the specificity of the texture features: the larger the value is, the more sampling points there are, the more specific the texture features obtained will be, and the higher the computational complexity will be; R is the radius of the circle, which determines the neighborhood size of the operator, and the smaller the value is, the more localized the texture features will be; p_c denotes the gray value of the corresponding center pixel; p_i denotes the gray value of each sampling pixel on the circle with radius R; and s is a sign function, which is denoted by:
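Here the sign function takes the usual thresholding form:

s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}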

    In order to extract the most basic structure and rotation invariance mode from LBP, the LBP texture model with gray scale and rotation invariance is adopted [32]:
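A widely used form of this rotation-invariant uniform operator, consistent with the "riu2" notation explained below, is:

\mathrm{LBP}_{P,R}^{riu2} = \begin{cases} \sum_{i=0}^{P-1} s(p_i - p_c), & \text{if } U(\mathrm{LBP}_{P,R}) \le 2 \\ P + 1, & \text{otherwise} \end{cases}

where U(\cdot) counts the number of 0/1 transitions in the circular binary pattern.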

    Fig.1. Proposed infrared target intrusion detection framework.

where the superscript "riu2" in the above formula means that the maximum U value of the rotation-invariant "uniform" patterns is 2.
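As an illustration (not the authors' implementation), a texture feature map of this kind can be computed with scikit-image's rotation-invariant uniform LBP; the parameter values below are assumptions chosen for demonstration:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_texture_map(infrared_frame: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    """Rotation-invariant uniform LBP map for a single grayscale infrared frame.

    P and R follow the neighborhood definition above; method='uniform' gives
    the riu2-style codes in the range 0 .. P+1.
    """
    lbp = local_binary_pattern(infrared_frame, P, R, method="uniform")
    # Scale to [0, 1] so the map can later be fused with the frame-difference map.
    return lbp / (P + 1)
```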

    3.2.2. Motion frame difference

As military alert areas need long-term monitoring, the position of the target can be sensed by analyzing its motion in the infrared image sequence. The target image has a displacement between adjacent frames, while the background is fixed between adjacent frames. The frame difference method [33] is used to carry out point-to-point subtraction of adjacent frames, so as to determine the absolute value of the gray difference.

The traditional two-frame difference method obtains the contour of a moving target by detecting the changing areas between the two adjacent frames, and it can be represented by the following equation:
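With the symbols defined below, the two-frame difference and its subsequent binarization (the Eq. (6) referenced in the text) take the standard form; b_t is a placeholder name for the binarized image:

d_t(x, y) = \left| f_t(x, y) - f_{t-1}(x, y) \right|

b_t(x, y) = \begin{cases} 255, & d_t(x, y) \ge T \\ 0, & d_t(x, y) < T \end{cases}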

where f_t(x, y) denotes the gray value of pixel (x, y) at time t; f_{t-1}(x, y) represents the gray value of pixel (x, y) at time t-1; and d_t(x, y) represents the pixel difference between the adjacent frames at the two time points. The threshold is set as T, and binarization is carried out pixel by pixel according to Eq. (6) to obtain the binary image, in which points with a gray value of 255 represent foreground points and points with a gray value of 0 represent background points. Based on the connectivity analysis of the image, the image R_t containing the complete moving target can finally be acquired.

In the actual scene of infrared intrusion target detection, as the target to be measured is distant from the detection equipment, the imaging area of the target is small, so the motion of the target can be regarded as approximately uniform. The two-frame difference method is not sensitive to slowly moving targets and easily produces cavities, and complete moving targets cannot be obtained after the subtraction of two frames. Therefore, this paper chooses the three-frame difference method [34] to extract moving objects. First of all, three adjacent frames are taken as a group and differenced in pairs; secondly, a logical operation is carried out on the two difference results. The specific algorithm process is as follows (the corresponding equations are sketched after this list):

1) Assuming there is an image sequence containing n frames of infrared intrusion targets, denoted by {f_1(x, y), …, f_k(x, y), …, f_n(x, y)}, where f_k(x, y) denotes the k-th frame in the image sequence, the difference between two adjacent frames is calculated as follows:

2) An appropriate threshold value T is set, and the two obtained difference images are then binarized:

3) For each pixel point (x, y), a logical "or" operation is carried out on the two binary images obtained in step 2) to calculate the following binary image:
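The equations referenced in steps 1)-3) follow the standard three-frame difference scheme; with the notation above (the names of the intermediate images are placeholders), they can be written as:

d_{k,k-1}(x, y) = \left| f_k(x, y) - f_{k-1}(x, y) \right|, \qquad d_{k+1,k}(x, y) = \left| f_{k+1}(x, y) - f_k(x, y) \right|

b_{k,k-1}(x, y) = \begin{cases} 1, & d_{k,k-1}(x, y) \ge T \\ 0, & \text{otherwise} \end{cases}, \qquad b_{k+1,k}(x, y) = \begin{cases} 1, & d_{k+1,k}(x, y) \ge T \\ 0, & \text{otherwise} \end{cases}

B_k(x, y) = b_{k,k-1}(x, y) \lor b_{k+1,k}(x, y)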

    3.3. Enhancement network

Based on the above steps, the LBP texture feature map and the motion frame difference map can be obtained. The enhancement network we designed integrates the two different feature maps and further enhances the characteristics of the target, so as to suppress background clutter and improve the detection rate. The specific structure of the enhancement network designed in this paper is shown in Fig. 2. The main purpose of the enhancement network is to highlight the target features, in order to obtain the candidate positions with the highest probability and reduce the false alarm rate. The enhancement network consists of two modules. The first module is the feature fusion module, which integrates and comprehensively analyzes the information captured in the two feature maps. The second module is the feature enhancement module, which aims to suppress background clutter characteristics and effectively highlight the target area.

In the feature fusion module, the feature map obtained by gray-scale texture analysis and the feature map obtained by the frame difference method are fused as the input of the enhancement network. The extracted feature maps are normalized before the fusion, and the expression is as follows:
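Consistent with the symbols defined below, a standard zero-mean normalization is applied:

F'(i, j) = \frac{F(i, j) - \mu}{\sigma}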

where F(i, j) denotes the extracted feature map; F′(i, j) denotes the normalized feature map; and μ and σ denote the mean and variance of the feature map, respectively.

The feature fusion phase introduces the inception module [35], which can increase the network's adaptation to different scales without increasing the network complexity. Fig. 3(a) shows the basic structure of the inception module, which stacks 1×1, 3×3 and 5×5 convolution kernels together and then aggregates the features of each branch, thus providing initial features with different scales for the next extraction stage. This paper follows the improvement of the inception module in the literature [36]: the branch for the pooling operation is removed to avoid losing a lot of feature information and causing difficulties in model training, and the 5×5 convolution kernel is replaced with two 3×3 convolution kernels, so as to obtain the same receptive field with fewer parameters as well as indirectly increase the depth of the network. In order to reflect the importance of convolution kernels with different scales, the outputs of the three convolution branches are given different weights: 1/4, 1/2 and 1/4, respectively. To speed up model training, BN (Batch Normalization) is used after each convolution layer of the inception module, and the improved inception module is depicted in Fig. 3(b).
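A minimal PyTorch sketch of one plausible reading of the improved module is shown below (not the authors' implementation); the channel widths and activations are assumptions, and the weighted aggregation is interpreted here as a weighted sum of the three branch outputs:

```python
import torch
import torch.nn as nn

class ImprovedInception(nn.Module):
    """Improved inception block: no pooling branch, the 5x5 kernel replaced by
    two stacked 3x3 kernels, BN after every convolution, and the branch
    outputs weighted 1/4, 1/2 and 1/4."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branch1 = nn.Sequential(                       # 1x1 branch
            nn.Conv2d(in_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.branch3 = nn.Sequential(                       # 3x3 branch
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(                       # two stacked 3x3 replace the 5x5
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Weighted aggregation of the three scales (1/4, 1/2, 1/4).
        return 0.25 * self.branch1(x) + 0.5 * self.branch3(x) + 0.25 * self.branch5(x)
```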

    Fig. 2. Enhancement network structure.

    Fig. 3. Inception structure. (a) basic structure; (b) improved structure.

Three convolutional layers are adopted in the enhancement module. The size of each convolution kernel is 3×3, the activation function of each convolutional layer is ReLU, and the last layer adopts deconvolution to reduce the number of output channels to 1 [28]. The design idea of the enhancement module is shown in Fig. 4. By using the characteristics of the target, the convolutional neural network can extract target features through convolution and form a classification ability for the target through multi-layer convolution. The probability of each point being a target is obtained, the obtained probability values are then filtered layer by layer through convolution, and finally a probability distribution image S is output; the stronger the target properties are, the larger the corresponding probability values will be.
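The following PyTorch sketch shows one possible layout consistent with this description (three 3×3 convolutions with ReLU, then a deconvolution down to a single channel); the channel counts and the final sigmoid are assumptions:

```python
import torch.nn as nn

class EnhancementModule(nn.Module):
    """Three 3x3 convolutions with ReLU, then a deconvolution producing the
    single-channel probability map S."""

    def __init__(self, in_ch: int = 64, mid_ch: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            # Deconvolution collapsing the features to one output channel.
            nn.ConvTranspose2d(mid_ch, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),   # assumption: map the output to [0, 1] probabilities
        )

    def forward(self, x):
        return self.body(x)
```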

After getting the probability distribution image S, the probability threshold is selected adaptively by an iterative method. First of all, S is divided into equal rectangular areas according to an 8×8 grid. Secondly, the mean probability value in each rectangle is calculated, so that a 64×1 array is obtained. Then, the elements which are less than 0.1 are discarded and judged to contain no target, so an n×1 array C is obtained. The process of adaptive threshold selection is as follows (a sketch is given after the numbered steps):

    Fig. 4. Design ideas of enhancement module.

1) Firstly, the maximum and minimum values of the array are calculated, denoted as P_max and P_min, respectively, and the initial threshold is set as T_0 = (P_max + P_min)/2.

2) The array is divided into a high-value area and a low-value area according to the threshold value T_k (k = 0, 1, 2, …), and the average values of the two areas are calculated and denoted as H_1 and H_2, respectively;

3) Calculate the new threshold value T_{k+1} = (H_1 + H_2)/2;

4) If T_{k+1} = T_k, then T_k is the threshold value of this probability image S; otherwise, go to step 2) and continue the iterative calculation.
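A compact sketch of this iterative selection (essentially a classical iterative thresholding applied to the block-mean array C) is given below; the convergence tolerance and the handling of degenerate cases are assumptions:

```python
import numpy as np

def adaptive_threshold(prob_map: np.ndarray, grid: int = 8, min_mean: float = 0.1,
                       tol: float = 1e-6) -> float:
    """Iterative threshold selection for the probability distribution image S.

    Splits S into grid x grid rectangles, keeps block means >= min_mean (array C),
    then iterates T_{k+1} = (H1 + H2) / 2 until the threshold stops changing.
    """
    h, w = prob_map.shape
    bh, bw = h // grid, w // grid
    means = np.array([prob_map[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].mean()
                      for i in range(grid) for j in range(grid)])
    c = means[means >= min_mean]              # the n x 1 array C
    if c.size == 0:                           # no candidate blocks: treat as pure background
        return 1.0
    if np.isclose(c.max(), c.min()):          # degenerate case: a single level
        return float(c.max())
    t = (c.max() + c.min()) / 2.0             # initial threshold T0
    while True:
        h1 = c[c >= t].mean()                 # mean of the high-value part
        h2 = c[c < t].mean()                  # mean of the low-value part
        t_new = (h1 + h2) / 2.0
        if abs(t_new - t) < tol:              # T_{k+1} == T_k (up to tolerance)
            return float(t_new)
        t = t_new
```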

After obtaining the threshold value, probability values above the threshold are set to 1 and probability values below the threshold are set to 0. Finally, the target image obtained is a binary image, in which the locations of the target pixels are labeled 1 and the rest are 0.

The loss function of the convolutional network is based on grayscale cross-correlation. As the target image is a binary image in which only a few points take a value of 1 and the remaining points take a value of 0, adopting a grayscale cross-correlation loss can obtain a larger loss gradient than the mean square error loss [28]. The loss function L consists of two parts: L1 and L2. The former calculates the mean deviation between the network output image and the target image, and its purpose is to make the two images approximate each other in their mean values; the latter calculates the gray cross-correlation coefficient, aiming to make the two images consistent in the changes of their pixels. The loss function L adds L1 and L2 so that the output image and the target image are as consistent as possible over all of the pixels, and the error calculation formula is as follows:

where G represents the network output image and S denotes the target image. To prevent the occurrence of lg(0), a smoothing coefficient of 0.01 is added to each term. "batch_size" represents the batch size of each training step, which is set as 32 in the experiment [28].
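The precise expressions for L1 and L2 follow Ref. [28]; purely as an illustration of the two-part structure described above (a mean-deviation term plus a log-smoothed cross-correlation term), a PyTorch-style sketch could look like the following. The specific functional forms here are assumptions, not the paper's formula:

```python
import torch

def grayscale_crosscorr_loss(g: torch.Tensor, s: torch.Tensor, eps: float = 0.01) -> torch.Tensor:
    """Illustrative two-part loss L = L1 + L2 (not the exact loss from Ref. [28]).

    g, s: (batch_size, 1, H, W) network output and binary target images.
    L1 pulls the image means together; L2 penalizes low normalized
    cross-correlation, with `eps` smoothing the logarithm (cf. the 0.01
    coefficient mentioned in the text).
    """
    l1 = torch.mean(torch.abs(g.mean(dim=(1, 2, 3)) - s.mean(dim=(1, 2, 3))))

    gc = g - g.mean(dim=(1, 2, 3), keepdim=True)
    sc = s - s.mean(dim=(1, 2, 3), keepdim=True)
    ncc = (gc * sc).sum(dim=(1, 2, 3)) / (
        gc.pow(2).sum(dim=(1, 2, 3)).sqrt() * sc.pow(2).sum(dim=(1, 2, 3)).sqrt() + eps)
    l2 = -torch.log10(ncc.clamp(min=0) + eps).mean()   # log-smoothed correlation term

    return l1 + l2
```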

    3.4. Training and parameter setting of the network

In this paper, the implementation platform of the experiment is 64-bit Ubuntu 16.04 LTS on a DELL Precision R7910 (AWR7910) graphics workstation; the processor is an Intel Xeon E5-2603 v2 (1.8 GHz, 10 MB cache), and an NVIDIA Quadro K620 GPU is adopted for accelerated computing.

In the experiment, the initial learning rate of the model training is 0.01; the optimization method is stochastic gradient descent; the momentum is 0.9; the weight decay is 0.0005; 32 images are processed in each batch; and the maximum number of iterations is 60,000. The learning rate for the first 30,000 iterations is 0.01, and the learning rate for the last 30,000 iterations is 0.001.
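For concreteness, these hyperparameters map onto a standard PyTorch optimizer setup as sketched below; the placeholder model, the data loading and the iteration-style scheduling are assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(2, 1, kernel_size=3, padding=1)   # placeholder for the full network

optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.01,                 # initial learning rate
                            momentum=0.9,
                            weight_decay=0.0005)
# Drop the learning rate from 0.01 to 0.001 after the first 30,000 of 60,000 iterations.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30_000], gamma=0.1)

for iteration in range(60_000):
    # images, targets = next(loader)            # batches of 32 samples (loader not shown)
    # loss = criterion(model(images), targets)
    # optimizer.zero_grad(); loss.backward(); optimizer.step()
    scheduler.step()                             # advance the iteration-based schedule
```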

    4. Experimental analysis

    4.1. Data set

A representative data set is produced for training and testing the model. The data set contains 700 sets of samples, in which three consecutive frames are combined as one set of samples, and the Ground truth of the target area is drawn manually as the sample label. Considering that the intrusion targets of a prohibited military zone are usually suspicious persons and vehicles, people with different postures and quantities and different types of vehicles are taken as the targets to be detected when selecting the targets in the data set, as shown in Fig. 1. To ensure the diversity of data sources, the sample data is obtained in multiple environments, including woodland, grassland, urban complex background and monotonous background. The shooting is carried out by an unmanned aerial vehicle (UAV) with an infrared lens. The shooting band is 8 μm-14 μm and the image size is 256×256. The flight height of the UAV isn't fixed, allowing both small and large targets to be covered. In the process of making the data sets, considering that the performance of different methods needs to be evaluated objectively, the data sets are divided into three types, and within each type, the training and test sets are divided according to a ratio of 4:1. The characteristics of the three classes of data sets are presented in Table 1.

    Table 1 The characteristics of three classes of data sets.

    4.2. Evaluation metric

The evaluation indexes selected in this paper include the signal-to-noise ratio gain (SCRG) [37], the background suppression factor (BSF) [38], the detection rate P_d and the false alarm rate P_f [39,40], which are used to evaluate the performance of the compared methods. Among them, SCRG and BSF reflect the effect of target enhancement and background suppression, and their expressions are as follows:
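With the quantities defined below, these two indexes take their usual ratio form:

\mathrm{SCRG} = \frac{\mathrm{SNR}_{out}}{\mathrm{SNR}_{in}}, \qquad \mathrm{BSF} = \frac{\sigma_{in}}{\sigma_{out}}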

where SNR_out and SNR_in represent the signal-to-noise ratios of the output image and the input image, respectively, and σ_out and σ_in denote the mean square errors of the output image and the input image, respectively.

The receiver operating characteristic curve (ROC curve) [41] is drawn according to the variation relationship between P_d and P_f. The expressions of P_d and P_f are as follows:
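Consistent with the counts defined below, the two rates are computed as:

P_d = \frac{N_s}{N_r}, \qquad P_f = \frac{N_f}{N_t}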

where N_s is the number of targets successfully detected and N_r denotes the number of real targets; N_f denotes the number of targets wrongly detected, and N_t represents the total number of pixels detected.

    4.3. Comparison with other methods

Several simulation experiments based on real infrared images were completed to verify the effectiveness and superiority of the proposed method. Some classical infrared target detection methods are selected as comparison algorithms, including the Max-Mean filter method (MM) [13], the Fusion of Two Different Motion Cues method (FTDMC) [42], the Co-Detection method (COD) [43] and the Full Convolutional Network and Region Growth method (FCNARG) [44]. The MM method uses maximum mean filtering and maximum median filtering to detect small targets in infrared images. The FTDMC method uses two kinds of motion cues, background subtraction and temporal differencing, to realize the motion detection and recognition of the target. The COD method proposes an infrared target cooperative detection model which combines background auto-correlation features with the common characteristics of the target in the time-space domain. The FCNARG method proposes an infrared segmentation algorithm combining a fully convolutional neural network with a dynamic adaptive region growing method for infrared images in complex backgrounds.

    4.3.1. Comparison of background suppression performance

To further evaluate the capability of the proposed method in target enhancement and background suppression, we analyzed the SCRG and BSF of the different methods. Table 2 and Table 3 show the mean values of SCRG and BSF of the different methods on the three data sets. A higher mean value indicates a better target detection performance of the corresponding method.

As shown in the tables, the proposed method has the highest SCRG value in Class 1 and Class 3, while the FTDMC method has the worst performance in the other two classes, although its SCRG value in Class 2 is slightly better than that of the proposed method. In terms of BSF performance, the COD method performs best in Class 1, with the highest value, but the proposed method performs better in Class 1 than the other two methods and achieves the best performance in both Class 2 and Class 3. All in all, the success of the existing infrared target detection methods is limited to their specific applications. Although the proposed method doesn't have the highest SCRG and BSF values in all classes, it generally performs better in improving the target signal-to-noise ratio and suppressing the background.

    4.3.2. Comparison of detection performance

ROC curves of the different methods are drawn to further demonstrate their target detection performance. Fig. 5 shows the ROC curves obtained by the above methods on the three data sets; the various methods show their own characteristics under different conditions. Meanwhile, it can be seen from the Class 3 data set that the MM method and the FTDMC method cannot effectively detect target pixels under complex backgrounds. On the data sets of Class 1 and Class 2, although the COD method and the FCNARG method are similar to the proposed method in ROC curve distribution, the proposed method can achieve a higher detection rate P_d while guaranteeing a lower false alarm rate P_f. Therefore, according to the ROC curves, the proposed method has higher robustness, a higher detection rate and a lower false alarm rate. The speed of the proposed method is also analyzed under the experimental conditions and data sets described in this paper. The results show that the proposed method can process 6 samples per second, which basically meets the real-time requirements of intrusion detection.

To intuitively show the detection effects of the different methods, a group of infrared images is selected from each of the three data sets for detection. As shown in Fig. 6, the middle frame of each group of images appears in the first column; the second to fourth columns are the experimental results produced by the comparison methods, the fifth column shows the results of the proposed method, and the sixth column represents the Ground truth of the target area. According to the experimental results, the first row of sample images contains multiple slowly moving objects; the MM method detects more noise points, while the FTDMC and FCNARG methods fail to detect the complete target area; the detection results of COD also contain some noise points and missed detections; and the proposed method can accurately detect the intrusion targets. The samples in the second row are images of a single rapidly moving object; the MM and COD methods have a relatively poor background suppression effect, double shadows appear in the detection results of the FTDMC method, the result of the FCNARG method contains a false alarm, and the proposed method has the effect which is closest to the Ground truth. In the samples of the third row, we applied deformation barrier camouflage to the target and placed the target at a position partially sheltered by trees; there is a large amount of residual background interference in the results of the MM and FTDMC methods; COD has a better detection effect for larger targets, but misses small targets; the results of the FCNARG method also show a large number of false alarms; meanwhile, the proposed method preserves less background region, and it not only detects large targets but also enhances small targets. The samples in the fourth row are pure background images, and the proposed method achieves the best background suppression effect. In summary, since the MM method uses the edge retention and noise filtering performance of the filter to detect the infrared target, and the FCNARG method performs region growing based on features extracted by the fully convolutional neural network, neither uses the motion information of the target, so their detection results are prone to false alarms; the FTDMC method and the COD method have no model training process, so the complete target area cannot be detected well. It can be seen from the detection results that the proposed method can not only suppress complex background areas but also enhance the intrusion targets in practical applications.

Table 2 Average values of SCRG corresponding to different methods.

    Table 3 Average values of BSF corresponding to different methods.

Fig. 5. ROC curves of different methods on the three data sets; (a) to (c) represent the experimental results on the data sets of Class 1 to Class 3, respectively.

    Fig. 6. Detection results of different methods.

    5. Conclusion

Aiming at the key problems of infrared target intrusion detection in modern defense systems and infrared detection systems, this paper proposed an infrared target intrusion detection method based on feature fusion and enhancement. On the one hand, multi-level feature extraction was completed by combining single-frame analysis and multi-frame correlation, making full use of the advantages of single-frame images and sequence images and effectively suppressing background clutter. On the other hand, the extracted feature maps were fused through the constructed enhancement convolutional neural network, and at the same time, the enhancement module in the network was used to enhance the target characteristics and suppress the false alarm rate. In this paper, several classical target detection methods were selected for experimental comparison with the proposed method. In terms of SCRG and BSF values, the comparison methods could achieve good results in their respective applications. Although the proposed method wasn't better than the comparison methods in all classes, in general it performed better in improving the SNR of the target and suppressing the background. In terms of detection performance, the comparison methods showed their own advantages under different conditions; however, the proposed method can not only suppress complex background areas but also enhance intrusion targets in practical applications. The experimental results indicate that, compared with some classical infrared target detection methods, this method is more robust in improving the SCRG and BSF values of the image, and shows significant performance in detection rate and false alarm rate. Detecting intrusion targets such as reconnaissance personnel and suspicious objects is an important task in the field of protection. In the future, this method can be applied in key military protection and monitoring fields, including prohibited military zones, border posts, airport boundaries and national defense engineering, so it has broad military application prospects.

    Funding

This work was supported by the National Natural Science Foundation of China (grant number: 61671470), the National Key Research and Development Program of China (grant number: 2016YFC0802904), and the Postdoctoral Science Foundation Funded Project of China (grant number: 2017M623423).

    Declaration of competing interest

We declare that the contents of this manuscript have not been copyrighted or published previously and do not involve any financial or non-financial conflict of interest. I am one author signing on behalf of all co-authors of this manuscript, and attesting to the above.
