
    Fusion of Infrared and Visible Images Using Fuzzy Based Siamese Convolutional Network

    2022-03-14 09:25:34 Kanika Bhalla, Deepika Koundal, Surbhi Bhatia, Mohammad Khalid Imam Rahmani and Muhammad Tahir
    Computers, Materials & Continua, 2022, Issue 3

    Kanika Bhalla, Deepika Koundal, Surbhi Bhatia, Mohammad Khalid Imam Rahmani and Muhammad Tahir

    1 Department of Electrical Engineering, National Taipei University of Technology, Taipei, 10608, Taiwan

    2 Department of Virtualization, School of Computer Science, University of Petroleum & Energy Studies, Dehradun, India

    3 College of Computer Science and Information Technology, King Faisal University, 36362, Saudi Arabia

    4 College of Computing and Informatics, Saudi Electronic University, Riyadh, 11673, Saudi Arabia

    Abstract: Traditional image fusion techniques struggle to integrate complementary, heterogeneous infrared (IR)/visible (VS) images. The dissimilar features of these images are vital to preserve in a single fused image, so preserving both aspects simultaneously is a challenging task. Moreover, most existing methods rely on manual feature extraction and manually designed, complicated fusion rules, which result in blurry artifacts in the fused image. Therefore, this study proposes a hybrid algorithm for integrating multiple features from two heterogeneous images. Firstly, the two IR/VS images are fuzzified by feeding them to fuzzy sets, removing the uncertainty present in the background and the object of interest. Secondly, the images are learned by two parallel branches of a Siamese convolutional neural network (CNN) to extract prominent features as well as high-frequency information, producing focus maps that contain the source image information. Finally, the obtained focus maps, which contain the detailed integrated information, are directly mapped onto the source images via a pixel-wise strategy to produce the fused image. Several metrics have been used to evaluate the performance of the proposed fusion, achieving 1.008 for mutual information (MI), 0.841 for entropy (EG), 0.655 for edge information (EI), 0.652 for human perception (HP), and 0.980 for image structural similarity (ISS). Experimental results show that the proposed technique attains the best qualitative and quantitative results on 78 publicly available images in comparison to the existing discrete cosine transform (DCT), anisotropic diffusion & Karhunen-Loeve (ADKL), guided filter (GF), random walk (RW), principal component analysis (PCA), and convolutional neural network (CNN) methods.

    Keywords: Convolutional neural network; fuzzy sets; infrared and visible; image fusion; deep learning

    1 Introduction

    Infrared sensors or multi-sensors are used to capture infrared and visible images. Different objects, such as the environment, people, and animals, emit thermal (infrared) radiation, which is used for target detection and parametric inversion. Infrared images are largely insensitive to illumination variations and disguise; this overcomes many hurdles in detecting targets and allows such systems to work day and night [1]. However, the most important visible features, such as texture information, are lost due to the small spatial resolution of infrared images, so objects contain insufficient detail. This follows from the temperature-based nature of the imaging: objects that are warmer or colder than the background are easier to detect. Visible images, on the contrary, offer high spectral resolution and are sensitive to changes in scene brightness or illumination. They depict perceptual scenes suited to the human eye and the human vision system (HVS) [2]. Sharp, high-spatial-resolution VS images capture important information about the geometric details of objects and thus aid overall recognition [3]. However, a target often cannot be easily identified under changing environmental and poor lighting conditions, such as objects covered in smoke, disguises, night time, and cluttered backgrounds. Sometimes the background and targets look similar, so the obtained information is insufficient. Hence, IR/VS images offer complementary advantages.

    Therefore, there is a need for an automatic fusion method that can fuse two complementary images into a single image, i.e., integrate the thermal radiation of the IR image and the texture appearance of the VS image to produce an enhanced view of the scene [4,5]. The main aim is to obtain a fused image with abundant VS image details and the chief thermal targets from the IR image. Hence, the goal of IR/VS image fusion is to preserve the useful features of both the IR and VS images.

    In recent years, more attention has been paid to the field of IR and VS image fusion. Researchers have presented many IR/VS image fusion approaches, which are roughly classified into categories such as multi-scale transform (MST), principal component analysis (PCA), sparse representation (SR), fuzzy sets (FS), and deep learning (DL). The main motivation behind this work is to extend this research so that the fused image is helpful in object tracking, object detection, biometric recognition, and RGB-infrared fusion tracking. The goal is therefore to propose a reliable, automatic, anti-noise infrared/visible image fusion technique that generates a fused image with the largest possible degree of visual representation of environmental scenes.

    The major contributions of this study are: (1) A unique integration of fuzzification and a Siamese CNN for the fusion of complementary infrared/visible images is put forward. (2) Fuzzification is done using fuzzy sets to efficiently model uncertainties such as ambiguity, vagueness, unclearness, and distortion present in the image, by determining the membership grade of the background environment as well as the target; feature classification is done by the CNN model through the extraction of low-level as well as high-level infrared/visible features, and the fusion rules for combining the obtained features are generated automatically. (3) The proposed technique is more reliable and robust than classical infrared/visible techniques, with the advantage of being less laborious. (4) A publicly accessible dataset consisting of 78 infrared/visible image pairs is used for the experiments. (5) Qualitative as well as quantitative evaluations are carried out against six classical infrared/visible techniques, namely discrete cosine transform (DCT), anisotropic diffusion & Karhunen-Loeve (ADKL), guided filter (GF), random walk (RW), principal component analysis (PCA), and convolutional neural network (CNN) methods, using five metrics: mutual information (MI), entropy (EG), edge information (EI), human perception (HP), and image structural similarity (ISS). The higher results given by the proposed technique prove its effectiveness with respect to the pre-existing techniques. This study deals with the problem of pixel-level multi-sensor image fusion.

    The key motivation of this research is to combine the advantages of a spatial-domain (CNN) method and a fuzzy-based method to achieve accurate extraction of IR targets while maintaining the background features of VS images, which is not easy to attain given the various challenges that occur during this process. The quality of pixels is evaluated by extracting target features and background features and integrating them to generate a clear, focused fused image, which is a laborious task; determining the belongingness of pixels is therefore an issue of relevance. From the literature, it has been analyzed that FS can represent uncertain features, so the indeterminacies, noise, and imprecision present in the images can be treated as a fuzzy image processing problem. Subsequently, owing to the powerful ability of the CNN to extract data automatically, this work generates data-driven decision maps with the CNN. As per the literature, no previous attempt has been made to integrate FS with a CNN for IR/VS image fusion; therefore, this work proposes a novel fuzzy-CNN-based IR/VS image fusion method. The key contributions of this study are outlined as follows.

    • It helps to integrate different modality images to produce a clearer, more informative fused image.

    • It also improves the infrared image recognition quality of modern imaging systems.

    • Subjective and objective experimental analyses have been performed.

    The remainder of this study is structured as follows: Section 2 briefly describes the background and related approaches for infrared/visible image fusion. Section 3 gives a detailed description of the proposed methodology. Section 4 presents the dataset and evaluation metrics, and validates the experimental results through an extensive comparison with existing techniques. Section 5 draws concluding remarks and discusses future work.

    2 Related Works

    In the past, numerous techniques for infrared/visible fusion were developed, such as pyramid decomposition [6] and DCT [7], but they were not suitable as they suffered from oversampling, high redundancy, and many other problems. Histogram-based methods [8,9] produced unsatisfactory results due to their inability to amplify the gray levels of the images and because of background distortion; hence, they produced low-quality fused images. Bavirisetti et al. [10] introduced the edge-preserving ADKL transform technique; although good results were obtained, both the qualitative and quantitative results still needed improvement, and the method was labor-intensive. Liu et al. [11] presented a convolutional sparse representation method whose main drawback was that only the last layer was used for feature extraction, causing the loss of much useful information; hence, it was a crude method. Liu et al. [12] developed a variational model based on saliency preservation, but only seven image sets were used, which was the main limitation of that study. Many non-subsampled contourlet transform (NSCT) approaches [13,14] were also developed; these methods gave satisfactory fused images but the process was cumbersome and tedious, and the decomposition of the image and reconstruction of the fused image were computationally intensive, making them infeasible for real-time applications. Yang et al. [15] developed the guided filter (GF) technique for measuring the visual features of an image; although a better-quality fused image was obtained, the subjective and quantitative results still needed enhancement, and only five sets of infrared/visible images with three evaluation metrics were used for validation. Ma et al. [16] developed a boosted RW method for the effective estimation of two-scale focus maps, but the quality of the fused image needed improvement. Afterwards, Shahdoosti et al. [17] introduced a hybrid technique integrating PCA and spatial PCA with an optimal filter, obtaining synthesized results similar to the corresponding multi-sensor observations at high resolution. Liu et al. [18] developed a DL framework for the integration of multi-focus images which was computationally intensive, and demonstrated its application to other modalities such as infrared/visible image fusion. Many other DL-based techniques have been introduced for fusing different modality images. Li et al. [19] presented a fusion framework based on DenseNet: four convolution layers were included in the encoder block, shallow features were extracted by one of the convolutional layers, and the other three layers, constituting the Dense block, were used to obtain both shallow and deep image features. Then, Li et al. [20] fused visible and infrared images using a VGG network; this approach utilized middle-layer information, but the information loss during feature integration limited the model's performance. Ma et al. [21] propounded an image fusion method based on a generative adversarial network (GAN), where the adversarial network was adopted to extract more visible details of the images. Zhang et al. [22] designed a transform-domain convolutional neural network approach comprising both feature extraction and reconstruction blocks; in this architecture, two CNN layers were utilized to obtain image features for fusion, and the image features were then reconstructed to generate fused images. Xu et al. [23] developed the U2Fusion architecture for image fusion; this method was based on DenseNet [24], where vital information was retained by the designed information measurement. Zhao et al. [25] attained fused images by designing a self-supervised feature adaptation architecture. Moreover, fuzzy set-based approaches [26-28] have also been used because of their strong mathematical operations for dealing with fuzzy concepts, even those for which a quantitative description is not possible. Thus, the above limitations identified in the literature motivated us to hybridize the advantages of fuzzy sets with deep learning. The presented work focuses on developing an automatic, effective infrared/visible image fusion technique that enhances the vision of the fused image; with this incorporation, the method preserves vital information.

    3 Proposed Methodology

    In order to handle the aforementioned problems, a hybridization of fuzzy sets and a Siamese CNN has been employed to fuse the infrared/visible images. The proposed technique is presented as follows.

    3.1 Fuzzification

    Zadeh [29] introduced the concept of a fuzzy set, a very useful mathematical tool for handling objects with various kinds of imprecision and uncertainty, such as distortion, vague boundaries, ambiguity, blurriness, uneven brightness, and poor illumination [30]. When infrared/visible images are captured by sensors, ambiguity arises in the image pixels: their belongingness to the target or to the background is a typical problem. This problem is solved here by using fuzzy sets, which handle intermediate values by assigning a degree of truth ranging from 0 to 1 and are thus well suited to uncertain problems.

    For processing, the input images L and M are converted from the pixel domain to the fuzzy domain. Eq. (1) illustrates the image representation. Take the image L as an illustration: L(i,j) denotes its pixel values, which have to be mapped into the fuzzy characteristic domain, as expressed below by Eq. (1).
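
    In Zadeh's standard notation, and consistent with the definitions that follow, Eq. (1) plausibly takes the form:

$$
L = \bigcup_{i,j} \frac{\mu_L(i,j)}{L(i,j)}
$$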

    where μL(i,j) is a membership degree whose values range from 0 to 1, i.e., μL(i,j): L → [0,1], and L is an element of the universal set. Each pixel is represented by μL(i,j)/L(i,j). The original pixel values (0 to 255) are thereby mapped onto the fuzzy plane (0 to 1).

    The membership grade describes an element's degree of belongingness to a fuzzy set: 1 indicates complete belongingness to the fuzzy set, whereas 0 indicates that the element does not belong to it. The summation of all the membership functions of the element L is 1, as represented below.
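
    Consistent with this statement, the equation plausibly reads:

$$
\sum_{k=1}^{p} \mu_k(L) = 1
$$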

    where p represents the number of fuzzy sets to which L belongs.

    As the input grayscale image includes darker, brighter, and mid-gray pixels with values ranging from 0 to 255, the image is mapped from the pixel scale to the fuzzy domain by assigning a triangular membership function, whose mathematical representation is given in Eq. (3). In this way, an image with pixel values between 0 and 255 is converted to the range 0 to 1, indicating the pixel fuzziness.

    where F, h, and g denote the minimum, average, and maximum pixel intensity values, respectively, and L(i,j) is the input image pixel value.

    So, by Eq. (3), pixels having the minimum intensity value are assigned 0 whilst pixels having the maximum value are assigned 1, and the uncertainty as well as the ambiguity are removed. Hence, the uncertainty is removed without diminishing the image quality.
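
    As a concrete illustration, the following minimal NumPy sketch (our own illustration, not the authors' released code) maps an 8-bit image onto the fuzzy plane using the three statistics F, h, and g defined above; the piecewise-linear ramp is an assumption standing in for the paper's exact triangular form of Eq. (3).

```python
import numpy as np

def fuzzify(img: np.ndarray) -> np.ndarray:
    """Map an 8-bit grayscale image (0-255) onto the fuzzy plane [0, 1].

    F, h, g follow the paper's notation for the minimum, average, and
    maximum pixel intensities. The piecewise-linear ramp below is our
    assumption: it passes through (F, 0), (h, 0.5), and (g, 1), so the
    darkest pixel gets membership 0 and the brightest gets 1, matching
    the behaviour described in the text.
    """
    x = img.astype(np.float64)
    F, h, g = x.min(), x.mean(), x.max()
    mu = np.where(
        x <= h,
        0.5 * (x - F) / max(h - F, 1e-12),        # rise from F to h
        0.5 + 0.5 * (x - h) / max(g - h, 1e-12),  # rise from h to g
    )
    return np.clip(mu, 0.0, 1.0)
```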

    3.2 Siamese CNN

    The proposed Siamese CNN (ConvNet) model designed for the fusion of IR/VS images is described here. It is designed to automatically learn mid- and high-level abstractions of the data presented in the two heterogeneous images. With a Siamese network, the same weights are shared between two branches: one branch handles the infrared image, and the other processes the visible image. Each branch has the usual step-wise stages of a CNN, such as convolution layers, max pooling, flattening, and a fully connected layer.

    These layers generate feature maps in parallel at each level of feature abstraction [31]. The CNN configuration for infrared/visible image fusion is a stack of layers consisting of three convolutional layers, one max pooling layer, two FC layers, and one output softmax layer. The features discussed above are captured using convolutional filters with a feature detector size of 3 × 3 pixels, each of which is slid over the whole input volume. In the implementation of the proposed technique, each convolution stage is composed of (a) 3 × 3 convolutions, (b) batch normalization (BN), (c) a ReLU function, and (d) max pooling.

    Then, the features extracted from the preceding CNN layers are concatenated by the fully connected (FC) layer. The pooled feature maps are obtained by flattening the outputs of the pooling layers. The last layer consists of the output neurons, which assign a probability to the image; the CNN gives a scalar output whose value ranges from 0 to 1.
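
    A minimal PyTorch sketch of such a two-branch network is given below; the layer widths (64/128/256 filters, 3 × 3 kernels, 2 × 2 pooling) follow Section 3.3, while the padding, the pooling placement, and the classifier sizes are our assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class FuzzySiameseNet(nn.Module):
    """Sketch of the two-branch Siamese CNN; both branches share `features`."""

    def __init__(self):
        super().__init__()
        # Shared feature extractor applied to both fuzzified patches.
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, padding=1),     # Con1 -> 64 FM'
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),   # Con2 -> 128 FM'
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),          # 2x2 pooling, stride 2
            nn.Conv2d(128, 256, kernel_size=3, padding=1),  # Con3 -> 256 FM'
            nn.BatchNorm2d(256), nn.ReLU(inplace=True),
        )
        # Concatenated branch features -> 256-d vector -> 2-class score.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 256 * 8 * 8, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 2),
        )

    def forward(self, ir_patch: torch.Tensor, vs_patch: torch.Tensor) -> torch.Tensor:
        fa = self.features(ir_patch)          # weights shared ...
        fb = self.features(vs_patch)          # ... across both branches
        logits = self.classifier(torch.cat([fa, fb], dim=1))
        return torch.softmax(logits, dim=1)   # probability over the two classes

net = FuzzySiameseNet()
probs = net(torch.rand(1, 1, 16, 16), torch.rand(1, 1, 16, 16))  # 16x16 patches
```

    Because `features` is a single module invoked on both inputs, the two branches share weights exactly as a Siamese architecture requires.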

    3.3 Fusion Scheme

    The proposed fusion scheme consists of five steps: fuzzification, focus detection by feature map generation, segmentation, unwanted region removal, and infrared/visible image fusion. The aim is to generate a fused image containing all the useful features, as illustrated by the schematic block diagram of the proposed technique for infrared/visible image fusion in Fig. 1.

    Figure 1: Schematic block diagram of the proposed infrared/visible image fusion

    Firstly, L and M, the infrared and visible images respectively, are fed to the fuzzy set, and fuzzification is performed on the information presented in the images. The resulting fuzzified images L′ and M′ are then passed to the pre-trained Siamese CNN model, which performs binary classification of the infrared/visible content. During this process, the various distinct extracted image features are transferred to the next convolutional layer until the entire classification is done.

    For the first three convolutional layers, a fixed stride of 1 is used. Max pooling with a window size of 2 × 2 and a stride of 2 is applied to localize parts of the images, helping to choose the larger pixel value from each part of an image. Each convolutional layer in the stack is followed by a ReLU. In summary, the first three convolutional layers are denoted Con1, Con2, and Con3: Con1 generates 64 feature maps (FM′) after applying 64 filters, and Con2 produces 128 FM′ using 128 filters; due to the self-learning nature of the CNN, these filters are applied automatically. After that, 256 FM′ are obtained and passed to the FC layer, where they are combined into 256-dimensional vectors to produce a two-dimensional output vector. Lastly, the probability distribution over the two classes is obtained using the softmax function. These steps are followed by the generation of feature maps; the main task of the CNN is automatic feature extraction from the given input images.

    Thus, during fusion, the network, which has been trained with a patch size of 16 × 16, is fed with the two fuzzified source images to generate a score map SM′. In detail, Con1 first extracts low-level (dark) feature maps carrying high-frequency information from the images; Con2 is added to capture the spatial detail of the image, producing feature maps covering varied gradient orientations; and the third convolutional layer integrates the gradient information and produces the output feature map, i.e., the score map. Here, SM′ describes the focus ability of a set of 16 × 16 patches in the source image, with values ranging from 0 to 1; a patch is more clearly focused when its value is near 0 (black) or 1 (white). The size of SM′ is given in Eq. (4).

    If 0 < SM′ < 1, it implies the focused parts; ht and wt denote the height and width of the image, respectively. The size of SM′ is reduced to half due to the presence of overlapping pixels; therefore ht, wt, and conv_patchsize are also reduced to half, with conv_patchsize reduced to 8 × 8.

    Moreover, SM′ consists of overlapped pixels; hence an averaging method is utilized to produce a focus map of the same size as the source image. The focused information is now correctly detected, where black or white regions represent the more abundant, detailed image information, whereas the plain (gray) regions take a value of about 0.6. To generate an accurate focus map, a threshold factor of 0.6 was chosen empirically to balance good quality against computational expense; this optimum value gives the best binary segmentation map with the best evaluation metric results in comparison to other values.

    Now, the more detailed information is contained in the parts of the focus map that are near the values 0 or 1. From Fig. 2, it can be observed that the obtained focus map consists of correctly classified gray pixels, as shown in the white background.

    Further processing of the focus map is done to preserve the maximum number of useful features, i.e., to keep only the focused parts (black or white). For this purpose, the maximum method employs a threshold value of 0.6 to segment FM′ into the binary map BM′; this user-defined threshold is selected to obtain a good-quality BM′ according to the following conditions.
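
    A plausible reconstruction of these conditions, consistent with the description above, is:

$$
BM'(x,y)=\begin{cases}1, & FM'(x,y)\geq 0.6\\[2pt] 0, & \text{otherwise}\end{cases}
$$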

    The obtained binary map contains some misclassified pixels and unwanted small objects or holes, as can be seen in Fig. 1. Therefore, to remove some of the misclassified pixels from FM′, small-region removal is done using bwareaopen to generate the initial decision map ID′, producing an image free from unwanted objects as per Eq. (7).

    Here, the area threshold value is manually set to 0.03, i.e., the threshold for area is given in Eq. (8).
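
    In Python, this step can be sketched as follows; bwareaopen is MATLAB's routine, and skimage.morphology.remove_small_objects is a close equivalent. Reading Eq. (8) as a relative threshold of 3% of the image area is our interpretation, not a detail confirmed by the paper.

```python
import numpy as np
from skimage import morphology

def initial_decision_map(bm: np.ndarray, area_ratio: float = 0.03) -> np.ndarray:
    """Remove misclassified small objects/holes from the binary map BM'.

    Mirrors MATLAB's bwareaopen: connected components smaller than
    area_ratio * (image area) are discarded (the 0.03 ratio is our
    reading of the paper's Eq. (8)).
    """
    min_size = int(area_ratio * bm.size)
    id_map = morphology.remove_small_objects(bm.astype(bool), min_size=min_size)
    # Clean small holes on the background side symmetrically.
    id_map = ~morphology.remove_small_objects(~id_map, min_size=min_size)
    return id_map.astype(np.float64)
```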

    The computed ID′ still contains undesirable artifacts on the edges. This is improved by using an edge-preserving guided filter. Fig. 2 clearly shows the difference between the resultant fused images with and without the guided filter: the fused image obtained by applying the average rule on ID′ without the guided filter contains blurriness, whereas the image obtained with a GF is sharper and brighter. Subsequently, the final decision map FD′ is calculated with the use of the guided filter.

    Figure 2: Fused image generated without using GF and with the use of GF

    where r is set to 5 and eps to 0.2. This initial fused image is used as the guidance image for the calculation of FD′.
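
    A sketch of this refinement using OpenCV's ximgproc module (our choice of implementation, requiring opencv-contrib-python; not necessarily the authors' code) with the stated r = 5 and eps = 0.2:

```python
import cv2
import numpy as np

def refine_decision_map(id_map: np.ndarray, fused_init: np.ndarray,
                        r: int = 5, eps: float = 0.2) -> np.ndarray:
    """Edge-preserving refinement of the initial decision map ID'.

    The initial fused image (average rule applied on ID') serves as the
    guidance image, as described in the text.
    """
    guide = fused_init.astype(np.float32)
    src = id_map.astype(np.float32)
    fd_map = cv2.ximgproc.guidedFilter(guide, src, r, eps)  # radius r, regularizer eps
    return np.clip(fd_map, 0.0, 1.0)
```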

    Lastly, the pixel-wise weighted average method has been used to obtain the resultant single fused image as described in Fig.1 using Eq.(10).
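
    Given the definitions below, Eq. (10) plausibly takes the standard pixel-wise weighted-average form:

$$
F'(x,y) = FD'(x,y)\,L'(x,y) + \bigl(1 - FD'(x,y)\bigr)\,M'(x,y)
$$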

    where L′ and M′ are the given fuzzified images, FD′(x,y) is the final decision map, and the fused single image is represented by F′(x,y).

    The proposed algorithm for infrared/visible image fusion is described in detail in Algorithm 1.

    [Algorithm 1: the proposed fuzzy Siamese CNN based infrared/visible image fusion]

    4 Experimental Evaluations

    In this research work, both subjective and objective assessments have been carried out to validate the superiority of the proposed technique. For this purpose, six pre-existing infrared/visible image fusion techniques, namely DCT [7], ADKL [10], GF [15], DL [18], RW [16], and PCA [17], have been compared.

    4.1 Data Acquisition

    The IR/VS images are obtained under changing environmental conditions. The publicly available data are acquired from the RoadScene [23], TNO [32], and CVC-14 [33] datasets. The experiments have been conducted using 78 sets of infrared/visible images. Simulations are conducted in Matlab R2016a (64-bit) on a PC with an Intel Core i5-3470 CPU and 16.0 GB RAM.

    The RoadScene dataset consists of 221 IR/VS image pairs depicting rich road traffic scenes, for instance pedestrians, roads, and vehicles. These highly representative scenes are acquired from naturalistic driving videos. The images have no uniform resolution.

    The TNO dataset is in common public use for IR/VS research. It includes varied military-relevant scene images registered with distinct multi-band cameras at non-uniform resolutions.

    The CVC-14 dataset includes pedestrian scenes and is highly utilized in the development of autonomous driving technologies. It is composed of two sequence pairs, namely day and night. There are 18710 images in total, of which 8821 belong to the daytime sequence and 9589 to the nighttime sequence. All images have a resolution of 640 × 471.

    4.2 Performance Evaluation Metrics

    The EG, MI, EI, ISS, and HP metrics [34-39] have been adopted to validate the proposed technique.

    Entropy (EG): This is used to calculate the spatial information of an image and indicates the richness of its useful information. A higher entropy value signifies better performance of the method. The mathematical representation is shown below.
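
    The standard entropy expression matching the definitions below is (our reconstruction):

$$
EG = -\sum_{s=0}^{S-1} p_s \log_2 p_s
$$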

    where S represents the number of gray levels, i.e., 256, and p_s is the proportion of pixels with gray value s in the image.

    MI: This measures the quantity of important, useful information transferred from the given input source images to the single fused image.
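
    The standard mutual-information form consistent with the definitions below is (our reconstruction):

$$
MI = MI_{A'F'} + MI_{B'F'},\qquad
MI_{A'F'} = \sum_{a,f} p_{A'F'}(a,f)\,\log_2\frac{p_{A'F'}(a,f)}{p_{A'}(a)\,p_{F'}(f)}
$$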

    where the two source input images are denoted by A′ and B′, and F′ is the fused image. The joint histograms of a source input image and the fused output image are denoted by pA′F′ and pB′F′, whereas pA′, pB′, and pF′ denote the corresponding histograms of A′, B′, and F′.

    Edge information: This calculates the transfer of visual as well as edge information from the two input source images to the fused image.
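
    This is the well-known gradient-based (Xydeas-Petrovic style) measure; consistent with the weights defined below, it plausibly reads:

$$
EI = \frac{\sum_{i,j}\bigl[Q^{A'F'}(i,j)\,W^{A'}(i,j) + Q^{B'F'}(i,j)\,W^{B'}(i,j)\bigr]}
          {\sum_{i,j}\bigl[W^{A'}(i,j) + W^{B'}(i,j)\bigr]}
$$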

    where A′ and B′ denote the source input images and F′ is the fused image. WA′(i,j) and WB′(i,j) are the weights of the pixels. QA′F′(i,j) and QB′F′(i,j) indicate the edge-information preservation from A′ to F′ and from B′ to F′, respectively. The location in the image is referred to by (i,j).

    Image structural similarity: This describes the amount of structural information preserved in the resultant single image, i.e., the similarity between the given input images and the resultant fused image.
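
    If ISS follows the standard SSIM definition (our assumption), then for a source image A′ and fused image F′:

$$
ISS(A',F') = \frac{(2\mu_{A'}\mu_{F'}+C_1)(2\sigma_{A'F'}+C_2)}
                  {(\mu_{A'}^2+\mu_{F'}^2+C_1)(\sigma_{A'}^2+\sigma_{F'}^2+C_2)}
$$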

    Human perception: This evaluates the fusion method according to human visual perception. The input as well as output images are filtered using a contrast sensitivity filter, and then a contrast preservation map is calculated. It is represented in Eq. (15).
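
    A plausible form of Eq. (15), mirroring the saliency-weighted structure described below, is:

$$
HP = \frac{\sum_{i,j}\bigl[\alpha_{A'}(i,j)\,Q^{A'F'}(i,j) + \alpha_{B'}(i,j)\,Q^{B'F'}(i,j)\bigr]}
          {\sum_{i,j}\bigl[\alpha_{A'}(i,j) + \alpha_{B'}(i,j)\bigr]}
$$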

    where αA′(i,j) and αB′(i,j) are saliency maps, and QA′F′(i,j) and QB′F′(i,j) describe the similarity between the input and resultant images.

    All metric values lie in the [0, 1] interval [29], where 0 indicates a low-quality image and 1 implies a high-quality image.

    4.3 Experimental Setup

    In this study, a Siamese CNN has been presented. It consists of two branches having the same neural structure with the same weights for extracting the features of the two different infrared/visible images. The network is trained using the Caffe framework [40], with the Xavier algorithm used to initialize the weights of each convolutional layer.

    Training has been done on 50,000 natural images derived from the ImageNet dataset [41]. Due to the lack of a labeled dataset, a Gaussian filter has been used to obtain blurred versions of the images. For every blurred version of an image, 20 patch pairs of size 16 × 16 are sampled, yielding 1,000,000 patch pairs in total; however, only about 10,000 images have been used. The softmax loss function has been used as the optimization objective, and the loss is minimized using stochastic gradient descent. The weights have been updated by the rule given below [42].
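
    Given the variables defined below and the momentum update rule of [42], the rule plausibly reads:

$$
\alpha_{i+1} = M\,\alpha_i + W\,\gamma\,w_i - \gamma\left\langle\frac{\partial L}{\partial w}\right\rangle_{w_i},
\qquad w_{i+1} = w_i + \alpha_{i+1}
$$

    with M = 0.9 and W = -0.0005, so that the second term acts as weight decay.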

    where α is the momentum variable, i is the iteration index, M is the momentum, set at 0.9, W is the weight decay, set at -0.0005, L is the loss function, and γ is the learning rate, chosen as 0.002; the derivative of the loss function is taken with respect to the weight w_i. A higher learning rate has adverse effects on the calculated loss, whereas a smaller learning rate requires more epochs for the system to converge. These values have been set after performing various experiments. Lastly, standardization has been achieved by balancing the dataset. The following transformations and augmentations have been applied to the infrared/visible images.

    • Random flipping: both horizontal and vertical flipping is done.

    • Rotation: images are rotated horizontally as well as vertically by 90° and 180°.

    • Gaussian filter: blurred versions of images are obtained for noise smoothing.

    4.4 Subjective Visibility

    Fusion results on six different sets of infrared/visible images have been obtained. Based on the fused images, it can be observed that the infrared images have apparent objects and the visible images have an obvious background. The GF, DL, RW, PCA, and ADKL techniques failed to retain the objects present in the images well.

    From Fig. 3 it can be noted that in Figs. 3I-3III the fused images produced by DL, PCA, and ADKL are low-intensity images and hence unable to keep the intensities of the object information; they contain blurriness and artifacts, as shown by the areas in red boxes, so unclear, poor-quality images are obtained. The visual quality of the images obtained from the RW and GF methods is worst, because there is information loss and the upper right corner of these images appears darker than the original image, with some distortion too. The image produced by DCT is better than those of the above-discussed techniques but is also incapable of extracting all the information. The proposed technique overcomes these problems very well, as shown in Figs. 3I-3III, by producing images of enhanced quality.

    It is evident from Figs. 3IV-3VI that the fused image produced by the proposed technique contains more detailed information of the target while also depicting the image characteristics. By contrast, DL, ADKL, and DCT generate noisy, blurred fused images of poor quality, with the DCT technique introducing some distortion. From the fused images of GF, RW, and PCA it can be seen that not all the content of the source images is transferred to the resultant output images. Comparative analysis shows that the other techniques exhibit loss of contrast, brightness, and edges, and are incapable of fusing the many types of features present in two heterogeneous images; the thermal radiation of the infrared image and the target object of the visible image have not been retained by these techniques, and most of the information is damaged. Hence, the proposed technique outperforms all the other techniques by producing a better fused image.

    Figure 3: Qualitative fused images on 6 infrared/visible image pairs (I)-(VI); (a)-(i) represent the infrared image, the visible image, and the fused outputs of DL, PCA, ADKL, DCT, RW, GF, and the proposed technique, respectively

    4.5 Objective Visibility

    For further illustration of the fusion effects, five evaluation metrics, MI, HP, ISS, EG, and EI, have been used. Higher metric values indicate better-quality fused images; values lie in the interval 0 to 1, where values of 1 or more indicate an enhanced-quality image [43-45]. Tab. 1 lists the average values over the 78 sets of images, compared on the basis of the five evaluation metrics. It can be clearly observed that the proposed technique fuses the images with more MI, that is, more information is transferred from the given input images to the resultant single fused image. It also achieves the highest entropy, indicating that the fused image contains more of the spatial information of the given source images. On the contrary, ADKL attains the lowest entropy, which implies the ineffectiveness of that technique and that less information is transferred. Furthermore, the proposed technique also attains the highest values of EI and HP, validating that the fused image contains better visual edges and sharpening, and it gives the best ISS value, near 1, showing its superiority in comparison to the other existing techniques.

    Therefore, from the above discussion, it can be concluded that the proposed technique attains the highest value for every metric, as shown in bold in Tab. 1. Hence, it outperforms the other traditional infrared/visible image fusion techniques.

    Table 1: Average comparison of metrics values for 78 sets of images

    5 Conclusion and Future Directions

    This paper has designed an infrared/visible image fusion technique based on fuzzification and a convolutional neural network. The main goal of this study is to solve the issue of maintaining thermal radiation features that affects pre-existing IR/VS-based methods. Therefore, the benefits of two theories, FS and CNN, have been integrated to devise a new, strong, and adaptable technique in a single scheme. The proposed technique retains the details of the thermal radiation of the infrared image while simultaneously accumulating the visibility of the visible image, so that the correct target location can be observed; this further helps in processing and is vital for increasing the precision and focus of the output image. The technique has been evaluated on 78 sets of infrared/visible images, and high-quality, enhanced images have been produced even under bad illumination and varied conditions. The broader goal behind this work is the design of an advanced automatic technique to obtain a fused image containing the contour, brightness, and texture information of the IR/VS images, illustrating clear target features from the infrared image against a distinctly visible background, which will be helpful in military surveillance and object detection. The subjective as well as objective evaluations indicate that the proposed technique gives higher performance than the existing techniques in feature extraction and information gathering.

    In the future, we intend to optimize the developed technique through the hybridization of neuro-fuzzy systems and CNNs. Moreover, this technique can be generalized to the fusion of more than two images at the same time by adapting the convolutional operations. We also intend to extend this research to other domains.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
