
    Image Dehazing Based on Pixel Guided CNN with PAM via Graph Cut

    2022-08-24 03:29:52 Fayadh Alenezi
    Computers, Materials & Continua, 2022, Issue 5

    Fayadh Alenezi

    Department of Electrical Engineering,College of Engineering,Jouf University,Sakaka,Saudi Arabia

    Abstract: Image dehazing is still an open research topic that has been undergoing a lot of development, especially with the renewed interest in machine learning-based methods. A major challenge of the existing dehazing methods is the estimation of transmittance, which is the key element of haze-affected imaging models. Conventional methods are based on a set of assumptions that reduce the solution search space. However, the multiplication of these assumptions tends to restrict the solutions to particular cases that cannot account for the reality of the observed image. In this paper we reduce the number of simplifying hypotheses in order to attain a more plausible and realistic solution by exploiting a priori knowledge of the ground truth in the proposed method. The proposed method relies on pixel information between the ground truth and haze image to reduce these assumptions. This is achieved by using the ground truth and haze image to find the geometric-pixel information through a guided Convolution Neural Network (CNN) with a Parallax Attention Mechanism (PAM). It uses the differential pixel-based variance in order to estimate transmittance. The pixel variance uses local and global patches between the assumed ground truth and haze image to refine the transmission map. The transmission map is also improved based on improved Markov random field (MRF) energy functions. We used different images to test the proposed algorithm. The entropy values of the proposed method were 7.43 and 7.39, increases of approximately 4.35% and 5.42%, respectively, compared to the best existing results. The increment is similar in other performance quality metrics, and this validates its superiority compared to other existing methods in terms of key image quality evaluation metrics. The proposed approach's drawback, an over-reliance on real ground truth images, is also investigated. The proposed method shows more details and hence yields better images than those from the existing state-of-the-art methods.

    Keywords: Pixel information; human visual perception; convolution neural network; graph cut; parallax attention mechanism

    1 Introduction

    Images acquired in an outdoor environment are sometimes affected by degradation due to atmospheric conditions such as fog, rain, snow, or wind-blown sand. Haze is a type of degradation that affects the image quality more or less homogeneously and persistently, making the visibility of details very difficult. This inevitably reduces the performance of high-level tasks such as the interpretation of the content of the observed scene [1].

    The haze phenomenon is due to the presence of water droplets suspended in the air. These droplets cause the phenomenon of light scattering, whose distribution and photometric appearance depend on the size of the scattering water particles and the wavelength of light. Weather conditions can cause fluctuations in the particles, which in turn cause haze in the atmosphere [2]. The collective effect of these particles appears as an illumination effect in the image at any given pixel. These effects can be dynamic (snow or rain) or steady (haze, mist, and fog) [2].

    Dehazing aims at removing the light-scattering effect from the image, making it more exploitable in various image processing and analysis tasks. However, dehazing methods generally try to reduce or eliminate this phenomenon in a global way without taking into account local aspects, and in particular they typically fail to account for spatial structures and inter-pixel interactions [3]. Thus, this proposal takes local aspects into account to yield a better result.

    In order to restore the salient and essential feature regions in the images, the existing image dehazing algorithms tend to use specific points in the image region to approximate the atmospheric light [4]. The majority of the proposed image dehazing algorithms based on atmospheric scattering models aim at deriving a haze-free image from the observed image [5] by estimating the transmission map. Atmospheric light and the transmission map are estimated in some dehazing methods through the use of physical maps such as color attenuation, through non-local priors, or through the observation of haze-free outdoor images as in the dark-channel prior approach [6]. Despite the huge successes born from these methods, they do not work well in certain cases. For example, in the case of Fattal et al. [5], transmission fails in the presence of white objects in the background. Similarly, non-local prior-based methods like that of Berman et al. [6] have failed in cases of heavily hazed regions, as the designed transmission becomes irrelevant. Cai et al. [7] have also suggested that the color-attenuation prior underestimates the transmission of distant regions.

    The traditional dehazing methods have recently been combined with CNNs [8]. This has been facilitated by the success of CNNs in the majority of image processing tasks. CNNs have been combined with other filters to estimate transmission maps, while conventional methods such as Retinex theory have been used to estimate atmospheric light [8]. However, the existing dehazing methods still lack accuracy in the estimation of transmission maps. For instance, Alenezi et al. [9] disregard the physical model of the imaging principle while improving the image quality. Other models such as saliency extraction [10], histogram equalization [11] and Retinex theory [12] have yielded images with color distortion due to incomplete recovery effects [13]. Even promising state-of-the-art methods like that developed by Salazar-Colores et al. [14] yield inaccurate results, since their procedures are based on many assumptions.

    Image dehazing methods based on supplementary haze removal have various shortcomings. For instance, Wang et al. [1] proposed a method in which the final images have a washed-out effect in darker regions due to atmospheric light failures. Middleton [15] produces exaggerated contrast in the final images. The dehazing technique proposed by Vazquez-Corral [16] yields final images with poor information content. Feng et al. [17] proposed using sky and non-sky regions as the basis to improve hazy images. The method's strength lies in bright sky regions, where the results generated have superior edges and good robustness. However, the results from the other sky regions are darker and have a hazed background. These results are similar to those from Wang et al. [1].

    The dark channel prior contribution of Fattal et al. [5,18] to image dehazing has found numerous uses. The soft matting employed in the algorithm of Zhou et al. [18] makes its computation extensive. The use of a guided filter for soft matting in the first step reduces calculation- and application-related costs. However, the technique of He et al. has produced outcomes with degraded edges and discriminatory dehazing, which are only sound for non-sky area images [5,8]. The method proposed by He et al. [19] introduces the wavelet transform, assuming haze effects solely affect the low-frequency elements of the image. Yet the technique of He et al. [19] does not account for the difference between the light from the scene and the atmospheric light, subsequently making the results darker.

    Some methods combine traditional existing dehazing methods and Artificial Neural Networks (ANNs) to yield promising results. For instance, the multilayer perceptron (MLP), which has found use in numerous areas of image processing such as skin segmentation and image denoising [20], has been used by Guo et al. [20]. The method suggested by Guo et al. [20] was based on the MLP, which draws the transmission map of the haze image directly from the dark channel. The results indicate extended contrast and an intensified dynamic range of the dehazed image. However, visual inspection shows that the outcomes of Guo et al. [20] retain haze towards the horizon, yielding imperfect edges. Other existing hybrid methods combining CNNs with traditional methods have also produced imperfect images. For instance, Alenezi et al. [9] estimate a transmission map via DehazeNet. Their method has produced superior results against existing state-of-the-art methods, but the CNN functions were limited in predicting the transmission map.

    O'Shea et al. [21] proposed a method in which an attention block captures the informative spatial and channel-wise features. A visual analysis of the dehazed image results reveals haze towards the horizon in both simulated and natural images. Unlike the existing methods, a more recent method by Zhu et al. [8] considers the existence of differential pixel values. This method [8] combines graph-cut with single-pass CNN algorithms, estimating transmission maps via global and local patches. However, the proposed method yielded images in which over-bright areas tended to lose some final image features. A more recent study by Zhao et al. [22] merged the merits of prior-based and learning-based approaches. The method [22] combines visibility restoration and realness improvement sub-tasks using a two-stage weakly supervised dehazing network. The results of the work retained slight washed-out effects despite having better performance than existing state-of-the-art methods.

    In summary, the existing image dehazing techniques have varied drawbacks, which necessitates further research into the topic. The proposed paper uses global and local Markov random fields and graph cuts [8] to improve the transmission map, exploiting the geometric-variance pixel-based guided local and global relationships between the 'assumed' ground truth and hazed image. This helps to estimate the transmittance medium and to extract a dehazed image accurately. Thus, this paper's proposed method uses the local and global pixel variance within the local and global image neighborhoods to estimate the transmittance medium. This is achieved by comparing the corresponding local and global pixels between the haze image and its assumed ground truth. The energy variations in the global and local Markov fields function as a proposed extension based on the corresponding high-low pixel gradient and variance-based boundary between the two images, and help smooth and constrain the connection between local and global pixel neighborhoods. These proposed geometric-based methods improve the dehazed image features. The rest of the paper is as follows: Section 2 outlines the contribution of the paper. Sections 3 and 4 outline the proposed method, followed by a description of the experiments in Section 5. Finally, Section 6 offers the conclusion.

    2 Contribution

    This paper makes three significant contributions: it presents a novel combination of CNNs with a parallax attention mechanism and graph-cut algorithms, which results in a novel dehazed image; a transmittance medium dependent on the pixel variance of corresponding local- and global-based neighborhoods between the ground truth and haze image, which serves to strengthen local and global image features; and a local and global correspondence between the ground truth and haze image pixel-based energy function, based on the pixel variance restraints of corresponding neighborhoods, that enhances the transmission map, which has the effect of enhancing the finer details of the dehazed image. The latter stages (the global and local Markov random fields and the graph cut) are an extension of existing work [8].

    3 Proposed Method

    3.1 Atmospheric Scattering Model

    Fig. 1 shows a hazy condition with numerous particles suspended in the environment, resulting in a scattering effect on the light [8,13,17]. Scattered particles during hazy weather conditions cause the attenuation of light reflected from the surfaces of objects. The attenuated light deteriorates the image's brightness and decreases the image's resolution, as the forward scattering consequence substantially persists between the particles and surfaces [13]. The ultimate hazed image differs from the ground truth image locally and globally based on their pixels' information. The back-scattering of atmospheric particles in ordinary light yields images with reduced contrast, hue deviation, and reduced saturation, contrasting with the ground truth image [23]. These irregular scattering effects of sensor light and natural light in hazy images are broadly demonstrated via a dark channel prior prototype as follows [8,13,17]:

    Figure 1: A summary diagram of the formation of the scattering effect, showing environment radiance Ω, atmospheric light φ, attenuation or transmission ω, and observed intensity Υ

    Eq. (1) is the standard scattering model, Υ(γ) = Ω(γ)ω(γ) + Ψ(1 − ω(γ)). In (1), Υ(γ) is the observed image or brightness of the hazy image as established by the observer at pixel γ; Ω(γ) is the scene or environment radiance of the haze-free image; Ψ is the atmospheric light; and ω(γ) is the attenuation or transmittance medium, which ranges between 0 and 1. The transmittance can thus be redefined as ω(γ) = e^(−ηξ(γ)) in Eq. (2),

    where η is the scattering coefficient of the atmosphere, and ξ(γ) is the depth of the scene. Eq. (2) assumes homogeneity in the atmosphere; otherwise, ω(γ) is given by (3).

    The observed image brightness Υ(γ) can be obtained by eliminating the atmospheric light Ψ while compensating for the attenuation of the light ω(γ), to reinstate the haze-free scene Ω(γ). The RGB color space vectors Ω(γ), Ψ, and Υ(γ) in (2) are coplanar from a geometric point of view. The terminal points of the vectors Ω(γ), Ψ, and Υ(γ) are collinear, and the transmission ω(γ) is proportional to the ratio of the lengths of the two line segments, as defined in (3), ω(γ) = ‖Ψ − Υ(γ)‖ / ‖Ψ − Ω(γ)‖.

    Eq. (4) emanates from (2) and shows that haze removal is based on accurate retrieval of Ω(γ), Ψ, and ω(γ) from Υ(γ). The term Ω(γ)ω(γ), which shows the emissivity decay of the natural environment in the medium, represents direct attenuation. Ψ(1 − ω(γ)) is air-light based on previously scattered light, leading to alteration of the natural environment's color. Therefore, the greater the distance between the sensor and the object, the stronger the attenuation (Ω(γ)ω(γ)) and scattering (Ψ(1 − ω(γ))) effects, reflecting the exponential transmission shown in (2).
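The model above can be sketched numerically. The following is a minimal numpy illustration of Eqs. (1)–(2) and their inversion; the lower bound omega_min on the transmittance is an assumption added here for numerical stability, not part of the paper's formulation.

```python
import numpy as np

def transmission(depth, eta=1.0):
    """Transmittance omega(gamma) = exp(-eta * xi(gamma)) for a homogeneous atmosphere (Eq. (2))."""
    return np.exp(-eta * depth)

def hazy_image(radiance, depth, airlight, eta=1.0):
    """Forward scattering model (Eq. (1)): Upsilon = Omega*omega + Psi*(1 - omega)."""
    omega = transmission(depth, eta)[..., None]  # broadcast over the RGB axis
    return radiance * omega + airlight * (1.0 - omega)

def recover_radiance(observed, omega, airlight, omega_min=0.1):
    """Invert the model: Omega = (Upsilon - Psi) / max(omega, omega_min) + Psi."""
    omega = np.maximum(omega, omega_min)[..., None]
    return (observed - airlight) / omega + airlight
```

A synthetic round trip (hazing a flat scene, then recovering it with the true transmittance) reproduces the radiance exactly, which is what Eq. (4) relies on.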

    3.2 Convolution Neural Network

    A CNN is similar to an ordinary neural network: both are composed of learnable weights and biases [24]. In a CNN, each neuron receives an input, such as an image patch, performs a dot product, and may apply a non-linear computation. A CNN is expressed as a single differentiable score function, mapping the raw input image pixels to output scores. A CNN also has a loss function on the last layer of the network [24]. ConvNets explicitly assume image inputs, making it possible to encode image properties such as texture and information content into the architecture. This makes the forward function of the ConvNet architecture more efficient during implementation, thus reducing the number of parameters in the network [25]. The rest of the literature on the structure and architecture of Convolution Neural Networks (CNNs/ConvNets) is widely presented in [26].
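The dot-product-plus-nonlinearity operation described above can be illustrated with a bare-bones convolution. This is a didactic numpy sketch of a single CNN layer, not the architecture used in this paper; function names are illustrative.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: each output is a dot product of a patch and the kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise non-linearity applied after the dot product."""
    return np.maximum(x, 0.0)
```

Applied with a [-1, 1] kernel, the layer responds only at a vertical edge, showing how the learned weights encode local image structure.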

    4 Image Dehazing Based on Pixel Guided CNN with PAM via Graph Cut

    4.1 Transmission Map

    We define the mapping of pixel-value fluctuations along the smallest regions of the hazed image as Δp_H and that of the ground truth as Δp_G. The pixel fluctuations also replicate changes in image features. If we symbolize the variance of these deviations with ζ = (Δp_i)^2, where i = H, G for H (haze) and G (ground truth), then when ζ → 0 the variation is invisible. Since pixel values are between 0 and 1, the variance with respect to the neighboring pixels q_i is given by ‖Δp_i − Δq_i‖^2, where ‖.‖ is the magnitude of the pixels. We designate the threshold process in the input images I_d (haze) and I_G (ground truth) as

    Eq. (5) is analogous to (3), that is,

    Thus, the transmittance medium defined by (2) becomes

    Substituting (7) into (1) leads to

    Eq. (8) shows that a major challenge of image dehazing is solved. While in the beginning there were three unknowns present, (8) shows that only two unknowns are left: Ψ_i and Υ(γ_i). However, Ψ_i can be estimated based on Retinex theory [12], which derives the atmospheric light of the brightest pixel from Ψ_i = [max(R), max(G), max(B)]^t, where R, G, B are the three color channels in the image and t regulates the weights of the colors.
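The pixel-variance threshold introduced above and the Retinex-style estimate Ψ_i = [max(R), max(G), max(B)]^t can be sketched as follows. This is a minimal numpy illustration; the function names, the horizontal-difference choice for Δp, and the threshold value tau are assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_variation(img):
    """zeta = (Delta p)^2: squared pixel-value fluctuation between horizontal neighbours."""
    return np.diff(img, axis=1) ** 2

def variance_mask(haze, truth, tau=1e-3):
    """Flag locations where haze and ground-truth fluctuations differ by more than tau."""
    return np.abs(local_variation(haze) - local_variation(truth)) > tau

def atmospheric_light(img, t=1.0):
    """Psi = [max(R), max(G), max(B)]^t: brightest value per colour channel,
    with t regulating the colour weights (Retinex-style estimate)."""
    return img.reshape(-1, 3).max(axis=0) ** t
```

Locations where the mask is set are those whose fluctuations differ between haze and ground truth, i.e., where the transmission map needs refinement.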

    4.2 Global and Local Markov Random Fields

    Scene depth changes gradually and entails variation in local and global neighborhood pixels. Thus, accurate depth-variation estimation depends on features including color, texture, location, and shape, as well as both the local and global neighborhood pixels between the haze and ground-truth images. This paper proposes that these are attainable via a novel energy function in the depth estimation network. The energy function is based on a novel global-local Markov chain already discussed in detail in [8]. The resultant energy function is optimized by the graph cut as discussed in Zhu et al. [8]. However, in this model we use the color channel features as representative of both global and local color moments, as proposed by [27]. This contrasts with the super-pixels in global and local neighborhoods as presented in [1]. Thus, the ambient light used epitomizes the connection between global and local pixels and super-pixels. The approach extends global and local consistency, which helps to protect the proposed convolution neural network from over-smoothing far-apart pixels. It also assists in avoiding over-saturation of color and produces sharper boundaries. The relationship between the global and local neighborhood pixels and super-pixels is modeled via long- and short-range interactions. This is achieved by considering the global relationship between neighboring local pixels as proposed by Song et al. [27]. The results are extended to the global and local pixels to map the relationship between the haze and ground truth image. The constructed Markov random fields have edge costs representative of the neighboring pixels' consistency in overlapping regions based on a high-gradient boundary.
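The kind of energy the graph cut minimizes can be illustrated with a toy Potts-style energy on a 4-connected grid: a data term per pixel plus a smoothness term that charges disagreeing neighbour pairs. This is a sketch under that simplifying assumption, not the paper's exact global-local energy function.

```python
import numpy as np

def mrf_energy(labels, data_cost, lam=1.0):
    """Energy of a labelling on a 4-connected grid MRF:
    sum of per-pixel data costs plus lam times the number of disagreeing neighbour pairs."""
    h, w = labels.shape
    data = sum(data_cost[i, j, labels[i, j]] for i in range(h) for j in range(w))
    smooth = (np.sum(labels[:, :-1] != labels[:, 1:])    # horizontal neighbour pairs
              + np.sum(labels[:-1, :] != labels[1:, :])) # vertical neighbour pairs
    return data + lam * smooth
```

A graph-cut optimizer searches for the labelling that minimizes this quantity; the smoothness term is what yields the sharper, coherent boundaries described above.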

    The graph cut and parallax attention mechanism (PAM), which have already been proposed by Zhu [8], help in optimizing the MRF. Furthermore, they protect against over-saturation of color and sharpen boundaries. PAM helps in estimating the correspondences between haze and ground truth pixel values [28]. It also helps in the computation of occlusion maps and warps ground truth image features into the final dehazed image. PAM has inputs from feature maps FM_G and FM_L denoting global and local features, respectively (see Fig. 4). FM_G, FM_L ∈ ℝ^(R,G,B) represent color channels from the feature extraction based on pixel information. The onset of the PAM has two residual blocks with shared weights, adapting the input features for transmission estimation and generating feature maps FM_G0 and FM_L0. This helps to maximize the training process and avoid training conflicts [29]. A 1 × 1 convolution layer converts FM_G0 into a query feature map Q_FM ∈ ℝ^(R,G,B), and another 1 × 1 layer converts FM_L0 into a feature map FM ∈ ℝ^(R,G,B), which is reshaped into a feature map depending on the shared global and local features of the haze and ground truth image. Q_FM and FM are multiplied and graph cut with softmax (see Fig. 6). The results are then applied to obtain a parallax attention map M_(FM_G→FM_L) ∈ ℝ^(R,G,B). M_(FM_L→FM_G) is seen as a cost matrix encoding the correspondence along with pixel correlations between the haze and ground truth images. In the next step, FM_L is processed by a 1 × 1 convolution layer to obtain R ∈ ℝ^(R,G,B), which is multiplied by M_(FM_G→FM_L) to generate O ∈ ℝ^(R,G,B) (the warping of FM_L into FM_G). PAM also helps in estimating occlusion maps, V_(FM_L→FM_G), to help refine the transmission medium between the ground truth and haze image. During estimation of the occlusion map, a second PAM M_(FM_L→FM_G) is estimated by exchanging FM_G and FM_L. The rest of the details about the occlusion maps are presented in [30]. The literature on the functionality of the PAM and its applicability in image processing is extensively presented in existing papers [30]. The graph cut is widely illustrated by Zhu et al. [8] and extensively discussed and described in [9,31]. The two main components of the graph cut are data and regularization [32]. The data part assesses the image data compliance, such as image features, while the regularization part polishes the boundaries of the different conformity areas.
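The query-key correlation with softmax described above can be sketched in a few lines of numpy. The 1 × 1 convolutions are modelled as identities here, and the shapes and names (fm_g, fm_l) are illustrative assumptions; the real PAM learns those projections.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along one axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def parallax_attention(fm_g, fm_l):
    """Correlate global features (queries, from FM_G) with local features (keys, from FM_L)
    along each row, then warp FM_L toward FM_G with the resulting attention map."""
    # fm_g, fm_l: (H, W, C) feature maps
    scores = np.einsum('hwc,hvc->hwv', fm_g, fm_l)  # per-row correlation matrix
    att = softmax(scores, axis=-1)                  # M_{FM_G -> FM_L}
    warped = np.einsum('hwv,hvc->hwc', att, fm_l)   # O: FM_L warped into FM_G's view
    return att, warped
```

Each row of the attention map sums to one, so the warped output is a convex combination of local features: the "cost matrix" interpretation used above.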

    5 Experiments

    5.1 Data and Implementation

    The proposed technique (summarized in Fig. 3 and detailed in Figs. 2 and 6) was applied to various images (presented in Figs. 4, 5 and 7–12) obtained from different databases. These images were resized to reduce computational complexity. The images presented in Figs. 4, 5 and 7–12 are examples drawn from a dataset of 56 examples used in the experiment. The performance metrics presented in Tab. 2 are constructed from the results, whose parameter values are given in Tab. 1. We used a total of 24640 images to train the network using 440 partitions from 56 image samples. We validated the network results with 11000 images. These were generated from simulated clear images derived from the images presented in Figs. 4, 5 and 7–12. We extracted images (see Fig. 9, validation images) from regions with rich textures. Thus, the quality could be compromised for this set of results due to the absence of ground truth to validate the images. We constructed the final image outputs from 440 partitioned images to yield the results presented in Figs. 4, 5 and 7–12. The partition helped organize images into patches of similar local and global neighborhoods for the corresponding haze and ground truth images. A BIZON X5000 G2 PC with 16 GB RAM was used to train the proposed dehazing technique.

    5.2 Evaluation Metrics

    The proposed method's performance evaluation was conducted using five image quality criteria: (i) entropy [33]; (ii) e (visible edges) [11]; (iii) r (edge preservation performance) [11]; (iv) contrast; and (v) homogeneity [28]. These criteria were chosen based on the proposed method's objectives: improving information content, measuring human visual quality and textural features, and comparing the similarities between a dehazed image and the ground truth.
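As an illustration of criterion (i), histogram entropy can be computed as follows. This is a minimal sketch assuming images normalized to [0, 1] and a 256-bin histogram; the choice of bin count is an assumption.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram; higher means more information content."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))
```

A constant image scores 0 bits, while an image splitting its pixels evenly between two grey levels scores 1 bit, matching the intuition that more varied detail means higher entropy.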

    Table 1:Values obtained and used during the experiment for the proposed dehazing algorithm

    Fig. 6a is comprised of the input, an encoder, and a decoder. The encoder consists of convolution neural networks that extract global and local features from the hazy images and compare their corresponding features to the ground truth images. The decoder functions like the encoder except for its residual functions, which contain PAM with graph cut (see Figs. 6b and 6c). The residual decoder function permits full connection with other neurons, thus enhancing the learning rate and merging the training models. Fig. 6c is a graph built to minimize the energy problem. The graph consists of nodes corresponding to image pixels and pixel labels. The pixels are weighted based on their labels. The cut consists of a configuration of pixels at its maximum label based on the haze and ground truth image. The cut also ensures the energy is minimal over all configurations.

    Figure 2: The schematic detail shows the proposed architecture with seven neurons in the second hidden layer, eight neurons in the third hidden layer, and a single output. The series contains alternating global and local feature extraction before full connection and PAM via graph cut to obtain the final dehazed image

    Figure 3:A detail of the proposed image dehazing using ground truth-based geometric-pixel guided CNN with PAM via graph cut

    5.3 Results Analysis and Comparison

    Figure 4: Comparison of the effect of the proposed energy function on the image features: (a) Tan [33], (b) Zhu et al. [8], and the proposed method in the last column. The proposed method has visibly extracted extra features in the dehazed image compared with the existing dehaze-cut methods' results ([35] and [8])

    Figure 5: Comparison showing dehaze-cut results (the first image) and results from [8] (the second image), as well as the proposed method's result (the last image). The red and green patches show the effectiveness of the proposed method in terms of detailed information in comparison with the existing dehaze-cut methods [35] and [8]

    Figure 6: Detailed CNN for the proposed dehazing method with the encoder and decoder, which are similar except in the residual phase. The residual function ensures that each hidden neuron is fully connected, enhances the learning rate, and merges the training data set models. (b) The dense-residual phase is composed of softmax, which feeds information to the (c) PAM via graph-cut algorithm, which conforms image features and smooths the boundaries of varying conformity areas between the corresponding haze and ground truth image

    5.4 Quantitative Comparison

    Figure 7: Summary of the test comparison with extracted synthetic images from O'Shea et al. [21], showing the original image in the I haze column, and dehazing results from (a) Fattal et al. [5], (b) Barman et al. [6], (c) Zhu et al. [36], (d) Sener et al. [37], (e) Ancuti et al. [4], (f) Meng et al. [31], (g) O'Shea et al. [21], and (h) results from the proposed algorithm, along with T, the ground truth, in the last column

    Figure 8: Summary of the test comparison showing the original haze image in the first column, followed by the results from the multilayer perceptron [36], the residual-based dehazing method [37], the results from the proposed algorithm in the second-to-last column, and the ground truth in the last column

    Figure 9: Summary of the test comparison with natural images showing the original image in the I haze column, and dehazing results from (a) Fattal et al. [5], (b) Barman et al. [6], (c) Meng et al. [31], (d) Zhu et al. [38], (e) He et al. [19], (f) Li et al. [39], (g) O'Shea et al. [21], and (h) results from the proposed algorithm in the last column. The patches marked red are the regions of the assumed ground truth used for training the proposed method

    5.5 Comparison Analysis

    In all the cases (see Tab. 2), the images that resulted from the proposed algorithm on average demonstrated higher entropy, e, r, contrast, and homogeneity. This suggests that the proposed method produced dehazed images with improved information content, visibility, and better texture than the existing methods (Figs. 7–12). The difference in the textural properties of the proposed method is compared with those of the state-of-the-art methods in Figs. 11 and 12. The difference in the textures in Figs. 11 and 12 shows that a modified combination of PAM via graph cut and CNN, with a modified energy function and pixel-guided transmission based on an 'assumed ground truth', ultimately yields a better dehazed image. The true ground truth and 'assumed ground truth' inform the pixel reconstruction to yield an image with improved color correction and a visible blue sky (see the proposed result in (h), the last column of Fig. 10). A further visual inspection of patched sections of the proposed results in Figs. 11 and 12 compared to the existing methods reveals its strengths and weaknesses.

    The proposed method's major strength lies in its capacity to extract more details in the dehazed images (see the blue patches in Fig. 12). The areas marked blue tend to have more details than those in Zhu et al. [34]. The extra information can be credited to the proposed pixel differential-based transmittance medium, which emphasizes the pixel difference of the global and local patches. This explains the addition of some tree leaves in the patched sections. The approximation of the transmittance medium via local and global pixels within the image neighborhood distinguishes regions, resulting in more information extraction.

    Figure 10: Summary of hazed images used in the paper: (a) input image, (b) Zhu et al. [34], and (c) proposed results. The red (d), (e), and (f) patches represent the regions assumed as ground truth and used for training in the proposed method. The blue patches present the visible differences between the proposed method (i), the similar input region (g), and the existing state-of-the-art method (h) of Zhu et al. [34]

    Figure 11: Summary of the proposed method's strength compared to the existing state-of-the-art method of Sener et al. [35]. The proposed method preserves light and gives results almost identical to the ground truth. The green patches show that the Sener et al. [35] method tends to exaggerate light, an indication of retention of most of the haze particles. The proposed method in the middle column, although it appears darker, has better visibility than Sener et al. [35]

    Figure 12: Summary of hazed images used in the paper: (a) input image, (b) Yousaf et al. [23], and (c) proposed results. The red (d), (e), and (f) patches represent the regions assumed as ground truth and used for training in the proposed method. The blue patches present the visible differences between the proposed method (i), the similar input region (g), and the existing state-of-the-art method (h) of Yousaf et al. [23]

    A visual inspection of patched sections of the proposed results in Fig. 10 compared to the existing methods reveals its weakness. While the proposed method focuses on extracting finer details of the dehazed images (see the blue patches), regions with excess light still retain some light and hence carry less information (see also Fig. 5). The patched areas marked red and black, for instance, tend to blur over the entire region compared to those in Zhu et al. [34] and the input image. This is attributed to the proposed pixel differential-based transmittance medium's reliance on the assumed ground truth, which is not accurate. However, the contrary is true in areas where real ground truth exists, such as in the simulated image results presented in Figs. 7 and 8. The pixel difference of the global and local patches between the haze and ground truth images works, but it fails to extract features with similar pixels within regions, treating them as noise and causing blur under the 'assumed ground truth' (see Figs. 10 and 12), while correctly extracting details with real ground truth (see Fig. 11). The estimation of the transmittance medium via local and global pixels within the haze and ground truth image neighborhoods distinguishes regions with similar traits, leading to the better results presented in Tab. 2. This also explains the clear visibility of the sky and clouds in (h) of Fig. 9, in which a highly textured region is used as 'assumed ground truth', as well as the conservation of color and light in the green patches in Fig. 11.

    In all the examples, the extra features of the proposed image results, which arose from the proposed novel estimation of the transmittance medium, are clearly visible in comparison to the existing results. The standard deviation values in all cases, as presented in Tab. 2, are lower than those of the corresponding benchmark algorithms. Tab. 2 also shows that our proposed algorithm has a higher entropy of 7.43 than the Zhu et al. [34] algorithm's entropy of 7.12 and the Sener et al. [35] algorithm's entropy of 6.89. Also, our proposed algorithm has better consistency than the others, as tabulated in Tab. 2 for Fig. 9. These show that the proposed method gives more consistent and predictable results than the existing algorithms. However, the proposed method faces a challenge: some regions of the dehazed image tend to blur in instances where the ground truth is assumed, since the method relies on the actual ground truth. This is a common problem even in the existing state-of-the-art methods used for comparison.

    Table 2: Comparison of the mean and standard deviation of the performance evaluation metrics of the proposed and existing state-of-the-art algorithms for the examples presented in Figs. 7 and 8. e and r are blind assessment indicators: e assesses the rate of increase of visible edges, while r assesses edge preservation performance. Higher values of μ indicate a better method, while lower values of σ show the consistency of the results

    6 Conclusion

    This paper presents a novel method for image dehazing. We propose to solve the dehazing problem using a combination of a CNN with PAM via graph-cut algorithms. The method estimates the transmittance from a differential pixel-based variance, and uses local and global patches between the ground truth and haze image, as well as energy functions, to improve the transmission map. Through the outcomes presented and demonstrated in the given examples, the paper shows that the proposed algorithm yields a better dehazed image than the existing state-of-the-art methods, as shown in Figs.8, 10 and 11. Comparison of the entropy values in Figs.7 and 8 suggests that the proposed method improved the information content of the dehazed image by approximately 4.35% and 5.42%, respectively, compared to the best existing values. In all the comparison metrics, the proposed method gives more consistent results than the existing methods. These results show that our proposed method produces images with better visibility, greater clarity of features, and more features. In general, our results show more detail than the existing benchmark enhancement methods. These improvements can be attributed to the strengthening of local and global image features by a transmittance medium dependent on image pixel variance. However, the proposed method faces a challenge: some regions of the dehazed image tend to blur in instances where the ground truth is assumed, since the method relies on an actual ground truth. Future research could consider combining our method with other existing algorithms such as the dark channel prior, since at least one color channel of an RGB image has some pixels of the lowest intensities. This can be achieved via sub-tasking of the CNN framework based on the problems to be solved, containing algorithm complexity while reducing the operational cost. Future research could also test combining conditions for atmospheric homogeneity with the ratio between the ground truth and haze image segments during estimation of the transmittance medium. This can be achieved by developing a framework for finding the variation in atmospheric light and the best blend to give optimal results.
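For reference, the dark channel mentioned above can be computed as follows; this is a minimal sketch of He et al.'s prior, not part of the proposed pipeline:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel prior: in haze-free outdoor images, most local
    patches contain at least one channel with near-zero intensity.
    img is an H x W x 3 float array in [0, 1]."""
    h, w, _ = img.shape
    pad = patch // 2
    min_ch = img.min(axis=2)                 # per-pixel channel minimum
    padded = np.pad(min_ch, pad, mode='edge')
    dark = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark
```

A large dark-channel value signals haze, which is what makes the prior a natural complement to the pixel-variance transmittance proposed here.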

    Funding Statement: This work was funded by the Deanship of Scientific Research at Jouf University under grant No. DSR-2021-02-0398.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
