
Visibility Enhancement of Scene Images Degraded by Foggy Weather Condition: An Application to Video Surveillance

Computers, Materials & Continua, 2021, Issue 9

Ghulfam Zahra, Muhammad Imran, Abdulrahman M. Qahtani, Abdulmajeed Alsufyani, Omar Almutiry, Awais Mahmood and Fayez Eid Alazemi

1 Department of Computer Science, Shaheed Zulfikar Ali Bhutto Institute of Science and Technology, Islamabad, 44000, Pakistan

2 Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, 21944, Saudi Arabia

3 College of Applied Computer Science, King Saud University (Almuzahmiyah Campus), Riyadh, 11543, Saudi Arabia

4 Department of Computer Science and Information Systems, College of Business Studies, PAAET, 12062, Kuwait

Abstract: In recent years, video surveillance applications have played a significant role in our daily lives. Images captured for video surveillance in foggy and hazy weather lose their authenticity, which reduces their visibility. The purpose of enhancing the visibility of foggy and hazy images is to support numerous computer and machine vision applications such as satellite imagery, object detection, target killing, and surveillance. A number of visibility enhancement algorithms and methods have been proposed in the past to remove fog and improve visibility. However, these techniques suffer from several limitations that are strong obstacles to real-world outdoor computer vision applications: they do not perform well when images contain heavy fog, large white regions, or strong atmospheric light. This work proposes a new framework to defog and dehaze images in order to enhance the visibility of foggy and hazy images. The proposed framework is based on a conditional generative adversarial network (CGAN) with two networks, a generator and a discriminator, each having distinct properties. The generator network generates fog-free images from foggy inputs, and the discriminator network distinguishes between the restored image and the original fog-free image. Experiments are conducted on the FRIDA dataset and on haze images. To assess the performance of the proposed method, PSNR and SSIM are used for the fog dataset, and e, r̄, and σ are used as performance metrics for the haze dataset. The experimental results show that the proposed method achieved higher PSNR and SSIM values (18.23 and 0.823) than the compared methods (13.94 and 0.791, and lower). The results demonstrate that the proposed framework removes fog and enhances the visibility of foggy and hazy images.

Keywords: Video surveillance; degraded images; image restoration; transmission map; visibility enhancement

    1 Introduction

Restoring foggy and hazy images is important for various computer vision applications in outdoor scenes. Fog reduces visibility drastically and causes many computer vision systems to fail. Removing fog and haze and enhancing the visibility of an image is therefore very important, because such images are used for many purposes, including surveillance, highway driving, aircraft take-off and landing, object tracking, object identification, and various other fields. Image defogging and dehazing are important for outdoor computer vision systems. Defogging algorithms are needed in a number of situations, such as traffic management and tourism, especially in winter and in hilly areas where fog, rain, and haze are very common. Poor visibility not only degrades the perceptual quality of an image but also affects the performance of computer vision applications [1]. Images captured in foggy and hazy weather lose fidelity, such as contrast and true color details [2]. Fog is a natural phenomenon that decreases the color contrast and surface color of objects in proportion to their distance from the observer. Because of this poor contrast and low visibility, degraded images create difficulties in various real-time applications.

In bad weather, images are not seen clearly because of atmospheric light and attenuation in the atmosphere. In the presence of atmospheric particles, the atmospheric light is scattered into the observation path. Because less light reaches the camera from the scene object, image contrast decreases and colors become blurred, which leads to a poor visual perception of the image [3]. Visibility enhancement of hazy and foggy images plays an effective role in various real-time outdoor computer vision applications, such as robot navigation, transportation, outdoor scene monitoring, object tracking, object identification, and remote sensing systems [4]. Many outdoor images taken in foggy weather have reduced visibility and are drastically degraded [5]. He et al. [6] proposed the dark channel prior (DCP) method. Images captured in foggy and hazy weather have low intensity, and the DCP is used to estimate the transmission map. The technique generally gives good results, but it does not work well on grayscale images. Another drawback lies in how the atmospheric light is estimated: the pixels with the largest dark channel values (the top 0.1%) are selected to estimate the atmospheric light of the hazy or foggy image. Goodfellow et al. [7] proposed the generative adversarial network (GAN), which consists of two networks, a discriminator and a generator, each with distinct properties. The generator network takes a foggy or hazy image as input and generates a fog-free image, while the goal of the discriminator network is to differentiate between the original image and the generated one.
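To make the DCP idea above concrete, the following is a minimal sketch of the dark channel computation and the 0.1% atmospheric-light selection described for [6]; the 15-pixel patch size and the use of SciPy are illustrative assumptions, not details from the paper.

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        # img: H x W x 3 float array in [0, 1]; per-pixel minimum over RGB,
        # then a local minimum over a patch x patch window.
        return minimum_filter(img.min(axis=2), size=patch)

    def estimate_atmospheric_light(img, dark, top_fraction=0.001):
        # Average the colors of the 0.1% brightest pixels in the dark channel.
        n = max(1, int(dark.size * top_fraction))
        idx = np.argsort(dark.ravel())[-n:]
        return img.reshape(-1, 3)[idx].mean(axis=0)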

The reason for enhancing the visibility of foggy and hazy images is to support numerous computer vision and machine vision applications such as satellite imagery, object detection, target killing, and surveillance. Systems should be able to enhance the visibility of foggy and hazy images, which is necessary for several real-time computer vision applications. It is not sufficient merely to classify the visibility of images; identifying properties of the dehazed image such as color, brightness, and texture is also necessary to adjust its contrast. This would be a valuable input to real-world computer vision applications. Images captured in inclement weather always have poor visibility. This is because light reflected from scene objects is scattered in the atmosphere, so less light reaches the camera. Aerosols such as fog, dust, and water droplets mix with the ambient light and limit what is visible. When fog, haze, mist, rain, or snow is present in the atmosphere, the interruption of light by very fine droplets causes blocking and scattering through the medium. Worldwide, many trains and flights are affected by fog during winter, and bad weather and poor visibility affect vehicle driving and road sign recognition. It is therefore a dire necessity to propose an image defogging and dehazing framework that can recover fog-free images from foggy ones. We propose a conditional generative adversarial network (CGAN) that can directly remove haze and fog from an image. In the proposed method, the CGAN consists of two networks, a generator and a discriminator. The generator network takes a foggy image as input and generates a fog-free image; the goal of the discriminator network is to differentiate between the original fog-free image and the restored image. According to researchers, estimating the atmospheric light and the transmission map is an important step in image defogging and dehazing. In the proposed method, the generator network directly estimates the transmission map, atmospheric light, and scene radiance without producing halo artifacts.

Figure 1: An illustrative example of an image defogging approach: (a) the input foggy image; (b) the defogged image

The remainder of this paper is organized as follows. Section 2 provides a literature review of related work in the context of this research. Section 3 explains the proposed model with the help of a hypothetical example. Section 4 presents the experimental results of the proposed methodology and a comparison with state-of-the-art methods. Finally, the conclusion is presented in Section 5.

    2 Related Works

This section briefly reviews the existing literature. Several visibility enhancement methods, algorithms, techniques, and frameworks have been proposed to enhance the visibility of foggy and hazy images. This study reviews related work on enhancing both foggy and hazy images, categorized as follows.

Images taken in different environmental conditions suffer from color shift and localized light problems. Huang et al. [8] introduced three modules: color analysis (CA), visibility restoration (VR), and hybrid dark channel prior (HDCP). The HDCP module estimates the atmospheric light. The CA module is based on the gray-world assumption and determines the intensity of the RGB color channels of the captured image and the color shift information. The VR module restores a high-quality fog-free image. The aim of the study was to remove fog and haze for better visibility and safety. The FRIDA dataset was used for the experiments, with three performance metrics: e, r̄, and σ. The e metric counts the new visible edges after restoration, the r̄ metric measures the contrast of the restored image, and the σ metric measures the black and white (saturated) pixels after restoration. The experimental results demonstrated that the technique recovered scene radiance, localized light sources, and color shift better than existing techniques.

Images captured in foggy weather have low contrast. Negru et al. [9] proposed a contrast restoration approach based on Koschmieder's law. To estimate the atmospheric veil, the color image is converted to grayscale; the atmospheric veil V is a smooth function that gives the amount of white background to be subtracted from the colored image. A median filter is used to remove noise while preserving fine edge details, and Koschmieder's law computes and restores the luminance of background objects. The proposed model works on daytime fog and also enhances the contrast of foggy images. The FRIDA dataset was used for the experiments, and two performance metrics, r̄ and e, were used to measure restoration quality.

Removing fog from foggy images is a difficult task in computer vision. To overcome these drawbacks, Guo et al. [10] proposed a Markov random field (MRF) framework. A graph-based α-expansion technique was used to estimate the transmission map, and a bilateral filter was applied to handle scene discontinuities; this filter smooths the image while preserving fine edge details. To estimate the atmospheric light from a foggy image, three properties were used: bright regions, flat intensity, and upper image position. With this method, the color of the defogged result sometimes appears over-saturated and the resulting image shows gradient effects. The FRIDA dataset was used for the experiments with the three metrics e, r̄, and σ, and the results showed that the method produced better results than the existing literature.

Zhao et al. [11] presented a defogging and dehazing method based on local extrema. The method consists of three phases: white balance is used to estimate the skylight in the color image, an atmospheric scattering model is used to find the atmospheric veil, and a multi-scale tone manipulation algorithm is used to enhance visibility. The objective of this study was to improve visibility under both normal and foggy weather conditions; however, the method does not produce good results on scenes with heavy fog or haze. For the experiments, 66 foggy images were selected at random. The edge-preserving index (EPI), which calculates the sum of gradient pixels of the restored and original images, was used to measure edge preservation, and four performance indicators, e, ∈, r̄, and h, were used for the restoration rate; the indicator h identifies the degree of color retention in the defogged image.

Outdoor images often have degraded visibility and a gray or bluish hue in bad weather. Nair et al. [12] proposed an algorithm using a center-surround Gaussian filter to dehaze images; the training images consist of three different color spaces. Tai et al. [13] proposed a method built on the atmospheric scattering model and McCartney's model, consisting of two phases: transmission estimation and haze-similarity blocks. A guided filter was used to estimate the transmission map and the atmospheric light. The Fog Road Image Database (FRIDA) was used for the experiments, and three performance parameters, e, r̄, and σ, were computed to evaluate the dehazed restoration rate. Images taken in hazy conditions suffer from low visibility and low contrast; Yuan et al. [14] proposed a highly correlated reference retrieval dehazing (HCRRD) algorithm for this problem.

Visibility enhancement methods usually cannot restore the color cast and contrast of an image because of poor approximation of the haze thickness. Chai et al. [15] proposed a Laplacian-based visibility enhancement method consisting of two modules: an image visibility restoration (IVR) module and a haze thickness estimation (HTE) module. The IVR module recovers the brightness in the fog-free image. For testing, 1586 real-world images were used with three well-known performance metrics, e, r̄, and σ. The experimental results show that the method produces better results than the compared methods.

Traditional fog and haze removal methods fail to restore sky-region images. Zhu et al. [16] introduced a fusion of luminance and dark channel prior (F-LDCP) method to recover the original image from haze. The method comprises three steps: (i) transmission map correction, (ii) atmospheric light estimation, and (iii) soft segmentation. The aim of the study was to recover long- and short-range images containing sky in order to assess the technique. Sixty UAV images were used for the experiments, and two metrics, PSNR and SSIM, were used to assess performance; the method performed well and preserved the naturalness of the sky regions. Luan et al. [17] introduced a defogging method based on a learning framework. In the dehazing stage, a median filter was used to estimate the atmospheric light, and seven quality-based fog features were extracted, including the MIC, MSE, HIS, WEB, MEA, and SAT features. The Michelson contrast (MIC) feature captures periodic patterns and texture, the Weber contrast (WEB) feature gives the normalized difference between a colored object and the background, the histogram (HIS) feature is frequently used as a parameter for image quality, and the saturation (SAT) feature is the ratio of the minimum and maximum pixel values. A set of 427 real-world outdoor foggy images was used for the experiments, and three metrics, e, r̄, and σ, were computed.

Images taken in poor weather lose contrast. Li et al. [18] proposed a defogging method based on a conditional generative adversarial network (CGAN), in which the generator network restores the input hazy images. Li et al. [19] proposed defogging images using a cascaded convolutional neural network (CNN), where the medium transmission map is estimated through a densely connected CNN and the global atmospheric light is estimated through a separate CNN.

In this section, different paradigms proposed by researchers for the visibility enhancement of foggy and hazy images have been reviewed [8-19]. These techniques do not perform well when images contain heavy fog, large white regions, or strong atmospheric light, so the defogged and dehazed images still have low contrast and low visibility. As a powerful class of deep learning models, a CGAN can directly remove fog and haze and enhance the visibility of weather-degraded foggy and hazy images. The proposed CGAN consists of two networks, a generator and a discriminator: the generator takes a foggy image as input and generates a fog-free image, and the discriminator differentiates between the original fog-free image and the restored image.

    3 Proposed Framework

The weaknesses of earlier work were discussed in the previous section; to overcome them, a defogging and dehazing framework is proposed here. The overall flowchart of the proposed methodology for the visibility enhancement of foggy and hazy images is shown in Fig. 2. The proposed framework is novel in terms of the issues it addresses collectively. The purpose of this research is to overcome the problem of low visibility, which is also beneficial for National Highway Traffic Safety Administration systems, remote sensing, traffic monitoring, and object recognition. To the best of our knowledge, this is the first time a CGAN is used to remove fog from images. In the pre-processing phase, a median filter is used to remove noise from the images to produce quality results. The median filter is a nonlinear filter that removes noise from weather-degraded foggy and hazy images; among the many filters used in image processing, it removes noise while preserving edges, smoothing the image, and maintaining color details, and it takes little time to compute. After pre-processing, the CGAN is applied. The proposed CGAN consists of two networks, a generator and a discriminator. The generator network takes a hazy or foggy image as input and generates a haze- and fog-free image; in the proposed method, it directly estimates the atmospheric light and the transmission map. The discriminator network distinguishes the generated image from the original fog- and haze-free image.

    Figure 2:Proposed framework for image dehazing/defogging

In computer vision and image processing, the commonly used image formation (atmospheric scattering) model is [10]:

I(x) = J(x) t(x) + A (1 − t(x))    (1)

t(x) = e^(−β d(x))    (2)

In Eq. (1), I(x) represents the foggy image and J(x) is the scene radiance, i.e., the restored fog- and haze-free image. The airlight is denoted by A and the transmission map by t(x). In Eq. (2), β is the atmospheric scattering coefficient and d(x) is the scene depth. The transmission value lies between 0 and 1 for every pixel and depends on the scene depth d(x). We assume a global atmospheric light for each image (values of 0.7 and 0.1) and a scattering coefficient β (values of 1.5, 1.6, and 0.1). Using Eq. (2), the depth d(x) and the scattering coefficient β are used to effectively compute the transmission map t(x).
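As a small illustration of Eqs. (1) and (2), the NumPy sketch below computes the transmission map from a depth map and synthesizes a foggy image from a clear one; the specific A and β values are only example settings, and the variable names mirror the notation in the text.

    import numpy as np

    def transmission_from_depth(depth, beta=1.5):
        # Eq. (2): t(x) = exp(-beta * d(x)), with depth d(x) as an H x W array
        return np.exp(-beta * depth)

    def synthesize_fog(J, depth, A=0.7, beta=1.5):
        # Eq. (1): I(x) = J(x) t(x) + A (1 - t(x)); J is H x W x 3 in [0, 1]
        t = transmission_from_depth(depth, beta)[..., None]
        return J * t + A * (1.0 - t)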

    3.1 Pre-Processing

Because of mist, fog, snow, haze, and rain, outdoor images captured in foggy and hazy weather have reduced visibility. To enhance visibility and obtain quality results, it is important to remove noise from the foggy or hazy image first; visibility cannot be enhanced well in the presence of noise, which lowers the performance of computer vision applications. Pre-processing is therefore an essential part of enhancing the visibility of foggy and hazy images. In the pre-processing phase, a median filter is applied to the input foggy or hazy image. The median filter is a nonlinear filter that removes noise from weather-degraded foggy and hazy images while preserving edges, smoothing the image, and maintaining color details. Its main purpose is to improve the quality of an image that has been corrupted by noise.
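A minimal pre-processing sketch using OpenCV is given below; the 5×5 aperture size is an illustrative choice, since the paper does not report the filter window used.

    import cv2

    def preprocess(path):
        img = cv2.imread(path)            # read the degraded foggy/hazy image
        return cv2.medianBlur(img, 5)     # nonlinear median filter: removes noise, preserves edges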

    3.2 Conditional Generative Adversarial Network

Once pre-processing is done, the CGAN is applied. The proposed conditional generative adversarial network is a combination of two networks: a generator network and a discriminator network.

    3.2.1 Generator Network

The generator network directly estimates the atmospheric light and the transmission map. It consists of three parts: atmospheric light estimation, transmission map estimation, and scene radiance recovery.

Transmission Map: The generator network architecture is shown in Fig. 3. The generator network uses four types of layers to compute the transmission map: convolutional, pooling, up-sampling, and fully connected layers. The first convolutional layer has 16 filters with a 7×7 kernel, the second has 32 filters with a 5×5 kernel, and the third has 64 filters with a 3×3 kernel; each convolutional layer is followed by a pooling layer and an up-sampling layer. The pooling layer operates on each feature map separately to create a new set of the same number of pooled feature maps. The up-sampling layer consists of a 7×7 convolutional filter with a kernel size of 3×3. The final layer is a fully connected layer that takes the output of the previous layers and combines the features to create the model.
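The following Keras sketch arranges the layers described above (16/32/64 filters with 7×7, 5×5, and 3×3 kernels, pooling and up-sampling); the pooling/up-sampling placement and the final 1-channel sigmoid convolution that produces the per-pixel transmission map are assumptions, since the paper does not give those details exactly.

    from tensorflow.keras import layers, Model

    def transmission_branch(shape=(256, 256, 3)):
        x_in = layers.Input(shape=shape)
        x = layers.Conv2D(16, 7, padding="same", activation="relu")(x_in)
        x = layers.MaxPooling2D(2)(x)
        x = layers.Conv2D(32, 5, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
        x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
        x = layers.UpSampling2D(4)(x)                                       # restore input resolution
        t = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)    # t(x) in (0, 1)
        return Model(x_in, t)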

Atmospheric Light: The atmospheric light component aims to estimate the atmospheric light A, as shown in Fig. 3. The generator network uses four layers to estimate A: a convolutional layer, a pooling layer, an up-sampling layer, and a dense layer. The up-sampling layer consists of a 7×7 convolutional filter with a kernel size of 3×3, and the final dense (fully connected) layer takes the output of the previous layers and combines the features to create the model.
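A matching sketch of the atmospheric-light branch is shown below; the global average pooling and the 3-unit dense output (one A value per RGB channel) are assumptions made so the branch produces a single global estimate, as the paper does not specify the output shape.

    from tensorflow.keras import layers, Model

    def atmospheric_light_branch(shape=(256, 256, 3)):
        x_in = layers.Input(shape=shape)
        x = layers.Conv2D(16, 7, padding="same", activation="relu")(x_in)
        x = layers.MaxPooling2D(2)(x)
        x = layers.GlobalAveragePooling2D()(x)
        a = layers.Dense(3, activation="sigmoid")(x)   # global airlight A, one value per channel
        return Model(x_in, a)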

    Figure 3:Generator network architecture

Scene Radiance: After the atmospheric light and the transmission map have been estimated, the scene radiance is recovered by inverting the scattering model. Given the atmospheric light A, the transmission map t(x), and the foggy image I(x), the restored image J(x) is obtained from Eq. (3):

J(x) = (I(x) − A) / t(x) + A    (3)
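A short sketch of the scene-radiance recovery step of Eq. (3) follows; the lower bound t0 on the transmission is a common safeguard against division by near-zero values, not a value taken from the paper.

    import numpy as np

    def recover_radiance(I, A, t, t0=0.1):
        # I: H x W x 3 foggy image in [0, 1]; A: scalar or per-channel airlight; t: H x W transmission
        t = np.clip(t, t0, 1.0)[..., None]
        return np.clip((I - A) / t + A, 0.0, 1.0)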

    3.2.2 Discriminator Network

The discriminator network architecture is shown in Fig. 4. The purpose of the discriminator network is to differentiate between the original fog-free image and the generated image. Its basic operations are convolution, batch normalization, and Leaky ReLU, with a sigmoid function as the final layer [18]. Finally, the discriminator distinguishes the original fog-free image from the restored fog-free image.
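An illustrative Keras sketch of the discriminator layer pattern just described (convolution, batch normalization, Leaky ReLU, sigmoid output) is given below; the number of blocks, the filter counts, and the strides are assumptions, not figures reported in the paper.

    from tensorflow.keras import layers, Model

    def build_discriminator(shape=(256, 256, 3)):
        x_in = layers.Input(shape=shape)
        x = x_in
        for filters in (32, 64, 128):
            x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
            x = layers.BatchNormalization()(x)
            x = layers.LeakyReLU(0.2)(x)
        x = layers.Flatten()(x)
        out = layers.Dense(1, activation="sigmoid")(x)   # probability the input is a real fog-free image
        return Model(x_in, out)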

    Figure 4:Discriminator network architecture

    4 Experiments and Results

    4.1 Experimental Setup

We implemented the proposed method on a PC with an Intel(R) Core(TM) processor at 3.20 GHz, 16.0 GB RAM, Windows 10 Pro, and OpenCV 2.2.8. The model was trained on an NVIDIA TITAN X GPU and coded in Python using the TensorFlow framework; the framework was implemented in PyCharm, and PyTorch was also used for the implementation of the CGAN. We used the conditional generative adversarial network to remove fog and haze from images, and training on the dataset took about 6 hours. Experiments were performed on two commonly used datasets: FRIDA (Fog Road Image Database) and a set of haze images [20]. FRIDA comprises 18 urban road scenes and 90 synthetic images; for each scene there are four foggy images with their depth maps, and a different type of fog (e.g., cloudy fog, heterogeneous fog) was added to each image. The haze set contains 35 hazy images and 35 haze-free images of different kinds of scenes. All training images were resized to 256×256 and a learning rate of 0.001 was used, with the Adam optimizer performing gradient descent. The dataset was split into a training phase and a testing phase: 80% of the images were used for training and the remaining 20% for testing. To assess the performance and validity of the proposed method, PSNR and SSIM are used for the FRIDA dataset, whereas e, r̄, and σ [14] are used for the haze images.
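The sketch below condenses the quoted setup (256×256 inputs, an 80/20 train/test split, the Adam optimizer with a 0.001 learning rate); file loading and the GAN training loop itself are omitted, and the function names are illustrative.

    import tensorflow as tf

    def prepare(foggy, clear):
        # foggy, clear: paired image arrays of shape (N, H, W, 3) with values in [0, 255]
        foggy = tf.image.resize(foggy, (256, 256)) / 255.0
        clear = tf.image.resize(clear, (256, 256)) / 255.0
        n_train = int(0.8 * foggy.shape[0])            # 80% for training, 20% for testing
        return (foggy[:n_train], clear[:n_train]), (foggy[n_train:], clear[n_train:])

    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)   # Adam performs the gradient descent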

I. FRIDA (Fog Road Image Database):

To assess the performance of the proposed method on the FRIDA dataset [20], two performance metrics, PSNR and SSIM, are used. The peak signal-to-noise ratio measures the ratio between the maximum possible signal and the error between the foggy and fog-free images:

PSNR = 10 · log10(MAXf² / MSE)    (4)

where MSE is the mean squared error and MAXf is the maximum signal value that exists in the original image. The structural similarity index measures the similarity between the original fog-free image and the restored image, as in Eq. (5):

SSIM(x, y) = i(x, y) · c(x, y) · s(x, y)    (5)

where i(x, y) is the brightness (luminance) comparison function measuring the similarity of the two images, c(x, y) is the contrast comparison function between the two images, and s(x, y) is the structure comparison function, i.e., the correlation coefficient between the two images. Higher SSIM and PSNR values indicate better image quality and enhanced visibility of the restored image.
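A short evaluation sketch using scikit-image follows, assuming the restored image and the ground-truth fog-free image are RGB arrays of the same size.

    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate(restored, ground_truth):
        psnr = peak_signal_noise_ratio(ground_truth, restored)                  # higher is better
        ssim = structural_similarity(ground_truth, restored, channel_axis=2)    # channel_axis=2 for RGB
        return psnr, ssim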

II. Haze Dataset:

To validate the proposed method and measure its image restoration rate on the haze images, three performance indicators are used: e, r̄, and σ. The e metric counts the new visible edges in the enhanced fog-free image, as shown in Eq. (6). The r̄ metric measures the contrast restoration in the fog-free image, as shown in Eq. (7). The σ metric represents the proportion of pixels that become completely black or white in the restored haze-free image, as shown in Eq. (8). Here mi denotes the number of edge points in the restored haze-free image, ri is the gradient ratio between the original hazy image and the restored haze-free image, mt represents the feasible range of pixel values, and (dimx × dimy) is the size of the output image.
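A simplified sketch of two of these indicators is given below; the Sobel-gradient edge test with a fixed threshold is only a rough stand-in for the visible-edge detector used in the blind-assessment literature, so treat it as illustrative rather than as the exact Eqs. (6)-(8).

    import numpy as np
    from scipy import ndimage

    def visible_edges(gray, thresh=0.05):
        # gray: H x W image in [0, 1]; mark pixels whose gradient magnitude exceeds the threshold
        g = np.hypot(ndimage.sobel(gray, 0), ndimage.sobel(gray, 1))
        return g > thresh

    def e_sigma(original, restored):
        n_o = visible_edges(original).sum()            # edges visible before restoration
        n_r = visible_edges(restored).sum()            # edges visible after restoration
        e = (n_r - n_o) / max(n_o, 1)                  # rate of newly visible edges
        saturated = np.logical_or(restored <= 0.0, restored >= 1.0).mean()
        return e, 100.0 * saturated                    # sigma as a percentage of saturated pixels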

    4.2 Experimental Results

    4.2.1 Experimental Result on FRIDA Dataset

To assess the performance of the proposed method, experiments are performed on the FRIDA dataset, which consists of eighteen urban road foggy scenes and 90 synthetic images. PSNR and SSIM are used as performance metrics. Tab. 1 shows the PSNR and SSIM values achieved by the proposed framework.

    Table 1:Result of proposed method on FRIDA dataset

To validate the performance of the proposed method, a comparative analysis is presented in Tab. 2, where the comparison is based on PSNR and SSIM. As Tab. 2 shows, the proposed method achieves higher SSIM and PSNR values than the existing methods; the comparison is also presented graphically in Fig. 8. The reason for the success of the proposed method is the correct estimation of the atmospheric light and transmission map, together with pre-processing, which lead to good defogging results and visually pleasing color. In the proposed method, the generator network accurately estimates the atmospheric light and the transmission map, and this accurate estimation yields good defogging and dehazing results. A number of methods, techniques, and algorithms have been used to remove fog from images, but this is, to our knowledge, the first time the deep learning CGAN framework has been applied to remove fog and enhance the visibility of weather-degraded foggy images; no existing literature appears to have applied a CGAN to foggy images. The defogging results of the proposed method and of the other methods are shown in Figs. 5-7. The image restored by the proposed method is closer to the original fog-free ground-truth image, whereas the defogging results of the existing methods still show slight fog and low visibility. The comparison of SSIM and PSNR values in Tab. 2 clearly shows that the proposed method performs well and enhances contrast and visual quality better than the existing methods. The experimental results on the road scene images are shown in Figs. 5-7(d): the restored images have increased contrast, true color details, and enhanced visibility.

Table 2: PSNR and SSIM comparative analysis of the proposed method on the FRIDA dataset with previous techniques

    Figure 5:Defogging result of proposed method in comparison with Huang et al.[8]based on Fog data.(a) Original Fog free image.(b) Foggy image.(c) HDCP [8].(d) Proposed method

We compared the performance of the proposed method with three existing methods. In Figs. 5-7, (a) is the original fog-free image and (b) is the input foggy image. Fig. 5c shows the restoration result of the HDCP method [8]. This method does not remove fog perfectly: a large amount of gray level remains in the image, and after defogging it produces a dim and noisy sky with reduced contrast; the restored image looks too dark and has low visibility because of wrong estimation of the transmission map. Our restored image, in contrast, is close to the ground truth, with vivid and true colors. The vector quantization method [13] is shown in Fig. 6c: the bottom part of the image, which mostly consists of road, has enhanced visibility but is too dark; artifacts and scene problems still exist in the restored image, some colors vanish in places, and fog is not removed effectively, so the generated image has low visibility and is over-enhanced. The result produced by our approach, shown in Fig. 6d, effectively removes the fog, is close to the ground-truth fog-free image, and has pleasing visual quality. The reference retrieval method [14] is shown in Fig. 7c: the restored image is blurred, sharp details are destroyed, and the image has low contrast, although the method produces good defogging results for images containing both light and heavy fog regions. The result of our approach in Fig. 7d still contains slight fog but has high contrast and enhanced visibility. Accurate estimation of the atmospheric light and transmission map gives good defogging results, true color details, and context information in fog regions. The SSIM and PSNR comparison is shown in Tab. 2: the proposed method produces the higher SSIM and PSNR values, while the existing methods produce lower values. During the experiments we observed that the proposed framework processes an image within seconds.

    Figure 6:Defogging result of proposed method in comparison with Tai et al.[13]based on Fog data set.(a) Original Fog free image.(b) Foggy image.(c) Vector quantization [13].(d) Proposed method

    Figure 7:Defogging result of proposed method in comparison with Yuan et al.[14]based on Fog data set.(a) Original Fog free image.(b) Foggy image.(c) Reference retrieval [14].(d) Proposed method

Figure 8: Comparison graph of PSNR and SSIM metrics on the FRIDA dataset

    4.2.2 Experimental Result on Haze images

Visual results of the CGAN are based on experiments performed on the haze dataset, which consists of 35 hazy images and 35 haze-free images. The e, r̄, and σ results of the proposed CGAN are presented in Tab. 3.

    Table 3:Result of proposed method on Haze dataset

To compare the proposed method with existing dehazing methods, a comparative analysis is presented in Tab. 4, based on the e, r̄, and σ metrics. Tab. 4 shows the image restoration rates, in terms of e, r̄, and σ, of the proposed and existing methods. It can be observed from Tab. 4 that the proposed method produces the higher values, which demonstrates its good defogging results, whereas the existing methods produce lower values. Tab. 4 clearly shows that the proposed method performs well and enhances image visibility and contrast compared with state-of-the-art methods.

Five hazy images were chosen from the test set for the visual comparison. In Figs. 9-12, the first image is the ground-truth haze-free image and the second is the input hazy image. As shown in Fig. 9, the algorithms [17,21-23] remove fog significantly after processing; however, in the bottom part of the image, which is mostly road, the restorations of the existing methods have low contrast and fewer visible edges, and almost the entire image is saturated. The result produced by the proposed method in Fig. 9 has high contrast, more visible edges, and richer texture information. The results of Guo et al. [10], Zhao et al. [11], Fattal [24], and Kumari et al. [25] are shown in Fig. 10: visibility is enhanced by these methods, and the results of Guo et al. [10] and Zhao et al. [11] have better visibility with the haze almost removed, but haze still remains in the result of Fattal [24] and the image has low contrast. The result of Kumari et al. [25] removes most of the haze, although the restored image shows high contrast. The result of the proposed method in Fig. 10 has high contrast and brightness. As shown in Fig. 11, the restorations produced by He et al. [6], Nair et al. [12], Tan [26], and Liu et al. [27] do not remove the haze perfectly; they contain a large number of artifacts and overly dark regions caused by wrong estimation of the transmission map, and because these methods under-estimate the haze thickness, the restored images do not reach a satisfactory restoration rate or recover visible edge information. In particular, Nair et al. [12] and Tan [26] leave the haze in place and produce images that are too dark compared with the original haze-free image, while He et al. [6] and Liu et al. [27] remove most of the haze but leave artifacts and low contrast. The image generated by the proposed method in Fig. 11 has no artifacts, increased contrast, and fine image details. Finally, as shown in Fig. 12, the results of He et al. [6], Yuan et al. [14], Liu et al. [27], and our method all remove the haze almost perfectly, although the compared results still show low visibility and scene problems.

Table 4: Comparison results of the performance measures e, r̄, and σ on haze images

    Figure 9:Comparison results of various dehazing methods and proposed method.(a) Original Haze free image,(b) Haze image,(c) The algorithm [17].(d) The algorithm [21].(e) The algorithm [22].(f) The algorithm [23].(g) Proposed algorithm

    Figure 10:Dehazing result of proposed method and state-of-art methods.(a) Original image.(b) Haze image.(c) Guo et al.[10].(d) Zhao et al.[11].(e) Fattal [24].(f) Kumari et al.[25].(g) Proposed

    Figure 11:Dehazing result of proposed method and state-of-art methods.(a) Original image.(b) Haze image.(c) He et al.[6].(d) Nair et al.[12].(e) Tan [26].(f) Liu et al.[27].(g) Proposed

    Figure 12:Dehazing result of proposed method in comparison with other state-of-art methods.(a) Original image.(b) Haze image.(c) He et al.[6].(d) Fei et al.[14].(e) Liu et al.[27].(f) Proposed

    5 Conclusion

In this paper, we have proposed an efficient framework for image dehazing and defogging using a CGAN. The proposed CGAN is a combination of two networks, a generator and a discriminator. The generator network produces a clear image from the input foggy or hazy image while preserving the structure and detailed information of the input. The generator consists of three parts, atmospheric light, transmission map, and scene radiance, and it estimates all three effectively. The discriminator network distinguishes the input fog- and haze-free image from the restored image. The experimental results demonstrated that the proposed framework performs well and achieves a better image restoration rate than existing state-of-the-art techniques. In the future, the proposed framework can be extended to thick haze and to night-time foggy and hazy images. Nowadays, fog is a major cause of road accidents: in winter, heavy and thick fog impairs drivers' sight and causes accidents, and an automated framework with increased efficiency can help drivers see clearly during foggy weather.

    Funding Statement:We deeply acknowledge Taif University for Supporting and funding this study through Taif University Researchers Supporting Project number (TURSP-2020/115),Taif University,Taif,Saudi Arabia.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
