XIANG Wei, ZHONG Kuisong, ZHANG Zhenbo
Abstract: To address the poor visual perception of autonomous vehicles on urban roads in foggy weather, an improved fast image dehazing algorithm based on the atmospheric light value is proposed, and its effectiveness is verified. First, a dynamic atmospheric light estimation strategy is formulated, an adaptive trigger function for the dynamic estimation is designed, and the atmospheric light is estimated through the atmospheric scattering model. Second, the dark channel map of the hazy image is obtained with a minimum filter, and the corresponding transmission map is estimated. Third, an atmospheric light calculation strategy is formulated and the dehazing coefficient is optimized. Finally, a histogram equalization algorithm suppresses residual noise and further improves the contrast of the haze-free image. Experimental results show that, compared with DCP, CAP, and other algorithms, the proposed algorithm improves on the NIQE and SSEQ metrics, offers better detail recovery and processing performance, and is more conducive to the extraction of traffic information.
Key words: autonomous driving; image dehazing; atmospheric scattering model; dynamic atmospheric light; dark channel; histogram equalization
CLC number: TP391    Document code: A

Environment perception is a key component of the autonomous driving field [1]. With the rapid progress of image processing and computer hardware in recent years, vision-based environment perception has found wide application. However, visual perception depends heavily on visibility conditions; under the frequent rain and fog of southwest China in particular, the visual perception of autonomous vehicles is far from ideal. Image restoration for foggy weather is therefore one of the open problems in environment perception [2].
Existing dehazing methods for visual images fall into three broad categories [3]. The first is image enhancement, which restores the image by removing noise and boosting contrast, e.g., histogram equalization, wavelet transform, Retinex, and homomorphic filtering [4]. The second is physics-based, restoring the image through the atmospheric scattering model. HE et al. [5] proposed the dark channel prior (DCP) dehazing algorithm, which works well in general but suffers severe color distortion in sky regions and is unsuited to non-uniform sky conditions. To address this, HE et al. [6] introduced the guided filter, which alleviates the sky distortion to some extent and markedly improves runtime, but its dehazing remains incomplete. Reference [7] observed that haze concentration is strongly tied to scene depth: the higher the concentration, the greater the depth, and the larger the gap between the image's brightness and saturation. The color attenuation prior (CAP) algorithm built on this finding performs well on single-image dehazing, but its key parameter depends heavily on scene depth, so its generalization ability is limited. Reference [8] proposed a dehazing algorithm based on a gamma correction prior (GCP) that uses a global search strategy and generalizes well. The third category is neural-network dehazing, whose essence is to train an end-to-end network model that recovers the haze-free image from the hazy one. Two main lines exist at present: one uses convolutional neural networks (CNN) to predict certain parameters of the atmospheric scattering model and then restores the image through the model [9-11]; the other uses generative adversarial networks (GAN) to generate a clear, haze-free image directly from the hazy input [12-15]. Reference [9] first proposed a dehazing network named DehazeNet, whose deep CNN architecture estimates the transmission of the hazy image, which is then substituted into the atmospheric scattering model to recover the haze-free image. Reference [13] proposed a dehazing network using stacked conditional GANs that restores each RGB color channel independently and generalizes well. Neural-network methods are efficient, but the lack of real training data or prior parameters limits their dehazing performance to some degree.
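To make the physics-based family above concrete, the following is a minimal NumPy sketch of dark-channel-prior dehazing in the style of He et al. [5]: the dark channel via a local minimum filter, the brightest-pixels heuristic for atmospheric light, and recovery through the scattering model I = J·t + A·(1 − t). The patch size, omega, and t0 values are common illustrative defaults, not the parameters of this paper's algorithm.

```python
import numpy as np

def dark_channel(image, patch=15):
    """Per-pixel minimum over RGB, then a local minimum (erosion)
    over a patch x patch window."""
    min_rgb = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark

def estimate_atmospheric_light(image, dark, top=0.001):
    """Average the hazy-image pixels whose dark-channel values are
    in the top 0.1% (He et al.'s heuristic)."""
    n = max(1, int(dark.size * top))
    idx = np.argsort(dark.ravel())[-n:]
    return image.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(image, omega=0.95, t0=0.1, patch=15):
    """Recover J from I = J*t + A*(1 - t), with the transmission
    estimated as t = 1 - omega * dark(I / A)."""
    dark = dark_channel(image, patch)
    A = estimate_atmospheric_light(image, dark)
    t = 1.0 - omega * dark_channel(image / A, patch)
    t = np.clip(t, t0, 1.0)  # lower bound keeps dense haze stable
    return (image - A) / t[..., None] + A
```

The omega < 1 factor deliberately keeps a trace of haze for depth perception; the proposed algorithm's optimized dehazing coefficient plays a related role.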
本文立足于此前的工作基礎(chǔ)[16],利用大氣散射模型的最小濾波技術(shù)與限制,對(duì)大氣散射模型參數(shù)重新進(jìn)行了優(yōu)化,提升了算法性能,同時(shí)引入了限制對(duì)比度的直方圖均衡算法并重新設(shè)計(jì)了對(duì)比度增強(qiáng)模塊,提升了去霧圖像的對(duì)比度,并將本文算法在無(wú)參考評(píng)價(jià)指標(biāo)下進(jìn)行了測(cè)試。
1 Algorithm Principle
While a vehicle is driving, the illumination of a given scene changes little over a short time, so the atmospheric light value can be treated as approximately constant under this condition. Under this assumption, a continuous-image dehazing algorithm based on the atmospheric scattering model can estimate the atmospheric light dynamically over frames of the same scene that change little within a short time. Because this greatly reduces how often the atmospheric light is estimated, it saves processing time with little effect on the restoration of hazy images. An adaptive trigger function is therefore designed: when its condition holds, the atmospheric light is estimated on the current frame. To keep the trigger from failing in overly static scenes, if the condition is not met, the atmospheric light is forcibly recomputed every 20 s and carried into the next round. Transmission map estimation and haze-free scene recovery then follow in turn, and finally adaptive histogram equalization is applied to the frame to raise contrast and brightness and improve image quality. The algorithm flow is shown in Fig. 1.
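The scheduling logic above can be sketched as follows. The 20 s forced refresh comes from the text; the trigger signal itself belongs to Section 1.1 and is not reproduced here, so a simple inter-frame mean-brightness change with a hypothetical threshold stands in for it.

```python
import numpy as np

FORCE_INTERVAL = 20.0   # forced refresh period in seconds (from the text)
TRIGGER_THRESH = 0.05   # hypothetical brightness-change threshold

class AtmosphericLightScheduler:
    """Re-estimate the atmospheric light only when the scene changes
    noticeably, or at the latest every FORCE_INTERVAL seconds."""

    def __init__(self):
        self.last_mean = None       # mean brightness at last estimation
        self.last_update = -np.inf  # timestamp of last estimation
        self.A = None               # cached atmospheric light

    def should_update(self, frame, t):
        if self.A is None or t - self.last_update >= FORCE_INTERVAL:
            return True             # first frame, or forced refresh
        # stand-in trigger: change in mean frame brightness
        return abs(frame.mean() - self.last_mean) > TRIGGER_THRESH

    def atmospheric_light(self, frame, t, estimator):
        if self.should_update(frame, t):
            self.A = estimator(frame)   # run the expensive estimation
            self.last_mean = frame.mean()
            self.last_update = t
        return self.A                   # otherwise reuse the cached value
```

Frames between triggers reuse the cached value, which is where the time saving claimed above comes from.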
1.1 Adaptive trigger function
1.2 Atmospheric light estimation
1.3 Transmission map estimation
1.3.1 Dark channel prior theory
1.3.2 Transmission map estimation based on guided filtering
1.4 Haze-free image restoration
1.5 Contrast enhancement
Although image quality improves after the above processing, some haze is deliberately retained, so the image still suffers from insufficient contrast and dimness. Contrast-limited adaptive histogram equalization [17] is therefore further applied to bring out the image's features and details. Histogram equalization is an image processing technique that spreads pixels according to their color-channel values to obtain better contrast [18], and it effectively raises the contrast and brightness of the dehazed image.
To highlight road features and raise local contrast, this paper uses adaptive histogram equalization: the image is divided into tiles and each tile is equalized separately. To keep the contrast adjustment from becoming excessive and distorting the image, local contrast is limited by setting a ClipLimit color-contrast threshold, i.e., contrast-limited adaptive histogram equalization. The flow of the contrast-enhancement module is as follows:
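The tile-wise, clip-limited equalization described above can be sketched in NumPy as below. Real CLAHE (e.g., cv2.createCLAHE) additionally interpolates the per-tile mappings bilinearly to hide block seams; that step is omitted here, and the tile grid and clip limit are illustrative values, not this paper's settings.

```python
import numpy as np

def clip_limited_equalize(tile, clip_limit=40, n_bins=256):
    """Equalize one tile with its histogram clipped at clip_limit;
    the clipped excess is redistributed uniformly over all bins."""
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, 256))
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // n_bins
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255
    return cdf[tile].astype(np.uint8)   # map pixels through the CDF

def clahe_like(gray, tiles=(8, 8), clip_limit=40):
    """Split a grayscale image into a tile grid and apply
    clip-limited equalization independently per tile."""
    out = np.empty_like(gray)
    h, w = gray.shape
    ys = np.linspace(0, h, tiles[0] + 1, dtype=int)
    xs = np.linspace(0, w, tiles[1] + 1, dtype=int)
    for y0, y1 in zip(ys[:-1], ys[1:]):
        for x0, x1 in zip(xs[:-1], xs[1:]):
            out[y0:y1, x0:x1] = clip_limited_equalize(
                gray[y0:y1, x0:x1], clip_limit)
    return out
```

Clipping the histogram caps the slope of the mapping, which is what prevents the over-amplified local contrast (and amplified noise) that unlimited adaptive equalization would produce.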
As Fig. 2 shows: AOD dehazes only weakly and improves image quality little; CAP does slightly better than AOD, but the improvement is still not obvious; after DCP, dark regions have low contrast, details and other features are unclear, and a halo appears in the sky region; MSRCR has low color contrast, overall color distortion, poor dehazing, and does not bring out road details; GridDehazeNet yields low overall brightness, poor tonal transitions, and visible vignetting. By comparison, the proposed algorithm clearly raises brightness and contrast: details are sharp, road and vehicle features stand out, and real colors are preserved to the greatest extent. Comparing the color histograms of the algorithms: the original image's color distribution is sparse in frequency, narrow in band, and concentrated at the band extremes, whereas an ideal image's histogram is dense in frequency, wide in band, and concentrated in the middle bands. AOD is somewhat denser in color frequency than the original image but still differs little from it; CAP and GridDehazeNet improve on both frequency and band, giving denser histograms with extended bands; limited by the algorithm itself, DCP's halo is essentially a high-frequency tone, so its histogram concentrates at high frequencies; MSRCR also extends the bands, mainly toward the mid-band, but its sparse frequency distribution corresponds to low image contrast. The histogram of the proposed algorithm is closer to the ideal distribution than those of the other algorithms in frequency, band, and shape: frequencies concentrate in the middle bands and taper gradually toward both ends, the low-to-mid and mid-to-high transitions are smooth and essentially free of the cliff-like jumps seen in the other algorithms, the distribution is even and dense, and the color range is comprehensive. The image therefore has better contrast and richer detail.
并且,由圖3可知:AOD和GridDehazeNet在檢測(cè)精度上相比原始圖像提升不大,MSPCR和CAP算法增強(qiáng)后的圖像檢測(cè)精度有所提升,DCP則在包含天空的交通場(chǎng)景下檢測(cè)效果不佳,而本文算法在檢測(cè)精度上相對(duì)較高,在一些小目標(biāo)上也略勝一籌。因此,增強(qiáng)后的圖像對(duì)目標(biāo)檢測(cè)等下游算法也更加友好。
由于道路環(huán)境復(fù)雜多變,無(wú)法獲取無(wú)霧條件下某位置的標(biāo)準(zhǔn)圖像,因此圖像評(píng)價(jià)的全參考(如PSNR等)指標(biāo)不適用。本文采用NIQE[20]、Brisque[21]和SSEQ[22]等無(wú)參考的客觀評(píng)價(jià)指標(biāo)進(jìn)行驗(yàn)證,表2為本文算法與其他幾種算法在RTTS數(shù)據(jù)集下的評(píng)價(jià)數(shù)據(jù),其中最優(yōu)數(shù)據(jù)用粗體標(biāo)出,次優(yōu)數(shù)據(jù)用下劃線(xiàn)標(biāo)出。
As Table 2 shows, the haze-free images produced by the proposed algorithm score well on all metrics: they are best on SSEQ and NIQE, with a clear margin over the runner-up. On BRISQUE the algorithm trails MSRCR, but the gap is very small and the score remains good.
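SSEQ, used in the comparison above, scores an image from the entropies of local blocks in the spatial and spectral (DCT) domains [22]. As a rough illustration of the spatial half only (block size hypothetical; this is not the full metric, which also pools DCT-domain entropies through a trained regressor):

```python
import numpy as np

def block_entropies(gray, block=8):
    """Shannon entropy of each non-overlapping block x block patch
    of a grayscale image; SSEQ pools such local entropies."""
    h, w = gray.shape
    ents = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = gray[y:y + block, x:x + block]
            p, _ = np.histogram(patch, bins=256, range=(0, 256))
            p = p[p > 0] / patch.size       # empirical probabilities
            ents.append(-(p * np.log2(p)).sum())
    return np.array(ents)
```

Flat, featureless regions score near zero while textured regions score high, which is why such entropy statistics track perceived detail without needing a reference image.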
To further verify performance, dehazing experiments were run with the different algorithms on real foggy videos at resolutions of 1 280×720 and 720×540, with durations of 67 s and 54 s respectively; the resulting performance data are given in Table 3. As Table 3 shows: at 1 280×720, the proposed algorithm's throughput is 2.3 times that of DCP and 2.9 times that of CAP; at 720×540, it is 1.5 times that of DCP and 1.8 times that of CAP. Against DCP, the fastest of the other four algorithms: when the resolution is reduced by a factor of 2.4, DCP's throughput rises by a factor of 1.5, while under the same conditions the proposed algorithm's rises by a factor of 1.7, so its performance advantage holds. Overall, the proposed algorithm has better image-processing performance than the other algorithms.
3 Conclusion
本文基于動(dòng)態(tài)大氣光的暗通道先驗(yàn)理論和直方圖均衡化算法,改進(jìn)了一種快速、有效的圖像去霧方法,在NIQE、SSEQ等無(wú)參考指標(biāo)下有著很好的圖像性能。在暗通道先驗(yàn)去霧的基礎(chǔ)上優(yōu)化了去霧系數(shù),較好地規(guī)避了天空區(qū)域的問(wèn)題。對(duì)去霧圖像進(jìn)行了限制對(duì)比度的直方圖均衡化處理,增強(qiáng)了道路與背景的對(duì)比度,凸顯了道路信息,更有利于交通目標(biāo)等信息的提取,對(duì)自動(dòng)駕駛系統(tǒng)做出正確決策具有較大的指導(dǎo)意義。同時(shí),基于大氣散射模型的去霧方法,自適應(yīng)地動(dòng)態(tài)估計(jì)大氣光值,節(jié)省了大量的時(shí)間,擁有更好的圖像處理性能。參考文獻(xiàn):
[1] CHEN Q P, XIE Y F, GUO S F, et al. Sensing system of environmental perception technologies for driverless vehicle: a review of state of the art and challenges[J]. Sensors and Actuators A: Physical, 2021, 319: 112566.1-112566.18.
[2] JIANG Q Y, ZHANG L J, MENG D J. Target detection algorithm based on MMW radar and camera fusion[C]//2019 IEEE Intelligent Transportation Systems Conference (ITSC). Auckland: IEEE, 2019: 1-6.
[3] BABU G H, VENKATRAM N. A survey on analysis and implementation of state-of-the-art haze removal techniques[J]. Journal of Visual Communication and Image Representation, 2020, 72: 102912.1-102912.15.
[4] BANERJEE S, SINHA CHAUDHURI S. Nighttime image-dehazing: a review and quantitative benchmarking[J]. Archives of Computational Methods in Engineering, 2021, 28(4): 2943-2975.
[5] HE K M, SUN J, TANG X O. Single image haze removal using dark channel prior[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(12): 2341-2353.
[6] HE K M, SUN J, TANG X O. Guided image filtering[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(6): 1397-1409.
[7] ZHU Q S, MAI J M, SHAO L. A fast single image haze removal algorithm using color attenuation prior[J]. IEEE Transactions on Image Processing, 2015, 24(11): 3522-3533.
[8] JU M, DING C, GUO Y J, et al. IDGCP: image dehazing based on gamma correction prior[J]. IEEE Transactions on Image Processing, 2019, 29: 3104-3118.
[9] CAI B L, XU X M, JIA K, et al. DehazeNet: an end-to-end system for single image haze removal[J]. IEEE Transactions on Image Processing, 2016, 25(11): 5187-5198.
[10] LI B Y, PENG X L, WANG Z Y, et al. An all-in-one network for dehazing and beyond[DB/OL]. (2017-07-20)[2022-06-15]. https://arxiv.org/abs/1707.06543.
[11]LIU X H, MA Y R, SHI Z H, et al. GridDehazeNet: attention-based multi-scale network for image dehazing[C]// Proceedings of the IEEE/CVF International Conference on Computer Vision. Seoul: IEEE, 2019: 7314-7323.
[12]LEE B U, LEE K, OH J, et al. CNN-based simultaneous dehazing and depth estimation[C]// 2020 IEEE International Conference on Robotics and Automation (ICRA). Paris: IEEE, 2020: 9722-9728.
[13] SUÁREZ P L, SAPPA A D, VINTIMILLA B X, et al. Deep learning based single image dehazing[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Salt Lake City: IEEE, 2018: 1169-1176.
[14] ENGIN D, GENÇ A, KEMAL EKENEL H. Cycle-Dehaze: enhanced CycleGAN for single image dehazing[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Salt Lake City: IEEE, 2018: 825-833.
[15] CHEN D D, HE M M, FAN Q N, et al. Gated context aggregation network for image dehazing and deraining[C]// 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). Waikoloa: IEEE, 2019: 1375-1383.
[16] ZHONG K S, FENG Z G, ZHANG Z B, et al. Research on real-time fast dehazing of visual images for intelligent vehicles on foggy roads[J]. Automobile Technology, 2022(5): 27-33. (in Chinese)
[17] PIZER S M, AMBURN E P, AUSTIN J D, et al. Adaptive histogram equalization and its variations[J]. Computer Vision, Graphics, and Image Processing, 1987, 39(3): 355-368.
[18] BI X L, QIU Y M, XIAO B, et al. Image histogram equalization detection based on statistical features[J]. Chinese Journal of Computers, 2021, 44(2): 292-303. (in Chinese)
[19]WANG J B, LU K, XUE J, et al. Single image dehazing based on the physical model and MSRCR algorithm[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2017, 28(9): 2190-2199.
[20] MITTAL A, SOUNDARARAJAN R, BOVIK A C. Making a “completely blind” image quality analyzer[J]. IEEE Signal Processing Letters, 2013, 20(3): 209-212.
[21]MITTAL A, MOORTHY A K, BOVIK A C. No-reference image quality assessment in the spatial domain[J]. IEEE Transactions on Image Processing, 2012, 21(12): 4695-4708.
[22]LIU L X, LIU B, HUANG H, et al. No-reference image quality assessment based on spatial and spectral entropies[J]. Signal Processing: Image Communication, 2014, 29(8): 856-863.
(Responsible editor: ZHOU Xiaonan)
A Visual Image Dehazing Method for Urban Roads
XIANG Wei, ZHONG Kuisong, ZHANG Zhenbo
(1.Guizhou Communications Polytechnic, Guiyang 551400, China;
2.School of Mechanical Engineering, Guizhou University, Guiyang 550025, China)

Abstract: Aiming at the poor visual environment perception of autonomous vehicles in the foggy environment of urban roads, an improved fast dehazing algorithm based on the atmospheric scattering model and image processing technology is proposed, and the effectiveness of the algorithm is verified by experiments. Firstly, the atmospheric light dynamic estimation strategy is formulated, the adaptive trigger function for atmospheric light dynamic estimation is designed, and the atmospheric light is estimated by the atmospheric scattering model. Secondly, the dark channel map of the hazy image is obtained by the minimum filtering technique, and the transmission map is estimated. Thirdly, the atmospheric light calculation strategy is formulated, and the dehazing coefficient is optimized. Finally, the residual noise is suppressed by the histogram equalization algorithm, which further improves the contrast of the haze-free image. The experimental results show that, compared with algorithms such as DCP and CAP, the proposed algorithm improves on the NIQE and SSEQ performance indicators, has better detail recovery ability and processing performance, and is more conducive to the extraction of traffic information.
Key words: autonomous driving; image dehazing; atmospheric scattering model; dynamic atmospheric light; dark channel; histogram equalization