賀興容 龔奕宇 范松海 吳天寶 劉益岑 劉小江
CLC number: TN911.73-34; TP391          Document code: A          Article ID: 1004-373X(2019)01-0057-05
Abstract: An infrared and visible image fusion algorithm based on the frame difference detection technique and region features is proposed to improve the fusion quality of infrared images and reduce fusion complexity. The frame difference method is designed to detect targets in the infrared image for target clustering and image segmentation, and targets are accurately located by means of inter-frame information. Different fusion rules are designed according to the characteristics of the target regions, and image fusion is realized through the effective information complementation of the infrared and visible images. A theoretical analysis of the complexity of the fusion algorithm is carried out. Fusion experiments are performed on infrared and visible images for both stationary and moving observable targets. The experimental results show that, in comparison with existing image fusion techniques, the proposed technique achieves higher fusion quality, and its fused image reflects the target and background more accurately.
Keywords: infrared and visible image fusion; information complementation; frame difference detection; target clustering; image segmentation; algorithm complexity comparison
An unmanned aerial vehicle (UAV) is an aircraft capable of flight without an onboard pilot. It can be operated by remote control, semi-autonomously, autonomously, or by a combination of these modes, and can perform assigned tasks in many fields [1]. To accomplish such varied tasks, a UAV must first be equipped with sensor payloads, including infrared and visible-light sensors, to capture images of the task area and achieve environmental perception. Fusing infrared and visible images of the same scene is the basis of UAV target detection and recognition. However, the images captured by airborne sensors are dynamic, which increases the difficulty of fusing visible and infrared images [2]. Extracting target information is essential for situation assessment. In fact, visible images are rich in texture and color information, while target information, especially for man-made targets, is very salient in infrared images. Accordingly, to exploit target information more effectively, image regions can be partitioned on the basis of target regions.
Image fusion is divided into four levels of information representation: signal, pixel, feature, and decision [3]. Reference [4] proposes an infrared and visible image fusion algorithm based on sparse representation and the non-subsampled contourlet transform (NSCT): the NSCT is applied to obtain high- and low-frequency coefficients, sparse representation is used to extract low-frequency features, and different fusion rules are then designed to fuse the high- and low-frequency coefficients; however, this algorithm cannot effectively capture target information. Reference [5] proposes an infrared and visible image fusion algorithm based on color contrast enhancement, which benefits target detection and improves observer performance. Reference [6] proposes an infrared and visible image fusion framework based on data assimilation and the genetic algorithm (GA). Reference [7] introduces the fast discrete curvelet transform (FDCT) and a sharpness evaluation operator to realize infrared and visible image fusion. Other fusion algorithms for infrared and color visible images can be found in references [2, 8-9], among which algorithms based on the discrete wavelet transform (DWT) perform well. Although the NSCT, FDCT, and other novel transforms outperform the DWT in some respects, the transform itself is not the key or core problem of infrared and visible image fusion [10]. This paper therefore uses only the DWT, whose computational complexity is low [11], to study fusion-related target detection and perception. The visible and infrared image fusion algorithms proposed in references [12-13] studied target fusion detection and showed that using correlated information across different frames can satisfy the requirements of real-time fusion at low cost.
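As a minimal sketch of the kind of DWT-based fusion discussed above (the single-level Haar wavelet and the two fusion rules here, averaging the low-frequency sub-band and keeping the larger-magnitude detail coefficient, are illustrative assumptions, not this paper's exact method):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns (LL, (LH, HL, HH))."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0  # low-frequency approximation
    lh = (a + b - c - d) / 4.0  # horizontal detail
    hl = (a - b + c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    lh, hl, hh = bands
    m, n = ll.shape
    out = np.empty((2 * m, 2 * n))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def dwt_fuse(ir, vis):
    """Average the low-frequency sub-bands; keep the larger-magnitude
    detail coefficient from either source image."""
    ll_i, det_i = haar_dwt2(ir)
    ll_v, det_v = haar_dwt2(vis)
    ll_f = (ll_i + ll_v) / 2.0
    det_f = tuple(np.where(np.abs(di) >= np.abs(dv), di, dv)
                  for di, dv in zip(det_i, det_v))
    return haar_idwt2(ll_f, det_f)
```

When both inputs are identical, the two rules reduce to the identity, so the fused image reconstructs the input exactly, a quick sanity check on the transform pair.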
Dynamic image fusion has its own characteristics: the fusion algorithm must be temporally and spatially consistent and highly robust. To exploit the features of different regions and obtain more effective target and background information, an infrared and visible image fusion algorithm based on the frame difference detection technique and region features is proposed. Building on dynamic target detection and target-region segmentation, inter-frame information is used to achieve stability and temporal consistency, and different fusion rules are designed according to the characteristics of the target regions to fuse the visible and infrared images. Finally, the fusion quality of the proposed algorithm is tested and its complexity is analyzed.
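The frame-difference detection step can be sketched as follows (the threshold value, function names, and the simple bounding-box localization are illustrative assumptions; the paper's clustering of targets is not reproduced here): difference consecutive infrared frames, threshold the result to a binary motion mask, and take the extent of the changed pixels as a coarse target region.

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, thresh=25):
    """Binary motion mask: pixels whose absolute inter-frame
    difference exceeds the threshold are target candidates."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return diff > thresh

def target_bounding_box(mask):
    """Coarse target region: bounding box (ymin, ymax, xmin, xmax)
    of all changed pixels, or None when nothing moved."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), ys.max(), xs.min(), xs.max()
```

For example, a bright blob appearing between two otherwise identical frames produces a mask covering exactly that blob, and its bounding box gives the target region used for the subsequent segmentation.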
Next, a fusion experiment on two images is performed under the condition that the targets in the visible and infrared images are moving and observable. Several moving vehicle targets are present in the source images, which increases the difficulty of multi-source image fusion. Through target detection, the source images are partitioned into four distinct regions, R1, R2, R3 and R4, plus the background, as shown in Fig. 5c) and Fig. 5d). Different fusion rules are applied to the different regions, and a comparative study is made against the wavelet-based algorithm. The results in Fig. 5e) and Fig. 5f) show that the proposed algorithm yields a richer background and more prominent targets than the wavelet-based algorithm.
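Region-wise fusion of the kind used in this experiment can be sketched as follows (the integer label map and the per-region weights are illustrative assumptions, not the paper's actual rules): each detected region gets its own rule, here a weighted blend favoring the infrared pixel inside target regions and the visible pixel in the background.

```python
import numpy as np

def region_fuse(ir, vis, labels, rules):
    """Region-wise fusion.
    labels: integer region map (0 = background, 1..k = target regions).
    rules:  dict mapping region label -> infrared weight w in [0, 1];
            inside that region, fused = w*ir + (1-w)*vis.
    Unlisted regions (including the background) take the visible pixel."""
    fused = vis.astype(float).copy()
    for lab, w in rules.items():
        m = labels == lab
        fused[m] = w * ir[m] + (1.0 - w) * vis[m]
    return fused
```

A pure infrared rule (w = 1) keeps the salient hot target, while fractional weights preserve some visible-band texture inside a region; the background keeps the texture- and color-rich visible image, matching the complementary roles of the two sensors described above.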
To better demonstrate the superiority of the proposed algorithm, its complexity is compared with that of several state-of-the-art algorithms: weighted averaging, principal component analysis, the Laplacian pyramid, the gradient pyramid, the contrast pyramid, the discrete wavelet transform, principal component analysis with global and local features (GLFP) [2], linear discriminant analysis with enhanced Gabor features (EGFL) [4], 2DPCA with virtual sample expansion (VSE-2DPCA) [9], blockwise Fisher linear discriminant analysis (BFLD) [2], and generalized Fisher linear discriminant analysis (GFLD) [11]. The training time complexity, testing time complexity, and space complexity are compared separately, with the results given in Table 2, where m and n denote the numbers of rows and columns of the image matrix, and L, M and N denote entropy, mutual information, and edge preservation, respectively.
As Table 2 shows, compared with the weighted averaging algorithm, the proposed algorithm has a slightly higher time complexity in the segmentation stage and is otherwise identical; compared with principal component analysis, all of its complexities are comparable. Compared with the Laplacian pyramid and gradient pyramid algorithms, its time complexity in the segmentation stage is slightly higher, but its time complexity in the fusion stage is half of theirs. Compared with the contrast pyramid and discrete wavelet transform algorithms, its time complexities in both the segmentation and fusion stages, as well as its overall space complexity, are much lower. While improving the recognition rate, the proposed algorithm still maintains a complexity comparable to or lower than that of the related algorithms.
This paper proposes an infrared and visible image fusion algorithm based on the frame difference detection technique and region features, which helps a UAV achieve environmental perception. Unlike traditional fusion algorithms based on region segmentation, a frame difference algorithm is proposed for target detection and used to segment the source images, and different fusion rules are designed on the basis of the target regions to fuse the visible and infrared images, so that more target information is obtained and more background information from the source images is preserved. The experimental results verify the soundness and superiority of the proposed algorithm.
References
[1] LIU Zhanwen, FENG Yan, ZHANG Yifan. A fusion algorithm for infrared and visible images based on RDU-PCNN and ICA-bases in NSST domain [J]. Infrared physics and technology, 2016, 79(8): 183-190.
[2] YIN Ming, DUAN Puhong, LIU Wei. A novel infrared and visible image fusion algorithm based on shift-invariant dual-tree complex shearlet transform and sparse representation [J]. Neurocomputing, 2016, 32(6): 11-17.
[3] LIU C H, QI Y, DING W R. Infrared and visible image fusion method based on saliency detection in sparse domain [J]. Infrared physics and technology, 2017, 83(4): 94-102.
[4] WANG Jun, PENG Jinye, HE Guiqing, et al. Fusion method for visible and infrared image based on non-subsampled contourlet transform and sparse representation [J]. Acta armamentarii, 2014, 34(7): 815-820.
[5] QIAN Xiaoyan, HAN Lei, WANG Bangfeng. A fast fusion algorithm of visible and infrared images [J]. Journal of computer-aided design and computer graphics, 2012, 23(7): 1211-1216.
[6] FU Wei, PEI Huan, LIAO Xiaoyu, et al. Data assimilation algorithm of multi-source remote sensing image fusion [J]. Acta automatica Sinica, 2013, 37(3): 309-315.
[7] SUN Zhenfeng, LIU Jun, CHANG Qimin. Fusion of infrared and visible images based on focus measure operators in the curvelet domain [J]. Applied optics, 2012, 51(12): 1910-1921.
[8] LI Guangxin, WU Weiping, HU Jun. Luminance-contrast transfer based fusion algorithm for infrared and color visible images [J]. Chinese journal of optics, 2011, 4(2): 161-166.
[9] XING Yaqiong, WANG Xiaodan, LIU Jian, et al. Fusion technique for infrared and color visible images in the non-subsampled shearlet transform domain [J]. Systems engineering: theory and practice, 2016, 35(6): 536-544.
[10] YANG Fengbao, DONG Anran, ZHANG Lei. Infrared polarization image fusion using the synergistic combination of DWT, NSCT and improved PCA [J]. Infrared technology, 2017, 39(3): 201-208.
[11] LIU Wei, YIN Ming, LUAN Jing, et al. Image fusion algorithm based on shift-invariant shearlet transform [J]. Acta photonica Sinica, 2013, 42(4): 496-503.
[12] ZHAO Chunhui, LIU Chunhong, WANG Kecheng. Research on fusion of hyperspectral remote sensing image based on second generation wavelet [J]. Acta optica Sinica, 2015, 25(7): 891-896.
[13] ZHOU Zehua, TAN Min. Infrared image and visible image fusion based on wavelet transform [J]. Advanced materials research, 2014, 35(6): 1011-1017.
[14] JING Z L, PAN H, LI Y K, et al. Evaluation of focus measures in multi-focus image fusion [J]. Pattern recognition letters, 2007, 28(4): 493-500.