
Saliency Detection Network Based on Edge Detection and Skeleton Extraction


Yang Aiping1, Cheng Simeng1, Wang Jinbin1, Song Shangyang1, Ding Xuewen2

(1. School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China; 2. School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin 300222, China)

Some recent methods perform saliency detection through joint multitask learning, which improves detection accuracy to some extent, but false and missed detections persist. The cause is that the tasks have different optimization objectives and widely differing feature domains, leaving the network unable to adequately discriminate features such as saliency and object boundaries. To address this, a multitask-assisted saliency detection network built on edge detection and skeleton extraction is proposed. It comprises a feature extraction subnetwork, an edge detection subnetwork, a skeleton extraction subnetwork, and a saliency filling subnetwork. The feature extraction subnetwork extracts multiscale image features with a pretrained ResNet101 model. The edge detection subnetwork fuses the features of the first three layers, fully preserving the boundary information of the salient object; the skeleton extraction subnetwork fuses the features of the last two layers, accurately locating the center of the salient object. The two subnetworks are trained separately on an edge detection dataset and a skeleton extraction dataset, and the best edge detection model and skeleton extraction model are retained as pretrained models to assist the saliency detection task. To reduce the discrepancy between the network's optimization objectives and feature domains, the saliency filling subnetwork fuses and nonlinearly maps the extracted edge and skeleton features. Experimental results on four datasets show that the proposed method effectively restores missing salient regions and outperforms other salient object detection methods.

edge detection; skeleton extraction; multitask; saliency detection network

Saliency detection uses a computer to mimic the human visual system, rapidly analyzing an input image and retaining its most eye-catching regions. It is widely applied in computer vision tasks such as image retrieval [1], object detection [2], action recognition [3], image segmentation [4], and object recognition [5].

Existing saliency detection methods mainly follow the idea of "detect the body first, refine the boundary second", detecting the salient object region and the salient object boundary with separate tasks. According to how the boundary is detected, they fall into multitask detection networks based on a single dataset and multitask detection networks based on multiple datasets.

Single-dataset multitask detection networks generally design two parallel networks to detect the salient object region and its boundary, both supervised on the DUTS dataset [6]. Wei et al. [7] decomposed the input image into a boundary map and a body map through mathematical operations and designed two subnetworks to learn them separately. Song et al. [8] proposed a saliency detection network with multilevel boundary refinement, first producing a coarse saliency prediction and then refining the saliency boundary. Because these methods rely on boundary detection operators whose computation introduces errors, the extracted boundaries are incomplete. Consequently, some researchers supervise the network with multiple datasets to improve both saliency discrimination and boundary extraction. Wu et al. [9] jointly trained edge detection and saliency detection to strengthen feature extraction; however, their method does not consider cooperation among the detection tasks, so the extracted object regions and boundary features are incomplete. Building on this, Liu et al. [10] added a skeleton detection task and jointly trained the three tasks to improve boundary detection and center localization. Their method exchanges multitask information through weight sharing, ignoring the differences among the detection tasks, which leaves the prediction maps incomplete.

The analysis above shows that most current multitask detection methods exchange information by feature stacking or weight sharing without accounting for the feature-domain differences among tasks, leading to incomplete feature extraction. Unlike existing methods, this paper adopts a divide-and-conquer strategy and proposes a multidataset, multitask-assisted saliency detection network. Specifically, the edge detection task and the skeleton extraction task are trained independently, the best edge detection model and skeleton extraction model are retained, and they serve as pretrained models to assist the salient object detection task, extracting edge features and skeleton features to accurately locate the boundary and the center of the salient object. This mitigates the incomplete feature extraction caused by the feature-domain differences among task objectives. Finally, the extracted edge and skeleton features are fused and nonlinearly mapped to obtain a complete saliency map.

1 Proposed method

This paper proposes a multitask-assisted saliency detection network whose overall structure is shown in Fig. 1. The network consists of a feature extraction subnetwork, an edge detection subnetwork, a skeleton extraction subnetwork, and a saliency filling subnetwork. The feature extraction subnetwork extracts multiscale features from the input image and is a cascade of five residual convolution blocks, denoted RB1-RB5. The edge detection subnetwork extracts the contour of the salient object from the first three blocks, RB1, RB2, and RB3, yielding boundary information; the skeleton extraction subnetwork extracts the skeleton of the salient object from the last two blocks, RB4 and RB5, locating its center. To improve the network's discriminative ability, a pyramid convolution module enlarges the receptive field of the features, and a feature enhancement module adaptively weights them. Finally, the saliency filling subnetwork fills in the salient object from its boundary information and center position to produce the saliency prediction map.

Fig. 1 Overall network structure
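To make the layering in Fig. 1 concrete, the following is a minimal PyTorch-style sketch of the four-subnetwork wiring, assuming ResNet101 stages serve as RB1-RB5. The common projection width, the additive fusion, and the single-channel heads are illustrative assumptions rather than the authors' implementation, and the PCM and FEM of Sections 1.1.1-1.1.2 are omitted for brevity.

```python
# Minimal, assumption-laden sketch of the four-subnetwork layout:
# the first three ResNet stages feed the edge branch, the last two
# feed the skeleton branch, and the filling branch fuses both.
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet101

class SaliencyNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet101(weights="IMAGENET1K_V1")  # pretrained model
        self.rb1 = nn.Sequential(backbone.conv1, backbone.bn1,
                                 backbone.relu, backbone.maxpool)
        self.rb2, self.rb3 = backbone.layer1, backbone.layer2
        self.rb4, self.rb5 = backbone.layer3, backbone.layer4
        # 1x1 projections to a common width so stages can be summed
        self.proj = nn.ModuleList(
            nn.Conv2d(c, 64, 1) for c in (64, 256, 512, 1024, 2048))
        self.edge_head = nn.Conv2d(64, 1, 3, padding=1)      # boundary logits
        self.skeleton_head = nn.Conv2d(64, 1, 3, padding=1)  # skeleton logits
        self.fill_head = nn.Conv2d(64, 1, 3, padding=1)      # saliency logits

    def forward(self, x):
        size = x.shape[2:]
        feats = []
        for stage in (self.rb1, self.rb2, self.rb3, self.rb4, self.rb5):
            x = stage(x)
            feats.append(x)
        # Resize every projected stage to the input resolution for fusion
        up = [F.interpolate(p(f), size, mode="bilinear", align_corners=False)
              for p, f in zip(self.proj, feats)]
        edge_feat = up[0] + up[1] + up[2]   # RB1-RB3: boundary information
        skel_feat = up[3] + up[4]           # RB4-RB5: center localization
        edge = self.edge_head(edge_feat)
        skeleton = self.skeleton_head(skel_feat)
        saliency = self.fill_head(edge_feat + skel_feat)  # filling branch
        return edge, skeleton, saliency
```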

1.1 Feature extraction subnetwork

1.1.1 Pyramid convolution module

To strengthen the network's global perception, and inspired by pyramid network structures [12-13], a pyramid convolution module is designed that fuses features obtained under multiple receptive fields. Its structure is shown in Fig. 2.

Fig. 2 Structure of the pyramid convolution module
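Since the module is described only at the level of Fig. 2, the sketch below assumes it follows the cited atrous-convolution idea [12]: parallel 3×3 branches with growing dilation rates see the same feature map under different receptive fields and are fused by a 1×1 convolution. The branch count and rates are illustrative choices.

```python
# Sketch of a pyramid convolution module under the assumption of
# parallel dilated 3x3 branches fused by a 1x1 convolution.
import torch
import torch.nn as nn

class PyramidConv(nn.Module):
    def __init__(self, channels, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                # padding = dilation keeps the spatial size of 3x3 branches
                nn.Conv2d(channels, channels, 3, padding=r, dilation=r),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True))
            for r in rates)
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        # Each branch sees the same input at a different receptive field
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```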

1.1.2 Feature enhancement module

To improve feature representation by selecting useful features and suppressing useless ones, a feature enhancement module is designed. Using a channel attention mechanism [14] and a spatial attention mechanism [15], it filters and enhances features along both the channel and spatial dimensions. Its structure is shown in Fig. 3.

Fig. 3 Feature enhancement module
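A sketch of such a module, assuming the cited squeeze-and-excitation channel attention [14] followed by a simple convolutional spatial attention map; the paper's exact wiring may differ.

```python
# Sketch of a feature enhancement module: SE-style channel reweighting
# followed by a single-map spatial reweighting (both assumptions).
import torch
import torch.nn as nn

class FeatureEnhance(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # excitation
            nn.Sigmoid())
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3),           # per-pixel weight
            nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_att(x)     # reweight channels
        return x * self.spatial_att(x)  # reweight spatial positions
```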

1.2 Edge detection subnetwork

1.3 Skeleton extraction subnetwork

1.4 Saliency filling subnetwork

1.5 Loss function

Supervised learning proceeds in stages for the different tasks. The first stage is the edge detection task, which takes binary cross-entropy [16] as its loss function, i.e.
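in its standard per-pixel form, with $P_i$ denoting the predicted boundary probability at pixel $i$ and $G_i$ the corresponding ground-truth label:

$$L_{\text{edge}} = -\sum_{i}\Big[\,G_i\log P_i + (1-G_i)\log\big(1-P_i\big)\Big]$$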

The second stage is the skeleton extraction task, trained with its own loss function.

The third stage is the salient object detection task, trained with its own loss function.
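A rough sketch of this three-stage schedule follows, assuming binary cross-entropy supervision at every stage (the stage-2 and stage-3 losses are not reproduced here and may differ) and assuming the auxiliary branches are frozen once their best checkpoints are kept; the loaders refer to the training sets of Section 2.1, and the output names follow the sketch after Fig. 1.

```python
# Three-stage training sketch: pretrain the edge and skeleton branches
# on their own datasets, keep the best weights, then optimize saliency.
# BCE at every stage and the freezing step are illustrative assumptions.
import torch
import torch.nn as nn

def train_stage(model, loader, pick, epochs, lr=1e-4):
    """Optimize one task; pick selects the (edge|skeleton|saliency) output."""
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for image, target in loader:
            loss = bce(pick(model(image)), target)
            opt.zero_grad()
            loss.backward()
            opt.step()

def freeze(module):
    """Freeze a pretrained subnetwork before the next stage."""
    for p in module.parameters():
        p.requires_grad = False

# Stage 1: boundaries on BSDS500; stage 2: skeletons on SK-LARGE;
# stage 3: saliency filling on DUTS-TR with auxiliary heads kept fixed.
# train_stage(net, bsds_loader, lambda out: out[0], epochs=30)
# train_stage(net, sklarge_loader, lambda out: out[1], epochs=30)
# freeze(net.edge_head); freeze(net.skeleton_head)
# train_stage(net, duts_loader, lambda out: out[2], epochs=30)
```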

2 Experiments and analysis

2.1 Experimental setup

The BSDS500 dataset [17], which contains 200 training images, each with 3 to 4 ground-truth maps of which one is randomly selected for training, serves as the training set for the edge detection task. The SK-LARGE dataset [18], containing 746 images, serves as the training set for skeleton extraction, and the DUTS-TR dataset [6], containing 10553 images, as the training set for saliency detection. DUTS-TE [6], ECSSD [19], HKU-IS [20], and PASCAL-S [21] serve as test sets.

2.2 Comparison experiments

To verify the effectiveness of the proposed method, it is compared with existing salient object detection methods in terms of both objective metrics and subjective quality. The compared methods are PAGR [22], PiCANet [23], MLM [9], ICNet [24], BASNet [25], AFNet [26], PAGE [27], CPD [28], ITSD [29], CANet [30], CAGNet [31], HERNet [8], and AMPNet [32].

Figure 4 shows the subjective comparison. Representative scenes were selected: a salient object in a complex scene (row 1), a salient object similar to its background (row 2), a small salient object (row 3), a regular salient object (row 4), an occluded salient object (row 5), and multiple salient objects (row 6). The proposed method achieves the desired detection results. In the small-object image, most methods miss the distant duck (row 3); in the occluded-object and complex-scene images, most methods produce false detections, marking the yellow letters above the dog (row 1) and the leaf covering the bird (row 5) as salient; in the elongated-object image, the proposed method yields a more precise object boundary. As Fig. 4 shows, the proposed method is subjectively superior across multiple scenes to the other joint multitask method (MLM [9]). Hence, the proposed multitask-assisted approach based on edge detection and skeleton extraction effectively restores missing salient regions and resolves incomplete detection results.

Tab. 1 Objective metrics of different saliency detection methods

Fig. 4 Subjective comparison of the proposed method with other methods

2.3 Average speed comparison

Table 2 compares the average speed of the proposed method with that of other methods. The proposed method runs faster than most saliency detection methods, and it remains competitive with the two fast saliency detection networks ITSD [29] and CPD [28].

2.4 Ablation study

Tab. 2 Comparison of the proposed method with other methods in terms of average speed

To verify the role of the pyramid convolution module (PCM), an ablation study was conducted with four experiments: in experiment 1, neither shallow nor deep features use PCM (without PCM, WPCM); in experiment 2, only shallow features use PCM (shallow PCM, SPCM); in experiment 3, only deep features use PCM (deep PCM, DPCM); and in experiment 4, both shallow and deep features use PCM (both PCM, BPCM).

To verify the effectiveness of the feature enhancement module (FEM), ablation experiments were conducted on the four datasets: neither shallow nor deep features use FEM (without FEM, WFEM) (experiment 1); only shallow features use FEM (shallow FEM, SFEM) (experiment 2); only deep features use FEM (deep FEM, DFEM) (experiment 3); and both shallow and deep features use FEM (both FEM, BFEM) (experiment 4). The metric values and MAE results for the models with and without FEM are shown in Table 5.
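For reference, the MAE reported here and the F-measure customarily paired with it in saliency evaluation can be computed as in the sketch below; β² = 0.3 follows the frequency-tuned convention [33], and the fixed binarization threshold is a simplifying assumption (evaluations often sweep thresholds instead).

```python
# Sketch of the two common saliency metrics: mean absolute error and
# the F-measure with beta^2 = 0.3 [33]. Threshold choice is assumed.
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a [0,1] saliency map and binary GT."""
    return np.abs(pred - gt).mean()

def f_measure(pred, gt, beta2=0.3, threshold=0.5):
    """F-beta score of the thresholded saliency map against binary GT."""
    binary = pred >= threshold
    tp = np.logical_and(binary, gt > 0.5).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / ((gt > 0.5).sum() + 1e-8)
    return ((1 + beta2) * precision * recall
            / (beta2 * precision + recall + 1e-8))
```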

Tab. 3 Ablation results

Tab. 4 Ablation results on the pyramid convolution module

Tab. 5 Ablation results on the feature enhancement module

3 Conclusion

This paper proposed a saliency detection network based on edge detection and skeleton extraction. By training the two tasks separately and using them to assist the saliency detection network in generating complete saliency maps, the method effectively addresses missed and false detections of salient regions. Specifically, the input image is decomposed: the edge detection subnetwork and the skeleton extraction subnetwork obtain the boundary features and skeleton features of the salient object, accurately locating its boundary and center. To reduce the discrepancy among the tasks, a saliency filling subnetwork fills in the salient object region, taking the skeleton features as the center and the edge features as the boundary, producing a complete saliency map. In addition, a pyramid convolution module and a feature enhancement module were designed to filter and enhance the edge and skeleton features, strengthening the network's representational power. Experimental results show that the proposed method reduces the difficulty of feature extraction while detecting salient objects completely and accurately.

      [1] Babenko A,Lempitsky V. Aggregating local deep features for image retrieval[C]//Proceedings of the IEEE International Conference on Computer Vision. Santiago,Chile,2015:1269-1277.

[2] Pang Yanwei,Yu Ke,Sun Hanqing,et al. Hierarchical information recovery network for real-time object detection[J]. Journal of Tianjin University(Science and Technology),2022,55(5):471-479(in Chinese).

      [3] Abdulmunem A,Lai Y K,Sun X. Saliency guided local and global descriptors for effective action recognition[J]. Computational Visual Media,2016,2(1):97-106.

      [4] Zhou S P,Wang J J,Zhang S,et al. Active contour model based on local and global intensity information for medical image segmentation[J]. Neurocomputing,2016,186:107-118.

      [5] Cao X C,Tao Z Q,Zhang B,et al. Self-adaptively weighted co-saliency detection via rank constraint[J]. IEEE Transactions on Image Processing,2014,23(9):4175-4186.

      [6] Wang L J,Lu H C,Wang Y F,et al. Learning to detect salient objects with image-level supervision[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu,USA,2017:136-145.

      [7] Wei J,Wang S H,Wu Z,et al. Label decoupling framework for salient object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Seattle,USA,2020:13025-13034.

      [8] Song D W,Dong Y S,Li X L. Hierarchical edge refinement network for saliency detection[J]. IEEE Transactions on Image Processing,2021,30:7567-7577.

[9] Wu R M,Feng M Y,Guan W L,et al. A mutual learning method for salient object detection with intertwined multi-supervision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach,USA,2019:8150-8159.

      [10] Liu J J,Hou Q B,Cheng M M. Dynamic feature integration for simultaneous detection of salient object,edge,and skeleton[J]. IEEE Transactions on Image Processing,2020,29:8652-8667.

      [11] He K M,Zhang X Y,Ren S Q,et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas,USA,2016:770-778.

[12] Chen L C,Papandreou G,Kokkinos I,et al. DeepLab:Semantic image segmentation with deep convolutional nets,atrous convolution,and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2017,40(4):834-848.

      [13] He K M,Zhang X Y,Ren S Q,et al. Spatial pyramid pooling in deep convolutional networks for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2015,37(9):1904-1916.

      [14] Hu J,Shen L,Sun G. Squeeze-and-excitation networks [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City,USA,2018:7132-7141.

      [15] Peng C,Zhang X Y,Yu G,et al. Large kernel matters—Improve semantic segmentation by global convolutional network[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu,USA,2017:4353-4361.

      [16] De Boer P T,Kroese D P,Mannor S,et al. A tutorial on the cross-entropy method[J]. Annals of Operations Research,2005,134(1):19-67.

      [17] Arbelaez P,Maire M,F(xiàn)owlkes C,et al. Contour detection and hierarchical image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2011,33(5):898-916.

      [18] Shen W,Zhao K,Jiang Y,et al. Object skeleton extraction in natural images by fusing scale-associated deep side outputs[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas,USA,2016:222-230.

      [19] Yan Q,Xu L,Shi J D,et al. Hierarchical saliency detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Portland,USA,2013:1155-1162.

      [20] Li G,Yu Y. Visual saliency based on multiscale deep features[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston,USA,2015:5455-5463.

      [21] Li Y,Hou X D,Koch C,et al. The secrets of salient object segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Columbus,USA,2014:280-287.

      [22] Zhang X W,Wang T T,Qi J Q,et al. Progressive attention guided recurrent network for salient object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City,USA,2018:714-722.

      [23] Liu N,Han J W,Yang M H. Picanet:Learning pixel-wise contextual attention for saliency detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City,USA,2018:3089-3098.

      [24] Wang W G,Shen J B,Cheng M M,et al. An iterative and cooperative top-down and bottom-up inference network for salient object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach,USA,2019:5968-5977.

[25] Qin X B,Zhang Z C,Huang C Y,et al. BASNet:Boundary-aware salient object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach,USA,2019:7479-7489.

      [26] Feng M Y,Lu H C,Ding E. Attentive feedback network for boundary-aware salient object detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach,USA,2019:1623-1632.

      [27] Wang W G,Zhao S Y,Shen J B,et al. Salient object detection with pyramid attention and salient edges[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach,USA,2019:1448-1457.

      [28] Wu Z,Su L,Huang Q M. Cascaded partial decoder for fast and accurate salient object detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach,USA,2019:3907-3916.

      [29] Zhou H J,Xie X H,Lai J H,et al. Interactive two-stream decoder for accurate and fast saliency detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Seattle,USA,2020:9141-9150.

      [30] Li J X,Pan Z F,Liu Q S,et al. Complementarity-aware attention network for salient object detection[J]. IEEE Transactions on Cybernetics,2020,52(2):873-887.

      [31] Mohammadi S,Noori M,Bahri A,et al. CAGNet:Content-aware guidance for salient object detection[J]. Pattern Recognition,2020,103:107303.

      [32] Sun L N,Chen Z X,Wu Q M J,et al. AMPNet:Average- and max-pool networks for salient object detection[J]. IEEE Transactions on Circuits and Systems for Video Technology,2021,31(11):4321-4333.

      [33] Achanta R,Hemami S,Estrada F,et al. Frequency-tuned salient region detection[C]//2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami,USA,2009:1597-1604.

      [34] Fan D P,Cheng M M,Liu Y,et al. Structure-measure:A new way to evaluate foreground maps[C]// Proceedings of the IEEE International Conference on Computer Vision. Venice,Italy,2017:4548-4557.

[35] Li X,Zhao L,Wei L,et al. DeepSaliency:Multi-task deep neural network model for salient object detection[J]. IEEE Transactions on Image Processing,2016,25(8):3919-3930.


DOI: 10.11784/tdxbz202204052

CLC number: TP391

Document code: A

Article ID: 0493-2137(2023)08-0823-08

Received: 2022-04-29; Revised: 2022-12-16.

Yang Aiping (1977— ), female, Ph.D., associate professor. Email: m_bigm@tju.edu.cn

Corresponding author: Yang Aiping, yangaiping@tju.edu.cn.

Supported by the National Natural Science Foundation of China (No. 62071323, No. 61632018, No. 61771329) and the Tianjin Science and Technology Planning Project (No. 20YDTPJC01110).

