Rapid detection technology for broken-winged broiler carcass based on machine vision
Wu Jiangchun, Wang Huhu※, Xu Xinglian
(Key Laboratory of Meat Processing and Quality Control, Ministry of Education, Nanjing Agricultural University, Nanjing 210095, China)
To achieve rapid detection of broken-winged chicken carcasses during broiler slaughter and improve production efficiency, this study collected 1 053 broiler carcass images from a broiler slaughter line using a machine vision system and constructed a method for rapidly identifying broken-wing defects. Front views of the carcasses were captured by the machine vision device; after image preprocessing, 11 characteristic values were extracted: the distances from the left and right ends of the carcass to its centroid and their difference (d1, d2, dc), the heights of the lowest points of the two wings and their difference (h1, h2, hc), the areas of the two wings and their ratio (S1, S2, Sr), the squareness (R), and the width-length ratio (rate). These were reduced to eight principal components by principal component analysis. Linear discriminant, quadratic discriminant, random forest, support vector machine, BP neural network, and VGG16 models were built and compared by F1-score and overall accuracy. Among all model combinations, the VGG16 model achieved the highest F1-score and overall accuracy, 94.35% and 93.28% respectively, with an average prediction speed of 10.34 images/s. The VGG16 model shows good classification performance and can provide a technical reference for the rapid identification and classification of broken wings on chicken carcasses.
machine vision; machine learning; chicken carcass; broken wing detection
In recent years, under the influence of avian influenza at home and abroad, African swine fever, and the COVID-19 epidemic, China has gradually phased out live-poultry market trading and promoted centralized slaughter, and the broiler industry has been transforming from live-poultry trading to the sale of chilled chicken and cooked products[1]. With growing market demand for chicken, consumers' requirements for quality have also risen; they commonly rely on sensory evaluation when deciding whether to buy chilled chicken, and carcasses with obvious appearance defects are not favored. During broiler slaughter, factors such as chicken breed, rearing period, slaughter process, and equipment cause varying degrees of carcass damage[2]. According to surveys, the main carcass defect types on production lines are congestion, broken bones, and skin damage, and broken bones occur most often in the wing regions. Although most broiler slaughterhouses have largely mechanized every stage of production, carcass quality inspection still depends on experienced workers judging by eye[3]. The manual criterion for broken wings on the production line is that, on visual inspection, there is no obvious wing deformation (outward or oblique twisting, fracture, drooping), exposed wing bone, or missing wing region. The judgment depends heavily on experience, varies considerably between inspectors, and is strongly affected by subjective factors, so missed and false detections are unavoidable and cause economic losses. Undetected broken-winged products that reach the market lower the overall quality of the enterprise's products and damage its image. A technology that can replace workers with efficient, objective detection is therefore urgently needed.

Machine vision technology performs the function of human vision, analyzing and judging the object to be identified[4]. A machine vision system consists of hardware and software: the hardware comprises the light source, camera, industrial control unit, and computer, while the software includes the computer programs[5-8]. With the deepening of intelligent manufacturing in China, machine vision has been widely applied in many fields thanks to its high efficiency, high accuracy, and non-destructive nature[9]. In agriculture, it is used for defect detection and grading of fruits, vegetables, nuts, and other agricultural products, and for monitoring crop growth and controlling diseases and pests[10-13]. It is also used to inspect and grade the appearance and freshness of other foods and raw materials, such as beef freshness measurement[14], double-yolk egg identification[15], and the detection and classification of malformed biscuits[16]. In poultry slaughter and processing, machine vision has been studied for weight prediction of chicken carcasses and their cut-up parts[7,17-21], quality inspection[22-24], automatic evisceration[25-27], and automatic cutting[28]. However, most existing quality-inspection studies target already-segmented parts such as wings and breasts, with little work on whole carcasses, and the defect types studied concentrate on congestion, surface contaminants, freshness, and wooden breast, leaving a research gap in the identification and detection of broken wings on chicken carcasses.

In this study, front views of chicken carcasses were collected with a machine vision system, characteristic values were extracted after image preprocessing, and models for rapidly identifying broken wings were established. The work provides a technical reference for chicken carcass quality inspection and contributes to the automation and intelligence of broiler slaughter and processing, reduced factory labor, and higher production efficiency.
The experimental subjects were broiler carcasses, photographed on a large broiler slaughter line in Jiangsu Province. Broken-winged and qualified carcasses that had passed worker inspection were collected from the production line and their front views were captured uniformly, yielding 1 053 sample images: 553 broken-winged and 500 qualified carcasses. Representative broken-winged and normal samples are shown in Fig.1.
Fig.1 Images of representative broken-winged and qualified carcasses
The installation and connection of the components of the image acquisition device in this study followed Zhao Zhengdong et al.[29]; the layout is shown in Fig.2.
Fig.2 Schematic of the image acquisition device
1.3.1 Image preprocessing
Image noise is useless information that interferes with an image. By source it divides into internal and external noise: external noise arises from changes in the environment, such as illumination intensity and shooting background, while internal noise arises from factors inside the machine vision system such as vibration and data transmission[30-31]. Preprocessing reduces noise and strengthens image features, facilitating subsequent feature extraction[32-33]. Common preprocessing methods include graying, image enhancement, image segmentation, and morphological processing. This study used the weighted average method for graying. For image enhancement, low-pass filtering, high-pass filtering, homomorphic filtering, linear spatial filtering, and two-dimensional median filtering were compared for noise removal, and the method was chosen according to the change in image clarity before and after processing. As shown in Fig.3, low-pass and high-pass filtering reduced image clarity, and homomorphic filtering brightened the image and lost some information, while linear spatial filtering and two-dimensional median filtering both improved clarity. To choose between the latter two, salt-and-pepper noise was added artificially to the gray image; their removal results are shown in Fig.4. Observation shows that two-dimensional median filtering outperforms linear spatial filtering, so it was selected as the image enhancement method in this study.
This study compared the segmentation results of maximum entropy thresholding, Otsu thresholding, iterative thresholding, and K-means, as shown in Fig.5. Among the four methods, iterative thresholding gave the best segmentation, so it was chosen as the image segmentation method of this study.
Fig.4 Results of different image enhancement methods on a gray image with added salt-and-pepper noise
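The denoising step can be illustrated with a minimal, pure-Python sketch of a 3×3 two-dimensional median filter. The paper's pipeline was built in MATLAB (e.g. a medfilt2-style operation), so this Python version, with edge replication and a hypothetical 4×4 toy image, is illustrative only, not the original code.

```python
def median_filter_2d(img, k=3):
    """Apply a k x k median filter to a 2-D grayscale image (list of lists).
    Border pixels are handled by replicating the nearest edge pixel."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = []
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii = min(max(i + di, 0), h - 1)  # clamp = edge replication
                    jj = min(max(j + dj, 0), w - 1)
                    window.append(img[ii][jj])
            window.sort()
            out[i][j] = window[len(window) // 2]  # median of the window
    return out

# Hypothetical flat region with two salt-and-pepper spikes (255 and 0):
noisy = [
    [10, 10, 10, 10],
    [10, 255, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 0, 10],
]
clean = median_filter_2d(noisy)
```

Replacing each pixel by the median of its neighborhood removes isolated salt-and-pepper spikes while preserving edges better than linear averaging, which is consistent with the comparison in Fig.4.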
Fig.5 Results of different image segmentation methods
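Iterative (isodata-style) threshold selection, the segmentation method chosen above, can be sketched in a few lines of Python; the gray image here is a hypothetical toy array, not the authors' data.

```python
def iterative_threshold(gray):
    """Iterative threshold selection: start from the mean grey level,
    split pixels into two groups, and move the threshold to the midpoint
    of the two group means until it stabilizes."""
    pixels = [v for row in gray for v in row]
    t = sum(pixels) / len(pixels)  # initial threshold = global mean
    while True:
        low = [v for v in pixels if v <= t]
        high = [v for v in pixels if v > t]
        if not low or not high:
            break
        t_new = (sum(low) / len(low) + sum(high) / len(high)) / 2
        if abs(t_new - t) < 0.5:   # convergence tolerance
            t = t_new
            break
        t = t_new
    return t

def binarize(gray, t):
    """Foreground = pixels brighter than the threshold."""
    return [[1 if v > t else 0 for v in row] for row in gray]

# Hypothetical bimodal image: dark background (~20) and bright carcass (~210):
gray = [[20, 30, 20], [200, 210, 220], [25, 205, 215]]
t = iterative_threshold(gray)
binary = binarize(gray, t)
```

The threshold settles between the dark and bright modes, separating carcass from background regardless of the exact grey levels of either group.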
As shown in Fig.5c, the binary image obtained by iterative thresholding still contains holes and white noise spots. Hole filling was therefore applied to remove holes inside the carcass region, and the largest connected component was selected to remove white spots outside the carcass, as shown in Fig.6a. Multiplying Fig.6a element-wise with the original RGB image gives an RGB image with the background removed, as shown in Fig.6b, which serves as the basis for feature extraction and model building.
Fig.6 Final result of image preprocessing
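The largest-connected-component step can be sketched in pure Python with a breadth-first search over a toy binary mask. The paper performed this in MATLAB; the 4-connectivity choice and the small mask below are assumptions for illustration.

```python
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected foreground region of a binary mask,
    discarding smaller white noise specks."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                comp, q = [], deque([(i, j)])
                seen[i][j] = True
                while q:                      # BFS over one component
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out

# Hypothetical mask: a 2x2 carcass blob plus an isolated noise speck at (0, 4):
mask = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
clean_mask = largest_component(mask)
```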
1.3.2 Feature extraction
Observing the differences between broken-winged and qualified products shows that, because the wing bone of a broken-winged carcass is fractured and the muscles lack support, the broken wing spreads outward under gravity. As a result, the distance from the leftmost or rightmost end of the carcass to its centroid is larger than in a qualified carcass, the height of the lowest point of the wing region on the broken side is larger, and the projected area of the wing region on the broken side in the image is larger. Based on these shape and geometric characteristics, the relevant characteristic values were extracted for all samples.
As shown in Fig.7, the distances from the left and right ends of the carcass to its centroid, and their difference, are extracted. The centroid of the connected component of the image is computed with the regionprops() function in MATLAB and recorded as O1(x0, y0); the leftmost and rightmost points of the carcass region are then located and recorded as L1(x1, y1) and R1(x2, y2), respectively. The distances from the two ends of the image to the centroid and their difference are computed as follows:
Note: O1 is the centroid of the carcass; L1 is the leftmost point of the carcass; R1 is the rightmost point of the carcass; d1 is the distance from the leftmost point to the centroid; d2 is the distance from the rightmost point to the centroid.
The distance from the leftmost end of the image to the centroid is d1 = x0 − x1, the distance from the rightmost end to the centroid is d2 = x2 − x0, and the absolute difference of the two distances is dc = |d1 − d2|.
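A minimal sketch of these distance features, assuming a binary carcass mask as input. The paper uses MATLAB's regionprops for the centroid; this pure-Python version computes only the centroid x-coordinate, which is all that d1, d2, and dc require, and runs on a hypothetical one-row mask.

```python
def distance_features(mask):
    """d1, d2, dc: distances from the leftmost/rightmost foreground columns
    to the centroid x-coordinate, and their absolute difference."""
    xs = [x for row in mask for x, v in enumerate(row) if v]
    x0 = sum(xs) / len(xs)      # centroid x-coordinate
    x1, x2 = min(xs), max(xs)   # leftmost and rightmost foreground columns
    d1 = x0 - x1
    d2 = x2 - x0
    return d1, d2, abs(d1 - d2)

# A wing spread to the right shifts mass and widens d2 relative to d1:
d1, d2, dc = distance_features([[1, 1, 1, 0, 0, 0, 1]])
# A symmetric mask gives dc = 0:
_, _, dc_sym = distance_features([[1, 1, 1]])
```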
To compute the areas of the two wings, the wing regions must first be segmented from the carcass. Following Qi Chao et al.[18-19], who fitted an ellipse to the carcass trunk to obtain breast length and width, an improved method was developed for segmenting the wing regions from the carcass image. The method has two steps, ellipse fitting and line segmentation; the flowchart is shown in Fig.8. As shown in Fig.9a, the fitted ellipse removes trunk pixels and separates the wings from the carcass to some extent, but ellipse fitting alone does not suit carcasses at all angles: as shown in Fig.9b-9d, the wing regions may still adhere to the neck and leg regions, and the two legs to each other, to varying degrees, so the adhering pixels must additionally be split by lines.
Fig.8 Flowchart of wing segmentation
Fig.9 Segmentation results of ellipse fitting
As shown in Fig.10, line segmentation proceeds as follows: the center of the fitted ellipse is extracted and recorded as O2(xc, yc); the center is translated downward along the major axis by two-thirds of the major-axis length, and then leftward and rightward along the minor axis by two-thirds of the minor-axis length, giving points X1 and X2. The lines O2X1 and O2X2 separate the adhering wing and neck pixels.
Translating O2 upward along the major axis by two-fifths of the major-axis length gives point X3; a line through X3 parallel to the minor axis separates the adhering wing and leg pixels.
The line connecting O2 and X3 separates the adhering pixels between the two legs.
According to the positions of the left and right wings in the image, the two wings are selected with the regionprops() function, and their pixel areas are computed with the bwarea() function and recorded as S1 and S2, respectively; the area ratio is Sr = S1/S2.
Note: In Fig.10a, O2 is the center of the fitted ellipse; X1 is the intersection of the left-wing segmentation line with the ellipse; X2 is the intersection of the right-wing segmentation line with the ellipse; X3 is the intersection of the leg segmentation line with the ellipse.
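Once the wing regions are isolated, the area features reduce to pixel counts. A sketch assuming the left and right wing masks have already been separated (the paper used MATLAB's bwarea; the toy masks are hypothetical):

```python
def wing_area_features(left_mask, right_mask):
    """S1, S2: pixel areas of the left and right wing masks; Sr = S1/S2."""
    S1 = sum(v for row in left_mask for v in row)   # left-wing pixel count
    S2 = sum(v for row in right_mask for v in row)  # right-wing pixel count
    return S1, S2, S1 / S2

# Hypothetical wing masks: a broken (spread) wing projects a larger area:
left = [[1, 1], [1, 0]]
right = [[1, 1], [0, 0]]
S1, S2, Sr = wing_area_features(left, right)
```

An Sr far from 1 signals that one wing projects a much larger area than the other, consistent with the broken-wing geometry described above.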
As shown in Fig.11, the heights of the lowest points of the two wings and their difference are extracted. With an image of size m × n, the coordinates of the lowest points of the left and right wings are recorded as L2(x3, y3) and R2(x4, y4); the heights of the lowest points of the two wings are h1 = n − y3 and h2 = n − y4, and their difference is hc = |h1 − h2|.
Note: x is the horizontal axis; y is the vertical axis; m is the length of the image; n is the width of the image; L2 is the lowest point of the left wing; R2 is the lowest point of the right wing; y3 is the vertical coordinate of the lowest point of the left wing; y4 is the vertical coordinate of the lowest point of the right wing.
Fig.11 Heights of the lowest points of the two wings and their difference
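With the image origin at the top-left, the height features h1 = n − y3, h2 = n − y4, hc = |h1 − h2| can be sketched as below: the lowest wing point is the foreground pixel with the largest row index, and its height is measured from the bottom edge. The masks are hypothetical toy inputs.

```python
def wing_height_features(left_mask, right_mask):
    """h1 = n - y3, h2 = n - y4, hc = |h1 - h2|, where n is the number of
    image rows and y3, y4 are the row indices of the lowest foreground
    pixels of the left and right wing masks."""
    n = len(left_mask)
    y3 = max(y for y, row in enumerate(left_mask) if any(row))   # lowest left-wing row
    y4 = max(y for y, row in enumerate(right_mask) if any(row))  # lowest right-wing row
    h1, h2 = n - y3, n - y4
    return h1, h2, abs(h1 - h2)

left = [[0, 0], [1, 0], [0, 0], [0, 1], [0, 0]]   # lowest point in row 3
right = [[0, 1], [0, 0], [1, 0], [0, 0], [0, 0]]  # lowest point in row 2
h1, h2, hc = wing_height_features(left, right)
```

A drooping broken wing reaches further down the image, lowering that side's height and enlarging hc.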
As shown in Fig.12, the squareness and width-length ratio of the carcass image are extracted. Squareness is a shape feature defined as the ratio of an object's area to that of its minimum bounding rectangle, describing how fully the object fills the rectangle[34]. The width-length ratio is the ratio of the width to the length of the minimum bounding rectangle, describing how close the object is to a square or circle; its value lies between 0 and 1[35-36]. In this paper the squareness is denoted R and the width-length ratio is denoted rate.
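A sketch of the two global shape features. As a simplification it uses the axis-aligned bounding box rather than the true minimum (possibly rotated) bounding rectangle, so it illustrates the definitions rather than reproducing the paper's exact computation.

```python
def shape_features(mask):
    """Squareness R = area / bounding-box area; width-length ratio
    rate = shorter box side / longer box side (axis-aligned simplification)."""
    pts = [(y, x) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    ys = [y for y, _ in pts]
    xs = [x for _, x in pts]
    height = max(ys) - min(ys) + 1
    width = max(xs) - min(xs) + 1
    area = len(pts)                                  # foreground pixel count
    R = area / (width * height)                      # squareness
    rate = min(width, height) / max(width, height)   # width-length ratio
    return R, rate

# A solid 2x3 rectangle fills its box completely (R = 1):
R_full, rate_full = shape_features([[1, 1, 1], [1, 1, 1]])
# An L-shape fills only 3 of 4 box pixels:
R_l, rate_l = shape_features([[1, 0], [1, 1]])
```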
1.3.3 Model building and testing
By the way features are obtained, machine learning can be divided into shallow machine learning and deep learning. Shallow machine learning requires manual feature extraction, i.e., selecting suitable characteristic values based on the researcher's experience and intuition and feeding them to the algorithm, whereas deep learning extracts features automatically, with higher accuracy[37-38]. The shallow algorithms used in this study were linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), random forest (RF), support vector machine (SVM), and the error back-propagation (BP) neural network; the deep learning algorithm was the 16-layer Visual Geometry Group network (VGG16).
Fig.12 Minimum bounding rectangle of a chicken carcass
For training the shallow machine learning models, principal component analysis was applied to the 11 characteristic values, and the principal components with a cumulative variance contribution above 95% were taken as model inputs. The 11 characteristic values and the principal components were separately fed into the LDA, QDA, RF, SVM, and BP models. Of the 1 053 images, 700 were randomly selected as the training set (350 broken-winged, 350 qualified) and 353 as the test set (203 broken-winged, 150 qualified).
For training the deep learning model VGG16, the inputs were the background-removed RGB images of the carcasses, and the 1 053 images were divided into a training set of 800 images (400 broken-winged, 400 qualified), of which 30% was split off as the validation set during training, and a test set of 253 images (153 broken-winged, 100 qualified). The training parameters of all models are listed in Table 1.
Table 1 Training parameters of different models
Note: LDA is the linear discriminant model; QDA is the quadratic discriminant model; RF is the random forest model; SVM is the support vector machine model; BP is the error back-propagation model; VGG16 is the Visual Geometry Group network with 16 layers.
The classification performance of the models was evaluated by recall (Rec), precision (Pre), F1-score, and overall accuracy (Acc), calculated as follows:

Rec = TP/(TP + FN), Pre = TP/(TP + FP), F1 = 2 × Pre × Rec/(Pre + Rec), Acc = (TP + TN)/(TP + TN + FP + FN)

where TP, TN, FP, and FN are defined in Table 2.
Table 2 Confusion matrix of classification results
Note: TP is the number of samples correctly predicted as broken wings; TN is the number of samples correctly predicted as normal; FP is the number of samples incorrectly predicted as broken wings; FN is the number of samples incorrectly predicted as normal.
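The four evaluation metrics follow directly from the confusion-matrix counts in Table 2. Evaluating them on the counts reported for the VGG16 model in this study (TP = 142, FN = 11, FP = 6, TN = 94) reproduces the published figures:

```python
def classification_metrics(TP, TN, FP, FN):
    """Recall, precision, F1-score, and overall accuracy from the
    confusion-matrix counts defined in Table 2."""
    rec = TP / (TP + FN)                 # fraction of broken wings found
    pre = TP / (TP + FP)                 # fraction of alarms that are real
    f1 = 2 * pre * rec / (pre + rec)     # harmonic mean of pre and rec
    acc = (TP + TN) / (TP + TN + FP + FN)
    return rec, pre, f1, acc

# Counts reported for the VGG16 model in this study:
rec, pre, f1, acc = classification_metrics(TP=142, TN=94, FP=6, FN=11)
# rec ≈ 92.81%, pre ≈ 95.95%, f1 ≈ 94.35%, acc ≈ 93.28%
```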
As shown in Table 3, the cumulative variance contribution of the first eight principal components is 95.20%, representing most of the information in the 11 characteristic values, and the eigenvalues begin to level off from the eighth component onward; the first eight principal components were therefore selected as model inputs.
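Selecting the number of principal components by the 95% cumulative-variance rule can be sketched as below. The variance ratios here are hypothetical placeholders (Table 3 lists the actual values), chosen only so that eight components cross the threshold, as in the paper.

```python
def n_components_for(variance_ratios, target=0.95):
    """Smallest k such that the cumulative explained-variance ratio of the
    first k principal components reaches the target."""
    cum = 0.0
    for k, r in enumerate(variance_ratios, start=1):
        cum += r
        if cum >= target:
            return k
    return len(variance_ratios)

# Hypothetical per-component variance ratios for 11 features (not Table 3's values):
ratios = [0.40, 0.15, 0.12, 0.09, 0.07, 0.05, 0.04, 0.032, 0.02, 0.018, 0.01]
k95 = n_components_for(ratios, target=0.95)
```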
Table 3 Variance contribution of the principal components
As shown in Table 4, among the shallow learning models the RF model with the characteristic values as inputs had the highest recall, 91.13%; the SVM model with the characteristic values as inputs exceeded the other models in precision and F1-score, 96.28% and 92.58% respectively; and the highest overall accuracy, 91.78%, was shared by the quadratic discriminant and SVM models with the characteristic values as inputs. Feeding the models the principal components after dimensionality reduction lowered the overall accuracy in every case, because the reduced data are only an approximation of the original data: the reduction also loses part of the original data structure and features, lowering the accuracy of the classification algorithms[38-40].
Table 4 Classification performance of the shallow learning models
Note: In the table, -1 indicates broken-wing; 1 indicates normal.
The classification results of the deep learning model VGG16 are shown in Fig.13: of the 153 broken-winged samples, 142 were classified correctly and 11 incorrectly; of the 100 qualified samples, 94 were classified correctly and 6 incorrectly. The model's recall is therefore 92.81%, its precision 95.95%, its F1-score 94.35%, and its overall accuracy 93.28%. Among all models, the shortest prediction time belonged to the SVM model with principal components as inputs, which judged 353 sample images in 0.000 9 s, an average speed of 3.92×10⁵ images/s; the longest belonged to the VGG16 model, which needed 24.46 s to judge 253 sample images, an average speed of 10.34 images/s.
On comprehensive evaluation, the F1-score and overall accuracy of the VGG16 model exceed those of the other recognition models, but its average prediction speed of 10.34 images/s is far slower than the others. The reason is that VGG16 is a deep learning model with a complex structure and many layers, which raises accuracy and fault tolerance at the cost of long running time[41-42]. Follow-up work could speed up the code by simplifying it and clearing variables promptly, or compress and accelerate the deep model through parameter pruning, parameter quantization, compact network design, parameter sharing, and similar techniques, thereby optimizing the model and raising prediction speed[43-45].
Fig.14 shows some of the carcass samples falsely detected or missed by the VGG16 model. As shown in Fig.14a, in a few broken-winged products the bone between the wing and trunk is not completely fractured, the wing spreads only slightly, and the broken-wing features are weakened. As shown in Fig.14b, wing size also affects accuracy: in qualified products with plump wings, the weight increases the outward spread of the two wings, creating the false appearance of broken wings.
Note: In the figure, the first row gives the number of samples correctly predicted as broken-winged, the number incorrectly predicted as normal, and the precision for broken-winged products; the second row gives the number incorrectly predicted as broken-winged, the number correctly predicted as normal, and the precision for qualified products; the third row gives the recall for broken-winged products, the recall for qualified products, and the overall accuracy of the model.
Fig.14 Missed and falsely detected samples
In this study, front views of broken-winged and qualified chicken carcasses that had passed manual inspection, 1 053 images in total, were obtained with a machine vision system. Background-free carcass images were produced by a preprocessing pipeline of the weighted average method (graying), two-dimensional median filtering (denoising), and iterative thresholding (segmentation), and 11 characteristic values were extracted on that basis. The characteristic values and the dimension-reduced principal components were fed separately into the LDA, QDA, RF, SVM, and BP models, and the background-removed RGB images into the deep learning model VGG16. Comparing F1-scores and overall accuracy, VGG16 classified broken-winged and qualified carcasses best among all models, with an F1-score of 94.35%, an overall accuracy of 93.28%, and an average prediction speed of 10.34 images/s. The model can provide a technical reference for the rapid identification and classification of broken wings on chicken carcasses. Its prediction speed still needs improvement, as does its accuracy on slightly broken wings and on qualified carcasses with plump wings. In addition, this study detected broken-winged carcasses statically from manually captured front views and has not yet achieved fully automatic detection; follow-up work should use a signal-triggering device for automatic camera exposure and strobed lighting to achieve real-time detection of broken-winged carcasses.
[1] 何雯霞,熊濤,尚燕. 重大突發(fā)疫病對(duì)我國(guó)肉禽產(chǎn)業(yè)鏈?zhǔn)袌?chǎng)價(jià)格的影響研究:以非洲豬瘟為例[J]. 農(nóng)業(yè)現(xiàn)代化研究,2022,43(2):318-327.
He Wenxia, Xiong Tao, Shang Yan. The impacts of major animal diseases on the prices of China’s meat and poultry markets: evidence from the African swine fever[J]. Research of Agricultural Modernization, 2022, 43(2): 318-327. (in Chinese with English abstract)
[2] 李繼忠,曲威禹,花園輝,等. 屠宰工藝和設(shè)備對(duì)雞胴體自動(dòng)分割的影響[J]. 肉類工業(yè),2021(4):36-41.
Li Jizhong, Qu Weiyu, Hua Yuanhui, et al. Effect of slaughtering technology and equipment on automatic segmentation of chicken carcass[J]. Meat Industry, 2021(4): 36-41. (in Chinese with English abstract)
[3] Chowdhury E U, Morey A. Application of optical technologies in the US poultry slaughter facilities for the detection of poultry carcass condemnation[J]. British Poultry Science, 2020, 61(6): 646-652.
[4] 王成軍,韋志文,嚴(yán)晨. 基于機(jī)器視覺(jué)技術(shù)的分揀機(jī)器人研究綜述[J]. 科學(xué)技術(shù)與工程,2022,22(3):893-902.
Wang Chengjun, Wei Zhiwen, Yan Chen. Review on sorting robot based on machine vision technology[J]. Science Technology and Engineering, 2022, 22(3): 893-902. (in Chinese with English abstract)
[5] Brosnan T, Sun D W. Improving quality inspection of food products by computer vision:A review[J]. Journal of Food Engineering, 2004, 61(1): 3-16.
[6] Taheri-Garavand A, Fatahi S, Omid M, et al. Meat quality evaluation based on computer vision technique: A review[J]. Meat Science, 2019, 156: 183-195.
[7] 戚超,徐佳琪,劉超,等. 基于機(jī)器視覺(jué)和機(jī)器學(xué)習(xí)技術(shù)的雞胴體質(zhì)量自動(dòng)分級(jí)方法[J]. 南京農(nóng)業(yè)大學(xué)學(xué)報(bào),2019,42(3):551-558.
Qi Chao, Xu Jiaqi, Liu Chao, et al. Automatic classification of chicken carcass weight based on machine vision and machine learning technology[J]. Journal of Nanjing Agricultural University, 2019, 42(3): 551-558. (in Chinese with English abstract)
[8] Huynh T T M, Tonthat L, Dao S V T. A vision-based method to estimate volume and mass of fruit/vegetable: case study of sweet potato[J]. International Journal of Food Properties, 2022, 25(1): 717-732.
[9] 周寶倉(cāng),呂金龍,肖鐵忠,等. 機(jī)器視覺(jué)技術(shù)研究現(xiàn)狀及發(fā)展趨勢(shì)[J]. 河南科技,2021,40(31):18-20.
Zhou Baocang, Lü Jinlong, Xiao Tiezhong, et al. Research status and development trend of machine vision technology[J]. Henan Science and Technology, 2021, 40(31): 18-20. (in Chinese with English abstract)
[10] 溫艷蘭,陳友鵬,王克強(qiáng),等. 基于機(jī)器視覺(jué)的病蟲害檢測(cè)綜述[EB/OL]. 中國(guó)糧油學(xué)報(bào),(2022-01-11) [2022-08-11]. http: //kns. cnki. net/kcms/detail/11. 2864. TS. 20220302. 1806. 014. html.
Wen Yanlan, Chen Youpeng, Wang Keqiang, et al. An overview of plant diseases and insect pests detection based on machine vision[EB/OL]. Journal of the Chinese Cereals and Oils Association, (2022-01-11) [2022-08-11]. (in Chinese with English abstract)
[11] 劉平,劉立鵬,王春穎,等. 基于機(jī)器視覺(jué)的田間小麥開(kāi)花期判定方法[J]. 農(nóng)業(yè)機(jī)械學(xué)報(bào),2022,53(3):251-258.
Liu Ping, Liu Lipeng, Wang Chunying, et al. Determination method of field wheat flowering period based on machine vision[J]. Transactions of the Chinese Society for Agricultural Machinery, 2022, 53(3): 251-258. (in Chinese with English abstract)
[12] Li J B, Rao X Q, Wang F J, et al. Automatic detection of common surface defects on oranges using combined lighting transform and image ratio methods[J]. Postharvest Biology and Technology, 2013, 82: 59-69.
[13] 閆彬,楊福增,郭文川. 基于機(jī)器視覺(jué)技術(shù)檢測(cè)裂紋玉米種子[J]. 農(nóng)機(jī)化研究,2020,42(5):181-185, 235.
Yan Bin, Yang Fuzeng, Guo Wenchuan. Detection of maize seeds with cracks based on machine vision technology[J]. Journal of Agricultural Mechanization Research, 2020, 42(5): 181-185, 235. (in Chinese with English abstract)
[14] 姜沛宏,張玉華,陳東杰,等. 基于多源感知信息融合的牛肉新鮮度分級(jí)檢測(cè)[J]. 食品科學(xué),2016,37(6):161-165.
Jiang Peihong, Zhang Yuhua, Chen Dongjie, et al. Measurement of beef freshness grading based on multi-sensor information fusion technology[J]. Food Science, 2016, 37(6): 161-165. (in Chinese with English abstract)
[15] Chen W, Du N F, Dong Z Q, et al. Double yolk nondestructive identification system based on Raspberry Pi and computer vision[J]. Journal of Food Measurement and Characterization, 2022, 16(2): 1605-1612.
[16] 程子華. 基于機(jī)器視覺(jué)的殘缺餅干分揀系統(tǒng)開(kāi)發(fā)[J]. 現(xiàn)代食品科技,2022,38(2):313-318, 325.
Cheng Zihua. The development of incomplete biscuit sorting system based on machine vision[J]. Modern Food Science and Technology, 2022, 38(2): 313-318, 325. (in Chinese with English abstract)
[17] 郭峰,劉立峰,張奎彪,等. 家禽胴體影像分選技術(shù)研究新進(jìn)展[J]. 肉類工業(yè),2019(11):31-40.
Guo Feng, Liu Lifeng, Zhang Kuibiao, et al. New progress in research on image grading technology of poultry carcass[J]. Meat Industry, 2019(11): 31-40. (in Chinese with English abstract)
[18] 戚超. 基于深度相機(jī)和機(jī)器視覺(jué)技術(shù)的雞胴體質(zhì)量在線分級(jí)系統(tǒng)[D]. 南京:南京農(nóng)業(yè)大學(xué),2019.
Qi Chao. On-line Grading System of Chicken Carcass Quality Based on Deep Camera and Machine Vision Technology[D]. Nanjing: Nanjing Agricultural University, 2019. (in Chinese with English abstract)
[19] 陳坤杰,李航,于鎮(zhèn)偉,等. 基于機(jī)器視覺(jué)的雞胴體質(zhì)量分級(jí)方法[J]. 農(nóng)業(yè)機(jī)械學(xué)報(bào),2017,48(6):290-295, 372.
Chen Kunjie, Li Hang, Yu Zhenwei, et al. Grading of chicken carcass weight based on machine vision[J]. Transactions of the Chinese Society for Agricultural Machinery, 2017, 48(6): 290-295, 372. (in Chinese with English abstract)
[20] 吳玉紅. 基于機(jī)器視覺(jué)的雞翅質(zhì)檢研究[D]. 泰安:山東農(nóng)業(yè)大學(xué),2016.
Wu Yuhong. Research on Chicken Wings Quality and Mass Detection Based on Machine Vision[D]. Taian: Shandong Agricultural University, 2016. (in Chinese with English abstract)
[21] 徐京京. 雞翅質(zhì)量檢測(cè)與重量分級(jí)智能化裝備的設(shè)計(jì)[D]. 泰安:山東農(nóng)業(yè)大學(xué),2016.
Xu Jingjing. Design of the Chicken Wings Quality Inspection and Weight Classification of Intelligent Equipment[D]. Taian: Shandong Agricultural University, 2016. (in Chinese with English abstract)
[22] Asmara R, Rahutomo F, Hasanah Q, et al. Chicken meat freshness identification using the histogram color feature[C]//IEEE. International Conference on Sustainable Information Engineering and Technology. Batu, Indonesia, 2017: 57-61.
[23] Carvalho L, Perez-Palacios T, Caballero D, et al. Computer vision techniques on magnetic resonance images for the non-destructive classification and quality prediction of chicken breasts affected by the White-Striping myopathy[J]. Journal of Food Engineering, 2021, 306: 110633.
[24] 楊凱. 雞胴體表面污染物在線檢測(cè)及處理設(shè)備控制系統(tǒng)的設(shè)計(jì)與開(kāi)發(fā)[D]. 南京:南京農(nóng)業(yè)大學(xué),2015.
Yang Kai. Design and Development of Online Detection and Processing Equipment Control System for Contaminants on Chicken Carcass Surface[D]. Nanjing: Nanjing Agricultural University, 2015. (in Chinese with English abstract)
[25] 陳艷. 基于機(jī)器視覺(jué)的家禽機(jī)械手掏膛及可食用內(nèi)臟分揀技術(shù)研究[D]. 武漢:華中農(nóng)業(yè)大學(xué),2018.
Chen Yan. Research on the Technology of Poultry Manipulator Eviscerating and Edible Viscera Sorting Based on Machine Vision[D]. Wuhan: Huazhong Agriculture University, 2018. (in Chinese with English abstract)
[26] 王樹(shù)才,陶凱,李航. 基于機(jī)器視覺(jué)定位的家禽屠宰凈膛系統(tǒng)設(shè)計(jì)與試驗(yàn)[J]. 農(nóng)業(yè)機(jī)械學(xué)報(bào),2018,49(1):335-343.
Wang Shucai, Tao Kai, Li Hang. Design and experiment of poultry eviscerator system based on machine vision positioning[J]. Transactions of the Chinese Society for Agricultural Machinery, 2018, 49(1): 335-343. (in Chinese with English abstract)
[27] Chen Y, Wang S C. Poultry carcass visceral contour recognition method using image processing[J]. Journal of Applied Poultry Research, 2018, 27(3): 316-324.
[28] Teimouri N, Omid M, Mollazade K, et al. On-line separation and sorting of chicken portions using a robust vision-based intelligent modelling approach[J]. Biosystems Engineering, 2018, 167: 8-20.
[29] 趙正東,王虎虎,徐幸蓮. 基于機(jī)器視覺(jué)的肉雞胴體淤血檢測(cè)技術(shù)初探[J]. 農(nóng)業(yè)工程學(xué)報(bào),2022,38(16):330-338.
Zhao Zhengdong, Wang Huhu, Xu Xinglian. Broiler carcass congestion detection technology based on machine vision[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2022, 38(16): 330-338. (in Chinese with English abstract)
[30] 高舉,李永祥,徐雪萌. 基于機(jī)器視覺(jué)編織袋縫合缺陷識(shí)別與檢測(cè)[J]. 包裝與食品機(jī)械,2022,40(3):51-56.
Gao Ju, Li Yongxiang, Xu Xuemeng. Recognition and detection of stitching defects of woven bags based on machine vision[J]. Packaging and Food Machinery, 2022, 40(3): 51-56. (in Chinese with English abstract)
[31] 王坤. 基于機(jī)器視覺(jué)的花邊針自動(dòng)分揀方法研究[D]. 上海:東華大學(xué),2022.
Wang Kun. Investigation on Automatic Sorting Method of Curved Edge Needle Based on Machine Vision[D]. Shanghai: Donghua University, 2022. (in Chinese with English abstract)
[32] 位沖沖. 基于卷積神經(jīng)網(wǎng)絡(luò)的工件識(shí)別技術(shù)研究[D]. 哈爾濱:哈爾濱商業(yè)大學(xué),2022.
Wei Chongchong. Research on Workpiece Recognition Technology Based on Convolutional Neural Networks[D]. Harbin: Harbin University of Commerce, 2022. (in Chinese with English abstract)
[33] 劉德志,曾勇,袁雨鑫,等. 基于機(jī)器視覺(jué)的火車輪對(duì)軸端標(biāo)記自動(dòng)識(shí)別算法研究[J]. 現(xiàn)代制造工程,2022(7):113-120.
Liu Dezhi, Zeng Yong, Yuan Yuxing, et al. Research on automatic recognition algorithm of axle end mark of train wheelset based on machine vision[J]. Modern Manufacturing Engineering, 2022(7): 113-120. (in Chinese with English abstract)
[34] 張凱,李振華,郁豹,等. 基于機(jī)器視覺(jué)的花生米品質(zhì)分選方法[J]. 食品科技,2019,44(5):297-302.
Zhang Kai, Li Zhenghua, Yu Bao, et al. Peanut quality sorting method based on machine vision[J]. Food Science and Technology, 2019, 44(5): 297-302. (in Chinese with English abstract)
[35] 戴建民,曹鑄,孔令華,等. 基于多特征模糊識(shí)別的煙葉品質(zhì)分級(jí)算法[J]. 江蘇農(nóng)業(yè)科學(xué),2020,48(20):241-247.
Dai Jianmin, Cao Zhu, Kong Linghua, et al. Tobacco quality grading algorithm based on multi-feature fuzzy recognition[J]. Jiangsu Agricultural Science, 2020, 48(20): 241-247. (in Chinese with English abstract)
[36] 王慧慧,孫永海,張婷婷,等. 鮮食玉米果穗外觀品質(zhì)分級(jí)的計(jì)算機(jī)視覺(jué)方法[J]. 農(nóng)業(yè)機(jī)械學(xué)報(bào),2010,41(8):156-159,165.
Wang Huihui, Sun Yonghai, Zhang Tingting, et al. Appearance quality grading for fresh corn ear using computer vision[J]. Transactions of the Chinese Society for Agricultural Machinery, 2010, 41(8): 156-159, 165. (in Chinese with English abstract)
[37] 朱博陽(yáng),吳睿龍,于曦. 人工智能助力當(dāng)代化學(xué)研究[J]. 化學(xué)學(xué)報(bào),2020,78(12):1366-1382.
Zhu Boyang, Wu Ruilong, Yu Xi. Artificial intelligence for contemporary chemistry research[J]. Acta Chimica Sinica, 2020, 78(12): 1366-1382. (in Chinese with English abstract)
[38] 王敏,周樹(shù)道,楊忠,等. 深度學(xué)習(xí)技術(shù)淺述[J]. 自動(dòng)化技術(shù)與應(yīng)用,2019,38(5):51-57.
Wang Min, Zhou Shudao, Yang Zhong, et al. Brief analysis of deep learning technology[J]. Techniques of Automation and Applications, 2019, 38(5): 51-57. (in Chinese with English abstract)
[39] 劉廣昊. 基于數(shù)字圖像和高光譜的柑橘葉片品種鑒別方法[D]. 重慶:西南大學(xué),2020.
Liu Guanghao. Identification Methods of Citrus Leaf Varieties Based on Digital Image and Hyperspectral[D]. Chongqing: Southwest University, 2020. (in Chinese with English abstract)
[40] 梁培生,孫輝,張國(guó)政,等. 基于主成分分析和BP神經(jīng)網(wǎng)絡(luò)的蠶蛹分類方法[J]. 江蘇農(nóng)業(yè)科學(xué),2016,44(10):428-430, 582.
Liang Peisheng, Sun Hui, Zhang Guozheng, et al. A classification method for silkworm pupae based on principal component analysis and BP neural network[J]. Jiangsu Agricultural Science, 2016, 44(10): 428-430, 582. (in Chinese with English abstract)
[41] 劉雅琪. 基于機(jī)器視覺(jué)的魚頭魚尾定位技術(shù)的研究[D]. 武漢:武漢輕工大學(xué),2021.
Liu Yaqi. Research on Positioning Technology of Fish Head and Tail Based on Machine Vision[D]. Wuhan: Wuhan Polytechnic University, 2021. (in Chinese with English abstract)
[42] 史甜甜. 基于深度學(xué)習(xí)的織物疵點(diǎn)檢測(cè)研究[D]. 杭州:浙江理工大學(xué),2019.
Shi Tiantian. Research on Fabric Defect Detection Based on Deep Learning[D]. Hangzhou: Zhejiang Sci-Tech University, 2019. (in Chinese with English abstract)
[43] 高晗,田育龍,許封元,等. 深度學(xué)習(xí)模型壓縮與加速綜述[J]. 軟件學(xué)報(bào),2021,32(1):68-92.
Gao Han, Tian Yulong, Xu Fengyuan, et al. Survey of deep learning model compression and acceleration[J]. Journal of Software, 2021, 32(1): 68-92. (in Chinese with English abstract)
[44] 鮑春. 基于FPGA的圖像處理深度學(xué)習(xí)模型的壓縮與加速[D]. 北京:北京工商大學(xué),2020.
Bao Chun. Deep Learning Model Compression and Acceleration for Image Processing Based on FPGA[D]. Beijing: Beijing Technology and Business University, 2020. (in Chinese with English abstract)
[45] Han R, Liu C H, Li S, et al. Accelerating deep learning systems via critical set identification and model compression[J]. IEEE Transactions on Computers, 2020, 69(7): 1059-1070.
Rapid detection technology for broken-winged broiler carcass based on machine vision
Wu Jiangchun, Wang Huhu※, Xu Xinglian
(Key Laboratory of Meat Processing and Quality Control, Ministry of Education, Nanjing Agricultural University, Nanjing 210095, China)
Broken-winged chicken carcasses are one of the most common defects in broiler slaughter plants. Manual detection cannot fully meet large-scale production, due to its high labor intensity and low efficiency and accuracy. There is therefore a strong demand for rapid, accurate detection of broken wings on chicken carcasses. This study aims to realize rapid inspection of broken-winged chicken carcasses in the process of broiler slaughter, in order to improve production efficiency on the slaughter line. 1 053 broiler carcass images were collected from a broiler slaughter line using a computer vision system, and a rapid identification method was constructed for broken-wing defects. Specifically, the front view of the chicken carcass was obtained with the machine vision system. Preprocessing was then applied to obtain chicken carcass images without background, including the weighted average method (graying), two-dimensional median filtering (denoising), and the iterative method (threshold segmentation). The code was written on the MATLAB platform. After that, a total of 11 characteristic values were calculated, covering the distances from the left and right ends of the chicken carcass image to the centroid and their difference (d1, d2, and dc), the heights of the lowest points of the two wings and their difference (h1, h2, and hc), the areas of the two wings and their ratio (S1, S2, and Sr), squareness (R), and width-length ratio (rate). Eight principal components were then obtained by principal component analysis for dimensionality reduction. The principal components and the characteristic values were separately imported into linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), random forest (RF), support vector machine (SVM), and BP neural network models.
Meanwhile, the input of the VGG16 model was the RGB images of the chicken carcass with the background removed. Finally, a comparison was made of the F1-scores and total accuracy of each model. The highest recall rate among the shallow learning models, 91.13%, was achieved by the RF model with the characteristic values as input parameters. The highest precision and F1-score among the shallow models, 96.28% and 92.58% respectively, were obtained by the SVM model with the characteristic values as input parameters. The highest total accuracy among the shallow models was achieved by the quadratic discriminant and SVM models with the characteristic values as input parameters, both at 91.78%. Moreover, the F1-score and total accuracy of the VGG16 model were the highest among all model combinations, at 94.35% and 93.28%, respectively. In terms of prediction time, the shortest belonged to the SVM model with the principal components as input parameters, which determined 353 sample images in 0.000 9 s, an average speed of 3.92×10⁵ images per second. By contrast, the longest prediction time was observed for the VGG16 model, which took 24.46 s to determine 253 sample images, an average speed of 10.34 images per second. In conclusion, the VGG16 model can be expected to serve as the best classifier of broken wings in chicken carcasses.
machine vision; machine learning; broiler carcass; broken wing detection
10.11975/j.issn.1002-6819.2022.22.027
TS251.7
A
1002-6819(2022)-22-0253-09
吳江春,王虎虎,徐幸蓮. 基于機(jī)器視覺(jué)的雞胴體斷翅快速檢測(cè)技術(shù)[J]. 農(nóng)業(yè)工程學(xué)報(bào),2022,38(22):253-261.doi:10.11975/j.issn.1002-6819.2022.22.027 http://www.tcsae.org
Wu Jiangchun, Wang Huhu, Xu Xinglian. Rapid detection technology for broken-winged broiler carcass based on machine vision[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2022, 38(22): 253-261. (in Chinese with English abstract) doi:10.11975/j.issn.1002-6819.2022.22.027 http://www.tcsae.org
2022-08-26
2022-11-05
Supported by the National Modern Agricultural Industry Technology System of China (CARS-41)
Wu Jiangchun, research interests: meat processing and quality and safety control. Email: 2021808112@stu.njau.edu.cn
※Corresponding author: Wang Huhu, Professor, PhD, research interests: meat processing and quality and safety control. Email: huuwang@njau.edu.cn