
    RPNet: Rice plant counting after tillering stage based on plant attention and multiple supervision network

    The Crop Journal, 2023, Issue 5

    Xiaodong Bai, Susong Gu, Pichao Liu, Aiping Yang, Zhe Cai, Jianjun Wang, Jianguo Yao

    a School of Computer Science and Technology,Hainan University,Haikou 570228,Hainan,China

    b Institute of Advanced Technology,Nanjing University of Posts and Telecommunications,Nanjing 210003,Jiangsu,China

    c Agricultural Meteorological Center,Jiangxi Meteorological Bureau,Nanchang 330045,Jiangxi,China

    Keywords: Rice; Precision agriculture; Plant counting; Deep learning; Attention mechanism

    ABSTRACT Rice is a major food crop and is planted worldwide. Climatic deterioration, population growth, farmland shrinkage, and other factors have necessitated the application of cutting-edge technology to achieve accurate and efficient rice production. In this study, we focus on the precise counting of rice plants in paddy fields and design a novel deep learning network, RPNet, consisting of four modules: feature encoder, attention block, initial density map generator, and attention map generator. Additionally, we propose a novel loss function called RPloss. This loss function considers the magnitude relationship between different sub-loss functions and ensures the validity of the designed network. To verify the proposed method, we conducted experiments on our recently presented URC dataset, an unmanned aerial vehicle dataset that is quite challenging for rice plant counting. For experimental comparison, we chose popular or recently proposed counting methods, namely MCNN, CSRNet, SANet, TasselNetV2, and FIDTM. In the experiments, the mean absolute error (MAE), root mean squared error (RMSE), relative MAE (rMAE), and relative RMSE (rRMSE) of the proposed RPNet were 8.3, 11.2, 1.2%, and 1.6%, respectively, on the URC dataset. RPNet surpasses state-of-the-art methods in plant counting. To verify the universality of the proposed method, we conducted experiments on the well-known MTC and WED datasets. The final results on these datasets showed that our network achieved the best results compared with excellent previous approaches. The experiments showed that the proposed RPNet can be utilized to count rice plants in paddy fields and replace traditional methods.

    1. Introduction

    Rice is the second most important cereal crop in the world and accounts for a quarter of the world's total grain output, with an annual output of approximately 800 Mt [1,2]. Rice also has high economic value. For example, rice seeds can be utilized for starch production, and rice stalks can be used for feed and paper processing [3,4]. However, owing to global warming, the cultivated farmland and water consumption for rice have decreased, resulting in a decline in production [5-7]. Therefore, there is an urgent need to promote agricultural automation and precision agriculture technology to increase rice production. In China's rice production, plant counts depend heavily on labor-intensive regional sampling and manual observations. Rice plant counting is a foundational task with numerous applications in rice production, including yield prediction, planting density selection, lodging resistance tests, disaster estimation, and rice breeding [8,9]. Compared with the traditional manual counting method, automatic plant counting saves human resources and has higher accuracy while avoiding direct contact between the observer and the rice.

    As the name implies, the purpose of plant counting is to accurately predict the number of plants against various complex backgrounds. In recent years, many crop counting methods have been reported. These can be roughly divided into three types: traditional machine learning (TML) methods, detection-based deep learning (DDL) methods, and regression-based deep learning (RDL) methods.

    In TML methods, plant counting depends primarily on a TML algorithm. For example, Jin et al. [10] applied green pixel segmentation and PSO-SVM to estimate wheat plant density. Mojaddadi et al. [11] performed oil palm age estimation and counting from Worldview-3 satellite images and light detection and ranging (LiDAR) airborne imagery using an SVM algorithm. Through the extraction of spectral features from crop multispectral images and the application of MLR, Bikram et al. [12] estimated crop emergence. Although the above methods have achieved very good performance, all of them depend on image features manually selected for the specific crops or trees in their research. In DDL methods, crop counting is typically achieved using bounding box labeling and detection-based neural networks. The total number of bounding boxes in an output image is the number of predicted plants. For example, Hasan et al. [13] and Madec et al. [14] utilized a Fast RCNN architecture to realize wheat ear detection. A new multi-classifier cascade-based method was proposed by Bai et al. [15] to realize rice spike detection and automatic determination of the heading stage. Yu et al. [16] studied the potential of the U-Net to provide an accurate segmentation of tassels from RGB images and then realized dynamic monitoring of the maize tassel. However, these detection-based methods typically have drawbacks. First, the bounding box labeling is expensive and labor-intensive, especially for the high-throughput images collected by unmanned aerial vehicles (UAVs). Second, because non-maximum suppression is required to delete some candidate boxes, these models may ignore crowded and overlapping plants. Finally, time-consuming parameter tuning is typically required during the training stage [17]. Compared to DDL counting methods, RDL methods have largely overcome these defects: they do not require tedious bounding box labeling and perform better in dense scenes. Several advances have been made in this regard. Rahnemoonfar and Sheppard [18] proposed a density regression model similar to ResNet for counting tomatoes. Lu et al. [19] introduced a new maize tassel-counting model that included a novel upsampling operator to enhance visualization and counting accuracy. To realize citrus tree counting and locating, Osco et al. [20] considered individual bands and their combinations. A multicolumn convolutional network was presented by Lin et al. [21] to predict a density map and estimate the number of litchi flowers. Ao et al. [22] collected 5670 images of three types of rice seeds with different qualities and constructed a model to detect the thousand-grain weight of rice. An active learning approach that automatically chooses the most relevant samples was proposed by Xiao et al. [23] to create an oil palm density map for Malaysia and Indonesia. Min et al. [24] proposed a maize ear phenotyping pipeline to estimate kernels per ear, rows per ear, and kernels per row in an interpretable manner. Although the above excellent RDL methods have been proposed, they still cannot be applied to accurately count rice plants in high-throughput UAV images. First, the crop objects selected by previous researchers differ significantly from rice plants in shape, color, and appearance. Second, crop counting from the perspective of a drone has unique characteristics, such as long imaging distance, high-throughput imaging, and local texture loss. Third, we believe that a more effective feature fusion strategy and a more reasonable loss function can further improve network counting performance.

    In this study, we propose a new rice plant counting network, RPNet, containing four modules: feature encoder, attention block, IDMG, and AMG. The proposed method first extracts shallow and deep features of the input image using the feature encoder. Next, because the extracted features are redundant, an attention block is adopted to highlight their unique information and realize a more effective feature fusion. Finally, the IDMG and AMG modules are presented to maintain accurate counting functions and regress the output density map. Moreover, we propose a new loss function (RPloss) that fully considers the different sub-loss functions supervising the training process of our network to realize rice plant counting. For experimental comparison, we selected classic networks such as MCNN and CSRNet and recently proposed competitive networks such as FIDTM and TasselNetV2. The source code for RPNet is available at https://github.com/xdbai-source/RPNet.

    2. Materials and methods

    In this section, we provide a detailed introduction to the network pipeline of the RPNet we have designed. Additionally, we introduce the datasets URC [25], MTC [26], and WED [14] used in our experiments.

    2.1. Framework overview

    The main structure of our proposed network is illustrated in Fig. 1. For the input image I, the feature encoder module was first utilized to extract its features F ∈ R^(C×H×W), where C, H, and W represent the number of channels, height, and width, respectively. Second, the features f1-f4 extracted from the feature encoder were sent to the attention block to highlight the difference between the shallow and deep features and preserve the effective features. In the IDMG module, F4 was employed to perform layer-by-layer upsampling for feature fusion and obtain the initial density map (IDM) for rice counting. Using the same method, the attention map (AM) was generated for background suppression in the AMG module. The above IDM and AM were then multiplied to obtain the final output density map. In Fig. 1A, the dot map is a 0-1 matrix indicating the labeled plant positions of an input image. The density map was obtained via the convolution of the above dot map. As shown in Fig. 1A, the ground-truth density map was obtained by down-sampling the density map and was then used to supervise the network-predicted density map. Next, the ground-truth AM, obtained using the method described in Section 2.4, was applied to supervise the AM generated by the AMG.

    Fig. 1. The network structure of the proposed RPNet.

    2.2. Attention block

    In the feature encoder, VGG16 [27] was adopted to obtain shallow and deep features f1 to f4. Considering the excellent spatial/channel feature extraction and fusion ability of the CBAM [28], it was introduced into rice plant counting research for the first time and applied as the main part of the attention block to enhance the differences between features and enable feature fusion. Fig. 2 shows the structure of the CBAM. The channel-dimension attention weights MC were first calculated using the channel attention module. These weights were then multiplied with the input feature f to obtain the channel-refined feature f'. Next, using the spatial attention module, the spatial-dimension attention weights MS were obtained. Finally, MS was multiplied by the channel-refined feature f' to generate the refined feature F. The detailed structures of the channel attention module and spatial attention module are presented below.

    Fig. 2. Architecture of the CBAM.

    As shown in Fig. 2, in the channel attention module, the input feature f ∈ R^(C×H×W) was first subjected to global average pooling and global maximum pooling along the H×W dimensions to obtain features fCAvg, fCMax ∈ R^(C×1×1), respectively. Then, fCAvg and fCMax were each passed through the shared fully connected layer. Next, after the element-wise summation of fCAvg and fCMax, the channel attention weights MC were calculated by the sigmoid activation function. In the spatial attention module, the channel-refined input feature f' was first subjected to average pooling and maximum pooling along the channel dimension to obtain features fSAvg, fSMax ∈ R^(1×H×W). After that, fSAvg and fSMax were fused through a convolution layer. Finally, we obtained the spatial attention weights MS through the sigmoid activation function.
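    A minimal PyTorch sketch of this channel-then-spatial attention follows. The reduction ratio of 16 and the 7×7 spatial kernel are common CBAM defaults, not values confirmed by the paper.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Sketch of the CBAM used in the attention block."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(  # shared fully connected layers (as 1x1 convs)
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, f):
        # Channel attention: pool over HxW, share the MLP, sum, sigmoid -> M_C
        avg = self.mlp(torch.mean(f, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(f, dim=(2, 3), keepdim=True))
        f = f * torch.sigmoid(avg + mx)  # channel-refined feature f'
        # Spatial attention: pool over channels, fuse by conv, sigmoid -> M_S
        s = torch.cat([f.mean(dim=1, keepdim=True),
                       f.amax(dim=1, keepdim=True)], dim=1)
        return f * torch.sigmoid(self.spatial(s))  # refined feature F

refined = CBAM(256)(torch.rand(1, 256, 64, 64))  # example usage
```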

    2.3. IDMG and AMG modules

    As shown in Fig. 1A, the IDMG module comprises L1, L2, an upsampling operation, and an initial density map head (IDMH). Adopting a similar structure, the AMG module is composed of L3, L4, an upsampling operation, and an attention map head (AMH). More detailed configuration information regarding the IDMG and AMG is provided in Fig. 1B. In the IDMG module, F4 was upsampled twice and then sent to L1 with F3 to obtain FD34. In this process, L1 realizes information fusion across different scales and depths by stacking convolution layers. Similarly, FD34 was upsampled twice and merged by L2 with F2 to obtain FD234, where L2 integrates information at different scales and depths from F2 and FD34. Finally, we upsampled FD234 twice and fused it with F1 in the IDMH to generate the IDM. The size of the IDM was 1/2 that of the original input image. Through the aforementioned continuous fusion operations, our model can capture information from multiple scales and depths to handle the scale variation of the samples.

    To reduce the effect of noise in the image background, we designed an AMG module for the proposed RPNet. In the AMG module, F4 was first upsampled twice and then merged with F3 using L3 to obtain the fused attention feature FA34. Next, FA34 was upsampled twice and merged with F2 to obtain FA234, where FA234 integrates the attention information from F2 and FA34. Finally, we upsampled FA234 twice and fused it with F1 in the AMH to generate the AM. Its size was also 1/2 that of the original input image. By fusing the attention information at different scales and depths, the model accurately focused on the feature regions of interest in scenes with large scale changes.
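    The upsample-and-merge step shared by the IDMG and AMG (the L1-L4 blocks in Fig. 1B) can be sketched as follows; the channel widths, the bilinear upsampling, and the two stacked 3×3 convolutions are our assumptions, chosen to match the VGG16 feature pyramid.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuseBlock(nn.Module):
    """One upsample-and-merge step: upsample the deep feature x2,
    concatenate with the shallower feature, fuse with stacked convs."""
    def __init__(self, deep_ch, shallow_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(deep_ch + shallow_ch, out_ch, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(True),
        )

    def forward(self, deep, shallow):
        deep = F.interpolate(deep, scale_factor=2, mode="bilinear",
                             align_corners=False)  # upsample x2
        return self.conv(torch.cat([deep, shallow], dim=1))

# e.g. FD34 = L1(upsample(F4), F3), with VGG16-style channel widths assumed
l1 = FuseBlock(512, 256, 256)
fd34 = l1(torch.rand(1, 512, 16, 16), torch.rand(1, 256, 32, 32))
```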

    After obtaining the IDM and AM, the final density map (FDM) was calculated using Eq. (1), where conv represents the convolution operation and '⊙' denotes element-wise multiplication.
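    A minimal sketch of this final step, assuming a single-channel 1×1 fusion convolution (the actual layer configuration is given by the released code):

```python
import torch
import torch.nn as nn

# Sketch of Eq. (1): FDM = conv(IDM ⊙ AM); shapes follow the paper
# (both maps are at 1/2 the input resolution).
fuse = nn.Conv2d(1, 1, kernel_size=1)
idm = torch.rand(1, 1, 64, 64)                # initial density map from IDMG
am = torch.sigmoid(torch.rand(1, 1, 64, 64))  # attention map from AMG
fdm = fuse(idm * am)                          # element-wise multiply, then conv
```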

    2.4. Multiple supervision loss function

    In this section, we introduce the generation process of the ground-truth density map Dgt and the ground-truth AM Agt. Subsequently, we introduce the newly proposed multiple supervision loss function, RPloss.

    If δ(·) is defined as an impulse function, then δ(x - xi) can represent a plant at image position xi. We then employed Eq. (2) to describe the dot map with N plants, namely H(x). Finally, similar to the method used in [29], we convolved H(x) with the fixed Gaussian kernel Gσ(x) to obtain the ground-truth density map given in Eq. (3).
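    The typeset equations did not survive extraction; based on the definitions above, Eqs. (2) and (3) take the standard dot-map and Gaussian-convolution form (xi denotes the i-th annotated plant position):

```latex
H(x) = \sum_{i=1}^{N} \delta(x - x_i)          % Eq. (2): dot map of N plants
D^{gt}(x) = H(x) * G_{\sigma}(x)               % Eq. (3): ground-truth density map
```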

    The ground-truth density map generated using Eq. (3) can be regarded as a probability map of plant appearance. Therefore, for image pixels with zero value, we consider that they do not contain plants and regard them as background. Conversely, for pixels with nonzero values, we assume that they may contain plants and regard them as foreground. Based on the above considerations, we regard the foreground areas as the ground-truth AM Agt and expect our model to pay more attention to these areas. Consequently, Eq. (4) can be applied to obtain the ground-truth AM.
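    As a concrete illustration, the following sketch builds both ground-truth targets from dot annotations; the function and variable names are ours, and σ = 6 follows the best ablation setting reported in Section 3.4.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_ground_truth(points, h, w, sigma=6.0):
    """Build the ground-truth density map and attention map from dot
    annotations. points: iterable of (row, col) plant centers."""
    dot_map = np.zeros((h, w), dtype=np.float32)  # H(x): 0-1 dot matrix
    for r, c in points:
        dot_map[int(r), int(c)] = 1.0
    density = gaussian_filter(dot_map, sigma)     # Eq. (3): D_gt = H * G_sigma
    attention = (density > 0).astype(np.float32)  # Eq. (4): foreground mask A_gt
    return density, attention

# density.sum() stays close to the plant count, so summing the map counts plants
density, attention = make_ground_truth([(100, 120), (300, 40)], 512, 512)
```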

    In the following, we describe the proposed multiple supervision loss function, RPloss. During the network training process, the ground-truth density map and ground-truth AM obtained above were utilized for network supervision and learning.

    For AM supervision, we chose the binary cross-entropy loss Lbce as our sub-loss function, as shown in Eq. (5), where K represents the batch size, and A_i^gt and A_i^pred represent the i-th ground-truth AM and predicted AM, respectively.

    For the density map supervision, we chose the L1 loss to supervise the counting performance and further used the L2 loss to improve the quality of the generated density map. The L1 and L2 losses can be written as Eqs. (6) and (7), respectively, where D_i^gt and D_i^pred represent the ground-truth density map and the predicted density map, respectively.

    Moreover, we introduced a probability p representing correct manual labeling; the probability of incorrect labeling is 1 - p. Next, we recorded the area in the input image that contains a plant as the positive domain and the area containing no plant as the negative domain. Consequently, if a pixel x_m in the ground-truth density map is greater than zero, the probability of it falling in the positive domain is p, and the probability of it falling in the negative domain is 1 - p. If the pixel value is less than or equal to zero, it is regarded as a background point. Because background points require no labeling, we assumed that x_m is error-free; therefore, the probability of labeling it in the positive domain is 0, and the probability of labeling it in the negative domain is 1. Eqs. (8) and (9) express the above operation.

    We calculated the predicted counts over the positive and negative domains using Eqs. (10) and (11):

    We expect the sum over the positive domain, Cpos, to tend toward the ground-truth count Cgt, and the sum over the negative domain, Cneg, to tend toward zero so as to suppress the background. Hence, our positive-negative loss LPN can be expressed as:

    To sum up, our multiple supervision loss function was:

    where λ1, λ2, and λ3 are hyperparameters.

    In the experiments, the magnitudes of the sub-loss functions in Eq. (13) were quite different. Therefore, for better compatibility and adaptation between sub-loss functions, we set their weights λ1, λ2, and λ3 to 1, 10^-3, and 10^-4, respectively. In our UAV-based rice plant counting dataset, plants were approximately evenly distributed, and there were relatively few scenes with plant occlusion. After completing the manual labeling of the URC dataset, a two-round manual review was conducted. Therefore, the manual labeling errors were negligible, and we set p in LPN to 1 in this case.
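    To make the composition concrete, here is a minimal PyTorch sketch of RPloss under our reading of the text: the BCE term supervises the AM, L1/L2 terms supervise the density map, and LPN compares domain sums. The exact assignment of λ1-λ3 to sub-losses in Eq. (13) is an assumption, as are all function names.

```python
import torch
import torch.nn.functional as F

def rp_loss(pred_dm, gt_dm, pred_am, gt_am, p=1.0,
            lam1=1.0, lam2=1e-3, lam3=1e-4):
    """Sketch of RPloss. pred_dm/gt_dm: (B,1,H,W) density maps;
    pred_am/gt_am: (B,1,H,W) attention maps in [0,1];
    p: labeling confidence (set to 1 for URC, Section 2.4)."""
    l_bce = F.binary_cross_entropy(pred_am, gt_am)   # Eq. (5): AM supervision
    l1 = F.l1_loss(pred_dm, gt_dm)                   # Eq. (6): counting term
    l2 = F.mse_loss(pred_dm, gt_dm)                  # Eq. (7): map-quality term
    pos = (gt_dm > 0).float()                        # positive (plant) domain mask
    c_pos = (p * pred_dm * pos).sum(dim=(1, 2, 3))         # Eq. (10)
    c_neg = (pred_dm * (1.0 - pos)).sum(dim=(1, 2, 3))     # Eq. (11)
    c_gt = gt_dm.sum(dim=(1, 2, 3))
    # Eq. (12): push C_pos toward C_gt and C_neg toward zero
    l_pn = (torch.abs(c_pos - c_gt) + torch.abs(c_neg)).mean()
    # Eq. (13): weighted sum; which lambda scales which term is our assumption.
    return lam1 * l_bce + lam2 * (l1 + l2) + lam3 * l_pn
```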

    2.5. Datasets

    In this study, we focus on accurate rice plant counting in paddy fields based on high-throughput images collected by a UAV. The proposed RPNet was evaluated using our recently proposed UAV-based rice counting dataset, URC. To further verify the counting capability of the proposed network, the MTC and WED datasets were introduced in our experiments. Below, we introduce each dataset separately.

    URC: This recently proposed dataset contains 355 high-resolution (5472 × 3468) RGB rice images collected using a low-altitude drone (DJI Phantom 4 Advanced) and 257,793 manually annotated points. During the manual labeling of the URC dataset, we followed the labeling method described by Xiong et al. [30] and placed a dot annotation at the center of each rice plant. We wrote a MATLAB R2018 script to improve the efficiency of the manual labeling and facilitate the display of the labeled annotations. In the URC, 246 images were randomly selected for training, and the remaining 109 images were used for testing. Each image contains 84-1125 rice plants, with an average of 726 plants per image. The dataset is challenging owing to varied lighting conditions. The images were taken at a height of 7 m using DJI GS Pro in Nanchang (28°31′10″N, 116°4′6″E), Jiangxi, China. Each pixel in the URC dataset images corresponds to approximately 0.01 square centimeters. All collected images were stored in JPG format.

    MTC: This is the well-known maize tassel counting dataset containing 361 high-resolution RGB maize images with three image resolutions: 3648 × 2736, 4272 × 2848, and 3456 × 2304. In the dataset, 186 images were used for training, and 175 images were used for testing. All images were automatically collected using a camera (Olympus E450) fixed in four fields: Zhengzhou (34°16′N, 112°42′E), Henan, China; Tai'an (39°38′N, 116°20′E), Shandong, China; Gucheng (30°53′N, 111°7′E), Hubei, China; and Jalaid (46°52′N, 122°32′E), Inner Mongolia, China. The shooting height was 5 m, except in Gucheng, where it was 4 m. Each maize tassel in the dataset was manually labeled using a bounding box. These bounding-box annotations were directly transformed into dot annotations by calculating their central coordinates.

    WED: This is a well-known wheat ear detection dataset used to count wheat ears. It contains 236 high-resolution RGB images of six different types of wheat ears, with an image resolution of 6000 × 4000. In the dataset, 165 images were used for training, and 71 images were used for testing. Each image contains 80-170 wheat ears, and the entire dataset contains 30,729 labeled wheat ears. The images were captured by a SONY ILCE 6000 in Gréoux-les-Bains, France (43.7°N, 5.8°E), at a height of 2.9 m. Each wheat ear was manually labeled with a bounding box using 'LABELIMG' [14].

    Representative images from the three datasets are shown in Fig. 3, with images from the URC, MTC, and WED datasets in the top, middle, and bottom rows, respectively.

    Fig. 3. Representative images from the URC, MTC, and WED datasets. The images in rows 1 to 3 are from the URC, MTC, and WED datasets, respectively.

    3. Results

    3.1. Implementation details

    VGG16_bn, with parameters pre-trained on ImageNet, was employed as the backbone of the feature encoder. The number of training epochs was set to 500. The batch size was 3, and the initial learning rate, Lr, was set to 10^-4. For the three datasets, we first resized all original images and then randomly cropped, flipped, and scaled each training image for image augmentation. The Adam algorithm with a step factor of 10^-5 was used as the optimizer. In the spatial attention module of the CBAM, the convolution kernel of the fusion convolution layer was set based on Hi, the height of the input feature map. The resizing sizes or downsampling ratios for the different datasets, the crop sizes, and the σ used for ground-truth density map generation are all listed in Table 1. All experiments were based on the PyTorch framework and accelerated using an RTX 3090.
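    A minimal sketch of these training settings follows; the Conv2d stand-in replaces the real RPNet, the random tensors replace the actual data loader, and the MSE loss is a placeholder for the full RPloss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Conv2d(3, 1, kernel_size=3, padding=1)  # stand-in for RPNet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # initial Lr = 1e-4

for step in range(2):                        # illustrative; the paper trains 500 epochs
    images = torch.rand(3, 3, 256, 256)      # batch size 3 of augmented crops
    gt_density = torch.rand(3, 1, 128, 128)  # density maps are 1/2 the input size
    pred = F.interpolate(model(images), size=(128, 128))  # match target resolution
    loss = F.mse_loss(pred, gt_density)      # placeholder for RPloss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```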

    Table 1. Image preprocessing settings for each dataset.

    3.2. Evaluation metrics

    To evaluate the counting performance of the different approaches, we adopted the MAE, RMSE, rMAE, and rRMSE as evaluation metrics. These four indicators can be expressed as Eqs. (14) to (17):

    where N is the number of images, and z_i and ẑ_i are the ground-truth and predicted counts of the i-th image, respectively.
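    Since the typeset Eqs. (14)-(17) did not survive extraction, the following sketch uses the standard definitions consistent with the text; normalizing each image's error by its ground-truth count for the relative metrics is our assumption.

```python
import numpy as np

def counting_metrics(z, z_hat):
    """MAE, RMSE, rMAE, rRMSE over N images, given ground-truth counts z
    and predicted counts z_hat."""
    z, z_hat = np.asarray(z, float), np.asarray(z_hat, float)
    err = z_hat - z
    mae = np.mean(np.abs(err))                 # Eq. (14)
    rmse = np.sqrt(np.mean(err ** 2))          # Eq. (15)
    rmae = np.mean(np.abs(err) / z)            # Eq. (16): relative MAE
    rrmse = np.sqrt(np.mean((err / z) ** 2))   # Eq. (17): relative RMSE
    return mae, rmse, rmae, rrmse
```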

    3.3. Comparison with state-of-the-art methods

    To evaluate the proposed RPNet rice plant counting method, it was compared with some classic counting methods (MCNN, CSRNet, and SANet) and recently proposed counting methods (TasselNetV2 and FIDTM) on the URC dataset. To test the feasibility of RPNet for other crop counting tasks, the MTC and WED datasets were also applied in our experiments. In the following, we introduce the comparison experiments on these three datasets.

    The performance of the different counting methods on the URC dataset is listed in Table 2. The MAE and RMSE of our RPNet were the lowest on this dataset, at 8.3 and 11.2, respectively, an improvement of 17.8% and 16.4% over the second-best SANet. The rMAE and rRMSE of the proposed network were 1.2% and 1.6%, respectively. Table 2 indicates the accuracy and robustness of our network. To further demonstrate the counting performance of our network, Fig. S1 shows some final output density maps for the URC dataset.

    Table 2. Performance of different methods on the test set of the URC dataset.

    In Fig. S1, the five columns from left to right show the input test images, the ground-truth density maps, and the predicted density maps generated by RPNet, SANet, and CSRNet, respectively. 'GT' and 'PD' denote the manual ground-truth and the network-predicted count values, respectively. As shown in Fig. S1, the generated density maps indicate that our model accurately counted rice plants. Furthermore, the network-estimated density maps were similar to the ground-truth density maps, indicating that they were of very high quality. As shown in the first and second rows of Fig. S1, RPNet can, to a certain extent, deal with sunlight reflection in rice fields. Even in the case of strong sun reflection, the density map estimated by RPNet was essentially the same as the ground-truth density map. Interestingly, SANet also showed a good ability to resist light reflection. However, as shown in the second, third, and fourth rows of Fig. S1, the density maps predicted by RPNet were better than those predicted by SANet. Moreover, the density maps generated by CSRNet were similar to the ground-truth density maps, but its counting results were not as accurate as those of the proposed RPNet. We believe that the good counting accuracy of RPNet is owing to the effective integration of the four modules and the ability of RPloss to achieve effective supervised learning during the training stage. In future research, sun-glint suppression could also be achieved by improving the imaging method and implementing it in image preprocessing [35]. To address this issue, researchers can acquire UAV images using both RGB and near-infrared cameras and use the near-infrared images to correct for the sun-glint effect in the RGB images.

    For the MTC dataset, the MAE and RMSE of the proposed method also achieved the best results of 3.1 and 5.0, respectively, as listed in Table 3. Compared to the second-best SFC2Net, the MAE and RMSE improved by 38% and 46%, respectively. The rMAE and rRMSE of RPNet were 20.0% and 46.1%, respectively, comparable to those of the other methods. Fig. S2 shows the final output density maps for the MTC dataset. As shown in the first and second rows of Fig. S2, RPNet performed well in both dense and sparse scenes. In the third row of Fig. S2, the newly sprouted maize tassels look very similar to the surrounding leaves owing to changes in illumination. In this chaotic background scenario, our RPNet still distinguished the maize tassels from the background. We believe that the good performance of RPNet derives from the introduction of the plant attention block module and effective background suppression strategies, which allow the network to adapt to changes in light intensity and focus more effectively on the maize tassels in the image. As shown in the second and third rows of Fig. S2, the density maps generated by SANet and CSRNet were unclear owing to interference from the image background. Compared with SANet and CSRNet, RPNet had better counting accuracy and a higher-quality predicted density map. We believe that this is mainly because RPNet is better at suppressing the image background.

    Table 3. Performance of different methods on the test set of the MTC dataset.

    The counting performance of the different methods on the WED dataset is listed in Table 4. The proposed RPNet obtained the best MAE and RMSE, which were 4.0 and 4.9, as shown in Table 4. Although the improvement over the second-best method was not significant, it was very competitive because the images all exhibit messy backgrounds and dense wheat ear distributions. The five pictures in Fig. S3 differ in image background and light intensity, and without the contrast of the ground-truth density map, it is difficult for the human eye to identify all wheat ears. The estimated density maps in the third column of Fig. S3 show that RPNet accurately identified plants and achieved a good counting performance. At the same time, we observed less background noise. As shown in the third and fifth rows of Fig. S3, the density maps generated by SANet and CSRNet were not as close to the ground truth as ours. The results obtained by RPNet indicate that the AM generated in the proposed RPNet was well trained with the help of the presented RPloss. Thus, RPNet effectively distinguished wheat ears from complex image backgrounds.

    3.4. Ablation experiments

    In this section, we present ablation experiments to demonstrate the effective configuration of the presented network. All ablation experiments were conducted on the URC dataset. In the first ablation experiment, we examined the impact of the hyperparameter σ, which determines the size of the fixed Gaussian kernel used to generate the ground-truth density map. The experimental results for this hyperparameter in Table S1 show that our network performed best when σ was set to 6. When σ was small, some detailed plant features were lost because the window of the fixed Gaussian kernel could not completely cover the plant region. When σ was too large, the window of the fixed Gaussian kernel extended beyond the plant region, and some interference information was included in the generated ground-truth density map, slightly reducing the counting performance of the network.

    We performed another ablation experiment to demonstrate the effectiveness of each sub-loss function. All related results are given in Table S2, where '√' indicates that the corresponding sub-loss function was utilized. As shown in Table S2, the best performance of our network was obtained when all sub-loss functions were employed. This experiment verified the effectiveness of the multiple supervision loss function we designed for the proposed RPNet. By comparing the second row with the sixth row, and the fifth row with the eighth row of Table S2, we find that the proposed LPN improved the counting performance. By comparing the sixth row with the eighth row, we found that the AM effectively suppressed the background and improved the counting accuracy.

    Additionally, we performed an ablation experiment to analyze the weight hyperparameters λ1, λ2, and λ3 in RPloss. The comparative experiments were based on the URC dataset. The results for the different hyperparameters are listed in Table S3, where rows 1-6 represent the ablation experiments for λ1. In these six experiments, we fixed the parameters λ2 and λ3 and then utilized different values of λ1 to compare the counting performance of the proposed RPNet on the URC dataset. Similarly, rows 7-12 represent the ablation experiments for λ2, and rows 13-18 are the experimental results for λ3. In the initial experiments, we found that the magnitudes of the sub-loss functions were significantly different. We believe that the weight hyperparameters are related not only to the importance of the different sub-loss functions but also to the magnitudes of their loss values. Therefore, for better compatibility and adaptation between the sub-loss functions, we set the weights λ1, λ2, and λ3 to 1, 10^-3, and 10^-4, respectively, to balance the different sub-loss values. As shown in Table S3, the weights we chose performed well. In fact, it is difficult to determine optimal hyperparameters, and we propose a feasible option for these weight hyperparameters. Researchers can utilize other methods to obtain better weights and further improve network performance. In future research, the selection of neural network hyperparameters may be a promising direction.

    Finally, we analyzed the running efficiency of the proposed RPNet on a GPU server equipped with an RTX 3090. Table S4 lists the running efficiency of the different networks. As shown, the FPS of a neural network is closely related to its number of parameters and FLOPs. The running efficiency of RPNet reached 3.57 FPS, comparable with that of SANet. Although SANet has fewer network parameters, its running time was relatively long, mainly because it adopts a series of transposed convolutions. In this study, frequent rice counting was unnecessary; therefore, we did not need a rice counting network with high running efficiency.

    4. Conclusions

    In this study, we proposed a powerful rice plant counting network (RPNet) that accurately counts the number of rice plants in a paddy field based on a UAV vision platform. The main contributions of this study are as follows. Firstly, the presented RPNet can accurately count the number of rice plants in paddy fields based on a UAV vision platform. In the experiments, RPNet was evaluated using our recently proposed rice plant dataset, URC, and its performance surpassed that of many state-of-the-art methods for rice plant counting tasks. Additionally, the MTC and WED datasets were used in our experiments, and our network achieved the best performance on both. This demonstrates that our network has potential for other plant-counting tasks and is highly competitive compared with other excellent counting approaches. Secondly, we proposed an efficient loss function called RPloss. This loss function considers different sub-loss functions that achieve multiple supervision during network training and ensures the validity of network parameter learning. Lastly, the convolutional block attention module (CBAM) was introduced into rice plant counting and applied as the rice plant attention block in the RPNet pipeline to extract and fuse multi-scale feature maps. In conclusion, experiments showed that the proposed network can be employed for the accurate counting of rice plants in paddy fields, thereby replacing the traditional tedious manual counting method.

    CRediT authorship contribution statement

    Xiaodong Bai: Conceptualization, Methodology, Funding Acquisition, Writing - Review, Project Administration. Susong Gu: Methodology, Software, Validation, Writing. Pichao Liu: Conceptualization, Methodology. Aiping Yang: Resources, Formal Analysis, Investigation. Zhe Cai: Formal Analysis, Investigation. Jianjun Wang: Resources, Formal Analysis. Jianguo Yao: Project Administration, Funding Acquisition.

    Declaration of competing interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgments

    The authors would like to thank the field management staff at the Nanchang Key Laboratory of Agrometeorology for their continued support of our research. The authors also thank Shiqi Chen, Tingting Xie, Xiaodi Zhou, Dongjun Chen, and Haocheng Li for their contribution to manual image labeling, and Shiqi Chen and Tingting Xie for contributing to the language revision. This work was supported by the National Natural Science Foundation of China (61701260 and 62271266), the Postgraduate Research & Practice Innovation Program of Jiangsu Province (SJCX21_0255), and the Postdoctoral Research Program of Jiangsu Province (2019K287).

    Appendix A. Supplementary data

    Supplementary data for this article can be found online at https://doi.org/10.1016/j.cj.2023.04.005.
