
    An algorithm for automatic identification of multiple developmental stages of rice spikes based on improved Faster R-CNN

The Crop Journal, 2022, Issue 5

Yuanqin Zhang, Deqin Xiao *, Youfu Liu, Huilin Wu

a College of Mathematics and Informatics, South China Agricultural University, Guangzhou 510642, Guangdong, China

b Guangzhou National Modern Agricultural Industry Science and Technology Innovation Center, Guangzhou 511458, Guangdong, China

Keywords: Improved Faster R-CNN; Rice spike detection; Rice spike count; Developmental stage identification

ABSTRACT Spike development directly affects the yield and quality of rice. We describe an algorithm for automatically identifying multiple developmental stages of rice spikes (AI-MDSRS) that transforms the automatic identification of multiple developmental stages of rice spikes into the detection of rice spikes of diverse maturity levels. Because rice spikes are dense and small and their scales vary greatly across growth and development stages, their effective and accurate detection is challenging. We describe a rice spike detection model based on an improved faster regions with convolutional neural network (Faster R-CNN). The model incorporates the following optimization strategies: first, Inception-ResNet-v2 replaces VGG16 as the feature extraction network; second, a feature pyramid network (FPN) replaces single-scale feature maps for fusion with the region proposal network (RPN); third, region of interest (RoI) alignment replaces RoI pooling, and distance-intersection over union (DIoU) is used as the standard for non-maximum suppression (NMS). The performance of the proposed model was compared with that of the original Faster R-CNN and YOLOv4 models. The mean average precision (mAP) of the rice spike detection model was 92.47%, a substantial improvement on the original Faster R-CNN model (40.96% mAP) and 3.4% higher than that of the YOLOv4 model, experimentally indicating that the model is more accurate and reliable. The identification results of the model for the heading-flowering, milky maturity, and full maturity stages were within two days of the results of manual observation, fully meeting the needs of agricultural activities.

    1.Introduction

Accurate observation of the developmental stages of rice spikes (DSRS) can guide precise management and control aimed at achieving high rice quality and yield [1]. Observation of rice spike developmental stages has long been performed manually, requiring observers to conduct on-site sampling and observation [2]. However, this process is time-consuming, labor-intensive, and inefficient. Moreover, owing to the similarity among plants and the subjectivity of observers, it is very difficult to count rice spikes accurately in a large area. For these reasons, the accuracy of rice spike development records is limited [3]. There is an urgent need to develop an automatic identification algorithm for DSRS.

With the maturation and popularization of computer vision technology, an increasing number of researchers have continuously observed a small area of a field that represents the entire field to determine the DSRS. They acquire image sequences, examine the morphological features of rice spikes in the images, and identify the DSRS based on the evolution of those morphological features. Bai et al. [4] collected front-down-view images in a rice field, used color information to extract the coverage of the rice spike region, and judged whether the rice spikes had entered the heading stage based on the change in coverage. Cao et al. [5] also used front-down-view images collected in a rice field, used color information to extract the rice spike region, calculated the spike curvature in the rice spike angle detection area, and determined whether the rice spikes had entered the milky maturity (MM) stage based on the spike curvature. Soontranon et al. [6] used a vegetation index to monitor the growth stage of rice on a small scale based on the shape model fitting (SMF) method and roughly divided the growth period of rice into seedling, tillering, heading, and harvest stages using vegetation index graph analysis. However, current automatic identification methods for DSRS features address only a single critical stage and require segmentation of the rice spike area, an operation that is greatly affected by windy weather or complex scenes and is not practical.

Rice spikes exhibit varying morphological features such as size, shape, and color during the growth stages from heading to harvest. During the heading-flowering (HF) stage, rice spikes are dotted with small white glume flowers; at the MM stage, there are no glume flowers, but the spikes are bent, drooping, or divergent; and the spikes turn yellow at the full maturity (FM) stage. According to the Specifications for Agrometeorological Observation-Rice [2], a developmental stage can be recognized when 10% of plant individuals have entered the stage [7]. Thus, the automatic identification of multiple developmental stages of rice spikes (AI-MDSRS) can be transformed into the detection of rice spikes at multiple developmental stages by identifying a sufficient number of rice spikes. Automatic counting of spikes using a spike target detection algorithm allows effective and efficient identification of rice developmental stages.

There have been some advances in automatically counting plant spikes based on digital images. The methods fall into two main categories: segmentation methods based on color [8] and texture [9], and classification methods based on pixel-level color feature candidate regions [10] and superpixel fusion to generate candidate regions [11]. Although these methods can detect plant spikes, their accuracy still requires improvement.

Deep learning is a new type of high-precision target detection method that is widely used in agricultural applications [12]. These applications include detection and counting of corn kernels [13] and rice spikes and kernels, plant leaf identification and counting [14], and wheat spike detection and counting [15,16]. There have also been advances in research on rice spike counting. Duan et al. [17] collected rice plant images from multiple angles and proposed a method for automatically determining spike number in potted rice plants. Xu et al. [18] proposed a robust rice spike-counting algorithm based on deep learning and multiscale hybrid windows, enhancing rice spike features to detect and count small-scale rice spikes in large-area scenes. However, existing automatic counting algorithms for rice spikes are limited to detecting spikes at a single developmental stage, and there has been no further application research based on actual scenarios. No study has developed an algorithm for detecting rice spike targets across multiple developmental stages and then used it to identify those developmental stages.

To address the multi-scale problem of rice spike detection at different developmental stages, especially the detection of small target rice spikes, a new automatic identification algorithm for MDSRS based on improved Faster R-CNN is proposed in this paper. Based on the improved Faster R-CNN model, the algorithm automatically extracts image features at different developmental stages of rice spikes and accurately detects rice spikes at the corresponding developmental stages using real-time rice image sequences acquired by ground monitoring equipment.

    2.Materials

    2.1.Experimental site description

The rice variety used in this study was JinNongSiMiao. We used rice growth monitoring stations to regularly take rice images in two rice planting scenarios: potted and field rice (Fig. 1A).

In the pot scenario, rice was planted in pots of diameter approximately 40 cm, and each pot was divided into three holes for transplanting seedlings with a hole spacing of 10-12 cm, which met planting standards for interval and spacing. Nine pots were enclosed in a monitoring area to simulate a block of three rows and three columns in the field environment. Because of the height limitation of the greenhouse, cameras 1 and 2 (DS-2DC4420IW-D, Hikvision), which were fixed on a beam 2.5 m above the ground, captured images of the potted rice from an overhead perspective. The resolution of the images taken was 1920 × 1080 pixels. There was negligible difference between the images taken by the two cameras; only nine pots of rice could be seen under one camera, and the size of the monitoring area was approximately 1.2 m². Two cameras were set up to enlarge the dataset, a measure beneficial for improving the performance of the training model. In addition, during testing, this setup is equivalent to repeating the test for the same period, verifying the reliability of the model and reducing the test error. In the large field scenario, the cameras (DS-2DC7423IW-A, Hikvision) of field plot 1 and field plot 2 were mounted on a crossbar 2.5 m above the ground, and the images taken were front lower views with a resolution of 1280 × 720 pixels. The actual areas S of the images taken by the two field cameras, estimated from a reference disc, were 9.89 and 2.43 m² (Fig. 1B, C).

    2.2.Data collection

In the study area, double-season rice, including early and late rice, is grown annually. Images were acquired at regular intervals by the rice growth monitoring stations for the rice monitoring areas in the greenhouse pots and field. The images were collected hourly from 8:00 to 17:00 every day. The image sequences acquired in these two scenarios from 2019 to 2021 are presented here, and the critical developmental stages of rice spikes were recorded by professionals (Table S1). Sequences 1-8 are image sequences of potted rice, where markers I and II in the camera column indicate that the transplanting stages of rice in the two camera monitoring areas were different and identical, respectively. Sequences 9-16 are image sequences of rice in the field, including two different field plots.

    2.3.Data set creation

To identify the three key developmental stages of rice spikes, according to the differing morphological features of rice spikes at different growth and developmental stages, we used the annotation software "LabelImg" to manually label three types of rice spikes of different maturity levels by recording the coordinates of the smallest outer rectangle of each spike: the HF stage (rice spike maturity level ripe 1), with small white glume flowers dotted on the rice spikes; the MM stage (rice spike maturity level ripe 2), with no glume flowers and rice spikes bending and drooping, with some diverging; and the FM stage (rice spike maturity level ripe 3), with rice spikes turning yellow (Fig. S1A-C).

Image sequences collected in greenhouse pots and fields from 2019 to 2021 were divided as follows. Deep learning networks were trained using four image sequences from 2019 to build rice spike target detection models for multiple developmental stages in the two scenarios. The remaining four image sequences were used to verify the effectiveness of the automatic identification algorithm for MDSRS. During the 2019 trial, rice spike images with high similarity were screened out, and 1600 original images were obtained in the greenhouse potted scenario; of these, 573 images were obtained for the HF stage (class I), 584 for the MM stage (class II), and 443 for the FM stage (class III). A total of 720 original images were obtained for the field scenarios, including field plots 1 and 2, of which 168 were obtained for the HF stage (class I), 112 for the MM stage (class II), and 80 for the FM stage (class III). To construct a multi-class rice spike target detection model based on deep learning, we randomly divided these datasets into training, validation, and test sets with proportions of 0.60, 0.25, and 0.15, respectively, of the full number of images, for training, validating, and testing the rice spike target detection model (Table S2).

Fig. 1. Rice image acquisition sites and actual monitoring-area estimation. (A) Experimental site of rice image acquisition in the two scenarios. (B) Area of the rice monitoring areas in the field. Field plot 1 monitoring area S1 = S0/L1, where L1 = 1/356.52 is the proportion of reference-disc pixels in the field plot 1 image; field plot 2 monitoring area S2 = S0/L2, where L2 = 1/87.65 is the proportion of reference-disc pixels in the field plot 2 image. The reference disc area S0 = 277.45 cm².
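As a check on the caption's relation S = S0/L, the short Python sketch below (not from the paper's code) converts the reference-disc area to m² and reproduces the reported monitoring areas of 9.89 and 2.43 m².

```python
# Reference-disc area from Fig. 1; L is the fraction of image pixels covered by the disc.
S0_cm2 = 277.45

def monitoring_area_m2(pixel_fraction):
    # S = S0 / L, converted from cm^2 to m^2
    return (S0_cm2 / 1e4) / pixel_fraction

print(round(monitoring_area_m2(1 / 356.52), 2))  # field plot 1 -> 9.89
print(round(monitoring_area_m2(1 / 87.65), 2))   # field plot 2 -> 2.43
```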

    3.Methods

    3.1.General description of AI-MDSRS algorithm

The AI-MDSRS algorithm consists of three main parts: (1) expansion of image data, (2) construction and training of a rice spike detection model, and (3) establishment of a correlation between image spike density and development date based on the rice spike detection model (Fig. 2). The following subsections present a specific description of the AI-MDSRS algorithm.

    3.2.Image data expansion

During trials, the image data obtained are often much less abundant than required for training deep learning models. To solve this problem, it is generally necessary to perform image enhancement and data expansion on the training-set samples. The larger the scale of the data and the higher its quality, the better the generalization ability of the model [21].

Owing to the uncertainty of weather changes, image lighting variation is relatively large. In this study, a contrast transform was used to process the original images to enrich the training set of rice spike images and avoid overfitting (Fig. S2). In the HSV (hue, saturation, value) color space of the image, the saturation S and luminance V components were changed while keeping the hue H constant: the S and V components of each pixel were subjected to an exponential operation (with an exponential factor between 0.25 and 1.5 and an adjustment step of 0.25) to increase the illumination variation. In addition, data expansion is a common method for extending the variability of the training data by artificially scaling up the dataset via label-preserving transformations. In this study, we used three typical data-augmentation techniques to expand the dataset: translation, rotation, and random cropping (Fig. S2).
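The following Python sketch (using OpenCV and NumPy, which the paper does not specify) illustrates the contrast transform described above: hue is held fixed while S and V are raised to an exponent drawn from 0.25 to 1.5 in steps of 0.25. The function name, the normalization to [0, 1], and the commented file path are assumptions for illustration.

```python
import cv2
import numpy as np

def hsv_contrast_transform(image_bgr, gamma):
    """Keep hue fixed; apply a power-law (exponential) transform to the S and V channels."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)
    s = np.clip(np.power(s / 255.0, gamma) * 255.0, 0, 255)
    v = np.clip(np.power(v / 255.0, gamma) * 255.0, 0, 255)
    out = cv2.merge([h, s, v]).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_HSV2BGR)

# One augmented copy per exponent factor, 0.25 to 1.5 in steps of 0.25:
# img = cv2.imread("rice_plot.jpg")                                   # hypothetical path
# augmented = [hsv_contrast_transform(img, g) for g in np.arange(0.25, 1.75, 0.25)]
```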

    3.3.Rice spike detection model based on improved Faster R-CNN

The essence of automatic rice spike counting is the identification and localization of rice spikes, an operation consistent with the target recognition and localization solved by target detection, thus transforming the rice spike-counting problem into a rice spike target-detection problem.

Methods for image target detection can be divided into two categories. One is the two-stage target detection method based on candidate regions, which first identifies candidate boxes of interest and then performs category score prediction and bounding box regression for the targets in the candidate bounding boxes. The other is the one-stage target detection method, which localizes targets while predicting the target category and belongs to the class of integrated convolutional network detection methods. Faster R-CNN [19] is representative of two-stage target detection networks and is characterized by low rates of recognition error and missed recognition. One-stage target detection networks such as YOLOv4 [20] are notable for their speed, but their accuracy is slightly lower than that of two-stage target detection networks. In the present study, the Faster R-CNN network model was selected to identify and localize rice spikes in order to reduce the rice spike counting error.

    Fig.2.General structure of AI-MDSRS algorithm.

The Faster R-CNN network structure is divided into three parts: a convolutional network for feature extraction, a region proposal network (RPN) for generating candidate boxes, and a detection subnetwork. The VGG16 network adopted by Faster R-CNN and the settings of the anchor box specifications are biased towards large targets, and several constraints hinder the detection of small targets. The features of small targets are sparse and easily lost, and their feature extraction differs from that of large targets, making the original algorithm unsuitable for multiscale target-detection problems. In this study, considering the difficulty the otherwise high-precision Faster R-CNN network has in detecting small target objects [22], combined with the small, dense rice spikes and the large span of rice spike scales across growth stages, we improved the Faster R-CNN network accordingly (Fig. S3). The specific improvement strategies are described in the following subsections.

    3.3.1.Inception-ResNet-v2 feature extraction network

Replacing the feature extraction structure is the most common way to improve Faster R-CNN networks, for example by substituting newer networks such as ResNet [23] and DenseNet [24], or by replacing the backbone with lightweight networks such as MobileNet [25] and SqueezeNet [26] for mobile applications. To address the detection problems posed by the scale differences of rice spikes and the smaller rice spikes in images, this paper truncates the original Inception-ResNet-v2 network at the Inception-ResNet-C module and uses it as the feature extraction network for Faster R-CNN. The stem module is shown in Fig. S4A.

    (1) Inception structure.

The Inception network (GoogLeNet) [27] is a milestone in the development of convolutional neural network (CNN) classifiers. Before the emergence of Inception, the most popular CNNs simply stacked more convolutional layers, improving accuracy by increasing network depth. However, this also incurs huge computational cost and overfitting problems. GoogLeNet is characterized by the use of an inception structure, as shown in Fig. S4B. First, several convolutional or pooling operations are performed in parallel on the input using 1 × 1, 3 × 3, and 5 × 5 convolutional kernels to extract several kinds of information from the input image. Feature fusion is then performed using concatenation operations to yield better image representations. Targets whose information is more globally distributed favor larger convolutional kernels, whereas targets whose information is more locally distributed favor smaller convolutional kernels. This problem is solved by concatenating filters of different sizes in the same layer, widening the network. To reduce the computational cost of the larger (5 × 5) convolution kernel, a 1 × 1 convolution was added to the later inception structure to reduce dimensionality. The 1 × 1 convolution also increases nonlinearity while maintaining the original structure, so that the deeper the network, the more high-dimensional image features are represented.
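A minimal tf.keras sketch of this inception idea is given below: parallel 1 × 1, 3 × 3, and 5 × 5 convolutions (with 1 × 1 reductions before the larger kernels) and a pooling branch, concatenated along the channel axis. The filter counts and input shape are illustrative and are not taken from Inception-ResNet-v2.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_block(x, f1, f3_red, f3, f5_red, f5, fpool):
    # branch 1: 1x1 convolution
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    # branch 2: 1x1 reduction followed by 3x3 convolution
    b3 = layers.Conv2D(f3_red, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(b3)
    # branch 3: 1x1 reduction followed by 5x5 convolution
    b5 = layers.Conv2D(f5_red, 1, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(b5)
    # branch 4: max pooling followed by 1x1 convolution
    bp = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    bp = layers.Conv2D(fpool, 1, padding="same", activation="relu")(bp)
    # feature fusion by concatenation along the channel axis
    return layers.Concatenate()([b1, b3, b5, bp])

inputs = tf.keras.Input(shape=(224, 224, 64))
outputs = inception_block(inputs, 64, 48, 96, 8, 16, 32)   # 64+96+16+32 = 208 channels
model = tf.keras.Model(inputs, outputs)
```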

    (2) ResNet module.

Increasing the depth of a network can cause gradients to vanish or explode and even reduce accuracy. For this reason, the residual network was proposed. Its core aim is to solve the degradation problem caused by increasing network depth, making it feasible to improve network performance by simply increasing depth. The residual structure is illustrated in Fig. S4C, where the input tensor is x and the learned residual function is F(x) = H(x) - x. When the model accuracy reaches saturation, the training goal of the redundant network layers is to drive the residual toward zero, that is, F(x) = 0, achieving an identity mapping so that training accuracy does not degrade as the network deepens.
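The residual structure can be sketched in a few lines of tf.keras: the stacked convolutions learn F(x) and the shortcut adds x back, so the block outputs H(x) = F(x) + x. Filter counts, input shape, and the assumption that the input already has `filters` channels are illustrative choices, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    # assumes the input already has `filters` channels so the identity shortcut matches
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)         # F(x)
    y = layers.Add()([y, shortcut])                          # H(x) = F(x) + x
    return layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(56, 56, 64))
outputs = residual_block(inputs, 64)
model = tf.keras.Model(inputs, outputs)
```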

    3.3.2.Feature pyramid networks in RPN

This study used a feature pyramid network (FPN) instead of single-scale feature maps to adapt to the RPN, which shares the C2-C5 convolutional layers with the improved Faster R-CNN detection network. Regions of interest (RoI) and region scores were obtained on all feature maps through the RPN and FPN, and the regions with the highest scores were used as candidate regions for the various types of rice spikes. The prediction feature layers [P2, P3, P4, P5, P6] in the top-down transmission module of the improved Faster R-CNN model have receptive fields of multiple scales and can detect target rice spikes at multiple scales.

As shown in Fig. S4D, a network head was attached to each layer of the feature pyramid. It was implemented as a 3 × 3 convolutional layer followed by two 1 × 1 convolutions for classification and regression. Because the head slides densely over each position of each pyramid layer, multi-scale anchor boxes are not required at a specific layer; instead, a single-scale anchor box is assigned to each layer. In this study, according to the particular scales of the target rice spikes, the anchor box scales corresponding to the prediction feature layers [P2, P3, P4, P5, P6] were set to {16², 32², 64², 128², 256²}, and with three aspect ratios {1:1, 2:1, 1:2}, there were 15 anchor boxes in the pyramid.
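A small NumPy sketch of this anchor assignment follows: one base scale per prediction layer P2-P6 and three aspect ratios give the 15 anchor shapes mentioned above. Preserving the area s² while varying the aspect ratio is an assumed convention, not necessarily the paper's exact parametrization.

```python
import numpy as np

scales = [16, 32, 64, 128, 256]      # base scales for P2..P6 (anchor area = scale^2)
ratios = [1.0, 2.0, 0.5]             # height:width ratios 1:1, 2:1, 1:2

anchor_shapes = []
for s in scales:
    for r in ratios:
        w = s / np.sqrt(r)           # width shrinks as the box gets taller
        h = s * np.sqrt(r)
        anchor_shapes.append((round(float(w), 1), round(float(h), 1)))

print(len(anchor_shapes))            # -> 15 anchor shapes in the pyramid
```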

    (1) RoI Align module.

The RoI pooling layer in Faster R-CNN maps the candidate boxes generated by the RPN onto the feature map output from the shared convolution layer, obtains the RoI of each candidate region on the shared feature map, and generates a fixed-size RoI. This process requires two quantization operations (rounding floating-point numbers), which cause the candidate boxes to deviate from the positions originally regressed, so that the mapping of the RoI from feature space back to the original image has a large deviation, affecting detection accuracy. Because the rice spikes in this study are small and become dense as they grow and develop, the extraction accuracy of the RoI is particularly critical for such small, dense targets. In this study, we used RoI Align, which cancels the quantization operations and uses bilinear interpolation to obtain pixel values at floating-point coordinates, to solve the region mismatch caused by the two quantizations in the RoI pooling operation [27].

If RoI Align is used with an FPN, RoIs of different scales must be assigned to the pyramid layers, and the pyramid layer with the most suitable size is selected to extract each RoI feature block. Taking a 224 × 224 pixel input image as an example, an RoI of width w and height h (on the input image) is assigned to pyramid layer Pk according to formula (1):

k = ⌊k0 + log2(√(w·h)/224)⌋ (1)

where k is the feature pyramid layer and k0 represents the target layer to which an RoI with w × h = 224² is mapped; k0 was set to 5 in this study. Formula (1) indicates that if the scale of the RoI becomes smaller (such as 1/2 of 224), it is mapped to a finer layer (such as k = 4).
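A direct transcription of formula (1) into Python might look as follows; the clipping of k to the levels used for RoI extraction (assumed here to be P2-P5) is a common convention with FPN rather than something stated in the paper.

```python
import math

def assign_pyramid_level(w, h, k0=5, k_min=2, k_max=5):
    # formula (1): k = floor(k0 + log2(sqrt(w*h) / 224)), clipped to the available levels
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / 224.0))
    return max(k_min, min(k_max, k))

print(assign_pyramid_level(224, 224))   # -> 5 (the target layer k0)
print(assign_pyramid_level(112, 112))   # half the scale -> finer layer 4
```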

    (2) Non-maximum suppression method.

Non-maximum suppression (NMS) is a necessary post-processing step in target detection. In the original NMS, the intersection over union (IoU) indicator is used to suppress redundant bounding boxes and keep the most accurate bounding box; its calculation is shown in formulas (2) and (3). Because the IoU-NMS method considers only the overlapping area, it often results in false suppression, particularly when the ground-truth box contains the bounding box. In this study, we used distance-IoU (DIoU) as the standard for NMS. The DIoU-NMS method addresses the problems of IoU by considering not only the overlap area but also the distance between box centers, somewhat increasing the speed and accuracy of bounding-box regression; its calculation is shown in formulas (4) and (5). As shown in Fig. S5, panels A and B indicate that the two bounding boxes (blue) have the same size and the same IoU, so the IoU-NMS method cannot distinguish their intersections with the ground-truth box (red). For this case, panels C and D calculate the difference between the minimum bounding rectangle (yellow) and the union of the two boxes (yellow block), adding a measure of the intersection scale to distinguish the relative position relationship. For the situation in which the ground-truth box contains the bounding box, the DIoU-NMS method directly measures the Euclidean distance between the center points of the two boxes, as shown in panel E. In particular, when the distances between the center points of the two bounding boxes are equal, the scale information of the aspect ratio of the bounding boxes is considered, as shown in panels F and G.

IoU = |B ∩ Bgt| / |B ∪ Bgt| (2)

where B and Bgt denote the bounding and ground-truth boxes, respectively.

si = si, if IoU(M, Bi) < ε; si = 0, if IoU(M, Bi) ≥ ε (3)

where si is the classification confidence, ε is the conventional NMS threshold, M is the box with the highest confidence level, and Bi is each remaining candidate box.

DIoU = IoU − ρ²(b, bgt)/c² (4)

where b and bgt denote the centroids of B and Bgt, respectively, ρ(·) is the Euclidean distance, and c is the diagonal length of the box that minimally encloses B and Bgt.

si = si, if DIoU(M, Bi) < ε; si = 0, if DIoU(M, Bi) ≥ ε (5)

where si is the classification confidence, ε is the DIoU-NMS threshold, M is the box with the highest confidence level, and Bi is each remaining candidate box.
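To make formulas (2)-(5) concrete, the NumPy sketch below computes DIoU for axis-aligned boxes in [x1, y1, x2, y2] form and applies DIoU-based suppression. It is an illustrative implementation under those assumptions, not the authors' code.

```python
import numpy as np

def diou(box_a, box_b):
    xa1, ya1, xa2, ya2 = box_a
    xb1, yb1, xb2, yb2 = box_b
    # intersection over union (formula (2))
    iw = max(0.0, min(xa2, xb2) - max(xa1, xb1))
    ih = max(0.0, min(ya2, yb2) - max(ya1, yb1))
    inter = iw * ih
    union = (xa2 - xa1) * (ya2 - ya1) + (xb2 - xb1) * (yb2 - yb1) - inter
    iou = inter / union if union > 0 else 0.0
    # squared center distance, normalized by the enclosing-box diagonal (formula (4))
    rho2 = ((xa1 + xa2) - (xb1 + xb2)) ** 2 / 4 + ((ya1 + ya2) - (yb1 + yb2)) ** 2 / 4
    cw = max(xa2, xb2) - min(xa1, xb1)
    ch = max(ya2, yb2) - min(ya1, yb1)
    c2 = cw ** 2 + ch ** 2
    return iou - rho2 / c2 if c2 > 0 else iou

def diou_nms(boxes, scores, eps=0.5):
    # keep the highest-confidence box M, suppress boxes whose DIoU with M exceeds eps
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        m = order[0]
        keep.append(int(m))
        rest = order[1:]
        order = np.array([i for i in rest if diou(boxes[m], boxes[i]) < eps])
    return keep

boxes = [[10, 10, 60, 60], [12, 12, 58, 62], [100, 100, 150, 160]]
print(diou_nms(boxes, [0.9, 0.8, 0.7]))   # -> [0, 2]
```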

    3.3.3.Offline training of the model

Before the rice spike detection network was trained, it was initialized and pre-trained on the ImageNet dataset and then fine-tuned on our own dataset. The experimental environment was an Intel i9-9900K CPU at 3.40 GHz with 64 GB RAM and an NVIDIA 2080 Ti GPU, running the Ubuntu 16.04 LTS operating system with CUDA 10.0, TensorFlow 1.14.0 as the deep learning framework, and Python 3.7 as the programming language. The network parameters for the training phase are shown in Table S3.

    3.3.4.Evaluation indicators of the model

An accuracy curve, calculated as in formula (6), was drawn to evaluate the performance of the trained model. To verify the generalizability of the trained model, its precision and recall rates were evaluated using formulas (7) and (8). The accuracy of multiclass target detection was evaluated with the mean average precision (mAP) to measure the quality of the trained model over all categories. The mAP is the average of the average precisions (AP) over all categories, as shown in formula (9).

Accuracy = (TP + TN) / (TP + TN + FP + FN) (6)

P = TP / (TP + FP) (7)

R = TP / (TP + FN) (8)

In formulas (6)-(8), TP is the number of correctly detected rice spikes, FP is the number of incorrectly detected rice spikes, FN is the number of missed rice spikes (spikes incorrectly detected as background), and TN is the number of correctly detected background regions.

mAP = (1/n) Σ APi (9)

where AP is the area under the precision-recall curve and n is the number of categories.
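The indicators in formulas (6)-(9) can be expressed compactly in Python, as below; `average_precision` approximates AP as the area under a precision-recall curve given precomputed points, which is one common convention rather than necessarily the one used in the paper.

```python
import numpy as np

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)            # formula (6)

def precision(tp, fp):
    return tp / (tp + fp)                             # formula (7)

def recall(tp, fn):
    return tp / (tp + fn)                             # formula (8)

def average_precision(precisions, recalls):
    # area under the precision-recall curve (trapezoidal approximation)
    p = np.asarray(precisions, dtype=float)
    r = np.asarray(recalls, dtype=float)
    order = np.argsort(r)
    p, r = p[order], r[order]
    return float(np.sum((r[1:] - r[:-1]) * (p[1:] + p[:-1]) / 2.0))

def mean_average_precision(aps):
    return sum(aps) / len(aps)                        # formula (9): mAP over n classes
```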

    3.4.Identifying the multiple developmental stages of rice spikes

Based on the rice spike detection model, n images corresponding to n moment points are collected online every day and input into the trained rice spike detection model, so that multiple detections reduce the error of target detection. We first estimated the number of rice spikes of each maturity level in each image collected daily and then calculated the daily spike density for each image type using formula (10).

Ωi = (1/(n·S)) Σj Numj(bboxi) (10)

where n is the number of images collected online in a day, S is the actual area of the camera monitoring area, bbox represents the bounding box obtained from the target detection network, Numj(bboxi) represents the number of rice spikes of the ith type detected in the jth image, and Ωi is the image spike density of the ith type.
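A minimal Python sketch of formula (10): the daily image spike density of class i is the total count of class-i detections over that day's n images divided by n·S. The example counts and the 2.43 m² area below are hypothetical values.

```python
def image_spike_density(counts_per_image, area_m2):
    # formula (10): average per-image count divided by the monitored area S
    n = len(counts_per_image)                 # images collected online that day
    return sum(counts_per_image) / (n * area_m2)

print(image_spike_density([41, 38, 44], 2.43))   # spikes per m^2 of monitored area
```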

To establish the correlation between image spike density and the development date of rice spikes, fitted curves of the daily change in spike density of rice spike images of each maturity level were plotted for the multiple developmental stages. When a spike density curve showed a rapid upward trend for the first time, the arrival of the corresponding developmental stage could be determined. Two commonly used indicators were used to evaluate the goodness of curve fitting: the coefficient of determination (R²) and the root mean square error (RMSE), calculated using formulas (11) and (12).

    4.Results and discussion

    4.1.Training effect and performance analysis of the rice spike detection model

Loss values and accuracy changed during model training (Fig. S6). The loss value decreased as the training epochs increased: during the first 30 training epochs the loss decreased rapidly, and after 80 epochs it remained stable in the range of 0.05 to 0.1. Training was stopped after 204 epochs, at which point the model had converged. As the loss decreased, the accuracy of the model increased with the number of training epochs on both the training and validation sets, rising rapidly in the first 30 epochs, slowing from 40 to 60 epochs, and stabilizing after 80 epochs. After training was completed, the accuracy of the model reached 97.55% and 96.68% on the training and validation sets, respectively. In summary, the trends of the training loss and of the training- and validation-set accuracy across epochs reflect the performance of the rice spike detection model.

    4.2.Effect of improvement strategies on the performance of the rice spike detection model

Based on the original Faster R-CNN network model, this study proposes targeted improvement strategies to optimize the rice spike detection model. First, the advanced Inception-ResNet-v2 network replaced VGG16 as the backbone of Faster R-CNN (strategy 1); second, the FPN replaced the single-scale feature map and was merged with the RPN to generate candidate regions at different scales (strategy 2); third, in the detection sub-network, the RoI Align module replaced the RoI pooling quantization operation (strategy 3); and fourth, DIoU replaced IoU as the indicator for NMS (strategy 4). This study verified the optimization effect on model performance by adding the improvement strategies one by one (Table 1).

Table 1 Effect of the optimization strategies on the performance of the rice spike detection model.

First, data enhancement not only expanded the number of data samples but also increased their diversity, allowing the network model to be better trained and greatly increasing detection accuracy, by 12.92% mAP. Second, replacing the VGG16 network with the advanced Inception-ResNet-v2 network increased the detection accuracy of the model by 5.82% mAP. This is because the Inception module of the Inception-ResNet-v2 network replaces the fully connected layer with a global mean pooling layer to reduce the number of parameters and parallelizes convolutional kernels of different sizes to capture receptive fields of different sizes, while residual connections provide shortcuts in the model; thus, deeper networks can be trained to yield better performance. Third, the detection accuracy of the Faster R-CNN model fused with the FPN improved greatly, by 13.73% mAP, and the detection accuracy for each of the three types of rice spikes improved substantially, especially for the smaller target rice spikes (ripe 1), reflecting the importance of multi-scale feature fusion for the detection of rice spikes of diverse sizes. Fourth, in the detection sub-network, using the RoI Align module instead of the RoI pooling quantization operation was beneficial for increasing the extraction accuracy of the RoI, and using DIoU instead of IoU as the indicator for NMS was beneficial for increasing the regression speed and accuracy of the bounding boxes; these changes increased the model detection accuracy by 2.74% and 5.18% mAP, respectively.

    4.3.Comparison with the YOLOv4 model

    To further verify the superiority of the rice spike detection model,its detection accuracy was compared with that of the YOLOv4 model (Table 2).

Because the feature extraction layer of YOLOv4 adopts a feature pyramid down-sampling structure and the mosaic data enhancement method is used during training, YOLOv4 showed good results (89.07% mAP) for the detection of small target rice spikes. In view of the particularity of the dataset, optimization strategies were designed to improve the Faster R-CNN network, resulting in a 92.47% mAP for rice spike detection, a large improvement over the original Faster R-CNN model without the improvement strategies (40.96% mAP) and 3.40% higher than the YOLOv4 model. In addition, on the test set, although the average detection speed of the YOLOv4 model is about 4.6 times that of the rice spike detection model developed in this study, the average detection time of the rice spike detection model is about 0.2 s, a waiting time with negligible impact on user experience.

The detection results of the rice spike detection model, the original Faster R-CNN model, and the YOLOv4 model are shown for two scenarios: greenhouse pot (Fig. S7) and field (Fig. S8) samples. Because of the large image size, selected details of the detection results of the three models were compared and analyzed. First, for the identification of small target rice spikes, the original Faster R-CNN model shows a severe rice spike omission problem (vs 1, vs 6), and both the original Faster R-CNN model and the YOLOv4 model misidentify leaves when the background color is similar to that of the rice spikes, whereas the rice spike detection model avoids this problem (vs 2 & vs 4, vs 6 & vs 7). Second, when rice spikes are more than 50% occluded, both the original Faster R-CNN model and the YOLOv4 model miss them, whereas the rice spike detection model detects them fully (vs 1 & vs 3, vs 6 & vs 7). Thus, partial occlusion of rice spikes by background leaves and partial overlap among rice spikes do not affect the detection accuracy of the rice spike detection model. Third, when spikes are smaller and their color is brighter, the YOLOv4 model incorrectly identifies them as background, whereas the rice spike detection model can still detect them correctly (vs 3, vs 7). The detection boxes of the rice spike detection model also surround the rice spikes more tightly and have higher confidence scores (vs 5). For clearer comparison of the three models, the examples given for the large-field scenario do not show the confidence scores of the rice spike detection results, to avoid occlusion.

    4.4.Further analysis of the effect of light conditions on model detection

Both scenarios in this study employed natural lighting, and time-series images throughout the growth phases of the rice spike had to be acquired periodically. Owing to the uncertainty of weather conditions, the brightness and sharpness of the acquired images differed between sunny and cloudy days and between morning and afternoon hours. To investigate whether diverse lighting conditions affected the accuracy of the rice spike detection model, the 348 images in the test set were divided into strong and weak lighting groups based on variation in lighting intensity (Table 3). The recall rate (R) under weak lighting was slightly lower than that under strong lighting, but the difference was not significant. The detection precision (P) was as high as 97.21% for both, indicating that lighting conditions had little effect on the detection precision of the model.

    Table 2 Comparison of the detection accuracy of the rice spike detection model in this study with the YOLOv4 model.

    Table 4 Comparison results of automatic identification (AI) and manual recording (MR) of rice spikes with multiple developmental stages.

Fig. 3. Curves of image spike density versus development days for each type of rice spike. (A) Image sequence of camera 1 for the 2020 late rice potted scenario. (B) Image sequence of field plot 2 for the 2020 late rice field scenario. (C) Image sequence of field plot 1 for the 2021 early rice field scenario. Ripe 1 is the maturity of rice spikes at the heading-flowering stage, ripe 2 at the milky maturity stage, and ripe 3 at the full maturity stage.

Fig. 4. Variation of the daily increment of spike density for each type of rice spike image. (A) Image sequence of camera 1 for the 2020 late rice potted scenario. (B) Image sequence of field plot 2 for the 2020 late rice field scenario. (C) Image sequence of field plot 1 for the 2021 early rice field scenario. Ripe 1 is the maturity of rice spikes at the heading-flowering stage, ripe 2 at the milky maturity stage, and ripe 3 at the full maturity stage.

    4.5.Estimation of diverse developmental stages of rice spikes

Because rice spike detection would require much time to process all the daily rice images, three images, corresponding to three moment points at 10-min intervals starting at 9:00 AM each day (good lighting conditions), were collected online. The images were input into the rice spike detection model for automatic counting of the various rice spikes, and the daily spike density of each image type was calculated after averaging.

The curves of detected image spike density for each image type and their relationship with development date are presented for the image sequences of camera 1 in the 2020 late rice potted scenario (Fig. 3A), field plot 2 in the 2020 late rice field scenario (Fig. 3B), and field plot 1 in the 2021 early rice field scenario (Fig. 3C). First, the fitted spike densities of the various image types agreed well with the true values, with high coefficients of determination and low root mean square errors, indicating that the variation in spike density with development date was well reflected by the fitted curves. Second, for the three types of rice spikes with different maturity levels in different seasons and monitoring areas, image spike density showed the same trend with development date: ripe 1 and ripe 2 followed a bell-shaped pattern and ripe 3 followed an S-shaped pattern.

The initial developmental stage of rice spikes can be identified if a sufficient number of rice spikes is detected. To determine the HF, MM, and FM stages of rice spikes, this study reached the following conclusion by experimental analysis based on the fitted image spike density curves of the three categories of rice spikes with differing maturities: if the spike density of a certain type in that day's images increased by more than two times compared with that of the previous day, the fitted spike density curve showed a rapid upward trend for the first time, and that date was taken as the beginning of the developmental stage corresponding to that rice spike type. Fig. 4A-C shows the change in the daily increment of spike density for the three types of rice spikes with different maturities in the above image sequences, with the threshold shown as a red line.
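The decision rule above can be sketched as follows in Python, reading "increased by more than two times" as the day's density exceeding twice the previous day's value; the paper's exact thresholding of the daily increment (the red line in Fig. 4) may differ, and the density sequence below is a hypothetical example.

```python
def detect_stage_onset(daily_density, factor=2.0):
    """Return the index of the first day whose density exceeds `factor` times the previous day's."""
    for day in range(1, len(daily_density)):
        prev, cur = daily_density[day - 1], daily_density[day]
        if prev > 0 and cur > factor * prev:      # first rapid upward jump of the fitted curve
            return day
    return None

print(detect_stage_onset([0.10, 0.12, 0.15, 0.45, 0.90]))   # -> 3 (onset on the fourth day)
```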

Seven image sequences for the remaining two scenarios from 2020 to 2021 were used to test the automatic identification of MDSRS. The identification results of the MDSRS obtained with the proposed automatic identification algorithm and by manual recording are shown in Table 4, and the comparison was used to verify the identification performance of the proposed algorithm for the multiple developmental stages of rice spikes.

The automatic identification results for the three rice spike developmental stages were compared with manual observations (Table 4). The maximum error at each stage was no more than two days, indicating that the automatic identification algorithm was reliable for reporting the early developmental stages of rice spikes.

In conclusion, the rice spike detection model based on improved Faster R-CNN is a reliable model with high accuracy. The model had an mAP as high as 92.47%, a large improvement over the original Faster R-CNN model (40.96% mAP) and the YOLOv4 model (mAP increased by 3.4%). The model showed only small fluctuations when detecting various rice spikes under diverse lighting conditions, with detection accuracy as high as 98.01%. Compared with manual observation, the model yielded only a 0.7-1.1 day error in identifying the initiation dates of the HF, MM, and FM stages of rice. However, the present result was obtained using only one rice variety and awaits validation with more varieties. We established the image spike density of the various types of rice spikes by introducing the monitoring area S to estimate the developmental stages of rice spikes of diverse maturities. Therefore, in order to increase the generalizability of the algorithm, spike images were collected from multiple planting densities.

    CRediT authorship contribution statement

Yuanqin Zhang: Methodology, Software, Validation, Writing - original draft, Writing - review & editing. Deqin Xiao: Conceptualization, Supervision, Project administration, Funding acquisition. Youfu Liu: Software, Visualization. Huilin Wu: Validation, Formal analysis.

    Declaration of competing interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgments

    This work was supported by the Key-Area Research and Development Program of Guangdong Province (2019B020214005) and Agricultural Research Project and Agricultural Technology Promotion Project of Guangdong (2021KJ383).

    Appendix A.Supplementary data

    Supplementary data for this article can be found online at https://doi.org/10.1016/j.cj.2022.06.004.
