
    Panicle-3D: A low-cost 3D-modeling method for rice panicles based on deep learning, shape from silhouette, and supervoxel clustering

    2022-10-12
    The Crop Journal, 2022, Issue 5

    Dan Wu, Lejun Yu, Junli Ye, Ruifang Zhi, Lingfeng Duan, Lingbo Liu, Ni Wu, Zedong Geng, Jingbo Fu, Chenglong Huang, Shangbin Chen, Qian Liu b,*, Wanneng Yang *

    a Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, and Key Laboratory of Ministry of Education for Biomedical Photonics, Department of Biomedical Engineering, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China

    b School of Biomedical Engineering, Hainan University, Haikou 570228, Hainan, China

    c National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Huazhong Agricultural University, Wuhan 430070, Hubei, China

    Keywords: Panicle phenotyping; Deep convolutional neural network; 3D reconstruction; Shape from silhouette; Point-cloud segmentation; Ray tracing; Supervoxel clustering

    ABSTRACT: Self-occlusions are common in rice canopy images and strongly influence the calculation accuracy of panicle traits. Such interference can be largely eliminated if panicles are phenotyped at the 3D level. Research on 3D panicle phenotyping has been limited. Given that existing 3D modeling techniques do not focus on specified parts of a target object, an efficient method for panicle modeling of large numbers of rice plants is lacking. This paper presents an automatic and nondestructive method for 3D panicle modeling. The proposed method integrates shoot rice reconstruction with shape from silhouette, 2D panicle segmentation with a deep convolutional neural network, and 3D panicle segmentation with ray tracing and supervoxel clustering. A multiview imaging system was built to acquire image sequences of rice canopies with an efficiency of approximately 4 min per rice plant. The execution time of panicle modeling per rice plant using 90 images was approximately 26 min. The outputs of the algorithm for a single rice plant are a shoot rice model, surface shoot rice model, panicle model, and surface panicle model, all represented by lists of spatial coordinates. The efficiency and performance were evaluated and compared with those of the classical structure-from-motion algorithm. The results demonstrate that the proposed method is well qualified to recover the 3D shapes of rice panicles from multiview images and is readily adaptable to rice plants of diverse accessions and growth stages. The proposed algorithm is superior to the structure-from-motion method in terms of texture preservation and computational efficiency. The sample images and an implementation of the algorithm are available online. This automatic, cost-efficient, and nondestructive method of 3D panicle modeling may be applied to high-throughput 3D phenotyping of large rice populations.

    1. Introduction

    Rice (Oryza sativa) is one of the most important food crops in the world, feeding over half of the global population [1,2]. Geneticists and breeders have made great efforts to identify rice accessions with characteristics ideal for crop growth and production [3-5]. The rice panicle, whose characteristics influence grain yield, is the target of many rice phenotyping studies [6-8].

    Image-based techniques have become increasingly important in crop phenotyping. These techniques, generally adopting one or more imaging methods such as visible, hyperspectral, thermal infrared, and tomographic imaging, have greatly advanced the progress of high-throughput phenotyping and have the potential to replace conventional phenotyping methods, which depend mainly on low-efficiency manual manipulation [9,10]. Deep learning methods have shown impressive performance in many areas and are increasingly applied in phenotyping research, especially for detection and counting tasks. Pound et al. [11] presented a deep-learning approach with a new dataset for localizing and counting wheat spikes and spikelets. Lu et al. [12] solved the in-field counting problem of maize tassels with a local count regression network. Xiong et al. [13] proposed a robust method for field rice panicle segmentation based on simple linear iterative clustering and convolutional neural-network classification. In combination with image processing techniques, panicle traits can be efficiently quantified from images. In the method proposed by Duan et al. [14], the panicle numbers of rice plants were determined using multiangle imaging and artificial neural network segmentation. Wu et al. [15] developed an image analysis-based method to quantify the grain numbers of detached panicles. However, the accuracy of nondestructive image-based panicle phenotyping is greatly reduced by the self-occlusions that commonly appear in rice canopy images. Such interference can be largely eliminated if panicles are phenotyped at the three-dimensional (3D) level. However, relatively few studies have considered acquiring panicle traits from 3D rice models, given that an efficient method for modeling large numbers of rice plants is lacking.

    Generally, a more comprehensive understanding of plant features can be obtained from 3D plant models than from single-view or multiple-view plant images, and analysis of links between canopy architecture characteristics and photosynthetic capacity can be performed on the basis of 3D plant models [16]. The generation of a 3D plant model is an essential step for subsequent trait extraction and can be achieved using various techniques, including laser scanning, structured light (SL), time of flight (TOF), and structure from motion (SFM) [17]. The laser-scanning technique is used mostly in the case of miniplots or experimental fields. This approach can obtain 3D point clouds of high resolution and accuracy [17]. Usually, canopy-associated parameters, including plant height, leaf inclination angle, and plant area density, can be extracted automatically or by manual interpretation [18,19]. The structured-light technique is also superior in resolution and accuracy, though the long time required for imaging limits its application to high-throughput phenotyping. Nguyen et al. [20] established a structured light-based 3D reconstruction system to phenotype the plant heights, leaf numbers, leaf sizes, and internode distances of cabbages, cucumbers, and tomatoes. Time-of-flight imaging, despite its low resolution, was successfully applied to corn [21,22], and phenotypic parameters of the corn plant, including leaf length, leaf width, leaf area, stem diameter, stem height, and leaf angle, could be estimated. This technique was also used to measure cotton plant height under field conditions [23]. SFM reconstruction, in contrast to active illumination-based laser scanning, SL, and TOF, is a passive approach that represents the current state-of-the-art technique in multiview stereo vision. Pound et al. [24] presented an automatic SFM-based approach to recover 3D shapes of rice plants in which the outputs were surface-mesh structures consisting of a series of small planar sections. This method was then employed by Burgess et al. [25] to investigate the potential effects of wind-induced perturbation of the plant canopy on light patterning, interception, and photosynthetic productivity. It was combined with ray tracing [26] to characterize the light environment within an intercrop. The reconstruction software packages VisualSFM [27] and PMVS [28], on which Pound et al.'s method [24] was based, accept as input a set of images with no special shooting mode required and have shown high robustness in various cases. They have also been applied to 3D phenotyping of other crops, including corn [29], strawberry [30], and grapevine [31]. Another multiview stereo algorithm for 3D modeling is space carving [32]. This method reconstructs the 3D shape according to the photoconsistency of calibrated images around the scene of interest. Simpler than space carving is the shape-from-silhouette (SFS) algorithm, which requires foreground segmentation for each input image. Both space carving and SFS are good at reconstructing high-curvature and thin structures [33]. Another novel approach is modeling by hyperspectral imaging, which was investigated by Liang et al. [34] and Behmann et al. [35].

    Despite the many approaches available for 3D plant modeling, a reconstruction system for rice panicles is lacking. In general, a 3D panicle model for rice can be developed in two ways: cutting panicles from rice plants and reconstructing 3D panicle models from images of the excised panicles, or developing a 3D shoot rice model and then segmenting panicles from the rest of the plant. Examples of the first method are PI-Plat [36] and X-ray-based work [37]. Because the panicles were cut from the stems, neither estimation of panicle spatial distributions nor dynamic observation was possible. In addition, the phenotyping process was slowed by manual collection of panicles. For the second method, a 3D shoot rice model can be generated, but there are no algorithms for automatic 3D panicle segmentation. Panicle segmentation of a 3D shoot rice model is more complex than 2D panicle segmentation of rice canopy images. It is difficult to distinguish panicles from the remaining parts of a 3D shoot rice model using color information or geometric features. Although there are some well-built neural networks for 3D classification and segmentation, such as VoxNet [38] and PointNet [39], their input data sizes are quite limited (an occupancy grid of 32 × 32 × 32 for VoxNet and thousands of points for PointNet). Current computing power cannot deal with a shoot rice model that may contain hundreds of thousands of points. No current technology focuses on nondestructive 3D panicle modeling.

    In this paper, we present an automatic and nondestructive 3D modeling method for rice panicles. The SFS algorithm is used to generate the shoot rice model, and a deep convolutional neural network and supervoxel clustering are then used to perform 3D panicle segmentation. In total, 50 rice plants of various genotypes and growth stages were used to test the proposed algorithm, and comparisons with the SFM method were performed. The results show that the proposed method is well qualified to recover the 3D shapes of rice panicles from multiview images and is easily adaptable to rice plants of diverse accessions and growth stages. It is superior to the SFM method in terms of texture preservation and computational efficiency.

    2. Materials and methods

    2.1. Multiview imaging system

    An indoor 3D imaging system named Panicle-3D was developed to acquire multiview rice images. The imaging system (Fig. 1A) comprised mainly a digital single-lens reflex camera (EOS 760D, Canon, Tokyo, Japan), a turntable (MERA200, Red Star Yang Technology, Wuhan, Hubei, China), a group of LED lights (Philips, Amsterdam, Netherlands), a PLC control unit (CP1H, OMRON, Kyoto, Japan), and a computer (Gigabyte Technology, New Taipei City, Taiwan, China). The camera was kept at a fixed position, and the focal length was fixed at 18 mm throughout image acquisition. A rice plant was placed on the turntable rotating at a constant speed of 2° per second, and the camera automatically shot at two-second intervals during the revolution. The acquisition time of each image was recorded with millisecond precision for calibration. It took approximately 4 min to phenotype a single rice plant, including manual handling and image acquisition.
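Because the turntable speed and shot timestamps fully determine each view's azimuth, the per-image rotation angle can be recovered directly from the recorded acquisition times. A minimal sketch (the function name and millisecond timestamps are illustrative, not from the paper):

```python
def view_angle_deg(t_image_ms, t_start_ms, speed_deg_per_s=2.0):
    """Turntable azimuth (degrees) at the moment an image was shot,
    measured relative to the first image of the sequence."""
    return ((t_image_ms - t_start_ms) / 1000.0 * speed_deg_per_s) % 360.0

# 90 shots at 2-s intervals at 2 degrees/s -> views spaced 4 degrees apart
t0 = 0
angles = [view_angle_deg(t0 + i * 2000, t0) for i in range(90)]
```

With these parameters the 90 views cover a full revolution, which is what the SFS reconstruction in Section 2.7 relies on.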

    2.2. Rice materials

    Fifty rice plants of various genotypes and growth stages were tested with the Panicle-3D system. These plants, comprising 25 rice accessions selected from 529 O. sativa accessions [40] and 25 mutants of ZH11, were grown in plastic pots. Images of the 25 accessions selected from the 529 O. sativa accessions were taken between the flowering and dough-grain stages. Images of the 25 ZH11 mutants were taken between the dough-grain and mature-grain stages. For each rice plant, 90 side-view images, as shown in Fig. 1B, were acquired. A total of 4500 images were collected to form a dataset for 3D panicle modeling.

    Fig. 1. Multiview imaging system. (A) The multiview imaging system. (B) A rice canopy image in side view.

    2.3. The concepts of four 3D models

    We introduce four 3D models for later reference. The shoot rice model (SRM) refers to the solid (filled) point cloud of a rice canopy. The shoot rice model reconstructed by the SFS algorithm does not contain color information. The surface shoot rice model (SSRM) refers to the surface point cloud of a rice canopy. The panicle model (PM) refers to the solid point cloud of all panicles in a rice canopy. The panicle model acquired by 3D segmentation of the shoot rice model reconstructed by the SFS algorithm does not contain color information. The surface panicle model (SPM) refers to the surface point cloud of all panicles in a rice canopy. Both the surface shoot rice model and the surface panicle model contain color information obtained by identifying correspondences between image pixels and spatial points.

    2.4. The pipeline of the 3D panicle modeling algorithm

    The flow diagram of the proposed 3D panicle modeling algorithm is shown in Fig. 2. It includes 2D panicle segmentation, 3D shoot rice reconstruction, and 3D panicle segmentation. The shoot rice model, surface shoot rice model, panicle model, and surface panicle model were generated from multiview rice canopy images. The detailed steps of the algorithm, taking one rice plant as an example, are as follows. (1) The SegNet-Panicle model for 2D panicle segmentation: 60 field rice images [13] of 1971 × 1815 and 4500 × 4000 pixel resolution (Fig. 2A) and the corresponding label images (Fig. 2B) were used to generate 2370 rice images of 360 × 480 resolution (Fig. 2C) and the corresponding label images (Fig. 2D). These images were used to train SegNet [41] to obtain a SegNet-Panicle model (Fig. 2E) for 2D panicle segmentation. (2) Multiview rice canopy images: for a single rice plant, 90 images of 6000 × 4000 resolution (Fig. 2F) were taken automatically from different views in the imaging chamber. All these images were calibrated with a rotation-axis calibration technique following Zhang [42]. (3) Rice canopy silhouette images: all original rice canopy images (Fig. 2F) were segmented using fixed-color thresholding to obtain canopy silhouette images (Fig. 2G) in which each pixel is categorized as either a rice or a background pixel. (4) Panicle-segmented images: all original rice canopy images (Fig. 2F) were segmented using the pretrained SegNet-Panicle model to obtain panicle-segmented images (Fig. 2H) in which each pixel was assigned as either a panicle or a background pixel. (5) Shoot rice model and surface shoot rice model: the shoot rice model (Fig. 2I) was reconstructed by the SFS algorithm using the 90 canopy silhouette images, and the surface shoot rice model (Fig. 2J) was then obtained by rendering the surface points of the shoot rice model. (6) Panicle model and surface panicle model: the panicle model (Fig. 2K) was obtained by performing 3D panicle segmentation of the shoot rice model, and the surface panicle model (Fig. 2L) was then obtained as the intersection of the panicle model and the surface shoot rice model.

    2.5. Camera calibration

    The SFS algorithm requires calibration parameters corresponding to the rice image sequences. To obtain these parameters, the rotation axis was set as the Z axis of the world coordinate system. Because the object underwent pure rotation, the origin of the world coordinate system could be an arbitrary point on the rotation axis. A simple technique was developed to determine the orientation of the rotation axis relative to the camera. First, a chessboard pattern with 15 × 10 white and black squares and 14 × 9 inner corners (see Supplementary files) was printed and attached to a Perspex panel. A few images of the chessboard panel in different orientations were taken with the camera from close distances. These close-up shots were used to calculate the intrinsic camera parameters by Zhang's calibration method [42]. The chessboard panel was then placed on the top surface of the turntable to acquire an image sequence over 360° of rotation for extrinsic parameter calibration. The pixel coordinates of each inner corner of the chessboard pattern were tracked automatically with OpenCV [43]. Given the intrinsic camera parameters, the extrinsic parameters, including the rotation and translation parameters that relate the world coordinate system to the camera coordinate system, were calculated from the correspondences between the spatial and pixel coordinates of the chessboard corners. A different assignment of the world coordinate system determines a different group of spatial coordinates of the corners, which in turn leads to a different group of rotation and translation vectors. Note that the translation vectors corresponding to the extrinsic calibration images theoretically have the same value when the rotation axis is taken as the Z axis of the world coordinate system; any other assignment of the Z axis leads to variation among the translation vectors. Accordingly, the extrinsic parameters that corresponded to the extrinsic calibration images under the adopted assignment were determined by finding the group of translation vectors with minimum variance. The rotation vectors that corresponded to the extrinsic calibration images determined a regression plane from which an initial rotation vector could be selected. The selection of the initial rotation vector could be arbitrary because only relative positions are considered in SFS reconstruction. Once the initial rotation vector and the translation vector were determined, the extrinsic parameters corresponding to the rice image sequences were calculated according to the acquisition time and rotation speed.
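The minimum-variance criterion above can be sketched as follows. The candidate dictionary and its keys are hypothetical stand-ins: each candidate world-frame assignment maps to the translation vectors it yields across the calibration images, and the assignment whose vectors are (nearly) constant is the one whose Z axis coincides with the rotation axis.

```python
def total_variance(vecs):
    # Sum of per-component variances over a list of 3-D translation vectors.
    n = len(vecs)
    means = [sum(v[k] for v in vecs) / n for k in range(3)]
    return sum(sum((v[k] - means[k]) ** 2 for v in vecs) / n for k in range(3))

def pick_axis_assignment(candidates):
    # candidates: {name: [translation vector per calibration image]}.
    # The Z-on-rotation-axis assignment gives (theoretically) identical
    # translation vectors, hence minimum variance.
    return min(candidates, key=lambda k: total_variance(candidates[k]))
```

This is only the selection step; computing the candidate translation vectors themselves requires the full extrinsic calibration described in the text.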

    Fig. 2. The pipeline of the 3D panicle modeling algorithm. (A) Original training images. (B) Original label images. (C) Cropped training images. (D) Cropped label images. (E) The SegNet-Panicle model for 2D panicle segmentation. (F) Multiview rice canopy images. (G) Rice canopy silhouette images. (H) Panicle-segmented images. (I) The shoot rice model reconstructed by the SFS algorithm. (J) The surface shoot rice model generated by texture extrusion. (K) The panicle model generated by performing 3D panicle segmentation on the shoot rice model. (L) The surface panicle model generated by taking the intersection of the surface shoot rice model and the panicle model.

    2.6. 2D panicle segmentation

    The aim of 2D panicle segmentation was to acquire panicle-segmented images that provide essential information for 3D panicle segmentation. In a panicle-segmented image, each pixel is categorized as either a panicle pixel or a nonpanicle pixel. Considering that panicle colors are similar to those of other parts of rice plants and that panicles appear in various shapes and poses, it is difficult to segment panicles by color thresholding or by conventional machine-learning algorithms that depend on hand-engineered features. Instead, a well-established deep convolutional neural network (CNN), SegNet [41], was employed to perform robust 2D panicle segmentation. The SegNet architecture is composed of an encoder network, a corresponding decoder network, and a pixelwise classification layer. It takes as input an image of 360 × 480 resolution and generates a prediction image of the same size. The network must be trained with an adequate number of training samples before it can be applied to panicle segmentation. The use of SegNet is similar to that of general deep learning methods, and the detailed implementation steps are described as follows.

    (1) Training set: Sixty rice images (50 images of 1971 × 1815 resolution and 10 images of 4500 × 4000 resolution) with corresponding ground-truth labels were selected from the Panicle-Seg dataset [13]. These rice images were acquired in complex field environments and pose diverse challenges for panicle segmentation, such as variation in rice accession, illumination imbalance, and cluttered backgrounds caused by soil and water reflection. Because the size of these images did not match the input image size of SegNet, each image and its ground truth were first extended to a larger size by adding a black background and then cut into small patches of 360 × 480 resolution. Each image of 1971 × 1815 resolution was extended to 2160 × 1920 resolution and cut into 24 patches. Each image of 4500 × 4000 resolution was extended to 4680 × 4320 resolution and cut into 117 patches. In total, 2370 patches of 360 × 480 resolution were acquired, and all these patches were used as the training set.
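The pad-and-tile arithmetic above can be checked with a short helper (a sketch; the function name is ours, and dimensions are taken in the order listed in the text, with the first dimension tiled against the 360-pixel patch side):

```python
def pad_and_tile(d1, d2, p1=360, p2=480):
    """Extend a d1 x d2 image to the next multiples of the patch size
    (p1 x p2) and return the padded size and the number of patches."""
    D1 = -(-d1 // p1) * p1   # ceiling division to the next multiple of p1
    D2 = -(-d2 // p2) * p2
    return D1, D2, (D1 // p1) * (D2 // p2)
```

For the three image sizes in the paper, this reproduces the stated padded resolutions and patch counts (24, 117, and 153 patches).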

    (2) Training SegNet: The network was trained using stochastic gradient descent [44] with a fixed learning rate of 0.001 and a momentum of 0.9. The model was accepted after 100 epochs through the training set, when the training loss had converged and no further increase in accuracy was observed. This model was named the SegNet-Panicle model.
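For reference, one SGD-with-momentum parameter update with the stated hyperparameters looks like the following (a generic sketch of the optimizer rule on plain lists, not the authors' training code):

```python
def sgd_momentum_step(w, v, grad, lr=0.001, mu=0.9):
    """One SGD-with-momentum update: the velocity v accumulates a
    decayed history of gradients, and the weights w move along v."""
    v = [mu * vi - lr * gi for vi, gi in zip(v, grad)]
    w = [wi + vi for wi, vi in zip(w, v)]
    return w, v
```

With lr = 0.001 and mu = 0.9, repeated identical gradients make the effective step size grow toward lr / (1 - mu), which is what the momentum term contributes over plain SGD.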

    (3) Segmentation with the SegNet-Panicle model: All 4500 rice canopy images of 6000 × 4000 resolution for 3D panicle modeling were segmented using the pretrained SegNet-Panicle model. To meet the input image size of SegNet, each rice image was extended to 6120 × 4320 resolution by adding a black background and then cut into 153 patches of 360 × 480 resolution. Each patch was segmented with the pretrained SegNet-Panicle model. The segmentation results of the 153 patches were spliced into a single result image of 6120 × 4320 resolution. The extended black area was then removed, and the image was trimmed to a final resolution of 6000 × 4000.
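The splice-and-trim step can be sketched on nested lists as follows (patch ordering is assumed row-major, and all names are illustrative):

```python
def splice_and_trim(patches, rows, cols, out_h, out_w, ph=360, pw=480):
    """Reassemble row-major patch results into one image (nested lists)
    and trim the padding back to the original resolution."""
    full = [[0] * (cols * pw) for _ in range(rows * ph)]
    for idx, patch in enumerate(patches):
        r0, c0 = (idx // cols) * ph, (idx % cols) * pw
        for i in range(ph):
            full[r0 + i][c0:c0 + pw] = patch[i]
    # Trimming drops the rows/columns that came from the black padding.
    return [row[:out_w] for row in full[:out_h]]
```

Because the padding was pure black background, trimming it away cannot discard any rice or panicle pixels.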

    2.7. 3D reconstruction of rice shoot

    The SFS algorithm, also known as visual hull construction, was employed to generate the shoot rice model. The algorithm recovers an object's shape by carving away empty regions using a silhouette sequence [45]. The general process for a rice shoot is as follows: (1) acquire the calibration parameters corresponding to the 90 rice canopy images by the method described in Section 2.5; (2) acquire 90 canopy silhouette images using fixed color thresholding; and (3) initialize a volume large enough to contain a rice shoot and carve away the regions of the volume that ever projected outside the canopy silhouettes.

    The silhouette images for shoot rice reconstruction were the binary segmentation results of the rice canopy. For rice images taken in the Panicle-3D imaging system, in which the color of the scene background was unlikely to appear in rice canopies and the pixel values of the background were significantly lower than those of the rice shoot, the silhouettes were extracted automatically with fixed color thresholding according to the discriminants given below:

    where r, g, and b represent the gray values of the red, green, and blue channels of pixels in the original rice canopy images. The silhouette was the combination of all pixels that satisfied these inequalities. It should be mentioned that the exposure and brightness were higher in the images of the 25 accessions selected from the 529 O. sativa accessions than in the images of the 25 ZH11 mutants. The threshold value m was accordingly set to 80 for the 2250 shoot rice images of the 25 ZH11 mutants and to 150 for the 2250 shoot rice images of the 25 accessions selected from the 529 O. sativa accessions.

    After the silhouette sequence and corresponding calibration parameters were obtained, the shoot rice model was computed volumetrically. This was done by initializing 1,048,576,000 (1024 × 1024 × 1000) cubic voxels of 1 × 1 × 1 mm³ that represented a cuboid volume of 1024 × 1024 × 1000 mm³, then projecting each voxel onto the silhouette sequence and carving away all voxels that ever projected outside the silhouettes.
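The carving loop reduces to a point-in-silhouette test per view: a voxel survives only if every calibrated view projects it inside that view's silhouette. A minimal sketch (toy projection callables stand in for the full calibrated pinhole projection used in the paper):

```python
def carve(voxels, views):
    """Shape-from-silhouette core: keep a voxel only if every view
    projects it inside that view's silhouette.
    views: list of (project, silhouette) pairs, where project maps a
    3-D point to integer pixel coords and silhouette is a 0/1 grid."""
    kept = []
    for v in voxels:
        inside_all = True
        for project, sil in views:
            x, y = project(v)
            if not (0 <= y < len(sil) and 0 <= x < len(sil[0]) and sil[y][x]):
                inside_all = False   # projected outside -> carve this voxel
                break
        if inside_all:
            kept.append(v)
    return kept
```

In the real system the inner test runs over 90 silhouettes and roughly a billion voxels, which is why the paper accelerates it with OpenMP and CUDA.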

    In addition to the SFS reconstruction, the surface points of the shoot rice model were rendered by extruding the original RGB rice images along viewing rays. Texture extrusion was implemented with the ray-tracing technique. For each pixel belonging to the rice shoot silhouette, a ray was cast from the viewpoint through the pixel into space, and the intersection of the viewing ray with the shoot rice model was calculated. In practice, the viewing ray was represented as a list of occupancy intervals during the intersection calculations. The voxel corresponding to the pixel was the intersection voxel nearest to the viewpoint. The intersection voxel was drawn with an average of the colors seen in the RGB rice image sequence. The textured surface shoot rice model was obtained by assembling all colored voxels.
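The nearest-intersection lookup can be sketched as a simple ray march over the occupied-voxel set (a simplification of the occupancy-interval representation described above; the step size and names are ours):

```python
def first_hit(origin, direction, occupied, max_steps=1000, step=1.0):
    """Texture-extrusion core: march a ray from the viewpoint and return
    the first occupied voxel it enters, i.e. the surface voxel that the
    corresponding image pixel should color."""
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(max_steps):
        key = (round(x), round(y), round(z))
        if key in occupied:
            return key
        x, y, z = x + dx * step, y + dy * step, z + dz * step
    return None  # ray left the volume without hitting the model
```

Averaging the colors of all pixels (across views) that hit the same voxel then yields the textured surface model.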

    2.8. 3D panicle segmentation

    Panicle segmentation at the three-dimensional level is clearly more difficult than in 2D images. The problem does not lie merely in the similarity of panicle voxel colors to those of other parts of the rice plant. Advanced deep-learning techniques targeting 3D objects are also hampered by the shortage of 3D training datasets and by their memory intensiveness. We accordingly developed a solution for 3D panicle segmentation that innovatively integrates 2D pixelwise panicle segmentation with ray tracing and supervoxel clustering.

    Fig. 3. The pipeline of 3D panicle segmentation. (A) 3D shoot rice model. (B) 3D mask points generated by presegmentation according to panicle-segmented images. (C) 3D mask points (local view). (D) Supervoxel clusters of shoot rice generated by VCCS segmentation. (E) Supervoxel clusters of shoot rice (local view). (F) Coarse supervoxel classification results (local view). (G) External panicle supervoxels (local view). (H) Internal panicle supervoxels (local view). (I) The panicle model. (J) The surface panicle model.

    The detailed procedure of 3D panicle segmentation is illustrated in Fig. 3. First, presegmentation was performed to acquire mask points (Fig. 3B). This operation required the 2D panicle-segmented images, as previously mentioned. The presegmentation resembled the rendering of the shoot rice model in its application of the ray-tracing technique. For each pixel belonging to the canopy silhouette, its corresponding voxel on the shoot rice model was determined by intersection calculation. The shoot rice model voxels were scored by introducing a parameter, referred to as the score, that indicated the probability of a voxel's belonging to a panicle. Each voxel was assigned an initial score of zero. The score increased when the voxel was seen to be foreground in the panicle-segmented images and decreased when the voxel was seen to be background. After scoring was complete, mask points (Fig. 3B) were obtained by removing voxels whose scores were zero or negative. For better observation, a partial view of the mask points is presented in Fig. 3C. In parallel with the presegmentation, supervoxels were generated by voxel cloud connectivity segmentation (VCCS) [46]. The VCCS algorithm works directly in 3D space, using voxel relationships to produce oversegmentations. It constructs an adjacency graph for the voxel cloud by searching the voxel k-dimensional tree, then selects seed points to initialize the supervoxels, and finally iteratively assigns voxels to supervoxels using flow-constrained clustering. Generally, three parameters, λ, μ, and ε, which control the influence of color, spatial distance, and geometric similarity, respectively, must be specified when the VCCS algorithm is run. We expected supervoxels to occupy a relatively spherical space, which would be more desirable for further processing. Accordingly, the values of λ and ε were set to zero, meaning that only spatial distance was considered. A sample result of VCCS and a partial view are shown in Fig. 3D and E, respectively. After the shoot rice model was transformed into a set of supervoxels, coarse supervoxel segmentation was performed to classify each supervoxel as either a panicle supervoxel or a nonpanicle supervoxel. If a supervoxel contained one or more mask points, it was classified as a panicle supervoxel; otherwise, it was classified as a nonpanicle supervoxel. The result of coarse supervoxel segmentation is shown in Fig. 3F. Only external panicle supervoxels (Fig. 3G) can be recognized in this way: because the mask points are a set of surface points, an internal panicle supervoxel has no opportunity to contain a mask point. Accordingly, a simple criterion was adopted to detect the internal panicle supervoxels (Fig. 3H) that were not recognized by coarse supervoxel segmentation: if a supervoxel was adjacent to identified panicle supervoxels in three or more directions, it was also classified as a panicle supervoxel. The final panicle model (Fig. 3I) was the combination of all identified external and internal panicle supervoxels. Being derived from the shoot rice model, the panicle model did not contain color information. A textured surface panicle model (Fig. 3J) was also created from the intersection of the panicle model and the surface shoot rice model.
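The two classification passes can be sketched as follows. The data layout is hypothetical (supervoxels as id-to-point-list mappings, plus an adjacency map whose neighbor sets stand in for the adjacent directions), and repeating the fill pass until no supervoxel changes is our assumption rather than a detail stated in the paper:

```python
def classify_supervoxels(supervoxels, mask_points, adjacency):
    """Coarse pass: a supervoxel containing any mask point is 'panicle'.
    Fill pass: an unlabeled supervoxel adjacent to panicle supervoxels
    in three or more directions is also labeled 'panicle'."""
    mask = set(mask_points)
    panicle = {sid for sid, pts in supervoxels.items() if mask & set(pts)}
    changed = True
    while changed:                      # repeat fill until stable (assumption)
        changed = False
        for sid in supervoxels:
            if sid not in panicle and len(adjacency.get(sid, set()) & panicle) >= 3:
                panicle.add(sid)
                changed = True
    return panicle
```

The coarse pass only ever labels surface supervoxels, so the fill pass is what recovers the panicle interior.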

    2.9. Performance evaluation

    To evaluate the performance of the SegNet-Panicle model, 25 images originating from the 25 ZH11 mutants used for 3D panicle modeling were selected to build the test set. For each mutant, the first image of the multiview image sequence was selected. The ground-truth labels of the 25 test images were obtained by manual segmentation using Photoshop software [47]. Automatic segmentation of the test images was performed with the SegNet-Panicle model. Four indicators (precision, recall, IoU, and F-measure) were calculated over the test images to evaluate 2D panicle segmentation accuracy.

    We also investigated how the number of panicle-segmented images used would affect the efficiency and accuracy of 3D panicle segmentation of the shoot rice model, using the 25 ZH11 mutants. When the algorithm was tested on each plant, all 90 images taken from different angles were used to generate the shoot rice model and the surface shoot rice model. The panicle-segmented images were obtained using the pretrained SegNet-Panicle model. Then, different numbers of panicle-segmented images, from 3 to 90 in turn, were used to segment panicle points from the shoot rice model. Manual segmentations of the shoot rice models were conducted using CloudCompare [48] software for comparison with the automatic segmentation. The IoU was adopted to evaluate the performance of 3D panicle segmentation. The IoU at the three-dimensional level is calculated as

    IoU = TP / (TP + FP + FN)

    where TP refers to points categorized as panicle points both by the algorithm and by manual segmentation, FP refers to points categorized as panicle points by the algorithm but manually classified as nonpanicle points, and FN refers to points manually classified as panicle points but not recognized by the algorithm. IoU ranges from 0 to 1, and a higher value indicates better 3D panicle segmentation.
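As a worked example, this IoU can be computed directly on point sets (a sketch; the function name is ours):

```python
def iou_3d(pred_points, gt_points):
    """3-D segmentation IoU: TP / (TP + FP + FN) over point sets."""
    pred, gt = set(pred_points), set(gt_points)
    tp = len(pred & gt)   # panicle in both automatic and manual labels
    fp = len(pred - gt)   # automatic-only panicle points
    fn = len(gt - pred)   # manual-only panicle points
    return tp / (tp + fp + fn) if tp + fp + fn else 1.0
```

For instance, a prediction and ground truth sharing one of three distinct points gives an IoU of 1/3.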

    VisualSFM software (available at https://ccwu.me/vsfm/), which represents the current state of the art in multiview stereo reconstruction, was tested on the dataset for comparison with the proposed method. To apply VisualSFM, the background in each original image was colored black and the rice pixels were left unchanged, since these images were acquired with a static-camera capture system, whereas VisualSFM is designed for moving-camera setups. For a single rice plant, the 90 background-removed images were used to generate the surface shoot rice model. Noise points on the surface shoot rice model were then filtered out using color thresholding. Finally, manual segmentation of the surface shoot rice model was performed using CloudCompare software to obtain the surface panicle model.

    2.10. Requirements for data processing

    Computations were performed on a dual-boot machine running Ubuntu 18.04 (64-bit) and Windows 10 (64-bit) and equipped with an NVIDIA GeForce RTX 2080 Ti GPU. The training of SegNet and the 2D panicle segmentation were performed on the Ubuntu system; all other processes were performed on the Windows system. The software for 3D shoot rice reconstruction and 3D panicle segmentation was developed in C++ with the OpenCV and PCL libraries [49]. OpenMP [50] and CUDA [51] were adopted to speed up the calculations. The source code of the algorithm and its implementation are provided in the supplementary files.

    3.Results and discussion

    3.1.Performance of 2D panicle segmentation

    A comparison of the original rice images, ground-truth labels, and segmentation results using the SegNet-Panicle model is shown in Fig. 4. The automatic segmentation results were highly consistent with the ground-truth labels. The precision, recall, IoU, and F-measure over the 25 test images were 0.84, 0.93, 0.79, and 0.88, respectively, indicating that the SegNet-Panicle model provides reliable 2D panicle segmentation.

    Fig. 4. Results of 2D panicle segmentation using the SegNet-Panicle model. The original rice images are shown in the first column, the manual segmentation results obtained with Photoshop software in the second column, and the results of the SegNet-Panicle model in the last column. (A), (B), and (C) are three mutants of ZH11.

    3.2.Performance of 3D shoot rice reconstruction and 3D panicle segmentation

    The panicle modeling algorithm was tested on 50 rice plants. The results for four samples are shown in Fig. 5. For each sample, one original rice image and the corresponding 2D panicle segmentation result are shown in the first and third columns, respectively. The surface shoot rice model (SSRM) and the surface panicle model (SPM) of each sample were loaded in CloudCompare software. For comparison, the SSRM and SPM were rotated to the angle closest to the shooting angle of the selected original rice image, and screenshots of the SSRM and SPM from this view (side view) are displayed in the second and fourth columns, respectively. Screenshots of the SPMs from the top view are shown in the last column. For clarity, only panicle regions are shown in the third column. Comparing the third column with the fourth, the SPMs were generally consistent with the panicles in the images, showing that the proposed algorithm is well qualified to recover the 3D shapes of rice panicles from multiview images. For samples A and B, images were taken at the flowering stage, when panicles grew upright and appeared green. For samples C and D, images were taken at the mature stage, when panicles were bent by the weight of the spikes and appeared yellow. These results show that the algorithm is readily adaptable to rice plants of different accessions and growth stages. Because the reconstructed models resolve detail at the panicle level, enlarged local views are provided in Fig. 6 for detailed comparison. The original rice images, surface shoot rice models, panicle-segmented images, and surface panicle models are shown from the first to the last column, respectively. The texture of the reconstructed panicles is easily observed. Videos of the reconstructed SRM, SSRM, PM, and SPM are provided in the supplementary files.

    Fig. 5. Results of 3D shoot rice reconstruction and 3D panicle segmentation. The original rice images and the corresponding 2D panicle segmentation results are shown in the first and third columns, respectively. The surface shoot rice models are shown in the second column. The surface panicle models from the side view and top view are shown in the fourth and last columns, respectively. (A, B) Rice samples from 529 O. sativa accessions at the flowering stage. (C, D) Mutants of ZH11 at the mature stage.

    Fig. 6. Results of 3D shoot rice reconstruction and 3D panicle segmentation in local view at the panicle level. The original rice images and the corresponding 2D panicle segmentation results are shown in the first and third columns, respectively. The surface shoot rice models and the surface panicle models are shown in the second and last columns, respectively. (A, B) Rice samples from 529 O. sativa accessions at the flowering stage. (C, D) Mutants of ZH11 at the mature stage.

    Fig. 7. Accuracies and efficiencies of 3D panicle segmentation using different numbers of panicle-segmented images.

    Fig. 8. Results of 3D panicle segmentation using different numbers of panicle-segmented images. n is the number of panicle-segmented images used in 3D panicle segmentation. TP, true positive; FN, false negative; FP, false positive.

    3.3.Processing efficiency

    The training of SegNet and 2D panicle segmentation were conducted on the Ubuntu system. Training SegNet for 100 epochs on the training set took approximately 8.6 h. The execution time for 2D panicle segmentation per rice sample using the SegNet-Panicle model was approximately 11.6 min. All other processes were conducted on the Windows system. For a single rice plant with 90 images, reconstructing the shoot rice model took approximately 23.5 s and reconstructing the surface shoot rice model took 8.8 min. The computational time for generating the panicle model and surface panicle model of a single rice plant varied from 0.8 to 5.1 min, depending on the number of panicle-segmented images used in the 3D panicle segmentation procedure. The whole process of panicle modeling for a rice plant was completed within 26 min.

    3.4.Efficiency and accuracy of 3D panicle segmentation

    The mean IoU and mean time cost for different numbers of panicle-segmented images used in 3D panicle segmentation of the 25 ZH11 mutants are shown in Fig. 7, with the minimum and maximum values of the IoU and time cost marked. As more panicle-segmented images were used, segmentation performance improved and execution time increased. The mean IoU reached its highest value, 0.95, when all 90 panicle-segmented images were used. Detailed efficiency and accuracy data are given in Table 1. The segmentation results obtained using 3, 15, 45, and 90 panicle-segmented images are shown in Fig. 8, with TP, FN, and FP marked in different colors; smaller FN and FP areas indicate higher accuracy. Although a marked improvement is visible between Fig. 8A and Fig. 8B, the results in Fig. 8B-D show little further improvement. This finding is consistent with Fig. 7: execution time increases roughly in proportion to the number of images, while accuracy shows diminishing returns. Thus, although there is a tradeoff between the accuracy and efficiency of 3D panicle segmentation, the execution time of this step can be halved with almost no loss of accuracy.
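Using a subset of the panicle-segmented images implies choosing which of the 90 views to keep. A minimal sketch, under the assumption that the n views are spread evenly over the capture angles (the paper does not state the exact selection scheme; the function name is illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Pick n view indices spread as evenly as possible over `total`
// captured angles, so 3D panicle segmentation can trade a little
// accuracy for a shorter execution time by using fewer
// panicle-segmented images.
std::vector<std::size_t> pickViews(std::size_t total, std::size_t n) {
    std::vector<std::size_t> idx;
    idx.reserve(n);
    for (std::size_t k = 0; k < n; ++k)
        idx.push_back(k * total / n);  // floor(k * total / n)
    return idx;
}
```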

    Table 1 Efficiencies and accuracies of 3D panicle segmentation.

    3.5.Comparison with structure from motion

    The surface shoot rice models (SSRM-SFM) and surface panicle models (SPM-SFM) generated with VisualSFM are shown in the first and third columns of Fig. 9, respectively. The surface shoot rice models (SSRM-SFS) and surface panicle models (SPM-SFS) generated by the proposed method are shown in the second and last columns, respectively. Comparing the last two columns, the surface panicle models generated by the SFM method and the proposed method are similar overall. In detail, as shown in Fig. 10, with the original rice images as references, many points missing from the SPM-SFM were well recovered in the SPM-SFS. This finding indicates that the proposed method better preserves the shape and texture of rice panicles. In terms of efficiency, for a single rice plant, the processing time for generating the surface shoot rice model from 90 images with VisualSFM varied from 22 to 35 min, and tens of additional minutes were needed for manual segmentation of the SSRM-SFM in CloudCompare to obtain the final SPM-SFM. The execution time for all procedures of the proposed algorithm using 90 images for a single rice plant was no longer than 26 min, so the proposed method was superior in processing efficiency as well.

    Fig. 9. Comparison of the reconstruction results of the proposed method with the SFM method. The surface shoot rice model and the surface panicle model obtained with the SFM method are shown in the first and third columns, respectively, and those obtained with the proposed method in the second and last columns, respectively. The numbers are the point counts of the models. (A) and (B) Rice samples from 529 O. sativa accessions at the flowering stage. (C) and (D) Mutants of ZH11 at the mature stage.

    Fig. 10. Comparison of the reconstruction results of the proposed method with the SFM method in local view at the panicle level. The original rice images are shown in the first column. The surface shoot rice model and the surface panicle model obtained with the proposed method are shown in the second and third columns, respectively, and those obtained with the SFM method in the fourth and last columns, respectively. (A) and (B) Rice samples from 529 O. sativa accessions at the flowering stage. (C) and (D) Mutants of ZH11 at the mature stage.

    3.6.Advantages and limitations

    To our knowledge, automatic panicle segmentation of a 3D shoot rice model has not been described previously. The proposed method addresses this problem by combining a deep convolutional neural network with supervoxel clustering, and is superior to an SFM-based method, which requires subsequent manual processing, in terms of texture preservation and computational efficiency.

    Image acquisition is nondestructive, given that the panicle modeling algorithm requires multiview images of whole rice canopies rather than excised panicles. The developed low-cost (approximately US$2000) multiview imaging system is well suited to fully automatic high-throughput data acquisition if equipped with electromechanical controllers and an automated conveyor.

    The proposed algorithm was developed specifically for indoor imaging systems, requiring a fixed camera and pure rotation of the rice plant at constant speed. For this reason, it cannot be applied in the field.

    Validity is not guaranteed when the rice canopy is extremely dense. This limitation is common to visible-image-based reconstructions and is unlikely to be eliminated unless other techniques, such as computed tomography or magnetic resonance imaging, are adopted. In addition, automatic extraction of panicle traits such as panicle number, single-panicle length, and kernel number from panicle models or surface panicle models remains a challenge to be addressed in future work.

    4.Conclusions

    This paper described an automatic and nondestructive method for 3D modeling of rice panicles that combines shape from silhouette with a deep convolutional neural network and supervoxel clustering. The outputs of the algorithm for each rice plant are four 3D point clouds: the shoot rice model, surface shoot rice model, panicle model, and surface panicle model. Image acquisition for a single rice plant took ~4 min and image processing took ~26 min when 90 images were used. The tradeoff between accuracy and efficiency in 3D panicle segmentation was assessed. Compared with the widely used VisualSFM software, the proposed algorithm is superior in texture preservation and processing efficiency. We expect this method to be applied in future high-throughput 3D phenotyping of large rice populations.

    Data availability

    Supplementary files for this article, which include source code, Panicle-3D technical documentation, evaluations of SFS reconstruction accuracy, and four videos of the reconstructed SRM, SSRM, PM, and SPM, can be retrieved from http://plantphenomics.hzau.edu.cn/usercrop/Rice/download.

    Declaration of competing interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    CRediT authorship contribution statement

    Dan Wu: Data curation, Formal analysis, Methodology, Writing - original draft. Lejun Yu: Data curation, Formal analysis, Methodology, Writing - original draft. Junli Ye: Writing - review & editing. Ruifang Zhai: Software, Writing - review & editing. Lingfeng Duan: Software, Writing - review & editing. Lingbo Liu: Writing - review & editing. Nai Wu: Resources. Zedong Geng: Writing - review & editing. Jingbo Fu: Writing - review & editing. Chenglong Huang: Software, Writing - review & editing. Shangbin Chen: Software, Writing - review & editing. Qian Liu: Conceptualization, Funding acquisition, Project administration, Writing - review & editing. Wanneng Yang: Conceptualization, Funding acquisition, Project administration, Writing - review & editing.

    Acknowledgments

    This work was supported by the National Natural Science Foundation of China (U21A20205), Key Projects of the Natural Science Foundation of Hubei Province (2021CFA059), the Fundamental Research Funds for the Central Universities (2021ZKPY006), and cooperative funding between Huazhong Agricultural University and the Shenzhen Institute of Agricultural Genomics (SZYJY2021005, SZYJY2021007).
