
    Temporal sequence Object-based CNN (TS-OCNN) for crop classification from fine resolution remote sensing image time-series

The Crop Journal, 2022, Issue 5

Huapeng Li, Yajun Tian, Ce Zhang, Shuqing Zhang, Peter M. Atkinson

a Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences, Changchun 130012, Jilin, China

b Lancaster Environment Centre, Lancaster University, Lancaster LA1 4YQ, UK

c Faculty of Science and Technology, Lancaster University, Lancaster LA1 4YR, UK

Keywords: Convolutional neural network; Multi-temporal imagery; Object-based image analysis (OBIA); Crop classification; Fine spatial resolution imagery

ABSTRACT: Accurate crop distribution mapping is required for crop yield prediction and field management. Due to rapid progress in remote sensing technology, fine spatial resolution (FSR) remotely sensed imagery now offers great opportunities for mapping crop types in great detail. However, within-class variance can hamper attempts to discriminate crop classes at fine resolutions. Multi-temporal FSR remotely sensed imagery provides a means of increasing crop classification accuracy, although current methods do not exploit the available information fully. In this research, a novel Temporal Sequence Object-based Convolutional Neural Network (TS-OCNN) was proposed to classify agricultural crop type from FSR image time-series. An object-based CNN (OCNN) model was adopted in the TS-OCNN to classify images at the object level (i.e., segmented objects or crop parcels), thus maintaining the precise boundary information of crop parcels. The combination of image time-series was first utilized as the input to the OCNN model to produce an 'original' or baseline classification. Then the single-date images were fed automatically into the deep learning model scene-by-scene in order of image acquisition date to increase successively the crop classification accuracy. By doing so, the joint information in the FSR multi-temporal observations and the unique individual information from the single-date images were exploited comprehensively for crop classification. The effectiveness of the proposed approach was investigated using multi-temporal SAR and optical imagery, respectively, over two heterogeneous agricultural areas. The experimental results demonstrated that the newly proposed TS-OCNN approach consistently increased crop classification accuracy and achieved the greatest accuracies (82.68% and 87.40%) in comparison with state-of-the-art benchmark methods, including the object-based CNN (OCNN) (81.63% and 85.88%), object-based image analysis (OBIA) (78.21% and 84.83%), and the standard pixel-wise CNN (79.18% and 82.90%). The proposed approach is the first known attempt to explore simultaneously the joint information from image time-series and the unique information from single-date images for crop classification using a deep learning framework. The TS-OCNN, therefore, represents a new approach for agricultural landscape classification from multi-temporal FSR imagery. Moreover, it is readily generalizable to other landscapes (e.g., forest landscapes), with wide application prospects.

1. Introduction

Accurate information about cropland distribution is very important for the estimation of crop production [1], the management of farmland [2] and the assessment of crop-associated environmental impacts [3]. Besides, information on crop distribution is needed to support agrarian policy actions associated with agri-environmental measures [4]. Remote sensing has become a popular means of monitoring crops owing to its unique advantages over traditional field survey methods, such as its synoptic view, repeat acquisition capability, and so on [5-8]. Moreover, crop distribution maps generated from remote sensing imagery are consistent and comparable, which is especially beneficial for long-term analysis of cropping systems [9]. The spatial resolution of remotely sensed imagery has a great influence on crop classification detail and accuracy. Mulla [10] demonstrated that a spatial resolution of less than 10 m is generally required for precision agriculture since agricultural landscapes are usually narrow, highly fragmented and heterogeneous. Fortunately, fine spatial resolution (FSR) remotely sensed imagery from sensors onboard both satellite (e.g., RapidEye) and airborne [e.g., Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR)] platforms is now commercially available, which offers huge opportunities for detailed and accurate crop monitoring and mapping [11,12]. However, crop classification from FSR imagery is a very challenging task in consideration of the large intra-class variance [13]. This is mainly due to variation in local elevation and relief, flow accumulation, soil composition, and management practice [5].

The use of multi-temporal imagery over single-date imagery is a major means of increasing the accuracy of crop classification. This is because seasonal differences in the growth processes of different crop types can provide useful discriminative information; e.g., corn and soybean have different senescence phases [8]. Several previous studies have demonstrated the advantages of image time-series over single-date images for crop mapping and classification. Wardlow and Egbert [5] used a decision tree to classify crop classes in the state of Kansas with multi-temporal images spanning the growing season and achieved classification accuracies greater than 80%. Jiao et al. [14] showed that the time of image acquisition is crucial to crop classification accuracy. Zhong et al. [15] differentiated corn and soybean using the random forest algorithm and found that input sets containing phenological metrics achieved the greatest classification accuracy. Li et al. [8] illustrated that crop types could be completely separated from each other with a UAVSAR time-series spanning the whole growing season. Most of these studies classified crop types with phenological metrics or temporal features extracted from image time-series using threshold-based methods (e.g., time of peak biomass [16]) or predefined models (e.g., Fourier transform [17]). In essence, the extracted features are hand-crafted via feature engineering, which relies heavily on user experience and domain knowledge. Consequently, these manual feature engineering models might be effective for some specific tasks, but it is hard to generalize them to other applications. In addition, the spatial configurations of crop fields can be very difficult to hand-code into features using predefined rules or models [13]. Therefore, advanced expert-free, data-driven models are urgently needed to extract features from image time-series automatically.

Recently, deep learning, a breakthrough technology in the fields of computer vision and machine learning, has attracted increasing attention due to its capability to learn representative features in an end-to-end fashion [18]. The Convolutional Neural Network (CNN), as one of the most popular network architectures, has achieved impressive, state-of-the-art results in a variety of domains, such as speech recognition [19], object detection [20] and visual recognition [21]. Owing to its superiority in high-level spatial feature representation, the CNN has demonstrated great potential and achieved great success in a diverse set of remote sensing applications, such as change detection [22], urban functional zone division [23] and image classification [24]. Recently, efforts have been devoted to increasing the accuracy of crop classification using CNNs [25,26]. For example, Zhong et al. [27] proposed a novel one-dimensional CNN framework for crop classification based on multi-temporal remotely sensed images. Sidike et al. [13] constructed a deep Progressively Expanded Network (dPEN), which allows the deep network to go deeper, for accurate heterogeneous agricultural landscape mapping using WorldView-3 imagery. Ji et al. [28] designed a 3D CNN embedded with a channel attention module to classify crop types from multi-temporal fine-resolution satellite sensor images. Li et al. [29] presented a hybrid CNN-transformer approach to mine temporal patterns from image time-series for crop classification. In most of the above-mentioned studies, the multi-temporal remotely sensed images were stacked and used directly as the input to the CNNs. A major drawback of this is that the unique and useful information for crop differentiation from single-date observations might be ignored. Besides, the pixel-wise CNN was adopted in these studies, which often generates blurred boundaries between crop fields due to the requirement for an input window or patch [26,30]. These two issues greatly impair the accuracy of CNN-based crop classification from multi-temporal images.

The purpose of this research was to develop an approach that is able to learn fully discriminative features from image time-series automatically. A novel Temporal Sequence Object-based CNN (TS-OCNN) for crop classification was proposed, in which the combination of multi-temporal imagery was first used as the input to the CNN, and then the single-date images in the time-series were fed automatically into the deep CNN model scene by scene following a forward temporal sequence (FTS) from early to late acquisition. In the TS-OCNN, an object-based CNN (OCNN) was adopted to classify crop types at the object level to maintain the precise crop parcel boundaries. The effectiveness of the proposed TS-OCNN approach was tested on two crop-rich agricultural sites with FSR multi-temporal SAR and optical images, respectively.

2. The proposed temporal sequence Object-based CNN method

2.1. Convolutional neural network (CNN)

The CNN, a variant of the multilayer feed-forward neural network, involves a cascade of convolutional and pooling layers which are able to learn features at deep and abstract levels [21]. Using convolutional filters, each convolutional layer transforms its input to an output which is used as the input of the next layer. An activation function is applied to the output of each convolutional layer to introduce non-linearity. A pooling layer is designed to further generalize the convolved features by reducing the resolution of the input [31]. A fully connected layer is utilized on top of the last pooling layer. The parameters (i.e., weights and biases) of the CNN network are optimized using stochastic gradient descent.
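As a minimal numerical illustration of these two building blocks (not the paper's code), the following toy sketch applies a single 3 × 3 convolutional filter with a ReLU activation and then 2 × 2 max pooling to a small single-band patch; filter values and patch size are arbitrary.

```python
# Toy illustration of one convolutional filter + ReLU followed by 2x2 max pooling.
import numpy as np

def conv2d(x, kernel):
    """Valid convolution of a 2-D input with one filter, followed by ReLU."""
    kh, kw = kernel.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0)                      # activation adds non-linearity

def max_pool(x, size=2):
    """Reduce resolution by keeping the maximum of each size x size block."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

patch = np.random.rand(8, 8)                         # a tiny single-band image patch
features = max_pool(conv2d(patch, np.ones((3, 3))))  # 8x8 -> 6x6 -> 3x3 feature map
```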

2.2. Object-based convolutional neural networks (OCNN)

The object-based CNN was originally proposed to solve the complex land use classification task [32]. Similar to the standard pixel-wise CNN (PCNN), the OCNN is trained with labelled patches. However, unlike the PCNN, which predicts the label of each pixel across the entire image, the OCNN places an image patch at the centroid of each object to classify the segmented objects [26]. Essentially, the OCNN framework is a hybrid method combining the OBIA and CNN techniques, which not only significantly increases computational efficiency, but also maintains the precise boundary information of each object (e.g., crop parcel) [30]. The predictions made by the OCNN for each segmented object form the final thematic classification map.
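A hedged sketch of this object-based prediction step is given below: one patch per segmented object, centred on the object's centroid. The names `model` (a trained Keras-style classifier), `image` (an H × W × bands array) and `objects` (an integer label raster from segmentation) are assumptions, not the authors' implementation.

```python
# One prediction per segmented object: extract a patch at the object centroid.
import numpy as np

def classify_objects(model, image, objects, patch=32):
    half = patch // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode='reflect')
    labels = {}
    for obj_id in np.unique(objects):
        rows, cols = np.where(objects == obj_id)
        r, c = int(rows.mean()), int(cols.mean())      # object centroid (row, col)
        window = padded[r:r + patch, c:c + patch, :]   # patch centred at the centroid
        probs = model.predict(window[np.newaxis], verbose=0)[0]
        labels[obj_id] = int(np.argmax(probs))         # one label per crop parcel
    return labels
```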

2.3. Temporal sequence Object-based convolutional neural network (TS-OCNN)

Suppose n scenes of multi-temporal remotely sensed images covering the study area are available, with m classes to be classified. Let M = (M_1, M_2, ..., M_i, ..., M_n) denote the set of multi-temporal images, where M_i is the i-th image in the temporal sequence and n is the total number of multi-temporal images. Note that the multi-temporal images have the same spatial extent and spatial resolution so that they can be overlaid spatially at the pixel level. Let O = (o_1, o_2, ..., o_j, ..., o_u) be the set of segmented objects derived from M, where o_j and u are the j-th object and the total number of objects, respectively. Let T = (t_1, t_2, ..., t_k, ..., t_v) be the set of training samples, where t_k and v are the k-th sample and the total number of samples, respectively. Note that T is employed to train the OCNN model, which estimates the classification probability per object at each iteration through the iterative process.

The proposed TS-OCNN method is designed to explore fully the distinctive and useful information hidden in image time-series for crop classification. The basic assumption of the TS-OCNN is that the multi-temporal remotely sensed images are correlated with each other, and the classification result (X) of the i-th image in the temporal sequence is conditional only upon the output of the (i-1)-th image, which formulates a Markov process as follows:

P(X_i | X_{i-1}, X_{i-2}, ..., X_1) = P(X_i | X_{i-1})    (1)

where i denotes the number of 'iterations' within the Markov process and P(X)_i represents the classification probabilities of the i-th iteration.

The general procedure of the TS-OCNN approach is illustrated in Fig. 1, in which crop classifications are refined gradually along the temporal sequence of the image time-series. The methodology of the TS-OCNN is detailed below.

In order to exploit the joint information in the image time-series, the multi-temporal images are first combined spatially (M_stack) and used as the input to the original OCNN (OCNN_ori). The training process of the model can be represented as follows:

OCNN_ori = Train(M_stack, T)    (2)

The original classification probabilities P(X)_ori are then calculated with the trained OCNN model as follows:

P(X)_ori = OCNN_ori(M_stack)    (3)

On the basis of the original classification (ORC), to further exploit the unique individual information from single-date images, the images in the time-series are fed into the OCNN model scene-by-scene. That is, a new image scene in the time-series is fed into the model at each iteration, and the number of multi-temporal images is, thus, equal to the number of iterations. Specifically, at the i-th step (i ≥ 1), the classification probabilities from the previous iteration, P(X)_{i-1} (P(X)_ori if i = 1), and the i-th image (M_i) in the temporal sequence are spatially combined as conditional information for image classification:

D_i = Combine(M_i, P(X)_{i-1})    (4)

where Combine represents a function that combines the i-th image M_i with the probabilities generated at the previous iteration. In other words, the function spatially stacks the bands contained in P(X)_{i-1}, produced at the previous iteration, with those of the i-th image (M_i) to form the input (D_i) for the current iteration.
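In array terms, the Combine step of Eq. (4) is simply a band-wise stack of the previous iteration's class-probability raster and the i-th image; a minimal illustration follows, with the array names being assumptions.

```python
# Minimal illustration of the Combine step in Eq. (4): the class-probability
# bands from the previous iteration are stacked with the bands of the i-th image.
import numpy as np

def combine(prob_prev, image_i):
    """prob_prev: (H, W, n_classes); image_i: (H, W, n_bands) -> (H, W, n_classes + n_bands)."""
    return np.concatenate([prob_prev, image_i], axis=-1)
```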

With the newly generated dataset D_i from Eq. (4) as input, the OCNN model at the i-th iteration is trained on the training samples (T) as follows:

OCNN_i = Train(D_i, T)    (5)

Note that the OCNN model is trained from scratch at each iteration. The trained OCNN model is subsequently used to predict the classification probabilities as:

P(X)_i = OCNN_i(D_i)    (6)

Based on Eq. (6), the probability of being assigned to each class is predicted for each segmented object within each iteration. The spatial extent of P(X)_i is equal to that of any image in M, and the dimension of P(X)_i (i.e., the number of bands) equals the number of classes, with each dimension corresponding to the probabilities of a specific class.

The thematic classification (TC_i) of each iteration can then be acquired from the corresponding probabilities (P(X)_i) as:

TC_i = arg max(P(X)_i)    (7)

where arg max is a function classifying each object as the class with the maximum membership, and i denotes the number of iterations.

Fig. 1. The proposed TS-OCNN methodology.

A total of n classification maps (where n denotes the total number of multi-temporal images) were generated through the iterative process. The classification accuracy of the thematic classification was assessed at each iteration, and the classification with the highest accuracy amongst the n classifications was selected as the final thematic classification (TC_final) of the TS-OCNN approach.
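The overall procedure (Eqs. 2-7, plus the selection of TC_final) can be condensed into the following hedged sketch. It assumes placeholder helpers `train_ocnn(data, samples)` (returns a freshly trained OCNN), `predict_probs(model, data, objects)` (returns per-object class probabilities rasterised to H × W × n_classes) and `accuracy(map)` (an assessment on validation data); none of these are the authors' implementation.

```python
# Compact sketch of the TS-OCNN iterative procedure.
import numpy as np

def ts_ocnn(images, objects, samples, accuracy):
    m_stack = np.concatenate(images, axis=-1)            # joint multi-temporal input
    ocnn = train_ocnn(m_stack, samples)                  # Eq. (2)
    probs = predict_probs(ocnn, m_stack, objects)        # Eq. (3): original classification
    results = [(np.argmax(probs, axis=-1), probs)]
    for image_i in images:                               # forward temporal sequence
        data_i = np.concatenate([probs, image_i], -1)    # Eq. (4): Combine
        ocnn = train_ocnn(data_i, samples)               # Eq. (5): retrained from scratch
        probs = predict_probs(ocnn, data_i, objects)     # Eq. (6)
        results.append((np.argmax(probs, axis=-1), probs))  # Eq. (7): thematic map
    # Keep the iteration with the highest assessed accuracy as TC_final.
    return max(results, key=lambda r: accuracy(r[0]))[0]
```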

3. Study materials

3.1. Study area and data

The Sacramento Valley, lying in the north of California, USA, was selected as the case study area. The Valley is considered an important agricultural area within the state of California [33], accounting for about one quarter of the state's organic hectarage. Two typical agricultural sites (S1 and S2) within the Valley with distinctive, heterogeneous crop compositions were intentionally selected to investigate the effectiveness of the proposed method (Fig. 2).

In S1, four scenes of fully polarimetric L-band Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) imagery were captured in 2011, on June 16, July 20, August 29, and October 3. The UAVSAR data selected are in the ground range projected (GRD, georeferenced) format, with a fine spatial resolution of 5 m and a spatial extent of 3474 × 2250 pixels. Three linear polarizations (namely HH, HV and VV) were obtained from the original datasets and used for crop classification. S1 is a mixture of fruit crops, summer crops and forage crops, consisting of 10 crop classes, namely walnut (Juglans nigra), almond (Amygdalus communis L.), alfalfa (Lotus corniculatus L.), grass, clover (Trifolium repens L.), winter wheat (denoted as wheat hereafter, Triticum aestivum L.), corn (Zea mays L.), sunflower (Helianthus annuus L.), tomato (Lycopersicon esculentum Mill.), and pepper (Capsicum annuum L.) (Table S1).

In S2, three scenes of optical RapidEye imagery were acquired in 2016, on May 30, July 10, and September 7. Each image scene consists of five bands: blue, green, red, red edge, and near infrared. The RapidEye images were Level 3A ortho products, with sensor, radiometric and geometric corrections already applied. The spatial size of the imagery is 3222 × 2230 pixels, with a fine spatial resolution of 5 m. A total of nine crop types were identified throughout this area, including walnut, almond, fallow, alfalfa, wheat, corn, sunflower, tomato, and cucumber (Cucumis sativus L.) (Table S1).

Sample points were acquired using a stratified random scheme based on the Cropland Data Layer (CDL). The CDL is generated by the United States Department of Agriculture (USDA) and has been employed widely as the ground reference dataset in a wide range of crop classification studies [25,26,34-37] because of its very high accuracy [38]. To collect representative samples, crop parcels within the two study areas with an area larger than 5 ha were selected and delineated manually according to the CDL datasets [8]. To ensure that the training, validation and test data were completely independent, the digitized polygons were split randomly into a 50% subset for training and validation (for hyperparameter tuning) and the other 50% for testing. A stratified random sampling scheme was adopted to collect training and validation sample points from the training and validation polygons, respectively. To ensure that the CNN networks could learn sufficiently representative features from the input, the training sample size was kept above an average of 200 per crop category. In total, 2560 and 1900 training sample points were collected for S1 and S2, respectively (Table S1). To evaluate the classifications comprehensively, wall-to-wall assessment was used for both sites; in other words, all pixels within the testing polygons were utilized for accuracy assessment.
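A hedged sketch of this sampling design is shown below: per crop class, the digitized parcels are split 50/50 into training/validation and testing halves, and point samples are drawn from the training half. The inputs `parcels_by_class` (class name mapped to a list of parcel geometries) and the helper `sample_points_within` are assumptions used only for illustration.

```python
# Illustrative 50/50 parcel split with stratified random point sampling per class.
import random

def split_and_sample(parcels_by_class, n_points_per_class=200, seed=0):
    rng = random.Random(seed)
    train_points, test_polygons = [], []
    for crop, parcels in parcels_by_class.items():
        parcels = parcels[:]
        rng.shuffle(parcels)
        half = len(parcels) // 2
        train, test = parcels[:half], parcels[half:]   # completely independent halves
        test_polygons.extend((crop, p) for p in test)  # wall-to-wall assessment set
        for p in train:                                # stratified random point sampling
            per_parcel = n_points_per_class // max(len(train), 1)
            train_points.extend((crop, pt) for pt in sample_points_within(p, per_parcel))
    return train_points, test_polygons
```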

3.2. TS-OCNN model architecture and parameters

3.2.1. Image segmentation

Since the proposed TS-OCNN is implemented on segmented objects, the remotely sensed imagery first needs to be segmented into objects. Herein, a multi-resolution segmentation (MRS) algorithm was implemented to obtain the segmented objects [39]. Note that the combination of multi-temporal images was employed as the input to the MRS algorithm in each experiment (study site). The "scale" parameter is the most critical in MRS since it directly determines the average size of the segmented objects [40]. Through cross-validation, the scale parameters were optimized as 30 and 180 for S1 and S2, respectively, to acquire slightly over-segmented results, with the Shape and Compactness parameters set to 0.2 and 0.7 in both study sites. A total of 3040 and 3876 objects were acquired for S1 and S2, respectively.
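MRS is typically run in proprietary OBIA software; as a rough, openly available stand-in for producing an object label raster from the stacked multi-temporal image, the sketch below uses Felzenszwalb's graph-based segmentation from scikit-image. Note that its `scale` parameter is not equivalent to the MRS scale parameter, and the values shown are illustrative only.

```python
# Open-source stand-in for object segmentation (not the MRS algorithm itself).
import numpy as np
from skimage.segmentation import felzenszwalb

def segment(stacked_image, scale=30):
    """stacked_image: (H, W, bands) float array -> integer label raster of objects."""
    return felzenszwalb(stacked_image.astype(np.float64), scale=scale,
                        sigma=0.8, min_size=50)
```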

Fig. 2. Locations of the two study sites with the remotely sensed images.

Fig. 3. Overall accuracy of the proposed TS-OCNN plotted against iteration for S1 and S2. The number of iterations denotes the position of the image in the time-series sequence. Position zero indicates the original classification (ORC) of the TS-OCNN (see Section 2.3). The dashed line represents the baseline accuracy of the TS-OCNN.

3.2.2. Model architecture and parameters

In the presented TS-OCNN, a standard CNN classifier was adopted to classify each segmented object by taking the centroid of each object as the convolutional point [25,26]. Several hyperparameters need to be optimized for the CNN within the TS-OCNN to maximize classification accuracy. In order to test the transferability of the proposed method, the model architecture and parameters were optimized through cross-validation in S1 and generalized directly to S2, as detailed below.

The model architecture of the CNN applied in the TS-OCNN was designed to be similar to AlexNet [21], with nine hidden layers alternating convolution, max-pooling, and batch normalization (Fig. S1). Small filters were applied in the convolutional layers (5 × 5 for the first layer and 3 × 3 for the remaining layers), and the number of filters was tuned to 32 to learn deep feature representations. As suggested by Langkvist et al. [41], the input patch size of the network was chosen from {16 × 16, 24 × 24, 32 × 32, 40 × 40, 48 × 48}, and 32 × 32 was found to be the optimal size. To tackle the overfitting problem, dropout was applied before the dense layer with an optimized value of 0.3. Besides, the number of epochs was set to 500 to allow the network to converge through iteration.
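A hedged reconstruction of this network in tf.keras is sketched below, using the reported hyperparameters (32 × 32 input patches, 5 × 5 then 3 × 3 filters, 32 filters per convolution, batch normalization, max-pooling, 0.3 dropout before the dense layer, 500 training epochs). The exact ordering and composition of the nine hidden layers is our assumption, not a published specification.

```python
# Hedged reconstruction of the CNN used inside the TS-OCNN.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(n_bands, n_classes):
    return models.Sequential([
        layers.Input(shape=(32, 32, n_bands)),                     # 32x32 object-centred patch
        layers.Conv2D(32, 5, padding='same', activation='relu'),   # 5x5 filters, first layer
        layers.BatchNormalization(),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, padding='same', activation='relu'),   # 3x3 filters thereafter
        layers.BatchNormalization(),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, padding='same', activation='relu'),
        layers.BatchNormalization(),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dropout(0.3),                                       # dropout before the dense layer
        layers.Dense(n_classes, activation='softmax'),
    ])

# model = build_cnn(n_bands=12, n_classes=10)
# model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy')
# model.fit(x_train, y_train, epochs=500)   # 500 epochs, as reported in the paper
```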

3.2.3. Benchmark methods and parameter settings

To evaluate the effectiveness of the proposed approach, three typical methods were benchmarked: traditional object-based image analysis (OBIA), the standard pixel-wise CNN (PCNN), and the standard object-based CNN (OCNN). Note that the multi-temporal remotely sensed images were stacked directly together and used as the input to the three benchmarks in each experiment. To make a fair comparison, the architecture, as well as the hyperparameters (filter size, dropout value, etc.), of the two CNN-based comparators (i.e., PCNN and OCNN) were kept the same as the CNN model within the TS-OCNN (denoted as CNN_TS-OCNN hereafter). The parameters of the benchmark methods are detailed as follows:

OBIA: The OBIA was implemented on the segmented objects acquired by the MRS algorithm in Section 3.2.1. A range of hand-crafted features were extracted from the objects, including spectral features (mean and standard deviation) and texture variables (mean and variance). A parameterized SVM classifier was then adopted to classify the segmented objects with these hand-coded feature representations.

PCNN: The standard pixel-wise CNN classifies each pixel across the entire image using densely overlapping patches. The most important parameter, which directly determines the classification accuracy of the PCNN, is the input patch size. Herein, the input window size was cross-validated over a range of CNN window sizes, {8 × 8, 16 × 16, 24 × 24, 32 × 32, 40 × 40}, and 24 × 24 was found to be the optimal input patch size. The other parameters were identical to those of the CNN_TS-OCNN.

OCNN: Like the proposed TS-OCNN, the OCNN was implemented on the segmented objects. The difference between the OCNN and the TS-OCNN is the usage of the multi-temporal images. Herein, the multi-temporal images were spatially combined and used as the input to the CNN model. The input patch size was parameterized as 32 × 32, selected from {16 × 16, 24 × 24, 32 × 32, 40 × 40, 48 × 48} through trial and error. The other parameters were kept the same as for the CNN_TS-OCNN.
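The patch-size selection mentioned for both the PCNN and the OCNN amounts to a simple cross-validated grid search; an illustrative sketch follows, where `train_and_validate` is a placeholder standing in for training the CNN at a given window size and returning its validation accuracy.

```python
# Illustrative window-size selection by cross-validated grid search.
candidate_sizes = [8, 16, 24, 32, 40]
best_size = max(candidate_sizes, key=lambda s: train_and_validate(patch_size=s))
print(f"optimal input patch size: {best_size} x {best_size}")
```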

4. Crop classification results

4.1. TS-OCNN classification accuracies

Figure 3 illustrates how the overall accuracy (OA) of the TS-OCNN varies with iteration in both study areas. Note that the number of iterations is equal to the number of images in the image time-series, so there are four and three iterations in S1 and S2, respectively. It can be seen that the classification accuracy of the TS-OCNN in S1 started at 81.63%, kept increasing with iteration, and reached the highest accuracy of 82.68% at iteration 3, followed by a slight decrease at iteration 4. The most accurate classification, generated at iteration 3, was selected as the final thematic classification (TC_final) for S1. Similarly, the classification accuracy increased gradually through iteration in S2. Specifically, the accuracy started from 85.88%, increased rapidly along the iterative process (i.e., the temporal sequence), and reached a maximum of 87.40% at the third iteration. As a result, the classification of the last iteration was chosen as the final thematic classification for S2.

4.2. TS-OCNN classification results

To illustrate visually how the temporal sequence increased the classification accuracy through iteration, the crop classifications produced by the TS-OCNN in S1 and S2 are shown in Fig. 4A and B, respectively. For each site, three typical subsets at different iterations (1, 2 and 3) are illustrated, with correct and incorrect class allocations highlighted with yellow and red circles, respectively.

Fig. 4. Typical image subsets of the TS-OCNN classification in S1 (A) and S2 (B). ORC denotes the original classification of the TS-OCNN. Correct and incorrect classifications are highlighted using yellow and red circles, respectively.

For S1, with spatially stacked multi-temporal images, the TS-OCNN failed to distinguish grass from alfalfa, as shown by the red circles in the original classification (ORC) (Fig. 4A-a). Besides, parts of alfalfa and walnut were misidentified as clover and almond, respectively (Fig. 4A-b, c), because of their highly similar spectra. These misclassifications were gradually corrected through iteration as the single-date images were fed into the model successively (yellow circles in iterations 1-3). For example, the confusion between alfalfa and grass was rectified step-by-step, and the grass was correctly identified at iteration 3 (yellow circle in Fig. 4A-a). Similarly, the classification errors for walnut and alfalfa were also resolved through iteration, as illustrated by the yellow circles in Fig. 4A-b, c. In short, the proposed TS-OCNN achieved a desirable result by capturing the unique information contained in the single-date images.

Similar to S1, the crop classification results of S2 were refined gradually throughout the temporal sequence. Originally, wheat and sunflower were falsely identified as cucumber and tomato, respectively, as demonstrated by the red circles in the original classification (Fig. 4B-a, b). Besides, classification errors were also found at the boundaries between crop parcels. Such misclassifications were rectified with increasing iteration (Fig. 4B). For example, the crop parcel misclassified as cucumber was correctly rectified to wheat at iteration 2. At the same time, the misclassifications between wheat and almond, and between sunflower and tomato, were also corrected gradually and were eliminated completely at iteration 3. Moreover, the classification errors near parcel boundaries were removed through the process.

4.3. Benchmark comparison for the crop classification

Accuracy assessment: To assess quantitatively the effectiveness of the classification methods, the proposed TS-OCNN was compared with a range of benchmarks using the overall accuracy (OA), Kappa coefficient (κ), and per-class mapping accuracy. Table 1 reports the accuracy of crop classification for both S1 and S2. As can be observed from the table, the proposed TS-OCNN consistently acquired the most accurate results, with OAs of 82.68% and 87.40% for S1 and S2, higher than those of the OCNN (81.63% and 85.88%), OBIA (78.21% and 84.83%), and PCNN (79.18% and 82.90%). Similarly, the TS-OCNN produced the greatest Kappa coefficients, reaching 0.80 and 0.85 for S1 and S2, greater than those of the OCNN (0.78 and 0.83), OBIA (0.75 and 0.82), and PCNN (0.76 and 0.79), respectively. Besides, a McNemar test designed for pair-wise comparison further revealed that a significant increase in crop mapping accuracy was achieved by the presented TS-OCNN approach over the OCNN, OBIA, and PCNN, with z-values of 82.17, 158.25, and 123.19 in S1 and 142.88, 117.16, and 192.97 in S2, respectively (Table S2).
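A hedged sketch of this assessment is given below: overall accuracy and Cohen's kappa computed over the test pixels, plus a McNemar-style z-statistic (without continuity correction) for the pair-wise comparison of two classifiers. The arrays `y_true`, `pred_a` and `pred_b` are assumed 1-D label vectors over the test set and are not the authors' data.

```python
# Overall accuracy, kappa, and a McNemar z-statistic for pair-wise comparison.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

def mcnemar_z(y_true, pred_a, pred_b):
    a_only = np.sum((pred_a == y_true) & (pred_b != y_true))  # pixels correct only for A
    b_only = np.sum((pred_a != y_true) & (pred_b == y_true))  # pixels correct only for B
    return (a_only - b_only) / np.sqrt(a_only + b_only)

# oa    = accuracy_score(y_true, pred_a)
# kappa = cohen_kappa_score(y_true, pred_a)
# z     = mcnemar_z(y_true, pred_a, pred_b)   # |z| > 1.96 -> significant at the 95% level
```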

Table 1. Classification accuracy comparison between the pixel-wise CNN, OBIA, object-based CNN, and the proposed TS-OCNN in S1 and S2.

The effectiveness of the proposed approach was further evaluated using the per-class mapping accuracy. As shown in Table 1, the TS-OCNN achieved the greatest accuracy for most of the crop categories in both study sites. For S1, the most remarkable increase in accuracy can be seen for grass (79.31%), dramatically greater than for the comparators, including the OCNN (75.72%), the OBIA (58.38%) and the PCNN (74.56%). Similarly, the accuracies of almond, alfalfa and walnut achieved by the TS-OCNN were also markedly higher than for the benchmarks. For other crop categories such as clover, corn, and tomato, the TS-OCNN achieved only slight increases compared with the benchmarks, with an average accuracy increase between 1% and 3%. With respect to S2, satisfactory classification accuracy was achieved for most crop classes by the TS-OCNN, with the accuracies of six classes (walnut, alfalfa, wheat, corn, sunflower, and tomato) being higher than 85%. The most notable increase in accuracy was observed for alfalfa, with an accuracy of 94.71%, markedly greater by 11.01%, 9.71%, and 2.61% in comparison with the OCNN, OBIA, and PCNN, respectively (Table 1). Another large increase in accuracy was found for cucumber, with an average accuracy increase of 6.34%. Besides, a moderate accuracy increase was seen for walnut, almond, and fallow, with an average increase of around 3%-4%. Other crop classes, including wheat, corn, sunflower, and tomato, demonstrated only slight increases in average accuracy (<2.20%) relative to the other comparators.

Classification results: To evaluate visually the superiority of the proposed approach, the classifications of the TS-OCNN were compared with those of the benchmarks in S1 (Fig. 5A) and S2 (Fig. 5B), respectively. Clearly, the classification results produced by the pixel-wise CNN (PCNN) were severely affected by salt-and-pepper noise. For example, several speckle errors existed within the crop parcels, including alfalfa, tomato, and walnut (Fig. 5A-b, c). Besides, classification errors were found along the boundaries between crop parcels, as demonstrated in Fig. 5B-c. In contrast, the two object-based methods (OBIA and OCNN) significantly reduced the salt-and-pepper noise and achieved smooth classification results with precise boundary information. However, severe confusion between alfalfa and wheat, and between almond and walnut, was found in the OBIA classifications. The OCNN was superior to the OBIA in differentiating crop classes with similar spectra; for example, alfalfa and wheat were more accurately distinguished from each other, as shown in Fig. 5A-a, b. Nevertheless, misclassifications between alfalfa and grass in S1, and between sunflower and tomato in S2, still existed in the OCNN classifications. Making use of the temporal sequence, the proposed TS-OCNN approach corrected most of the aforementioned classification errors while keeping the precise crop boundaries well maintained.

5. Discussion

Agro-ecosystems are affected greatly by both human activities and natural conditions (e.g., climate change), making them highly complex and heterogeneous. As a result, classifying crop types accurately from fine spatial resolution (FSR) remotely sensed imagery remains a great challenge, even for state-of-the-art deep learning-based approaches [13,28]. Multi-temporal remotely sensed images have been used widely to increase crop mapping accuracy. Prior studies have tended to explore the joint information of multi-temporal observations, which are usually stacked together and fed into predefined models for crop classification [27,28,42]. However, few studies have explored the utility of mining the individual information from single-date images. In principle, a certain crop type might be easily and accurately separated from others at some point during the growing season [43]. For example, rice can be identified using imagery collected at the stage of flooding and rice transplanting [44]. Such unique individual information is complementary to the joint information captured from the whole image time-series and is potentially of great importance for differentiating crop classes across heterogeneous agricultural landscapes. In the proposed TS-OCNN, individual information about crop discrimination from each image was extracted and integrated gradually and automatically into the joint information, thus mining more comprehensively the latent information in the image time-series for crop classification. To further illustrate the contribution of single-date imagery to crop classification, the TS-OCNN was also implemented with each single-date image in the UAVSAR experiment. As can be seen from Fig. S2, the accuracies of the TS-OCNN with single-date images (numbers in red in Fig. S2) were generally better than that of the OCNN (81.63%), demonstrating the unique value of single-date images for crop mapping. At the same time, the accuracies with single images were inferior to that (82.68%) achieved with all temporal images, indicating that the TS-OCNN exploits the temporal images thoroughly.

Fig. 5. Typical image subsets of classification maps produced by the PCNN, OBIA, OCNN, and the presented TS-OCNN in S1 (A) and S2 (B).

This research demonstrates that it is difficult for the standard pixel-wise CNN (PCNN) to achieve desirable crop classification results from FSR imagery. In the PCNN, an input patch is adopted to learn the features used to identify each pixel across the entire image, which usually leads to severe geometric distortions (e.g., enlarged crop parcels) [45]. Unlike the standard CNN, the proposed TS-OCNN is built and implemented on segmented objects, leading to increases in crop classification accuracy. By using image segmentation, the salt-and-pepper noise affecting the crop classification results negatively was eliminated. More importantly, the TS-OCNN avoids mislabeling pixels falling near field edges (where misclassification occurs relatively often), thus maintaining the precise boundaries of crop parcels. Our results are consistent with previous research [26,30], highlighting the significance of object-based image analysis for complex remote sensing classification from FSR imagery. It should be noted that object-based image classification methods depend on the quality of the image segmentation results since they are implemented on segmented objects [30]. For image segmentation algorithms, the scale is a key parameter as it directly determines the size of the segmented objects [40]. Taking the UAVSAR experiment as an example, we illustrated how the results of image segmentation affect the classification accuracy of the object-based methods (OCNN and TS-OCNN) (Fig. 6). It can be seen that the object-based methods acquired the greatest accuracy with a scale value of 30 (achieving a small amount of over-segmentation), superior to those with values of 20 and 40. Nevertheless, the proposed approach consistently outperformed the OCNN, demonstrating the robustness of the TS-OCNN in object-based image classification.

While the TS-OCNN greatly increased the overall crop classification accuracy compared with the benchmark comparator (i.e., the OCNN), the increases in accuracy for individual crop categories are even more impressive. Generally, the increases in accuracy achieved for small-biomass crops with short stems and narrow leaves (e.g., alfalfa and clover) were greater than those for large-biomass crops with long stems and large leaves (e.g., corn and sunflower). For example, the TS-OCNN increased the accuracies of grass in S1 (from 75.72% to 79.31%) and cucumber in S2 (from 63.21% to 67.09%) by around 4%, even though the increased accuracies were still relatively low (less than 80%). In contrast, the accuracies of corn and sunflower were increased only slightly (by less than 1%) for both sites. It is likely that small-biomass crops with relatively weak signals [8] are more difficult to identify than large-biomass crops (which can be classified relatively easily), and they are thus difficult to distinguish accurately without utilizing the unique individual information available in single-date images. In other words, the accuracy of small-biomass crops benefits more from increases in the dimensionality of the observational data than that of large-biomass crops, which is in accord with previous findings [46].

A forward temporal sequence (FTS) was adopted in the proposed TS-OCNN. That is, the multi-temporal images were fed into the deep learning model scene-by-scene in the order of image acquisition date (i.e., starting with the earliest acquisition and moving towards the latest). In fact, several strategies are available for ordering the temporal sequence, for example, a random temporal sequence (i.e., an image is selected at each iteration randomly without considering the image acquisition date) and a backward temporal sequence (i.e., starting with the latest acquisition and moving towards the earliest). However, we found that the FTS was superior to the others in increasing crop classification accuracy. This may be attributed to the fact that temporally adjacent images are correlated with each other [47]; for a pair of temporally adjacent images, the classification result of the later acquisition is usually conditional upon that of the earlier acquisition.

6. Conclusions

This paper presented a new approach for crop classification from multi-temporal FSR remotely sensed imagery. In the proposed TS-OCNN, a CNN model was adopted to classify agricultural landscapes into crop classes at the object (crop parcel) level, thus maintaining the precise boundary information of crop parcels. The combination of image time-series was first utilized as the input to a CNN model to produce an 'original' classification result, and then the single-date images from the image time-series were fed into the deep learning model scene-by-scene in the order of image acquisition date to increase gradually the crop classification accuracy. As such, the joint information (sequential relationship) of the multi-temporal observations as well as the individual information from each image in the time-series were explored fully and utilized for crop classification. The experimental results on two heterogeneous agricultural areas with two types of FSR imagery demonstrated that the proposed TS-OCNN achieved consistently the most accurate classification results in comparison with state-of-the-art benchmarks. Specifically, the TS-OCNN markedly increased the classification accuracies of small-biomass crops (e.g., forage crops) that were very difficult to identify because of their indistinct remote sensing spectra. We, therefore, conclude that the newly presented TS-OCNN is an effective approach for crop classification from multi-temporal FSR remotely sensed imagery. Meanwhile, the TS-OCNN is readily generalizable to other landscapes (e.g., forest and wetland) and, thus, has wide application prospects.

    CRediT authorship contribution statement

Huapeng Li: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing - original draft, Writing - review & editing. Yajun Tian: Formal analysis, Investigation, Validation, Visualization. Ce Zhang: Methodology, Software, Writing - review & editing. Shuqing Zhang: Writing - original draft, Writing - review & editing. Peter M. Atkinson: Writing - original draft, Writing - review & editing.

    Declaration of competing interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgments

This research was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA28070503), the National Key Research and Development Program of China (2021YFD1500100), the Open Fund of the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University (20R04), and the Land Observation Satellite Supporting Platform of the National Civil Space Infrastructure Project (CASPLOS-CCSI).

The OCNN approach was developed during a PhD studentship, "Deep Learning in massive area, multi-scale resolution remotely sensed imagery" (EAA7369), sponsored by Lancaster University and Ordnance Survey (the national mapping agency of Great Britain). Ordnance Survey owns the intellectual property arising from the project, together with a US patent pending: 'Object Based Convolutional Neural Network' (US application number 16/156044). Lancaster University wishes to thank Ordnance Survey for permission to publish this paper and for the supply of aerial imagery and the supporting geospatial data (which are protected as Crown copyright) that facilitated the PhD.

Appendix A. Supplementary data

    Supplementary data for this article can be found online at https://doi.org/10.1016/j.cj.2022.07.005.
