
    DNN-Based Joint Classification for Multi-source Image Change Detection

Wenping Ma*, Zhizhou Li, Puzhao Zhang, Tianyu Hu, and Yue Wu

Multi-source change detection is an increasingly common problem and is of great significance for environmental monitoring and land exploration. Multi-source remote sensing images are acquired by different sensors and, even over the same region, are usually not consistent in spatial resolution or number of spectral bands. In this paper, we propose a novel joint classification framework for multi-source image change detection, in which the image-pair is produced by different sensors, such as an optical sensor and a synthetic aperture radar (SAR). The framework is built on deep neural networks for feature learning. First, deep neural networks extract deep features of the optical image, and these features are clustered to segment it. Then stacked denoising autoencoders learn a classification capability from reliable training examples, which are selected from the optical segmentation results within the areas that the ground truth marks as unchanged. Next, the other image of the pair is fed into the trained stacked denoising autoencoders and classified automatically, which yields two jointly classified images. Finally, the difference image is produced by comparing the two jointly classified images. Experimental results illustrate that the method can be applied to multi-source images and outperforms state-of-the-art methods.

Keywords: change detection; multi-source image; deep neural networks; feature learning

    1 Introduction

Image change detection is the process of determining and analyzing changes of regional features between two images observing the same surface area at different times[1]. It combines the corresponding characteristics and the remote sensing imaging mechanism to identify and analyze changes in regional characteristics, including changes in object location, extent, and surface properties[2]. With the rapid development of remote sensing techniques, multi-source image change detection has become an increasingly popular research topic and is of great significance in environmental and land exploration[3], natural disaster assessment[4], medical diagnosis[5], etc. Researchers can capture information about the earth's surface by using different sensors, including various airborne and satellite sensors. These sensors provide rich information about the globe for ground, ocean, and monitoring research[6]. However, remote sensing images obtained by different sensors are usually not completely consistent in terms of spatial resolution, number of spectral bands, wavelength range, and radiometric resolution[7]. Effective exploitation of such data is therefore a difficult issue, and traditional change detection methods are hard to apply directly to multi-source images[8].

The purpose of change detection is to determine the set of pixels that are inconsistent between multi-temporal images; these pixels make up the change image[9]. Various approaches to multi-source image change detection have been presented[10]. For instance, data fusion and kernel-based integration are used extensively in multi-source image change detection. Li[11] proposed the fusion of remotely sensed images and GIS data for automatic change detection. Gustavo et al.[12] proposed a kernel-based framework for multi-source remote sensing data change detection. Du et al.[13] proposed a method that integrates multiple features for remote sensing image change detection. Zhang et al.[14] proposed a deep architecture-based feature learning method, mapping-based feature change analysis (MBFCA), for change detection in multi-spatial-resolution remote sensing images. In spite of many successful cases in detecting environmental change, many technological problems remain in multi-source image change detection.

Because of the improvement of the spatial, temporal, and spectral resolution of remote sensing images, maps of a disaster area can be acquired quickly when a disaster occurs[15], and joint analysis of various image data is essential for disaster evaluation. For instance, obtaining excellent optical images with an optical sensor requires fine conditions, with no cloud and good solar illumination[16]; high-quality optical images cannot be obtained immediately when a disaster occurs in rainy or cloudy weather[17]. A SAR sensor, in contrast, images actively and is not affected by weather or illumination[18], so SAR images can be obtained immediately whatever the weather. However, SAR images do not carry the detailed spatial information that optical images have. Hence, the technique of joint interpretation of multi-source images is of great significance[19]. In general, image change detection consists of three steps: image preprocessing, generation of an initial difference map, and segmentation and analysis of the difference map[20].

In this paper, we propose a novel method for multi-source image change detection, called deep neural network (DNN)-based joint classification (DBJC). We focus on developing a change detection technique for the earth's surface based on a time series of land cover maps originating from different sensors. In our method, we assume that the unchanged areas are larger than the changed areas. The proposed method is joint classification based on DNN[21]: stacked denoising autoencoders learn a classification capability from reliable training examples, which are selected from the optical image segmentation results within the areas that the ground truth marks as unchanged. Then the other image of the image-pair is fed into the trained stacked denoising autoencoders and classified automatically, which yields two jointly classified images. Multi-layer neural networks have proved successful for extracting high-level features and for classification in ship detection[22].

The rest of this article is organized as follows: Section 2 describes the problem and our motivations for multi-source image change detection. Section 3 presents the details of the proposed technique. Experimental results on real datasets and synthetic images are shown in Section 4. Section 5 concludes our work.

    2 Problem and Motivation

In this paper, the purpose of multi-source image change detection is to find the changed areas in a given image-pair derived from different sensors. One co-registered multi-source image-pair is considered: a SAR image denoted by I_S = {I(x, y) | 1 ≤ x ≤ M, 1 ≤ y ≤ N} and an optical image denoted by I_O = {I(x, y) | 1 ≤ x ≤ M, 1 ≤ y ≤ N}. The SAR and optical images are of size M × N and are obtained over the same area at different times t1 and t2. The change detection result is presented as a binary image DI = {di(x, y) ∈ {0, 1} | 1 ≤ x ≤ M, 1 ≤ y ≤ N}, where di(x, y) = 0 indicates that the pixel at location (x, y) is unchanged, while di(x, y) = 1 indicates that it is changed.
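As a minimal illustration of this formulation (a sketch, assuming the two images have already been mapped to per-pixel class labels of the same size, as the joint classification described later produces), the binary change map DI can be computed by an element-wise comparison; the function name and the toy arrays are only for illustration:

```python
import numpy as np

def change_map(labels_t1: np.ndarray, labels_t2: np.ndarray) -> np.ndarray:
    """Binary change map DI for two co-registered M x N label maps.

    di(x, y) = 0 where the class label is unchanged and 1 where it changed.
    """
    assert labels_t1.shape == labels_t2.shape, "maps must be co-registered and equally sized"
    return (labels_t1 != labels_t2).astype(np.uint8)

# Toy 3 x 3 example: exactly one pixel differs, so DI marks it as changed.
t1 = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 1]])
t2 = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 1]])
print(change_map(t1, t2))  # a single 1 at position (1, 1), zeros elsewhere
```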

Because of the different imaging mechanisms of multi-source images, the spatial information of the two images is not consistent. We consider establishing a model based on DNN to explore the inner relation between the two images obtained by different sensors. Since the majority of objects in the two images are the same, we assume they share some relation in high-level features[23], and we can find the changed areas between the two images by exploring this inner connection[24,25]. We use autoencoders to extract deep features within the local neighborhood of each pixel. The flowchart of our method is shown in Fig.1. We use the clustering results of one image to guide the classification of the other, aiming at converting two images with different types of data into the same type of data.

Fig.1 Flowchart of our method: (1) Preprocessing is applied to the two given multi-source images, (2) Deep neural networks are used to extract the deep-level features of the optical image, (3) The clustering segmentation results of the optical image are produced by feature clustering and then used to select reliable labels for training the SDAE to learn the capability of classification, (4) The trained SDAE is used to classify the SAR image, and (5) The change detection results are obtained by comparing the two jointly classified images.

2.1 Unsupervised feature learning and clustering

An artificial neural network performs well in pattern recognition and machine learning[26] and has the capability of representing non-linear functions. Stacked denoising autoencoders (SDAE) are known to perform well at learning edge features of image patches when trained in an unsupervised way[27]. We use the fuzzy c-means (FCM) algorithm[28] to cluster the optical image. In 2006, Chuang and Tzeng presented an FCM algorithm that utilizes spatial information to reduce noisy spots in image segmentation[29]. In our clustering approach, we cluster features that are extracted from the local neighborhood of each pixel by the SDAE. In a clustering segmentation method proposed in 1997, Ohm and Ma used different features of pixel neighborhoods, which showed low complexity and reliable segmentation[30].
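The paper does not spell out the exact FCM configuration, so the following is a minimal sketch of plain fuzzy c-means on per-pixel deep features, assuming the features have already been extracted by the SDAE and flattened into a (num_pixels x feature_dim) array; the fuzzifier m = 2, the cluster count, and the random placeholder features are illustrative choices, not values taken from the paper.

```python
import numpy as np

def fuzzy_cmeans(features, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means on feature vectors (one row per pixel)."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)                           # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ features / um.sum(axis=0)[:, None]     # membership-weighted cluster centers
        dist = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        ratio = dist[:, :, None] / dist[:, None, :]              # d_ij / d_ik
        u_new = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)   # standard FCM membership update
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return u, centers

# Illustrative use: cluster placeholder 8-dimensional deep features of 500 pixels into 2 classes.
feats = np.random.default_rng(1).random((500, 8))
u, centers = fuzzy_cmeans(feats, n_clusters=2)
hard_labels = u.argmax(axis=1)          # segmentation label per pixel
```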

    2.2 Joint classification

As shown for the Yellow River image in Fig.2, the SAR image on the left has low resolution and ambiguous spatial details, whereas the optical image on the right has high resolution. Obviously, these two images are not directly comparable. An SDAE model with a classifier attached shows a low classification error by learning useful high-level representations of image patches[31]. After clustering one image, we choose part of the reliable segmentation results as labels for training the SDAE with its classifier in a supervised way, so that it learns the capability of classification. As a result, two images with different types of data are converted into the same type of data by letting the clustering results of one image guide the classification of the other.

Fig.2 Example of a multi-source image-pair obtained by different sensors at different times: (a) The SAR image obtained by Radarsat, and (b) The optical image acquired from Google Earth.

    3 Methodology

In this section, we introduce the specific application of the proposed method. As shown in Fig.1, the flowchart presents the whole change detection process of our method. The two co-registered images obtained by different sensors, an optical image and a SAR image, form the image-pair in this paper. First, preprocessing is applied to the image-pair; it mainly includes filtering and dividing the images into patches. Second, learning deep-level features is the key step for clustering the optical image, and the optical image is clustered based on the features extracted in the previous step. Then we choose part of the reliable pixels in the optical clustering results as labels, selected from the areas that the ground truth marks as unchanged, and the SAR image patches at the corresponding positions are the inputs of the SDAE with its classifier, which learns the capability of classification. After training the SDAE, we input the SAR image patches into the trained SDAE to classify the SAR image. Finally, the difference image (DI) is produced by comparing the jointly classified image-pair.
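The selection rule for "reliable" pixels is only described loosely above, so the sketch below shows one plausible reading: a pixel contributes a (SAR patch, optical cluster label) training pair only if it lies in a region assumed to be unchanged and its clustering membership is high. The array names, the 0.9 threshold, and the helper itself are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def select_training_pairs(sar_patches, cluster_labels, membership, unchanged_mask, thresh=0.9):
    """Pick reliable (SAR patch, optical cluster label) pairs for training the SDAE classifier.

    sar_patches   : (H, W, P) flattened SAR neighbourhood patch of every pixel
    cluster_labels: (H, W) clustering result of the optical image
    membership    : (H, W) membership of every pixel in its winning cluster (e.g. from FCM)
    unchanged_mask: (H, W) boolean map of regions assumed to be unchanged
    """
    reliable = unchanged_mask & (membership >= thresh)
    rows, cols = np.nonzero(reliable)
    x_train = sar_patches[rows, cols]       # inputs: SAR patches at the reliable positions
    y_train = cluster_labels[rows, cols]    # targets: optical cluster labels at the same positions
    return x_train, y_train
```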

    3.1 Stacked denoising autoencoders

A general autoencoder consists of two parts, as shown in Fig.3: an encoder and a decoder. The encoder is trained to learn an implicit feature representation by transforming the input vector x into the hidden layer h. Mathematically, it is a mapping followed by a nonlinearity:

h_n = f_{w,b}(x_n) = sigmoid(w x_n + b)    (1)

where w is an m × n weight matrix and b is a bias vector of dimensionality m. The sigmoid function is defined as follows:

sigmoid(z) = 1 / (1 + exp(-z))    (2)

The decoder maps the hidden representation back to a reconstruction y_n of the input:

y_n = g_{w',b'}(h_n) = sigmoid(w' h_n + b')    (3)

where w' is an n × m weight matrix and b' is a bias vector of dimensionality n. For an autoencoder, we optimize the parameters by minimizing the average reconstruction error:

J(w, b, w', b') = (1/N) sum_{n=1}^{N} ||x_n - y_n||^2    (4)
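To make the notation concrete, here is a small numpy sketch of the forward pass and cost of a single autoencoder following Eqs. (1)-(4); the squared-error form matches the reconstruction error written in Eq. (4), while the batch size and the 9-input/4-hidden shapes in the smoke test are purely illustrative.

```python
import numpy as np

def sigmoid(z):
    """Eq. (2): element-wise logistic function."""
    return 1.0 / (1.0 + np.exp(-z))

def autoencoder_cost(x, w, b, w_dec, b_dec):
    """Average reconstruction error of one autoencoder, following Eqs. (1)-(4).

    x            : (N, n) batch of input vectors
    w, b         : encoder weight matrix (m, n) and bias (m,)
    w_dec, b_dec : decoder weight matrix (n, m) and bias (n,)
    """
    h = sigmoid(x @ w.T + b)              # Eq. (1): hidden representation
    y = sigmoid(h @ w_dec.T + b_dec)      # Eq. (3): reconstruction of the input
    return np.mean(np.sum((x - y) ** 2, axis=1))   # Eq. (4): average reconstruction error

# Smoke test with random parameters: n = 9 inputs (a flattened 3x3 patch), m = 4 hidden units.
rng = np.random.default_rng(0)
x = rng.random((16, 9))
w, b = rng.normal(size=(4, 9)), np.zeros(4)
w_dec, b_dec = rng.normal(size=(9, 4)), np.zeros(9)
print(autoencoder_cost(x, w, b, w_dec, b_dec))
```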

Fig.3 Autoencoder (AE): the autoencoder is the building block of deep neural networks and consists of an encoder and a decoder. The first-layer neurons denote the input, the neurons in the middle layer are the characteristics learned from the first layer, and the third layer is the reconstructed version of the input.

In our method, the network is a fully connected multi-hidden-layer SDAE built for learning local features. A multi-hidden-layer SDAE comprises multiple autoencoders. During training, each layer of the network is first trained in a layer-wise fashion, and then the whole deep neural network is trained. The hidden layer of AE1, h(1), is the input of AE2, as shown in Fig.4, with h(1) as the first-order representation and h(2) as the second-order representation. A 2-hidden-layer SDAE with structure 6-3-4 is presented in Fig.5, where the full deep neural network has structure 6-3-4-3-6; here 6, 3, and 4 are the numbers of neurons in each layer. In our method, the second-order representation provides the features used for joint classification.
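The greedy layer-wise training described above might be sketched as follows in PyTorch, with each denoising autoencoder (DAE) brick trained on the output of the previous one; the masking-noise level, mean-squared reconstruction loss, layer sizes (9-6-3 for flattened 3x3 patches), learning rate, and epoch count are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """One denoising autoencoder brick: corrupt the input, then encode and decode it."""
    def __init__(self, n_in, n_hidden, noise=0.2):
        super().__init__()
        self.noise = noise
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        x_noisy = x * (torch.rand_like(x) > self.noise).float()   # masking noise
        return self.decoder(self.encoder(x_noisy))

def pretrain_layerwise(sizes, data, epochs=10, lr=0.1):
    """Greedy layer-wise pre-training, e.g. sizes = [9, 6, 3] for a 2-hidden-layer SDAE."""
    bricks, current = [], data
    for n_in, n_hidden in zip(sizes[:-1], sizes[1:]):
        ae = DenoisingAE(n_in, n_hidden)
        opt = torch.optim.SGD(ae.parameters(), lr=lr)
        for _ in range(epochs):
            loss = nn.functional.mse_loss(ae(current), current)    # reconstruct the clean input
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            current = ae.encoder(current)       # clean hidden code feeds the next brick
        bricks.append(ae)
    return bricks

# Toy usage: flattened 3x3 patches (9-dimensional) pre-trained with structure 9-6-3.
patches = torch.rand(256, 9)
bricks = pretrain_layerwise([9, 6, 3], patches)
```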

Fig.4 The hidden layer of AE1, h(1), is the input of AE2, with h(1) as the first-order representation and h(2) as the second-order representation.

Fig.5 2-layer SDAE with structure 6-3-4-3-6: in pre-training, each DAE is trained layer-wise, and the training results serve as its initialization parameters. Fine-tuning is then applied to the entire network in an unsupervised manner by the back-propagation algorithm to improve the performance.

    3.2 Classifier and fine-tuning

After training the SDAE so that it has the capability of classification, we feed the SAR image into the network to extract deep features, and then input the high-level representation into a classifier for SAR image classification. After layer-wise pre-training of the DAEs, the training results are used as the initialization of the entire multi-layer network. Fine-tuning is then applied to the entire network in an unsupervised manner to improve the classification performance; the back-propagation algorithm and stochastic gradient descent (SGD) are used in the process of fine-tuning the whole network.
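Continuing the pre-training sketch above (and reusing its DenoisingAE bricks), the whole 6-3-4-3-6-style stack can then be unrolled and fine-tuned end to end with back-propagation and SGD on the reconstruction error, as described here; the optimizer settings below are illustrative assumptions.

```python
import torch
import torch.nn as nn

def build_stack(bricks):
    """Unroll pre-trained DAE bricks into one deep encoder-decoder (e.g. 9-6-3-6-9)."""
    encoders = [ae.encoder for ae in bricks]
    decoders = [ae.decoder for ae in reversed(bricks)]
    return nn.Sequential(*encoders, *decoders)

def fine_tune(stacked, data, epochs=20, lr=0.05):
    """Fine-tune the whole network with back-propagation and stochastic gradient descent."""
    opt = torch.optim.SGD(stacked.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(stacked(data), data)   # reconstruction error of the full stack
        opt.zero_grad()
        loss.backward()
        opt.step()
    return stacked

# Usage with the `bricks` and `patches` from the pre-training sketch:
# stacked = fine_tune(build_stack(bricks), patches)
```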

Fine-tuning is a common strategy in deep learning, and it can significantly enhance the performance of stacked denoising autoencoder networks. From a higher perspective, fine-tuning treats all layers of the stacked denoising autoencoders as one model, so the weights of the network can be optimized in each iteration. In this paper, we use softmax regression as the final classifier (see Fig.6). Softmax regression generalizes the logistic regression model to multi-class classification problems, where the class label y can take more than two values. The softmax regression function is defined as follows:

P(y = j | h; θ) = exp(w_j h + b_j) / sum_{k=1}^{p} exp(w_k h + b_k),  j = 1, ..., p    (5)

where θ = {w, b} and p is the number of classes.
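A small numpy sketch of this classifier, computing Eq. (5) for a batch of second-order representations; the linear-logit parameterization with θ = {w, b}, the max-subtraction for numerical stability, and the shapes below are standard choices and illustrative assumptions.

```python
import numpy as np

def softmax_regression(h, w, b):
    """Eq. (5): class probabilities for features h (N, d) with weights w (p, d) and bias b (p,)."""
    logits = h @ w.T + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability; does not change the result
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

# Illustrative use: second-order representations h(2) classified into p = 4 classes, as in Fig.6.
rng = np.random.default_rng(0)
h2 = rng.random((5, 3))
w, b = rng.normal(size=(4, 3)), np.zeros(4)
probs = softmax_regression(h2, w, b)
print(probs.sum(axis=1))   # every row sums to 1
```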

Fig.6 The second-order representation h(2) is the input of the softmax classifier; the number of classes is set to 4 in this illustration.

    4 Experiments

To demonstrate the effectiveness of DBJC, we test one pair of SAR images and three pairs of multi-source images (SAR and optical images) on the multi-source image change detection problem. Mapping-based feature change analysis (MBFCA), principal component analysis (PCA)[32], and post-classification comparison (PCC)[33] are selected as the compared methods. In our method, deep neural networks first extract the deep-level features of the optical image, and a clustering algorithm then segments the optical image. We choose part of the reliable pixels in the optical clustering results as labels, and the SAR image patches at the corresponding positions are the inputs of the SDAE with its classifier, which learns the capability of classification, as described in Section 3. When we use autoencoders to extract deep features, the neighborhood window in the image block is 3×3; a larger window may lead to edge blur and image distortion. We set four deep architectures of different depths for the SDAE to learn features: two, three, four, and five layers, respectively.
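As a sketch of the 3×3 neighborhood input mentioned above, each pixel's local window can be flattened into the vector fed to the SDAE; the edge-padding strategy and the helper name below are assumptions for illustration.

```python
import numpy as np

def extract_patches(image, win=3):
    """Flattened win x win neighbourhood of every pixel, using edge padding at the borders.

    image : (H, W) single-band image
    returns a (H, W, win*win) array holding one patch per pixel
    """
    pad = win // 2
    padded = np.pad(image, pad, mode="edge")
    H, W = image.shape
    patches = np.empty((H, W, win * win), dtype=image.dtype)
    for dy in range(win):
        for dx in range(win):
            patches[:, :, dy * win + dx] = padded[dy:dy + H, dx:dx + W]
    return patches

# Example: 3x3 patches of a small image become 9-dimensional SDAE input vectors.
img = np.arange(25, dtype=float).reshape(5, 5)
print(extract_patches(img).shape)   # (5, 5, 9)
```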

    4.1 Datasets

In the experiments, three pairs of datasets come from the Yellow River Estuary region. The Yellow River data contain two SAR images with a spatial resolution of 8 m, obtained by Radarsat-2 over the Yellow River Estuary region in China in June 2008 and June 2009; they are shown in Fig.7(a) and Fig.7(b) with a size of 7666×7692. We choose three typical regions a, b, and c, marked in red in Fig.7.

The first dataset consists of two SAR images, shown in Fig.8, from region a of the Yellow River Estuary with 306×291 pixels. There are generally two classes of objects in these images, i.e., farmland and water. Fig.8(c) is the ground truth, which was produced by integrating prior information with photo-interpretation based on the original input images in Fig.8(a) and Fig.8(b).

There are two pairs of multi-source images, shown in Fig.9 and Fig.11, from the Yellow River Estuary region. In each pair, the optical image acquired from Google Earth covers the same region as the corresponding SAR image, and the images were co-registered with the method in [34]. The second dataset consists of two multi-source images, shown in Fig.9, obtained over region b of the Yellow River Estuary. The SAR image was acquired by Radarsat, as shown in Fig.9(a), and the other image, in Fig.9(b), is the optical image obtained from Google Earth. Both images have a size of 340×290, and Fig.9(c) is the ground truth produced by integrating prior information based on the images in Fig.9(a) and Fig.9(b).

The third dataset, shown in Fig.10, was obtained over Sardinia in the Mediterranean, Italy; one image is a TM image acquired by Landsat-5 in September 1995, and the other is an optical image acquired from Google Earth in July 1996. Fig.10(a) is the fifth band of the TM image, with a spatial resolution of 30 m, while Fig.10(b) is the optical image of the corresponding region obtained from Google Earth, with a spatial resolution of 4 m. Fig.10(c) is the reference ground truth obtained by manual plotting. Both images have a size of 300×412.

The last dataset, shown in Fig.11(a) and (b) with a size of 333×391, consists of one SAR image and one optical image. The SAR image was acquired over region c of the Yellow River Estuary in June 2008. The optical image was acquired from Google Earth in December 2013, with a spatial resolution of 4 m; it was produced by integrating imagery from QuickBird and Landsat-7. The two images cover the same area, and the major changed area can be observed in the ground truth shown in Fig.11(c).

Fig.7 Multi-temporal image pair relating to the Yellow River Estuary: (a) The image obtained in June 2008, and (b) The image obtained in June 2009.

Fig.8 Multi-temporal image pair relating to region a of the Yellow River Estuary: (a) The image obtained in June 2008, (b) The image obtained in June 2009, and (c) The ground truth image.

Fig.9 Multi-source image-pair relating to region b of the Yellow River Estuary: (a) The SAR image obtained by Radarsat, (b) The optical image acquired from Google Earth, and (c) The ground truth image.

Fig.10 Multi-source image-pair of Sardinia in the Mediterranean from different sensors: (a) The TM image, (b) The optical image obtained from Google Earth, and (c) The ground truth image.

Fig.11 Multi-source image-pair relating to region c of the Yellow River Estuary: (a) The SAR image obtained by Radarsat, (b) The optical image acquired from Google Earth, and (c) The ground truth image.

4.2 Evaluating index

The change detection results are evaluated with the false negatives (FN), false positives (FP), overall error (OE = FN + FP), correct classification rate (CCR), and kappa coefficient (KC) reported in Tables 1-4.
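A minimal sketch of how these indices can be computed from a binary change map and the ground truth, under their usual definitions; the function name and the 1 = changed encoding are assumptions.

```python
import numpy as np

def evaluate_change_map(detected, truth):
    """FN, FP, OE, CCR (%) and kappa coefficient KC for binary change maps (1 = changed)."""
    detected, truth = detected.ravel().astype(bool), truth.ravel().astype(bool)
    n = truth.size
    fn = int(np.sum(truth & ~detected))      # changed pixels that were missed
    fp = int(np.sum(~truth & detected))      # unchanged pixels reported as changed
    oe = fn + fp                             # overall error
    ccr = 100.0 * (n - oe) / n               # correct classification rate
    tp = int(np.sum(truth & detected))
    tn = int(np.sum(~truth & ~detected))
    po = (tp + tn) / n                                                  # observed agreement
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)      # chance agreement
    kc = (po - pe) / (1.0 - pe) if pe < 1.0 else 1.0
    return fn, fp, oe, ccr, kc
```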

4.3 Results on region a of the Yellow River Estuary

The first experiment is conducted on region a of the Yellow River Estuary. This dataset consists of homogeneous images, i.e., two SAR images. The change maps obtained by the three compared algorithms are shown in Fig.12(a), (b), and (c), and the change map of the proposed method (DBJC) is displayed in Fig.12(d). A detailed quantitative analysis of the change maps achieved by the four methods is given in Table 1. According to the experimental results and the quantitative analysis, the detection results obtained by our method outperform the compared methods. In Fig.12(b), a large number of pixels are wrongly detected as changed. MBFCA achieves a high correct classification rate, which reflects its robustness to noise thanks to feature mapping, but its FP is slightly higher than that of our method. From Table 1, we can see that the FN, FP, CCR, and KC of our method achieve high performance.

Table 1 Comparison of change detection results on region a of the Yellow River Estuary.

Method   FN     FP     OE     CCR/%   KC
PCA      832    27301  28133  68.41   0.1540
PCC      1838   9225   11063  87.58   0.3266
MBFCA    1759   566    2325   97.39   0.7377
DBJC     1761   351    2112   97.63   0.7564

4.4 Results on region b of the Yellow River Estuary

The second experiment is conducted on region b of the Yellow River Estuary and uses two multi-source images, i.e., an optical image and a SAR image; they are heterogeneous multi-source remote sensing images. For this dataset, we set four deep architectures of different depths for the SDAE to learn features for joint classification: two, three, four, and five layers, for comparison. To demonstrate the effectiveness of the SDAE, we take the features learned by the SDAE with two, three, four, and five hidden layers, respectively, for joint classification. The results of the different deep architectures and the compared methods are shown in Fig.13. The detection results of the traditional pixel-based methods PCC and PCA are shown in Fig.13(a) and (b), and the feature-mapping-based change detection map is displayed in Fig.13(c). The unchanged area of the ground truth shown in Fig.9(c) is used, within the optical image segmentation results, to select reliable training examples. After training the SDAE, the SAR image of the image-pair is fed into the trained stacked denoising autoencoders and classified automatically. A detailed quantitative analysis of the change maps achieved by the seven methods is given in Table 2. As we can see in Fig.13(e), the three-layer SDAE has the best performance in joint classification. Compared with the traditional change detection methods, deep networks can learn more abstract features and achieve a better effect on heterogeneous image change detection.

Fig.12 Change detection results relating to region a of the Yellow River Estuary by using different methods: (a) PCC, (b) PCA, (c) MBFCA, and (d) DBJC.

Fig.13 Change detection results by using different deep architectures and compared methods on region b of the Yellow River Estuary: (a) PCC, (b) PCA, (c) MBFCA, (d) Two-layer architecture, (e) Three-layer architecture, (f) Four-layer architecture, and (g) Five-layer architecture.

Table 2 Comparison of change detection results on region b of the Yellow River Estuary.

Method    FN     FP     OE     CCR/%   KC
PCA       1152   11308  12460  87.36   0.1766
PCC       513    1537   2050   97.92   0.7830
MBFCA     854    1681   2535   97.34   0.7377
2-layer   1207   4941   6148   93.76   0.5435
3-layer   806    1097   1903   98.07   0.7984
4-layer   1248   8852   10100  89.76   0.3045
5-layer   1103   15097  16200  83.57   0.1359

4.5 Results on the Mediterranean Sardinia dataset

For the Mediterranean Sardinia dataset, we again set four deep architectures of different depths for the SDAE to learn features for joint classification: two, three, four, and five layers, for comparison. The final change detection results are shown in Fig.14. The change maps in Fig.14(a) and (b), produced by PCC and PCA respectively, contain a large number of noise points, while the change detection result in Fig.14(c), produced by MBFCA, has fewer noise points. The change detection results of the different deep architectures are shown in Fig.14(d)-(g). It is obvious that the more layers the architecture has, the worse the performance becomes. The two-layer and three-layer architectures, shown in Fig.14(d) and (e), give the best joint classification among the tested networks. Table 3 shows the quantitative results on the Sardinia dataset and demonstrates that the proposed method suppresses noise significantly on this dataset.

Fig.14 Change detection results by using different deep architectures and compared methods on the Mediterranean Sardinia dataset: (a) PCC, (b) PCA, (c) MBFCA, (d) Two-layer architecture, (e) Three-layer architecture, (f) Four-layer architecture, and (g) Five-layer architecture.

Table 3 Comparison of change detection results on the Sardinia region dataset.

Method    FN     FP     OE     CCR/%   KC
PCA       1545   19231  20776  83.19   0.3031
PCC       1249   8955   10204  91.74   0.5160
MBFCA     1641   3411   5052   95.91   0.8720
2-layer   1720   3561   5281   95.73   0.7170
3-layer   1902   11545  13447  89.12   0.4093
4-layer   1904   11600  13504  89.07   0.4080
5-layer   2298   22978  25276  79.55   0.2208

4.6 Results on region c of the Yellow River Estuary

This dataset consists of two heterogeneous multi-source images, i.e., an optical image and a SAR image. The final change detection results are illustrated in Fig.15. The PCA method performs well on this dataset, as shown in Fig.15(b). In DBJC, the main FP pixels are caused by inaccurate co-registration; the change detection result produced by DBJC is shown in Fig.15(d). In Fig.15(c), the result generated by MBFCA also displays a good performance in terms of CCR and KC. Table 4 shows the quantitative results for the region c dataset of the Yellow River Estuary.

Fig.15 Change detection results relating to region c of the Yellow River Estuary by using different methods: (a) PCC, (b) PCA, (c) MBFCA, and (d) DBJC.

Table 4 Comparison of change detection results relating to region c of the Yellow River Estuary.

Method   FN     FP     OE     CCR/%   KC
PCA      1695   2960   4655   96.42   0.8516
PCC      963    15516  16479  87.34   0.6840
MBFCA    2616   14410  17026  86.92   0.5645
DBJC     1525   4180   5705   95.26   0.8239

    5 Conclusion

In this paper, a novel joint classification framework for multi-source image change detection is proposed. Multi-source image change detection is an increasingly popular research topic and is of great significance in environmental and land exploration. Due to the inconsistency of multi-source images in terms of spatial resolution, traditional change detection methods are difficult to apply directly to multi-source images. SDAE, an efficient network for extracting deep features, is used for feature extraction. We utilize the SDAE to explore the inner relation between the images so as to achieve joint classification for multi-source image change detection. A deep structure can find a better representation of image texture information, and selecting reliable training samples is the key to the method. Experimental results on real datasets illustrate that the method can be applied to multi-source images and outperforms the state-of-the-art methods in terms of detection accuracy. Because of the different properties of multi-source images, they are often not directly comparable, so better representations between images should be explored for multi-source image change detection; our future work will mainly focus on this.

[1] R. J. Radke, S. Andra, O. Al-Kofahi, and B. Roysam, Image change detection algorithms: a systematic survey, IEEE Transactions on Image Processing, vol. 14, pp. 294-307, 2005.

[2] O. Kit and M. Ludeke, Automated detection of slum area change in Hyderabad, India using multitemporal satellite imagery, ISPRS Journal of Photogrammetry and Remote Sensing, vol. 83, no. 9, pp. 130-137, 2013.

[3] J. Chen, M. Lu, X. Chen, J. Chen, and L. Chen, A spectral gradient difference based approach for land cover change detection, ISPRS Journal of Photogrammetry and Remote Sensing, vol. 85, no. 2, pp. 1-12, 2013.

[4] S. Stramondo, C. Bignami, M. Chini, N. Pierdicca, and A. Tertulliani, Satellite radar and optical remote sensing for earthquake damage detection: results from different case studies, International Journal of Remote Sensing, vol. 27, no. 20, pp. 4433-4447, 2006.

[5] D. M. Beck, G. Rees, C. D. Frith, and N. Lavie, Neural correlates of change detection and change blindness, Nature Neuroscience, vol. 4, no. 6, pp. 645-650, 2001.

[6] C. C. Petit and E. F. Lambin, Integration of multi-source remote sensing data for land cover change detection, International Journal of Geographical Information Science, vol. 15, no. 8, pp. 785-803, 2001.

[7] T. Ranchin and L. Wald, Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation, Photogrammetric Engineering and Remote Sensing, vol. 66, no. 2, pp. 49-61, 2000.

[8] A. H. Ozcan, C. Unsalan, and P. Reinartz, A systematic approach for building change detection using multi-source data, in Proceedings of the 22nd IEEE Signal Processing and Communications Applications Conference, Trabzon, Turkey, 2014, pp. 477-480.

[9] C. Song, C. E. Woodcock, K. C. Seto, M. P. Lenney, and S. A. Macomber, Classification and change detection using Landsat TM data: When and how to correct atmospheric effects, Remote Sensing of Environment, vol. 75, no. 2, pp. 230-244, 2001.

[10] Y. Wen, Data application of multi-temporal and multi-source data for land cover change detection in Guam, in Proceedings of the 19th International Conference on Geoinformatics, Shanghai, China, 2011, pp. 1-4.

[11] D. Li, Remotely sensed images and GIS data fusion for automatic change detection, International Journal of Image and Data Fusion, vol. 1, no. 1, pp. 99-108, 2010.

[12] C. V. Gustavo, G. C. Luis, M. M. Jordi, R. A. Jos, and M. R. Manel, Kernel-based framework for multitemporal and multisource remote sensing data classification and change detection, IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 6, pp. 1822-1835, 2008.

[13] P. Du, S. Liu, J. Xia, and Y. Zhao, Information fusion techniques for change detection from multi-temporal remote sensing images, Information Fusion, vol. 14, no. 1, pp. 19-27, 2013.

[14] P. Zhang, M. Gong, L. Su, J. Liu, and Z. Li, Change detection based on deep feature representation and mapping transformation for multi-spatial-resolution remote sensing images, ISPRS Journal of Photogrammetry and Remote Sensing, vol. 116, pp. 24-41, 2016.

[15] A. Schmitt, B. Wessel, and A. Roth, Curvelet-based change detection on SAR images for natural disaster mapping, Photogrammetrie Fernerkundung Geoinformation, no. 6, pp. 463-474, 2010.

[16] T. A. Dickinson, J. White, J. S. Kauer, and D. R. Walt, A chemical-detecting system based on a cross-reactive optical sensor array, Nature, vol. 382, pp. 697-700, 1996.

[17] A. Singh, Review article digital change detection techniques using remotely-sensed data, International Journal of Remote Sensing, vol. 10, pp. 989-1003, 1989.

[18] D. Tarchi, N. Casagli, S. Moretti, D. Leva, and A. J. Sieber, Monitoring landslide displacements by using ground-based synthetic aperture radar interferometry: Application to the Ruinon landslide in the Italian Alps, Journal of Geophysical Research Atmospheres, vol. 108, pp. 503-518, 2003.

[19] D. C. Mason, C. Oddy, A. J. Rye, S. B. M. Bell, M. Illingworth, K. Preedy, C. Angelikaki, and E. Pearson, Spatial database manager for a multi-source image understanding system, Image and Vision Computing, vol. 11, pp. 25-34, 1993.

[20] A. Hecheltjen, F. Thonfeld, and G. Menz, Recent advances in remote sensing change detection - a review, Land Use and Land Cover Mapping in Europe, vol. 18, pp. 145-178, 2014.

[21] C. Dan, U. Meier, and J. Schmidhuber, Multi-column deep neural networks for image classification, in Proceedings of the 25th IEEE Conference on Computer Vision and Pattern Recognition, Washington DC, USA, 2012, pp. 3642-3649.

[22] J. Tang, C. Deng, G. Huang, and B. Zhao, Compressed-domain ship detection on spaceborne optical image using deep neural network and extreme learning machine, IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 3, pp. 1174-1185, 2015.

[23] D. Erhan, Y. Bengio, A. Courville, P. A. Manzagol, P. Vincent, and S. Bengio, Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, vol. 11, no. 3, pp. 625-660, 2010.

[24] M. Sato, A real time learning algorithm for recurrent analog neural networks, Biological Cybernetics, vol. 62, pp. 237-241, 1990.

[25] J. Cao, P. Li, and W. Wang, Global synchronization in arrays of delayed neural networks with constant and delayed coupling, Physics Letters A, vol. 353, no. 4, pp. 318-325, 2006.

[26] J. Schmidhuber, Deep learning in neural networks: An overview, Neural Networks, vol. 61, pp. 85-117, 2014.

[27] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion, Journal of Machine Learning Research, vol. 11, pp. 3371-3408, 2010.

[28] X. Wang, Y. Wang, and L. Wang, Improving fuzzy c-means clustering based on feature-weight learning, Pattern Recognition Letters, vol. 25, pp. 1123-1132, 2004.

[29] K. S. Chuang, H. L. Tzeng, S. Chen, J. Wu, and T. J. Chen, Fuzzy c-means clustering with spatial information for image segmentation, Computerized Medical Imaging and Graphics, vol. 30, pp. 9-15, 2006.

[30] J. R. Ohm and P. Ma, Feature-based cluster segmentation of image sequences, in Proceedings of the 9th International Conference on Image Processing, 1997, pp. 178-181.

[31] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, Extracting and composing robust features with denoising autoencoders, in Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 2008, pp. 1096-1103.

[32] T. Celik, Unsupervised change detection in satellite images using principal component analysis and k-means clustering, IEEE Geoscience and Remote Sensing Letters, vol. 6, pp. 772-776, 2009.

[33] R. Colditz, J. A. Velazquez, D. J. R. Gallegos, A. D. V. Lule, M. T. R. Zuniga, P. Maeda, M. I. C. Lopez, and R. Ressl, Potential effects in multi-resolution post-classification change detection, International Journal of Remote Sensing, vol. 33, no. 20, pp. 6426-6445, 2012.

[34] B. Zitova and J. Flusser, Image registration methods: a survey, Image and Vision Computing, vol. 21, pp. 977-1000, 2003.

Manuscript received: 2016-12-20; accepted: 2017-01-20

• Wenping Ma, Zhizhou Li, Puzhao Zhang, and Tianyu Hu are with the Key Laboratory of Intelligent Perception and Image Understanding of the Ministry of Education, International Research Center for Intelligent Perception and Computation, Joint International Research Laboratory of Intelligent Perception and Computation, Xidian University, Xi'an 710071, China. E-mail: wpma@mail.xidian.edu.cn.

• Yue Wu is with the School of Computer Science and Technology, Xidian University, Xi'an 710071, China.

* To whom correspondence should be addressed.
