
    Deep hybrid: Multi-graph neural network collaboration for hyperspectral image classification

    Defence Technology, 2023, Issue 5

    Ding Yao, Zhang Zhi-li *, Zhao Xiao-feng, Cai Wei, He Fang, Cai Yao-ming, Wei-Wei Cai

    a Xi'an Research Institute of High Technology, Xi'an, 710025, China

    b The School of Computer Science, China University of Geosciences, Wuhan, 430074, China

    c The School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, 214122, China

    Keywords: Graph neural network; Hyperspectral image classification; Deep hybrid network

    ABSTRACT With a limited number of labeled samples, hyperspectral image (HSI) classification is a difficult problem in current research. The graph neural network (GNN) has emerged as an approach to semi-supervised classification, and the application of GNN to hyperspectral images has attracted much attention. However, in the existing GNN-based methods a single graph neural network or graph filter is mainly used to extract HSI features, which does not take full advantage of various graph neural networks (graph filters). Moreover, the traditional GNNs have the problem of oversmoothing. To alleviate these shortcomings, we introduce a deep hybrid multi-graph neural network (DHMG), where two different graph filters, i.e., the spectral filter and the autoregressive moving average (ARMA) filter, are utilized in two branches. The former can well extract the spectral features of the nodes, and the latter has a good suppression effect on graph noise. The network realizes information interaction between the two branches and takes good advantage of different graph filters. In addition, to address the problem of oversmoothing, a dense network is proposed, where the local graph features are preserved. The dense structure satisfies the needs of different classification targets presenting different features. Finally, we introduce a GraphSAGE-based network to refine the graph features produced by the deep hybrid network. Extensive experiments on three public HSI datasets strongly demonstrate that the DHMG dramatically outperforms the state-of-the-art models.

    1.Introduction

    Hyperspectral images (HSIs) are captured by hyperspectral imaging spectrometers. Each pixel in an HSI contains hundreds of pieces of reflection information at different frequencies, which makes it suitable for many practical applications such as military target detection, mineral exploration, and agricultural production [1-4]. With the progress of remote sensing and the unique properties of HSI data, hyperspectral imaging is playing an important role in remote sensing, and HSI classification (assigning a class label to every pixel) has increasingly become one of the important topics of HSI research [5,6]. However, complex noise effects and spectral variability [7], high dimensionality [8], the deficiency of labeled training samples [9], and high spectral mixing between materials [10] create difficulties in extracting discriminative information from HSIs for classification.

    To deal with these issues, many machine learning and handcrafted-feature classification algorithms have been designed for HSI classification, such as the gray-level co-occurrence matrix [11] and the support vector machine (SVM) [12]. However, these methods only extract features from the perspective of spectral information, while ignoring the spatial features in the image [13-17]. Incorporating spatial information into the classification process reduces noise and ensures the spatial continuity of the classification results; examples include manifold learning methods [18], Markov Random Fields [19], patch-based feature extraction [20], and morphological profiles [21]. However, handcrafted features are quite empirical and rely heavily on professional expertise, which may yield poor classification results compared with deep learning methods [22].

    Motivated by the successful application of deep learning in many fields, researchers have taken great interest in the use of deep learning for HSI classification. Chen et al. [23] adopted an unsupervised deep Stacked Autoencoder (SAE) to learn spatial and spectral features separately, which was the first time that the deep learning concept had been introduced into an HSI classification task. Subsequently, deep Convolutional Neural Networks (CNNs) saw rapid development. CNNs of different dimensionality, such as 1DCNN [24], 2DCNN [25], and 3DCNN [26], have been applied to HSI classification. Inspired by their successful application in other fields, including Natural Language Processing (NLP) and Computer Vision (CV), the application of the Recurrent Neural Network (RNN) [27] and the Deep Belief Network (DBN) [28] to HSI classification has also emerged. In Ref. [29], the Fully Convolutional Network (FCN) was utilized for HSI classification. Recently, Generative Adversarial Networks (GANs) [30] have also been widely investigated for HSI classification studies. Different from the traditional methods based on handcrafted features, the CNN-based methods can extract high-level features from the HSI automatically, demonstrating a better classification effect for different classification targets. However, the deep CNN methods also face limitations. In particular, fixed-shape convolution kernels may fail to perceive an object's different geometric edges, which leads to poor adaptation to targets with diverse shapes or sizes. In addition, the CNN-based methods are generally applied to Euclidean data and cannot deal with non-Euclidean data [31].

    To break the bottleneck of using the CNN-based methods in HSI classification, the graph convolutional network (GCN) approach emerged. Recently, GCN has become a topic of research interest in HSI classification. Different from the CNN-based methods, GCN can operate convolution on graph-structured data, such as social network data, and is able to learn the feature information between graph nodes [32]. Qin et al. [34] introduced GCN to learn spectral-spatial information of the HSI, which showed the potential of GCN to alleviate the limitation of labeled samples in HSI. Then, Hong et al. [33] investigated different fusion strategies for combining GCN and CNN, and the miniGCN was first proposed to reduce computational complexity. To explore different aggregation mechanisms of neighboring nodes in the graph, some spatial methods have been investigated. For example, the Graph Attention Network (GAT) [35] and GraphSAGE [3] were adopted for HSI classification. This led to the rapid development of the spectral filter and other spectral methods, and more graph filters, such as the autoregressive moving average (ARMA) filter [36], were designed to extract graph features. To obtain more spatial and global features of HSI, different structures of graph networks have been proposed. Subsequently, multi-scale and context-aware learning was introduced [37]. In the past few years, remarkable successes have been achieved in HSI classification by this approach. However, there are still many defects in GCNs.

    (1) In the available methods, a single graph neural network or graph filter is mainly used to extract HSI features, which does not take full advantage of various graph neural networks (graph filters). In practice, a single graph filter is prone to feature degradation. Specifically, there is no information interaction between different graph filters in a graph neural network, which is not conducive to the diversification of information acquisition during feature extraction.

    (2) The traditional GCNs face the problem of oversmoothing, which limits the application of graph convolutional neural networks.

    To address these problems, a novel deep hybrid network is proposed, where multi-graph neural network mechanisms, such as GCN, ARMA, and GraphSAGE, are adopted to refine graph features. In particular, we introduce a deep dual-graph interaction neural network, where two different graph filters, i.e., the spectral filter and the ARMA filter, are utilized in two separate branches. The spectral filter can well extract the spectral features of the graph, and the ARMA filter can suppress graph noise. This network realizes information interaction between different graph filters, which combines the feature extraction advantages of different filters and prevents feature degradation during feature extraction. In addition, to deal with oversmoothing, a dense network is further proposed, where the local graph features are preserved. In summary, the main contributions of the paper are threefold.

    (1) A novel graph filter hybrid mechanism is presented,which realizes information interaction between different graph filters to ensure feature extraction diversification and prevent feature degradation.

    (2) The ARMA filter is introduced into the network for HSI classification,and a novel graph DenseNet structure is proposed to realize the ARMA filter.

    (3) Multi-graph neural network collaboration mechanisms are explored to extract HSI features.

    2.Related work and preliminaries

    This part focuses on relevant typical works,definitions and notations on HSI classification.

    2.1. Problem definition

    One objective of this work is to achieve state-of-the-art classification accuracy with a small number of samples by learning the relationship between labeled nodes and unlabeled nodes.

    2.2. Graph convolutional network (GCN) and spectral filter

    Given a graph G with m nodes, an adjacency matrix A ∈ {0,1}^{m×m} and a feature matrix X ∈ R^{m×d} can be used, where each node carries a d-dimensional feature vector. The mathematical expression of the aggregation operation in GCN is

    $$X_{i+1} = \sigma\left(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2} X_i W_i\right)$$

    where X_i represents the i-th layer output feature matrix and X_0 is set as X_0 = X. The node features transform from X_i ∈ R^{m×c_i} to X_{i+1} ∈ R^{m×c_{i+1}}. Â = A + I, which means that the adjacency matrix is added with a self-loop. D̂ is the degree matrix of Â, which performs normalization on Â. W_i ∈ R^{c_i×c_{i+1}} denotes a learnable weight matrix that realizes the linear transformation of the feature matrix, and σ is a nonlinear activation function.
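The propagation rule above can be sketched in a few lines of NumPy (a minimal illustration with our own names and a toy graph, not the authors' implementation):

```python
import numpy as np

def gcn_layer(A, X, W, sigma=lambda z: np.maximum(z, 0.0)):
    """One GCN aggregation step: X_{i+1} = sigma(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops: A_hat = A + I
    d = A_hat.sum(axis=1)                    # node degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D^-1/2 normalization
    return sigma(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)

# toy graph: 3 nodes on a path, 2 input features, 4 output features
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.random.randn(3, 2)
W = np.random.randn(2, 4)
X_next = gcn_layer(A, X, W)
```

The ReLU default stands in for the unspecified nonlinearity σ.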

    Given a graph filter g, the graph convolution of X can be expressed as

    $$X *_G g = U\left(\left(U^{\top} g\right) \odot \left(U^{\top} X\right)\right)$$

    where U is the eigenvector matrix of the graph Laplacian and ⊙ is the element-wise product. The graph filter is important to spectral-based graph convolution algorithms. In other words, the performance of spectral-based graph convolution algorithms is determined by the design of the graph filter.

    2.3. GraphSAGE convolutional network (SAGE)

    Given the graph G, GraphSAGE adopts K aggregation functions, i.e., AGGREGATE_k, to aggregate node neighbors and extract their information. The propagation rule is expressed as

    $$h_{N(v)}^{k} = \mathrm{AGGREGATE}_k\left(\left\{h_u^{k-1}, \forall u \in N(v)\right\}\right), \qquad h_v^{k} = \sigma\left(W^{k} \cdot \left[h_v^{k-1} \,\|\, h_{N(v)}^{k}\right]\right)$$
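A mean aggregator, one common choice for AGGREGATE_k, can be sketched as follows (an illustrative toy with hypothetical names; the section does not fix which aggregator is used):

```python
import numpy as np

def sage_layer(neighbors, H, W, sigma=np.tanh):
    """One GraphSAGE step with a mean aggregator:
    h_v' = sigma(W @ [h_v || mean({h_u : u in N(v)})])."""
    out = []
    for v, nbrs in enumerate(neighbors):
        h_agg = H[nbrs].mean(axis=0)   # AGGREGATE_k: mean of neighbor features
        out.append(sigma(W @ np.concatenate([H[v], h_agg])))
    return np.vstack(out)

neighbors = [[1], [0, 2], [1]]   # adjacency lists of a 3-node path graph
H = np.random.randn(3, 2)        # 2-dim node features
W = np.random.randn(5, 4)        # maps the concatenated 4-dim input to 5 dims
out = sage_layer(neighbors, H, W)
```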

    3.Proposed method

    In this section, a brief overview of the proposed DHMG is presented (Section 3.1), and the deep dense convolutional network and the ARMA convolution implementation are illustrated (Section 3.2 and Section 3.3). Then the deep hybrid network manipulation is explained (Section 3.4). Subsequently, HSI preprocessing and graph construction are introduced (Section 3.5). Finally, HSI classification using DHMG (Section 3.6) is detailed.

    3.1. Overview of the proposed method

    Fig. 1 gives an overview of DHMG. Firstly, the original HSI is partitioned into adaptive regions called superpixels. With this strategy, the number of graph nodes that need to be processed in the subsequent graph network can be reduced significantly, thus improving computational efficiency. The spectral features of each superpixel are calculated using a mean operation, and then the graph is constructed. Subsequently, a deep hybrid network (using the spectral filter and the ARMA filter) and a GraphSAGE-based network are proposed to acquire graph features through information propagation and interaction. Finally, the graph features are interpreted using the cross-entropy loss, and the label of each pixel can be obtained.

    3.2. ARMA filter and ARMA convolutional network (ARMA)

    Different from the spectral filter of GCN, which decomposes the signal into base waves, ARMA predicts the future signal value using past values and past errors. However, the calculation of the matrix transformation in Eq. (4) is complex, which is not conducive to practical use. More importantly, the ARMA filter cannot be implemented sparsely in a GNN. To overcome this problem, we propose to approximate the ARMA filter in a recursive way.

    Proof. Referring to Ref. [39], the first-order recursive ARMA_1 filter can be presented as follows:

    where λ_m and μ_m are the m-th eigenvalues of L and M, respectively, and μ_m = (λ_max − λ_min)/2 − λ_m. Then, Eq. (6) can be rewritten as

    By summing K ARMA_1 filters, we can restore the analytical form of the ARMA_K filter of Eq. (4), and the final filtering operation is defined as

    It has been proven that ARMA can be implemented iteratively, and the graph convolutional operation of ARMA_1 is calculated by

    $$\bar{X}^{(t+1)} = \sigma\left(\tilde{L}\,\bar{X}^{(t)} V + X W\right)$$

    where W and V are the learnable weight matrices, X presents the node feature matrix, L̃ denotes the modified Laplacian, and σ is the ReLU activation function.
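The recursive update can be sketched as follows (a minimal sketch with a tanh nonlinearity, a stand-in modified Laplacian, and our own names; the fixed iteration count T is an assumption):

```python
import numpy as np

def arma1_conv(L_mod, X, V, W, T=8, sigma=np.tanh):
    """Recursive ARMA_1 update: X^(t+1) = sigma(L_mod @ X^(t) @ V + X @ W)."""
    Xt = sigma(X @ W)                  # initialize the recursion from the input features
    for _ in range(T):
        Xt = sigma(L_mod @ Xt @ V + X @ W)
    return Xt

A = np.array([[0., 1.], [1., 0.]])
L_mod = np.eye(2) - A                  # stand-in modified Laplacian for the sketch
X = np.random.randn(2, 3)
W = np.random.randn(3, 4)              # input-to-state transform
V = np.random.randn(4, 4)              # state-to-state transform
out = arma1_conv(L_mod, X, V, W)
```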

    3.3. ARMA convolution implementation with dense graph convolutional network

    The implementation of the ARMA structure is complex, and thus many calculation parameters are added. In addition, different classification targets have varying requirements for features. For example, some classification targets need shallow features, while others need deep features. Inspired by ResNet and DenseNet in CNN, we connect all local features to subsequent convolutional layers (Fig. 1), and the concatenate operation is adopted to fuse information. This structure, like DenseNet, is widely utilized in computer vision fields, namely,

    $$X_{l+1} = \sigma\left(\hat{A}\left[X_0 \,\|\, X_1 \,\|\, \cdots \,\|\, X_l\right] W_l\right)$$

    From Eq. (12), it can be concluded that the dense structure realizes structural recursion, so as to realize the ARMA filter. Meanwhile, it should be noted that the dense connection can retain the local features of the convolution layers and avoid the phenomenon of feature disappearance as the number of convolution layers increases, which is why DenseNet is often used in deep convolutional networks.
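The dense connection pattern, where each layer convolves the concatenation of all previous outputs, can be sketched as follows (an illustrative toy under our own naming, with an identity matrix standing in for the normalized adjacency):

```python
import numpy as np

def dense_gcn_forward(A_norm, X, weights, sigma=np.tanh):
    """DenseNet-style stack: layer l convolves [X_0 || X_1 || ... || X_l],
    so local features from early layers reach the output directly."""
    feats = [X]
    for W in weights:
        inp = np.concatenate(feats, axis=1)   # concatenate all previous outputs
        feats.append(sigma(A_norm @ inp @ W))
    return feats[-1]

A_norm = np.eye(3)                     # stand-in normalized adjacency
X = np.random.randn(3, 2)
# input widths grow by 3 per layer because outputs are concatenated
weights = [np.random.randn(2, 3), np.random.randn(5, 3), np.random.randn(8, 3)]
out = dense_gcn_forward(A_norm, X, weights)
```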

    3.4. Deep hybrid network feature interaction

    The available graph convolution networks for hyperspectral classification adopt a single graph kernel, which does not take full advantage of different convolution kernels in hyperspectral image feature extraction. In practice, we find that the adaptability of graph filters to various types of HSI classification differs. Therefore, a graph convolution network has different classification effects for different classification targets.

    Inspired by hybridization in molecular biology, we develop a deep hybrid graph convolution framework, where information interaction occurs between all convolutional layers in the two branches. Fig. 1 demonstrates the deep hybrid network of the two branches, and different branches adopt different convolution kernels. Concretely, branch 1 uses the GCN spectral filter, and branch 2 uses the ARMA filter. Here, the outputs of the l-th graph convolutional layer in the two branches are denoted as X_l^{(1)} and X_l^{(2)}. The concatenate operation is utilized for information interaction between the two convolutions. For example, the output of the l-th layer feature interaction in branch 1 can be expressed as

    $$X_{l+1}^{(1)} = \sigma\left(\hat{A}\left[X_l^{(1)} \,\|\, X_l^{(2)}\right] W_l^{(1)}\right)$$

    As shown in Fig. 1, the graph features in branch 1 and branch 2 are integrated with the interaction operation, and the l-layer output H of the deep hybrid network can be expressed as

    $$H = \left[X_L^{(1)} \,\|\, X_L^{(2)}\right]$$

    Through the information interaction of different branches, the feature degradation of a single graph convolution kernel during feature extraction is avoided. Meanwhile, the graph DenseNet structure within each branch keeps the local information of each convolutional layer, thus avoiding oversmoothing. In a word, the deep hybrid network realizes the diversity of feature sources, makes comprehensive use of the advantages of each convolution kernel, and improves the adaptability of the model to different classification targets.
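The per-layer interaction can be sketched as follows (a toy sketch; the lambdas stand in for the spectral-filter and ARMA-filter branches, and all names are ours):

```python
import numpy as np

def hybrid_step(x1, x2, filt1, filt2):
    """One deep-hybrid layer: each branch convolves the concatenation of both
    branches' previous outputs, so the two graph filters exchange information."""
    shared = np.concatenate([x1, x2], axis=1)   # [X_l^(1) || X_l^(2)]
    return filt1(shared), filt2(shared)

A = np.eye(3)                                    # trivial 3-node graph for shape checking
W1, W2 = np.random.randn(4, 2), np.random.randn(4, 2)
filt1 = lambda X: np.tanh(A @ X @ W1)            # stand-in for the spectral branch
filt2 = lambda X: np.tanh(A @ X @ W2)            # stand-in for the ARMA branch
x1, x2 = np.random.randn(3, 2), np.random.randn(3, 2)
y1, y2 = hybrid_step(x1, x2, filt1, filt2)
```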

    3.5. HSI preprocessing and graph construction

    3.5.1.HSI preprocessing

    For an HSI I_β = {x_1, x_2, …, x_m} with m pixels and β spectral bands, the superpixel segmentation is defined as

    $$I_\beta = \bigcup_{i=1}^{N} S_i, \qquad S_i \cap S_j = \varnothing \ (i \neq j)$$

    where S_i is a superpixel with n_i pixels, and N denotes the total number of superpixels.

    The overview of the HSI preprocessing is shown in Fig. 3. Firstly, an unsupervised principal component analysis (PCA) [40] method is adopted to reduce the dimension of the HSI for computational efficiency. We keep the first principal components to obtain the reduced image I_r with m pixels and b bands, where b ≪ β and r denotes reduction. Then the reduced image is partitioned into superpixels. In our proposed method, simple linear iterative clustering (SLIC) [41] is utilized to assign the pixels to superpixels, namely,

    Fig.3.HSI preprocessing and graph construction diagram.

    where S_i denotes a superpixel in I_r and N is the number of segmented superpixels (graph nodes).

    Finally, a mean filter is applied to each superpixel to produce a mean feature vector. Concretely, the average of the spectral features of the pixels contained in the superpixel is calculated as the superpixel feature. The feature vector h_i of superpixel S_i is defined as

    $$h_i = \frac{1}{n_i}\sum_{j=1}^{n_i} x_j^{(i)}$$

    where n_i and x_j^{(i)} are the number of pixels and the spectral feature of the j-th pixel contained in superpixel S_i, respectively.
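The mean-feature computation per superpixel can be sketched as follows (an illustrative toy; `superpixel_features` and the sample arrays are our own names):

```python
import numpy as np

def superpixel_features(pixels, labels, n_superpixels):
    """h_i = mean of the spectral vectors of the pixels assigned to superpixel i."""
    H = np.zeros((n_superpixels, pixels.shape[1]))
    for i in range(n_superpixels):
        H[i] = pixels[labels == i].mean(axis=0)   # average spectra within superpixel i
    return H

pixels = np.array([[1., 3.], [3., 5.], [10., 10.]])   # 3 pixels, 2 "bands"
labels = np.array([0, 0, 1])                          # first two pixels form superpixel 0
H = superpixel_features(pixels, labels, 2)
# H[0] == [2., 4.], H[1] == [10., 10.]
```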

    3.5.2.Graph construction

    In this paper, superpixels in the HSI are treated as graph nodes. Given an HSI superpixel graph G = (V, E, A), the adjacency matrix A ∈ R^{N×N} can be expressed as

    $$A_{ij} = \begin{cases} e^{-\gamma\left\|h_i - h_j\right\|_2^2}, & \text{if } h_j \in N_t(h_i) \\ 0, & \text{otherwise} \end{cases}$$

    where h_i and h_j denote the spectral features of nodes i and j (calculated by Eq. (14)), ‖h_i − h_j‖_2 is the Euclidean distance between the two nodes, γ is an empirical value set to 0.2, and N_t(h_i) represents the t-hop neighbor set of h_i.
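The Gaussian-similarity adjacency can be sketched as follows (a minimal sketch assuming the weight form exp(−γ‖h_i − h_j‖²), which is our reading of the text; the neighbor lists are hypothetical):

```python
import numpy as np

def build_adjacency(H, t_neighbors, gamma=0.2):
    """A_ij = exp(-gamma * ||h_i - h_j||^2) for j in the neighbor set of i, else 0."""
    N = H.shape[0]
    A = np.zeros((N, N))
    for i in range(N):
        for j in t_neighbors[i]:
            A[i, j] = np.exp(-gamma * np.sum((H[i] - H[j]) ** 2))
    return A

H = np.array([[0., 0.], [1., 0.], [5., 5.]])   # 3 superpixel feature vectors
t_neighbors = [[1], [0], []]                   # toy neighbor sets; node 2 is isolated
A = build_adjacency(H, t_neighbors)
```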

    3.6. HSI classification using DHMG

    In Section 3.5, we introduced the mechanism of HSI preprocessing and graph construction. Then a deep hybrid network is proposed to extract graph features. Finally, we adopt a SAGE algorithm to refine the features produced by the deep hybrid network.

    Referring to Eq. (3), the output feature of node v in DHMG can be formed as

    $$h_v' = \sigma\left(W \cdot \mathrm{mean}\left(\left\{h_u, \forall u \in N(v)\right\}\right) + B\, h_v\right)$$

    where h_v is the feature vector of node v, {h_u, ∀u ∈ N(v)} is the feature matrix of the neighbor nodes of node v, and W and B are trainable parameters. In our proposed network, the learnable parameters are penalized by the loss function

    $$\mathcal{L} = -\sum_{z \in y_G}\sum_{f=1}^{C} Y_{zf}\,\ln \hat{Y}_{zf}$$

    where Y_{zf} denotes the label matrix, y_G is the labeled example set, C denotes the number of land-cover categories, and Ŷ is the output of DHMG. Meanwhile, Adam [42] (gradient descent) is utilized to update the weights of DHMG. The implementation details of DHMG are demonstrated in Algorithm 1.
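The cross-entropy over labeled examples can be sketched as follows (an illustrative toy; the one-hot matrices and `masked_cross_entropy` are our own names):

```python
import numpy as np

def masked_cross_entropy(Y, Y_hat, labeled_idx):
    """L = -sum over labeled nodes z and classes f of Y_zf * ln(Y_hat_zf)."""
    # the small epsilon guards against log(0) for confident wrong predictions
    return -np.sum(Y[labeled_idx] * np.log(Y_hat[labeled_idx] + 1e-12))

Y = np.array([[1., 0.], [0., 1.], [0., 0.]])        # one-hot labels; node 2 unlabeled
Y_hat = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
loss = masked_cross_entropy(Y, Y_hat, [0, 1])       # only labeled nodes contribute
```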

    4.Experiments

    To evaluate the performance of DHMG, extensive experimental comparisons and analyses are presented in this paper. We briefly introduce the three HSI datasets used in our experiments, compare the proposed DHMG with six state-of-the-art methods, analyze the impacts of hyperparameters on DHMG performance, and discuss the computational complexity of the proposed method compared to previous GCNs. Finally, the ablative effects of information interaction and the dense structure are analyzed.

    Algorithm 1.DHMG for HSI Classification.

    4.1. Dataset description

    Three benchmark HSI datasets are employed to evaluate the proposed approach, which are described as follows:

    (1) Pavia University: The first dataset, University of Pavia (UP), obtained by the Reflective Optics System Imaging Spectrometer (ROSIS), is often used for HSI classification. The original UP dataset is composed of 610 × 340 pixels with 115 spectral bands, which are usually reduced to 103 bands with wavelengths in the range of 430-860 nm. The land covers in the HSI are allocated into 9 land-cover categories. The false-color image, ground truth image, and reference map are shown in Fig. 4.

    Fig.4.(a) False color image,(b) ground truth image,and reference map of Pavia University.

    (2) Salinas: The second dataset, Salinas, was collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over Salinas Valley, California. The spatial resolution of the data is 3.7 m, and the size is 512 × 217. There are 224 spectral bands in the original data, and 204 bands remain after the water vapor absorption bands are removed. The data contains 16 crop categories. The false-color image, ground truth image, and reference map are shown in Fig. 5.

    Fig.5.(a) False color image,(b) ground truth image,and reference map of Salinas.

    (3) Houston 2013: The third dataset was obtained by the ITRES CASI-1500 sensor and provided for the 2013 IEEE GRSS data fusion contest. The data size is 349 × 1905, including 144 bands ranging from 364 to 1046 nm. The ground cover is labeled into 15 categories, and the false-color composite picture, corresponding ground truth map, and reference map are demonstrated in Fig. 6.

    Fig.6.(a) False color image,(b) ground truth image,and reference map of Houston 2013.

    Fig.7.Classification maps on Pavia University dataset: (a) Ground-truth map;(b) MSDN (90.90%);(c) CRNN (85.46%);(d) S2GCN (89.74%);(e) MSAGE-CAL (96.14%);(f) MBCUT(89.43%);(g) JSDF (90.82%);(h) DHMG (97.81%).

    To compare the proposed approach with other methods, we randomly select 30 labeled pixels from each class to train the model, and the remaining pixels are used for testing, which is common practice in many papers.

    4.2. Experimental setup

    4.2.1.Parameter selection

    In the proposed network, five hyperparameters are selected in our DHMG, i.e., the superpixel number N, the number of epochs T, the learning rate lr, the number of deep hybrid layers L, and the number of SAGE layers S, as listed in Table 1. The impacts of these parameters on classification are discussed in detail in Section 4.4.

    Table 1 Hyperparameter settings for different datasets.

    4.2.2.Evaluation index

    To evaluate the performance of the comparison algorithms, four widely accepted indexes are adopted: Overall Accuracy (OA), Per-class Accuracy (PA), Average Accuracy (AA), and the Kappa coefficient (Kappa).
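All four indexes can be computed from a confusion matrix; a minimal sketch (our own function and toy labels, not the authors' evaluation code):

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """OA, per-class accuracy, AA, and the Kappa coefficient from a confusion matrix."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    oa = np.trace(cm) / cm.sum()                 # overall accuracy
    pa = np.diag(cm) / cm.sum(axis=1)            # per-class accuracy
    aa = pa.mean()                               # average accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / cm.sum() ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, pa, aa, kappa

y_true = [0, 0, 1, 1]
y_pred = [0, 0, 1, 0]
oa, pa, aa, kappa = classification_metrics(y_true, y_pred, 2)
# oa == 0.75, pa == [1.0, 0.5], aa == 0.75, kappa == 0.5
```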

    4.2.3.Compared methods

    To validate the performance of our proposed DHMG, two CNN classifiers, i.e., the convolutional RNN (CRNN, 2018) [43] and the multiscale dense network (MSDN, 2019) [44], two GNN methods, i.e., the multi-scale graph sample and aggregate network with context-aware network (MSAGE-CAL, 2021) [37] and the spectral-spatial GCN (S2GCN, 2021) [34], and two machine learning classifiers, i.e., multiband compact texture units (MBCTU, 2019) [21] and the joint collaborative SVM (JSDF, 2017) [45], are utilized as comparison models. We list the architecture of DHMG in Table 2.

    Table 2 Architectural details of proposed network.

    To conduct the experiments, up-to-date software and hardware resources are employed. As hardware, an Intel i9-10900K processor at 3.70 GHz with DDR4 RAM is used. Each experiment is run ten times, and the average and the standard deviation are provided for the measurement indexes mentioned above.

    4.3. Classification results comparison

    4.3.1.Quantitative results

    In this part, the proposed DHMG is compared with the above six algorithms to validate its performance. The quantitative results of all experiments (the average values over ten runs) on the three HSI datasets are reported in Tables 3-5. As shown in the results, our proposed DHMG outperforms the comparative classifiers.

    Concretely speaking, in Table 3, the CNN-based methods, i.e., MSDN and CRNN, are less advantageous compared with the other methods due to a lack of labeled training data. Meanwhile, we can observe that the GNN-based methods, i.e., S2GCN, MSAGE-CAL, and DHMG, show their superiority in hyperspectral classification. However, S2GCN only achieves an OA of 89.74, which is significantly lower than those of MSAGE-CAL and DHMG, namely 96.14 and 97.81. This is mainly because MSAGE-CAL and DHMG improve the GNN network and can learn multi-scale information of the graph. It is also notable that DHMG has superior classification accuracy in most classes. The results demonstrate the adaptability of DHMG to the classification of small targets.

    Table 3 Quantitative experimental performance on Pavia University; the best performance is in bold.

    The comparison results on Salinas are presented in Table 4. Different from the results on Pavia University, JSDF (a traditional machine learning method) obtains an outstanding result, whose OA, AA, and Kappa are 94.67, 97.69, and 94.06, respectively. This phenomenon illustrates that traditional machine learning methods can also achieve results comparable to deep learning with proper design. For the Salinas dataset, the proposed DHMG improves OA, AA, and Kappa by 1.46, 1.34, and 1.01, respectively. DHMG also achieves good classification accuracy for the classes with similar spectral features, i.e., C8 and C15. This is mainly due to the adoption of the ARMA filter and the information interaction in each convolutional layer, with which DHMG can suppress graph noise and learn diverse graph network information.

    Table 4 Quantitative experimental performance on Salinas; the best performance is in bold.

    As for Houston 2013, the classification results are demonstrated in Table 5. Compared with the other two datasets, Houston 2013 contains more cover classes and more spectral bands; classifying such a large HSI is a challenge for classifiers. From the results, all investigated methods obtain lower classification accuracy as measured by OA, AA, and Kappa. This is because Houston demands more computational power and requires identifying more details. As demonstrated in Table 5, the proposed DHMG achieves the best classification results, improving OA, AA, and Kappa by 1.18, 0.60, and 0.40, respectively. The results validate the function of the deep dense structure in hyperspectral classification, which can preserve the local convolutional features in the convolution process.

    Table 5 Quantitative experimental performance on Houston 2013; the best performance is in bold.

    4.3.2.Visual results

    To intuitively observe the classification results of each method, we compare the visual maps produced by the investigated classifiers; the results are shown in Figs. 7-10. The different colors in the classification maps represent different kinds of land cover. According to the figures, we can observe that the visualization results of our DHMG on the three HSI datasets are the most similar to the ground truth. Specifically, the map on Pavia University contains less misclassification compared with the classification maps of the other state-of-the-art classifiers. In terms of Salinas, as shown in Fig. 8, the investigated methods are able to classify the land covers well, except for C8 and C15, as they are heavily mixed. However, the proposed DHMG still achieves an outstanding result (Fig. 8(h)), which verifies its good ability to classify targets with similar spectra. As for Houston 2013 (Fig. 9), we reach a conclusion consistent with the previous analysis. Generally speaking, the results verify the effectiveness and superiority of DHMG for HSI classification.

    Fig.8.Classification maps on Salinas dataset:(a)Ground-truth map;(b)MSDN(91.64%);(c)CRNN(87.64%);(d)S2GCN(88.39%);(e)MSAGE-CAL(96.87%);(f)MBCUT(92.14%);(g)JSDF (90.82%);(h) DHMG (98.33%).

    Fig.9.Classification maps on Houston dataset:(a)Ground-truth map;(b)MSDN(88.49%);(c)CRNN(82.10%);(d)S2 GCN(89.31%);(e)MSAGE-CAL(92.13%);(f)MBCUT(87.07%);(g)JSDF (90.51%);(h) DHMG (93.31%).

    4.4. Hyperparameter impact

    Table 1 shows the hyperparameter selection in our method, i.e., the superpixel number N, the number of epochs T, the learning rate lr, and the number of deep hybrid layers L. In the experiments, a grid search strategy is employed to search for the optimal settings of the parameters. We divide the four hyperparameters into two groups, and the index OA is utilized to measure the performance.

    4.4.1.Impact of L and N

    In this part, we investigate the sensitivity of L and N in detail. We set L in the range of 1-7, and N is varied from 5000 to 20,000 with an interval of 2500 for the Pavia University dataset, 4000 to 16,000 with an interval of 4000 for the Salinas dataset, and 5000 to 20,000 with an interval of 2500 for the Houston dataset; the other parameters are the same as in Table 1. The results are shown in Fig. 10. From the results, we can see that the accuracy gain from increasing the number of network layers becomes limited once the number of layers reaches 2. At the same time, increasing the number of network layers will also increase the amount of calculation. Therefore, we set the number of network layers to 3 in our network. For N, we observe that OA improves as N increases. This is mainly because a bigger N leads to smaller superpixels, and more local spatial features of the HSI can be preserved. However, a bigger N means a larger graph, which not only requires stronger feature extraction abilities from the algorithm, but also increases the computational burden. It should be emphasized that, due to experimental conditions, we can only set the maximum N to 22,000, so N for Houston 2013 is 22,000.

    Fig.10.Sensitivity of L and N: (a) Pavia University dataset;(b) Salinas dataset;(c) Houston 2013 dataset.

    4.4.2.Impact of lr and T

    In the experiment, lr is set to 0.1, 0.01, 0.001, and 0.0001, respectively, and T is varied from 200 to 1000 with an interval of 200. The impacts of lr and T on the three datasets are presented in Fig. 11. From the results, we can observe that a large learning rate can speed up model parameter training, but cannot reach the optimal solution. A small learning rate can make the model achieve good classification accuracy, but the training time is long. Considering both efficiency and training time, we set the learning rate to 0.001 in the proposed framework. In addition, an appropriate T is critical to achieving satisfactory performance. To prevent insufficient model training and over-fitting, we conclude from the results that the model classification accuracy is best when T = 600.

    Fig.11.Sensitivity of lr and T: (a) Pavia University dataset.(b) Salinas dataset.(c) Houston 2013 dataset.

    4.5. Impact of the number of labeled samples

    In this section, the effects of different numbers of training samples on the above algorithms over the three HSI datasets are analyzed. The number of training pixels is varied from 5 to 30 with an interval of 5. Except for the different training data numbers, the other parameter settings are the same as in Section 4.2, and Fig. 12 presents the OA performance of each classifier on the three HSI datasets. From Fig. 12, we observe that the performances of the compared classifiers generally improve as the number of training pixels increases, because more training labels enable the model to learn more prior knowledge, and the classification ability is improved. However, the performance of the CNN-based methods is unstable, especially when the number of labeled data is small. The results fully show that the CNN-based methods are not suitable for classification with a small amount of labeled data. Different from the CNN-based methods, the GNN-based methods can learn the relationships between labeled nodes and unlabeled nodes, which greatly reduces the need for labeled data in classification. The experimental results also support that the performance of the GNN-based methods is generally outstanding. It is also worth mentioning that the OA performance of the proposed DHMG outperforms the comparative methods, especially in its adaptability to a limited number of labeled samples, because DHMG can preserve the local convolutional information and performs information interaction between the two graph filters.

    Fig.12.Effects of different number of labeled samples on the three HSI datasets: (a) Pavia University;(b) Salinas;(c) Houston 2013.

    4.6. Running time

    To evaluate the classification efficiency, we investigate the running times of different GNN-based methods, including S2GCN [34], S2GAT [35], MDGCN [46], and the proposed DHMG, on the three datasets and list them in Table 6. The results are reported on a server with a 3.70 GHz Intel i9-10900K CPU and a GeForce GTX 1080Ti 11G GPU. From the results, we can conclude that our proposed model is more efficient than the comparison methods. This is mainly because the utilization of segmentation operations and the ARMA filter can effectively reduce the computational cost.

    Table 6 Running time (in seconds). Best results are in bold.

    Table 7 Ablation study results on Pavia University dataset. Best results are marked in bold.

    Table 8 Ablation study results on Salinas dataset.

    Table 9 Ablation study results on Houston 2013 dataset.

    4.7. Ablation study

    As illustrated above, information interaction and the dense structure play a significant role in improving the hyperspectral classification performance of the proposed algorithm. To shed light on their respective contributions, the reduced model obtained by removing the information interaction is denoted "DHMG-x1", and "DHMG-x2" denotes the reduced model without the dense structure. The experimental setup is the same as described in Section 4.2. As shown in Tables 7-9, DHMG achieves the best performance, which validates the importance of information interaction and the dense structure. Meanwhile, we find that the results of DHMG-x2 are significantly lower than those of DHMG-x1 and DHMG. This is because DHMG-x2 suffers from the oversmoothing problem (the best effect is obtained with 2 layers, whereas this experimental network has 4 layers). The results of DHMG-x2 fully show that the dense structure can alleviate the oversmoothing problem, increase the depth of the network, and extract high-level graph features. More importantly, the results of DHMG are also improved compared with DHMG-x1, owing to the information interaction design: DHMG makes full use of the advantages of different graph filters and diversifies the sources of information.
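The two ablation variants can be expressed as configuration switches on a hypothetical model constructor. The names below are illustrative and not taken from any released code; they merely encode which component each reduced model removes.

```python
from dataclasses import dataclass

@dataclass
class DHMGConfig:
    """Hypothetical switches for the two components studied in the ablation."""
    num_layers: int = 4                   # depth used in this experiment
    information_interaction: bool = True  # cross-branch exchange between the two filters
    dense_structure: bool = True          # dense connections preserving local features

full    = DHMGConfig()                                    # complete DHMG
dhmg_x1 = DHMGConfig(information_interaction=False)       # no cross-branch interaction
dhmg_x2 = DHMGConfig(dense_structure=False)               # plain stack: prone to oversmoothing
```

Training the three configurations under an otherwise identical setup isolates the contribution of each component, as reported in Tables 7-9.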

    5.Conclusions

    In this paper, we presented a novel deep hybrid multi-graph neural network (DHMG) for feature extraction of HSI. We designed a novel ARMA filter, implemented in a recursive way, to suppress graph noise. By segmenting the original HSI into many superpixels (nodes) based on similar reflectance properties, the spatial dimension was reduced and the subsequent calculations were simplified. To take full advantage of different graph filters, refine the deep features, and address the problem of oversmoothing, a deep hybrid network (using the spectral filter and the ARMA filter) was proposed. The network retains local information from the convolution layers and suppresses noise, which improves classification accuracy. Subsequently, a GraphSAGE-based mechanism was adopted to refine the graph features. In this way, we explored multi-graph neural network collaboration for hyperspectral classification. Experimental results on the three public HSI datasets verified the superiority of the proposed method over the state-of-the-art methods.

    Recently, the application of semi-supervised graph convolutional networks to HSI classification has achieved good results. However, network training still requires labeled data, so the problem of labeled-sample deficiency is not fully solved. In future work, we will focus on unsupervised models for HSI clustering, such as contrastive learning and reinforcement learning.

    Declaration of competing interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
