
    Variational Gridded Graph Convolution Network for Node Classification

IEEE/CAA Journal of Automatica Sinica, October 2021

Xiaobin Hong, Tong Zhang, Zhen Cui, and Jian Yang

Abstract—Existing graph convolution methods usually suffer from high computational burdens, large memory requirements, and intractable batch processing. In this paper, we propose a highly efficient variational gridded graph convolution network (VG-GCN) to encode non-regular graph data, which overcomes all of these aforementioned problems. To capture graph topology structures efficiently, in the proposed framework, we propose a hierarchically-coarsened random walk (hcr-walk) that takes advantage of the classic random walk and node/edge encapsulation. The hcr-walk greatly mitigates the problem of exponentially explosive sampling times that occurs in the classic version, while preserving graph structures well. To efficiently encode the local hcr-walk around one reference node, we project the hcr-walk into an ordered space to form image-like grid data, which favors conventional convolution networks. Instead of direct 2-D convolution filtering, a variational convolution block (VCB) is designed to model the distribution of the randomly-sampled hcr-walk, inspired by well-formulated variational inference. We experimentally validate the efficiency and effectiveness of our proposed VG-GCN, which achieves high computation speed and comparable or even better performance when compared with baseline GCNs.

I. INTRODUCTION

In recent years, convolutional neural networks (CNNs) [1] have achieved great success in a variety of machine learning tasks such as object detection [2], [3], machine translation [4], and speech recognition [5]. Basically, CNNs aim to explore local correlation through neighborhood convolution, and are well suited to encoding Euclidean-structured data such as regularly gridded images and videos. In real-world applications, however, there is a large amount of non-Euclidean-structured data such as social networks [6], citation networks [7], knowledge graphs [8], protein-protein interactions [9], and time series systems [10]–[14], which are usually non-grid data and cannot be routinely encoded with conventional convolution.

As graphs are a natural and frequently-used way to describe non-grid data, researchers have recently attempted to introduce convolution filtering into graph modeling, which is called graph convolution or the graph convolution network (GCN). Generally, these methods fall into two categories: spectral-based approaches [15]–[22] and spatial-based approaches [23]–[26].

Spectral-based approaches employ the recently emerging spectral graph theory to filter signals in the frequency domain of graph topology. Bruna et al. [15] proposed the first spectral convolution neural network (Spectral CNN), which defines the filter as a set of learnable parameters to process graph signals. Chebyshev Spectral CNN (ChebNet) [16] defines the convolutional filter as a Chebyshev polynomial of the diagonal eigenvalue matrix, which avoids the computation of the graph Fourier basis, reducing the computation complexity from O(N^3) to O(Ke) (where N is the number of nodes, e is the number of edges, and K is the order of the Chebyshev polynomial). Kipf and Welling [17] proposed the most commonly used GCN, which is essentially a first-order approximation of ChebNet assuming K = 1 and the maximum eigenvalue λ_max = 2. Wu et al. [22] proposed a disordered graph convolutional neural network (DGCNN) based on the Gaussian mixture model, which extends the CNN by adding a preprocessing layer called the disordered graph convolutional layer (DGCL). The DGCL uses a mixture of Gaussian functions to achieve the mapping between the convolution kernel and nearby nodes in the graph. Besides, some other GCN variants have also been proposed, including Hessian GCN (HesGCN) [19] and Hypergraph p-Laplacian GCN (HpLapGCN) [20]. Specifically, HesGCN and HpLapGCN are first-order variants of GCN, while TGCN [21] is a second-order approximation of GCN.
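To make this first-order approximation concrete, the following is a minimal NumPy sketch of the resulting propagation rule, H' = σ(D̃^{-1/2}(A + I)D̃^{-1/2}HW); the function name and the choice of ReLU as σ are our own illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One first-order GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # degrees of the self-looped graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^-1/2 (degrees >= 1, no div-by-zero)
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # linear transform + ReLU
```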

These spectral-based methods are well supported by the strong theory of graph signals, but they usually suffer from high computational burdens because of the eigenvalue decomposition of the graph topology. To mitigate this problem, fast approximation algorithms [27]–[29] define convolution filtering as a recursive calculation on the graph topology, which actually may belong to the category of spatial convolution.

Spatial-based approaches often use explicit spatial edge-connection relations to aggregate the nodes locally adjacent to one reference node. Local aggregation with mean/sum/max operations on neighbor nodes cannot satisfy the Weisfeiler-Lehman (WL) test [30], where non-isomorphic graph structures require different filtering responses. The critical reason is that graph topology structures are degraded to a certain degree of confusion after aggregation, even though the recent graph attention network (GAT) [31] attempts to adaptively/discriminatively aggregate local neighbor nodes with different weights learnt by an attention mechanism between one central node and its neighbor nodes. Moreover, spatial-based GCNs usually need to optimize over the entire graph during training due to the intractability of batch processing, which results in high memory requirements for large-scale graphs and thus prevents running on plain platforms. This means that, when graph structures change with newly added/deleted nodes or links, the GCN models must be restarted or even re-trained for the new structures. In addition, the time complexity of spatial-based GCNs increases exponentially with the receptive field size (w.r.t. the hop step l), i.e., O(N m^l d^2) in each convolution layer, where m denotes the average degree of nodes and d is the dimension of the input signal.

In this paper, we propose a highly efficient variational gridded graph convolution network (VG-GCN) with high computation efficiency, low memory cost, easy batch processing, and comparable or even better performance when compared with baseline GCNs. To efficiently capture local structures, we introduce a random sampling strategy through random walks, which can well preserve graph topology given sufficiently many randomly sampled walks [32], [33]. As the quantity of walks increases exponentially with the walk step, i.e., O(N m^l), the burden of sampling sufficient walks tends to overwhelm the entire algorithm, especially for larger node degrees m ≥ 2. Instead of the original random walk, specifically, we propose a hierarchically-coarsened random walk (hcr-walk) to reduce sampling times. The hcr-walk strategy efficiently reduces the number of traversal edges through random combinations of connection edges (forming hyper-edges) during walking. As a result, the hcr-walk balances the advantages of the random walk and node aggregation. Under a fixed number of hyper-edges, the hcr-walk reduces to a depth-first traversal whose height may be limited to the radius of the graph so as to cover the global receptive field. In view of the limited height, as well as the small number of hyper-edges, a small number of sampling times can preserve most of the information of the topology structures as well as the node signals.

To efficiently encode the local hcr-walk around each reference node, we project the hcr-walk onto an ordered space to form image-like grid-shaped data, which better favors conventional convolution networks. Thus, 2-D convolution filtering can be performed in the normalized hcr-walk space to encode the correlation of within-walk adjacencies and across adjacent walks. To characterize the uncertainty of the latent feature representation, we design a 2-D variational convolution block (VCB) inspired by recent variational inference, rather than directly adopting 2-D convolution filtering. The benefit of variational convolution is that the probability distribution of randomly sampled walks can be well modeled and the performance can be further boosted. The proposed hcr-walk can be framed in an end-to-end neural network with high running speed even on large-scale graphs.

    Our contributions are three-fold:

1) We propose the hcr-walk to describe the local topology structures of graphs, which efficiently mitigates the problem of exponentially explosive sampling times occurring in the original random walk.

2) We project the hcr-walk onto a grid-shaped space and then introduce 2-D variational convolution to describe the uncertainty of latent features, which makes the convolution operation on graphs more efficient and flexible, just like the standard convolution on images, and well supports batch processing.

3) We experimentally validate the efficiency and effectiveness of our proposed VG-GCN, which has high computation speed and comparable or even better performance when compared with the baseline GCNs.

II. RELATED WORK

In this section, we introduce previous literature related to our work. Generally, it can be divided into four parts: graph convolutional neural networks, graph clustering, random walk, and variational inference.

A. Graph Convolutional Neural Networks

With the rapid development of deep learning, more and more graph convolutional neural network models [34]–[37] have been proposed to deal with the irregular data structure of graphs. Compared with regular convolutional neural networks on structured data, this is a challenge since each node's neighborhood size varies in a graph, while the regular convolution operation requires a fixed local neighborhood. To address this problem, graph convolutional neural networks fall into two categories: spectral-based convolution and spatial-based convolution. The spectral-based filtering method was first proposed by Bruna et al. [15]. It defines the filter operators in the spectral domain, and then implements a series of convolution operations through the Laplacian decomposition of graphs. Because the spectral filter includes the process of matrix eigenvalue decomposition, the computational complexity is generally high, especially for graphs with a large number of nodes. To alleviate the computation burden, Defferrard et al. [16] proposed a local spectral filtering method, which approximates the frequency responses with Chebyshev polynomials. Spatial-based filtering methods simulate the image processing approach of regular convolutional neural networks and employ convolution based on nodes' spatial relations. The general approach of spatial convolution is to construct a regular neighborhood of nodes through sampling (discarding some nodes if the neighbor number is excessive, while repeating some nodes if the neighbor number is insufficient), and then carry out the convolution operation with a regular convolution kernel. According to whether the data to be predicted is known to the model during the training stage, such methods can be divided into transductive learning [38] and inductive learning [39]. Specifically, for inductive learning, the data to be predicted is not accessible during training, and the data for the model may be in an "open world".

B. Graph Clustering

Graph clustering aims to partition a complete graph G into K disjoint subsets V1, ..., VK. The vertices within a cluster are densely connected, while the vertices in different clusters are sparsely connected. Graph clustering plays an important role in community detection [40]–[42], telecommunication networks [43], and email analysis [44]. He et al. [45] proposed a method dubbed contextual correlation preserving multi-view featured graph clustering (CCPMVFGC) for discovering clusters in graphs with multi-view vertex features. To address the problem of identifying clusters in attributed graphs, Hu et al. [46] proposed an inductive clustering algorithm (MICAG) for attributed graphs from an alternative view. To overcome the problem that common clustering models are only applicable to networks whose attribute information is composed of attributes in binary form, Hu et al. [47] proposed a three-layer node-attribute-value hierarchical structure to describe the attribute information in a flexible and interpretable manner.

C. Random Walk

Random walk is an effective method for obtaining graph embeddings. It is especially useful when the graph is only partially visible or is too large to measure in its entirety. Given a graph composed of nodes and the connections between them, walk paths can be obtained by selecting start nodes and then executing random walks with certain sampling strategies. Commonly used random walk strategies include the truncated random walk [48]–[50] and the second-order random walk [51]. Analogous to tasks in natural language processing, all the nodes in a graph constitute a dictionary, a walk path is regarded as a sentence, and a node in the path is regarded as a word. Graph embeddings can then be learned by adopting the continuous bag-of-words (CBOW) model [52] or the Skip-gram model [53]. Specifically, the CBOW model uses the context to predict the central node embedding, while the Skip-gram model predicts the context nodes based on the central node embedding. Among them, the Skip-gram model is the most widely used. Skip-gram aims to maximize the co-occurrence probability among the words that appear within a window w. Among the various graph embedding methods based on random walk, DeepWalk and node2vec are two typical examples.
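For illustration, below is a minimal Python sketch of truncated random walk sampling of the kind used by these embedding methods; the adjacency-dictionary representation and the function name are our own assumptions.

```python
import random

def truncated_random_walk(adj, start, length):
    """Sample one truncated random walk of at most `length` nodes from `start`;
    `adj` maps each node to a list of its neighbors."""
    walk = [start]
    while len(walk) < length:
        neighbors = adj[walk[-1]]
        if not neighbors:                    # dead end: truncate early
            break
        walk.append(random.choice(neighbors))
    return walk

# Build a walk "corpus" for Skip-gram training: several walks per start node.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
corpus = [truncated_random_walk(adj, v, length=5) for v in adj for _ in range(3)]
```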

1) DeepWalk: DeepWalk preserves the high-order proximity between nodes by maximizing the co-occurrence probability of the last k nodes and the next k nodes in the path centered at vertex v_i, i.e., maximizing log Pr(v_{i−k}, ..., v_{i−1}, v_{i+1}, ..., v_{i+k} | Y_i), where 2k + 1 is the walk length. It produces many walks and optimizes the logarithmic probability over all paths.

2) node2vec: node2vec controls the partial random walk on the graph with two hyper-parameters p and q, which can be seen as providing a trade-off between breadth-first (BFS) and depth-first (DFS) graph searching, and hence produces higher-quality and more informative embeddings than DeepWalk.
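A sketch of the second-order biased step at the heart of node2vec is given below, assuming list-based neighbor lookups; the weights 1/p (return), 1 (stay close), and 1/q (move outward) follow the published scheme, while the function name is illustrative.

```python
import random

def node2vec_step(adj, prev, cur, p, q):
    """Choose the next node after moving prev -> cur, with node2vec's bias:
    1/p to return to `prev`, 1 for neighbors of `prev` (BFS-like),
    and 1/q for nodes farther from `prev` (DFS-like)."""
    neighbors = adj[cur]
    weights = []
    for x in neighbors:
        if x == prev:
            weights.append(1.0 / p)      # step back to the previous node
        elif prev in adj[x]:
            weights.append(1.0)          # stays within prev's neighborhood
        else:
            weights.append(1.0 / q)      # explores outward
    return random.choices(neighbors, weights=weights, k=1)[0]
```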

D. Variational Inference

Variational inference is a fast and effective machine learning method for approximating probability densities for large amounts of data. In Bayesian statistics, the inference of unknown quantities can be regarded as the calculation of a posterior probability, which is usually difficult to compute. The usual approach is to make an approximation using a Markov chain Monte Carlo (MCMC) algorithm [54], which is slow for large amounts of data due to the sampling. Different from MCMC, the idea behind variational inference is to first posit a family of densities over the latent variables, and then to find the member of that family that is closest to the target. The degree of closeness is usually measured by the Kullback-Leibler (KL) divergence. Moreover, Kingma and Welling [55] proposed the reparameterization trick to solve the non-differentiability problem in optimization caused by the sampling of the involved latent variables.
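As a minimal NumPy sketch of the trick (function names are illustrative): the latent sample is rewritten as a deterministic function of the distribution parameters plus exogenous noise, so gradients can flow through the mean and log-variance.

```python
import numpy as np

def reparameterize(mu, log_var, rng=None):
    """Draw z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, 1),
    so z is a differentiable function of (mu, log_var)."""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal(np.shape(mu))    # all randomness isolated in eps
    return mu + np.exp(0.5 * log_var) * eps    # sigma = exp(log_var / 2)

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over dimensions."""
    return -0.5 * np.sum(1.0 + log_var - np.square(mu) - np.exp(log_var))
```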

III. VG-GCN

In this section, we introduce our VG-GCN in detail. We first define the notations used in this paper and give an overview of the entire VG-GCN architecture, then introduce the main modules of VG-GCN, including the hcr-walk, gridding, and variational convolution.

A. Notations

B. Overview

Fig. 1. The framework of our proposed VG-GCN. The description can be found in Section III-B. Specifically, G0 denotes the original graph, and G1, G2 are the coarsened graphs with different coarsening ratios.

Fig. 2. Graph coarsening with a coarsening ratio of 0.3.

The overall network framework is shown in Fig. 1, where the input is graph-structured data. To illustrate the convolution process, we take the corresponding local subgraph (i.e., the local receptive field) around one node as an example. In order to aggregate topological information at different levels of nodes, we execute graph coarsening on the input graph according to different coarsening ratios, and then random walk on these hierarchical graphs to capture the local structures; see Section III-C. Through random walks based on hierarchical coarsening, the hcr-walk effectively mitigates the exponentially explosive increase in sampled walks incurred by the original random walk, while sufficient sampling still guarantees coverage of the graph structures. Next, the sampled walks are adaptively gridded into an ordered space through the computation of their correlation with the first principal component of the random walks; see Section III-D. The gridded walks are spanned onto a 2-D plane of T × L, which thus favors conventional convolution. When stacking multi-dimensional signals, the gridded representation of a local subgraph is a 3-D tensor of d × T × L. Thus, the highly efficient and powerful CNNs that run on images can be extended to this case to encode the correlation of within-walk adjacencies and across adjacent walks. To describe the variations of the latent feature representation, we introduce variational inference into the 2-D convolution process, referred to as the variational convolution block, to encode the distribution of the random walks therein. Finally, the output features of the variational convolution are passed through a fully connected layer and a softmax function for node classification.

C. Hierarchically-Coarsened Random Walk

The random sampling strategy is introduced to characterize the topology structure of local receptive fields. There are two critical requirements that need to be satisfied: i) random sampling should well preserve the topological properties of the original graph, and ii) the number of sampling times should be as small as possible for highly efficient computation as well as low memory requirements. The first condition addresses the accuracy of the representation, while the second focuses on the efficiency of learning. Random walk can satisfy the first condition well given sufficient samplings, but the sampling complexity heavily depends on node degrees during traversals of the graph. Given the average node degree m and walk length t, the combinatorial number of walks is m^t, which is exponentially explosive when m is even moderately large (especially for dense graphs), even for a small walk step. For example, supposing m = 10 and t = 8, the number of combinatorial walks can reach 10^8 = 100 000 000 for each starting node. In order to guarantee the accuracy required by the first condition, the practical number of sampling times might be huge even if a small sampling ratio is taken. This causes a high computation burden and requires large storage space.

To address this problem, we extend the random walk to the hierarchically-coarsened random walk by leveraging the powerful topology-preservation ability of the random walk and the high efficiency of random aggregation. The schematic diagram of graph coarsening is shown in Fig. 2. If we use G1 = (V1, E1) to denote the graph before coarsening and p1 to denote the coarsening ratio, the coarsened graph can be represented as G2 = (V2, E2), where |V1| and |V2| represent the numbers of nodes in G1 and G2, respectively, and satisfy |V2| = ⌈p1|V1|⌉, where ⌈·⌉ means rounding up to an integer. We randomly seed G1 with |V2| cluster seeds based on the coarsening ratio p1, and the nodes in G1 converge to each cluster seed according to the adjacency relationships in E1. The connection relations of the coarsened graph G2 are defined by the connections and the number of connections among the clusters. For example, node i and node j in G2 may be composed of m1 and m2 nodes in G1, respectively.

Considering that the number of connections between cluster nodes is not consistent, we process the coarsened graph into a weighted graph (for consistency, the original unweighted graph can be converted into a weighted graph according to node degrees). The weight between node i and node j in G2 is computed from the number of edges connecting their constituent clusters in G1.

The feature of a node in the coarsened graph is related to the nodes belonging to its cluster. In graph learning tasks, the larger the degree of a node, the more important the node (for example, in social networks, large-degree nodes represent popular users and play a more important role in pattern mining). Therefore, the degrees of the nodes in a cluster can be used as the weights for synthesizing its feature, i.e., the feature of a coarse node is the degree-weighted combination of its members' features, where D_v denotes the degree of node v. We construct the hierarchical coarsened graphs by coarsening the input graph according to different coarsening ratios, and then execute random walks on the original graph and the hierarchical graphs. We employ the alias sampling method [56] to sample truncated random walks from the discrete probability distribution in A_i for each hierarchical graph G_i, and concatenate the paths that belong to the same start node. The walks on different hierarchical graphs can aggregate the graph topologies with few sampling times, highly efficient computation, and low memory requirements.
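Since the exact seeding and assignment procedure is not fully specified above, the following is a hedged Python sketch of the coarsening step as we read it: ⌈p·|V|⌉ seeds are drawn at random, the remaining nodes converge to the nearest seed by breadth-first growth over the adjacency, and coarse edge weights count inter-cluster connections. All names are illustrative.

```python
import math
import random
from collections import deque

def coarsen(adj, ratio):
    """Assign every node of `adj` (node -> neighbor list) to one of
    ceil(ratio * |V|) randomly seeded clusters, grown by BFS."""
    nodes = list(adj)
    n_clusters = math.ceil(ratio * len(nodes))
    seeds = random.sample(nodes, n_clusters)
    cluster = {s: c for c, s in enumerate(seeds)}
    frontier = deque(seeds)
    while frontier:                          # nodes converge to the nearest seed
        u = frontier.popleft()
        for v in adj[u]:
            if v not in cluster:
                cluster[v] = cluster[u]
                frontier.append(v)
    for v in nodes:                          # isolated leftovers: random cluster
        cluster.setdefault(v, random.randrange(n_clusters))
    return cluster

def coarse_weights(adj, cluster):
    """Coarse edge weight = number of connections between two clusters
    (each undirected edge is seen once per direction here)."""
    w = {}
    for u in adj:
        for v in adj[u]:
            ci, cj = cluster[u], cluster[v]
            if ci != cj:
                w[(ci, cj)] = w.get((ci, cj), 0) + 1
    return w
```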

D. Gridding

One problem with the classic random walk is the irregularity across different paths caused by random sampling, which makes it rather challenging to exploit the underlying local correlation across adjacent walks. To solve this problem, we perform gridding on the output features of the random walks to project them into an ordered space. Based on this operation, the gridded paths are spanned onto a 2-D plane along two axes, T and L, representing the number of sampled walks and the number of walk steps, respectively. The gridding operation brings one notable benefit: the highly efficient and powerful CNNs that run on images can be extended to the paths to jointly encode the correlation of within-walk adjacencies and across adjacent walks. For multi-dimensional signals, the gridded representation of a local subgraph is a 3-D tensor of d × T × L, which is also suitable for the application of CNNs.

To adaptively capture the correlation among random paths, we conduct gridding from the perspective of distribution and consider each sampled path based on its correlation with the first principal component of the hcr-walk. For the hcr-walk on one node, we first split its representation into a set of path samples {d1, ..., dT} along the sampling-time axis T, where d_i is the vectorized representation of the i-th path sample. For gridding, the clustering center of the path samples is first calculated as their mean. Then, the correlation between the i-th path sample d_i and the cluster center is computed, and the paths of each node are gridded by ordering them according to their correlation values relative to the cluster center.
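A minimal NumPy sketch of this gridding step follows, under our assumption that the correlation is an inner product with the normalized sample mean (standing in for the first principal component); names are illustrative.

```python
import numpy as np

def grid_walks(walks):
    """Order T vectorized walk samples by their correlation with the sample
    mean; `walks` has shape (T, L * d) and the re-ordered array is returned."""
    center = walks.mean(axis=0)                  # clustering center of the paths
    center = center / (np.linalg.norm(center) + 1e-12)
    corr = walks @ center                        # correlation of each path
    order = np.argsort(-corr)                    # most correlated path first
    return walks[order]
```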

E. Variational Convolution

Variational convolution is used to characterize the uncertainty of the latent feature representation, inspired by variational inference. We stack multiple 2-D convolutional layers on the ordered grid-like feature map f(X), where X is the output of the gridding operation and f(·) denotes the convolutional layers. We view the path number T and the walk length L as the height and width of an image, and use the feature dimension d as the input channels of the convolutional layers. Because the dimensions T and L carry the practical structural significance of the graph, instead of a standard square convolution kernel (k × k), we employ an irregular convolution kernel (k1 × k2) to better aggregate the neighborhood structure information sampled by the hcr-walk and the node information at different receptive field sizes. For one start node, the irregular kernel collects depth information along the L dimension and breadth information along the T dimension, which acts as a local aggregation process with an increasing receptive field. Ideally, if L = 2 and the central node's number of first-order neighbors is T, our convolutional layer is equivalent to one layer of GCN.
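A brief PyTorch sketch of such an irregular-kernel convolution on the gridded d × T × L tensor is shown below; the padding choice and channel widths are our assumptions.

```python
import torch
import torch.nn as nn

# Gridded input: a batch of local subgraphs of shape (B, d, T, L), treated
# like images with d channels, height T (walks), and width L (walk steps).
x = torch.randn(32, 16, 15, 5)   # B = 32, d = 16, T = 15, L = 5

# Irregular (non-square) kernel: height 5 across walks, width 3 along a walk;
# padding keeps the T x L spatial size.
conv = nn.Conv2d(in_channels=16, out_channels=32,
                 kernel_size=(5, 3), padding=(2, 1))
h = conv(x)                       # -> shape (32, 32, 15, 5)
```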

For each start node v_i, we obtain the feature representation H_i from the stacked convolutional layers. Then, we adopt the VCB to describe the probability distribution of the node's aggregated feature representation H_i. The structure of the VCB is shown in Fig. 3. According to the theory of variational inference, the marginal likelihood of H_i can be written as log p_θ(H_i) = D_KL(q_φ(Z|H_i) ‖ p_θ(Z|H_i)) + L(θ, φ; H_i).

Fig. 3. Variational convolution block (VCB).

The first term is the KL divergence between the approximate and the true posterior, and the second term L(θ, φ; H_i) is called the variational lower bound; it can be expanded according to the conditional probability formula and the definition of the KL divergence. We can use Monte Carlo sampling [57] to approximate the expectation of the second log-likelihood term in (9), and adopt the reparameterization trick (Z = g(H, ε), ε ~ N(0, 1)) to make the computation of parameter gradients differentiable. Letting the prior of Z be the standard normal distribution, Z ~ N(0, 1), the resulting estimator for this model and datapoint H_i is given in (10),

where j denotes the j-th feature map, (x, y) denotes a local position in the (T, L) plane, k1 and k2 are the height and width of the kernel, and the kernel value at position (p, q) connects to the j-th feature map.
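Putting these pieces together, below is a hedged PyTorch sketch of a VCB-style block: 1 × 1 convolutions produce the mean and log-variance maps (matching the 1 × 1 kernels reported in Section V-C), the latent feature is drawn via the reparameterization trick, and the KL term acts as the variational regularizer. Layer sizes and names are illustrative, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class VCB(nn.Module):
    """Sketch of a variational convolution block over a (B, C, T, L) map."""
    def __init__(self, channels):
        super().__init__()
        self.mu = nn.Conv2d(channels, channels, kernel_size=1)       # mean map
        self.log_var = nn.Conv2d(channels, channels, kernel_size=1)  # log-variance map

    def forward(self, h):
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # Z = g(H, eps)
        # KL(q(Z|H) || N(0, I)), the regularizer from the variational lower bound
        kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
        return z, kl
```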

IV. ALGORITHM AND ANALYSIS

The flow of our VG-GCN algorithm is shown in Algorithm 1. VG-GCN is an end-to-end online learning model. The inputs are the adjacency matrices A and the feature matrices X of the starting nodes, and the outputs are the corresponding predicted labels. The loss function L of the model consists of two parts, the cross-entropy loss L1 and the variational lower bound L2 in (10):

L = L1 + αL2

where α is a hyper-parameter.
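A minimal PyTorch sketch of this combined objective; the value of α and the function name are illustrative assumptions.

```python
import torch.nn.functional as F

def vg_gcn_loss(logits, labels, kl, alpha=0.1):
    """Total loss L = L1 + alpha * L2: supervised cross-entropy plus the
    variational term produced by the VCB (alpha = 0.1 is an assumed value)."""
    l1 = F.cross_entropy(logits, labels)   # classification loss L1
    return l1 + alpha * kl                 # variational regularizer L2
```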

For each starting node, we sample T paths of length L in the hcr-walk, so as to ensure that the neighborhood nodes are covered as much as possible. The complexity of VG-GCN scales with n, the number of nodes to be processed (i.e., the training and testing sets), and with the kernel size and per-layer dimensions, where k indexes the k-th layer, d_k is the output dimension of the k-th layer, and k1 and k2 denote the height and width of the convolution kernels, respectively. In contrast, the complexity of GCN scales with the total number of nodes N, and N ≫ n in common settings (e.g., in the Pubmed dataset, n = 1060 and N = 19 717), which indicates the speed advantage of our proposed VG-GCN.

V. EXPERIMENTS

In this section, we comprehensively evaluate the effectiveness of our method on five widely used public datasets: Cora, Citeseer, Pubmed [58], AMZ PHOTOS [59], and NELL [60]. We first briefly introduce these datasets, then report our experimental results on them and compare the performance with other state-of-the-art methods. Finally, we conduct an ablation study to dissect the proposed model.

A. Datasets

Five public graph-structured datasets are employed to evaluate our proposed method, including three citation network datasets (Cora, Citeseer, and Pubmed), a co-purchase dataset (AMZ PHOTOS), and a knowledge graph dataset (NELL). For fair comparison, the dataset split protocols of the citation networks strictly follow the widely used ones in [60]. The overall information about these five datasets is listed in Table I.

1) Cora: Cora is a citation network of machine learning papers categorized into seven classes: case-based, genetic algorithms, neural networks, probabilistic methods, reinforcement learning, rule learning, and theory. In total, Cora contains 2708 nodes and 5429 edges, where each node is described by a 1433-dimensional vector of 0/1-valued elements. The average degree of nodes in Cora is about four. For the protocol on this dataset, 5.2% of nodes are labeled for training (20 nodes in each class, 140 nodes in total), 500 nodes are used for validation, and 1000 nodes for testing.

TABLE I GRAPH DATASETS INFORMATION

2) Citeseer: Citeseer is a citation network of 3327 nodes and 4732 links, where the nodes are divided into six classes. Each node is described by a 3703-dimensional 0/1-valued vector. For evaluation, 3.6% of nodes are labeled for training (20 nodes in each class, 120 nodes in total), and the numbers of nodes in the validation and test sets are 500 and 1000, respectively, the same as in Cora. Each node in Citeseer is connected to three nodes on average.

3) AMZ PHOTOS: AMZ PHOTOS is a co-purchase dataset. It contains 7487 nodes of 8 classes with 119 043 edges. Each node is described by a 745-dimensional vector, and the average degree of nodes is 32. This dataset is split 200/4800/2487 for train/validation/test.

4) Pubmed: Pubmed contains 19 717 nodes of three classes with 44 338 edges. Each node is described by a term frequency-inverse document frequency (TF-IDF) vector drawn from a dictionary of 500 terms. Following the widely accepted protocol on this dataset, only 0.3% of nodes are used for training (20 nodes in each class, 60 nodes in total), 500 nodes for validation, and 1000 nodes for testing. The average degree of each node is about five.

5) NELL: The NELL dataset is extracted from the never-ending language learning (NELL) knowledge graph [61]. Each relation in NELL links the selected entities (9897 in total) with text descriptions in ClueWeb09 [62], and can be described with a triplet (e_h, r, e_t), where e_h and e_t are the head and tail entity vectors and r denotes the relation between them. By splitting every triplet (e_h, r, e_t) into two edges (e_h, r1) and (e_t, r2), a graph of 65 755 nodes (including relations and entities) and 266 144 edges is obtained, where each node is described by a 61 278-dimensional vector and is connected to approximately four nodes.

B. Baseline Methods

To verify the superiority of our proposed VG-GCN model, various state-of-the-art methods are used for performance comparison. The results of these baseline methods are obtained either from their reported performance in previous literature or by conducting experiments with their released public code. For the baseline methods we implemented, sophisticated fine-grained hyper-parameter tuning was performed to report their performance.

DeepWalk [48] is a generative model for graph embedding, which samples multiple walk paths from a graph via truncated random walks and learns the representation by regarding the paths as sentences and path nodes as tokens, as in natural language processing (NLP). The source code for DeepWalk is publicly available at https://github.com/phanein/deepwalk. Planetoid [60] is inspired by the Skip-gram [53] model from NLP; it embeds a graph through both positive and negative sampling while considering node information and graph structure. The source code of Planetoid is available at https://github.com/kimiyoung/planetoid. Chebyshev [16] designs a fast localized graph convolution by employing localized filters with polynomial parametrization, and adopts a graph coarsening procedure to group together similar vertices. Graph convolutional networks (GCN) [17] update the feature representation of the central node by aggregating information from a gradually expanding receptive field of neighboring nodes. GCN's source code is publicly available at https://github.com/tkipf/gcn. Graph attention networks (GAT) [31] apply the attention mechanism to graph convolution, calculating attention coefficients between a central node and its neighborhood nodes to express the different contributions of neighbor connections. Moreover, GAT has a sparse version, which is also involved as a baseline; the performance of both versions is reported in Table II. GAT's source code is publicly available at https://github.com/PetarV-/GAT. Dual graph convolutional networks (DGCN) [63] execute dual graph convolutions based on the adjacency matrix and the positive pointwise mutual information (PPMI) matrix, respectively, and combine the outputs of the different convolved data transformations. The source code of DGCN is publicly available at https://github.com/ZhuangCY/DGCN. Graph learning-convolutional networks (GLCN) [64] learn a discriminative matrix S to replace the adjacency matrix A for graph convolution, based on the topology between nodes on a high-dimensional manifold. gLGCN [65] adds a local invariance constraint to the loss function such that samples with the same label should have the same data distribution. Hypergraph neural networks (HGNN) [66] design a hypergraph structure in which one edge can connect multiple nodes; robust node representations are then learned by aggregating node information onto the hyper-edges and returning the integrated information to each node.

C. Experiment Setting

The parameters of our VG-GCN model are traversed over certain ranges and finally set to the values that yield the best performance on the validation set. For the basic architecture of the VG-GCN model, there are two convolutional layers in the VCB. The number of coarsening layers is 2, with coarsening ratios of 0.8 and 0.4, respectively. In the hcr-walk process, the number of walks T is set to 15, and the hcr-walk length L is 5. For the convolutional layers, the convolution kernel sizes are set to 5 × 3, while in the VCB the convolution kernel sizes are both set to 1 × 1 to produce the mean and covariance matrices, respectively. During the training process, we run the model for 500 epochs with a learning rate of 0.01 and a dropout rate of 0.5 for tuning the network parameters.
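For reference, these reported settings can be collected into a single configuration sketch (the dictionary layout and key names are ours):

```python
# Hyper-parameters as reported in the experiment setting above.
vg_gcn_config = {
    "coarsening_layers": 2,
    "coarsening_ratios": (0.8, 0.4),  # second layer is half of the first
    "num_walks_T": 15,
    "walk_length_L": 5,
    "conv_kernel": (5, 3),            # cross-walk height x within-walk width
    "vcb_kernel": (1, 1),             # 1x1 convs for mean and covariance
    "epochs": 500,
    "learning_rate": 0.01,
    "dropout": 0.5,
}
```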

TABLE II PERFORMANCE OF GRAPH NODE CLASSIFICATION, COMPARED WITH DEEPWALK, PLANETOID, CHEBYSHEV, GCN, GAT, DGCN, GLCN, GLGCN, AND HGNN METHODS

D. Experiment Results

The experimental results of our proposed VG-GCN on the three citation datasets (Cora, Citeseer, and Pubmed), one co-purchase dataset (AMZ PHOTOS), and the large-scale knowledge graph dataset (NELL) are reported in Table II. These results are compared with various state-of-the-art methods, where accuracy is employed as the metric for quantitative evaluation of semi-supervised node classification. Our VG-GCN obtains the best results on the AMZ PHOTOS, Pubmed, and NELL datasets (with performance gains of 0.05% on AMZ PHOTOS, 1.2% on Pubmed, and 4.2% on NELL), and achieves competitive performance on the Cora and Citeseer datasets.

We calculated the average and variance of the node degrees of the five datasets. We found that the degree variances of the two small datasets (Cora and Citeseer) are small (27.3 and 11.4, respectively), while the other three large-scale datasets (AMZ PHOTOS, Pubmed, and NELL) have larger degree variances (55.2, 1852.3, and 2262.6, respectively). Therefore, we speculate that the reason our VG-GCN does not achieve the best performance on the two small datasets may be that these datasets have fewer random walk patterns and are more likely to fall into over-smoothing during training. This also shows that our hierarchical coarsening and random walk can model complex data patterns.

Besides the competitive performance, it should especially be noted that our model is more advantageous in computation efficiency compared with all other baseline methods. We present the time costs of our VG-GCN for running one epoch on the Cora and Pubmed datasets in Table III, and compare them with GAT, DGCN, and GCN. For Cora, the smallest dataset, GAT takes about 7 s per epoch while its sparse version takes 1 s, and DGCN takes about 0.5 s per epoch. The time consumed by our VG-GCN is 0.05 s, which is much less than those of GAT and DGCN, and almost the same as GCN. However, on the large graph Pubmed (19 717 nodes), our VG-GCN takes about 0.04 s per epoch and is the fastest, compared with the sparse GAT at 2.5 s per epoch, DGCN at 3.2 s, and GCN at 0.6 s.

TABLE III PER-EPOCH TIME COST ON CORA AND PUBMED, COMPARED WITH GAT, DGCN, AND GCN

The convergence of our VG-GCN on the NELL dataset is shown in Fig. 4, compared with that of DGCN, which also achieves considerable performance. Both VG-GCN and DGCN are trained for 1000 epochs on the NELL dataset. According to Fig. 4, our VG-GCN converges faster and obtains better performance. In terms of running time and accuracy, our VG-GCN takes about 6 minutes with an accuracy of 79.1%. In contrast, DGCN takes about 2.8 hours, which is 29.2 times that of VG-GCN, while obtaining an accuracy of 74.9%, which is 4.2% lower.

Fig. 4. Convergence on the NELL dataset, compared with DGCN.

E. Ablation Study and Parameter Sensitivity

As the proposed VG-GCN achieves promising performance with high computational efficiency, it is interesting to dissect the model and evaluate the contribution of each part. Moreover, it is also meaningful to evaluate the sensitivity of the critical parameters in the VG-GCN model to clarify how their variation influences performance. Therefore, we conduct several additional experiments:

1) Comparison between the hcr-walk and random walk: To verify the superiority of the proposed hcr-walk over the random walk, we simply replace the hcr-walk unit with a random walk on the original graph, and test the performance on the four public datasets. The results are shown in Table IV.

TABLE IV COMPARISON BETWEEN HCR-WALK AND RANDOM-WALK

2) Evaluation of VCB: To evaluate the effectiveness of the proposed VCB, we compare its performance with the classic convolutional layer and with MLP-VCB on the four datasets; the results are shown in Table V. Specifically, MLP-VCB means that the mean and covariance matrices in the VCB are computed by a multilayer perceptron instead of convolutional layers.

TABLE V COMPARISON OF CONVOLUTIONAL LAYERS, MLP-VCB, AND VCB

Based on the results of the multiple experiments above, we can make the following observations:

1) Hcr-walk outperforms the classic random walk and promotes node classification performance. On all four evaluated datasets, the classification accuracies of our hcr-walk are higher than those of the random walk. This performance gain verifies the effectiveness of our hcr-walk, which constructs coarsened graphs while avoiding the explosive growth of walk paths with increasing walk steps.

2) VCB is effective in promoting the node classification task. Compared with both convolutional layers and MLP-VCB, VCB obtains better node classification performance, with an average performance gain of about 1% on the four public datasets. This improvement verifies the superiority of VCB, which encodes the comprehensive correlation of within-walk adjacencies and across adjacent walks.

The kernel size k1 × k2, the hcr-walk number T and length L, and the coarsening ratio p are hyper-parameters in VG-GCN. To analyze the sensitivity and value selection of each hyper-parameter, we designed the following comparative experiments.

1) Kernel Size: A non-square 2-D convolution kernel is employed in the hcr-walk convolution. The classification performance with different kernel sizes on the three citation datasets is plotted in Fig. 5. From the experimental results in Fig. 5, the irregular convolution kernel of size (5, 3) performs best on the three datasets. Specifically, (5, 3) means the cross-path filtering height is 5 and the within-path filtering width is 3.

Fig. 5. Classification performance with different kernel sizes on the three citation datasets.

2) Hcr-Walk Number and Length: The parameter sensitivity experiments on the hcr-walk number T and hcr-walk length L per start node are shown in Figs. 6 and 7. For the Cora dataset, sampling 15 hcr-walks per node with a length of 5 is most appropriate. (T = 20, L = 5) and (T = 12, L = 6) are the best parameter combinations for the Citeseer and Pubmed datasets, respectively.

3) Coarsening Ratio: The hierarchically-coarsened random walk is employed to leverage the powerful topology-preservation ability of the random walk and the high efficiency of random aggregation. We experimentally compare the effects of one coarsening layer, two coarsening layers, and different coarsening ratios p on the classification accuracy for the Cora dataset in Table VI. For two coarsening layers, the coarsening ratio of the second layer is half that of the first. Two coarsening layers with ratios of (0.8, 0.4) achieve the best performance.

TABLE VI SENSITIVITY OF THE NUMBER OF COARSENING LAYERS AND THE COARSENING RATIO ON MODEL PERFORMANCE ON THE CORA DATASET. FOR TWO COARSENING LAYERS, THE COARSENING RATIO OF THE SECOND LAYER IS SET TO HALF THAT OF THE FIRST LAYER

Fig. 6. Results of the VG-GCN model with different hcr-walk numbers on the Cora, Citeseer, and Pubmed datasets.

Fig. 7. Results of the VG-GCN model with different hcr-walk lengths on the Cora, Citeseer, and Pubmed datasets.

4) Convolution Layers: The number of variational convolution layers is traversed over [2, 8] on the Cora dataset, and the experimental results are shown in Fig. 8. The results show that a deeper network does not necessarily achieve higher classification accuracy and may even cause performance degradation, likely due to over-smoothing.

Fig. 8. Results with different numbers of variational convolution layers on the Cora dataset.

To show that our VG-GCN captures local graph invariance better than other methods (e.g., GCN), we produce a standard set of "evolving" t-SNE [67] plots of the test-set embeddings on the Cora dataset, given in Fig. 9. The raw features come from the original node feature matrix, and the GCN and VG-GCN embeddings come from the output of the last layer of each model. Intuitively, our VG-GCN most clearly separates the different categories of nodes. From the perspective of quantitative analysis, the Silhouette score [68] of our VG-GCN is the largest (the larger the Silhouette score, the better the clustering effect).

VI. CONCLUSION

In this paper, we proposed the VG-GCN framework for node classification. We developed the random walk into the hcr-walk to effectively avoid the possible exponential explosion of walk paths as the path length increases, and to cover the whole neighborhood through coarsened graphs. In order to maintain the permutation invariance of the paths generated for the same node in each epoch, we sorted them after projection and constructed a grid-like feature map for 2-D convolution. Moreover, we designed a 2-D convolution variational inference block to learn the probability distribution characteristics of the latent variables in the two-dimensional space. As a result, VG-GCN learns the aggregation pattern of a node's topological neighborhood in an inductive way, which can easily be extended to inference on unknown nodes. Meanwhile, VG-GCN can process large-scale graphs quickly with the tensor graph structure and consumes less memory. Experiments on a variety of public datasets verified the effectiveness of our method for solving the node classification problem.

Fig. 9. t-SNE embeddings of the nodes in the test set of the Cora citation network from the raw features (left), the GCN model (middle), and our VG-GCN model (right). Our VG-GCN shows the best clustering effect among the three plots, and the Silhouette scores support this.
