
    Graph Transformer for Communities Detection in Social Networks

Computers, Materials & Continua, 2022, Issue 3

G. Naga Chandrika, Khalid Alnowibet, K. Sandeep Kautish, E. Sreenivasa Reddy, Adel F. Alrasheedi and Ali Wagdy Mohamed

1 Department of Computer Science and Engineering, ANU College of Engineering and Technology, Guntur, 522510, India

2 Statistics and Operations Research Department, College of Science, King Saud University, Riyadh, 11451, Kingdom of Saudi Arabia

3 LBEF Campus, Kathmandu, 44600, Nepal

4 Department of Computer Science and Engineering, ANU, Guntur, 522510, India

5 Operations Research Department, Faculty of Graduate Studies for Statistical Research, Cairo University, Giza, 12613, Egypt

6 Wireless Intelligent Networks Center (WINC), School of Engineering and Applied Sciences, Nile University, Giza, 12588, Egypt

Abstract: Graphs are used in various disciplines such as telecommunication, biological networks, and social networks. In large-scale networks, it is challenging to detect communities by learning the distinct properties of the graph. As deep learning has made contributions in a variety of domains, we try to use deep learning techniques to mine knowledge from large-scale graph networks. In this paper, we aim to provide a strategy for detecting communities using deep autoencoders and to obtain generic neural attention to graphs. The advantages of neural attention are widely seen in NLP and computer vision, and it has low computational complexity for large-scale graphs. The contributions of the paper are summarized as follows. Firstly, a transformer is utilized to downsample the first-order proximities of the graph into a latent space, which can preserve the structural properties and eventually assist in detecting the communities. Secondly, the fine-tuning task is conducted by tuning variant hyperparameters cautiously, which is applied to multiple social networks (Facebook and Twitch). Furthermore, the objective function (cross-entropy) is tuned by L0 regularization. Lastly, the reconstructed model forms communities that present the relationship between the groups. The proposed robust model provides good generalization and is applicable to obtaining not only the community structures in social networks but also node classification. The proposed graph-transformer shows advanced performance on social networks with average NMIs of 0.67±0.04, 0.198±0.02, 0.228±0.02, and 0.68±0.03 on the Wikipedia crocodiles, Github Developers, Twitch England, and Facebook Page-Page networks, respectively.

    Keywords: Social networks; graph transformer; community detection; graph classification

    1 Introduction

The concept of networks is widely used in various disciplines, such as social networks, protein-protein interactions, knowledge graphs, recommendation systems, etc. Social network analysis has been studied extensively with the development of big data techniques, where communities are categorized into groups based on their relationships. In biological networks, interconnectivity among protein molecules can result in similar protein-protein interactions, which may depict similar functionality. In general, a community is a collection of closely related entities in terms of the similarity among individuals. With the increase of social media utility, social network mining has become one of the crucial topics in both industry and academia. As the network topology grows, the complexity of information mining increases. Therefore, it is challenging to detect and segregate communities by analyzing individual clusters [1]. Moreover, it is also a sophisticated task to understand the topological properties of a cluster in a network as well as the information that it carries. Community detection in graphs (networks) refers to collecting a set of closely related nodes based on either the spatial location of nodes or their topological characteristics. Hence, understanding the network behavior in detail is the key to mining information for detecting appropriate communities.

A variety of works study how to detect communities in large-scale networks, such as DeepWalk [2], skip-gram with negative sub-sampling [3], and matrix factorization methods [4,5]. However, with the advance of deep learning, encoder-decoder structures can be stacked into autoencoders that preserve the proximities, which achieves strong performance. This paradigm also provides solutions for image reconstruction in computer vision and for language translation in NLP. Hence, it is important for graph neural networks (GNN): a graph autoencoder consists of an encoder and a decoder. The graph is represented by mapping it through the encoder into a latent space. The decoder then reconstructs the latent representations to generate a new graph with varying embedding structures. Various researchers have focused on this direction. Some of them utilize transformers to improve the embedding quality of the model, considering their equivalent relationship with the encoder. The embedding feature vector is created by tuning the parameters rigorously. In this paper, we aim to provide a robust solution by cautiously considering every constraint in detail. The graph transformer is implemented in Section 4.

    2 Related Work

Deep Neural Networks for Graph Representations (DNGR) [6] utilizes a stacked autoencoder [7] to find patterns in the community by encoding a graph into a Positive Point-wise Mutual Information (PPMI) matrix. The procedure is initiated with stacked denoising autoencoders on large connected networks. Subsequently, Structural Deep Network Embedding (SDNE) [8] employs stacked autoencoders to preserve the node proximities. The first-order and the second-order proximities are preserved together by providing two objective functions for the encoder and decoder separately. The objective function for the encoder preserves the first-order node proximity:

$$\mathcal{L}_{enc} = \sum_{u=1}^{n} \left\| \left( \hat{x}_u - x_u \right) \odot C_u \right\|_2^2,$$

where $x_u = A_u$ is the $u$-th row of the adjacency matrix, $\hat{x}_u$ is its reconstruction, and the penalty vector $C_u$ is defined element-wise as

$$C_{u,v} = \begin{cases} 1, & \text{if } A_{u,v} = 0, \\ \beta > 1, & \text{if } A_{u,v} = 1. \end{cases}$$
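For illustration, the following is a minimal NumPy sketch of such a weighted reconstruction penalty, assuming the SDNE-style formulation above; the function name, the toy graph, and the choice of β are placeholders rather than the original SDNE implementation.

```python
import numpy as np

def weighted_reconstruction_loss(A, A_hat, beta=5.0):
    """Weighted reconstruction loss in the spirit of SDNE.

    A      : (n, n) adjacency matrix; row A[u] plays the role of x_u.
    A_hat  : (n, n) reconstruction produced by the decoder.
    beta   : penalty (> 1) on non-zero entries, so observed edges are
             reconstructed more faithfully than absent ones.
    """
    C = np.ones_like(A)
    C[A != 0] = beta                        # C[u, v] = beta where an edge exists
    return np.sum(((A_hat - A) * C) ** 2)   # sum_u ||(x_hat_u - x_u) ⊙ c_u||^2

# Toy usage: a 3-node path graph and an imperfect reconstruction.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
A_hat = np.clip(A + 0.1 * np.random.rand(3, 3), 0.0, 1.0)
print(weighted_reconstruction_loss(A, A_hat))
```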

However, DNGR and SDNE only preserve the topological information and fail to extract information regarding the nodes' attributes. To solve this problem, the graph autoencoder in [9] imports a graph convolutional network to leverage the information-capturing ability of nodes:

$$Z = \mathrm{GCN}(X, A),$$

where $Z$ represents the graph embedding space and $X$ is the node feature matrix. Then the decoder tries to extract the information on the relationship of nodes from $Z$ by reconstructing an adjacency matrix, i.e.,

$$\hat{A} = \sigma\left( Z Z^{\top} \right),$$

where $\sigma(\cdot)$ is the logistic sigmoid function.
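As a rough illustration of this encode-decode step, the sketch below applies an inner-product decoder to a latent embedding Z; the random embedding stands in for the actual GCN encoder output and is not part of the cited implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def inner_product_decoder(Z):
    """Reconstruct edge probabilities: A_hat[u, v] = sigmoid(z_u . z_v)."""
    return sigmoid(Z @ Z.T)

n, d = 5, 16                 # 5 nodes embedded in a 16-dimensional latent space
Z = np.random.randn(n, d)    # stand-in for the encoder output GCN(X, A)
A_hat = inner_product_decoder(Z)
print(A_hat.shape)           # (5, 5) matrix of reconstructed edge probabilities
```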

Note that the reconstruction of the adjacency matrix may cause overfitting of the model. For this reason, researchers make efforts to detect communities through either understanding the structural properties of the nodes or extracting information from the underlying relationships among nodes.

The above-mentioned research introduces the autoencoder and encoder-decoder architectures to learn representations of a graph structure. Similarly, in language processing, especially for sequence transduction, a Long Short-Term Memory (LSTM) architecture has been developed for machine translation [10]. Moreover, neural machine translation (NMT) attracts attention from researchers since it can greatly improve translation accuracy [11]. A variety of local and global attention paradigms are introduced to investigate the attention layers in NMT [12]. It is demonstrated that attention in NMT is a deterministic factor for performance improvement. In addition, the transformer is proposed for NMT, which has a similar encoder-decoder architecture and relies on self-attention [13].

Besides neural sequence transduction tasks, transformers have found numerous other applications, such as pre-trained machine translation [14], AMR-to-text generation [15], and document classification [16]. Furthermore, transformers provide advanced performance in large-scale image recognition [17] and object detection in 3D videos [18]. They are even widely utilized in the domain of graph neural networks. Hu et al. [19] are motivated by the transformers and propose a heterogeneous graph transformer architecture for extracting complex network variances from the node- and edge-dependent parameters. An HGSampling algorithm is proposed to train mini-batch samples on the large-scale academic dataset named Open Academic Graph (OAG), on which the heterogeneous transformer is able to improve the baseline performance by 9% to 19% in the paper-field classification task.

Some research focuses on designing models which provide hardware acceleration to speed up the training of large-scale networks. Auten et al. [20] provide a unique proposition to improve the performance of graph neural networks with Central Processing Unit (CPU) clusters instead of Graphical Processing Units (GPUs). The authors consider some standard benchmark models, and the proposed architecture for computing the factorization can greatly improve the performance of graph traversal. Jin et al. [21] study a graph neural network named Pro-GNN, which learns the community structures underlying a network by defending against adversarial attacks. The model is able to tackle the problem of perturbations in large-scale networks. It presents high performance on some of the standard datasets, such as Cora, Citeseer, and Pubmed, even when the perturbation rate is high. Ma et al. [22] investigate a graph neural network that can learn representations dynamically, i.e., DyGNN. They address the problem of static graphs and propose a dynamic GNN model that performs well on both link prediction and node classification. Compared with the benchmark models, the DyGNN model shows better performance on link prediction with the UCI and Epinions datasets. Moreover, when the training ratio varies from 60% to 100% on the Epinions dataset, the model outperforms the individual models. El-Kenawy et al. [23] propose a modified binary Grey Wolf Optimizer (GWO) algorithm for selecting the optimal subset of features. It utilizes the Sandia frequency shift (SFS) technique, where the diffusion process is based on the Gaussian distribution method. In this way, the values can be converted to binary by a sigmoid. Eid et al. [24] propose a new feature selection approach based on the Sine Cosine algorithm which obtains unassociated characteristics and optimum features. In 2021, El-Kenawy et al. [25] propose a method for disease classification based on the Advanced Squirrel Search Optimization algorithm. They employ a Convolutional Neural Network model with image augmentation for feature selection. However, most of the state-of-the-art models focus on specific domains. This means they cannot represent heterogeneous graph information and are suitable only for static graphs with deep neural networks.

    3 Motivation

This work is inspired by transformers, which are applicable to various domains. The contributions of this paper are summarized as follows.

    (1) Firstly, the transformer is applied to downsample the first-order proximities of the graph into a latent space, which can preserve the structural properties and eventually assist in detecting the communities.

(2) The fine-tuning task is conducted by tuning various hyperparameters cautiously, which makes the model widely competent on multiple social networks, e.g., Facebook and Twitch. In addition, the objective function, i.e., cross-entropy, is tuned by L0 regularization.

    4 Methodology

In this section, we introduce the implementation of the methodology and the process involved in this paper. The process is illustrated in Fig. 1 and includes (a) definitions of the basic notations and related terms; (b) implementation of the graph-transformer for both graph clustering and classification tasks; (c) a detailed discussion of the insights of transformers, covering the self-attention mechanism and residual connectivity and their relation to GNNs for detecting communities using the first-order proximity.

Figure 1: The overall diagram of the proposed model

    4.1 Notations and Definitions

    Here, the required definitions and notations in the paper are described as follows.

Graph: A graph is a collection of nodes and their relative connectivity. G = (V, E) is used to denote a graph, where the pair (V, E) is a collection of nodes and edges. V = {v1, v2, ..., vn} represents the set of nodes, while E = {e1, e2, ..., ek} is the set of edges.

First-Order Proximity: The first-order proximity determines the relationship between two specific nodes in the given graph G. Specifically, if an edge exists between the node pair (vi, vj), the first-order proximity is equal to w; otherwise, it is set to 0, i.e., null. Note that w depends on the connectivity of nodes in the given graph. If the edge of the graph is weighted, w denotes the edge weight; otherwise, it is regarded as 1.

Adjacency Matrix: A square matrix is constructed according to the first-order proximity of nodes in the given graph and is represented as A. The value of the first-order proximity is placed by checking individual node pairs. In this way, a complete set of node-pair samples is covered.
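A small sketch of how such an adjacency matrix can be assembled from an edge list under the above definitions (unweighted edges default to w = 1); the helper name and the toy edge list are illustrative only.

```python
import numpy as np

def build_adjacency(num_nodes, edges, weighted=False):
    """Build a square first-order-proximity matrix A from an edge list.

    edges: iterable of (u, v) pairs, or (u, v, w) triples when weighted=True.
    Absent node pairs keep the value 0 (no first-order proximity).
    """
    A = np.zeros((num_nodes, num_nodes))
    for e in edges:
        u, v = e[0], e[1]
        w = e[2] if weighted else 1.0      # unweighted graphs use w = 1
        A[u, v] = w
        A[v, u] = w                        # undirected graph: symmetric matrix
    return A

# Toy usage: a 4-node graph with three undirected, unweighted edges.
print(build_adjacency(4, [(0, 1), (1, 2), (2, 3)]))
```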

    4.2 The Graph-Transformer

In this sub-section, the transformer and its internal working principle are explained first. The graph structures of the first-order proximity are subsequently learned. The transformers are guided by self-attention and have an encoder-decoder structure. The encoder part consists of two attention blocks: one is a multi-head intra-attention network, and the other is a position-wise fully-connected feed-forward network. These two blocks are sequentially connected in multiple units. Each layer has a definite set of residual connections and successive layer normalization to counteract the covariate shift observed in recurrent neural networks [26,27]. Fig. 2 demonstrates the internal working of the transformer.

$$\mathrm{FFN}(S) = \mathrm{ReLU}\left( f(S) \right) = \max\left( 0, \; S\,W + b \right) \qquad (7)$$

where S is an input to the feed-forward neural network layer, as mentioned in Fig. 2.

    Figure 2: The internal working model diagram of the transformer

Eq. (7) represents the linear transformation of the input, i.e., densely connected neurons. ReLU is an activation function that pushes the feed to the next layer, i.e., ReLU(S) ← max(0, S). f is a tunable feed-forward neural network with a weight matrix W and bias b, i.e., f(S) ← S·W + b. The FFN is the same at different positions, while the parametric weights vary from layer to layer. In this way, a weighted combination of the entire neighborhood is obtained, which is equivalent to summarizing the information from the different connected inputs, as shown in Fig. 3. The densely connected networks are beneficial for computing new feature representations across the input space. The information is successively iterated N times, and the weights are successively updated to achieve the minimal loss. As the residual connections can improve the gradient flow over the network without degradation [13], the positional information is carried. In addition, the self-attention layers introduce the similarity of different information and thus can carry the first-order proximities. The provided attention is permutation invariant, meaning that even if the positional order is changed, the required information can be extracted. The gating interaction is provided when the information is passed to the subsequent layers. Note that the residual connections in the architecture carry the information about position, which draws attention to the required regions, i.e., the regions of interest. In this case, the positional embeddings can be obtained from the adjacency matrix, which carries structural proximities and leads to self-attention through iterations. The self-attention presents the relative similarity between two data points.
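To make the above description concrete, the following PyTorch sketch assembles one encoder unit with multi-head self-attention, a position-wise feed-forward network, residual connections, and layer normalization; it is a minimal, hypothetical illustration with placeholder dimensions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One encoder unit: self-attention + FFN, each wrapped with a residual
    connection and layer normalization, as sketched in Fig. 2."""
    def __init__(self, d_model=128, num_heads=2, d_ff=256, dropout=0.5):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads,
                                          dropout=dropout, batch_first=True)
        self.ffn = nn.Sequential(           # position-wise feed-forward network
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Linear(d_ff, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, S):
        attn_out, _ = self.attn(S, S, S)               # self-attention over the rows of S
        S = self.norm1(S + self.drop(attn_out))        # residual connection + layer norm
        S = self.norm2(S + self.drop(self.ffn(S)))     # residual connection + layer norm
        return S

# Toy usage: one graph whose 10 node encodings are 128-dimensional vectors.
block = EncoderBlock()
print(block(torch.randn(1, 10, 128)).shape)            # torch.Size([1, 10, 128])
```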

Figure 3: The information flow of the proposed model. (a) Scaled dot-product attention (b) Multi-head attention

    4.3 Fine-Tuning of the Graph-Transformer

    In this sub-section, the parameter settings are introduced for the graph transformer, as well as the corresponding tuning process.

(1) The individual attention layers and their layer partitions are illustrated in Fig. 3, where h in Fig. 3b denotes the number of attention heads utilized and is set to 2.

(2) To obtain the structure, the adjacency matrix is the input of the graph-transformer and the positional encoding, as shown in Figs. 2 and 3. It is constructed from the query, key, and value. Note that the embeddings here are generic, and the whole first-order proximity is derived from the latent space.

(3) The number of hidden layers for each attention head is set to 2. The dimension of the transformer model is decreased to 128, so the shape of the adjacency matrix is reduced from n×n to n×128, where n represents the number of nodes in the given graph G.

(4) The number of attention heads is set to 2. For appropriate regularization, the graph-transformer dropout [kk] and layer normalization layers are added, where the drop rate is set to 50% for effective generalization during the testing phase.

(5) The objective function for evaluating the loss is categorical cross-entropy, and the Adam optimizer is used with an initial learning rate of 4.5×10-5. Note that the learning rate is multiplied by 0.9 after a certain number of iterations (≈10 epochs). Since the model can reach convergence, no further increase in the learning rate is required.

    The above-mentioned parameters are tuned internally in the network, and the objective function is regularized with a cautious optimization.
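The following is a minimal sketch of the training configuration described above (model dimension 128, 2 attention heads, 50% dropout, categorical cross-entropy, Adam with an initial learning rate of 4.5×10-5 decayed by 0.9 roughly every 10 epochs); the stand-in model and the synthetic data are placeholders for the actual graph-transformer and datasets.

```python
import torch
import torch.nn as nn

# Stand-in for the graph-transformer; only the optimization settings matter here.
model = nn.Sequential(
    nn.Linear(128, 128), nn.ReLU(), nn.Dropout(0.5), nn.Linear(128, 4)
)
criterion = nn.CrossEntropyLoss()            # categorical cross-entropy objective
optimizer = torch.optim.Adam(model.parameters(), lr=4.5e-5)
# Decay the learning rate by a factor of 0.9 roughly every 10 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.9)

for epoch in range(30):
    x = torch.randn(64, 128)                 # placeholder node encodings
    y = torch.randint(0, 4, (64,))           # placeholder community labels
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()
```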

Objective Function Optimization: It is known that A is sparse. If the sparse matrix is reconstructed without appropriate regularization, it can mislead the reconstruction of the matrix and result in a large number of zeros in the matrix. To solve the problem, we introduce an appropriate penalty on the neural network. Generally, Lasso (L1) or Ridge (L2) regularization can be utilized directly in neural networks, as they have differentiable gradients. Here, L0 is selected, which is beneficial for achieving convergence quickly. It also avoids the issue that the differentiable regularization techniques incur shrinkage of the sampled parameters.

$$\mathcal{R}(\theta) = \mathcal{L}(\theta) + \Lambda \, \| \theta \|_{0}, \qquad \| \theta \|_{0} = \sum_{j=1}^{|\theta|} \mathbb{I}\left[ \theta_j \neq 0 \right],$$

where

θ is the parametric factor (the model parameters), Λ is a weighting agent for the regularization, and L(·) is the objective (loss) function for the task. In this way, based on L0 regularization, the objective function is optimized, which can obtain a rigorous outcome with high generalization [28].
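The sketch below shows the penalized objective in its literal form; note that the hard L0 count is not differentiable, and the cited approach [28] replaces it with a differentiable relaxation, which is omitted here for brevity. The function name and the tolerance are illustrative assumptions.

```python
import torch

def l0_penalized_loss(task_loss, parameters, lam=1e-3, tol=1e-6):
    """Illustrative penalized objective R(theta) = L(theta) + lam * ||theta||_0.

    The hard count of non-zero parameters is not differentiable; [28] uses a
    differentiable expected-L0 relaxation, which this sketch deliberately omits.
    """
    l0 = sum((p.abs() > tol).sum() for p in parameters)   # ||theta||_0
    return task_loss + lam * l0.float()

# Toy usage with a single weight matrix.
W = torch.randn(4, 4)
print(l0_penalized_loss(torch.tensor(0.7), [W]))
```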

    5 Experiments

    5.1 Datasets

To evaluate the proposed methodology, a set of vertex-level datasets with ground-truth communities is considered. The statistics of the datasets are listed in Tab. 1. The ground-truth communities in the datasets can assist in both community detection and graph classification. Moreover, the collaboration network, web network, and social networks in the datasets are considered. The raw adjacency matrix is constructed with the available nodes and edges. The networks considered are acquired from publicly available resources [29,30].

    Table 1: The statistics of social network data

    5.2 Community Detection

In this sub-section, whether the structure is preserved by the embeddings of the graph-transformer is investigated first. To this end, the latent space embeddings are evaluated with standard graph clustering metrics. Normalized Mutual Information (NMI) [31] is chosen as the standard metric for cluster quality evaluation, due to its improvement on the relative normalized mutual information. The Karate Club library [32] is utilized as a benchmark, which can deliver fast, reliable, and reproducible results with consistency.

The evaluation procedure is comparatively different from the compared methods. Firstly, a set of nodes is trained on the graph transformer. Only 50% of the nodes in the network are used for training, and the remaining ones are used for testing. The results are obtained over 10 repeated experiments, of which the means and deviations are shown in Tab. 2.
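A minimal sketch of such an evaluation loop using scikit-learn's normalized_mutual_info_score, assuming predicted community labels are available from each run; the placeholder random labels merely illustrate the mean-and-deviation reporting and do not reproduce the reported scores.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def evaluate_nmi(true_labels, predicted_labels_per_run):
    """Mean and standard deviation of NMI over repeated runs."""
    scores = [normalized_mutual_info_score(true_labels, pred)
              for pred in predicted_labels_per_run]
    return np.mean(scores), np.std(scores)

# Toy usage: 10 repeated runs of a (placeholder) community assignment.
rng = np.random.default_rng(0)
truth = rng.integers(0, 4, size=200)               # ground-truth communities
runs = [rng.integers(0, 4, size=200) for _ in range(10)]
mean_nmi, std_nmi = evaluate_nmi(truth, runs)
print(f"NMI = {mean_nmi:.3f} ± {std_nmi:.3f}")
```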

Table 2: Performance evaluation compared with standard methods from the literature

A set of standard community detection algorithms is utilized to validate the work. The first five models in Tab. 2 [33-37] are proposed for overlapping communities, whereas the remaining methods are designed for non-overlapping communities. It is observed that the proposed graph-transformer tends to have effective outcomes for the datasets. The results present the resilience of the model, which can balance the performance of different communities appropriately. The average NMIs for the Wikipedia crocodiles, Github Developers, Twitch England, and Facebook Page-Page networks are 0.67±0.04, 0.198±0.02, 0.228±0.02, and 0.68±0.03, respectively. Furthermore, the NMIs of the testing sets for the different networks are studied and shown in Fig. 4.

    5.3 Graph Classification

Subsequently, the latent node embeddings of the graph-transformer are investigated for node classification. Due to the availability of ground-truth labels for the individual networks, the task is evaluated similarly to the clustering problem. The nodes are equally divided into two sets for training and testing, respectively. The training convergence is studied accordingly. It is observed that, in most scenarios, the model converges very fast and obtains its optimal accuracy within about 10 epochs. To present the learning ability of the graph-transformer, the accuracy and loss curves are illustrated in Fig. 5. The accuracy and loss values on node classification for the selected networks are listed in Tab. 3.

    6 Drawbacks

This section aims to illustrate the drawbacks of the proposed work. Firstly, in some intricate scenarios, reducing dimensions can be problematic. Small data with higher sparsity can mislead the predictability, as small-scale data cannot support generic inferences. As a result, dimensionality reduction should not be applied for cases with small data samples. Secondly, the proposed work mainly focuses on undirected, homogeneous, and unweighted graphs. Thirdly, the method must be fine-tuned to a specific dataset, as the constructed adjacency matrix varies with different network structures. This means that the problems of dynamically evolving networks cannot be solved, since an increasing number of nodes leads to an increase of the adjacency matrix dimensions, in terms of width and length.

Figure 4: NMI growth (test) for the selected networks over several epochs

Hence, it is recommended to build a very small model or a naive model for unseen samples. The parametric weights need to be handled carefully. This means that dedicated attention layers should be added to extract temporal patterns, and the parameters need to be tuned based on the real-life network.

Figure 5: The individual accuracy plot and the loss decay plot of the networks. (a) Accuracy plots (b) Loss plots

    Table 3: Performance of node classification for the selected networks

    7 Conclusion

It is possible to improve the performance by using various dimensionality reduction techniques, especially for graph-transformer techniques. The attention heads and the self-attention mechanism are important and must be balanced. The structure of the complete graph can be captured with the assistance of the local patterns, which leads to communities containing both global and local structural patterns. The objective function serves to provide appropriate learning through stochastic optimization.

It is observed that, even on variant tasks, the proposed method can outperform the existing methods in a task-invariant manner. Hence, the objective function can provide a domain-invariant characterization with higher generalization. The proposed mechanism tends to have advanced performance on social network data for both detecting communities and node classification.

Acknowledgement: The authors extend their appreciation to King Saud University for funding this work through Researchers Supporting Project number RSP-2021/305, King Saud University, Riyadh, Saudi Arabia.

Funding Statement: The research is funded by the Researchers Supporting Project at King Saud University (Project# RSP-2021/305).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
