
    Multi-Head Attention Graph Network for Few Shot Learning

Computers, Materials & Continua, August 2021

Baiyan Zhang, Hefei Ling,*, Ping Li, Qian Wang, Yuxuan Shi, Lei Wu, Runsheng Wang and Jialie Shen

1 School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China

2 School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, BT7 1NN, UK

Abstract: The majority of existing graph-network-based few-shot models focus on a node-similarity update mode. The lack of adequate information intensifies the risk of overtraining. In this paper, we propose a novel Multi-head Attention Graph Network to excavate discriminative relations and fulfill effective information propagation. For edge update, node-level attention evaluates the similarities between two nodes, while distribution-level attention extracts deeper global relations. The cooperation between these two parts provides a discriminative and comprehensive expression for edge features. For node update, we embrace label-level attention to soften the noise of irrelevant nodes and optimize the update direction. Our proposed model is verified through extensive experiments on two few-shot benchmarks, MiniImageNet and CIFAR-FS. The results suggest that our method has strong noise immunity and quick convergence, and its classification accuracy outperforms most state-of-the-art approaches.

    Keywords: Few shot learning; attention; graph network

    1 Introduction

The past decade has seen remarkable development of deep learning across a broad spectrum of the computer vision field, including image classification [1], object detection [2-4], person re-identification [5-8], face recognition [9], etc. Such progress cannot be divorced from vast amounts of labeled data. Nevertheless, performance can be adversely affected in data-hungry conditions. Thus, there is an urgent need to enable learning systems to efficiently resolve new tasks with few labeled data, which is termed few-shot learning (FSL).

The origin of FSL can be traced back to 2000, when E. G. Miller et al. investigated the Congealing algorithm to learn common features from a few examples and accomplish the matching of specific images [10]. Since then, considerable literature has grown up around the theme of few-shot learning [11]. The vast majority of existing methodologies belong to meta-learning (ML), which implements an episodic training strategy to learn task-agnostic knowledge from abundant meta-train tasks. Multifarious ML approaches fall into three major groups: learn-to-measure methods provide explicit criteria across different tasks to assess the similarity between labeled and unlabeled data [12,13]; learn-to-model methods generate and update parameters through collaboration with proven networks [14,15]; learn-to-optimize methods fine-tune a base learner for fast adaptation [16]. Despite their diversity and efficacy, mainstream meta-learning models mostly focus on generalizing to unseen tasks with transferable knowledge; few explore inherent structured relations and regularity [17].

To remedy the drawback above, another line of work has focused on graph networks, which adopt structural representations to support relational reasoning for few-shot learning [17]. Early work constructed a complete graph to represent each task, where label information was propagated by updating node features through neighborhood aggregation [18]. Thereafter, more and more graph methods have been devoted to few-shot learning, such as the edge-labeling framework EGNN [19], the transductive inference method TPN [20], and the distribution propagation method DPGN [21]. With various features involved in the graph update, limited label information is converted to multiple forms and then repeatedly counted and aggregated, entailing many otherwise unnecessary costs [22]. Consequently, how to find discriminable information and realize effective propagation is a problem that desperately needs to be settled.

Figure 1: The overall framework of the MAGN model. In this figure, we present a 3-way 1-shot problem as an example. After the feature embedding module f_emb (details in Section 4.2.1), samples and their relations generate the initial graph. There are L generations in the GNN module (we show one of them for simplicity). Each generation consists of a node feature update and an edge feature update, with cooperation among the node attention, distribution attention and label attention. Solid circles represent support samples and hollow circles represent query samples. Squares indicate edge features, and the darkness of color denotes the value: the darker the color, the larger the value. The detailed process is described in Section 3.

In this paper, we propose a novel Multi-head Attention Graph Network (MAGN) to address the problem stated above, as shown in Fig. 1. In the process of updating the graph network, different weights are assigned to different neighbor nodes. Compared to the node-similarity-based weights of existing methods, we provide new insights into a multi-level fusion similarity mechanism with distribution features and label information to improve discriminative performance. More specifically, for node update, we treat the label information as an initial adjacency matrix to soften the noise of irrelevant nodes, thereby constraining the update direction. For edge update, we excavate the distribution feature by calculating the edge-level similarity over all samples; as feedback of global information, it reveals more in-depth relations. Combined with the regular node-level attention, more valuable and discriminable relations are involved in the process of knowledge transfer. Furthermore, we verify the effectiveness of our method through extensive experiments on the MiniImageNet and CIFAR-FS datasets. The results show that MAGN converges quickly and robustly while maintaining high accuracy.

    2 Related Work

    2.1 Meta-Learning

Meta-learning, also known as "learn to learn," plays an essential role in addressing few-shot learning. According to the content of the learning system, it can be divided into three categories. Learn-to-measure methods, based on metric learning, employ an attention-based nearest-neighbor classifier using the similarity between labeled and unlabeled data: Matching Networks adopt cosine similarity [15], while Prototypical Networks [12] establish a prototype for each class and use Euclidean distance as the metric. Differing from the above, Relation Net [13] devises a CNN-based relation metric network. Learn-to-optimize methods fine-tune a base learner for fast adaptation: MAML [16] is a typical approach that learns a good parameter initialization for rapid generalization, and various models have been derived from it, such as the first-order gradient method Reptile [23], the task-agnostic method TAML [24], and the Bayes-based method BMAML [25]. Learn-to-model methods generate and update parameters on the basis of proven networks: Meta-LSTM [26] embraces an LSTM network to update the meta-learner parameters, and VERSA [27] builds a probabilistic amortization network to obtain softmax-layer weights. To predict weights, MetaOptNet [28] advocates SVM, R2-D2 adopts a ridge-regression layer [29], while Dynamic Net [30] uses a memory module.

    2.2 Graph Attention Network

The attention mechanism is essential for a wide range of technologies, such as sequence learning, feature extraction and signal enhancement [31]. Its core objective is to select, from abundant information, the information most critical to the current task. Early GCN works were limited by the Fourier-transform derivation, which made it challenging to deal with directed graphs and assigned indiscriminate equal weights [32]. Given that, Yoshua Bengio's group equipped the graph network with a masked self-attention mechanism [33]. During information propagation, it assigns different weights to each node according to the neighbor distribution. Benefiting from this strategy, GAT can filter noisy neighbors and improve the performance of the graph framework. Such an idea was adopted and enhanced by GAAN [34], which combined two mechanisms: multi-head attention to extract various information, and self-attention to aggregate it.

    3 Model

    In this section, we first summarize the preliminaries of few-shot classification following previous work and then describe our method in more technical detail.

    3.1 Preliminaries

Few-shot learning: The goal of FSL is to train a reliable model with the capability of learning and generalizing from few samples. A common setting is the N-way K-shot classification task. Each task T consists of a support set S and a query set Q. There are N × K labeled samples in the support set, where N is the number of classes and K is the number of samples per class. Samples in the query set are unlabeled, but they belong to the N classes of the support set. The learning algorithm aims to produce a mapping function from query samples to labels.

Meta-Learning: One of the main obstacles in FSL is overfitting caused by limited labeled data. Meta-learning adopts an episodic training strategy to compensate for this, increasing generalization ability through extensive training on similar tasks. Given a training set D_train and a test set D_test with D_train ∩ D_test = ∅, each task T is randomly sampled from a task distribution P(T) and can be expressed as

$$T = S \cup Q = \{(x_i, y_i)\}_{i=1}^{N \times K} \cup \{x_i\}_{i=N \times K + 1}^{N \times K + T},$$

where x_i represents the i-th sample, y_i is its label, and T is the number of samples in Q. In the training stage, plenty of N-way K-shot classification tasks are sampled from D_train. Through many training episodes on these tasks, we can build a feasible classifier. In the testing stage, the samples of each task stem from D_test. Since tasks in D_train and D_test follow the same distribution P(T), such a classifier generalizes well to tasks sampled from D_test.
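As a concrete illustration of the episodic setup, the following Python sketch samples one N-way K-shot task. The dictionary layout, the query size, and the shuffling of both sets (revisited in Section 4.3.1) are illustrative assumptions, not the paper's exact pipeline.

```python
import random

def sample_episode(data_by_class, n_way=5, k_shot=1, n_query=15):
    """Draw one N-way K-shot task T = (S, Q) from a dict {class: [samples]}."""
    classes = random.sample(list(data_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        picks = random.sample(data_by_class[cls], k_shot + n_query)
        support += [(x, label) for x in picks[:k_shot]]  # N*K labeled samples
        query += [(x, label) for x in picks[k_shot:]]    # labels hidden from the model
    random.shuffle(support)  # scramble label order in both sets (Section 4.3.1)
    random.shuffle(query)
    return support, query
```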

    3.2 Initialized GNN

Graph Neural Networks: In this section, we describe the overall framework of the proposed GNN, as shown in Fig. 1. First, we utilize an embedding module to extract features (details in Section 4.2.1); after that, each task is expressed as a fully connected graph. Through L layers of graph updates, the GNN realizes information transfer and relational reasoning. Specifically, the task T is formed as the graph G = (V, E), where each node v_i ∈ V denotes the embedded sample x_i in task T, and each edge e_{i,j} ∈ E corresponds to the relationship between the two connected nodes v_i and v_j, where i, j = 1, 2, ..., F and F is the number of all samples in T, F = N × K + T.

Initial graph features: In the graph G = (V, E), node features are initialized as the output of the feature embedding module: v_i^0 = f_emb(x_i; θ_emb), where θ_emb is the parameter set of the embedding module f_emb. Edge features indicate the degree of correlation between the two connected nodes, e_{i,j} ∈ [0, 1]. Given the label information, we set the edge features of labeled samples to the two extremes of intra-class and inter-class relations, while the edge features of unlabeled samples share the same relation to all others. Therefore, the edge features are initialized as Eq. (1):

$$e_{i,j}^{0} = \begin{cases} 1, & \text{if } y_i = y_j \text{ and } x_i, x_j \in S \\ 0, & \text{if } y_i \neq y_j \text{ and } x_i, x_j \in S \\ 0.5, & \text{otherwise} \end{cases} \tag{1}$$
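A minimal PyTorch sketch of this initialization, assuming the reconstruction in Eq. (1) and a node ordering that places the N × K support samples first:

```python
import torch

def init_edges(labels, num_labeled):
    """labels: (F,) long tensor of class ids; the first num_labeled nodes
    are the support samples, the rest are (unlabeled) query samples."""
    F = labels.size(0)
    e = torch.full((F, F), 0.5)                         # unlabeled pairs: neutral relation
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    labeled = torch.zeros(F, dtype=torch.bool)
    labeled[:num_labeled] = True
    both = labeled.unsqueeze(0) & labeled.unsqueeze(1)  # pairs with two labeled endpoints
    e[both] = same[both]                                # 1 intra-class, 0 inter-class
    return e
```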

    3.3 Multi-Head Attention

The majority of existing few-shot graph models focus on a node-attention update mode, which adopts node similarity to control neighborhood aggregation. This mode ignores the inherent relationships between the samples, which may lead to the risk of overtraining. Therefore, we propose a multi-head attention mechanism with distribution features and label information to enhance the model capability.

    3.3.1 Node-Level Attention

Like existing methods such as EGNN and DPGN, the node-level attention is based on the similarity between two nodes. Since each node has a different neighborhood, we apply a normalization operation over nodes in the same neighborhood to obtain more discriminative and comparable results. We define node-level attention with node similarity as follows:

$$n_{i,j}^{k} = \mathrm{Att}\big(v_i^{k}, v_j^{k}\big) \tag{2}$$

$$\tilde{n}_{i,j}^{k} = \mathrm{softmax}_{j \in N(i)}\big(n_{i,j}^{k}\big) = \frac{\exp(n_{i,j}^{k})}{\sum_{j' \in N(i)} \exp(n_{i,j'}^{k})} \tag{3}$$

In detail, given nodes v_i^k and v_j^k from the k-th layer, Att is a metric network with four Conv-BN-ReLU blocks that calculates the primary similarity of the two nodes. In Eq. (3), N(i) denotes the neighbor set of node v_i. We then apply a local normalization operation via softmax to obtain the final node similarity ñ_{i,j}^k.
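The following PyTorch sketch mirrors Eqs. (2)-(3). Treating the F × F pairwise difference map as a 2-D feature map so the four Conv-BN-ReLU blocks can apply is our own reading, and the channel width is an assumption.

```python
import torch
import torch.nn as nn

class NodeAttention(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # Att: four Conv-BN-ReLU blocks over the pairwise |v_i - v_j| map
        layers = []
        for _ in range(4):
            layers += [nn.Conv2d(dim, dim, 1), nn.BatchNorm2d(dim), nn.ReLU()]
        layers += [nn.Conv2d(dim, 1, 1)]          # collapse to a scalar score per pair
        self.att = nn.Sequential(*layers)

    def forward(self, v):                          # v: (F, dim) node features
        diff = (v.unsqueeze(0) - v.unsqueeze(1)).abs()        # (F, F, dim)
        score = self.att(diff.permute(2, 0, 1).unsqueeze(0))  # (1, 1, F, F)
        score = score.squeeze(0).squeeze(0)                   # (F, F), Eq. (2)
        return torch.softmax(score, dim=-1)                   # Eq. (3): neighborhood softmax
```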

    3.3.2 Distribution-Level Attention

The node-level attention relies on the local relationships of node similarity, while the global relationship has not yet been fully investigated. To mine more discriminative information, we extract the global distribution feature by aggregating the edge features of all samples and then evaluate the similarity of the distribution features, defined as Eqs. (4) and (5):

$$D_i^{k} = \big(e_{i,1}^{k}, e_{i,2}^{k}, \ldots, e_{i,F}^{k}\big) \tag{4}$$

$$\tilde{d}_{i,j}^{k} = \mathrm{softmax}_{j \in N(i)}\Big(\mathrm{Att}\big(D_i^{k}, D_j^{k}\big)\Big) \tag{5}$$

where D_i^k is the distribution feature of node v_i^k at the k-th layer; it consists of all the edge features of v_i^k. Similarly, we obtain the distribution feature D_j^k of node v_j^k. Both are then sent to the Att network to assess the distribution similarity. The same softmax operation simplifies the computations.
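A sketch of Eqs. (4)-(5). Representing Att here as a small MLP over concatenated distribution vectors is a simplification of the paper's metric network, and the hidden width is an assumption.

```python
import torch
import torch.nn as nn

class DistributionAttention(nn.Module):
    """Compares rows of the edge matrix (each row is one node's distribution)."""
    def __init__(self, num_nodes, hidden=64):
        super().__init__()
        self.att = nn.Sequential(nn.Linear(2 * num_nodes, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, edges):                          # edges: (F, F), F == num_nodes
        F_ = edges.size(0)
        D = edges                                      # Eq. (4): D_i = (e_i1, ..., e_iF)
        pairs = torch.cat([D.unsqueeze(1).expand(F_, F_, F_),
                           D.unsqueeze(0).expand(F_, F_, F_)], dim=-1)  # (F, F, 2F)
        score = self.att(pairs).squeeze(-1)            # (F, F) pairwise scores
        return torch.softmax(score, dim=-1)            # Eq. (5): neighborhood softmax
```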

    3.3.3 Label-Level Attention

In previous work, although the aggregation scope is the neighborhood of each node, it extends beyond the same class. Furthermore, the update of the graph network is a process of information interaction and fusion, which increases the noise from nodes of diverse classes. We set an adjacency matrix to filter irrelevant information and constrain the update direction, as shown in Eq. (6).

where A^k is the adjacency matrix at the k-th layer, A is the label adjacency matrix whose element a_{i,j} equals one when v_i and v_j have the same label and zero otherwise, and E^k is the matrix of edge features. Eq. (6) combines long-term label information with short-term updated edge features in the manner of a recurrent neural network. Such an operation prunes useless information from inter-class samples and distills useful intra-class samples.
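Since the exact form of Eq. (6) is not reproduced here, the sketch below shows only one plausible reading: a learnable convex mix of the static label adjacency A with the current edge matrix E^k. The gate parameter is hypothetical.

```python
import torch
import torch.nn as nn

class LabelAttention(nn.Module):
    """Mixes the static label adjacency A with the current edge matrix E^k."""
    def __init__(self):
        super().__init__()
        self.gate = nn.Parameter(torch.tensor(0.5))  # hypothetical learnable mixing weight

    def forward(self, A, E_k):
        # A: (F, F) with a_ij = 1 for same-label pairs; E_k: (F, F) edges at layer k
        g = torch.sigmoid(self.gate)
        return g * A + (1 - g) * E_k                 # long-term labels + short-term edges
```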

    3.4 Feature Update

Information transmission is facilitated through the alternate update of node features and edge features. In particular, the update of node features depends on neighborhood aggregation, where edge features cooperate with label information to control the relation transformation, while the edge features of MAGN are subject to node similarity and neighborhood distribution.

Based on the above update rules, the edge features at the (k+1)-th layer can be formulated as follows:

$$e_{i,j}^{k+1} = \mathrm{conca/ave}\big(\tilde{n}_{i,j}^{k}, \tilde{d}_{i,j}^{k}\big) \tag{7}$$

where conca/ave represents the connection between the two attention mechanisms: conca means cascade (concatenation) connection and ave denotes mean fusion; ñ_{i,j}^k represents the node similarity in Eq. (3), and d̃_{i,j}^k represents the distribution similarity in Eq. (5).
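A sketch of the edge update in Eq. (7) as reconstructed above. The 1×1 projection after concatenation is our assumption; the paper only names the two fusion modes.

```python
import torch
import torch.nn as nn

def update_edges(n_sim, d_sim, mode="conca", proj=None):
    """n_sim, d_sim: (F, F) similarities from Eqs. (3) and (5)."""
    if mode == "ave":
        return 0.5 * (n_sim + d_sim)                     # "ave": mean fusion
    # "conca": stack the two heads and fuse them with a 1x1 projection
    stacked = torch.stack([n_sim, d_sim]).unsqueeze(0)   # (1, 2, F, F)
    if proj is None:                                     # in training, pass a shared module
        proj = nn.Conv2d(2, 1, kernel_size=1)
    return proj(stacked).squeeze(0).squeeze(0)           # (F, F) updated edges
```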

The node vectors at the (k+1)-th layer can be formulated as Eq. (8):

$$v_i^{k+1} = \mathrm{MLP}_v\Big(\sum_{j \in N(i)} A_{i,j}^{k+1}\, v_j^{k}\Big) \tag{8}$$

where MLP_v is the node update network with two Conv-BN-ReLU blocks, and A_{i,j}^{k+1} is the adjacency status of v_j and v_i at the (k+1)-th layer. It aggregates the node features of the neighbor set with the multi-head attention mechanism shown in Fig. 2.
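A sketch of the node update as read above. Linear layers stand in for the paper's two Conv-BN-ReLU blocks, and concatenating each node's own feature with the aggregate is our assumption.

```python
import torch
import torch.nn as nn

class NodeUpdate(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # MLP_v stand-in: two Linear-ReLU blocks instead of Conv-BN-ReLU
        self.mlp_v = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                   nn.Linear(dim, dim), nn.ReLU())

    def forward(self, v, A_next):
        """v: (F, dim) node features; A_next: (F, F) adjacency at layer k+1."""
        agg = A_next @ v                               # attention-weighted aggregation
        return self.mlp_v(torch.cat([v, agg], dim=-1)) # (F, dim) updated nodes
```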

Figure 2: Multi-head attention

    3.5 Prediction

After L layers of node and edge feature updates, the classification result for node x_i can be obtained from the prediction probability of the corresponding edge features at the final layer via the softmax function:

$$P(\hat{y}_i = n) = \mathrm{softmax}\Big(\sum_{\{j:(x_j, y_j) \in S\}} \delta(y_j = n)\, e_{i,j}^{L}\Big) \tag{9}$$

In Eq. (9), δ(y_j = n) is the Kronecker function that outputs one if y_j = n and zero otherwise, and P(ŷ_i = n) stands for the prediction probability that v_i is in the n-th category.
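A sketch of the prediction step in Eq. (9): a node's class scores are the support-weighted sums of its final-layer edge features. Support nodes occupying the first indices is an assumption carried over from the initialization sketch.

```python
import torch

def predict(edges_L, support_labels, n_way):
    """edges_L: (F, F) final-layer edges; support nodes are assumed to come first."""
    NK = support_labels.size(0)
    onehot = torch.eye(n_way)[support_labels]  # delta(y_j = n) as an (NK, N) matrix
    scores = edges_L[:, :NK] @ onehot          # class evidence gathered over support edges
    return torch.softmax(scores, dim=-1)       # P(v_i in class n)
```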

    3.6 Training

During episodic training, the parameters of the proposed GNN are trained in an end-to-end manner. The final objective is to minimize the total loss function computed over all layers, as shown in Eq. (10):

$$\mathcal{L} = \sum_{k=1}^{L} \lambda_k \sum_{i} \mathcal{L}_E\big(P_i^{k}, y_i\big) \tag{10}$$

where λ_k is the weight of the k-th layer, L_E represents the cross-entropy loss function, P_i^k is the probability prediction of sample x_i at the k-th layer, and y_i is the ground-truth label.
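A minimal sketch of this objective, assuming the per-layer predictions are already softmax-normalized as in Eq. (9), so cross-entropy is computed via log plus NLL.

```python
import torch
import torch.nn.functional as F

def total_loss(per_layer_probs, query_labels, lambdas):
    """per_layer_probs: one (T, N) probability tensor per layer; lambdas: layer weights."""
    loss = 0.0
    for lam, probs in zip(lambdas, per_layer_probs):
        # cross-entropy on already-softmaxed predictions: log + negative log-likelihood
        loss = loss + lam * F.nll_loss(torch.log(probs + 1e-8), query_labels)
    return loss
```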

    4 Experiments

For a fair comparison, we evaluate our method on two standard few-shot learning datasets following the experimental settings of EGNN, and run contrast experiments against state-of-the-art approaches.

    4.1 Datasets

MiniImageNet is a typical benchmark few-shot dataset. As a subset of ImageNet, it is composed of 60,000 images uniformly distributed over 100 classes. All images are RGB, with size 84 × 84 × 3. Following the setting provided by [26], we randomly select 64 classes for training, 16 classes for validation, and 20 classes for testing.

CIFAR-FS is derived from the CIFAR-100 dataset. Like MiniImageNet, it is formed of 100 classes, each containing 600 images, split into 64, 16, and 20 classes for training, validation, and testing. In particular, the low resolution (32 × 32) and high inter-class similarity make the classification task technically challenging.

Before training, both datasets undergo data augmentation with transformations such as horizontal flip, random crop, and color jitter (brightness, contrast, and saturation).
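The pipeline above, as a torchvision sketch; the crop padding and jitter strengths are illustrative choices, not the paper's stated values.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(84, padding=8),  # 32 for CIFAR-FS
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.ToTensor(),
])
```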

    4.2 Implementation Details

    4.2.1 Embedding Network

We adopt ConvNet and ResNet12 as the backbone embedding modules. Following the same setting used in [19,23], the ConvNet architecture contains four convolutional blocks, each composed of a 3 × 3 convolution, batch normalization, 2 × 2 max-pooling, and a LeakyReLU activation. Similar to ConvNet, ResNet12 also has four blocks, each of which is a residual block.
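A sketch of the ConvNet backbone described above: four Conv-BN-Pool-LeakyReLU blocks. The channel width (64) and the LeakyReLU slope are common choices and assumptions here.

```python
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out),
                         nn.MaxPool2d(2),
                         nn.LeakyReLU(0.1))

class ConvNet(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.blocks = nn.Sequential(conv_block(3, channels),
                                    conv_block(channels, channels),
                                    conv_block(channels, channels),
                                    conv_block(channels, channels))

    def forward(self, x):                  # x: (B, 3, 84, 84) for MiniImageNet
        return self.blocks(x).flatten(1)   # flattened embedding per sample
```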

    4.2.2 Parameter Settings

We evaluate MAGN on 5-way 1-shot and 5-shot classification tasks on both benchmarks. There are three layers in the proposed GNN model. In the meta-train stage, each batch consists of 60 tasks, while in the meta-test stage, each batch contains ten tasks. During training, we adopt the Adam optimizer with an initial learning rate of 5 × 10^{-4} and a weight decay of 10^{-6}. The dropout rate is set to 0.3, and the loss coefficient is 1. The results of our proposed model are obtained after 100k iterations on MiniImageNet and CIFAR-FS.
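The optimizer settings above, expressed as a minimal PyTorch sketch; `model` stands in for the full MAGN network.

```python
import torch

def make_optimizer(model):
    return torch.optim.Adam(model.parameters(),
                            lr=5e-4,            # initial learning rate
                            weight_decay=1e-6)  # weight decay from Section 4.2.2
```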

    4.3 Results and Analysis

    4.3.1 Main Results

We compare our approach with recent state-of-the-art models. The main results are listed in Tabs. 1 and 2. According to the embedding architecture, the backbones are divided into ConvNet, ResNet12, ResNet18, and WRN28; the major difference is the number of residual blocks. In addition, GNN-based methods are listed separately for the sake of intuition. Extensive results show that our MAGN yields better performance on both datasets. For example, among all the ConvNet-architecture methods, MAGN is substantially better than the others. Although the results are slightly lower than DPGN, we still obtain second place with a narrow gap for both backbones. Nevertheless, some common graph network methods like EGNN and DPGN train and test with labels in a consistent order; for example, in the 5-way 1-shot task the label order runs from the support set (0, 1, 2, 3, 4) to the query set (0, 1, 2, 3, 4). The learning system may then learn the order of the task rather than the relations among samples. To avoid this effect, we disrupt the label order of the support set and query set. This setup makes our results less than optimal, but it is more in line with real-world scenarios. The proposed MAGN acquires a robust result that is not biased by the noise of label order.

Table 1: Classification accuracy on CIFAR-FS

    4.3.2 Ablation Study

Effect of data shuffling mode: There are three ways to scramble the data: shuffle the support set, shuffle the query set, or shuffle both sets. We conduct a 5-way 1-shot trial with label-node attention on MiniImageNet. The comparative results are shown in Tab. 3. As we can see, the data shuffling mode has little effect on the accuracy rate, while it makes a difference to the time of convergence. This is consistent with the essence of random selection. To further explore the convergence performance of the model, the default setting shuffles the order of both sets.

Effect of different attention: The major ablation results for the different attention components are shown in Fig. 3. All variants are evaluated on the 5-way 1-shot classification task of MiniImageNet. The baseline adopts only node attention ("NodeAtt"). On this basis, the variant "DisNode" adds distribution-level attention to assist edge update. For samples in the same class, the surrounding neighborhoods follow a similar distribution, so the "DisNode" model can mine more discriminable relationships between two nodes and obtain an improvement in accuracy. Besides, the performance of concatenating aggregation is superior to average aggregation; this advantage extends to the final state of three attentions, with a slight rise from 0.49 ("CatDisNode" minus "AveDisNode") to 0.85 ("Cat3Att" minus "Ave3Att"). The variant "LabNode" equips the node update with label-level attention, leading to a considerable improvement in convergence, from 89k to 63k iterations. We attribute this to the filtering capability of the label adjacency matrix, which constrains the update direction and realizes fast convergence.

Table 2: Classification accuracies on MiniImageNet

Table 3: 5-way 1-shot results on MiniImageNet with different data shuffling modes

Figure 3: Effect of different attention. The left part shows the accuracy of variants with different attention components; the right part describes the convergence process of those variants

Effect of layers: In a GNN, the depth of the network influences feature extraction and information transmission. To explore this, we perform 5-way 1-shot experiments with different numbers of layers. As shown in Tab. 4, the accuracy rate and convergence time improve steadily as the network deepens. To manage the trade-off between convergence and accuracy, a 3-layer GNN is configured for our models.

Table 4: 5-way 1-shot results on MiniImageNet with different numbers of layers

    5 Conclusion

In this paper, we propose a Multi-head Attention Graph Network for few-shot learning. The multiple attention mechanism includes three parts: node-level attention explores the similarities between two nodes, and distribution-level attention extracts deeper global relations; the cooperation between these two parts provides a discriminative expression for edge features. The label-level attention, serving as a filter, weakens the noise of inter-class information during node update and accelerates convergence. Furthermore, we scramble the training data of the support set and query set to guarantee the transfer of order-agnostic knowledge. Extensive experiments on few-shot benchmark datasets validate the accuracy and efficiency of the proposed method.

Funding Statement: This work was supported in part by the Natural Science Foundation of China under Grants 61972169 and U1536203, in part by the National Key Research and Development Program of China (2016QY01W0200), and in part by the Major Scientific and Technological Project of Hubei Province (2018AAA068 and 2019AAA051).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
