
Task-adaptation graph network for few-shot learning

2022-07-06
    High Technology Letters, 2022, Issue 2

    ZHAO Wencang(趙文倉), LI Ming, QIN Wenqian

    (College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, P.R.China)

Abstract Numerous meta-learning methods focus on the few-shot learning problem, yet most of them assume that different tasks share a common embedding space, which limits the generalization ability of the trained model. To solve this problem, a task-adaptive meta-learning method based on graph neural networks (TAGN) is proposed in this paper, in which the representation ability of the original feature extraction network is improved and the classification accuracy is remarkably increased. Firstly, a task-adaptation module based on the self-attention mechanism is employed, which enhances the generalization ability of the model on new tasks. Secondly, images are classified in a non-Euclidean domain, which overcomes the poor adaptability of traditional distance functions. Extensive experiments show that the proposed method outperforms traditional task-independent classification methods on two real-world datasets.

    Key words:meta-learning,image classification,graph neural network (GNN),few-shot learning

    0 Introduction

Many machine learning problems have been solved by the development of deep learning. Trained on large, high-dimensional datasets, deep models achieve high prediction accuracy in tasks such as computer vision, machine translation, sentiment analysis and speech recognition, and have driven the development of artificial intelligence. However, a key problem remains: training such models requires a large number of labeled samples, whereas in the real world labeled samples are often scarce or unavailable, which makes it impossible to train a highly accurate model. This is the few-shot classification problem[1].

In recent years, to solve the above few-shot learning problem, researchers have shifted their focus to meta-learning (learning to learn). Meta-learning can use prior knowledge to train a model with strong generalization ability; most importantly, it has weak demand for sample labels and a wide range of applications. Existing meta-learning methods can be divided into four categories: methods based on parameter optimization, such as the MAML algorithm[2]; methods based on external memory, such as the MetaNet algorithm[3]; methods based on data augmentation, such as the SGM algorithm[4]; and methods based on metric learning, such as the RepMet algorithm[5]. Metric learning is also called similarity learning; its essence is to complete few-shot classification by comparing the similarity of two images.

Although great progress has been made by these methods, limitations remain. First, the feature vectors extracted by most methods are not sufficiently discriminative for new tasks. The main reason is that these methods ignore the fact that the relevant differences between features vary across classification tasks, so the feature representation should also be adjusted according to the task. In other words, a feature representation suited to one task is not necessarily suited to another. To overcome this shortcoming, a feasible solution is to link the training task with the test task, so that feature extraction focuses on the task to be completed next rather than producing one representation for all classification tasks. Second, most existing metric-based few-shot learning methods use Euclidean or other hand-crafted distance functions, which are limited to Euclidean space[6]. The solution is to train the metric with a neural network, so that the distance function can adapt to different tasks.

A self-attention learning module based on the metric learning algorithm is proposed in this paper, which improves the representation ability of the original feature extraction network so that the extracted feature vectors can be adapted to various tasks (task adaptation). In addition, a graph neural network (GNN) is employed as the distance measurement method to overcome the shortcomings of traditional metrics, and the relationships between samples are fully exploited to further improve the accuracy of few-shot classification.

    1 Related work

The essence of the attention mechanism is to focus only on key information, that is, when multiple kinds of information appear, only a specific part is captured. It has achieved great success in natural language processing, semantic segmentation, image classification and other fields[7].

In recent years, researchers have improved the attention mechanism into the self-attention mechanism, which attends to every item within a set of information and thereby captures the internal correlations of the features. The self-attention mechanism has likewise achieved remarkable results in various fields. For example, in natural language processing, a self-attention mechanism used in place of a recurrent neural network learned text representations extremely well[8]. In Ref.[9], instance segmentation and object detection were completed by adding a non-local module (whose modeling method is the self-attention mechanism) to the ResNet network, and the performance was greatly improved. In addition, the self-attention mechanism was applied to biomedical relation extraction in Ref.[10].

Recently, much image classification work has shifted to the attention mechanism[11-13]. Different weights are assigned to different samples in the training set, so that the classification model pays more attention to important features. In this way irrelevant information is ignored; however, only the features of the training-set samples are weighed, while the samples in the test set are not considered. That is, after applying the attention mechanism, the feature vectors are more conducive to classifying a specific task, but the generalization ability on other tasks is still not ideal. The reason is that the feature vectors should differ between tasks. For example, given two tasks, one distinguishing lions from tigers and the other distinguishing lions from wolves, the differences between lion and tiger must differ from those between lion and wolf. If the same feature vectors are used for both classifications, the accuracy will certainly suffer. To overcome this limitation, a task-adaptive few-shot learning method is proposed.

Based on these considerations, the characteristics of the test-set samples are merged in when the attention weights of the training-set samples are calculated, so that the trained model can adapt to various classification tasks. In addition, a large number of existing metric-based few-shot classification methods are limited to the cosine distance[14], the Euclidean distance[15] or a nonlinear neural network[16] for calculating similarity, where the correlations between samples are not sufficiently considered. Therefore the graph neural network model[17] is used as the classification module, that is, the distance measurement is transferred from Euclidean space to non-Euclidean space, so as to fully explore the relationships between samples and further improve the classification accuracy.

    2 Problem definition

    In this section, the definition of the meta-learning problem and graph classification problem will be introduced.

    2.1 Meta-learning problem

Firstly, the specific meaning of N-way K-shot is presented: the unlabeled samples predicted by the classification model belong to one of N categories, and there are only K labeled samples in each category, where K is a very small number. The N × K samples with known labels are used to predict the labels of the unknown samples. All the samples mentioned above come from the test set, denoted as Dtest; the set of labeled samples in the test set is called the support set Dsupport, and the set of unlabeled samples is called the query set Dquery. The samples in Dsupport are denoted as xs, while the samples in Dquery are denoted as xq.

Since there are few samples with known labels, Dtrain, whose samples are all labeled, is used to simulate N-way K-shot tasks. The specific steps are: randomly select N classes from the training set, with K samples in each class, so that these N × K samples form the support set Dsupport; then randomly select P samples from the remaining samples of each category to form the query set Dquery; repeating the above steps M times yields M N-way K-shot tasks. To simulate the test task, the samples in the support set are used to predict the labels of the samples in the query set, that is, the model is trained on the M tasks from the training set and then generalized to the test set. Usually the classification model f is trained by minimizing the sum of the losses of these tasks (Eq.(1)). The specific steps are shown in Algorithm 1.

Algorithm 1 The formation process of meta-tasks
    for each task Ti, ∀i ∈ {1, …, M} do
        Randomly sample N classes from Dtrain;
        Randomly sample K instances from each of the N classes to form Dsupport;
        Randomly sample P instances from (Dtrain − Dsupport) to form Dquery;
    end for
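The episode-construction procedure of Algorithm 1 can be sketched as follows; `sample_episode` and the list-of-pairs dataset format are illustrative assumptions, not the paper's actual code.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way, k_shot, p_query, rng=random):
    """Sample one N-way K-shot task (episode) from a labeled dataset.

    `dataset` is a list of (sample, label) pairs (a hypothetical format).
    Returns the support set (N*K pairs) and query set (N*P pairs).
    """
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    # Randomly sample N classes from Dtrain
    classes = rng.sample(sorted(by_class), n_way)
    support, query = [], []
    for c in classes:
        # K instances go to Dsupport, P more to Dquery
        picked = rng.sample(by_class[c], k_shot + p_query)
        support += [(x, c) for x in picked[:k_shot]]
        query += [(x, c) for x in picked[k_shot:]]
    return support, query
```

Repeating this call M times yields the M meta-tasks used for training.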

    2.2 Graph classification problem

The definition of a graph is briefly introduced. A graph is a data structure consisting of two components: nodes and edges. Nodes represent the objects to be classified and edges represent the specific relationship between two objects. An undirected graph can be described as a set of nodes and edges, denoted as G = (V, E); G is a two-tuple, where V = {v1, v2, …, vi, …, vn} is the node set and E = {ei,j = (vi, vj)} ⊆ (V × V) is the edge set. In addition, the matrix storing the data of the relationships (edges) between nodes is called the adjacency matrix A[18].

The graph neural network used for classification in this paper contains two modules: a weight update module and a node update module. The weight update module is composed of 5 fully connected layers, four of which are followed by a batch normalization layer and a leaky ReLU activation layer. In order to learn θ, a fully connected layer is added at the end of the network. When the nodes need to be classified, the softmax function is used to normalize the adjacency matrix A row by row, outputting an adjacency matrix B whose elements represent the similarities between nodes. The node update module is composed of a graph convolution block and cascading (concatenation) operation layers; the graph convolution block contains a batch normalization layer and a leaky ReLU layer. The network structure of the graph neural network and its detailed working process are shown in Section 3.
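The row-wise softmax normalization of the adjacency matrix described above can be sketched as below; this is only the normalization step, not the learned weight update network itself.

```python
import numpy as np

def row_softmax(a):
    """Normalize adjacency matrix A row by row with softmax, giving B.

    Each row of B sums to 1, and B[i, j] can be read as the similarity
    of node j from the viewpoint of node i.
    """
    a = np.asarray(a, dtype=float)
    shifted = a - a.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(shifted)
    return e / e.sum(axis=1, keepdims=True)
```

Equal entries in a row receive equal weight, while a dominant entry is pushed toward 1.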

    3 Methods

The task-adaptive few-shot learning method is introduced in this section. Firstly, how the embedding module extracts the initial feature representations is explained. Then the initial feature vectors are processed by the self-attention mechanism, so that the output of the self-attention module is a series of task-related feature vectors. Finally, the working process of the classification module (the graph neural network) is illustrated.

    The visualization of the operation process is shown in Fig.1.

    Fig.1 The overview of task-adaptation graph network (TAGN) method

    3.1 Feature embedding module

In order to learn the initial feature representations θ(x) (Eq.(2)) of the inputs x, the prototype network[18] is used as the feature extraction network E(x) (as visualized in Fig.2). The core of this network is transforming the comparison between θ(xq) (the feature vectors of samples in Dquery) and θ(xs) (the feature vectors of samples in Dsupport) (Eq.(3)) into the comparison between θ(xq) and Ci (the category centers of Dsupport) (Eq.(5)). The category center of Dsupport is calculated by taking the average of all the sample features in the category (Eq.(4)).
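The prototype computation of Eq.(4) and the comparison of Eq.(5) can be sketched as follows; the helper names are illustrative, and the squared Euclidean distance is the one used by the prototype network.

```python
import numpy as np

def class_centers(features, labels):
    """Compute each category center C_i as the mean of the support
    features belonging to that class (Eq.(4))."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def nearest_center(query, centers):
    """Classify a query feature theta(x_q) by its closest category
    center under the squared Euclidean distance."""
    return min(centers, key=lambda c: np.sum((query - centers[c]) ** 2))
```

Comparing each query against N centers instead of N × K support samples is what makes the prototype formulation cheap.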

    Fig.2 The overview of the network structure of feature embedding module

    3.2 Self-attention module

The purpose of the self-attention module is to transform the initial, task-independent feature representation θ(x) into a task-adaptive feature representation ψ(x) (Eq.(7)). The specific operation is based on the self-attention mechanism: a converter F is introduced, whose function is to compute the correct value corresponding to a query point. It is a store of triplets composed of query, key and value[8]. Among them, the query vector represents the feature of the sample, the key vector represents the feature of the information, and the value vector represents the content of the information. How they are defined is described later.

To convert the task-independent feature vectors θ(xq) into the task-adaptive feature vectors ψ(xq), the 'self-attention' operation is utilized, and the scaled dot product is used to obtain the self-attention scores. The self-attention module consists of a fully connected layer, a matrix multiplication layer and a softmax normalization layer. In order to improve the generalization ability of the model and prevent overfitting, a dropout layer (whose rate is set to 0.5) is added to the network structure. In addition, to make the final model more stable and effective, a layer normalization layer is used at the end instead of a batch normalization layer; the network structure is shown in Fig.3.
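The scaled dot-product core of the module can be sketched as below. This is a minimal version of the standard operation softmax(QKᵀ/√d)·V; the paper's full module additionally includes a fully connected layer, dropout (rate 0.5) and layer normalization, which are omitted here.

```python
import numpy as np

def scaled_dot_attention(q, k, v):
    """Scaled dot-product attention: weights = softmax(Q K^T / sqrt(d)),
    output = weights @ V. A sketch, not the paper's exact module."""
    q, k, v = (np.asarray(m, dtype=float) for m in (q, k, v))
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)               # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)          # row-wise softmax
    return w @ v
```

With sharply peaked scores, each output row approaches the value vector of its best-matching key.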

    3.3 Classification module

    Fig.3 The overview of the network structure of self-attention module

As mentioned in subsection 3.1, the center of the task-adaptive feature representation ψ(x) is obtained first according to Eq.(12), and then the category centers Ci and the query-set sample representations are concatenated with their labels. For test samples whose labels are unknown, h(l) is defined as a uniform distribution function (for the sake of simplicity, the label is filled with all zeros) (Eq.(13)). The spliced feature vectors are regarded as nodes, denoted as V = {v1, v2, …, vi, …, vn}, where n = N + P (P is the number of sample points in Dquery), and they serve as the inputs of the graph neural network.
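The node construction step can be sketched as follows: each of the N category centers is spliced with its one-hot label, and each query feature with an all-zero label block standing in for the uniform h(l). Function and argument names are illustrative assumptions.

```python
import numpy as np

def build_nodes(centers, query_feats, n_way):
    """Build the N + P graph nodes: center i gets the one-hot label of
    class i; query samples get an all-zero label block (label unknown,
    as in Eq.(13) of the text)."""
    c = np.asarray(centers, dtype=float)        # shape (N, d)
    q = np.asarray(query_feats, dtype=float)    # shape (P, d)
    labels = np.eye(n_way)                      # one-hot label per center
    unknown = np.zeros((len(q), n_way))         # uniform/zero h(l) for queries
    return np.vstack([np.hstack([c, labels]),
                      np.hstack([q, unknown])])
```

The result is an (N + P) × (d + N) matrix whose rows are the GNN's input nodes.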

where Gc(·) represents a neural network layer, ρ(·) represents the leaky ReLU activation function, d is the number of rows of the node vector, and B is the adjacency matrix obtained by normalizing each row of the adjacency matrix A with softmax.

    Fig.4 The overview of the network structure of classification module

Algorithm 2 Training strategy of the task-adaptation graph network
    Inputs: Dsupport = {(x1, y1), …, (xN×K, yN×K)}; Dquery = {x1, …, xP}
    Outputs: labels y of the samples in Dquery
    for all M tasks do
        Calculate θ(x) using Eq.(2);
        Calculate aqk using Eq.(10);
        Calculate ψ(x) using Eq.(11);
        Calculate Ci using Eq.(12);
        Connect Ci, ψ(xj) with its labels using Eq.(14) → V = (v1, …, vN+P);
        for all layers = 1, …, l do
            Calculate Al using Eq.(14);
            Calculate Vl+1 using Eq.(15);
            Predict the labels ypq of xq using the adjacency matrix B;
            Calculate the loss function l(ypq, yq) using Eq.(16);
        end for
        Calculate the gradient of the loss using Eq.(1);
        Update E, F and the GNN with the gradient of the loss using stochastic gradient descent;
    end for

where k is the number of samples in Dquery.

    4 Experiments

The performance of the task-adaptation method on two common datasets (MiniImageNet and CUB200-2011) is reported in this section. Firstly, a brief introduction to the two datasets and their category allocation is given. Then the accuracy of the proposed method is compared with the baselines on two backbone networks to verify its advancement. Finally, ablation experiments are conducted to further test the significance of each module in TAGN.

    4.1 Datasets

The MiniImageNet (Mini) dataset[15] is excerpted from the ImageNet dataset and is the benchmark dataset in the field of meta-learning and few-shot learning. It contains 100 categories with 60 000 images, 600 samples per category.

The CUB200-2011 (CUB) dataset[15] is a fine-grained dataset presented in 2010, containing 200 categories with 11 788 images. More details on the settings of these two datasets are shown in Table 1.

    Table 1 Descriptive statistics of datasets

    4.2 Backbone networks

Two traditional feature extraction networks are used as E(x), namely a four-layer convolutional network (ConvNet-4) and a residual network (ResNet-12). Next, they are introduced in detail.

ConvNet-4 consists of 4 identical convolutional blocks; each block contains a 3 × 3 convolutional layer, a batch normalization layer, an activation function layer and a max pooling layer for compression. In order to reduce the amount of computation in subsequent operations, a global max pooling layer (GMPL) that reduces the dimension of the feature representation is added at the end. Its network structure is shown in Fig.2(a).

ResNet-12 is composed of 4 convolutional blocks; each block contains a 3 × 3 convolutional layer, a batch normalization layer and an activation function layer. Also to reduce the amount of computation, a global average pooling layer (GAPL) is added at the end. Its network structure is shown in Fig.2(b).
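A minimal numpy sketch of one such convolutional block (3 × 3 'same' convolution, batch normalization, leaky ReLU, 2 × 2 max pooling) is shown below. It normalizes over the spatial positions of a single input rather than over a batch, a simplification of true batch statistics, and all names are illustrative.

```python
import numpy as np

def conv_block(x, w, eps=1e-5, slope=0.01):
    """One ConvNet-4-style block. x: (C_in, H, W); w: (C_out, C_in, 3, 3).
    Returns a (C_out, H/2, W/2) feature map (H, W assumed even)."""
    c_out, c_in, _, _ = w.shape
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))    # zero-pad for 'same' output size
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):                      # naive 3x3 convolution
        for i in range(c_in):
            for di in range(3):
                for dj in range(3):
                    out[o] += w[o, i, di, dj] * xp[i, di:di + h, dj:dj + wd]
    # per-channel normalization (stand-in for batch norm at inference)
    mu = out.mean(axis=(1, 2), keepdims=True)
    var = out.var(axis=(1, 2), keepdims=True)
    out = (out - mu) / np.sqrt(var + eps)
    out = np.where(out > 0, out, slope * out)   # leaky ReLU
    # 2x2 max pooling
    return out.reshape(c_out, h // 2, 2, wd // 2, 2).max(axis=(2, 4))
```

Stacking four such blocks and then applying global max pooling gives the ConvNet-4 feature extractor; ResNet-12 additionally adds residual connections around its blocks.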

    4.3 Results and discussion

Table 2 and Table 3 show the results of the performance comparison between the task-adaptive method and the baselines, where the model of this paper achieves the best performance on both datasets. Among all models, the two nonlinear-neural-network-based models, the relation network and GNN, show better results than the traditional distance-function-based models, the matching network and the prototypical network. For example, under the 5-way 1-shot setting the accuracy of GNN is higher by at least 0.16% (50.46% vs. 50.62%) on MiniImageNet and 1.27% (62.45% vs. 63.72%) on CUB200-2011, while under the 5-way 5-shot setting the accuracy of GNN is higher by 0.62% (65.85% vs. 66.47%) on MiniImageNet and 5.44% (76.12% vs. 81.56%) on CUB200-2011. To further demonstrate the advancement of the method of this paper, it is compared with EGNN[20] and TPN[21]: compared with EGNN, the accuracy is improved by at least 0.97% (52.46% vs. 53.43%) and 1.05% (81.64% vs. 82.69%) under the 5-way 1-shot and 5-way 5-shot settings respectively, and compared with TPN the accuracy is improved by at least 0.08% (53.35% vs. 53.43%) and 0.97% (81.72% vs. 82.69%) under the 5-way 1-shot and 5-way 5-shot settings respectively (as shown in Table 2 and Table 3).

    Table 2 Few-shot classification accuracy on MiniImageNet dataset

In addition, in order to verify the effectiveness of the task-adaptation module (TA), the self-attention module is added to the original structures of the matching network and the prototype network. The results show that, with the addition of the self-attention mechanism, on MiniImageNet the accuracy of the matching network increases from 43.84% to 52.84% and the accuracy of the prototype network increases from 49.54% to 52.31% (both under the 5-way 1-shot setting).

    Table 3 Few-shot classification accuracy on CUB200-2011 dataset

As for CUB200-2011, the accuracy of the matching network increases from 62.26% to 66.98%, and the accuracy of the prototype network rises from 51.31% to 68.65% (both under the 5-way 1-shot setting). This proves that using the self-attention module as a supplement to the feature extraction module yields favorable results. Moreover, the task-adaptive graph neural network improves the accuracy over the task-unknown baselines by at least 0.5% (62.10% vs. 62.60%) and 0.13% (77.13% vs. 77.26%) on MiniImageNet and CUB200-2011 respectively. This implies that the model that uses the self-attention mechanism to obtain task-adaptive feature vectors and then utilizes a GNN as the classification module achieves good performance.

    4.4 Ablation experiments

To examine how the regularization λ impacts the results, another experiment is carried out in which the value of λ is changed while all other parameters are fixed to the values that produce the best results. The value of λ is set to 0, 1, 10 and 100, and the changes in classification accuracy are observed with ConvNet-4 on the MiniImageNet dataset. The results show that when the value of λ changes within a certain range, the accuracy increases, and the model performs best when λ = 10. The overall results are depicted in Fig.5.

In order to prove the advancement of TAGN, comparative experiments are conducted on the MiniImageNet and CUB200-2011 datasets based on ConvNet-4. The comparison baselines are the matching network using the cosine distance, the prototype network using the squared Euclidean distance, and GNN, which changes the distance measurement from the Euclidean domain to the non-Euclidean domain. Results are shown in Fig.6, where MatchNet, ProtoNet, GNN and TA represent the matching network, the prototype network, the graph neural network and the self-attention (task-adaptation) module, respectively. The results prove that methods training the metric function with a neural network are better than traditional fixed-distance-function methods.

    Fig.5 Line chart of the accuracy for different value of regularization

    Fig.6 Histogram of the accuracy for different network structure

As for the accuracy on the MiniImageNet dataset, the addition of the self-attention module improves the accuracy from 49.54% and 65.21% to 52.31% and 71.35% under the 1-shot and 5-shot settings, respectively, and the method TAGN increases the accuracy by at least 2.12% (52.31% vs. 54.43%) and 0.82% (71.35% vs. 72.17%) under the 1-shot and 5-shot settings. As for the accuracy on the CUB200-2011 dataset, the addition of the self-attention module improves the accuracy from 51.31% and 68.65% to 70.87% and 80.78% under the 1-shot and 5-shot settings, respectively, and the method TAGN increases the accuracy by at least 0.29% (68.65% vs. 68.94%) and 1.13% (81.56% vs. 82.69%) under the 1-shot and 5-shot settings.

    5 Conclusions

A feature learning model based on the metric learning algorithm is proposed in this paper, in which the self-attention mechanism is utilized to produce task-adaptive feature vectors and an adaptable model. In addition, a classification framework based on the graph neural network is established, in which the relationships between samples can be fully explored by the model, improving the accuracy of few-shot classification. The experimental results on the MiniImageNet and CUB200-2011 datasets prove that the method of this paper is better than task-independent methods. In the future, the model will be extended to two further settings: zero-shot classification and generalized few-shot classification.
