
    Multi-attention fusion and weighted class representation for few-shot classification

    High Technology Letters, 2022, No.3

    ZHAO Wencang (趙文倉), QIN Wenqian, LI Ming

    (College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, P.R.China)

    Abstract The existing few-shot learning (FSL) approaches based on metric-learning usually lack attention to the distinction of feature contributions, and the importance of each sample is often ignored when obtaining the class representation, which limits the performance of the model. In addition, the similarity metric method is also worthy of attention. Therefore, a few-shot learning approach called MWNet, based on multi-attention fusion and weighted class representation (WCR), is proposed in this paper. Firstly, a multi-attention fusion module is introduced into the model to highlight the valuable parts of the features and reduce the interference of irrelevant content. Then, when obtaining the class representation, a weight is given to each support set sample, and the weighted class representation is used to better express the class. Moreover, a mutual similarity metric method is used to obtain a more accurate similarity relationship through the mutual similarity for each representation. Experiments prove that the proposed approach performs well in few-shot image classification and is remarkably competitive compared with related advanced techniques.

    Key words: few-shot learning (FSL), image classification, metric-learning, multi-attention fusion

    0 Introduction

    Learning from a few examples is still a key challenge for many machine vision tasks. Humans can recognize new objects from a small number of samples. In order to imitate this cognitive intelligence of humans, few-shot learning (FSL) has been proposed, and it has quickly become a hot and challenging research field. It aims to learn a classifier that has good generalization ability when faced with a few new unknown class samples.

    The existing few-shot learning technologies can be roughly divided into two categories: approaches based on metric-learning[1-9] and approaches based on meta-learning[10-13]. The basic idea of the former is to learn an embedding space and classify samples according to the similarity metric between the query sample and the representation of each class. The goal of the latter is to learn a cross-task meta-learner to improve generalization ability through cross-task knowledge transfer. Both types of approaches are designed to allow the trained model to classify query samples with limited support samples.

    Excellent research progress has been made by FSL approaches based on metric-learning, but the existing approaches still have some limitations. Breaking these limitations to improve model performance is the main motivation of this work.

    Such approaches usually have a simple and efficient architecture, but in the feature extraction stage they still place insufficient emphasis on highlighting valuable regions and weakening irrelevant regions. This limits the feature expression ability of the model and can even cause serious deviations in the subsequent metric process, so the desired classification effect is not achieved. This problem can be solved ingeniously by the attention mechanism[14]. Inspired by the human visual system, it has been widely used in many fields of deep learning[15-16]; it focuses on more important information and reduces attention to irrelevant information. In order to obtain more valuable feature representations, in this work the attention mechanism is introduced into few-shot classification tasks in the form of a multi-attention fusion module (MAFM). By acquiring the attention weights of the features in the spatial dimension and the channel dimension and performing multi-attention fusion, the features extracted by the model become more meaningful.

    Most of the existing approaches are similar to PrototypicalNet[2], where the average of the feature vectors of the support set samples is used as the prototype (class representation) of each class. The value of each sample of the class is often not considered, and the contribution of every support set sample in each class to the prototype is regarded as identical. In this case, however, the prototype is easily affected by invalid or interfering samples and may fail to be representative. That is to say, the usefulness of each feature vector to the prototype cannot be effectively evaluated, and the similarity judgment result will be biased. In response to this problem, a weighted class representation (WCR) is proposed, which is generated after evaluating the value of each support sample. It reduces the negative impact of interfering samples and assigns greater weight to more valuable samples, so that the final weighted class representation is more beneficial to FSL.

    In addition, such approaches obtain prototypes by shrinking the support set and then directly compare the one-way similarity between query samples and prototypes. They hardly consider that similarity comparison is not restricted to a single direction, but can also be measured from the perspective of class to query. Some similarity deviations may therefore be produced in the metric stage, affecting the final classification results. In this paper, the conventional one-way similarity metric method is not used; instead, the interaction between samples and class representations is fully considered. A mutual similarity metric method is introduced in the feature space to determine the class attribution of each sample, yielding a more convincing class discrimination result and better improving the performance of the model.

    A series of experimental results on the few-shot image classification benchmark dataset miniImageNet[1] and the fine-grained dataset Stanford Dogs[17] show that the few-shot learning approach based on multi-attention fusion and weighted class representation proposed in this paper has excellent performance.

    The main work and contributions of this paper are as follows.

    (1) An effective multi-attention fusion module is designed to optimize the features by acquiring attention in the spatial dimension and the channel dimension, so that the valuable information is highlighted by the extracted features and the interference of irrelevant information is reduced.

    (2) The weighted class representation is used to replace the traditional approaches of finding the mean value of feature vectors as the class prototype, where the obtained class representation features are more valuable and more accurate.

    (3) In the classification stage,a mutual similarity metric method is introduced, and two-way similarity discrimination between the query and the class is carried out, and the mutual similarity for each representation is combined to make the final similarity metric result more reliable.

    1 Related work

    The development of few-shot learning has produced many breakthroughs in this area; its two major branches are briefly introduced below.

    Approaches based on metric-learning. Most of these approaches learn a similarity comparison of the information among samples. MatchingNet[1] introduces an attention mechanism and uses the attention scores of the extracted features to predict the class of the query sample. PrototypicalNet[2] shrinks the support set of each class into a class representation, called a prototype, by taking the mean vector of the support set samples of that class, and then compares the distance (similarity) between the query image and each prototype to classify it. RelationNet[3] uses a neural network to analyze the similarity between samples, which can be regarded as a non-linear classifier. Task dependent adaptive metric (TADAM)[4] relies on conditional batch normalization to increase task adaptability and learns a task-related metric space. DN4[5] considers the similarity metric between images and classes from the perspective of local descriptors. TapNet[6] and DSN[7] study subspaces and verify the rationality of subspace modeling for few-shot learning. A category traversal module (CTM) is used in Ref.[8] to make full use of the intra-class commonality and inter-class uniqueness of features. MADN4[9] introduces an attention mechanism to improve the extracted local descriptors.

    Approaches based on meta-learning. Such approaches usually learn a meta-learner that adjusts the optimization algorithm. Model-agnostic meta-learning (MAML)[10] trains the meta-learner to perform proper parameter initialization to better adapt to a new task, so that the model can perform well on the new task with only a few steps of gradient descent. MetaOptNet[11] incorporates a differentiable quadratic programming (QP) solver to achieve good performance. Latent embedding optimization (LEO)[12] trains the model in a high-dimensional parameter space and needs only a few updates in the low-data regime. Attentive weights generation for few-shot learning via information maximization (AWGIM)[13] is also based on parameter optimization, learning to directly generate the weight parameters of the classifier with a generator.

    The approach proposed in this paper is based on metric-learning. It differs from related work in its treatment of the attention mechanism, the support set class representation, and the similarity metric method. In the experimental section, the validity and advancement of these contributions are verified, and comparisons and connections with related approaches are made.

    2 Proposed approach

    2.1 Problem set-up

    The episodic training mechanism[1] is widely used in the training process of FSL. Usually the support set and query set are randomly selected from the training set to train the model, and the entire training process is divided into many episodes. In each episode, a small number of labeled samples in the support set are used for training, and each query image then determines which category in the support set it belongs to. In this work, the episodic training mechanism is also used to train the model. Specifically, each episode contains N_e sample classes randomly selected from the training set. Each class m (m ∈ {1, …, N_e}) has N_s support samples and N_q query samples, forming the support set S_m and query set Q_m of the episode, respectively. The samples in the support set and the query set are randomly selected and do not intersect, and their numbers can be set manually; the support images and query images of class m together form the support set and query set of this class. According to the number of sample classes N_e and the number of support set samples N_s, each episode is referred to as N_e-way N_s-shot FSL.
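    As a concrete illustration, the episodic sampling described above can be sketched in Python. The helper below is hypothetical: `dataset` is assumed to map each class label to its list of samples.

```python
import random

def sample_episode(dataset, n_way, n_shot, n_query):
    """Sample one N_e-way N_s-shot episode: n_way classes, each giving
    n_shot support samples (S_m) and n_query disjoint query samples (Q_m)."""
    classes = random.sample(sorted(dataset), n_way)          # N_e random classes
    support, query = {}, {}
    for m in classes:
        # draw support and query together so the two sets never overlap
        picks = random.sample(dataset[m], n_shot + n_query)
        support[m] = picks[:n_shot]
        query[m] = picks[n_shot:]
    return support, query
```

    A 5-way 1-shot episode with 12 queries per class, as used later in the experiments, would be `sample_episode(dataset, 5, 1, 12)`.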

    2.2 Model overview

    As shown in Fig.1, the support set samples and query samples are input into the feature extraction backbone network f_θ embedded with the multi-attention fusion module. In the process of feature extraction, after multi-attention acquisition in the spatial dimension and channel dimension, more distinguishing and valuable features are obtained. In the feature space, the positions of samples belonging to the same class are close together. According to the method proposed in this paper, a weight is assigned to each support set sample, and the weighted class representation of each class is calculated. Then the mutual similarity between the query sample and the weighted class representation is measured. The category of the class representation with the highest mutual similarity score is the classification result of the query sample. The specific content of each part is presented in the following sections.

    Fig.1 The overall framework of the proposed MWNet

    2.3 Multi-attention fusion module

    In order to obtain more distinguishing features in the feature extraction stage, an attention module based on multi-attention fusion is designed in this paper, mainly composed of two parts: a spatial attention part and a channel attention part. Different from a single-dimensional attention mechanism, this module obtains attention weights in the spatial dimension and channel dimension respectively and connects them in series, so it can be regarded as multi-dimensional attention fusion. The obtained features highlight the valuable regions while retaining rich information and suppressing irrelevant regions, so the module is called a multi-attention fusion module.

    Spatial attention. This part of the structure is shown in Fig.2. To find the weight relationship in the spatial dimension, the intermediate feature map F ∈ R^(c×h×w) extracted in the previous stage is reduced through a 1 × 1 convolutional layer with one output channel, and the Sigmoid function is then applied to obtain the spatial attention weight S_A ∈ R^(1×h×w). The calculation process can be expressed as

    S_A = σ(Conv(F))    (1)

    where σ represents the Sigmoid function and Conv represents the convolution operation. This is equivalent to compressing the information of the original c channels into a single channel and using it as a weight; the weight S_A is then multiplied with F to obtain the spatial attention optimization feature map F′ ∈ R^(c×h×w), namely F′ = F ⊗ S_A.

    Fig.2 Spatial attention acquisition process
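    The spatial attention step can be sketched with NumPy. The 1 × 1 convolution is written out as a weighted sum over channels, with `w` and `b` standing in for the learned parameters; this is a minimal sketch, not the authors' exact implementation.

```python
import numpy as np

def spatial_attention(F, w, b=0.0):
    """Eq.(1) sketch: a 1x1 convolution (weights w, bias b) compresses the
    c channels of F (shape (c, h, w)) into one map, a sigmoid turns it into
    the spatial weight S_A (shape (h, w)), and F' = F (x) S_A is returned."""
    logits = np.tensordot(w, F, axes=([0], [0])) + b  # 1x1 conv over channels
    SA = 1.0 / (1.0 + np.exp(-logits))                # sigmoid, values in (0, 1)
    return F * SA[None, :, :]                         # broadcast over channels
```

    Because S_A lies in (0, 1), the optimized map F′ keeps the shape of F while down-weighting low-scoring spatial positions.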

    Channel attention. This part of the structure is shown in Fig.3. Global average pooling is performed on F′ obtained through spatial attention optimization, that is, the global information in each channel dimension of F′ is compressed into the global average pooling vector F_G ∈ R^(c×1×1). F_G then passes through two fully connected layers in turn, and the Sigmoid function is used to obtain the final channel attention weight C_A ∈ R^(c×1×1), which can be expressed as

    C_A = σ(full(GAP(F′)))    (2)

    where full and GAP respectively denote passing the feature through the two fully connected layers and the global average pooling layer. C_A is then multiplied with F′ to get the final multi-attention optimization feature map F″ ∈ R^(c×h×w), namely F″ = F′ ⊗ C_A.

    Fig.3 Channel attention acquisition process
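    A matching NumPy sketch of the channel attention step follows. The paper does not specify the activation between the two fully connected layers, so a ReLU is assumed here; `W1` and `W2` stand in for the learned weights.

```python
import numpy as np

def channel_attention(Fp, W1, W2):
    """Eq.(2) sketch: global average pooling compresses F' (shape (c, h, w))
    into F_G (shape (c,)); two fully connected layers and a sigmoid give the
    channel weight C_A (shape (c,)); F'' = F' (x) C_A is returned."""
    FG = Fp.mean(axis=(1, 2))                   # GAP over each channel
    hidden = np.maximum(W1 @ FG, 0.0)           # first FC layer + assumed ReLU
    CA = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))   # second FC layer + sigmoid
    return Fp * CA[:, None, None]               # broadcast over h and w
```

    As with the spatial branch, the feature map size is unchanged, so the module can be dropped into a backbone without altering downstream shapes.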

    After the above steps, the model realizes the multi-information interaction between the spatial dimension and the channel dimension, that is, multi-attention fusion. Moreover, this module can be directly embedded in the feature extraction backbone, effectively highlighting the important parts of the features while keeping the original feature map size unchanged, which is more conducive to the subsequent similarity metric.

    2.4 Weighted class representation

    Ideally, samples belonging to the same class should be as close as possible in the feature space, but occasionally one or several interfering samples inevitably deviate from the other samples in the class. In this case, if all samples of the class are treated equally and their feature vectors are averaged, the representativeness of the obtained prototype may not be ideal. This results from an oversight in obtaining the prototype: in addition to positive effects, some samples also introduce interference. If these negative effects are not fully considered, the obtained prototype (class representation) will be biased. To reduce this deviation, a weighted class representation is proposed in this work, in which each sample carries its own weight. Instead of simply averaging the feature vectors of the support set samples, it fully considers that each sample has its own degree of positive influence when calculating the class representation. The idea is to give more weight to samples with greater positive influence. Specifically, among the support samples of the same class, the smaller the Euclidean distance between a sample and the other samples, the larger the proportion of that sample in the construction of the class representation, and conversely the smaller the proportion. Based on this idea, the calculated representation of each class is more ideal.

    The process is as follows. First, the support sample x_i and the other samples x_j (j ≠ i) of the same class pass through the feature extraction network embedded with the attention module to obtain the attention-optimized feature vectors f_θ(x_i) and f_θ(x_j). Then, within this class, the average of the Euclidean distances between x_i and each x_j can be expressed as

    α_i = (1/(N_s − 1)) ∑_{j≠i} d(f_θ(x_i), f_θ(x_j))    (3)

    The larger α_i is, the greater the difference between sample x_i and the other samples in the same class, and the smaller the similarity.

    Finally, each support sample x_i in the class is combined with its weight w_i in a weighted summation to obtain the weighted class representation of the current class m:

    w_i = exp(−α_i) / ∑_{i=1}^{N_s} exp(−α_i)    (4)

    ξ_m = ∑_{(x_i, y_i)∈S_m} w_i f_θ(x_i)    (5)
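    Eqs.(3)-(5) can be condensed into a few lines of NumPy, assuming d is the Euclidean distance (as stated above) and the class's support features are stacked in one array:

```python
import numpy as np

def weighted_class_representation(feats):
    """Compute xi_m from the attention-optimized support features of one
    class, feats with shape (N_s, d_feat)."""
    n_s = len(feats)
    dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    # self-distance is 0, so summing each row covers exactly the j != i terms
    alpha = dist.sum(axis=1) / (n_s - 1)         # Eq.(3): mean distance to the others
    w = np.exp(-alpha) / np.exp(-alpha).sum()    # Eq.(4): softmax over -alpha
    return (w[:, None] * feats).sum(axis=0)      # Eq.(5): weighted sum
```

    An outlying support sample receives a large α_i and hence an exponentially small weight, so the resulting representation stays near the well-clustered samples rather than the plain mean.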

    Calculating the weighted class representation largely prevents samples that deviate greatly from the others of the same class from interfering with the computation of the entire class representation, and the more accurate class representation gives the model better performance.

    2.5 Mutual similarity metric method

    Conventional methods usually only carry out a one-way metric from a query sample to each class, without considering the metric from each class to the query samples. This issue is studied in this work, and a mutual similarity metric method is introduced. The idea is as follows: when the similarity from query x̂_q to the weighted class representation ξ_m of class m is high, and the similarity from ξ_m to the query x̂_q is also high, the similarity discrimination result that x̂_q belongs to class m can be considered highly credible. The specific process of the mutual similarity metric method is as follows.

    The query sample x̂_q passes through the feature extraction network embedded with the attention module to obtain the attention-optimized sample feature f_θ(x̂_q); the probability that it belongs to each class m can then be calculated with the Softmax function:

    Sim(x̂_q → ξ_m) = exp(−d(f_θ(x̂_q), ξ_m)) / ∑_{m′} exp(−d(f_θ(x̂_q), ξ_{m′}))    (6)

    Symmetrically, the similarity from the class representation ξ_m to each query sample is

    Sim(ξ_m → x̂_q) = exp(−d(ξ_m, f_θ(x̂_q))) / ∑_{q′} exp(−d(ξ_m, f_θ(x̂_{q′})))    (7)

    and the mutual similarity is their product, Sim(x̂_q ↔ ξ_m) = Sim(x̂_q → ξ_m) · Sim(ξ_m → x̂_q).
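    The two directions and their combination can be sketched as follows; `queries` holds the features f_θ(x̂_q) row-wise and `reps` holds the weighted class representations ξ_m (a sketch under the Euclidean-distance assumption):

```python
import numpy as np

def mutual_similarity(queries, reps):
    """queries: (N_q, d_feat); reps: (N_e, d_feat). Returns the (N_q, N_e)
    matrix of mutual similarities Sim(q <-> m)."""
    d = np.linalg.norm(queries[:, None, :] - reps[None, :, :], axis=-1)
    q_to_m = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)  # softmax over classes
    m_to_q = np.exp(-d) / np.exp(-d).sum(axis=0, keepdims=True)  # softmax over queries
    return q_to_m * m_to_q                                       # elementwise product
```

    Classifying a query then amounts to taking the argmax of its row of the returned matrix.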

    By carrying out similarity metrics from different angles and combining them appropriately, the model can make better use of the interrelationship between the query and the class representation, interactively fuse information, and make the obtained similarity metric result more accurate.

    2.6 Training algorithm

    Algorithm 1 shows the episodic training process of MWNet. For each episode, the support set samples and query samples are input into the feature extraction network embedded with the multi-attention fusion module, and the optimized sample features are obtained after attention acquisition in the spatial and channel dimensions. Next, a weight is assigned to each support set sample based on the Euclidean distance and Softmax function, and the weighted class representation of each class is calculated. Finally, the mutual similarity metric method is used in the feature space to predict the class of the query samples, and the parameters of the feature extraction network are updated by minimizing the classification loss of the episode. The updated model is then used to process the next episode until training is completed.

    3 Experiments

    In order to evaluate the performance of the proposed approach, this section conducts experiments on the few-shot learning benchmark dataset miniImageNet[1] and compares it with existing advanced approaches. Further, to explore the effectiveness of the approach on fine-grained images, the fine-grained dataset Stanford Dogs[17], with small inter-class changes and large intra-class changes, is selected for experiments and compared with related metric-learning based approaches. In addition, feature space visualization, research on higher way training, and ablation experiments are also carried out in this section.

    3.1 Datasets

    miniImageNet dataset is a small version of ImageNet[18]. It has a total of 100 classes, each with 600 samples, and the image resolution is 84 × 84. This paper adopts the division method in PrototypicalNet[2]: 64 classes for training, 16 classes for verification, and 20 classes for testing.

    Stanford Dogs dataset is often used for fine-grained image classification. It has 120 classes and a total of 20 580 images. According to the method in Ref.[5], it is divided into 70 training classes, 20 verification classes, and 30 testing classes.

    Algorithm 1 The process of a training episode for MWNet
    Input: each episode e_i with S and Q
    1:  for i in {e_1, …, e_I} do
    2:    L_i ← 0
    3:    for sample x in S, Q do
    4:      F ← intermediate feature map of sample x
    5:      S_A ← σ(Conv(F))                         ▹ spatial attention weight
    6:      F′ ← F ⊗ S_A                             ▹ spatial attention optimization feature map
    7:      C_A ← σ(full(GAP(F′)))                   ▹ channel attention weight
    8:      F″ ← F′ ⊗ C_A                            ▹ multi-attention optimization feature map
    9:      f_θ(x) ← the final feature map of sample x
    10:   end for
    11:   for m in {1, …, N_e} do
    12:     for (x_i, y_i) ∈ S_m, (x_j, y_j) ∈ S_m do
    13:       α_i ← (1/(N_s − 1)) ∑_{j≠i} d(f_θ(x_i), f_θ(x_j))
    14:       w_i ← exp(−α_i) / ∑_{i=1}^{N_s} exp(−α_i)
    15:       ξ_m ← ∑_{(x_i, y_i)∈S_m} w_i f_θ(x_i)  ▹ weighted class representation
    16:     end for
    17:   end for
    18:   for m in {1, …, N_e} do
    19:     for q in {1, …, N_q} do
    20:       Sim(x̂_q → ξ_m) ← exp(−d(f_θ(x̂_q), ξ_m)) / ∑_{m′} exp(−d(f_θ(x̂_q), ξ_{m′}))
    21:       Sim(ξ_m → x̂_q) ← exp(−d(ξ_m, f_θ(x̂_q))) / ∑_{q′} exp(−d(ξ_m, f_θ(x̂_{q′})))
    22:       Sim(x̂_q ↔ ξ_m) ← Sim(x̂_q → ξ_m) · Sim(ξ_m → x̂_q)  ▹ mutual similarity
    23:     end for
    24:   end for
    25:   L_i ← (1/(N_e N_q)) ∑_m ∑_q [−log Sim(x̂_q ↔ ξ_m)]
    26:   update θ using ∇L_i
    27: end for

    3.2 Experimental setup

    The experiments in this section use typical few-shot image classification settings, namely N-way K-shot settings. The Adam algorithm[19] is used for training, the initial learning rate is 10^-3, and more support set classes are used for training than for testing. In the experiments on miniImageNet, there are a total of 6 × 10^4 training episodes, with 12 query samples per class. For the 5-way 1-shot tasks, the learning rate is divided by 10 after every 2.5 × 10^4 episodes; for the 5-way 5-shot tasks, it is divided by 10 after every 4 × 10^4 episodes. In the test phase, there are 15 query samples for each class, and the average accuracy over 4 × 10^4 randomly sampled episodes is used to evaluate the performance of the model. For the fine-grained few-shot classification experiments on the Stanford Dogs[17] dataset, since the number of samples is much smaller than in miniImageNet, data augmentation is used to avoid model overfitting. The rest of the experimental settings are the same as those for miniImageNet.

    3.3 Feature extraction backbone

    In order to better compare the model with other advanced approaches, the experiments on miniImageNet in this section use two feature extraction backbone networks. The first is ResNet-12[20], the same as that used in TapNet[6]. It consists of four residual blocks with channel numbers (represented by L in Fig.4) of 64, 128, 256, and 512, respectively. Each residual block contains three 3 × 3 convolutional blocks and a shortcut connection. Each convolutional block is followed by a batch normalization (BN) layer and the ReLU activation function, there is a 2 × 2 max-pooling layer after each residual block, and the shortcut connection contains a 3 × 3 convolutional layer and a batch normalization layer. The multi-attention fusion module is embedded after the last convolutional layer in each residual block, and a global average pooling layer is added at the end of the backbone. The structure of the entire network is shown in Fig.4.

    In addition, the common four-layer convolutional network Conv-4 is also used as a backbone, as in PrototypicalNet[2]. It has 4 convolutional blocks, each of which contains 64 3 × 3 kernels, a batch normalization layer, a ReLU activation function, and a 2 × 2 max-pooling layer. In Conv-4, the multi-attention fusion module is embedded after the first two convolutional blocks, as shown in Fig.5.

    Fig.4 ResNet-12 embedded with multi-attention fusion module

    Fig.5 Conv-4 embedded with multi-attention fusion module
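    For the 84 × 84 inputs used in these experiments, the Conv-4 feature map size can be traced with a small helper. Padding 1 in the 3 × 3 convolutions is assumed, so only the max-pooling changes the spatial size:

```python
def conv4_output_shape(h, w, blocks=4, channels=64):
    """Each Conv-4 block keeps h x w through the (assumed) padded 3x3
    convolution and halves it with 2x2 max-pooling (floor division)."""
    for _ in range(blocks):
        h, w = h // 2, w // 2
    return channels, h, w
```

    Under these assumptions an 84 × 84 image yields a 64 × 5 × 5 feature map, which is the tensor the multi-attention fusion module and the metric stage operate on.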

    3.4 Comparison results

    Table 1 shows the experimental results of the proposed MWNet compared with advanced techniques on miniImageNet[1] when Conv-4 and ResNet-12 are used respectively. When using a backbone of the same size, MWNet always outperforms the other approaches on the 5-way 1-shot and 5-way 5-shot tasks. Compared with the benchmark approach PrototypicalNet[2], which is also based on metric-learning, the model achieves obvious advantages: with Conv-4, the classification accuracies on the 5-way 1-shot and 5-way 5-shot tasks are 4.15% and 4.19% higher, respectively, and with ResNet-12 the advantages are 4.01% and 4.43%, respectively. Compared with the subspace-based TapNet[6] and DSN[7], the approach achieves better results without involving a complex subspace structure, and is therefore simpler and more efficient. CTM[8] involves model fine-tuning and uses ResNet-18, which is deeper than ResNet-12, while MWNet achieves better performance through end-to-end training without such a deep backbone. In addition, compared with advanced meta-learning based techniques, MWNet remains strongly competitive. It is worth noting that both LEO[12] and AWGIM[13] use the deeper and wider wide residual network (WRN-28-10)[21], and the model in this paper competes with them without such a complex network architecture while achieving higher classification accuracy.

    Table 1 Accuracy comparison with other approaches on miniImageNet

    3.5 Fine-grained few-shot classification

    In order to explore the performance of the proposed approach on fine-grained few-shot image classification, the Stanford Dogs[17] dataset is selected and 5-way 1-shot and 5-way 5-shot experiments are performed. For the sake of comparison, Conv-4 with the same size as in the related approaches is used as the backbone.

    As shown in Table 2, the model in this paper is effective on the fine-grained dataset.

    Table 2 5-way 1-shot and 5-way 5-shot fine-grained few-shot classification on Stanford Dogs

    In addition, when a feature extraction backbone of the same size is used, compared with related approaches based on metric-learning, the model in this paper achieves better classification accuracy. Compared with the benchmark approach PrototypicalNet[2], the accuracy is 13.02% and 22.62% higher on the 5-way 1-shot and 5-way 5-shot tasks, respectively. Compared with the local descriptor-based DN4[5] and MADN4[9], which adds an attention mechanism, the model in this paper still performs better.

    3.6 Visualizations of feature space

    Fig.6 t-SNE visualization of feature space

    In order to illustrate the multi-attention fusion and weighted class representation more vividly, the relevant feature spaces in the experiments on miniImageNet[1] are visualized with t-SNE in Fig.6. Conv-4 is used as the backbone, and PrototypicalNet[2] is re-implemented with the settings in this paper. As shown in Fig.6, different shapes represent different classes of support samples; there are 5 classes in total, and the number of support samples for each class is set to 15. The shape with a black frame represents the class representation. More specifically, Fig.6(a) shows the feature space of PrototypicalNet, an ordinary feature space without the attention mechanism and weighted class representation. Fig.6(b) shows the feature space after multi-attention fusion. Owing to the acquisition of multi-dimensional attention, the support set samples are closer together than in the original feature space. However, because individual samples deviate from the other samples of the same class, the prototype calculated from the class mean vector is interfered with by such samples to a certain extent, which induces some misclassifications. Fig.6(c) shows the feature space in which weighted class representations are introduced after multi-attention fusion. Here the value of each support set sample is considered, and each sample is assigned a corresponding weight based on the Euclidean distance and Softmax function when obtaining the class representation, so that the weighted class representation better represents the class. This largely avoids the misclassification seen in Fig.6(b).

    3.7 Research on higher way training

    According to previous experience, using a higher number of ways during training, that is, more support set classes in each episode, yields a higher classification accuracy. To find a more suitable way number for the model in this paper, FSL experiments with different way number settings are performed on the miniImageNet[1] dataset. In this section, ResNet-12 is used as the feature extraction backbone, the other experimental settings remain unchanged, and the number of shots for training and testing is the same. The experimental results are shown in Fig.7. For the 5-way 1-shot tasks, using the 15-way 1-shot setting during training obtains better classification accuracy, and for the 5-way 5-shot tasks, using the 20-way 5-shot setting during training obtains better classification results.

    3.8 Ablation study

    In order to further verify that each part of the work in this paper helps improve the classification performance of the model, 5-way 1-shot and 5-way 5-shot ablation studies are conducted in this section on miniImageNet[1]. Considering the relevance to this work, the selected baseline approach is the metric-learning based PrototypicalNet[2]. For a fair comparison, the experimental data with ResNet-12 as the backbone are used as a reference, and the relevant experiments are implemented in accordance with the settings in this paper.

    Fig.7 Results with different number of ways

    First, this section studies the influence of different attention on the performance of the model. For the sake of comparison, only attention is introduced in this part of the experiment. As shown in Fig.8, the x-axis represents the classification accuracy of the model when only channel attention, only spatial attention, channel-spatial attention in parallel, or channel-spatial attention in series is introduced. Under the 5-way 1-shot setting, introducing only channel attention increases the accuracy by 0.62%; only spatial attention, by 0.59%; channel-spatial attention in parallel, by 0.91%; and channel-spatial attention in series, by 1.71%. Under the 5-way 5-shot setting, the accuracies of the corresponding four variants increase by 0.64%, 0.68%, 0.96%, and 1.28%, respectively. Considering these results comprehensively, the series attention arrangement is chosen for the model.

    Fig.8 The influence of different attention on miniImageNet

    What follows is the rest of the ablation study. As shown in Fig.9, under the 5-way 1-shot setting, adding the MAFM increases the accuracy of the model by 1.17%. Introducing WCR then raises accuracy by a further 1.87%, and using the mutual similarity metric method adds another 0.97%, at which point the model reaches its highest accuracy, i.e., MWNet. Under the 5-way 5-shot setting, the accuracy shows the same upward trend, with the three corresponding parts contributing 1.28%, 1.97%, and 1.18% respectively. Clearly, each part of the work in this paper benefits few-shot classification performance, and the classification accuracy is highest when the multi-attention fusion module, weighted class representation, and mutual similarity metric are present together, which is the MWNet proposed in this paper.
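The weighted class representation evaluated above can be illustrated with a small sketch. Following the paper's stated use of Euclidean distance and Softmax, this toy weights each support sample by the softmax of its negative distance to the plain class mean; the exact weighting scheme in MWNet may differ, so treat this as an assumed simplification.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def weighted_class_representation(support_feats):
    """support_feats: (K, D) features of one class's support samples.
    Down-weights samples far from the class mean (e.g. outliers)."""
    mean = support_feats.mean(axis=0)
    dists = np.linalg.norm(support_feats - mean, axis=1)  # (K,)
    w = softmax(-dists)              # closer sample -> larger weight
    return (w[:, None] * support_feats).sum(axis=0)

# Two consistent samples plus one interfering outlier
feats = np.array([[1.0, 0.0], [1.1, 0.1], [5.0, 5.0]])
rep = weighted_class_representation(feats)
```

Compared with the unweighted prototype (the plain mean), the weighted representation stays close to the consistent samples, which matches the motivation of weakening interfering samples.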

    Fig.9 Ablation study on the miniImageNet dataset

    4 Conclusions

    In this paper, a simple and efficient few-shot learning model is proposed. Through the channel and spatial attention acquired in the feature extraction stage, the extracted features are richer and more discriminative. The importance of each sample is weighed based on the Euclidean distance and the Softmax function, which weakens the negative influence of interfering samples. In the metric phase, information from different angles is fused to obtain a more reliable similarity relationship. A series of experiments on the miniImageNet and Stanford Dogs datasets shows that the approach proposed in this paper is effective and superior, and highly competitive against advanced related techniques. Future work will explore the applicability of the model in more problem settings, such as cross-domain and transductive few-shot classification. In addition, combining few-shot learning with active learning could also be tried.
