
Multi-attention fusion and weighted class representation for few-shot classification

High Technology Letters, 2022, Issue 3

    ZHAO Wencang (趙文倉), QIN Wenqian, LI Ming

    (College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, P.R.China)

Abstract The existing few-shot learning (FSL) approaches based on metric-learning usually pay little attention to the differing contributions of individual features, and the importance of each sample is often ignored when obtaining the class representation, which limits the performance of the model. The similarity metric method also deserves attention. Therefore, a few-shot learning approach called MWNet, based on multi-attention fusion and weighted class representation (WCR), is proposed in this paper. Firstly, a multi-attention fusion module is introduced into the model to highlight the valuable parts of the features and reduce the interference of irrelevant content. Then, when obtaining the class representation, a weight is given to each support set sample, and the weighted class representation is used to better express the class. Moreover, a mutual similarity metric method is used to obtain a more accurate similarity relationship by measuring similarity in both directions for each representation. Experiments prove that the approach performs well in few-shot image classification and is remarkably competitive compared with related advanced techniques.

    Key words: few-shot learning (FSL), image classification, metric-learning, multi-attention fusion

    0 Introduction

Learning from a few examples is still a key challenge for many machine vision tasks. Humans can recognize new objects from a small number of samples. To imitate this cognitive ability, few-shot learning (FSL) has been proposed, and it has quickly become a popular and challenging research field. It aims to learn a classifier that generalizes well when faced with a few samples of new, unknown classes.

The existing few-shot learning technologies can be roughly divided into two categories: approaches based on metric-learning[1-9] and approaches based on meta-learning[10-13]. The basic idea of the former is to learn an embedding space and classify samples according to the similarity metric between the query sample and the representation of each class. The goal of the latter is to learn a cross-task meta-learner that improves generalization through cross-task knowledge transfer. Both types of approaches are designed to allow the trained model to classify query samples with limited support samples.

    Excellent research progress has been made by FSL approaches based on metric-learning, but the existing approaches still have some limitations. Breaking these limitations to improve model performance is the main motivation of this work.

Such approaches usually have a simple and efficient architecture, but in the feature extraction stage they still do too little to highlight valuable regions and weaken irrelevant ones. This limits the feature expression ability of the model and can even cause serious deviations in the subsequent metric process, so the desired classification effect is not achieved. This problem can be solved elegantly by the attention mechanism[14]. Inspired by the human visual system, it has been widely used in many fields of deep learning[15-16]: it focuses on more important information and reduces the focus on irrelevant information. To obtain more valuable feature representations, in this work the attention mechanism is introduced into few-shot classification in the form of a multi-attention fusion module (MAFM). By acquiring attention weights from both the spatial dimension and the channel dimension and fusing them, the module makes the features extracted by the model more meaningful.

Most existing approaches follow PrototypicalNet[2], where the average of the feature vectors of the support set samples is used as the prototype (class representation) of each class. The value of each sample is often not considered, and the contribution of every support sample to its class prototype is treated as equal. In this case, however, the prototype is easily affected by invalid or interfering samples and may not be sufficiently representative. That is to say, the usefulness of each feature vector to the prototype cannot be effectively evaluated, and the similarity judgment will be biased. In response to this problem, a weighted class representation (WCR) is proposed, which is generated after evaluating the value of each support sample. It reduces the negative impact of interfering samples and assigns greater weight to more valuable samples, so that the final weighted class representation is more beneficial to FSL.

In addition, such approaches obtain prototypes by shrinking the support set and then directly compare the one-way similarity between query samples and prototypes. They hardly consider that similarity can also be measured from the perspective of class to query rather than in a single direction, which may produce similarity deviations in the metric stage that affect the final classification results. In this paper, instead of the conventional one-way similarity metric, the interaction between samples and class representations is fully considered. A mutual similarity metric method is introduced in the feature space to determine the class attribution of each sample, yielding a more convincing class discrimination result and better model performance.

A series of experimental results on the few-shot image classification benchmark dataset miniImageNet[1] and the fine-grained dataset Stanford Dogs[17] show that the few-shot learning approach based on multi-attention fusion and weighted class representation proposed in this paper performs excellently.

    The main work and contributions of this paper are as follows.

(1) An effective multi-attention fusion module is designed to optimize the features by acquiring attention in the spatial dimension and the channel dimension, so that the extracted features highlight valuable information and reduce the interference of irrelevant information.

(2) The weighted class representation replaces the traditional approach of taking the mean of the feature vectors as the class prototype, making the obtained class representation features more valuable and more accurate.

(3) In the classification stage, a mutual similarity metric method is introduced: two-way similarity discrimination between the query and the class is carried out, and the two directions are combined to make the final similarity metric result more reliable.

    1 Related work

The development of few-shot learning has brought many breakthroughs to this area; its two major branches are briefly introduced below.

Approaches based on metric-learning. Most of these approaches learn to compare the similarity of information among samples. MatchingNet[1] introduces an attention mechanism and uses the attention score of the extracted features to predict the class of the query sample. PrototypicalNet[2] shrinks the support set of each class into a single representation, called a prototype, by taking the mean vector of the class's support samples, and then classifies by comparing the distance (similarity) between the query image and each prototype. RelationNet[3] uses a neural network to analyze the similarity between samples, which can be regarded as a non-linear classifier. Task dependent adaptive metric (TADAM)[4] relies on conditional batch normalization to increase task adaptability and learns a task-related metric space. DN4[5] considers the similarity metric between images and classes from the perspective of local descriptors. TapNet[6] and DSN[7] study subspaces and verify the rationality of subspace modeling for few-shot learning. A category traversal module (CTM) is used in Ref.[8] to make full use of the intra-class commonality and inter-class uniqueness of features. MADN4[9] introduces an attention mechanism to improve the extracted local descriptors.

Approaches based on meta-learning. Such approaches usually learn a meta-learner that adjusts the optimization algorithm. Model-agnostic meta-learning (MAML)[10] trains the meta-learner to perform proper parameter initialization, so that the model adapts to a new task and performs well after only a few steps of gradient descent. MetaOptNet[11] incorporates a differentiable quadratic programming (QP) solver to achieve good performance. Latent embedding optimization (LEO)[12] trains the model in a high-dimensional parameter space and needs only a few updates in the low-data regime. Attentive weights generation for few-shot learning via information maximization (AWGIM)[13] is also based on parameter optimization, learning to directly generate the weight parameters of the classifier with a generator.

The approach proposed in this paper is based on metric-learning. It differs from related work in its treatment of the attention mechanism, the support set class representation, and the similarity metric method. The experimental section verifies the validity and advancement of this work and makes comparisons and connections with related approaches.

    2 Proposed approach

    2.1 Problem set-up

The episodic training mechanism[1] is widely used in the training of FSL. Usually the support set and query set are randomly selected from the training set to train the model, and the entire training process is divided into many episodes. In each episode, a small number of labeled samples in the support set are used for training, and then each query image is assigned to one of the categories in the support set. In this work, the episodic training mechanism is also used. Specifically, each episode contains Ne sample classes randomly selected from the training set. Each class m (m ∈ 1, …, Ne) has Ns support samples and Nq query samples, forming the support set Sm and query set Qm of the episode, respectively. The samples in the support set and the query set are randomly selected and do not intersect, and their numbers can be set manually; the support images and query images of class m together form the support set and query set of this class. According to the number of classes Ne and the number of support samples Ns, each episode is referred to as Ne-way Ns-shot FSL.
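As a concrete illustration, the episodic sampling protocol above can be sketched in Python; the function name and dataset layout are our own illustrative choices, not part of the paper:

```python
import random

def sample_episode(dataset, n_way, n_shot, n_query, seed=None):
    """Sample one Ne-way Ns-shot episode.

    `dataset` maps a class label to a list of samples (illustrative layout).
    Support and query sets of each class are disjoint, as required above.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)       # the Ne episode classes
    support, query = {}, {}
    for m in classes:
        picks = rng.sample(dataset[m], n_shot + n_query)
        support[m] = picks[:n_shot]                    # Sm: Ns labeled support samples
        query[m] = picks[n_shot:]                      # Qm: Nq query samples
    return support, query
```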

    2.2 Model overview

As shown in Fig.1, the support set samples and query samples are input into the feature extraction backbone network fθ embedded with the multi-attention fusion module. During feature extraction, after multi-attention acquisition in the spatial and channel dimensions, more distinguishing and valuable features are obtained. In the feature space, samples belonging to the same class lie close together. According to the method proposed in this paper, a weight is assigned to each support set sample, and the weighted class representation of each class is calculated. Then the mutual similarity between the query sample and each weighted class representation is measured. The category of the class representation with the highest mutual similarity score is the classification result of the query sample. The specific content of each part is described in the following sections.

    Fig.1 The overall framework of the proposed MWNet

    2.3 Multi-attention fusion module

In order to obtain more distinguishing features in the feature extraction stage, an attention module based on multi-attention fusion is designed in this paper. It is mainly composed of two parts: a spatial attention part and a channel attention part. Unlike a single-dimensional attention mechanism, this module obtains attention weights in the spatial dimension and the channel dimension respectively and connects them in series, so it can be regarded as a fusion of multi-dimensional attention. The obtained features highlight valuable regions while retaining rich information and suppressing irrelevant regions, hence the name multi-attention fusion module.

Spatial attention. This part of the structure is shown in Fig.2. In order to find the weight relationship in the spatial dimension, the intermediate feature map F ∈ R^(c×h×w) extracted in the previous stage is reduced through a 1 × 1 convolutional layer with one output channel, and then the Sigmoid function is used to obtain the spatial attention weight SA ∈ R^(1×h×w). The calculation process can be expressed as

SA = σ(Conv(F))  (1)

where σ represents the Sigmoid function and Conv represents the convolution operation. This is equivalent to compressing the information of the original c channels into a single channel and using it as a weight. The weight SA is then multiplied with F to obtain the spatial attention optimization feature map F′ ∈ R^(c×h×w), namely, F′ = F ⊗ SA.
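A minimal NumPy sketch of Eq.(1) may help: a 1 × 1 convolution with a single output channel reduces to a weighted sum over the c channels at each spatial position (the weight vector here is an illustrative stand-in for the learned kernel):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(F, w, b=0.0):
    """Eq.(1): SA = sigmoid(Conv1x1(F)), then F' = F (*) SA.

    F: (c, h, w) intermediate feature map; w: (c,) illustrative 1x1-conv weights.
    Returns SA of shape (1, h, w) and F' of shape (c, h, w).
    """
    SA = sigmoid(np.tensordot(w, F, axes=([0], [0])) + b)  # (h, w): c channels -> 1
    SA = SA[None, :, :]                                    # (1, h, w)
    return SA, F * SA                                      # broadcast over channels
```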

    Fig.2 Spatial attention acquisition process

Channel attention. This part of the structure is shown in Fig.3. Global average pooling is performed on the F′ obtained through spatial attention optimization, that is, the global information in each channel of F′ is compressed into the global average pooling vector FG ∈ R^(c×1×1). This vector then passes through two fully connected layers in turn, and the Sigmoid function is used to obtain the final channel attention weight CA ∈ R^(c×1×1), which can be expressed as

CA = σ(full(GAP(F′)))  (2)

where full and GAP respectively denote the two-layer fully connected layer and the global average pooling layer. CA is then multiplied with F′ to get the final multi-attention optimization feature map F″ ∈ R^(c×h×w), namely, F″ = F′ ⊗ CA.
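Eq.(2) can be sketched the same way; note that the paper does not specify the activation between the two fully connected layers, so the ReLU used here is our assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(Fp, W1, b1, W2, b2):
    """Eq.(2): CA = sigmoid(full(GAP(F'))), then F'' = F' (*) CA.

    Fp: (c, h, w) spatially optimized feature map; W1/b1 and W2/b2 are the two
    fully connected layers (shapes (hid, c) and (c, hid); illustrative).
    """
    g = Fp.mean(axis=(1, 2))               # GAP: each channel -> one scalar, (c,)
    hidden = np.maximum(W1 @ g + b1, 0.0)  # first FC + assumed ReLU
    CA = sigmoid(W2 @ hidden + b2)         # (c,) channel attention weights
    CA = CA[:, None, None]                 # (c, 1, 1)
    return CA, Fp * CA                     # F'': same (c, h, w) size as F'
```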

    Fig.3 Channel attention acquisition process

After the above steps, the model realizes multi-information interaction between the spatial dimension and the channel dimension, that is, multi-attention fusion. Moreover, this module can be directly embedded in the feature extraction backbone, effectively highlighting the important parts of the features while keeping the original feature map size unchanged, which benefits the subsequent similarity metric.

    2.4 Weighted class representation

Ideally, samples belonging to the same class should be as close as possible in the feature space, but occasionally one or several interfering samples inevitably deviate from the other samples of the class. If all samples of the class are then treated equally and their feature vectors are simply averaged, the representativeness of the obtained prototype may not be ideal. This stems from a negligence in obtaining the prototype: besides positive effects, some samples also introduce interference, and if these negative effects are not fully considered, the obtained prototype (class representation) will be biased. To reduce this deviation, a weighted class representation is proposed in this work. Instead of simply averaging the feature vectors of the support set samples, it fully considers that each sample exerts a different degree of positive influence when the class representation is calculated, and gives more weight to samples with greater positive influence. Specifically, among the support samples of the same class, the smaller the Euclidean distance between a sample and the other samples, the larger the proportion of that sample in the construction of the class representation, and conversely the smaller its proportion. Based on this idea, the calculated representation of each class is more ideal.

The process is as follows. First, the support sample xi and the other samples xj (j ≠ i) of the same class pass through the feature extraction network embedded with the attention module to obtain the attention-optimized feature vectors fθ(xi) and fθ(xj). Then, within this class, the average of the Euclidean distances between xi and each xj is

αi = (1/(Ns − 1)) Σ_{j≠i} d(fθ(xi), fθ(xj))  (3)

The larger αi is, the greater the difference between sample xi and the other samples of the class, and the smaller the similarity. The weight of each sample is then obtained by a Softmax over −αi:

wi = exp(−αi) / Σ_{i=1}^{Ns} exp(−αi)  (4)

Finally, each support sample xi in the class is combined with its weight wi in a weighted summation to obtain the weighted class representation of the current class m:

ξm = Σ_{(xi, yi)∈Sm} wi fθ(xi)  (5)
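Eqs (3)-(5) for a single class can be sketched directly in NumPy (feature extraction is assumed already done; `feats` holds the attention-optimized support embeddings):

```python
import numpy as np

def weighted_class_representation(feats):
    """Eqs (3)-(5) for one class.

    feats: (Ns, d) attention-optimized embeddings f_theta(xi) of the class's
    support samples. Returns the weights wi and the representation xi_m.
    """
    n = feats.shape[0]
    dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    alpha = dist.sum(axis=1) / (n - 1)  # mean distance to the others (d_ii = 0)
    w = np.exp(-alpha)
    w = w / w.sum()                     # Softmax over -alpha: outliers get less weight
    return w, w @ feats                 # xi_m = sum_i w_i f_theta(x_i)
```

With three clustered samples and one outlier, the outlier receives the smallest weight, which is exactly the behavior motivated above.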

By calculating the weighted class representation, samples that deviate greatly from the rest of their class are largely prevented from interfering with the calculation of the entire class representation, and the more accurate class representation lets the model achieve better performance.

    2.5 Mutual similarity metric method

Conventional methods usually only carry out a one-way metric from a query sample to each class, without considering the metric from each class to the query samples. In this work, research has been conducted on this issue, and a mutual similarity metric method is introduced. The idea is: when the similarity from query x̂q to the weighted class representation ξm of class m is high, and the similarity from ξm to the query x̂q is also high, then the judgment that x̂q belongs to class m is highly credible. The specific process of the mutual similarity metric method is as follows.

The query sample x̂q passes through the feature extraction network embedded with the attention module to obtain the attention-optimized sample feature fθ(x̂q). The probability of it belonging to each class m is then calculated by the Softmax function:

Sim(x̂q → ξm) = exp(−d(fθ(x̂q), ξm)) / Σ_{m′} exp(−d(fθ(x̂q), ξm′))  (6)

Symmetrically, from the perspective of class m, the similarity from ξm to the query x̂q is normalized over the query samples:

Sim(ξm → x̂q) = exp(−d(ξm, fθ(x̂q))) / Σ_{q′} exp(−d(ξm, fθ(x̂q′)))  (7)

The two directions are combined by multiplication to obtain the mutual similarity:

Sim(x̂q ↔ ξm) = Sim(x̂q → ξm) · Sim(ξm → x̂q)  (8)

By carrying out the similarity metric from different angles and combining the results appropriately, the model can make better use of the interrelationship between query and class representation, fuse information interactively, and make the obtained similarity metric results more accurate.
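The two directional Softmax scores and their product can be sketched for a whole episode at once (Euclidean distance d; array shapes are illustrative):

```python
import numpy as np

def mutual_similarity(Q, Xi):
    """Mutual similarity between queries and class representations.

    Q: (Nq_total, d) query features f_theta(x_q); Xi: (Ne, d) weighted class
    representations. Sim(q->m) normalizes over classes, Sim(m->q) over queries,
    and the two are multiplied elementwise.
    """
    d = np.linalg.norm(Q[:, None, :] - Xi[None, :, :], axis=-1)  # (Nq, Ne)
    e = np.exp(-d)
    sim_qm = e / e.sum(axis=1, keepdims=True)  # query -> class, Softmax per row
    sim_mq = e / e.sum(axis=0, keepdims=True)  # class -> query, Softmax per column
    return sim_qm * sim_mq                     # mutual similarity scores
```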

    2.6 Training algorithm

Algorithm 1 shows the episodic training process of MWNet. For each episode, the support set samples and query samples are input into the feature extraction network embedded with the multi-attention fusion module, and the optimized sample features are obtained after attention acquisition in the spatial and channel dimensions. Next, a weight is assigned to each support set sample based on the Euclidean distance and the Softmax function, and the weighted class representation of each class is calculated. Finally, the mutual similarity metric method is used in the feature space to predict the class of the query samples, and the parameters of the feature extraction network are updated by minimizing the classification loss of the episode. The updated model then processes the next episode, until training is completed.

    3 Experiments

In order to evaluate the performance of the proposed approach, this section conducts experiments on miniImageNet[1], the benchmark dataset in the field of few-shot learning, and compares it with existing advanced approaches. Further, to explore its effectiveness on fine-grained images, the fine-grained dataset Stanford Dogs[17], with small inter-class variation and large intra-class variation, is selected for experiments and compared against related metric-learning based approaches. In addition, feature space visualization of the model, research on higher-way training, and ablation experiments are also carried out in this section.

    3.1 Datasets

The miniImageNet dataset is a small version of ImageNet[18]. It has 100 classes, each with 600 samples, and an image resolution of 84 × 84. This paper adopts the split used in PrototypicalNet[2]: 64 classes for training, 16 classes for validation, and 20 classes for testing.

The Stanford Dogs dataset is often used for fine-grained image classification. It has 120 classes and 20 580 images in total. Following Ref.[5], it is divided into 70 training classes, 20 validation classes and 30 testing classes.

Algorithm 1  The process of a training episode for MWNet
Input: each episode ei with S and Q
1:  for i in {e1, …, eI} do
2:      Li ← 0
3:      for sample x in S, Q do
4:          F ← intermediate feature map of sample x
5:          SA ← σ(Conv(F))                          (spatial attention weight)
6:          F′ ← F ⊗ SA                              (spatial attention optimization feature map)
7:          CA ← σ(full(GAP(F′)))                    (channel attention weight)
8:          F″ ← F′ ⊗ CA                             (multi-attention optimization feature map)
9:          fθ(x) ← the final feature map of sample x
10:     end for
11:     for m in {1, …, Ne} do
12:         for (xi, yi) ∈ Sm, (xj, yj) ∈ Sm do
13:             αi ← (1/(Ns − 1)) Σ_{j≠i} d(fθ(xi), fθ(xj))
14:             wi ← exp(−αi) / Σ_{i=1}^{Ns} exp(−αi)
15:             ξm ← Σ_{(xi, yi)∈Sm} wi fθ(xi)        (weighted class representation)
16:         end for
17:     end for
18:     for m in {1, …, Ne} do
19:         for q in {1, …, Nq} do
20:             Sim(x̂q → ξm) ← exp(−d(fθ(x̂q), ξm)) / Σ_{m′} exp(−d(fθ(x̂q), ξm′))
21:             Sim(ξm → x̂q) ← exp(−d(ξm, fθ(x̂q))) / Σ_{q′} exp(−d(ξm, fθ(x̂q′)))
22:             Sim(x̂q ↔ ξm) ← Sim(x̂q → ξm) · Sim(ξm → x̂q)   (mutual similarity)
23:         end for
24:     end for
25:     Li ← (1/(NeNq)) Σm Σq [−log(Sim(x̂q ↔ ξm))]
26:     Update θ using ∇Li
27: end for
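The inner computation of Algorithm 1 (lines 11-25) can be condensed into one NumPy function operating on precomputed features; the embedding network and the gradient update of line 26 are omitted here:

```python
import numpy as np

def episode_loss(support_feats, query_feats, query_labels):
    """One-episode loss of Algorithm 1 on precomputed features (a sketch).

    support_feats: (Ne, Ns, d); query_feats: (Nq_total, d);
    query_labels: true class index of each query sample.
    """
    n_e, n_s, d_dim = support_feats.shape
    # Weighted class representations, Eqs (3)-(5) / lines 13-15
    xi = np.empty((n_e, d_dim))
    for m in range(n_e):
        f = support_feats[m]
        dist = np.linalg.norm(f[:, None] - f[None, :], axis=-1)
        alpha = dist.sum(axis=1) / (n_s - 1)
        w = np.exp(-alpha)
        w /= w.sum()
        xi[m] = w @ f
    # Mutual similarity, lines 20-22
    d = np.linalg.norm(query_feats[:, None] - xi[None, :], axis=-1)
    e = np.exp(-d)
    sim = (e / e.sum(axis=1, keepdims=True)) * (e / e.sum(axis=0, keepdims=True))
    # Episode loss, line 25
    picked = sim[np.arange(len(query_labels)), query_labels]
    return float(-np.log(picked + 1e-12).mean())
```

On well-separated toy clusters, the loss under correct labels is lower than under swapped labels, as expected of a classification loss.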

    3.2 Experimental setup

The experiments in this section use typical few-shot image classification settings, namely N-way K-shot settings. The Adam algorithm[19] is used for training, the initial learning rate is 10^-3, and more support set classes are used for training than for testing. In the experiments on miniImageNet, there are 6 × 10^4 training episodes in total, with 12 query samples per class. For the 5-way 1-shot tasks, the learning rate is divided by 10 every 2.5 × 10^4 episodes; for the 5-way 5-shot tasks, every 4 × 10^4 episodes. In the test phase, there are 15 query samples per class, and the average accuracy over 4 × 10^4 randomly sampled episodes is used to evaluate the performance of the model. For the fine-grained few-shot classification experiments on the Stanford Dogs[17] dataset, since it contains far fewer samples than miniImageNet, data augmentation is used to avoid overfitting. The rest of the experimental settings are the same as for miniImageNet.
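The step schedule described above amounts to a one-line function (a sketch; the zero-based episode indexing convention is our assumption):

```python
def learning_rate(episode, base=1e-3, drop_every=25_000):
    """Step schedule of Section 3.2 for the 5-way 1-shot runs: the rate starts
    at 10^-3 and is divided by 10 every 2.5e4 episodes. For the 5-way 5-shot
    runs, drop_every would be 40_000 instead.
    """
    return base / (10 ** (episode // drop_every))
```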

    3.3 Feature extraction backbone

In order to better compare the model with other advanced approaches, two feature extraction backbones are used in the miniImageNet experiments. The first is ResNet-12[20], the same as that used in TapNet[6]. It consists of four residual blocks with channel numbers (denoted L in Fig.4) of 64, 128, 256, and 512 respectively. Each residual block contains three 3 × 3 convolutional blocks and a shortcut connection. Each convolutional block is followed by a batch normalization (BN) layer and the ReLU activation function; a 2 × 2 max-pooling layer follows each residual block, and the shortcut connection contains a 3 × 3 convolutional layer and a batch normalization layer. The multi-attention fusion module is embedded after the last convolutional layer in each residual block, and a global average pooling layer is added at the end of the backbone. The structure of the entire network is shown in Fig.4.

In addition, the common four-layer convolutional network Conv-4 is also used as a backbone, as in PrototypicalNet[2]. It has 4 convolutional blocks, each containing 64 3 × 3 kernels, a batch normalization layer, a ReLU activation function, and a 2 × 2 max-pooling layer. In Conv-4, the multi-attention fusion module is embedded after the first two convolutional blocks, as shown in Fig.5.

    Fig.4 ResNet-12 embedded with multi-attention fusion module

    Fig.5 Conv-4 embedded with multi-attention fusion module

    3.4 Comparison results

Table 1 shows the results of the proposed MWNet compared with advanced technologies on miniImageNet[1] when Conv-4 and ResNet-12 are used respectively. With a backbone of the same size, MWNet always outperforms the other approaches on 5-way 1-shot and 5-way 5-shot tasks. Compared with the benchmark approach PrototypicalNet[2], which is also based on metric-learning, the model achieves clear advantages: with Conv-4, the classification accuracies on the 5-way 1-shot and 5-way 5-shot tasks are 4.15% and 4.19% higher than PrototypicalNet respectively, and with ResNet-12 the advantages are 4.01% and 4.43%, respectively. Compared with the subspace-based TapNet[6] and DSN[7], the approach in this paper achieves better results without involving a complex subspace structure, and is thus simpler and more efficient. CTM[8] involves model fine-tuning and uses ResNet-18, which is deeper than ResNet-12; MWNet does not need such a deep backbone to achieve better performance through end-to-end training. In addition, compared with advanced meta-learning based technologies, MWNet remains strongly competitive. Notably, both LEO[12] and AWGIM[13] use the deeper and wider wide residual network (WRN-28-10)[21], and the model in this paper achieves higher classification accuracy without such a complex network architecture.

    Table 1 Accuracy comparison with other approaches on miniImageNet

    3.5 Fine-grained few-shot classification

In order to explore the performance of the proposed approach on fine-grained few-shot image classification, the Stanford Dogs[17] dataset is selected and the 5-way 1-shot and 5-way 5-shot experiments are performed. For fair comparison, Conv-4, of the same size as in the related approaches, is used as the backbone.

    As shown in Table 2, the model in this paper is effective on fine-grained dataset.

    Table 2 5-way 1-shot and 5-way 5-shot fine-grained few-shot classification on Stanford Dogs

In addition, with a feature extraction backbone of the same size, the model achieves better classification accuracy than related metric-learning based approaches. Compared with the benchmark approach PrototypicalNet[2], the accuracy is 13.02% and 22.62% higher on the 5-way 1-shot and 5-way 5-shot tasks, respectively. Compared with the local descriptor-based DN4[5] and MADN4[9], which adds an attention mechanism, the model still performs better.

    3.6 Visualizations of feature space

    Fig.6 t-SNE visualization of feature space

In order to illustrate the multi-attention fusion and weighted class representation more vividly, Fig.6 shows t-SNE visualizations of the relevant feature spaces in the miniImageNet[1] experiments. Conv-4 is used as the backbone, and PrototypicalNet[2] is re-implemented with the settings in this paper. As shown in Fig.6, different shapes represent support samples of different classes; there are 5 classes in total, and the number of support samples per class is set to 15. The shape with a black frame represents the class representation. Specifically, Fig.6(a) shows the feature space of PrototypicalNet, an ordinary feature space without the attention mechanism or weighted class representation. Fig.6(b) shows the feature space after multi-attention fusion; owing to the acquisition of multi-dimensional attention, the support set samples are closer together than in the original feature space. However, because individual samples deviate from the other samples of their class, the prototype calculated as the class mean vector is disturbed by such samples to a certain extent, which induces some misclassifications. Fig.6(c) shows the feature space in which weighted class representations are introduced after multi-attention fusion. Here the value of each support set sample is considered, and each sample is assigned a corresponding weight based on the Euclidean distance and the Softmax function when the class representation is obtained, so that the weighted class representation better reflects its class. This largely avoids the misclassifications seen in Fig.6(b).

    3.7 Research on higher way training

According to previous experience, using a higher number of ways during training, that is, more support set classes in each episode, tends to give the model higher classification accuracy. To find a more suitable way number for the model in this paper, FSL experiments with different way numbers are performed on the miniImageNet[1] dataset. In this section, ResNet-12 is used as the feature extraction backbone, the other experimental settings remain unchanged, and the number of shots is the same for training and testing. The results are shown in Fig.7. For the 5-way 1-shot tasks, using the 15-way 1-shot setting during training gives better classification accuracy, and for the 5-way 5-shot tasks, using the 20-way 5-shot setting during training gives better results.

    3.8 Ablation study

In order to further verify that each part of the work in this paper helps improve the classification performance of the model, 5-way 1-shot and 5-way 5-shot ablation studies are conducted in this section on miniImageNet[1]. Considering the relevance to this work, PrototypicalNet[2], which is also based on metric-learning, is selected as the baseline. For a fair comparison, the experimental data with ResNet-12 as the backbone are used as the reference, and the relevant experiments are implemented in accordance with the settings in this paper.

    Fig.7 Results with different number of ways

First, this section studies the influence of different kinds of attention on the performance of the model. For the sake of comparison, only attention is introduced in this part of the experiments. As shown in Fig.8, the x-axis covers the cases where only channel attention is introduced, only spatial attention is introduced, channel and spatial attention are connected in parallel, and channel and spatial attention are connected in series. Under the 5-way 1-shot setting, introducing only channel attention increases accuracy by 0.62%; only spatial attention, by 0.59%; channel-spatial attention in parallel, by 0.91%; and channel-spatial attention in series, by 1.71%. Under the 5-way 5-shot setting, the corresponding four configurations increase accuracy by 0.64%, 0.68%, 0.96%, and 1.28% respectively. Considering these results comprehensively, the series connection of attention is chosen in the model.

    Fig.8 The influence of different attention on miniImageNet

    What follows is the rest of the ablation study. As shown in Fig.9, under the 5-way 1-shot setting, adding the MAFM increases the accuracy of the model by 1.17%; introducing WCR increases it by a further 1.87%; and using the mutual similarity metric method increases it by another 0.97%, at which point the accuracy is the highest, i.e., the full MWNet. Under the 5-way 5-shot setting, the accuracy shows the same upward trend, with the three parts contributing 1.28%, 1.97%, and 1.18% respectively. Clearly, each part of the work in this paper benefits few-shot classification performance, and the classification accuracy is highest when the multi-attention fusion module, weighted class representation, and mutual similarity metric are all present, which is the MWNet proposed in this paper.
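The WCR idea of weighting each support sample by its importance can be sketched as below: each support embedding of a class is weighted by the softmax of its negative Euclidean distance to the class mean, so outliers contribute less than in a plain prototype average. This is a minimal sketch of that idea under assumed details, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def weighted_class_representation(support):
    """Weight each support embedding (shape: k_shot x dim) by the
    softmax of its negative Euclidean distance to the class mean,
    then return the weighted sum as the class representation."""
    mean = support.mean(axis=0)
    d = np.linalg.norm(support - mean, axis=1)   # distance per sample
    w = softmax(-d)                              # closer -> larger weight
    return (w[:, None] * support).sum(axis=0)

rng = np.random.default_rng(2)
support = rng.normal(size=(5, 64))               # 5-shot, 64-dim features
rep = weighted_class_representation(support)
```

When all support samples are identical, the weights are uniform and the representation reduces to the ordinary prototype mean, so the scheme only departs from PrototypicalNet when interfering samples are present.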

    Fig.9 Ablation study on the miniImageNet dataset

    4 Conclusions

    In this paper, a simple and efficient few-shot learning model is proposed. Through the channel and spatial attention acquired in the feature extraction stage, the extracted features are richer and more discriminative. The importance of each sample is considered based on the Euclidean distance and the Softmax function, which weakens the negative influence of interfering samples. In the metric phase, information is fused from different angles to obtain a more reliable similarity relationship. A series of experiments on the miniImageNet and Stanford Dogs datasets shows that the proposed approach is effective and superior, and highly competitive compared with advanced related techniques. Future work will explore the applicability of the model in more problem settings, such as cross-domain and transductive few-shot classification. In addition, a combination of few-shot learning and active learning can also be tried.
