
Federated unsupervised representation learning


Fengda ZHANG, Kun KUANG, Long CHEN, Zhaoyang YOU, Tao SHEN, Jun XIAO, Yin ZHANG, Chao WU, Fei WU, Yueting ZHUANG, Xiaolin LI

1 College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China

2 School of Public Affairs, Zhejiang University, Hangzhou 310027, China

3 Tongdun Technology, Hangzhou 310000, China

4 Institute of Basic Medicine and Cancer, Chinese Academy of Sciences, Hangzhou 310018, China

5 ElasticMind.AI Technology Inc., Hangzhou 310018, China

Abstract: To leverage the enormous amount of unlabeled data on distributed edge devices, we formulate a new problem in federated learning called federated unsupervised representation learning (FURL), to learn a common representation model without supervision while preserving data privacy. FURL poses two new challenges: (1) data distribution shift (non-independent and identically distributed, non-IID) among clients would make local models focus on different categories, leading to inconsistency of representation spaces; (2) without unified information among the clients in FURL, the representations across clients would be misaligned. To address these challenges, we propose the federated contrastive averaging with dictionary and alignment (FedCA) algorithm. FedCA is composed of two key modules: a dictionary module that aggregates the representations of samples from each client, which can be shared with all clients for consistency of representation spaces, and an alignment module that aligns the representation of each client on a base model trained on public data. We adopt the contrastive approach for local model training. Through extensive experiments with three evaluation protocols in IID and non-IID settings, we demonstrate that FedCA outperforms all baselines with significant margins.

Key words: Federated learning; Unsupervised learning; Representation learning; Contrastive learning

    1 Introduction

Federated learning (FL) is proposed as a paradigm that enables distributed clients to collaboratively train a shared model while preserving data privacy (McMahan et al., 2017). Specifically, in each round of FL, clients obtain the global model and update it on their own private data to generate the local models, and then the central server aggregates these local models into a new global model. Most existing works focus on supervised FL, in which clients train their local models with supervision. However, the data generated on edge devices are typically unlabeled. Therefore, learning a common representation model for various downstream tasks from decentralized and unlabeled data while keeping private data on devices, i.e., federated unsupervised representation learning (FURL), remains an open problem.

It is a natural idea to combine FL with unsupervised approaches, which means that clients can train their local models via unsupervised methods. There are many highly successful works on unsupervised representation learning. In particular, contrastive learning methods train models by reducing the distance between representations of positive pairs (e.g., different augmented views of the same image) and increasing the distance between negative pairs (e.g., augmented views from different images), and have been outstandingly successful in practice (van den Oord et al., 2019; Chen T et al., 2020; Chen XL et al., 2020; He KM et al., 2020). However, their success relies heavily on abundant data for representation training; for example, contrastive learning methods need a large number of negative samples for training (Sohn, 2016; Chen T et al., 2020). Moreover, few of these unsupervised methods take the problem of data distribution shift into account, which is a common practical problem in FL. Hence, it is difficult to combine FL with unsupervised approaches for FURL.

In FL applications, however, the data collected by each client are limited, and the data distributions of clients might differ from each other (Jeong et al., 2018; Yang Q et al., 2019; Sattler et al., 2020; Kairouz et al., 2021; Zhao et al., 2022). Hence, we face the following challenges in combining FL with unsupervised approaches for FURL:

1. Inconsistency of representation spaces

In FL, the limited data of each client would lead to variation of data distribution from client to client, resulting in inconsistency of the representation spaces encoded by different local models (Kuang et al., 2020). For example, as shown in Fig. 1a, client 1 has only images of cats and dogs, and client 2 has only images of cars and planes. Then, the model trained locally on client 1 encodes only a feature space of cats and dogs, failing to map cars or planes to appropriate representations, and the same goes for the model trained on client 2. Intuitively, the performance of the global model aggregated from these inconsistent local models may fall short of expectations.

2. Misalignment of representations

Even if the training data of the clients are independent and identically distributed (IID) and the representation spaces encoded by different local models are consistent, there may be misalignment between representations due to randomness in the training process. For instance, for a given input set, the representations generated by one model may be equivalent to the representations generated by another model only after rotation by a certain angle, as shown in Fig. 1b. It should be noted that the misalignment between local models may have drastic detrimental effects on the performance of the aggregated model.

To address these challenges, we propose a contrastive loss-based FURL algorithm called federated contrastive averaging with dictionary and alignment (FedCA), which consists of two novel modules: a dictionary module for addressing the inconsistency of representation spaces and an alignment module for aligning the representations across clients. Specifically, the dictionary module, which is maintained by the server, aggregates abundant representations of samples from the clients, and these can be shared with each client for local model optimization. In the alignment module, we first train a base model on a small public dataset (e.g., a subset of the STL-10 dataset (Coates et al., 2011)) and then require all local models to mimic the base model such that the representations generated by different local models can be aligned. Overall, in each round, FedCA involves two stages: (1) clients train local representation models on their own unlabeled data via contrastive learning with the two modules mentioned above, and then generate local dictionaries, and (2) the server aggregates the trained local models to obtain a shared global model and integrates the local dictionaries into a global dictionary.

To the best of our knowledge, FedCA is the first algorithm designed for the FURL problem. Our experimental results show that FedCA performs better than naive methods that solely combine FL with unsupervised approaches. We believe that FedCA will serve as a critical foundation for this novel and challenging problem.

    2 Related works

    2.1 Federated learning

FL enables distributed clients to train a shared model collaboratively while keeping private data on devices (McMahan et al., 2017). Li T et al. (2020) added a proximal term to the loss function to keep local models close to the global model. Wang HY et al. (2020) proposed a layer-wise FL algorithm to deal with the permutation invariance of neural network parameters. However, existing works focus only on the consistency of parameters, while we emphasize the consistency of representations in this study. Some works also focus on reducing the communication cost of FL (Konečný et al., 2017). To further protect the data privacy of clients, cryptography technologies have been applied to FL (Bonawitz et al., 2017).

    2.2 Unsupervised representation learning

Learning high-quality representations is important and essential for various downstream tasks (Zhou et al., 2017; Duan et al., 2018). There are two main types of unsupervised representation learning methods: generative and discriminative (Zhuang YT et al., 2017; Lei et al., 2020; Zhu et al., 2020). Generative approaches learn representations by generating pixels in the input space (Hinton and Salakhutdinov, 2006; Kingma and Welling, 2014; Radford et al., 2016). Discriminative approaches train a representation model by performing pretext tasks, where labels are generated for free from unlabeled data (Pathak et al., 2017; Gidaris et al., 2018). Among them, contrastive learning methods achieve excellent performance (van den Oord et al., 2019; Chen T et al., 2020; Chen XL et al., 2020; He KM et al., 2020). The contrastive loss was proposed by Hadsell et al. (2006). Wu ZR et al. (2018) proposed an unsupervised contrastive learning approach based on a memory bank to learn visual representations. Wang TZ and Isola (2020) pointed out two key properties, namely closeness and uniformity, related to the contrastive loss. Other works have applied contrastive learning to videos (Sermanet et al., 2018; Tian et al., 2020), natural language processing (NLP) (Mikolov et al., 2013; Logeswaran and Lee, 2018; Yang ZL et al., 2019), audio (Baevski et al., 2020), and graphs (Hassani and Ahmadi, 2020; Qiu et al., 2020).

    2.3 Federated unsupervised learning

Before FL was proposed, there had been some works on unsupervised representation learning in the distributed/decentralized setting, which are easily portable to the FL setting (Kempe and McSherry, 2008; Liang et al., 2014; Shakeri et al., 2014; Raja and Bajwa, 2016; Wu SX et al., 2018). However, unlike deep learning methods, the convergence of these methods is limited by the size of the data, and it is difficult for them to achieve good performance on downstream tasks (Lyu, 2020; Pan, 2020; Zhuang YT et al., 2020).

Some concurrent works (van Berlo et al., 2020; Jin et al., 2020) also focus on FL from unlabeled data with deep learning methods. Different from these works, which simply combine FL with unsupervised approaches, we explore and identify the main challenges in FURL and design an algorithm to deal with them. There are some later works aiming to solve our proposed problem (Zhuang WM et al., 2021b). For example, Sattler et al. (2021) proposed to use unlabeled auxiliary data in FL via federated distillation techniques.

    2.4 Contrastive learning for FL

To the best of our knowledge, our work is the first to combine contrastive learning with FL, and it has inspired some later works (He CY et al., 2021; Ji et al., 2021; Shi et al., 2022). Li QB et al. (2021) conducted contrastive learning at the model level to correct local training. Wu YW et al. (2021) proposed to exchange the features of clients to provide diverse contrastive data to each client. Zhuang WM et al. (2021a) focused on the unsupervised setting in FL by designing a dynamic contrastive module with an effective communication protocol. Zhuang WM et al. (2022) proposed a new method to tackle the non-IID data problem in FL and filled the gap between FL and self-supervised approaches based on Siamese networks.

    3 Preliminaries

In this section, we discuss the primitives needed for our approach. The symbols and their corresponding meanings are given in Table 1.

    3.1 Federated learning

In FL, each client u ∈ U has a private dataset D_u of training samples, and our aim is to train a shared model while keeping private data on devices. There are many algorithms designed for aggregation in FL (Li T et al., 2020; Wang HY et al., 2020), and we point out that our approach does not depend on the way of aggregation. Here, for simplicity, we introduce a standard and popular aggregation method named FedAvg (McMahan et al., 2017). In round t of FedAvg, the server randomly selects a subset of clients U_t ⊆ U, and each client u ∈ U_t locally updates the global model with parameters θ^t on dataset D_u via the stochastic gradient descent rule to generate the local model:

θ_u^{t+1} = θ^t − η∇L(D_u, θ^t),

where η is the stepsize and L(D_u, θ^t) is the loss function of client u in round t. Then the server gathers the parameters of the local models {θ_u^{t+1} | u ∈ U_t} and aggregates these local models via weighted averaging to generate a new global model:

θ^{t+1} = Σ_{u∈U_t} (|D_u| / Σ_{v∈U_t} |D_v|) θ_u^{t+1}.

The training process above is repeated until the global model converges.
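For concreteness, the FedAvg round above can be sketched in PyTorch as follows. This is a minimal sketch under our own naming (not the original implementation); loss_fn abstracts the local objective L(D_u, θ).

```python
import copy
import torch

def local_update(global_model, data_loader, loss_fn, lr=1e-3, epochs=5):
    # One client's step: copy the global weights theta^t and run SGD
    # on the private dataset D_u to obtain theta_u^{t+1}.
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)  # lr plays the role of eta
    for _ in range(epochs):
        for batch in data_loader:
            opt.zero_grad()
            loss_fn(model, batch).backward()  # L(D_u, theta) on one minibatch
            opt.step()
    return model.state_dict(), len(data_loader.dataset)

def fedavg_aggregate(states, sizes):
    # Server side: weighted average of local parameters,
    # with weights proportional to the local dataset sizes |D_u|.
    total = float(sum(sizes))
    return {k: sum(s[k].float() * (n / total) for s, n in zip(states, sizes))
            for k in states[0]}
```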

    3.2 Unsupervised contrastive learning

Unsupervised contrastive representation learning methods learn representations from unlabeled data by reducing the distance between representations of positive samples and increasing the distance between representations of negative samples. Among them, SimCLR achieves outstanding performance and can be applied to FL easily (Chen T et al., 2020). SimCLR randomly samples a minibatch of N samples and executes two random data augmentations for each sample to obtain 2N views. Typically, the views augmented from the same image are treated as positive samples and the views augmented from different images are treated as negative samples (Dosovitskiy et al., 2014). The loss function for a positive pair of samples (i, j) is defined as follows:

ℓ_{i,j} = −log [ exp(sim(z_i, z_j)/τ) / Σ_{k=1}^{2N} 1_{[k≠i]} exp(sim(z_i, z_k)/τ) ],

where τ is the temperature and 1_{[k≠i]} ∈ {0, 1} is an indicator function that equals 1 if and only if k ≠ i. sim(·, ·) measures the similarity of two representations of samples (e.g., cosine similarity). The model (consisting of a base encoder network f to extract representation h from augmented views and a projection head g to map representation h to z) is trained by minimizing the loss function above. Finally, we use representation h to perform downstream tasks.
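As an illustration, here is a minimal PyTorch sketch of this loss; the view-pairing convention (rows 2k and 2k+1 of z are the two views of sample k) and the function name are our assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z, tau=0.5):
    # z: (2N, d) projections; z[2k] and z[2k+1] are the two augmented
    # views of sample k. Cosine similarity = dot product after normalization.
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                      # (2N, 2N) similarity logits
    sim.fill_diagonal_(float('-inf'))          # the indicator 1_[k != i]
    targets = torch.arange(z.size(0), device=z.device) ^ 1  # paired view index
    return F.cross_entropy(sim, targets)       # -log softmax at the positive
```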

    4 Method

In this section, we analyze the two challenges mentioned above and detail the dictionary module and alignment module designed for these challenges. Then we introduce the FedCA algorithm for FURL.

    4.1 Dictionary module for inconsistency challenge

FURL aims to learn a shared model that maps data to representation vectors such that similar samples are mapped to nearby points in the representation space, so that the features are well clustered by class. However, the presence of non-IID data poses a great challenge to FURL. Since the local dataset D_u of a given client u likely contains samples of only a few classes, the local models may encode inconsistent spaces, degrading the performance of the aggregated model.

To empirically verify this, we visualize the representations of images from CIFAR-10 via the t-distributed stochastic neighbor embedding (t-SNE) method. To be specific, we split the training data of CIFAR-10 into five non-IID sets, and each set consists of 10 000 samples from two classes. Then, the FedAvg algorithm is combined solely with the unsupervised approach (SimCLR) to learn representations from these sets. We use the local model in the 20th round of the client who has only samples of class 0 and class 1 to extract features from the test set of CIFAR-10 and visualize the representations after dimensionality reduction by t-SNE (Fig. 2a). We find that the scattered representations of samples from class 0 and class 1 are spread over a very large area of the representation space, and it is difficult to distinguish samples of class 0 and class 1 from the others. This suggests that the local model encodes a representation space only for samples of class 0 and class 1, and cannot map samples of other classes to suitable positions. The visualization results support our hypothesis that the representation spaces encoded by different local models are inconsistent in a non-IID setting.

We argue that the cause of the inconsistency is that the clients can use only their own data to train the local models, but the distribution of data varies from client to client. To address this issue, we design a dictionary module (Fig. 3b). Specifically, in each communication round, clients use the global model (including the encoder and the projection head) to obtain the normalized projections of their own samples and send these projections to the server along with the trained local models. Then, the server gathers the normalized projections into a shared dictionary. For each client, the global dictionary dict with K projections is treated as a normalized projection set of negative samples for local contrastive learning. Specifically, in the local training process, for a given minibatch x_batch with N samples, we randomly augment them to obtain x_i and x_j, and generate normalized projections z_i and z_j. Then we calculate the following:

logits = concat(z_i z_j^T, z_i dict^T, dim=1),

where concat() denotes concatenation, the size of logits is N × (N + K), and dim=1 means that the two parts are concatenated in the first dimension. Now, we turn the unsupervised problem into an (N + K)-way classification problem in which the class indicator of the i-th sample is i, i.e., labels = (0, 1, ..., N − 1). Then the loss function is given as follows:

L_dict = CE(logits/τ, labels),

where CE denotes the cross-entropy loss and τ is the temperature term.

Fig. 3 Illustration of FedCA: (a) overview of FedCA (in each round, clients generate local models and dictionaries, and then the server gathers them to obtain the global model and dictionary); (b) local update of the model (clients update local models by contrastive learning with the dictionary and alignment modules); (c) local update of the dictionary (clients generate local dictionaries via temporal ensembling). In (b), x_other is a sample different from sample x, x_align is a sample from the additional public dataset for alignment, f is the encoder, and g is the projection head
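A minimal PyTorch sketch of this dictionary-augmented loss follows; the names are our own, and global_dict holds the K normalized projections gathered by the server.

```python
import torch
import torch.nn.functional as F

def dict_contrastive_loss(z_i, z_j, global_dict, tau=0.5):
    # z_i, z_j: (N, d) normalized projections of the two augmented views;
    # global_dict: (K, d) normalized projections from the shared dictionary.
    batch_logits = z_i @ z_j.t()               # (N, N): column i is the positive
    dict_logits = z_i @ global_dict.t()        # (N, K): dictionary negatives
    logits = torch.cat([batch_logits, dict_logits], dim=1)   # (N, N + K)
    labels = torch.arange(z_i.size(0), device=z_i.device)    # class indicator
    return F.cross_entropy(logits / tau, labels)             # CE(logits/tau, labels)
```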

Note that in each round, the shared dictionary is generated by the global model from the previous round, but the projections of local samples are encoded by the current local models. The inconsistencies in representations may affect the function of the dictionary module, especially in a non-IID setting. We use temporal ensembling to alleviate this problem (Fig. 3c). To be specific, each client maintains a local ensemble dictionary consisting of a projection set Z_u. In each round t, client u uses the trained local model to obtain projections z_u^t and accumulates them into the ensemble dictionary by updating

Z_u^t = αZ_u^{t−1} + (1 − α)z_u^t,

and then the normalized ensemble projection is given as

ẑ_u^t = normalize(Z_u^t / (1 − α^t)),

where α ∈ [0, 1) is a momentum parameter and Z_u^0 = 0.
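A sketch of this local dictionary update is given below; the bias correction by 1 − α^t follows the standard temporal ensembling formulation, and the class name is ours.

```python
import torch
import torch.nn.functional as F

class EnsembleDictionary:
    # Local ensemble dictionary with momentum alpha and Z_u^0 = 0.
    def __init__(self, num_samples, dim, alpha=0.5):
        self.alpha, self.t = alpha, 0
        self.Z = torch.zeros(num_samples, dim)

    def update(self, z_new):
        # Accumulate this round's projections and return the normalized
        # ensemble projections to be uploaded to the server.
        self.t += 1
        self.Z = self.alpha * self.Z + (1 - self.alpha) * z_new
        z_hat = self.Z / (1 - self.alpha ** self.t)   # bias correction
        return F.normalize(z_hat, dim=1)
```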

We visualize the representations encoded by the local models trained via federated contrastive learning with the dictionary module in the same setting as the vanilla federated unsupervised approach. As shown in Fig. 2b, we find that the points of class 0 and class 1 are clustered in a small subspace of the representation space, which means that the dictionary module works well, as we expected.

    4.2 Alignment module for misalignment challenge

Due to randomness in the training process, there might be differences between the representations generated by two models trained on the same dataset, even though the two models encode consistent spaces. The misalignment of representations may have an adverse effect on model aggregation.

To verify this, we use the angle between two representation vectors of the same image encoded by different models to measure the degree of difference in representations. Then we record the angles between representations generated by different local models in FL on CIFAR-10. We split the training data of CIFAR-10 into five IID sets randomly, and each set consists of 10 000 samples from all 10 classes. We randomly select two local models trained by the vanilla federated unsupervised approach (FedSimCLR is used as an example) and use them to obtain normalized representations on the test set of CIFAR-10. As shown in Fig. 4a, there is always a large difference in angle (beyond 20°) between the representations encoded by the local models during the learning process.

Fig. 4 Box plots of the angles between the representations encoded by local models on the CIFAR-10 dataset in FL with an IID setting: (a) FedSimCLR; (b) FedCA. FL: federated learning; IID: independent and identically distributed

We introduce an alignment module to tackle this challenge. As shown in Fig. 3b, we prepare an additional public dataset of small size and train a model g_align(f_align(·)) (called the alignment model) on it. The local models are then trained via the contrastive loss with a regularization term that makes them replicate the outputs of the alignment model on the alignment dataset. For a given client u, the alignment loss is defined as follows:

L_align(D_align, θ_u) = (1/|D_align|) Σ_{x∈D_align} ||g_u(f_u(x)) − g_align(f_align(x))||²,

where D_align is the alignment dataset, and f_u and g_u are the encoder and projection head of client u.
We also calculate the angles between the representations of the local models trained via federated contrastive learning with the alignment module (3200 images randomly sampled from the STL-10 dataset are used for alignment) in the same setting as the vanilla federated unsupervised approach. As shown in Fig. 4b, the angles can be controlled within 10° after 10 training rounds, suggesting that the alignment module helps align the local models.
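A minimal PyTorch sketch of this regularizer follows, assuming the alignment model g_align(f_align(·)) stays frozen during local training; the function and variable names are ours.

```python
import torch
import torch.nn.functional as F

def alignment_loss(local_model, align_model, x_align):
    # local_model / align_model each compute g(f(x)); the alignment
    # model is fixed, so no gradients flow through it.
    with torch.no_grad():
        z_ref = align_model(x_align)    # g_align(f_align(x))
    z = local_model(x_align)            # g_u(f_u(x))
    return F.mse_loss(z, z_ref)         # mean squared error over D_align
```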

    4.3 FedCA algorithm

From the above, the total loss function for the local model update is given as follows:

L = L_dict + βL_align,

where β is a scale factor controlling the influence of the alignment module. Now we have a complete algorithm named FedCA, which can handle the challenges of FURL well, as shown in Fig. 3.

Algorithm 1 summarizes the proposed approach. In each round, clients update the local models with the contrastive loss and the alignment loss, and then generate local dictionaries. The server aggregates the local models into a global model and updates the global dictionary.
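One round of this loop can be sketched as follows; the client objects and their local_update method are hypothetical stand-ins for the local training described above.

```python
import torch

def fedca_round(global_model, global_dict, clients, align_model, beta=0.01):
    states, dicts, sizes = [], [], []
    for client in clients:
        # Stage 1: local contrastive training with the dictionary and
        # alignment modules; returns local weights and a local dictionary.
        state, local_dict = client.local_update(global_model, global_dict,
                                                align_model, beta=beta)
        states.append(state)
        dicts.append(local_dict)
        sizes.append(client.num_samples)
    # Stage 2: the server averages the local models (FedAvg-style) ...
    total = float(sum(sizes))
    new_state = {k: sum(s[k].float() * (n / total) for s, n in zip(states, sizes))
                 for k in states[0]}
    global_model.load_state_dict(new_state)
    # ... and integrates the local dictionaries into the global dictionary.
    return global_model, torch.cat(dicts, dim=0)
```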

    5 Experiments

FURL aims to learn a representation model from decentralized and unlabeled data. In this section, we present an empirical study of FedCA.

    5.1 Experimental setup

    5.1.1 Baselines

AutoEncoder is a generative method that learns representations in an unsupervised manner by encoding the input into a reduced representation and reconstructing an output as close as possible to the original input (Hinton and Salakhutdinov, 2006). Predicting rotation is a proxy task of self-supervised learning in which samples are rotated by random multiples of 90° and the model predicts the degree of rotation (Gidaris et al., 2018). We solely combine FedAvg with AutoEncoder (named FedAE), predicting rotation (named FedPR), and SimCLR (named FedSimCLR), separately, and use them as baselines for FURL.

    5.1.2 Datasets

The CIFAR-10/CIFAR-100 dataset (Krizhevsky, 2009) consists of 60 000 32×32 color images in 10/100 classes, with 6000/600 images per class; there are 50 000 training images and 10 000 test images in each of CIFAR-10 and CIFAR-100. The MiniImageNet dataset (Deng et al., 2009; Vinyals et al., 2016) is extracted from the ImageNet dataset and consists of 60 000 84×84 color images in 100 classes. We split it into a training dataset with 50 000 samples and a test dataset with 10 000 samples. We implement FedCA and the baseline methods on the three datasets above in PyTorch (Paszke et al., 2019).

    5.1.3 Federated setting

We deploy our experiments in a simulated FL environment, where we set a centralized node as the server and five distributed nodes as the clients. The number of local epochs is E = 5, and in each round, all of the clients obtain the global model and execute local training, i.e., the proportion of selected clients is C = 1. For each dataset, we consider two federated settings: IID and non-IID. In the IID setting, each client randomly samples 10 000 images from the entire training dataset, while in the non-IID setting, samples are split among clients by class, which means that each client has 10 000 samples from 2/20/20 classes of CIFAR-10/CIFAR-100/MiniImageNet.
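The non-IID split can be sketched as follows (a minimal sketch under our own naming); for CIFAR-10 it yields five clients, each holding 10 000 samples from two classes.

```python
import numpy as np

def split_non_iid(labels, num_clients=5, classes_per_client=2, per_client=10000):
    # Assign each client a disjoint group of classes and sample
    # per_client images from those classes only.
    labels = np.asarray(labels)
    classes = np.unique(labels)
    shards = []
    for c in range(num_clients):
        own = classes[c * classes_per_client:(c + 1) * classes_per_client]
        idx = np.where(np.isin(labels, own))[0]
        shards.append(np.random.choice(idx, per_client, replace=False))
    return shards
```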

    5.1.4 Training details

We compare our approach with the baseline methods on different encoders, including a five-layer convolutional neural network (CNN) (Krizhevsky et al., 2012) and ResNet-50 (He KM et al., 2016). The encoder maps input samples to representations with 2048 dimensions, and then a multilayer perceptron (MLP) translates the representations to a vector with 128 dimensions used to calculate the contrastive loss. Adam is used as the optimizer, and the initial learning rate is 1×10⁻³ with 1×10⁻⁶ weight decay. We train models for 100 epochs with a minibatch size of 128. We set the dictionary size K = 1024, the momentum term of temporal ensembling α = 0.5, and the scale factor β = 0.01. Furthermore, 3200 images randomly sampled from the STL-10 dataset are used for the alignment module. Data augmentation for contrastive representation learning includes random cropping and resizing, random color distortion, random flipping, and Gaussian blurring.
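In torchvision, this augmentation pipeline can be written as follows; this is a sketch in which the jitter strengths, application probability, and blur kernel size are our assumptions, not reported values.

```python
from torchvision import transforms

contrastive_aug = transforms.Compose([
    transforms.RandomResizedCrop(32),        # random cropping and resizing
    transforms.RandomApply(                  # random color distortion
        [transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomHorizontalFlip(),       # random flipping
    transforms.GaussianBlur(kernel_size=3),  # Gaussian blurring
    transforms.ToTensor(),
])
```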

    5.2 Evaluation protocols and results

    5.2.1 Linear evaluation

We first study our method by linear classification on a fixed encoder to verify the representations learned in FURL. We run FedCA and the baseline methods to learn representations on CIFAR-10, CIFAR-100, and MiniImageNet without labels, separately, in a federated setting. Then, we fix the encoder and train a linear classifier with supervision on the entire dataset. We train this classifier with Adam as the optimizer for 100 epochs and report the top-1 classification accuracy on the test sets of CIFAR-10, CIFAR-100, and MiniImageNet.
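A minimal sketch of this linear evaluation protocol follows (names are ours); the encoder stays frozen and only a linear head is trained.

```python
import torch
import torch.nn as nn

def linear_evaluation(encoder, train_loader, num_classes=10, epochs=100, lr=1e-3):
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad = False             # fix the encoder
    clf = nn.Linear(2048, num_classes)      # linear classifier on 2048-d features
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in train_loader:
            with torch.no_grad():
                h = encoder(x)              # frozen representations
            loss = nn.functional.cross_entropy(clf(h), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf
```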

As shown in Table 2, federated averaging with contrastive learning works better than the other unsupervised approaches. Moreover, our method outperforms all of the baseline methods, owing to the modules designed for FURL, as we expected.

    Table 2 Top-1 accuracies of algorithms for FURL on linear evaluation

    5.2.2 Semi-supervised learning

In federated scenarios, the private data at the clients may be only partly labeled, so we can learn a representation model without supervision and fine-tune it on labeled data. We assume that the ratio of labeled data at each client is 1% or 10%. First, we train a representation model in the FURL setting. Then, we fine-tune it (followed by an MLP consisting of a hidden layer and a rectified linear unit (ReLU) activation function) on labeled data for 100 epochs with Adam as the optimizer and a learning rate of 1×10⁻³.
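A sketch of this fine-tuning step is given below; the hidden width of the MLP head is our assumption.

```python
import torch
import torch.nn as nn

def finetune(encoder, labeled_loader, num_classes=10, epochs=100, lr=1e-3):
    # Initialize from the FURL encoder and train end-to-end with a small
    # MLP head (hidden layer + ReLU) on the labeled subset.
    head = nn.Sequential(nn.Linear(2048, 512), nn.ReLU(),
                         nn.Linear(512, num_classes))
    model = nn.Sequential(encoder, head)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in labeled_loader:
            loss = nn.functional.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```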

Table 3 reports the top-1 accuracies of various methods on CIFAR-10, CIFAR-100, and MiniImageNet. We observe that the accuracy of the global model trained by federated supervised learning on limited labeled data is notably poor, and that using the representation model trained in FURL as the initial model improves performance considerably. Our method outperforms the other approaches, suggesting that FURL benefits from the designed modules of FedCA, especially in a non-IID setting.

    Table 3 Top-1 accuracies of algorithms for FURL on semi-supervised learning

    5.2.3 Transfer learning

A main goal of FURL is to learn a representation model from decentralized and unlabeled data for personalized downstream tasks. To verify whether the features learned in FURL are transferable, we take the models trained in FURL as the initial models and then train an MLP along with the encoder on other datasets. CIFAR images (32×32×3) are resized to the same size as those in MiniImageNet (84×84×3) when we fine-tune the model learned from MiniImageNet on CIFAR. We train for 100 epochs with Adam as the optimizer and set the learning rate to 1×10⁻³.

Table 4 shows that the model trained by FedCA achieves excellent performance and outperforms all of the baseline methods in the non-IID setting.

    Table 4 Top-1 accuracies of algorithms for FURL on transfer learning

    5.3 Ablation study

    5.3.1 Alignment and dictionary modules

We perform an ablation study on CIFAR-10 in the IID and non-IID settings to demonstrate the effectiveness of the alignment and dictionary modules (with temporal ensembling). We implement (1) FedSimCLR, (2) federated contrastive learning with only the alignment module, (3) federated contrastive learning with only the dictionary module, (4) federated contrastive learning with only the dictionary module based on temporal ensembling, and (5) FedCA, and then a linear classifier is used to evaluate the performance of the frozen representation model with supervision. Fig. 5 shows the results.

Fig. 5 Ablation study of modules designed for FURL by linear classification on CIFAR-10 (ResNet-50). FURL: federated unsupervised representation learning; IID: independent and identically distributed

We observe that the alignment module improves the performance by 1.4% in both the IID and non-IID settings. With the help of the dictionary module (without temporal ensembling), there are 2.5% and 2.7% increases in accuracy under the IID and non-IID settings, respectively. Moreover, we note that the representation model learned in FURL benefits more from the temporal ensembling technique in the non-IID setting than in the IID setting, probably because the features learned in the IID setting are stable enough that temporal ensembling plays a far less important role there. The model achieves excellent performance when we combine federated contrastive learning with the alignment and dictionary modules based on temporal ensembling, which suggests that the two modules can work collaboratively and help tackle the challenges in FURL.

    5.3.2 Coefficient of alignment loss

To explore the effect of the coefficient of the alignment loss, β, we run our algorithm on the CIFAR-10 dataset (IID setting, five-layer CNN) with different values of the hyperparameter β.

The results are shown in Table 5. We find that the value of β has a slight effect on the performance of the federated representation model. The reason for the performance differences may be that a small value of β cannot make the local models aligned, so the performance of the aggregated model is degraded, while a large value of β limits the function of the contrastive loss, so the model's ability cannot be guaranteed. We suggest that, in practice, one should select an appropriate value for β on a small subset of data before the formal federated training.

    Table 5 Ablation study for coefficient of alignment loss β

    6 Conclusions

We formulate a significant and challenging problem, termed federated unsupervised representation learning (FURL), and identify its two main challenges (inconsistency of representation spaces and misalignment of representations). In this paper, we propose a contrastive-learning-based FL algorithm named FedCA, composed of a dictionary module and an alignment module, to tackle the above challenges. Owing to these two modules, FedCA enables distributed local models to learn consistent and aligned representations while protecting data privacy. Our experimental results demonstrate that FedCA outperforms algorithms that solely combine FL with unsupervised approaches and provides a stronger baseline for FURL.

In future work, we plan to extend FedCA to cross-modal scenarios in which different clients may hold data of different modalities, such as images, videos, text, and audio.

    Contributors

All authors contributed to the study conception and design. Fengda ZHANG, Chao WU, and Yueting ZHUANG proposed the motivation. Fengda ZHANG, Kun KUANG, and Long CHEN designed the method. Fengda ZHANG, Zhaoyang YOU, and Tao SHEN performed the experiments. Fengda ZHANG drafted the paper, and all authors commented on previous versions of the paper. Jun XIAO, Yin ZHANG, Fei WU, and Xiaolin LI revised the paper. All authors read and approved the final version.

    Compliance with ethics guidelines

Fei WU and Yueting ZHUANG are editorial board members of Frontiers of Information Technology & Electronic Engineering. Fengda ZHANG, Kun KUANG, Long CHEN, Zhaoyang YOU, Tao SHEN, Jun XIAO, Yin ZHANG, Chao WU, Fei WU, Yueting ZHUANG, and Xiaolin LI declare that they have no conflict of interest.

    Data availability

    The data that support the findings of this study are openly available in public repositories.
