
FedTC: A Personalized Federated Learning Method with Two Classifiers

Computers, Materials & Continua, 2023, Issue 9

Yang Liu, Jiabo Wang, Qinbo Liu, Mehdi Gheisari, Wanyin Xu, Zoe L. Jiang and Jiajia Zhang

1 School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, 518055, China

2 Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies, Shenzhen, 518055, China

3 Research Center for Cyberspace Security, Peng Cheng Laboratory, Shenzhen, 518055, China

ABSTRACT Centralized training of deep learning models poses privacy risks that hinder their deployment. Federated learning (FL) has emerged as a solution to address these risks, allowing multiple clients to train deep learning models collaboratively without sharing raw data. However, FL is vulnerable to the impact of heterogeneous distributed data, which weakens convergence stability and leads to suboptimal performance of the trained model on local data. This is because the old local model is discarded at each round of training, which results in the loss of personalized information in the model that is critical for maintaining model accuracy and ensuring robustness. In this paper, we propose FedTC, a personalized federated learning method with two classifiers that can retain personalized information in the local model and improve the model's performance on local data. FedTC divides the model into two parts, namely the extractor and the classifier, where the classifier is the last layer of the model and the extractor consists of the other layers. The classifier in the local model is always retained to ensure that the personalized information is not lost. After receiving the global model, the local extractor is overwritten by the global model's extractor, and the classifier of the global model serves as an additional classifier of the local model to guide local training. FedTC introduces a two-classifier training strategy to coordinate the two classifiers for local model updates. Experimental results on the Cifar10 and Cifar100 datasets demonstrate that FedTC performs better on heterogeneous data than existing methods such as FedAvg, FedPer, and local training, achieving a maximum improvement of 27.95% in classification test accuracy compared to FedAvg.

KEYWORDS Distributed machine learning; federated learning; data heterogeneity; non-independent identically distributed

    1 Introduction

Machine learning mines experience and knowledge from large amounts of data, allowing computers to think more innovatively. However, data is often distributed across different devices or departments, which makes machine learning models trained by a single party prone to overfitting due to insufficient local data. Therefore, collecting data from multiple parties at a computing center is often necessary for centralized model training. However, people have become increasingly concerned about personal data privacy in recent years. Relevant regulations have also been issued in some countries and regions to protect personal data, such as the General Data Protection Regulation (GDPR) [1] in Europe and the Personal Information Protection Law [2] in China. Due to privacy and security concerns and regulatory prohibitions, data owners are no longer willing to share their source data with the computing center.

Federated learning (FL) [3] has been proposed to solve the privacy issue in centralized model training. FL allows multiple clients to train a deep learning model collaboratively by exchanging model parameters with the server, avoiding the direct sharing of raw data. Currently, federated learning is a crucial privacy computing technology widely used in finance [4], healthcare [5], telecommunications [6], and other fields [7–10]. However, since the data are usually generated locally, their distribution is non-independent and identically distributed (Non-IID) across clients, also known as heterogeneous data. For example, the data held by different hospitals are often distributed differently due to their areas of expertise and geographical locations. The study [11] has demonstrated that conventional federated learning algorithms struggle to achieve stable convergence when dealing with Non-IID data, resulting in a substantial decrease in model quality.

Traditional federated learning algorithms train only one global model. However, the study in [12] showed that the local optimal point for an individual client's data and the global optimal point for all data are inconsistent, especially when the data are Non-IID. Therefore, some personalized federated learning algorithms have recently been proposed to train a personalized model for each client rather than a single global model. For example, studies such as Per-FedAvg [13], Ditto [14], and FedAMP [15] introduced a local regularization term to allow clients to focus more on local objectives and train personalized local models. However, introducing the regularization term brings considerable additional computational load. To achieve personalization in a more lightweight manner, the approach of partial layer sharing has been proposed in studies such as FedPer [16], FedRep [17], FedBABU [18], and LG-FedAvg [19]. These studies divide the neural network layers of the model into two parts: the shared layers and the personalized layers. In their methods, only the shared layers are uploaded to the server for aggregation in each training round, while the personalized layers are always trained locally. In this way, the personalized information of the classifier is retained, and thus the model performs better on local data. However, in these approaches, the shared model loses the information of the personalized layers that might benefit most clients [20].
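As an illustration of this idea, the sketch below (not code from any of the cited papers; the model architecture and split are assumptions) shows how a small PyTorch model can be divided into shared layers that are uploaded for aggregation and a personalized classifier that stays on the client.

```python
# Illustrative sketch (not code from the cited papers): how partial-layer-sharing
# methods such as FedPer conceptually split a model into shared layers that are
# uploaded for aggregation and a personalized classifier that stays on the client.
import torch.nn as nn

class SimpleCNN(nn.Module):
    """A small CNN for 32x32 images; the final linear layer is the personalized classifier."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.extractor = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.extractor(x))

def shared_state(model: SimpleCNN):
    # Shared layers: only these parameters would be sent to the server.
    return model.extractor.state_dict()

def personalized_state(model: SimpleCNN):
    # Personalized layer: kept and updated locally, never aggregated.
    return model.classifier.state_dict()
```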

Through our observations, we have found another reason for the loss of client personalization information: each client discards its old local model in every round of federated training. Specifically, within each training round of a traditional federated learning algorithm, the client performs the following steps after receiving the global model from the server. The client first discards the old local model and uses the received global model as the initial model for local training. The client then independently updates the model for several steps on the local data and uploads the updated local model to the server for aggregation. However, it is important to note that the old local model contains valuable personalized information specific to the client's data distribution. Discarding this information can negatively impact the test performance of the model on the client's data, particularly at the classifier layer, which plays a crucial role in making final predictions based on the features learned from the input data.

Based on the aforementioned observations, we propose FedTC, a federated learning method with two classifiers. Unlike the regularization-based personalized methods above, FedTC does not introduce regularization terms, which significantly reduces the computational load. Additionally, FedTC allows all layers to be shared, ensuring that no layer information is lost in the shared model. To achieve personalization, we employ a two-classifier training strategy in FedTC. Specifically, we define the last layer of the neural network as the classifier and the other neural network layers as the extractor. FedTC requires the client to upload the entire local model (extractor and classifier) to the server in each training round, as in FedAvg, to ensure that no information is left out. To ensure personalization, FedTC retains the local classifier instead of discarding it when obtaining the initial model for local training. To effectively use the global classifier, FedTC designs a two-classifier training strategy in which the global classifier acts as a second classifier to guide the update of the local model. Empirically, in the Non-IID setting, we demonstrate that FedTC performs better than FedAvg, FedPer, and Local.

    Our paper makes the following contributions:

(1) We analyzed the training process of federated learning and discovered that personalized information in the model is lost because old local models are discarded in each round of training.

(2) We propose a novel federated learning method called FedTC. FedTC does not use any regularization terms, thereby avoiding excessive computational overhead. Furthermore, FedTC allows all layers of the local model to be shared, ensuring that valuable information is not lost in the shared model. We also introduce a two-classifier training strategy into FedTC, which ensures personalization in federated learning and enhances its ability to handle heterogeneous data.

(3) Our experiments on the Cifar10 and Cifar100 datasets demonstrate that FedTC can improve model accuracy on Non-IID data. In three Non-IID settings, FedTC outperforms FedAvg, FedPer, and local training. Notably, in extremely Non-IID cases, FedTC achieves classification accuracy up to 27.95% higher than that of FedAvg.

The structure of this paper is as follows. In Section 2, we review relevant literature on federated learning. Section 3 formally describes classical federated learning algorithms and highlights their limitations. In Section 4, we propose the FedTC algorithm and describe it in detail, emphasizing the local training strategy that utilizes two classifiers. We evaluate our approach using the widely used Cifar10 and Cifar100 datasets in Section 5. Finally, in Section 6, we conclude and discuss future work.

    2 Related Work

The first federated learning algorithm is FedAvg [3], proposed by Google, which was used to train word prediction models without collecting users' raw data. In FedAvg, the client uploads model parameters instead of original data to the server in each training round. FedAvg can achieve the same effect as centralized model training when the participants' data are independent and identically distributed (IID). However, when the data are Non-IID, FedAvg struggles to converge stably, and the trained global model performs poorly on local data. Thus, many studies have made efforts to improve the performance of FedAvg on Non-IID data. Li et al. [21] introduced an L2 regularization term into the local objective function to limit the distance between the local and global models, making model convergence more stable. According to the study [22], "client drift" is a significant factor contributing to the deterioration of federated learning performance. To address this problem, they proposed SCAFFOLD, which utilizes control variates to correct client drift and improve federated learning performance. They also proved the effectiveness and convergence of SCAFFOLD through rigorous mathematical proof and experimental analysis. Zhao et al. [11] used the Earth Mover's Distance (EMD) to measure the difference in data distribution among clients. They found that model accuracy drops sharply when the EMD reaches a certain threshold. Therefore, they reduce the EMD by sending clients a subset of global data. Experiments on the Cifar10 dataset show that only 5% of globally shared data can improve accuracy by about 30%. Also, to make clients' data more IID, Jeong et al. [23] proposed a data augmentation strategy, FAug, based on Generative Adversarial Networks (GAN). In this method, each client uploads seed data of the labels lacking samples to the server. The server then oversamples these seed data and uses them to train a GAN model. Finally, the client downloads the GAN model from the server and generates the missing data locally. Overall, although these studies improved FedAvg, only one global model is trained in their approaches during the entire training process, which results in every client getting the same model at the end of training. However, in the case of Non-IID local data distributions among clients, training a single global model may not fit the diverse local data [24].

To address the issue of Non-IID local data distributions among clients, personalized federated learning has been proposed as an alternative approach. Specifically, custom models are trained for each client instead of using the same model for all clients. This approach allows for greater flexibility and can better accommodate diverse local data distributions. Researchers in [25] clustered clients according to the channel sparse vectors uploaded by the clients. Through clustering, the local data distributions of clients within the same cluster are more IID. In addition, Fallah et al. [13] introduced meta-learning into federated learning, where all clients work together to train a good initialization model. Each client then fine-tunes the model on its local data to obtain a personalized model suited to that data. Li et al. [26] used model distillation in federated learning and assumed that there was a large common dataset. During each training round, each client calculates the class scores on the common dataset and sends them to the server for average aggregation to obtain the global average class scores. Each client then downloads the global average class scores and performs model distillation to make the local class scores fit the global class scores. After receiving the trained model, the client continues training on its local dataset for several steps to obtain the final personalized model. However, these methods come with additional computational costs. In contrast, researchers in [16–19] proposed personalized federated learning methods based on partial layer sharing. In their methods, only a part of the neural network layers is uploaded to the server for aggregation, and the other parts only perform local updates. Although these methods are more lightweight, they lead to the information of some layers not being shared. Our method, FedTC, is also a personalized federated learning algorithm because the client ultimately obtains a personalized local model. However, it is worth noting that our method does not introduce additional computational load and ensures that all neural network layers are shared.

    3 Preliminaries

    3.1 Notations

In this paper, we consider a simple federated learning setup. Suppose there are N clients whose local training data are D1, D2, ..., DN, respectively. In each training round, fN clients are selected to participate in federated learning, where f is the client sampling rate.

    3.2 Classical Federated Learning

In classic federated learning algorithms such as FedAvg [3], a single shared model with parameters w is trained by all clients in coordination with the server. In each training round t, the training process of the classic federated learning algorithm can be divided into the following four stages:

(1) Client selection stage: The server selects fN clients to participate in the training round;

(2) Model distribution stage: The server distributes the latest global model $w^t$ to the selected clients;

(3) Local training stage: First, each client i initializes its local model after receiving the global model. The local model is initialized with the parameters of the global model, that is, $w_i^t = w^t$. Then, client i performs several local updates to the model on its local training data. Suppose a mini-batch of data $\xi \subset D_i$ is fed into the local model of client i; then the following parameter update is performed as Eq. (1):

$$w_i^t \leftarrow w_i^t - \eta \nabla L_i\left(w_i^t; \xi\right) \tag{1}$$

where $\eta$ is the local learning rate.

(4) Model aggregation stage: After all clients complete local training, they upload their local models to the server. The server collects these models and averages them, weighted by local data size, to obtain the global model of the next round as Eq. (2):

$$w^{t+1} = \sum_{i=1}^{N} \frac{|D_i|}{|D|}\, w_i^t \tag{2}$$

where, with client sampling, the sum runs over the selected clients and the weights are normalized accordingly.

The goal of classic federated learning algorithms is to obtain a global model $w^*$ trained over the global dataset $D = \cup_{i \in [N]} D_i$ that solves the objective in Eq. (3):

$$w^* = \arg\min_{w} L(w; D), \qquad L(w; D) = \sum_{i=1}^{N} \frac{|D_i|}{|D|} L_i(w_i; D_i) \ \text{ with } w_i = w, \tag{3}$$

where L(w; D) is the global empirical loss on the global training data D, L_i(w_i; D_i) is the local empirical loss on the local training data D_i, |D_i| is the number of samples in D_i, |D| is the number of samples in D, and w_i is the model parameter of client i.
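For concreteness, the following minimal PyTorch sketch (helper names and training details are assumptions, not the paper's code) implements the local update of Eq. (1) and the weighted aggregation of Eq. (2).

```python
# Minimal sketch of one FedAvg round following Eqs. (1)-(2); the model and data
# loaders are assumed to exist, and this is not the paper's implementation.
import copy
import torch
import torch.nn.functional as F

def fedavg_local_update(global_model, loader, lr=0.01, local_iters=5):
    """Eq. (1): overwrite the local model with the global one, then run local SGD."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(local_iters):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def fedavg_aggregate(client_states, client_sizes):
    """Eq. (2): average the uploaded parameters weighted by local dataset size."""
    total = sum(client_sizes)
    averaged = copy.deepcopy(client_states[0])
    for key in averaged:
        averaged[key] = sum(state[key].float() * (size / total)
                            for state, size in zip(client_states, client_sizes))
    return averaged
```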

However, when the local data are Non-IID, optimizing the above objective is no longer sufficient: the model that is optimal on the global data may not be optimal on each client's local data. The study [11] indicates that the convergence of federated learning is unstable and the accuracy decreases significantly when training on heterogeneous data.

    4 Method

Classical federated learning aims to train an optimal model on the global data. However, when data are heterogeneous, the global data distribution does not represent the local data distributions, so the global model may perform poorly on local data. Observing the whole training process of classical federated learning in the previous section, we can see that after receiving the global model, the client discards the entire local model and adopts the parameters of the latest global model. However, the local model retains personalized information that reflects the distribution of the client's data. Especially when the local data distribution is Non-IID, this personalized information often determines whether the model can perform well on the local dataset. Recently, some personalized federated learning (PFL) algorithms have been proposed. The goal of PFL is to collaboratively learn individual local models $w_1, w_2, \ldots, w_N$ for each client using $D_1, D_2, \ldots, D_N$. As shown in Eq. (4), PFL minimizes the following objective:

$$\min_{w_1, \ldots, w_N} \sum_{i=1}^{N} \frac{|D_i|}{|D|} L_i(w_i; D_i) \tag{4}$$

In particular, researchers in [16] realized personalization through partial layer sharing. Unlike classic federated learning, each client finally obtains a customized model due to the presence of the personalized layers in their methods. However, these approaches also result in the shared model lacking the information of the personalized layers. To ensure the personalization of the local model while effectively sharing information from all layers, we propose FedTC, a personalized federated learning method with two classifiers.

Similar to personalized federated learning methods based on partial layer sharing, we consider the deep neural network to consist of two components: the extractor and the classifier. In our work, the last linear layer of the deep learning model acts as the classifier, while the other layers make up the extractor. In our proposed approach, the local model's classifier is always updated locally. To efficiently utilize the information of the classifier, the parameters of both the classifier and the extractor are uploaded to the server for aggregation in the model aggregation stage. The server then aggregates the local models uploaded by the clients to obtain the latest global model, and the entire global model is distributed to the clients during the model distribution stage. The study [27] indicated that the extractor usually contains more beneficial information, so the global extractor replaces the local extractor before local training in FedTC. The global model's classifier serves as the client's second classifier to guide the training of the local model. Fig. 1 shows the basic training steps of FedTC, the classical federated learning algorithm FedAvg, and the partial-layer-sharing-based algorithm FedPer. For FedAvg, FedPer, and FedTC, each training round is divided into four steps: (1) the server distributes the global model to the K selected clients; (2) the client initializes the local model (FedAvg replaces all local model parameters with the global model parameters, FedPer replaces the parameters of the shared layers with the global model parameters, and FedTC replaces the parameters of the local extractor with the parameters of the global extractor); (3) local training (FedAvg and FedPer use a one-classifier mode for local updates, while FedTC uses a two-classifier mode); (4) the central server aggregates the models (FedAvg and FedTC aggregate the parameters of the entire model, while FedPer aggregates only the parameters of the shared layers).
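The initialization step described above can be sketched as follows, reusing the extractor/classifier split from the earlier sketch; the function name and structure are illustrative assumptions rather than the paper's code.

```python
# Sketch of the FedTC initialization step described above (assumes the model
# exposes `extractor` and `classifier` submodules as in the earlier sketch).
import copy

def fedtc_init_local(local_model, global_model):
    # Overwrite only the local extractor with the global extractor.
    local_model.extractor.load_state_dict(global_model.extractor.state_dict())
    # The local classifier is kept as-is, so personalized information is retained.
    # The global classifier is kept as a second, "shared" classifier that will
    # guide local training.
    shared_classifier = copy.deepcopy(global_model.classifier)
    return local_model, shared_classifier
```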

Algorithm 1 provides the pseudocode for FedTC. First, before federated learning starts, the server selects a portion of the clients (in this paper, the portion is set to 100% by default), and the initialized global model is distributed to the selected clients. Then, client and server updates are executed alternately in the following manner.

Figure 1: Overview of the training process for each round of FedAvg, FedPer, and FedTC

Figure 2: The local model consists of three parts: the shared extractor, the shared classifier, and the local classifier

where η_e is the learning rate for the shared extractor.

As mentioned above, the parameters of the local extractor do not change while the shared classifier is updated, so the local extractor only needs to be forward-propagated once. When updating the local extractor, the extractor output already obtained during the local classifier update can be fed directly into the shared classifier without computing it again.
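Since the exact update rules (Eqs. (5) and (6)) are not reproduced here, the sketch below shows one possible reading of the two-classifier local update: the extractor is forward-propagated once, the local classifier is updated on the detached features, and the shared classifier together with the extractor (with its own learning rate η_e) is then updated on the same cached features. Every name and the exact loss arrangement are assumptions.

```python
# One possible reading of the two-classifier local update; all details here are
# assumptions for illustration, not the paper's exact rules from Eqs. (5)-(6).
import torch
import torch.nn.functional as F

def fedtc_local_step(model, shared_classifier, x, y, lr_cls=1e-4, lr_ext=0.01):
    opt_local_cls = torch.optim.SGD(model.classifier.parameters(), lr=lr_cls)
    opt_shared = torch.optim.SGD(
        [{"params": shared_classifier.parameters(), "lr": lr_cls},
         {"params": model.extractor.parameters(), "lr": lr_ext}],  # eta_e
        lr=lr_ext)

    # Single forward pass through the local extractor; the output is reused below.
    features = model.extractor(x)

    # Update the local (personalized) classifier; the extractor does not change here.
    loss_local = F.cross_entropy(model.classifier(features.detach()), y)
    opt_local_cls.zero_grad()
    loss_local.backward()
    opt_local_cls.step()

    # Update the shared classifier and the extractor on the same cached features,
    # avoiding a second forward pass through the extractor.
    loss_shared = F.cross_entropy(shared_classifier(features), y)
    opt_shared.zero_grad()
    loss_shared.backward()
    opt_shared.step()
    return loss_local.item(), loss_shared.item()
```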

Server executes: After all the clients have completed their local updates, they upload their local model parameters to the server. Assume that K clients participate in the t-th round of training, where K = fN, the number of local samples of client k is n_k, and the total number of local samples is n. After receiving the model parameters uploaded by the clients, the server aggregates these parameters to obtain the global model of the next round as Eqs. (7) and (8):

$$w_e^{t+1} = \sum_{k=1}^{K} \frac{n_k}{n}\, w_{e,k}^{t} \tag{7}$$

$$w_c^{t+1} = \sum_{k=1}^{K} \frac{n_k}{n}\, w_{c,k}^{t} \tag{8}$$

where $w_{e,k}^{t}$ and $w_{c,k}^{t}$ are the extractor and classifier parameters uploaded by client k in round t.

After computing the global model for the next round, the server selects a portion of the clients to perform the next round of federated training.
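A minimal sketch of this server-side aggregation, assuming the uploaded state dictionaries use "extractor." and "classifier." key prefixes as in the earlier sketches:

```python
# Sketch of the server aggregation in Eqs. (7)-(8); key prefixes and structure
# are assumptions carried over from the earlier illustrative model.
import torch

def fedtc_aggregate(client_states, client_sizes):
    n = sum(client_sizes)
    aggregated = {key: torch.zeros_like(value, dtype=torch.float32)
                  for key, value in client_states[0].items()}
    for state, n_k in zip(client_states, client_sizes):
        for key, value in state.items():
            # Both extractor and classifier parameters are aggregated with weight n_k / n,
            # so no layer's information is dropped from the shared model.
            aggregated[key] += value.float() * (n_k / n)
    global_extractor = {k: v for k, v in aggregated.items() if k.startswith("extractor.")}
    global_classifier = {k: v for k, v in aggregated.items() if k.startswith("classifier.")}
    return global_extractor, global_classifier
```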

5 Experimental Results and Discussion

    5.1 Datasets and Settings

Federated Datasets. To validate the effectiveness of the proposed method, we perform simulated federated learning experiments on Cifar10 and Cifar100, two popular public datasets for image classification. To better simulate the Non-IID distribution of client datasets, we use the popular data partitioning method based on the Dirichlet distribution to allocate data to each client [28–30]. For convenience, we use Dir(α) to denote such a partitioning strategy, where α is a hyperparameter that controls the Non-IID degree of the data distribution. When α is larger, the local data distributions tend to be IID; when α is smaller, they tend to be Non-IID. We randomly divide the data on each local device into 75% for the training set and 25% for the test set.
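The Dirichlet-based partitioning can be sketched as follows (an illustrative implementation, not necessarily the exact code of [28–30]): for each class, a proportion vector over the clients is drawn from Dir(α) and that class's samples are split accordingly.

```python
# Illustrative Dirichlet-based Non-IID partitioning Dir(alpha).
import numpy as np

def dirichlet_partition(labels, num_clients=10, alpha=0.5, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Smaller alpha -> more skewed proportions -> more Non-IID clients.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cut_points = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, part in enumerate(np.split(idx, cut_points)):
            client_indices[client_id].extend(part.tolist())
    return client_indices
```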

Implementation. All our algorithms are implemented based on the open-source project PFL-Non-IID from Zhang et al. [31]. We used PyTorch to perform our experiments on NVIDIA GeForce RTX 3090 GPUs. For the Cifar10 dataset, we used the same convolutional neural network (CNN) model setup as the study [3], while for the Cifar100 dataset, we used the ResNet18 network [32]. In all experiments, we set the client sampling rate to 1.0 for each round of federated learning, similar to recent works [33–35], i.e., all clients are involved in every round. The number of local training iterations is set to 5, and the local batch size is set to 64 by default. We used stochastic gradient descent (SGD) to optimize the neural network parameters, with the weight decay set to 1e-5 and the momentum parameter set to 0.9. In FedTC, the learning rate of the extractor is set to 0.01 and the learning rate of the classifier is set to 1e-4. The learning rate of FedAvg, FedPer, and Local is set to 0.01 for fairness. Li et al. [36] proved that learning rate decay is necessary, so we set the learning rate decay to 0.9 in all experiments. We set the number of clients to 10 in all experiments [37].
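The optimizer setup described above can be expressed with PyTorch parameter groups, as in the sketch below; the values are taken from the text, while the scheduler choice (ExponentialLR) is an assumption for the stated per-round decay of 0.9.

```python
# Sketch of the optimizer configuration described above (scheduler choice assumed).
import torch

def build_fedtc_optimizer(model):
    optimizer = torch.optim.SGD(
        [{"params": model.extractor.parameters(), "lr": 0.01},   # extractor learning rate
         {"params": model.classifier.parameters(), "lr": 1e-4}], # classifier learning rate
        lr=0.01, momentum=0.9, weight_decay=1e-5)
    # Multiplicative learning-rate decay of 0.9 applied each round.
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
    return optimizer, scheduler
```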

Evaluation Metrics. As we are conducting image classification experiments, we have chosen image classification accuracy as our evaluation metric. In each round of training, when a client receives the global model and initializes its local model, the client tests the initialized local model on its local test dataset. Then, for each selected client in a round, client i counts the total number of test samples Ts_i and the number of correctly classified samples Tc_i. The local test accuracy Acc_i is calculated using Eq. (9). For the global test accuracy, we sum the correctly classified samples over all clients as well as the total number of test samples over all clients, and then calculate the overall test accuracy ACC using Eq. (10):

$$Acc_i = \frac{Tc_i}{Ts_i} \tag{9}$$

$$ACC = \frac{\sum_{i=1}^{K} Tc_i}{\sum_{i=1}^{K} Ts_i} \tag{10}$$

where K represents the number of selected clients, Tc_i represents the number of correctly classified samples for client i, and Ts_i represents the total number of test samples for client i.
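A small sketch of the metric computation in Eqs. (9) and (10) (an illustrative helper, not the paper's evaluation code):

```python
# Per-client accuracy Acc_i (Eq. (9)) and overall accuracy ACC pooled over
# all selected clients (Eq. (10)).
def local_and_global_accuracy(correct_per_client, total_per_client):
    local_acc = [tc / ts for tc, ts in zip(correct_per_client, total_per_client)]
    global_acc = sum(correct_per_client) / sum(total_per_client)
    return local_acc, global_acc

# Example: two clients with 90/100 and 40/50 correct predictions give
# local accuracies [0.9, 0.8] and an overall ACC of 130 / 150 ≈ 0.867.
```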

    5.2 Evaluate the Test Performance of the FedAvg Algorithm on Local Test Dataset

Since all clients share a single global model throughout the FedAvg training process, many previous researchers have typically assumed that the server holds a portion of test data that matches the global data distribution. In their research, the global model is evaluated on the global test data after the model aggregation stage, but the model's performance on local datasets is not considered. When the client data distribution is IID, the data distribution of each client matches the global data distribution, so the accuracy on the global test data is usually similar to the accuracy on the local test data. However, when the local data distribution is Non-IID, the global data distribution does not represent the data distribution of each client. Therefore, the accuracy of the global model is not representative of its performance on each client's local data.

We conducted experiments on the Cifar10 dataset to verify the above claims, training a classification model using the FedAvg algorithm and performing local tests. We considered different values of the hyperparameter α, namely 0.1, 0.5, and 5.0. As shown in Table 1, we tested the trained model on the local datasets of 10 clients and computed the standard deviation δ of their local test accuracies. The "global" entry in the table indicates the model's test accuracy on the entire dataset. We observed that Non-IID data leads to a decrease in both the global and local test accuracies of the FedAvg algorithm. Specifically, when α = 0.1, the global test accuracy drops by 7.73% compared to when α = 5.0. Additionally, we noticed that as the Non-IID degree increases, the differences in local test accuracies among the clients become more pronounced. In particular, when α = 0.1, the local test accuracy of client 2 differs from that of client 8 by 22.12%. Therefore, a single model trained using FedAvg may not be suitable for scenarios where the client data is highly Non-IID.

Table 1: Local test accuracy (%), standard deviation, and global test accuracy (%) of FedAvg on each client's local test data on the Cifar10 dataset

    5.3 Compare the Performance of FedAvg and Local Training

Local training can be viewed as a highly personalized setup of federated learning, where each client trains the model only on its local dataset and no data is shared between clients. Traditional federated learning algorithms that train a single model are rarely compared to local training, as the test data is usually on the server side. However, the performance improvement on local data may affect a client's interest in participating in federated learning. Therefore, we also compared FedAvg and Local on the Cifar10 dataset.

The experimental results are presented in Fig. 3, where the x-axis represents the value of α and the y-axis represents the global testing accuracy of the corresponding algorithm. It can be observed that the global model trained by FedAvg outperforms Local when α = 5.0. As the Non-IID level increases, the accuracy of FedAvg's model decreases, while the accuracy of the locally trained models gradually increases. At α values of 0.5 and 0.1, the accuracy of the model trained by FedAvg decreases significantly and is lower than that of the locally trained model for each client. To further investigate the reasons for the accuracy fluctuations of these two algorithms, we visualize the local data distribution of the clients. As shown in Fig. 4, the bar chart shows the local data distribution of each client when Cifar10 is partitioned into 10 clients based on α values of 0.1, 0.5, and 5.0, where the height of each bar represents the number of samples. When α is large, the number of samples of each label is uniform among clients, enabling FedAvg to perform better. However, as α decreases, the data distributions across clients become more and more different, and the single model trained by FedAvg is no longer suitable for all clients' local data. At the same time, a decreasing value of α means that each client holds more samples of a few specific classes. For example, when α = 0.1, the samples of labels 3, 8, and 9 make up a large proportion of client 9's local data, which makes its local classification task simple. Thus, in the case of highly Non-IID client data distributions, a good model can be obtained solely through local training, and it may outperform the model trained through FedAvg. Therefore, it is crucial to enhance the performance of FedAvg on Non-IID data to prevent potential discouragement of user participation in federated learning. In addition, we found that local training may outperform FedAvg in some extremely Non-IID scenarios, indicating that local models may contain personalized information that enables them to perform better on local data.

Figure 3: Test accuracy of FedAvg and Local on Cifar10 for α values of 0.1, 0.5, and 5.0

Figure 4: The local data distribution of each client when Cifar10 is divided into 10 clients according to α values of 0.1, 0.5, and 5.0

    5.4 Evaluation of FedTC

To demonstrate the effectiveness of our proposed method, we compare FedTC with FedAvg, FedPer, and Local (local training). FedAvg is a classic federated learning algorithm and the baseline for comparison in many studies. FedPer is a personalized federated learning algorithm based on partial layer sharing, which keeps the client model's classifier parameters local and uploads only the feature extractor parameters to the server for aggregation to ensure local personalization. Local is implemented the same way as FedAvg, except that model aggregation and model distribution are removed. In our experiments, the number of clients was 10. We divided the Cifar10 and Cifar100 datasets into 10 parts as the local data of each client according to α values of 0.1, 0.5, and 5.0.

The experimental results are presented in Table 2. As the Non-IID degree increases, all algorithms except Local exhibit a decrease in accuracy; Local does not, which is expected because a higher Non-IID degree makes each client's local task easier. FedAvg performs worse than Local in all cases except α = 5.0, indicating that FedAvg is not proficient in handling Non-IID data. FedPer preserves the personalized information of local models through partial layer sharing, and hence its accuracy is significantly higher than FedAvg's at α = 0.1 or 0.5. This indicates that preserving personalized information is an effective way to improve model performance in Non-IID scenarios. However, FedPer's accuracy is slightly lower than FedAvg's at α = 5.0 because the information of some layers is not shared. FedTC, proposed in this work, preserves the personalized information of local models through a two-classifier training strategy while retaining all layer information in the shared model. Therefore, FedTC achieves the highest accuracy in all scenarios. Notably, in the extreme Non-IID scenario of α = 0.1, where FedAvg performs the worst, FedTC achieves up to 27.95% higher test accuracy than FedAvg.

Table 2: Test accuracy (%) of Local, FedTC, FedPer, and FedAvg on the Cifar10 and Cifar100 datasets

    6 Conclusion and Future Work

In this study, we propose FedTC, a personalized federated learning algorithm that addresses the problem of Non-IID local data distributions. FedTC introduces almost no additional computational complexity compared to previous methods and ensures that all layers of the client's local model are shared. To maintain personalization, we present a two-classifier training strategy that ensures the classifier of the local model is not discarded before each round of local training. Our extensive experiments show that FedTC outperforms many federated learning algorithms in Non-IID scenarios. However, we acknowledge that our current work only considers simple federated learning settings, and future work will need to address the challenge of applying it to more complex scenarios. Additionally, techniques such as generative adversarial networks and model compression can be considered to further improve model accuracy and reduce communication costs.

Acknowledgement: The authors thank Lu Shi and Weiqi Qiu for discussions that helped to improve this work.

Funding Statement: This research was funded by Shenzhen Basic Research (Key Project) (No. JCYJ20200109113405927), Shenzhen Stable Supporting Program (General Project) (No. GXWD20201230155427003-20200821160539001), Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies (2022B1212010005), Peng Cheng Laboratory Project (Grant No. PCL2021A02), and the Ministry of Education's Collaborative Education Project with Industry Cooperation (No. 22077141140831).

Author Contributions: The authors confirm their contribution to the paper as follows: study conception and design: Y. Liu, J. Wang; data collection: J. Wang, W. Xu, J. Zhang; analysis and interpretation of results: J. Wang, Y. Liu, Z. L. Jiang; draft manuscript preparation: Y. Liu, J. Wang, Q. Liu, M. Gheisari. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: All data incorporated in this study can be accessed by contacting the corresponding author upon request.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
