
Decentralized Heterogeneous Federated Distillation Learning Based on Blockchain

    Computers, Materials & Continua, 2023, Issue 9

    Hong Zhu, Lisha Gao, Yitian Sha, Nan Xiang, Yue Wu and Shuo Han

    Nanjing Power Supply Branch, State Grid Jiangsu Electric Power Co., Ltd., Nanjing, 210000, China

    ABSTRACT Load forecasting is a crucial aspect of intelligent Virtual Power Plant (VPP) management and a means of balancing the relationship between distributed power grids and traditional power grids. However, because power consumption peaks keep emerging, the power supply quality of the grid cannot be guaranteed. An intelligent calculation method is therefore required to predict the load effectively, enabling better power grid dispatching and ensuring the stable operation of the power grid. This paper proposes a decentralized heterogeneous federated distillation learning algorithm (DHFDL) to promote trusted federated learning (FL) between different federates on the blockchain. The algorithm comprises two stages: common knowledge accumulation and personalized training. In the first stage, each federate on the blockchain is treated as a meta-distribution; after the knowledge of each federate is aggregated cyclically, the model is uploaded to the blockchain. In the second stage, other federates on the blockchain download the trained model for personalized training. Both stages are based on knowledge distillation. Experimental results demonstrate that the proposed DHFDL algorithm can resist a higher proportion of malicious nodes than FedAvg and a Blockchain-based Federated Learning framework with Committee consensus (BFLC). Additionally, by combining asynchronous consensus with the FL model training process, DHFDL achieves the shortest training time and improves the training efficiency of decentralized FL.

    KEYWORDS Load forecasting; blockchain; distillation learning; federated learning; DHFDL algorithm

    1 Introduction

    With natural environmental problems becoming increasingly prominent, the constraints on fossil energy are tightening, and it is imperative to develop renewable energy vigorously and promote national energy transformation. However, because of its small capacity, decentralized layout, and strongly random output, distributed renewable energy affects the security and reliability of the grid when it is connected alone, so it is difficult for it to participate in the power market as an independent entity. The large-scale integration of distributed power grids requires intelligent centralized management to coordinate the power grid effectively. As an important form of intelligent management, the VPP [1] not only fosters enhanced interaction among users but also bolsters the stability of the power grid. However, the continuous emergence of power consumption peaks increases the load of virtual power plants and affects the grid's power supply quality. Consequently, the VPP requires a sophisticated computational approach to predict load demands accurately, thereby enabling more efficient power grid dispatching and management.

    In the load forecasting business, each VPP platform accumulates a large amount of enterprise power consumption data, and the accuracy of power load forecasting can be improved through cross-domain collaborative computing on these data. Traditional machine learning requires centralized training of the data. However, the cross-domain transmission of data within the power grid suffers from problems such as data theft, data tampering, unclear separation of data rights and responsibilities, and low transmission efficiency. The power grid has the right to access an enterprise's power data and view it internally, but it does not have the right to disclose the data. If the power data of an enterprise is stolen, sold, or disclosed during transmission, the credibility of the power grid suffers a great blow. At the same time, the rights and responsibilities associated with cross-domain data need to be clarified, and different branches pay different levels of attention to the same data, which may further increase the risk of data leakage.

    As a form of privacy-preserving distributed machine learning, FL can protect data security and keep data rights and responsibilities consistent; it is secure collaborative computing with clear data ownership. It also eliminates data transmission links and reduces the energy consumption of collaborative computing, making it a kind of green collaborative computing. FL ensures that the local data owned by the participants stay within the participants' control while joint model training is conducted, and it can mitigate problems such as data islands and data privacy. At present, FL has been widely used in various fields.

    However, existing FL primarily relies on a parameter server to generate or update the global model parameters, which is a typical centralized architecture and suffers from single points of failure, privacy disclosure, performance bottlenecks, and similar problems. The credibility of the global model depends on the parameter server and is subject to a centralized credit model. In traditional FL, multiple participants cooperate to train a global model under a trusted centralized parameter server while their data never leave the local area. The server collects local model updates, performs update aggregation, maintains global model updates, and carries out other centralized operations, so the entire training process is vulnerable to server failures. A malicious parameter server can even poison the model, generate inaccurate global updates, and then distort all local updates, making the entire collaborative training process erroneous. In addition, some studies have shown that unencrypted intermediate parameters can be used to infer important information in the training data, exposing the private data of the participants. Therefore, during model training, it is particularly important to encrypt local model updates appropriately and to maintain the global model on distributed nodes. As a distributed shared ledger jointly maintained by multiple parties, the blockchain establishes trust between participants without relying on the credit endorsement of a trusted third party, through the combined innovation of technologies such as distributed ledgers, cryptographic algorithms, peer-to-peer communication, consensus mechanisms, and smart contracts. It can replace the parameter server in FL and store the relevant information of the model training process.

    In peer-to-peer cooperative computing scenarios, traditional centralized FL has the disadvantages of low communication efficiency, slow aggregation, and insecure, untrustworthy aggregation. First, the aggregation node consumes a large amount of computing and communication resources, but among peer entities the benefits are equal, and no entity is willing to take on the aggregation task and bear the redundant responsibilities and resource consumption. Secondly, the aggregation process is subject to malicious attacks. On the one hand, an aggregation node can maliciously reduce the aggregation weight of a cooperating party so that the global model deviates from that party's local model, achieving a targeted attack; on the other hand, the aggregation node can retain the correct model and distribute a tampered model, achieving a global attack. Finally, the global model trained by the aggregation node predicts poorly for a single party and cannot be personalized. In practical applications, because of data heterogeneity and distrust in (or the absence of) a central server, different federations often cannot work together.

    To address these issues, this paper proposes a decentralized asynchronous federated distillation learning algorithm. Through circular knowledge distillation, the personalized model of each federation is obtained without a central server, and the trained model is then uploaded to the blockchain so that other federations on the chain can download it for local training.

    Our contributions are as follows:

    a) We propose an asynchronous federated distillation algorithm that integrates blockchain and federated learning; it accumulates public knowledge from different federations without violating privacy and implements a personalized model for each federation through adaptive knowledge distillation.

    b) Asynchronous consensus is combined with FL to improve the efficiency of model uplink.

    c) Comparison among the FedAvg algorithm, the BFLC [2] algorithm, and the proposed DHFDL algorithm shows that DHFDL, which aggregates models on the chain asynchronously through asynchronous consensus, takes the shortest training time and is the most efficient.

    2 Related Work

    2.1 Federated Learning

    FL [3] was launched by Google in 2016 to address data privacy and data islands in AI. Its essence is that a central server pushes the global model to the multiple data parties participating in FL, and the model is trained at those parties. Each data party transmits its local training updates to the central server, which aggregates these updates to generate a new global model and pushes it back to the data parties. The architecture of FL is shown in Fig. 1.

    Figure 1: General FL architecture

    To make full use of the data of different independent clients while protecting data privacy and security, Google proposed the first FL algorithm, FedAvg, to summarize client information. FedAvg trains the machine learning model by aggregating updates from distributed mobile phones and exchanging model parameters rather than exchanging data directly, and it solves the data island problem well in many applications. However, plain FedAvg cannot meet the demands of complex real-world scenarios: under statistical data heterogeneity, FedAvg may converge slowly and incur large communication costs, and because only a shared global model is obtained, the model may degrade when making predictions for a personalized client. Reference [4] combined three traditional adaptation techniques into the federated model: fine-tuning, multi-task learning, and knowledge distillation. Reference [5] attempted to deal with feature shift between clients by retaining local batch normalization parameters, which can represent client-specific data distributions. Reference [6] proposed introducing knowledge distillation into FL so that FL achieves better results when the local data distributions are non-Independent and Identically Distributed (Non-IID). Reference [7] evaluated the model accuracy and stability of FL under Non-IID datasets. Reference [8] proposed an open research library that allows researchers to compare the performance of FL algorithms fairly; the library also promotes research on various FL algorithms through a flexible and general Application Programming Interface (API) design. Reference [9] proposed a sustainable user incentive mechanism in FL that dynamically distributes a given budget among the data owners in the federation, accounting for the received revenue and the waiting time before revenue is received, by maximizing the collective utility, thereby minimizing the inequality among data owners. Reference [10] proposed a new problem called federated unsupervised representation learning, which uses unlabeled data distributed across different data parties: unsupervised methods learn from the data on each node while protecting user data privacy. A new method based on dictionaries and alignment was also proposed to realize unsupervised representation learning.
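    As a reference point for the baseline compared against later in this paper, the following is a minimal sketch of the FedAvg aggregation step, i.e., a data-size-weighted average of client model parameters. The function and variable names are illustrative, not taken from any particular implementation.

```python
import torch

def fedavg_aggregate(local_states, num_samples):
    """Weighted average of client state_dicts, as in FedAvg."""
    total = float(sum(num_samples))
    avg = {}
    for key in local_states[0]:
        # Weight each client's tensor by its share of the total training data.
        avg[key] = sum(s[key].float() * (n / total)
                       for s, n in zip(local_states, num_samples))
    return avg

# Toy usage: two clients holding 60 and 40 samples respectively.
client_a = {"w": torch.tensor([1.0, 1.0])}
client_b = {"w": torch.tensor([3.0, 3.0])}
print(fedavg_aggregate([client_a, client_b], [60, 40]))  # {'w': tensor([1.8, 1.8])}
```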

    The purpose of FL is to train deep learning models while ensuring user privacy. However, Reference [11] proved that the gradients transmitted as model updates in general FL can disclose data, so the risk of data privacy disclosure remains. Research on FL security is therefore also a valuable direction, and some results are summarized below. Reference [12] proposed introducing a differential privacy algorithm into FL to construct fake datasets whose distribution resembles the real data, improving the protection of real data privacy. Reference [13] proposed applying secure multi-party computation (SMC) and differential privacy simultaneously and balancing the two so that FL achieves better inference performance while retaining the security brought by differential privacy. Reference [14] proposed an algorithm combining secret sharing with Top-K gradient selection, which balances user privacy protection against user communication overhead, reducing communication overhead while ensuring user privacy and data security and improving model training efficiency.

    2.2 Knowledge Distillation

    Knowledge distillation is a technique that extracts valuable insights from complex models and condenses them into a single, streamlined model, enabling deployment in real-world applications. Knowledge distillation [15] is a knowledge transfer and model compression algorithm proposed by Geoffrey Hinton et al. in 2015. For a specific task, a knowledge distillation algorithm can transfer the knowledge contained in a well-trained teacher network to a smaller, untrained student network.

    In this paper, the loss function $L_{student}$ of the student network can be defined as:

    $$L_{student} = L_{CE}\left(p_{student},\, y\right) + T^{2}\, L_{KL}\left(p_{teacher} \,\Vert\, p_{student}\right), \qquad p = \mathrm{softmax}\left(z/T\right)$$

    where $y$ denotes the ground-truth label, $L_{CE}$ is the cross-entropy loss function, $L_{KL}$ is the Kullback-Leibler (KL) divergence, $p_{student}$ and $p_{teacher}$ are the outputs of the student and teacher networks after the softmax activation function, $z$ is the output logits of the neural network, and $T$ is the temperature, which is generally set to 1. The primary purpose of the temperature is to reduce the loss of knowledge contained in small-probability outputs caused by excessive probability differences. The KL divergence measures the difference between two models: the larger the KL divergence, the more significant the distribution difference between the models, and the smaller the KL divergence, the smaller the distribution difference between the two models. The formula of the KL divergence is:

    $$L_{KL}\left(P \,\Vert\, Q\right) = \sum_{x} P(x)\,\log\frac{P(x)}{Q(x)}$$

    where $P(x)$ and $Q(x)$ respectively represent the outputs of the different networks after the softmax activation function.
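    For reference, a minimal PyTorch sketch of this loss is given below. The weighting factor `alpha` between the cross-entropy and distillation terms is an illustrative assumption (the exact weighting is not stated in the text); `alpha = 0.5` simply balances the two terms.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=1.0, alpha=0.5):
    """Cross entropy on hard labels plus temperature-scaled KL to the teacher."""
    # Cross entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Softened distributions at temperature T.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # KL(p_teacher || p_student); T^2 keeps the gradient scale comparable to CE.
    kl = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kl
```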

    2.3 Federated Learning Based on Blockchain

    Reference [16] proposed a trusted sharing mechanism that combines blockchain and FL to achieve data sharing, protecting private data and ensuring trust in the sharing process. Reference [2] proposed BFLC, a blockchain-based FL framework that uses committee consensus. This framework stores global and local models on the blockchain to secure the FL process and uses a special committee consensus to reduce malicious attacks. Reference [17] designed a blockchain-based FL architecture that includes multiple miners, using blockchains to coordinate FL tasks and store global models. The process is as follows: nodes download the global model from the associated miner, train it, and then upload the trained local model as a transaction to the associated miner. The miner confirms the validity of the uploaded transaction, verifies the accuracy of the model, and stores the confirmed transaction in its candidate block. Once the candidate block has collected enough transactions or has waited long enough, all miners enter the consensus stage together, and the winner of the Proof-of-Work publishes its candidate block on the blockchain. In addition, miners can allocate rewards to encourage devices to participate in FL when they publish blocks on the blockchain. The recently emerged directed acyclic graph-based FL framework [18] builds an asynchronous FL system on the asynchronous bookkeeping of directed acyclic graphs to solve the device asynchrony problem in FL.

    Reference [19] proposed enhancing the verifiability and auditability of the FL training process through blockchain, but uploading models to the chain synchronously through committee validation is inefficient. Reference [20] proposed data sharing based on blockchain and zero-knowledge proofs, but it is not suitable for computing and data sharing with complex models. Reference [21] proposed a verifiable query layer to guarantee data trustworthiness, and its multi-node model verification mechanism is well suited to FL.

    3 Method

    3.1 Problem Analysis

    Under ideal conditions, current decentralized FL solutions have been shown to work well. In real scenarios, however, issues such as model training speed and federated learning security still pose significant challenges to existing decentralized FL algorithms. Regarding training speed, synchronous model aggregation slows down model updates when the devices of the participants differ in performance. Regarding security, decentralized federated learning faces not only data poisoning by malicious nodes but also information tampering: malicious nodes undermine the security of FL by tampering with the model parameters or gradients in transit. Different nodes also have different computing and communication resources for FL. In a conventional centralized synchronous FL system, a single node must wait for the other nodes to finish their tasks; only after all nodes complete their training tasks can they enter the next round together. If a node goes offline during training, it may completely invalidate a round of FL.

    Blockchain is a decentralized asynchronous data storage system. All transactions verified in the blockchain are permanently stored in blocks and cannot be tampered with. In addition, the blockchain uses a consensus algorithm to verify transactions, which effectively prevents malicious nodes from tampering with transaction information. In decentralized FL, the blockchain's asynchronous consensus can be used to accelerate model aggregation and thus improve training speed. Therefore, this paper introduces blockchain technology into a decentralized FL framework so that indicators such as network communication load and FL security reach better standards.

    3.2 Architecture Design

    Because there is no central server among the different federal bodies, the key is to enable them to share knowledge without the involvement of other administrators and without directly exchanging data. The objective of the blockchain-based FL architecture is to accumulate public knowledge through knowledge distillation while preserving data privacy and security and storing personalized information. As shown in Fig. 2, the decentralized heterogeneous federated distillation learning architecture is divided into the bottom algorithm application layer, the blockchain layer for trustworthy model broadcasting and endorsement, and the asynchronous federated distillation learning part for model training.

    On the blockchain, we design two different types of blocks to store public models and local models. FL training relies only on the latest model blocks, while historical blocks are kept for fault fallback and block validation. The data storage structure on the blockchain is shown in Fig. 3.

    The public model block is created in the common knowledge accumulation phase. In this phase, nodes use local data for model training and then access the blockchain to obtain the latest public model. The public model acts as a teacher to enhance the local model through knowledge distillation, and the local model that has completed knowledge distillation is chained as a new public model block. When the public model blocks with the same TeacherID accumulate to a certain number, the model aggregation smart contract is triggered to generate a new public model block. The public model block includes the block header, the TeacherID, the ID of this model, the model evaluation score, IsPublic, IsAggregation, and the model parameters.

    Figure 3: Data storage structure on blockchain

    Local model blocks are created in the personalization phase. When the accuracy of the public model reaches a threshold, the subsequent participating nodes trigger the personalization training smart contract and enter the personalization training phase. The public model obtained in the previous phase is used as a pre-trained model input in the personalization phase to improve each node's local model, and no new public model blocks are generated. The participating nodes in this phase first perform local model training, then download the public model and use it as a teacher to fine-tune the local model through distillation learning. The fine-tuned local model is uploaded to the chain as a new personalized model block. Nodes download each new personalized model block, verify the personalized models uploaded by other nodes, and try to use them to fine-tune the local model; the fine-tuned personalized model is also uploaded to the blockchain for knowledge sharing. The personalized model block includes the block header, the TeacherID, the ID of this model, IsPublic, the UserID of the user to which the model belongs, the model evaluation score, and the model parameters.
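    To make the two block layouts concrete, the following sketch expresses the fields listed above as Python dataclasses. The field names follow the description in the text, while the block-header contents and the serialized-parameter representation are simplifying assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class BlockHeader:
    prev_hash: str      # hash of the previous block (assumed header content)
    timestamp: float    # creation time (assumed header content)

@dataclass
class ModelBlock:
    header: BlockHeader
    teacher_id: str                 # TeacherID: public model used as the teacher
    model_id: str                   # ID of this model
    score: float                    # model evaluation score
    is_public: bool                 # IsPublic: public vs. personalized model block
    params: Dict[str, bytes] = field(default_factory=dict)  # serialized model parameters
    is_aggregation: bool = False    # IsAggregation: set when created by the aggregation contract
    user_id: Optional[str] = None   # UserID: only present on personalized model blocks
```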

    3.3 Model Asynchronous Uplink Mechanism Based on HoneyBadger Consensus

    The model uplink consensus mechanism based on HoneyBadger consensus is shown in Fig. 4:

    Here, C_n denotes a participant in the FL training, that is, the load forecasting business of a virtual power plant; LM_n is the local model that a participant trains on its local data, namely personalized training. The SH_n module is used for the trusted broadcast of each local model, and the endorse_n module is used for model endorsement. When a local model collects a certain number of endorsements, it is put on the chain through the smart contract.

    3.3.1 SH_n Module

    As an anonymous transmission channel based on secret sharing, the SH_n module obfuscates the uploader's address of the model, preventing malicious nodes from reproducing the data or launching targeted attacks against the model owner by learning the model's source in advance.

    After construction, the SH_n module is used for the on-chain update of the FL local update block. The local update block serves as a black-box input, and the on-chain verification nodes cannot access or modify the information inside the block before the anonymous transmission is completed.

    The SH_n module satisfies the following properties:

    (Validity) If an honest node outputs an integrity verification set V for the received local model update, then |V| >= N - f and V contains the verifications of at least N - 2f honest nodes.

    (Consensus) If one honest node outputs the integrity verification set V, then the other nodes should also output V.

    (Integrity) If N - f correct nodes receive input, then all nodes generate an output.

    N denotes the number of participating nodes, and f denotes the number of malicious nodes.

    The anonymous transmission channel SH_n is implemented based on secret sharing. Firstly, the node generates a public key and N private keys SK_i according to the node IDs. The public key is then used to encrypt the model, and the encrypted model and public key are distributed to the other nodes. Decrypting the encrypted model requires the cooperation of multiple nodes: once f + 1 honest nodes decrypt the ciphertext, the encrypted model is restored to a usable model. Unless an honest node leaks the model after decryption, an attacker cannot complete the decryption of the model ciphertext. The SH_n process is as follows (a minimal illustration of the threshold idea is sketched after the listing):

    SH.setup() -> PK, {SK_i}: generates the encryption public key PK for the local model update and a set of private keys {SK_i} for decrypting the encrypted model.

    SH.Enc(PK, m) -> C: encrypts the local model update m using the public key and generates the encrypted model C.

    SH.DecShare(SK_i, C): distributes the encrypted model and key shares to each node.

    SH.Dec(PK, C, {i, SK_i}) -> m: aggregates {i, SK_i} from at least f + 1 nodes to obtain the private key SK corresponding to PK, and uses SK to decrypt the ciphertext at each node to obtain a usable local update model.
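    The SH_n primitives are described abstractly above. As an illustration of the underlying (f + 1)-of-N threshold idea, the sketch below splits a symmetric model-encryption key with Shamir secret sharing over a prime field; the function names only loosely mirror SH.setup/SH.Dec, the field choice is arbitrary, and a real deployment would rely on an audited threshold-cryptography library rather than this toy code.

```python
import random

PRIME = 2**127 - 1  # prime field for the shares (illustrative choice)

def sh_setup(secret_key: int, n: int, f: int):
    """Split secret_key into n shares; any f + 1 of them can reconstruct it."""
    coeffs = [secret_key] + [random.randrange(PRIME) for _ in range(f)]
    shares = []
    for i in range(1, n + 1):
        # Evaluate the degree-f polynomial at x = i.
        y = sum(c * pow(i, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
        shares.append((i, y))
    return shares

def sh_dec(shares):
    """Lagrange interpolation at x = 0 using f + 1 (or more) shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Example: N = 4 nodes, f = 1 malicious node; any 2 honest nodes recover the key.
key = random.randrange(PRIME)
shares = sh_setup(key, n=4, f=1)
assert sh_dec(shares[:2]) == key
```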

    3.3.2 Endorse_n Module

    The endorse_n module is used to verify the updates of the FL local models. All nodes verify the model delivered by SH_n and cast a verification vote. N concurrent instances of binary Byzantine agreement are used to build the endorsement bit vector, where b = 1 indicates that the node agrees to put the model on the chain.

    The endorse_n module satisfies the following properties:

    (Consensus) If any honest node outputs agreement endorsement b for the model, then every honest node outputs agreement endorsement b.

    (Termination)If all honest nodes receive the input model,then every honest node outputs a 1-bit value indicating whether it agrees to endorse the model or not.

    (Validity)If any honest node outputs b,then at least one honest node accepts b as input.

    The validity property implies consistency: if all correct nodes receive the same input value b, then b must be the decided value. On the other hand, if two nodes receive different inputs at any point, the adversary may force the decision of one of the values before the remaining nodes receive their input.
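    As an illustration of how these properties can gate the uplink step, the sketch below tallies the 1-bit outputs of the agreement instances and only chains a model once at least N - f endorsements are collected, so that at least N - 2f of them are guaranteed to come from honest nodes. The threshold placement and function names are assumptions for illustration, not the paper's smart-contract logic.

```python
def endorse_decision(votes, n, f):
    """votes: dict node_id -> 1 (endorse) or 0 (reject) from the agreement instances.

    Returns True when at least n - f nodes voted 1, so even if f of the voters
    are malicious, at least n - 2f honest nodes actually approved the model.
    """
    agree = sum(1 for b in votes.values() if b == 1)
    return agree >= n - f

# Example with N = 7 nodes and f = 2 tolerated faults: 5 endorsements suffice.
sample_votes = {f"node{i}": (1 if i < 5 else 0) for i in range(7)}
print(endorse_decision(sample_votes, n=7, f=2))  # True
```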

    3.4 Asynchronous Federated Distillation Learning

    The training is divided into a common knowledge accumulation stage and a personalization stage. In the common knowledge accumulation stage, each federation on the blockchain is regarded as a meta-distribution, and the knowledge of each federation is aggregated cyclically. After knowledge accumulation is completed, the model is uploaded to the blockchain so that other federations on the blockchain can perform personalized training. The common knowledge accumulation stage lasts for several rounds to ensure that the public knowledge of each federation is fully extracted. In the personalization stage, each federation on the blockchain downloads the trained model from the chain for local guided training, and the personalization stage can be trained in the same cyclic order. Since the public knowledge has already been accumulated, local training is optional before the public knowledge model is sent to the next federation. Both stages are based on knowledge distillation.

    In the first stage, the procedure is divided into four steps. In short, the four steps operate as follows:

    a) Train:Using the local dataset to train the local model as a student model

    b) Download:Download the on-chain model for distillation

    c) Distill:Using knowledge distillation to enhance the local model to get the pre-on chain model

    d) Upload:Upload the pre-on chain model to the blockchain

    The detailed procedures are as follows:

    1. Train. In this step, each client updates its model parameters by training on its local dataset.

    2. Download. In this step, each client downloads an on-chain model w_g for distillation.

    3. Distill.

    Based on the model learned in the previous step, each client predicts the local logits, which serve as labels for the data samples in the local dataset. More specifically, given the model parameters, each client predicts the local logits for j ∈ ρ as follows:

    where η2 is the learning rate in the proposed distillation procedure.

    4. Upload. In this step, each client uploads the pre-on-chain model to the blockchain. A minimal sketch of one full client round, combining the four steps above, is given below.
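    In this sketch, the blockchain interface (`chain.latest_public_model()`, `chain.upload_public_block()`), the SGD optimizer, the learning rates, and the distillation weight `alpha` are illustrative placeholders rather than the paper's actual implementation details.

```python
import copy
import torch
import torch.nn.functional as F

def common_knowledge_round(model, loader, chain, device, T=1.0, alpha=0.5,
                           lr_train=1e-3, lr_distill=1e-4):
    """One client round of the common-knowledge-accumulation stage."""
    model.to(device)

    # 1. Train: update the local (student) model on the local dataset.
    opt = torch.optim.SGD(model.parameters(), lr=lr_train)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

    # 2. Download: fetch the latest public model from the chain as the teacher.
    teacher = copy.deepcopy(model)
    teacher.load_state_dict(chain.latest_public_model())   # hypothetical chain API
    teacher.eval()

    # 3. Distill: enhance the local model with the on-chain teacher.
    opt = torch.optim.SGD(model.parameters(), lr=lr_distill)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        with torch.no_grad():
            t = F.softmax(teacher(x) / T, dim=1)
        logits = model(x)
        s = F.log_softmax(logits / T, dim=1)
        loss = (alpha * F.cross_entropy(logits, y)
                + (1 - alpha) * T * T * F.kl_div(s, t, reduction="batchmean"))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # 4. Upload: publish the distilled model as a new public model block.
    chain.upload_public_block(model.state_dict())           # hypothetical chain API
```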

    The second stage is the personalized training stage. Since there is no central server for the entire model, the personalized model must be obtained in the same order as in the common knowledge accumulation stage. In the first stage, we obtain the public model f, which contains enough common knowledge. To prevent the common knowledge from being lost, the public model f is transferred to the next federation before local personalized training. Other federations on the blockchain can download the trained models of other nodes for local training. Since public knowledge has already been accumulated, local training is optional. The process of the second stage is shown in Fig. 5. When the public model performs poorly on the local validation data, the personalization phase modifies it very little; when the public model's performance on the local validation data is acceptable, the personalization phase modifies it more, mostly to obtain better performance.
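    The exact adaptation rule is not spelled out in the text; one simple way to obtain this behaviour is to scale the weight of the teacher (distillation) term by the public model's accuracy on the local validation set, as in the following sketch. The linear scaling and the bounds `min_w`/`max_w` are assumptions for illustration.

```python
import torch

@torch.no_grad()
def validation_accuracy(model, val_loader, device):
    """Accuracy of the public model on the client's local validation data."""
    model.eval()
    correct, total = 0, 0
    for x, y in val_loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

def personalization_weight(public_model, val_loader, device, min_w=0.05, max_w=0.5):
    """Give the teacher term more weight the better the public model performs locally."""
    acc = validation_accuracy(public_model, val_loader, device)
    return min_w + (max_w - min_w) * acc   # weak teacher -> small distillation weight
```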

    4 Experiment

    Based on the algorithm proposed in this paper, load forecasting and analysis simulation experiments on the demand side of the VPP show that the forecasting model can accurately predict the demand-side load and support the VPP in achieving precise layered and partitioned regulation and control. The models are written in Python 3.9.10 and PyTorch 1.11.0 and executed on a GeForce RTX 3080Ti GPU.

    In the load forecasting experiment, the dataset contains three types of enterprises: the real estate industry, the manufacturing industry, and the catering industry. Each industry includes sample data from 100 companies for 36 consecutive months. The features of each sample are the enterprise water consumption, enterprise gas consumption, daily maximum temperature, daily minimum temperature, daily average temperature, daily rainfall, and humidity, and the label is the enterprise energy used.
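    For illustration, the following is a minimal sketch of turning such samples into training tensors; the column names below are assumptions based on the feature list, not the dataset's actual field names.

```python
import pandas as pd
import torch

# Assumed column names matching the feature list described in the text.
FEATURES = ["water_consumption", "gas_consumption", "max_temp",
            "min_temp", "avg_temp", "rainfall", "humidity"]
LABEL = "energy_used"

def load_forecasting_tensors(csv_path: str):
    """Read one company's monthly records and return (features, label) tensors."""
    df = pd.read_csv(csv_path)
    x = torch.tensor(df[FEATURES].values, dtype=torch.float32)
    y = torch.tensor(df[LABEL].values, dtype=torch.float32)
    return x, y
```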

    Based on the proposed DHFDL, the federated load forecasting model is constructed and trained on the data of the three industries. Fig. 6 compares the predicted values with the actual values. It can be seen from the figure that the algorithm proposed in this paper has a better prediction effect.

    To demonstrate the effectiveness of blockchain-based decentralized heterogeneous federated distillation learning, this paper conducts experiments on the real-world federated dataset FEMNIST. The dataset contains 805,263 samples of a handwritten character image classification task contributed by 3,550 users, covering 62 categories (10 digits, 26 lowercase letters, and 26 uppercase letters); the sizes of the local datasets are unbalanced, and their distributions are not independent. After randomly selecting active nodes, we perform local training and aggregation in memory.

    Malicious blockchain nodes participating in FL training generate harmful local models for malicious attacks; if these models participate in model aggregation, the performance of the global model is significantly reduced. In this section, we simulate malicious node attacks with different proportions of malicious nodes to demonstrate their impact on the performance of FedAvg, BFLC, and the proposed DHFDL model. This paper assumes that the malicious attack randomly perturbs the local training model to generate an unusable model. FedAvg performs no defense and aggregates all local model updates. BFLC relies on committee consensus to resist malicious attacks: during training, each model update receives a score from the committee, and models with higher scores are selected for aggregation. In the experiment, we assume that the malicious nodes collude, that is, malicious committee members give random high scores to malicious updates, and the nodes whose model evaluation scores are in the top 20% of each training round are selected as the committee for the next round. The participating nodes of DHFDL train the local model and, in each round of updates, select an on-chain model with high accuracy for knowledge distillation to improve the effectiveness of the local model. As shown in Fig. 7, DHFDL can resist a higher proportion of malicious nodes than the comparison methods, which demonstrates the effectiveness of DHFDL with the help of knowledge distillation.
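    The simulated attack therefore amounts to adding random noise to a trained local update before it is uploaded; a minimal sketch under that reading is given below (the Gaussian noise and its scale are assumptions).

```python
import torch

@torch.no_grad()
def poison_update(state_dict, noise_scale=1.0):
    """Simulated malicious update: randomly perturb every parameter tensor."""
    return {k: v.float() + noise_scale * torch.randn_like(v.float())
            for k, v in state_dict.items()}
```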

    Figure 7: Performance of algorithms under malicious attacks

    Combining asynchronous consensus with the training process of the FL model improves the training efficiency of decentralized FL. As shown in Fig. 8, this paper conducts FL training with different numbers of participating nodes and measures the per-round training time of FedAvg, BFLC, and DHFDL. The comparison shows that the time required for each round of DHFDL training, which realizes asynchronous model uplink aggregation through asynchronous consensus, is the lowest. As the number of participating nodes increases, the training time of all three algorithms increases accordingly, but the per-round time of the DHFDL algorithm remains the lowest, which shows that DHFDL is efficient with the help of asynchronous model uplink.

    Figure 8: Performance of algorithms in different numbers of participating nodes

    Fig. 9 compares the on-chain storage cost of the algorithms. Ten training nodes are simulated to train the FEMNIST classification model based on a Convolutional Neural Network (CNN), and the on-chain storage cost of the model under the BFLC and DHFDL algorithms is recorded. The DHFDL algorithm, which realizes model aggregation on the chain through asynchronous consensus, requires less on-chain storage overhead to achieve the same accuracy. Meanwhile, as accuracy improves, both algorithms require more on-chain storage space, but the DHFDL algorithm still has lower storage overhead than the BFLC algorithm.

    Figure 9: Storage performance of the algorithm

    5 Conclusion

    In this paper, we propose the DHFDL algorithm, Decentralized Heterogeneous Federated Distillation Learning, to effectively predict the load of virtual power plants for better grid scheduling. DHFDL does not need a central server to organize the federations for training. The public model is extracted through distillation learning and uploaded to the blockchain, and the federation nodes on the blockchain can download the trained models of other federation nodes to guide personalized training and obtain a better model. The introduction of blockchain technology enables indicators such as network communication load and FL security to reach better standards. By simulating malicious node attacks and comparing with the FedAvg and BFLC algorithms, we show that the proposed DHFDL algorithm can resist a higher proportion of malicious nodes. The comparative experimental results also show that combining asynchronous consensus with the FL model training process improves the training efficiency of decentralized FL.

    Acknowledgement: We would first like to thank Jiangsu Provincial Electric Power Corporation for providing the experimental environment and the necessary equipment and conditions for this research; the institution's strong support made the experiments possible. We would particularly like to acknowledge our team members for their wonderful collaboration and patient support. Finally, we could not have completed this work without the support of our friends, who provided stimulating discussions as well as happy distractions to rest our minds outside of the research.

    Funding Statement: This work was supported by the Research and Application of Power Business Data Security and Trusted Collaborative Sharing Technology Based on Blockchain and Multi-Party Security Computing (J2022057).

    Author Contributions: Study conception and design: Hong Zhu; data collection: Lisha Gao; analysis and interpretation of results: Yitian Sha, Shuo Han; draft manuscript preparation: Nan Xiang, Yue Wu. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: Data are not available due to ethical restrictions. Due to the nature of this research, participants of this study did not agree for their data to be shared publicly, so supporting data are not available.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
