Accelerated Distributed Online Weighted Dual Averaging Algorithm over Switched Networks
WANG Jun-ya
(College of Mathematics and Big Data, Anhui University of Science and Technology, Huainan, Anhui 232000, China)
This paper studies the acceleration of a distributed online weighted dual averaging algorithm over switched networks and proposes the A-DOWDA algorithm. First, a weighting factor is applied to the dual variables. Second, under the conditions that the directed switched network is periodically strongly connected and the corresponding adjacency matrices are stochastic rather than doubly stochastic, the convergence rate of the algorithm is accelerated. Finally, numerical experiments verify the feasibility of the algorithm.
distributed; weighted; switched network; dual averaging; regret bound
In recent years, with the growth of network scale, complex networks have become a popular research area. As these systems grow in size and complexity, decentralized control schemes are needed to reduce data-transmission rates and to ensure robustness against local failures. Distributed networks have therefore received increasing attention and find wide application in parallel computing, machine learning, and communication systems [1-3]. References [4-6] proposed subgradient-based distributed optimization algorithms, but their cost functions are fixed while the network topology is allowed to vary. In practice, however, environmental uncertainty often has a major impact on the cost functions, making it difficult to formulate a tractable optimization problem. References [7-9] improved robustness through a stochastic framework, but stochastic optimization methods struggle with dynamic problems.
This paper considers a distributed weighted dual averaging algorithm based on online optimization, which not only handles the dynamic models of complex systems but also saves network cost and storage space, avoiding wasted resources. Reference [10] proposed a distributed online algorithm based on the alternating direction method of multipliers, which collects and analyzes network data streams in real time and enhances the robustness of the network.
Section 1 introduces preliminaries on graph theory and regret; Section 2 describes switched networks; Section 3 presents the accelerated distributed online weighted dual averaging algorithm (A-DOWDA); Section 4 gives the proofs; Section 5 reports the numerical experiments; Section 6 concludes the paper and discusses questions worth further study.
Algorithm 1 Accelerated distributed online weighted dual averaging algorithm (A-DOWDA)
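Since the pseudocode listing of Algorithm 1 does not survive in this version, the following is a minimal sketch, in Python, of what one synchronous round of a weighted dual-averaging update could look like. The weighting scheme, the prox step, and all names (`prox_step`, `dowda_round`, the unit-ball constraint set) are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def prox_step(z, alpha):
    # Euclidean prox/projection step of dual averaging:
    # x = Pi_X(-alpha * z), with X taken as the unit ball (an illustrative choice).
    x = -alpha * z
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 1.0 else x

def dowda_round(Z, A, grads, w, alpha):
    """One synchronous round of a weighted dual-averaging update (a sketch).

    Z     : (n, d) dual variables, one row per node
    A     : (n, n) row-stochastic adjacency/mixing matrix
    grads : (n, d) local subgradients observed at this round
    w     : (n,)   positive weighting factors for the dual variables
    alpha : step size
    """
    # Mix the weighted dual variables over the network, rescale by each
    # node's own weight, then add the fresh local subgradient.
    Z_new = (A @ (Z * w[:, None])) / w[:, None] + grads
    X_new = np.array([prox_step(z, alpha) for z in Z_new])
    return Z_new, X_new
```

With `w` set to the all-ones vector this reduces to the unweighted dual-averaging consensus step; a nontrivial `w` is what compensates for `A` being merely stochastic rather than doubly stochastic.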
Proof. By Lemma 2 and Lemma 4, we obtain:
Proof. By Theorem 2 of [14], the static-network regret bound is:
Moreover, by Theorem 2 of [14]:
Hence the convergence rate of the regret bound in the static-network case is:
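Although the bound itself is elided in this version, the static-network dual-averaging regret bound of [14, Theorem 2] has the following familiar schematic form; the symbols below ($\psi$ a prox-function, $x^{\star}$ the offline minimizer, $L$ a bound on the subgradients, $\alpha_t$ the step size) are the standard ones and are restated here as an assumption rather than a quotation:

```latex
R_T \;\le\; \frac{\psi(x^{\star})}{\alpha_T}
        \;+\; \frac{L^{2}}{2}\sum_{t=1}^{T}\alpha_{t-1}
        \;+\; (\text{network error terms}),
\qquad
\alpha_t \propto \frac{1}{\sqrt{t}} \;\Longrightarrow\; R_T = O\!\left(\sqrt{T}\right).
```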
Proof. This follows directly from Theorem 1.
Comparing (8) with (9), we see that when the network is switched, the weighted distributed online dual averaging algorithm converges faster; the speed-up can be measured by the second term on the right-hand side of (9).
The feasibility of A-DOWDA is verified through numerical experiments. The evaluation metric is the average loss function:
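As a sketch of the evaluation metric (the concrete loss functions used in the experiments are not reproduced here), the running average loss $\bar{f}(T) = \frac{1}{T}\sum_{t=1}^{T} f_t(x_t)$ can be computed for every prefix $T$ as:

```python
import numpy as np

def running_average_loss(losses):
    # \bar{f}(T) = (1/T) * sum_{t=1}^{T} f_t(x_t), returned for every prefix T
    losses = np.asarray(losses, dtype=float)
    return np.cumsum(losses) / np.arange(1, losses.size + 1)
```

A decaying curve of this quantity over the iterations is what Fig. 1 and Fig. 2 compare for the accelerated and unaccelerated variants.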
Fig. 1 Comparison between accelerated and unaccelerated algorithms with different numbers of iterations and step sizes
Fig. 2 Comparison between accelerated and unaccelerated algorithms for different functions under varying numbers of iterations and step sizes
In the case where the adjacency matrices corresponding to the switched network are stochastic, this paper accelerates the distributed online weighted dual averaging algorithm and proposes the A-DOWDA algorithm, which not only improves the convergence rate but also sharpens the regret bound. Future work will consider the corresponding quantized algorithms and asynchronous settings.
[1] Boyd S, Parikh N, Chu E. Distributed optimization and statistical learning via the alternating direction method of multipliers[J]. Foundations and Trends in Machine Learning, 2011, 3(1): 1-122.
[2] Dorfler F, Chertkov M, Bullo F. Synchronization in complex oscillator networks and smart grids[J]. Proceedings of the National Academy of Sciences, 2013, 110(6): 2005-2010.
[3] Blatt D, Hero A O, Gauchman H. A convergent incremental gradient method with a constant step size[J]. SIAM Journal on Optimization, 2007, 18(1): 29-51.
[4] Nedic A, Ozdaglar A. Distributed subgradient methods for multi-agent optimization[J]. IEEE Transactions on Automatic Control, 2009, 54(1): 48-61.
[5] Lobel I, Ozdaglar A. Distributed subgradient methods for convex optimization over random networks[J]. IEEE Transactions on Automatic Control, 2011, 56(6): 1291-1306.
[6] Lee S, Nedic A. Distributed random projection algorithm for convex optimization[J]. IEEE Journal of Selected Topics in Signal Processing, 2013, 7(2): 221-229.
[7] Ram S S, Nedic A, Veeravalli V V. Incremental stochastic subgradient algorithms for convex optimization[J]. SIAM Journal on Optimization, 2009, 20(2): 691-717.
[8] Ram S S, Nedic A, Veeravalli V V. Distributed stochastic subgradient projection algorithms for convex optimization[J]. Journal of Optimization Theory and Applications, 2010, 147(3): 516-545.
[9] Agarwal A, Duchi J. Distributed delayed stochastic optimization[C]. Proceedings of the 2012 IEEE Conference on Decision and Control, Piscataway, 2012: 5451-5452.
[10] Lin P, Ren W. Distributed constrained consensus in the presence of unbalanced switching graphs and communication delays[C]. Proceedings of the 2012 IEEE Conference on Decision and Control, Piscataway, 2012: 2238-2243.
[11] Tsianos K I, Rabbat M G. Distributed dual averaging for convex optimization under communication delays[C]. Proceedings of the 2012 American Control Conference, Piscataway, 2012: 1067-1072.
[12] Tsianos K I, Lawlor S, Rabbat M G. Push-sum distributed dual averaging for convex optimization[C]. Proceedings of the 2012 IEEE Conference on Decision and Control, Piscataway, 2012: 5453-5458.
[13] Hosseini S, Chapman A, Mesbahi M. Online distributed convex optimization on dynamic networks[J]. IEEE Transactions on Automatic Control, 2016, 61(11): 3545-3550.
[14] Duchi J C, Agarwal A, Wainwright M J. Dual averaging for distributed optimization: convergence analysis and network scaling[J]. IEEE Transactions on Automatic Control, 2012, 57(3): 592-606.
[15] Liu S, Chen P Y, Hero A O. Accelerated distributed dual averaging over evolving networks of growing connectivity[J]. IEEE Transactions on Signal Processing, 2018, 66(7): 1845-1859.
Article ID: 1674-8085(2018)04-0006-05
CLC number: TP-301.6
Document code: A
DOI: 10.3969/j.issn.1674-8085.2018.04.002
Received: 2018-05-05; revised: 2018-06-27
Foundation items: Anhui Provincial Quality Resource Sharing Course Project (11528); Postgraduate Innovation Fund Project (2017CX2046)
Biography: WANG Jun-ya (1994-), female, from Fuyang, Anhui; master's student, mainly engaged in research on distributed optimization (E-mail: 784836893@qq.com).