
    A Distributed ADMM Approach for Collaborative Regression Learning in Edge Computing

    2019-05-10 03:59:38
    Computers, Materials & Continua, 2019, Issue 5

    Yangyang Li, Xue Wang, Weiwei Fang*, Feng Xue, Hao Jin, Yi Zhang and Xianwei Li

    Abstract: With the recent proliferation of the Internet-of-Things (IoT), enormous amounts of data are produced by wireless sensors and connected devices at the edge of the network. Conventional cloud computing raises serious concerns about communication latency, bandwidth cost, and data privacy. To address these issues, edge computing has been introduced as a new paradigm that allows computation and analysis to be performed in close proximity to data sources. In this paper, we study how to conduct regression analysis when the training samples are kept private at source devices. Specifically, we consider the lasso regression model, which has been widely adopted for prediction and forecasting based on information gathered from sensors. By adopting the Alternating Direction Method of Multipliers (ADMM), we decompose the original regression problem into a set of subproblems, each of which can be solved by an IoT device using its local data. During the iterative solving process, each participating device only needs to provide some intermediate results to the edge server for lasso training. Extensive experiments based on two datasets are conducted to demonstrate the efficacy and efficiency of the proposed scheme.

    Keywords: Internet-of-Things (IoT), edge computing, ADMM, lasso.

    1 Introduction

    In recent years, the fast penetration of Internet-of-Things (IoT) devices with various embedded sensors has significantly changed the way information is gathered, processed and shared. Generally, it is impractical to run computation-intensive applications on IoT devices, since these devices are often constrained by on-board resources and battery capacity. This motivates the development of IoT cloud platforms that allow offloading computation and analysis tasks to a resourceful centralized cloud [Truong and Dustdar (2015)].

    Nevertheless, the cloud-based solution introduces unpredictably long latency for data communication through the wide area network, and incurs huge additional bandwidth occupation that may be beyond the capabilities of today's Internet [Ha, Pillai, Lewis et al. (2013)]. Furthermore, it also brings about a number of privacy threats and security challenges [Zhou, Cao, Dong et al. (2017)]. Therefore, it is preferable to move computation and analysis into close proximity with the IoT devices, i.e., to the edge of the network. It is envisioned that edge computing will be a promising supplement to cloud computing [Shi, Cao, Zhang et al. (2016)], and will make as much of an impact on human society as the latter.

    With edge computing, it is feasible to conduct collaborative machine learning [Portelli and Anagnostopoulos (2017)] in real time on site, obtaining useful information from data collected by a variety of IoT devices. For instance, a roadside base station can use regression analysis to forecast short-term traffic flow by analyzing data originating from proximate GPS-enabled vehicles, video cameras and roadway sensors [Zhou, Cao, Dong et al. (2017); Shi, Cao, Zhang et al. (2016); Xi, Sheng, Sun et al. (2018)]. Another good example is equipment maintenance, which uses multi-sensor information (e.g., temperature, sound and vibration) to construct classifiers for fault detection and diagnosis [Kwon, Hodkiewicz, Fan et al. (2016)]. In such systems, edge analytics is usually performed in a centralized fashion, i.e., each involved device sends its own data samples to a dedicated edge server for training and building learning models. However, this centralized solution suffers from three drawbacks. Firstly, many machine learning algorithms require solving a particular convex optimization problem, and according to previous studies [Dhar, Yi, Ramakrishnan et al. (2015); Boyd, Parikh, Chu et al. (2011)], the traditional centralized solver does not scale well with the increasing volume of data. Secondly, not all edge servers are as resourceful as cloud servers for running sophisticated tools for single-node in-memory analytics [Dhar, Yi, Ramakrishnan et al. (2015); Ismail, Goortani, Karim et al. (2015)]. Thirdly, IoT-generated data may contain private and sensitive information (e.g., the health state of wearable users) that should not be directly exposed to the edge server or other devices [Zhou, Cao, Dong et al. (2017); Gong, Fang and Guo (2016)]. To tackle these challenges, it is desirable that a learning solution for edge computing jointly takes scalability, performance and privacy into consideration.

    In this paper, we are particularly interested in lasso (i.e., the least absolute shrinkage and selection operator [Tibshirani (1996)]), a classic linear regression technique that combines regularization and variable selection for prediction and forecasting. This technique has already been used in many IoT applications, e.g., battery availability prediction for IoT devices [Longo, Mateos and Zunino (2018)] and internal temperature forecasting for smart buildings [Spencer, Alfandi and Al-Obeidat (2018)]. Specifically, we develop a distributed, collaborative learning solution that utilizes sampling data from multiple IoT devices for training lasso regression models. Based on the Alternating Direction Method of Multipliers (ADMM) [Boyd, Parikh, Chu et al. (2011)], the proposed scheme naturally decomposes the target optimization problem of lasso into small subproblems that can be solved by each participating device using its local data. Unlike centralized solutions [Longo, Mateos and Zunino (2018); Spencer, Alfandi and Al-Obeidat (2018)], in our scheme the edge server only needs to collect locally trained intermediate parameters from IoT devices, and performs a simple aggregation operation to obtain the objective lasso model. The edge server and IoT devices work in this collaborative way for multiple iterations until the lasso model converges. We have conducted extensive experiments based on two datasets with various system configurations. The experimental results show that our scheme quickly converges to near-optimal performance within a few tens of iterations. Compared to other benchmark solutions, it performs well in terms of efficiency and scalability, while obtaining a lasso model of modest accuracy.

    The rest of this paper is organized as follows. A brief review of existing work is presented in Section 2. Section 3 describes the system model and derives the problem formulation. In Section 4, we elaborate on and discuss the proposed ADMM-based algorithm. Section 5 illustrates and discusses simulation results. Finally, we conclude this paper in Section 6.

    2 Related work

    Traditional machine learning algorithms [Tibshirani (1996); Spencer, Alfandi and Al-Obeidat (2018)] and tools [Boyd, Parikh, Chu et al. (2011)] are implemented using a fully centralized architecture, which requires a dedicated server with powerful computation capability and a huge amount of memory. However, they fail to scale well with the increasing size of data in the big data era. To address this challenge, various approaches have leveraged distributed data-parallel platforms to develop distributed machine learning libraries, such as Apache Mahout and Spark MLlib [Dhar, Yi, Ramakrishnan et al. (2015)]. These platforms and libraries can significantly speed up large-scale data analytics by coordinating the operations of multiple servers [Richter, Khoshgoftaar, Landset et al. (2015)]. Nevertheless, they are not suitable for model learning in edge computing, due to resource constraints [Shi, Cao, Zhang et al. (2016)] and privacy concerns [Zhou, Cao, Dong et al. (2017)].

    Considering that convex optimization is at the core of most machine learning algorithms, recent years have seen a number of distributed learning algorithms based on iterative methods [Dhar, Yi, Ramakrishnan et al. (2015)], which use successive approximations to come closer to the optimal solution in each iteration. Among them, Stochastic Gradient Descent (SGD) is the most influential technique for solving linear prediction problems, e.g., logistic regression. Zinkevich et al. [Zinkevich, Weimer, Smola et al. (2010)] propose the first parallel SGD algorithm, which brings very little overhead on both I/O and communication. Meeds et al. [Meeds, Hendriks, Al Faraby et al. (2015)] develop an SGD-based JavaScript library that enables ubiquitous compute devices to run training algorithms in web browsing environments. With a motivation similar to ours, McMahan et al. [McMahan, Moore, Ramage et al. (2017)] study SGD-based distributed model training that iteratively aggregates locally trained parameters from edge devices. Although very efficient and easy to implement, SGD algorithms generally have a slow convergence rate due to their stochastic nature [Dhar, Yi, Ramakrishnan et al. (2015)], and how to accelerate the convergence of SGD remains a challenging issue [Allen-Zhu (2017)]. Recent research progress on ADMM [Boyd, Parikh, Chu et al. (2011)] makes it a competitive technique for solving distributed optimization and statistical learning problems. ADMM integrates the fast convergence characteristics of the method of multipliers with the decomposability of the dual ascent approach, and can quickly converge to modest accuracy. This technique can be used to solve many supervised learning problems in regression and classification [Dhar, Yi, Ramakrishnan et al. (2015)]. For example, Zhang et al. [Zhang, Lee and Shin (2012)] propose a distributed linear classification algorithm to solve the support vector machine problem. Gong et al. [Gong, Fang and Guo (2016)] design a privacy-preserving scheme for training a logistic regression model based on distributed data from multiple users. However, the private aggregation mechanism used in Gong et al. [Gong, Fang and Guo (2016)] has been shown to be inefficient and unsuitable for resource-constrained devices [Joye and Libert (2013)]. The work most relevant to ours is Bazerque et al. [Bazerque, Mateos and Giannakis (2010)], in which a consensus-based distributed algorithm is developed for in-network lasso regression. That algorithm is designed for networked systems with no central coordination, e.g., wireless sensor networks. A device needs to communicate frequently with its one-hop neighbors to update intermediate parameters, resulting in heavy communication overhead and a low convergence rate in large networks.

    3 System model and problem formulation

    3.1 System model

    We consider edge systems consisting of an edge server and N homogeneous IoT devices performing a common sensing task. Each IoT device continuously generates sensory data, and transforms the raw data in a certain time duration into a feature vector. Each feature vector consists of multiple predictor variables, and corresponds to a response variable. The edge server is responsible for performing data analysis and modelling the relationship between feature vectors and response variables. The resulting model is learned and built in a collaborative fashion at the edge server based on data samples from all participating devices.

    The prediction model considered in this work is a lasso regression model [Tibshirani (1996)], a classic linear regression technique widely used for prediction and forecasting. Here we briefly introduce its basics. Given an input training data set {(xi, yi), i = 1, ..., N}, where xi ∈ Rn denotes a feature vector and yi ∈ R denotes the corresponding response variable, the lasso model solves the following optimization problem:

    $$\min_{w,\,b}\ \frac{1}{2}\sum_{i=1}^{N}\left(w^{T}x_{i}+b-y_{i}\right)^{2}+\lambda\|w\|_{1} \tag{1}$$

    where w ∈ Rn is the weight vector, b ∈ R is the intercept, and λ > 0 is the regularization parameter. With the trained lasso model (w, b) and a given feature vector x ∈ Rn, we can estimate the value of the response variable ŷ as follows:

    $$\hat{y}=w^{T}x+b \tag{2}$$
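    For concreteness, the objective in (1) and the prediction rule in (2) can be written in a few lines of NumPy. This is only an illustrative sketch; the names (lasso_objective, predict, lam) are ours, not part of the original formulation.

        import numpy as np

        def lasso_objective(w, b, X, y, lam):
            # Lasso objective (1): squared loss over all samples plus l1 penalty
            residual = X @ w + b - y
            return 0.5 * np.sum(residual ** 2) + lam * np.sum(np.abs(w))

        def predict(w, b, x):
            # Prediction rule (2): linear model with intercept
            return x @ w + b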

    Although very effective in practice, current lasso implementations [Longo, Mateos and Zunino (2018); Kwon, Hodkiewicz, Fan et al. (2016)] generally require the participating devices to contribute their native data to the edge server for model training. This would cause privacy leakage problems [Zhou, Cao, Dong et al. (2017)], as the native data could reveal private or sensitive information about device users. We assume that standard network security mechanisms [Zhou, Cao, Dong et al. (2017)], such as encryption and authentication, are applied to protect the data storage and network communication of IoT devices from outsider attacks. Nevertheless, the edge server may not be trustworthy, and can still be a potential source of information leakage.

    3.2 Problem formulation

    Based on the basic model introduced in (1), we investigate the problem of collaborative lasso learning in edge computing systems. Specifically, it is assumed that each IoT device i ∈ {1, ..., N} generates a set of data samples Di = {(xij, yij), j = 1, ..., Mi}, where xij ∈ Rn denotes a feature vector, yij ∈ R denotes the response variable of xij, and Mi denotes how many data samples are contributed by device i. Then, we hope to find a distributed solution to the following minimization problem:

    $$\min_{w,\,b}\ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{M_{i}}\left(w^{T}x_{ij}+b-y_{ij}\right)^{2}+\lambda\|w\|_{1} \tag{3}$$

    where w ∈ Rn and b ∈ R.
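    To make the structure of (3) concrete, the following sketch (our own illustration with synthetic data, not the paper's code) evaluates the global objective as a sum of per-device squared losses plus a single shared l1 term:

        import numpy as np

        rng = np.random.default_rng(0)
        N, n = 10, 9                      # number of devices, feature dimension
        M = [30] * N                      # samples contributed by each device
        X = [rng.normal(size=(m, n)) for m in M]          # local feature vectors
        y = [Xi @ rng.normal(size=n) for Xi in X]         # local responses

        def global_objective(w, b, lam=1.0):
            # Objective (3): per-device squared losses + one shared l1 penalty
            loss = sum(0.5 * np.sum((Xi @ w + b - yi) ** 2)
                       for Xi, yi in zip(X, y))
            return loss + lam * np.sum(np.abs(w))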

    4 Distributed lasso learning via ADMM

    According to the problem formulation and the analysis presented above, it is inappropriate to take the centralized approaches [Kwon, Hodkiewicz, Fan et al. (2016); Spencer, Alfandi and Al-Obeidat (2018)] as a solution in edge computing scenarios. A desirable solution should take the requirements on scalability, performance and privacy into consideration. This motivates us to develop an efficient and scalable scheme that enables collaborative lasso learning in a distributed manner.

    4.1 A briefing on ADMM

    The proposed scheme is based on ADMM, which follows a decomposition-coordination process. The target optimization problem is first decomposed into a set of small subproblems, and then the solutions to these subproblems are coordinated to obtain the global optimal result. Specifically, ADMM solves optimization problems of the following form:

    $$\min_{x\in X,\,y\in Y}\ f(x)+g(y) \quad \text{s.t.} \quad Ax+By=C \tag{4}$$

    where x ∈ Rn, y ∈ Rm, f and g are two convex functions, A ∈ Rl×n and B ∈ Rl×m are relation matrices, C ∈ Rl is a relation vector, and X and Y are non-empty convex subsets of Rn and Rm, respectively.

    The augmented Lagrangian of problem (4) is formed by adding an l2-norm penalty to the objective function:

    $$L_{\rho}(x,y,z)=f(x)+g(y)+z^{T}(Ax+By-C)+\frac{\rho}{2}\|Ax+By-C\|_{2}^{2} \tag{5}$$

    where z ∈ Rl is the dual variable, and ρ is a positive penalty parameter.

    Then, problem (4) is solved in an iterative fashion, by updating x, y, and z sequentially and alternately. Specifically, in the t-th iteration, the updates of the variables are as follows:

    $$x^{t+1}=\arg\min_{x\in X} L_{\rho}(x,y^{t},z^{t}) \tag{6a}$$
    $$y^{t+1}=\arg\min_{y\in Y} L_{\rho}(x^{t+1},y,z^{t}) \tag{6b}$$
    $$z^{t+1}=z^{t}+\rho\,(Ax^{t+1}+By^{t+1}-C) \tag{6c}$$
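    As a concrete single-machine instance of the updates in (6a)-(6c), consider lasso written in the ADMM form (4), with f the squared loss, g the l1 penalty, A = I, B = -I and C = 0. The sketch below, with names of our own choosing, illustrates the x/y/z update pattern:

        import numpy as np

        def soft_threshold(v, kappa):
            # Proximal operator of kappa * ||.||_1 (elementwise soft-thresholding)
            return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

        def admm_lasso(A, c, lam, rho=1.0, iters=100):
            # Minimize 0.5*||A x - c||^2 + lam*||y||_1  s.t.  x - y = 0
            n = A.shape[1]
            x = y = z = np.zeros(n)
            AtA, Atc = A.T @ A, A.T @ c
            K = AtA + rho * np.eye(n)                # fixed x-update system matrix
            for _ in range(iters):
                x = np.linalg.solve(K, Atc + rho * y - z)    # x-update (6a)
                y = soft_threshold(x + z / rho, lam / rho)   # y-update (6b)
                z = z + rho * (x - y)                        # z-update (6c)
            return y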

    Proofs of the optimality and convergence of ADMM are given in Bertsekas et al. [Bertsekas and Tsitsiklis (1989)]. Besides, empirical studies reveal that this technique often achieves an acceptable solution with modest accuracy after dozens of iterations [Boyd, Parikh, Chu et al. (2011)].

    4.2 Algorithm design

    However, ADMM cannot be applied to problem (3) directly, as the coupling of variables makes it impossible to separate the objective function over each set of variables. In this case, a set of auxiliary variables {(wi, bi), i = 1, ..., N} is introduced to reformulate problem (3) as:

    $$\min\ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{M_{i}}\left(w_{i}^{T}x_{ij}+b_{i}-y_{ij}\right)^{2}+\lambda\|w\|_{1} \quad \text{s.t.} \quad w_{i}=w,\ b_{i}=b,\ i=1,...,N \tag{7}$$

    By enforcing the equality constraints, the new problem (7) is obviously equivalent to the original problem (3). In particular, {(w, b)} can be regarded as the global model parameters at the edge server, while {(wi, bi), i = 1, ..., N} can be regarded as the local model parameters at each IoT device i. Now we are able to separate the objective function over {(w, b)} and {(wi, bi), i = 1, ..., N}. The augmented Lagrangian of problem (7) can be obtained as:

    $$L_{\rho}(\xi,\psi,\zeta)=\lambda\|w\|_{1}+\sum_{i=1}^{N}\Big[\frac{1}{2}\sum_{j=1}^{M_{i}}\left(w_{i}^{T}x_{ij}+b_{i}-y_{ij}\right)^{2}+\zeta_{i,w}^{T}(w_{i}-w)+\zeta_{i,b}(b_{i}-b)+\frac{\rho}{2}\|w_{i}-w\|_{2}^{2}+\frac{\rho}{2}(b_{i}-b)^{2}\Big] \tag{8}$$

    where for simplicity we define ξ = {(w, b)} and ψ = {(wi, bi), i = 1, ..., N}, and ζ = {(ζi,w, ζi,b), i = 1, ..., N} denotes the dual variables associated with the equality constraints in (7). The problem is then solved by updating ξ, ψ, and ζ sequentially. In each iteration t, the updates are performed as follows:

    1. ξ-update: The update of ξ is performed by solving the following subproblem:

    $$\xi^{t+1}=\arg\min_{w,\,b}\ \lambda\|w\|_{1}+\sum_{i=1}^{N}\Big[\zeta_{i,w}^{t\,T}(w_{i}^{t}-w)+\zeta_{i,b}^{t}(b_{i}^{t}-b)+\frac{\rho}{2}\|w_{i}^{t}-w\|_{2}^{2}+\frac{\rho}{2}(b_{i}^{t}-b)^{2}\Big] \tag{10}$$

    Since problem (10) is convex but non-differentiable, we use the subgradient calculus technique [Nesterov (2013)] from convex analysis to obtain a closed-form solution, given in terms of the element-wise soft-thresholding operator Sκ(a) = sign(a) max(|a| − κ, 0):

    $$w^{t+1}=S_{\lambda/(N\rho)}\Big(\frac{1}{N}\sum_{i=1}^{N}\big(w_{i}^{t}+\zeta_{i,w}^{t}/\rho\big)\Big) \tag{11}$$
    $$b^{t+1}=\frac{1}{N}\sum_{i=1}^{N}\big(b_{i}^{t}+\zeta_{i,b}^{t}/\rho\big) \tag{12}$$
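    A minimal sketch of this server-side update, assuming the averaging-and-soft-thresholding form of (11) and (12) (function and variable names are ours):

        import numpy as np

        def xi_update(w_list, b_list, zw_list, zb_list, lam, rho):
            # Server-side xi-update: average device parameters, then soft-threshold w
            N = len(w_list)
            w_avg = np.mean([wi + zwi / rho
                             for wi, zwi in zip(w_list, zw_list)], axis=0)
            b_new = float(np.mean([bi + zbi / rho
                                   for bi, zbi in zip(b_list, zb_list)]))
            kappa = lam / (N * rho)
            w_new = np.sign(w_avg) * np.maximum(np.abs(w_avg) - kappa, 0.0)  # (11)
            return w_new, b_new                                              # (12)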

    2. ψ-update: The update of ψ is performed by solving the following subproblem:

    $$\psi^{t+1}=\arg\min_{\psi}\ L_{\rho}(\xi^{t+1},\psi,\zeta^{t}) \tag{13}$$

    This problem is decomposable into N subproblems over all IoT devices, among which device i only needs to solve its local subproblem as follows:

    $$(w_{i}^{t+1},b_{i}^{t+1})=\arg\min_{w_{i},\,b_{i}}\ \frac{1}{2}\sum_{j=1}^{M_{i}}\left(w_{i}^{T}x_{ij}+b_{i}-y_{ij}\right)^{2}+\zeta_{i,w}^{t\,T}w_{i}+\zeta_{i,b}^{t}b_{i}+\frac{\rho}{2}\|w_{i}-w^{t+1}\|_{2}^{2}+\frac{\rho}{2}(b_{i}-b^{t+1})^{2} \tag{14}$$

    The new subproblem (14) is a typical nonlinear programming problem, which is difficult to solve due to its complexity. Even though standard tools like YALMIP can be used as solvers, they are still too heavyweight to run on IoT devices. Thus, we propose to solve subproblem (14) using the conjugate gradient method [Nesterov (2013)] in two sequential steps. Firstly, we consider the objective function of (14) as a function h of wi, and let bi = bti to find the optimal wi* that minimizes h. Then, we consider the objective function of (14) as a function h′ of bi, and let wi = wi* to find the optimal bi* that minimizes h′. After these two steps, we obtain the solution as (wi*, bi*). Due to space limitations, we only introduce the algorithm for wi* in Algorithm 1; bi* can be obtained in a similar way. In Algorithm 1, k denotes the iteration index, p denotes the conjugate direction, and ε denotes the convergence threshold. In particular, we design a simple heuristic to search for the optimal step size σ from a given set S, with the help of two auxiliary parameters s1 and s2 [Hager and Zhang (2006)]. This is because the objective function h(wi) does not contain a symmetric and positive-definite matrix, which is required by the standard conjugate gradient method for determining the value of σ [Nesterov (2013)]. Algorithm 1 presents the parameter values used in our experiments in Section 5. Note that the choice of suitable values for these parameters depends on the scale of the variable values in h(wi).

    Algorithm 1 Conjugate gradient algorithm for obtaining w_i*
    1: Initialize k ← 0, ε ← 10^-5, w_i^0 ← 0, p^0 ← −∇h(w_i^0), S ← {−1, 1}, s1 ← 1×10^-2, and s2 ← 2×10^-2.
    2: repeat
    3:   σ_k ← s1
    4:   for each δ ∈ S do
    5:     if h(w_i^k + δ s2 p^k) < h(w_i^k + σ_k p^k) then
    6:       σ_k ← δ s2
    7:     end if
    8:   end for
    9:   w_i^{k+1} ← w_i^k + σ_k p^k
    10:  β_k ← ‖∇h(w_i^{k+1})‖^2 / ‖∇h(w_i^k)‖^2
    11:  p^{k+1} ← −∇h(w_i^{k+1}) + β_k p^k
    12:  k ← k + 1
    13: until Convergence: ‖∇h(w_i^k)‖ ≤ ε
    14: w_i* ← w_i^k
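    A runnable Python transcription of Algorithm 1 follows, assuming h returns the local objective of (14) for a fixed b_i and grad_h its gradient (both hypothetical helpers supplied by the caller):

        import numpy as np

        def conjugate_gradient(h, grad_h, n, eps=1e-5, s1=1e-2, s2=2e-2,
                               max_iter=1000):
            # Fletcher-Reeves conjugate gradient with the heuristic step-size
            # search of Algorithm 1 over the candidate steps {s1, -s2, +s2}.
            w = np.zeros(n)
            p = -grad_h(w)
            for _ in range(max_iter):
                g = grad_h(w)
                if np.linalg.norm(g) <= eps:           # stopping rule (line 13)
                    break
                sigma = s1                             # default step size (line 3)
                for delta in (-1.0, 1.0):              # heuristic search (lines 4-8)
                    if h(w + delta * s2 * p) < h(w + sigma * p):
                        sigma = delta * s2
                w_next = w + sigma * p                 # line 9
                beta = (np.linalg.norm(grad_h(w_next)) ** 2
                        / np.linalg.norm(g) ** 2)      # line 10
                p = -grad_h(w_next) + beta * p         # line 11
                w = w_next
            return w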

    3. ζ-update: After obtaining ξt+1 and ψt+1, we finally update the dual variables as:

    $$\zeta_{i,w}^{t+1}=\zeta_{i,w}^{t}+\rho\,(w_{i}^{t+1}-w^{t+1}) \tag{15}$$
    $$\zeta_{i,b}^{t+1}=\zeta_{i,b}^{t}+\rho\,(b_{i}^{t+1}-b^{t+1}) \tag{16}$$

    The entire process of algorithm execution in our scenario is summarized in Algorithm 2. In this work, the primal residual r and the dual residual s [Boyd, Parikh, Chu et al. (2011)] are used together as the convergence criterion, and they are expected to be less than their respective tolerances ε_pri and ε_dual. According to Boyd et al. [Boyd, Parikh, Chu et al. (2011)], ε_pri and ε_dual can be reasonably chosen using an absolute tolerance ε_abs and a relative tolerance ε_rel. In each iteration, a participating device submits its locally trained intermediate parameters (wi, bi) and (ζi,w, ζi,b) to the edge server. The edge server then computes and updates the global model parameters (w, b) by averaging the parameters gathered from the devices. The edge server and IoT devices work in this collaborative way for multiple iterations until the lasso model converges. Note that in the above process, all training samples are kept locally on the device side, and are protected from leakage to the edge server or other devices.

    Algorithm 2 Distributed lasso algorithm in edge computing
    1: The edge server initializes t ← 0, w^0 ← 0, b^0 ← 0.
    2: Each IoT device i initializes t ← 0, ζ_{i,w}^0 ← 0, ζ_{i,b}^0 ← 0.
    3: repeat
    4:   Given (w_i^t, b_i^t) and (ζ_{i,w}^t, ζ_{i,b}^t) from each device i ∈ {1, ..., N}, the edge server updates w^{t+1} and b^{t+1} according to (11) and (12), and broadcasts the results to all devices.
    5:   Given w^{t+1} and b^{t+1}, each device i solves the subproblem (14) independently to obtain w_i^{t+1} and b_i^{t+1}.
    6:   Each device i updates its dual variables ζ_{i,w}^{t+1} and ζ_{i,b}^{t+1} according to (15) and (16).
    7:   Each device i sends the results (w_i^{t+1}, b_i^{t+1}) and (ζ_{i,w}^{t+1}, ζ_{i,b}^{t+1}) to the edge server.
    8:   t ← t + 1
    9: until Convergence: ‖r^t‖_2 ≤ ε_pri and ‖s^t‖_2 ≤ ε_dual
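    To tie the updates together, here is a compact single-process simulation of Algorithm 2 on synthetic data. It is only a sketch under our own assumptions: device and server roles are played by plain arrays, the loop runs for a fixed number of iterations instead of checking the residual tolerances, and the local subproblem (14) is solved in closed form as a linear system (it is a smooth quadratic in (w_i, b_i)) rather than with the paper's conjugate gradient routine:

        import numpy as np

        rng = np.random.default_rng(1)
        N, n, M = 10, 9, 100                 # devices, features, samples per device
        lam, rho, T = 1.0, 1.0, 50           # regularization, penalty, iterations

        w_true = rng.normal(size=n)
        X = [rng.normal(size=(M, n)) for _ in range(N)]            # local features
        y = [Xi @ w_true + 0.01 * rng.normal(size=M) for Xi in X]  # local responses

        bar = np.zeros(n + 1)                        # global (w, b) at the server
        theta = [np.zeros(n + 1) for _ in range(N)]  # local (w_i, b_i)
        zeta = [np.zeros(n + 1) for _ in range(N)]   # duals (zeta_{i,w}, zeta_{i,b})

        for t in range(T):
            # Server: xi-update by averaging and soft-thresholding, (11)-(12)
            avg = np.mean([th + zi / rho for th, zi in zip(theta, zeta)], axis=0)
            w = np.sign(avg[:n]) * np.maximum(np.abs(avg[:n]) - lam / (N * rho), 0.0)
            bar = np.concatenate([w, avg[n:]])
            for i in range(N):
                # Device i: psi-update, closed-form solution of (14)
                Ai = np.hstack([X[i], np.ones((M, 1))])
                lhs = Ai.T @ Ai + rho * np.eye(n + 1)
                rhs = Ai.T @ y[i] - zeta[i] + rho * bar
                theta[i] = np.linalg.solve(lhs, rhs)
                # Device i: zeta-update, dual ascent steps (15)-(16)
                zeta[i] = zeta[i] + rho * (theta[i] - bar)

        print("recovered w:", np.round(bar[:n], 3))

    A practical implementation would additionally track the residual norms ‖r^t‖ and ‖s^t‖ and stop once both fall below ε_pri and ε_dual, as in line 9 of Algorithm 2.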

    5 Performance evaluation

    In this section, we conduct simulation experiments to evaluate the performance of our ADMM-based algorithm.

    5.1 Experiment settings

    We evaluate our algorithm on two datasets: a synthetic one and a real-world one. The synthetic dataset contains 1500 data samples, each of which includes a nine-dimensional feature vector. The data generation follows the description in Boyd et al. [Boyd, Parikh, Chu et al. (2012)]. Using this dataset, we can focus on evaluating the algorithm performance under different parameter settings, regardless of the impact of data quality. The experiment results are shown in Figs. 1-4. Then, we use the well-known diabetes dataset from Efron et al. [Efron, Hastie, Johnstone et al. (2004)] to further investigate the algorithm performance under realistic conditions. This dataset contains 442 data samples, each of which includes a ten-dimensional feature vector. The experiment results are shown in Fig. 5. In all experiments, we split the given dataset into 70% training data and 30% validation data. Unless otherwise specified, the simulation parameters are given as follows: N = 10, λ = 1.0, ρ = 1.0, ε_abs = 0.2, and ε_rel = 0.5.

    To provide performance benchmarks for the proposed algorithm, we compare it with two baselines, namely the centralized training approach and the independent training approach. As stated in Section 1, the centralized approach generally can obtain the optimal result, but suffers from problems of scalability, performance and privacy. In the independent approach, each participating device trains its own model independently with local data samples. This approach overcomes the challenges of scalability and privacy at the cost of modeling performance, and the resulting accuracy is highly correlated with both the size and the quality of the training set.

    5.2 Experiment results

    Convergence of our algorithm. Fig. 1 depicts the convergence behavior of our ADMM algorithm. The left-hand plot (Fig. 1(a)) shows the change of the objective function value w.r.t. iterations, while the right-hand plot (Fig. 1(b)) shows the progress of the primal and dual residual norms w.r.t. iterations. The objective value computed using the centralized algorithm is taken as the global optimal result. As shown in Fig. 1, we observe that our algorithm comes very close to the optimum after 30 iterations, and finally converges within 58 iterations according to the given stopping criterion.

    Figure 1: Convergence of the ADMM-based algorithm

    To fully investigate the convergence performance, we conduct 11 independent random experiments to compare the number of iterations needed for convergence. As shown in Fig. 2, our algorithm takes at most 138 iterations to converge, and the fastest run takes only 17 iterations. In 80% of the runs, our algorithm achieves convergence within 120 iterations. On the other hand, the centralized algorithm takes at least 580 iterations to converge, and even 800 iterations in the worst case. These results clearly indicate that our algorithm converges significantly faster than the centralized approach.

    Impacts of parameter settings. Next we study the algorithm performance under different N, ρ, and λ values (with the other parameters fixed). All these results are plotted in Fig. 3. As shown in Fig. 3(a), our algorithm always converges to the same optimal objective value no matter how many IoT devices are involved. We can observe that as N increases, our algorithm converges with only a moderate increase in iterations. These observations demonstrate the scalability of the algorithm as well as its modest overhead for device coordination. Similarly, varying ρ also has little impact on the final optimization objective, as shown in Fig. 3(b). However, a smaller value of ρ tends to speed up the dual update and achieve faster convergence. From Fig. 3(c), we can see that changing λ only determines the expected optimization objective in (3), but has a negligible impact on the convergence of the algorithm and its rate.

    Figure 2: CDF of the number of iterations to achieve convergence

    Figure 3: Convergence performance of the algorithm with varying parameter settings

    Comparison of modeling performance. To evaluate model performance, we use the two most common metrics in regression analysis: Mean Squared Error (MSE) and Adjusted R-Squared (Adjusted-R2) [Spencer, Alfandi and Al-Obeidat (2018); Kmenta and Rafailzadeh (1997)]. MSE measures the expected squared distance between actual and predicted values. It is non-negative, and a value closer to 0 indicates a better model. Adjusted-R2 measures the correlation between actual and predicted values. It can take on any value no greater than 1, and a value closer to 1 indicates a better fit. Since the performance of the independent approach relies on the size of the local training set, we randomly partition the original dataset into N training subsets, and compare the performance of the algorithms under different device numbers N. Note that when N = 1, the independent approach is equivalent to the centralized approach.
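    For reference, the two metrics can be computed as follows (a standard implementation, with p denoting the number of predictor variables; the function names are ours):

        import numpy as np

        def mse(y_true, y_pred):
            # Mean squared error between actual and predicted values
            return float(np.mean((y_true - y_pred) ** 2))

        def adjusted_r2(y_true, y_pred, p):
            # Adjusted R-squared for a model with p predictors and n samples
            n = len(y_true)
            ss_res = np.sum((y_true - y_pred) ** 2)
            ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
            r2 = 1.0 - ss_res / ss_tot
            return float(1.0 - (1.0 - r2) * (n - 1) / (n - p - 1))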

    As shown in Fig. 4(a), the average MSE of the independent approach keeps increasing as N grows. The variance of the MSE values also becomes larger as N increases, implying that different IoT devices with same-sized data are more likely to obtain distinct training results. The MSE of our algorithm is always kept at about 0.01, and does not depend on N. From Fig. 4(b), we observe that our algorithm obtains a steady Adjusted-R2 value of around 0.985. The independent approach has a slightly better performance than ours when N < 60, but its performance degrades significantly when N ≥ 60. An IoT device may even obtain a negative Adjusted-R2 when N = 90, meaning that its resulting model does not fit the data [Kmenta and Rafailzadeh (1997)]. Note that for the centralized algorithm, MSE = 0.000451 and Adjusted-R2 = 0.998639. From these results, we conclude that the lasso models trained by the independent approach may not be robust for individual IoT devices, due to the limited data size and the lack of data diversity. By utilizing data samples contributed by many IoT devices, our algorithm always converges to a near-optimal solution and obtains modestly accurate models comparable to those of the centralized approach.

    Figure 4: Comparison of the model performance of different algorithms using different metrics

    Results on the real-world dataset. Furthermore, we evaluate our algorithm on the diabetes dataset of Efron et al. [Efron, Hastie, Johnstone et al. (2004)] using the same metrics. Due to space limitations, typical experimental results are plotted in Figs. 5(a)-5(c).

    We obtain some new observations that differ from those presented above and deserve special attention. Firstly, as shown in Fig. 5(a), the objective value of our algorithm falls off remarkably fast during the initial 3 iterations, and converges after 25 iterations. We attribute this fast convergence to the significantly smaller size of this dataset. Secondly, the performance of the independent approach is clearly the worst among all three algorithms, according to Figs. 5(b) and 5(c). We can notice that, starting from N = 10, its MSE increases sharply to values far greater than 0 and far above those of the other algorithms. Meanwhile, its Adjusted-R2 decreases quickly to negative values. These observations indicate that when training data are irregularly distributed, which is very common in reality, a good model is hard to obtain by individual devices due to the severe lack of data diversity. Thirdly, our algorithm still achieves a good modeling performance comparable to that obtained by the centralized approach.

    Figure 5: Experimental results on the diabetes dataset

    6 Conclusion

    In this paper, we present a collaborative learning scheme for training lasso regression models based on data samples from IoT devices in edge computing. The target optimization problem of lasso is solved in a distributed fashion by leveraging ADMM, while the participating devices only need to share indispensable intermediate results with the edge server for model training. Simulation results on two typical datasets demonstrate that the proposed scheme quickly converges to a near-optimal solution within dozens of iterations, and significantly outperforms the baseline solutions in the performance evaluation. Our future work involves implementing the scheme on a real edge computing platform to evaluate its feasibility and performance.
