
Parallelizing Modified Cuckoo Search on MapReduce Architecture


    Chia-Yu Lin, Yuan-Ming Pai, Kun-Hung Tsai, Charles H.-P. Wen, and Li-Chun Wang

    1. Introduction

Meta-heuristics such as particle swarm optimization (PSO) and cuckoo search (CS) are widely used in engineering optimization. PSO was inspired by the foraging social behavior of birds and fishes [1]. At the beginning, the individuals have no idea about the food location and thus search according to their experience and intuition. Once an individual finds the food, it informs the other individuals of the location, and they adjust their flight accordingly. Bird/fish foraging behavior is thus a form of socially mutual influence, which guides all individuals to move toward the optimum. PSO is popular because it is simple, requires little tuning, and is found effective for problems with wide-ranging solution spaces.

Moreover, cuckoo search (CS), another optimization algorithm, was proposed in 2009 [2]. Cuckoo eggs mimic the eggs of other host birds so as to stay in their nests, a phenomenon that drives the evolution of egg appearance toward optimal disguise. In order to improve the performance of cuckoo search, a modified cuckoo search (MCS) was later proposed in 2011 [3] and successfully demonstrated good performance. Based on [3], we parallelize MCS and propose a MapReduce modified cuckoo search (MRMCS) in this work. As a result, our MRMCS outperforms the previously proposed MapReduce particle swarm optimization (MRPSO) [4] on all evaluation functions and two engineering design problems in terms of both convergence of optimality and runtime.

MapReduce [5] is a widely-used parallel programming model in cloud platforms and consists of mapping and reducing functions inspired by divide and conquer. The mapping and reducing functions execute the computation in parallel, combine the intermediate results, and output the final result. Independent data are suitable for MapReduce computing. For example, in the k-means algorithm, each data node computes the distance from itself to all central nodes independently, and thus the work in [6] proposed a parallelized version of k-means on MapReduce in 2009. Similarly, in particle swarm optimization (PSO), each data node computes its own best value by itself, and PSO was therefore also successfully parallelized on a MapReduce framework [4].

Since PSO can be successfully parallelized into MRPSO [4], we are motivated to parallelize MCS on a MapReduce architecture and compare the performance of MRMCS with that of MRPSO. Two critical issues are worth pointing out when parallelizing MCS on a MapReduce architecture: 1) job partitioning (i.e., which jobs go to the mappers and which jobs go to the reducers) needs to be decided in MRMCS; 2) the support of information exchange is critical during evolution in MCS, but an original MapReduce architecture like Hadoop cannot support this and thus needs proper modification. This work is therefore motivated to deal with these two problems to enable good parallelism in MRMCS.

The rest of the paper is organized as follows. Section 2 introduces the fundamentals of MCS, and Section 3 describes the MapReduce architecture in detail. MRMCS is proposed and elaborated in Section 4. Section 5 presents several optimization applications with MRMCS and compares their performance and runtime with MRPSO. Finally, Section 6 concludes the paper.

    2. Modified Cuckoo Search

CS was proposed for optimization problems by Yang et al. in 2009 [2]. Later, in order to improve the performance of the baseline CS, Walton et al. added more perturbations to the generation of the population and proposed MCS in 2011 [3]. In this work, we further parallelize MCS on a MapReduce architecture and propose MRMCS.

The original CS was inspired by the egg-laying behavior of cuckoos. Cuckoos tend to lay eggs in the nests of other host birds. If the host birds can differentiate cuckoo eggs from their own, they may throw away the cuckoo eggs or abandon all eggs in the nest. This drives the evolution of cuckoo eggs toward mimicking the eggs of local host birds. Yang et al. [2] derived the following three rules from this behavior for optimization:

· Each egg laid by a cuckoo is a set of solution coordinates and is dumped in a random nest, one egg at a time.

· A fraction of the nests containing the eggs (solutions) with the best fitness will carry over to the next generation.

· The number of nests is fixed, and there is a probability that a host discovers an alien egg. If this happens, the host can discard either the egg or the nest, which results in building a new nest in a new location.

Besides the three rules stated above, the use of the Lévy flight [7] for both local and global search is another important component in CS. The Lévy flight, also frequently used in other search algorithms [7], is a random walk in which the step lengths follow a heavy-tailed probability distribution. The egg generated by a Lévy flight compares its fitness value with that of the current egg; if the fitness value of the new egg is better, the new egg takes the current egg's position. The random step of the Lévy flight is scaled by a constant step size α, which can be adjusted according to the problem size of target applications. The fraction of nests to be abandoned is the only parameter that needs to be adjusted during the CS evolution.
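To make the random walk concrete, below is a minimal Java sketch of one Lévy-flight move on a solution vector. It approximates the heavy tail with a simple inverse-power-law (Pareto-like) draw; the exponent beta, the class name, and all identifiers are illustrative assumptions rather than the paper's implementation (CS codes often use Mantegna's algorithm instead).

    import java.util.Random;

    // One Lévy-flight move: perturb every coordinate by a heavy-tailed step.
    public class LevyFlight {
        private static final Random RNG = new Random();

        // alpha: step-size scale; beta: tail exponent (commonly around 1.5).
        static double[] step(double[] x, double alpha, double beta) {
            double[] next = x.clone();
            for (int i = 0; i < x.length; i++) {
                // Heavy-tailed length: u^(-1/beta) with u ~ U(0,1); the small
                // offset avoids an infinite step when u is exactly zero.
                double len = Math.pow(RNG.nextDouble() + 1e-12, -1.0 / beta);
                double dir = RNG.nextBoolean() ? 1.0 : -1.0; // random direction
                next[i] += alpha * len * dir;
            }
            return next;
        }
    }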

In order to speed up the convergence of evolution, Walton et al. [3] proposed MCS with two modifications over CS. The first change is that the step size $\alpha_1$ is no longer a constant and decreases as the number of generations increases. Adjusting $\alpha_1$ dynamically leads to faster convergence on optimality. At each generation, the new step size of the Lévy flight is

$\alpha_1 = A/\sqrt{G}$,

where A is initialized as 1 and G is the generation number. This step size is applied to the fraction of nests to be abandoned.

The second change is the information exchange between eggs. In MCS, the eggs with the best fitness values are put in the top-egg group. Every top egg is paired with a randomly-picked top egg. During the selection process, if the same egg is picked twice, a new egg is generated by a Lévy flight with the step size

$\alpha_2 = A/G^2$.

Otherwise, a new egg is generated from the two top eggs using the golden ratio $\varphi = (1+\sqrt{5})/2$: the new egg is placed a distance

$dx = |x_i - x_j|/\varphi$

along the line from the worse egg toward the better one.

The fraction of nests to be abandoned and the fraction of nests used to generate the next top eggs are the two adjustable parameters in MCS. Algorithm 1 shows the details of MCS.
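As a quick numerical illustration of the two schedules (assuming A = 1, as initialized above): at generation G = 4, the abandoned-nest step size is $\alpha_1 = 1/\sqrt{4} = 0.5$, whereas the top-egg step size has already shrunk to $\alpha_2 = 1/4^2 = 0.0625$. Top eggs are thus perturbed far more conservatively than abandoned ones, and the gap widens every generation.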

Algorithm 1. MCS Algorithm in [3]

1: A ← MaxLévyStepSize
2: φ ← GoldenRatio
3: Initialize a population of n host nests xi (i = 1, 2, …, n)
4: for all xi do
5:   Calculate fitness Fi = f(xi)
6: end for
7: Generation number G ← 1
8: while NumberObjectiveEvaluations < MaxNumberEvaluations do
9:   G ← G + 1
10:  Sort nests by order of fitness
11:  for all nests to be abandoned do
12:    Calculate position xi
13:    Calculate Lévy flight step size α1 ← A/√G
14:    Perform Lévy flight from xi to generate new egg xk
15:    xi ← xk
16:    Fi ← f(xi)
17:  end for
18:  for all of the top nests do
19:    Calculate position xi
20:    Pick another nest from the top nests at random: xj
21:    if xi = xj then
22:      Calculate Lévy flight step size α2 ← A/G²
23:      Perform Lévy flight from xi to generate new egg xk
24:      Fk = f(xk)
25:      Choose a random nest l from all nests
26:      if Fk > Fl then
27:        xl ← xk
28:        Fl ← Fk
29:      end if
30:    else
31:      Calculate dx = |xi − xj|/φ
32:      Move distance dx from the worst nest to the best nest to find xk
33:      Fk = f(xk)
34:      Choose a random nest l from all nests
35:      if Fk > Fl then
36:        xl ← xk
37:        Fl ← Fk
38:      end if
39:    end if
40:  end for
41: end while

    3. MapReduce Architecture

MapReduce [5] is a patented software framework introduced by Google to support distributed computing on large data volumes over clusters of computers. MapReduce can also be considered a parallel programming model aimed at processing large datasets. A MapReduce framework consists of mapping and reducing functions inspired by divide and conquer. The map function, also known as the mapper, parallelizes the computation on large-scale clusters of machines. The reduce function, also called the reducer, collects the intermediate results from the mappers and then outputs the final result. In the MapReduce architecture, all data items are represented as keys paired with associated values. For example, in a program that counts the frequency of occurrences of words, the key is the word itself and the value is its frequency of occurrences. Applications with independent input data or computation are suitable to be parallelized on the MapReduce framework. For example, in PSO, each data node can finish computing its own best value without acquiring information from other nodes. Therefore, PSO is a good candidate to be parallelized on the MapReduce framework to greatly save runtime. Such an idea was termed MRPSO and realized in [4].

    3.1 Map Function (Mapper)

A MapReduce job usually splits the input dataset into many independent chunks which are processed by the map function in a completely parallel manner. The map function takes a set of (key, value) pairs and generates a set of intermediate (key, value) pairs by applying a designated function to all these pairs, that is,

map: $(k_1, v_1) \rightarrow \mathrm{list}(k_2, v_2)$.

    3.2 Reduce Function (Reducer)

Before running the reduce function, the shuffle and sort functions are applied to the outputs of the map function, and the new outputs become the input to the reduce function. The reduce function merges all pairs with the same key using a reduction function:

reduce: $(k_2, \mathrm{list}(v_2)) \rightarrow \mathrm{list}(v_2)$.

The input and output types of a MapReduce job are illustrated in Fig. 1. The data, a (key, value) pair, is the input to the mapper. The mapper extracts meaningful information from each record independently. The output of the mapper is sorted and combined according to the key and passed to the reducer, where the reducer performs aggregation, summarization, filtering, or transformation of the data and writes the final result.

    3.3 MapReduce Example

An example of the overall MapReduce framework is shown in Fig. 2. This is a program named “WordCount” used for counting the frequency of occurrences of different words. The input data is partitioned into several files and sent to different mappers to count the occurrences of each word. The input key is ignored but arbitrarily set to be the line number of the input value. The output key is the word under interest, and the output value is its count. The shuffle and sort functions are performed to combine the key values output from the mappers. Finally, the reducer merges the count values of each word and writes out the final result (i.e., the frequency of occurrences).
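For concreteness, the following is a minimal Hadoop WordCount mapper/reducer pair in Java, mirroring the standard example shipped with Hadoop rather than anything specific to this paper; the driver (Job) configuration is omitted.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Mapper: emits the intermediate pair (word, 1) for every token in its split.
    class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context ctx)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                ctx.write(word, ONE);
            }
        }
    }

    // Reducer: after shuffle/sort it receives (word, [1, 1, ...]) and sums the list.
    class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }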

    3.4 MapReduce Implementation

Google has published its MapReduce implementation in [5] but has not released the system to the public. Thus, the Apache Lucene project developed Hadoop, a Java-based platform, as an open-source MapReduce implementation. Hadoop [8] was derived from Google's MapReduce architecture and the Google file system (GFS). Data-intensive, distributed applications can run on Hadoop, which can support up to thousands of computing nodes. In this work, we referred to [4] and implemented PSO and MCS as MRPSO and MRMCS, respectively, on the Hadoop platform.

Fig. 1. Input and output types of a MapReduce job: (input) → map → combine → reduce → (output).

    Fig. 2. Example of a MapReduce framework.

Fig. 3. Overall flowchart of MRMCS.

Fig. 4. Case one of the mapper function in MRMCS.

    4. MRMCS

Parallelizing MCS on a MapReduce architecture is elaborated in this section. Two major problems remain to be solved. First, we have to determine a strategy for job partitioning; in other words, we need to decide which jobs mappers and reducers take care of, respectively. Second, information exchange must be enabled in MRMCS, and computing nodes need to communicate with each other in the MapReduce architecture, which was not supported originally. Therefore, we propose the 3-egg-tuple transformation to facilitate information exchange between eggs.

Fig. 3 shows the overall flow of MRMCS. The 3-egg-tuple transformation function outputs a new sample composed of original samples (i, j, k) for mappers, where i denotes the index of the current egg, j is the index of a randomly-picked egg to be paired with egg i, and k is the index of the nest for putting the new egg after evolution. After the 3-egg-tuple transformation process, mappers perform the golden-ratio crossover or the Lévy flight to generate a new egg. Later, each reducer chooses the best egg as the descendant sample among all of its own candidates. Details of the three steps stated above are discussed as follows.

4.1 3-Egg-Tuple Transformation

In MCS, eggs are separated into top-egg groups and bad-egg groups. The egg picked from one top-egg group and another egg randomly picked from another top-egg group are first paired, and then MCS performs the crossover operation over the pair to generate a new egg. If two eggs are coincidentally picked from the same top-egg group, the Lévy flight is used instead to generate the new egg. Since egg information is not preserved across different mappers in the MapReduce architecture, we combine information from three eggs into one; this function is called the 3-egg-tuple transformation. The outputs of the 3-egg-tuple transformation function are sets of (current egg index, randomly-picked egg index, target-nest index for putting the new egg), denoted as (i, j, k). Each 3-egg-tuple (i, j, k) is sent to a mapper for generating a new egg.
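A sketch of how the 3-egg-tuple transformation might emit its (i, j, k) records is shown below; the pairing and nest-assignment policy here are illustrative assumptions, not the paper's exact implementation.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // For each current egg i, pick a random partner j among the top eggs and a
    // target nest k for the evolved egg, then emit the tuple (i, j, k).
    public class EggTupleTransform {
        static List<int[]> transform(int numEggs, int numTopEggs, Random rng) {
            List<int[]> tuples = new ArrayList<>();
            for (int i = 0; i < numEggs; i++) {
                int j = rng.nextInt(numTopEggs); // randomly-picked partner egg
                int k = rng.nextInt(numEggs);    // nest receiving the new egg
                tuples.add(new int[] { i, j, k });
            }
            return tuples; // one record per mapper invocation
        }
    }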

4.2 MRMCS Mappers

One key challenge of parallelizing MCS on a MapReduce platform is job partitioning: we have to decide which jobs go to mappers and which jobs go to reducers. The general rule is that mappers take charge of independent jobs and reducers are responsible for combining the results. Since the crossover and Lévy flight operations for new-egg generation are independent among all samples, mappers are assigned to perform the new-egg generation. The 3-egg-tuples are the input to the new-egg generation in mappers, and each new-egg generation falls into one of three cases, as sketched in the code after Case 3.

· Case 1: The top egg xi and the top egg xj are not drawn from the same nest. The egg xi is first duplicated and placed at the nest ni for the next generation. Then the eggs xi and xj are used to perform the crossover operation and generate a new egg to be placed at the nest nk. Fig. 4 shows an example of this case.

· Case 2: The top egg xi and the top egg xj are drawn from the same nest. The egg xi is first duplicated and placed at the nest ni for the next generation. The Lévy flight operation is performed on the egg xi instead, and a new egg is generated to be placed at the nest nk. Fig. 5 shows an example of Case 2.

Fig. 5. Case two of the mapper function in MRMCS.

Fig. 6. Case three of the mapper function in MRMCS.

Fig. 7. Example of the reduce function.

· Case 3: The Lévy flight operation is performed on the bad egg xi directly, and a new egg is generated to be placed at nk = ni, as shown in Fig. 6.
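The three cases can be condensed into a single dispatch per 3-egg-tuple, as in the hedged Java sketch below; egg storage, the top/bad classification, and the simplified Lévy step are assumptions layered on the sketches above, not the paper's code.

    import java.util.Random;

    // Generate the new egg for one 3-egg-tuple according to the three cases.
    public class MrmcsMapperCases {
        private static final double PHI = (1.0 + Math.sqrt(5.0)) / 2.0;
        private static final Random RNG = new Random();

        static double[] newEgg(double[] xi, double[] xj, boolean isTopEgg,
                               boolean sameNest, double A, int G) {
            if (!isTopEgg) {
                // Case 3: Lévy flight directly on the bad egg, alpha1 = A/sqrt(G).
                return levyStep(xi, A / Math.sqrt(G));
            }
            if (sameNest) {
                // Case 2: the two picks coincide; Lévy flight with alpha2 = A/G^2.
                return levyStep(xi, A / (double) (G * G));
            }
            // Case 1: golden-ratio crossover, moving 1/phi of the way from xi to xj.
            double[] child = new double[xi.length];
            for (int t = 0; t < xi.length; t++)
                child[t] = xi[t] + (xj[t] - xi[t]) / PHI;
            return child;
        }

        // Simplified heavy-tailed step (see the Lévy-flight sketch in Section 2).
        private static double[] levyStep(double[] x, double alpha) {
            double[] next = x.clone();
            for (int t = 0; t < x.length; t++) {
                double len = Math.pow(RNG.nextDouble() + 1e-12, -1.0 / 1.5);
                next[t] += alpha * len * (RNG.nextBoolean() ? 1.0 : -1.0);
            }
            return next;
        }
    }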

4.3 MRMCS Reducers

Reducers are responsible for combining the intermediate results from mappers. In MRMCS, reducers determine the next-generation egg of each nest, respectively. Fig. 7 shows an example of the reducer operation. After mappers generate new eggs, every nest may contain more than one egg. Each reducer finds the best value among all eggs in one nest and uses the egg with the best value as the next generation. The results of the reducers are used as the input to the next MRMCS generation.

Algorithms 2 and 3 summarize the details of the mapper operations, including the three cases stated above, and the reducer operations in MRMCS.

Algorithm 2. MRMCS on Map

1: A ← MaxLévyStepSize
2: φ ← GoldenRatio
3: f(xi) ← the fitness of xi
4: definition: Mapper(key, value)
5: input: (Last iteration fitness value, S), S: {(x1, xj, xk), …, (xn, xj, xk)}, a set of (the current egg, a random egg, the nest for putting the new egg)
6: if Bad nest then
7:   Pick the nest ni
8:   Calculate Lévy flight step size α1 ← A/√G
9:   Perform Lévy flight from xi to generate new egg xk
10:  xi ← xk
11:  Fi ← f(xi)
12: end if
13: if Top nest then
14:  Pick the nest ni
15:  Randomly pick another nest nj from another top nest
16:  if i = j then
17:    Calculate Lévy flight step size α2 ← A/G²
18:    Perform Lévy flight from xi to generate new egg xk
19:    Fk = f(xk)
20:  else
21:    Calculate dx = |xi − xj|/φ
22:    Move distance dx from the worst nest to the best nest to find xk
23:    Fk = f(xk)
24:  end if
25: end if

Algorithm 3. MRMCS on Reduce

1: definition: Reducer(key, valuelist)
2: input: (Last iteration fitness value, a population of n host nests Fi, i = 1, 2, …, n)
3: for all Fi do
4:   Find the best value xbest of Fi
5:   Calculate fitness Fi = f(xbest)
6: end for
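In Hadoop terms, the reducer of Algorithm 3 can be sketched as follows; the key is a nest index and each value is a candidate egg serialized as "fitness;coord1,coord2,...". The record encoding is an illustrative assumption, not the paper's exact format.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Keep only the best candidate egg per nest; the emitted egg seeds this
    // nest in the next MRMCS generation.
    class BestEggReducer extends Reducer<IntWritable, Text, IntWritable, Text> {
        @Override
        protected void reduce(IntWritable nest, Iterable<Text> eggs, Context ctx)
                throws IOException, InterruptedException {
            double bestFitness = Double.NEGATIVE_INFINITY;
            String bestEgg = null;
            for (Text egg : eggs) {
                // Fitness is the field before ';' in the assumed encoding.
                double f = Double.parseDouble(egg.toString().split(";")[0]);
                if (f > bestFitness) {
                    bestFitness = f;
                    bestEgg = egg.toString();
                }
            }
            if (bestEgg != null) ctx.write(nest, new Text(bestEgg));
        }
    }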

5. Evaluations and Applications

In our experiments, we implemented both serial and parallel versions of MCS and PSO on Hadoop. The serial MCS generated the new egg of every nest and replaced the old egg with the better one serially. The process of MRMCS was similar to the serial MCS. However, instead of performing MCS sequentially, in order to exchange information on Hadoop, the 3-egg-tuple transformation proceeded before executing the mapping and reducing functions. The output of the 3-egg-tuple transformation was the input to the MapReduce operation.

Hadoop carried out a sequence of MapReduce operations, each of which evaluated a single iteration of MCS. In each MapReduce operation, Hadoop called the mapping function (as in Algorithm 2) and the reducing function (as in Algorithm 3). Mappers in Hadoop generated the new egg of every nest through the crossover or Lévy flight operation in parallel, and reducers chose the best egg from all candidates of every nest, respectively. The output of each MapReduce operation represented the best egg of each nest. MRPSO was also implemented according to [4]. Various evaluations of MRMCS and MRPSO in terms of performance and runtime are compared in the following sections.

Experiments were conducted on a computer with an AMD FX(tm)-8150 eight-core processor and 12 GB memory. Eight virtual machines (VMs) were run on the physical machine. A 10 GB disk and 1 GB memory were allocated to each VM. Hadoop version 0.21 in Java 1.7 was used as the MapReduce system for all experiments. The input dataset (containing 1000 data nodes) was generated by Latin hypercube sampling [9] with respect to the different applications. Here, four evaluation functions and two engineering optimization applications [10] with their experimental results are presented as follows, respectively.
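As a reference for how such an input population can be produced, below is a small Java sketch of Latin hypercube sampling over a box [lo, hi]^dim; the bounds and identifiers are illustrative (e.g., lo = −600 and hi = 600 for Griewank).

    import java.util.Random;

    // Latin hypercube sampling: each of the n samples falls in a distinct
    // stratum per dimension, giving better coverage than plain uniform draws.
    public class LatinHypercube {
        static double[][] sample(int n, int dim, double lo, double hi, Random rng) {
            double[][] points = new double[n][dim];
            for (int d = 0; d < dim; d++) {
                // Random permutation of the strata 0..n-1 (Fisher–Yates shuffle).
                int[] perm = new int[n];
                for (int i = 0; i < n; i++) perm[i] = i;
                for (int i = n - 1; i > 0; i--) {
                    int j = rng.nextInt(i + 1);
                    int t = perm[i]; perm[i] = perm[j]; perm[j] = t;
                }
                for (int i = 0; i < n; i++) {
                    // Uniform draw inside stratum perm[i], scaled to [lo, hi].
                    double u = (perm[i] + rng.nextDouble()) / n;
                    points[i][d] = lo + u * (hi - lo);
                }
            }
            return points;
        }
    }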

5.1 Function Griewank

The Griewank function can be expressed as

$f(x) = \sum_{i=1}^{d} \frac{x_i^2}{4000} - \prod_{i=1}^{d} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$,

where in our experiment, dimension d is set as 30, xi is a random variable, xi ∈ [−600, +600], and i is their index from 1 to d. Fig. 8 compares the performance of MRMCS and MRPSO for Griewank. As a result, MRMCS and MRPSO found the minimum values at the scales of 10⁻² and 10⁻¹, respectively, after 3000 iterations of evolution, demonstrating that MRMCS shows a faster convergence than MRPSO does.
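For reference, the objective above translates directly into code; the sketch below evaluates Griewank for one sample, and the later benchmarks (Rastrigin, Rosenbrock, Sphere) plug into the same shape of evaluation function.

    // Griewank objective for one solution vector x (d = x.length).
    public class Griewank {
        static double evaluate(double[] x) {
            double sum = 0.0, prod = 1.0;
            for (int i = 0; i < x.length; i++) {
                sum += x[i] * x[i] / 4000.0;
                prod *= Math.cos(x[i] / Math.sqrt(i + 1)); // formula uses 1-based i
            }
            return sum - prod + 1.0;
        }
    }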

Fig. 9 compares the runtime of MRMCS and MRPSO on Griewank using 1, 2, 4, and 8 virtual machines, respectively. As shown, MRMCS runs faster than MRPSO. Such a phenomenon can be attributed to two reasons: 1) MRPSO in [4] did not use fitness values as keys; as a result, in each iteration, searching for the optimal value among all samples requires more time for additional comparison operations. 2) MRPSO requires an extra file for the dependent list as its input data, and such a large file adds processing time to the total runtime. More specifically, in Fig. 9, we can also observe that the runtime of MRMCS decreases as the number of VMs increases. Although the runtime reduction is not linear, MRMCS still runs more efficiently than MRPSO does on Hadoop.

5.2 Function Rastrigin

The second evaluation function, Rastrigin, is defined as

$f(x) = 10d + \sum_{i=1}^{d} \left[x_i^2 - 10\cos(2\pi x_i)\right]$,

where in our experiment, dimension d is set as 30, xi is a random variable, xi ∈ [−5.12, +5.12], and i is their index from 1 to d. The performance comparison of MRMCS and MRPSO for Rastrigin is shown in Fig. 10. Surprisingly, the minimum value found by MRMCS is much smaller than that found by MRPSO after 3000 iterations of evolution. Again, MRMCS demonstrates a better convergence than MRPSO does. The runtime comparison between MRPSO and MRMCS is presented in Fig. 11. Similarly, thanks to the two reasons stated above, MRMCS uses a much shorter runtime than MRPSO under various numbers of VMs in use. The total runtime used by MRMCS also decreases when the number of VMs in use increases.

Fig. 8. Performance of MRMCS and MRPSO on Griewank.

Fig. 9. Runtime of MRMCS and MRPSO on Griewank.

Fig. 10. Performance of MRMCS and MRPSO on Rastrigin.

Fig. 11. Runtime of MRMCS and MRPSO on Rastrigin.

Fig. 12. Performance of MRMCS and MRPSO on Rosenbrock.

Fig. 13. Runtime of MRMCS and MRPSO on Rosenbrock.

5.3 Function Rosenbrock

The third evaluation function, Rosenbrock, is defined as

$f(x) = \sum_{i=1}^{d-1} \left[100\left(x_{i+1} - x_i^2\right)^2 + (1 - x_i)^2\right]$,

where in our experiment, dimension d is set as 30, xi is a random variable, xi ∈ [−100, +100], and i is their index from 1 to d. Fig. 12 compares the performance of MRMCS and MRPSO for Rosenbrock. In this case, MRMCS and MRPSO find minimum values of the same quality after 3000 iterations of evolution. However, MRMCS converges around the 500th iteration whereas MRPSO converges around the 1000th iteration. Therefore, MRMCS is more efficient than MRPSO in finding the optimality for Rosenbrock.

Fig. 13 compares the runtime of MRMCS and MRPSO on Rosenbrock using 1, 2, 4, and 8 virtual machines, respectively. As shown, MRMCS runs faster than MRPSO. Again, MRPSO uses 2 to 3 times the runtime of MRMCS, demonstrating that MCS is more suitable to be parallelized on the MapReduce architecture than PSO for the function Rosenbrock.

5.4 Function Sphere

The expression of the Sphere function is

$f(x) = \sum_{i=1}^{d} x_i^2$,

where in our experiment, dimension d is set as 30, xi is a random variable, xi ∈ [−5.12, +5.12], and i is their index from 1 to d. Fig. 14 compares the performance of MRMCS and MRPSO for Sphere. Unlike the previous cases, in the middle of the search process, MRPSO once found a better value than MRMCS, around the 400th iteration. However, it cannot make any advancement for the rest of the 2600 iterations. MRMCS, on the other hand, keeps polishing its solution. Before the end of our experiment, we had not yet concluded whether the optimal value found by MRMCS is the true minimum value.

As to the runtime, Fig. 15 compares that of MRMCS and MRPSO on the Sphere function. Following the same trend as the previous evaluations, MRMCS runs faster than MRPSO does, maintaining a 3-times speed-up.

5.5 Application of Spring Design

Tensional and/or compressional springs are used widely in engineering. There are three design variables in the spring design problem: the wire diameter w, the mean coil diameter d, and the length (or number of coils) L. The goal is to minimize the weight of the spring under the limitations of the maximum shear stress, minimum deflection, and geometrical limits. The details of the spring design problem are described in [11] and [12].

This overall problem can be formulated as

minimize $f(w, d, L) = (L + 2)\,d\,w^2$

subject to

$g_1 = 1 - \dfrac{d^3 L}{71785\,w^4} \le 0$,
$g_2 = \dfrac{4d^2 - wd}{12566\,(dw^3 - w^4)} + \dfrac{1}{5108\,w^2} - 1 \le 0$,
$g_3 = 1 - \dfrac{140.45\,w}{d^2 L} \le 0$,
$g_4 = \dfrac{w + d}{1.5} - 1 \le 0$,

where the objective is the spring weight and the constraints follow the standard formulation of the spring design problem in [11] and [12] (deflection, shear stress, surge frequency, and geometry limits).
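One common way to hand such a constrained problem to CS/MCS is to fold violations into the fitness with a static penalty; the sketch below does this for the formulation reconstructed above, with an arbitrarily chosen penalty weight, so it illustrates the technique rather than the paper's exact setup.

    // Penalized spring-design fitness: objective plus a large weight on the
    // amount by which each reconstructed constraint g1..g4 is violated.
    public class SpringDesign {
        static double fitness(double w, double d, double L) {
            double f = (L + 2.0) * d * w * w; // spring weight (objective)
            double[] g = {
                1.0 - d * d * d * L / (71785.0 * Math.pow(w, 4)),
                (4.0 * d * d - w * d) / (12566.0 * (d * Math.pow(w, 3) - Math.pow(w, 4)))
                    + 1.0 / (5108.0 * w * w) - 1.0,
                1.0 - 140.45 * w / (d * d * L),
                (w + d) / 1.5 - 1.0
            };
            double penalty = 0.0;
            for (double gi : g) penalty += Math.max(0.0, gi); // violated only
            return f + 1e6 * penalty; // minimize this penalized value
        }
    }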

Fig. 16 and Fig. 17 show the comparison in terms of performance and runtime of MRMCS and MRPSO, respectively, on the spring design application. It is clear that MRMCS and MRPSO can find the same optimal value but MRMCS runs much faster than MRPSO does, maintaining a 4-times speed-up.

Fig. 14. Performance of MRMCS and MRPSO on Sphere.

Fig. 15. Runtime of MRMCS and MRPSO on Sphere.

5.6 Application of Welded-Beam Design

The welded-beam design problem is a standard test problem for constrained design optimization [12], [13]. There are four design variables in this problem: the width w and length L of the welded area, and the depth d and thickness h of the main beam. The objective is to minimize the overall fabrication cost under appropriate constraints on the shear stress τ, bending stress σ, buckling load P(x), and maximum end deflection δ.

This overall problem can be formulated as

minimize $f(w, L, d, h) = 1.10471\,w^2 L + 0.04811\,dh\,(14 + L)$

subject to the constraints

$w - h \le 0$, $\delta(x) - 0.25 \le 0$, $\tau(x) - 13600 \le 0$, $\sigma(x) - 30000 \le 0$, $0.125 - w \le 0$, and $6000 - P(x) \le 0$,

where the shear stress τ(x), bending stress σ(x), buckling load P(x), and end deflection δ(x) are computed by the standard expressions given in [12] and [13].

Fig. 18 and Fig. 19 show the performance and runtime comparisons of MRMCS and MRPSO on the welded-beam design application, respectively. Similarly to the spring design optimization, MRMCS and MRPSO achieve solutions of comparable quality, whereas MRMCS only takes a quarter of the runtime that MRPSO does.

Fig. 16. Performance of MRMCS and MRPSO on spring design.

Fig. 17. Runtime of MRMCS and MRPSO on spring design.

Fig. 18. Performance of MRMCS and MRPSO on welded-beam design.

Fig. 19. Runtime of MRMCS and MRPSO on welded-beam design.

    6. Conclusions

Meta-heuristics as search strategies for optimization have been extensively studied and applied to solve many engineering problems. Most of them suffer from long runtimes, and thus parallelizing them to improve their efficiency is a thriving research topic. Recently, PSO was successfully implemented on the MapReduce platform. Therefore, in this paper, we parallelize MCS on a MapReduce platform and propose MRMCS. The problems of job partitioning and information exchange are solved by modifications to the MapReduce architecture and the 3-egg-tuple transformation. As a result, MRMCS outperforms MRPSO on four evaluation functions and two engineering design optimization applications. Experimental results show that MRMCS has better convergence than MRPSO does. Moreover, MRMCS also brings about two- to four-times speed-ups on the four evaluation functions and the engineering design applications, demonstrating superior efficiency after parallelization on the MapReduce architecture (Hadoop).

References

[1] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proc. of IEEE Int. Conf. on Neural Networks, Perth, 1995, pp. 1942–1948.

[2] X. Yang and S. Deb, “Cuckoo search via Lévy flights,” in Proc. of IEEE World Congress on Nature & Biologically Inspired Computing, Coimbatore, 2009, pp. 210–214.

[3] S. Walton, O. Hassan, K. Morgan, and M. Brown, “Modified cuckoo search: a new gradient free optimisation algorithm,” Chaos, Solitons & Fractals, vol. 44, pp. 710–718, Sep. 2011.

[4] A. McNabb, C. Monson, and K. Seppi, “Parallel PSO using MapReduce,” in Proc. of IEEE Congress on Evolutionary Computation, Singapore, 2007, pp. 7–14.

[5] J. Dean and S. Ghemawat, “MapReduce: simplified data processing on large clusters,” Communications of the ACM, vol. 51, no. 1, pp. 107–113, 2008.

[6] W. Zhao, H. Ma, and Q. He, “Parallel k-means clustering based on MapReduce,” Lecture Notes in Computer Science, vol. 5931, 2009, pp. 674–679.

[7] I. Pavlyukevich, “Lévy flights, non-local search and simulated annealing,” Journal of Computational Physics, vol. 226, no. 2, pp. 1830–1844, 2007.

[8] Hadoop: The Definitive Guide, O’Reilly Media, 2012.

[9] R. L. Iman, “Latin hypercube sampling,” in Encyclopedia of Statistical Science Update, New York: Wiley, 1999, pp. 408–411.

[10] X. Yang and S. Deb, “Engineering optimisation by cuckoo search,” Int. Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 4, pp. 330–343, 2010.

[11] J. S. Arora, Introduction to Optimum Design, Waltham: Academic Press, 2004.

[12] L. Cagnina, S. Esquivel, and C. Coello, “Solving engineering optimization problems with the simple constrained particle swarm optimizer,” Informatica, vol. 32, no. 3, pp. 319–326, 2008.

[13] K. Ragsdell and D. Phillips, “Optimal design of a class of welded structures using geometric programming,” ASME Journal of Engineering for Industries, vol. 98, no. 3, pp. 1021–1025, 1976.
