Hybridizing grey wolf optimization with differential evolution for global optimization and test scheduling for 3D stacked SoC
Aijun Zhu1, Chuanpei Xu2,3,*, Zhi Li1,2,3,4, Jun Wu2, and Zhenbing Liu2
1. School of Mechano-Electronic Engineering, Xidian University, Xi'an 710071, China; 2. School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin 541004, China; 3. Guangxi Key Laboratory of Automatic Detecting Technology and Instruments, Guilin 541004, China; 4. Guilin University of Aerospace Technology, Guilin 541004, China
A new meta-heuristic method is proposed to enhance current meta-heuristic methods for global optimization and test scheduling for three-dimensional (3D) stacked system-on-chip (SoC) by hybridizing grey wolf optimization with differential evolution (HGWO). Because basic grey wolf optimization (GWO) easily falls into stagnation when it carries out the operation of attacking the prey, differential evolution (DE) is integrated into GWO to update the previous best positions of the grey wolves Alpha, Beta and Delta, in order to force GWO to jump out of the stagnation with DE's strong searching ability. The proposed algorithm can accelerate the convergence speed of GWO and improve its performance. Twenty-three well-known benchmark functions and an NP hard problem of test scheduling for 3D SoC are employed to verify the performance of the proposed algorithm. Experimental results show the superior performance of the proposed algorithm in exploiting the optimum, and it has advantages in terms of exploration.
Keywords: meta-heuristic, global optimization, NP hard problem.
Global optimization problems (GOPs) are frequently encountered in modern science and engineering fields. Without loss of generality, a global optimization problem can be formulated as
where n is the number of decision variables and f(x) is the objective function. R is the real field, x ∈ Q, and Q is an n-dimensional rectangle in R^n defined by the following equation:
where l = (l1, ..., ln), u = (u1, ..., un), li < xi < ui (i = 1, ..., n), and [l, u] is the feasible region.
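The displayed formulas referenced above appear to have been lost in extraction. A plausible reconstruction, based only on the surrounding definitions, is

\[
\min_{x \in Q} f(x), \qquad f : \mathbb{R}^n \rightarrow \mathbb{R},
\]

with the feasible region

\[
Q = \{\, x \in \mathbb{R}^n \mid l_i < x_i < u_i,\ i = 1, \ldots, n \,\} = [l, u].
\]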
In the last two decades, meta-heuristic optimization methods have become quite popular for solving global optimization problems; examples include grey wolf optimization (GWO) [1], the genetic algorithm (GA) [2], particle swarm optimization (PSO) [3], differential evolution (DE) [4] and ant colony optimization (ACO) [5]. Such optimization methods are also widely used in various engineering fields.
Tomlinson presented a model derived from the grey wolf (Canis lupus) and the social behavior exhibited by packs of wolves, which was shown at SIGGRAPH 2001. Vonholdt analyzed DNA samples from 555 Northern Rocky Mountain wolves in three recovery areas over the 10-year recovery period (1995-2004) [6]. Matthew discussed the effect of sociality and season on grey wolf foraging behavior [7]. Vucetich presented an analysis that incorporated a previously ignored feature of wolf foraging ecology and showed that individuals in large packs do accrue foraging advantages [8].
However, none of the above literature provides a mathematical model for solving GOPs.
Mirjalili proposed a new meta-heuristic called the grey wolf optimizer, inspired by grey wolves, and described a model to solve GOPs for the first time [1]. Experimental results demonstrate that the GWO can solve classical engineering design problems, such as tension/compression spring, welded beam and pressure vessel design. The GWO can also be adopted in the field of optical engineering. Results show that the GWO has excellent performance and can be applied to challenging problems with unknown search spaces.
As is well known, grey wolf hunting has three main stages: tracking, encircling and attacking the prey [9]. Grey wolves launch the behavior of attacking the prey when |A| < 1, whereas they search for new prey when |A| > 1. A can be formulated as follows:
where the elements of a are linearly decreased from 2 to 0 over the course of iterations, and r1 is a random vector in [0, 1].
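The omitted display is presumably the standard GWO coefficient definition from [1]:

\[
A = 2a \cdot r_1 - a,
\]

where the product is taken element-wise.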
We can summarize that the attacking behavior is similar to local search, while diverging from the current prey to find a better prey is similar to global search. Therefore, the GWO is likely to fall into stagnation while the grey wolves are attacking the prey, which is a common disadvantage of local search operations.
In this paper, we propose a new meta-heuristic method to enhance current meta-heuristic methods by hybridizing GWO with DE. Basic GWO easily falls into stagnation when it carries out the operation of attacking the prey [1]. Because of DE's strong searching ability, DE is integrated into GWO to update the previous best positions of the grey wolves Alpha, Beta and Delta, in order to force GWO to jump out of the stagnation.
The rest of this paper is organized as follows. The basics of GWO are briefly introduced in Section 2. The basics of DE are presented in Section 3. The proposed algorithm is introduced in detail in Section 4. Twenty-three well-known benchmark functions are employed to evaluate the proposed algorithm in Section 5. An NP hard problem of test scheduling for 3D stacked SoC is adopted to verify the proposed algorithm in Section 6. Finally, concluding remarks are given in Section 7.
2.1 Social hierarchy of grey wolves
The GWO is a stochastic global optimization method inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature [1].
The leader of a grey wolf pack is called Alpha, which is responsible for making decisions about almost everything, including hunting [7]. The second level of the hierarchy is Beta, which helps Alpha make decisions. Beta is also the best candidate in case Alpha passes away or grows old. The lowest level is Omega, which plays the role of scapegoat; Omega satisfies the whole group and maintains the dominance architecture of the group. The third level is Delta, which must submit to Alpha and Beta. Delta scouts in order to protect and guarantee the safety of the group.
2.2 Hunting behavior and mathematical model
The social hierarchy of wolves is a special characteristic, and group hunting is another particular social behavior. There are three main stages of grey wolf hunting [9]: first, they track and approach the prey; second, they run after, encircle and harass the prey; finally, they attack the prey.
Fig. 1 Hierarchy of grey wolves
A mathematical model for the hunting behavior can be formulated. To model the encircling behavior, the following equations [1] are put forward:
where t stands for the current iteration, A and C are coefficient vectors, X indicates the position vector of a grey wolf, and Xp indicates the position vector of the prey. The coefficient vectors A and C can be formulated as follows:
where r2 is a random vector in [0, 1].
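The omitted displays are presumably the standard encircling model of [1] (all products element-wise):

\[
D = |C \cdot X_p(t) - X(t)|, \qquad X(t+1) = X_p(t) - A \cdot D,
\]
\[
A = 2a \cdot r_1 - a, \qquad C = 2 r_2.
\]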
Grey wolves first find out the position of the prey and then encircle it. In practice, the position of the optimal prey is unknown in the search space. For the sake of simulating the hunting behavior of grey wolves, we suppose that the grey wolves Alpha, Beta and Delta are aware of the potential position of the prey. Therefore, the first three best solutions obtained so far are stored, and the other members of the pack must update their positions in the light of these three best solutions. Such behavior can be formulated as follows:
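The omitted display is presumably the standard leader-guided update of [1]:

\[
D_\alpha = |C_1 \cdot X_\alpha - X|, \quad D_\beta = |C_2 \cdot X_\beta - X|, \quad D_\delta = |C_3 \cdot X_\delta - X|,
\]
\[
X_1 = X_\alpha - A_1 \cdot D_\alpha, \quad X_2 = X_\beta - A_2 \cdot D_\beta, \quad X_3 = X_\delta - A_3 \cdot D_\delta,
\]
\[
X(t+1) = \frac{X_1 + X_2 + X_3}{3}.
\]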
By decreasing the value of a(t) in a, we mathematically model the grey wolf's behavior of approaching the prey, and a(t) can be formulated as follows:
where t is an integer between 0 and Max_iter that is increased by one in each iteration, and Max_iter is the maximum number of iterations. Therefore, a(t) is linearly decreased from 2 to 0 over the course of iterations.
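Given the stated behavior, the omitted formula for a(t) is presumably the linear decrease

\[
a(t) = 2 - \frac{2t}{\text{Max\_iter}}.
\]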
Naturally, using (5), we find that the values of A are random values in the interval [-2a, 2a]. When the random values of A are in [-1, 1], the next position of the grey wolf can be any position between its current position and the position of the prey. We force the grey wolf to launch the behavior of attacking the prey by making |A| < 1. We can also force the grey wolves to search for a new prey by making |A| > 1; in this case the grey wolves diverge from the prey to find a better one.
DE was first proposed by Storn [4] to solve global optimization problems. Because DE is simple and powerful, being a stochastic algorithm with quite few control parameters, it has been widely used in various engineering fields [10-12].
First, DE begins with a randomly generated population, and then the next generation is produced by mutation, crossover and selection operations. Each operation is introduced as follows.
3.1 Generation of initial population
Generally speaking, the initial population can be randomly generated for almost all evolutionary algorithms, so we generate a random population at the beginning of DE.
3.2 Mutation operation
DE adopts a typical differential strategy to produce the variation of an individual. First, three mutually different individuals are randomly selected, and then the difference vector of two of them is scaled. Finally, the scaled difference vector is combined with the third individual to achieve the mutation operation as follows:
where r1 ≠ r2 ≠ r3 ≠ i, g is the generation number, F is the scaling factor, g = 0, 1, 2, ..., MaxGen, and MaxGen is the maximum number of iterations.
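The omitted display is presumably the standard DE/rand/1 mutation:

\[
V_i(g+1) = X_{r_1}(g) + F \cdot \bigl( X_{r_2}(g) - X_{r_3}(g) \bigr).
\]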
Such a differential strategy, called DE/rand/1/bin, is widely used because it can maintain the diversity of the population.
3.3 Crossover operation
The g-th generation {X1(g), X2(g), ..., Xk(g), ..., Xpsize(g)} and its variant are crossed as follows:
where CR represents the crossover probability, and jrand is a random integer between 1 and d; d is the number of dimensions of the solution (individual).
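The omitted display is presumably the standard binomial crossover:

\[
U_{k,j}(g+1) =
\begin{cases}
V_{k,j}(g+1), & \text{if } \mathrm{rand}(0,1) \le CR \text{ or } j = j_{\mathrm{rand}}, \\
X_{k,j}(g), & \text{otherwise,}
\end{cases}
\qquad j = 1, \ldots, d.
\]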
3.4 Selection operation
In DE, the greedy strategy is adopted to select individuals for the next generation as follows:
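The omitted display is presumably the standard greedy selection:

\[
X_k(g+1) =
\begin{cases}
U_k(g+1), & \text{if } f\bigl(U_k(g+1)\bigr) < f\bigl(X_k(g)\bigr), \\
X_k(g), & \text{otherwise.}
\end{cases}
\]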
The hybridization of GWO with DE (HGWO) is introduced in detail in this section. Three groups with the same population size are adopted. In the first step of the algorithm, the three populations are randomly generated in a feasible region using (20). Let POP represent a population, which can be defined as below:
where psize is the population size and k is the serial number of individuals, k = 1, 2, 3, ..., psize. Each individual can be expressed as
where p = 1, 2, ..., d, and k = 1, 2, 3, ..., psize.
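The omitted displays presumably define the population, the individuals and the random initialization (equation (20) of the original) along the following lines:

\[
\mathrm{POP} = \{ X_1, X_2, \ldots, X_{\mathrm{psize}} \}, \qquad X_k = ( x_{k,1}, x_{k,2}, \ldots, x_{k,d} ),
\]
\[
x_{k,p} = l_p + \mathrm{rand}(0,1) \cdot ( u_p - l_p ).
\]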
In the following step, we sort the parent population in non-decreasing order and find the first, second and third individuals in the parent population of grey wolves, which are called Alpha, Beta and Delta.
Over the course of the iteration, we update the position of each individual in the parent population of grey wolves using (13). Then we obtain a mutant population and a child population using (15) and (16), respectively. Next, we update the parent population using (17). After that, we update A, C and a using (5), (6) and (14). Then, to update Parent_α, Parent_β and Parent_δ, we sort the parent population of grey wolves in non-decreasing order. Once the iteration process is over, the algorithm returns Parent_α and f(Parent_α).
When an individual in the parent population of grey wolves updates its position and violates the boundary constraint, the violating value is changed using the following equation:
where j = 1, 2, ..., d and i = 1, 2, ..., psize.
The pseudocode of the HGWO algorithm is presented in Algorithm 1.
Algorithm 1: HGWO
Input: Objective function f, population size psize, the lower bound of the feasible region L = {l1, ..., ln} and the upper bound of the feasible region U = {u1, ..., un}.
Output: The optimal solution and the best objective function value.
Initialize a parent population of grey wolves Parent_i (i = 1, 2, ..., psize) with random positions in the feasible region using (20);
Initialize a mutant population of grey wolves Mutant_i (i = 1, 2, ..., psize) with random positions in the feasible region using (20);
Initialize a child population of grey wolves Child_i (i = 1, 2, ..., psize) with random positions in the feasible region using (20);
Initialize the crossover probability Pc and the scaling factor F; initialize a, A and C; set t = 0;
Evaluate f for all individuals in the parent population; sort the parent population in non-decreasing order according to the objective function value;
Parent_α is the best individual in the parent population of grey wolves;
Parent_β is the second-best individual in the parent population of grey wolves;
Parent_δ is the third-best individual in the parent population of grey wolves;
While (t < MaxGen)
for each individual in the parent population of grey wolves
Update the position using (13);
end for
Obtain a mutant population of grey wolves using (15);
Obtain a child population of grey wolves using (16);
for each individual Parent_i in the parent population of grey wolves
if f(Child_i) < f(Parent_i)
Replace Parent_i with Child_i
end if
end for
Update A, C and a using (5), (6) and (14);
Sort the parent population of grey wolves in non-decreasing order;
Update Parent_α, Parent_β and Parent_δ;
t = t + 1;
end while
Return Parent_α and f(Parent_α).
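For concreteness, the following Python sketch implements Algorithm 1 as described above. It is an illustrative reconstruction, not the authors' code: the function name hgwo, the clipping used for boundary handling (the original boundary-repair equation was not recoverable) and the per-individual combination of the mutation, crossover and selection steps are assumptions.

```python
import numpy as np

def hgwo(f, dim, lower, upper, psize=30, max_gen=500, F=0.5, Pc=0.2, seed=None):
    """Minimal HGWO sketch following Algorithm 1 (illustrative, not the authors' code)."""
    rng = np.random.default_rng(seed)
    lower = np.full(dim, lower, dtype=float) if np.isscalar(lower) else np.asarray(lower, dtype=float)
    upper = np.full(dim, upper, dtype=float) if np.isscalar(upper) else np.asarray(upper, dtype=float)

    # Random parent population in the feasible region (cf. (20))
    parent = lower + rng.random((psize, dim)) * (upper - lower)
    fitness = np.array([f(x) for x in parent])
    order = np.argsort(fitness)
    alpha, beta, delta = (parent[order[k]].copy() for k in range(3))

    for t in range(max_gen):
        a = 2.0 - 2.0 * t / max_gen                      # a(t): linear decrease from 2 to 0

        # GWO step: move every wolf towards Alpha, Beta and Delta (cf. (13))
        for i in range(psize):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2        # cf. (5) and (6)
                new_pos += leader - A * np.abs(C * leader - parent[i])
            parent[i] = np.clip(new_pos / 3.0, lower, upper)   # boundary handling assumed to be clipping
        fitness = np.array([f(x) for x in parent])

        # DE step (DE/rand/1/bin) followed by greedy selection (cf. (15)-(17))
        for i in range(psize):
            r = rng.choice([k for k in range(psize) if k != i], size=3, replace=False)
            mutant = parent[r[0]] + F * (parent[r[1]] - parent[r[2]])
            cross = rng.random(dim) <= Pc
            cross[rng.integers(dim)] = True              # guarantee at least one mutated gene
            child = np.clip(np.where(cross, mutant, parent[i]), lower, upper)
            fc = f(child)
            if fc < fitness[i]:
                parent[i], fitness[i] = child, fc

        # Re-rank the parent population and update the three leaders
        order = np.argsort(fitness)
        alpha, beta, delta = (parent[order[k]].copy() for k in range(3))

    return alpha, fitness[order[0]]
```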
To test the effectiveness of the proposed algorithm, we benchmark the HGWO algorithm in this section on 23 classical and popular benchmark functions used by many researchers [13-18]. The 23 benchmark functions can be divided into three groups: unimodal benchmark functions, multimodal benchmark functions and fixed-dimension multimodal benchmark functions. The first group is listed as follows:
The above benchmark functions f1-f7 are unimodal benchmark functions, and their 2D versions are as follows:
Fig. 2 2D versions of the unimodal benchmark functions
The following benchmark functions f8-f13 are multimodal benchmark functions:
Their 2D versions are as follows:
Fig. 3 2D versions of the multimodal benchmark functions
The following benchmark functions f14-f23 are fixed-dimension multimodal benchmark functions:
Their 2D versions are as follows:
Fig. 4 2D versions of the fixed-dimension multimodal benchmark functions
The HGWO algorithm and the other algorithms used for comparison are run thirty times on each of the above benchmark functions. The experimental results are composed of statistical parameters such as the average, standard deviation, best and worst values. Statistical results are reported in Tables 1-6.
To test the effectiveness of the proposed algorithm, we set the same population size and the same maximum number of iterations for all algorithms.
The HGWO parameter configuration is as follows: scaling factor F = 0.5; crossover probability Pc = 0.2; population size SearchAgents_no = 30; maximum number of iterations Max_iteration = 500.
The GWO parameter configuration is as follows: population size SearchAgents_no = 30; maximum number of iterations Max_iteration = 500.
The DE parameter configuration is as follows: scaling factor F = 0.5; crossover probability Pc = 0.2; population size SearchAgents_no = 30; maximum number of iterations Max_iteration = 500.
The PSO parameter configuration is as follows: population size SearchAgents_no = 30; maximum number of iterations Max_iteration = 500; Vmax = 6; wMax = 0.9; wMin = 0.2; c1 = 2; c2 = 2.
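As an illustration, the parameter values above can be plugged directly into the hgwo sketch shown after Algorithm 1 (that sketch, and this usage, are additions of this write-up, not part of the original paper). The sphere function is benchmark f1, assumed here on its usual [-100, 100]^30 domain:

```python
import numpy as np

def sphere(x):
    # Benchmark f1: sum of squares
    return float(np.sum(x ** 2))

# Reuses the hgwo() sketch defined earlier; parameter values match the configuration above.
best_x, best_f = hgwo(sphere, dim=30, lower=-100.0, upper=100.0,
                      psize=30, max_gen=500, F=0.5, Pc=0.2, seed=1)
print("best objective value:", best_f)
```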
Table 1 Experimental results (Best, Worst) of unimodal benchmark functions
Table 2 Experimental results (Best, Worst) of multimodal benchmark functions
Table 3 Experimental results (Best, Worst) of fixed-dimension multimodal benchmark functions
Table 4 Experimental results (Average, Standard deviation) of unimodal benchmark functions
Table 5 Experimental results (Average, Standard deviation) of multimodal benchmark functions
Table 6 Experimental results (Average, Standard deviation) of fixed-dimension multimodal benchmark functions
Every algorithm is run independently thirty times, and the experimental results are listed in Tables 1-6.
5.1 Exploitation discussion
Table 1 shows the best and worst results for the unimodal benchmark functions. According to Table 1, the HGWO is quite competitive and outperforms all the others on f1, f2, f3, f4 and f7 in both the best and worst results. Table 4 shows the average and standard deviation results for the unimodal benchmark functions. According to Table 4, the HGWO outperforms all the others on f1, f2, f3, f4, f5 and f7 in the average results. As is well known, unimodal benchmark functions are well suited for benchmarking exploitation. Consequently, the results prove the superior performance of the HGWO in exploiting the optimum.
5.2 Exploration discussion
Compared with the unimodal benchmark functions, multimodal benchmark functions have a large number of local optima that grows with the number of dimensions. This feature makes them well suited for benchmarking the exploration performance of an algorithm. On the basis of the results in Table 5 and Table 6, the HGWO also obtains competitive results on the multimodal benchmark functions. The proposed algorithm outperforms GWO and PSO on the majority of multimodal benchmark functions, and it obtains quite competitive results compared with DE. Such results prove that the proposed algorithm has advantages in terms of exploration.
5.3 Avoidance of local minima
The local minima avoidance of an algorithm can be tested on multimodal benchmark functions, due to their vast number of local minima. On the basis of the results in Table 2, Table 3, Table 5 and Table 6, the proposed algorithm outperforms GWO and PSO on the majority of multimodal benchmark functions. The HGWO outperforms DE on half of the multimodal functions. These results prove that the proposed algorithm has a good balance between exploitation and exploration. This superior performance is due to the adaptive value of A and the introduction of DE for updating Alpha, Beta and Delta. Over the course of iterations, half of them are dedicated to exploitation when |A| is less than 1, and the other half are dedicated to exploration when |A| is equal to or greater than 1.
In brief, the above experimental results verify the performance of the proposed algorithm in solving various benchmark functions compared with eminent meta-heuristics. To further observe and study the performance of the proposed algorithm, a well-known NP hard problem in engineering optimization is employed in the following section.
Test scheduling is a well-known NP hard problem in design for testability for SoC [19]. With the development of 3D integrated circuits, more and more dies are stacked in a stack. As the number of dies in a stack increases, the traditional method of integer linear programming (ILP) is no longer suitable for solving the test scheduling of a stack with more than five dies [19]. Therefore, we use the HGWO to solve this NP hard problem.
3D stacked SoC [20] refers to the creation of a complete system that is directly stacked and bonded die-on-die. Through-silicon vias (TSVs) are used as the vertical connections, as they provide the highest vertical interconnect density. In order to test the dies and their cores, a test access mechanism (TAM) is needed to transport test data to the cores on the dies. In general, a 3D TAM [21-25] is also needed to transport test data from the stack input/output port on the bottom. When designing the test architecture, we should consider not only minimizing the test length, but also minimizing the number of TSVs used to route the 3D TAM and the number of stack test pins, because every TSV incurs area cost and is a possible source of defects in 3D SoC.
Above all, the test time (test length) depends on the test architecture and the corresponding test schedule, under constraints on the number of TSVs and test pins used. Here, we only consider 3D SoC with hard dies. The problem of 3D SoC with hard dies can be described as follows: given a stack with a set of M dies, the total number of test pins Pinmax usable for the test and the maximal number of TSVs (TSVmax) employed globally for the 3D TAM design; for each die n ∈ {1, ..., M}, the number of test pins Pinn used to test the die and the corresponding test length Tn are given. We aim at determining a TAM design and the associated test scheduling for the 3D stack such that the total test length is minimized, while the total number of test pins does not exceed Pinmax and the total number of TSVs used does not exceed TSVmax.
To calculate the total test length, we first define a variable Pij, which equals 1 if die i is tested in parallel with die j, and 0 otherwise. Then, we define a second variable PLi, which equals 0 if die i is tested in parallel with any lower die, and 1 otherwise. The total test length is the sum of the maximum test lengths over all test sessions, which are executed in series. In other words, all the dies in a test session are tested in parallel, and different test sessions are tested in series.
The test length TL can be formulated as follows:
where the variable Pij equals 1 if die i is tested in parallel with die j, and 0 otherwise; the variable PLi equals 0 if die i is tested in parallel with any lower die, and 1 otherwise; and Tj is the test length of die j.
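The displayed formula was not recoverable. One formulation consistent with the prose (each test session is represented by its lowest die, for which PL_i = 1, and sessions run in series) would be

\[
TL = \sum_{i=1}^{M} PL_i \cdot \max\Bigl( T_i,\ \max_{j:\, P_{ij}=1} T_j \Bigr).
\]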
Each die i must belong to a test session; therefore, the total number of test pins in any test session cannot exceed Pinmax. We can formulate this as follows:
where Pinmax is the total number of test pins available for testing, the variable Pij equals 1 if die i is tested in parallel with die j, and 0 otherwise, and Pinj is the number of test pins used to test die j.
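The omitted constraint is presumably of the form

\[
Pin_i + \sum_{j \ne i} P_{ij} \cdot Pin_j \le Pin_{\max}, \qquad i = 1, \ldots, M.
\]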
At the same time, the total number of TSVs used cannot exceed the upper bound TSVmax. We can calculate the number of TSVs used as follows:
The above formulation can be explained as follows: the number of TSVs employed to connect layer i and layer i-1 is the maximum, over test sessions, of the total number of test pins needed by the dies at or above layer i that are tested in parallel in the same test session.
Therefore, the test scheduling problem with hard dies can be formulated as follows:
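Putting the pieces together, the omitted formulation is presumably the minimization of the test length subject to the two resource constraints stated above:

\[
\min\ TL \quad \text{subject to} \quad Pin_i + \sum_{j \ne i} P_{ij} \cdot Pin_j \le Pin_{\max}\ (\forall i), \qquad \mathrm{TSV}_{\mathrm{used}} \le \mathrm{TSV}_{\max}.
\]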
From the ITC'02 SoC test benchmarks, we have crafted benchmarks by hand to serve as the dies inside the 3D SoC. The SoCs employed are f2126, d695, p22810, p34392 and p93791. The hard dies are shown in Table 7.
Table 7 Number of test pins and test lengths for hard dies [17]
There are two stacked ICs (SICs). SIC1 is made up of 10 hard dies, from top to bottom: d695, d695, f2126, f2126, p22810, p22810, p34392, p34392, p93791 and p93791. SIC2 is also made up of 10 hard dies, from top to bottom: p93791, p93791, p34392, p34392, p22810, p22810, f2126, f2126, d695 and d695.
In this section, we use the same parameter configuration as that in Section 5.
Every algorithm is run independently thirty times, and the best results of each algorithm are listed in Table 8 and Table 9.
Table 8 Experimental results of SIC1
Table 9 Experimental results of SIC2 (N/A means not available)
From Table 8, we can find that the proposed algorithm obtains a test length nine times shorter than that obtained by GWO, twenty times shorter than that obtained by PSO, and six times shorter than that obtained by DE.
From Table 9, we can find that the proposed algorithm obtains a test length eight times shorter than that obtained by GWO, eleven times shorter than that obtained by PSO, and three times shorter than that obtained by DE.
We propose a new meta-heuristic method to enhance current meta-heuristic methods by hybridizing GWO with DE. Considering that the basic GWO easily falls into stagnation when it carries out the operation of attacking the prey, DE is integrated into GWO to update the previous best positions of the grey wolves Alpha, Beta and Delta, so as to make GWO jump out of the stagnation with DE's strong searching ability. The proposed algorithm can accelerate GWO's convergence and improve its performance. Twenty-three well-known benchmark functions and one NP hard problem of test scheduling for 3D stacked SoC are adopted to verify the performance of the proposed algorithm. Experimental results show the superior performance of the proposed algorithm in exploiting the optimum, and it also has advantages in terms of exploration.
[1] S. Mirjalili, S. M. Mirjalili, A. Lewis. Grey wolf optimizer. Advances in Engineering Software, 2014, 69(3): 46-61.
[2] E. Bonabeau, M. Dorigo, G. Theraulaz. Swarm intelligence: from natural to artificial systems. New York: Oxford University Press, 1999.
[3] J. Kennedy, R. Eberhart. Particle swarm optimization. Proc. of the IEEE International Conference on Neural Networks, 1995: 1942-1948.
[4] R. Storn, K. Price. Differential evolution -- a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 1997, 11(4): 341-359.
[5] M. Dorigo, M. Birattari, T. Stutzle. Ant colony optimization. IEEE Computational Intelligence Magazine, 2006, 1(4): 28-39.
[6] B. M. Vonholdt, D. R. Stahler, E. E. Bangs. A novel assessment of population structure and gene flow in grey wolf populations of the Northern Rocky Mountains of the United States. Molecular Ecology, 2010, 19(20): 4412-4427.
[7] C. M. Matthew, J. A. Vucetich. Effect of sociality and season on gray wolf foraging behavior. PLoS One, 2011, 6(3): 1-10.
[8] J. A. Vucetich, R. O. Peterson, T. A. Waite. Raven scavenging favours group foraging in wolves. Animal Behaviour, 2004, 67(6): 1117-1126.
[9] C. Muro, R. Escobedo, L. Spector, et al. Wolf-pack (Canis lupus) hunting strategies emerge from simple rules in computational simulations. Behavioural Processes, 2011, 88(3): 192-197.
[10] R. Storn. System design by constraint adaptation and differential evolution. IEEE Trans. on Evolutionary Computation, 1999, 3(1): 22-34.
[11] S. Harish, B. J. Chand, K. V. Arya. Fitness based differential evolution. Memetic Computing, 2012, 4(4): 303-316.
[12] V. H. Carbajal-Gómez. Optimizing the positive Lyapunov exponent in multi-scroll chaotic oscillators with differential evolution algorithm. Applied Mathematics and Computation, 2013, 219(15): 8163-8168.
[13] X. Yao, Y. Liu, G. Lin. Evolutionary programming made faster. IEEE Trans. on Evolutionary Computation, 1999, 3(2): 82-102.
[14] J. Digalakis, K. Margaritis. On benchmarking functions for genetic algorithms. International Journal of Computer Mathematics, 2001, 77(4): 481-506.
[15] M. Gaviano, D. Lera. Test functions with variable attraction regions for global optimization problems. Journal of Global Optimization, 1998, 13(2): 207-233.
[16] M. Jamil, X. S. Yang. A literature survey of benchmark functions for global optimisation problems. International Journal of Mathematical Modelling and Numerical Optimisation, 2013, 4(2): 150-194.
[17] S. Mirjalili, A. Lewis. S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm and Evolutionary Computation, 2013, 9(4): 1-14.
[18] S. Mirjalili, S. M. Mirjalili, X. S. Yang. Binary bat algorithm. Neural Computing and Applications, 2014, 25(3): 663-681.
[19] B. Noia, K. Chakrabarty. Test-architecture optimization and test scheduling for TSV-based 3-D stacked ICs. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 2011, 30(11): 1705-1718.
[20] B. Noia, K. Chakrabarty. Test-wrapper optimization for embedded cores in through-silicon via-based three-dimensional system on chips. IET Computers & Digital Techniques, 2011, 5(3): 186-197.
[21] A. J. Zhu, Z. Li, W. C. Zhu, et al. Design of test wrapper scan chain based on differential evolution. Journal of Engineering Science and Technology Review, 2013, 6(2): 10-14.
[22] E. J. Marinissen, V. Iyengar, K. Chakrabarty. A set of benchmarks for modular testing of SOCs. Proc. of the International Test Conference, 2002: 519-528.
[23] X. X. Wu, Y. B. Chen, K. Chakrabarty. Test-access mechanism optimization for core-based three-dimensional SOCs. Microelectronics Journal, 2010, 41(10): 601-615.
[24] D. L. Lewis, S. Panth, X. Zhao, et al. Designing 3D test wrappers for pre-bond and post-bond test of 3D embedded cores. Proc. of the IEEE International Conference on Computer Design: VLSI in Computers and Processors, 2011: 90-95.
[25] H. F. Azmadi, Y. T. E. Chua, Y. Tomokazu, et al. RedSOCs: 3D thermal-safe test scheduling for 3D-stacked SOC. Proc. of the IEEE Asia-Pacific Conference on Circuits and Systems, 2010: 264-267.
Aijun Zhu was born in 1978. He received his B.Eng. and M.Eng. degrees from Chengdu University of Technology in 2001 and 2004, respectively. He is a Ph.D. candidate at Xidian University and a lecturer at Guilin University of Electronic Technology. His main research interests are meta-heuristics and integrated circuit testing theory and technology.
E-mail:zbluebird@guet.edu.cn
Chuanpei Xu was born in 1968. She received her Ph.D. degree from Xidian University in 2006. She is a professor at Guilin University of Electronic Technology. Her main research directions are intelligent instrument systems and integrated circuit testing theory and technology.
E-mail:xcp@guet.edu.cn
Zhi Li was born in 1965. He received his Ph.D. degree from the University of Electronic Science and Technology in 2003. He is a Ph.D. supervisor at Xidian University, and a professor at Guilin University of Electronic Technology and Guilin University of Aerospace Technology. His main research direction is intelligent instrument systems.
E-mail:cclizhi@guet.edu.cn
Jun Wu was born in 1973. He received his Ph.D. degree from Wuhan University in 2004. He is a professor at Guilin University of Electronic Technology. His main research interests are embedded systems and FPGA development for parallel image processing.
E-mail:wujun93161@hotmail.com
Zhenbing Liu was born in 1980. He received his Ph.D. degree from Huazhong University of Science and Technology in 2010. He is an associate professor at Guilin University of Electronic Technology. His main research interests include machine learning and image processing.
E-mail:liuzb0618@hotmail.com
DOI: 10.1109/JSEE.2015.00037
Manuscript received April 17, 2014.
*Corresponding author.
This work was supported by the National Natural Science Foundation of China (60766001; 61105004), the Guangxi Key Laboratory of Automatic Detecting Technology and Instruments (YQ14110), and the Program for Innovative Research Team of Guilin University of Electronic Technology (IRTGUET).