
    VGWO:Variant Grey Wolf Optimizer with High Accuracy and Low Time Complexity

Computers, Materials & Continua, 2023, Issue 11

Junqiang Jiang, Zhifang Sun, Xiong Jiang, Shengjie Jin, Yinli Jiang and Bo Fan★

1School of Information Science and Engineering, Hunan Institute of Science and Technology, Yueyang, 414006, China

2Department of Computer and Information Science, Linköping University, Linköping, 58183, Sweden

3Department of Safety and Quality, Hunan Construction Investment Group, Changsha, 410004, China

ABSTRACT The grey wolf optimizer (GWO) is a swarm-based intelligence optimization algorithm that simulates the steps of searching, encircling, and attacking prey in the process of wolf hunting. Along with its advantages of a simple principle and few parameters to set, GWO bears drawbacks such as low solution accuracy and slow convergence speed. A few recent advanced GWOs have been proposed to try to overcome these disadvantages. However, they are either difficult to apply to large-scale problems due to high time complexity or easily lead to early convergence. To solve the abovementioned issues, a high-accuracy variable grey wolf optimizer (VGWO) with low time complexity is proposed in this study. VGWO first uses the symmetrical wolf strategy to generate an initial population of individuals to lay the foundation for the global search of the algorithm. Then, inspired by the simulated annealing algorithm and the differential evolution algorithm, a mutation operation for generating a new mutant individual is performed on three wolves randomly selected from the current wolf individuals after each iteration. A vectorized Manhattan distance calculation method is specifically designed to evaluate the probability of selecting the mutant individual based on its status in the current wolf population, for the purpose of dynamically balancing the global search and fast convergence capability of VGWO. A series of experiments are conducted on 19 benchmark functions from CEC2014 and CEC2020 and three real-world engineering cases. For the 19 benchmark functions, VGWO's optimization results place first in 80% of comparisons to the state-of-the-art GWOs and the CEC2020 competition winner. In a further evaluation based on the Friedman test, VGWO also outperforms all other algorithms statistically in terms of robustness, with a better average ranking value.

KEYWORDS Intelligence optimization algorithm; grey wolf optimizer (GWO); Manhattan distance; symmetric coordinates

    1 Introduction

This section first introduces the background and motivation of this research in Section 1.1. Then we present our contributions in Section 1.2.

    1.1 Background and Motivation

With the development of society, human beings have increasingly higher requirements for the accuracy of engineering design, structural optimization, task scheduling, and other practical engineering applications and scientific research. In the process of conducting the above research, many excellent algorithms have been generated in recent years, for example, the grey wolf optimizer (GWO) [1], the equilibrium optimizer [2], the artificial hummingbird algorithm [3], the multi-trial vector-based monkey king evolution algorithm [4], and so on. They have been shown to outperform many intelligent optimization algorithms and are applied to different engineering optimization problems. However, as the research progressed, many limitations of intelligent optimization algorithms were identified, and various improvements were proposed. For example, Zaman et al. used the powerful global exploration capability of the backtracking search optimization algorithm to enhance the convergence accuracy of particle swarm optimization (PSO) [5]. Shen et al. optimized the convergence speed of the whale optimization algorithm by dividing the population into three subpopulations according to fitness [6].

The grey wolf optimizer (GWO) [1] is a swarm-based intelligence optimization algorithm proposed by Mirjalili et al. in 2014. GWO solves the optimization problem by simulating the steps of searching, encircling, and attacking prey in the process of wolf hunting. The first two stages (i.e., searching and encircling) make a global search, and the last stage (i.e., attacking) takes a local search. The social hierarchy of wolves is classified into four types (i.e., α, β, δ, and ω) according to their fitness, where the α wolf has the highest social rank, and the ω wolf has the lowest social rank. Compared with other optimization algorithms, GWO can adaptively adjust the convergence factor and information feedback to achieve a balance between local optimization and global search, so it has good performance in terms of problem-solving. GWO has been successfully applied to the fields of job shop scheduling [7], parameter optimization [8], and image detection [9] due to its advantages of a simple principle and few parameters that need adjustment.

Despite the merits mentioned above, concerning the solution accuracy and the searching ability, the original GWO still has room for improvement. In addition, there is a paradox in GWO: the social rank values of α, β, and δ are different in accordance with the design of the social hierarchy, but these values are the same in practice. In response to the above issues, some advanced GWOs have been proposed recently, including variable weight GWO (VWGWO) [10], PSO combined with GWO (PSOGWO) [11], and covariance matrix adaptation with GWO (CMAGWO) [12]. VWGWO strengthens the leadership of α. PSOGWO accelerates the convergence of populations at later stages. CMAGWO can enhance the local search of wolves. Although these algorithms make some improvements to GWO, they still have the following limitations:

1) Low solution accuracy. During the population initialization phase, GWO, VWGWO, and CMAGWO generate wolves randomly; this mechanism cannot guarantee the diversity of the population. Although PSOGWO uses Tent mapping to make the initial population more uniformly distributed, the fixed mapping period of the Tent mapping means the randomness in the initialization phase still has room for improvement. Meanwhile, in our observation, PSOGWO does not remarkably improve the solution quality because its two fundamental optimizers share the common drawback of falling into local optima.

2) Unreasonable weight setting. During the searching phase, VWGWO sets the weight of α close to 1, whereas the weights of β and δ are close to 0. This setting prematurely establishes the α wolf's dominance and weakens the leadership of β and δ. Thus, VWGWO easily leads to early convergence toward α and falls into a local optimum.

3) High time complexity. In CMAGWO, if the GWO solution quality is poor, then the CMA-ES optimization space is extremely limited. In addition, CMA-ES needs to decompose the covariance matrix during iteration (its complexity is as high as O(N³), where N denotes the number of wolves), so CMAGWO significantly increases the running time compared with GWO.

    1.2 Our Contributions

To solve the aforementioned problems, this paper proposes a high-accuracy variable GWO (VGWO) that still has low time complexity. The contributions are listed below:

1) A mechanism to improve population diversity is developed. Several pairs of wolves with symmetric relationships are added to the population during the wolves' initialization stage. The diversity of the wolves increases, which further makes the wolves' distribution more uniform. This scheme is easy to implement and can be applied to problems of different scales.

2) A method to prevent wolves from premature convergence is designed. Dynamically adjusting the weights of α, β, and δ improves the convergence speed and solution accuracy of the wolves.

3) A notion to make wolves jump out of the local optimum is presented. Random perturbations are leveraged to delay the gathering of the wolves and strengthen the random search ability of the algorithm while keeping the algorithm at a low time complexity.

4) Some evaluations are conducted to illustrate VGWO's advantages. VGWO, GWO, PSOGWO, VWGWO, and CMAGWO are evaluated on 19 benchmark functions and 3 engineering cases. Compared to the state-of-the-art GWOs and the CEC2020 competition winner, the experimental results illustrate that VGWO has good solution quality and fast convergence speed without increasing the time complexity. In a further evaluation based on the Friedman test, VGWO also outperforms all other algorithms statistically in terms of robustness, with a better average ranking value.

    The remainder of this article is as follows:Section 2 reviews the work related to GWO.Section 3 describes the VGWO model and its derivation process.Section 4 gives the parameter settings of different algorithms in the experiments.Section 5 first conducts the experiments on benchmark functions,followed by the analysis of the experimental results,and further demonstrates the excellence of VGWO by comparing its performance with the CEC2020 winner.Section 6 evaluates VGWO through three different engineering examples.Section 7 summarizes the work of this study and provides suggestions for further research.

    2 Related Work

    Many researchers have made relevant improvements to address the shortcomings of GWO.Work related to GWO focuses on three perspectives,namely,the initialization population of GWO,the search mechanism of GWO,and hybridizing other metaheuristics of GWO.

    1)Population Initialization of GWO.The outcome of the initialization population affects the quality and speed of convergence.However,wolves in the original GWO are generated randomly,so their diversity cannot be guaranteed.Long et al.[13]first introduced the point set theory in GWO,which initializes the population and makes wolves’distribution more uniform to solve this problem.Luo et al.[14] employed complex numbers to code the grey wolf individuals.This method can increase the amount of information in the gene because a complex number has 2D properties.Kohli et al.[15] developed a GWO based on chaotic sequences and demonstrated that their method has greater stability than GWO.Similarly,Teng et al.[11]suggested a particle swarm optimization combined with GWO(PSOGWO),which increases the randomness of initialized populations by Tent mapping.However,the foregoing research only considers the effects of the starting population and ignores the balance between local and global search capabilities.

2) Searching Mechanism of GWO. A number of studies about search mechanisms focus on the enhancement of the control parameter a (a is used to control the proportion of global and local search) or on the calculation method of individual coordinates. Saremi et al. [16] combined GWO with evolutionary population dynamics (EPD), where EPD can remove the poor solution and reposition it around α, β, or δ to improve GWO's local search ability. Mittal et al. [17] proposed a modified GWO (mGWO) that adjusts the control parameter a to improve the effectiveness and stability of GWO. Lu et al. [18] increased the topological neighbors of wolves to ensure population diversity by combining GWO with topology. However, the weight of the dominant wolves (α, β, and δ) contradicts their social hierarchy and affects the GWO's performance. Considering this problem, Gao et al. [10] proposed a variable weight GWO to reduce the probability of GWO falling into the local optimum. However, VWGWO's solution quality does not improve much compared to GWO because it overemphasizes the weight of α.

    3)Hybrid Metaheuristics of GWO.GWO also has gained popularity in the field of hybrid metaheuristics,with many researchers combining it with differential evolution(DE)or PSO.Yao et al.[19] used the survival of the fittest strategy from DE and presented an improved GWO.Similarly,Zhu et al.[20] proposed a hybrid GWO with DE to improve the GWO’s global search ability.Contrary to reference [20],Jitkongchuen [21] proposed a hybrid DE algorithm with GWO(jDE).jDE employs the GWO’s coordinate update strategy to improve the crossover operator in DE.However,since PSO introduces the historical optimal solution into the algorithm,thereby obtaining strong convergence[22],Singh et al.[23]observed that the above GWOs do not account for wolves’historical optimal coordinates,so they proposed a hybrid GWO based on PSO(HPSOGWO).HPSOGWO has better exploratory ability than GWO;PSOGWO is developed based on HPSOGWO.Zhao et al.[12]used a two stage search to improve the GWO’s local search ability and then designed a covariance matrix adaptation with GWO(CMAGWO).

Meanwhile, GWO plays a very important role in the application of engineering problems. In the field of parameter optimization, Madadi et al. [24] used the GWO algorithm to design a new optimal proportional-integral-derivative (PID) controller, then compared the PID-GWO and PID-PSO controllers; the results showed that PID-GWO can better improve the dynamic performance of the system. Lal et al. [25] employed the GWO algorithm to optimize the fuzzy PID controller and applied it to the automatic control of the interconnected hydrothermal power system. Sweidan et al. [26] leveraged GWO to optimize the parameters of the support vector machine (SVM) and used the optimized SVM to assess water quality; Eswaramoorthy et al. [27] utilized GWO to tune the SVM classifier and used it for a case study of intracranial electroencephalography (iEEG) signal classification. Muangkote et al. [28] applied the improved GWO algorithm to the training of the q-Gaussian-based radial basis function logic network (RBFLN) neural network and pointed out, through comparative experiments, that the improved GWO algorithm has higher accuracy in solving complex problems; Mirjalili et al. [29] first used GWO to train the multi-layer perceptron (MLP). Compared with other well-known algorithms, GWO-MLP can effectively avoid falling into local optima. Al-Shaikh et al. [30] used GWO to find strongly connected components in directed graphs in linear time, addressing the serious time-consuming shortcomings of Tarjan's algorithm and the Forward-Backward algorithm. In the field of renewable energy systems, Sivarajan et al. [31] made use of GWO to determine the optimal feasible solution to the dynamic economic dispatch problem of combined heat and power generation considering wind farms and tested the performance of the GWO algorithm on a test system containing 11 generating units. The results show that the GWO algorithm is superior in both economy and computing time. Mohammad et al. [32] used GWO to solve the problem of photovoltaic solar cells and greatly reduced the waste of energy by estimating the performance. In addition, Almazroi et al. applied GWO to an implicit authentication method for smartphone users [33]. Zaini et al. predicted the energy consumption of home appliances through GWO [34].

Although the above GWOs have advantages in different respects, they mainly focus on optimizing the internal shortcomings of individual wolves. This study continues to analyze GWO. Different from the above algorithms, our work focuses on leveraging multiple mechanisms to improve the quality of the solution.

    3 Design of VGWO Algorithm

This section introduces VGWO in detail. First, an outline of three aspects is presented, followed by a detailed description of the algorithmic flow. All variables used in this study and their corresponding definitions, as well as the time complexity analysis of VGWO, are also introduced in this section.

3.1 Enhancing Wolves' Diversity through Symmetrical Wolf (SW)

In GWO, initializing the population is a crucial stage. A healthy population generally has a direct influence on the speed and outcome of convergence. However, the original GWO population initialization is performed at random, which cannot guarantee a good diversity of wolves, thereby decreasing the algorithm's effectiveness. This paper gives a definition named SW to overcome this drawback and then focuses on how SW is used to initialize populations.

Definition 1 (SW): a wolf individual's symmetric coordinates about the center of the search range.

We briefly describe the above definition through an example. Assume that a point x ∈ [l, r] exists in the 1D space; the SW point of x is the symmetrical point of x about the symmetry center (l + r)/2. When extending to 3D space, for example, a point x = (1, 2, 3) is found in Fig. 1, where the search range is {(x, y, z) | x ∈ [-5, 5], y ∈ [-5, 5], z ∈ [-5, 5]}, that is, the green space in Fig. 1. Thus, the symmetry center is (0, 0, 0), and the SW point of x in the search range is x' = (-1, -2, -3).

Combined with Definition 1, the steps of initializing an individual coordinate solution using SW are as follows: 1) Initialize the coordinates →Xi (i = 1, 2, ···, N) of N grey wolf individuals in the defined domain space as the initial population ini_population; 2) Generate the SW population sym_population of ini_population in accordance with Definition 1; 3) Merge ini_population and sym_population to obtain population, and then sort population by fitness value; 4) Select the first N individuals as the initial population.

The pseudo-code of the initialized population is shown in Algorithm 1, where N and dim indicate the size and dimension of the population, respectively, and [l, r] represents the search range of the wolves. The initialized individuals obey the beta distribution because the beta distribution can generate as many non-edge individuals as possible, thereby improving the search efficiency of GWO. With the use of the SW constraint in Algorithm 1, the diversity of the wolves is enhanced considerably. The search for SW and the initialization of the population are performed simultaneously, which has minimal effect on the time complexity of the algorithm.
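As a concrete illustration, the following is a minimal Python sketch of the SW-based initialization described above, assuming a minimization fitness function. The function name init_population_with_sw and the Beta(2, 2) shape parameters are our own illustrative choices, since the text only states that the initial individuals follow a beta distribution.

```python
import numpy as np

def init_population_with_sw(fitness, N, dim, l, r, rng=None):
    """Sketch of Algorithm 1: beta-distributed wolves plus their SW mirrors."""
    rng = np.random.default_rng() if rng is None else rng
    # Beta-distributed samples favour non-edge coordinates inside [l, r];
    # the (2, 2) shape parameters are an assumption.
    ini_population = l + (r - l) * rng.beta(2.0, 2.0, size=(N, dim))
    # SW (Definition 1): reflect every wolf about the centre of the search range.
    center = (l + r) / 2.0
    sym_population = 2.0 * center - ini_population
    # Merge, sort by fitness (minimization assumed), and keep the best N wolves.
    merged = np.vstack([ini_population, sym_population])
    order = np.argsort([fitness(x) for x in merged])
    return merged[order[:N]]
```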

Figure 1: SW point of (1, 2, 3) in three-dimensional space

The SW implementation is simple and scalable. Thus, it can be applied after each wolf's coordinate update. Figs. 2 and 3 show the wolves after seeking SWs in the searching and encircling phases, respectively. The black dot in the figures is assumed to be the search range's center, and α', β', and δ' are correspondingly the SWs of α, β, and δ. The grey point represents an ordinary wolf that will move toward α, β, and δ, and the yellow point represents the prey. As shown in Fig. 2, SW increases wolf diversity and has the potential to bring wolves closer to their prey. However, the wolves begin to round up the prey, as shown in Fig. 3. If we search SW for the entire population as usual, then the generated SW wolves will be far away from the prey, which is obviously an invalid operation. Thus, the wolves must generate SW wolves adaptively at various stages. In short, VGWO reduces the SW wolf search when the wolves reach the encirclement phase.

    Figure 2:Add SW in the phase of searching prey

    Figure 3:Add SW in the phase of encircling prey

For the SW wolf search in the iterative process, our study only searches for SW wolves for the first length best-fitness individuals in each iteration. This length reduces linearly from N to 0 with the number of iterations. The computation formula is expressed as follows:

where N indicates the number of individuals in the population, it indicates the current iteration, and Max_iter indicates the total number of iterations. This practice prevents the time complexity of VGWO from becoming high.
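The following small Python sketch illustrates one plausible form of the linear schedule in Eq. (1): length falls from N at the first iteration to 0 at the last; the exact rounding is an assumption.

```python
def sw_length(N, it, Max_iter):
    # Linear schedule consistent with the text: length == N at it = 0 and
    # reaches 0 at it = Max_iter. The exact rounding in Eq. (1) is assumed.
    return max(0, round(N * (1 - it / Max_iter)))
```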

    3.2 Changing the Weights of α,β,and δ

As previously shown, changing the weight of the dominant wolves (i.e., α, β, and δ) can lead to better optimization capability [10]; however, the weight of α should not always be greater than one-third. Although α represents the current optimal solution, it is not constantly the closest to the prey [1]. If α always has the highest weight, then the wolves will inevitably gather around α prematurely, thereby ignoring the leadership of β and δ and making the search easier to fall into the local optimum. Therefore, this paper proposes a new weight calculation method.

where ω1, ω2, and ω3 indicate the weights of α, β, and δ, respectively. ω1 linearly increases from 0 to 2/3 with the iterations it. ω2 indicates the weight of the individual in the middle position, and this paper simply keeps its weight at 1/3. ω3 linearly reduces from 2/3 to 0 with the number of iterations. As shown in Eq. (2), this calculation method aims to weaken the weight of α in the searching and encircling phases and strengthen the leadership of β and δ, thereby improving the global search ability and reducing the probability of wolves falling into the local optimum.
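A minimal sketch of this weight schedule, written from the description above (ω1 rising from 0 to 2/3, ω2 fixed at 1/3, ω3 falling from 2/3 to 0, so the weights always sum to 1). The function name is ours, and how the weights enter the position update is assumed to be a weighted combination of the α, β, and δ guidance.

```python
def dominant_weights(it, Max_iter):
    # w1 rises linearly from 0 to 2/3, w2 stays at 1/3,
    # w3 falls linearly from 2/3 to 0, so w1 + w2 + w3 == 1 at every iteration.
    w1 = (2.0 / 3.0) * (it / Max_iter)
    w2 = 1.0 / 3.0
    w3 = (2.0 / 3.0) * (1.0 - it / Max_iter)
    return w1, w2, w3
```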

    3.3 Adding Random Perturbations to Wolves

Random perturbation is a type of contingent fluctuation that can be used to test the stability and usefulness of a model [35]. Simulated annealing (SA) jumps out of the local optimum by implementing a survival-of-the-fittest strategy for random perturbations [36], like the greedy strategy proposed in DE [37]. On this basis, SA and DE have powerful random search capabilities. In contrast to GWO, however, the authors in [38] and [39] pointed out that SA and DE often pay the price of slow convergence speed. Therefore, this paper combines the advantages of the above three algorithms to create VGWO, that is, adding random perturbations to the wolves. The random perturbations are generated as follows:

where r1, r2, and r3 are three different random integers in the range [1, N], and F is the scaling parameter. In DE, F affects the diversity of the population and has a remarkable effect on convergence and stability. The appropriate F is difficult to determine. As mentioned in [40], the upper limit of F = 1.2 is determined on the basis of experience, and no optimization problem requires F to be greater than 1.2. Therefore, for the wolves, creating variant individuals with different F after each search is a reasonable means to balance the local and global search capabilities of the algorithm. For VGWO, the value of F is linearly reduced from 1.2 to 0. This paper changes F in accordance with a special parameter, which is called the temperature change in SA.

where Tmax indicates the initial temperature, T indicates the temperature at the current iteration, and Tmin indicates the temperature in the steady state. However, the judgment of whether a random perturbation is added to the wolf pack needs to be based on its fitness value.
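The following Python sketch pairs the mutation of Eq. (3) with a temperature-driven scaling factor for Eq. (4). The DE/rand/1-style combination of three distinct random wolves and the linear mapping of F from 1.2 (at Tmax) down to 0 (at Tmin) are assumptions consistent with the description above, not the paper's exact formulas.

```python
import numpy as np

def de_style_perturbation(population, F, rng=None):
    # Eq. (3), assumed DE/rand/1 form: combine three distinct random wolves.
    rng = np.random.default_rng() if rng is None else rng
    N = len(population)
    r1, r2, r3 = rng.choice(N, size=3, replace=False)
    return population[r1] + F * (population[r2] - population[r3])

def scale_factor(T, T_max, T_min):
    # Eq. (4), assumed reading: F shrinks from 1.2 towards 0 as the
    # temperature cools from T_max to T_min.
    return 1.2 * (T - T_min) / (T_max - T_min)
```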

    A method for calculating the probability of a random perturbation being selected is proposed in the archival multiobjective SA algorithm(AMOSA)[41].

where Δdom_avg is the average domination of the random perturbation, T denotes the present temperature, and the above formula guarantees that the probability value is between 0 and 1. Δdom_avg is determined as follows:

where k is the number of solutions, and Δdom_{i,b} is the amount of domination between two solutions (i.e., i and b), which is determined by

where O is the number of objectives. In [41], the new solution is obtained by making a random perturbation to the current state, which is similar to the idea of deriving the variant individual in Eq. (3). However, the calculation of the average domination in AMOSA is based on a multiobjective problem, and the problem discussed in this study is a single-objective problem. In other words, O in Eq. (7) is equal to 1, and Eq. (6) represents the average domination of the new solution b over all the solutions. This strategy is similar to the vector 1-norm. The traditional vector 1-norm is calculated as $\|\vec{x}\|_1 = \sum_{i=1}^{N} |x_i|$, where $\vec{x}$ denotes an N-dimensional vector. The mathematical meaning of the 1-norm is the sum of the absolute values of the elements of a vector, and the geometric meaning is the Manhattan distance from the vector $\vec{x}$ to the origin.

The Manhattan distance is a scalar that reflects the sum of the absolute axial distances between two points. As shown in Fig. 4, the Manhattan distance from point x(1, 1) to point y(4, 5) is 7. However, this study wants to calculate the superiority or inferiority of variant individuals relative to the population as a whole in terms of fitness. Thus, a new method of calculating the Manhattan distance is needed to represent the above problem by the positive or negative sign of the Manhattan distance. This paper defines this method as the vectorized Manhattan distance, used to calculate the domination deviation of the variant wolf with respect to the current population. In our study, the fitness values of the population are considered an N-dimensional vector $(Fitness(\vec{X}_1), Fitness(\vec{X}_2), \ldots, Fitness(\vec{X}_N))$. Thus, the corresponding vectorized Manhattan distance calculation is expressed as

then we calculate the 1-norm M of the non-absolute values of the $\vec{V}_{ma}$ vector as follows:

where the Fitness() function denotes the fitness function. For example, when the optimization objective of the fitness function is minimization and the value of M is less than 0, the variant individual is better than most individuals in the current wolves. Thus, the vectorized Manhattan distance can be used to determine the status of a random perturbation in the current population by the positive or negative sign of M. The pseudo-code for the vectorized Manhattan distance calculation is presented in Algorithm 2.
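The following compact Python sketch reflects our reading of Algorithm 2: the signed deviation of the mutant's fitness from each wolf's fitness is summed without taking absolute values, so that M < 0 flags a mutant that dominates most of the population under minimization. The sign convention is inferred from the surrounding text.

```python
import numpy as np

def vectorized_manhattan(population_fitness, mutant_fitness):
    # Eq. (8): per-wolf deviation of the mutant's fitness (sign preserved).
    v_ma = mutant_fitness - np.asarray(population_fitness, dtype=float)
    # Eq. (9): 1-norm without the absolute value, so M keeps its sign.
    return float(np.sum(v_ma))
```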

    Figure 4: Manhattan distance from point x to point y (both the red and blue lines are Manhattan distances)

Combined with Eqs. (5) and (6), this paper derives a new method for calculating the probability of a variant individual being selected, as follows:

where N indicates the population size and T indicates the current temperature.
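A hedged sketch of this acceptance rule, following the AMOSA form in Eq. (5) with the summed deviation M averaged over the N wolves; the exact form of Eq. (10) is assumed here.

```python
import math

def acceptance_probability(M, N, T):
    # Assumed reading of Eq. (10): average the signed deviation M over the N
    # wolves and scale it by the current temperature T. A dominant mutant
    # (M < 0) then gets an acceptance probability above 0.5.
    z = M / (N * T)
    return 1.0 / (1.0 + math.exp(min(z, 700.0)))   # clamp to avoid overflow
```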

    To better clarify the definition of variables,we list the variables used in this paper and give their corresponding descriptions in Table 1.

Table 1: Major variables used in this study

    3.4 VGWO Process

Algorithm 3 demonstrates the VGWO pseudo-code. The initialization section must determine the initial temperature Tmax, the steady-state temperature Tmin, and the temperature change rate k based on the actual problem. The initial population ini_population is generated on the basis of the population size and shape; the initialized individuals follow the beta distribution. The SW population sym_population is generated by using the method described in Section 3.1, ini_population and sym_population are merged to obtain population, and population is sorted by fitness value. The top N individuals are chosen to form the final initial population. The total number of iterations Max_iter is calculated.

The second part is the process of slow cooling. The three wolves with the highest status in the pack are selected as α, β, and δ. In accordance with Eq. (1), the number of SW wolves to be searched in each iteration (length) is calculated. sym_population is initialized as an empty set, which is used to save the SWs of the first length wolves. The search process of the grey wolves is the same as in the original GWO. With the gradual decrease in length, the SW of the current grey wolf is searched after determining its new coordinates and saved in sym_population until length decreases to 0. Whenever the wolves have finished searching, the variation factor F is calculated by using Eq. (4). A new individual →Xvariation is obtained as the variation by using Eq. (3), and the domination M of →Xvariation is calculated in accordance with Algorithm 2. The probability of selecting →Xvariation is calculated by using Eq. (10). The new offspring population and sym_population are combined, and the first N grey wolves are taken as the new population.
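Putting the pieces together, the following high-level Python skeleton mirrors our reading of Algorithm 3. It reuses the hedged helper sketches from the previous subsections, elides the core GWO position update, derives Max_iter from the geometric cooling schedule (an assumption that matches the roughly 90 iterations reported in Section 4), and assumes an accepted mutant replaces the worst wolf.

```python
import math
import numpy as np

def vgwo_sketch(fitness, N, dim, l, r, T_max=1.0, T_min=1e-4, k=0.90, rng=None):
    """High-level sketch of Algorithm 3 built on the helper sketches above."""
    rng = np.random.default_rng() if rng is None else rng
    population = init_population_with_sw(fitness, N, dim, l, r, rng)
    # Number of cooling steps from T_max down to T_min with T <- k * T.
    Max_iter = math.ceil(math.log(T_min / T_max, k))
    T = T_max
    center = (l + r) / 2.0
    for it in range(Max_iter):
        # ... standard GWO search step guided by alpha, beta and delta here,
        #     weighted by dominant_weights(it, Max_iter) ...
        length = sw_length(N, it, Max_iter)
        sym_population = 2.0 * center - population[:length]   # SWs of the best wolves
        F = scale_factor(T, T_max, T_min)
        mutant = de_style_perturbation(population, F, rng)
        M = vectorized_manhattan([fitness(x) for x in population], fitness(mutant))
        if rng.random() < acceptance_probability(M, N, T):
            population[-1] = mutant          # replace the worst wolf (assumption)
        merged = np.vstack([population, sym_population]) if length > 0 else population
        order = np.argsort([fitness(x) for x in merged])
        population = merged[order[:N]]
        T *= k                               # geometric cooling
    best = min(population, key=fitness)
    return best, fitness(best)
```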

    3.5 Complexity Analysis

Unlike the original GWO, VGWO uses quick sort to update the population, with a time complexity of O(n·log₂ n). Assume the time complexity of the fitness value calculation is T. The length of the population after merging the population and sym_population is denoted by L, and the value of L reduces linearly from 2N to N with the number of iterations, to facilitate the expression of the subsequent complexity. The basic operations' complexities are as follows. 1) Initializing the wolves: O(N·dim + 2N·log₂(2N)); 2) Wolf searching: the number of iterations is Max_iter. The time complexity of updating the coordinates of the wolves is O(N·dim). The time complexity of calculating the vectorized Manhattan distance and fitness values is O(N·T), while updating the population is O(L·log₂ L). Therefore, the overall complexity of VGWO is O(Max_iter·(N·dim)).

GWO, PSOGWO, and VWGWO all have the same time complexity, which is O(Max_iter·(N·dim)) [1,10,11]. The time complexity of CMAGWO is O(iter1·(N·dim)) + O((Max_iter − iter1)·dim) [12], where iter1 denotes the number of iterations of the first stage in CMAGWO. The time complexity difference between the five algorithms is small, and only subtle differences may be observed. Thus, the effect on the performance of the algorithms is negligible in this study.

    4 Experimental Settings

    With the purpose of assessing the capability of the VGWO,we conduct a series of experiments.The algorithms being compared include GWO,VWGWO,PSOGWO,and CMAGWO.This section outlines the experimental steps and parameter setting.The experimental framework and programs used for comparison were implemented in Python and ran on a computer with a 2.90 GHz Intel i7-10700 CPU and 16 GB of RAM.The source code and experimental results are available at https://github.com/ZhifangSun/VGWO.

    4.1 Benchmark Function

Benchmark functions are standard functions derived from the optimization problems encountered by human beings in real life. This paper selected 19 representative benchmark functions from CEC2014 and CEC2020, which are well-known complex test functions [42], to test the performance of the above algorithms. They can be divided into three main categories, namely, unimodal (F1, F2, F3, F4, F5, F6, F13, F14), multimodal (F7, F8, F9, F10, F11, F12, F19), and fixed-dimension multimodal (F15, F16, F17, F18). As shown in Table 2, the input D-dimensional vector →X = (x1, x2, ···, xD) is limited by different definition domains. The maximum and minimum values of the definition domain are r and l, respectively, and the optimum of each function is 0. For each tested algorithm, different random populations are used for 15 repeated tests, and the average solutions and standard deviations of all experiments are recorded.
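The evaluation protocol above (independent repeated runs, then mean and standard deviation of the best solutions) can be expressed with a short helper; the signature below assumes an optimizer such as the vgwo_sketch shown earlier, with its fixed arguments bound beforehand (e.g., via functools.partial).

```python
import numpy as np

def repeated_trials(optimizer, fitness, runs=15, seed0=0):
    # Run the optimizer with `runs` different random populations and record
    # the average solution quality and its standard deviation.
    best_values = [optimizer(fitness, rng=np.random.default_rng(seed0 + i))[1]
                   for i in range(runs)]
    return float(np.mean(best_values)), float(np.std(best_values))
```

For instance, repeated_trials(functools.partial(vgwo_sketch, N=50, dim=30, l=-100, r=100), sphere) would reproduce the 15-run protocol for a hypothetical sphere function.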

    Table 2: Benchmark functions and their definition domain

    4.2 Parameter Setting of VGWO

Orthogonal experimental design is a multi-factor, multi-level experimental design method [43], which is fast, economical, and efficient, and can obtain satisfactory results with fewer experiments. Therefore, the orthogonal test method is often used to compare the nature of parameters and to determine the best combination of different parameters. The parameters that need to be set in advance for this experiment are the rate of temperature change k, the starting temperature Tmax, and the steady-state temperature Tmin. These three parameters control the global number of iterations Max_iter, the variation factor F, and the admission probability P of the perturbed individuals. The different combinations and levels of the 3 parameters are shown in Table 3.

    Table 3: Combinations of different values for parameters

Table 3 lists the factors/parameters and their corresponding levels/values. An orthogonal array of L25(5³) groups was chosen for this experiment. For each set of orthogonal experiments, five tests are performed, and the test termination condition is when the scheduling time of VGWO reaches st_la, defined as the maximum, over algorithm ∈ {GWO, VWGWO, PSOGWO, CMAGWO}, of the average scheduling time of the la-th benchmark function after 300 iterations through that algorithm, where la ∈ Label for the benchmark functions in Table 2. The results of the orthogonal experiments are shown in the GitHub repository mentioned in Section 4.

The performance of the GWOs is linked to the parameter values when faced with different problems, and a tuned combination of parameter values will yield better results. However, the experiments use the same combination of parameters to optimize different benchmark functions to highlight the performance of VGWO. Thus, the average response value (ARV) of the i-th parameter combination is calculated by using the following equation:

The ARVs obtained from the orthogonal experiments are shown in Table 4. Table 5 shows the mean value of ARV and the range (extreme difference) of the mean value for each factor at different levels. A smaller value of ARV indicates that this parameter deviates less from the global optimum at the current level, and a larger value of ARV indicates that this parameter has less optimization ability at the current level. The larger the range of the mean ARV, the more influence this parameter has on the performance of the algorithm. Therefore, the conclusions based on Tables 4 and 5 are summarized as follows:

Table 4: ARV values obtained from the orthogonal experiments

Table 5: Mean ARV values and ranges for each factor at different levels

1. In accordance with the value of the range, the effect of the three parameters on the performance of the algorithm, from low to high, is Tmax, Tmin, k.

2. Based on each factor's mean ARV value at different levels, the best combination of parameters for VGWO is suggested to be Tmax = 1, Tmin = 1e-4, and k = 0.90.

3. The ranges of all three parameters are small, less than 0.25, which shows that the algorithm is not sensitive to the values of the parameters and has robust, stable performance within a wide range of parameter settings.

    4.3 Other Parameter Settings

The parameters of VGWO are obtained in Section 4.2 (i.e., Tmax = 1, Tmin = 1e-4, and k = 0.90), and the parameters of the remaining algorithms (GWO [1], PSOGWO [11], VWGWO [10], CMAGWO [12]) are set according to their respective references. The number of iterations of all algorithms is 90, and the number of wolves is 50.

    5 Experimental Result

This section presents and discusses the results of the proposed VGWO compared with the state-of-the-art GWOs, mainly comparing their performance through optimization results and convergence curves.

    5.1 Result Analysis

The optimal solutions of the above 10 functions and the algorithms that produce these results are recorded in Table 6, where the optimal average solution of each function is marked in boldface. Given that the generation of the wolves is random and the iterative process has a certain instability, some results differ from the ideal state. However, in most cases the results are determined by the excellence of the algorithm itself.

    The results of the 19 benchmark functions are analyzed as follows:

1) For all functions, most of the optimal average solutions among the compared algorithms are found by VGWO. Because VGWO enhances the diversity of wolves through SW, it outperforms other algorithms in terms of finding the best solution and even produces the ideal solution 0 for functions F8, F11, and F18. In each iteration, VGWO will arrange more ω wolves to search for prey, which greatly increases the probability of obtaining the global optimum.

2) The standard deviation on the F8, F11, and F18 functions is 0. The other four GWOs easily fall into local optima, so it is difficult for them to achieve this level. VGWO generates random perturbations through the differential operator and calculates the acceptance probability through the perturbation's own domination. This method provides a chance for the wolves to jump out of the local optimum, which reflects the strong robustness of VGWO.

    3) F10 and F12 are multi-peaked functions,F13 is a single-peaked function,and F16 is a fixeddimensional multi-peaked function.Although the GWO achieves the lead,it does not differ much from the results obtained by the VGWO,and VGWO has good performance for other functions of the same type.

In addition, the Friedman test results shown in Table 7 indicate that VGWO ranks first in terms of average results. It is clear that VGWO is statistically superior to all other algorithms since it has the best average ranking value.

    Table 7: Ranking of all algorithms for all benchmarks obtained by Friedman test

    5.2 Convergence Curve Analysis

Figs. 5-7 show most of the fitness value iteration plots. The remaining functions are not shown because the convergence curves of different algorithms either differ excessively or are too similar, so the algorithms' relative excellence cannot be compared visually from the graphs. Considering that the curves of different GWOs overlap in these graphs, some iteration graphs have additional small views, which can be used to visualize the distribution of the overlapping curves at the end of the iterations.

    Figure 5:Unimodal benchmark functions

    The analysis results for the three different types of benchmark functions(unimodal,multimodal,and fixed-dimension multimodal benchmark functions)are as follows:

1) For the unimodal benchmark functions, VGWO and CMAGWO have excellent capabilities in initializing populations due to the beta distribution. Figs. 5b and 5c show that the convergence rate at the beginning is still slow. This is because the improved mechanism proposed in Section 3 enhances the population diversity of VGWO in the early stage. However, this scheme does not affect the convergence speed of VGWO, and the rich population diversity enhances the convergence speed of VGWO in the later stages. As shown in Figs. 5b and 5c, for the above reasons, VGWO obtains a solution that exceeds the rest of the algorithms when the number of iterations reaches about the 20th iteration.

    2) For the multimodal benchmark function,except for the PSOGWO that will fall into the local optimum prematurely,the rest of the algorithms have a good convergence trend,where the most outstanding performance is still the VGWO.With the continuous addition of random perturbations,and accepting wolves with dominant positions,the VGWO can jump out of the local optimum easily and establish the search area quickly by the rich population diversity.

3) For the fixed-dimension multimodal benchmark functions, many algorithms fall into local optima, and the curves of GWO, VWGWO, and VGWO show a decreasing trend, as shown in Fig. 7a. In Fig. 7b, only GWO and VGWO converge toward acceptable results. In Fig. 7c, only VGWO converges to the ideal result. GWO, CMAGWO, PSOGWO, and VWGWO struggle to determine the approximate position of the prey and present extremely unstable curves. This is due to VGWO's guarantee of population diversity, which reflects the strong robustness of VGWO.

    Figure 6:Multimodal benchmark functions

    Figure 7:Fixed-dimension multimodal benchmark functions

    5.3 Comparison with CEC Competition Winner

    To judge the performance of VGWO further,this study compares the results obtained by VGWO from the 19 benchmark functions mentioned above with those obtained by the winner of the CEC2020 competition,an improved multi-operator DE(IMODE)algorithm.Its parameters are obtained from relevant articles,and the same random seeds are used for both VGWO and IMODE to ensure a fair comparison.

    The mean and standard deviation of the results are recorded according to the rules of the benchmark.

    The run was stopped if the number of evolutions was greater than 90.The performance comparison between VGWO and IMODE is shown in Table 6.

Judging by the mean and standard deviation obtained, the proposed VGWO outperforms IMODE for unimodal, multimodal, and fixed-dimension multimodal problems. In [44], IMODE obtains good results through a large number of iterations and a long running time. However, in this study, the results obtained by IMODE are much worse compared to VGWO. This shows that VGWO is able to obtain better results after fewer iterations compared to IMODE, proving that VGWO has higher accuracy and better convergence.

    6 Application Studies on Engineering Cases

Engineering examples are designed in such a way that the solution usually involves inequality or equality constraints. The methods to handle the constraints are special operators, repair algorithms, and penalty functions [45]. The simplest method is used here because this study does not design additional constraint-handling machinery for VGWO. In simple terms, any solution that violates the constraints is uniformly penalized by assigning it the maximum fitness value (the minimum fitness value in the maximization case). This method is extremely easy to implement and requires no modification to VGWO.
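A minimal sketch of this uniform-penalty scheme under a minimization objective; using math.inf as a stand-in for "the maximum fitness value" is our simplification.

```python
import math

def penalized_fitness(objective, constraints, x):
    # Constraints are given as callables g with the convention g(x) <= 0
    # for feasible x (an assumption); any violation gets the worst fitness.
    if any(g(x) > 0 for g in constraints):
        return math.inf
    return objective(x)
```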

In the following subsections, VGWO will be used to solve 3 constrained engineering problems, and each engineering case will be run 15 times to compare the optimization results with GWO, CMAGWO, PSOGWO, and VWGWO. The number of iterations of all algorithms is 90, and the number of wolves is 50. The parameter settings for all algorithms are the same as in Section 4.

    6.1 Cantilever Beam Design Problem

    Fig.8 shows the model of a cantilever beam,which consists of five hollow elements of a square cross section,each corresponding to a variable parameter.From Fig.8,the right side of element 1 is rigidly supported,and element 5 is subjected to a vertical load [46].The optimization objective is to minimize the total weight of the beam without violating the vertical displacement constraint.The weight of the beam is calculated by using the following formula:

    The constraint function for the vertical displacement can be expressed as follows:

where 0.01 ≤ x1, x2, x3, x4, x5 ≤ 100. Table 8 shows the best results of the five algorithms for this engineering example. VGWO achieves the best average result over the 15 runs of the experiment. Although the global optimal solution is obtained by CMAGWO, its average result is inferior to that of VGWO, which reflects the stability of VGWO. The last column of the table (Max. eval.) represents the maximum number of evaluations required for the algorithm in that row to converge to the average value of VGWO (bolded values in the table), and N/A represents the inability to reach the average of VGWO. This finding shows that VGWO only requires a smaller number of evaluations to converge to a better solution, reflecting the higher accuracy of the results obtained by VGWO.
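Since the objective and constraint of this problem are standard in the literature [46], a hedged Python sketch of the cantilever beam case is given below; the exact coefficients used in the paper are assumed to follow that common formulation, and the functions can be plugged into the penalty scheme above.

```python
def cantilever_weight(x):
    # Commonly used cantilever-beam objective (assumed formulation):
    # total weight proportional to the sum of the five section parameters.
    x1, x2, x3, x4, x5 = x
    return 0.0624 * (x1 + x2 + x3 + x4 + x5)

def cantilever_constraint(x):
    # Standard vertical-displacement constraint, written as g(x) <= 0.
    x1, x2, x3, x4, x5 = x
    return 61.0/x1**3 + 37.0/x2**3 + 19.0/x3**3 + 7.0/x4**3 + 1.0/x5**3 - 1.0
```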

    Table 8: Comparison results for cantilever design problem

    Figure 8:Cantilever beam design problem

    6.2 Three-Bar Truss Design Problem

Fig. 9 shows the second engineering example, which is the design of a three-bar truss with the optimization objective of minimizing its weight while being constrained by buckling and stresses [47], where trusses x1 and x3 are the same. The objective function can be expressed as

    Figure 9:Three-bar truss design problem

    The constraint function can be expressed as

where 0 ≤ x1, x2 ≤ 1, l = 100 cm, P = 2 kN/cm², and σ = 2 kN/cm². From Table 9, VGWO achieves the optimal average result, obtains the global optimal solution among the five algorithms, and only uses 79 iterations. The optimal solutions obtained by GWO, PSOGWO, and VWGWO do not even reach the average of VGWO, which again proves the efficiency of VGWO in solving engineering instance problems.
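A hedged sketch of the three-bar truss case, using the formulation that is standard in the literature [47] (the paper's exact equations are assumed to match it), with x = (x1, x2) and constraints in the g(x) ≤ 0 form expected by the penalty scheme above:

```python
import math

def truss_weight(x, l=100.0):
    # Weight of two identical outer bars (x1 = x3) plus the middle bar x2.
    x1, x2 = x
    return (2.0 * math.sqrt(2.0) * x1 + x2) * l

def truss_constraints(x, P=2.0, sigma=2.0):
    # Common buckling/stress constraints, each written as g_i(x) <= 0.
    x1, x2 = x
    d = math.sqrt(2.0) * x1**2 + 2.0 * x1 * x2
    return [(math.sqrt(2.0) * x1 + x2) / d * P - sigma,
            x2 / d * P - sigma,
            P / (x1 + math.sqrt(2.0) * x2) - sigma]
```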

    Table 9: Comparison results of the three-bar truss design problem

    6.3 Gear Train Design Problem

    Fig.10 shows the third engineering example,which is a discrete case study with four parameters representing four gears(i.e.,A,B,C,andD),and the optimization objective is to minimize the rotation ratio of the gears and determine the optimal number of teeth for the four gears at this time[48].The objective function can be expressed as follows:

    Figure 10:Gear train design problem

Its constraints are relatively simple and only need to satisfy 12 ≤ x1, x2, x3, x4 ≤ 60. According to [49], when the optimal numbers of teeth are set to x1 = 43, x2 = 19, x3 = 16, and x4 = 49, the gear rotation ratio obtains its smallest value. As shown in Table 10, VGWO obtains this result in only 81 iterations, and its average result is the best among the five algorithms. From the above three engineering examples, VGWO is suitable for most optimization problems with constraints and can obtain better results after relatively fewer iterations, which reflects the robustness, stability, and strong convergence of VGWO.
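For completeness, a hedged sketch of the gear-train objective, using the form that is standard in the literature [48]; how the four variables map onto gears A-D in the paper is assumed, but the reported optimum (43, 19, 16, 49) gives the well-known minimum under this form.

```python
def gear_ratio_error(x):
    # Squared deviation of the gear ratio from the target 1/6.931,
    # with teeth counts rounded to integers in [12, 60] (discrete problem).
    x1, x2, x3, x4 = (round(v) for v in x)
    return (1.0 / 6.931 - (x3 * x2) / (x1 * x4)) ** 2
```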

    Table 10: Comparison results of the gear train design problem

    7 Conclusion

This paper proposes a high-accuracy VGWO with low time complexity. SW is used to enhance the diversity of the wolves. Inspired by VWGWO's proposal that the three dominant grey wolves have different social statuses in different periods, this study improves the linear variation of the three dominant wolves' weights during the search procedure. With the purpose of reducing the chance of falling into a local optimum, a differential variation factor is introduced to generate new individuals. Enlightened by AMOSA, the probability of accepting the random perturbation is calculated by vectorizing the Manhattan distance when the grey wolf finishes the current search. The experimental comparison on 19 benchmark functions, the CEC competition winner (IMODE), and 3 engineering cases with GWO, CMAGWO, PSOGWO, and VWGWO reflects the advantages of VGWO in terms of convergence and accuracy. The experimental results show that our VGWO has better solution quality and robustness than other algorithms under different conditions. For the 19 benchmark functions, VGWO's optimization results place first in 80% of comparisons to the state-of-the-art GWOs and the CEC2020 competition winner. Only VGWO's average ranking value is less than 1.5, according to the Friedman test.

The proposed VGWO promises to be a good choice for solving problems with uncertain search ranges and high requirements for the accuracy of results. However, the proposed improvements mainly focus on increasing the wolves' diversity, which is not necessarily applicable to all benchmark functions. A tendency to fall into local optima is still a weakness of GWO. VGWO is not always the best choice when the solution space is much larger than in the currently popular application problems, and further efforts are needed to solve this problem. GWO is still a hot topic for future research on optimization algorithms, and the application of VGWO to suitable engineering optimization problems will be considered in the future. For example, it can be applied to renewable energy systems, to workflow scheduling problems in heterogeneous distributed computing systems, or to integrated process planning and scheduling problems. As more excellent intelligent optimization algorithms are proposed, we will try to shift our research direction to these algorithms in the future, such as the Mountain Gazelle Optimizer [50], the African Vultures Optimization Algorithm [51], and so on.

    Acknowledgement:We would like to thank the anonymous reviewers for their valuable comments on improving the paper.

    Funding Statement:This work is partially funded by the China Scholarship Council.

    Author Contributions:Study conception and design: Junqiang Jiang,Zhifang Sun;data collection:Xiong Jiang;analysis and interpretation of results: Shengjie Jin,Yinli Jiang;draft manuscript preparation: Junqiang Jiang,Zhifang Sun,Bo Fan.All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:The authors confirm that the data supporting the findings of this study are available within the article.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
