
    Yield Stress Prediction Model of RAFM Steel Based on the Improved GDM-SA-SVR Algorithm

    2019-03-18 08:15:34 · Sifan Long, Ming Zhao and Xinfu He
    Computers Materials & Continua, 2019, Issue 3

    Sifan Long, Ming Zhao and Xinfu He

    Abstract: With the development of society and the exhaustion of fossil energy, researchers need to identify new alternative energy sources. Nuclear energy is a very good choice, but the successful application of nuclear technology is determined primarily by the behavior of nuclear materials in reactors. Therefore, we studied the irradiation performance of the fusion material reduced activation ferritic/martensitic (RAFM) steel. The main novelties of this paper are the statistical analysis of RAFM steel data sets and the derivation of a gradient descent method (GDM) that uses the gradient descent search strategy of convex optimization theory to find the optimum. The GDM algorithm is used to upgrade the annealing stabilization process of the simulated annealing algorithm. The yield stress of RAFM steel is successfully predicted, for the first time, by a hybrid model that combines simulated annealing (SA) with the support vector machine (SVM). The effect on yield stress of the main physical quantities, such as irradiation temperature, irradiation dose and test temperature, is also analyzed. The prediction process is as follows: first, we trained the SVR model on a training data set and used the improved annealing algorithm to optimize it; next, we established the yield stress prediction model of RAFM steel. For up to 96% of the data points in the test set, the prediction lies within 2σ of the original data point. The statistical analysis shows that, at significance level α = 0.01, the regression effect passes the T-test for significance.

    Keywords:Convex optimization theory,simulated annealing algorithm,reduced activation ferritic/martensitic steel,support vector regression.

    1 Introduction

    Reduced activation ferritic/martensitic (RAFM) steel has excellent thermal, physical and mechanical properties, such as low irradiation swelling, a low thermal expansion coefficient and high thermal conductivity. RAFM steel is therefore often chosen as the preferred material for future fusion power stations and experimental reactors [Kemp, Cottrell and Bhadeshia (2006)].

    In application, the yield stress is a highly important performance parameter of the material. Under certain conditions, the yield stress is a nonlinear function of the deformation velocity, deformation temperature and degree of deformation [Barnes and Walters (1985)]. There are many related variables and many uncontrollable factors in the measurement process. Therefore, it is difficult to describe the performance of RAFM steel in engineering applications. In this paper, a yield stress prediction model of RAFM steel is established with the support vector regression (SVR) model from the field of machine learning [Joachims (1999)]. The effects and joint effects of different parameters on the yield stress can be analyzed, allowing further analysis of how the prediction curve changes before and after irradiation. Predictions can then be made within the range of the data covered by the training set.

    This article uses SVR to establish the prediction model of yield stress for RAFM steel. SVM has been applied successfully in the field of text classification since it was put forward by Vapnik [Cortes and Vapnik (1995)]. Through training with the efficient sequential minimal optimization (SMO) algorithm designed by Platt [Platt and John (1998)], SVM has been successfully promoted in industry. With its excellent performance, whether addressing regression or classification tasks, SVM is always one of the best algorithms compared with other similar algorithms [Gain and Roy (2016); Jian, Shen and Li (2017)]. At present, the main research directions for the algorithm are its improvement and its application in combination with other algorithms [Keerthi, Shevade and Bhattacharyya (2014)]. For example, Ghamisi used it for feature selection [Ghamisi, Couceiro and Benediktsson (2015)], Ananthi and others used it for voice recognition [Ananthi and Dhanalakshmi (2015)], and Darmatasia combined it with a CNN for handwriting recognition [Darmatasia, Fanany and Ivan (2017)]. Although the SVM algorithm shows excellent performance, there are still several problems; for example, the selection of the kernel function is a serious and outstanding problem. To date, there are many relevant studies: Peng et al. used multi-kernel SVM for emotion recognition [Peng, Hu and Dang (2017)], Tan et al. used SVM for hyperspectral image classification [Kun and Peijun (2010)], and the SVM algorithm has been used for unmanned aerial vehicle (UAV) fault diagnosis [Ye, Luo and Li (2014)]. For studies on the application of the fusion material RAFM steel, Kemp and others used a neural network to study the performance of RAFM steel [Kemp, Cottrell and Bhadeshia (2006)]. Neelamegam et al. studied the optimization parameters of RAFM steel based on a hybrid intelligent model to obtain the desired weld bead shape parameters and heat-affected zone (HAZ) width [Neelamegam, Sapineni and Muthukumaran (2013)]; these researchers combined the genetic algorithm to optimize welding parameters of the RAFM steel material. In recent years, the research has tended to be purely physical, such as the neutron irradiation study carried out by Gaganidze et al. [Gaganidze and Aktaa (2013)] and the fatigue crack propagation study by Babu et al. [Babu, Mukhopadhyay and Sasikala (2016)]. Compared with this study, previous studies were too narrow to establish a systematic model describing the influence of the related variables on yield stress, although a number of traditional hybrid models combining heuristic algorithms with machine learning methods such as SVM, artificial neural networks (ANN) and naive Bayes (NB) have been applied in other research fields.

    For example, a hybrid model composed of the genetic algorithm (GA) and the SA algorithm combined with an ANN has been used to predict the groundwater level [Bahrami, Ardejani and Baafi (2016)]. Armaghani used particle swarm optimization combined with a neural network to predict the ultimate bearing capacity of rock-socketed piles [Zhang and Zhou (2016)]. Faced with the same task, there are also significant differences between different heuristic algorithms; for example, Jia et al. compared the genetic algorithm and the particle swarm optimization algorithm in the first-order design of a TLS network [Jia and Lichti (2017)]. A global optimization algorithm has strong search ability in the solution space but cannot search accurately in a small range. Therefore, combining a heuristic algorithm with a fine search algorithm can effectively avoid the shortcomings of both and exploit their respective advantages.

    In the prediction of RAFM steel, no one had previously used a hybrid model composed of the simulated annealing algorithm and the support vector machine to analyze and study the yield stress of RAFM steel. This research is supported by extensive experiments, such as outlier detection, clustering analysis and feature selection. This study establishes the GDM-SA-SVR model based on the improved annealing algorithm. Within the 2σ error range on the test set, the prediction under the conditions of a given test temperature and radiation dose achieves an accuracy of 96%. The T-test shows that there is 99% confidence that the regression effect is significant.

    This article is composed of six sections. Section 2 performs the necessary analysis of the data set and explains that the RAFM steel data set has a relatively complex distribution. Section 3 improves the simulated annealing algorithm and combines it with SVR to form a hybrid prediction model for the yield stress of RAFM steel. Section 4 compares the model proposed in this research with other similar models on the test set to illustrate the effectiveness of the hybrid model. Section 5 uses the hybrid model to analyze the effect of the main parameters of RAFM steel on the yield stress and gives the prediction results. Section 6 concludes the paper.

    2 Dataset analysis

    The data set is a part of the results of a radiation experiment. In the field of data mining, the quality of the data directly affects the result of the experiment. Therefore, in applications, data processing often takes up 80% of the work of data analysts [Witten and Frank (2011)]. Using statistical methods and data visualization technology to analyze the original data set reveals valuable feature trends and is usually a necessary step toward successfully selecting a suitable model.

    2.1 RAFM steel dataset

    The data used in this paper are data points from RAFM steel irradiation experiments with radiation doses in the range of 0~90 dpa. They consist of a total of 1811 experimental data points with 37 associated characteristic attribute parameters. In other words, this paper addresses a multivariate nonlinear function of 37 variables, and an ordinary linear model cannot meet the actual requirements; therefore, it is necessary to find an appropriate model. The attribute distribution of the data is shown in Appendix A [Kemp, Cottrell and Bhadeshia (2006)].

    2.2 Symmetry,correlation analysis and outlier detection

    Tab. 6 in Appendix A shows that the distribution of data samples in a single dimension is asymmetrical and is accompanied by many outliers. That situation is not conducive to the establishment of an accurate model, and the predictive relationship between the data samples and the result label cannot be assessed directly. Therefore, a more comprehensive analysis is needed from the microscopic perspective. To determine the predictability of data samples and whether there is multicollinearity between attributes, it is necessary to use the Pearson correlation coefficient [Taylor (1990)] (simple correlation coefficient) to measure the degree of association among attributes. The calculation formula is as follows:
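    The equation itself is not preserved in this copy; the standard form of the Pearson correlation coefficient, consistent with the definitions in the following paragraph, is

    $$ r_{XY}=\frac{\sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)\left(Y_{i}-\bar{Y}\right)}{\sqrt{\sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)^{2}}\sqrt{\sum_{i=1}^{n}\left(Y_{i}-\bar{Y}\right)^{2}}} \qquad (1) $$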

    The Pearson correlation coefficient can simply judge whether there is a linear relationship between two attributes, that is, whether the attributes are collinear, where X and Y are two one-dimensional data sets, X̄ is the mean of X, and Ȳ is the mean of Y. Because Eq. (1) is symmetric, the order of computation has no effect on the Pearson coefficient, that is, r_XY = r_YX. Relevant evaluation criteria are usually used to determine the degree of correlation [Nicewander (1988)]. Given a significance level α (generally α = 0.05) and the degree of freedom (n-2), where α is the level at which inference is made (for example, when α = 0.01 we can make decision inferences with 99% probability), we check the R distribution table for R_α; if |r| > R_α, the correlation coefficient is significant. To facilitate the discrimination, |r| is judged according to the simplified criteria in Tab. 1.

    Table1:Criteria for determining the correlation coefficient

    Figure 1: Correlation between attribute variables. The Pearson coefficient is calculated to detect collinear relationships between attributes and to infer their impact on the final model. Because the Pearson coefficient is independent of the order of the two attributes, the map is symmetrical

    In regression analysis, we need to understand the pure effect of each independent variable on the dependent variable. Multicollinearity means that there is a functional relationship between the independent variables. If there is a functional relationship between two independent variables (x1 and x2), then x2 will change accordingly when x1 changes by one unit, and the other variable cannot be held fixed. If we examine the effect of x1 on the dependent variable y alone, the observed effect of x1 is always mixed up with the effect of x2, which leads to analysis errors and makes the analysis of the effects of the independent variables inaccurate. Therefore, we need to exclude the effect of multicollinearity in regression analysis. Judged by the criteria in Tab. 1, some attributes do have collinear relationships. The correlation coefficients between attributes are arranged in a matrix, similar to the adjacency matrix used to describe the adjacency of edges in graph theory: the (i, j)th element of the matrix represents the correlation coefficient between the ith attribute and the jth attribute, and these matrix elements are drawn as a heat map. The diagonal correlation is the highest because each attribute is perfectly correlated with itself, and the overall image is axisymmetric. Moreover, the Pearson correlation coefficient does not depend on the order of calculation; in other words, R(i, j) is equal to R(j, i). The degree of relevance is divided into 5 levels. When the correlation coefficient is above 0.8, we must process the data set, for example by removing duplicate data, normalizing data and cleaning data. When the correlation coefficient is low, for example less than 0.4, no special treatment is needed. Fig. 1 shows that there is no strong correlation between attributes except attribute 15 and attribute 28, which ensures that our model will not be unduly influenced by collinearity between attributes.
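    As an illustration of this screening step, the following is a minimal sketch (not the authors' code) of computing the Pearson correlation matrix of the attribute columns and flagging pairs above the 0.8 threshold; the file name "rafm_steel.csv" and the column layout are assumptions.

    ```python
    import pandas as pd

    df = pd.read_csv("rafm_steel.csv")      # hypothetical file with the 37 attribute columns
    corr = df.corr(method="pearson")        # symmetric: corr.loc[a, b] == corr.loc[b, a]

    # Flag attribute pairs whose |r| exceeds the 0.8 threshold used in the text.
    strong_pairs = [
        (a, b, round(corr.loc[a, b], 3))
        for i, a in enumerate(corr.columns)
        for b in corr.columns[i + 1:]
        if abs(corr.loc[a, b]) > 0.8
    ]
    print(strong_pairs)
    ```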

    Figure2:The box-plot of 37 attribute values of RAFM steel shows that most of the attribute distributions are asymmetrical.In addition,there are many outliers

    The distribution of the data can affect the training process of the prediction model. Compared with a skewed distribution, symmetric data carry more balanced information, which eases the learning task of the prediction model. In order to comprehensively analyze the RAFM steel data set, we use the statistical box plot. It contains six elements computed after arranging a set of data from large to small: the upper whisker, the upper quartile Q3, the median, the lower quartile Q1, the lower whisker, and the exceptional values (outliers). For the specific content of the box plot, refer to the work of McGill et al. [Mcgill, Tukey and Larsen (1978)]. Fig. 2 uses box plots of the original data to show data symmetry and outliers. The graph shows that the attributes present asymmetrical distributions and that there are many outliers ('+' represents outliers), which poses a great challenge for the prediction results. Therefore, we must find a model with strong anti-noise ability, strong robustness and high generalization performance; a traditional method would be difficult to apply and would affect the final stability of the model.
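    A minimal sketch of this inspection, under the same assumed file and column layout as above, would draw box plots of the attribute columns analogous to Fig. 2:

    ```python
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("rafm_steel.csv")   # hypothetical file with the 37 attribute columns
    df.boxplot(rot=90, sym="+")          # '+' marks the outliers, as in Fig. 2
    plt.tight_layout()
    plt.show()
    ```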

    3 SVR yield stress model of RAFM steel

    Among the large number of heuristic search algorithms, different algorithms suit different application scenarios. For example, the traditional genetic algorithm process is too complex for the prediction of the yield stress of RAFM steel [Zhang and Zhou (2016)], and the particle swarm optimization (PSO) algorithm is not good at handling discrete problems and easily falls into local optima [Du (2016); Zhu and Cai (2016)]. The simulated annealing algorithm has many advantages, such as fast convergence and efficiency, so it is often applied to various optimization problems [Chen, Zou and Wang (2016)]. Based on these advantages, this study uses the simulated annealing algorithm as the main optimization tool. This section introduces the improved annealing method and the SVR-based yield stress model of RAFM steel. Most search algorithms adopt a heuristic search method but make limited use of local information; therefore, it is essential to improve the traditional annealing search algorithm by combining it with search information.

    3.1 GDM-SA search algorithm

    To accurately adjust the parameters of the SVR model, the simulated annealing algorithm must be improved to obtain a simulated annealing algorithm based on smooth convex optimization and the gradient descent search strategy, known as the GDM-SA algorithm. Compared with a traditional search algorithm, the GDM-SA algorithm can be used for a global search, and it can address problems that exact algorithms cannot solve when the scale of the data is large. The general process of the traditional simulated annealing algorithm is as follows.

    1. Initialization: Set a sufficiently high initial temperature T0, let T = T0, randomly initialize the solution S1, and set the Markov chain length L.

    2. A new solution S2 is generated from the current solution S1 by a random perturbation.

    3. Calculate the increment Δf = f(S2) - f(S1).

    4. If Δf < 0, accept the new solution S2; otherwise, accept it only with probability exp(-Δf/T).

    5. If the termination condition is satisfied, save the current optimal solution and end the program. Otherwise, perform cooling according to the decay function and return to Step 2. (A minimal code sketch of this loop follows the list.)
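    The following is a minimal sketch of Steps 1-5 for a minimization problem (not the authors' code); the objective f, the perturbation scale, the chain length and the cooling rate are placeholder choices.

    ```python
    import math
    import random

    def simulated_annealing(f, s0, t0=100.0, decay=0.95, chain_len=50, t_min=1e-3):
        s, t = s0, t0
        best = s0
        while t > t_min:                                         # Step 5: cooling loop
            for _ in range(chain_len):                           # fixed Markov chain length L
                s_new = [x + random.uniform(-1, 1) for x in s]   # Step 2: random perturbation
                delta = f(s_new) - f(s)                          # Step 3: increment
                # Step 4: accept better solutions, or worse ones with prob exp(-delta/t)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    s = s_new
                if f(s) < f(best):
                    best = s
            t *= decay                                           # cooling according to the decay function
        return best
    ```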

    In the process of cooling, it is difficult to determine when the system reaches a stable status at the current temperature. The traditional method is simply to fix the length of the Markov chain. The problem is highly complex in that real objective functions are often complicated, with multiple stationary points, multiple saddle points, non-smoothness and other characteristics, and they have multiple local minima; therefore, ordinary optimization methods face severe challenges. This paper uses a gradient descent algorithm to improve the search within each cooling step; at the same time, the gradient information gives the algorithm local directivity.

    In the annealing process, if the number of inner iterations is K, the algorithm tries to find a solution that meets the accuracy requirement within those K iterations; it is then considered stable at the current temperature and proceeds to the next cooling step.

    Given the following sets of training samples

    (X1,Y1),(X2,Y2),(X3,Y3),...,(Xk,Yk)

    The error calculation formula used in the K-th iteration is:
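    The formula is missing from this copy; since Section 3.3 states that E_K is the mean square error, the standard form over the k training samples above would be

    $$ E_{K}=\frac{1}{k}\sum_{i=1}^{k}\left(Y_{i}-f_{K}\left(X_{i}\right)\right)^{2}, $$

    where f_K denotes the model at the K-th iteration.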

    After K iterations, define the measure of the overall cumulative error

    Satisfying

    In fact,

    The above statements assume independence. Certainly, complete independence is impossible in practice because data samples on the training set overlap, and there will be correlations between the trained learners depending on the actual situation.

    The error in the K-th iteration is E_k, where E_k ∈ [τ_k, η_k] and the interval length tends to 0; according to the Hoeffding inequality, we have
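    The inequality as printed is lost; a standard statement of the Hoeffding inequality for bounded errors E_k ∈ [τ_k, η_k], consistent with the surrounding argument, is

    $$ P\!\left(\frac{1}{K}\sum_{k=1}^{K}E_{k}-\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}\left[E_{k}\right]\geq \epsilon\right)\leq \exp\!\left(\frac{-2K^{2}\epsilon^{2}}{\sum_{k=1}^{K}\left(\eta_{k}-\tau_{k}\right)^{2}}\right). $$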

    The above shows that the precision of the system itself increases exponentially with K [Ruxton (2010)], where K is a given number.

    Considering that the mathematical properties of the cost function E_k are not good, we use the exponential loss function as a surrogate. Researchers have already proved the consistency of the exponential loss function with the squared loss function [Zhang (2004)]. Thus, we use the exponential loss function as the optimization objective, and the original formula becomes

    where g(k) = E_k, and E_k ~ χ indicates that the mathematical expectation is taken over the characteristic space χ. Then, the smooth convex optimization approach is used to optimize the new target function [Bubeck and Bastien (2015)], that is, to minimize the new objective function ι(k)

    where χ is the characteristic space of the training sample set, and ψ is the characteristic space of the overall sample set.

    There are many ways to optimize the objective function, such as the Lagrange multiplier method, the coordinate descent method, the Newton method, the gradient descent method, and variants derived from them. Different optimization methods have different effects on the search results. The cost function has been converted to the exponential loss function [Singh, Singh, Singh et al. (2008)]; therefore, the gradient descent method is used for the optimization, since the exponential loss function has good mathematical properties.

    The gradient descent method requires that the following sequence can be constructed:

    And requiring

    Here we use the idea of relaxation and approximation and construct a series of relaxation variables to approach the real function. As the number of iterations increases and the sequence contracts, convergence of the algorithm is guaranteed. In practice, μ is a step factor. To ensure fast convergence, the descent direction should be the negative direction of the gradient. The next part derives the optimization problem for this study.

    During the iterative process, given that the error at the i-th step is E_i, we perform a second-order Taylor expansion here, because when the error is very small the higher-order terms of the Taylor expansion are high-order infinitesimals with respect to the step factor μ (also called the learning rate in neural networks). The objective function can then be approximately replaced by the following quadratic polynomial.
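    The expansion itself is missing here; the standard second-order Taylor approximation of ι around the current point k_i, which the following paragraphs rely on, is

    $$ \iota(k)\approx \iota\left(k_{i}\right)+\nabla \iota\left(k_{i}\right)^{T}\left(k-k_{i}\right)+\frac{1}{2}\left(k-k_{i}\right)^{T}H\!\left[\iota\left(k_{i}\right)\right]\left(k-k_{i}\right). $$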

    The form above is the approximation formed by the quadratic expansion. For the optimization function ι(k), we further have:

    The mark ∇ is the gradient operator, and H[f(x)] is the second-order Hessian matrix of the function f(x) [Powell (1979)]. For the definition of the Hessian matrix, please refer to convex optimization theory [Bubeck (2014)]; the corresponding Hessian matrix here is:

    In actual operation, because the calculation of the Hessian matrix consumes extensive computing resources [Mcgill, Tukey and Larsen (1978)] and the complexity grows exponentially with the scale of the problem, the relatively simple matrix (1/μ)I, which plays the role of the Hessian matrix, is used as an approximate substitute, where I is the k-order identity matrix. From this, the new simplified formula is as follows:

    The gradient vector near the point k_{i+1} is obtained by the quadratic approximation method.

    Referring to convex optimization theory for the second-order gradient analysis of a real-valued function [Mcgill, Tukey and Larsen (1978)], the necessary condition for a first-order extreme point is that the gradient is 0, which gives the following formula:
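    The formula is not preserved in this copy; setting the gradient of the quadratic approximation to zero with H ≈ (1/μ)I yields the familiar gradient descent update, which is presumably the iteration formula referred to below:

    $$ k_{i+1}=k_{i}-\mu\,\nabla \iota\left(k_{i}\right). $$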

    The form above is the updated iteration formula of this algorithm. If the search ability cannot meet the specific data requirements, we can also use the second-order gradient, that is, the information of the Hessian matrix, to find a faster descent direction. The optimized objective function is as follows:

    With reference to convex optimization theory, the necessary and sufficient conditions from the second-order gradient analysis of a real-valued function show that, to obtain a local optimal point of the original function (here a minimum), the first-order gradient of the iterative increment must be 0 and the second-order gradient must be positive; that is, the Hessian matrix must be positive definite. Therefore, it is necessary to find the constraint conditions.

    Recall the concept of a convex set: for a given convex set S, for any two points A and B belonging to S, the line segment connecting them is also in S. That is:
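    The condition itself is missing from this copy; the standard statement of convexity of a set S is

    $$ \lambda A+(1-\lambda)B\in S,\qquad \forall A,B\in S,\ \lambda\in[0,1]. $$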

    For the objective function,there is:

    The objective function ι is a convex function; therefore, it satisfies the Jensen inequality
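    The inequality is not preserved here; for a convex function ι, the Jensen inequality reads

    $$ \iota\!\left(\lambda x+(1-\lambda)y\right)\leq \lambda\,\iota(x)+(1-\lambda)\,\iota(y),\qquad \lambda\in[0,1]. $$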

    When μ > 0, ι is a strongly convex function. A necessary and sufficient condition for ι, defined on R, to be strongly convex is that ι is twice differentiable and its Hessian matrix is positive definite.

    From the above analysis, the objective function ι satisfies the constraints of second-order convex optimization. After the above constraints are satisfied, the second-order algorithm of smooth convex optimization can be used to analyze the second-order gradient of the objective function ι. Therefore, we have the following:

    Thus,the second order iterative formula for this study is obtained.
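    The formula itself is missing here; the standard Newton iteration consistent with the description below is

    $$ k_{i+1}=k_{i}-H\!\left[\iota\left(k_{i}\right)\right]^{-1}\nabla \iota\left(k_{i}\right). $$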

    The form above is called the Newton method, which uses the information of the second-order gradient for optimization. The update direction is more accurate, but the drawback is that inverting the Hessian matrix during the iterative process also consumes extensive computational resources. The actual situation must be considered when choosing between them; this study provides the two optimization methods above.

    Applied to the cooling process during annealing, the above algorithm searches for the optimal value at the current temperature; when it converges to an optimal point, the search ends, the algorithm is deemed to have reached the stable status of the cooling process, and the next round of cooling starts. The pseudo code of the whole process is presented in Tab. 2.

    Tab. 2 uses pseudo code to explain the main program of the combined annealing algorithm. The traditional annealing algorithm sets a fixed Metropolis inner-loop chain, in place of Steps 5~16 here, as a short-term equilibrium constraint at the current temperature; this balance is called a steady state at the current temperature. However, that approach is relatively rigid, and it is difficult to know whether the steady state has actually been reached after the iteration. Therefore, the GDM algorithm is designed to adjust automatically and optimize until the balance determined by the given precision value is reached. Compared with the traditional SA algorithm, the GDM algorithm is more flexible and the scope of the search space is also expanded.

    Table 2: Improved simulated annealing GDM-SA algorithm
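    Since the pseudo code of Tab. 2 is not reproduced in this copy, the following is a minimal Python sketch (not the authors' exact pseudo code) of the combined loop it describes: each cooling step runs a gradient-descent inner loop until a precision "tol" is met, instead of a fixed Metropolis chain. The gradient function grad_f, the learning rate and the perturbation scale are assumptions.

    ```python
    import math
    import random

    def gdm_sa(f, grad_f, s0, t0=100.0, decay=0.95, lr=0.001, tol=1e-4,
               k_max=100, t_min=1e-3, step=10.0):
        s, t = list(s0), t0
        best = list(s0)
        while t > t_min:
            # Random perturbation plus Metropolis acceptance (global exploration).
            cand = [x + step * random.uniform(-1, 1) for x in s]
            delta = f(cand) - f(s)
            if delta < 0 or random.random() < math.exp(-delta / t):
                s = cand
            # GDM inner loop: gradient descent until stable at this temperature.
            for _ in range(k_max):
                g = grad_f(s)
                s = [x - lr * gi for x, gi in zip(s, g)]
                if sum(gi * gi for gi in g) ** 0.5 < tol:   # gradient small enough: steady state
                    break
            if f(s) < f(best):
                best = list(s)
            t *= decay                                      # cooling
        return best
    ```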

    3.2 Comparison with unimproved algorithms

    Compared with the unimproved annealing algorithm, the GDM-based simulated annealing algorithm has the advantages of a wider search range and reduced premature convergence to local optima. To verify these advantages, the following complex function was chosen as the test target, searching for the minimum of the function f(x1, x2). The function formula is

    This function has numerous local minima, and it is easy to see that the global minimum of this function is at x1 = 0, x2 = 0, where f(x) takes the global minimum value of 1, that is

    After selecting the task, the search capability of the simulated annealing algorithm improved by the GDM algorithm was compared with that of the original one under the same annealing parameters: the attenuation parameter is 0.95, the step factor is 10 (the force used to control the disturbance), the initial temperature is 100, the tolerance is 0.0001, the learning rate of GDM is 0.001, and the maximum number of iterations is 4000. The curve of the minimum of f(x) obtained during the iterative process is shown in Fig. 3 below.

    Fig. 3 shows the results of calculating the minimum value of the given complex function. Under the same conditions, we compare the traditional simulated annealing algorithm, the GDM-SA algorithm, the particle swarm optimization algorithm and the artificial bee colony (ABC) algorithm [Gao, Suganthan and Pan (2016)]. The average results of each algorithm are obtained over several runs, and the curve of the search result versus the number of iterations is drawn. It can be seen that the traditional optimization algorithms fall into local optima: the traditional simulated annealing algorithm reaches an optimum of 1.775, particle swarm optimization reaches 1.996 and the artificial bee colony reaches 2.063. The optimal value found by the GDM-SA algorithm is 1.498, and its search space is wider. Our explanation is that, after falling into a local optimum, the accurate gradient search can be fully exploited to refine the value, while the annealing mechanism allows the algorithm to jump out of the local optimum. Finally, the improved algorithm obtains better results than a single heuristic algorithm; indeed, the GDM-improved annealing algorithm can almost achieve the global optimal solution. Therefore, Fig. 3 shows that the improved simulated annealing algorithm has stronger searching ability, faster convergence speed and a wider search scope. The above experiment only compares the methods under the same conditions; when the temperature is raised appropriately, the traditional simulated annealing algorithm can also approach the global minimum.

    Figure3:The search results of GDM-SA algorithm and traditional simulated annealing algorithm,particle swarm optimization algorithm and artificial bee colony algorithm on function extremum optimization problem.Each algorithm performs at least three experiments,and then the search results are averaged to plot the optimal value and the number of iterations.

    The results above show that the SA algorithm combined with the GDM algorithm, known as the GDM-SA algorithm, has strong search ability. Next, the parameters of SVR are optimized by the GDM-SA algorithm, and a spatial grid search is performed within the interval around the optimal value. The SVR model with GDM-SA is introduced below, and the yield stress prediction model of SVR is then established.

    3.3 GDM-SA-SVR yield stress model

    This study uses the SVR model to establish the yield stress model [Huang and Tsai (2009)]. For the principle of the SVR model, refer to the work of Vapnik [Cortes and Vapnik (1995)].

    Referring to the data distribution of RAFM steel in Appendix A, the difference in magnitude between different attributes is large; for example, the carbon content and the test temperature differ by a factor of roughly 10000. In scientific computing, it is necessary to avoid directly combining numbers of very different magnitude, which leads to a loss of calculation precision and data information. Therefore, normalization is needed [Wang and Tang (2015)]. The min-max normalization formula is used here.
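    The formula is not preserved in this copy; the standard min-max normalization, mapping each attribute x into [0, 1], is

    $$ x^{\prime}=\frac{x-x_{\min}}{x_{\max}-x_{\min}}. $$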

    The strategy used in this paper is to use the improved simulated annealing algorithm GDM-SA to optimize the penalty factor C of SVR and the kernel parameter G (gamma) to obtain an approximate interval around the extreme value. For the regression error threshold parameter epsilon (the general empirical value is 0.1), errors smaller than this value are not penalized while larger errors are penalized; it is therefore equivalent to the expected regression precision and has a significant effect on the result, so we set a finer adjustment interval for it. Then, a spatial grid search algorithm is used for the accurate search [Liu, Liu and Yang (2006)]. The combination of coarse and precise search can effectively approximate the global optimal solution.
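    The following is a minimal sketch (not the authors' code) of this two-stage strategy, assuming the coarse GDM-SA stage has already proposed an interval: a fine grid search then tunes C, gamma and epsilon of an RBF-kernel SVR. The interval around C = 16, G = 8 and the dummy data are only illustrative.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    X_train = rng.random((200, 37))    # stand-in for the normalized 37-attribute data
    y_train = rng.random(200)          # stand-in for the normalized yield stress values

    # Fine grid around the coarse optimum proposed by the GDM-SA stage.
    param_grid = {
        "C": np.linspace(8, 32, 7),
        "gamma": np.linspace(4, 16, 7),
        "epsilon": [0.001, 0.005, 0.01, 0.05, 0.1],
    }
    search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                          scoring="neg_mean_squared_error", cv=5)
    search.fit(X_train, y_train)
    print(search.best_params_, -search.best_score_)
    ```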

    The model is trained on the training set with the annealing algorithm [Ginneken and Van (2016)], and the mean square error is calculated. The formula used in the K-th iteration is the mean square error formula E_K above (refer to the definition in the previous section).

    Combined with the above design, a preliminary algorithm is designed to iterate and optimize the parameters C and G of SVR. To obtain more candidate solutions, a more robust approach is used: the m best solutions are saved after each iteration. These m optimal solutions are then sorted by merge sort, stored in a matrix structure, and output. Thus, the annealing algorithm SA improved by the GDM algorithm is used to optimize the SVR model, and the m best solutions of the SVR model are saved. The whole flow chart is shown in Fig. 4. The related attenuation parameter is set to 0.95, the step factor is 10 (the force used to control the disturbance), the initial temperature is 80, the tolerance is 0.0001, the GDM learning rate is 0.001, and the maximum number of iterations is 2000. Tab. 3 shows the 10 optimal values selected after an iterated annealing run.

    Table 3: The best parameter values from different groups in a certain iteration (here epsilon = 0.1)

    Excessive penalty parameters will increase the accuracy on the training set but will lead to overfitting; in practice, we are inclined to choose relatively small parameter values. After many runs of the simulated annealing optimization, the final optimal values are C = 16, G = 8 and epsilon = 0.0065. After the optimal range is initially determined, the spatial grid search starts [Liu, Liu and Yang (2006)].

    Figure4:Flowchart of improved simulated annealing algorithm.The improved algorithm process is more complex,and search ability is enhanced

    Figure5:Error contour and surface map after multiple iterations.Every iteration,the search area will be further reduced,and we can then determine the global optimal solution through repeated iterations

    Sampling points are taken in the two dimensions of the optimal value interval calculated by the annealing algorithm; these are the test points for the parameters C and G. Taking the mean square error as the evaluation standard, a three-dimensional plot of the error over the parameters C and G is made and the corresponding contour map is drawn. By analyzing the contour lines, the search range is further reduced, and the process is iterated until the parameters meet the error precision requirements, at which point the algorithm ends. After repeated iterations of the above method, parameters C and G with small errors are finally obtained and assigned to the SVR model, which is retrained on the training set to obtain a relatively ideal model. Through the preceding analysis, for the extremum optimization of complex functions, a heuristic algorithm finds an optimal value (most are locally optimal), and then, after repeated iterative analysis, the result can approach the global optimum [Zien, Kramer and Sonnenburg (2009)].

    To date, an SVR model relating the 37 attribute variables to the yield strength YS has been established. The following is the test and prediction analysis of the model.

    4 Test and analysis of the model

    4.1 Model learning

    When evaluating the performance of a model, a very important index is how well it learns the training samples. Insufficient learning leads to underfitting, and the prediction results show high bias. Over-learning learns features of the sample itself (for example, sample noise), resulting in overfitting, and the prediction results show high variance. The ideal situation is a compromise between underfitting and overfitting, at which point the generalization performance of the model is best. Overfitting and underfitting accompany the whole process of machine learning. Many factors must be considered when training a model, and another important difficulty is that there is no complete theory to guide model building for the main problems encountered in the process, such as training guidance and evaluating sample information. Even so, there are computational learning theories that can be used to analyze the learning model.

    For a training set, we assume that learning reaches a given precision ε [Zeugmann (2016)]; the learned hypothesis is recorded as h, the number of samples is t, and the hypothesis space R is defined for the iterative training process. Then we have

    When the number of samples t is large, the empirical error of h can approximately replace the generalization error. Given the hypothesis space R, when t → ∞, the right side of the inequality tends to 0, so there must exist a hypothesis whose generalization error is small enough to guarantee the learning process. Fig. 6 below shows the learning of the samples after the final model training.

    The training set used in this paper consists of 1711 data points selected after randomly shuffling the data, and the remaining 100 data points are used to validate the model. In the process of training the model, Fig. 6(a) shows an overfitting learning situation, while Fig. 6(b) shows a learning situation between underfitting and overfitting, that is, the characteristics of the samples are learned but the noisy data are not, so there are some large deviations. This phenomenon is normal: the data contain not only noisy data points but also contradictory data points.

    Figure6:Overfitting and not overfitting sample learning

    4.2 Model comparison

    To verify the performance of the model, it is compared with other similar models, including an ANN, random forest, linear regression and a general regression neural network (GRNN). These models are trained on the same data and then tested on the same test set to obtain the absolute error (residual) and error distribution of the predicted results. Figs. 7 to 11 show the results of the error analysis.

    Tab. 4 is a specific statistical analysis table. In the field of machine learning, statistical methods are used to test regression problems: the residuals between the prediction results and the expected results on the test set are obtained, and the residual data are analyzed with common statistics including the mean, maximum deviation, variance, standard deviation, mean square error (MSE) and goodness of fit (also called the coefficient of determination). In general, the error mean obeys a normal distribution, and the closer it is to 0 the better; the smaller the maximum deviation, variance, standard deviation and mean square error, the better; and the closer the goodness of fit, i.e. the coefficient of determination R², is to 1, the better the regression effect. Tab. 4 compares the BP neural network, random forest, linear regression, generalized regression neural network and the GDM-SA-SVR algorithm used in this paper. Apart from a few minor disadvantages, GDM-SA-SVR clearly performs much better than the other similar algorithm models. Therefore, the GDM-SA-SVR algorithm model is effective.
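    A minimal sketch (not the authors' code) of computing the residual statistics listed in Tab. 4, using scikit-learn and NumPy; y_true and y_pred stand for the test-set targets and model predictions and are dummy values here.

    ```python
    import numpy as np
    from sklearn.metrics import mean_squared_error, r2_score

    y_true = np.array([510.0, 495.0, 602.0, 430.0])   # stand-in yield stress values
    y_pred = np.array([505.0, 500.0, 595.0, 441.0])   # stand-in model predictions

    residual = y_true - y_pred
    stats = {
        "mean": residual.mean(),                       # error mean (ideally near 0)
        "max_deviation": np.abs(residual).max(),
        "variance": residual.var(),
        "std": residual.std(),
        "MSE": mean_squared_error(y_true, y_pred),
        "R2": r2_score(y_true, y_pred),                # goodness of fit
    }
    print(stats)
    ```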

    Figure7:Absolute error and frequency distribution histogram of BP neural network

    Table4:Performance comparison of BP neural network,random forest,linear regression,GRNN(generalized regression neural network)and GDM-SA-SVR algorithm on test set

    Figure8:Absolute error and frequency distribution histogram of random forest

    4.3 Significance test of the regression effect

    The previous section explored how well the model learns the sample data, which only assesses the model's ability to store the characteristic information of the data samples. Since the partition of the test set is completely independent of training, the performance on the test set truly shows the predictive ability of the model. Given a test set that has not been used for training, the prediction of the model will differ from reality to some extent.

    Fig. 12 shows that 96 of the 100 data points on the test set fall within two sigma of the prediction curve, and only four fall outside this range. The regression effect is clearly visible from this preliminary judgment, but for further analysis it is necessary to compare the predicted results with the actual results by the T-test [Ruxton (2010)], which uses t-distribution theory to deduce the probability of the observed difference and thus to judge whether the difference between two means is significant. In this way, it is possible to test whether there is a significant difference between the predicted result and the actual result.

    For the original problem, it is assumed that there is a linear relationship between the predicted results and the actual results, with regression coefficient b.

    1. Put forward the null hypothesis for the regression coefficient

    Figure9:Absolute error and frequency distribution histogram of linear regression

    2. Construct the statistic T, satisfying

    3.Given the significance levelα(0<α<1)making

    4. Obtain the rejection region

    The T statistic is calculated from the experimental results and the test results. If it does not fall in the rejection region, the null hypothesis is accepted, which means the prediction does not have a significant linear relationship with the actual results; otherwise, the prediction has a linear relationship with the actual results and the regression effect is significant.
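    The statistic as printed is lost; the standard t statistic for testing H0: b = 0 in simple linear regression, consistent with the procedure above, is

    $$ T=\frac{\hat{b}}{s_{\hat{b}}},\qquad s_{\hat{b}}=\sqrt{\frac{\sum_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2}/(n-2)}{\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2}}},\qquad T\sim t(n-2)\ \text{under}\ H_{0}, $$

    with H0 rejected when |T| > t_{α/2}(n-2).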

    For the given significance level α = 0.01, the calculated result is T1 = 35.7328. There are 100 data points in the test set; when the degree of freedom is greater than 45, the T statistic approximates the standard normal distribution, i.e., T ≈ Z. Therefore, at the α = 0.01 level, the table gives Z = 2.57, and T1 > Z. The null hypothesis is therefore rejected, and there is 99% confidence that the regression effect between the predicted values and the actual values is significant.

    Figure10:Absolute error and frequency distribution histogram of GRNN

    Figure11:Absolute error and frequency distribution histogram of GDM-SA-SVR

    Figure 12: The predictive effect of the GDM-SA-SVR model on the test set. From the graph, we can see that of the 100 test points (assuming that the error obeys a normal distribution), only 4 fall outside the 2σ prediction interval

    In addition, there is another method (the correlation coefficient method used previously) that can test the regression effect. This article does not introduce it in detail and only gives the final result: the calculated correlation coefficient fluctuates above 0.8. By that quantitative criterion, the statistical analysis indicates that the SVR model has a significant regression effect on the test set.

    5 Results and discussion

    5.1 Prediction of test temperature

    The test set data (100 points) are labeled as before or after irradiation, and a scatter plot is drawn. The data are affected by the mutual influence of different attributes, so they show a certain dispersion, but the overall trend is clear. Extrapolated prediction is carried out over the test temperature range of [100, 1000] K [Aktaa and Schmitt (2006)]. The trained SVR model is used to produce the prediction curves of Fig. 13 under the constraints of before and after irradiation. Compared with the true scatter data of the test set, the prediction curves fit well.

    The model prediction shows that, in the temperature range of 100 K to 1000 K, the yield stress decreases as the temperature increases, as shown by the curves in Fig. 13. The contrast between before and after irradiation indicates that, in this test temperature range and at the given dose (different doses have different effects; the irradiation dose in the curve diagram is approximately 0.024 dpa), the yield strength difference before and after irradiation decreases [Wang, Zhang and Zhao (2017)]. Since the training set contains a variety of alloy materials, the prediction for the RAFM steel alloy is in line with the test set.

    Figure13:The predictive effect of the GDM-SA-SVR model on the test set.The trained model is given 0 or a number of irradiation parameters,and then the prediction curve is compared with the original data points.The original data points are distributed near the prediction curve.The GDM-SA-SVR algorithm has a good prediction effect

    Figure14:Prediction interval of SCRAM steel in 2 σ ,where σ is the variance of the normal distribution of error

    5.2 Prediction of radiation temperature

    Below we introduce the reduced activation ferritic/martensitic steel SCRAM, which was developed by a university using vacuum induction furnace smelting and argon-protected electroslag re-melting technology. The main chemical components are shown in Tab. 5.

    For the above SCRAM steel,give the same test temperature and irradiation dose to investigate the effect of different irradiation temperatures on the yield strength(YS)of SCRAM steel.

    Table5:Chemical composition of SCRAM steel(wt%)

    Fig. 14 shows the 2σ prediction interval of SCRAM steel. In the irradiation temperature range [300, 900] K, the predicted yield strength varies within roughly [200, 800] MPa, first increasing slowly and then decreasing. Most of the irradiation temperatures in the training set lie between 273 K and 900 K, so the extrapolated prediction for SCRAM steel has a certain reliability; more accurate analysis would require comparison with real experimental data.

    Figure15:The prediction line of radiation temperature of SCRAM steel at different doses.The radiation dose was 0.05,0.18,1.3,10.1,22.3,43.3 and 88.6,respectively,and the corresponding prediction curve was calculated by GDM-SA-SVR model

    Figure16:When the element composition of RAFM steel is changed,the prediction curves of different types of steels under different radiation doses are given

    Fig. 15 shows the change of the yield strength with irradiation temperature at doses of 0.05, 0.18, 1.3, 10.1, 22.3, 43.3 and 88.6 dpa [Gaganidze and Aktaa (2013)]. The prediction curves over the irradiation temperature range [300, 900] K show that the interaction of the physical mechanisms of the two attributes is very complex; in the studied dose range and the given irradiation temperature range, the effect on the yield strength is not monotonic, with many maxima and minima. If verification experiments are planned, test points can be selected at the intersection points of the prediction lines.

    Different irradiation doses have different effects on the yield strength. For example, the yield strength of a zirconium alloy will increase under irradiation, but its ductility will decrease [Lucon (2002)]. This paper presents a research method to study such effects for different materials: first, the irradiation dose is divided evenly over a certain range, and then the SVR yield stress model is used to draw the corresponding curves, so the effect of irradiation dose on the results can be studied further. This method can be evaluated and contrasted with reference to studies of yield stress [Virgil'Ev, Kalyagina and Makarchenko (1979)].

    5.3 Prediction of irradiation dose

    The effect of the irradiation dose also differs between different steels. Fig. 16 shows the prediction curves made with the SVR prediction model for 6 types of RAFM steel with different chemical compositions extracted from the test data set, for doses in the range [0, 80] dpa. The prediction curves all first increase and then decrease, and different chemical compositions have different effects. Most of the data in this data set lie in the range [0, 50] dpa; therefore, predictions beyond this dose range should be compared with future experimental points. In practical engineering applications there are generally no standard experimental data points for comparison, so the method needs to be studied in combination with other methods [Lucon (2002)].

    5.4 Combination prediction of test temperature and irradiation dose

    The previous analysis of the original data shows that there are collinear dependencies among some attributes. To study the effect of these dependencies on the model, a three-dimensional graph is made to visualize the SVR model. In Fig. 17, 350 K, 436 K, 600 K and 783 K are taken as the irradiation temperature points, and the corresponding predicted surfaces show the combined influence of radiation dose and test temperature. Clearly, the surfaces at different irradiation temperatures change greatly. At the same time, it is worth noticing that there are few data points near the boundary of the range, so the model may be biased there during learning.

    Figure17:The combined effect of test temperature and irradiation dose.The irradiation temperature is 350 K,436 K,600 K and 763 K,and the combined effect of the irradiation dose and the test temperature on yield strength is calculated by the GDM-SA-SVR model

    6 Conclusions

    In this paper, we use data mining technology: statistical analysis of the symmetry of the data distribution prepares for model screening, and correlation analysis describes the collinearity of attributes and the stability of the model. Next, we use techniques from convex optimization theory to improve the traditional SA algorithm, obtaining the GDM-SA algorithm with stronger search ability. Tests on complex optimization problems show that, compared with the traditional SA algorithm under the same conditions, the GDM-SA algorithm has faster convergence, is less prone to falling into local optima and has a wider search range. Therefore, the improved simulated annealing algorithm (GDM-SA) is used to optimize the parameters of the SVR model. Finally, a GDM-SA-SVR prediction model for the yield stress of RAFM steel is derived.

    By comparison, the RAFM steel yield strength prediction model is better than the ANN, linear regression, random forest and GRNN models. In summary, the hybrid model composed of the simulated annealing algorithm and the support vector machine achieves superior performance compared with the ANN, linear regression, GRNN and random forest. The possible reason is that the data distribution is sparse, which fits the prediction mechanism of the support vector machine: because of the dispersion and sparsity of the data, the SVM can separate the data better when they are mapped to the feature space, the soft margin is small, and the error is also small. Therefore, it is very important to perform data mining on the original data, which is helpful for subsequent research. These results may help to guide the future development of the model as follows.

    The SVR model uses a single kernel, while the best kernel function in the experiments is the Gaussian kernel or the sigmoid kernel. Therefore, using multi-kernel models may achieve better results on different data sets, which is also a heavily studied research direction for SVR. In addition to the nature of the model itself, the data set should also be considered: if new data become available in the future, adding them to the data set and retraining the model can yield better prediction ability than the current data set, and the model will be more accurate.

    Acknowledgement: The research is supported by the National Natural Science Foundation of China under Grant No. 61572526. Thanks to Mr. He from the material radiation effect team of the China Institute of Atomic Energy; with the help and guidance of Mr. He and Mr. Deng, the experiment was successfully conducted, and the results and the structure of this article were greatly improved. Thanks also to the editor for the detailed comments, which improved the quality of the article.

    Appendix A.Statistic information of sample data

    Reference Tab.6.

    Table6:Basic information of the input parameters
