
A hybrid-model optimization algorithm based on the Gaussian process and particle swarm optimization for mixed-variable CNN hyperparameter automatic search


    Han YAN, Chongquan ZHONG, Yuhu WU, Liyong ZHANG, Wei LU

    School of Control Science and Engineering, Dalian University of Technology, Dalian 116024, China

E-mail: luwei@dlut.edu.cn

Abstract: Convolutional neural networks (CNNs) have developed quickly in many real-world fields. However, CNN performance depends heavily on hyperparameters, and finding suitable hyperparameters for CNNs in application fields is challenging for three reasons: (1) the problem of mixed-variable encoding for different types of hyperparameters in CNNs, (2) expensive computational costs in evaluating candidate hyperparameter configurations, and (3) the problem of ensuring convergence rates and model performance during hyperparameter search. To overcome these problems and challenges, a hybrid-model optimization algorithm is proposed in this paper to search suitable hyperparameter configurations automatically based on the Gaussian process and particle swarm optimization (GPPSO) algorithm. First, a new encoding method is designed to deal efficiently with the CNN hyperparameter mixed-variable problem. Second, a hybrid-surrogate-assisted model is proposed to reduce the high cost of evaluating candidate hyperparameter configurations. Third, a novel activation function is suggested to improve model performance and ensure the convergence rate. Intensive experiments are performed on image-classification benchmark datasets to demonstrate the superior performance of GPPSO over state-of-the-art methods. Moreover, a case study on metal fracture diagnosis is carried out to evaluate the GPPSO algorithm performance in practical applications. Experimental results demonstrate the effectiveness and efficiency of GPPSO, achieving accuracies of 95.26% and 76.36% in only 0.04 and 1.70 GPU days on the CIFAR-10 and CIFAR-100 datasets, respectively.

Key words: Convolutional neural network; Gaussian process; Hybrid model; Hyperparameter optimization; Mixed-variable; Particle swarm optimization

https://doi.org/10.1631/FITEE.2200515

CLC number: TP181

    1 Introduction

In recent years, as one of the most useful deep learning models, convolutional neural networks (CNNs) have achieved state-of-the-art results in various artificial intelligence (AI) applications (Jiang and Luo, 2022; Tulbure et al., 2022). Through convolution operations, meaningful features are extracted from input data, which has greatly improved model performance (Sun et al., 2019). Such learning and expression abilities have led to great success in various real-world applications, such as face detection (Li X et al., 2022) and autonomous driving (Grigorescu et al., 2020). Although CNNs have achieved great success, the design of CNN architecture is still extremely complicated, and obtaining efficient CNN models for solving specific tasks remains a challenge (Fernandes and Yen, 2021; Guo et al., 2022). Furthermore, most efficient CNN models are designed and optimized by experienced AI algorithm engineers through tedious trial-and-error experiments, which does not help technicians in other industries who want to use AI technology.

Therefore, many researchers have started to consider designing CNN models in a more intelligent, automatic, and efficient way (Li JY et al., 2022). For example, to realize automation in the model design process, researchers have regarded the process as an optimization problem and found the optimal solution using intelligent algorithms (Zhan et al., 2022a). Intelligent optimization algorithms (OAs) are a class of extensively researched methods for deriving optimal solutions, including Bayesian optimization (BO) (Snoek et al., 2012), particle swarm optimization (PSO) (Poli et al., 2007; Li JY et al., 2021), and other evolutionary computation (EC) approaches (Li JY et al., 2020; Zhan et al., 2022b; Wu SH et al., 2023). As an application of the Gaussian process (GP) in optimization problems, BO has been proposed to search hyperparameters in machine learning algorithms. Snoek et al. (2012) selected expected improvement (EI) as the acquisition function to optimize three typical machine learning problems, which reduced training cost and improved training speed. Jin et al. (2019) used BO to guide the generation of network morphism for automatic construction of CNN models, and designed a new GP kernel function according to the characteristics of the network morphism exploration space. Li JY et al. (2023) proposed a surrogate-assisted hybrid model to optimize CNN hyperparameters, where GP was used as a surrogate model to estimate the fitness function and save computational cost. Li JY et al. (2022) and Zhan et al. (2022a) used BO to search different combinations of hyperparameters to construct different architectures. The unique properties of BO can reduce the number of trained neural networks, resulting in a more efficient search process. As a branch of EC, PSO is efficient in finding optimal solutions to high-complexity problems due to its wide exploration area and fast convergence (Li JY et al., 2022). For instance, Wang B et al. (2018) proposed a variable-length encoding method for CNNs and searched hyperparameters through PSO to optimize CNNs, whereas Fielding et al. (2019) used a PSO algorithm and ensemble learning to automatically design CNNs. Wu T et al. (2019) treated model pruning as a multi-objective problem and solved it by PSO to balance accuracy and complexity, reducing weights by 80% without significant accuracy loss. Wang B et al. (2020) proposed an efficient PSO (EPSOCNN) method inspired by transfer learning to accelerate the search process, which reduced the search space and demonstrated the transferability of the evolved block. Wang YQ et al. (2022) designed a novel and light-weight scale-adaptive fitness evaluation-based PSO method that reduces search time while maintaining search performance. In addition, some other EC algorithms can achieve better results in cooperation with model optimization (Alvarez-Rodriguez et al., 2021; Chen et al., 2022). Real et al. (2017) proposed a large-scale neuro-evolutionary method to discover the best CNN model, and achieved 94.6% and 77.0% accuracy on CIFAR-10 and CIFAR-100, respectively. Sun et al. (2020a) used a novel genetic algorithm (GA) called CNN-GA to search CNN architectures automatically. These algorithms have achieved promising results in CNN hyperparameter optimization tasks.

However, there are still challenges in CNN hyperparameter optimization tasks due to the following problems: mixed-variable hyperparameter encoding, high computational cost, low convergence rate, and limited model performance. First, the CNN hyperparameter types are different (continuous or discrete) (Darwish et al., 2020), and such mixed-variable characteristics make efficient search-space encoding difficult. Second, for traditional OAs (BO and PSO), CNNs are evaluated by the fitness function through training-based assessment criteria, which increases the cost of fitness evaluation (FE) and damages the efficiency of OAs. Third, considering the large number of CNN hyperparameters, it is still necessary to research how to accelerate the convergence of FE and ensure model performance after the search.

Therefore, in this paper we focus on these challenging tasks in the CNN hyperparameter optimization problem and propose a novel method, GPPSO, based on the Gaussian process (GP) and particle swarm optimization (PSO), to solve these difficulties. The major challenges and contributions are summarized in Fig. 1.

    Fig.1 Major challenges and contributions of GPPSO

1. A novel encoding strategy is proposed to efficiently deal with the mixed-variable difficulty of CNN hyperparameters. A unified encoding strategy is designed to encode discrete and continuous variables in the same form, making the optimization process more efficient.

2. A hybrid-surrogate-assisted (HSA) model is proposed to deal with the expensive computational cost problem in the search process. During the search process, the GP model serves as a surrogate for the fitness function, while the PSO algorithm generates new individuals. To achieve a better balance between efficiency and performance, a multi-level evaluation mechanism is proposed to reduce computational cost.

3. A novel activation function (AF), Ta-ReLU, is proposed to accelerate convergence in the process of population evolution and to improve the performance of the model after training. The improved AF has a tiny gradient in the negative region ($x < 0$), which not only enhances the model's performance but also ensures efficient training.

    2 Background and related works

The concepts of CNN, Gaussian process regression (GPR), and PSO, the basic algorithms underlying GPPSO, are introduced in Sections 2.1, 2.2, and 2.3, respectively; they are helpful for understanding the details of the proposed GPPSO.

    2.1 Convolutional neural network (CNN)

CNN is a type of deep feedforward neural network that has the advantages of local connections and weight sharing (Alzubaidi et al., 2021). With the development of CNNs, network structures have gradually become deeper, and VGGNet and ResNet have emerged as two state-of-the-art CNN structures in recent years.

VGGNet (Simonyan and Zisserman, 2014) is a frequently employed CNN for extracting features. Fig. 2 illustrates a 16-layer version of VGGNet, consisting of 13 Conv layers and three fully connected (FC) layers. Because smaller 3×3 Conv kernels are stacked to simulate 5×5 Conv kernels, VGGNet can obtain larger receptive fields with fewer parameters, resulting in effective and efficient feature extraction.

    Fig.2 VGG16 model structure

ResNet (He et al., 2016) is another commonly used CNN with a residual structure for feature extraction. ResNet uses cross-layer connections to fit residual terms, which extends the depth of CNNs. The residual block is shown in Fig. 3a; it outputs $y = F(x) + x$ after the input passes through the module. The overall structure of ResNet is shown in Fig. 3b.
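To make the residual computation concrete, the following minimal Keras sketch builds one residual block that outputs $y = F(x) + x$; the filter count, the two-Conv form of $F(x)$, and the omission of batch normalization are simplifying assumptions, not the exact configuration of He et al. (2016).

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """Residual block of Fig. 3a: the output is y = F(x) + x."""
    f = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)  # F(x), part 1
    f = layers.Conv2D(filters, 3, padding="same")(f)                     # F(x), part 2
    y = layers.Add()([f, x])                                             # F(x) + x
    return layers.Activation("relu")(y)

# The skip connection requires matching channel counts (16 in, 16 out here).
inputs = layers.Input(shape=(32, 32, 16))
model = tf.keras.Model(inputs, residual_block(inputs, filters=16))
```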

    Based on their effective characteristics,VGGNet and ResNet are chosen as basic models for the proposed algorithm.

    Fig.3 ResNet model structure: (a) residual block;(b) ResNet

    2.2 Gaussian process regression(GPR)

Gaussian process regression (GPR) is an efficient modeling algorithm based on statistical learning theory. In contrast to parameterized models in machine learning (e.g., Bayesian linear regression), GP is a nonparameterized model that can fit a black-box function and give confidence in the fitting results. In GPR, it is assumed that an unknown function $f(x)$ is smooth and follows a GP. When $N$ points $X = [x_1, x_2, \cdots, x_N]$ are sampled from $f(x)$, the observed outputs $y$ and the value $y_*$ at a test point $x_*$ follow a multivariate normal distribution, as shown in Eq. (1):

$$\begin{bmatrix} y \\ y_* \end{bmatrix} \sim \mathcal{N}\left(\mathbf{0}, \begin{bmatrix} K(X, X) & K(x_*, X)^{\mathrm{T}} \\ K(x_*, X) & k(x_*, x_*) \end{bmatrix}\right), \tag{1}$$

where $K(x_*, X) = [k(x_*, x_1), k(x_*, x_2), \cdots, k(x_*, x_N)]$. According to the above joint distribution, the posterior distribution of $y_*$, with mean $\hat{\mu}$ and variance $\hat{\sigma}^2$, is calculated as follows:

$$\hat{\mu} = K(x_*, X) K(X, X)^{-1} y,$$
$$\hat{\sigma}^2 = k(x_*, x_*) - K(x_*, X) K(X, X)^{-1} K(x_*, X)^{\mathrm{T}}.$$

Based on the above properties, GPR is employed as a surrogate-assisted model to predict the model performance in this study.
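To make the surrogate concrete, the NumPy sketch below computes the GPR posterior mean and variance given above; the squared-exponential kernel and the small noise term are illustrative assumptions rather than the kernel actually used in GPPSO.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel k(a, b) = exp(-||a - b||^2 / (2 l^2))."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * length_scale**2))

def gpr_posterior(X, y, X_star, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at test points X_star."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))  # K(X, X) with jitter
    K_s = rbf_kernel(X_star, X)                    # K(x_*, X)
    K_inv = np.linalg.inv(K)
    mu = K_s @ K_inv @ y                                    # posterior mean
    cov = rbf_kernel(X_star, X_star) - K_s @ K_inv @ K_s.T  # posterior covariance
    return mu, np.diag(cov)

# Example: fit five noisy samples of sin(x) and predict at two new points.
X = np.linspace(0, 3, 5).reshape(-1, 1)
y = np.sin(X).ravel()
mu, var = gpr_posterior(X, y, np.array([[1.5], [2.5]]))
```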

    2.3 Particle swarm optimization(PSO)

PSO is a type of EC approach based on artificial life and evolutionary computing theory. The main procedures of PSO are as follows: first, PSO initializes a population of individuals with positions $X_i^0$ and velocities $V_i^0$, where each individual corresponds to a random candidate solution to the objective function. Then, based on the fitness value, the individual extremum pbest and the global extremum gbest are updated, and the velocity and position are updated using Eqs. (7) and (8):

$$v_{id}^{k+1} = w v_{id}^{k} + c_1 r_1 \left(\text{pbest}_{id} - x_{id}^{k}\right) + c_2 r_2 \left(\text{gbest}_{d} - x_{id}^{k}\right), \tag{7}$$
$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}, \tag{8}$$

where $v_{id}$ and $x_{id}$ refer to the $d$th velocity component and position component of the $i$th particle of the $k$th generation respectively, $w$ is the inertia weight, $r_1$ and $r_2$ are random numbers in $[0, 1]$, and $c_1$ and $c_2$ are learning rates, which control the amplitude of evolution toward the individual best particle and the global best particle respectively. The above procedures are repeated until the expected error value is reached or the maximum number of iterations is reached. Finally, PSO outputs the best particle position, which corresponds to the optimal solution to the problem. Due to its ease of implementation and minimal tunable parameters, PSO is a good choice for solving complex optimization problems in CNNs, and it is also a basic algorithm of GPPSO.
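The following NumPy sketch implements the global-best PSO loop of Eqs. (7) and (8) on a toy objective; the sphere function and bounds are placeholders, although the coefficients mirror the values used later in this study ($c_1 = 0.5$, $c_2 = 0.3$, $w = 0.9$).

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=100,
                 w=0.9, c1=0.5, c2=0.3, bounds=(-1.0, 1.0)):
    """Minimal global-best PSO following the update rules in Eqs. (7) and (8)."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))  # initial positions X_i^0
    v = np.zeros((n_particles, dim))                   # initial velocities V_i^0
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (7)
        x = np.clip(x + v, lo, hi)                                 # Eq. (8)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f                 # update individual extrema (pbest)
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]       # update global extremum (gbest)
    return gbest, pbest_f.min()

best_x, best_f = pso_minimize(lambda z: float(np.sum(z**2)), dim=5)
```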

    3 Proposed algorithm

In this section, the framework of GPPSO and its main components are discussed in detail. First, we present the mixed-variable encoding strategy, which is used to encode different types of CNN hyperparameters in the same form. Subsequently, we explain the HSA model used to search CNN hyperparameters, which deals with the expensive computational cost problem. We also detail the novel AF, which ensures the convergence rate and model performance. Finally, we present the complete algorithm for a better understanding of the proposed GPPSO.

    3.1 Mixed-variable encoding strategy

In GPPSO, each sampled individual represents a group of CNN hyperparameters, where each dimension of the individual corresponds to one CNN hyperparameter. Because the CNN hyperparameters have distinct meanings, each dimension has its own specific type and constraints. For example, some hyperparameters should be set as integers with a large range, such as the number of Conv kernels, whereas some hyperparameters are discrete, such as the size of Conv kernels. Most traditional optimization methods are aimed at a single type of variable, which makes optimizing mixed variable types difficult. Therefore, it is important to design a mixed-variable encoding scheme for handling the mixed-variable problem.

Table 1 provides the CNN hyperparameter settings to be optimized in this study. In particular, variables with many available choices or a large search range are encoded as continuous variables, whereas variables with a few fixed selections are encoded as discrete variables. Although the numbers of Conv kernels and FC layer neurons are required to be integers, regarding them as continuous variables is more flexible and efficient in the optimization process. The reason for processing integers as continuous variables is that the search space for the numbers of kernels and neurons is large and has no prior known upper bound (e.g., $[1, +\infty)$), which makes encoding them as finite integers inefficient and complex. As for the discrete variables with only a few choices, we encode them in a continuous way. The advantages of encoding discrete variables in this way are as follows: first, the size of Conv kernels, the type of AF, and the type of pooling layer each have only three or four available choices. These choices correspond to a small search space (e.g., $\{0, 1, 2, 3\}$), which is feasible for continuous encoding. Second, encoding continuous variables and discrete variables in the same way is convenient and efficient, which is conducive to the GPPSO algorithm.

Based on the above analysis, the mixed-variable encoding strategy is given as follows: in detail, $X$ of every particle is a vector of dimension $D$, and each dimension represents a CNN hyperparameter. As a result, each vector $X$ represents an architecture of the candidate CNN structure. Because each dimension has a different meaning and the range of values in each dimension is different, the following strategy is designed for encoding. First, for the integer variables among the continuous variables, the encoding strategy is shown in Eq. (9):

$$N_k = \lfloor X_{nk} \rfloor, \tag{9}$$

where $N_k$ denotes the number of kernels in the Conv layer, $X_{nk}$ denotes the search result for the number of kernels obtained by the optimization method, and $\lfloor \cdot \rfloor$ denotes taking the largest integer that is no larger than the search result. Using this method, we translate a continuous variable in $[1, +\infty)$ into an integer variable. Because the initial learning rate and dropout rate are continuous floating-point values in the search space, the GPPSO search result can be used directly as the final result. For discrete variables, each of the available values corresponds to an integer value, and the search range is set according to all the integer values that can be taken. Taking the discrete-variable encoding of the AF as an example, the specific encoding strategy is shown in Eq. (10):

$$\text{af} = \lceil X_{\text{af}} \rceil, \tag{10}$$

    Table 1 Settings of mixed-variable encoding for CNN hyperparameters

where af denotes the final result of the AF, $X_{\text{af}}$ denotes the GPPSO search result, and $\lceil \cdot \rceil$ denotes taking the smallest integer that is no less than the search result.
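As a concrete illustration, the sketch below decodes one continuous particle vector into CNN hyperparameters using Eqs. (9) and (10); the dimension order, the choice lists, and the 1-based ceiling indexing are assumptions for illustration, with the actual options following Table 1.

```python
import math

# Illustrative choice lists; the real options follow Table 1.
KERNEL_SIZES = [3, 5, 7]
ACTIVATIONS = ["relu", "tanh", "ta_relu"]

def decode_particle(x):
    """Decode one particle (dimension order assumed) into hyperparameters."""
    return {
        "n_kernels": max(1, math.floor(x[0])),            # Eq. (9): floor to integer
        "learning_rate": x[1],                            # continuous, used as-is
        "dropout_rate": x[2],                             # continuous, used as-is
        "kernel_size": KERNEL_SIZES[math.ceil(x[3]) - 1], # Eq. (10): ceiling index
        "activation": ACTIVATIONS[math.ceil(x[4]) - 1],   # Eq. (10): ceiling index
    }

print(decode_particle([48.7, 1e-3, 0.35, 1.2, 2.6]))
# -> {'n_kernels': 48, 'learning_rate': 0.001, 'dropout_rate': 0.35,
#     'kernel_size': 5, 'activation': 'ta_relu'}
```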

    3.2 HSA model

After encoding the CNN hyperparameters using the mixed-variable encoding scheme, the GPPSO algorithm starts to search for the optimal hyperparameter combinations through the HSA model. As shown in Fig. 4, the search process is divided into two parts, master and slave. In the first part, the candidate model is evaluated by constructing a GP model at multiple levels. In the second part, the target model for the next exploration is generated according to the previous search results through the PSO algorithm.

    Fig.4 General flowchart of the hybrid surrogateassisted model

    3.2.1 Multi-level evaluation strategy based on GP

Because of the large size of CNN models and the large amount of training data, evaluating their fitness (i.e., classification loss or accuracy) is computationally expensive. Furthermore, even if there are enough computational resources and time to train a model until its accuracy converges, the resulting optimal model still cannot be guaranteed to have the best performance on the test set, owing to the difference between the validation data and test data. Therefore, unlike traditional OAs, which obtain convergence accuracy through extensive training as a fitness value, GPPSO evaluates and compares different CNN candidate structures more efficiently using a multi-level evaluation strategy.

GPPSO mitigates the computationally expensive problem from two perspectives. First, GPPSO trains candidate models with few epochs during the search process to distinguish the performance of different individuals. Second, a multi-level evaluation strategy based on GP is designed for the computationally expensive problem in the search process. Each candidate is preliminarily evaluated by the GP model, and then the individuals with better evaluation results are trained to construct a new GP model. In addition, to improve the robustness of the algorithm, it is necessary to ensure that at least one individual of the group is evaluated by training. The pseudocode of the multi-level evaluation strategy based on GP is given in Algorithm 1.

    3.2.2 Individual generation strategy based on PSO

During the search process of GPPSO, individuals must be evaluated many times by the multi-level evaluation strategy so that they evolve toward the best outcome. Every time the multi-level evaluation strategy in Algorithm 1 is executed, an individual generation strategy is required to produce new candidates for the next generation. A traditional OA based on GP, such as the BO algorithm, defines an acquisition function to evaluate whether a sample can provide benefits for the GP model, and then determines whether it is a new individual to be explored. However, traditional acquisition functions generate new individuals based on existing individuals, which results in an incomplete search process, insufficient adaptability, and a limited exploration area.

Because the exploration performance of the above acquisition functions may not be enough to generate new individuals accurately, an individual generation strategy based on PSO is proposed to improve the generation of candidate individuals. The method is inspired by the PSO algorithm: candidate structures are treated as a group of particles, their initial values are treated as positions, and their velocities are calculated according to the GP model. After iterating through PSO, the optimal offspring obtained is the new individual to be evaluated in Algorithm 1. The specific pseudocode is shown in Algorithm 2.

Algorithm 1 Multi-level evaluation strategy
Input: dataset for training CNNs, Dtrain; dataset for evaluating CNNs, Deval; architectures constructed by individuals, Parch; fitness of individuals in the architectures, fitarch; set of individuals to be evaluated, P; number of initial individuals, NI; number of all individuals, NA; number of training epochs, T
Output: set of individuals evaluated, Peval; fitness set of the individuals evaluated, fiteval
 1: begin
    /* construct the initial GP model */
 2: Initialize NI individuals;
 3: Obtain Parch and fitarch of the initial individuals by training;
 4: GPI ← build an initial GP model with Parch and fitarch;
    /* first-level evaluation by the GP model */
 5: gfitness ← fitness of P predicted by GPI;
 6: afitness ← the average fitness of fitarch;
 7: Peval ← empty set;
 8: fiteval ← empty set;
 9: ri ← a random integer in {1, 2, ···, NA};
    /* second-level evaluation by training */
10: for each individual Pi in P do
11:     if gfitness > afitness or ri == i then
12:         Net ← build a candidate architecture according to the hyperparameters in Pi;
13:         Net ← train Net with T epochs on Dtrain;
14:         accuracy ← validate Net on Deval;
15:         fiteval ← fiteval ∪ {accuracy};
16:         Peval ← Peval ∪ {Pi};
17:         GPI ← evolve GPI through fiteval and Peval;
18:     end if
19: end for
20: end
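A compact Python rendering of Algorithm 1's gating logic is given below; `gp` is assumed to be a regressor with scikit-learn-style `predict`/`fit` methods, and `train_and_validate` is an assumed helper that trains a CNN built from an individual for a few epochs and returns its validation accuracy.

```python
import random

def multi_level_evaluate(gp, population, avg_fitness, train_and_validate, epochs=2):
    """Two-level evaluation: the GP screens candidates first, and only promising
    ones (plus one random pick, for robustness) are evaluated by real training."""
    forced = random.randrange(len(population))  # guarantee >= 1 trained individual
    evaluated, fitness = [], []
    for i, individual in enumerate(population):
        g_fit = gp.predict([individual])[0]     # first level: surrogate estimate
        if g_fit > avg_fitness or i == forced:  # second level: short training
            acc = train_and_validate(individual, epochs=epochs)
            evaluated.append(individual)
            fitness.append(acc)
            gp.fit(evaluated, fitness)          # evolve the GP with new evidence
    return evaluated, fitness
```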

    3.3 An improved AF

Algorithm 2 Individual generation strategy
Input: the initial Gaussian process model, GPI; number of particles, N; fitness of individual i, fiti; dimension of particles, d; number of generations, n; number of iterations, T
Output: the generated individual to be evaluated, Pnew
 1: begin
    /* initialize a group of particles with d dimensions corresponding to hyperparameters in CNNs */
 2: Initialize a set of N particles as P;
 3: for each particle Pi in P do
 4:     for each dimension d of Pi do
 5:         Randomly initialize the particle position xid within the given range;
 6:         Randomly initialize the particle velocity vid within the given range;
 7:     end for
 8: end for
    /* search individuals for multi-level evaluation by PSO */
 9: for k = 1 to T do
10:     for each individual Pi in P do
11:         fiti ← fitness calculated by GPI in Algorithm 1;
12:         if the fitness value is larger than pbestid in history then
13:             pbestid ← fiti;
14:         end if
15:         gbestd ← the particle with the best fitness value;
16:         Calculate velocity vid through Eq. (7);
17:         Calculate position xid through Eq. (8);
18:     end for
19: end for
20: Pnew ← xid of the last n generations;
21: end

AFs play an essential role in the CNN learning process by fitting complex functions. AFs introduce nonlinearity into CNNs and provide the entire model with the ability to solve complex problems. During the early development stage of neural networks, traditional AFs were mainly S-type saturation functions, such as Sigmoid and Tanh (Fig. 5), which tend to cause vanishing gradients, resulting in training difficulties. Moreover, the derivatives of these functions are complicated and cause computationally expensive problems during gradient back-propagation in training. With the development of deep learning, Krizhevsky et al. (2017) proposed rectified linear units (ReLUs) (Fig. 5) to solve the problem of vanishing gradients and speed up network training. However, the ReLU function has some limitations. Because both the output and the gradient are zero on the negative half-axis ($x < 0$), neurons can lose the ability to transmit information during the training process.

In the automatic search for CNN hyperparameters, fast convergence, low computational cost, and high accuracy should be achieved in the network evaluation process. Traditional AFs are often unable to meet these requirements simultaneously. Therefore, a new AF has been designed as follows:

Ta-ReLU is a nonlinear and differentiable function with a sensitivity factor $\alpha$ that controls the activity of the function's negative semi-axis to the inputs. When $\alpha = 0$, Ta-ReLU reduces to ReLU. Its derivative is discontinuous at $x = 0$, and the function is sensitive to inputs on the positive semi-axis and to inputs close to 0 on the negative semi-axis: when such an input changes, the output also changes significantly. However, Ta-ReLU is insensitive to other inputs; that is, the output does not change in correspondence with the input or produces only a small change. Given these properties, Ta-ReLU meets the required conditions of an AF. A comparison between Ta-ReLU and traditional AFs is shown in Fig. 5.

    Fig.5 Comparison of different activation functions

Fig. 5 shows that Ta-ReLU has several advantages over traditional AFs. First, it requires less computation and has a simpler derivative, making it more efficient and easier to implement. Additionally, Ta-ReLU has a larger gradient than ReLU on the negative half-axis near zero, ensuring that negative output values are not ignored in the network. This leads to higher search speeds during training and results in better performance of the trained model. These advantages are further verified in Section 4.
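Since the exact Ta-ReLU formula (the equation in Section 3.3) is not reproduced here, the sketch below gives one plausible form consistent with the stated properties: identity on $x \ge 0$, a small tanh-shaped response scaled by $\alpha$ on $x < 0$ that saturates for large negative inputs, and reduction to ReLU at $\alpha = 0$. This is an assumption for illustration, not the authors' verified definition.

```python
import numpy as np

def ta_relu(x, alpha=0.1):
    """Hypothetical Ta-ReLU: x for x >= 0, alpha * tanh(x) for x < 0.
    alpha = 0 recovers ReLU; the published equation may differ in detail."""
    return np.where(x >= 0, x, alpha * np.tanh(x))

def ta_relu_grad(x, alpha=0.1):
    """Gradient: 1 on the positive half-axis; alpha / cosh(x)^2 on the negative
    half-axis, largest near zero and vanishing for large negative inputs."""
    return np.where(x >= 0, 1.0, alpha / np.cosh(x) ** 2)
```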

    3.4 Complete algorithm

    The complete pseudocode of GPPSO is shown in Algorithm 3 and is detailed as follows:

Step 1: GPPSO initializes a set of individuals through the mixed-variable encoding scheme (Section 3.1) and evaluates these initial points (IPs) by training to determine their accuracy. Then, according to the accuracy, the best individuals and their fitness values are stored.

Algorithm 3 Complete algorithm
Input: input dataset, DI; number of initial individuals, NI; number of sampling individuals, NS; the initial Gaussian process model, GPI; the initialized continuous variables, cx; the initialized discrete variables, dx; number of iterations, T
Output: the architecture searched by GPPSO, Net
 1: begin
 2: Dtrain, Dvalid ← split DI by 5:1;
 3: cx, dx ← perform the mixed-variable encoding strategy;
 4: P ← combine cx and dx;
 5: fitness ← ∅;
 6: for each particle Pi in P do
 7:     Net ← build a CNN according to Pi;
 8:     Net ← train Net with S epochs on Dtrain;
 9:     accuracy ← test Net on Dvalid;
10:     fitness ← fitness ∪ {accuracy};
11: end for
12: Pbest, fitbest ← the best individual and fitness, respectively;
13: Parch ← P, fitarch ← fitness;
14: search_iteration ← 0;
15: while search_iteration < T do

Step 2: for the first loop, search_iteration is set to zero. According to the points evaluated in step 1, the initial GP model is constructed. It should be noted that the GP model is dynamically updated during the cycle: the GP model evolves each time based on the evaluation results of the candidate points until the iteration limit is reached. In the process of candidate point evaluation, the multi-level evaluation strategy (Algorithm 1) is adopted to solve the computationally expensive problem and improve search efficiency.

Step 3: during the cycle, new candidate points are generated for evaluation through the individual generation strategy based on PSO (Algorithm 2). First, the boundaries, numbers, and dimensions of particles are set to generate particles through a star topology, and then the fitness of the particles is evaluated using the GP model. After several evolutionary iterations, the optimal particles are obtained for the multi-level evaluation.

Step 4: if the stopping criterion is not met, the algorithm goes back to step 2 and the procedure is repeated; otherwise, the CNN constructed from the optimal hyperparameters is trained on the entire dataset for the best performance, and this CNN model is the final output. It should be noted that the Ta-ReLU function designed in Section 3.3 is used as the AF of the Conv layers in the GPPSO algorithm. This choice aims to accelerate convergence and ensure model performance.

    4 Experimental results

To verify the effectiveness and efficiency of the proposed GPPSO, a series of experiments were designed and conducted. First, the relevant settings of the experiments are introduced in Sections 4.1–4.3, including the datasets and evaluation metrics, the compared state-of-the-art methods, and the parameter settings of the proposed algorithm. Next, the proposed GPPSO is compared with advanced algorithms on the CIFAR datasets to investigate its theoretical effectiveness. Finally, a metal fracture (MF) diagnosis case in a real-world industrial scenario is presented to prove the feasibility of GPPSO in practical industrial applications.

    4.1 Datasets and metrics

To evaluate the performance of the proposed GPPSO, three datasets were used for experimentation in this subsection: CIFAR-10, CIFAR-100, and the MF dataset. The two CIFAR datasets (Fig. 6) were used to verify the theoretical effectiveness of the proposed algorithm, because they provide varying difficulties for image classification tasks. Both CIFAR datasets contain 50 000 training images and 10 000 test images, where each image has 32×32 pixels and three channels. The MF dataset was used in the experiments to demonstrate the effectiveness of GPPSO in practical applications. The MF dataset consists of 1500 scanning electron microscopy images of three MF categories, at a resolution of 512×512. The initial dataset was expanded from 1500 to 7500 images by random-angle rotation, proportional scaling, and horizontal and vertical flipping; the training-to-test set ratio was 4:1. Fig. 7 shows examples of the MF dataset.

    Fig.6 Examples of the CIFAR datasets

    Fig.7 Examples of the metal fracture dataset: (a)cleavage fracture; (b) intergranular fracture; (c) dimple fracture
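The expansion of the MF dataset described above (random-angle rotation, proportional scaling, and flips) can be expressed with the Keras augmentation utilities mentioned in Section 4.3; the parameter ranges below are assumptions, not the exact values used.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation mirroring the described transforms; the ranges are assumed.
augmenter = ImageDataGenerator(
    rotation_range=180,     # random-angle rotation
    zoom_range=0.2,         # proportional scaling
    horizontal_flip=True,   # horizontal flipping
    vertical_flip=True,     # vertical flipping
)
# Typical usage: stream augmented batches from the raw fracture images, e.g.,
# batches = augmenter.flow_from_directory("mf_dataset/train", target_size=(512, 512))
```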

During the experiments, three aspects were considered to compare algorithm strength: model performance, model size, and training time. In terms of these aspects, three popular metrics were adopted: the classification accuracy on the test dataset, the number of parameters in the model, and the time consumed. It should be noted that the time required for training differs between computers due to different hardware configurations, even with the same graphics processing unit (GPU). Therefore, GPU days were used only as a reference index for other methods; the training time in our experimental environment was used as the horizontal comparison index.

    4.2 Compared methods

To demonstrate the effectiveness of the proposed GPPSO, a series of state-of-the-art algorithms were used for comparison based on the evaluation metrics in Section 4.1. The compared algorithms can be divided into three categories: manually designed CNNs, non-OA-based methods, and OA-based methods.

Specifically, the manually designed CNNs include the famous architectures maxout (Goodfellow et al., 2013), network in network (Lin et al., 2013), ALL-CNN (Springenberg et al., 2014), VGGNet (Simonyan and Zisserman, 2014), highway network (Srivastava et al., 2015), FractalNet (Larsson et al., 2016), ResNet (He et al., 2016), and DenseNet (Huang et al., 2017), which have shown state-of-the-art results in image classification tasks. For the non-OA-based methods, some representative algorithms are adopted, such as BO (Snoek et al., 2012), Auto-Keras (AK) (Jin et al., 2019), NAS (Zoph and Le, 2017), MetaQNN (Baker et al., 2017), EAS (Cai et al., 2018), and Block-QNN-S (Zhong et al., 2018). As for the OA-based algorithms, PSO, hierarchical evolution (Liu et al., 2017), large-scale evolution (Real et al., 2017), genetic CNN (Xie and Yuille, 2017), CGP-CNN (Suganuma et al., 2017), CNN-GA (Sun et al., 2020a), AE-CNN (Sun et al., 2020b), AE-CNN+E2EPP (Sun et al., 2020c), and SHEDA-CNN (Li JY et al., 2023) are selected for comparison with GPPSO. Because of the better performance of OA-based methods among automatic search algorithms, these methods are ideal for comparison with GPPSO. Due to the expensive computational cost and different experimental environments, some final results from the literature were cited directly for comparison, which is a convention in deep learning studies. In addition, to ensure the effectiveness of the comparisons, the classical algorithms in each category, such as ResNet50, the BO algorithm, and the PSO algorithm, were implemented in the experimental configuration of this study to enable a direct comparison with the various indicators of GPPSO.

    4.3 Algorithm settings

A 20-layer version of ResNet was used as the basic model of GPPSO in the experiments on the CIFAR datasets. There were 19 Conv layers and one pooling layer in ResNet20 for feature extraction, and one FC layer and a final softmax layer for image classification. According to the ResNet20 structure and the mixed-variable encoding strategy (Section 3.1), the model was encoded using 22-dimensional continuous variables and 32-dimensional discrete variables. The specific settings are shown in Table 1.

The GPPSO parameters were set as follows. First, considering the balance between computational cost and model performance, the number of IPs selected to build the GP model was 20. Second, the minimum number of iterations was set to 30; after that, the search finishes when a better CNN cannot be found within three generations. In addition, each candidate model obtained through the search process was trained for two epochs as a preliminary evaluation of its performance. As for the individual generation category, the parameters were configured according to the default values in PySwarms (Miranda, 2018): $c_1 = 0.5$, $c_2 = 0.3$, and $w = 0.9$. Finally, for the CNN training category, the optimizer was set as Adam, and the initial learning rate was set to $1 \times 10^{-3}$.
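For reference, configuring PySwarms with the stated coefficients and the recommended particle number looks roughly as follows; the objective here is a placeholder, whereas in GPPSO the cost would come from the GP surrogate's predicted fitness (e.g., one minus the predicted accuracy).

```python
import numpy as np
import pyswarms as ps

options = {"c1": 0.5, "c2": 0.3, "w": 0.9}  # values used in this study

def objective(x):
    """Placeholder cost for a batch of particles of shape (n_particles, dims);
    PySwarms minimizes, so a real run would return 1 - predicted accuracy."""
    return np.sum(x ** 2, axis=1)

optimizer = ps.single.GlobalBestPSO(n_particles=50, dimensions=5, options=options)
best_cost, best_pos = optimizer.optimize(objective, iters=100)
```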

In addition, data augmentation was applied before training using the conventional method through Keras. The experiments were conducted and evaluated using the Python programming language with the TensorFlow (Abadi et al., 2016) deep learning library, on a 2.30 GHz Intel Core i7-12700H CPU and an NVIDIA RTX 3080 Ti graphics card with 16 GB of memory.

    4.4 Comparisons with state-of-the-art methods

The effectiveness of the GPPSO algorithm was evaluated using two comparisons, as shown in Tables 2 and 3. The first part presents the comparison between GPPSO and state-of-the-art algorithms; the second part presents a comparison between GPPSO and its basic algorithms (BO and PSO).

As shown in Table 2, GPPSO required 0.04 and 1.70 GPU days to achieve 95.26% classification accuracy with 5.26×10⁶ parameters on the CIFAR-10 dataset and 76.36% classification accuracy with 4.44×10⁶ parameters on the CIFAR-100 dataset, respectively. Compared with state-of-the-art algorithms, GPPSO shows great performance in both classification accuracy and search time. First, compared with the manually designed models, GPPSO achieves at least 0.53% and at most 5.03% classification accuracy improvement on the CIFAR-10 dataset. On the CIFAR-100 dataset, GPPSO achieves at most a 24.36% accuracy improvement (over maxout) and is just 3.15% lower than the 20-layer version of ResNet, proving that instead of trying deeper and more complex network models, optimizing the hyperparameters of existing well-performing CNN models with GPPSO can obtain outstanding results. As for the number of parameters, the model searched by GPPSO generally has fewer parameters than directly connected networks such as VGGNet, but more than cross-layer connection models such as ResNet, due to the concatenation and other operations. Second, compared with the non-OA-based methods, GPPSO still achieves better performance. On the CIFAR-10 dataset, GPPSO achieves better classification accuracy than BO, AK, MetaQNN, and NAS by 2.53%, 7.56%, 2.34%, and 1.35%, respectively. As for EAS and Block-QNN-S, GPPSO achieves similar classification accuracy but greatly reduces the search time. This means that GPPSO needs less time and computational cost to search for better CNN models. For example, among the non-OA-based methods, NAS needs 22 400 GPU days to find a good CNN model and EAS requires at least 10 GPU days, but GPPSO needs only 0.04 GPU days. On the CIFAR-100 dataset, the accuracy of GPPSO is a little worse than that of Block-QNN-S, but still reaches a good 76.36%. GPPSO has more model parameters than the NAS and AK methods, but fewer than the other algorithms. Finally, compared with the OA-based methods, GPPSO still obtains competitive results. As shown in Table 2, for classification accuracy, GPPSO ranks fifth among the 10 algorithms on the CIFAR-10 dataset, with up to 13.18% higher accuracy than PSO and 1.57% lower accuracy than the first-ranked algorithm (CNN-GA). As for CIFAR-100, GPPSO achieves good performance and ranks sixth among the eight algorithms, 40.76% better than the last and only 3.91% lower than the first. However, compared with CNN-GA, GPPSO costs only about 0.11% and 0.10% of the GPU days on CIFAR-10 and CIFAR-100, respectively. Compared with the SHEDA-CNN method, GPPSO not only reduces the search time by 93.10% and 95.88%, but also reduces the model size by 51.65% and 76.18%, on the CIFAR-10 and CIFAR-100 datasets respectively. This means that GPPSO not only achieves good recognition performance, but also greatly reduces search time and model size. These advantages provide strong feasibility for the practical applications of deep learning. The above comparison results show the effectiveness and efficiency of GPPSO.

    Table 2 Comparisons with the manually designed CNNs, non-OA-based methods, and OA-based methods on CIFAR-10 and CIFAR-100 datasets

Table 3 presents a set of comparisons between GPPSO, ResNet20, the BO algorithm, and the PSO algorithm to show the outstanding performance. It should be noted that all the results in Table 3 were generated in the experimental environment of this study, and algorithm_ac denotes that the Ta-ReLU of Section 3.3 is included in the search space. Because the experiments were carried out under the same hardware conditions, minutes are used as the benchmark index for efficiency comparisons instead of GPU days. It can be seen in Table 3 that GPPSO_ac achieves the best test accuracy on CIFAR-10 and CIFAR-100, 3.26% and 10.73% better than that of the basic model ResNet20, respectively. When the automatic search methods are applied to the manually designed CNN ResNet20, the accuracy is improved by the BO algorithm and reduced by the PSO algorithm, and in both cases the test accuracy is lower than that of GPPSO. The number of parameters of the manually designed CNN is significantly smaller than those of the automatically searched models. Furthermore, the numbers of parameters in the automatically searched CNNs are of a similar order of magnitude (10⁶). We think this is due to the concatenation layers and other structures in the automatically searched networks. As for the search time, the BO algorithm needs the least time, 17 min, to search CNNs; GPPSO takes longer than BO, 42 min on average; and PSO takes the longest, averaging 100 min on CIFAR-10 and 202 min on CIFAR-100. After searching for the CNNs, we trained them for 200 epochs to obtain the final models. The accuracy-loss curves of training are shown in Fig. 8. It can be seen that the convergence speed on the CIFAR-100 dataset is lower than that on CIFAR-10, and the error value is higher than that of CIFAR-10, which indicates that the 20-layer basic model has limited capability for large-scale output categories. For the models with the Ta-ReLU function, the convergence rate is higher in the first 20 epochs (e.g., Figs. 8m and 8k), and GPPSO_ac has the best recognition accuracy on CIFAR-10 and CIFAR-100, which proves the effectiveness of the designed AF.

In conclusion, the comparisons with the basic algorithms of GPPSO and with state-of-the-art algorithms prove the effectiveness and efficiency of GPPSO.

    4.5 Ablation experiments

In the GPPSO algorithm, the initial GP model is constructed from a set of individuals, so the number of individuals may affect GPPSO performance. To verify the influence of the number of IPs, GPPSO was compared with variants using different numbers of IPs on CIFAR-10. During the experiments, the number of IPs ranged from 10 to 50 with a sampling interval of 10; the experimental results are shown in Table 4. It can be seen that with different numbers of IPs, the classification accuracy of all searched models is above 92.50%, and the search time increases almost linearly, with an additional 5 min required for each sampling interval increase. Furthermore, the classification accuracy initially increased and then decreased with an increasing number of IPs, with peak performance achieved when the number of IPs was 20. This indicates that the model searched by GPPSO does not achieve higher performance simply by increasing the number of IPs. We think the reason is that random IPs cannot precisely describe the trends of the Gaussian regression model, and the points obtained by the acquisition function during the search are more meaningful. In conclusion, the number of IPs can influence the search effectiveness of GPPSO, but GPPSO is not very sensitive to an increase in the number of IPs, and performance does not always improve as the number of IPs increases.

Another important parameter in GPPSO is the particle number (PN) in PSO. To investigate the influence of PN, a set of comparisons is given in Table 5, where GPPSO is compared with its variants using PNs from 10 to 100. As the PN increases, the classification accuracy remains similar: when PN = 30, 70, and 100, the accuracy exceeds 93%, and when PN = 50, the performance of GPPSO is at its best (above 95%). This is because GPPSO is not sensitive to the PN. In addition, with an increase in the PN, GPPSO's search time increases as well. Therefore, considering the performance and the computational costs, PN = 50 is suitable and is recommended for GPPSO.

    Table 3 Comparisons with the basic algorithms of GPPSO on CIFAR-10 and CIFAR-100 datasets

    Fig.8 Accuracy-loss curves of training and validation on the CIFAR-10 and CIFAR-100 datasets: (a)ResNet20-10; (b) ResNet20-100; (c) BO-10; (d) BO-100; (e) BO-ac-10; (f) BO-ac-100; (g) PSO-10; (h) PSO-100; (i)PSO-ac-10; (j) PSO-ac-100; (k) GPPSO-10; (l) GPPSO-100; (m) GPPSO-ac-10; (n) GPPSO-ac-100

In the GPPSO process, after evaluation by the surrogate-assisted model, qualified models are evaluated by training. Hence, to reduce time consumption and computational cost, only a small number of training epochs $T$ is used, and choosing $T$ is a key issue for the effectiveness of GPPSO. Therefore, an experiment with different values of $T$ was carried out to study its influence on performance. Considering the hardware performance of the experiments and the time complexity of practical applications, GPPSO was used to search for optimal CNNs with the training epoch $T$ ranging from 1 to 5. Table 6 shows the experimental results for different values of $T$ on CIFAR-10. To intuitively compare the impact of $T$ on model performance, a comparison measure is designed as shown in Eq. (12):

$$CE_T = \frac{\text{acc}_T - \text{acc}_{T-1}}{\text{time}_T - \text{time}_{T-1}}, \tag{12}$$

Table 4 Comparison with different numbers of initial points on CIFAR-10

    Table 5 Comparison with different particle numbers(PNs) on CIFAR-10

where $CE_T$ denotes the ratio between the accuracy (acc) difference and the search time difference. It can be seen that when the number of epochs increases from 1 to 2, the classification accuracy improves by 2.75%, and $CE_T$ is 0.17 when $T = 2$. Then, as the number of epochs changes from 2 to 5, the value of $CE_T$ drops from 0.03 to 0.01, and is approximately 0 when $T = 4$. This means that continuing to increase the number of epochs will not improve performance appreciably while consuming considerable computational resources. In conclusion, training with $T = 2$ epochs in the search process yields the maximum GPPSO efficiency.

    Table 6 Comparison with different epochs on CIFAR-10

    4.6 Application on MF diagnosis

To prove the effectiveness of GPPSO in real-world problems, an application concerning MF diagnosis in industrial scenarios is presented in this subsection. Metal materials are essential in modern industrial fields such as aerospace, transportation, and metallurgical manufacturing. In complex environments, metal materials in service suffer failures such as fracture, corrosion, and fatigue, which cause heavy economic losses and casualties. Therefore, to achieve accurate MF recognition automatically and efficiently, AI methods such as CNNs can be used, which makes this task suitable for testing the performance of GPPSO.

In the experiments, a deep learning metal fracture classification (DMFC) model is designed to recognize MFs. The model structure is shown in Fig. 9.

    Fig.9 DMFC model structure (FC: fully connected;BN: batch normalization)

The Conv kernels in the DMFC model are all 3×3, and three pooling layers are constructed using one max pooling layer and two average pooling layers. A flatten layer is added to reduce the dimension of the output feature maps produced by the last Conv layer, serving as a transition between the Conv layers and the FC layers. The first FC layer has 128 neurons, followed by a batch normalization (BN) layer and a dropout layer. The output layer has three neurons, corresponding to the three types of MFs. DMFC is the basic model for the MF recognition task. The GPPSO algorithm searches the Conv kernel size, Conv kernel number, pooling layer type, AF type, and other hyperparameters in the DMFC to obtain GPPSO-DMFC.
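A minimal Keras sketch of the DMFC topology as described (all-3×3 convolutions, one max and two average pooling layers, flatten, an FC-128 layer with BN and dropout, and a three-way softmax) is shown below; the Conv filter counts, dropout rate, and exact layer order are assumptions, since Fig. 9 is not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dmfc(input_shape=(512, 512, 3), n_classes=3):
    """DMFC sketch: filter counts and precise ordering are assumptions."""
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),                  # the single max pooling layer
        layers.Conv2D(64, 3, activation="relu"),
        layers.AveragePooling2D(),              # first average pooling layer
        layers.Conv2D(128, 3, activation="relu"),
        layers.AveragePooling2D(),              # second average pooling layer
        layers.Flatten(),                       # transition from Conv to FC
        layers.Dense(128, activation="relu"),   # first FC layer: 128 neurons
        layers.BatchNormalization(),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),  # three MF categories
    ])

model = build_dmfc()
```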

To test the effectiveness, comparisons among VGG, ResNet, DenseNet, DMFC, and GPPSO-DMFC are given in Table 7. The effectiveness and efficiency of the algorithms are measured by accuracy and training time, respectively. As shown in Table 7, the proposed DMFC model achieved an accuracy of 94.94% with a training time of 11 s/epoch. When state-of-the-art methods were used, the recognition accuracy of MF was significantly improved, but the training time increased. For example, DenseNet achieved an accuracy of 98.03%, but required a training time of 93 s/epoch. After using the GPPSO algorithm to search hyperparameters for the DMFC model, the resulting model achieved the highest accuracy of 98.16% and the shortest training time of 9 s/epoch, indicating the effectiveness and efficiency of GPPSO. Therefore, this application shows that GPPSO has potential for solving real-world tasks.

    Table 7 Results of the metal fracture diagnosis

    5 Conclusions

In this paper, a novel method, GPPSO, was proposed for efficient optimization of CNN hyperparameters. First, GPPSO encoded different types of hyperparameters in CNNs using a mixed-variable encoding strategy to deal with the mixed-variable problem. Then, the HSA model based on GP and PSO was designed to save computational costs. Finally, a novel AF, Ta-ReLU, was suggested to improve model performance and ensure the convergence rate. Experiments on two benchmark datasets have proven the efficiency of GPPSO. Furthermore, a series of ablation experiments was used to investigate parameter sensitivity. We also presented a case study of an industrial scenario to demonstrate the effectiveness of GPPSO in real-world tasks. For further work, we plan to (1) search for CNN hyperparameters and architectures jointly and (2) design a more efficient OA for obtaining CNNs to handle engineering problems in practical applications.

    Contributors

Han YAN designed the research and performed the experiments. Han YAN and Chongquan ZHONG implemented the software and drafted the paper. Yuhu WU, Liyong ZHANG, and Wei LU revised and finalized the paper.

    Compliance with ethics guidelines

    Han YAN, Chongquan ZHONG, Yuhu WU, Liyong ZHANG, and Wei LU declare that they have no conflict of interest.

    Data availability

Due to the nature of this research, the authors did not agree for their data to be shared publicly, so supporting data are not available.
