
    A Spectral Convolutional Neural Network Model Based on Adaptive Fick’s Law for Hyperspectral Image Classification

    2024-05-25 14:38:38
    Computers, Materials & Continua, 2024, Issue 4

    Tsu-Yang Wu, Haonan Li, Saru Kumari and Chien-Ming Chen

    1 School of Artificial Intelligence (School of Future Technology), Nanjing University of Information Science & Technology, Nanjing, 210044, China

    2 College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, China

    3 Department of Mathematics, Chaudhary Charan Singh University, Meerut, Uttar Pradesh, 250004, India

    ABSTRACT Hyperspectral image classification stands as a pivotal task within the field of remote sensing, yet achieving high-precision classification remains a significant challenge. In response to this challenge, a Spectral Convolutional Neural Network model based on the Adaptive Fick’s Law Algorithm (AFLA-SCNN) is proposed. The Adaptive Fick’s Law Algorithm (AFLA) is a novel metaheuristic algorithm introduced herein, encompassing three new strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. With the adaptive weight factor, the algorithm can adjust its weights according to the change in the number of iterations, improving its performance. Gaussian mutation helps the algorithm avoid falling into local optimal solutions and improves its search capability. The probability update strategy improves the exploitability and adaptability of the algorithm. Within the AFLA-SCNN model, AFLA is employed to optimize two hyperparameters in the SCNN model, namely “numEpochs” and “miniBatchSize”, to attain their optimal values. AFLA’s performance is initially validated across 28 functions in 10D, 30D, and 50D for CEC2013 and 29 functions in 10D, 30D, and 50D for CEC2017. Experimental results indicate AFLA’s marked performance superiority over nine other prominent optimization algorithms. Subsequently, the AFLA-SCNN model was compared with the Spectral Convolutional Neural Network model based on Fick’s Law Algorithm (FLA-SCNN), the Spectral Convolutional Neural Network model based on Harris Hawks Optimization (HHO-SCNN), the Spectral Convolutional Neural Network model based on Differential Evolution (DE-SCNN), the Spectral Convolutional Neural Network (SCNN) model, and the Support Vector Machines (SVM) model using the Indian Pines dataset and the Pavia University dataset. The experimental results show that the AFLA-SCNN model outperforms the other models in terms of Accuracy, Precision, Recall, and F1-score on both Indian Pines and Pavia University. The Accuracy of the AFLA-SCNN model reached 99.875% on Indian Pines and 98.022% on Pavia University. In conclusion, the proposed AFLA-SCNN model significantly enhances the precision of hyperspectral image classification.

    KEYWORDS Adaptive Fick’s law algorithm; spectral convolutional neural network; metaheuristic algorithm; intelligent optimization algorithm; hyperspectral image classification

    1 Introduction

    Hyperspectral images (HSIs) have found extensive applications in various fields such as remote sensing, establishing themselves as a focal point within the remote sensing domain [1–3]. In addition to spatial resolution, HSIs possess spectral resolution [4]. HSIs are acquired by diverse hyperspectral sensors, which capture tens to hundreds of spectral bands. While obtaining surface image information, these sensors also acquire spectral information, representing a fusion of spectral and imaging data [5]. HSIs can be used for land classification, distinguishing different types of objects and land cover on the surface. By analyzing spectral features, accurate classification of vegetation types, buildings, and other targets can be achieved.

    With the advancement of remote sensing technology, the acquisition of hyperspectral image data has significantly increased. These datasets typically encompass hundreds or even thousands of spectral bands. Traditional image processing and classification techniques have proven insufficient for effectively addressing these challenges, leading to the gradual emergence of machine learning and deep learning [6,7]. Models such as Random Forests, K-Nearest Neighbors (KNN), and Support Vector Machines (SVM), among others, have been widely utilized for HSI classification [8,9]. Among these, SVM has gained extensive application, resulting in numerous variations. For instance, Okwuashi and Ndehedehe [10] presented the Deep Support Vector Machine (DSVM), employing four distinct kernel functions within the DSVM framework.

    With the continuous advancement of deep learning, an increasing number of researchers have employed neural networks for HSI classification [11]. In 2020, Hong et al. [12] introduced a novel mini-batch Graph Convolutional Network (miniGCN) to address HSI classification, demonstrating the superior performance of the miniGCN model over CNN and GCN models. In 2021, Ghaderizadeh et al. [13] utilized a hybrid 3D-2D CNN for HSI classification, illustrating the superior performance of the hybrid CNN model compared to 2D-CNN and 3D-CNN models. In 2022, Jia et al. [14] proposed a Graph-in-Graph Convolutional Network (GiGCN) for HSI classification, demonstrating its efficacy through experimental validation. In 2023, Ge et al. [15] introduced a dual-branch convolutional neural network equipped with a polarized self-attention mechanism to investigate HSI classification, validating the effectiveness of the proposed network across multiple public datasets.

    Despite the utilization of advanced neural network technologies in recent studies, the selection of neural network hyperparameters involves a certain degree of empiricism and randomness. To identify the most suitable hyperparameters, researchers have begun integrating intelligent optimization algorithms with neural networks [16]. In recent years, intelligent optimization algorithms have developed rapidly [17–20]. In 2020, Banadkooki et al. [21] combined Artificial Neural Networks (ANN) with ALO, BA, and PSO to establish ANN-ALO, ANN-BA, and ANN-PSO models for the prediction of suspended sediment load. In 2021, Nikbakht et al. [22] applied a Genetic Algorithm to neural networks to discover optimal hyperparameter values, demonstrating its effectiveness on engineering problems. In 2022, Fan et al. [23] introduced a novel Hybrid Sparrow Search Algorithm (HSSA) to address hyperparameter optimization in models, with experimental results confirming the method’s efficacy. In 2023, Falahzadeh et al. [24] proposed a model combining Deep Convolutional Neural Networks with Grey Wolf Optimization (GWO) to optimize neural network hyperparameters. Numerous studies thus indicate that intelligent optimization algorithms can effectively optimize neural network hyperparameters.

    The Fick’s Law Algorithm (FLA), proposed by Hashim et al. [25] in 2022, is a novel intelligent optimization algorithm. Its inspiration comes from Fick’s law in physics.

    Within a year of its publication, improvements to the Fick’s Law Algorithm were gradually proposed. Alghamdi et al. [26] improved the FLA using Rosenbrock’s direct rotation method and applied the improved FLA to the scheduling and management of energy hubs. Mehta et al. [27] improved the FLA using a quasi-oppositional-based approach and applied the improved FLA to mechanical design problems. However, these improved Fick’s Law Algorithms cannot adjust weights based on changes in the number of iterations, which makes our research particularly important.

    To enhance the accuracy of HSI classification and improve the performance of neural networks, we propose a Spectral Convolutional Neural Network model based on the Adaptive Fick’s Law Algorithm (AFLA-SCNN). This model utilizes AFLA to optimize two hyperparameters within the SCNN, thereby enhancing the performance of the SCNN and consequently improving the accuracy of HSI classification. The primary contributions of this paper are outlined as follows:

    1. Introduction of the Adaptive Fick’s Law Algorithm (AFLA) as an improved version of the FLA. AFLA incorporates three novel strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. Moreover, comparative experiments were carried out between AFLA and nine widely recognized intelligent optimization algorithms on the CEC2013 and CEC2017 benchmark suites.

    2. Proposal of the Spectral Convolutional Neural Network model based on the Adaptive Fick’s Law Algorithm (AFLA-SCNN). This model employs AFLA to optimize two hyperparameters, “numEpochs” and “miniBatchSize”, within the SCNN. Upon obtaining the optimal parameter values, they are input into the SCNN for HSI classification. Additionally, comparative experiments were conducted among the AFLA-SCNN, FLA-SCNN, HHO-SCNN, DE-SCNN, SCNN, and SVM models on the Indian Pines dataset and the Pavia University dataset.

    The structure of this paper is outlined as follows: Section 2 introduces related work on FLA and SCNN. Section 3 details AFLA and provides pseudocode. Section 4 elaborates on the construction and detailed workflow of the AFLA-SCNN model. Section 5 conducts performance verification experiments on AFLA. Section 6 assesses the performance of the AFLA-SCNN model in HSI classification. The final section summarizes the entire paper.

    2 Related Work

    2.1 Fick’s Law Algorithm (FLA)

    The inspiration for FLA is derived from Fick’s law, a fundamental physical law [28]. The mathematical expression of Fick’s first law is:

    $J = -D \frac{d\varphi}{dx}$  (1)

    In Eq. (1), J is the diffusion flux per unit area per unit time, D is the diffusion coefficient, and $d\varphi/dx$ is the concentration gradient per unit area. FLA has three stages: DO (diffusion operator), EO (equilibrium operator), and SSO (steady-state operator).
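    As a quick numeric illustration of Eq. (1), the following is a minimal sketch; the concentration profile here is assumed purely for demonstration, and D = 0.01 matches the value used later in the FLA description.

    ```python
    import numpy as np

    D = 0.01                       # diffusion coefficient (value reused in the DO stage below)
    x = np.linspace(0.0, 1.0, 11)  # sample positions
    C = np.exp(-5.0 * x)           # an assumed concentration profile, decreasing in x
    J = -D * np.gradient(C, x)     # Eq. (1): flux opposes the concentration gradient

    print(J[0])  # positive: molecules diffuse towards larger x, where C is lower
    ```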

    2.1.1 DO

    Eqs. (2)–(4) are used to control the direction of molecular movement, where X_m represents the position of molecule m, R represents a random number, and C1 is a constant equal to 5.

    We first describe the case where the concentration in region i is greater than that in region j, in which case the update formula for molecules moving from i to j is:

    C5 is a constant equal to 2. Additionally, the diffusion flux term is calculated by:

    where D is 0.01, and the remaining quantities are defined as follows:

    The remaining molecules in region i can be updated in three different ways, which are calculated using the following formulas:

    If region j has a higher concentration than region i, one can interchange i and j in the aforementioned process.

    2.1.2 EO

    In this stage, we focus on the updates for groups g1 and g2. The update for group g1 is as follows:

    where the first term is defined as follows:

    and the second term is defined as follows:

    Among them, the two terms represent the best fitness value in g1 and the fitness value of molecule m in g1, respectively.

    Therefore, to obtain the update formulas for group g2, simply replace g1 with g2 in the above process.

    2.1.3 SSO

    The update for group g1 is as follows:

    Moreover, the corresponding term is modified to:

    Therefore, to obtain the update formulas for group g2, simply replace g1 with g2 in the above process.

    2.2 Spectral Convolutional Neural Network (SCNN)

    The Spectral Convolutional Neural Network (SCNN) has become a powerful tool for image processing and analysis, especially for processing spectral information [29]. Unlike traditional convolutional neural networks that mainly focus on the spatial relationships between pixels, the SCNN considers both spatial and spectral relationships, providing a more comprehensive understanding of image content [30]. Hyperspectral images in particular pose unique challenges due to their high dimensionality and complexity. These images are composed of many bands, each with different spectral features, reflecting the characteristics of different materials and features in the scene [31]. Traditional image processing methods often struggle to effectively capture and utilize this spectral information, but the SCNN is specifically designed to handle this complexity. This spectral-spatial fusion allows the network to extract meaningful features from both the spatial layout of pixels and the spectral signatures they emit. Therefore, the SCNN is very effective in tasks such as classification, recognition, and segmentation [32].

    In summary, spectral convolutional neural networks have significant advantages in processing and analyzing hyperspectral images and spectral data. By considering the spatial and spectral relationships between pixels, they can extract more comprehensive and meaningful features from the data. This spectral-spatial fusion not only improves the accuracy and precision of classification and recognition tasks, but also opens up new possibilities for advanced image processing and analysis applications in a wide range of fields such as remote sensing, environmental monitoring, and medical imaging.

    In this paper, the SCNN structure depicted in Fig. 1 is utilized. The SCNN has seventeen layers. The first layer is a 3D image input layer with an input size of 25×25×30. The second layer is a 3D convolutional layer with 8 kernels of size 3×3×7. The third layer is a ReLU layer. The fourth layer is a 3D convolutional layer with 16 kernels of size 3×3×5. The fifth layer is a ReLU layer. The sixth layer is a 3D convolutional layer with 32 kernels of size 3×3×3. The seventh layer is a ReLU layer. The eighth layer is a 3D convolutional layer with 8 kernels of size 3×3×1. The ninth layer is a ReLU layer. The tenth layer is a fully connected layer with an output size of 256. The eleventh layer is a ReLU layer. The twelfth layer is a dropout layer with a probability of 0.4. The thirteenth layer is a fully connected layer with an output size of 128. The fourteenth layer is a dropout layer with a probability of 0.4. The fifteenth layer is a fully connected layer with an output size of 16. The sixteenth layer is a Softmax layer, and the seventeenth layer is the classification output layer.

    Figure 1: Structure of SCNN
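    For concreteness, the layer stack above can be sketched in code. This is a hedged PyTorch re-expression (the paper’s experiments ran in MATLAB); stride 1, zero padding, and a (batch, channel, spectral, height, width) tensor layout are assumptions not stated in the paper.

    ```python
    import torch
    import torch.nn as nn

    # Layers 2-17 of the SCNN described above; layer 1 is the 25x25x30 input itself.
    scnn = nn.Sequential(
        nn.Conv3d(1, 8, kernel_size=(7, 3, 3)),    # layer 2: 8 kernels, 3x3 spatial x 7 spectral
        nn.ReLU(),                                 # layer 3
        nn.Conv3d(8, 16, kernel_size=(5, 3, 3)),   # layer 4: 16 kernels of 3x3x5
        nn.ReLU(),                                 # layer 5
        nn.Conv3d(16, 32, kernel_size=(3, 3, 3)),  # layer 6: 32 kernels of 3x3x3
        nn.ReLU(),                                 # layer 7
        nn.Conv3d(32, 8, kernel_size=(1, 3, 3)),   # layer 8: 8 kernels of 3x3x1
        nn.ReLU(),                                 # layer 9
        nn.Flatten(),
        nn.LazyLinear(256),                        # layer 10: fully connected, 256 outputs
        nn.ReLU(),                                 # layer 11
        nn.Dropout(0.4),                           # layer 12
        nn.Linear(256, 128),                       # layer 13: fully connected, 128 outputs
        nn.Dropout(0.4),                           # layer 14
        nn.Linear(128, 16),                        # layer 15: one output per class
        nn.Softmax(dim=1),                         # layers 16-17: class probabilities
    )

    out = scnn(torch.randn(2, 1, 30, 25, 25))      # -> torch.Size([2, 16])
    ```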

    3 Adaptive Fick’s Law Algorithm (AFLA)

    The initial FLA was considered to have limitations, mainly its tendency to fall into local optima and to converge prematurely before reaching the global optimal solution. These issues may lead to suboptimal results and limit the effectiveness of the algorithm in complex optimization problems. Solving these issues is crucial for improving the performance and reliability of FLA.

    In this paper, we introduce an enhanced version of FLA called AFLA. AFLA addresses the limitations of the initial FLA by combining three innovative strategies. Firstly, an adaptive weight factor is introduced. This factor dynamically adjusts weights based on the progress of the optimization process. By doing so, AFLA can better balance exploration and exploitation of the search space, reducing the chance of falling into local optima. Secondly, Gaussian mutation is integrated into AFLA. This mutation strategy introduces random perturbations into the current solution, allowing the algorithm to escape from local optimal regions. The Gaussian mutation is carefully designed to maintain a balance between exploration and exploitation, ensuring that the search remains focused while still exploring promising areas. Finally, a probability update strategy is adopted. This strategy adjusts the probabilities associated with different update rules based on their historical performance. By doing so, AFLA can shift its search towards more successful rules, thereby accelerating convergence.

    In summary, these three strategies aim to enhance the exploration and exploitation capabilities of FLA, addressing its tendency to fall into local optima and converge prematurely [33–35]. By combining these strategies, AFLA is expected to demonstrate excellent performance on complex optimization problems and provide more accurate and reliable solutions.

    3.1 Adaptive Weight Factor

    To improve the algorithm’s local search capability, an adaptive weight factor denoted as ω has been introduced. The formula for calculating ω is depicted in Eq. (22).

    At the beginning of the iteration process, ω takes random values within the range (0,1); as the number of iterations increases, the range of values gradually decreases. The calculated values of ω for T = 1000 are presented as a curve in Fig. 2.

    Figure 2: Variation of ω with iterations for T = 1000
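    Since Eq. (22) is not reproduced above, the following sketch only mimics the described behaviour: random values in (0,1) whose admissible range shrinks with the iteration count. The linear shrinkage schedule is an assumption, not the paper’s exact formula.

    ```python
    import numpy as np

    def adaptive_weight(t, T, rng):
        """Random weight whose admissible range contracts as iteration t grows."""
        upper = 1.0 - t / T                    # assumed schedule; Eq. (22) may differ
        return rng.uniform(0.0, max(upper, 1e-6))

    rng = np.random.default_rng(0)
    T = 1000
    omegas = [adaptive_weight(t, T, rng) for t in range(1, T + 1)]  # cf. Fig. 2
    ```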

    We incorporate ω into the position update formulas for the DO phase, resulting in Eqs. (5) and (10) being transformed to:

    3.2 Gaussian Mutation

    The Gaussian mutation is based on the standard Gaussian (normal) density function:

    $Gaussian(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right)$

    In this paper, we set μ to 0 and σ to 1, respectively, and select x as a random number within the range (0, 3).
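    A minimal sketch of this mutation term follows, using the parameters stated above; how the term enters the position update (multiplicative scaling is assumed here) is given by the transformed equations below.

    ```python
    import numpy as np

    def gaussian_term(rng, mu=0.0, sigma=1.0):
        """Standard Gaussian density evaluated at a random x in (0, 3)."""
        x = rng.uniform(0.0, 3.0)
        return np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

    rng = np.random.default_rng(0)
    position = np.array([1.5, -0.3])
    mutated = position * gaussian_term(rng)  # multiplicative use is an assumption
    ```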

    We introduce Gaussian mutation into the position update formulas for the DO, EO, and SSO phases, resulting in Eqs. (5), (11), and (18) being transformed to:

    3.3 Probability Update Strategy

    In order to improve the algorithm’s local search and to prevent the algorithm from getting stuck in local optima, we have introduced a probability update strategy. In the DO phase, this strategy modifies the update formula for the remaining molecules in region i, resulting in Eq. (10) being transformed to:

    4 Proposed AFLA-SCNN Model

    To enhance the accuracy of HSI classification, we proposed the Adaptive Fick’s Law Algorithm (AFLA) and applied it to optimize the hyperparameters of the Spectral Convolutional Neural Network (SCNN). Consequently, we introduce a Spectral Convolutional Neural Network model based on the Adaptive Fick’s Law Algorithm (AFLA-SCNN). Within this model, AFLA dynamically adjusts weights based on the iteration count, enabling it to obtain the optimal hyperparameters for the SCNN.

    Specifically, we employed AFLA to optimize the hyperparameters “numEpochs” and “miniBatchSize” within the SCNN, acquiring their optimal values through AFLA. “numEpochs” represents the total number of training epochs, i.e., the number of times the model traverses the dataset. An excessively large “numEpochs” may lead to overfitting, where the model excels on training data but performs poorly on new, unseen data. Conversely, too few epochs may result in underfitting, causing inadequate exploration of the training data’s features and patterns. “miniBatchSize” refers to the number of samples used for each weight update. Our model is trained with mini-batch gradient descent, which updates weights using a subset of training samples rather than the entire training set. An oversized “miniBatchSize” might slow down training, while an undersized one could lead to training instability or suboptimal model performance. Thus, selecting appropriate values for “numEpochs” and “miniBatchSize” is crucial for the network model [36–38].
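    To make the roles of the two hyperparameters concrete, here is a minimal sketch of mini-batch training on a toy linear model; the model, loss, and learning rate are placeholders, not the SCNN.

    ```python
    import numpy as np

    def train(X, y, num_epochs, mini_batch_size, lr=0.01, seed=0):
        """Toy linear least-squares model trained with mini-batch gradient descent."""
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        for _ in range(num_epochs):                  # "numEpochs": full passes over the data
            order = rng.permutation(len(X))          # reshuffle each epoch
            for start in range(0, len(X), mini_batch_size):
                idx = order[start:start + mini_batch_size]  # "miniBatchSize" samples per update
                grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
                w -= lr * grad                       # update from this mini-batch only
        return w

    rng = np.random.default_rng(1)
    X = rng.normal(size=(512, 8))
    y = X @ np.ones(8)
    w = train(X, y, num_epochs=155, mini_batch_size=176)  # the IP optima reported in Section 6.4
    ```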

    The flowchart of AFLA-SCNN is depicted in Fig. 3. In this figure, the AFLA-SCNN model is divided into two parts, with the flow of AFLA on the left and the flow of the SCNN within the dashed line on the right. When calculating the fitness value of each solution in AFLA, the SCNN model is invoked, and the complement of the accuracy achieved by the SCNN model is used as the fitness value of each solution in AFLA. Subsequently, the model calculates the TF value and invokes different operators based on the TF value. Then, the fitness value is calculated again. The result is output after the maximum number of iterations is reached. A detailed description of the process follows below:

    1. Initialize AFLA. Set the number of solutions to be computed, the number of iterations, the dimension, and the upper and lower bounds for AFLA.

    2. Compute the fitness value for each solution. Invoke the SCNN network: load the dataset, preprocess the training data, create the SCNN classification network, train the network, and finally obtain the classification accuracy. Set the fitness value to the complement of the accuracy achieved by the SCNN network (a minimal sketch of this evaluation is given after Fig. 3).

    3. Compute the value of TF. If TF falls below the lower threshold, proceed to the diffusion operator; if it lies between the thresholds, proceed to the equilibrium operator; if TF > 1, proceed to the steady-state operator.

    4. Recalculate the fitness value for each solution, again invoking the SCNN network.

    5. Check whether the maximum iteration count has been reached. If not, return to Step 3; otherwise, output the best solution.

    Figure 3: The flowchart of the AFLA-SCNN model
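    A minimal sketch of the fitness evaluation in Step 2 follows, assuming accuracies in [0, 1]. Here `train_and_score_scnn` is a hypothetical stand-in for the SCNN training-and-evaluation run, and the integer rounding and clipping are assumptions, since both hyperparameters must be whole numbers within the bounds given later in Section 6.2.

    ```python
    import numpy as np

    LB = np.array([1, 32])      # lower bounds: numEpochs, miniBatchSize (Section 6.2)
    UB = np.array([200, 256])   # upper bounds

    def fitness(solution, train_and_score_scnn):
        """AFLA fitness: the complement of SCNN classification accuracy."""
        num_epochs, batch_size = np.clip(np.round(solution), LB, UB).astype(int)
        accuracy = train_and_score_scnn(num_epochs, batch_size)  # hypothetical helper
        return 1.0 - accuracy   # lower fitness = better classifier
    ```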

    5 Performance Validation Experiment of AFLA

    In this section, we validate the performance of AFLA through experiments on benchmark functions from CEC2013 and CEC2017 [39,40]. Additionally, we compare AFLA with the original FLA and several well-known metaheuristic algorithms.

    To ensure a fair and just comparison between the different algorithms, we standardized the experimental settings of all algorithms involved in the study. Specifically, we set the population size of all algorithms (the number of particles or individuals participating in the optimization process) to 50. This ensures that the comparison is unaffected by differences in population size. In addition, we limited the maximum number of iterations to 1000. This limitation means that each algorithm has a fixed number of opportunities to search for the optimal solution, ensuring fairness in the computing resources used. To define the search space, we set the lower limit of the search range to -100 and the upper limit to 100. These boundaries represent the minimum and maximum values that the algorithm can explore during the optimization process. By setting the same boundaries for all algorithms, we ensure that they run in the same search space for direct comparison. The other parameter settings are shown in Table 1, which provides a detailed list of specific parameter values for each algorithm. These parameter values were selected based on generally accepted values in the literature or through preliminary experiments to ensure optimal performance. To further ensure the reliability of the results, we ran each algorithm multiple times. Specifically, we ran each algorithm 50 times and compared the best fitness values obtained during these runs. The best fitness value refers to the lowest fitness value obtained in the 50 runs, since in intelligent optimization problems the goal is usually to minimize the fitness value. By comparing the best fitness values over multiple runs, we account for any potential anomalies or fluctuations in algorithm performance.
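    The run protocol above can be captured in a few lines; `algorithm` here is a placeholder with an assumed call signature, standing in for AFLA or any of the nine rivals.

    ```python
    import numpy as np

    def best_of_runs(algorithm, objective, runs=50, pop=50, iters=1000, dim=10,
                     lb=-100.0, ub=100.0):
        """Run an optimizer `runs` times and keep the lowest (best) fitness."""
        results = [algorithm(objective, pop=pop, iters=iters, dim=dim, lb=lb, ub=ub)
                   for _ in range(runs)]
        return min(results)
    ```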

    Table 1: Parameter settings

    In terms of evaluation criteria, we focused on the CEC2013 and CEC2017 benchmark functions, which are widely used to evaluate the performance of optimization algorithms. We conducted comparative experiments on the 10D, 30D, and 50D versions of these benchmark functions, where “D” denotes the dimension of the problem. By evaluating algorithms across different dimensions, we can assess their scalability and performance in high-dimensional spaces. In CEC2013 and CEC2017, the smaller the fitness value an algorithm achieves, the better its performance. This evaluation criterion aligns with the goal of most optimization problems: to find the optimal solution with the lowest cost or highest quality.

    Our goal is to provide a fair and objective comparison of AFLA with the other benchmark algorithms by following this standardized experimental setup and evaluation criteria. The results obtained from these experiments provide insights into the performance of AFLA.

    5.1 Experiments on CEC2013

    In this section, we evaluate the proposed AFLA on the CEC2013 benchmark set, which comprises 28 functions [49]. After adjusting the objective values of all functions to zero, we conduct comparative experiments evaluating AFLA and nine other algorithms across dimensions of 10D, 30D, and 50D.

    5.1.1 Result of 10D

    Table 2 shows the performance of AFLA and nine other algorithms on the 10D benchmark functions of CEC2013. In this table, the symbol “+” means that AFLA outperforms the respective algorithm, “≈” signifies that AFLA performs comparably to the algorithm, and “-” indicates that AFLA performs worse than the algorithm. The last row of the table exhibits the statistics for each algorithm across the 28 functions.
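    The “+”, “≈”, and “-” tallies can be reproduced by a comparison like the following; the tolerance defining “comparable” is an assumption, since the paper does not state its tie criterion.

    ```python
    import numpy as np

    def tally(afla_scores, rival_scores, tol=1e-8):
        """Count wins (+), ties (~), and losses (-) for AFLA over per-function scores."""
        afla = np.asarray(afla_scores)
        rival = np.asarray(rival_scores)
        wins = int(np.sum(afla < rival - tol))           # "+": AFLA achieves lower fitness
        ties = int(np.sum(np.abs(afla - rival) <= tol))  # "~": comparable within tolerance
        losses = int(np.sum(afla > rival + tol))         # "-": rival achieves lower fitness
        return wins, ties, losses
    ```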

    Table 2: Performance of ten algorithms on CEC2013 in 10D

    From Table 2, it is evident that AFLA surpasses FLA in 26 functions, outperforms HHO in 27 functions, exceeds MFO in 20 functions, outshines SCA in 27 functions, prevails over WOA in all 28 functions, surpasses GSA in 21 functions, outperforms AVOA in 21 functions, performs better than DE in 22 functions, and outperforms GA in all 28 functions. Overall, AFLA demonstrates commendable performance across the 28 functions of CEC2013 in 10D.

    5.1.2 Result of 30D

    The performance of AFLA and nine comparative algorithms on the benchmark functions of CEC2013 in 30D is presented in Table 3.

    Table 3: Performance of ten algorithms on CEC2013 in 30D

    Table 3 shows that AFLA surpasses FLA in 14 functions, outperforms HHO in 27 functions, exceeds MFO in 22 functions, outshines SCA in 27 functions, prevails over WOA in 27 functions, surpasses GSA in 22 functions, performs better than AVOA in 18 functions, outperforms DE in 23 functions, and outperforms GA in all 28 functions. Overall, AFLA demonstrates commendable performance across the 28 functions of CEC2013 in 30D.

    5.1.3 Result of 50D

    The performance of AFLA and nine comparative algorithms on the 50D benchmark functions of CEC2013 is presented in Table 4.

    Table 4: Performance of ten algorithms on CEC2013 in 50D

    From Table 4, it is evident that AFLA outperforms FLA in 10 functions, surpasses HHO in 22 functions, exceeds MFO in 21 functions, outshines SCA in all 28 functions, surpasses WOA in 26 functions, performs better than GSA in 25 functions, outperforms AVOA in 14 functions, exceeds DE in 20 functions, and outperforms GA in all 28 functions. Overall, AFLA demonstrates commendable performance across the 28 functions of CEC2013 in 50D.

    5.2 Experiments on CEC2017

    In this section, we evaluate the proposed AFLA on the CEC2017 benchmark set, which comprises 29 functions [50]. After adjusting the objective values of all functions to zero, we conduct comparative experiments evaluating AFLA and nine other algorithms across dimensions of 10D, 30D, and 50D.

    5.2.1 Result of 10D

    The performance of AFLA and nine comparative algorithms on the benchmark functions of CEC2017 in 10D is presented in Table 5.

    Table 5: Performance of ten algorithms on CEC2017 in 10D

    From Table 5, it is evident that AFLA outperforms FLA in 28 functions, surpasses HHO in 28 functions, exceeds MFO in 22 functions, outshines SCA in all 29 functions, surpasses WOA in all 29 functions, performs better than GSA in 26 functions, outperforms AVOA in 24 functions, exceeds DE in 22 functions, and outperforms GA in all 29 functions. Overall, AFLA demonstrates commendable performance across the 29 functions of CEC2017 in 10D.

    5.2.2 Result of 30D

    The performance of AFLA and nine comparative algorithms on the benchmark functions of CEC2017 in 30D is presented in Table 6.

    Table 6: Performance of ten algorithms on CEC2017 in 30D

    From Table 6, it is evident that AFLA outperforms FLA in 16 functions, surpasses HHO in 28 functions, exceeds MFO in 27 functions, outshines SCA in all 29 functions, surpasses WOA in all 29 functions, performs better than GSA in 28 functions, outperforms AVOA in 22 functions, exceeds DE in 26 functions, and outperforms GA in all 29 functions. Overall, AFLA demonstrates commendable performance across the 29 functions of CEC2017 in 30D.

    5.2.3 Result of 50D

    The performance of AFLA and nine comparative algorithms on the 50D benchmark functions of CEC2017 is presented in Table 7.

    Table 7: Performance of ten algorithms on CEC2017 in 50D

    From Table 7, it is evident that AFLA outperforms FLA in 14 functions, surpasses HHO in 27 functions, exceeds MFO in 24 functions, outshines SCA in all 29 functions, surpasses WOA in all 29 functions, performs better than GSA in 27 functions, outperforms AVOA in 18 functions, exceeds DE in 24 functions, and outperforms GA in all 29 functions. Overall, AFLA demonstrates commendable performance across the 29 functions of CEC2017 in 50D.

    5.3 Discussion of Experimental Results

    In this experiment, we aimed to evaluate the performance of AFLA on the CEC2013 and CEC2017 benchmark sets and compare it with other algorithms to validate its effectiveness.

    According to the experimental results on CEC2013, AFLA has a significant advantage over the original FLA in 10D, a slight advantage in 30D, and no advantage in 50D. AFLA has a significant advantage over HHO, MFO, SCA, WOA, GSA, DE, and GA in 10D, 30D, and 50D. AFLA has a significant advantage over AVOA in 10D and 30D and a slight advantage in 50D. Therefore, the proposed AFLA has a significant performance advantage over most of the other algorithms on CEC2013, especially in low dimensions. In 50D, however, its advantage over FLA disappears and its advantage over AVOA is only slight.

    According to the experimental results on CEC2017, AFLA has a significant advantage over the original FLA in 10D, a slight advantage in 30D, and no advantage in 50D. The proposed AFLA has a significant advantage over HHO, MFO, SCA, WOA, GSA, AVOA, DE, and GA in 10D, 30D, and 50D. Therefore, the proposed AFLA has a significant performance advantage over all of the other algorithms except FLA on CEC2017; its advantage over FLA in 50D is insufficient.

    The experimental results indicate that AFLA demonstrates significant advantages in 10D, 30D, and 50D on CEC2013 and CEC2017, particularly excelling in lower dimensions. However, on a few functions or in specific dimensions, its performance falls slightly short. For instance, as the dimensionality increases, AFLA’s advantage over FLA is not as pronounced as anticipated, potentially due to the experimental settings. Additionally, the presence of stochastic elements in the experiments could affect the stability of the results.

    6 AFLA-SCNN Model Experimentation in HSI

    To enhance the precision of HSI classification, we propose the AFLA-SCNN model. To assess its performance, we utilize the widely used Indian Pines (IP) and Pavia University (PU) hyperspectral image datasets for validation.

    6.1 Dataset Description

    6.1.1 Indian Pines (IP)

    The Indian Pines dataset was acquired at the Indian Pines test site in northwestern Indiana, USA. It consists of 145×145 pixels, contains 220 spectral bands, and has an approximate spatial resolution of 20 m [51,52]. The RGB representation of this dataset is depicted in Fig. 4.

    Figure 4: RGB image of Indian Pines

    Furthermore, the dataset encompasses 16 categories of vegetation and terrain types, such as Alfalfa, Corn, and Woods, with specific classification details outlined in Table 8.

    Table 8: The statistical table of categories for Indian Pines

    6.1.2 Pavia University (PU)

    The Pavia University dataset is hyperspectral data acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) over Pavia, northern Italy [53,54]. The RGB representation of this data is shown in Fig. 5.

    Figure 5: RGB image of Pavia University

    6.2 Experimental Design

    Firstly, we conducted experimental validation using the AFLA-SCNN model on the IP and PU datasets. To examine the performance of this model more deeply, we conducted experiments on the same datasets using the FLA-SCNN, HHO-SCNN, DE-SCNN, SCNN, and SVM models. By comparing the experimental outcomes, we aimed to quantitatively assess and validate the superior classification accuracy of the AFLA-SCNN model.

    In the AFLA-SCNN model, we set the number of molecules for AFLA to 10 and the maximum iteration count to 50. Since we need to determine the optimal values of the two hyperparameters “numEpochs” and “miniBatchSize”, we set the dimension to 2. The lower and upper bounds for “numEpochs” are 1 and 200, respectively, while those for “miniBatchSize” are 32 and 256. Other parameter settings for the AFLA-SCNN, FLA-SCNN, HHO-SCNN, DE-SCNN, SCNN, and SVM models are detailed in Table 9.
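    A minimal sketch of how such a search could be initialized under the settings above follows; uniform random initialization is an assumption, as the paper does not describe its initializer.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_molecules, max_iter, dim = 10, 50, 2         # settings from this section
    lb = np.array([1.0, 32.0])                     # numEpochs, miniBatchSize lower bounds
    ub = np.array([200.0, 256.0])                  # upper bounds
    population = lb + rng.random((n_molecules, dim)) * (ub - lb)  # 10 (epochs, batch) pairs
    ```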

    Table 9: Parameter settings

    Table 9 clearly lists the parameters and settings of the different algorithms so that we can compare and analyze them; these are crucial for the performance and results of each algorithm. For each optimization algorithm, NP represents the number of particles or population size, i.e., the number of individuals simultaneously searching the solution space. IM represents the maximum number of iterations, i.e., the maximum number of rounds the algorithm runs, which determines to what extent the algorithm explores the search space. dim refers to the dimension of the problem, which is the length of the solution vector or the number of features. lb and ub respectively represent the lower and upper bounds of the solution vector, which limit the scope of the search space and ensure the validity of solutions. For the SCNN, SN represents the optimizer, i.e., the algorithm used to update model weights and parameters. LRI is the initial learning rate. LRDP and LRDF respectively represent the number of epochs between learning-rate drops and the factor by which the learning rate is reduced; these parameters dynamically adjust the learning rate during training to improve convergence speed and stability. M represents momentum, the contribution of the gradient step from the previous iteration to the current iteration. GT is a gradient threshold used to control the size of gradients, which may be used to prevent gradients from exploding or vanishing. NE represents the number of training epochs, i.e., the number of times the entire dataset is used to train the model. BSmini represents the mini-batch size, i.e., the number of samples used for each weight update. For the SVM, KF represents the kernel function, which determines how similarity between data points is computed and is crucial for handling nonlinear problems. BC represents the box constraint. SN represents the optimizer. OF represents the expected proportion of outliers in the training data, which helps the algorithm be more robust when dealing with noise or outliers. KS represents the kernel scale parameter.

    All experiments in this paper were conducted using MATLAB R2022b on a system equipped with an Intel(R) Core(TM) i9-11900 processor and 64 GB of RAM, running Windows 11.

    6.3 Evaluation Metrics

    In this section, we describe the performance metrics used to evaluate the methods in this experiment. The selected indicators aim to comprehensively evaluate the predictive ability of the model while considering its accuracy and reliability. We focus on four key performance indicators: Accuracy, Precision, Recall, and F1-score.

    Accuracy is a fundamental indicator that quantifies the proportion of correctly predicted samples among the total number of samples. It provides a rough overview of model performance, indicating how the model performs on the entire dataset. Accuracy is calculated by dividing the number of correctly predicted samples by the total number of samples, as shown in Eq. (30).

    Precision, on the other hand, focuses on the quality of the model’s positive predictions. It represents the proportion of samples predicted by the model as belonging to a certain category that truly belong to it. Precision is crucial in situations where false positives may have significant consequences. It is calculated by dividing the number of true positives (correctly predicted positive samples) by the sum of true positives and false positives (incorrectly predicted positive samples), as shown in Eqs. (31) and (32). Eq. (31) gives the Precision for the i-th class, while Eq. (32) describes the averaging over all samples.

    Recall complements Precision by examining the model’s ability to recognize all relevant samples. It represents the proportion of samples truly belonging to a certain category that the model correctly predicts as belonging to that category. Recall is particularly important when missing a positive prediction could have a significant impact. It is calculated by dividing the number of true positives by the sum of true positives and false negatives (positive samples incorrectly predicted as negative), as shown in Eqs. (33) and (34). Eq. (33) gives the Recall for class i, while Eq. (34) describes the averaging over all samples.

    Finally, the F1-score is the harmonic mean of Precision and Recall, combining their respective strengths. It aims to comprehensively evaluate the performance of the model by balancing Precision and Recall. The F1-score is calculated from Precision and Recall as shown in Eqs. (35) and (36). Eq. (35) gives the F1-score for class i, while Eq. (36) describes the averaging over all samples.
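    The four metrics can be computed from a confusion matrix as sketched below; macro-averaging over classes is assumed here for Eqs. (32), (34), and (36).

    ```python
    import numpy as np

    def metrics(y_true, y_pred, n_classes):
        """Accuracy, macro Precision, macro Recall, and macro F1 from label arrays."""
        cm = np.zeros((n_classes, n_classes), dtype=int)
        for t, p in zip(y_true, y_pred):
            cm[t, p] += 1                              # rows: true class, cols: predicted
        tp = np.diag(cm).astype(float)
        precision = tp / np.maximum(cm.sum(axis=0), 1)  # Eq. (31): TP / (TP + FP), per class
        recall = tp / np.maximum(cm.sum(axis=1), 1)     # Eq. (33): TP / (TP + FN), per class
        f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)  # Eq. (35)
        accuracy = tp.sum() / cm.sum()                  # Eq. (30)
        return accuracy, precision.mean(), recall.mean(), f1.mean()
    ```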

    By using these performance indicators, we aim to comprehensively understand the predictive ability of the proposed method. The results obtained from these calculations reveal the strengths and weaknesses of the model, enabling informed decisions regarding its application and potential improvements.

    6.4 Experimental Results and Discussion

    6.4.1 Experimental Results and Discussion of AFLA-SCNN

    After running AFLA-SCNN on the Indian Pines dataset, the obtained optimal values for “numEpochs” and “miniBatchSize” were 155 and 176, respectively. The best fitness value was 9.76E-4; since the fitness is the complement of the classification accuracy, this corresponds to an accuracy of 99.90%. However, the predictive results of the SCNN model exhibit some variability, so inputting the optimal values for “numEpochs” and “miniBatchSize” into the SCNN model may not reproduce exactly the same accuracy of 99.90%.

    Inputting the optimal values for “numEpochs” and “miniBatchSize” obtained by the AFLA-SCNN model on the Indian Pines dataset into the SCNN model resulted in an Accuracy of 99.875%, a Precision of 99.681%, a Recall of 99.723%, and an F1-score of 99.686%. In addition, the ground-truth classification image and the predicted classification image are illustrated in Fig. 6, where Fig. 6a shows the ground-truth classification image and Fig. 6b the predicted classification image. The predicted classification image closely resembles the ground truth, displaying only minor differences.

    After running AFLA-SCNN on the Pavia University dataset, the obtained optimal values for “numEpochs” and “miniBatchSize” were 150 and 225, respectively. The best fitness value was 1.65E+0; since the fitness is the complement of the classification accuracy (expressed here as a percentage), this corresponds to an accuracy of 98.35%.

    Figure 6: Ground-truth classification image and predicted classification image on IP using the AFLA-SCNN model

    Inputting the optimal values for “numEpochs” and “miniBatchSize” obtained by the AFLA-SCNN model on the Pavia University dataset into the SCNN model resulted in an Accuracy of 98.022%, a Precision of 92.541%, a Recall of 94.063%, and an F1-score of 93.273%. In addition, the ground-truth classification image and the predicted classification image are illustrated in Fig. 7, where Fig. 7a shows the ground-truth classification image and Fig. 7b the predicted classification image. The predicted classification image closely resembles the ground truth, displaying only minor differences.

    Figure 7: Ground-truth classification image and predicted classification image on PU using the AFLA-SCNN model

    6.4.2 Experimental Results and Discussion of Comparative Experiment

    In this section, we compare the AFLA-SCNN, FLA-SCNN, HHO-SCNN, DE-SCNN, SCNN, and SVM models using the evaluation metrics.

    Table 10 shows the Accuracy, Precision, Recall, and F1-score of the AFLA-SCNN, FLA-SCNN, HHO-SCNN, DE-SCNN, SCNN, and SVM models on Indian Pines. The results of the AFLA-SCNN, FLA-SCNN, HHO-SCNN, and DE-SCNN models are obtained by applying the optimized hyperparameters “numEpochs” and “miniBatchSize” to the SCNN model. The optimized values of “numEpochs” and “miniBatchSize” on Indian Pines are 199 and 256 for the FLA-SCNN model, 78 and 191 for the HHO-SCNN model, and 54 and 189 for the DE-SCNN model.

    Table 10: Comparison of AFLA-SCNN model with other models on Indian Pines

    Table 10 makes it evident that the AFLA-SCNN model outperforms the FLA-SCNN, HHO-SCNN, DE-SCNN, SCNN, and SVM models on all four evaluation metrics: Accuracy, Precision, Recall, and F1-score. Compared to the SCNN model, the AFLA-SCNN model improved by 0.65% on Accuracy, 1.06% on Precision, 1.942% on Recall, and 1.574% on F1-score. Compared to the SVM model, it improved by 3.897% on Accuracy, 4.277% on Precision, 6.241% on Recall, and 5.599% on F1-score. Compared to the HHO-SCNN model, it improved by 0.27% on Accuracy, 0.729% on Precision, 0.054% on Recall, and 0.391% on F1-score. Compared to the DE-SCNN model, it improved by 0.293% on Accuracy, 1.609% on Precision, 0.621% on Recall, and 1.184% on F1-score.

    Compared to the FLA-SCNN model, however, the AFLA-SCNN model improved by only 0.433% on Accuracy, 0.287% on Precision, 0.821% on Recall, and 0.512% on F1-score. Since the improvement over the FLA-SCNN model is less than 1%, there is still room for improving the AFLA-SCNN model in the future.

    Table 11 shows the Accuracy, Precision, Recall, and F1-score of the AFLA-SCNN, FLA-SCNN, HHO-SCNN, DE-SCNN, SCNN, and SVM models on Pavia University. The results of the AFLA-SCNN, FLA-SCNN, HHO-SCNN, and DE-SCNN models are obtained by applying the optimized hyperparameters “numEpochs” and “miniBatchSize” to the SCNN model. The optimized values of “numEpochs” and “miniBatchSize” on Pavia University are 126 and 243 for the FLA-SCNN model, 89 and 192 for the HHO-SCNN model, and 78 and 188 for the DE-SCNN model.

    Table 11: Comparison of AFLA-SCNN model with other models on Pavia University

    Table 11 makes it evident that the AFLA-SCNN model outperforms the FLA-SCNN, HHO-SCNN, DE-SCNN, SCNN, and SVM models on all four evaluation metrics: Accuracy, Precision, Recall, and F1-score. Compared to the SCNN model, the AFLA-SCNN model improved by 1.26% on Accuracy, 4.075% on Precision, 2.67% on Recall, and 3.433% on F1-score. Compared to the SVM model, it improved by 2.347% on Accuracy, 6.043% on Precision, 8.172% on Recall, and 8.062% on F1-score. Compared to the HHO-SCNN model, it improved by 0.903% on Accuracy, 2.59% on Precision, 1.909% on Recall, and 2.254% on F1-score. Compared to the DE-SCNN model, it improved by 0.94% on Accuracy, 2.66% on Precision, 1.699% on Recall, and 2.248% on F1-score.

    Compared to the FLA-SCNN model, however, the AFLA-SCNN model improved by only 0.821% on Accuracy, 2.133% on Precision, 1.73% on Recall, and 1.955% on F1-score. Since the Accuracy improvement over the FLA-SCNN model is less than 1%, there is still room for improving the AFLA-SCNN model in the future.

    7 Conclusion

    The aim and focus of this paper is to enhance the accuracy of HSI classification. To achieve this, we propose a Spectral Convolutional Neural Network model based on the Adaptive Fick’s Law Algorithm (AFLA-SCNN). This model incorporates our devised Adaptive Fick’s Law Algorithm (AFLA), in which we introduce three novel strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. AFLA is then integrated with the SCNN model, leading to the AFLA-SCNN model. In this model, we use AFLA to optimize the two hyperparameters “numEpochs” and “miniBatchSize” in the SCNN model, and the optimal values of these two hyperparameters are then used for HSI classification.

    In the experimental part, we first validated the performance of AFLA by comparing it with nine well-known intelligent optimization algorithms on the 28 functions of CEC2013 and the 29 functions of CEC2017 in 10D, 30D, and 50D. The experimental results show that AFLA has clear performance advantages over the other optimization algorithms. Subsequently, we conducted comparative experiments between the AFLA-SCNN model and the FLA-SCNN, HHO-SCNN, DE-SCNN, SCNN, and SVM models on the Indian Pines and Pavia University datasets. The experimental results show that the AFLA-SCNN model outperforms the other models in terms of Accuracy, Precision, Recall, and F1-score on both datasets; its Accuracy reached 99.875% on Indian Pines and 98.022% on Pavia University, highlighting the performance of the proposed AFLA-SCNN model in hyperspectral image classification. However, the improvement of the AFLA-SCNN model over the FLA-SCNN model is less than 1% in Accuracy, indicating that the model still has room for improvement.

    In conclusion, the proposed AFLA-SCNN model demonstrates a significant improvement in the accuracy of HSI classification. It presents a novel and effective choice for model selection in analogous domains, offering valuable insights for future research.

    Acknowledgement: None.

    Funding Statement: This research was partially supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR202111230202).

    Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: T.-Y. Wu, H. Li; data collection: S. Kumari, H. Li; analysis and interpretation of results: C.-M. Chen; draft manuscript preparation: T.-Y. Wu, H. Li, S. Kumari, C.-M. Chen. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: The data are contained within the article.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
