
    Improving Dendritic Neuron Model With Dynamic Scale-Free Network-Based Differential Evolution

IEEE/CAA Journal of Automatica Sinica, 2022, Issue 1

Yang Yu, Zhenyu Lei, Yirui Wang, Tengfei Zhang, Senior Member, IEEE, Chen Peng, and Shangce Gao, Senior Member, IEEE

Abstract—Some recent research reports that a dendritic neuron model (DNM) can achieve better performance than traditional artificial neural networks (ANNs) on classification, prediction, and other problems when its parameters are well tuned by a learning algorithm. However, the back-propagation (BP) algorithm, the most commonly used learning algorithm, intrinsically suffers from the defects of slow convergence and easily dropping into local minima. Therefore, more and more research adopts non-BP learning algorithms to train ANNs. In this paper, a dynamic scale-free network-based differential evolution (DSNDE) is developed by considering the demands of convergence speed and the ability to jump out of local minima. The performance of a DSNDE-trained DNM is tested on 14 benchmark datasets and a photovoltaic power forecasting problem. Nine meta-heuristic algorithms are included in the comparison, among them the champion of the 2017 IEEE Congress on Evolutionary Computation (CEC2017) benchmark competition, the effective butterfly optimizer with covariance matrix adapted retreat phase (EBOwithCMAR). The experimental results reveal that DSNDE achieves better performance than its peers.

    I. INTRODUCTION

NOWADAYS, artificial neural networks (ANNs) are applied to more and more fields, such as image processing, character recognition, and financial forecasting [1]–[3]. These successful applications benefit from their distinct structures. A typical structure of an ANN can be seen as a directed graph with processing elements as nodes, interconnected by weighted directed links. The first computational model of a neuron was proposed by McCulloch and Pitts in 1943 [4]. Based on it, the multi-layer perceptron (MLP) was constructed and has become a classical model in the ANN community. An MLP is composed of three kinds of layers: an input layer, one or more hidden layers, and an output layer. The information is transmitted between layers with probability-weighted associations, which are stored within the data structure of the network. Each layer has multiple neurons and is assigned different thresholds to decide whether to transfer processed data to the next layer. The output layer acts as a multiplicative function for the data received from the former layer. At last, an activation function is implemented to calculate the ultimate output. Common activation functions include the sigmoid function, the rectified linear unit function, and the exponential linear unit function [5]–[8].
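For concreteness, the three activation functions named above can be written compactly. The following is a minimal NumPy sketch with our own function names; it is an illustration, not code from the paper.

```python
import numpy as np

def sigmoid(x):
    # Maps any real input into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Rectified linear unit: keeps positive inputs, zeroes out negatives.
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    # Exponential linear unit: smooth exponential branch for negative inputs.
    return np.where(x > 0.0, x, alpha * (np.exp(x) - 1.0))
```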

With the development and application of ANNs in various fields, many other models have been derived from the ANN. The convolutional neural network (CNN) is a very effective one, which was proposed for analyzing visual imagery. It consists of an input layer, a convolution layer, a pooling layer, and a fully connected layer [9]. A convolution layer is also called a weighted filter, as its size is smaller than that of the input data. An inner product is calculated by sliding the weighted filter over the input. CNN takes advantage of hierarchical patterns in data and assembles more complex patterns from smaller and simpler ones. Therefore, CNN is efficient in terms of the scale of connectedness and complexity.

The recurrent neural network (RNN) is derived from feedforward neural networks and can use internal memory to process variable-length input sequences [10]. This property makes it suitable for tasks such as natural language processing and speech recognition. A basic RNN is a network of neuron-like nodes organized into sequential layers. Each node has a time-varying real-valued activation, and each connection has a real-valued weight that can be modified. All these nodes are either input nodes, hidden nodes, or output nodes, and they are connected successively. RNN takes sequences of real values as input and is recursive in the training direction of the sequence.

Although these neural models have succeeded in much research and many applications, they still have some drawbacks, such as slow convergence speed and high computational cost [11]. Recently, some research reveals that the dendrite plays a pivotal role in the nervous system [12], [13]. A neural network equipped with functional dendrites shows the potential of substantial overall performance improvement [13], [14]. This research draws our attention to the study of the dendritic neuron model (DNM). DNM is developed by taking inspiration from the nonlinearity of synapses, and its dendrite layer can process input data independently. The characteristics of DNM can be summarized following the description in [15]: 1) The structure of DNM is multilayered, and signals are transmitted between layers in a feedforward manner. Hence, the applied functions of these models can be reciprocated. 2) Multiplication is both the simplest and one of the most widespread of all nonlinear operations in the nervous system [16]. It contributes a lot to the information processing in neurons and the computation in synapses, the latter of which is innovatively modeled in DNM by using sigmoid functions. 3) The output of a synapse has four states: excitatory, inhibitory, constant 1, and constant 0. They can beneficially identify the morphology of a neuron. The presentation of each state primarily depends on the values of the parameters in the synapses [17], [18]. Consequently, the training of the parameters crucially influences the performance of a DNM.

Generally speaking, most ANN models use the back-propagation (BP) algorithm, a gradient-based algorithm, as their learning method to find the best combination of network weights and thresholds. However, BP intrinsically suffers from the defects of slow convergence and easily dropping into local minima, which gives it poor training efficiency [15]. Therefore, in recent studies, adopting non-BP learning algorithms for ANNs has gradually become a tendency [19]–[26].

In view of the limitations of previous work, a wavelet transform algorithm is used as a learning algorithm for DNM to forecast photovoltaic power [18], which is one of the important research issues within the smart grid. The wavelet transform was originally developed in the field of signal processing and has been shown to offer advantages over the Fourier transform when processing non-stationary signals. It has been widely used in time series forecasting due to its capability in dealing with discrete signals. The proposed forecasting model claims high computational efficiency and prediction accuracy using actual training and test data taken with a sampling time interval of 15 minutes.

In [20], a hybrid algorithm that combines a genetic algorithm with a local search method is deployed to enhance the learning capability of a supervised adaptive resonance theory-based neural network by searching and adapting network weights. Owing to the effectiveness of the genetic algorithm in optimizing parameters, the proposed model can easily achieve high accuracy rates for samples from different classes in an imbalanced data environment.

Specifically, meta-heuristic algorithms have proven to be effective in training ANNs. In [27], biogeography-based optimization (BBO) is used as a trainer for MLP. It is compared with BP and five other meta-heuristic algorithms on eleven benchmark datasets. The statistical results reveal that the utilization of meta-heuristic algorithms is very promising in training MLPs. Moreover, BBO is much more effective than BP regarding classification rate and test error.

Similarly, Gao et al. [15] comprehensively investigate the performance of six meta-heuristic algorithms as learning methods. Taguchi's experimental design method is used to systematically find the best combination of user-defined parameter sets. Benchmark experiments, involving five classification, six approximation, and three prediction problems, are conducted using an MLP and a DNM. Twelve combinations are investigated. It is reported that the combination of BBO and DNM is the most effective among its peers according to the experimental results.

The above-mentioned research reveals the flexibility and effectiveness of using meta-heuristics as learning algorithms for ANNs. It also motivates us to propose better algorithms with more powerful search ability. Generally, differential evolution (DE) is arguably one of the most efficient meta-heuristic algorithms in current use [28]. Its simplicity and strong robustness have led to successful applications to various real-world optimization problems, where finding an approximate solution in a reasonable amount of computational time is heavily weighted [29]. Meanwhile, the scale-free network is a very common structure in nature. One of its characteristics is preferential linking, which means the probability that an edge links to a vertex is proportional to the degree of that vertex. It provides a great benefit to the information exchange in DE's population: the nodes with better fitness can have a greater influence on other, inferior nodes, while the nodes with worse fitness have a lower chance to participate in the solution generation process. Hence, to further enhance DE's robustness and stability when the population size and problem scale change, a dynamic scale-free network-based differential evolution (DSNDE) is developed. DSNDE combines a scale-free network structure with DE and considers a dynamic adjustment of the parameters, which endows DE with the benefit of utilizing the neighborhood information provided by a scale-free network. Meanwhile, its parameters can be dynamically tuned during the optimization. A mutation operator called DE/old-centers/1 is carefully designed to adequately exploit the advantages of a scale-free network-based DE. In this way, DSNDE can concurrently avoid premature convergence and enhance its global optimality.

This paper contributes to the communities of ANNs and evolutionary algorithms in the following aspects: 1) An effective DNM is trained by a novel learning algorithm, DSNDE, to improve its performance. For a given task, it can effectively enhance the training results of DNM whether it is a prediction problem, a classification problem, or a function approximation problem. 2) A photovoltaic power forecasting problem, in which the actual training and test data are collected from the natural environment, is used to assess the application value of the proposed training model. 3) Comparisons with nine state-of-the-art meta-heuristic algorithms, including the champion of the 2017 IEEE Congress on Evolutionary Computation (CEC2017) benchmark competition, EBOwithCMAR [30], reveal that DSNDE has superiority in improving the computational efficiency and prediction accuracy of DNM for various training tasks.

    The next section gives a brief introduction to a canonical DNM. A novel learning algorithm DSNDE is proposed in Section III. Sections IV and V present the experimental results of DSNDE and nine contrast learning algorithms for training DNM on 14 benchmark datasets and a photovoltaic power forecasting problem, respectively. Section VI concludes this paper.

    II. DENDRITIC NEURON MODEL

DNM is composed of four layers [18]: a synaptic layer, a dendrite layer, a membrane layer, and a soma layer. The functions and details of each layer are described as follows.

    A. Synaptic Layer

A synaptic layer refers to a structure that transmits impulses from one dendrite to another dendrite or neural cell. The information transfers in a feedforward manner. Equation (1) describes the connection of the ith (i = 1, 2, 3, ..., N) synaptic input to the jth (j = 1, 2, 3, ..., M) dendrite layer:

$$Y_{i,j} = \frac{1}{1 + e^{-k(\omega_{i,j} x_i - \theta_{i,j})}} \tag{1}$$

where Y_{i,j} is the output from the ith synapse to the jth dendrite layer, x_i is the input signal normalized into [0, 1], and k is a user-defined positive constant whose value is problem-related. ω_{i,j} and θ_{i,j} are the corresponding weight and threshold, respectively. They are the targets to be optimized by learning algorithms. The population to be trained is formulated as follows:

$$X = \{X_1, X_2, \ldots, X_{N_p}\}, \quad X_i = (\omega_{1,1}, \ldots, \omega_{N,M}, \theta_{1,1}, \ldots, \theta_{N,M}) \tag{2}$$

where X_i (i = 1, 2, ..., N_p) denotes the ith individual in the population and N_p is the population size.
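Before moving to the dendrite layer, the synaptic computation in (1) can be sketched for one input sample; the array shapes and the function name below are our own assumptions, used purely for illustration.

```python
import numpy as np

def synaptic_layer(x, w, theta, k):
    """Eq. (1): sigmoid synapse output Y[i, j] of input x[i] on dendrite j.

    x     : (N,)   inputs normalized into [0, 1]
    w     : (N, M) weights omega[i, j]
    theta : (N, M) thresholds theta[i, j]
    k     : user-defined positive, problem-related constant
    """
    # Broadcast each input over the M dendrite branches, then apply the sigmoid.
    return 1.0 / (1.0 + np.exp(-k * (w * x[:, None] - theta)))
```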

    B. Dendrite Layer

The main function of a dendrite layer is to conduct a multiplicative operation on the outputs of the synaptic layers. When the information transfers from a synaptic layer to a dendrite layer, the connection can be in one of four states depending on the values of ω_{i,j} and θ_{i,j}. They can be used to infer the morphology of a neuron by specifying the positions and synapse types of dendrites [15], [31].

Case 1: A direct or excitatory connection (when 0 ≤ θ_{i,j} ≤ ω_{i,j}). In this state, the output is proportional to the input as the input varies from 0 to 1.

Case 2: An inverse or inhibitory connection (when ω_{i,j} ≤ θ_{i,j} ≤ 0). In contrast to the previous state, the output is inversely proportional to the input as the input varies from 0 to 1.

Case 3: A constant-1 connection (when θ_{i,j} ≤ ω_{i,j} ≤ 0 or θ_{i,j} ≤ 0 ≤ ω_{i,j}). The output is approximately 1 regardless of how the input varies from 0 to 1.

Case 4: A constant-0 connection (when ω_{i,j} ≤ 0 ≤ θ_{i,j} or 0 ≤ ω_{i,j} ≤ θ_{i,j}). The output is approximately 0 regardless of how the input varies from 0 to 1.

Since the values of the inputs and outputs of the dendrites correspond to 1 or 0, the multiplication operation is equivalent to the logic AND operation. The symbol π in Fig. 1 represents a multiplicative operator, which is formulated in (3):

$$Z_j = \prod_{i=1}^{N} Y_{i,j} \tag{3}$$

where Z_j is the output of the jth dendrite branch.

Fig. 1. Illustration of DNM. It consists of a synaptic layer, a dendrite layer, a membrane layer, and a soma layer.

    C. Membrane Layer

A membrane layer is used to aggregate and process the information from all dendrite layers by a summation function that closely resembles a logic OR operation. It is worth noting that the inputs and output are either 1 or 0; thus, DNM is only suitable for two-class datasets and cannot be applied to multi-classification problems under the current structure. As the threshold of the soma layer is set to 0.5, the soma body will be activated if any input is non-zero. The function is formulated as follows:

$$V = \sum_{j=1}^{M} Z_j \tag{4}$$

where V is the summation output of all dendrite layers.

    D. Soma Layer

A soma layer represents the soma cell body. When the threshold is exceeded, the neuron is activated, and the ultimate output of the entire model is calculated by a sigmoid function, which is shown as follows:

$$O = \frac{1}{1 + e^{-k_s (V - \theta_s)}} \tag{5}$$

where k_s is a positive constant and θ_s represents the threshold of the soma body. A sigmoid function outputs values ranging from 0 to 1; therefore, it is often used in ANNs that require an output value located in the interval from 0 to 1 [32].

Fig. 1 illustrates the structure of DNM. x_1, x_2, ..., and x_N are the inputs of each dendrite layer. They are transformed into signals according to the four connection states. Then, a multiplicative operation is conducted to multiply all the outputs from the synaptic layer. In the next step, these multiplied outputs are summed in the membrane layer. Finally, the obtained result is regarded as the input of the soma layer to generate the ultimate training output of DNM.
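Putting (1)–(5) together, a complete forward pass can be sketched in a few lines of NumPy. All names here are illustrative, not taken from the paper.

```python
import numpy as np

def dnm_forward(x, w, theta, k, ks, theta_s):
    """Forward pass through the four DNM layers of Fig. 1.

    x : (N,) normalized inputs; w, theta : (N, M) synaptic weights/thresholds.
    """
    # Synaptic layer, Eq. (1): one sigmoid per (input, dendrite) pair.
    Y = 1.0 / (1.0 + np.exp(-k * (w * x[:, None] - theta)))
    # Dendrite layer, Eq. (3): multiplicative (logic-AND-like) aggregation.
    Z = np.prod(Y, axis=0)                 # shape (M,)
    # Membrane layer, Eq. (4): summation (logic-OR-like) over all branches.
    V = np.sum(Z)
    # Soma layer, Eq. (5): final sigmoid with slope ks and threshold theta_s.
    return 1.0 / (1.0 + np.exp(-ks * (V - theta_s)))
```

For a two-class dataset, the output can then be thresholded at 0.5 to obtain the predicted class label.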

    III. PROPOSED LEARNING ALGORITHM

    A. Scale-Free Network

Scale-free networks commonly exist in nature. Many social and transportation networks exhibit a scale-free character, as do the world wide web and protein-protein interaction networks. There are already some studies trying to reveal their properties [33]–[35]. Among these studies, the Barabási-Albert (BA) model is the first model that generates random scale-free networks with a preferential attachment mechanism [36], and it is the most widely used scale-free model in swarm and evolutionary algorithms. When building a scale-free network, m nodes are firstly initialized, and the network is constructed by connecting other nodes to the existing nodes. A network usually has two parameters, the degree (k) and the average degree. The degree k is the number of connections a node possesses to other nodes, and the average degree is k averaged over all nodes in the network. In a BA model, the probability that a new node connects to existing nodes is proportional to the degree of these existing nodes. The degree distribution of a BA model follows a power law of the form

$$P(k) \propto k^{-\gamma}$$

where γ is three for a BA model [37]. Fig. 2 illustrates the degree distribution of a BA model when γ = 3. The power-law distribution allows the existence of some nodes with numerous links, which is reflected by the long tail, whereas the majority of nodes have only a few links. This phenomenon presents a strong heterogeneity in the network topology. Thus, a stable diversity of the population is ensured. That is the reason why we introduce a scale-free network to enhance the information exchange in DE.

    Fig. 2. Degree distribution of a BA model when γ=3. The long tail represents the existence of a few nodes with a very large number of connections.

B. Dynamic Scale-Free Network-Based Differential Evolution (DSNDE)

In this part, DSNDE is introduced in detail. Firstly, a scale-free network is constructed based on the top m individuals with better fitness in the population; they are called the centers. As introduced above, the BA model is the most widely used scale-free model, and hence, in DSNDE, it is used to construct a network initialized by the centers. With m initial interconnected nodes, the remaining individuals are successively added to the network according to their fitness values, from the best to the worst, which endows the better individuals with a higher chance of linking to the centers. P_i is the probability of a new individual building a connection with the existing node i, and it is proportional to the degree k_i, shown as follows:

$$P_i = \frac{k_i}{\sum_{j} k_j}$$

where j ranges over all existing nodes. When an iteration starts, the centers are the first connection choices for the other nodes, which leads to their degree increasing quickly. The increase in degree makes other individuals more inclined to establish connections with them. Consequently, they dominate the solution generation process and transmit more genetic information to others.
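The sketch below grows such a network with the preferential-attachment probability given above, giving each new node at least two links as DSNDE requires; the data structures and function name are our own illustration.

```python
import numpy as np

def build_ba_network(num_nodes, m):
    """Grow a scale-free network: m fully interconnected seed nodes, then
    attach each new node to two existing nodes chosen with probability
    proportional to their current degree (preferential attachment)."""
    degree = np.zeros(num_nodes, dtype=int)
    edges = []
    # Fully connect the m initial nodes (the centers in DSNDE).
    for a in range(m):
        for b in range(a + 1, m):
            edges.append((a, b))
            degree[[a, b]] += 1
    # Attach the remaining nodes one by one (best fitness first in DSNDE),
    # each with at least two links, as required by the mutation operator.
    for new in range(m, num_nodes):
        prob = degree[:new] / degree[:new].sum()
        for t in np.random.choice(new, size=2, replace=False, p=prob):
            edges.append((new, int(t)))
            degree[[new, t]] += 1
    return edges, degree
```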

Traditional DE has four main operators: initialization, mutation, crossover, and selection. In the mutation operator, DE usually applies four common strategies, namely DE/best/1, DE/rand/1, DE/best/2, and DE/rand-to-best/1, to generate mutant vectors. In DSNDE, because of the implementation of a scale-free network, a new mutation operator called DE/old-centers/1 is proposed to fully use the neighborhood information, which is formulated as follows:

$$V_i(t) = X_i(t) + F_i \cdot \left( X_j(t) - X_i^{old} + X_{neighbor1}(t) - X_{neighbor2}(t) \right)$$

where X_i and V_i are the ith target vector and its mutant vector, respectively. X_{neighbor1} and X_{neighbor2} are selected from the neighborhood of X_i by a roulette wheel selection method based on their fitness. To avoid the situation in which an individual has only one link, the minimum number of links a node can maintain is set to two, which means a newly connected node links with at least two existing network nodes. The roulette wheel selection method is formulated as follows:

$$P_i = \frac{R_i}{\sum_{j} R_j}$$

where R_i represents the fitness rank of the corresponding individual in the neighborhood of the individual to be generated. It is inversely proportional to the fitness value in a minimization problem, which means the individual with better fitness has a higher chance of being selected for the mutation operation. Since a better individual carries more useful information, the roulette wheel selection method can more effectively share this information with the population. X_i^{old} is the vector of the former generation; it provides searching history to the current population. X_j is randomly selected from the centers, which shares the information of the better individuals. The control parameters F_i and Cr_i of each individual are sampled around the mean parameter values of the centers:

$$F_i = N(F_P, 0.1), \quad Cr_i = N(Cr_P, 0.1)$$

$$F_P = \bar{S}_F, \quad Cr_P = \bar{S}_{Cr}$$

where F_i and Cr_i refer to the parameter set for the ith individual, and N(·, 0.1) denotes a normal distribution with standard deviation 0.1 and mean values F_P and Cr_P for F and Cr, respectively. Denote S_F and S_Cr as the sets of the F and Cr values of the centers; S̄ is the arithmetic mean of the elements of S.
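A minimal sketch of the DE/old-centers/1 step follows, assuming that each neighbor's selection probability is proportional to its fitness rank as described above; all variable names and the rank convention are our own.

```python
import numpy as np

def de_old_centers_1(X, X_old, i, centers, neighbors, ranks, F_i):
    """Generate the mutant vector V_i by the DE/old-centers/1 operator.

    X, X_old  : (Np, D) current and previous-generation populations
    centers   : indices of the top-m individuals (the centers)
    neighbors : indices of the nodes linked to individual i (at least two)
    ranks     : fitness ranks of the neighbors (higher rank = better fitness)
    """
    # Roulette wheel selection: probability proportional to fitness rank.
    ranks = np.asarray(ranks, dtype=float)
    prob = ranks / ranks.sum()
    n1, n2 = np.random.choice(neighbors, size=2, replace=False, p=prob)
    # A random center shares the information of the better individuals.
    j = np.random.choice(centers)
    return X[i] + F_i * (X[j] - X_old[i] + X[n1] - X[n2])
```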

Moreover, a dynamic mechanism is developed for adjusting the size of the centers in DSNDE. If m is preset to a constant value, it cannot maintain a stable optimization performance when the population size and problem scale change. Therefore, in DSNDE, the size of the centers varies dynamically depending on whether the best solution improves. If the best optimum of the population gets improved a total of L_m times and m is greater than 4, the size of the centers decreases. Otherwise, if no improvement of the best optimum occurs in a total of L_m iterations, the size of the centers increases until it reaches a maximum value of 0.2 × N_p. In the former case, successive improvements indicate that the whole population is in an exploration phase; thus, decreasing m can enhance the influence of the best individuals and accelerate the convergence speed. On the other hand, stagnation means the generation process is stuck in a local optimum; adding more nodes to the centers can increase the diversity of the exchanged information and the probability of the population jumping out of a local optimum. The flowchart of DSNDE is presented in Algorithm 1, where Best refers to the best individual found so far. Following this flowchart, the time complexity of DSNDE can be analyzed as follows, with D denoting the dimension of the problem. Initializing the population costs O(N_p × D), and evaluating the individuals costs O(N_p) + O(D). The construction of the scale-free network, the selection of the nodes participating in mutation, and the generation of the mutant vectors each introduce an additional cost, while the time complexities of the crossover and selection operators are O(N_p). Overall, DSNDE is computationally efficient.

Algorithm 1 DSNDE
1: Randomly initialize a population with N_p individuals;
2: Calculate the fitness of each individual;
3: Generation t = 1;
4: Initialize F_0 = 0.5, Cr_0 = 0.9;
5: while the maximum number of iterations is not reached do
6:   Use a BA model to build a scale-free network based on the centers;
7:   for i = 1 : N_p do
8:     Select X_neighbor1(t) and X_neighbor2(t) from the neighborhood of X_i(t) by the roulette wheel selection method;
9:     Randomly choose X_j(t) from the centers;
10:    Generate the mutant vector via the DE/old-centers/1 mutation operator: V_i(t) = X_i(t) + F_i · (X_j(t) − X_i^old + X_neighbor1(t) − X_neighbor2(t));
11:    Generate the trial vector U_i(t) by crossover and evaluate its fitness;
12:    X_i(t+1) = U_i(t) if f(U_i(t)) < f(X_i(t)); otherwise X_i(t+1) = X_i(t);
13:  end for
14:  Store the current population as X^old = X(t);
15:  Calculate F_P = S̄_F and Cr_P = S̄_Cr;
16:  Update F_i = N(F_P, 0.1) and Cr_i = N(Cr_P, 0.1);
17:  if the Best gets improved in a total of L_m iterations and m > 4 then
18:    m = m − 1;
19:  else if the Best does not get improved in a total of L_m iterations and m < 0.2 × N_p then
20:    m = m + 1;
21:  end if
22:  t = t + 1;
23: end while
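The dynamic adjustment of the number of centers described above (and in the tail of Algorithm 1) can be isolated as the small sketch below; the counter names are illustrative bookkeeping, not from the paper.

```python
def adjust_center_size(m, improved_count, stagnant_count, Lm, Np):
    """Shrink or grow the set of centers according to recent progress."""
    if improved_count >= Lm and m > 4:
        # Steady improvement: rely on fewer centers to speed up convergence.
        m -= 1
    elif stagnant_count >= Lm and m < 0.2 * Np:
        # Stagnation: add more centers to diversify the exchanged information.
        m += 1
    return m
```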

    IV. EXPERIMENTS ON BENCHMARK DATASETS

In this section, five classification, six function approximation, and three prediction benchmark datasets are utilized to verify the performance of DNM trained by DSNDE and by nine other meta-heuristic algorithms [38]. The comparison algorithms are listed as follows:

    1) BBO [39]: Biogeography-based optimization algorithm;

    2) DE [28]: Differential evolution;

    3) DEGoS [40]: DE with global optimum-based search strategy;


    4) JADE [41]: Adaptive DE with optional external archive;

    5) SHADE [42]: Success-history based adaptive DE;

    6) CJADE [43]: Chaotic local search-based JADE;

    7) NDi-DE [44]: Neighborhood and direction information based DE;

8) EBLSHADE [45]: Success-history based adaptive DE with linear population size reduction (LSHADE) with a novel mutation strategy;

    9) EBOwithCMAR [30]: Effective butterfly optimizer with covariance matrix adapted retreat phase.

Tables I–III list the details of these benchmark datasets and their abbreviations. These datasets are named CF1–CF5 (classification functions), AF1–AF6 (approximation functions), and PF1–PF3 (prediction functions) for convenience. The classification datasets are acquired from the University of California at Irvine Machine Learning Repository [46]. Table I summarizes their numbers of attributes, training samples, test samples, and classes.

    TABLE I DETAILS OF THE CLASSIFICATION DATASETS

    TABLE II DETAILS OF THE FUNCTION APPROXIMATION DATASETS

    TABLE III DETAILS OF THE PREDICTION DATASETS

Table II lists the function expressions of the 1-D sigmoid, 1-D cosine with one peak, 1-D sine with four peaks, 2-D sphere, 5-D Rosenbrock, and 2-D Griewank functions, as well as the number and value range of the training and test samples.

The details of the three prediction datasets are given in Table III, involving the numbers of training and test samples. The Mackey-Glass dataset is derived from a nonlinear time-delay differential equation shown as follows:

$$\frac{dx(t)}{dt} = \frac{\beta x_{\tau}}{1 + x_{\tau}^{n}} - \alpha x(t)$$

where α, β, τ, and n are real numbers, and x_τ is the value of the variable x at time t − τ. The Box-Jenkins time series data and the EGG data are acquired from [47] and [48], respectively.
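For reference, a Mackey-Glass series can be generated by simple Euler integration of the delay differential equation above. The parameter values below are the commonly used defaults for this benchmark, not necessarily those used in the paper.

```python
import numpy as np

def mackey_glass(length, alpha=0.1, beta=0.2, n=10, tau=17, x0=1.2):
    """Euler integration (step size 1) of the Mackey-Glass equation."""
    x = np.full(length + tau, x0)
    for t in range(tau, length + tau - 1):
        x_tau = x[t - tau]                  # delayed state x(t - tau)
        x[t + 1] = x[t] + beta * x_tau / (1.0 + x_tau ** n) - alpha * x[t]
    return x[tau:]                          # drop the constant warm-up segment
```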

The population size N_p and the maximum number of iterations for all contrast learning algorithms are set to 50 and 250, respectively, and L_m is set to 5. The corresponding parameter sets for each applied algorithm are listed in Table IV. They are set according to the related references to ensure the best performance of each algorithm. Each benchmark dataset is run 51 times to reduce random errors. All experiments are implemented on a PC with Windows 10 OS, a 3.60 GHz AMD Ryzen 5 3600 6-core CPU, and 16 GB of RAM, using MATLAB R2018a. Table V summarizes the acceptable user-defined parameter settings, which can be found in [15]. M is the number of dendrite layers, k and k_s are predefined parameters, and θ_s is the threshold value.

The experimental results are presented in Tables VI and VII, in which the mean-squared error (MSE) is used to calculate the output error of DNM for a given solution X_i. It is formulated in (13):

$$\mathrm{MSE} = \frac{1}{T} \sum_{t=1}^{T} (y_t - \hat{y}_t)^2 \tag{13}$$

    TABLE IV PARAMETER SETTING OF ALGORITHMS

TABLE V USER-DEFINED PARAMETER SETTINGS OF DNM

where T is the total number of training samples, and y_t and ŷ_t are the target and actual outputs of the tth sample, respectively.
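The MSE in (13) is what the learning algorithm minimizes. The sketch below evaluates it for one candidate solution X_i, assuming the weights and thresholds are packed consecutively into the candidate vector; the packing order and function name are our own illustration.

```python
import numpy as np

def mse_fitness(candidate, train_x, train_y, N, M, k, ks, theta_s):
    """Mean-squared error (13) of a DNM encoded by one candidate vector."""
    w = candidate[:N * M].reshape(N, M)          # synaptic weights omega
    theta = candidate[N * M:].reshape(N, M)      # synaptic thresholds
    err = 0.0
    for x, y in zip(train_x, train_y):
        Y = 1.0 / (1.0 + np.exp(-k * (w * x[:, None] - theta)))   # Eq. (1)
        V = np.prod(Y, axis=0).sum()                              # Eqs. (3)-(4)
        out = 1.0 / (1.0 + np.exp(-ks * (V - theta_s)))           # Eq. (5)
        err += (y - out) ** 2
    return err / len(train_x)
```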

To precisely detect significant differences between any two algorithms, a non-parametric statistical analysis method, the Wilcoxon rank-sum test, is implemented [49], [50]. In this study, a significance level of 5% is set, which means that if the p-value is less than 0.05, the two compared algorithms are considered significantly different, and the former outperforms the latter. Tables VIII and IX list the p-values between DSNDE and the corresponding learning algorithms. For a given problem, the MSE of DSNDE is highlighted when it significantly outperforms all other contrast algorithms; otherwise, the corresponding algorithms are highlighted. The symbols +/≈/– comprehensively present the statistical results of DSNDE versus its peers, indicating that DSNDE performs significantly better (+), significantly worse (–), or neither significantly better nor worse (≈) than the corresponding algorithm. According to these statistical results, the numbers of times that DSNDE wins against the others are 11 (BBO), 12 (DE), 11 (DEGoS), 9 (JADE), 10 (SHADE), 10 (CJADE), 12 (NDi-DE), 10 (EBLSHADE), and 10 (EBOwithCMAR) out of 14 benchmark datasets. DSNDE is significantly better than all other comparison algorithms on eight datasets. The proposed learning algorithm DSNDE shows overwhelming advantages over all contrast algorithms, including the champion of the CEC2017 benchmark competition, EBOwithCMAR [30]. However, it should be noted that DSNDE does not obtain the best performance on a few datasets, including CF3, CF5, AF2, AF3, and AF4. On CF3, CF5, AF2, and AF4, the statistical test results show that all competitors achieve similar performance. The performance of DSNDE is not satisfactory on AF3, where seven competitors significantly outperform it. AF3 is an approximation dataset of the sine function, which is not a complex function. According to the no free lunch theorem, no algorithm can perform the best for all problems [51]. The reason for DSNDE's underperformance may be the special structure of the dynamic scale-free network. Since we want to reduce the impact of poor individuals on the whole population, the information exchange in DSNDE is directed and limited. However, for some simple problems, all individuals can find high-quality solutions and deliver correct search information; in this case, the search efficiency of DSNDE may not be as good as that of its peers. But its prominent performance on the other datasets reveals the success of the proposed model.

    TABLE VI EXPERIMENTAL RESULTS OBTAINED BY DSNDE, BBO, DE, DEGOS AND JADE ON 14 DATASETS

    TABLE VII EXPERIMENTAL RESULTS OBTAINED BY SHADE, CJADE, NDi-DE, EBLSHADE AND EBOWITHCMAR ON 14 DATASETS

    TABLE VIII WILCOXON RANK-SUM TEST RESULTS (P -VALUES) OBTAINED BY DSNDE, BBO, DE, DEGOS AND JADE ON 14 DATASETS

    TABLE IX WILCOXON RANK-SUM TEST RESULTS (P -VALUES) OBTAINED BY SHADE, CJADE, NDi-DE, EBLSHADE AND EBOWITHCMAR ON 14 DATASETS

Some matrix diagrams are shown in Fig. 3 to directly display the changes of the weight ω and the threshold θ from initialization to the end of training by DSNDE. For the heart dataset, DNM only has 200 parameters (100 weight values and 100 threshold values) to be trained, which indicates that the required computing resources of DNM are far less than those of an ANN. Figs. 4 and 5 exhibit the classification accuracy, the error value, and the receiver operating characteristic (ROC) curves of two classification datasets. The ROC curve is the average of the sensitivity over all possible specificity values [52]. The area under the ROC curve (AUC) can effectively summarize the accuracy of the classification. It takes a value from 0.5 to 1 (0.5 represents a random classification), and a value closer to 1 means the classification is more accurate. It can be observed that DSNDE obtains the best performance on accuracy and error values. Especially on the heart dataset, DSNDE overwhelmingly outperforms its peers. The AUC of DSNDE is 0.985 on the cancer dataset and 0.842 on the heart dataset; both are higher than the AUCs of the other algorithms. All these results demonstrate the remarkable effectiveness and efficiency of DSNDE.

    Fig. 3. Changes of weight ω and threshold θ on heart dataset.

    Fig. 4. Analysis of classification dataset: Cancer.

    Fig. 5. Analysis of classification dataset: Heart.

    V. EXPERIMENTS ON PHOTOVOLTAIC POWER FORECASTING

The performance of DSNDE on benchmark datasets can directly exhibit its pros and cons compared with its peers, but the practical value of DSNDE still needs to be further validated on real-world challenges. Thus, in this section, an attempt is made to apply the DSNDE-trained DNM to a photovoltaic power forecasting problem, which is one of the most important research issues within the smart grid. By proposing a forecasting model based upon DNM with the aid of DSNDE, the accuracy of the forecasting results is greatly improved. The actual training and test datasets are taken from a photovoltaic power plant located in Gansu Province, China, with a sample size of 8000 and a sampling time interval of 15 minutes [18]. To comprehensively estimate the forecasting errors obtained by each learning algorithm, the dataset is evenly divided into 10 sets for cross-validation, and the sample size of each set is 800 [53]. Nine groups of contrast experiments are conducted with training sets of 800, 1600, ..., 7200 samples, respectively; the subsequent 800 samples are used as the test set. Each group is repeatedly run six times to ensure independence and effectiveness. Hence, each algorithm is performed 54 times.

To statistically measure the performance of the tested learning methods and facilitate the comparison of approaches, the mean absolute percentage error (MAPE) and the root mean square error (RMSE) are introduced as follows:

$$\mathrm{MAPE} = \frac{1}{T} \sum_{t=1}^{T} \left| \frac{y_t - \hat{y}_t}{y_t} \right| \times 100\% \tag{14}$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{T} \sum_{t=1}^{T} (y_t - \hat{y}_t)^2} \tag{15}$$

where the meaning of each variable is the same as that defined in (13).
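Both error measures are straightforward to compute; a short sketch follows (the function names are ours).

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error, Eq. (14), reported in percent.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def rmse(y_true, y_pred):
    # Root mean square error, Eq. (15).
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```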

Table X gives the comprehensive comparison results of the nine groups and the average values of RMSE and MAPE. It can be observed that the forecasting accuracy decreases when the size of the training set increases from 800 to 3200. Most algorithms obtain their worst and best performance on MAPE at training-set sizes of 4800 and 6400, respectively. This result suggests that the most suitable ratio of the test set to the training set is 1:8 for the photovoltaic power forecasting problem, while a ratio of 1:6 is not suitable. According to the average values, DSNDE obtains the best performance on both RMSE and MAPE, which fully illustrates the practical value of DSNDE.

    VI. CONCLUSIONS

In this paper, we propose a dynamic scale-free network-based differential evolution to train the parameters of DNM. A scale-free network structure helps DE enhance the information exchange among individuals and improves its overall performance. Experiments on 14 benchmark datasets and a photovoltaic power forecasting problem are conducted to verify its effectiveness in training the parameters of DNM. DSNDE is compared with nine powerful meta-heuristic algorithms, including the champion of the CEC2017 benchmark competition, EBOwithCMAR. The statistical results show that DSNDE outperforms its peers on most benchmark datasets and attains the highest accuracy on the photovoltaic power forecasting problem. In our future research, we wish to propose a population adaptation approach for DSNDE, which has the potential to further improve the training efficiency of DNM. Moreover, the proposed algorithm can be applied to address semi-supervised classification issues [38].

    TABLE X COMPREHENSIVE COMPARISON RESULT OF RMSE AND MAPE (%)
