
    Improving Dendritic Neuron Model With Dynamic Scale-Free Network-Based Differential Evolution

    IEEE/CAA Journal of Automatica Sinica, 2022, Issue 1

    Yang Yu, Zhenyu Lei, Yirui Wang, Tengfei Zhang, Senior Member, IEEE, Chen Peng, and Shangce Gao, Senior Member, IEEE

    Abstract—Some recent research reports that a dendritic neuron model (DNM) can achieve better performance than traditional artificial neuron networks (ANNs) on classification, prediction, and other problems when its parameters are well-tuned by a learning algorithm. However, the back-propagation (BP) algorithm, the most commonly used learning algorithm, intrinsically suffers from slow convergence and easily drops into local minima. Therefore, more and more research adopts non-BP learning algorithms to train ANNs. In this paper, a dynamic scale-free network-based differential evolution (DSNDE) is developed by considering the demands of convergence speed and the ability to jump out of local minima. The performance of a DSNDE-trained DNM is tested on 14 benchmark datasets and a photovoltaic power forecasting problem. Nine meta-heuristic algorithms are included in the comparison, among them the champion of the 2017 IEEE Congress on Evolutionary Computation (CEC2017) benchmark competition, the effective butterfly optimizer with covariance matrix adapted retreat phase (EBOwithCMAR). The experimental results reveal that DSNDE achieves better performance than its peers.

    I. INTRODUCTION

    NOWADAYS, artificial neuron networks (ANNs) are applied to more and more fields, such as image processing, character recognition, and financial forecasting [1]–[3]. These successful applications benefit from their distinct structures. A typical structure of an ANN can be seen as a directed graph with processing elements as nodes, interconnected by weighted directed links. The first computational model of a neuron was proposed by McCulloch and Pitts in 1943 [4]. Based on it, the multi-layer perceptron (MLP) was constructed and has become a classical model in the ANN community. An MLP is composed of three kinds of layers: an input layer, one or more hidden layers, and an output layer. The information is transmitted between layers with probability-weighted associations, which are stored within the data structure of the network. Each layer has multiple neurons, and each neuron is assigned a threshold to decide whether to transfer processed data to the next layer. An output layer acts as a multiplicative function for the data received from the former layer. At last, an activation function is applied to calculate the ultimate output. Common activation functions include the sigmoid function, the rectified linear unit function, and the exponential linear unit function [5]–[8].

    With the developments and applications of ANNs in various fields, many other models have been derived from the ANN. The convolutional neuron network (CNN) is a very effective one, which was proposed for analyzing visual imagery. It consists of an input layer, a convolution layer, a pooling layer, and a fully connected layer [9]. A convolution layer is also called a weighted filter, as its size is smaller than that of the input data. An inner product is calculated by sliding the weighted filter over the input. CNN takes advantage of hierarchical patterns in data and assembles more complex patterns from smaller and simpler ones. Therefore, CNN is efficient on the scale of connectedness and complexity.

    The recurrent neuron network (RNN) is derived from feedforward neural networks and can use its internal memory to process variable-length input sequences [10]. This property makes it suitable for tasks such as natural language processing and speech recognition. A basic RNN is a network of neuron-like nodes organized into sequential layers. Each node has a time-varying real-valued activation, and each connection has a real-valued weight that can be modified. All nodes are either input nodes, hidden nodes, or output nodes, and they are connected successively. RNN uses sequences of real-valued data as input and processes the sequence recursively along its direction.

    Although these neural models have succeeded in many research fields and techniques, they still have some drawbacks, such as slow convergence speed and high computational cost [11]. Recently, some research reveals that dendrites play a pivotal role in the nervous system [12], [13]. A neural network equipped with functional dendrites shows the potential for a substantial overall performance improvement [13], [14]. This research draws our attention to the study of the dendritic neuron model (DNM). DNM is developed by taking inspiration from the nonlinearity of synapses, and its dendrite layer can process input data independently. The characteristics of DNM can be summarized following the description in [15]: 1) The structure of DNM is multilayered, and signals are transmitted between layers in a feedforward manner. Hence, the applied functions of these models can be reciprocated. 2) Multiplication is both the simplest and one of the most widespread of all nonlinear operations in the nervous system [16]. It contributes a lot to the information processing in neurons and the computation in synapses, and the latter is innovatively modeled in DNM by using sigmoid functions. 3) The output of a synapse has four states: excitatory, inhibitory, constant 1, and constant 0. They can beneficially identify the morphology of a neuron. The presentation of each state primarily depends on the values of the parameters in the synapses [17], [18]. Consequently, the training of these parameters crucially influences the performance of a DNM.

    Generally speaking, most ANN models use the back-propagation (BP) algorithm, a gradient-based algorithm, as their learning method to find the best combination of network weights and thresholds. However, BP intrinsically suffers from slow convergence and easily drops into local minima, which results in poor training efficiency [15]. Therefore, in recent studies, adopting non-BP learning algorithms for ANNs gradually becomes a tendency [19]–[26].

    In view of the limitations of the previous work, a wavelet transform algorithm is used as a learning algorithm for DNM to forecast photovoltaic power [18], which is one of the important research issues within the smart grid. The wavelet transform was originally developed in the field of signal processing and has been shown to offer advantages over the Fourier transform when processing non-stationary signals. It has been widely used in time series forecasting due to its capability in dealing with discrete signals. The proposed forecasting model achieves high computational efficiency and prediction accuracy by using actual training and test data taken with a sampling time interval of 15 minutes.

    In [20], a hybrid algorithm that combines a genetic algorithm with a local search method is deployed to enhance the learning capability of a supervised adaptive resonance theory-based neural network by searching and adapting network weights. Owing to the effectiveness of the genetic algorithm in optimizing parameters, the proposed model can easily achieve high accuracy rates for samples from different classes in an imbalanced data environment.

    Specifically, meta-heuristic algorithms are proven to be effective in training ANNs. In [27], biogeography-based optimization (BBO) is used as a trainer for MLP. It is compared with BP and five other meta-heuristic algorithms on eleven benchmark datasets. The statistical results reveal that the utilization of meta-heuristic algorithms is very promising in training MLP. Moreover, BBO is much more effective than BP regarding classification rate and test error.

    Similarly, Gao et al. [15] comprehensively investigate the performance of six meta-heuristic algorithms as learning methods. Taguchi's experimental design method is used to systematically find the best combination of user-defined parameter sets. Benchmark experiments, involving five classification, six approximation, and three prediction problems, are conducted using an MLP and a DNM. Twelve combinations are investigated. It is reported that the combination of BBO and DNM is the most effective among its peers according to the experimental results.

    The above-mentioned research reveals the flexibility and effectiveness of using meta-heuristics as learning algorithms for ANNs. It also motivates us to propose better algorithms with much more powerful search ability. Generally, differential evolution (DE) is arguably one of the most efficient meta-heuristic algorithms in current use [28]. Its simplicity and strong robustness enable successful applications to various real-world optimization problems, where finding an approximate solution in a reasonable amount of computational time is highly valued [29]. In the meanwhile, a scale-free network is a very common structure in nature. One of its characteristics is preferential linking, which means the probability that an edge links to a vertex is proportional to the degree of this vertex. It provides a great benefit to the interaction and information exchange in DE's population. The nodes with better fitness can have a greater influence on other inferior nodes, while the nodes with worse fitness have a lower chance to participate in the solution generation process. Hence, to further enhance DE's robustness and stability when the population size and problem scale change, a dynamic scale-free network-based differential evolution (DSNDE) is developed. DSNDE combines a scale-free network structure with DE and considers a dynamic adjustment of the parameters, which endows DE with the benefit of utilizing the neighborhood information provided by a scale-free network. Meanwhile, its parameters can be dynamically tuned during the optimization. A mutation operator called DE/old-centers/1 is carefully designed to adequately exploit the advantages of a scale-free network-based DE. In this way, DSNDE can concurrently avoid premature convergence and enhance its global optimality.

    This paper contributes to the communities of ANNs and evolutionary algorithms in the following aspects: 1) An effective DNM is trained by a novel learning algorithm, DSNDE, to improve its performance. For a given task, it can effectively enhance the training results of DNM, whether it is a prediction problem, a classification problem, or a function approximation problem. 2) A photovoltaic power forecasting problem, whose actual training and test data are collected from the natural environment, is used to assess the application value of the proposed training model. 3) Comparisons with nine state-of-the-art meta-heuristic algorithms, including the champion of the 2017 IEEE Congress on Evolutionary Computation (CEC2017) benchmark competition, EBOwithCMAR [30], reveal that DSNDE has superiority in improving the computational efficiency and prediction accuracy of DNM for various training tasks.

    The next section gives a brief introduction to a canonical DNM. A novel learning algorithm DSNDE is proposed in Section III. Sections IV and V present the experimental results of DSNDE and nine contrast learning algorithms for training DNM on 14 benchmark datasets and a photovoltaic power forecasting problem, respectively. Section VI concludes this paper.

    II. DENDRITIC NEURON MODEL

    DNM is composed of four layers [18], including a synaptic layer, a dendrite layer, a membrane layer, and a soma layer. The functions and details of each layer are described as follows.

    A. Synaptic Layer

    A synaptic layer refers to a structure that transmits impulses from one dendrite to another dendrite or to a neural cell. The information transfers in a feedforward manner. Equation (1) describes the connection of the ith (i = 1, 2, 3, ..., N) synaptic input to the jth (j = 1, 2, 3, ..., M) dendrite layer:

        Y_{i,j} = 1 / (1 + e^{−k(ω_{i,j} x_i − θ_{i,j})})    (1)

    where Y_{i,j} is the output from the ith synaptic input to the jth dendrite layer, x_i is the input signal normalized into [0, 1], and k is a user-defined positive constant whose value is problem-related. ω_{i,j} and θ_{i,j} are the corresponding weight and threshold, respectively. They are the targets to be optimized by learning algorithms. The population to be trained is formulated as follows:

        X_i = [ω_{1,1}, ..., ω_{N,M}, θ_{1,1}, ..., θ_{N,M}],  i = 1, 2, ..., N_p    (2)

    where X_i (i = 1, 2, ..., N_p) denotes the ith individual in the population and N_p is the population size.
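    Since every weight ω_{i,j} and threshold θ_{i,j} is optimized by the learning algorithm, each individual can be viewed as a flat vector of 2·N·M values. The following Python sketch shows one possible decoding of such an individual back into the two N × M matrices; the flat concatenated layout, the function name, and the sampling range are illustrative assumptions rather than the paper's exact encoding.

        import numpy as np

        # Decode one individual into the DNM weight and threshold matrices.
        # Assumed layout (illustrative): all N*M weights first, then all N*M thresholds.
        def decode_individual(x, n_inputs, n_dendrites):
            n_params = n_inputs * n_dendrites
            w = x[:n_params].reshape(n_inputs, n_dendrites)                    # omega_{i,j}
            theta = x[n_params:2 * n_params].reshape(n_inputs, n_dendrites)    # theta_{i,j}
            return w, theta

        # Example: a random population of Np individuals for N = 13 inputs and M = 10 branches.
        Np, N, M = 50, 13, 10
        population = np.random.uniform(-1.5, 1.5, size=(Np, 2 * N * M))
        w, theta = decode_individual(population[0], N, M)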

    B. Dendrite Layer

    The main function of a dendrite layer is to conduct a multiplicative operation on the outputs of the synaptic layers. When the information transfers from a synaptic layer to a dendrite layer, the connection can be in one of four states depending on the values of ω_{i,j} and θ_{i,j}. These states can be used to infer the morphology of a neuron by specifying the positions and synapse types of dendrites [15], [31].

    Case 1: A direct or excitatory connection (when 0 ≤ θ_{i,j} ≤ ω_{i,j}). In this state, the output is proportional to the input as the input varies from 0 to 1.

    Case 2: An inverse or inhibitory connection (when ω_{i,j} ≤ θ_{i,j} ≤ 0). In contrast to the previous state, the output is inversely proportional to the input as the input varies from 0 to 1.

    Case 3: A constant 1 connection (when θ_{i,j} ≤ ω_{i,j} ≤ 0 or θ_{i,j} ≤ 0 ≤ ω_{i,j}). The output is approximately 1 regardless of how the input varies from 0 to 1.

    Case 4: A constant 0 connection (when ω_{i,j} ≤ 0 ≤ θ_{i,j} or 0 ≤ ω_{i,j} ≤ θ_{i,j}). The output is approximately 0 regardless of how the input varies from 0 to 1.

    Since the inputs and outputs of the dendrites correspond approximately to 1 or 0, the multiplication operation is equivalent to the logic AND operation. The symbol π in Fig. 1 represents the multiplicative operator, and it is formulated in (3):

        Z_j = ∏_{i=1}^{N} Y_{i,j}    (3)

    where Z_j is the output of the jth dendrite branch.

    Fig. 1. Illustration of DNM. It consists of a synaptic layer, a dendrite layer, a membrane layer, and a soma layer.

    C. Membrane Layer

    A membrane layer is used to aggregate and process the information from all dendrite layers by a summation function that closely resembles a logic OR operation. It is worth noting that the dendritic inputs and outputs are approximately either 1 or 0. Thus, DNM is only suitable for two-class datasets and cannot be applied to multi-classification problems under the current structure. As the threshold of the soma layer is set to 0.5, the soma body will be activated once the summed input exceeds it, i.e., when at least one dendritic output is non-zero. The function is formulated as follows:

        V = Σ_{j=1}^{M} Z_j    (4)

    where V is the summation of the outputs of all dendrite layers.

    D. Soma Layer

    A soma layer represents a soma cell body. When the threshold is exceeded, the neuron is activated and the ultimate output of the entire model is calculated by a sigmoid function, which is shown as follows:

        O = 1 / (1 + e^{−k_s(V − θ_s)})    (5)

    where k_s is a positive constant and θ_s represents the threshold of the soma body. A sigmoid function outputs values ranging from 0 to 1. Therefore, it is often used for ANNs that require an output value located in the interval from 0 to 1 [32].

    Fig. 1 illustrates the structure of DNM. x_1, x_2, ..., and x_N are the inputs to each dendrite layer. They are transformed into signals according to the four connection states. Then, a multiplicative operation is conducted to multiply all the outputs from the synaptic layer. In the next step, these multiplied outputs are summed in the membrane layer. Finally, the obtained result is regarded as the input of the soma layer to generate the ultimate training output of DNM.
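    To make the data flow through the four layers concrete, the following Python sketch implements a forward pass under the equations above. The constants k, k_s, and θ_s, the matrix shapes, and the example values are illustrative assumptions; only the weight and threshold matrices are the quantities tuned by the learning algorithm.

        import numpy as np

        # Minimal DNM forward pass: synaptic sigmoid -> dendritic product ->
        # membrane summation -> soma sigmoid, following (1), (3), (4), and (5).
        def dnm_forward(x, w, theta, k=5.0, ks=5.0, theta_s=0.5):
            x = np.asarray(x).reshape(-1, 1)                   # inputs x_i in [0, 1], shape (N, 1)
            y = 1.0 / (1.0 + np.exp(-k * (w * x - theta)))     # synaptic layer, Y_{i,j}
            z = np.prod(y, axis=0)                             # dendrite layer, Z_j (AND-like)
            v = np.sum(z)                                      # membrane layer, V (OR-like)
            return 1.0 / (1.0 + np.exp(-ks * (v - theta_s)))   # soma layer output

        # Usage with random parameters for a 4-input, 3-branch model (illustrative only).
        rng = np.random.default_rng(0)
        w = rng.uniform(-1.5, 1.5, size=(4, 3))
        theta = rng.uniform(-1.5, 1.5, size=(4, 3))
        print(dnm_forward([0.2, 0.9, 0.4, 0.7], w, theta))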

    III. PROPOSED LEARNING ALGORITHM

    A. Scale-Free Network

    Scale-free networks commonly exist in nature. Many social and transportation structures exhibit a scale-free character, such as the world wide web and protein-protein interaction networks. There are already some studies trying to reveal their properties [33]–[35]. Among them, the Barabási-Albert (BA) model is the first model that generates random scale-free networks with a preferential attachment mechanism [36] and is the most widely used scale-free model in swarm and evolutionary algorithms. When building a scale-free network, m nodes are firstly initialized, and the network is constructed by connecting other nodes to the existing nodes. A network usually has two parameters, the degree k and the average degree ⟨k⟩. The degree k is the number of connections a node possesses to other nodes, and the average degree ⟨k⟩ is k averaged over all nodes in the network. In a BA model, the probability that a new node connects to an existing node is proportional to the degree of that existing node. The degree distribution of a BA model follows a power law of the form

        P(k) ∝ k^{−γ}    (6)

    where γ is 3 for a BA model [37]. Fig. 2 illustrates the degree distribution of a BA model when γ = 3. The power-law distribution allows the existence of some nodes with numerous links, which is reflected by the long tail, whereas the majority of nodes have only a few links. This phenomenon presents a strong heterogeneity in the network topology. Thus, a stable diversity of the population is ensured. That is the reason why we introduce a scale-free network to enhance the information exchange in DE.

    Fig. 2. Degree distribution of a BA model when γ=3. The long tail represents the existence of a few nodes with a very large number of connections.
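    The preferential attachment mechanism described above is easy to reproduce. The following Python sketch grows a BA-style network: it starts from a small fully connected core and attaches every new node to existing nodes with probability proportional to their degree. The function name, the core size, and the choice of two links per new node are illustrative assumptions (DSNDE only requires each newly connected node to keep at least two links).

        import random

        # Grow a scale-free network by preferential attachment (BA-style sketch).
        def build_ba_network(n_nodes, m0=4, links_per_node=2):
            edges = {(i, j) for i in range(m0) for j in range(i + 1, m0)}   # fully connected core
            degree = {i: m0 - 1 for i in range(m0)}
            for new in range(m0, n_nodes):
                degree[new] = 0
                targets = set()
                while len(targets) < links_per_node:
                    # roulette selection proportional to degree (preferential attachment)
                    total = sum(degree[v] for v in range(new))
                    r, acc = random.uniform(0, total), 0.0
                    for v in range(new):
                        acc += degree[v]
                        if acc >= r:
                            targets.add(v)
                            break
                for v in targets:
                    edges.add((v, new))
                    degree[v] += 1
                    degree[new] += 1
            return edges, degree

        edges, degree = build_ba_network(50)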

    B. Dynamic Scale-Free Network-Based Differential Evolution(DSNDE)

    In this part, DSNDE is introduced in detail. Firstly, a scale-free network is constructed based on the top m individuals with better fitness in the population. They are called the centers. As introduced above, the BA model is the most widely used scale-free model. Hence, in DSNDE, it is used to construct a network initialized by the centers. With m initial interconnected nodes, the remaining individuals are successively added to the network according to their fitness values, from the best to the worst, which endows the better individuals with a higher chance to link to the centers. P_i is the probability that a new individual builds a connection with the existing node i, and it is proportional to the degree k_i, shown as follows:

        P_i = k_i / Σ_j k_j    (7)

    where j ranges over all existing nodes. When an iteration starts, the centers are the first connection choices for other nodes, so their degree increases quickly. The increase in degree makes other individuals more inclined to establish connections with them. Consequently, they dominate the solution generation process and transmit more genetic information to others.

    Traditional DE has four main operators: initialization, mutation, crossover, and selection. In the mutation operator, DE usually applies one of four common strategies, namely DE/best/1, DE/rand/1, DE/best/2, and DE/rand-to-best/1, to generate mutant vectors. In DSNDE, owing to the implementation of a scale-free network, a new mutation operator called DE/old-centers/1 is proposed to fully use the neighborhood information. It is formulated as follows:

        V_i(t) = X_i(t) + F_i · (X_j(t) − X_i^{old} + X_{neighbor1}(t) − X_{neighbor2}(t))    (8)

    where X_i and V_i are the ith target vector and the corresponding mutant vector, respectively. X_{neighbor1} and X_{neighbor2} are selected from the neighborhood of X_i by a roulette wheel selection method based on their fitness. To avoid the situation in which an individual has only one link, the minimum number of links a node can maintain is set to two, which means a newly connected node links with at least two existing network nodes. The roulette wheel selection method is formulated as follows:

        p_i = R_i / Σ_j R_j    (9)

    where R_i represents the fitness rank of the corresponding individual in the neighborhood of the individual to be generated. It is inversely proportional to the fitness value in a minimization problem, which means the individual with better fitness has a higher chance to be selected into the mutation operation. Since better individuals carry more useful information, the roulette wheel selection method can share this information with the population more effectively. X_i^{old} is the individual's vector from the former generation; it provides search history to the current population. X_j is randomly selected from the centers, which shares the information of better individuals.
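    The following Python sketch illustrates the DE/old-centers/1 idea in (8) and the rank-based roulette wheel in (9): the two neighbors are drawn from the neighborhood of X_i with probability proportional to their fitness rank, X_j comes from the centers, and X_i^{old} is taken from the previous generation. All function and variable names are illustrative assumptions.

        import numpy as np

        # Rank-based roulette wheel: smaller fitness is better (minimization),
        # and a better rank receives a larger selection weight R_i.
        def roulette_pick(neighbour_idx, fitness, rng):
            order = np.argsort([fitness[n] for n in neighbour_idx])    # best first
            weights = np.empty(len(neighbour_idx))
            weights[order] = np.arange(len(neighbour_idx), 0, -1)      # ranks R_i
            p = weights / weights.sum()
            return neighbour_idx[rng.choice(len(neighbour_idx), p=p)]

        # DE/old-centers/1 mutant vector, following (8).
        def mutate_old_centers(i, pop, pop_old, centers, neighbours, fitness, F, rng):
            n1 = roulette_pick(neighbours[i], fitness, rng)
            n2 = roulette_pick(neighbours[i], fitness, rng)
            j = rng.choice(centers)                                    # a random center
            return pop[i] + F * (pop[j] - pop_old[i] + pop[n1] - pop[n2])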

    The control parameters F and Cr are updated as follows:

        F_i = randn(F_P, 0.1),  Cr_i = randn(Cr_P, 0.1)    (10)

        F_P = S̄_F,  Cr_P = S̄_Cr    (11)

    where F_i and Cr_i refer to the parameter set of the ith individual. randn denotes a normal distribution with standard deviation 0.1 and with mean values F_P and Cr_P for F and Cr, respectively. S_F and S_Cr denote the sets of the F and Cr values of the centers, and S̄ is the arithmetic mean of the elements of S.
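    A small Python sketch of this adaptation step, assuming the centers' current F and Cr values are stored in two arrays; the clipping range is an added assumption to keep the drawn values in a sensible interval.

        import numpy as np

        # Redraw every individual's F_i and Cr_i around the mean values of the centers,
        # following (10) and (11).
        def adapt_parameters(F_centers, Cr_centers, n_individuals, rng):
            FP, CrP = np.mean(F_centers), np.mean(Cr_centers)
            F_new = np.clip(rng.normal(FP, 0.1, n_individuals), 0.0, 1.0)
            Cr_new = np.clip(rng.normal(CrP, 0.1, n_individuals), 0.0, 1.0)
            return F_new, Cr_new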

    Moreover, a dynamic mechanism is developed for adjusting the size of the centers in DSNDE. If m is preset to a constant value, it cannot maintain a stable optimization performance when the population size and problem scale vary. Therefore, in DSNDE, the size of the centers varies dynamically depending on whether the best solution is improved. If the best optimum of the population gets improved L_m times in total and m is greater than 4, the size of the centers decreases. Otherwise, if no improvement of the best optimum occurs in a total of L_m times, the size of the centers increases until it reaches a maximum value of 0.2 × N_p. In the former case, successive improvements indicate that the whole population is in an exploration phase; thus, decreasing m can enhance the influence of the best individuals to accelerate the convergence speed. On the other hand, stagnation means that the generation process is stuck in a local optimum, and adding more nodes to the centers can increase the diversity of the exchanged information and the probability of the population jumping out of the local optimum. The flowchart of DSNDE is presented in Algorithm 1, where Best refers to the best individual found so far. Following this flowchart, the time complexity of DSNDE can be estimated as follows, with D representing the dimension of the problem. Initializing the population costs O(N_p × D). Evaluating the individuals costs O(N_p) + O(D). Generating the mutant vectors costs O(N_p × D), and constructing the scale-free network and selecting the nodes that participate in mutation add further per-iteration costs. The crossover and selection operators cost O(N_p). Overall, DSNDE is computationally efficient.
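    A compact Python sketch of this rule, assuming two counters that track consecutive improvements and stagnations of the best solution; the counter bookkeeping is an illustrative assumption.

        # Shrink or grow the set of centers depending on the recent progress of the Best.
        def adjust_center_size(m, improved_streak, stagnant_streak, Lm, Np):
            if improved_streak >= Lm and m > 4:
                m -= 1                      # steady progress: strengthen the best individuals
                improved_streak = 0
            elif stagnant_streak >= Lm and m < 0.2 * Np:
                m += 1                      # stagnation: add diversity to escape local optima
                stagnant_streak = 0
            return m, improved_streak, stagnant_streak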

    Algorithm 1 DSNDE

    Initialize a population of N_p individuals randomly;
    Calculate the fitness of each individual;
    Set the generation counter t = 1;
    Initialize F_0 = 0.5 and Cr_0 = 0.9;
    while the maximum number of iterations is not reached do
        Use a BA model to build a scale-free network based on the centers;
        for i = 1 : N_p do
            Decide X_{neighbor1}(t) and X_{neighbor2}(t) by a roulette wheel selection method from the neighborhood of X_i(t);
            Randomly choose X_j(t) from the centers;
            Generate the mutant vector via the DE/old-centers/1 mutation operator: V_i(t) = X_i(t) + F_i · (X_j(t) − X_i^{old} + X_{neighbor1}(t) − X_{neighbor2}(t));
            Generate the trial vector U_i(t) by crossover of V_i(t) and X_i(t) with rate Cr_i;
            Select the individual for the next generation: X_i(t+1) = U_i(t) if f(U_i(t)) < f(X_i(t)), otherwise X_i(t+1) = X_i(t);
        end for
        Store the current population as X^{old}: X^{old} = X(t);
        Calculate F_P = S̄_F and Cr_P = S̄_Cr;
        Update F_i and Cr_i: F_i = N(F_P, 0.1), Cr_i = N(Cr_P, 0.1);
        if the Best gets improved in a total of L_m iterations and m > 4 then
            m = m − 1;
        else if the Best does not get improved in a total of L_m iterations and m < 0.2 × N_p then
            m = m + 1;
        end if
        t = t + 1;
    end while

    IV. EXPERIMENTS ON BENCHMARK DATASETS

    In this section, five classification, six function approximation, and three prediction benchmark datasets are utilized to verify the performance of the DNM trained by DSNDE and nine meta-heuristic algorithms [38]. The comparison algorithms are listed as follows:

    1) BBO [39]: Biogeography-based optimization algorithm;

    2) DE [28]: Differential evolution;

    3) DEGoS [40]: DE with global optimum-based search strategy;


    4) JADE [41]: Adaptive DE with optional external archive;

    5) SHADE [42]: Success-history based adaptive DE;

    6) CJADE [43]: Chaotic local search-based JADE;

    7) NDi-DE [44]: Neighborhood and direction information based DE;

    8) EBLSHADE [45]: Success-history based adaptive DE with linear population size reduction (LSHADE) with a novel mutation strategy;

    9) EBOwithCMAR [30]: Effective butterfly optimizer with covariance matrix adapted retreat phase.

    Tables I–III list the details of these benchmark datasets and their abbreviations. These datasets are named CF1–CF5 (classification functions), AF1–AF6 (approximation functions), and PF1–PF3 (prediction functions) for convenience. The classification datasets are acquired from the University of California at Irvine Machine Learning Repository [46]. Table I summarizes their numbers of attributes, training samples, test samples, and classes.

    TABLE I DETAILS OF THE CLASSIFICATION DATASETS

    TABLE II DETAILS OF THE FUNCTION APPROXIMATION DATASETS

    TABLE III DETAILS OF THE PREDICTION DATASETS

    Table II lists the function expressions of the 1-D sigmoid, 1-D cosine with one peak, 1-D sine with four peaks, 2-D sphere, 5-D Rosenbrock, and 2-D Griewank functions, as well as the number and value range of the training and test samples.

    The details of the three prediction datasets are given in Table III, involving the numbers of training and test samples. The Mackey-Glass dataset is derived from a nonlinear time-delay differential equation shown as follows:

        dx(t)/dt = α·x_τ / (1 + x_τ^n) − β·x(t)    (12)

    where α, β, τ, and n are real numbers, and x_τ is the value of the variable x at time t − τ. The Box-Jenkins time series data and the EGG data are acquired from [47] and [48], respectively.
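    For readers who wish to reproduce such a series, the sketch below integrates (12) with a simple Euler scheme. The parameter values (α = 0.2, β = 0.1, τ = 17, n = 10) are the commonly used chaotic setting and are assumptions here, not values taken from the paper.

        import numpy as np

        # Generate a Mackey-Glass time series by Euler integration of (12).
        def mackey_glass(length=1000, alpha=0.2, beta=0.1, tau=17, n=10, dt=1.0, x0=1.2):
            history = int(tau / dt)                      # number of delayed steps
            x = np.full(length + history, x0)
            for t in range(history, length + history - 1):
                x_tau = x[t - history]                   # delayed value x(t - tau)
                x[t + 1] = x[t] + dt * (alpha * x_tau / (1.0 + x_tau ** n) - beta * x[t])
            return x[history:]

        series = mackey_glass()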

    The population size N_p and the maximum number of iterations for all contrast learning algorithms are set to 50 and 250, respectively. L_m is set to 5. The corresponding parameter sets of each applied algorithm are given in Table IV. They are set according to the related references to ensure the best performance of each algorithm. Each benchmark dataset is run 51 times to reduce random errors. All experiments are implemented on a PC with Windows 10 OS, a 3.60 GHz AMD Ryzen 5 3600 6-core CPU, and 16 GB of RAM with MATLAB R2018a. In Table V, acceptable user-defined parameter settings are summarized; they can be found in [15]. M is the number of dendrite layers, k and k_s are predefined parameters, and θ_s is the threshold value.

    The experimental results are presented in Tables VI and VII, in which the mean squared error (MSE) is used to calculate the output error of DNM for a given solution X_i. It is formulated in (13).

    TABLE IV PARAMETER SETTING OF ALGORITHMS

    TABLE V USER-DEFINED PARAMETER SETTINGS OF DNM

        MSE(X_i) = (1/T) Σ_{t=1}^{T} (y_t − ŷ_t)^2    (13)

    where T is the total number of training samples, and y_t and ŷ_t are the target and actual output of the tth sample, respectively.

    To precisely detect the significant difference between any two algorithms, a non-parametric statistical analysis method, the Wilcoxon rank-sum test, is implemented [49], [50]. In this study, a significance level of 5% is set, which means that if the p-value is less than 0.05, the two compared algorithms are considered significantly different and the former outperforms the latter. Tables VIII and IX list the p-values between DSNDE and the corresponding learning algorithms. For a given problem, the MSE of DSNDE is highlighted when it significantly outperforms all other contrast algorithms; otherwise, the corresponding algorithms are highlighted. The symbols +/≈/– comprehensively present the statistical results of DSNDE versus its peers, indicating that DSNDE performs significantly better (+), worse (–), or neither significantly better nor worse (≈) than the corresponding algorithm. According to these statistical results, the numbers of times that DSNDE wins the others are 11 (BBO), 12 (DE), 11 (DEGoS), 9 (JADE), 10 (SHADE), 10 (CJADE), 12 (NDi-DE), 10 (EBLSHADE), and 10 (EBOwithCMAR) out of 14 benchmark datasets. DSNDE is significantly better than all other comparison algorithms on eight datasets. The proposed learning algorithm DSNDE shows overwhelming advantages over all contrast algorithms, including the champion of the CEC2017 benchmark competition, EBOwithCMAR [30]. However, it should be noted that DSNDE does not obtain the best performance on a few datasets, including CF3, CF5, AF2, AF3, and AF4. On CF3, CF5, AF2, and AF4, the statistical test results show that all competitors achieve similar performances. The performance of DSNDE is not satisfactory on AF3, where seven competitors significantly outperform it. AF3 is an approximation dataset of the sine function, and it is not a complex function. According to the no free lunch theorem, no algorithm can perform the best for all problems [51]. The reason for DSNDE's underperformance may be the special structure of the dynamic scale-free network. As we want to reduce the impact of poor individuals on the whole population, the information exchange in DSNDE is directed and limited. However, for some simple problems, all individuals can find high-quality solutions and deliver correct search information. In this case, the search efficiency of DSNDE may not be as good as that of its peers. But its prominent performance on the other datasets reveals the success of the proposed model.
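    A minimal Python sketch of the significance test described above, using SciPy's two-sample Wilcoxon rank-sum test on the 51 per-run MSE values of two algorithms; the arrays below are random placeholders, not results from the paper.

        import numpy as np
        from scipy.stats import ranksums

        rng = np.random.default_rng(1)
        mse_dsnde = rng.normal(0.010, 0.002, 51)    # 51 runs of DSNDE (placeholder values)
        mse_peer = rng.normal(0.013, 0.002, 51)     # 51 runs of a peer algorithm (placeholder values)

        stat, p_value = ranksums(mse_dsnde, mse_peer)
        significantly_different = p_value < 0.05    # 5% significance level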

    TABLE VI EXPERIMENTAL RESULTS OBTAINED BY DSNDE, BBO, DE, DEGOS AND JADE ON 14 DATASETS

    TABLE VII EXPERIMENTAL RESULTS OBTAINED BY SHADE, CJADE, NDi-DE, EBLSHADE AND EBOWITHCMAR ON 14 DATASETS

    TABLE VIII WILCOXON RANK-SUM TEST RESULTS (P -VALUES) OBTAINED BY DSNDE, BBO, DE, DEGOS AND JADE ON 14 DATASETS

    TABLE IX WILCOXON RANK-SUM TEST RESULTS (P -VALUES) OBTAINED BY SHADE, CJADE, NDi-DE, EBLSHADE AND EBOWITHCMAR ON 14 DATASETS

    Some matrix diagrams are shown in Fig. 3 to directly display the changes of the weight ω and the threshold θ from initialization to the end of training by DSNDE. For the heart dataset, DNM only has 200 parameters (including 100 weight values and 100 threshold values) to be trained, which indicates that the required computing resources of DNM are far less than those of an ANN. Figs. 4 and 5 exhibit the classification accuracy, the error value, and the receiver operating characteristic (ROC) curves of two classification datasets. The ROC curve is the average of the sensitivity over all possible specificity values [52]. The area under the ROC curve (AUC) can effectively summarize the accuracy of the classification. It takes a value from 0.5 to 1 (0.5 represents a random classification), and a value closer to 1 means the classification is more accurate. It can be observed that DSNDE obtains the best performance on accuracy and error values. Especially on the heart dataset, DSNDE overwhelmingly outperforms its peers. The AUC of DSNDE is 0.985 on the cancer dataset and 0.842 on the heart dataset; both are higher than the AUCs of the other algorithms. All these results demonstrate the remarkable effectiveness and efficiency of DSNDE.

    Fig. 3. Changes of weight ω and threshold θ on heart dataset.

    Fig. 4. Analysis of classification dataset: Cancer.

    Fig. 5. Analysis of classification dataset: Heart.

    V. EXPERIMENTS ON PHOTOVOLTAIC POWER FORECASTING

    The performance of DSNDE on benchmark datasets can directly exhibit its pros and cons compared with its peers, but the practical value of DSNDE still needs to be further validated by real-world challenges. Thus, in this section, an attempt is made to apply the DSNDE-trained DNM to a photovoltaic power forecasting problem, which is one of the most important research issues within the smart grid. By proposing a forecasting model based upon DNM with the aid of DSNDE, the accuracy of the forecasting results is greatly improved. The actual training and test datasets for forecasting are taken from a photovoltaic power plant located in Gansu Province, China, with a sampling size of 8000 and a time interval of 15 minutes [18]. To comprehensively estimate the forecasting errors obtained by each learning algorithm, the dataset is evenly divided into 10 sets for cross-validation, and the sample size of each set is 800 [53]. Nine groups of contrast experiments are conducted by considering training sets with 800, 1600, ..., 7200 samples, respectively. The subsequent 800 samples are used as the test set. Each group is repeatedly run six times to ensure independence and effectiveness. Hence, each algorithm is performed 54 times.

    To statistically measure the performance of the tested learning methods and facilitate the comparison of the approaches, a mean absolute percentage error (MAPE) and a root mean square error (RMSE) are introduced as follows:

        MAPE = (100% / T) Σ_{t=1}^{T} |y_t − ŷ_t| / y_t    (14)

        RMSE = sqrt( (1/T) Σ_{t=1}^{T} (y_t − ŷ_t)^2 )    (15)

    where the meaning of each variable here is the same as that defined in (13).
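    A short Python sketch of these two error measures under the standard definitions in (14) and (15); treat them as common stand-ins rather than a verbatim reproduction of the paper's implementation.

        import numpy as np

        def mape(y_true, y_pred):
            y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
            return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))   # percentage error

        def rmse(y_true, y_pred):
            y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
            return np.sqrt(np.mean((y_true - y_pred) ** 2))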

    Table X gives the comprehensive comparison results of the nine groups and an average value of RMSE and MAPE. It can be observed that the forecasting accuracy decreases when the size of the training set increases from 800 to 3200. Most algorithms obtain their worst and best performances on MAPE at sizes of 4800 and 6400, respectively. This result suggests that the most suitable ratio of the test set to the training set is 1:8 for the photovoltaic power forecasting problem, while a ratio of 1:6 is not applicable. According to the average values, DSNDE obtains the best performance on both RMSE and MAPE, which fully illustrates the practical value of DSNDE.

    VI. CONCLUSIONS

    In this paper, we propose a dynamic scale-free network-based differential evolution to train the parameters of DNM. The scale-free network structure helps DE enhance the information exchange among individuals and improves its overall performance. Experiments on 14 benchmark datasets and a photovoltaic power forecasting problem are conducted to verify its effectiveness in training the parameters of DNM. DSNDE is compared with nine powerful meta-heuristic algorithms, including the champion of the CEC2017 benchmark competition, EBOwithCMAR. The statistical results show that DSNDE outperforms its peers on most benchmark datasets and attains the highest accuracy on the photovoltaic power forecasting problem. In our future research, we wish to propose a population adaptation approach for DSNDE, which has the potential to further improve the training efficiency of DNM. Moreover, the proposed algorithm can be applied to address the semi-supervised classification issue [38].

    TABLE X COMPREHENSIVE COMPARISON RESULT OF RMSE AND MAPE (%)
