
    A Self-Organizing RBF Neural Network Based on Distance Concentration Immune Algorithm

    2020-02-29 14:21:38
    IEEE/CAA Journal of Automatica Sinica, 2020, Issue 1

    Junfei Qiao, Fei Li, Cuili Yang, Wenjing Li, and Ke Gu

    Abstract—The radial basis function neural network (RBFNN) is an effective algorithm in nonlinear system identification. How to properly adjust the structure and parameters of the RBFNN is quite challenging. To solve this problem, a distance concentration immune algorithm (DCIA) is proposed in this paper to self-organize the structure and parameters of the RBFNN. First, the distance concentration algorithm, which increases the diversity of antibodies, is used to find the global optimal solution. Secondly, the information processing strength (IPS) algorithm is used to avoid the instability that is caused by randomly splitting or deleting hidden layer neurons. Moreover, to improve the forecasting accuracy and reduce the computation time, the sample with the most frequent occurrence of maximum error is proposed to regulate the parameters of the new neuron. In addition, a convergence proof of the self-organizing RBF neural network based on the distance concentration immune algorithm (DCIA-SORBFNN) is provided to guarantee the feasibility of the algorithm. Finally, several nonlinear functions are used to validate the effectiveness of the algorithm. Experimental results show that the proposed DCIA-SORBFNN achieves better nonlinear approximation ability than that of relevant state-of-the-art competitors.

    I. INTRODUCTION

    THE radial basis function neural network (RBFNN) has been extensively used to model and control nonlinear systems due to its universal approximation ability [1]-[3]. In addition, it is able to approximate any nonlinear function to any desired accuracy [1] when there are enough hidden neurons. The desired approximation accuracy is primarily determined by the network size and the parameters of the RBFNN.

    In order to adjust the network parameters, gradient-based methods are proposed in [4], [5]. Among them, the error back propagation (BP) algorithm is popular and widely used [6]. However, the BP algorithm still has many shortcomings, such as time-consuming convergence and poor global-search capability [7]. Compared with the BP algorithm, the recursive least squares (RLS) algorithm has a better convergence rate and accuracy [8]. However, RLS involves more complicated mathematical operations and requires more computational resources. In addition, the local minimum problem is not solved [9]. To address this, a variable-length sliding window blockwise least squares (VLSWBLS) algorithm is proposed by Jiang and Zhang [10]. VLSWBLS outperforms the RLS with forgetting factors. Peng et al. [11] introduced a continuous forward algorithm (CFA) to optimize the parameters of RBFNNs. This method achieves a marked reduction in memory usage and computational complexity. Qiao and Han [12] proposed a forward-only computation (FOC) algorithm to adjust the parameters. Unlike the traditional forward and backward computation, the FOC algorithm simplifies the calculation and decreases computational complexity. However, descriptions of how to automatically adjust the network size are seldom seen in the literature mentioned above.

    In fact, a proper structure size can avoid network overfitting and achieve a desired performance. In recent years, many studies have focused on the structure design of the RBFNN. Huang et al. [13] proposed a sequential learning method known as the growing and pruning RBF (GAP-RBF) algorithm. In addition, a more advanced model based on the GAP-RBF algorithm (GGAP-RBF) was advocated in [14]. The results show that an RBFNN with a relatively compact structure requires less computational time. However, both algorithms require a complete set of samples for the training process. Generally, it is impossible for designers to obtain a priori knowledge of the training samples before implementation [15]. To solve this problem, an information-oriented algorithm (IOA) [16] was proposed to self-organize the RBFNN structure. The IOA is used to calculate the information processing strength (IPS) of hidden neurons. In addition, it is a computational technique that identifies hidden independent sources from multivariate data. However, most of these self-organizing RBF (SORBF) neural networks adopt learning algorithms that are based on the gradient descent (GD) algorithm, which may easily become trapped in a local optimum [17].

    To optimize the parameters and network size of an RBFNN simultaneously, evolutionary algorithms (EAs) have been studied to train the RBFNN [18] to achieve good robustness and global optimization capability. For example, Feng [19] proposed an SORBF neural network based on the particle swarm optimization (PSO) algorithm. Alexandridis et al. [20] developed a novel algorithm that used fuzzy means and PSO algorithms to train the RBFNN. The results show that the proposed SORBF neural networks obtain higher prediction accuracy and a smaller network structure. Moreover, an adaptive-PSO-based self-organizing RBF neural network (APSO-SORBF) was proposed to construct the RBFNN in [21]. This algorithm determines the optimal parameters and network size of the RBFNN simultaneously for time series prediction problems. The simulation results illustrate that APSO-SORBF performs better than other PSO-based RBFNNs in terms of forecast accuracy and computational efficiency [21]. Compared with other EA algorithms, the PSO algorithm has a faster convergence rate, but it easily becomes trapped in a local optimum, which affects the calculation accuracy [22]. To solve this problem, the immune algorithm (IA) was proposed [23]. The IA is a highly parallel, distributed, and adaptive system whose diversity-maintaining mechanism can be used to maintain the diversity of solutions and overcome the "premature" problem of a multi-peak function. Moreover, to break away from a local optimal solution, increasing the diversity of the artificial immune algorithm is very important. In this paper, the distance concentration immune algorithm (DCIA) is proposed to increase the population diversity. In comparison with the artificial immune algorithm based on information entropy (IEIA), this algorithm does not require setting any threshold. The results show that the proposed DCIA can significantly increase the global search capability.

    According to the above analysis, it is necessary and effective for the structure and parameters of the neural network to be adjusted by DCIA instead of PSO [21]. However, the structure of APSO-SORBF [21] is adjusted by increasing and decreasing the number of hidden layer neurons randomly, so the network is unstable. Thus, how to adjust the structure steadily and present a theoretical analysis of algorithm convergence is quite challenging.

    To solve the problems mentioned above, the IPS [16] of the hidden neurons is adopted to determine which hidden layer neurons need to be split or pruned when the network of an antibody should be updated. However, in [16] the last input sample is used to adjust the parameters of the hidden layer neuron. Then, the computational accuracy is affected, and the calculation time is lengthened. Based on the analysis above, the information-oriented error compensation algorithm (IOECA) is proposed in this paper. In this algorithm, the input sample with the most frequent occurrence of maximum error is used to set up the parameters of the new hidden neurons. Then the accuracy is increased. In addition, in order to ensure the stability of the algorithm, the convergence analysis of the DCIA-SORBF neural network is provided.

    The main contributions of this paper are summarized as follows:

    1) A DCIA algorithm is adopted to improve the diversity of antibodies. The distance concentration is used to determine the diversity of the artificial immune algorithm. The DCIA algorithm tends to escape local minima and finally find the global minimum. Consequently, it has a higher accuracy than that of many traditional EAs.

    2) The immune algorithm is used to adjust the network and parameters according to [16]. However, in [16], hidden layer neurons are increased or deleted randomly to adjust the network structure, which causes instability of the system. To solve this, the IPS is used to identify the hidden layer neurons that need to be deleted or increased, according to its ability to identify hidden independent sources from multivariate data.

    3) In [16], five samples are calculated simultaneously, and the last one is used to update the parameters of the hidden layer neurons, so the system has a large amount of calculation and its computational efficiency is not high enough. Therefore, all samples are selected in this paper. Furthermore, the input sample with the most frequent occurrence of maximum error is used to set up the parameters of the new hidden neurons. Therefore, the parameters and structure of the RBFNN can be optimized simultaneously by DCIA-SORBF. Compared with other algorithms, this method has a greater accuracy with a compact structure, and the stability can be ensured.

    4) The convergence analysis is provided. Because the convergence of the algorithm is necessary and very important for many actual engineering problems, a convergence analysis of the DCIA-SORBF neural network is presented, and its effectiveness is verified via simulations. The results of multiple experiments testify to the feasibility and efficiency of the DCIA-SORBF algorithm.

    The rest of this article is organized as follows: In Section II, brief reviews of the RBF neural network and the immune algorithm system model are introduced. In Section III, the details of the DCIA algorithm and DCIA-SORBF are described. In Section IV, the convergence analyses of DCIA-SORBF are provided. In Section V, four experiments are conducted. Finally, the conclusions are presented.

    II. PROBLEM FORMULATION

    A. RBF Neural Network

    The RBF neural network is a typical feed-forward neural network [24], and it is generally composed of three layers: the input layer, hidden layer and output layer. The structure of the RBF is shown in Fig. 1.

    The structure of the RBF neural network is described as follows:

    1) The input layer. In this layer, an n-dimensional input vector x = (x1, x2, …, xn) is imported to the network, where n is the number of neurons in the layer.

    2) The hidden layer. In this layer, the input variables are converted to a high-dimensional space by a nonlinear transformation. There are many possible activation functions, such as the Gaussian function, the sigmoid function, and so on. Here, the Gaussian function is used as the activation function. The output is defined as

    φ_K(t) = exp(−‖x(t) − μ_K(t)‖² / (2σ_K²(t)))

    Among them, φ_K(t) is the output of the Kth hidden neuron at time t, x(t) is the input sample matrix at time t, μ_K(t) is the center of the Kth hidden layer neuron at time t, and σ_K²(t) is the width of the Kth hidden layer neuron at time t. ‖x(t) − μ_K(t)‖ is the Euclidean distance between x(t) and μ_K(t). It is noted that the widths and centers of the activation function are not fixed. They are randomly initialized and then optimized by DCIA.

    3) The output layer. In this layer, there is only one node, which is the output of the neural network. The output is given as

    y(t) = Σ_{j=1}^{K} ω_j φ_j(t)

    where ω_j is the connection weight between the jth neuron in the hidden layer and the network output, and y is the network output. Here, ω_j is randomly initialized.
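To make the two-layer computation above concrete, the forward pass can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and array shapes are chosen here for clarity.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Forward pass of a single-output RBF network.

    x:       (n,) input vector
    centers: (K, n) Gaussian centers mu_j
    widths:  (K,) Gaussian widths sigma_j
    weights: (K,) output weights omega_j
    """
    # Gaussian activation: phi_j = exp(-||x - mu_j||^2 / (2 sigma_j^2))
    dist2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-dist2 / (2.0 * widths ** 2))
    # Linear output layer: y = sum_j omega_j * phi_j
    return float(weights @ phi)

# A hidden neuron centered exactly on the input fires with phi = 1,
# so the output equals its weight
y = rbf_forward(np.array([0.5, 0.5]),
                centers=np.array([[0.5, 0.5]]),
                widths=np.array([1.0]),
                weights=np.array([2.0]))
```

In the DCIA-SORBF setting, `centers`, `widths` and `weights` are exactly the quantities encoded in one antibody and optimized by the immune algorithm.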

    B. Immune Algorithm System

    The artificial immune system is primarily based on the information processing mechanism of the biological immune system [25]. The resulting algorithms are used to solve complex problems. To describe the algorithm better, several common immunology terms in the artificial immune system are defined as follows:

    Definition 1 (Antigen): An antigen refers to the constrained problem to be solved. It is defined as the objective function.

    Definition 2 (Antibody): An antibody refers to a candidate solution of the problem. It is described as a candidate solution that corresponds to the objective function.

    Definition 3 (Affinity): An affinity refers to the adaptive measure of a candidate solution. It relates the candidate solution to the corresponding value of the target function.

    The antibodies that have a high affinity in the immune system can achieve a high rate of cloning. To maintain the diversity of antibodies, the rate of cloning P(x_i) can be expressed as follows [25]:

    where D(x_i) is the affinity function of the antibody, C(x_i) is the concentration of the antibody, Ma(t) is the parent generation group, and a = o + s, where a is the size of the parent generation group and o is the elite number. E_i(t) is the elite solution of the population, and it is selected according to the affinity A(x_i), which is sorted in ascending order. The immune cells R_i(t), which are used to reflect the diversity of individuals, are selected from o + 1 to a according to P(x_i), which is listed in descending order. To update the optimal individual, the method is expressed as

    where D(·) is the antibody affinity function, G(t) is the minimum value of the population affinity, and x_g(t) is the global best solution at time t. Subsequently, the crossover and mutation operations can proceed.
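The two-stage selection described above (elites by affinity, then diversity-preserving cells by cloning rate) can be sketched as follows. The closed form of P(x_i) did not survive extraction, so the sketch assumes P is proportional to affinity over concentration, purely for illustration; only the elite/diversity split and the sort orders come from the text.

```python
import numpy as np

def select_parents(affinity, concentration, o, a):
    """Split a parent group of size a into o elites plus (a - o) diverse cells.

    affinity:      (m,) affinity A(x_i); lower is better here
    concentration: (m,) antibody concentration C(x_i)
    The cloning rate P = D / C below is an assumed stand-in
    (the paper's exact formula was lost in extraction).
    """
    elite_idx = np.argsort(affinity)[:o]        # affinity in ascending order
    D = 1.0 / (1.0 + affinity)                  # assumed affinity function D(x_i)
    P = D / concentration
    rest = np.setdiff1d(np.arange(len(affinity)), elite_idx)
    diverse_idx = rest[np.argsort(-P[rest])][: a - o]   # P in descending order
    return elite_idx, diverse_idx

# antibody 0 is the single elite; of the rest, the low-concentration,
# high-affinity cells are cloned first
elite, diverse = select_parents(np.array([0.1, 0.5, 0.2, 0.9]),
                                np.array([1.0, 1.0, 2.0, 1.0]), o=1, a=3)
```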

    In fact, antibody diversity is very important to the immune algorithm. It is closely related to the global searching ability of the algorithm. For that reason, the R-bit comparison method, which reflects the similarity degree between antibodies, was proposed. However, such an approach is time consuming and has a low calculation accuracy. To avoid this problem, Chun proposed an information entropy-based artificial immune algorithm (IEIA) [26] whose diversity requirement can be satisfied. However, the constant factors of this algorithm, which influence the convergence performance, are determined by experience. In addition, the calculation of different antibodies is similar. In view of the above problems, Zheng et al. [27] proposed artificial immunity based on the Euclid distance algorithm (EDAI). This algorithm calculates the Euclid distance between two antibodies, and if it reaches a certain threshold, the similarity of the antibodies can be determined. However, the problem of threshold setting must be solved, which is a tedious process. In light of the above problems, the distance concentration immune algorithm (DCIA) is proposed to increase the diversity. This algorithm can effectively escape from a local optimal solution without requiring any threshold to be set. The results show that the calculation accuracy is improved effectively.

    III. DCIA-SORBF NEURAL NETWORK

    The performance of the RBFNN primarily relies on its structure and parameters. To optimize them simultaneously, the distance concentration artificial immune algorithm (DCIA) is used. However, if the network structures of the immune cells are randomly increased and decreased [21], system instability will occur. To avoid this situation, the information-oriented error compensation algorithm (IOECA) is proposed, with the aim of obtaining a compact structure of the RBFNN. Subsequently, the accuracy of the algorithm is improved and its stability is guaranteed.

    A. DCIA Algorithm

    To reflect the diversity of the antibodies, DCIA [25] is used to calculate the concentration. The greater the distance between the antibodies is, the smaller the distance concentration will be. The DCIA algorithm, which is conducive to rapidly obtaining the optimal solution set, ensures the diversity of the antibodies so that the search does not become trapped in a local optimum. The expressions are

    where x_i is the ith antibody and C(x_i) is the distance concentration of antibody x_i. d is the sum of the distances between the antibodies in the population, d_i is the sum of the distances between the ith antibody and the other antibodies, and m is the size of the population. In addition, the affinity function D(x_i) is another factor that determines the probability of cloning. The formulas are

    where f(x_i(t)) is the fitness of antibody x_i(t). The detailed process is shown in Algorithm 1.
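The distance-concentration computation can be sketched as follows. The exact closed form of C(x_i) did not survive extraction, so the sketch assumes C(x_i) = 1 − d_i/d, which matches the stated monotonicity (an antibody farther from the rest of the population gets a smaller concentration); only d and d_i are taken from the text.

```python
import numpy as np

def distance_concentration(pop):
    """Distance concentration of each antibody in a population.

    pop: (m, n) array, one antibody per row.
    d_i = sum of Euclidean distances from antibody i to all others,
    d   = sum of all d_i.
    C(x_i) = 1 - d_i / d is an assumed normalization consistent with
    'greater distance between antibodies -> smaller concentration'.
    """
    diff = pop[:, None, :] - pop[None, :, :]
    dist = np.sqrt(np.sum(diff ** 2, axis=2))   # pairwise Euclidean distances
    d_i = dist.sum(axis=1)
    d = d_i.sum()
    return 1.0 - d_i / d

# the outlier at 10.0 is farthest from the rest, so its concentration is lowest
C = distance_concentration(np.array([[0.0], [0.0], [10.0]]))
```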

    Remark 1: In the immune algorithm, the distance concentration, which reflects the diversity of antibodies, is directly calculated in the DCIA algorithm without setting a threshold, and the antibodies with smaller affinity and lower concentration can be stored in the elite archives.

    B. DCIA-SORBF Neural Network

    To adjust the network size and the parameters during the training process, the DCIA-SORBFNN is proposed in this section. The proposed DCIA-SORBFNN algorithm is summarized in Algorithm 2. From Fig. 2, we can see that an antibody is a complete RBF neuron network as shown in Fig. 2(a) (i.e., the RBF centers, the widths of RBF neuron, and the output weights).

    Firstly, the population is randomly initialized. Thus, different antibodies have different network sizes and parameters. The initialized variables are given by

    where A is the antibody population and A_i is the ith antibody, which has K hidden layer neurons. μ_i,K, σ_i,K, and ω_i,K are the center, width and output weight of the Kth hidden neuron in the ith antibody. To self-organize the network, K is a random integer. For the sake of improving the accuracy of the algorithm, the error criterion is selected as the fitness value of each antibody. The proposed expression is

    where e_i(t) is the root-mean-square error (RMSE) of the ith antibody, T is the number of training samples, and y_i(t) and y_i^d(t) are the actual network output and the predictive output of the ith antibody at time t, respectively. In order to ensure the convergence of the algorithm, it is necessary to set a range for the value of K_i, where K_max is the maximum value of K_i, K_i is the network size of the ith antibody, and n is the number of input variables.
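The per-antibody fitness is a plain RMSE over the T training samples and can be sketched directly; the function name is illustrative.

```python
import numpy as np

def antibody_fitness(y, y_d):
    """Fitness of one antibody: root-mean-square error over T samples,
    e_i = sqrt( (1/T) * sum_t (y_i(t) - y_i^d(t))^2 )."""
    y, y_d = np.asarray(y, float), np.asarray(y_d, float)
    return float(np.sqrt(np.mean((y - y_d) ** 2)))

# errors are 0, 0 and 2, so e = sqrt(4/3)
e = antibody_fitness([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])
```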

    As seen in Fig. 2, the 2nd immune antibody is the optimal antibody, which is obtained by (9a) and (9b) at time t. Then the optimal size of the RBF neural network is obtained, and the dimensions of the other antibodies need to be updated (Fig. 2(d)). The network size that satisfies the following condition is updated:

    where K_best is the network size of the optimal antibody, and K_i is the network size of the ith antibody (i = 1, 2, …, s). Each adjustment process splits or deletes one hidden layer neuron. However, in [21] a neuron is deleted or added randomly, and the network is sometimes unsteady.

    To update the dimensions stably, the information-oriented error compensation algorithm (IOECA) is proposed in this paper. In IOECA, the information processing strengths (IPSs) [16] of the hidden layer neurons are calculated during the learning process, according to the independent information between the hidden layer neurons and their independent contribution to the output neurons, in order to identify the hidden layer neurons that need to be updated. The methods are given as

    where U_ij(t) and the corresponding output-side quantity are the input and output information processing strengths (IPSs) of the jth hidden neuron in the ith antibody, S is the number of samples, and H_ij(t) is the independent component contribution. In the IPSs, by calculating the independent information of the hidden layer neurons, the contribution of the independent hidden layer neurons to the output neurons is obtained. Here, the information-oriented algorithm (IOA), which is an independent component analysis method, is used. The expressions [16] are shown in (12a)-(12f)

    where C_i(t) is the independent contribution matrix of the ith antibody and d_ij(t) is the independent contribution of the jth hidden neuron. Among them, Ψ_i(t) = [φ_i(t−S+1), …, φ_i(t−1), φ_i(t)]^T is the output matrix of the hidden layer in the ith antibody, j = 1, …, K. γ_i(t) is the coefficient matrix, which is given as

    where σ_i(t), η_i(t), and ε_i(t) are the covariance matrix of Ψ_i(t), the whitening matrix of y_i(t), where y_i(t) = [y_i(t−S+1), …, y_i(t−1), y_i(t)], and the whitening transformation matrix of y_i(t), respectively. σ_i^{−1}(t)Ψ_i(t) is a decorrelation process for Ψ_i(t), which can represent the independence between the hidden neurons. σ_i(t), Ψ_i(t) and ε_i(t) are given as

    U_i(t) and Λ_i(t) are the eigenvector and eigenvalue matrices of y_i(t), respectively. Moreover, η_i(t)ε_i(t) is used to reduce the correlation between the output layer and the hidden neurons. According to the competitive capacities of the hidden neurons, which are obtained by the IPSs, the adjustment rules are given as follows:

    Case 1 (Neuron Splitting Rule): In fact, if U_ij(t) is larger, the input samples are closer to the center of the jth hidden neuron. In addition, the jth hidden neuron is more active for these input samples. Meanwhile, based on (11a) and (11b), if the output IPS is larger, the hidden neurons are more sensitive to the output neurons. Therefore, U_ij(t) and the output IPS can be used to describe the information processing ability of the hidden neurons. Then, the condition is described as

    where P is the sample matrix, P_k is the kth sample, and S is the size of P. E(t) is the error matrix of the sample matrix P at time t, nE is the matrix of the occurrences of the sample maximum-error calculation, and ne_k is the maximum of nE. The kth sample P_k produces the maximum error the largest number of times in the whole iteration process, k ∈ {1, 2, …, S}. X_k and Y_k are the input and output matrices for all samples, respectively. They can be described as

    where x_ko and y_ko are the oth input and output values of sample P_k, respectively, and l and v are the input and output dimensions of the samples, respectively. Subsequently, the parameters of the new neuron are given as

    where c_ij(t), σ_ij(t) and ω_ij(t) respectively represent the center, radius and weight of the jth hidden pre-split neuron in the ith immune cell at time t, and c_inew(t), σ_inew(t) and ω_inew(t) are respectively the center, radius and connection weight of the newly added hidden neuron, where α ∈ [0.95, 1.05] and β ∈ [0, 0.1]. The sample that has the most maximum-error occurrences is trained with compensation by (15), so that the accuracy of the algorithm can be improved.
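The splitting step can be sketched as follows. Equation (15) itself did not survive extraction, so the update below is a plausible stand-in, not the paper's formula: the new neuron is a slightly perturbed copy of the split neuron (α drawn from [0.95, 1.05]) whose weight absorbs a fraction β ∈ [0, 0.1] of the residual error of the most-frequently-worst sample P_k. All names are illustrative.

```python
import numpy as np

def split_neuron(c_j, sigma_j, w_j, x_k, e_k, rng=None):
    """Hedged sketch of the Case 1 error-compensated split.

    c_j, sigma_j, w_j: parameters of the hidden neuron being split.
    x_k, e_k: input and residual error of the sample P_k with the most
    maximum-error occurrences. The update form is an assumption.
    """
    if rng is None:
        rng = np.random.default_rng()
    alpha = rng.uniform(0.95, 1.05)
    beta = rng.uniform(0.0, 0.1)
    c_new = alpha * c_j + (1.0 - alpha) * x_k   # pulled slightly toward P_k
    sigma_new = alpha * sigma_j
    w_new = alpha * w_j + beta * e_k            # error-compensation term
    return c_new, sigma_new, w_new

c_new, sigma_new, w_new = split_neuron(np.array([0.0, 0.0]), 1.0, 1.0,
                                       x_k=np.array([1.0, 1.0]), e_k=0.5,
                                       rng=np.random.default_rng(0))
```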

    Case 2 (Neuron Deleting Rule): Based on the information processing ability of the hidden neurons, the jth hidden neuron is deleted if the IPSs of the hidden neurons satisfy the following conditions:

    The connection weights of the j′th hidden neuron will be updated according to

    Here, the j′th hidden neuron is the one that is nearest to the jth hidden neuron before the latter is cut off, and the two weights are the connection weights between the j′th hidden neuron and the output layer before and after the jth hidden neuron is cut off, respectively. The center and radius of the j′th hidden neuron remain unchanged after the jth hidden neuron is deleted.
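The weight compensation in Case 2 can be sketched as follows. The exact formula did not survive extraction; the sketch uses the standard output-preserving rule for this kind of pruning, in which the nearest neighbour j′ absorbs the pruned neuron's contribution at the current activations, labeled here as an assumption.

```python
import numpy as np

def prune_neuron(weights, phi, j, j_prime):
    """Hedged sketch of the Case 2 pruning compensation.

    weights: output weights of all hidden neurons
    phi:     current activations of all hidden neurons
    Assumed rule: w_j' <- w_j' + w_j * phi_j / phi_j', which keeps the
    network output unchanged at the current activations.
    """
    w = np.array(weights, float)
    w[j_prime] += w[j] * phi[j] / phi[j_prime]
    return np.delete(w, j), w[j_prime]

# pruning neuron 0: its contribution w0*phi0 = 1.0 is folded into neuron 1
new_w, w_jp = prune_neuron([2.0, 1.0], phi=np.array([0.5, 0.25]), j=0, j_prime=1)
```

With these numbers the pre-pruning output 2·0.5 + 1·0.25 = 1.25 equals the post-pruning output 5·0.25, illustrating why the rule keeps the network locally stable.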

    Case 3 (Neuron Retaining Rule): If the input and output IPSs of the hidden neurons are neither the maximum nor the minimum information strength (as in (13) or (16)), the structure of the ith immune cell does not change. With the self-organizing mechanism, the structure of the immune cell can be automatically organized to improve the performance. In addition, the prediction accuracy of the system is improved by the error compensation method.
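The decorrelation step that underlies the IPS computations of (12a)-(12f) relies on an eigendecomposition-based whitening of the hidden-layer outputs. A generic sketch follows; this is standard whitening under assumed shapes, not the paper's exact formulas, and all names are illustrative.

```python
import numpy as np

def whitening_matrix(Y):
    """Eigendecomposition-based whitening of a data matrix.

    Y: (S, q) matrix of S recent output vectors, q >= 2.
    The covariance is decomposed as Cov = U diag(lam) U^T, and the
    whitening matrix eps = diag(lam^{-1/2}) U^T maps the centered data
    to unit covariance, removing correlations between components.
    """
    Yc = Y - Y.mean(axis=0)
    cov = np.cov(Yc, rowvar=False)
    lam, U = np.linalg.eigh(cov)               # lam ascending, U orthonormal
    eps = np.diag(1.0 / np.sqrt(lam)) @ U.T    # whitening transformation
    return Yc @ eps.T, eps

# whitened data has identity covariance by construction
Z, _ = whitening_matrix(np.random.default_rng(0).normal(size=(500, 3)))
```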

    Remark 2: In [21], the neuron that is split or deleted is chosen randomly, so the network is unstable. To solve this problem, the IOA is used to provide the splitting and deleting rules in the artificial immune algorithm.

    Remark 3: In IOECA-SORBF, error compensation is used. When the structure of the RBF network requires a neuron to be split or deleted, the parameters of the new neuron need to be updated. However, in [16], the parameters of the last sample in every five samples are used to update the newly added hidden layer neurons, so the accuracy cannot be ensured. To increase the network prediction accuracy, the sample with the most frequent occurrence of maximum error is used to regulate the parameters of the new neuron. Therefore, the output error is compensated appropriately, and the prediction accuracy of the RBF neural network is increased.

    IV. CONVERGENCE ANALYSIS

    For the proposed DCIA-SORBFNN, the convergence of the algorithm is an important issue and needs to be carefully investigated. In this section, the analysis of convergence is provided in detail to guarantee the successful application of the proposed DCIA-SORBFNN. Furthermore, one can obtain a better understanding of the DCIA-SORBFNN through this analysis.

    A. Convergence Analysis of DCIA Algorithm

    The operations in each generation of crossover, mutation, and antibody concentration regulation correspond to a state transition process from one antibody population to another. The transition probability P_ij is only related to the previous population state and is independent of the evolutionary generation. Therefore, the antibody state transfer process can be regarded as a finite homogeneous Markov chain. The stability of the crossover, mutation and selection operations is proved respectively below.

    Definition 4: Let F_k be the best antibody at the kth moment and F* be the antigen of the problem to be solved. If and only if lim_{k→∞} P{F_k = F*} = 1 is established, the DCIA algorithm is convergent [29].

    According to the definition in [29], the crossover operator may be regarded as a random total function whose domain and range are R. The state space is R = B^N = B^{l·n} = {0, 1}^{l·n}, where n is the population size and l is the number of genes; i.e., each state of R is mapped probabilistically to another state. Therefore, the state transition matrix C is stochastic.

    Because the mutation operator is applied independently to each gene/bit in the population, the probability that state i becomes state j after mutation can be aggregated as [29]

    P{i → j} = p_m^{H_ij} (1 − p_m)^{l·n − H_ij}

    where p_m ∈ (0, 1) for all i, j ∈ S, and H_ij denotes the Hamming distance between the binary representations of state i and state j. The length of each antibody is set as l, and the Hamming distance between states i and j is defined as

    H_ij = Σ_k |g_ik − g_jk|

    where g_ik and g_jk are the kth genes of the x_i and x_j immune cells, respectively. The same holds for the other operators and their transition matrices. Thus, M is positive. The selection matrix S, which with positive probability does not alter the state generated by mutation, is column-allowable. Subsequently, the state transition of one generation of antibodies is completed, where P = SCM is positive [29].
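The mutation aggregation above can be computed directly for binary-coded antibodies; since 0 < p_m < 1, the result is strictly positive for every pair of states, which is exactly why the mutation matrix M is positive. A small sketch:

```python
def hamming(gi, gj):
    """Hamming distance between two binary-coded states."""
    return sum(a != b for a, b in zip(gi, gj))

def mutation_prob(gi, gj, pm):
    """Probability that bitwise mutation with rate pm turns state i into
    state j: pm^H_ij * (1 - pm)^(L - H_ij), where L is the total bit
    length of the state (l*n when a state encodes the whole population).
    This is the standard aggregation used in Rudolph-style proofs."""
    L = len(gi)
    H = hamming(gi, gj)
    return pm ** H * (1 - pm) ** (L - H)

# one differing bit out of four: p = 0.1 * 0.9^3 = 0.0729
p = mutation_prob([0, 1, 1, 0], [0, 1, 0, 0], pm=0.1)
```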

    Definition 5: A square matrix A: n×n is said to be reducible if A can be brought into the form

    A = [ C  0
          R  T ]

    (with square matrices C and T) by applying the same permutations to rows and columns.

    Theorem 1: Let P be a reducible stochastic matrix, where C: m×m is a primitive stochastic matrix and R, T ≠ 0. Then

    P^∞ = lim_{k→∞} P^k = [ C^∞  0
                            R^∞  0 ]

    is a stable stochastic matrix with P^∞ = 1′p^∞, where p^∞ = p^0 P^∞ is unique regardless of the initial distribution, and p^∞ satisfies p_i^∞ > 0 for 1 ≤ i ≤ m and p_i^∞ = 0 for m < i ≤ n.
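The limiting behaviour stated in Theorem 1 can be observed numerically on a toy matrix in the block form of Definition 5: iterating the chain drains all probability mass out of the transient states (those governed by T) into the states governed by C.

```python
import numpy as np

# A 2-state reducible stochastic matrix: C = [1] (absorbing, primitive),
# R = [0.5], T = [0.5]. State 0 plays the role of the optimal state.
P = np.array([[1.0, 0.0],
              [0.5, 0.5]])

# P^k = [[1, 0], [1 - 0.5^k, 0.5^k]]: the transient column vanishes
Pk = np.linalg.matrix_power(P, 50)
```

Every row of `Pk` converges to (1, 0), matching p_i^∞ > 0 only for the states governed by C; this is the mechanism behind the global-convergence claim of Theorem 2.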

    Theorem 2: The immune algorithm based on distance concentration is globally convergent with probability 1.

    Proof: The state transition matrix of the immune algorithm is denoted by U, and U is a stochastic matrix. The states of the transition matrix U are ordered as follows: the first state is the global optimal solution, the second state is the global suboptimal solution, …, and the nth state is the worst solution. Then, for any state, u_ij = 0, ∀j > i, and the upgraded matrix can be written as

    With P = SCM, the transition matrix for DCIA becomes

    C = [1] is a first-order stochastic matrix. The submatrices P_{ua1} with a ≥ 2 can be gathered in a rectangular matrix. According to Definition 5, the state transition matrix is reducible [29]. From Theorem 1,

    Thus, Theorem 1 and Definition 4 may be used to prove that the DCIA converges to the global optimum [29].

    B. Convergence Analysis of DCIA-SORBF Algorithm

    According to the above description, a simple and efficient DCIA algorithm with a special fitness function is used to automatically construct the RBFNN. The goal of the DCIA algorithm is to construct an appropriate RBFNN. After the DCIA algorithm returns a set of immune cells, the parameters and size of the DCIA-SORBFNN corresponding to each immune cell can be obtained by using (8b).

    To provide the theoretical basis for applications, this section presents the convergence analysis of the DCIA-SORBF neural network based on the convergence analysis of the DCIA algorithm. The convergence analysis of the DCIA-SORBFNN is summarized in Theorem 3.

    Theorem 3: If the bound of the predefined maximum distance concentration satisfies D_max(x_i(t)) < 2|E_i(t)|/((2 + n)K_i(t))^{1/2}, then the DCIA-SORBFNN is convergent, and E_i(t) → 0 as t → ∞, i = 1, 2, …, s.

    Proof: Consider the following Lyapunov function:

    According to (9b), the system error is [30]

    Then the change in the Lyapunov function between two steps is

    In addition, the error change is denoted:

    Then, the strictly differential formula of the RMSE is

    where ω_i(t), μ_i(t) and σ_i(t) are the three adjusted parameters in the RBFNN.

    where Δω_i(t), Δμ_i(t) and Δσ_i(t) are the parameter updating rules, and D(x_i(t)) is the affinity function of the ith immune cell. According to the above analysis, the factorization of (30) is described as

    where

    The following conditions can be obtained [21] because the bound of the predefined maximum distance concentration is adjusted dynamically, and

    where K_i(t) is the number of hidden neurons at time t for the ith immune cell.

    which leads to

    Thus, e_i(t) is bounded for t ≥ t_0. Moreover, through the Lyapunov-like lemma, it is implied that

    Therefore, e_i(t) → 0 as t → ∞, i = 1, 2, …, s.

    In addition, the structure self-organizing phase converges according to [31]; thus, the convergence of the proposed DCIA-SORBF neural network is proved.

    Remark 4: Based on the above discussion, in the adjustment phase of the parameters and network size, the convergence of the DCIA-SORBF neural network can be maintained according to formulas (25a) to (35). Then, the convergence of the DCIA-SORBF neural network, which is necessary for successful applications, can be guaranteed by using the DCIA algorithm.

    TABLE I
    Algorithm            Testing RMSE          No. of hidden   Testing    Mean/Dev.
                         Mean       Dev.       neurons         time (s)   rank
    DCIA-SORBF           0.0122     0.0035      8              0.0032     1/2
    APSO-SORBF [21]      0.0133*    0.0056*     9*             0.0039*    2/3
    AI-RBF [37]          0.0235     0.0092     10              0.0062     4/5
    GAP-RBF [13]         0.0415*    0.0087*    19*             0.0087*    8/4
    PSO-RBF [36]         0.0368*    0.0164*    13*             0.0054*    7/6
    AI-PSO-RBF [34]      0.0295*    0.073*     11*             0.0042*    6/7
    SAIW-PSO-RBF [35]    0.0197*    0.0026*    11*             0.0046*    3/1

    V. DCIA-SORBF SIMULATION AND APPLICATION

    In this section, five systems are used to demonstrate the effectiveness of DCIA-SORBF: a benchmark function approximation, the Mackey-Glass time series prediction, nonlinear system identification, the Lorenz time series prediction, all focusing on nonlinear system modeling or prediction problems, and the effluent total phosphorus (TP) prediction, which is an actual industrial problem in a wastewater treatment process (WWTP). In addition, the fitness is used to reflect the diversity of DCIA-SORBF. Six algorithms are used for performance comparison: APSO-SORBF [21], AI-RBF [37], GAP-RBF [13], PSO-RBF [36], AI-PSO-RBF [34], and the stability adaptive inertia weight PSO-based RBF (SAIW-PSO-RBF) [35]. All the examples were programmed in MATLAB R2014a and run on a PC with a clock speed of 2.60 GHz and 4 GB RAM under a Microsoft Windows 8.0 environment.

    A. Function Approximation

    In this example, the DCIA-SORBF neural network is used to approximate the following benchmark problem:

    This function is used to examine many popular algorithms in [13], [32] and [33]. There are 300 training patterns generated randomly on the domain [0, 2] along the X direction. Similarly, the testing samples are also randomly produced in the range [0, 2], and the testing set contains 200 samples. In Fig. 3, four indices are used to reflect the performance of the DCIA-SORBF neural network: the number of hidden neurons, the training RMSE, the approximation error, and the testing output. In addition, the proposed DCIA-SORBF algorithm is compared with six other algorithms. All algorithms use the same training data sets and test partitions. The initial parameters of the DCIA-SORBF neural network are set as follows: the cross probability p_c = 0.4, the mutation probability p_m = 0.35, the parameter of diversity evaluation p_s = 0.85, the elite file size E_a = 60, and the maximum number of neurons in an antibody max_num = 60.

To compare these algorithms, four performance indices are shown in Table I: the mean value and standard deviation (Dev.) of the testing RMSE, the number of hidden neurons, and the testing time. The results show that the structure of the DCIA-SORBF neural network is the most compact, owing to its self-organizing capability. Moreover, the proposed DCIA-SORBF neural network requires the least testing time of all algorithms. The Wilcoxon rank test is added to verify the effectiveness of the proposed DCIA-SORBF algorithm; the Mean/Dev. rank column reports the ranking of the mean on the left and of the Dev. on the right. We can see that DCIA-SORBF obtains the first mean rank and the second Dev. rank.

    B. Mackey-Glass Time Series Prediction

The Mackey-Glass time series prediction problem is one of the benchmark problems used to assess the performance of learning algorithms [21]. The time series is generated by the following equation:

TABLE II: Performance comparison for Mackey-Glass time series prediction

| Algorithm | Testing RMSE (Mean) | Testing RMSE (Dev.) | No. of hidden neurons | Testing time (s) | Mean/Dev. rank |
|---|---|---|---|---|---|
| DCIA-SORBF | 0.0116 | 0.0075 | 9 | 0.0035 | 1/1 |
| APSO-SORBF [21] | 0.0135* | 0.0095* | 11* | 0.0039* | 2/2 |
| AI-RBF [37] | 0.0151 | 0.0128 | 11 | 0.0042 | 3/3 |
| GAP-RBF [13] | 0.0321* | — | 19* | — | 7/— |
| PSO-RBF [36] | 0.0208* | 0.0249* | 12* | 0.0047* | 6/6 |
| AI-PSO-RBF [34] | 0.0189* | 0.0132* | 11* | 0.0043* | 5/4 |
| SAIW-PSO-RBF [35] | 0.0166* | 0.0145* | 11* | 0.0053* | 4/5 |

dx(t)/dt = bx(t − τ)/(1 + x^10(t − τ)) − αx(t)

where α = 0.1, b = 0.2, and τ = 17, and the initial condition is x(0) = 1.2. The value x(t + Δt) is predicted from the previous values {x(t), x(t − Δt), …, x(t − (l − 1)Δt)}. In this paper, the prediction model is given by x(t + Δt) = f(x(t), x(t − Δt), …, x(t − (l − 1)Δt)).

In the simulation experiment, 1700 data points were selected from t = 1 to t = 1700. The first 1200 data points are used for training, and the last 500 data points are used as test data. The initial network size is set to 60, and pc = 0.45, pm = 0.55, ps = 0.9, and Ea = 60 are selected as the best parameters. The experimental results are shown in Fig. 4. The proposed algorithm tracks the Mackey-Glass time series well, with the test error remaining within the small range [-0.05, 0.05]. The network structure is adjusted constantly during the iterations, and the best performance is obtained with a network structure of 8. As can be seen from Table II, DCIA-SORBF has the smallest mean and Dev. values compared with the other algorithms; at the same time, it has the smallest testing time because it has the most compact network structure. In addition, APSO-SORBF [21] outperforms the remaining algorithms and ranks second, while AI-RBF [37], SAIW-PSO-RBF [35], AI-PSO-RBF [34], and PSO-RBF [36] rank third to sixth, respectively. GAP-RBF [13] has the worst test error. Therefore, DCIA-SORBF achieves the smallest test error and the best system stability among the compared algorithms.
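The data-generation step above can be sketched as follows. The standard Mackey-Glass delay equation with the parameters given in the text is assumed, integrated with a simple forward-Euler step of Δt = 1 and a constant initial history; the paper does not specify the integrator, so this is an illustrative reconstruction rather than the authors' exact procedure.

```python
import numpy as np

def mackey_glass(n_points, alpha=0.1, b=0.2, tau=17, x0=1.2, dt=1.0):
    """Generate a Mackey-Glass series by forward-Euler integration of
    dx/dt = b*x(t-tau) / (1 + x(t-tau)^10) - alpha*x(t)."""
    hist = int(tau / dt)                 # number of delayed samples to keep
    x = np.empty(n_points + hist)
    x[:hist + 1] = x0                    # constant initial history, x(0) = 1.2
    for t in range(hist, n_points + hist - 1):
        x_tau = x[t - hist]              # delayed state x(t - tau)
        x[t + 1] = x[t] + dt * (b * x_tau / (1.0 + x_tau ** 10) - alpha * x[t])
    return x[hist:]

series = mackey_glass(1700)              # 1700 points, t = 1 ... 1700
train, test = series[:1200], series[1200:]   # 1200 training / 500 testing points
print(len(train), len(test))             # → 1200 500
```

The resulting train/test split matches the 1200/500 partition described in the text.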

    C. Nonlinear System Identification

    The nonlinear system is given by

There are two input values, y(t) and u(t), and one output, y(t + 1). This nonlinear system is used in [17], [13], and [36] to demonstrate the performance of a neural network. The training inputs were obtained from two parts: half of them were sampled uniformly over the interval [-2, 2], and the others were generated by 1.05 × sin(t/45). In total, 2400 and 1000 samples were selected for training and testing, respectively. The testing input was set as

where u(t) is the input signal used to determine the identification results for the testing signal. To evaluate the performance of the DCIA-SORBF neural network, its results are compared with those of six other neural networks. Fig. 5 records the RMSE values, prediction results, prediction errors, and the number of hidden neurons for DCIA-SORBF; the self-organizing adjustment of the number of hidden neurons is shown in Fig. 5(d). We can see that the DCIA-SORBF neural network performs well and the test error remains within the range [-0.05, 0.05]. Therefore, it can identify the nonlinear system well.
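The construction of the training inputs described above (half uniform on [-2, 2], half from the sinusoid 1.05 sin(t/45)) can be sketched as follows; the plant mapping from u(t) to y(t + 1) itself is defined earlier in the paper and is omitted here, so this sketch covers only the input side.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train = 2400

# Half of the training inputs sampled uniformly over [-2, 2].
u_uniform = rng.uniform(-2.0, 2.0, n_train // 2)

# The other half generated by the sinusoid 1.05 * sin(t / 45).
t = np.arange(n_train // 2)
u_sine = 1.05 * np.sin(t / 45.0)

u_train = np.concatenate([u_uniform, u_sine])   # 2400 training inputs in total
print(u_train.shape)  # → (2400,)
```

Feeding `u_train` through the plant equation would then yield the 2400 input/output training pairs used in this experiment.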

TABLE III: Performance comparison for nonlinear system identification

| Algorithm | Testing RMSE (Mean) | Testing RMSE (Dev.) | No. of hidden neurons | Testing time (s) | Mean/Dev. rank |
|---|---|---|---|---|---|
| DCIA-SORBF | 0.0724 | 0.0035 | 7 | 0.0039 | 1/1 |
| APSO-SORBF [21] | 0.0916 | 0.0116 | 11* | 0.0047* | 2/4 |
| AI-RBF [37] | 0.1049 | 0.052 | 11 | 0.0062 | 4/7 |
| GAP-RBF [13] | 0.2229* | 0.0165* | 15* | 0.0068* | 6/6 |
| PSO-RBF [36] | 0.2564* | 0.0126* | 12* | 0.0049* | 7/5 |
| AI-PSO-RBF [34] | 0.1536* | 0.0109* | 14* | 0.0051* | 5/2 |
| SAIW-PSO-RBF [35] | 0.0934* | 0.0113* | 13* | 0.0056* | 3/3 |

Table III exhibits the detailed results of the different algorithms. Four indices are selected to reflect the performances: the number of hidden neurons, the mean value and standard Dev. of the testing RMSE, and the testing time. In Table III, the mean value and standard Dev. of the testing RMSE are the smallest for DCIA-SORBF, as are the number of hidden neurons and the testing time. APSO-SORBF is second only to DCIA-SORBF, while PSO-RBF performs poorest for nonlinear system identification. This example shows that the DCIA-SORBF neural network has better identification ability and a more compact structure.

    D. Lorenz Time Series Prediction

    The Lorenz time series system is a mathematical model for atmospheric convection that is also widely used as a benchmark in many applications [33]. As a 3-D and highly nonlinear system, the Lorenz system is governed by

dx(t)/dt = a1(y(t) − x(t))
dy(t)/dt = a2x(t) − y(t) − x(t)z(t)
dz(t)/dt = x(t)y(t) − a3z(t)

where a1, a2, and a3 are the system parameters, with a1 = 10, a2 = 28, and a3 = 8/3; x(t), y(t), and z(t) are the 3-D space vectors of the Lorenz system. In this example, the fourth-order Runge-Kutta approach with a step size of 0.01 is adopted to generate the Lorenz samples, and only the Y-dimension samples y(t) are used for the time series prediction. Of the 3400 data samples generated from y(t), the first 2400 were taken as training data and the last 1000 were used to check the proposed model; the ratio is close to 7:3. The test results in Fig. 6 show that the DCIA-SORBF neural network performs well and the test error remains within the range [-0.05, 0.05]. The network structure is adjusted constantly during the iterations, and the best performance is obtained with a network structure of 9. Moreover, six algorithms (APSO-SORBF [21], AI-RBF [37], GAP-RBF [13], PSO-RBF [36], AI-PSO-RBF [34], and SAIW-PSO-RBF [35]) are compared with DCIA-SORBF in Table IV. This comparison shows that the DCIA-SORBF neural network has the smallest mean error and standard Dev., and its testing RMSE is far better than those of the other algorithms except AI-RBF, which is closest to it. In addition, AI-RBF, which is worse than DCIA-SORBF, is better than the remaining five algorithms, whereas GAP-RBF shows the worst performance. The results indicate that DCIA-SORBF has better identification ability for Lorenz time series prediction than the other algorithms.
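The fourth-order Runge-Kutta generation of the Lorenz y(t) samples can be sketched as follows. The step size, parameters, and 2400/1000 split follow the text; the initial state (1, 1, 1) is an assumption, since the paper does not state it in this section.

```python
import numpy as np

def lorenz_rk4(n_steps, dt=0.01, a1=10.0, a2=28.0, a3=8.0 / 3.0,
               s0=(1.0, 1.0, 1.0)):
    """Integrate the Lorenz system with classical RK4 and return y(t) samples."""
    def f(s):
        x, y, z = s
        return np.array([a1 * (y - x),          # dx/dt
                         a2 * x - y - x * z,    # dy/dt
                         x * y - a3 * z])       # dz/dt
    s = np.array(s0, dtype=float)
    ys = np.empty(n_steps)
    for i in range(n_steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        ys[i] = s[1]                            # keep only the Y dimension
    return ys

y = lorenz_rk4(3400)
train, test = y[:2400], y[2400:]                # 2400 training / 1000 testing samples
print(len(train), len(test))                    # → 2400 1000
```

With a step of 0.01 the RK4 trajectory stays bounded on the Lorenz attractor, so the generated y(t) series is well behaved over the 3400 steps used here.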

    E. Effluent TP Prediction in WWTP

TABLE IV: Performance comparison for Lorenz time series prediction

| Algorithm | Testing RMSE (Mean) | Testing RMSE (Dev.) | No. of hidden neurons | Testing time (s) | Mean/Dev. rank |
|---|---|---|---|---|---|
| DCIA-SORBF | 0.0958 | 0.026 | 9 | 0.0075 | 1/1 |
| APSO-SORBF [21] | 0.1726* | 0.054* | 5* | 0.0069* | 3/3 |
| AI-RBF [37] | 0.1049 | 0.052 | 11 | 0.0062 | 2/2 |
| GAP-RBF [13] | 2.3294* | — | 70* | — | 7/— |
| PSO-RBF [36] | 0.2673* | 0.095* | 6* | 0.0076* | 6/6 |
| AI-PSO-RBF [34] | 0.2017* | 0.058* | 6* | 0.0076* | 5/4 |
| SAIW-PSO-RBF [35] | 0.1981* | 0.073* | 5* | 0.0072* | 4/5 |

The effluent TP is an important parameter for evaluating the performance of a WWTP [30]. However, effluent TP values are difficult to measure due to the biological characteristics of the activated sludge process, and their measurement is often associated with expensive capital and maintenance costs [37]. Therefore, the proposed DCIA-SORBF neural network is used to predict the values of the effluent TP in this experiment.

Due to limited measurement accuracy, the operation and measurement methods, abrupt changes in water quality, and so on, the collected data contain a certain degree of error. Moreover, direct soft-sensor modeling on unprocessed data inevitably leads to a poorly performing system and unreliable prediction results; therefore, to ensure the reliability and accuracy of soft sensing, abnormal data need to be eliminated. Existing TP prediction for wastewater treatment mostly applies noise reduction first; however, since all the collected data are real data and some noise is hard to avoid, we conduct the total phosphorus experiment with the real data directly. We obtained 367 sets of data from a small sewage treatment plant in Beijing from June to August 2015, of which 267 sets are used as training samples and 110 sets as test samples, so the ratio of training samples to test samples is close to 7:3. In this experiment, the proposed DCIA-SORBF neural network is used to predict the values of the effluent TP, and the easy-to-measure process variables, namely the temperature, oxidation reduction potential, influent TP, dissolved oxygen, pH, and total soluble solids, are selected as the input variables of the DCIA-SORBF neural network.

The experimental results are shown in Fig. 7. As can be seen from the graph, DCIA-SORBF predicts the TP value well with a small prediction error, which lies between -0.015 and 0.015. When noise appears in the 70th to 90th samples, the algorithm still shows a certain degree of prediction distortion; nevertheless, it remains robust and the prediction error stays within an acceptable range, so the algorithm can still track and predict the total phosphorus. In addition, the comparison with the other six algorithms is recorded in Table V. From the results, we can see that DCIA-SORBF has the smallest mean testing RMSE, the most compact structure, and the shortest prediction time for TP prediction. The above results show that DCIA-SORBF is more suitable and effective than the other SORBF neural networks for predicting the effluent TP values.

To reduce the influence of randomness in the experimental results and to judge the overall performance of each algorithm, the results over all five test functions are counted and ranked in Table VI. The rank sum on mean and the rank sum on Dev. are, for each algorithm, the sums of its mean and Dev. rankings over all test functions. For all test functions, DCIA-SORBF has the smallest sums of both mean and Dev. rankings. In addition, the sum rank on all the problems is the total of the mean and Dev. rankings over all test functions for each algorithm, and the final rank on all the problems is the resulting overall ranking. The experimental results show that DCIA-SORBF has the smallest test error and the best stability, so the algorithm has the best prediction performance.
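The rank-sum aggregation described above can be sketched as follows. This is a simplified version that ranks algorithms by mean RMSE per problem and sums the ranks; it ignores tie averaging, which a full Wilcoxon-style analysis would handle, and the example scores are hypothetical.

```python
def rank_sums(scores):
    """scores[p][a] = mean test RMSE of algorithm a on problem p (lower is better).
    Returns, for each algorithm, the sum of its per-problem ranks."""
    n_alg = len(scores[0])
    sums = [0] * n_alg
    for row in scores:
        # Order algorithm indices from best (smallest RMSE) to worst.
        order = sorted(range(n_alg), key=lambda a: row[a])
        for rank, a in enumerate(order, start=1):
            sums[a] += rank
    return sums

# Hypothetical mean RMSEs: rows = problems, columns = three algorithms.
scores = [[0.0122, 0.0133, 0.0235],
          [0.0116, 0.0135, 0.0151]]
print(rank_sums(scores))  # → [2, 4, 6]
```

The algorithm with the smallest rank sum is then assigned the best final rank, as in the last row of Table VI.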

TABLE V: Performance comparison for effluent TP prediction in the WWTP

| Algorithm | Testing RMSE (Mean) | Testing RMSE (Dev.) | No. of hidden neurons | Testing time (s) | Mean/Dev. rank |
|---|---|---|---|---|---|
| DCIA-SORBF | 0.0102 | 0.0027 | 10 | 0.0097 | 1/1 |
| APSO-SORBF [21] | 0.0127* | 0.0025* | 12* | 0.0120* | 2/2 |
| AI-RBF [37] | 0.0201 | 0.0076 | 12 | 0.0580 | 5/5 |
| GAP-RBF [13] | 0.0356* | 0.0085* | 18* | 0.0630* | 6/6 |
| PSO-RBF [36] | 0.1602* | 0.0915* | 14* | 0.0055* | 7/7 |
| AI-PSO-RBF [34] | 0.0191* | 0.0052* | 12* | 0.0290* | 4/4 |
| SAIW-PSO-RBF [35] | 0.0159* | 0.0049* | 12* | 0.0410* | 3/3 |

    F. The Fitness

To verify the diversity of the DCIA algorithm, the Mackey-Glass time series prediction problem is used as the standard test function in this experiment. The average fitness (here, the test error) and the best fitness value over all antibodies are taken as the test indicators. To verify the effectiveness of the distance concentration method within the immune algorithm, this paper compares it with a self-organizing RBF neural network based on the plain artificial immune algorithm (IA-SORBF); the number of iterations is 100. The experimental results are shown in Fig. 8. We can see that IA-SORBF converges in fewer than 10 generations, while DCIA-SORBF needs at least 20 generations; therefore, IA-SORBF is more likely to fall into a local optimum. In addition, the average fitness of all antibodies given in Fig. 8(a) shows that DCIA-SORBF fluctuates over a larger range than IA-SORBF, which indicates that the differences between antibodies in the DCIA-SORBF algorithm are greater and the antibody diversity is better. At the same time, as shown in Fig. 8(b), the optimal fitness value of the DCIA-SORBF algorithm is smaller. Therefore, the algorithm has a smaller test error and better diversity, so it can better approximate the global optimal solution.

    VI. DISCUSSION

To verify the effectiveness of the proposed algorithm, extensive experiments were conducted. First, the experimental results show that the diversity of the proposed DCIA algorithm is significantly increased compared with that of the IA without the distance concentration algorithm, so the proposed DCIA algorithm can better jump out of local optima. Secondly, five test functions are used in this paper. From Figs. 3-7, it can be seen that DCIA-SORBF predicts the value of the objective function well, with a small error. At the same time, the algorithm can adjust the network structure adaptively with the change of the sample data, finally making the RBF network structure the most compact. In addition, the proposed DCIA-SORBF algorithm is compared with six algorithms: APSO-SORBF [21], AI-RBF [37], GAP-RBF [13], PSO-RBF [36], AI-PSO-RBF [34], and SAIW-PSO-RBF [35]. As can be seen from Tables I-V, except for the Lorenz time series system, the DCIA-SORBF algorithm has the smallest test error on all test functions, so it has the best prediction accuracy. Meanwhile, its RMS error is the smallest except for the function approximation test, so the system has high stability. Since the RBF network structure can be self-organized, we can see from the tables that DCIA-SORBF has the smallest network structure for function approximation, nonlinear system identification, and effluent TP prediction in the WWTP, which effectively avoids redundancy in the network structure and thus yields the shortest computing time. However, for the more complex Lorenz time series system, a larger network structure is needed to improve the prediction accuracy, and the experimental results show that the DCIA-SORBF error is the smallest except for AI-RBF; compared with the other algorithms, it has the smallest root mean square error.
For the Mackey-Glass time series prediction function, the network structure of the algorithm is slightly larger than the others, but the error and root mean square error are the smallest. Finally, to assess the overall performance of the proposed DCIA-SORBF algorithm, the Wilcoxon rank test is used to analyze the experimental results. From Table VI, over the five test functions the algorithm has the smallest error and root mean square error compared with the other algorithms, which shows that the DCIA-SORBF algorithm has the highest prediction accuracy and better stability. At the same time, the statistics of the final rank on all the problems show that the algorithm has the best overall performance.

TABLE VI: Ranking summary over all test problems (mean rank / Dev. rank)

| Problems | DCIA-SORBF | APSO-SORBF | AI-RBF | GAP-RBF | PSO-RBF | AI-PSO-RBF | SAIW-PSO-RBF |
|---|---|---|---|---|---|---|---|
| Function approximation | 1/2 | 2/3 | 4/5 | 8/4 | 7/6 | 6/7 | 3/1 |
| Mackey-Glass time series prediction | 1/1 | 2/2 | 3/3 | 7/— | 6/6 | 5/4 | 4/5 |
| Nonlinear system identification | 1/1 | 2/4 | 4/7 | 6/6 | 7/5 | 5/2 | 3/3 |
| Lorenz time series system | 1/1 | 3/3 | 2/2 | 7/— | 6/6 | 5/4 | 4/5 |
| Effluent TP prediction in WWTP | 1/2 | 2/2 | 5/5 | 6/6 | 7/7 | 4/4 | 3/3 |
| Rank sum on mean | 5 | 11 | 17 | 34 | 33 | 25 | 17 |
| Rank sum on Dev. | 7 | 14 | 22 | — | 30 | 21 | 17 |
| Sum rank on all the problems | 11 | 25 | 40 | — | 63 | 46 | 34 |
| Final rank on all the problems | 1 | 2 | 4 | — | 6 | 5 | 3 |

VII. CONCLUSION

In this paper, a SORBF neural network is presented to model uncertain nonlinear systems, with the network size and parameters optimized simultaneously by the proposed DCIA algorithm. In addition, to overcome the tendency of other algorithms to fall into local optima, the distance concentration algorithm, which increases the diversity of immune cells, is adopted. However, ensuring the stability of the network while adjusting the structure of the RBFNN is quite challenging; for this purpose, the information-oriented algorithm (IOA) is applied to identify which antibodies need to be updated. To increase the network prediction accuracy and reduce the computational load, the sample with the most frequent occurrence of the maximum error is used to regulate the parameters of the new neuron. Additionally, the convergence of DCIA-SORBF is demonstrated theoretically for practical application. Finally, the experimental results demonstrate that the proposed DCIA-SORBF algorithm is effective in solving nonlinear learning problems. Moreover, the good potential of the proposed techniques in real-world applications is demonstrated by our simulation results over several benchmark problems and an engineering modeling task.

In addition, the parameters have a strong influence on the predictive results. In future research, we will choose the best parameters adaptively according to different situations. At the same time, how to obtain accurate prediction results in the presence of noisy or interfering data is another direction and focus of our next work.
