
    A spintronic memristive circuit on the optimized RBF-MLP neural network

    2022-11-21
    Chinese Physics B, 2022, No. 11

    Yuan Ge (葛源), Jie Li (李杰), Wenwu Jiang (蔣文武), Lidan Wang (王麗丹), and Shukai Duan (段書凱)

    1 School of Artificial Intelligence, Southwest University, Chongqing 400715, China

    2 Chongqing Brain Science Collaborative Innovation Center, Chongqing 400715, China

    3 Brain-inspired Computing and Intelligent Control of Chongqing Key Laboratory, Chongqing 400715, China

    4 National & Local Joint Engineering Laboratory of Intelligent Transmission and Control Technology, Chongqing 400715, China

    A radial basis function network (RBF) has excellent generalization ability and approximation accuracy when its parameters are set appropriately. However, when relying only on traditional methods, it is difficult to obtain optimal network parameters and construct a stable model. In view of this, a novel radial basis neural network (RBF-MLP) is proposed in this article. By connecting two networks to work cooperatively, the RBF's parameters can be adjusted adaptively by the structure of the multi-layer perceptron (MLP) to realize the effect of back-propagation error updating. Furthermore, a genetic algorithm is used to optimize the network's hidden layer to confirm the optimal number of neurons (basis functions) automatically. In addition, a memristive circuit model is proposed to realize the neural network's operation based on the characteristics of spin memristors. It is verified that the network can adaptively construct a model with outstanding robustness and can stably achieve 98.33% accuracy on the Modified National Institute of Standards and Technology (MNIST) dataset classification task. The experimental results show that the method has considerable application value.

    Keywords: radial basis function network (RBF), genetic algorithm, spintronic memristor, memristive circuit

    1. Introduction

    Since the birth of the concept of the artificial neural network (ANN) in the 1940s, it has resolved many practical problems that cannot be solved by the human brain or even modern computers, in fields such as medicine, economics, biology, automatic control, and pattern recognition.[1–3] As one of the most classical network models, the back-propagation (BP) neural network has greatly accelerated the development of ANNs.[4,5] However, its slow convergence speed and poor approximation ability make it incapable of high-precision classification tasks. Hence, users should look to networks with a more reasonable design when dealing with such tasks. The radial basis function (RBF) network is a network mechanism similar to the BP neural network, and the concept of the RBF was introduced by Powell in 1985.[6,7] Compared with the BP neural network, the RBF neural network has a simple structure, fast speed and splendid convergence; meanwhile, it is also capable of global approximation while ensuring approximation accuracy.[8,9] The key to establishing an excellent RBF network model is the generation of the basis functions, including the determination of their number, centres and widths. Beyond that, the model should also be able to adjust its parameters adaptively according to the types of classified samples.[10–12] Owing to the difficulty of network model construction, scholars have tried to replace the k-means clustering algorithm, which determines the parameters of the hidden layer, with other clustering algorithms.[13] Spectral clustering has been used in the unsupervised learning stage to complete the optimization of hidden layer parameters.[14] A fuzzy C-means clustering algorithm was proposed to obtain the parameters directly,[15] while leader-follower clustering was applied to initialize the centres of the hidden layer automatically.[16] However, the robustness of network models based on improved clustering algorithms is not ideal. In other words, the optimal centre and width parameters obtained by the clustering algorithm cannot be adjusted synchronously with changes in the number of basis functions. In addition, there are ways to simplify network parameters, such as pruning hidden layer neurons to simplify the network structure directly.[17,18] This paper starts from the idea of improving the structure and finds a reasonable method to build a stable network model.

    In recent years, the rapid development of electronic components has made it possible to simulate neural networks in circuits, and scholars have carried out a variety of explorations: for example, using a memristor-based multiplication circuit to improve the image processing ability of the cellular neural network (CNN);[19] realizing the ink drop spread (IDS) effect in a neural network through a memristor's fuzzy logic properties;[20] and simulating the weight update process of synapses with a memristor bridge to implement the random weight change (RWC) learning algorithm on multi-layer neural networks.[21] Evidently, all of this research can be attributed to the creation of memristors. In 1971, Leon Chua proposed the concept of the memristor, the fourth fundamental circuit element after resistance, capacitance and inductance.[22] Through unremitting efforts, HP Labs implemented memristive behaviour in a nanoscale TiO2 device model in 2008.[23] Its excellent characteristics give it potential application value in signal processing, artificial neural networks and non-volatile storage, and it has become one of the innovative breakthroughs in new circuits.[24,25] In general, there are two approaches to improving a memristor's properties. One relies on the inherent properties of the raw materials, such as fabrication using chalcogenides, polymer organic matter and other new types of materials.[26] The other improves the memristor's structure to affect the internal electron flow rules, as in the spin memristor, which is based on the spin characteristics of electrons.[11,26]

    Based on the existing research, this paper makes some advances in network structure optimization and proposes a memristive circuit to realize the network mentioned above. The remainder of the article is organized as follows. Section 2 presents the RBF-MLP network structure model and optimization method on the basis of the RBF neural network. The basic principles and operating characteristics of spin memristors are analyzed in Section 3. Section 4 proposes a hardware circuit to simulate the complete operational flow of the RBF-MLP network. Section 5 verifies the performance of the network through analysis of the experimental results and comparison with other networks. Finally, Section 6 provides a summary of the whole article.

    2. RBF-MLP network model and optimization

    2.1. RBF network mathematical model

    The classical RBF network structure consists of three layers: an input layer, a hidden layer and an output layer, as shown in Fig. 1. The hidden layer is composed of a set of radial basis functions, which map the input vector to the hidden layer nonlinearly rather than through weight connections. In contrast, the mapping from the hidden layer to the output layer is linear, so that the weights of the network can be solved by linear equations. When the network adopts a Gaussian function as the radial basis function, the activation function of the neural network can be expressed as

    $$\varphi_i(x)=\exp\left(-\frac{\|x-c_i\|^2}{2\sigma^2}\right),\tag{1}$$

    where x = [x_i]^T is the input vector of the neural network, c = [c_i] is the coordinate of the centre point of the Gaussian function for the i-th input component, and σ represents the width of the Gaussian function. Assuming that the hidden layer is connected to the output layer by the weights w = [w_j], j = 1, 2, ..., n, the output of the radial basis neural network can be expressed as

    $$y=\sum_{j=1}^{n} w_j\,\varphi_j(x).\tag{2}$$

    The critical reason for constructing an RBF network is to determine the parameters of the radial basis functions in the hidden layer, including the number of basis functions, the centres, variances, weights and other parameters, which are essential factors that affect network capability.[7] If the value of the variance is not big enough, the coverage area of the basis function will be insufficient, affecting the generalization ability of the network. Nevertheless, an excessive value of the variance will reduce the network's classification accuracy. Therefore, an unsupervised learning process is generally used to obtain suitable values for the centres, variances and the number of basis functions. In the k-means clustering algorithm, h centres are selected and the local optimal solution L_max, the maximum distance between centres, is obtained through the algorithm; the cluster centres are then taken as the centres of the basis functions.[13] In this way, the variances and the weights from the hidden layer to the output layer can be expressed as follows:

    $$\sigma=\frac{L_{\max}}{\sqrt{2h}},\tag{3}$$

    $$w=\Phi^{+}d=\left(\Phi^{\mathrm T}\Phi\right)^{-1}\Phi^{\mathrm T}d,\tag{4}$$

    where Φ is the matrix of hidden-layer responses and d is the desired output vector.

    The output of the radial basis function network can be obtained from the relationship of the above parameters. However, k-means and other local or global clustering algorithms are likely to group samples that are close to each other but belong to different categories into a single cluster; therefore, it is difficult to achieve ideal classification accuracy. This is mainly reflected in the significant disparities in accuracy across different sample sets. For sample sets with a relatively regular distribution (the distance between classes is large and the distance within classes is small), the processing results are good; for sample sets with an irregular distribution (complex interleaving between classes, large distances within classes), serious misclassification occurs. Therefore, the performance bottleneck of the three-layer RBF mainly comes from the limitations of the clustering algorithm.
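    For concreteness, the following is a minimal Python sketch (not the authors' implementation) of this classical training pipeline: k-means picks the centres, the width is set from the maximum inter-centre distance L_max as in Eq. (3), and the output weights are solved by least squares as in Eq. (4). The function names and the use of scikit-learn's KMeans are illustrative assumptions.

    ```python
    # A minimal sketch of classical RBF training: k-means centres,
    # sigma from L_max, least-squares output weights.
    import numpy as np
    from sklearn.cluster import KMeans

    def train_rbf(X, d, h):
        """X: (N, in_dim) inputs, d: (N, out_dim) targets, h: number of centres."""
        centres = KMeans(n_clusters=h, n_init=10).fit(X).cluster_centers_
        # L_max: maximum distance between any pair of centres.
        diffs = centres[:, None, :] - centres[None, :, :]
        l_max = np.sqrt((diffs ** 2).sum(-1)).max()
        sigma = l_max / np.sqrt(2 * h)                      # Eq. (3)
        # Hidden-layer response matrix Phi, from Eq. (1).
        dist2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        phi = np.exp(-dist2 / (2 * sigma ** 2))
        w, *_ = np.linalg.lstsq(phi, d, rcond=None)         # Eq. (4)
        return centres, sigma, w

    def rbf_forward(X, centres, sigma, w):
        dist2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-dist2 / (2 * sigma ** 2)) @ w        # Eq. (2)
    ```

    Note that the centres and widths are frozen once clustering finishes, which is exactly the rigidity the RBF-MLP structure below is designed to remove.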

    Fig. 1. RBF neural network structure.

    Fig. 2. Performance comparison of the MLP and RBF neural networks on the Wine data set: (a) 10 rounds; (b) 30 rounds; (c) 60 rounds.

    2.2. The RBF-MLP network

    The multi-layer perceptron (MLP) is similar in structure to the RBF network: it is also a feedforward neural network, comprising an input layer, one or more hidden layers, and an output layer.[27] The error is calculated using forward propagation, and then supervised learning is carried out using the back-propagation algorithm until the error is lower than the established standard, to achieve better classification ability. A Sigmoid function is used as the activation function in the hidden layer, and the Softmax function is used to map the classification results in the output layer. Figure 2 shows the training and test results of the RBF network and the MLP on the Wine data set. Over a few experimental rounds, the difference in classification ability between the two is not apparent.[28] As the number of experimental rounds increases, the gap between the two networks becomes more evident. After removing some "bad" results, it can be seen that the performance of the two networks follows an irregular curve with stable upper and lower bounds, and the accuracy of the MLP is clearly better than that of the RBF network in both the training and testing stages. Although the MLP has the disadvantages of long training time and poor coverage, its better performance provides new ideas for improving the RBF network structure.[29]

    Fig. 3. RBF-MLP neural network structure.

    Therefore, the operation of the error reverse update can be divided into a further, more precise part. As shown in Fig. 3, the basis functions of the RBF network's hidden layer connect to the hidden layer of the MLP through weights for further processing. The error is automatically corrected through the back-propagation of the MLP, and the optimized parameters are sent back to the basis functions. In the final step, the results are normalized by the Softmax activation function in the output layer.[30] For the adaptive error adjustment process of the network, consider the input as the joint action of a vector v, so that the input x_p can be represented by the component x_p(v). The centres and variances are initialized as μ_n(v) and σ_n(v). The actual output g(n1) of each basis function is calculated as follows:

    $$g_{n_1}(v)=\exp\left(-\frac{\|x_p(v)-\mu_{n_1}(v)\|^2}{2\sigma_{n_1}^2(v)}\right),\tag{5}$$

    and the network output is obtained by passing these responses through the MLP layers with weights W and biases b:

    $$y=S\big(W_2\, s(W_1 g + b_1)+b_2\big).\tag{6}$$

    Here, s represents the Sigmoid activation function in the MLP, which is used to map the errors of the hidden layer and make classification judgments; S is the Softmax activation function, applied to normalize the probability distribution of the output layer.[30] Under the cooperation of the two hidden layers, the basis functions can adjust their parameters adaptively according to the error value, thus making complex nonlinear examples linearly separable. Compared with a network structure that uses an external algorithm framework for parameter adjustment, this method of error back-propagation and optimization within the network not only provides more timely error feedback, but also improves the accuracy of parameter adjustment. Furthermore, it largely avoids the misclassification and over-classification caused by poor parameter selection.
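    A minimal PyTorch sketch of this idea follows (illustrative, not the authors' code): registering the Gaussian centres and widths as trainable parameters means MLP back-propagation updates them together with the weights, which is the cooperative adjustment described above. The layer sizes n1 and n2 correspond to the hidden-layer scales discussed in the next subsection.

    ```python
    # RBF-MLP sketch: centres/widths are nn.Parameters, so loss.backward()
    # propagates the error into the basis functions themselves.
    import torch
    import torch.nn as nn

    class RBFMLP(nn.Module):
        def __init__(self, in_dim, n1, n2, n_classes):
            super().__init__()
            self.centres = nn.Parameter(torch.randn(n1, in_dim))   # mu_n
            self.log_sigma = nn.Parameter(torch.zeros(n1))         # sigma_n (log for positivity)
            self.hidden = nn.Linear(n1, n2)                        # MLP hidden layer
            self.out = nn.Linear(n2, n_classes)

        def forward(self, x):
            # Gaussian basis response, Eq. (5).
            dist2 = torch.cdist(x, self.centres).pow(2)
            g = torch.exp(-dist2 / (2 * self.log_sigma.exp().pow(2)))
            h = torch.sigmoid(self.hidden(g))                      # s(.)
            return torch.log_softmax(self.out(h), dim=1)           # S(.) as log-probs

    # Training with cross-entropy back-propagates into centres and widths:
    # model = RBFMLP(784, n1=64, n2=32, n_classes=10)
    # loss = nn.NLLLoss()(model(batch_x), batch_y); loss.backward()
    ```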

    2.3. Automatic parameter search using the genetic algorithm

    While optimizing the basis function parameters, the determination of the hidden layer scale is equally significant; that is, the values n1 and n2 in the RBF-MLP structure also affect the performance of the network. To be widely applicable to all kinds of sample sets, it would be necessary to increase n1 and n2 as much as possible to reduce the amount of data processed by each unit. In this case, although the classification effect is guaranteed, it leads to a series of generalization errors, such as overfitting. In addition, the scale of the network would become uncontrollable as the sample set grows. Therefore, the values of n1 and n2 must be elastic and adaptive.

    A genetic algorithm (GA), also known as an evolutionary algorithm, is a search algorithm based on Darwin's theory of evolution.[31,32] The specific concept is: treat the problem to be solved as a process of biological evolution; eliminate the solutions that do not adapt to the task through a series of processes such as crossover, mutation and selection; retain the highly adaptable solutions in the later generations and evolve repeatedly; and finally obtain the optimal solution by decoding the optimal individuals of the last generation.[33] Therefore, in this paper, obtaining the optimal network structure is considered as a task requiring constant evolution, and the optimal n1 and n2 values can be regarded as the optimal solution to the problem. The overall operation process is shown in Fig. 4.

    The first step is population initialization, which determines whether the whole genetic algorithm can finally converge to the global optimal solution. For the small-scale optimization problem in this paper, the random number generation method is used. Then, the fitness function is called, and the fitness obtained determines whether a solution is optimal or not.[34] If not, the best contemporary individuals are selected to proceed with subsequent operations. To ensure the excellence of the next generation, the binary tournament selection method is used in the experiment to iterate according to the fitness value and to select the outperformers for the next generation population set.[35] The next step is crossover and mutation, which is the core of the genetic algorithm and completes the genetic operation. Crossover refers to the process in which the genes of chromosomes are crossed and parts of the chromosomes are exchanged with a certain probability, so as to pass the good genes of the previous generation to the next. In this paper, simulated binary crossover (SBX) is adopted: a pair of parent chromosomes represented by seven-bit binary strings are truncated at a position given by a generated random number and recombined to form a new offspring chromosome.[36] By solving the binary values for random numbers generated within the range, we obtain the distribution coefficient β, which represents the crossover relationship between the parent generation and the child generation. It can then be fitted with the probability density function C(β):

    $$C(\beta)=\begin{cases}0.5\,(n+1)\,\beta^{\,n}, & \beta\le 1,\\ 0.5\,(n+1)\,\beta^{-(n+2)}, & \beta>1,\end{cases}\tag{13}$$

    Fig. 4. A flow chart of optimization of the RBF-MLP network using the genetic algorithm.

    where n stands for the distribution index; the larger n is, the closer the relationship between the two generations. The distribution function is calculated from the probability density and converted into the cumulative distribution function shown in Eqs. (14) and (15), respectively:

    $$F(\beta)=0.5\,\beta^{\,n+1}, \quad \beta\le 1,\tag{14}$$

    $$F(\beta)=1-0.5\,\beta^{-(n+1)}, \quad \beta>1.\tag{15}$$

    The distribution function represents the crossover situation of each generation intuitively, and the information of the offspring can be obtained through it. After the crossover operation, the set containing the new generation of individuals undergoes genetic variation with a probability given by the set parameters. To speed up the mutation efficiency, a polynomial mutation operator is used in this paper.[33,37] The order of the decision variables is determined according to fitness, and the individuals corresponding to the decision variables are mutated according to the polynomial mutation mechanism:

    $$\delta=\begin{cases}(2h)^{1/(n_m+1)}-1, & h<0.5,\\ 1-\big(2(1-h)\big)^{1/(n_m+1)}, & h\ge 0.5,\end{cases}\tag{16}$$

    where h is a random number within [0,1]; n_m is the polynomial mutation index; v_p is the parent solution; and v_l/v_u represent the lower/upper bounds of the solution variable. The mutation operator can be expressed as

    $$v'=v_p+\delta\,(v_u-v_l).\tag{17}$$

    The early stage of mutation can be regarded as a global search, so the mutation probability is kept at a relatively high level. As the iteration progresses, the mutation probability drops to a very low level; the mutation probability thus undergoes an adaptive dynamic change from high to low, accelerating convergence to the optimal solution in the later stage of operation. After this series of operations, the new generation is sent back to the fitness function for calculation and judgment. The cycle is repeated until the optimal solution of the problem is obtained: that is, the values of n1 and n2 in the RBF-MLP network. Clearly, this method of obtaining parameters using genetic-algorithm evolution and an elimination mechanism can be used effectively to determine the optimal network parameters.
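    The following Python sketch shows, under stated assumptions, how such a GA search for (n1, n2) can be organized using SBX crossover (Eqs. (13)–(15)) and polynomial mutation (Eqs. (16), (17)) with a decaying mutation rate. The bounds, distribution indices and the fitness callable are illustrative assumptions, not values from the paper; the fitness function is expected to train an RBF-MLP with the candidate sizes and return its validation accuracy.

    ```python
    # GA sketch for searching the hidden-layer sizes (n1, n2).
    import random

    N_INDEX, NM_INDEX = 2.0, 20.0          # assumed distribution/mutation indices
    LOW, HIGH = 8, 128                     # assumed bounds for n1, n2

    def sbx(p1, p2):
        u = random.random()
        beta = (2 * u) ** (1 / (N_INDEX + 1)) if u <= 0.5 \
            else (1 / (2 * (1 - u))) ** (1 / (N_INDEX + 1))
        return (0.5 * ((1 + beta) * p1 + (1 - beta) * p2),
                0.5 * ((1 - beta) * p1 + (1 + beta) * p2))

    def poly_mutate(v, rate):
        if random.random() > rate:
            return v
        h = random.random()
        delta = (2 * h) ** (1 / (NM_INDEX + 1)) - 1 if h < 0.5 \
            else 1 - (2 * (1 - h)) ** (1 / (NM_INDEX + 1))    # Eq. (16)
        return min(max(v + delta * (HIGH - LOW), LOW), HIGH)  # Eq. (17), clipped

    def evolve(fitness, pop_size=20, generations=30):
        pop = [(random.uniform(LOW, HIGH), random.uniform(LOW, HIGH))
               for _ in range(pop_size)]
        for g in range(generations):
            rate = 0.3 * (1 - g / generations)    # mutation rate decays over time
            # Binary tournament selection on fitness.
            parents = [max(random.sample(pop, 2), key=fitness)
                       for _ in range(pop_size)]
            children = []
            for a, b in zip(parents[::2], parents[1::2]):
                n1a, n1b = sbx(a[0], b[0])
                n2a, n2b = sbx(a[1], b[1])
                children += [(poly_mutate(n1a, rate), poly_mutate(n2a, rate)),
                             (poly_mutate(n1b, rate), poly_mutate(n2b, rate))]
            pop = children
        best = max(pop, key=fitness)
        return round(best[0]), round(best[1])      # optimal (n1, n2)
    ```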

    3. Characteristic analysis of the spin memristor and crossbar array

    3.1. Model of the spintronic memristor

    The spintronic memristor is a layered domain wall memristor based on the spin properties of electrons. As shown in Fig. 5, the device consists of two magnetic layers and a thin oxide layer sandwiched between them. One magnetic layer is a free layer with a movable domain wall that divides the layer into two magnetically opposite parts. The other layer is a fixed magnetic layer with a fixed magnetization intensity, whose fixed magnetization direction also makes it a reference.[38] When an external current is applied, the position of the domain wall shifts and the resistance changes: the resistance of the part of the free layer magnetized parallel to the fixed layer decreases, and that of the other part increases. It follows from these properties that the spin memristor has a threshold. When the current drives the domain wall to the edge so that the free layer is fully parallel to the fixed layer, the resistance reaches its minimum R_off; when the domain wall moves to the opposite edge, the resistance reaches its maximum R_on. Because of the nanometre thickness of the domain wall, its own width is negligible.[11] The resistance M of the spin memristor can be expressed as

    $$M=R_{\mathrm{on}}-\left(R_{\mathrm{on}}-R_{\mathrm{off}}\right)\frac{W}{D},\tag{18}$$

    where W is the width of the part of the free layer that is magnetized parallel to the fixed layer, and D is the total width of the spin memristor. The domain wall velocity v can be expressed as

    $$v=\frac{\mathrm{d}W}{\mathrm{d}t}=\Gamma J=\Gamma\,\frac{i(t)}{S},\tag{19}$$

    where Γ is the domain wall coefficient, which depends on the material and device structure and is constant once the material and structure are fixed, J is the current density, and S is the cross-sectional area of the spin memristor. Therefore, the resistance of the spin memristor can also be expressed as

    $$M(t)=R_{\mathrm{on}}-\frac{R_{\mathrm{on}}-R_{\mathrm{off}}}{D}\,\frac{\Gamma}{S}\int_{0}^{t} i(\tau)\,\mathrm{d}\tau.\tag{20}$$

    According to Eqs. (19) and (20), the domain wall velocity v is proportional to the current density, and the movement of the domain wall depends on whether the current density J reaches the critical current density. When J is greater than or equal to the critical current density, the resistance state of the spin memristor is variable; otherwise it remains constant.

    Thus, the resistance of the spin memristor varies with the location of the domain wall, and whether the domain wall moves depends on whether the current density reaches a critical value. This causal relationship gives the spin memristor a stable threshold and switching characteristics. Compared with conventional memristors, the writing operation in a spintronic memristor is achieved by applying a current through the magnet itself rather than generating a solid reversal field, so the spintronic memristor offers lower power consumption. In addition, the structure of the spin memristor does not require additional metal wires as current paths, which dramatically improves its scalability.

    Fig. 5. (a) The internal structure of the spin memristor. (b) The equivalent circuit of the spin memristor.

    3.2. Characteristic analysis of the spin memristor

    To better illustrate the advantages of this device, this section analyzes the characteristics of the spin memristor by numerical simulation. First, we set the circuit parameters as follows: R_on = 6 kΩ, R_off = 5 kΩ, Γ = 1.357×10⁻¹¹, J = 5×10⁷ A/cm², D = 1000 nm, S = 70 nm². The input sinusoidal voltage is V = V₀ sin(2π f t), with amplitudes in the range [0.38, 1.2] V and frequencies in the range [10, 50] MHz. Figure 6(a) shows the variation of three sinusoidal signals over time. The voltage parameters used in this paper are V_a (0.81 V, 10 MHz), V_b (0.5 V, 10 MHz), and V_c (0.42 V, 50 MHz). In Fig. 6(b), the I–V curves of the spin memristor are the same as those of the HP laboratory memristor model, showing typical hysteresis loops, which indicates that the spin memristor retains the excellent circuit characteristics of the memristor. It is worth noting that the hysteresis loop is slightly rough at both ends owing to the boundaries of the spin memristor: the resistance is at its extremum at both ends, where the voltage and current have a transiently linear relationship. Figures 6(c) and 6(d) show the curves of resistance against time and against voltage, respectively. Under the three applied sinusoidal voltages, the domain wall moves in the free layer from the edge with the same magnetization direction as the fixed layer, so that the resistance changes periodically with the voltage, starting from the minimum value R_off = 5 kΩ. The M–T and M–V response curves show a horizontal segment over certain intervals regardless of the voltage parameters, which means there is some stagnation time in the change of the spin memristance. Clearly, this stagnation is not caused by reaching the maximum resistance R_on. Its primary cause is that the current density J does not reach the critical current density; the spin memristance therefore stops changing because of the boundary effect, and remains constant for a period of time until the current density again exceeds the critical value.

    It can be clearly seen that the spin memristor has pronounced boundary effects and threshold properties, which differ from the threshold properties displayed by the traditional memristor only at the critical point. In the HP memristor, the resistance value depends only on the doping concentration of oxygen ions, and its behaviour does not change with the current density. For the spin memristor, by contrast, the resistance can be held at a constant value between R_on and R_off by changing the current parameters. It can therefore be applied effectively in circuits realizing a neural network, which involves the storage, updating and calculation of a large number of weight parameters.
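    A sketch of such a numerical simulation is given below, based on the reconstructed Eqs. (18)–(20) with a hard threshold on |J| and the paper's parameter values converted to SI units (the time step and trace length are illustrative assumptions). It reproduces the qualitative M–t behaviour described above: the memristance changes only while |J| exceeds the critical current density.

    ```python
    # Spin memristor simulation sketch: the wall position (w = W/D) integrates
    # Eq. (19) only when the current density exceeds the critical value.
    import numpy as np

    R_ON, R_OFF = 6e3, 5e3          # ohms (R_on is the maximum here)
    GAMMA = 1.357e-11               # domain wall coefficient (paper's value)
    J_C = 5e7 * 1e4                 # critical current density, A/cm^2 -> A/m^2
    D = 1000e-9                     # free-layer width, m
    S = 70e-18                      # cross-section, 70 nm^2 in m^2

    def simulate(v0, freq, t_end=2e-7, steps=20000):
        t = np.linspace(0, t_end, steps)
        dt = t[1] - t[0]
        w = 1.0                     # start fully parallel: M = R_off
        m_trace = np.empty(steps)
        for k, tk in enumerate(t):
            m = R_ON - (R_ON - R_OFF) * w          # Eq. (18)
            i = v0 * np.sin(2 * np.pi * freq * tk) / m
            j = i / S
            if abs(j) >= J_C:                      # threshold: wall moves
                w += GAMMA * j * dt / D            # Eq. (19)
                w = min(max(w, 0.0), 1.0)          # boundary effect
            m_trace[k] = m
        return t, m_trace

    t, m = simulate(v0=0.81, freq=10e6)            # signal V_a from the paper
    ```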

    Fig. 6. Circuit characteristic curves of the spin memristor with sinusoidal voltage inputs of different parameters. (a) Input voltage curve. (b) I–V curve. (c) M–T curve. (d) M–V curve.

    3.3. Spin memristor crossbar array

    Figure 7 shows a 5×5 crossbar array of spin memristors, whose size can be adjusted according to actual requirements. The resistance value of each node has a bidirectional relationship with the voltages applied along the vertical and horizontal lines.[25] The resistance value can not only be changed by applying voltages in the two directions, but can also be read via the voltages in the two directions.[24,39]

    This mesh of nodes, each contacting its row and column lines independently, closely resembles the topology of connections between neurons in a neural network. Firstly, it naturally provides the ability to combine input signals in a weighted manner, mimicking the dendritic potential. Secondly, in line with the characteristics of a synaptic network, it can support a large number of signal connections within a small footprint. Therefore, the weight variables of the neural network can change with the resistance of each spin memristor. Moreover, a circuit composed of a memristor crossbar array can retain the trained model and parameters in the case of a power failure. This avoids repeated parameter optimization when the neural network is used for calculation again, and thus significantly improves efficiency in practical applications.
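    As a minimal illustration of this weighted-combination role, an idealized crossbar computes a matrix-vector product in one step: each column current is the conductance-weighted sum of the row voltages. The sketch below is an assumption-level model that deliberately ignores device non-idealities such as wire resistance and sneak paths.

    ```python
    # Idealized crossbar model: column currents are conductance-weighted
    # sums of row voltages, I = G^T V -- the weighted summation a
    # neural-network layer needs.
    import numpy as np

    rng = np.random.default_rng(0)
    R = rng.uniform(5e3, 6e3, size=(5, 5))         # memristances in [R_off, R_on]
    G = 1.0 / R                                    # conductance matrix (weights)
    v_rows = np.array([0.1, 0.2, 0.0, 0.3, 0.1])   # input voltages on the 5 rows
    i_cols = G.T @ v_rows                          # output currents on the 5 columns
    ```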

    Fig. 7. The spin memristor crossbar array structure.

    4. Spintronic memristor-based RBF-MLP neural network design

    Fig. 8. A spin memristor crossbar array circuit for implementing the RBF-MLP network.

    As mentioned above, the spin memristor can be used to realize neural network functions owing to characteristics such as its threshold, non-volatility, low power consumption and high efficiency. The two hidden layers in the RBF-MLP network play different roles; therefore, the overall hardware circuit structure is divided into two parts. The first part is used to simulate the parameter updating operation in the RBF layer, and consists of a 5×5 spin memristor crossbar array, two demodulators and several NMOS switches for control. The other part consists mainly of four inverting amplifiers in the peripheral circuit, and is responsible for simulating the parameter optimization and error correction of the MLP layer. V_read is converted between digital and analogue form by the demodulator; the converted signal is then input into the crossbar array circuit. Owing to the spintronic memristor's boundary properties, the current density in each direction is controlled to determine whether each unit of the crossbar array needs to be activated. When the two switches on the left are activated simultaneously, the resistance value is converted into a signal and connected to the first inverting amplifier, which is in turn connected to the peripheral circuit for processing and calculation. Under the effect of the error signal V_bias and the training signal V_train applied successively from outside, the processed signal is returned to the demodulator and compared with the set standard to determine whether to suspend the error updating process. The overall circuit structure is shown in Fig. 8.

    During circuit operation, each memristor value is initialized between R_on and R_off. Meanwhile, the memristor crossbar array can be regarded as a single equivalent resistance R_M for convenience. To read the whole resistance R_M in the training stage, the NMOS switch is opened under the joint action of Clk1 and Clk2, and R_M is connected to the external circuit as the input of the first-stage amplifier. According to the working principle of the inverting amplifier, its output v1 can be expressed as

    $$v_1=-\frac{R_\alpha}{R_M}\,v_{\mathrm{read}},\tag{21}$$

    where R_α denotes the feedback resistance of the first stage.

    Under the influence of the input error signal, the output v2 of the second stage, which is affected by v_bias, can be expressed (assuming an equal-resistance inverting summer) as

    $$v_2=-\left(v_1+v_{\mathrm{bias}}\right).\tag{22}$$

    The third-stage amplifier is responsible for the input of the training signal. To update the value of the memristor array, the ratio of R_β to R_γ is quantified and set as the training coefficient η:

    $$\eta=\frac{R_\beta}{R_\gamma}, \qquad v_3=-\eta\left(v_2+v_{\mathrm{train}}\right).\tag{23}$$

    From the above relationships, it can be seen that the value of v3 is affected not only by the joint action of the v_read, v_train and v_bias signals, but also by the training coefficient η. The weight reading/updating of the neural network can thus be simulated by the three voltage signals, and the training coefficient η plays a role similar to the learning rate, representing the speed at which the network's weights are adjusted by the loss gradient. After the error adjustment and training operations, the signal v3 is stored in the capacitor C and released when Clk2 is activated. This switch arrangement effectively avoids the disorderly superposition of signals. The signal produced by this series of operations is sent back out to the demodulator for conversion and comparison with the set parameters. If the comparison result is not satisfactory, the next round of learning and training is entered.
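    Read as a computation, this amplifier chain applies a learning-rate-scaled correction to the stored weight. The following sketch traces one read/update cycle under the reconstructed Eqs. (21)–(23); the resistor values are illustrative assumptions, not taken from Fig. 8.

    ```python
    # One read/update cycle of the three-stage amplifier chain.
    R_ALPHA = 5.5e3              # assumed feedback resistance of the first stage
    R_BETA, R_GAMMA = 1e3, 1e4   # assumed: eta = R_beta / R_gamma = 0.1

    def update_cycle(r_m, v_read, v_bias, v_train):
        v1 = -(R_ALPHA / r_m) * v_read    # Eq. (21): read the array state
        v2 = -(v1 + v_bias)               # Eq. (22): add the error signal
        eta = R_BETA / R_GAMMA            # training coefficient (learning rate)
        v3 = -eta * (v2 + v_train)        # Eq. (23): scaled training update
        return v3                         # stored on C until Clk2 releases it

    v3 = update_cycle(r_m=5.5e3, v_read=0.5, v_bias=0.05, v_train=0.2)
    ```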

    It can be seen that this kind of spintronic memristor-based circuit, with its low energy consumption and small volume, can intuitively simulate the running process of the RBF-MLP network. In contrast to other devices, the memristor's nonlinearity allows its value to be adjusted according to the input signal, and the threshold properties of spintronic memristor devices also help to stabilize the circuit parameters. In addition, the resistance value is no longer limited to the rigid boundaries R_on and R_off, which allows the circuit to change parameters by adjusting the current density, further achieving the effect of adaptively changing the circuit parameters for the updated circuit signal.

    5. Analysis of experimental results

    In the experimental stage, the samples used to test the network's classification performance must exhibit complexity and diversity, and for comparative experiments we also took the universality of the sample set into account. In this article, the Modified National Institute of Standards and Technology (MNIST) database was chosen as the experimental dataset; it contains 60000 training images and 10000 testing images, as shown in Fig. 9. Meanwhile, to verify the performance of the RBF-MLP network on the classification task, the control variable method was used in the experiments to make a horizontal comparison of the specific implementations of the network: according to the characteristics of the network structure, three similar but not completely identical network models were established, as follows.

    Fig. 9. Several examples from the MNIST dataset.

    (i) RBF network (hidden-layer nodes optimized by GA)

    This network contains only an RBF neural network, and the genetic algorithm automatically finds the optimal node parameters of the RBF's hidden layer. The number of basis functions can be determined adaptively, but there is no adaptive error adjustment.

    Fig. 10. The training error rate curve of network model (i).

    The experimental results are shown in Fig. 10. It can be seen that after 25 epochs, the classification error rate remained stable at 0.02.

    (ii) RBF-MLP network (only RBF hidden-layer nodes optimized by GA)

    This model uses the complete RBF-MLP neural network structure, but the genetic algorithm is only used to optimize the RBF's hidden layer, so the number of MLP nodes cannot be adjusted automatically. According to the experimental results, the classification error rate decreased as the number of epochs increased, and stabilized at 0.0175 after 25 epochs, as shown in Fig. 11.

    Fig. 11. The training error rate curve of network model (ii).

    (iii) RBF-MLP network (RBF & MLP hidden-layer nodes optimized by GA)

    This model also uses the RBF-MLP network structure, but the genetic algorithm is used to optimize the parameters of both inner hidden layers. Therefore, the network's structure continues to improve with the iterations of the genetic algorithm. In Fig. 12, we find that the error rate of this structure is the lowest, stabilizing at 0.0167. In other words, the classification accuracy of the network reaches 98.33%.

    Fig. 12. The training error rate curve of network model (iii).

    Over multiple experiments, the variation in the running time of 25 epochs under the same operating environment is within 5% (this figure decreases with better operating environment configurations, such as using a high-performance graphics card or running directly on a server with high computing power), so the gap in efficiency is minimal. The experimental results therefore not only indicate that the performance of the network can be greatly improved by using the genetic algorithm for automatic network search and parameter setting, but also confirm that the RBF-MLP structure improves on a single RBF network. In addition, it is worth mentioning that even if the centres and widths of the RBF network are randomly initialized, the same precision can be achieved after training, which means that the network model has outstanding robustness.

    Table 1 and Fig. 13 show the comparison of results among various types of RBF neural networks. Clearly, the classification performance of the RBF-MLP is excellent when algorithm and structure optimization are carried out simultaneously. It is worth mentioning that the accuracy of the FL-RBF-CNN network, which has the best classification effect, increases with the number of training epochs.[40] By contrast, the RBF-MLP network maintains a relatively stable accuracy from the start of training. This is due to the universality of the best model obtained by the genetic algorithm, which the other networks lack.

    Table 1. Performance of various optimized RBF/MLP networks on the MNIST dataset within 25 epochs.

    Fig. 13. Performance comparison of other types of RBF network models on the MNIST dataset. I: The RBF-MLP model optimized by GA. II: The deep-RBF model with first-class loss function.[41] III: The fuzzy logic (FL) RBF-CNN model.[40]

    The RBF-MLP network also has outstanding advantages over other types of networks. Figure 14 shows the MNIST training accuracy of an unsupervised spiking neural network (SNN) based on the spike-timing-dependent plasticity (STDP) learning rule.[44] As the number of reference-vector neurons increases (400, 1600, 3600, 6400), the portion of the input space per neuron decreases, improving accuracy by allowing each individual neuron to be more restrictive in its angular scope. As shown in Table 2, an SNN-STDP of considerable accuracy must contain enough neurons owing to its fully-connected one-layer structure, whereas the RBF-MLP requires far fewer neurons to accomplish the same task. Even when the number of neurons reaches 6400, the accuracy of the two training tasks remains at the same level.

    Table 2. Comparison of training accuracy of the RBF-MLP network and the SNN-STDP network on the MNIST dataset within 25 epochs. Highlighted cells are the best configuration for each size.

    Fig. 14. Performance comparison of the RBF-MLP and the SNN-STDP on the MNIST dataset.

    However, after a series of experiments, it is found that the deficiency of this network structure is its long training time. This holds whether or not the genetic algorithm optimization is applied; for example, the difference in training time between models (ii) and (iii) is only 2.4%–3.5%. The long training time is mainly due to the mutation rate of the genetic algorithm and its descending trend with the number of iterations: in the later stages of evolution, extremely low mutation rates lead to an excessively long training time, so obtaining the optimal solution comes at the cost of efficiency.

    6. Conclusion

    In this article, we first proposed the RBF-MLP structure in response to the bottleneck of the RBF neural network. The centres and widths of the hidden layer can be adjusted adaptively through cooperative operation, which solves the problem of confirming the parameters of the basis functions. In addition, a genetic algorithm is used to optimize the hidden layer and realize the automatic search of network model parameters. Then, according to the characteristics of the network structure, a spintronic memristor-based circuit is used to simulate the error back-propagation updating and training operations of the network by combining the memristor device with an external circuit, and a small neural network is realized on the memristor circuit.

    The experimental results show that the improved network structure can improve the classification accuracy to a certain extent, and the compatibility between the model and the data set can also be improved by invoking the external framework modelling method. In testing on the MNIST data set, the classification accuracy reaches 98.33% for a fixed amount of training. However, setting aside improvements to the operating environment configuration (such as processing on a high-computing-power server or a high-performance GPU), the long training duration leaves room for further improvement, such as adjusting the activation function to further improve the generalization ability, or adjusting the mutation rate adaptively according to network feedback. All these points can serve as directions for further research.
