
    A deep learning method based on prior knowledge with dual training for solving FPK equation

2024-01-25 07:11:34
    Chinese Physics B, 2024, Issue 1

    Denghui Peng(彭登輝), Shenlong Wang(王神龍), and Yuanchen Huang(黃元辰)

School of Mechanical Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China

Keywords: deep learning, prior knowledge, FPK equation, probability density function

1. Introduction

The Fokker–Planck–Kolmogorov (FPK) equation governs the temporal evolution of probability density functions (PDFs) within a stochastic dynamic system, and it stands as one of the foundational equations for investigating stochastic phenomena. The FPK equation is widely utilized in many fields, such as biology,[1] chemistry,[2] economics,[3] applied mathematics,[4] statistical physics,[5] and engineering.[6] By integrating the PDF of the stochastic dynamic system, one can obtain the probability distribution and mathematical expectation of macroscopic variables. When the excitation of the stochastic dynamic system is Gaussian white noise, the corresponding FPK equation is a second-order parabolic partial differential equation (PDE), whose analytical solution can be obtained only in some particular cases. Therefore, in most cases, numerical methods are considered feasible for solving the FPK equation.

In the past few decades, a large number of numerical methods based on meshing have been used to solve the FPK equation, such as the finite element method,[7] finite difference method,[8] path integration method,[9] and variational method.[10] The disadvantage of these methods is that they produce solutions only at grid points, and the solution at any other point requires reconstruction or interpolation. Moreover, for high-dimensional systems, the dependence of these methods on grid division leads to a sharp increase in the amount of data, and the solution procedure becomes quite time- and resource-consuming. In addition, Monte Carlo simulation (MCS) is also frequently employed to address the FPK equation,[11] which approximates the probability density distribution either by sampling extended trajectories of the stochastic differential equation (SDE) or through the application of the conditional Gaussian framework.[12] The correctness of the solution for this method depends on the volume of data collected, although it does not require interpolation. Furthermore, for high-dimensional systems, it is barely possible to obtain a joint probability density function (JPDF) of high accuracy using the aforementioned approaches, due to the large computational cost involved.

Recently, the increasing abundance of computing resources and the rapid development of neural networks (NNs) have provided a promising way to solve PDEs.[13–17] For general high-dimensional parabolic PDEs, Han et al.[13] designed a deep learning (DL) framework which used backward SDEs to reformulate PDEs and used NNs to approximate the gradient of the solution. Raissi et al.[14] proposed a DL method to solve forward–backward SDEs and their corresponding high-dimensional PDEs. Subsequently, they proposed the physics-informed neural network (PINN) by embedding physical information into the NN.[15] The method can be applied to a wide range of classical problems in fluids and quantum mechanics. Berg et al.[16] used deep feedforward artificial NNs to approximate the solution of PDEs in complex geometries and emphasized the advantages of deep NNs over shallow ones. Sirignano et al.[17] solved higher-dimensional PDEs using a deep Galerkin method which trained the NN to satisfy the differential operator form together with the initial and boundary conditions. Although the FPK equation is a PDE, the methods mentioned above cannot be used to solve it. The primary cause is that the equation lacks a constant term and the probability density vanishes at the boundary, which makes zero a trial solution. Consequently, numerous DL-based methods for solving the FPK equation have been proposed.[18–27] Among them, Xu et al.[18] proposed a method for solving general Fokker–Planck (FP) equations based on DL, called DL-FP, which innovatively added a normalization condition and set penalty factors in the supervision conditions to avoid the zero trial solution and to skip local optima. However, the normalization condition of this method is closely tied to global information, which means that the amount of calculation data grows exponentially with the dimension, so the method runs into the curse of dimensionality when solving higher-dimensional problems. Subsequently, Zhang et al.[19] optimized the training data on the basis of DL-FP. A suitable and small number of discrete integral points and integral volumes were obtained through multiple k-dimensional-tree (KD-tree) segmentations of the random data set, which solved the problem brought by the global information that needs to be considered. Furthermore, they designed a new DL framework that included multiple NNs, with one NN matched to each dimension, making the training data grow linearly instead of exponentially.[20] Wang et al.[21] proposed an iterative selection strategy of Gaussian neurons for radial basis function neural networks (RBFNNs), applied to obtain the steady-state solution of the FPK equation. The strategy is much more efficient than the RBFNN method with uniformly distributed neurons. Another approach is to use the PDF calculated from prior data, instead of the normalization condition, to supervise the solution of the FPK equation.[22,23] Among them, Zhai et al.[23] developed a meshless FPK solver that used prior data, obtained from an MCS solution of the FPK equation, as the supervised condition instead of a normalization condition, which not only excluded the zero solution but also avoided data explosion. Nevertheless, that method still needs improvement, since the prior data supervising the solution, obtained from numerical simulations, is relatively rough.

In general, the speed and accuracy of the above methods are not always guaranteed. For the DL-FP method,[18] the presence of the normalization condition in the loss function makes the solution less efficient and less applicable. For mesh-free methods,[23] it is a good idea to utilize prior knowledge to guide the solution process and thereby avoid the normalization condition. However, it is not a suitable choice to use prior knowledge containing gradient-information errors together with the FPK differential operator to guide the solution process, even if the NN is robust. Inspired by the above methods, we propose a DL method based on prior knowledge (DL-BPK) to solve the FPK equation. The proposed method is composed of dual training processes: the pre-training process and the routine training, or prediction, process. In pre-training, the prior knowledge obtained from MCS is injected into the training to obtain a pre-trained model. The role of the pre-trained model is to make the NN jump out of local optima faster and so accelerate the training process, especially in high dimensions, which is similar to the function of initializing network parameters. Based on the pre-trained model, the FPK differential operator is used as a supervised condition to make the final solution sufficiently smooth and accurate. In addition to the FPK differential operator, a supervised condition should be set to suppress the generation of zero solutions. Herein, the points with the highest probability density within a small range under the pre-trained model, i.e., the local maximum points, are selected as the supervision points. There are two reasons for this setting: one is to avoid generating the zero trial solution, and the other is to minimize perturbations in the shaping of the PDF, attributable to the absence of gradient-information errors. Finally, the PDF generated by the NN undergoes normalization, a process that serves to correct errors in the PDF values. In summary, the main contributions of this article are as follows: (i) introducing a new supervised condition based on local maximum points and the FPK differential operator form, significantly improving the accuracy and robustness of the solution; (ii) implementing a pre-trained model based on prior knowledge, which reduces training costs and accelerates the convergence of NNs.

The rest of this article is organized as follows. The DL-BPK method is described in detail in Section 2. In Section 3, we use four examples to verify the effectiveness of our method: two two-dimensional (2D) systems, one six-dimensional (6D) system, and one eight-dimensional (8D) system. Comparisons between the proposed method and the analytical solution, as well as other methods, are carried out to demonstrate the superiority of the proposed method. In Section 4, the impact of the pre-trained model and the penalty factor on the results is discussed. Finally, Section 5 presents conclusions to close this paper.

2. Methodology

In this section, the FPK equation and the necessary conditions for its solution are described in detail in Subsection 2.1, and a detailed introduction to the DL-BPK method is then provided in Subsection 2.2.

2.1. The FPK equation

Consider the general form of SDEs

    dX = f(X) dt + σ(X) dW(t),    (1)

    where X = [x_1, ..., x_i, ..., x_n]^T denotes an n-dimensional variable, f(X) represents an n-dimensional vector function of X, σ(X) is a square matrix (assumed to be constant later in the paper), and W(t) is an n-dimensional standard Gaussian white noise. For Eq. (1), the probability density distribution that X obeys is described by the corresponding FPK equation

    ∂p(X, t)/∂t = L_FPK p,  L_FPK p = −Σ_{i=1}^{n} ∂[f_i(X) p]/∂x_i + (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} ∂²[(σσ^T)_{ij} p]/(∂x_i ∂x_j),    (2)

    where L_FPK is the FPK differential operator, and p(X), the solution of the FPK equation, denotes the PDF of the stochastic process X. This paper studies the stationary FPK equation with a natural boundary, which takes the following form:

    L_FPK p(X) = 0.    (3)

    Moreover, the boundary condition should be satisfied:

    p(X) = 0 for X ∈ ∂Ω (equivalently, p(X) → 0 as ‖X‖ → ∞).    (4)

    Then, the normalization condition should be preserved for the PDF:

    ∫_Ω p(X) dX = 1.    (5)

    Combining Eqs. (2) and (3), it can be seen that the stationary FPK equation is a second-order PDE. There is no constant term in the equation, which means that zero is a trial solution. In short, the solution of the FPK equation should bear the following four characteristics: (i) it must conform to the form of the FPK equation; (ii) the probability density at the boundary is 0; (iii) as a PDF, it must satisfy the normalization condition; (iv) it must be nonnegative.
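As a sanity check on characteristic (i), the stationary FPK residual can be evaluated numerically for any candidate PDF. The sketch below uses central finite differences for a 2D system with constant isotropic diffusion; the function names and the diffusion constant D are illustrative assumptions, not code from the paper.

```python
import numpy as np

def fpk_residual_2d(p, x, y, h, f1, f2, D):
    """Finite-difference residual of the stationary FPK equation
    L_FPK p = -d(f1*p)/dx - d(f2*p)/dy + D*(p_xx + p_yy)
    at an interior point (x, y); h is the step size and D the
    (constant, isotropic) diffusion coefficient."""
    def g1(u, v): return f1(u, v) * p(u, v)   # probability flux along x
    def g2(u, v): return f2(u, v) * p(u, v)   # probability flux along y
    d_g1 = (g1(x + h, y) - g1(x - h, y)) / (2 * h)
    d_g2 = (g2(x, y + h) - g2(x, y - h)) / (2 * h)
    lap = (p(x + h, y) + p(x - h, y) + p(x, y + h) + p(x, y - h)
           - 4 * p(x, y)) / h**2
    return -d_g1 - d_g2 + D * lap
```

For a stationary density the residual should vanish at every interior point; for example, the standard 2D Gaussian is stationary for the linear drift f(X) = −X with D = 1.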

2.2. The algorithm

Before describing the proposed method, two existing methods for solving the FPK equation are first introduced. The method proposed in Ref. [18] sets the discretized forms of Eqs. (3)–(5) as three supervised conditions in the loss function of the NN and introduces a penalty factor. The loss function takes the following form:

    L = (1/N_D) Σ_{i=1}^{N_D} |L_FPK p_θ(X_Di)|² + λ |ΔV Σ_{i=1}^{N_D} p_θ(X_Di) − 1|² + (1/N_B) Σ_{i=1}^{N_B} |p_θ(X_Bi)|²,    (6)

    where p_θ(·) represents the trial solution of the FPK equation obtained by the NN, θ denotes the parameters of the NN (updated continuously during training), ΔV is the volume element of the uniform grid, and λ is the penalty factor. The second term in Eq. (6) is the integral approximation over all the uniformly discrete points in the region, included to satisfy the normalization condition. It indicates that the number of calculation points increases exponentially with the dimensionality, which leads to considerable growth in computational cost and makes the application of this method in high dimensions almost impractical. In addition, the redundancy of supervised conditions inevitably makes the NN prone to falling into local optima, and it may even fail to converge after a long time. Even though the penalty factor alleviates this problem to some extent, the effect is not ideal. It is still necessary to find an approach that lets the NN skip the turbulence of the early training stage and accelerates the fitting.
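The exponential growth of the normalization term's calculation points is easy to make concrete. Assuming a domain [a, b]^n discretized with step Δx, the grid count is N_D = ((b − a)/Δx)^n, a quantity that explodes with the dimension n:

```python
# Grid points needed by the normalization term on [a, b]^n with step dx:
# N_D = ((b - a)/dx)^n, i.e. exponential in the dimension n.
def grid_points(a, b, dx, n):
    return int(round((b - a) / dx)) ** n

for n in (2, 4, 6, 8):
    print(n, grid_points(-2.0, 2.0, 0.04, n))   # 100 points per axis
```

With 100 points per axis this is 10^4 points in 2D but already 10^16 in 8D, which is the data explosion described above.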

Another method solves the FPK equation by using the PDF calculated from prior data, instead of the normalization condition, to supervise the solution.[23] The associated loss function is given by

    L = (1/N_x) Σ_{i=1}^{N_x} |L_FPK p_θ(X_i)|² + (1/N_y) Σ_{i=1}^{N_y} |p_θ(Y_i) − p̂(Y_i)|²,    (7)

    where p̂(·) is an approximate estimate of the probability density, as determined by MCS. The second term in Eq. (7) serves to replace both the normalization and boundary conditions. This suggests that the exclusive use of N_y training points with known probability density, as opposed to a large number of points for normalization, can yield superior guidance for PDF generation. As a result, it effectively mitigates the data-explosion challenge encountered in high-dimensional cases. Although the presence of the FPK operator has been demonstrated to confer robustness on NNs and to facilitate the use of rough prior knowledge in approximating FPK solutions, such prior knowledge can still pose challenges for NN training. Specifically, it can complicate the initial training process and potentially limit the accuracy of the final solution.

To address this problem, a pre-trained model is introduced to circumvent the difficulty of fitting the NN. Meanwhile, the normalization condition is replaced by newly added supervision conditions to avoid the zero trial solution. The detailed procedures of the DL-BPK method for solving the FPK equation are described below, and the flow chart is shown in Fig. 1.

    Step 1 Constructing the training set

Due to the limited training set of NNs, MCS is utilized to obtain the range of the variables in each dimension, and this range is defined as the bounded domain Ω. Each dimension of Ω is then discretized with step length Δx. For the n-dimensional case, assuming the range of the variables in each dimension is [a, b]^n, the numbers of training data and boundary data are N_D = ((b − a)/Δx)^n and N_B = 2n((b − a)/Δx)^{n−1}, respectively. As a result, the training set becomes T_D = {X_Di ∈ Ω}_{i=1}^{N_D}, which consists of uniformly spaced grid points covering Ω; the boundary set is T_B = {X_Bi ∈ ∂Ω}_{i=1}^{N_B}, which represents the boundary of the training set. Subsequently, we adopt the Euler–Maruyama method to generate a long trajectory of Eq. (1) and select N_S points to form a training set T_S = {X_Si ∈ Ω}_{i=1}^{N_S}. The probability density of these training data can be obtained from MCS. The training set T_S and p̂(T_S) are used as prior knowledge to pre-train the NN. An important clarification is that the MCS runs used to generate the prior knowledge in this study employed small sample sizes. Although this comes at the expense of the accuracy of the prior knowledge obtained, it does speed up data collection.
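Step 1 can be sketched as follows: simulate a long trajectory with the Euler–Maruyama scheme and estimate a rough prior PDF by a normalized histogram. The drift, diffusion matrix, and helper names below are illustrative assumptions, not code from the paper.

```python
import numpy as np

def euler_maruyama(f, sigma, x0, dt, n_steps, rng):
    """Simulate dX = f(X)dt + sigma dW with the Euler-Maruyama scheme
    (constant diffusion matrix sigma); returns the full trajectory."""
    x = np.asarray(x0, dtype=float)
    traj = np.empty((n_steps + 1, x.size))
    traj[0] = x
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=x.size)  # Wiener increment
        x = x + f(x) * dt + sigma @ dw
        traj[k + 1] = x
    return traj

def empirical_density(samples, bins, bounds):
    """Rough MCS estimate of the PDF via a normalized histogram
    (the 'prior knowledge' p-hat used for pre-training)."""
    hist, edges = np.histogramdd(samples, bins=bins, range=bounds,
                                 density=True)
    return hist, edges
```

As a check, a 1D Ornstein–Uhlenbeck process dX = −X dt + √2 dW has stationary variance 1, and the histogram integrates to roughly 1 over a box that contains almost all the mass.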

    Step 2 Training the NNs

The loss function for pre-training is set as follows:

    L_1 = (1/N_S) Σ_{i=1}^{N_S} |p_θ(X_Si) − p̂(X_Si)|².    (8)

    Similar to Eq. (6), p_θ(·) denotes the PDF solution obtained from the NN. To differentiate the results from the dual training stages of the DL-BPK method, we use p_θ1(·) and p_θ2(·) for the PDF solutions after the first and second rounds of training, respectively. In the method proposed in this paper, the NN needs pre-training to prevent difficulties in fitting during the actual training process. L_1 should be minimized to ensure that the pre-trained model can reproduce the distribution of the prior knowledge, and MCS is applied to obtain the PDF of the prior data.

In addition, in the actual training process, L_1 in Eq. (8) can be reduced to a satisfactory value within only thousands of iterations, owing to the simplicity of the loss-function term and the small amount of data. It should be noted that the time cost for obtaining the pre-trained model is small. The accuracy of the pre-trained model still needs improvement, so further training is performed. In this training, L_FPK is taken as a supervision condition, and an additional constraint is necessary to prevent the generation of the zero solution. The loss function for this further training is defined as follows:

    L_2 = E_1 + λ E_2,    (9)
    E_1 = (1/N_D) Σ_{i=1}^{N_D} |L_FPK p_θ(X_Di)|² + (1/N_B) Σ_{i=1}^{N_B} |p_θ(X_Bi)|²,    (10)
    E_2 = (1/N_L) Σ_{i=1}^{N_L} |p_θ(X_Li) − p_θ1(X_Li)|²,    (11)

    where p_θ1(X_Li) is the probability density of the pre-trained model at X_Li, and the data set T_L includes N_L points sampled from the local maximum points of the PDF of the pre-trained model (N_L = 1 for a general monostable distributed system). It can be seen from Eqs. (10) and (11) that E_1 ensures the final solution is in the FPK form, and E_2 supervises the solution of the FPK equation using the PDF calculated at the local maximum points by the pre-trained model. The main purpose of E_2 is to exclude the zero solution. The local maximum points are selected to avoid introducing spatially correlated gradient noise. It should be noted that although the NN is robust, eliminating this noise greatly facilitates the training of E_1 and enhances the accuracy of the training results. In summary, the pre-trained model is designed to provide more suitable initial parameters for subsequent training, which plays a significant role in the fast fitting of the NN, especially when solving high-dimensional problems. In addition, since L_2 does not include a loss-function term with spatial gradient noise, the accuracy of the final solution is satisfactory, as shown in Section 3.
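The two losses of the dual training can be sketched schematically. The snippet below assumes generic callables for the network PDF and the FPK residual; it mirrors the structure of L_1 and L_2 = E_1 + λE_2 described above rather than reproducing the authors' implementation.

```python
import numpy as np

def loss_pretrain(p_theta, xs, p_hat):
    """L1: mean squared mismatch between the network PDF and the
    rough MCS prior p_hat on the prior-knowledge set T_S (a sketch)."""
    return np.mean((p_theta(xs) - p_hat) ** 2)

def loss_main(fpk_residual, p_theta, xd, xl, p_pre, lam):
    """L2 = E1 + lam*E2: FPK-operator residual on the training set T_D,
    plus supervision at the local-maximum points T_L against the
    pre-trained model's values p_pre, weighted by the penalty factor."""
    e1 = np.mean(fpk_residual(xd) ** 2)            # keeps the FPK form
    e2 = np.mean((p_theta(xl) - p_pre) ** 2)       # excludes the zero solution
    return e1 + lam * e2
```

In practice `p_theta` would be the network forward pass and `fpk_residual` its automatic-differentiation FPK residual; dummy callables suffice to check the bookkeeping.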

    Step 3 Setting the parameters of the NN

In this paper, the Adam optimizer is used to minimize L_1 and L_2. Unless otherwise specified, the NNs all have 4 hidden layers with 20 hidden nodes each. To guarantee both the nonnegativity and the differentiability of the PDF output generated by the NN, the sigmoid function is employed as the activation function. In Eq. (9), the FPK form controls the smoothness of the solution, which directly affects its accuracy. Therefore, we are more interested in the decline of E_1. In this regard, the penalty factor λ is designed to influence the supervised training process and to reduce the weight of E_2, so that the NN approaches the shape of the exact solution. It should be pointed out that, owing to the reduced weight of E_2 and the rough probability density at the local maximum points, the final solution cannot satisfy the normalization condition well. To standardize the final result, the Vegas integration technique[28] is used to complete the normalization.
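The network described above (4 hidden layers of 20 sigmoid nodes, with a sigmoid output enforcing nonnegativity) and a final normalization step can be sketched in plain numpy. The Monte Carlo integrator here is a crude stand-in for the adaptive Vegas technique cited in the text; all names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_mlp(n_in, width=20, depth=4, rng=None):
    """Random weights for an MLP with `depth` hidden layers of `width`
    nodes and a scalar output (the architecture described above)."""
    rng = rng or np.random.default_rng(0)
    sizes = [n_in] + [width] * depth + [1]
    return [(rng.normal(0, np.sqrt(1.0 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_pdf(params, x):
    """Forward pass; applying sigmoid at every layer, including the
    output, keeps the PDF nonnegative and smooth."""
    h = x
    for w, b in params:
        h = sigmoid(h @ w + b)
    return h[:, 0]

def normalize_mc(p, bounds, n=100000, rng=None):
    """Monte Carlo estimate of the normalization constant over a box
    (a simple stand-in for the Vegas integrator)."""
    rng = rng or np.random.default_rng(0)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pts = rng.uniform(lo, hi, size=(n, len(bounds)))
    return np.prod(hi - lo) * p(pts).mean()   # volume * mean value
```

Dividing the network output by the estimated integral yields the final, normalized PDF.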

3. Examples

In this section, four numerical examples are provided to support our claims and to show that the DL-BPK method works successfully. In the first example, we showcase the advantages of DL-BPK over other deep learning methods in terms of both speed and accuracy, while also examining the influence of the number and type of supervision points on the results. The second example considers a 2D steady-state toggle switch system, using MCS results to demonstrate the suitability of DL-BPK. The third and fourth examples further illustrate the applicability of the method to high-dimensional systems.

3.1. A two-dimensional stochastic gradient system

Consider a 2D stochastic gradient system[23]

    ẋ = −∂V/∂x + σΓ_1,  ẏ = −∂V/∂y + σΓ_2,    (12)

    where Γ_1 and Γ_2 are independent standard Gaussian white noise excitations with zero mean and σ denotes the noise intensity. Letting V = V(x, y) = (x² + y² − 1)², the reduced stationary FPK equation is

    ∂[(∂V/∂x) p]/∂x + ∂[(∂V/∂y) p]/∂y + (σ²/2)(∂²p/∂x² + ∂²p/∂y²) = 0,    (13)

    and the precise steady-state solution can be expressed as

    p(x, y) = C exp(−2V(x, y)/σ²),    (14)

    where C is the normalization constant. In this example, the selected calculation area is Ω = [−2, 2]² and the system parameter is σ = 1.
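The exact solution of Eq. (14) can be evaluated directly, with the normalization constant C obtained numerically on a grid; a sketch, assuming the gradient-flow form of system (12):

```python
import numpy as np

def exact_pdf_grid(sigma=1.0, n=401, lim=2.0):
    """Exact stationary PDF p = C*exp(-2V/sigma^2) for the gradient
    system with V = (x^2 + y^2 - 1)^2, normalized numerically on the
    grid covering [-lim, lim]^2 (C = 1/integral)."""
    xs = np.linspace(-lim, lim, n)
    X, Y = np.meshgrid(xs, xs)
    V = (X**2 + Y**2 - 1.0) ** 2
    p = np.exp(-2.0 * V / sigma**2)
    dx = xs[1] - xs[0]
    p /= p.sum() * dx * dx      # Riemann-sum normalization
    return xs, p
```

The resulting density is a ridge concentrated on the unit circle x² + y² = 1, where V vanishes, matching the circular distribution described below.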

Utilizing the DL-BPK method, we set Δx = 0.04, so the training set T_D comprises 10⁴ data points, while the boundary set T_B encompasses 400 data points. For system (12), T_S was acquired by selecting 10⁴ training data points from the trajectory obtained by running the Euler–Maruyama numerical method for 10⁸ steps. The probability density of T_S was obtained through MCS, and the penalty factor was set to λ = 0.01. The DL-BPK method was then employed to solve Eq. (13) according to the steps described in Subsection 2.2. The pre-trained model was obtained after 2000 iterations of the NN, and the training set as well as the loss function were then updated and trained further for 4×10⁴ iterations to obtain the final result, shown in Fig. 2. The DL-BPK solution closely mirrors the exact solution given by Eq. (14), with a maximum absolute error in the JPDF of approximately 1.4×10⁻⁴. Figure 2(f) shows the comparison between the exact solution and the DL-BPK solution for the marginal probability density function (mPDF) p(x); p(y) is identical. Visually, the outcomes are practically indistinguishable. In addition, comparing the results of DL-BPK and the pre-trained model shows that L_FPK greatly improves the accuracy and smoothness of the solution, and that the accuracy of the DL-BPK solution is not limited by the pre-trained model. The result of the pre-trained model is not satisfactory, but its accuracy is not the focus of our research; its function lies rather in accelerating the NN fitting. Moreover, it should be mentioned that in this example we selected 4 uniformly sampled local maxima of the PDF of the circular distribution of the pre-trained model as the supervision points.

To illustrate the advantages of pre-training and to demonstrate the superiority of the DL-BPK method in training speed, we tested DL-BPK, DL-FP,[18] and the mesh-free data-driven solver[23] on system (12). To assess the effectiveness and accuracy of the proposed method, we define the accuracy metric R_acc in terms of the L₂ distance between the numerical and exact solutions, where p_s is the exact solution and ‖·‖₂ denotes the L₂ norm. The closer R_acc is to 1, the higher the accuracy of the result. This test used data sets of the same size and ran on the same device. The experimental conditions and parameter settings are shown in Table 1, in which the optimizer defaults to Adam and the learning rate adjusts automatically with the number of iterations; the obtained results are shown in Fig. 3.
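A natural reading of the R_acc metric, consistent with the statement that values closer to 1 mean higher accuracy, is the relative L₂ accuracy sketched below; this exact form is an assumption, not taken verbatim from the paper.

```python
import numpy as np

def r_acc(p_num, p_exact):
    """Relative L2 accuracy:
    R_acc = 1 - ||p_num - p_exact||_2 / ||p_exact||_2
    (a plausible reconstruction of the metric described in the text)."""
    return 1.0 - np.linalg.norm(p_num - p_exact) / np.linalg.norm(p_exact)
```

Under this definition an exact match gives R_acc = 1, and a uniform 1% relative error gives R_acc = 0.99.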

Fig. 2. Results for system (12): (a) solution of the pre-trained model; (b) solution of DL-BPK; (c) exact solution; (d) solution error of the pre-trained model; (e) solution error of DL-BPK; (f) comparison of the one-dimensional (1D) mPDF solutions.

Table 1. Experimental environment and parameter settings.

Fig. 3. Comparison of results from different methods.

It is clearly observed that the NN fits more quickly after pre-training. Compared with the other methods, the proposed DL-BPK method achieves the same accuracy with fewer iterations: an accuracy of over 99.8% in less than 420 seconds (including pre-training). The local zoom in Fig. 3 also shows that the final accuracy of DL-BPK is very high, reaching around 99.97%. Additionally, Fig. 3 shows that the other deep learning methods may fall into fluctuations in the early stage of training, which take a lot of time to escape. This further validates the importance of pre-training. Furthermore, we discuss in detail the effect of errors in the pre-trained PDF on the final PDF, in terms of convergence speed and ultimate error, in Subsection 4.1.

In addition, the advantage of using local maximum points as the supervision points is verified with this example. To this end, 4, 20, 100, and 400 local maximum points were obtained from the prior distribution as supervision points, and then 4, 20, 100, and 400 points were randomly selected from the trajectory of system (12) as supervision points. Their probability densities were obtained from the exact solution with artificially injected noise, rather than from the pre-trained model. The purpose is to study the performance of the solution with rough supervision points: by adjusting the noise intensity, we simulate solving with inaccurate information. Specifically, we obtain a set of supervision points T_L and calculate p_s(X_Li); then p_θ1(X_Li) = (1 − ε + 2εU)·p_s(X_Li), where U is uniformly distributed on [0, 1]. The maximal relative error ε of the noise is set to 0, 0.1, 0.3, 0.5, and 0.8, respectively. It should be noted that in this test the NN is trained 10⁴ times on the basis of the pre-trained model to ensure that it basically fits. The results are shown in Fig. 4.
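The multiplicative noise-injection rule p_θ1(X_Li) = (1 − ε + 2εU)·p_s(X_Li) can be sketched directly; the function name is illustrative.

```python
import numpy as np

def noisy_supervision(p_exact, eps, rng):
    """Inject multiplicative noise into supervision values:
    p = (1 - eps + 2*eps*U) * p_exact with U ~ Uniform[0, 1],
    so the relative error is bounded by eps and is zero on average."""
    u = rng.uniform(0.0, 1.0, size=np.shape(p_exact))
    return (1.0 - eps + 2.0 * eps * u) * np.asarray(p_exact)
```

Because the noise factor has mean 1, the perturbed values stay within ±ε of the exact density in relative terms and are unbiased, which is what makes the final normalization step able to absorb the error at a local maximum point.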

    Fig.4.Accuracy under different numbers and types of supervision points and different noise intensities.

It is evident that using local maximum points as supervision points mitigates the impact of increased noise intensity on result accuracy. In contrast, when supervision points are randomly sampled, accuracy tends to decline as noise intensity rises. This phenomenon can be attributed to two factors. First, the probability density at randomly sampled supervision points varies significantly, exacerbating the gradient-information error between them when the noise is high and thereby hindering the minimization of E_1. Second, when the supervision point corresponds to a local maximum of the PDF, any inaccuracy in its value is offset during the final normalization process. Finally, it is worth noting that increasing the number of supervision points, whether local maximum points or randomly sampled trajectory points, improves the tolerance to roughness. However, to minimize the reliance on prior knowledge, we prefer to select a smaller number of supervision points.

3.2. A 2D multi-stable toggle switch system

It is challenging to solve multi-stable systems with low noise, and we would like to test the performance of the DL-BPK method in such special situations. Consider a toggle switch system[29]

where Γ_1 and Γ_2 are independent standard Gaussian white noise excitations with zero mean and σ denotes the noise intensity. The reduced stationary FPK equation is

For this system, the selected calculation area is Ω = [0, 1.5]² and the constant is b = 0.25. We set Δx = 0.015, so the training set T_D comprises 10⁴ data points, while the boundary set T_B encompasses 400 data points. For Eq. (15), T_S was acquired by selecting 10⁴ training data points from an MCS simulation with 10⁵ steps, and the penalty factor was set to 0.001. For Eq. (16), the JPDF results of the DL-BPK method are shown in Fig. 5. In addition, similar to the previous example, the DL-FP, mesh-free, and DL-BPK methods are all tested on this example. It should be noted that this system has no analytical solution, so we compare against the results of 10⁸ MCS steps, which took 1.5 hours; the final results and comparison are shown in Fig. 6.

Fig. 5. The DL-BPK solution of system (15) with different noise intensities: (a) σ = 0.05, (b) σ = 0.1, (c) σ = 0.15.

From Figs. 5 and 6, the results clearly indicate that finding a solution becomes more challenging when the noise intensity is low. However, the DL-BPK method still maintains good performance in less than 500 seconds (including pre-training). This is attributed to the adaptability of the method when local maximum points are used as supervision points, even in the presence of rough prior-knowledge distributions, as demonstrated for system (12). As the noise intensity increases, the performance of the mesh-free method also improves, likely due to the increased accuracy of the prior knowledge. As for the DL-FP method, its inability to capture the distribution characteristics of the system even after more than 3×10⁴ iterations highlights the particular challenges posed by this system. This also underscores the significant advantage of using prior knowledge when solving complex systems.

Fig. 6. Comparison of results of different methods for the mPDF p(x) under three different noise intensities.

For bistable distribution systems with low noise intensity, the accuracy of the distribution is compromised when fewer MCS samples are used to obtain the prior knowledge. The inaccuracy subsequently affects the distribution of the pre-trained model and the probability density of the supervision points, thereby slightly impacting the final results. We anticipate addressing this issue through improved sampling methods in future work. It is worth noting that there have been studies focused on solving low-noise systems.[24] Additionally, the test yielded optimal results when the number of hidden nodes was set to 50, aligning with observations in system (15). This suggests that wider NNs are more effective than narrower ones at approximating highly concentrated probability distributions.

3.3. A 6D system

    Consider a 6D system[19]

where Γ_i (i = 1, 2, 3) denotes independent standard Gaussian white noise excitations with zero mean and σ_i (i = 1, 2, 3) denotes the noise intensities. The corresponding stationary FPK equation is

where C is the normalization constant. For this system, the range of the training set is set to [−5, 5]⁶; there are 5.0×10⁵ data points in the training set T_D and 1.5×10⁵ in the boundary set T_B. 1.0×10⁵ points are sampled from the trajectory of system (17) to form T_S. The probability density of T_S is obtained through MCS, and the penalty factor is set to λ = 0.01. Pre-training is carried out for 5000 iterations, followed by 5×10⁴ iterations of the actual training, to obtain the final solution. The results for Eq. (18) are shown in Fig. 7. Comparing the final solution with Eq. (19), the exact solution, the DL-BPK method still performs well in terms of accuracy in six dimensions.

Fig. 7. Results for system (17) with σ₁² = σ₂² = σ₃² = 2.0, comparing probability densities on central slices of the 6D state space: (a) p(x1, x2 | y1 = y2 = x3 = y3 = 0) of the pre-trained model; (b) p(x1, x2 | y1 = y2 = x3 = y3 = 0) of the DL-BPK method; (c) analytical p(x1, x2 | y1 = y2 = x3 = y3 = 0); (d) solution error of panel (a); (e) solution error of panel (b). Similarly, panels (f)–(j) are the corresponding solutions for p(x1, x2 | others = 0.6).

Fig. 8. Evolution of the solution accuracy and the value of the loss function with the number of iterations. The solid line signifies the accuracy, corresponding to the left axis, while the dotted line indicates the value of the loss function, aligned with the right axis.

As can be seen from Fig. 8, the accuracy of the solution exceeded 99.13% after 2.0×10⁴ iterations and the calculation time was less than 1560 s (including pre-training), which implies that the method remains applicable in high-dimensional situations. The final result has an accuracy of approximately 99.65%, which is remarkable for high-dimensional systems. The accuracy of the solution decreased in the early training stage because E_1 of the pre-trained model was so large that the effect of E_2 in fitting the NN was temporarily overwhelmed. When the value of E_1 returned to a normal level, the supervision points began to play their role in suppressing the zero solution, and the accuracy of the solution started to improve. The accuracy of the final solution is demonstrated by showing results near the center of the 6D state space. It is obvious that the DL-BPK method is still applicable in high dimensions, and the accuracy is maintained at a good level. At the same time, it is worth noting that we trained only 5.0×10⁴ times on the basis of the pre-trained model to achieve satisfactory accuracy, which would be very difficult for other methods. To further illustrate the applicability of this method in higher dimensions, the slice probability density far from the center of the 6D state space is shown in Fig. 9.

As shown in Fig. 9, in the region where the probability density approaches zero, the result acquired from the proposed DL-BPK method still shows good agreement with the exact probability distribution.

Fig. 9. Comparing probability densities on slices far from the center of the 6D state space: (a) p(x1, x2 | y1 = y2 = x3 = y3 = 2.0) of the pre-trained model; (b) p(x1, x2 | y1 = y2 = x3 = y3 = 2.0) of the DL-BPK method; (c) analytical p(x1, x2 | y1 = y2 = x3 = y3 = 2.0); (d) solution error of panel (a); (e) solution error of panel (b).

3.4. An 8D system

The applicability of the DL-BPK method in higher dimensions is of great interest. Nevertheless, it is generally challenging to verify results in high-dimensional situations, so a linear system with a known exact distribution is chosen.[23] Consider the following SDE with a linear deterministic part

where Γ_i, i = 1, ..., N, denotes independent standard Gaussian white noise with zero mean, while σ = 1 signifies the noise intensity. X = (x1, x2, ..., xN)^T is the column vector of variables, so the deterministic part of system (20) has the matrix form AX. The invariant probability measure of system (20) satisfies

    We determine Σ by solving the Lyapunov equation AΣ + ΣA^T + 2I = 0.Given the symmetry of A, we can represent Σ as in Eq. (21).The dimension N was set to 8.It is almost impractical to sample uniformly in such high dimensions, even if the sampling is extremely sparse.Hence, we generate T_D and T_B by random sampling from within the domain and from the boundary, containing 1.0×10^6 and 5.0×10^5 points, respectively.A set T_S is also used for training in this experiment, with 1.0×10^5 points sampled from the trajectory of Eq. (20).The probability density of T_S is obtained through MCS, and the penalty factor is set to λ = 0.01.The final solution was obtained after 2×10^4 pre-training iterations and 5×10^5 subsequent training iterations, which took nearly 30 hours.The training cost in this experiment is enormous, although the system is only two dimensions higher than system (17).Since we are primarily interested in the applicability to high dimensions, some efficiency is sacrificed.
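The covariance Σ can be computed numerically from the Lyapunov equation by vectorization. The sketch below is a minimal NumPy illustration; the drift matrix A here is a hypothetical stable symmetric example, not the actual matrix of system (20):

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A S + S A^T = Q by vectorization:
    (kron(I, A) + kron(A, I)) vec(S) = vec(Q)."""
    N = A.shape[0]
    I = np.eye(N)
    M = np.kron(I, A) + np.kron(A, I)
    return np.linalg.solve(M, Q.flatten()).reshape(N, N)

# Hypothetical stable symmetric drift matrix (illustration only)
N = 4
A = -np.eye(N) - 0.1 * np.ones((N, N))

# The stationary covariance satisfies A Sigma + Sigma A^T + 2I = 0, i.e. Q = -2I
Sigma = solve_lyapunov(A, -2.0 * np.eye(N))

residual = A @ Sigma + Sigma @ A.T + 2.0 * np.eye(N)
print(np.max(np.abs(residual)))  # close to machine precision
```

Because A is symmetric and stable, the Kronecker system is nonsingular and Σ is symmetric, matching the representation assumed in Eq. (21).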

    It can be seen from Fig. 10 that the agreement between the exact solution and the DL-BPK solution is acceptable only near the central slice; far from the central slice, the numerical values are no longer of the same magnitude as the exact solution.This phenomenon likely results from the fact that more representative training sets are required when solving higher-dimensional systems.If each dimension were divided into 100 grid points, there would be 10^16 points in the entire space.Thus, for an 8D system, the number of points needed to cover the space is prohibitively large, and the training set used in this paper cannot traverse the entire space quickly, which drives up the training cost.How to sample more representative data therefore remains a problem that must be solved for high-dimensional problems, and it is the focus and direction of our future research.
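The growth argument above can be checked directly: a tensor-product grid with m points per axis in d dimensions contains m^d points, so uniform gridding becomes infeasible long before d = 8. A quick sanity check:

```python
def grid_points(m: int, d: int) -> int:
    """Number of nodes in a tensor-product grid with m points per axis in d dimensions."""
    return m ** d

print(grid_points(100, 2))  # 10000: trivial for a 2D system
print(grid_points(100, 6))  # 10^12: already impractical to store
print(grid_points(100, 8))  # 10^16: hopeless, hence random/trajectory sampling
```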

    Fig. 10. Results for system (20) with N = 8: (a) p(x1,x2|x(3~8)=0) of DL-BPK; (b) analytical p(x1,x2|x(3~8)=0); (c) p(x1,x3|x(2,4~8)=0) of DL-BPK; (d) analytical p(x1,x3|x(2,4~8)=0); (e) p(x1,x2|x(3~8)=2.0) of DL-BPK; (f) analytical p(x1,x2|x(3~8)=2.0).

    4. Discussion

    The advantages of the proposed DL-BPK method in computational efficiency and accuracy were demonstrated in the previous section, where further experiments also investigated the impact of the number of supervision points on the robustness of the method.In this section, two aspects of the DL-BPK method are studied: (i) the significance of the pre-trained model for computational performance, and (ii) the role of the penalty factor in the proposed method.

    4.1. The significance of pre-training

    The influence of the pre-trained model on accuracy and speed is discussed first.We consider systems (12) and (17), which represent the low-dimensional and high-dimensional cases, respectively.The experimental conditions are identical to those described in Section 3, except that the pre-training step is omitted, so that the impact of pre-training on accuracy can be isolated.It should be noted that in the absence of pre-training, pθ1(XLi) is obtained by MCS.

    Additionally, the total numbers of training iterations with and without pre-training are the same.We can clearly observe in Fig. 11 that the accuracy obtained without pre-training is much lower than that with pre-training.For the 6D system (17) in particular, the results without pre-training are extremely poor.This test shows that pre-training is vital for enhancing both solution accuracy and efficiency, which is particularly evident in high-dimensional situations.

    Since the pre-training step strongly influences solution accuracy, it is necessary to study further how errors in the pre-trained model affect the final result.According to Subsection 2.2, the accuracy of the pre-trained model is primarily determined by the prior knowledge obtained from MCS.It is therefore reasonable to adjust the number of MCS steps to control the accuracy of the pre-trained model.For system (12), the prior knowledge is obtained by MCS with 10^3, 10^4, 5×10^4, and 10^5 steps, respectively; these four cases are referred to as cases I–IV.Models pre-trained for 2000 and 5000 iterations are then used as the basis for subsequent training, and the results are shown in Fig. 12.It can be seen from Figs. 12(b) and 12(c) that the error of the pre-trained model barely affects the final calculation in terms of either convergence speed or ultimate error.The accuracy of case I in Fig. 12(c) fluctuates significantly in the initial stage, mainly because the highly rough prior knowledge (induced by the small number of MCS steps) causes E1 to increase with iteration during pre-training, as shown in Fig. 12(d).It is gratifying, however, that the introduction of the FPK differential operator quickly repairs the NN in the subsequent training.

    In short, the pre-trained model facilitates quick fitting of the NN by helping it escape local saddle points when solving high-dimensional systems, and its inclusion is essential for the accuracy of the final result.However, the accuracy of the pre-trained model itself matters less, since it has little impact on the final result; that is, the DL-BPK method is not entirely determined by the accuracy of MCS and has excellent self-correction ability.

    Fig. 12. The effect of errors of the pre-trained PDF on the final NN PDF in terms of convergence speed and ultimate error: (a) evolution of the value of Loss1 with iterations during pre-training; (b), (c) after 2000 and 5000 pre-training iterations, respectively, the variation of solution accuracy with iterations in the subsequent training; (d) evolution of the value of E1 with iterations over the entire training process.

    4.2. The role of the penalty factors

    The effect of the penalty factor is discussed using systems (12), (17), and (20).System (15) is excluded because it lacks an exact solution, making it impossible to quantify the accuracy of its results.The FPK equation is solved with penalty factors of different magnitudes, with all other parameters and settings the same as in Section 3.Each experimental result is the average of three independent trials, and the final results are shown in Fig. 13.

    It can be seen that penalty factors of different magnitudes have little impact on the results for each system.Only when the penalty factor is 1×10^-4 is the accuracy for the 2D system extremely poor.The reason is that the probability density at the local maximum point of the 2D system is small; when the penalty factor is too small, the supervision points may lose their ability to prevent the zero solution.The specific rules for setting the penalty factor need further investigation, and may relate to the error and the magnitude of the probability density at the supervision points.In general, an unreasonable setting of the penalty factor will not prevent the correct result from being obtained.Compared with the DL-FP method, the proposed method does not rely entirely on penalty factors.The primary function of the penalty factor is to steer the NN toward fitting the FPK differential operator, which is crucial for improving the smoothness and accuracy of the results and for preventing the zero solution.
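The role of λ can be summarized as weighting the supervision term against the FPK-residual term. Below is a schematic NumPy sketch, assuming a composite loss of the form Loss = E1 + λE2; the residual and density arrays are toy placeholders, not the paper's actual network outputs:

```python
import numpy as np

def composite_loss(fpk_residuals, p_pred, p_sup, lam):
    """Loss = E1 + lam * E2, where
    E1: mean squared FPK-operator residual at collocation points,
    E2: mean squared error at supervision points (suppresses the zero solution)."""
    E1 = np.mean(fpk_residuals ** 2)
    E2 = np.mean((p_pred - p_sup) ** 2)
    return E1 + lam * E2

# Toy scenario: residuals near zero, but the network has collapsed to p ~ 0
residuals = np.full(100, 1e-4)               # the zero solution also satisfies the FPK operator
p_pred = np.zeros(5)                         # predicted density at supervision points
p_sup = np.array([0.1, 0.3, 0.5, 0.3, 0.1])  # MCS densities at those points

# With lam = 0 the collapsed network looks optimal; a nonzero lam penalizes it
print(composite_loss(residuals, p_pred, p_sup, lam=0.0))
print(composite_loss(residuals, p_pred, p_sup, lam=0.01))
```

This illustrates the failure mode discussed above: an overly small λ leaves the supervision term too weak to rule out the trivial zero solution.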

    Fig. 13. The impact of penalty factors on different systems.

    5. Conclusions

    This paper introduces a numerical method, DL-BPK, designed to solve FPK equations efficiently and accurately, with applicability to high-dimensional systems.The basic idea is to use prior knowledge to obtain a pre-trained model, and to take the discretized form of the FPK equation as a loss function to ensure the smoothness of the solution.To avoid the zero solution, supervision points with known probability densities are included as a term in the loss function.The accuracy of the DL-BPK solution for the first 2D system, as well as the 6D and 8D systems, reaches 99.97%, 99.65%, and 95.55%, respectively.For the remaining 2D toggle-switch system, the DL-BPK solution also agrees well with the MCS results.The proposed method outperforms other NN-based methods for solving the FPK equation in both accuracy and efficiency.

    Owing to the introduction of supervision points in the loss function, the influence of the number of points and their selection rules on the results was studied.The results indicate that the points with the highest probability density within a specific range, i.e., the local maximum points of the PDF, should be selected as supervision points.We then examined the significance of the pre-trained model; all results validate our conjecture and confirm the practicality of the proposed method, providing implications for solving higher-dimensional problems.The impact of the penalty factor on the results and possible setting rules were also discussed.The results indicate that different magnitudes of the penalty factor affect the results differently; in general, the recommended value lies between 1×10^-4 and 0.1.Proper selection of the penalty factor brings the focus of NN training closer to the FPK form, facilitating the smoothness of the solution.Moreover, the setting of the penalty factor also depends on the probability density at the supervision points, which is determined by the system equation and noise intensity.Specific and strict rules still need to be established through more models.

    To enable comparison with exact solutions, relatively simple dynamic systems were used to test the accuracy of the DL-BPK method; future work will focus on solving complex nonlinear dynamic systems.It should be pointed out that, since the focus of this study is not on efficient dataset sampling, data-explosion issues arise for 8D and higher-dimensional systems, which significantly affect the computational cost and may limit the application of this method in higher dimensions.Data optimization and more efficient sampling methods will therefore be a focus of our future research.Additionally, this paper only studies the FPK equation for standard Gaussian white noise; we will further consider cases with non-Gaussian colored noise or extreme events.

    Acknowledgement

    Project supported by the National Natural Science Foundation of China (Grant No. 12172226).
