
    A deep learning method based on prior knowledge with dual training for solving FPK equation

2024-01-25
    Chinese Physics B, 2024, Issue 1

Denghui Peng (彭登輝), Shenlong Wang (王神龍), and Yuanchen Huang (黃元辰)

School of Mechanical Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China

Keywords: deep learning, prior knowledge, FPK equation, probability density function

    1.Introduction

The Fokker–Planck–Kolmogorov (FPK) equation governs the temporal evolution of probability density functions (PDFs) within a stochastic dynamic system, and stands as one of the foundational equations for investigating stochastic phenomena. The FPK equation is widely utilized in many fields, such as biology,[1] chemistry,[2] economics,[3] applied mathematics,[4] statistical physics,[5] and engineering.[6] By integrating the PDF of the stochastic dynamic system, one can obtain the probability distribution and mathematical expectation of macroscopic variables. When the excitation of the stochastic dynamic system is Gaussian white noise, the corresponding FPK equation is a second-order parabolic partial differential equation (PDE), whose analytical solution can be obtained only in some particular cases. Therefore, in most cases, numerical methods are used to solve the FPK equation.

In the past few decades, a large number of numerical methods based on meshing have been used to solve the FPK equation, such as the finite element method,[7] finite difference method,[8] path integration method,[9] and variation method.[10] The disadvantage of these methods is that they solve only at grid points, and the solution at any other point requires reconstruction or interpolation. Moreover, for high-dimensional systems, the dependence of these methods on grid division leads to a sharp increase in the amount of data, and the solution procedure becomes quite time- and resource-consuming. In addition, Monte Carlo simulation (MCS) is also frequently employed to address the FPK equation,[11] which approximates the probability density distribution either by sampling extended trajectories of the stochastic differential equation (SDE) or through the application of the conditional Gaussian framework.[12] Although this method does not require interpolation, the correctness of its solution depends on the volume of data collected. Furthermore, for high-dimensional systems, it is barely possible to obtain a joint probability density function (JPDF) of high accuracy using the aforementioned approaches due to the large computational cost involved.

Recently, the increasing abundance of computing resources and the rapid development of neural networks (NNs) have provided a promising way to solve PDEs.[13–17] For general high-dimensional parabolic PDEs, Han et al.[13] designed a deep learning (DL) framework which used backward SDEs to reformulate PDEs and used NNs to approximate the gradient of solutions. Raissi et al.[14] proposed a DL method to solve forward-backward SDEs and their corresponding high-dimensional PDEs. Subsequently, they proposed the physics-informed neural network (PINN) by embedding physical information into the NN.[15] The method can be applied to a wide range of classical problems in fluids and quantum mechanics. Berg et al.[16] used deep feedforward artificial NNs to approximate the solution of PDEs in complex geometries and emphasized the advantages of deep NNs over shallow ones. Sirignano et al.[17] solved higher-dimensional PDEs using a deep Galerkin method which trained the NN to satisfy the differential operator form together with the initial and boundary conditions. Although the FPK equation is a PDE, the methods mentioned above cannot be used to solve it. The primary cause is that the equation lacks a constant term and the probability density vanishes at the boundary, which makes zero a trivial trial solution. Consequently, numerous DL-based methods for solving the FPK equation have been proposed.[18–27] Among them, Xu et al.[18] proposed a DL-based method for solving general Fokker–Planck (FP) equations, called DL-FP, which innovatively added a normalization condition and set penalty factors in the supervision conditions to avoid the zero trial solution and skip local optima. However, the normalization condition of this method is closely related to global information, which means that the amount of calculation data grows exponentially with dimension, so the method falls into the curse of dimensionality when solving higher-dimensional problems. Subsequently, Zhang et al.[19] optimized the training data on the basis of DL-FP. A suitable and small number of discrete integral points and integral volumes were obtained through multiple k-dimensional-tree (KD-tree) segmentations of the random data set, which removed the need to consider global information. Furthermore, they designed a new DL framework that included multiple NNs and matched one NN to each dimension, making the training data grow linearly instead of exponentially.[20] Wang et al.[21] proposed an iterative selection strategy of Gaussian neurons for radial basis function neural networks (RBFNNs), applied to obtain the steady-state solution of the FPK equation; the strategy is much more efficient than the RBFNN method with uniformly distributed neurons. Another approach is to use the PDF calculated from prior data, instead of the normalization condition, to supervise the solution of the FPK equation.[22,23] Among them, Zhai et al.[23] developed a meshless FPK solver that used prior data, obtained from an MCS solution of the FPK equation, as the supervised condition instead of a normalization condition, which not only excluded the zero solution but also avoided data explosion. Nevertheless, that method still needs improvement, since the prior data supervising the solution, obtained from numerical simulations, are relatively rough.

In general, the speed and accuracy of the above methods are not always guaranteed. For the DL-FP method,[18] the presence of the normalization condition in the loss function makes the solution less efficient and less applicable. For mesh-free methods,[23] utilizing prior knowledge to guide the solution process is a good way to avoid the normalization condition. However, it is not a suitable choice to use prior knowledge containing gradient information errors together with the FPK differential operator to guide the solution process, even if the NN is robust. Inspired by the above methods, we propose a DL method based on prior knowledge (DL-BPK) to solve the FPK equation. The proposed method is composed of dual training processes: a pre-training process and a routine training, or prediction, process. In pre-training, the prior knowledge obtained from MCS is injected into the training to obtain a pre-trained model. The role of the pre-trained model is to make the NN jump out of the local optimum faster and thus accelerate the training process, especially in high dimensions, which is similar to the function of initializing network parameters. Based on the pre-trained model, the FPK differential operator is used as a supervised condition to make the final solution sufficiently smooth and accurate. In addition to the FPK differential operator, a supervised condition should be set to suppress the generation of zero solutions. Herein, the points with the highest probability density within a small range under the pre-trained model, i.e., the local maximum points, are selected as the supervision points. There are two reasons for this setting: one is to avoid generating the zero trial solution, and the other is to minimize perturbations in the shaping of the PDF, attributable to the absence of gradient information errors. Finally, the PDF generated by the NN undergoes normalization, a process that serves to correct errors in the PDF values. In summary, the main contributions of this article are as follows: (i) a new supervised condition based on local maximum points and the FPK differential operator form is introduced, significantly improving the accuracy and robustness of the solution; (ii) the implementation of a pre-trained model based on prior knowledge reduces the training cost and accelerates the convergence of the NN.

The rest of this article is organized as follows. The DL-BPK method is described in detail in Section 2. In Section 3, we use four examples to verify the effectiveness of our method; these examples include two two-dimensional (2D) systems, one six-dimensional (6D) system, and one eight-dimensional (8D) system. At the same time, comparisons between the proposed method and the analytical solution, as well as other methods, are carried out to demonstrate the superiority of the proposed method. In Section 4, the impact of the pre-trained model and penalty factors on the results is discussed. Finally, Section 5 presents conclusions to close this paper.

    2.Methodology

In this section, the FPK equation and the necessary conditions for its solution are described in Subsection 2.1, and a detailed introduction to the DL-BPK method is then provided in Subsection 2.2.

    2.1.The FPK equation

    Consider the general form of SDEs
$$\dot{X}=f(X)+\sigma(X)W(t),\tag{1}$$

where X = [x1, ..., xi, ..., xn]^T denotes an n-dimensional variable, f(X) represents an n-dimensional vector function of X, σ(X) is a square matrix that is assumed to be constant later in the paper, and W(t) is an n-dimensional standard Gaussian white noise. For Eq. (1), the probability density distribution that X obeys is described by its corresponding FPK equation
$$\frac{\partial p(X,t)}{\partial t}=L_{\mathrm{FPK}}\,p(X,t),\qquad L_{\mathrm{FPK}}\,p=-\sum_{i=1}^{n}\frac{\partial}{\partial x_{i}}\left[f_{i}(X)\,p\right]+\frac{1}{2}\sum_{i,j=1}^{n}\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}\left[\left(\sigma\sigma^{\mathrm{T}}\right)_{ij}\,p\right],\tag{2}$$

where L_FPK is the FPK differential operator, and p(X), the solution of the FPK equation, denotes the PDF of the stochastic process X. This paper studies the stationary FPK equation with natural boundary, which takes the following form:
$$L_{\mathrm{FPK}}\,p(X)=0.\tag{3}$$

Moreover, the boundary condition should be satisfied:
$$p(X)\to 0\quad\text{as }\|X\|\to\infty.\tag{4}$$

Then, the normalization condition should be preserved for the PDF:
$$\int_{\mathbb{R}^{n}}p(X)\,\mathrm{d}X=1.\tag{5}$$

Combining Eqs. (2) and (3), it can be seen that the stationary FPK equation is a second-order PDE. There is no constant term in the equation, which means that zero is a trial solution. In short, the solution of the FPK equation should bear the following four characteristics: (i) it must conform to the form of the FPK equation; (ii) the probability density at the boundary is 0; (iii) as a PDF, it should satisfy the normalization condition; (iv) it should be nonnegative.

    2.2.The algorithm

Before describing the proposed method, two existing methods for solving the FPK equation are first introduced. The method proposed in Ref. [18] sets the discretized forms of Eqs. (3)–(5) as three supervised conditions in the loss function of the NN, and introduces a penalty factor. The loss function takes the following form:
$$\mathrm{Loss}=\frac{1}{N_{D}}\sum_{i=1}^{N_{D}}\bigl|L_{\mathrm{FPK}}\,p_{\theta}(X_{i}^{D})\bigr|^{2}+\lambda\Bigl|\Delta V\sum_{i=1}^{N_{D}}p_{\theta}(X_{i}^{D})-1\Bigr|^{2}+\frac{1}{N_{B}}\sum_{i=1}^{N_{B}}\bigl|p_{\theta}(X_{i}^{B})\bigr|^{2},\tag{6}$$

where p_θ(·) represents the trial solution of the FPK equation obtained by the NN, θ being the parameters of the NN, which are updated continuously during training. The second term in Eq. (6) is the integral approximation over all the uniformly discretized points in the region, enforcing the normalization condition. It indicates that the number of calculation points increases exponentially with dimensionality, which leads to considerable growth in computational cost and makes the application of this method in high dimensions almost impractical. In addition, the redundancy of supervised conditions inevitably makes the NN prone to falling into local optima, or even unable to converge after a long time. Even though the penalty factor alleviates this problem to some extent, the effect is not ideal. It is still necessary to find an approach that makes the NN skip the turbulence of the early training stage and accelerates the fitting.
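The exponential growth of the normalization grid can be made concrete with a short count; the domain, spacing, and dimensions below are illustrative values, not the paper's settings:

```python
def grid_point_count(a: float, b: float, dx: float, n: int) -> int:
    """Points in a uniform grid over [a, b]^n with spacing dx."""
    per_dim = round((b - a) / dx)
    return per_dim ** n

# Illustrative: 100 points per dimension on [-2, 2] (dx = 0.04).
for n in (2, 4, 6):
    print(n, grid_point_count(-2.0, 2.0, 0.04, n))
```

With 100 points per dimension, the counts are 10⁴, 10⁸, and 10¹² for n = 2, 4, 6 — the data explosion the text describes.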

Another method for solving the FPK equation uses the PDF calculated from prior data, instead of the normalization condition, to supervise the solution.[23] The associated loss function is given by
$$\mathrm{Loss}=\frac{1}{N_{D}}\sum_{i=1}^{N_{D}}\bigl|L_{\mathrm{FPK}}\,p_{\theta}(X_{i}^{D})\bigr|^{2}+\frac{1}{N_{y}}\sum_{i=1}^{N_{y}}\bigl|p_{\theta}(Y_{i})-\hat{p}(Y_{i})\bigr|^{2},\tag{7}$$

where p̂(·) is an approximate estimate of the probability density, as determined by MCS. The second term in Eq. (7) serves to replace both the normalization and boundary conditions. This suggests that the exclusive use of N_y training points with known probability density, as opposed to a large number of points for normalization, can yield superior guidance for PDF generation. As a result, it effectively mitigates the data explosion encountered in high-dimensional cases. Although the presence of the FPK operator has been demonstrated to confer robustness on NNs and to facilitate the use of rough prior knowledge in approximating FPK equation solutions, such prior knowledge can still pose challenges for NN training. Specifically, it can complicate the initial training process and potentially limit the accuracy of the final solution.

To address this problem, a pre-trained model is introduced to prevent the difficulty of fitting the NN. Meanwhile, the normalization condition is replaced by newly added supervision conditions to avoid the zero trial solution. The detailed procedure of the DL-BPK method for solving the FPK equation is described below, and the flow chart is shown in Fig. 1.

    Step 1 Constructing the training set

Due to the limited training set of NNs, MCS is utilized to obtain the range of variables in each dimension, and this range is defined as the bounded domain Ω. Each dimension of Ω is then discretized with step length Δx. For the n-dimensional case, assuming the range of the variables is [a, b]^n, the number of training data and that of boundary data are N_D = ((b−a)/Δx)^n and N_B = 2n((b−a)/Δx)^(n−1), respectively. As a result, the training set becomes T_D = {X_i^D ∈ Ω, i = 1, ..., N_D}, which consists of uniformly spaced grid points covering Ω; the boundary set is T_B = {X_i^B ∈ ∂Ω, i = 1, ..., N_B}, which represents the boundary of the training set. Subsequently, we adopt the Euler–Maruyama method to generate a long trajectory of Eq. (1), and select N_S points to form a training set T_S = {X_i^S ∈ Ω, i = 1, ..., N_S}. The probability density of these training data can be obtained from MCS. The training set T_S and p̂(T_S) are used as prior knowledge to pre-train the NN. An important clarification is that the MCS used to generate prior knowledge in this study was run with small sample sizes. Although this comes at the expense of the accuracy of the prior knowledge obtained, it does speed up the collection of data.
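Step 1 can be sketched in a few lines; the 1D double-well drift, step size, and sample counts below are illustrative assumptions rather than the paper's settings:

```python
import random

def euler_maruyama(drift, sigma, x0, dt, n_steps, seed=0):
    """Simulate dx = drift(x) dt + sigma dW with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        x = x + drift(x) * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Illustrative 1D gradient system dx = -V'(x) dt + dW with V(x) = (x^2 - 1)^2.
path = euler_maruyama(lambda x: -4.0 * x * (x * x - 1.0), 1.0, 0.0, 1e-3, 50_000)

# A coarse histogram over the trajectory tail plays the role of the rough
# prior estimate p_hat used to pre-train the network.
def histogram_pdf(samples, lo, hi, n_bins):
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for s in samples:
        if lo <= s < hi:
            counts[int((s - lo) / width)] += 1
    total = sum(counts)
    return [c / (total * width) for c in counts]

p_hat = histogram_pdf(path[10_000:], -2.0, 2.0, 40)
```

The histogram is deliberately crude — this mirrors the small-sample MCS described above, which trades prior accuracy for data-collection speed.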

    Step 2 Training the NNs

The loss function is set as follows:
$$L_{1}=\frac{1}{N_{S}}\sum_{i=1}^{N_{S}}\bigl|p_{\theta}(X_{i}^{S})-\hat{p}(X_{i}^{S})\bigr|^{2}.\tag{8}$$

Similar to Eq. (6), p_θ(·) is employed to denote the PDF solution obtained from the NN. To differentiate the results from the dual training stages of the DL-BPK method, we use p_θ1(·) and p_θ2(·) for the PDF solutions after the first and second rounds of training, respectively. In the method proposed in this paper, the NN needs pre-training to prevent difficulties in fitting during the actual training process. L_1 should be minimized to ensure that the obtained pre-trained model can generate the distribution of the prior knowledge, and MCS is applied to get the PDF of the prior data.

In addition, in the actual training process, L_1 in Eq. (8) can be reduced to a satisfactory value with only thousands of iterations, owing to the simplicity of the loss function term and the small amount of data. It should be noted that the time cost for obtaining the pre-trained model is small. The accuracy of the solution of the pre-trained model still needs improvement, so further training is performed. In this training, L_FPK is used as a supervision condition, and it is also necessary to add a constraint to prevent the generation of the zero solution. The loss function for this further training is defined as follows:
$$L_{2}=E_{1}+\lambda E_{2},\tag{9}$$

$$E_{1}=\frac{1}{N_{D}}\sum_{i=1}^{N_{D}}\bigl|L_{\mathrm{FPK}}\,p_{\theta_{2}}(X_{i}^{D})\bigr|^{2}+\frac{1}{N_{B}}\sum_{i=1}^{N_{B}}\bigl|p_{\theta_{2}}(X_{i}^{B})\bigr|^{2},\tag{10}$$

$$E_{2}=\frac{1}{N_{L}}\sum_{i=1}^{N_{L}}\bigl|p_{\theta_{2}}(X_{i}^{L})-p_{\theta_{1}}(X_{i}^{L})\bigr|^{2},\tag{11}$$

where p_θ1(X_i^L) is the probability density of the pre-trained model at X_i^L, and the data set T_L includes N_L points sampled from the local maximum points of the PDF of the pre-trained model (N_L = 1 for a general monostable distributed system). It can be seen from Eqs. (10) and (11) that E_1 ensures the final solution is of the FPK form, and E_2 supervises the solution of the FPK equation using the PDF calculated at the local maximum points by the pre-trained model. The main purpose of E_2 is to exclude the zero solution. The local maximum points are selected to avoid introducing spatially correlated gradient noise. It should be noted that although the NN has good robustness, the elimination of noise information greatly facilitates the training of E_1 and enhances the accuracy of the training results. In summary, the pre-trained model is designed to provide more suitable initial parameters for subsequent training, which plays a significant role in the fast fitting of the NN, especially when solving high-dimensional problems. In addition, since L_2 does not include a loss function term with spatial gradient noise, the accuracy of the final solution is satisfactory, as shown in Section 3.
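Selecting local-maximum supervision points from a gridded pre-trained density can be sketched as follows; the strict neighbour-comparison rule and the bimodal test density are our assumptions for illustration:

```python
import math

def local_maxima(values):
    """Indices whose value strictly exceeds both grid neighbours."""
    return [i for i in range(1, len(values) - 1)
            if values[i] > values[i - 1] and values[i] > values[i + 1]]

# Bimodal test density p(x) proportional to exp(-2 (x^2 - 1)^2),
# sampled on a uniform grid over [-2, 2].
xs = [-2.0 + 0.1 * i for i in range(41)]
p = [math.exp(-2.0 * (x * x - 1.0) ** 2) for x in xs]
peaks = local_maxima(p)  # indices of the two modes, near x = -1 and x = +1
```

For a bistable density this yields N_L = 2 supervision points, one per mode; a monostable density would yield N_L = 1, matching the text.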

    Step 3 Setting the parameters of the NN

In this paper, the Adam optimizer is used to minimize L_1 and L_2. Unless otherwise specified, the NNs all have 4 hidden layers with 20 hidden nodes each. To guarantee both the nonnegativity and the differentiability of the PDF output generated by the NN, the sigmoid function is employed as the activation function. In Eq. (9), the FPK form controls the smoothness of the solution, which directly affects its accuracy. Therefore, we are more interested in the decline of E_1. In this regard, the penalty factor λ is designed to influence the supervised training process and to reduce the weight of E_2, so that the NN approaches the shape of the exact solution. It should be pointed out that, due to the reduced weight of E_2 and the rough probability density at the local maximum points, the final solution cannot well satisfy the normalization condition. To standardize the final result, the Vegas integration technique[28] is used to complete the normalization.
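The final normalization step can be approximated with plain Monte Carlo integration (Vegas adds adaptive importance sampling on top of this idea); the Gaussian test density, domain, and sample count below are illustrative:

```python
import math
import random

def mc_normalize(p, lo, hi, dim, n_samples=200_000, seed=1):
    """Estimate Z = integral of p over [lo, hi]^dim; return (p/Z, Z)."""
    rng = random.Random(seed)
    volume = (hi - lo) ** dim
    total = 0.0
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        total += p(x)
    z = volume * total / n_samples
    return (lambda x: p(x) / z), z

# Illustrative unnormalized 2D Gaussian; its true integral is 2*pi.
p_unnorm = lambda x: math.exp(-0.5 * (x[0] ** 2 + x[1] ** 2))
p_norm, z = mc_normalize(p_unnorm, -6.0, 6.0, 2)
```

Since only the scale of the NN output is wrong (its shape is fixed by E_1), dividing by the estimated Z is sufficient to restore normalization.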

    3.Examples

In this section, four numerical examples are provided to demonstrate that the DL-BPK method can be applied successfully. In the first example, we showcase the advantages of DL-BPK over other deep learning methods in terms of both speed and accuracy, while also examining the influence of the number and type of supervision points on the results. The second example considers a 2D steady-state toggle switch system, using MCS results to demonstrate the suitability of DL-BPK. The third and fourth examples further illustrate the applicability of the method to high-dimensional systems.

    3.1.A two-dimensional stochastic gradient system

Consider a 2D stochastic gradient system[23]
$$\dot{x}=-\frac{\partial V}{\partial x}+\sigma\Gamma_{1}(t),\qquad \dot{y}=-\frac{\partial V}{\partial y}+\sigma\Gamma_{2}(t),\tag{12}$$

where Γ1 and Γ2 are independent standard Gaussian white noise excitations with zero mean and σ denotes the noise intensity. Letting V = V(x, y) = (x² + y² − 1)², the reduced stationary FPK equation is
$$\frac{\partial}{\partial x}\Bigl[\frac{\partial V}{\partial x}\,p\Bigr]+\frac{\partial}{\partial y}\Bigl[\frac{\partial V}{\partial y}\,p\Bigr]+\frac{\sigma^{2}}{2}\Bigl[\frac{\partial^{2}p}{\partial x^{2}}+\frac{\partial^{2}p}{\partial y^{2}}\Bigr]=0,\tag{13}$$

    and the precise steady-state solution can be expressed as
$$p_{s}(x,y)=C\exp\Bigl(-\frac{2V(x,y)}{\sigma^{2}}\Bigr),\tag{14}$$

where C is the normalization constant. In this example, the selected calculation area is Ω = [−2, 2]² and the system parameter is σ = 1.

Utilizing the DL-BPK method, we set Δx = 0.04, so the training set T_D comprised 10⁴ data points, while the boundary set T_B comprised 400 points. For system (12), T_S was acquired by selecting 10⁴ training points from the trajectory obtained by running the Euler–Maruyama method for 10⁸ steps. The probability density of T_S was obtained through MCS, and the penalty factor was set to λ = 0.01. The DL-BPK method was then employed to solve Eq. (13) according to the steps described in Subsection 2.2. The pre-trained model was obtained after 2000 iterations of the NN, and the training set as well as the loss function were updated and trained for a further 4×10⁴ iterations to get the final result, as shown in Fig. 2. The DL-BPK solution closely mirrors the exact solution given by Eq. (14), with a maximum absolute error in the JPDF of approximately 1.4×10⁻⁴. Figure 2(f) shows the comparison of the exact solution and the DL-BPK solution for the marginal probability density function (mPDF) p(x); the result for p(y) is identical. Visually, the outcomes are practically indiscernible. In addition, by comparing the results of DL-BPK and the pre-trained model, it can be seen that L_FPK greatly improves the accuracy and smoothness of the solution. It is also shown that the accuracy of the DL-BPK solution is not limited by the pre-trained model. The result of the pre-trained model is not satisfactory, but its accuracy is not the focus of our research; the function of the pre-trained model lies more in accelerating NN fitting. Moreover, it should be mentioned that in this example, we selected 4 uniformly sampled local maxima from the circular ridge of the PDF of the pre-trained model as the supervision points.

To illustrate the advantages of pre-training and to demonstrate the superiority of the DL-BPK method in training speed, we tested DL-BPK, DL-FP,[18] and the mesh-free data-driven solver[23] on system (12) simultaneously. To assess the effectiveness and accuracy of the proposed method, we define
$$R_{\mathrm{acc}}=1-\frac{\|p_{\theta}-p_{s}\|_{2}}{\|p_{s}\|_{2}},$$

where p_s is the exact solution and ‖·‖₂ denotes the L₂ norm. The closer R_acc is to 1, the more accurate the result. This test used data sets of the same size and ran on the same device. The experimental conditions and parameter settings are shown in Table 1, in which the optimizer defaults to Adam and the learning rate is adjusted automatically with the number of iterations; the obtained results are shown in Fig. 3.
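The metric is straightforward to compute on gridded densities; the form R_acc = 1 − ‖p_θ − p_s‖₂ / ‖p_s‖₂ is our reading of the description above:

```python
def r_acc(p_pred, p_exact):
    """Accuracy metric: 1 minus the relative L2 error of the prediction."""
    err = sum((a - b) ** 2 for a, b in zip(p_pred, p_exact)) ** 0.5
    norm = sum(b ** 2 for b in p_exact) ** 0.5
    return 1.0 - err / norm
```

A perfect prediction gives exactly 1.0; larger deviations push the value toward 0.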

Fig. 2. Results for system (12): (a) solution of the pre-trained model; (b) solution of DL-BPK; (c) exact solution; (d) solution error of the pre-trained model; (e) solution error of DL-BPK; (f) comparison of solutions for the one-dimensional (1D) mPDF.

Table 1. Experimental environment and parameter settings.

Fig. 3. Comparison of results from different methods.

It is clearly observed that the NN fits more quickly after pre-training. Compared to the other methods, the proposed DL-BPK method achieves the same accuracy with fewer iterations: an accuracy of over 99.8% in less than 420 seconds (including pre-training). It can also be seen, through the local zoom in Fig. 3, that the final accuracy of DL-BPK is very high, reaching around 99.97%. Additionally, Fig. 3 shows that the other deep learning methods may fall into fluctuations in the early stage of training, which require a lot of time to escape; this further validates the importance of pre-training. Furthermore, we discuss in detail the effect of errors in the pre-trained PDF on the final PDF, in terms of convergence speed and ultimate error, in Subsection 4.1.

In addition, the advantage of using local maximum points as the supervision points is verified using this example. To this end, 4, 20, 100, and 400 local maximum points are obtained from the prior distribution as supervision points, and then 4, 20, 100, and 400 points are randomly selected from the trajectory of system (12) as supervision points. Their probability densities are obtained from the exact solution with artificially injected noise, rather than from the pre-trained model. The purpose is to study the performance of the solution with rough supervision points: by adjusting the magnitude of the noise intensity, we simulate the actual solving process with inaccurate information. Specifically, we obtain a set of supervision points T_L and calculate p_s(X_i^L); then p_θ1(X_i^L) = (1 − ε + 2εU) · p_s(X_i^L), where U is uniformly distributed on [0, 1]. The maximal relative error ε of the noise is set to 0, 0.1, 0.3, 0.5, and 0.8, respectively. It should be noted that this test is trained 10⁴ times on the basis of the pre-trained model to ensure that the NN essentially fits. The results are shown in Fig. 4.
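The multiplicative noise injection used in this test is simple to reproduce; the density values, ε, and seed below are arbitrary:

```python
import random

def perturb(p_values, eps, seed=0):
    """Scale each density value by a factor uniform on [1 - eps, 1 + eps]."""
    rng = random.Random(seed)
    return [(1.0 - eps + 2.0 * eps * rng.random()) * p for p in p_values]

exact = [0.5, 1.0, 0.25]
noisy = perturb(exact, 0.3)  # every value stays within 30% of the exact one
```

Because the noise is multiplicative, each supervision value keeps the correct sign and order of magnitude, and the relative error is bounded by ε.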

    Fig.4.Accuracy under different numbers and types of supervision points and different noise intensities.

It is evident that using local maximum points as supervision points mitigates the impact of increased noise intensity on result accuracy. In contrast, when supervision points are randomly sampled, accuracy tends to decline as noise intensity rises. This phenomenon can be attributed to two factors. First, the probability density at randomly sampled supervision points varies significantly, exacerbating the gradient information error between them when noise is high, thereby affecting the minimization of E_1. Second, when the supervision point corresponds to a local maximum of the PDF, any inaccuracy in its value is offset during the final normalization process. Finally, it is worth noting that increasing the number of supervision points, whether they are local maximum points or randomly sampled points from the trajectory, improves adaptability to roughness. However, to minimize the reliance on prior knowledge, the preference is to select a smaller number of supervision points.

    3.2.A 2D multi-stable toggle switch system

It is challenging to solve multi-stable systems with low noise, and we would like to test the performance of the DL-BPK method in such special situations. Consider a toggle switch system[29]

where Γ1 and Γ2 are independent standard Gaussian white noise excitations with zero mean and σ denotes the noise intensity. The reduced stationary FPK equation is

For this system, the selected calculation area is Ω = [0, 1.5]² and the constant is b = 0.25. We set Δx = 0.015, so the training set T_D comprised 10⁴ data points, while the boundary set T_B comprised 400 points. For Eq. (15), T_S was acquired by selecting 10⁴ training points from an MCS simulation with 10⁵ steps, and the penalty factor was set to 0.001. For Eq. (16), the results of the DL-BPK method for the JPDF are shown in Fig. 5. In addition, similar to the above example, the DL-FP, mesh-free, and DL-BPK methods are tested on this example. It should be noted that this system does not have an analytical solution, so we compare against the results of 10⁸ steps simulated using MCS, which took 1.5 hours; the final results and comparison are shown in Fig. 6.

Fig. 5. The DL-BPK solution of system (15) with different noise intensities: (a) σ = 0.05, (b) σ = 0.1, (c) σ = 0.15.

From Figs. 5 and 6, the results clearly indicate that finding a solution becomes more challenging when the noise intensity is low. However, the DL-BPK method still maintains good performance in less than 500 seconds (including pre-training). This is attributed to the adaptability of the method when local maximum points are used as supervision points, even in the presence of rough prior knowledge distributions, as demonstrated for system (12). As the noise intensity increases, the performance of the mesh-free method also improves, likely due to the increased accuracy of the prior knowledge. As for the DL-FP method, its inability to capture the distribution characteristics of the system even after more than 3×10⁴ iterations highlights the particular challenges posed by this system. This also underscores the significant advantage of using prior knowledge when solving complex systems.

Fig. 6. Comparison of results of different methods for the mPDF p(x) under three different noise excitations.

For bistable distribution systems with low noise intensity, the accuracy of the distribution is compromised when fewer MCS samples are used to obtain the prior knowledge. The inaccuracy subsequently affects the distribution of the pre-trained model and the probability density of the supervision points, thereby slightly impacting the final results. We anticipate addressing this issue through improved sampling methods in future work. It is worth noting that there have been studies focused on solving low-noise systems.[24] Additionally, for system (15), the test yielded optimal results when the number of hidden nodes was set to 50. This suggests that wider NNs are more effective at approximating highly concentrated probability distributions than their narrower counterparts.

    3.3.A 6D system

    Consider a 6D system[19]

where Γi (i = 1, 2, 3) denotes independent standard Gaussian white noise excitation with zero mean and σi (i = 1, 2, 3) denotes the noise intensity. The corresponding stationary FPK equation is

where C is the normalization constant. For this system, the range of the training set is set to [−5, 5]⁶; there are 5.0×10⁵ data points in the training set T_D and 1.5×10⁵ in the boundary set T_B. 1.0×10⁵ points are sampled from the trajectory of system (17) to form T_S. The probability density of T_S is obtained through MCS and the penalty factor is set to λ = 0.01. Pre-training is carried out for 5000 iterations and the actual training for 5×10⁴ iterations to obtain the final solution. The results of Eq. (18) are shown in Fig. 7. Comparing the final solution with the exact solution, Eq. (19), the DL-BPK method still performs well in terms of accuracy in six dimensions.

Fig. 7. Results for system (17) with σ1² = σ2² = σ3² = 2.0, comparing probability densities on central slices of the 6D state space: (a) p(x1,x2|y1=y2=x3=y3=0) of the pre-trained model; (b) p(x1,x2|y1=y2=x3=y3=0) of the DL-BPK method; (c) analytical p(x1,x2|y1=y2=x3=y3=0); (d) solution error of panel (a); (e) solution error of panel (b). Similarly, panels (f)–(j) are solutions for p(x1,x2|others = 0.6).

Fig. 8. Evolution of the solution accuracy and the value of the loss function with the number of iterations. The solid line signifies the accuracy, corresponding to the left axis, while the dotted line indicates the value of the loss function, aligned with the right axis.

As can be seen from Fig. 8, the accuracy of the solution exceeded 99.13% after 2.0×10⁴ iterations and the calculation time was less than 1560 s (including pre-training), which implies that the method is still applicable in high-dimensional situations. The final result has an accuracy of approximately 99.65%, which is remarkable for high-dimensional systems. The accuracy of the solution decreased in the early training stage because E_1 of the pre-trained model was so large that the effect of E_2 in fitting the NN was temporarily overwhelmed. When the value of E_1 returned to a normal level, the supervision points began to play their role in suppressing the generation of the zero solution, and the accuracy of the solution started to improve. The accuracy of the final solution is demonstrated by showing results near the center of the 6D state space. It is obvious that the DL-BPK method is still applicable in high dimensions, and the accuracy is maintained at a good level. At the same time, it is worth noting that we trained only 5.0×10⁴ times on the basis of the pre-trained model to achieve satisfactory accuracy, which would be very difficult for other methods. To further illustrate the applicability of this method in higher dimensions, the slice probability density far from the center of the 6D state space is shown in Fig. 9.

    As shown in Fig.9, in the region where the probability density approaches zero,the result acquired from the proposed DL-BPK still shows good agreement with the exact probability distribution.

Fig. 9. Comparing probability densities on slices far from the center of the 6D state space: (a) p(x1,x2|y1=y2=x3=y3=2.0) of the pre-trained model; (b) p(x1,x2|y1=y2=x3=y3=2.0) of the DL-BPK method; (c) analytical p(x1,x2|y1=y2=x3=y3=2.0); (d) solution error of panel (a); (e) solution error of panel (b).

3.4.An 8D system

The applicability of the DL-BPK method in higher dimensions is of great interest. Nevertheless, it is generally challenging to verify the results in high-dimensional situations, so a linear system with a known exact distribution is chosen.[23] Consider the following SDE with a linear deterministic part
$$\dot{X}=AX+\sigma\Gamma(t),\tag{20}$$

where Γi, i = 1, ..., N, denotes independent standard Gaussian white noise with zero mean, while σ = 1 signifies the noise intensity. X = (x1, x2, ..., xN)^T is the column vector of variables. The deterministic part of system (20) has the matrix form AX. The invariant probability measure of system (20) satisfies
$$p(X)=C\exp\Bigl(-\frac{1}{2}X^{\mathrm{T}}\Sigma^{-1}X\Bigr),\tag{21}$$

    We determine Σ by solving the Lyapunov equation AΣ + ΣA^T + 2I = 0.Given the symmetry of A, we can represent Σ as in Eq. (21).The dimension N was set to 8.It is almost impractical to perform uniform sampling in such a high dimension, even if the sampling is extremely sparse.Hence, we use random sampling from within the domain and from the boundary to generate TD and TB, containing 1.0×10⁶ and 5.0×10⁵ points, respectively.The set TS is also used for training in this experiment, consisting of 1.0×10⁵ points sampled from the trajectory of Eq. (20).The probability density of TS is obtained through MCS, and the penalty factor is set to λ = 0.01.The final solution was achieved through 2×10⁴ pre-training iterations followed by 5×10⁵ training iterations, which took nearly 30 hours.The training cost in this experiment is enormous, although the system is only two dimensions higher than system (17); since we are more interested in the applicability to high dimensions, efficiency is sacrificed to some extent.
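The stationary covariance Σ can be computed from the Lyapunov equation above by Kronecker vectorization.The sketch below uses a hypothetical stable drift matrix A as a stand-in, since the paper's system matrix is not reproduced in this section:

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A @ S + S @ A.T + Q = 0 via vectorization:
    vec(A S + S A^T) = (I (x) A + A (x) I) vec(S), with column-major vec."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(I, A) + np.kron(A, I)
    s = np.linalg.solve(K, -Q.flatten(order="F"))
    return s.reshape(n, n, order="F")

# Hypothetical symmetric, negative-definite A (a stable linear drift)
N = 8
rng = np.random.default_rng(0)
M = rng.standard_normal((N, N))
A = -(M @ M.T) - N * np.eye(N)

# Covariance of the stationary Gaussian: A Sigma + Sigma A^T + 2 I = 0
Sigma = solve_lyapunov(A, 2.0 * np.eye(N))
residual = A @ Sigma + Sigma @ A.T + 2.0 * np.eye(N)
```

Because this illustrative A is symmetric, the solution collapses to Σ = −A⁻¹, which gives a quick consistency check; the vectorized solve applies equally to non-symmetric stable drifts.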

    It can be seen from Fig. 10 that the agreement between the exact solution and the DL-BPK solution is acceptable only near the central slice; far from the central slice, the numerical values are no longer of the same magnitude as the exact solution.Such a phenomenon might result from the fact that more representative training sets are required for the NNs when solving higher-dimensional systems.If each dimension were divided into 100 grids, there would be 10¹⁶ points in the entire space, so the amount of grid data for an 8D system is prohibitive.As a result, the training set used in this paper cannot quickly traverse the entire space, which increases the training cost.Therefore, how to sample more representative data remains a problem that must be solved for high-dimensional problems, and it is the focus and direction of our future research.
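The random interior/boundary sampling used above to build TD and TB can be sketched as follows.The bounding box [−4, 4]⁸ is an assumption for illustration, not a value taken from the paper:

```python
import numpy as np

def sample_training_sets(n_interior, n_boundary, n_dim, lo=-4.0, hi=4.0, seed=0):
    """Uniform random samples: T_D from the interior of [lo, hi]^n_dim and
    T_B from its boundary (one coordinate of each point pinned to a face)."""
    rng = np.random.default_rng(seed)
    TD = rng.uniform(lo, hi, size=(n_interior, n_dim))
    TB = rng.uniform(lo, hi, size=(n_boundary, n_dim))
    axes = rng.integers(0, n_dim, size=n_boundary)   # which coordinate hits a face
    faces = rng.choice([lo, hi], size=n_boundary)    # which of the two faces
    TB[np.arange(n_boundary), axes] = faces
    return TD, TB

# Counts from the 8D experiment: 1.0e6 interior and 5.0e5 boundary points
TD, TB = sample_training_sets(1_000_000, 500_000, n_dim=8)
```

Random sampling sidesteps the 10¹⁶-point grid, but as noted above it still traverses the 8D space slowly, which is why more representative sampling is left for future work.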

    Fig. 10.Results for system (20) with N=8: (a) p(x1,x2|x(3~8)=0) of DL-BPK; (b) analytical p(x1,x2|x(3~8)=0); (c) p(x1,x3|x(2,4~8)=0) of DL-BPK; (d) analytical p(x1,x3|x(2,4~8)=0); (e) p(x1,x2|x(3~8)=2.0) of DL-BPK; (f) analytical p(x1,x2|x(3~8)=2.0).

    4.Discussion

    The advantages of the proposed DL-BPK method in computational efficiency and accuracy were demonstrated in the previous section, where further experiments also investigated the impact of the number of supervision points on the robustness of the method.In this section, two aspects of the DL-BPK method are studied: (i) the significance of the pre-trained model for computational performance, and (ii) the role of the penalty factors in the proposed method.

    4.1.The significance of pre-training

    The influence of the pre-trained model on accuracy and speed is discussed first.We consider systems (12) and (17), which represent the low-dimensional and high-dimensional cases, respectively.The experimental conditions are identical to those described in Section 3, except that the pre-training step is omitted, so that the impact of pre-training on accuracy can be isolated.It should be noted that, in the absence of pre-training, pθ1(XLi) is obtained by MCS.

    Additionally, the total numbers of training iterations with and without pre-training are the same.We can clearly observe in Fig. 11 that the accuracy of the results obtained without pre-training is much lower than that with pre-training; for the 6D system (17) in particular, the results without pre-training are extremely poor.This test shows that pre-training is vital for enhancing both solution accuracy and efficiency, which is particularly evident in high-dimensional situations.

    Since the pre-training step exerts great influence on solution accuracy, it is necessary to further study the impact of the errors of the pre-trained model on the final result.According to Subsection 2.2, the accuracy of the pre-trained model is primarily determined by the prior knowledge obtained by MCS.Therefore, it is reasonable to adjust the number of MCS steps to vary the accuracy of the pre-trained model.For system (12), the prior knowledge is obtained by MCS with 10³, 10⁴, 5×10⁴, and 10⁵ steps, respectively, and these four cases are referred to as cases I–IV.The models pre-trained for 2000 and 5000 iterations are then used as the basis for subsequent training, and the results are shown in Fig. 12.It can be seen from Figs. 12(b) and 12(c) that the error of the pre-trained model hardly affects the final calculation in terms of convergence speed or ultimate error.The accuracy of case I in Fig. 12(c) fluctuates significantly in the initial stage, mainly because the highly rough prior knowledge (induced by the few MCS iteration steps) causes E1 to increase with iterations during the pre-training process, as shown in Fig. 12(d).Gratifyingly, however, the introduction of the FPK differential operator quickly repairs the NNs in the subsequent training.
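The MCS that supplies the prior knowledge amounts to integrating the SDE and estimating a density from the trajectory; the number of integration steps is the quantity varied across cases I–IV.A minimal Euler–Maruyama sketch, using a hypothetical 1D double-well drift rather than the paper's system (12):

```python
import numpy as np

def euler_maruyama(drift, x0, sigma, dt, n_steps, rng):
    """Simulate dX = drift(X) dt + sigma dW and return the full trajectory."""
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps + 1, x.size))
    traj[0] = x
    for k in range(n_steps):
        dw = rng.standard_normal(x.size) * np.sqrt(dt)
        x = x + drift(x) * dt + sigma * dw
        traj[k + 1] = x
    return traj

# Hypothetical double-well drift (illustrative only)
drift = lambda x: x - x**3
rng = np.random.default_rng(1)
traj = euler_maruyama(drift, [0.0], sigma=0.5, dt=1e-3, n_steps=100_000, rng=rng)

# Empirical density ("prior knowledge") from a histogram, after discarding a burn-in
hist, edges = np.histogram(traj[10_000:], bins=50, density=True)
```

Fewer steps yield a rougher histogram (case I), but as shown above the FPK residual term corrects such roughness during subsequent training.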

    In short, the pre-trained model facilitates quick fitting of the NN by jumping out of local saddle points when solving high-dimensional systems, and its inclusion is essential for the accuracy of the final result.However, the accuracy of the pre-trained model itself is less important, since it has little impact on the final result; this means that the DL-BPK method is not entirely determined by the accuracy of the MCS and has excellent self-correction ability.

    Fig. 12.The effect of errors of the pre-trained PDF on the final NN PDF in terms of convergence speed and ultimate error: (a) evolution of the value of Loss1 with iterations in the pre-training process; (b), (c) variation of solution accuracy with iterations in the subsequent training after 2000 and 5000 pre-training iterations, respectively; (d) evolution of the value of E1 with iterations in the entire training process.

    4.2.The role of the penalty factors

    The effect of the penalty factors is discussed using systems (12), (17), and (20).System (15) is excluded because it has no exact solution, so the accuracy of its outcomes cannot be quantified.The FPK equation is solved with penalty factors of different magnitudes, while the other parameters and settings remain the same as in Section 3.Each experimental result is the average of three independent trials, and the final results are shown in Fig. 13.

    It can be seen that penalty factors of different magnitudes have little impact on the results of each system; only when the penalty factor is 1×10⁻⁴ is the accuracy for the 2D system extremely poor.The reason is that the probability density at the local maximum point of the 2D system is small; when the penalty factor is too small, the supervision points may lose their ability to prevent the zero solution.The specific rules for setting the penalty factor need further investigation, and may relate to the error and magnitude of the probability density at the supervision points.In general, an unreasonable setting of the penalty factor will not prevent the correct result from being obtained.Compared with the DL-FP method, the proposed method does not rely entirely on penalty factors.The primary function of the penalty factor is to steer the NN toward fitting the FPK differential operator, which is crucial for improving the smoothness and accuracy of the results and for preventing the zero solution.
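The role of λ can be made concrete with a toy version of the composite loss.Here we assume, consistent with the observation that a too-small λ weakens the supervision points, that λ weights the supervision-point term E2 against the FPK-residual term E1; the names E1 and E2 follow the paper, while the numerical values are purely illustrative:

```python
import numpy as np

def composite_loss(fpk_residuals, p_pred, p_ref, lam):
    """E1 drives the NN toward satisfying the FPK operator (smoothness);
    lam * E2 pins the density at supervision points, blocking the zero solution."""
    E1 = float(np.mean(fpk_residuals ** 2))
    E2 = float(np.mean((p_pred - p_ref) ** 2))
    return E1 + lam * E2

# A NN collapsing to the zero solution has vanishing FPK residuals
# (L_FPK 0 = 0) but a large supervision error; a tiny lam hides that error.
res_zero = np.zeros(100)
p_ref = np.full(4, 0.2)          # illustrative densities at supervision points
loss_small_lam = composite_loss(res_zero, np.zeros(4), p_ref, lam=1e-4)
loss_paper_lam = composite_loss(res_zero, np.zeros(4), p_ref, lam=1e-2)
```

With λ = 10⁻⁴ the zero solution is barely penalized, matching the poor 2D result reported above; with λ = 0.01, as used for the 8D experiment, the supervision term remains visible to the optimizer.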

    Fig.13.The impact of penalty factors on different systems.

    5.Conclusions

    This paper introduces a numerical method, DL-BPK, designed to solve FPK equations efficiently and accurately, with applicability to high-dimensional systems.The basic idea is to use prior knowledge to obtain a pre-trained model and to take the discretized form of the FPK equation as a loss function to ensure the smoothness of the solution.To avoid the generation of the zero solution, supervision points with known probability densities are included as a term in the loss function.The accuracy of the DL-BPK solution for the first 2D system, as well as for the 6D and 8D systems, reaches 99.97%, 99.65%, and 95.55%, respectively.For the remaining 2D toggle-switch system, the DL-BPK solution also achieves good consistency with the MCS results.The proposed method outperforms other NN-based methods for solving the FPK equation in both accuracy and efficiency.

    Due to the introduction of supervision points in the loss function, the influence of the number of points and of the selection rules on the results was studied.The results indicate that the points with the highest probability density within a specific range, i.e., the local maximum points of the PDF, should be selected as the supervision points.Next, we examined the significance of the pre-trained model; all the results validate our conjecture and confirm the practicality of the proposed method, which provides some implications for solving higher-dimensional problems.Besides, the impact of the penalty factors on the results and possible setting rules were also discussed.The results indicate that penalty factors of different magnitudes have different impacts on the results; in general, the recommended value lies between 1×10⁻⁴ and 0.1.Proper selection of the penalty factor brings the focus of NN training closer to the FPK form, facilitating the smoothness of the solution.Moreover, the setting of the penalty factor also depends on the probability density at the supervision points, which is determined by the system equation and the noise intensity.Specific and strict rules still need to be established through more models.

    To enable comparison with exact solutions, relatively simple dynamic systems were used to test the accuracy of the DL-BPK method; future work will focus on solving complex nonlinear dynamic systems.It should be pointed out that, since the focus of this study is not effective dataset sampling, data-explosion issues arise in 8D and higher-dimensional systems, which undoubtedly add significant computational cost and may limit the application of this method in higher dimensions.Therefore, data optimization and more efficient sampling methods will be the focus of our future research.Additionally, this paper only studies the FPK equation for standard Gaussian white noise; we will further consider the cases of non-Gaussian colored noise or extreme events.

    Acknowledgement

    Project supported by the National Natural Science Foundation of China(Grant No.12172226).
