
    ESR-PINNs: Physics-informed neural networks with expansion-shrinkage resampling selection strategies

    Chinese Physics B, 2023, No. 7

    Jianan Liu(劉佳楠), Qingzhi Hou(侯慶志), Jianguo Wei(魏建國), and Zewei Sun(孫澤瑋)

    1College of Intelligence and Computing,Tianjin University,Tianjin 300350,China

    2State Key Laboratory of Hydraulic Engineering Simulation and Safety,Tianjin University,Tianjin 300350,China

    Keywords: physics-informed neural networks, resampling, partial differential equation

    1. Introduction

    In the past decade, the rapid development of electronic circuit technology has led to a tremendous increase in the computing power of computers. As a result, deep learning has been applied to an increasing number of scenarios and plays a vital role in fields such as computer vision,[1,2] natural language processing,[3,4] and speech.[5,6] A neural network is essentially a nonlinear function approximator that automatically extracts features by "learning" from data through its powerful fitting ability. This enables a switch from extracting features through manual feature engineering to automatic feature extraction through data training, thus avoiding human interference as much as possible and realizing the process from "data" to "representation".

    In recent years, along with the wave of deep learning development, deep learning methods have received increasing attention for solving partial differential equations (PDEs),[7–10] among which the physics-informed neural networks (PINNs) proposed by Raissi et al.[11] provide a new way of thinking. PINNs construct a new road to describe the problem and have been applied successfully in many areas, such as heat transfer,[12] thrombus material properties,[13] nano-optics,[14] fluid mechanics,[15,16] and vascular simulation.[17] As shown in Ref. [18], PINNs can be used in the development of high-performance equipment for measuring the center-of-mass parameters of laser spots. PINNs transform the PDE problem into an optimization problem by using the automatic differentiation mechanism of neural networks to calculate the partial derivatives appearing in the PDEs. The learning process of PINNs differs from that of neural networks applied in other fields, although there are no related studies qualitatively discussing this issue. In other tasks, neural networks learn feature information from labeled data, whereas in PINNs the network is constrained only by the initial and boundary conditions, and the residuals at all training points are minimized during training. Unlike the traditional grid methods, where the values at all grid points are obtained through the transfer of the grid calculation, a PINN can essentially be regarded as a complex nonlinear function. PINNs are trained with various conditions as constraints, with the training goal of minimizing the total loss over all training points. The network pays equal attention to all training points during training, which can lead to the propagation failure mentioned in Ref. [19]: some points in the spatio-temporal domain are difficult to train, they enlarge the errors of other points, and the surrounding training points then also have difficulty learning the correct values. This is equivalent to PINNs learning a particular (incorrect) solution of the PDEs: although the loss values are small, the obtained solution space is completely different from the correct one.

    PINNs have been improved in many ways. In Ref. [20] the importance of training points is reconsidered: the collocation points are resampled proportionally according to the loss function, and training is accelerated by a piecewise-constant approximation of the loss function. In Ref. [19] an evolutionary sampling (Evo) method was proposed to avoid the propagation failure problem; it allows the training points to gradually cluster in regions with high PDE residuals, where the PDE residual usually refers to the error between the approximate PINN solution and the real solution. Since the PDE, boundary condition, and initial condition terms often differ in numerical value by orders of magnitude, their contributions to the network parameter updates are unbalanced. The relative loss balancing with random lookback (ReLoBRaLo) was proposed in Ref. [21], aiming to balance the contributions of multiple loss functions and their gradients and to mitigate the imbalance arising when some loss terms have large gradient values. Based on an adaptive learning rate annealing algorithm, a simple solution was proposed in Ref. [22]: by balancing the interaction between data fitting and regularization, the instability caused by an unbalanced back-propagation gradient distribution during gradient-descent training is alleviated. A deep hybrid residual method for solving higher-order PDEs was proposed in Ref. [23], where the given PDEs are rewritten as a first-order system and the higher-order derivatives are approximated by unknown functions; the residual sum of the equations in the least-squares sense is used as the loss function to obtain an approximate solution. In Refs. [24,25], the authors paid attention to the importance of the training points: during training, the weight coefficients of the training points are adjusted adaptively, and training points with large errors receive higher coefficients, so they have a larger effect on the network parameters. For time-dependent problems, the authors of Refs. [26,27] first trained the network on a short time interval and then gradually expanded the training interval according to certain rules to cover the whole spatio-temporal domain; this type of method mainly reflects the transfer process of the initial and boundary conditions. In Ref. [28], the interaction between different terms in the loss function was balanced by gradient statistics, and a gradient-optimized physics-informed neural network (GOPINNs) was proposed, which is more robust in suppressing gradient fluctuations.

    To address the problem of propagation failure during training of PINNs, an expansion-shrinkage resampling (ESR) strategy is proposed and applied to PINNs, referred to as ESR-PINNs. As introduced above, PINNs suffer from propagation failure during training, but it is often irrational to focus only on the training points with large errors. On the one hand, the proposed expansion-shrinkage point selection strategy avoids the difficulty of the network in jumping out of a local optimum caused by excessive focus on points with large errors. On the other hand, the reselected points are more uniform, reflecting the idea of propagation. Inspired by the idea of function limits, the concept of continuity of PINNs during training is proposed for the first time. In this way, the propagation failure problem is alleviated and the learning ability of PINNs is improved. In addition, the additional resampling process consumes a negligible amount of time: for a total of 10000 test points, the entire resampling process takes about 20 s on an Intel(R) Xeon(R) Silver 4210 CPU @ 2.20 GHz, with the time mainly spent in the sorting process.

    This paper is structured as follows. In Section 2, we describe the PINNs and introduce our method, showing how to implement the expansion-shrinkage point selection strategy. In Section 3, we demonstrate the method through a series of numerical experiments. Conclusions are drawn in Section 4.

    2. Methodology

    In this section, the PINNs are first reviewed, followed by a description of the proposed point allocation and selection approach with pertinent details.

    2.1. Physics-informed neural networks

    Consider the following partial differential equation (PDE) subject to initial and boundary conditions:
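    A minimal sketch of the standard time-dependent formulation, consistent with the notation explained below (g and h are illustrative names for the prescribed initial and boundary data), is

        u_t(x,t) + N[u](x,t) = 0,   x ∈ Ω, t ∈ (0,T],
        u(x,0) = g(x),              x ∈ Ω,
        u(x,t) = h(x,t),            x ∈ ∂Ω, t ∈ (0,T].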

    where u(x,t) represents a field variable associated with time t and space x, N denotes the nonlinear operator, Ω is the spatial domain, ∂Ω is its boundary, and u_t(x,t) represents the partial derivative of u(x,t) with respect to time t. The loss function for PDEs with time terms is often defined through their residuals in the form of minimum mean square errors:
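    A sketch of this composite mean-square loss, with N_f, N_i, and N_b denoting the numbers of collocation, initial, and boundary training points (names introduced here only for illustration), is

        L = λ1 (1/N_f) Σ_j |u_t(x_f^j, t_f^j) + N[u](x_f^j, t_f^j)|²
          + λ2 (1/N_i) Σ_j |u(x_i^j, 0) − g(x_i^j)|²
          + λ3 (1/N_b) Σ_j |u(x_b^j, t_b^j) − h(x_b^j, t_b^j)|².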

    where λ1, λ2, and λ3 represent the weight coefficients for the PDE, initial condition, and boundary condition terms, respectively. They are generally given empirically in advance when adaptive weight coefficients are not utilized. For PINN systems, time and space in the PDEs are homogeneous. The fundamental concept is the propagation of the solution from the initial and boundary values into the spatio-temporal domain, making the training points satisfy the PDE in the domain. The loss function is used to measure how well the network fits the PDE, and in this way the PDEs are solved for their intended purpose. The three terms in Eq. (4) describe how well the network's output fulfills the PDE, the initial condition, and the boundary condition.

    2.2. Resampling

    With fixed locations of training points for the boundary and initial conditions, the training of PINNs is significantly influenced by the distribution of training points over the spatio-temporal domain. For instance, increasing the number of training points tends to produce more accurate results for the majority of algorithms. However, due to limited computational resources and time, it is often impossible to conduct experiments with a very large set of training points. Therefore, a suitable set of sampling points is important for obtaining accurate solutions. Usually, a random or quasi-random distribution, such as Latin hypercube sampling (LHS) or the Sobol sequence, is chosen for initialization. For different PDEs, the real solution has a certain distribution, but training points matching this distribution are difficult to generate by random initialization. Therefore, adaptive resampling is essential for training PINNs on some PDEs, as opposed to using preset training points for every iteration. In particular, for PDEs with drastic changes in time and space, the local values differ greatly from those in other regions, and more points are needed to resolve the rapidly changing peaks. It is essential to dynamically modify the locations of particular training points as training progresses in order to improve the accuracy of the neural network. In this study, we provide a new method for choosing and allocating points that can enhance the network's ability to fit the PDEs without modifying its structure, parameters, or total number of training points. Figure 1 illustrates the structure of the ESR-PINNs.

    Fig. 1. ESR-PINNs structure.

    2.2.1. Point selection strategy

    In the vast majority of machine learning problems, the labels of the training data are given. Even if a few samples are mislabeled, the majority of the data are guaranteed to have accurate labels for training supervision. In contrast, the boundary and initial conditions are frequently the driving forces of PINN training: they constrain the neural network and propagate information into the interior of the spatio-temporal domain. Random training points usually produce good learning results during propagation in smoothly changing regions. Nevertheless, some PDEs have regions with sharp changes, for which randomly initialized training points may not place enough points in the sharply changing regions, so the solution cannot be reasonably learned by PINNs. Lu et al.[29] proposed the first adaptive non-uniform sampling for PINNs, i.e., the residual-based adaptive refinement (RAR) method. It calculates the PDE residuals at randomly generated test points, far more numerous than the training points, and selects the test points with large residuals as new training points to be added to the original data. Changing the training point distribution increases the neural network's accuracy. However, adding only a few test points with large residuals to the training set is undesirable. In our experiments, we find that the distribution of training points may become pathological if points with large residuals are added continuously in a small region. We define this pathology as the aggregation of a large number of training points in a small spatio-temporal region. Such a pathological distribution may make PINNs difficult to train and thus degrade the overall performance. This work introduces a novel computational method for point resampling. As mentioned above, the real challenge for PINNs is that some training points are located in regions of the spatio-temporal domain where the values vary drastically. Since only little training has been carried out there, the neural network tends to learn poorly at these locations or gets stuck in a local optimum that is hard to escape. The parameters of the neural network are tuned to reduce the error across all training points. However, when averaged with the enormous number of training points that can be learnt more precisely, the loss values contributed by these challenging training points become negligible, and PINNs become unable to depart from the local optimum. As a result, it is necessary to alter the distribution of training points to move the network away from the local optimum.

    Table 1. Terminology explanation.

    Some of the terminology used in the algorithm is explained in Table 1. Firstly, a set of fixed training points is generated according to a certain probability distribution as the training point set. Then a large number of fixed, invariant data points are generated uniformly in the spatio-temporal domain as the test point set. The score used in the point selection process is calculated with

    where i indicates the iteration number and Lamb is a hyperparameter that controls the balance between selecting points with large residuals and points with large residual changes. In the first iteration, i.e., i = 1, only test points with large residuals are selected as alternative points, since there are no reference values. The PDE residual values of all test points are recorded as res_1 at this time. The selected alternative points enter the point generation part, in which the newly generated training points are added to the fixed training point set for the first iteration. The residual values of all test points before the current iteration are recorded as res_old. When i ≠ 1, the PDE residual values of all test points are recalculated as res_i, and the score is calculated according to Eq. (5); its value represents the ranking of each point among the test points. A higher value of the part weighted by Lamb, with the test point unchanged, indicates that the neural network was not sufficiently optimized at this point in the previous iteration to make its output more compatible with the PDEs. If this term has a value larger than 1, the neural network works poorly at this test point as a result of negative optimization in the previous iteration. This avoids, to some extent, considering only the points with the largest residuals in the point selection process while ignoring a large number of training points with moderate residuals; otherwise the importance of these points would be reduced during training, and they might never be effectively optimized. Note that in this work the ranking of a point's PDE residual among those of all other points replaces the raw value of res, and the first component is expressed as the ranking of the ratio among all ratios. The output of Eq. (5) is taken as the final ranking, and score inaccuracies caused by the difference in order of magnitude between the two parts are thereby prevented.
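    Since Eq. (5) is described above only verbally, the following Python sketch (with hypothetical helper names) shows one way to realize a ranking-based score of this kind: the rank of the residual-change ratio and the rank of the residual itself are combined with weight Lamb, so differences in order of magnitude between the two parts cannot distort the score.

        import numpy as np

        def esr_score(res_new, res_old, lamb):
            # res_new: PDE residuals of all test points after the current iteration.
            # res_old: residuals of the same test points before the iteration.
            # lamb:    hyperparameter weighting "poorly optimized" points (large
            #          residual-change ratio) against "large residual" points.
            ratio = res_new / res_old                    # > 1 means the point got worse
            rank_ratio = np.argsort(np.argsort(ratio))   # 0 = smallest ratio
            rank_res = np.argsort(np.argsort(res_new))   # 0 = smallest residual
            n = len(res_new)
            # Normalize both rankings to [0, 1] and combine them with weight lamb.
            return lamb * rank_ratio / (n - 1) + (1.0 - lamb) * rank_res / (n - 1)

        # Usage: the highest-scoring test points become the alternative points.
        # scores = esr_score(res_new, res_old, lamb=0.4)
        # alternative_idx = np.argsort(scores)[-m1:]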

    2.2.2. Training point generation method with expansion-shrinkage strategy

    To locate good training points, point selection is carried out in the steps described above, and the generation of the expanded alternative points is described in this section.

    Table 2. Parameter explanation.

    Table 2 describes the meaning of the parameters. After the scores have been computed as described above, they are sorted and the top m_i points are selected (we select one third to one half of the total number of test points). These locations are then used as the centers of circles with diameter Δx (since the fixed test point interval in the algorithm is Δx, the radius is set to Δx/2 in order to reduce the impact on other test points), within which n_i new alternative points are randomly generated (n_i is typically between 2 and 10). This is equivalent to an expansion of the alternative points obtained in the previous part of the algorithm, producing n_i times as many alternative points. The PDE residuals of the newly generated alternative points are then calculated, and the m_{i+1} points with the largest residuals are taken as the alternative points for the next round. From the remaining m_i n_i − m_{i+1} expansion points, P_i points are randomly chosen as training points to be saved. The same operation is then repeated with the chosen m_{i+1} alternative points. After this operation has been repeated three times, the m_4 points with the largest residuals among the alternative points generated in the final round are added to the training set, together with the P_1 + P_2 + P_3 training points saved along the way. Figure 3 depicts this process.

    Fig. 3. The training point generation process with the expansion-shrinkage strategy.

    We call this operation the expansion-shrinkage strategy. After obtaining the alternative points through the first part of the algorithm, we first perform expansion around them. Note that the expansion process does not include the alternative points themselves; we choose to generate random points in a small neighborhood of each alternative point to replace it. This is done to prevent data points that are inherently difficult to train from being chosen repeatedly over numerous cycles, which would destroy the overall distribution of training points and turn it pathological, making it a challenge to train under such a distribution. This matters because PINNs cannot supervise the training points in the spatio-temporal domain during training. The proposed method of randomly creating new points around alternative points as replacements improves the continuity in this neighborhood. We use the concept of function limits in this context: for arbitrary ε > 0 there exists δ > 0 such that |x − x0| < δ implies |f(x) − f(x0)| < ε. Sufficiently small perturbations δ of the input values have a relatively modest effect on the final output. Since each layer's weight coefficients are constant when the network propagates forward, the output changes caused by input perturbations are small. Therefore, by boosting the continuity within a specific band of large residuals, we attempt to ameliorate the subpar training outcomes caused by the propagation problem.

    After obtaining a total of m_i n_i points from the n_i-fold expansion, the contraction process is entered. First, the m_{i+1} points with the largest residuals are selected as alternative points for the next expansion, and then P_i training points are randomly selected from the remaining expansion points. Because of the random selection, the obtained points are more uniform; the purpose is to avoid excessive focus on high-residual points when generating the training points. In the algorithm, the P_1 + P_2 + P_3 points are selected over several rounds, similar to a mountain peak: the summit indicates the area with high residuals where the majority of training points are placed, and the density of training points increases as the peak tightens toward the top. In order to make the transition more gradual and prevent all the training points from condensing in a small area, alternative points in the contraction process are also taken into consideration. In addition to the high-residual points, the medium-residual points also receive more attention. The distribution of training points is thus made more appropriate for the PDEs, and this portion of the points is ensured to receive more attention during training. Choosing only the few points with the greatest PDE residuals as training points would effectively homogenize all the high residuals, which does not accurately reflect the significance of various training point locations. It is illogical to treat all large residuals homogeneously because residuals frequently change greatly during the training of PINNs, and this is reflected in our algorithm. Additionally, the residual variance of the training points may become flatter as a result of the random selection of alternative points, which is an advantage in the training of ESR-PINNs. A sketch of this generation loop is given below.
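    The following Python sketch summarizes the expansion-shrinkage loop described in this subsection (pde_residual is a placeholder for the PINN residual evaluation; for simplicity the neighborhood is taken as a square of half-width Δx/2, whereas the text uses a circle of diameter Δx).

        import numpy as np

        def expansion_shrinkage(alt_pts, pde_residual, m, n, P, dx, rounds=3):
            # alt_pts:      (m[0], d) alternative points chosen by the selection score.
            # pde_residual: callable returning PDE residual magnitudes at given points.
            # m, n, P:      per-round sizes m_i, n_i, P_i as listed in Table 2.
            # dx:           spacing of the fixed test points; expansion radius ~ dx/2.
            saved = []                                  # collects the P_1 + P_2 + P_3 points
            for i in range(rounds):
                # Expansion: n[i] random points near each alternative point
                # (the alternative point itself is not reused).
                offsets = np.random.uniform(-dx / 2, dx / 2,
                                            size=(len(alt_pts), n[i], alt_pts.shape[1]))
                expanded = (alt_pts[:, None, :] + offsets).reshape(-1, alt_pts.shape[1])

                # Shrinkage: keep the m_{i+1} largest-residual points as the next
                # alternative set and randomly save P_i of the remaining points.
                res = pde_residual(expanded)
                order = np.argsort(res)
                alt_pts = expanded[order[-m[i + 1]:]]
                rest = expanded[order[:-m[i + 1]]]
                saved.append(rest[np.random.choice(len(rest), P[i], replace=False)])

            # After the final round the m_4 surviving alternative points are kept as well.
            return np.vstack(saved + [alt_pts])

    With the Burgers settings of Section 3.1 (m = [2500, 2500, 2000, 500], n = [4, 4, 5], P = [300, 300, 400], three rounds), this sketch returns the 1500 retained points that are added to the training set.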

    3. Results and discussion

    In this section, we present several numerical experiments to verify the effectiveness of the proposed ESR-PINNs.

    3.1. Burgers equation

    The Burgers equation is a nonlinear PDE that models the propagation and reflection of shock waves. It is a fundamental equation used in various fields such as fluid mechanics, nonlinear acoustics, and gas dynamics. Here we can write it as
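    With the coefficients a1 and a2 introduced in the next paragraph, a sketch of the familiar viscous form used in the PINN literature (the benchmark initial and boundary data are omitted) is

        u_t + a1 u u_x − a2 u_xx = 0.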

    Our experiments revolve around the forward problem, where the parameters a1 and a2 are 1.0 and 1/(100π), respectively, and the mean square error is used as the loss function.

    The following settings are selected for the PINNs: the Sobol sequence is used for random point selection, with 80 points on the boundary, 160 randomly selected training points for the initial condition, and a total of 2540 training points in the spatio-temporal domain. The neural network structure has three layers with 20 neurons in each layer. For the ESR-PINNs, three iterations of point selection are used, with 80 points on the boundary, 160 points for the initial condition, and 1000 fixed points generated randomly by the Sobol sequence as the training points.

    Table 3. Parameter settings for the Burgers equation.

    Table 3 shows the parameter settings. Each iteration is trained for 10000 epochs. During the first expansion, we set m1 = 2500 and n1 = 4, so 10000 expansion points are obtained. Next, the PDE residuals of all expanded points are calculated, and the m2 = 2500 points with the largest residuals are selected as alternative points for the next round. Among the remaining expansion points, a total of P1 = 300 points are randomly selected as the first batch of retained training points using the Monte Carlo method. We use the following parameters in the second and third rounds: n2 = 4, m3 = 2000, P2 = 300, n3 = 5, P3 = 400, and m4 = 500. After three rounds of expansion and contraction, a total of 1500 retained points are selected as new training points and added to the training set. First, we use the Adam optimizer for initial training of the neural network for 10000 epochs at a learning rate of lr = 1×10^−3, followed by the L-BFGS optimizer to finely tune the neural network for a total of 10000 epochs. Without any early stopping strategy or dynamic weighting algorithm, the weight coefficients λi (i = 1, 2, 3) in Eq. (4) are all taken as 1.
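    The two-stage optimization described above can be sketched as follows (the framework is not specified here; this sketch assumes PyTorch, and model and pinn_loss are placeholder names for the network and the composite loss of Eq. (4)).

        import torch

        def train(model, pinn_loss, adam_epochs=10000, lbfgs_epochs=10000):
            # Stage 1: coarse training with Adam at lr = 1e-3.
            adam = torch.optim.Adam(model.parameters(), lr=1e-3)
            for _ in range(adam_epochs):
                adam.zero_grad()
                loss = pinn_loss(model)
                loss.backward()
                adam.step()

            # Stage 2: fine-tuning with L-BFGS.
            lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=lbfgs_epochs)

            def closure():
                lbfgs.zero_grad()
                loss = pinn_loss(model)
                loss.backward()
                return loss

            lbfgs.step(closure)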

    Figure 4 shows the ESR-PINNs solution compared with that of the original PINNs. It is obvious that ESR-PINNs significantly reduce the absolute errors. By looking at the error distribution of PINNs shown in Fig. 4(c), it can be seen that there is indeed some propagation failure problem. This problem is somewhat improved by changing the distribution of training points in ESR-PINNs, and the overall error is reduced by about one order of magnitude. Table 4 lists the L2 errors of the ESR-PINNs and the original PINNs for different Lamb. It can be seen that the L2 errors of ESR-PINNs are significantly smaller than those of the original PINNs for any Lamb. The algorithm achieves the best performance for Lamb = 0.4.

    Fig. 4. Numerical solutions of (a) PINNs and (b) ESR-PINNs, and absolute errors of (c) PINNs and (d) ESR-PINNs for the Burgers equation.

    Table 4. Comparison of L2 errors for the Burgers equation: PINNs and ESR-PINNs with different Lamb.

    Figure 5 shows the comparison of the losses of PINNs and ESR-PINNs with different Lamb during the training process. It can be seen that during the pre-training phase all experimental groups converge to a fairly low loss, but the optimal solution is not obtained at this time. As training proceeds, the loss of PINNs does not change much, but the neural network is finely tuned at every point to make the PINN output more consistent with the PDEs. In the ESR-PINN experimental groups, the loss changes dramatically as the algorithm takes different training points. With each iteration of reselected points, the loss tends to rise and then fall, and it is always higher than that of the original PINNs. The loss of the original PINNs is almost constant after 20000 training epochs. However, the L2 error of PINNs at the end of training is large, which indicates that the neural network falls into a local optimum during training. The original PINNs have limited ability to jump out of the local optimum, resulting in disappointing outcomes. The distribution of training points has a significant impact on the performance of the original PINNs, and extremely poor results may well appear (the loss function stays at 1×10^−3 and is difficult to optimize further). The expansion-shrinkage strategy of ESR-PINNs selects the hard-to-optimize points with large errors as extra training points, aiming to reduce the total error in the training process by decreasing their errors. However, adding these high-residual points to the training set naturally leads to higher loss values, which is why the loss function of PINNs is lower than that of ESR-PINNs for some PDEs. These "tough" points have a great impact on the PINN results, so finding and optimizing them appropriately allows the other training points to be better optimized and therefore significantly reduces the L2 error. By dynamically adjusting the distribution of training points while keeping some training points unchanged, the stability of the neural network training can be improved as the point selection iterations go on. The placement of training points reduces the pathology issue that arises in neural networks and improves the numerical precision. However, the experimental findings with loss function fluctuation demonstrate that there is no direct relationship between the numerical accuracy and the loss function values: a low loss may correspond to a large error.

    Fig. 5. The losses of PINNs and ESR-PINNs with different Lamb for the Burgers equation.

    3.2. Allen–Cahn equation

    Next, the Allen–Cahn equation is studied because it is an important second-order nonlinear PDE. It is a classical nonlinear equation originating from the study of phase transformation in alloys, and it has a very wide range of applications in practical problems such as image processing[30] and mean curvature motion.[31] It can be written as
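    A sketch of the usual one-dimensional form, assuming the standard double-well reaction term (its exact scaling here is an assumption), is

        u_t = D u_xx − f(u),   with f(u) ∝ u³ − u,

    where D is the diffusion coefficient given below.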

    where the diffusion coefficient D is 0.001 and the L2 norm is used as the error measure. As a control, we select the PINN setup as follows: the Sobol sequence is used for random point selection, with 400 points on the boundary, 800 for the initial condition, and 8000 training points in the spatio-temporal domain. The neural network structure has three layers with 20 neurons in each layer. In order to improve the convergence speed and accuracy, the initial and boundary conditions are imposed as hard constraints in the experimental setup.[32]

    Table 5. Parameter settings for the Allen–Cahn equation.

    Table 5 shows the parameter settings. The ESR-PINNs method is set up for a total of three iterations. The boundary and initial condition points are set as in the PINNs, and 5000 fixed training points are randomly generated. After three rounds of expansion and contraction, a total of 3000 retained points are selected and added to the training set as new training points. Other settings are the same as those for the Burgers equation, but the total number of training epochs is 40000.

    Table 6. Comparison of L2 errors for the Allen–Cahn equation: PINNs and ESR-PINNs with different Lamb.

    Figure 6 shows the ESR-PINNs solution compared with that of the original PINNs. Table 6 lists the L2 errors of the original PINNs and ESR-PINNs with different Lamb. In the course of the experiments, we found that the best and most stable results are obtained when Lamb = 0.4. The experimental results show a relatively large variance at Lamb = 0.9: when the effect is poor, the L2 error is similar to that of the original PINNs, but when the point selection works well, the L2 error drops to about 0.002. This may be due to the fact that when Lamb is relatively large, the algorithm pays more attention, in the iterative process, to training points with poor optimization effect, i.e., training points located in regions where the residual value changes little before and after the current round of training. The locations with large PDE residuals in this experiment are always concentrated in a small area. Neural networks tend to be more sensitive to low-frequency data during training,[33] and the vast majority of regions have small residuals, which makes these training points difficult to optimize. In ESR-PINNs, especially when Lamb is relatively small, the algorithm takes this part of the training points into full consideration. ESR-PINNs actually optimize the training process, i.e., they shift the optimization from focusing on large-error areas to covering the overall situation as much as possible. The overall error reduction is achieved by bringing down the error in the more numerous areas with moderate error.

    Fig. 6. Numerical solutions of (a) PINNs and (b) ESR-PINNs, and absolute errors of (c) PINNs and (d) ESR-PINNs for the Allen–Cahn equation.

    Fig. 7. The losses of PINNs and ESR-PINNs with different Lamb for the Allen–Cahn equation.

    Figure 7 shows the loss values of the original PINNs compared with those of ESR-PINNs with different Lamb during the training process. The loss curve for Lamb = 0.4 shows that even if the results obtained in the pre-training phase are not satisfactory, ESR-PINNs can still improve the experimental accuracy by dynamic point selection. That is, as the training proceeds, the loss of an experimental group with bad performance in the early period can also be reduced to about the same value as that of the normal experimental groups. The ESR-PINNs better balance the large-error points and the small-variation points, which is probably the reason for the best and most stable results when Lamb is 0.4 (see Table 6).

    3.3. Lid-driven cavity

    The lid-driven cavity, a classical problem in computational fluid dynamics, is chosen as the object of study for the last experiment. This problem is chosen to verify the performance of the method for a time-independent problem, and to confirm that ESR-PINNs can maintain training points in the moderate-error region while converging in the large-error region. As a steady flow problem in a two-dimensional cavity, it is governed by the incompressible Navier–Stokes equations, written in dimensionless form as
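    A sketch of the standard dimensionless steady form, in the variables introduced below, is

        (u · ∇)u + ∇p − (1/Re) ∇²u = 0,
        ∇ · u = 0.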

    where u represents the velocity field and p represents the pressure field. The Reynolds number Re = 100 is chosen in the experiment. A high-resolution dataset generated using the finite difference method is used as the evaluation criterion, where
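    a common choice for this evaluation, assumed here to match the L2 errors reported in Table 8, is the relative L2 error

        ‖e‖_2 = sqrt( Σ_j |u_pred(x_j) − u_ref(x_j)|² ) / sqrt( Σ_j |u_ref(x_j)|² ),

    with u_ref denoting the finite-difference reference solution.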

    We select the PINN setup as follows: the LHS distribution is used for random point selection, with 400 points on the boundary and 3000 training points in the domain. The neural network structure has five layers with 30 neurons in each layer.

    In the lid-driven cavity experiment, the ESR-PINNs method is set up for a total of two iterations. We randomly select 400 points on the boundary, and LHS is applied to randomly generate 2000 fixed training points. Each iteration is trained for 10000 epochs. Table 7 shows the parameter settings. Other settings are the same as those for the Burgers equation, but the total number of training epochs is 40000.

    Table 7. Parameter settings for the lid-driven cavity.

    Figure 8 shows the ESR-PINNs solution compared with the high-precision solution obtained by the grid method.[34] Table 8 gives the L2 errors of the ESR-PINNs and PINNs with different Lamb. The best results are obtained for ESR-PINNs when Lamb = 0.4. In the experimental results, the errors are mainly concentrated at the top-left and top-right corner points. The reason is that a singularity problem arises at these two corner points: the velocity at the top lid is 1 while the velocity at the left and right walls is 0, so the values at the points (0, 1) and (1, 1) are ambiguous. Figure 9(a) shows the results after the first round of iterative expansion, and Fig. 9(b) presents the results after three rounds of expansion and shrinkage. Compared with Fig. 9(a), the distribution of training points in Fig. 9(b) not only adds new training points in all high-error areas, but also gathers the high-error points in high-error areas together, and the aggregation process of training points is generally smooth.

    Fig. 8. Flow in the lid-driven cavity: (a) reference solution using a finite difference solver, (b) prediction of ESR-PINNs, and (c) absolute point-wise errors.

    Table 8. Comparison of L2 errors for the lid-driven cavity: PINNs and ESR-PINNs under different Lamb.

    Fig. 9. Distribution of training points (a) at the end of the first expansion and (b) at the end of the resampling algorithm.

    Figure 10 compares the loss values of the original PINNs and ESR-PINNs under various Lamb conditions throughout training. All experimental groups again converge to a small loss during the pre-training phase. The convergence of the original PINNs slows down as the number of training epochs rises. The loss values of ESR-PINNs are higher than those of the original PINNs because some high-residual points are selected and added to the training set, but the overall performance is better than that of the original PINNs. The loss curves of ESR-PINNs, with their changing selected points, are similar to those in the previous experiments, but the average performance is better than that of PINNs.

    Fig. 10. The losses of PINNs and ESR-PINNs with different Lamb for the lid-driven cavity problem.

    4. Conclusions

    In this study, we reevaluate how training points influence PINN performance. Because the training points are invariant during the training process, the original PINNs may be challenged by an unfavorable distribution of training points; in experiments, this issue comes up frequently. A poor training point distribution may cause the PINNs to enter a local optimum that is impossible to exit, and hence produce an undesirable solution. PINNs may also fall into a particular PDE solution because of a poor distribution: the difference between the PINN solution and the correct solution is then still large, even though the loss may be small. The ESR-PINNs are developed as a solution to this issue.

    In ESR-PINNs, a resampling process is added to the training of PINNs. The whole resampling process is divided into two parts, the first of which is point selection. We propose for the first time that the point selection process should not only focus on the training points with large errors, but also consider the parts that are not optimized during the iterative process. In the second part, inspired by the continuity of functions, we not only place more training points in high-error regions, but also make the aggregation process of training points smoother. Thus, we avoid the problem that the network training breaks down due to a large number of training points concentrated in a small region. We then verify the effectiveness of ESR-PINNs through three experiments, which show that ESR-PINNs can effectively improve the accuracy of PINNs.

    At present, our study is relatively limited, and some aspects cannot be answered accurately, e.g., how to determine the number of fixed training points and the proportion of resampled points, and how many rounds of the expansion-shrinkage process are appropriate. In addition, this method can be inserted into PINNs as a plug-and-play tool, but the specific parameters need to be set in conjunction with the PDEs.

    Acknowledgements

    Project supported by the National Key Research and Development Program of China (Grant No. 2020YFC1807905), the National Natural Science Foundation of China (Grant Nos. 52079090 and U20A20316), and the Basic Research Program of Qinghai Province (Grant No. 2022-ZJ-704).
