
    End-to-end differentiable learning of turbulence models from indirect observations


Carlos A. Michelén Ströfer, Heng Xiao

    Kevin T. Crofton Department of Aerospace and Ocean Engineering, Virginia Tech, Blacksburg, Virginia, United States

ABSTRACT: The emerging push of the differentiable programming paradigm in scientific computing is conducive to training deep learning turbulence models using indirect observations. This paper demonstrates the viability of this approach and presents an end-to-end differentiable framework for training deep neural networks to learn eddy viscosity models from indirect observations derived from the velocity and pressure fields. The framework consists of a Reynolds-averaged Navier–Stokes (RANS) solver and a neural-network-represented turbulence model, each accompanied by its derivative computations. For computing the sensitivities of the indirect observations to the Reynolds stress field, we use the continuous adjoint equations for the RANS equations, while the gradient of the neural network is obtained via its built-in automatic differentiation capability. We demonstrate the ability of this approach to learn the true underlying turbulence closure when one exists by training models using synthetic velocity data from linear and nonlinear closures. We also train a linear eddy viscosity model using synthetic velocity measurements from direct numerical simulations of the Navier–Stokes equations, for which no true underlying linear closure exists. The trained deep-neural-network turbulence model showed predictive capability on similar flows.

Keywords: Turbulence modeling; Machine learning; Adjoint solver; Reynolds-averaged Navier–Stokes equations

There is still a practical need for improved closure models for the Reynolds-averaged Navier–Stokes (RANS) equations. Currently, the most widely used turbulence models are linear eddy viscosity models (LEVM), which presume the Reynolds stress is proportional to the mean strain rate. Although widely used, LEVM do not provide accurate predictions in many flows of practical interest, including the inability to predict secondary flows in noncircular ducts [1]. Alternatively, non-linear eddy viscosity models (NLEVM) capture nonlinear relations from both the strain and rotation rate tensors. NLEVM, however, do not result in consistent improvement over LEVM and can suffer from poor convergence [2]. Data-driven turbulence models are an emerging alternative to traditional single-point closures. Data-driven NLEVM use the integrity basis representation [3,4] to learn the mapping from the velocity gradient field to the normalised Reynolds stress anisotropy field, and retain the transport equations for turbulence quantities from a traditional model.

It has been natural to train such models using Reynolds stress data [4,5]. However, Reynolds stress data from high-fidelity simulations, i.e. from direct numerical simulations (DNS) of the Navier–Stokes equations, is mostly limited to simple geometries and low Reynolds numbers. It is therefore desirable to train with indirect observations, such as quantities based on the velocity or pressure fields, for which experimental data is available for a much wider range of complex flows. Such measurements include full field data, e.g. from particle image velocimetry, sparse point measurements such as from pressure probes, and integral quantities such as lift and drag. Training with indirect observations has the additional advantage of circumventing the need to extract turbulence scales from the high-fidelity data that are consistent with the RANS modelled scales [5].

Recently, Zhao et al. [6] learned a zonal turbulence model for the wake-mixing regions in turbomachines in symbolic form (e.g., polynomials and logarithms) from indirect observation data by using genetic algorithms. Similarly, Saïdi et al. [7] learned symbolic algebraic Reynolds stress models generalisable for two-dimensional separated flows. However, while symbolic models are easier to interpret, they may have limited expressive power as compared to, for instance, deep neural networks [8], which are successive compositions of nonlinear functions. Symbolic models may therefore not be generalisable and may be limited to zonal approaches. More importantly, gradient-free optimisation methods such as genetic programming may not be as efficient as gradient-descent methods, and the latter should be used whenever available [9]. In particular, deep learning methods [10] have achieved remarkable success in many fields and represent a promising approach for data-driven NLEVM [4].

Fig. 1. Schematic of the end-to-end differentiable training framework. The framework consists of two main components, the turbulence model and the observation operator, each of which has a forward and a backwards (adjoint) model. For any value of the trainable parameters w, the gradient of the objective function J can be obtained by solving these four problems. The turbulence model consists of a deep neural network representing the closure functions g(θ) in the integrity basis representation and transport equations T(k, t_τ) = 0 for the turbulence quantities. The observation operator consists of solving the RANS equations with the proposed turbulence model and extracting the quantities of interest that are compared to the observations in the cost function J.

A major obstacle for gradient-based learning from indirect observations is that a RANS solver must be involved in the training and the RANS sensitivities are required to learn the model. While such sensitivities can be obtained by using adjoint equations, which have long been used in aerodynamic shape optimisation [11], these are not generally readily available or straightforward to implement. The emerging interest in differentiable programming is resulting in efficient methods to develop adjoints accompanying physical models, including modern programming languages that come with built-in automatic differentiation [12] and neural-network-based solutions of partial differential equations [13]. Recently, Holland et al. [14] used the discrete adjoint to learn a corrective scalar multiplicative field in the production term of the Spalart–Allmaras transport model. This is based on an alternative approach to data-driven turbulence modeling [15] in which empirical correction terms for the turbulence transport equations are learned while retaining a traditional linear closure (LEVM). In this work we demonstrate the viability of training deep neural networks to learn general eddy viscosity closures (NLEVM) using indirect observations. We embed a neural-network-represented turbulence model into a RANS solver using the integrity basis representation, and as a proof of concept we use the continuous adjoint equations to obtain the required RANS sensitivities. This leads to an end-to-end differentiable framework that provides the gradient information needed to learn turbulence models from indirect observations.

It is worth noting that the intrinsic connection between neural networks and adjoint differential equation solvers has long been recognized in the applied mathematics community [16,17]. Recent works have explored this idea in learning constitutive or closure models in computational mechanics [18], in non-Newtonian fluid mechanics [19], and specifically in turbulent flows [20,21]. However, note that the representation of turbulent stress in [20] lacked tensorial consistency.

    Differentiable framework for learning turbulence models

In this proposed framework a neural network is trained by optimising an objective function that depends on quantities derived from the network's outputs rather than on those outputs directly. The training framework is illustrated schematically in Fig. 1 and consists of two components: the turbulence model and the objective function. Each of these two components has a forward model that propagates inputs to outputs and a backwards model that provides the derivatives of the outputs with respect to the inputs or parameters. The gradient of the objective function J with respect to the network's trainable parameters w is obtained by combining the derivative information from the two components through the chain rule as

∂J/∂w = (∂J/∂τ)(∂τ/∂w),    (1)

where τ is the Reynolds stress tensor predicted by the turbulence model.
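As a rough illustration of Eq. (1) in code, the sketch below assumes a PyTorch network (here called g_net, a hypothetical name) and treats the adjoint RANS solver as a black box that supplies ∂J/∂τ; the contraction with ∂τ/∂w is then a single vector-Jacobian product through the network. This is a minimal sketch of the idea, not the authors' implementation.

```python
import torch

def reynolds_stress(g, T, k):
    # tau = 2k (sum_n g(n) T(n) + I/3): a minimal form of Eq. (2), with
    # g of shape (cells, n_basis) and T of shape (cells, n_basis, 3, 3).
    b = torch.einsum('cn,cnij->cij', g, T)
    eye = torch.eye(3).expand_as(b)
    return 2.0 * k[:, None, None] * (b + eye / 3.0)

def grad_J_wrt_w(g_net, theta, T, k, dJ_dtau):
    """Evaluate Eq. (1): the adjoint supplies dJ/dtau (cells, 3, 3) and
    backpropagation supplies dtau/dw via a vector-Jacobian product."""
    g = g_net(theta)                # forward pass: coefficients g(n)
    tau = reynolds_stress(g, T, k)  # Reynolds stress field
    tau.backward(gradient=dJ_dtau)  # contracts dJ/dtau with dtau/dw
    return [p.grad for p in g_net.parameters()]
```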

    Forward Model

For given values of the trainable parameters w, the forward model evaluates the cost function J, which is the discrepancy between predicted and observed quantities. The forward evaluation consists of two main components: (i) evaluating the neural network turbulence model and (ii) mapping the network's outputs to observation space by first solving the incompressible RANS equations.

The turbulence model, shown in the left box in Fig. 1, maps the velocity gradient field to the Reynolds stress field. The integrity basis representation for a general eddy viscosity model [3] is given as

a = τ − (2/3)k I = 2k b,    b = Σ_n g(n)(θ_1, θ_2, …) T(n),    (2)

where a is the anisotropic (deviatoric) component of the Reynolds stress, b is the normalised anisotropic Reynolds stress, T and θ are the basis tensor functions and scalar invariants of the input tensors, g are the scalar coefficient functions to be learned, and I is the second rank identity tensor. The input tensors are the symmetric and antisymmetric components of the normalised velocity gradient: S = (t_τ/2)(∇u + ∇u^T) and R = (t_τ/2)(∇u − ∇u^T), where t_τ is the turbulence time scale and u is the mean velocity. The linear and quadratic terms in the integrity basis representation are given as

T(1) = S,    T(2) = SR − RS,    T(3) = S² − (1/3){S²}I,    T(4) = R² − (1/3){R²}I,    (3)

where curly braces {·} indicate the tensor trace.
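To make the representation concrete, the following sketch builds the normalised inputs S and R and the four basis tensors of Eq. (3) from a batched velocity-gradient field; taking θ_1 = {S²} and θ_2 = {R²} as the invariants is our assumption, consistent with the relation θ_1 ≈ −θ_2 quoted later for the duct flow.

```python
import torch

def inputs_and_basis(grad_u, t_tau):
    """Normalised S, R and basis tensors T(1)-T(4) of Eq. (3).
    grad_u: (cells, 3, 3) mean velocity gradient; t_tau: (cells,)."""
    S = 0.5 * t_tau[:, None, None] * (grad_u + grad_u.transpose(1, 2))
    R = 0.5 * t_tau[:, None, None] * (grad_u - grad_u.transpose(1, 2))
    eye = torch.eye(3).expand_as(S)
    tr = lambda A: torch.einsum('cii->c', A)  # tensor trace {.}
    T1 = S
    T2 = S @ R - R @ S
    T3 = S @ S - tr(S @ S)[:, None, None] * eye / 3.0
    T4 = R @ R - tr(R @ R)[:, None, None] * eye / 3.0
    theta = torch.stack([tr(S @ S), tr(R @ R)], dim=1)  # assumed invariants
    return torch.stack([T1, T2, T3, T4], dim=1), theta
```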

Different eddy viscosity models differ in the form of the scalar coefficient functions g(θ) and in the models for the two turbulence scale quantities k and t_τ. We represent the scalar coefficient functions using a deep neural network with 10 hidden layers of 10 neurons each and a rectified linear unit (ReLU) activation function. The turbulence scaling parameters are modelled using traditional transport equations T(k, t_τ) = 0 with the TKE production term P modified to account for the expanded formulation of the Reynolds stress: P = τ : ∇u, where : denotes double contraction of tensors.
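The network described above is small enough to write down directly; a possible PyTorch definition (the helper name is ours) is:

```python
import torch.nn as nn

def make_g_net(n_invariants=2, n_coeffs=4, width=10, depth=10):
    """Fully connected network: 10 hidden layers of 10 neurons with
    ReLU activations, mapping invariants theta to coefficients g."""
    layers, d = [], n_invariants
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, n_coeffs))
    return nn.Sequential(*layers)
```

For the channel flow case below (one input, one output) this layer count gives 1021 trainable parameters, matching the number quoted in the text.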

The RANS solver along with its post-processing serves as an observation operator that maps the turbulence model's output (Reynolds stress τ) to the observation quantities (e.g., sparse velocity measurements, drag). This is shown in the right box in Fig. 1. The first step in this operation is to solve the RANS equations with the given Reynolds stress closure to obtain the mean velocity u and pressure p fields. This is followed by post-processing to obtain the observation quantities (e.g., sampling velocities at certain locations or integrating surface pressure to obtain drag). When solving the RANS equations, explicit treatment of the divergence of the Reynolds stress can make the RANS equations ill-conditioned [22,23]. We treat part of the linear term implicitly by use of an effective viscosity ν_eff, which is easily obtained since with the integrity basis representation the linear term is learned independently. The incompressible RANS equations are then given as

(u·∇)u − ∇·(ν_eff ∇u) = −∇p⋆ − ∇·a_NL + s,    ∇·u = 0,    (4)

where the term ∇·(ν_eff ∇u) is treated implicitly. Here a_NL represents the non-linear component of the Reynolds stress anisotropy, the isotropic component of the Reynolds stress is incorporated into the pressure term p⋆, and s is the momentum source term representing any external body forces. The coefficients g(i) are outputs of the neural-network-based turbulence model that has the fields θ as input.
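Since the linear term is learned independently, its coefficient can be folded into the effective viscosity. A minimal sketch, under the assumption that the linear part of Eq. (2) is a_lin = 2k g(1) S, so that it acts as an eddy viscosity ν_t = −g(1) k t_τ (positive for g(1) = −0.09):

```python
def effective_viscosity(nu, g1, k, t_tau):
    # nu_eff = nu + nu_t, with nu_t = -g1 * k * t_tau from the linear
    # term of the closure; div(nu_eff * grad(u)) can then be treated
    # implicitly while the nonlinear anisotropy a_NL remains explicit.
    return nu + (-g1) * k * t_tau
```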

    Adjoint Model

For a proposed value of the trainable parameters w, the backwards model (represented by left-pointing arrows in Fig. 1) provides the gradient ∂J/∂w of the cost function with respect to the trainable parameters. This is done by separately obtaining the two terms on the right-hand side of Eq. (1): (i) the gradient of the Reynolds stress ∂τ/∂w from the neural network turbulence model and (ii) the sensitivity ∂J/∂τ of the cost function to the Reynolds stress, using their respective adjoint models. Combining the two adjoint models results in an end-to-end differentiable framework, whereby the gradient of observation quantities (e.g. sparse velocity) with respect to the neural network's parameters can be obtained.

The gradient of the Reynolds stress with respect to the neural network's parameters w is obtained in two parts using the chain rule. The gradient of the neural network's outputs with respect to its parameters, ∂g/∂w, is efficiently obtained by backpropagation, which is a reverse accumulation automatic differentiation algorithm for deep neural networks that applies the chain rule on a per-layer basis. The sensitivities of the Reynolds stress to the coefficient functions are obtained as ∂τ/∂g(n) = 2k T(n) from differentiation of Eq. (2), which is a linear tensor equation.

For the sensitivity of the objective function with respect to the Reynolds stress we derived the appropriate continuous adjoint equations (Appendix A). Since the Reynolds stress must satisfy the RANS equations, this is a constrained optimisation problem. The problem is reformulated as the optimisation of an unconstrained Lagrangian function with the Lagrange multipliers described by the adjoint equations. The resulting adjoint equations are

For interested readers, Appendix A presents the derivation of the continuous adjoint equations, while Appendix B shows how to use this approach for different types of experimental measurements. Specifically, Appendix B shows how to express different objective functions as integrals over the domain and boundary, as required by the adjoint equations. Additional details can also be found in Ref. [27].

    Gradient Descent Procedure

The training is done using the Adam algorithm, a gradient descent algorithm with momentum and adaptive gradients commonly used in training deep neural networks. The default values for the Adam algorithm are used, including a learning rate of 0.001. The training requires solving the RANS equations at each training step. In a given training step the inputs θ_i are updated based on the previous RANS solution and scaled to the range 0–1, and the RANS equations are then solved with fixed values for the coefficient functions g. The inputs θ_i and their scaling parameters are fixed at a given training step and converge alongside the main optimisation of the trainable parameters w.
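A schematic training step, combining the pieces above, might look as follows; rans and adjoint are hypothetical wrappers around the CFD solver and its continuous adjoint, and reynolds_stress is the helper from the earlier sketch:

```python
import torch

def train(g_net, rans, adjoint, n_steps):
    """Adam with default settings (learning rate 0.001); the inputs are
    rescaled to [0, 1] from the previous RANS solution, and the RANS
    equations are solved with the coefficients held fixed."""
    opt = torch.optim.Adam(g_net.parameters(), lr=0.001)
    for _ in range(n_steps):
        theta, T, k = rans.features()             # from previous RANS solution
        lo = theta.min(dim=0).values
        hi = theta.max(dim=0).values
        theta = (theta - lo) / (hi - lo + 1e-12)  # scale inputs to 0-1
        g = g_net(theta)
        rans.solve(g.detach())                    # RANS with fixed coefficients
        dJ_dtau = adjoint.solve()                 # sensitivity of J to tau
        opt.zero_grad()
        tau = reynolds_stress(g, T, k)
        tau.backward(gradient=dJ_dtau)            # Eq. (1)
        opt.step()
```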

Initialisation of the neural network's parameters requires special consideration. The usual practice of random initialisation of the weights is not suitable in this case since it leads to divergence of the RANS solution. We use existing closures (e.g. a laminar model with g(i) = 0 or a linear model with g(1) = −0.09) to generate data for pre-training the neural network and thus provide a suitable initialisation. This has the additional benefit of embedding existing insight into the training by choosing an informed initial point in the parameter space. When pre-training to constant values (e.g. g(1) = −0.09) we add noise to the pre-training data, since starting from very accurate constant values can make the network difficult to train.
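Pre-training is an ordinary supervised regression; a minimal sketch for the constant-coefficient, single-output case, with the noise level as an assumed free parameter:

```python
import torch

def pretrain(g_net, theta, g_target=-0.09, noise=0.01, n_epochs=2000):
    """Fit the network to an existing closure (a constant g(1) here),
    adding small noise so the network does not start from an overly
    exact constant value and become difficult to train."""
    target = g_target + noise * torch.randn(theta.shape[0], 1)
    opt = torch.optim.Adam(g_net.parameters(), lr=0.001)
    for _ in range(n_epochs):
        opt.zero_grad()
        loss = torch.mean((g_net(theta) - target) ** 2)
        loss.backward()
        opt.step()
```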

    Test cases

Fig. 2. Results of learning a LEVM from a single velocity measurement in a turbulent channel flow: (a) the learned coefficient function, (b) the anisotropic Reynolds stress field, and (c) the velocity. The final (trained) results overlap with the truth and the two are visually indistinguishable.

The viability of the proposed framework is demonstrated by testing on three test cases. The first two cases use synthetic velocity data obtained from a linear and a non-linear closure, respectively, to train the neural network. The use of synthetic data allows us to evaluate the ability of the training framework to learn the true underlying turbulence closure when one exists. In the final test case, realistic velocity measurements, obtained from a DNS solution and for which no known true underlying closure exists, are used to learn a linear eddy viscosity model. The trained LEVM is then used to predict similar flows and the predictions are compared to those from a traditional LEVM.

    Learning a Synthetic LEVM from Channel Flow

As a first test case we use a synthetic velocity measurement at a single point from the turbulent channel flow to learn the underlying linear model. The flow has a Reynolds number of 10,000 based on bulk velocity u_b and half channel height h. The turbulence transport equations used are those of the k–ω model of Wilcox [28], and the synthetic model corresponds to a constant g(1) = −0.09. For the channel flow there is only one independent scalar invariant and T(1) is the only linear tensor function in the basis. We therefore use a neural network with one input and one output which maps θ_1 ↦ g(1). The network has 1021 trainable parameters and is pre-trained to the laminar model g(1) = 0. The sensitivity of the predicted point velocity to the Reynolds stress is obtained by solving the adjoint equations with J_Ω equal to the velocity field times a radial basis function (see Appendix B). Figure 2 shows the results of the training. The trained model not only results in the correct velocity field, but the correct underlying model is learned.

    Learning a Synthetic NLEVM from Flow Through a Square Duct

As a second test case we use a synthetic full field velocity measurement from flow in a square duct to learn the underlying nonlinear model. The flow has a Reynolds number of 3,500 based on bulk axial velocity u_b and half duct side length h. This flow contains a secondary in-plane flow that is not captured by LEVM [1]. For the objective function, J_Ω is the difference between the measured and predicted fields, with the discrepancy of the in-plane velocity scaled by a factor of 1000 so as to have a similar weight to the axial velocity discrepancy. The NLEVM is the Shih quadratic model [29] which, using the basis in Eq. (3), can be written as [27]:

For the flow in a square duct only four combinations of the Reynolds stress components affect the predicted velocity [1]: τ_xy and τ_xz in the axial equation, and τ_yz and (τ_zz − τ_yy) in the in-plane equation. In this flow the in-plane velocity gradients are orders of magnitude smaller than the gradients of the axial velocity u_x. For these reasons only two combinations of coefficient functions can be learned [27], g(1) and the combination g(2) − 0.5g(3) + 0.5g(4), and there is only one independent scalar invariant with θ_1 ≈ −θ_2.

The neural network has two inputs and four outputs and was pre-trained to the LEVM with g(1) = −0.09. The turbulence transport equations used are those from the Shih quadratic k–ε model. Figure 3 shows the learned model, which shows improved agreement with the truth. The combination g(2) − 0.5g(3) + 0.5g(4) shows good agreement only for the higher range of the scalar invariant θ_1. This is because the smaller scalar invariants correspond to smaller velocity gradients and smaller magnitudes of the tensors T. The velocity field is therefore expected to be less sensitive to the value of the Reynolds stress in these regions. It was observed that the smaller range of the invariant, where the learned model fails to capture the truth, occurs mostly in the center channel. Figure 4 shows the ability of the learned model to capture the correct velocity, including predicting the in-plane velocities, and the Reynolds stress. The trained model fails to predict the correct τ_yz in the center channel, but this does not propagate to the predicted velocities. Additionally, it was observed that obtaining significant improvement in the velocity field requires only a few tens of training steps and only requires the coefficients to have roughly the correct order of magnitude. On the other hand, obtaining better agreement of the scalar coefficients took 1–2 orders of magnitude more training steps with diminishing returns in velocity improvement. This shows the importance of using synthetic data to evaluate the ability of a training framework to learn the true underlying model when one exists, rather than only comparing the quantities of interest.

    Learning a LEVM from Realistic Data of Flow Over Periodic Hills

Fig. 3. Results of learning a NLEVM, the Shih quadratic model, from full field velocity measurements in flow through a square duct. The results shown are the two combinations of coefficient functions that have an effect on velocity, plotted against the scalar invariant θ_1 ≈ −θ_2.

Fig. 4. (a) Velocity and anisotropic Reynolds stress results of learning a NLEVM from full field velocity measurements in flow through a square duct. The u_z and b_xz fields are the reflection of u_y and b_xy along the diagonal. (b) Schematic of flow through a square duct showing the secondary in-plane velocities. The simulation domain (bottom left quadrant) is highlighted.

Fig. 5. Velocity results of learning an eddy viscosity model from sparse velocity data of flow over periodic hills. Six different profiles of the u_x velocity are shown. The training case corresponds to the baseline geometry α = 1.

As a final test case, a LEVM is trained using sparse velocity measurements from DNS of flow over periodic hills. The DNS data come from Xiao et al. [30], who performed DNS of flow over periodic hills of varying slopes. This flow is characterised by a recirculation region on the leeward side of the hill, and scaling the hill width (scale factor α) modifies the slope and the characteristics of the recirculation region (e.g. from mild separation for α = 1.5 to massive separation for α = 0.5). For all flows, the Reynolds number based on hill height h and bulk velocity u_b through the vertical profile at the hill top is Re = 5,600. The training data consist of four point measurements of both velocity components in the flow over periodic hills with the baseline α = 1 geometry. The two components of velocity are scaled equally in the objective function. The training data and training results are shown in Fig. 5. The neural network in this case has one input and one output and is pre-trained to laminar flow, i.e. g(1) = 0. The trained model is a spatially varying LEVM, g(1) = g(1)(θ_1), that closely predicts the true velocity in most of the flow with the exception of the free shear layer on the leeward side of the hills.

To test the extrapolation performance of the trained LEVM, we use it to predict the flow over the other periodic hill geometries, α ∈ {0.5, 0.8, 1.2, 1.5}, and compare the results to those with the k–ω model, g(1) = −0.09. The results for α = 0.5 and α = 1.5 are shown in Fig. 6. For the α > 1.0 cases the trained linear model outperforms the k–ω model in the entire flow. For the α < 1.0 cases the trained model results in better velocity predictions in some regions, particularly the upper channel, while the k–ω model results in better velocities in the lower channel.

Fig. 6. Comparison of horizontal velocity u_x predictions using the trained eddy viscosity model g(1) = g(1)(θ_1) and the k–ω model g(1) = −0.09 on two periodic hill geometries: (a) α = 0.5 and (b) α = 1.5.

In this paper we present a framework to train deep learning turbulence models using quantities derived from velocity and pressure that are readily available for a wide range of flows. The method was first tested using synthetic data obtained from two traditional closure models: the linear k–ω and the Shih quadratic k–ε models. These two cases demonstrate the ability to learn the true underlying turbulence closure from measurement data when one exists. The method was then used to learn a linear eddy viscosity model from synthetic sparse velocity data derived from DNS of flow over periodic hills. The trained model was used to predict flow over periodic hills of different geometries.

This work demonstrates that deep learning turbulence models can be trained from indirect observations when the relevant sensitivities of the RANS equations are available. With the growing interest in differentiable programming for scientific simulations, it is expected that the availability of derivative information will become more commonplace in scientific and engineering computations, making it more seamless to couple scientific computations with novel deep learning methods. Future work includes investigating different or multiple types of observation data (e.g., surface friction, drag and lift coefficients) and parametric studies of the effect of observation locations on the learned model.

    Declaration of Competing Interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgements

This material is based on research sponsored by the U.S. Air Force under agreement number FA865019-2-2204. The U.S. Government is authorised to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.

Appendix A. Derivation of adjoint equations

The adjoint equations derived here provide the sensitivity of a cost function J = J(u, p) with respect to the Reynolds stress τ, subject to the constraint that τ, u, and p must satisfy the RANS equations. The procedure and notation used here are similar to those in Ref. [24], which derived the adjoint equations for the sensitivity of the Navier–Stokes equations with a Darcy term to the scalar porosity field. For a detailed derivation of the boundary conditions, which are identical for both cases, the reader is referred to Ref. [24].

The RANS equations can be written as R(u, p, τ) = 0, where R collects the momentum and continuity residuals.

The minimization of the cost function J subject to the RANS constraint R = 0 can be formulated as the minimization of the unconstrained Lagrangian function

L = J + ∫_Ω ψ · R dΩ,    (A2)

where Ω is the flow domain and ψ = (û, −p̂) consists of four Lagrange multipliers, with û and p̂ referred to as the adjoint velocity and adjoint pressure. The negative of the adjoint pressure is used as the Lagrange multiplier so that in the resulting adjoint equations the adjoint pressure plays an analogous role to that of the physical pressure in the RANS equations. Since the velocity and pressure depend on the Reynolds stress, the total variation of L is

Here we have ignored the variation of the momentum source term s, which is correct only when the source term is constant, i.e. it does not depend on the other flow variables. If the momentum source term depends on the other flow variables, e.g. to achieve a prescribed bulk velocity as in the test cases used here, ignoring its variation constitutes an approximation. This is similar to the frozen turbulence approximation common in many adjoint-based optimization works [31], where the variations of the turbulence quantities are ignored. Since the constraint R(u, p, τ) is zero everywhere, the Lagrange multipliers can be chosen freely and are chosen such that

to avoid calculating the sensitivities of the other flow variables (u, p) with respect to the Reynolds stress. Equation (A4) is the adjoint condition, which leads to the adjoint equations and boundary conditions. Equation (A3) becomes

which leads to an expression for the desired sensitivity ∂L/∂τ in terms of the adjoint variables.

The variations of L are obtained as

with the variations of the RANS equations R as

where the higher order term (δu·∇)δu in δ_u R was ignored. Using these results, Eq. (A4) becomes

Integration by parts is used to eliminate the derivatives of the perturbations (e.g. ∇·δu) from the expression. The results of integration by parts are summarized in Table A1, where Γ = ∂Ω is the boundary of the domain Ω. Integration by parts is done twice on the term with the second-order derivative ∇²δu, which leaves a first-order derivative in the boundary integral. One of the remaining terms does not require integration by parts, but doing so leads to a more convenient adjoint equation that requires only the primal velocity and not its gradient. Finally, the cost function J is written in terms of integrals over the interior domain and the boundary as

Table A1. Integration by parts.

and Eq. (A8) can be written as

The volume and boundary integrals must vanish separately: the volume integrals lead to the adjoint equations, while the boundary integrals lead to the boundary conditions.

    Adjoint equations

    The adjoint equations are obtained by requiring the volume integral terms to vanish as

Since Eq. (A11) must hold for any perturbations δu and δp that satisfy the RANS equations, the bracketed terms in each of the two integrals must vanish independently. This results in the adjoint equations

    a set of linear partial differential equations.

    Boundary Conditions

The boundary conditions are obtained by setting the boundary integrals in Eq. (A10) to zero and requiring the terms that involve δu and δp to vanish independently as

This can be used to derive the adjoint boundary conditions corresponding to different primal boundary conditions. The resulting adjoint boundary conditions for two typical primal boundary conditions are derived in Othmer [24] and summarized here. The first is a primal boundary with constant velocity and zero pressure gradient, such as a wall or inlet. The corresponding adjoint boundary conditions are

where ∇_t is the tangential, in-plane, component of the gradient. Equation (A15a) is used to determine the boundary values of the tangential component of the adjoint velocity. Equation (A15b) then gives a relation between the normal component of the adjoint velocity and the adjoint pressure that can be satisfied by enforcing the resulting adjoint pressure.

    Sensitivity

We now derive an expression for the sensitivity of the Lagrangian function with respect to the Reynolds stress tensor in terms of the adjoint variables. Using the variations in Eqs. (A6) and (A7) and integration by parts (Table A1), Eq. (A5) becomes

Based on Eq. (A16), the sensitivity of L with respect to the Reynolds stress at a point x_i in the domain is

and with respect to a point x_i on the boundary it is

Appendix B. Formulation of objective functions

The adjoint method requires the function for which the gradient is sought to be expressible as integrals over the domain Ω and boundary Γ = ∂Ω as

J = ∫_Ω J_Ω dΩ + ∫_Γ J_Γ dΓ.    (B1)

    This appendix presents how to express the objective function in this manner for some typical experimental measurements.
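On a finite-volume mesh the two integrals in Eq. (B1) reduce to weighted sums; a minimal sketch, assuming cell volumes and boundary face areas are available from the mesh:

```python
import numpy as np

def objective(J_omega, cell_volumes, J_gamma, face_areas):
    """Discrete form of Eq. (B1): domain and boundary integrals
    approximated as sums over cells and boundary faces."""
    return np.sum(J_omega * cell_volumes) + np.sum(J_gamma * face_areas)
```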

    Full Field or Full Boundary Measurements

First, as a straightforward case, we consider full field or boundary measurements, such as full field velocity or the pressure distribution along a wall boundary. In these cases the objective function, i.e. the discrepancy between predicted and measured quantities, can be directly expressed in the form of Eq. (B1). For instance, for the full field velocity discrepancy the objective function based on mean squared error is given by

J_Ω = ‖u − u⋆‖²,    J_Γ = 0,    (B2)

where the superscript ⋆ indicates experimental values, and its derivatives are

∂J_Ω/∂u = 2(u − u⋆),    ∂J_Ω/∂p = 0.    (B3)

Similarly, for the pressure distribution along a wall the objective function is given by J_Ω = 0, with J_Γ = ‖p − p⋆‖² for points on the wall boundary and J_Γ = 0 for points on any other boundaries.

    Sparse and Integral Measurements

Next, we consider an objective function for the mean squared error discrepancy of more general experimental measurements, including sparse measurements such as point velocity measurements or integral quantities such as drag on a solid boundary. The objective function is written as

J = ‖d(τ) − d⋆‖²_{R⁻¹},    (B4)

where d⋆ is a vector of measurements, R is the measurement covariance matrix that captures the measurement uncertainties, and ‖x‖_W indicates the L2-norm of a vector x with a weight matrix W. The RANS-predicted measurements can be written as

d(τ) = [J(1), J(2), …, J(N)]ᵀ,    (B5)

with each J(i) written in the form of Eq. (B1). The required sensitivity of the objective function in Eq. (B4) is given by

∂J/∂τ = 2 [d′(τ)]ᵀ R⁻¹ (d(τ) − d⋆),    (B6)

where the i-th row of the sensitivity matrix d′(τ) is given by ∂J(i)/∂τ, obtained by solving the adjoint equations with function J(i). This requires one adjoint solve for each measurement. For the common situation where the measurements are independent, R is diagonal with entries corresponding to the variance σ_i² of each measurement, and Eq. (B6) can be written as

∂J/∂τ = 2 Σ_i [(d_i − d_i⋆)/σ_i²] ∂J(i)/∂τ,    (B7)

where the discrepancy for each measurement is weighted by the measurement's variance. Alternatively, for the purpose of learning a turbulence model one might want to weight different types of measurements differently, e.g. to manually give more weight to regions of rapid change such as boundary and shear layers, or to use a more formal importance sampling approach. The weighting in Eq. (B7) then becomes λ_i/σ_i², and the weight matrix in the norm in Eq. (B4) becomes W = D_λ R⁻¹, where λ = (λ_1, …, λ_N) is the vector of weights and D_λ is the diagonal matrix with entries λ.
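The assembly of Eq. (B7), including the optional per-measurement weights λ_i, is a single weighted sum over the adjoint sensitivities; a sketch:

```python
import numpy as np

def dJ_dtau_total(d_pred, d_obs, sigma2, dJi_dtau, lam=None):
    """Eq. (B7): weight each per-measurement adjoint sensitivity
    dJi_dtau (n_meas, ...) by its discrepancy over its variance,
    optionally scaled by extra weights lambda_i."""
    lam = np.ones_like(sigma2) if lam is None else lam
    coeff = 2.0 * lam * (d_pred - d_obs) / sigma2   # shape (n_meas,)
    return np.tensordot(coeff, dJi_dtau, axes=1)    # sum over measurements
```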

We now provide two concrete examples of expressing measurement discrepancies in the form of Eqs. (B4) and (B5). First, for a single point measurement of u_x,

d = ∫_Ω M u_x dΩ,

where the mask field M is the Dirac delta function δ_i for the measurement location. For the discretized problem the mask can be approximated using a radial function r centered at the measurement location and normalized such that the integral of the mask field is one (see the sketch after this example). As a second example, for the drag on a wall boundary,

where x_D is the unit vector in the direction of drag and the mask is one for points on the wall boundary and zero for points on any other boundaries.
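A possible discrete construction of the point-measurement mask, using a Gaussian as the radial function (the Gaussian shape is our choice, not specified in the text):

```python
import numpy as np

def point_mask(cell_centers, cell_volumes, x_meas, radius):
    """Radial approximation of the Dirac-delta mask, normalised so the
    discrete integral sum(M * V) over the mesh equals one."""
    r2 = np.sum((cell_centers - x_meas) ** 2, axis=1)
    M = np.exp(-0.5 * r2 / radius ** 2)
    return M / np.sum(M * cell_volumes)
```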
