A Data-Driven Deep Learning Model for the Unclosed Terms in Turbulent Scalar Mixing
Yu Jian (于建)
(Center for Informatization and Network Management, Liaoning Technical University, Liaoning 123000)
借鑒物理信息深度學(xué)習(xí)和深隱藏物理模型,提出一個從噪聲的時空概率密度函數(shù)測算中辨識湍流模型的框架。該模型適用于由運積單點概率密度函數(shù)方程描述的條件Fickian標(biāo)量期望擴散或耗散。采用均勻湍流二元標(biāo)量混合的振幅映射閉包/Johnsohn-Edgeworth變換模型獲得精確解,對所構(gòu)建的模型進行了測評。
Keywords: probability density function model; turbulent mixing; model identification; deep learning
Turbulent scalar mixing has been widely studied and applied over recent decades [1-3]. In Reynolds-averaged Navier-Stokes (RANS) simulations, the transported probability density function (PDF) of the turbulence describes this process explicitly. For the single-point PDF descriptor, the mixing of a Fickian scalar appears through the conditional expected dissipation and/or the conditional expected diffusion, both of which are unclosed [2]. A similar closure problem arises in large eddy simulation (LES) via the filtered density function (FDF) [4]. Developing closures for these unclosed terms has therefore become an active research topic. The primary goal of turbulence modeling is to find accurate closures for the unclosed terms that appear in the PDF/FDF transport equations. In practice, the unclosed terms are commonly expressed in terms of closed quantities. The form of such a closure is based on physical inspection of the problem at hand and is therefore inherently error-prone, which is a major source of modeling uncertainty in turbulence closure.
This paper introduces a new approach to closure for turbulent scalar mixing, in which the unclosed terms are learned from high-fidelity observations. Such observations may come from direct numerical simulation (DNS), as in refs. [5-7], or from spatio-temporally resolved experimental measurements, as in refs. [8-9]. In DNS, the unclosed terms can obviously be extracted directly from the simulation results; however, performing DNS is too expensive for practical applications. Extracting closures from experimental data, on the other hand, requires computing derivatives in time, physical space, and composition space (in some cases high-order derivatives) from the measurements, which is difficult and can introduce new closure uncertainties associated with the finite spatio-temporal resolution of the measurements. The ultimate goal of this work is to develop a closure-discovery framework that learns closures from sparse high-fidelity data such as experimental measurements. The proposed framework replaces the customary "guessing" of closure forms with a data-driven approach that reveals the closure from data in a systematic manner. The method builds on deep learning for partial differential equations [10] and data-driven modeling strategies [11], in particular physics-informed deep learning [12] and deep hidden physics models [13].
Binary scalar mixing is commonly used in the development of PDF closures [14-15], and it serves as the demonstration case in this study. The problem is typically posed in a spatially homogeneous flow, in which the temporal evolution of the scalar PDF is considered. In this setting, a closure framework capable of accurately predicting the evolution of the PDF is required. The problem is simple enough to be amenable to both DNS and laboratory measurement, and a large body of data obtained by these means is available in the literature [5-7,16-23]. The assessment results show that the conditional expected dissipation and diffusion learned by the proposed model are highly reliable, with small errors.
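Exact solutions of this binary-mixing problem are available from the amplitude mapping closure / Johnson-Edgeworth translation (AMC/JET) model [29-30] and serve as the benchmark here. As a rough illustration, such a solution can be generated by mapping a standard Gaussian reference variable through an error function; in the Python sketch below, the width schedule sigma(t) is only a placeholder assumption, not the schedule of refs. [29-30].

import numpy as np
from scipy.special import erfinv

# Minimal sketch: exact PDF of binary scalar mixing generated by mapping a
# standard Gaussian reference variable eta through an error function,
#   phi = 0.5 * (1 + erf(eta / (sqrt(2) * sigma(t)))),
# in the spirit of the AMC/JET solutions [29-30]. The width schedule
# sigma(t) below is a placeholder assumption, not the schedule of [29-30].

def sigma(t):
    # Placeholder: grows in time, so the PDF evolves from nearly bimodal
    # (unmixed, phi close to 0 or 1) toward a peak at the mean phi = 0.5.
    return 0.1 + t

def exact_pdf(t, psi, eps=1e-12):
    # Change of variables: P(psi) = p_eta(eta(psi)) / |d phi / d eta|.
    psi = np.clip(np.asarray(psi, dtype=float), eps, 1.0 - eps)
    s = sigma(t)
    eta = np.sqrt(2.0) * s * erfinv(2.0 * psi - 1.0)      # invert the mapping
    p_eta = np.exp(-0.5 * eta**2) / np.sqrt(2.0 * np.pi)  # standard normal density
    dphi_deta = np.exp(-eta**2 / (2.0 * s**2)) / (np.sqrt(2.0 * np.pi) * s)
    return p_eta / dphi_deta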
Consider a Fickian scalar φ(x,t) with molecular diffusivity Γ in statistically homogeneous turbulence, and let P(t,ψ) denote its single-point PDF, with ψ the composition-space variable. The PDF evolves according to

∂P(t,ψ)/∂t = -∂²[E(t,ψ)P(t,ψ)]/∂ψ²,  (1)

where E(t,ψ) = ⟨Γ∇φ·∇φ | φ(x,t) = ψ⟩ is the conditional expected dissipation and the vertical bar denotes conditioning. Equation (1) can also be written as

∂P(t,ψ)/∂t + ∂[D(t,ψ)P(t,ψ)]/∂ψ = 0,  (2)

where D(t,ψ) denotes the conditional expected diffusion,

D(t,ψ) = ⟨Γ∇²φ | φ(x,t) = ψ⟩.  (3)

Both E(t,ψ) and D(t,ψ) are unclosed. From this setup and Eq. (3), the physics-informed neural network shown in Fig. 1 is obtained.
Fig. 1 Schematic of the physics-informed neural network for the conditional expected diffusion
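As an illustration of this construction, a minimal sketch follows. It is written in Python with PyTorch for brevity (ref. [26] points to TensorFlow), and the network sizes, tanh activations, and names P_net, D_net, loss_fn are illustrative assumptions rather than the configuration of the original study. Two networks approximate P(t,ψ) and D(t,ψ), and automatic differentiation [25] forms the residual of Eq. (2), which is penalized together with the mismatch on noisy observations of the PDF.

import torch
import torch.nn as nn

def mlp(width=64, depth=4):
    # Small fully connected network taking (t, psi) as input.
    layers, dim = [], 2
    for _ in range(depth):
        layers += [nn.Linear(dim, width), nn.Tanh()]  # tanh is illustrative; ref. [31] discusses alternatives
        dim = width
    return nn.Sequential(*layers, nn.Linear(dim, 1))

P_net, D_net = mlp(), mlp()

def pde_residual(t, psi):
    # t, psi: 1-D tensors of observation/collocation coordinates.
    t = t.clone().requires_grad_(True)
    psi = psi.clone().requires_grad_(True)
    x = torch.stack([t, psi], dim=-1)
    P = P_net(x).squeeze(-1)
    D = D_net(x).squeeze(-1)
    P_t = torch.autograd.grad(P.sum(), t, create_graph=True)[0]
    DP_psi = torch.autograd.grad((D * P).sum(), psi, create_graph=True)[0]
    return P, P_t + DP_psi            # prediction and the residual of Eq. (2)

def loss_fn(t, psi, P_obs):
    # Data mismatch on noisy PDF observations plus the PDE residual penalty.
    P_pred, f = pde_residual(t, psi)
    return ((P_pred - P_obs) ** 2).mean() + (f ** 2).mean()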
Fig. 3 First case: the probability density function, the conditional expected diffusion, and their deep-learning estimates. (a) exact PDF; (b) learned PDF; (c) their difference; (d) exact conditional expected diffusion D; (e) learned conditional expected diffusion; (f) their difference
The training procedure is summarized as follows. The reported results are obtained after consecutive stages of 10⁵, 2×10⁵, 3×10⁵, and 4×10⁵ epochs of the Adam optimizer [32], with learning rates of 10⁻³, 10⁻⁴, 10⁻⁵, and 10⁻⁶, respectively. Each epoch corresponds to one complete pass through the entire dataset; the total number of Adam iterations is therefore 10⁶ times the number of data points divided by the mini-batch size. The mini-batch size used here is 20,000, and the number of data points is 20,000.
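A sketch of this schedule, assuming the P_net, D_net, and loss_fn of the earlier listing and hypothetical training tensors t_train, psi_train, and P_train, is:

import torch

# Four consecutive Adam [32] stages; with 20,000 data points and a
# mini-batch size of 20,000, each epoch is a single full-batch update,
# giving 10^6 iterations in total.
stages = [(100_000, 1e-3), (200_000, 1e-4), (300_000, 1e-5), (400_000, 1e-6)]
params = list(P_net.parameters()) + list(D_net.parameters())

for n_epochs, lr in stages:
    optimizer = torch.optim.Adam(params, lr=lr)
    for _ in range(n_epochs):
        optimizer.zero_grad()
        loss = loss_fn(t_train, psi_train, P_train)   # full-batch loss
        loss.backward()
        optimizer.step()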
Fig. 4 Errors of the learned conditional expected diffusion model. Left: relative L2 error (Rel. L2 Error) between the exact and learned probability density functions. Right: relative L2 error between the exact and learned conditional expected diffusion D(t,ψ)
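The relative L2 error (Rel. L2 Error) is assumed here to follow the standard definition, ||exact - learned||₂ / ||exact||₂; a minimal implementation:

import numpy as np

def relative_l2_error(exact, learned):
    # ||exact - learned||_2 / ||exact||_2
    return np.linalg.norm(exact - learned) / np.linalg.norm(exact)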
Fig. 5 Second case: the exact probability density function, the conditional expected dissipation, and their deep-learning estimates. (a) exact PDF; (b) learned PDF; (c) their difference; (d) exact conditional expected dissipation; (e) learned conditional expected dissipation; (f) their difference
Fig. 6 Errors of the learned conditional expected dissipation model. The exact probability density function P(t,ψ) is shown alongside its learned counterpart in the top panels, while the exact and learned conditional expected dissipation E(t,ψ) are plotted in the bottom panels
This paper has presented a data-driven framework for learning the unclosed terms in turbulent scalar mixing. In this framework, the unclosed terms are learned from the PDF transport equation together with a number of high-fidelity observations of the PDF. The framework extends directly to higher-dimensional problems involving the mixing of multiple species: it avoids the pitfalls of numerical discretization and exploits the fact that the data points lie on a low-dimensional manifold, so the algorithm remains scalable in high dimensions. Moreover, the proposed method scales readily to the large datasets routinely encountered in turbulence research, since such data can be processed efficiently in small mini-batches at low computational cost.
[1]R. S. Brodkey (Ed.),Turbulence in Mixing Operations[M]. Academic Press,New York,NY,1975.
[2]S. B. Pope,Turbulent Flows[M]. Cambridge University Press,Cambridge,UK,2000.
[3]D. C. Haworth,Progress in probability density function methods for turbulent reacting flows[J]. Prog. Energ. Combust.,2010(36):168-259.
[4]A. G. Nouri,M. B. Nik,P. Givi,D. Livescu,S. B. Pope, A self-contained filtered density function[J]. Phys. Rev. Fluids,2017(2).
[5]S. S. Girimaji,Y. Zhou, Analysis and modeling of subgrid scalar mixing using numerical data[J]. Phys. Fluids,1996(8):1224-1236.
[6]F. A. Jaberi,R. S. Miller,C. K. Madnia,P. Givi,Non-Gaussian scalar statistics in homogeneous turbulence[J]. J. Fluid Mech.,1996(313):241-282.
[7]S. L. Christie,J. A. Domaradzki,Scale dependence of the statistical character of turbulent fluctuations in thermal convection[J]. Phys. Fluids,1994(6):1848-1855.
[8]A. Eckstein,P. P. Vlachos,Digital particle image velocimetry (dpiv) robust phase correlation[J]. Measurement Science and Technology,2009(20):055401.
[9]F. Pereira,M. Gharib,D. Dabiri,D. Modarress, Defocusing digital particle image velocimetry:a 3-component 3-dimensional dpiv measurement technique. application to bubbly flows[J]. Experiments in Fluids,2000(29):S078-S084.
[10]E. Weinan,J. Han,A. Jentzen,Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations[J]. Communications in Mathematics and Statistics,2017(5):349-380.
[11]S. H. Rudy,S. L. Brunton,J. L. Proctor,J. N. Kutz, Data-driven discovery of partial differential equations[J]. Science Advances,2017(3):1602614.
[12]M. Raissi,P. Perdikaris,G. Karniadakis,Physics-informed neural networks:A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations[J]. Journal of Computational Physics,2018.
[13]M. Raissi, Deep hidden physics models: Deep learning of nonlinear partial differential equations[J]. Journal of Machine Learning Research,2018(19):1-24.
[14]S. Subramaniam,S. B. Pope,A mixing model for turbulent reactive flows based on Euclidean minimum spanning trees[J]. Combust. Flame,1998(115):487-514.
[15]S. B. Pope,A Model for Turbulent Mixing Based on Shadow-Position Conditioning[J]. Phys. Fluids,2013(25):110803.
[16]S. Tavoularis,S. Corrsin,Experiments in nearly homogeneous turbulent shear flow with a uniform mean temperature gradient. Part 1[J]. J. Fluid Mech.,1981(104):311-347.
[17]V. Eswaran,S. B. Pope,Direct numerical simulations of the turbulent mixing of a passive scalar[J]. Phys. Fluids,1988(31):506-520.
[18]P. A. McMurtry,P. Givi,Direct numerical simulations of mixing and reaction in a nonpremixed homogeneous turbulent flow[J]. Combust. Flame,1989(77):171-185.
[19]S. L. Christie,J. A. Domaradzki,Numerical evidence for the nonuniversality of the soft/hard turbulence classification for thermal convection[J]. Phys. Fluids A,1993(5):412-421.
[20]T. H. Solomon,J. P. Gollub,Thermal boundary layers and heat flux in turbulent convection:The role of recirculating flows[J]. Phys. Rev. A,1991(43):6683-6693.
[21]S. T. Thoroddsen,C. W. Van Atta,Exponential tails and skewness of density-gradient probability density functions in stably stratified turbulence[J]. J. Fluid Mech.,1992(244):547-566.
[22]Jayesh,Z. Warhaft,Probability distribution of a passive scalar in grid generated turbulence[J]. Phys. Rev. Lett.,1991(67):3503-3506.
[23]Jayesh,Z. Warhaft,Probability distribution,conditional dissipation,and transport of passive temperature fluctuations in grid-generated turbulence[J]. Phys. Fluids A,1992(4):2292-2307.
[24]M. Raissi,G. E. Karniadakis, Hidden physics models: Machine learning of nonlinear partial differential equations[J]. Journal of Computational Physics,2018(357):125-141.
[25]A. G. Baydin,B. A. Pearlmutter,A. A. Radul,et al.,Automatic differentiation in machine learning:A survey[J]. Journal of Machine Learning Research,2018,18(153):1-43.
[26]M. Abadi,A. Agarwal,P. Barham,et al.,TensorFlow:Large-scale machine learning on heterogeneous distributed systems[J]. arXiv:1603.04467,2016.
[27]R. H. Kraichnan,Closures for probability distributions[J]. Bull. Amer. Phys. Soc.,1989(34):2298.
[28]H. Chen,S. Chen,R. H. Kraichnan,Probability distribution of a stochastically advected scalar field[J]. Phys. Rev. Lett.,1989(63):2657-2660.
[29]S. B. Pope,Mapping closures for turbulent mixing and reaction[J]. Theor. Comp. Fluid Dyn.,1991(2):255-270.
[30]R. S. Miller,S. H. Frankel,C. K. Madnia,P. Givi, Johnson-Edgeworth translation for probability modeling of binary scalar mixing in turbulent flows[J]. Combust. Sci. Technol.,1993(91):21-52.
[31]P. Ramachandran,B. Zoph,Q. V. Le,Searching for activation functions[J]. arXiv:1710.05941,2017.
[32]D. P. Kingma,J. Ba,Adam:A method for stochastic optimization[J]. arXiv:1412.6980,2014.