
    Hybrid windowed networks for on-the-fly Doppler broadening in RMC code

    2021-07-02
    Nuclear Science and Techniques, 2021, Issue 6

    Tian-Yi Huang· Ze-Guang Li· Kan Wang · Xiao-Yu Guo·Jin-Gang Liang

    Abstract On-the-fly Doppler broadening of cross sections is important in Monte Carlo simulations, particularly in Monte Carlo neutronics-thermal hydraulics coupling simulations. Methods such as Target Motion Sampling (TMS) and windowed multipole, as well as a method based on regression models, have been developed to solve this problem. However, these methods have limitations, such as the need for a cross section in ACE format at a given temperature or a limited application energy range. In this study, a new on-the-fly Doppler broadening method based on a Back Propagation (BP) neural network, called hybrid windowed networks (HWN), is proposed for the resolved resonance energy range. In the HWN method, the resolved resonance energy range is divided into windows to guarantee an even distribution of resonance peaks. BP networks with specially designed structures and training parameters are trained to evaluate the cross section at a base temperature and the broadening coefficient. The HWN method is implemented in the Reactor Monte Carlo (RMC) code, and the microscopic cross sections and macroscopic results are compared. The results show that the HWN method can reduce the memory requirement for cross-sectional data by approximately 65%; moreover, it can generate k_eff, power distribution, and energy spectrum results with acceptable accuracy and a limited increase in the calculation time. The feasibility and effectiveness of the proposed HWN method are thus demonstrated.

    Keywords Monte Carlo method · Reactor Monte Carlo (RMC) · On-the-fly Doppler broadening · BP network

    1 Introduction

    With the increase in the computing power of high-performance computing platforms, Monte Carlo neutronics-thermal hydraulics coupling has become an ideal approach for obtaining accurate results for the design and analysis of reactors. Traditional methods of linear interpolation with point-wise nuclear data require a large amount of memory to provide temperature-dependent microscopic cross sections for simulations [1]. On-the-fly Doppler broadening methods have been introduced to reduce the memory cost and enable thermal-hydraulics coupled reactor analysis [2]. Several on-the-fly Doppler broadening methods have been proposed to meet both the efficiency and memory requirements. Three methods, namely, Target Motion Sampling (TMS) [3, 4], windowed multipole [5], and a method based on regression models and fitting [6], have been proposed for the evaluation of cross sections across the resolved resonance energy range. Walsh studied an on-the-fly Doppler broadening method for the unresolved resonance energy range [7]. Pavlou and Ji proposed a method for the thermal energy range [8].

    For heavy nuclides such as U-235, the data in the resolved resonance energy range account for most of the nuclear data. Among the methods applicable in this range, the TMS method requires cross-sectional data at a given temperature, such as 0 K [9, 10], while the windowed multipole method divides the resolved resonance energy range into energy windows and uses approximations to evaluate cross sections [1]. Yesilyurt et al. used thirteen parameters for broadening cross sections from 0 K to any temperature in the range of 77-3200 K [6]. This method was further developed by Liu et al., who made the number of parameters flexible [11]. Both the TMS method and the method proposed by Yesilyurt et al. require ACE data in their processes, and the latter requires a larger memory for storing the broadening parameters. The range of nuclides to which the windowed multipole method is applicable is limited.

    In this paper, a new on-the-fly Doppler broadening method based on BP neural networks, called hybrid windowed networks (HWN), is proposed. The total memory requirements are reduced by approximately 65% compared with the ACE data, at the expense of efficiency over the resolved energy range. BP neural networks are used to evaluate the cross sections. The neural network method, a type of machine learning method, simulates the structure of biological neurons by establishing artificial neural networks [12]. By relying mainly on multiple layers of neuron nodes with various weights and biases, neural networks can be used to solve complicated problems such as image [13] or audio [14] processing. Neural networks with simple structures also exhibit satisfactory performance in data fitting [15]. The structures and training parameters of the networks reported herein were carefully determined to meet the needs of cross-sectional training.

    In this method, the resolved resonance energy range is divided into windows. The networks trained for each window can be used independently so that the method can be easily combined with other on-the-fly Doppler broadening methods. The application range of this method can be set by the users to avoid unacceptable losses of efficiency.

    The results confirm the feasibility of evaluating complex cross-sectional parameters with neural networks, and the potential of neural networks for memory saving is demonstrated in this work. Neural networks can also be used to evaluate some of the parameters in other developed on-the-fly Doppler broadening methods. Larger memory savings and higher accuracy might be achieved by incorporating the physics of Doppler broadening into the method.

    The principle of BP neural networks and the HWN method are introduced in Sect. 2. In Sect. 3, the results of numerical tests conducted to verify the effectiveness, accuracy, and efficiency of the method are reported and discussed.

    2 HWN Method

    In the HWN method, the resolved resonance energy range is divided into energy windows based on the number of extreme points in the cross section. In each window, two BP networks are used to calculate the cross section at 200 K and to broaden the cross section to temperatures in the range of 250-1600 K. ACE data are used to train the BP networks. The networks for each window can be used independently; thus, the scope of the method can be set easily.

    Section 2.1 reviews the principle of BP neural networks. Section 2.2 describes the training and calculation process of the two networks within a window. Section 2.3 introduces the division of the resolved energy range and the parameter determination process.

    2.1 BP Neural network

    An efficient and memory-saving method for evaluating cross sections is needed for on-the-fly Doppler broadening. Neural networks, which are widely used in machine learning, need to be trained before they are used. Once the parameters of a network are determined, the network can be used to calculate the output from a given input. After training, the amount of calculation required is greatly reduced; therefore, this method can be used for on-the-fly Doppler broadening. A combination of the inputs and expected results is used in the training process, during which the weights and biases of all the neurons are adjusted. The deviation between the outputs of the network and the expected results is gradually reduced during training until the network meets the requirements.

    Many studies on neural networks, such as convolutional neural networks [16] and deep neural networks [17], have been conducted. Such networks have many hidden layers and neurons, as well as complex structures, resulting in low computational efficiency. Because computing speed is important in Monte Carlo codes, a neural network with a simple structure is more suitable.

    In this study, the back propagation (BP) network was used. The error back propagation algorithm proposed by Rumelhart [18], introduced in the following paragraphs, is used for training this network. As shown in Fig. 1, a BP network consists of an input layer, hidden layers, and an output layer. Each layer contains a certain number of neurons. The simplicity of the structure and calculation steps of this type of network ensures its high computational efficiency.

    Fig. 1 Structure of BP neural network

    The hidden and output layer nodes of the BP neural network are, respectively, described by

    O_j = f_1( Σ_i w_ij x_i + b_j ),   (1)

    Y_k = f_2( Σ_j w_jk O_j + b_k ).   (2)

    In these equations, O_j is the output of the hidden layer node, and Y_k is the output value of the output layer node. f_1 and f_2 are the transfer functions of the hidden and output layer nodes, respectively. In general, f_1 is a nonlinear function, and f_2 can be a linear or nonlinear function. w_ij is the weight from the previous node to the hidden layer node, and w_jk is the weight from the node of the last hidden layer to the node of the output layer. b_j and b_k are the biases of the hidden layer node and the output layer node, respectively.

    The weights and biases of the BP neural network are generally expressed in matrix form in calculations. Equations (1) and (2) are therefore expressed as Eqs. (3) and (4), respectively:

    O_h = f_1( W_pre-to-h X_pre + B_h ),   (3)

    Y = f_2( W_h-to-o O_h + B_o ).   (4)

    The subscript "pre" denotes the previous layer, which can be a hidden layer or the input layer; subscripts "h" and "o" denote the hidden layer and the output layer, respectively. In Eq. (3), O_h is the output value of the hidden layer, W_pre-to-h is the matrix of weights from the previous layer to the hidden layer, X_pre is the matrix of output values of the previous layer, and B_h is the matrix of biases of the hidden layer. In Eq. (4), Y is the matrix of output values of the output layer, which is also the output of the network; W and B in Eq. (4) have meanings similar to those in Eq. (3). The bias and output data of each layer are column vectors with lengths equal to the number of nodes in the layer. The weight is a matrix whose first and second dimensions are the numbers of nodes in the current and previous layers, respectively. The matrix elements correspond to those in Eqs. (1) and (2).
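    The matrix-form forward pass above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation; the function names and the choice of a linear output layer are assumptions for the example.

```python
import numpy as np

def tansig(n):
    # MATLAB-style tansig transfer function, mathematically equal to tanh(n)
    return 2.0 / (1.0 + np.exp(-2.0 * n)) - 1.0

def forward(x_pre, w_pre_to_h, b_h, w_h_to_o, b_o):
    """Forward pass of a one-hidden-layer BP network in the matrix form of
    Eqs. (3)-(4): nonlinear hidden layer, then a linear output layer.
    Weight matrices have shape (current-layer nodes, previous-layer nodes);
    biases and layer outputs are column vectors."""
    o_h = tansig(w_pre_to_h @ x_pre + b_h)  # hidden layer output, Eq. (3)
    y = w_h_to_o @ o_h + b_o                # output layer, Eq. (4)
    return y
```

    Once trained, evaluating the network is just these two matrix products, which is why the evaluation cost stays low compared with training.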

    The neural networks in this study were trained using MATLAB. The output of the neural networks approached the target value over successive iterations. All the weights and biases in the network were adjusted during the training process, while the structure of the network, including the number of hidden layers and nodes in each layer, and the transfer function, remained unchanged.

    The error back propagation algorithm was used for training the network. The error between the result in the training data and the corresponding result calculated by the network for a given input in the training data was propagated back to the parameters of each layer. The process is briefly described as follows.

    The set of input vectors and the corresponding target vectors are denoted as

    P = { p_1, p_2, ..., p_n },   (5)

    T = { t_1, t_2, ..., t_n }.   (6)

    The squares of the errors between the network outputs for P and the targets T are summed to define R, the error of the network:

    R = (1/2) Σ_k ( t_k - y_k )^2,   (7)

    where y_k is the network output for input p_k. The factor 1/2 simplifies the subsequent derivation.

    The goal of the training is to reduce the error. R is therefore expanded using Eq. (4) into the node parameters of the output layer as follows:

    R = (1/2) Σ_k [ t_k - f_2( Σ_j w_jk O_j + b_k ) ]^2.   (8)

    Equation (8) can be further expanded using Eq. (1) as follows:

    R = (1/2) Σ_k [ t_k - f_2( Σ_j w_jk f_1( Σ_i w_ij x_i + b_j ) + b_k ) ]^2.   (9)

    The expansion ends here if the network has one hidden layer; otherwise, R can be expanded layer by layer. The algorithm is illustrated below for a network with one hidden layer and a given output-layer transfer function.

    The partial derivatives are called the gradient values of the error. The parameters are updated using the gradient values in each training iteration as follows:

    w_jk ← w_jk - η ∂R/∂w_jk,   (13)

    b_k ← b_k - η ∂R/∂b_k,   (14)

    where η is the learning rate, which determines the magnitude of the correction to the parameters; the amplitude of a single adjustment is directly proportional to η. The derivation shown in Eqs. (1)-(14) is applied to each individual parameter, and the end result can be expressed in matrix form. For the parameters of the output layer, for example, the updates of Eqs. (13) and (14) for all the parameters can be collectively expressed as

    W_h-to-o ← W_h-to-o - η Δ O_h^T,   B_o ← B_o - η Δ,   (15)

    where Δ is the result deviation of the output layer.

    The training of the other parameter matrices can be described in a similar manner. The error plays a role in the parameter adjustment of each layer through the result deviation Δ and propagates to each parameter matrix.
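    The back propagation procedure above can be condensed into a short training loop. This is a from-scratch NumPy sketch of plain batch gradient descent on R = (1/2) Σ (t - y)², not the MATLAB trainer the authors used; the function name `train_bp` and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_bp(p, t, n_hidden=20, eta=0.05, epochs=20000):
    """Train a one-hidden-layer network (tanh hidden units, linear output)
    by error back propagation. p and t are row vectors of scalar inputs
    and targets (shape (1, n))."""
    W1 = rng.normal(0.0, 0.5, (n_hidden, 1))   # input -> hidden weights
    b1 = rng.normal(0.0, 0.5, (n_hidden, 1))   # hidden biases
    W2 = rng.normal(0.0, 0.5, (1, n_hidden))   # hidden -> output weights
    b2 = np.zeros((1, 1))                      # output bias
    n = p.shape[1]
    for _ in range(epochs):
        O = np.tanh(W1 @ p + b1)               # hidden outputs, Eq. (1)
        y = W2 @ O + b2                        # network outputs, Eq. (2)
        delta = y - t                          # result deviation dR/dy
        gW2 = delta @ O.T                      # output-layer gradients
        gb2 = delta.sum(axis=1, keepdims=True)
        back = (W2.T @ delta) * (1.0 - O ** 2) # error propagated back
        gW1 = back @ p.T                       # hidden-layer gradients
        gb1 = back.sum(axis=1, keepdims=True)
        W2 -= eta * gW2 / n; b2 -= eta * gb2 / n   # updates, Eqs. (13)-(14)
        W1 -= eta * gW1 / n; b1 -= eta * gb1 / n
    return W1, b1, W2, b2
```

    Each iteration computes the forward pass, forms the result deviation Δ at the output, and propagates it back through the transfer-function derivatives to every weight and bias matrix, exactly as in the matrix-form updates.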

    2.2 Training and calculation

    This section describes how the two networks were trained and used in an energy window. The method for dividing the resolved energy is introduced in the next section. Both the energy division and the sequential computation using two networks are designed to reduce the complexity of the training target.

    A temperature is chosen as the base temperature T0 for the cross section of a nuclide. The cross section at this temperature, denoted as σ_T0(E), is a function of the neutron energy E. The cross-sectional broadening coefficient K is introduced, defined as the ratio of the cross section at a temperature not less than T0 to the cross section at T0:

    K(E, T) = σ_T(E) / σ_T0(E).   (18)

    K is a function of the two independent variables E and T and is denoted as K(E, T). The features of K(E, T) are much less complex than those of σ(E, T). Thus, the ratio of the maximum value of K(E, T) to its minimum value is much lower than that of σ(E, T) within an energy window obtained by the method explained in Sect. 2.3. Therefore, it is expected that better results can be obtained using K(E, T) as the training data for the networks.

    The HWN method is similar to the method proposed by Yesilyurt et al. [6], which also uses fitted parameters for on-the-fly Doppler broadening. However, the broadening coefficient K in the proposed method applies to all the energies in the corresponding energy window, whereas the method proposed by Yesilyurt et al. requires individual parameters for each energy point.

    To reduce the memory required, a network denoted as Network 1 is trained to calculate the cross section at temperature T0. The lower and upper bounds of the energy window are denoted as Emin and Emax, respectively. The corresponding relationship between the input and output of the network is expressed as

    σ'_T0(E) = F_1(E),   Emin ≤ E ≤ Emax,   (19)

    where σ'_T0 is the cross section calculated by Network 1 at T0, and F_1 is the mapping relationship between the input energy and the output cross section of Network 1. It should be noted that some errors exist between these results and the actual cross section.

    Network 2 is trained to calculate the broadening coefficient K. If the K defined in Eq. (18) is used directly as the training data for Network 2 and Eq. (18) is then used to calculate the broadened cross section, the error of the calculation result will be the superposition of the errors of the two networks.

    To reduce the error, the K value calculated by the following formula is used as the training target:

    K(E, T) = σ_T(E) / σ'_T0(E).   (20)

    The difference between Eqs. (20) and (18) is that the cross section at T0 in Eq. (18) is replaced by the output of Network 1 in Eq. (20). Therefore, the influence of the accuracy of Network 1 on the final result can be eliminated. However, the complexity of K will increase if the error of Network 1 is extremely large, which may affect the accuracy of Network 2. It is therefore necessary to control the error of Network 1.

    The K in Eq. (20) is used as the training data for Network 2. The temperature range of the network fitting is limited by the training data. A temperature range Tmin ≤ T ≤ Tmax that meets the actual requirements is selected; Tmin should not be lower than T0. The corresponding relationship between the input and output can therefore be expressed as

    K'(E, T) = F_2(E, T),   Emin ≤ E ≤ Emax,   Tmin ≤ T ≤ Tmax,   (21)

    where K' is the broadening coefficient calculated by Network 2, and F_2 is the mapping relationship between the input energy and temperature and the output coefficient of Network 2. K' is a function of the independent variables E and T and is denoted as K'(E, T).

    By rewriting Eq. (20), the equation for on-the-fly Doppler broadening is obtained as

    σ_T(E) = K'(E, T) · σ'_T0(E).   (22)

    Equation (22) describes the Doppler broadening process. Network 1 is first used to calculate the cross section at the given energy E and the base temperature T0. Then, the cross section is broadened to the temperature T using the broadening coefficient calculated by Network 2.
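    The two-network evaluation step is simple once the networks are trained. The sketch below assumes the two trained networks are wrapped as callables (`network1`, `network2` are hypothetical names, not from the RMC implementation):

```python
def broadened_xs(E, T, network1, network2):
    """On-the-fly Doppler broadening per the two-network scheme:
    sigma_T(E) = K'(E, T) * sigma'_T0(E), where network1 evaluates the
    cross section at the base temperature T0 and network2 evaluates the
    broadening coefficient for the window containing E."""
    sigma_base = network1(E)   # sigma'_T0(E), output of Network 1
    K = network2(E, T)         # K'(E, T), output of Network 2
    return K * sigma_base
```

    Because each energy window has its own pair of networks, a real implementation would first locate the window containing E and dispatch to that window's networks.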

    The values of T0, Tmin, and Tmax were then determined. The temperature ranges and their corresponding fields of study were summarized by Yesilyurt et al. [6], as shown in Table 1. Monte Carlo codes for reactors should be able to handle benchmarking calculations and reactor operation problems. Thus, Tmin and Tmax were set as 250 K and 1650 K, respectively. It should be noted that K approaches 1 as T approaches T0, which affects the accuracy of the neural network in the vicinity of T0. As a result, training using data at T = T0 should be avoided, and T0 should not be close to Tmin. Considering the lower complexity of the cross-sectional curve at higher temperatures, the base temperature T0 was chosen as 200 K to reduce the difficulty of the network training.

    Table 1 Temperature ranges and their corresponding fields of study

    2.3 Window division and parameter determination

    The cross-sectional curve in the resolved resonance energy range of heavy nuclides at a single temperature is very complex because of the large number of resonance peaks. The addition of the temperature dimension further increases the complexity. Therefore, it is impractical to train the neural network directly using the ACE data over the entire resolved resonance energy range as the input.

    Dividing the resolved resonance energy range into energy windows can significantly reduce the difficulty of training in each window. Because similar processes are carried out for each window, the data should be divided evenly according to the training difficulty. Dividing the resonant peaks equally between the different windows is an easy and effective method. Each window has the same number of maximum points that correspond mainly to the positions of the resonance peaks within the resolved resonance energy range. The edges of the windows are set as the minimum points of the cross-sectional curve.

    Because of the Doppler broadening effect, the resonant peaks are broadened at temperatures above 0 K. At T0 = 200 K, some adjacent resonant peaks are combined into single peaks, leading to a reduction in the number of maximum points. The maximum points, rather than the resonant peaks at 0 K, were therefore used for the division of the cross-sectional curve at T0. Because the distributions of the resonant peaks differ for different heavy nuclides, the number of windows and the positions of the edges determined by this process will also differ.

    The number of maximum points in each window was carefully determined. A smaller number of points per network would result in a higher number of divided windows and a corresponding increase in the memory required. In addition, if a large number of peaks is included, the amount of data in each window will be extremely large, and the accuracy of the networks will decrease. The number of points was set to 15 based on experience and testing. The aforementioned process was applied to the total cross-sectional curve at the base temperature of T0 = 200 K for window division.
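    The window-division rule described above (15 maxima per window, edges at local minima) can be sketched as follows. This is an illustrative reconstruction on a point-wise grid, assuming simple strict-inequality local-extremum detection; the function name and signature are not from the RMC code.

```python
import numpy as np

def divide_windows(energy, xs, peaks_per_window=15):
    """Divide an energy grid into windows so that each window holds
    peaks_per_window local maxima of the cross-sectional curve, with
    window edges placed at local minima of the curve."""
    interior = np.arange(1, len(xs) - 1)
    maxima = interior[(xs[interior] > xs[interior - 1]) & (xs[interior] > xs[interior + 1])]
    minima = interior[(xs[interior] < xs[interior - 1]) & (xs[interior] < xs[interior + 1])]
    edges = [energy[0]]
    for i in range(peaks_per_window - 1, len(maxima) - 1, peaks_per_window):
        # close the window at the first local minimum after its last maximum
        after = minima[minima > maxima[i]]
        if after.size:
            edges.append(energy[after[0]])
    edges.append(energy[-1])
    return edges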

    Relatively large deviations near the boundaries of the windows were often observed during network training. To avoid this, the windows were extended along both edges, as shown in Fig. 2. The data in the extended window were used for neural network training, whereas the acquired neural network was used only within the range of the original window. The windows next to the unresolved resonance energy range and the thermal energy range were extended only in the direction of the resolved resonance energy range.

    Fig. 2 Extension of window

    The training results were greatly affected by the structure of the neural networks and the training parameters. The network parameters were determined according to the test results.

    The network training used a MATLAB training function chosen for its faster convergence speed and better accuracy. The transfer function of the hidden layer nodes is the tansig function in MATLAB, which is defined as

    tansig(n) = 2 / (1 + e^(-2n)) - 1.   (23)
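    The tansig form is mathematically identical to the hyperbolic tangent; MATLAB documents it in the exponential form above because it evaluates faster. A quick check of the equivalence:

```python
import math

def tansig(n):
    # MATLAB's tansig transfer function: 2 / (1 + exp(-2n)) - 1,
    # mathematically equal to tanh(n)
    return 2.0 / (1.0 + math.exp(-2.0 * n)) - 1.0
```

    This equivalence means an implementation outside MATLAB can simply substitute tanh for the hidden-layer transfer function.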

    A BP network with one hidden layer is sufficient for solving many problems, although networks with multiple hidden layers may show better performance in some cases [19]. A one-hidden-layer BP network was used for Network 1 (described in Sect. 2.2) to reduce the memory requirements. For Network 2, a two-hidden-layer network was used to avoid overfitting in the training of K(E, T).

    Tests were performed to determine the number of neurons in each hidden layer. The number of neurons in the hidden layer of Network 1 was determined first. The total cross-sectional data of U-235 at 0 K were divided into 181 windows, and 19 equally spaced windows were selected. Networks with 80, 90, 100, 110, 120, and 130 nodes in the hidden layer were trained using the data in the windows. Each network was trained with the data from each window 10 times, and the minimum value of the maximum absolute relative error was calculated. The geometric means of the error values for the 19 windows are shown in Fig. 3.

    Fig.3 Relationship between the number of nodes in the hidden layer and the mean minimum absolute relative error

    In general, the accuracy of the network increased with the number of nodes, and it was at an acceptable level when the number of nodes was 130. As shown in Fig. 3, a further increase in the number of nodes had a limited effect on the improvement of accuracy. Networks with 130 nodes in the hidden layer were therefore used for most windows. A larger number of nodes was used in a few windows in which the relative error was extremely large.

    Training and comparisons were performed to determine the number of nodes in the two hidden layers of Network 2. A reasonable maximum epoch number, training time limit, and accuracy target were set for training. Data from the second window of the U-235 total cross section were used to evaluate networks with different combinations of node numbers. The performance was measured as the percentage of data points for which the absolute relative error with respect to the input data was less than 0.1%. The results are compared in Table 2 and indicate that the best combination has 40 nodes in the first hidden layer and 20 nodes in the second hidden layer.

    3 Numerical tests of HWN method

    The HWN method was implemented in the Reactor Monte Carlo (RMC) code [20] for on-the-fly Doppler broadening of U-235. Data from the ENDF/B-VII.0 database were processed by NJOY [21] with an accuracy of no less than 0.001 to obtain the training ACE data. The resolved resonance energy range was divided into 80 energy windows, and the networks were trained for each window. Cross-sectional data at 25-K intervals in the range of 250-1650 K were used to calculate the training target K(E, T) in each window. The microscopic cross sections and macroscopic results were compared to prove the feasibility and effectiveness of the HWN method.

    In Sect. 3.1, the accuracy and memory requirements of the HWN method are demonstrated by comparing the calculated microscopic cross sections with the ACE data. In Sect. 3.2, the results of two macroscopic tests performed to verify the accuracy and efficiency of the method are presented.

    Table 2 Performance comparison of networks with different combinations of node numbers

    3.1 Microscopic accuracy and memory requirement comparison

    The HWN method was applied to U-235 and U-238, which are representative heavy nuclides with important resonances. The total cross section, elastic scattering cross section, and absorption cross section of U-235, as well as the total cross section of U-238, were compared with the ACE data at the same temperature. The cross-sectional results and absolute relative errors with respect to the ACE data are plotted in Fig. 4, where the comparisons were made in the energy regions with strong resonances at 300 K and 700 K. It can be observed that the errors are low for most data points and that the evaluated cross sections are accurate at both temperatures. The relative errors fluctuate with the energy; the fluctuations are caused by the characteristics of neural networks with many nodes.

    Both the HWN method and the windowed multipole method store cross-sectional data in new forms that do not rely on continuous-energy point-wise data. Therefore, their memory requirements are lower than that of the ACE data. The relative errors of the HWN and windowed multipole [1] methods are compared in Table 3. The results show that the maximum relative errors are similar and that the average relative errors of both methods are within 0.1%.

    The theoretical memory consumption of the network parameters and the ACE cross-sectional data are compared. In a continuous-energy Monte Carlo code using the ACE data, the cross sections are stored in the form of double-precision floating-point numbers, each of which occupies eight bytes of memory. Both the energy grid and the cross-sectional values are needed for cross-sectional evaluation. For the total cross section, elastic scattering cross section, and absorption cross section, four double-precision floating-point numbers, which occupy 32 bytes of memory, are needed for each energy point. In comparison, most of the parameters in the HWN method are stored in the form of double-precision floating-point numbers, and a few parameters are integers. The memory consumptions of the two methods are calculated using the number of parameters and the memory space needed for each parameter.
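    The accounting above reduces to simple arithmetic. The following sketch reproduces it under stated assumptions (8-byte doubles, one shared energy grid plus three cross sections per point; the helper names are hypothetical, and integer window-bookkeeping parameters are ignored for simplicity):

```python
def ace_bytes(n_energy_points, n_cross_sections=3):
    """Point-wise ACE storage: one energy grid value plus one value per
    cross section at each point, 8 bytes per double; with three cross
    sections this gives 32 bytes per energy point."""
    return n_energy_points * (1 + n_cross_sections) * 8

def network_bytes(layer_sizes):
    """Weights and biases of one BP network, 8 bytes per double.
    layer_sizes lists node counts from input to output, e.g. [1, 130, 1]
    for a Network 1 with 130 hidden nodes."""
    total = 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        total += (n_in * n_out + n_out) * 8   # weight matrix + bias vector
    return total
```

    Comparing `network_bytes` summed over all windows and networks with `ace_bytes` over the resolved resonance grid yields memory ratios of the kind reported in Table 4.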

    The ACE data at 0 K processed by NJOY were used for comparison. If on-the-fly Doppler broadening is not introduced to the Monte Carlo code, point-wise cross sections at more than a dozen temperatures will be needed for thermal-hydraulics coupled analysis. The method proposed by Yesilyurt et al. uses parameters that require several times the memory needed for the ACE data at 0 K. Point-wise data are also needed in the TMS method. As a result, the HWN method shows a significant reduction in memory requirements compared with the 0 K ACE data.

    The comparison results are listed in Table 4. The results show that for the three cross sections of U-235, the HWN method reduces the memory requirements of cross-sectional data in the resolved resonance energy range by 66.1% compared with the case of the 0 K ACE data. The memory requirement reduction was 65.9% over the entire energy range.

    The networks for each window can be used independently because the training process of each window is independent. It is therefore easy to set the scope of the method when it is implemented in a Monte Carlo code. It is not necessary to store the 0 K ACE data within the selected windows if the HWN method is used; however, the speed of the cross-sectional evaluation in these windows will decrease. The efficiency drop is described in Sect. 4. Table 4 clearly shows that the resolved resonance energy range accounts for most of the nuclear data. The memory optimization ratio is significant even when the HWN method is used in only some of the windows. Users can decide how much efficiency to compromise for saving memory.

    Fig. 4 (Color online) Comparison of microscopic cross sections at 300 K and 700 K. a Total cross section of U-235. b Absorption cross section of U-235. c Elastic scattering cross section of U-235. d Total cross section of U-238

    Table 3 Comparison of relative errors of HWN and windowed multipole

    3.2 Comparison of macroscopic test results

    The HWN method was used for Doppler broadening in all windows of the resolved resonance energy range in the HWN cases, while ACE data from the processed ENDF/B-VII.0 database at the same temperature were used in the ACE cases. All calculations were performed on an Intel i7-9750H CPU without parallelism.

    3.2.1 Concentric spheres

    The first comparison example consisted of two concentric spheres, as shown in Fig. 5. The radii of the inner and outer spheres were 6.782 cm and 11.862 cm, respectively. The inner sphere was filled with Material 1, and the space between the two spheres was filled with Material 2. The outside of the outer sphere was set to vacuum. The nuclide compositions of the materials are listed in Table 5. The temperature of each material was set to 300 K.

    The calculation parameters are listed in Table 6, and the results are presented in Table 7. The deviation in k_eff is very small, and there was only a slight increase in the calculation time. The accuracy of the HWN method for this example was therefore confirmed.

    3.2.2 PWR assembly

    The second comparison example is the PWR assembly model shown in Fig. 6. The model comprised an infinitely long square column with a cross section of 21.42 cm × 21.42 cm. There were 264 fuel rods and 25 pipes arranged in a 17 × 17 square lattice. The fuel rods were cylinders with diameters of 0.8192 cm. Each fuel rod was surrounded by a 0.082 mm air layer and a 0.572 mm zirconium wall. The inner and outer diameters of the pipes were 1.138 cm and 1.2294 cm, respectively. The pipes were filled with water, and the material of their walls was zirconium. The remainder of the assembly was filled with water. The nuclide compositions of the fuels are listed in Table 8. The temperature of all the materials was set to 700 K.

    Table 4 Theoretical memory requirements of HWN and ACE data

    Fig. 5 (Color online) Concentric spheres example

    Table 5 Nuclide composition of materials in concentric spheres example

    Table 6 Calculation parameters for the example of concentric spheres

    Table 7 Calculation results of the concentric spheres example

    Fig. 6 (Color online) Schematic diagram of the PWR assembly

    Table 8 Nuclide composition of fuel in the PWR assembly

    Table 9 Calculation parameters of PWR assembly

    The calculation parameters are listed in Table 9, and the results are presented in Table 10. The difference in k_eff between the two cases is within twice the standard deviation, indicating that there is no significant difference. The calculation time of the HWN case is 1.39 times that of the ACE case, which indicates that using the HWN method prolongs the calculation time. The calculation time is shorter if the method is not used in all the windows.

    The neutron flux spectrum of the fuel in a fuel rod adjacent to the central pipe and the neutron flux inside each fuel rod and pipe were calculated for both the ACE and HWN cases. The results are presented in Fig. 7. The blocks in Fig. 7c do not represent the actual geometry; rather, they indicate the corresponding positions.

    Figure 7a shows that the neutron flux spectra of the fuel rods are the same. Figure 7b shows that the relative deviations of the fluxes in most statistical intervals are within three times the standard deviation of the ACE case. Figure 7c compares the neutron fluxes of all the fuel rods in the assembly; there is no significant difference between the ACE and HWN cases. The blank grid squares represent the pipes, whose fluxes are not shown in this figure. A numerical comparison shows that the deviation of most fuel rods and pipes is within twice the standard deviation, and that of the remaining fuel rods and pipes is within three times the standard deviation, except for a few. The comparison results therefore demonstrate the accuracy of the proposed HWN method.

    4 Conclusion

    In this study, a hybrid windowed networks method for on-the-fly Doppler broadening was proposed and implemented in the RMC code. The resolved resonance energy range is divided into energy windows. In each window, two BP networks are trained to calculate the cross section at the base temperature and broaden the cross section to any temperature within the range of 250-1600 K. The structures of the neural networks and the training parameters were determined through calculations. Networks for the total cross section, absorption cross section, and elastic scattering cross section of U-235 were trained.

    Table 10 Calculation results of PWR assembly

    Fig. 7 (Color online)Comparison results of ACE and HWN cases.a Flux spectrum of fuel rod beside central pipe b Relative error and standard deviation of the flux spectrum c Flux of fuel rods

    Microscopic cross-sectional comparisons and macroscopic tests were performed to verify the utility and effectiveness of the HWN method. A comparison between the cross sections evaluated by this method and the ACE data shows the high accuracy of the proposed method. Macroscopic tests were conducted using RMC to verify the accuracy and efficiency of the method. The calculation time ratio between the HWN method and the ACE data for the PWR assembly calculation was 1.39. If the method is used in all the windows of the resolved resonance energy range, the theoretical memory consumption for the U-235 nuclide can be reduced to 33.9% of the memory needed for ACE cross-sectional interpolation at 0 K. The theoretical memory consumption is 34.1% of that of the ACE data at 0 K if the ACE data outside the resolved resonance energy range are also included.

    The HWN method can be combined with other on-the-fly Doppler broadening methods or with linear interpolation of point-wise nuclear data. Using the HWN method, users can trade efficiency for memory savings as required. If the predicted neutron flux is high in some windows, using this method only in the remaining windows can significantly reduce the memory cost without greatly compromising efficiency.
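    The hybrid per-window dispatch just described can be sketched as a simple selection rule. The window boundaries, flux shares, and threshold below are illustrative assumptions, not values from the paper; the point is only that high-flux windows keep fast ACE point-wise interpolation while low-flux windows switch to the memory-saving network evaluation.

```python
# Hypothetical window table: per-window predicted flux share decides the
# representation (all numbers are made up for this sketch).
windows = [
    {"E_lo": 1e-5, "E_hi": 1.0,    "flux_share": 0.40},
    {"E_lo": 1.0,  "E_hi": 1e2,    "flux_share": 0.35},
    {"E_lo": 1e2,  "E_hi": 2.25e4, "flux_share": 0.25},
]

FLUX_THRESHOLD = 0.30  # high-flux windows keep ACE data for speed

for w in windows:
    w["use_hwn"] = w["flux_share"] < FLUX_THRESHOLD

def representation(E):
    """Return which cross-section representation serves energy E (eV)."""
    for w in windows:
        if w["E_lo"] <= E < w["E_hi"]:
            return "HWN" if w["use_hwn"] else "ACE"
    raise ValueError("energy outside the resolved resonance range")
```

In an actual implementation the flux shares would come from a preliminary estimate of the neutron spectrum, and the threshold would be set by the user's memory budget.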

    The proposed method warrants further work to improve its effectiveness. The calculation speed may be greatly improved by optimizing the calculation process, particularly the evaluation of the nonlinear transfer function. The accuracy of the method can be further improved by extending the training time or by choosing more suitable training parameters.

    The HWN method is applicable to any heavy nuclide with a resolved resonance energy range. Because the training is performed with point-wise data, the method can also be applied to nuclides for which the windowed multipole method is inapplicable. It can thus be extended to more nuclides, especially those that are important in reactor simulations or for which the windowed multipole method is difficult to apply.

    This study demonstrated the potential of the neural networks used in the HWN method to reduce memory usage in the evaluation of complex parameters. Introducing neural networks into other developed on-the-fly Doppler broadening methods may yield greater memory savings and higher accuracy.

    Author contributions All authors contributed to the study conception and design. Data collection and analysis were performed by Tian-Yi Huang and Ze-Guang Li. The programming and tests were strongly supported by Kan Wang, Xiao-Yu Guo, and Jin-Gang Liang. The first draft of the manuscript was written by Tian-Yi Huang, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
