
    Hybrid windowed networks for on-the-fly Doppler broadening in RMC code

    Nuclear Science and Techniques, 2021, No. 6 (2021-07-02)

    Tian-Yi Huang · Ze-Guang Li · Kan Wang · Xiao-Yu Guo · Jin-Gang Liang

    Abstract On-the-fly Doppler broadening of cross sections is important in Monte Carlo simulations, particularly in Monte Carlo neutronics-thermal hydraulics coupling simulations. Methods such as Target Motion Sampling (TMS) and windowed multipole, as well as a method based on regression models, have been developed to solve this problem. However, these methods have limitations, such as the need for a cross section in ACE format at a given temperature or a limited application energy range. In this study, a new on-the-fly Doppler broadening method for the resolved resonance energy range, based on a Back Propagation (BP) neural network and called hybrid windowed networks (HWN), is proposed. In the HWN method, the resolved resonance energy range is divided into windows to guarantee an even distribution of resonance peaks. BP networks with specially designed structures and training parameters are trained to evaluate the cross section at a base temperature and the broadening coefficient. The HWN method is implemented in the Reactor Monte Carlo (RMC) code, and the microscopic cross sections and macroscopic results are compared. The results show that the HWN method can reduce the memory requirement for cross-sectional data by approximately 65%; moreover, it can generate k_eff, power distribution, and energy spectrum results with acceptable accuracy and a limited increase in the calculation time. The feasibility and effectiveness of the proposed HWN method are thus demonstrated.

    Keywords Monte Carlo method · Reactor Monte Carlo (RMC) · On-the-fly Doppler broadening · BP network

    1 Introduction

    With the increase in the computing power of high-performance computing platforms, Monte Carlo neutronics-thermal hydraulics coupling has become an ideal approach for obtaining accurate results for the design and analysis of reactors. Traditional methods of linear interpolation with point-wise nuclear data require a large amount of memory resources to provide temperature-dependent microscopic cross sections for simulations [1]. On-the-fly Doppler broadening methods have been introduced to reduce the memory cost and enable thermal-hydraulics coupled reactor analysis [2]. Several on-the-fly Doppler broadening methods have been proposed to meet both the efficiency and memory requirements. Three methods, namely, Target Motion Sampling (TMS) [3, 4], windowed multipole [5], and a method based on regression models and fitting [6], have been proposed for the evaluation of cross sections across the resolved resonance energy range. Walsh studied an on-the-fly Doppler broadening method for the unresolved resonance energy range [7]. Pavlou and Ji proposed a method for the thermal energy range [8].

    For heavy nuclides such as U-235, the data in the resolved resonance energy range account for most of the nuclear data. Among the methods applicable in this range, the TMS method requires cross-sectional data at a given temperature, such as 0 K [9, 10], while the windowed multipole method divides the resolved resonance energy range into energy windows and uses approximations to evaluate cross sections [1]. Yesilyurt et al. used thirteen parameters to broaden the cross sections at 0 K to any temperature in the range of 77-3200 K [6]. This method was further developed by Liu et al., and the number of parameters was made flexible [11]. Both the TMS method and the method proposed by Yesilyurt et al. require ACE data in their processes, and the latter requires a larger memory for storing the broadening parameters. The range of nuclides for which the windowed multipole method is applicable is limited.

    In this paper, a new on-the-fly Doppler broadening method based on BP neural networks, called hybrid windowed networks (HWN), is proposed. The total memory requirements are reduced by approximately 65% compared with ACE data, at the expense of efficiency over the resolved energy range. BP neural networks are used to evaluate the cross sections. The neural network method, which is a type of machine learning method, simulates the structure of biological neurons by establishing artificial neural networks [12]. By relying mainly on multi-layer neuron nodes with various weights and biases, neural networks can be used to solve complicated problems such as image [13] or audio [14] processing. Neural networks with simple structures also exhibit a satisfactory performance in data fitting [15]. The structures and training parameters of the networks reported herein were carefully determined to meet the needs of cross-sectional training.

    In this method, the resolved resonance energy range is divided into windows. The networks trained for each window can be used independently so that the method can be easily combined with other on-the-fly Doppler broadening methods. The application range of this method can be set by the users to avoid unacceptable losses of efficiency.

    The results confirm the feasibility of evaluating complex cross-sectional parameters through the use of neural networks. The potential of neural networks for memory saving is demonstrated in this work. Neural networks can be used to evaluate some of the parameters in the calculation of other developed on-the-fly Doppler broadening methods. Larger memory savings and higher accuracy might be achieved by incorporating the physics of Doppler broadening into the method.

    The principle of BP neural networks and the HWN method are introduced in Sect. 2. In Sect. 3, the results of numerical tests conducted to verify the effectiveness, accuracy, and efficiency of the method are reported and discussed.

    2 HWN Method

    In the HWN method, the resolved resonance energy range is divided into energy windows based on the number of extreme points in the cross section. In each window, two BP networks are used to calculate the cross section at 200 K and to broaden it to temperatures in the range of 250-1600 K. ACE data are used to train the BP networks. The networks for each window can be used independently; thus, the scope of the method can be set easily.

    Section 2.1 reviews the principle of BP neural networks. Section 2.2 describes the training and calculation process of the two networks within a window. Section 2.3 introduces the division of the resolved energy range and the parameter determination process.

    2.1 BP neural network

    An efficient and memory-saving method for evaluating cross sections is needed for on-the-fly Doppler broadening. Neural networks, which are widely used in machine learning, need to be trained before they are used. Once the parameters of the networks are determined, the networks can be used to calculate the output from the given input. After training, the amount of calculation required is greatly reduced; therefore, this method can be used for on-the-fly Doppler broadening. A combination of the input and expected results is used in the training process, during which the weights and biases of all the neurons are adjusted. The deviation between the outputs of the network and the expected results is gradually reduced during training until the network meets the requirements.

    Many studies on neural networks, such as convolutional neural networks [16] and deep neural networks [17], have been conducted. Such networks have many hidden layers and neurons, as well as complex structures, resulting in low computational efficiencies. Because computing speed is important in Monte Carlo codes, a neural network with a simple structure is more suitable.

    In this study, the back propagation (BP) network was used. The error back propagation algorithm proposed by Rumelhart [18], introduced in the following paragraphs, is used for training this network. As shown in Fig. 1, a BP network consists of an input layer, hidden layers, and an output layer. Each layer contains a certain number of neurons. The simplicity of the structure and calculation steps of this type of network ensures its high computational efficiency.

    Fig. 1 Structure of BP neural network

    The hidden- and output-layer nodes of the BP neural network are, respectively, described by

        O_j = f_1(Σ_i w_ij x_i + b_j),   (1)

        Y_k = f_2(Σ_j w_jk O_j + b_k).   (2)

    In these equations, x_i is the output of the i-th node of the previous layer; O_j is the output of the hidden-layer node; Y_k is the output value of the output-layer node. f_1 and f_2 are the transfer functions of the hidden- and output-layer nodes, respectively; in general, f_1 is a nonlinear function, and f_2 can be a linear or nonlinear function. w_ij is the weight from the previous node to the hidden-layer node, and w_jk is the weight from the node of the last hidden layer to the node of the output layer. b_j and b_k are the biases of the hidden-layer node and the output-layer node, respectively.

    The weights and thresholds of the BP neural network are generally expressed in matrix form in calculations. Equations (1) and (2) are therefore expressed as Eqs. (3) and (4), respectively:

        O_h = f_1(W_pre→h X_pre + B_h),   (3)

        Y = f_2(W_h→o O_h + B_o).   (4)

    The subscript "pre" denotes the previous layer, which can be a hidden layer or the input layer. The subscripts "h" and "o" denote the hidden layer and the output layer, respectively. In Eq. (3), O_h is the output value of the hidden layer, W_pre→h is the matrix of weights from the previous layer to the hidden layer, X_pre is the matrix of output values of the previous layer, and B_h is the matrix of biases of the hidden layer. In Eq. (4), Y is the matrix of output values of the output layer, which is also the output of the network; W and B in Eq. (4) have meanings similar to those in Eq. (3). The bias and output data of each layer are column vectors with lengths equal to the number of nodes in the layer. The weight is a matrix whose first and second dimensions are the number of nodes in the previous and current layers, respectively. The matrix elements correspond to those in Eqs. (1) and (2).
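    As an illustration of the forward calculation in Eqs. (3) and (4), the following is a minimal Python sketch of a one-hidden-layer BP network evaluation; the tansig hidden transfer function and linear output transfer function follow Sects. 2.1 and 2.3, while the function names and the NumPy-based layout are assumptions of this sketch rather than the actual RMC implementation.

        import numpy as np

        def tansig(x):
            # MATLAB-style tansig transfer function: 2/(1+exp(-2x)) - 1 (equivalent to tanh)
            return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

        def forward(x, W_in_h, B_h, W_h_o, B_o):
            # Weights are stored as described in Sect. 2.1: the first dimension is the number
            # of nodes in the previous layer and the second that of the current layer, so the
            # transpose appears in the matrix products below.
            O_h = tansig(W_in_h.T @ x + B_h)   # Eq. (3): hidden-layer outputs
            Y = W_h_o.T @ O_h + B_o            # Eq. (4) with a linear output transfer function
            return Y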

    The neural networks in this study were trained using MATLAB. The output of the neural networks approached the target value over successive iterations. All the weights and biases in the network were adjusted during the training process, while the structure of the network, including the number of hidden layers and nodes in each layer, and the transfer function, remained unchanged.

    The error back propagation algorithm was used for training the network. The error between the result in the training data and the corresponding result calculated by the network for a given input in the training data was propagated back to the parameters of each layer. The process is briefly described as follows.

    The set of input vectors and the corresponding target vectors are denoted as

    The squares of the errors between P and T are summed to define R, which is the error of the network. The factor 1/2 is included to simplify the subsequent derivation.

    The goal of the training is to reduce this error. R is therefore expanded using Eq. (4) into the node parameters of the output layer as follows:

    Equation 8 can be further expanded using Eq. (1) as follows:

    The expansion ends here if the network has one hidden layer. Otherwise, R can be expanded layer by layer. The algorithm is illustrated for a one-hidden-layer network for which the transfer function of the output layer is given by

    The partial derivatives are called the gradient values of the error. The parameters are updated using the gradient values in each training iteration as follows:

    where η is the learning rate, which determines the ratio of the correction to the parameters. The amplitude of a single adjustment is directly proportional to η. The derivation shown in Eqs. (1)-(14) is applied to each individual parameter, and the end result of the derivation can be expressed in the form of a matrix as

    The results of Eqs. (13) and (14) for all the parameters can be collectively expressed as

    The training of the other parameter matrices can be described in a similar manner. The error plays a role in the parameter adjustment of each layer through the result deviation Δ and propagates to each parameter matrix.
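    To make the parameter-update step concrete, the following Python sketch performs one error back-propagation update for a one-hidden-layer network with a tansig hidden layer and a linear output layer, using the sum-of-squared-errors error R with the 1/2 factor described above; the variable names and the plain gradient-descent update are assumptions of this sketch and do not reproduce the MATLAB training routine used in this work.

        import numpy as np

        def tansig(x):
            return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

        def train_step(x, t, W1, b1, W2, b2, eta):
            # Forward pass (Eqs. (3) and (4)); weights stored as (previous layer, current layer)
            o_h = tansig(W1.T @ x + b1)              # hidden-layer outputs
            y = W2.T @ o_h + b2                      # network outputs

            # Result deviation for R = 1/2 * sum((t - y)^2): dR/dy = y - t
            delta_o = y - t
            # Propagate the deviation through the tansig layer: d tansig/dz = 1 - tansig^2
            delta_h = (W2 @ delta_o) * (1.0 - o_h ** 2)

            # Gradient-descent updates with learning rate eta
            W2 -= eta * (o_h @ delta_o.T)
            b2 -= eta * delta_o
            W1 -= eta * (x @ delta_h.T)
            b1 -= eta * delta_h
            return W1, b1, W2, b2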

    2.2 Training and calculation

    This section describes how the two networks were trained and used in an energy window. The method for dividing the resolved energy is introduced in the next section. Both the energy division and the sequential computation using two networks are designed to reduce the complexity of the training target.

    A temperature T0 is chosen as the base temperature for the cross section of a nuclide. The cross section at this temperature, denoted as σ_T0(E), is a function of the neutron energy E. The cross-sectional broadening coefficient K is introduced, defined as the ratio of the cross section at a temperature not less than T0 to the cross section at T0:

        K(E, T) = σ(E, T) / σ_T0(E),   T ≥ T0.   (18)

    K is a function of the two independent variables E and T and is denoted as K(E, T). The features of K(E, T) are much less complex than those of σ(E, T); the ratio of the maximum value of K(E, T) to its minimum value is much lower than that of σ(E, T) within an energy window obtained by the method explained in Sect. 2.3. Therefore, it is expected that better results can be obtained using K(E, T) as the input data for network training.

    The HWN method is similar to the method proposed by Yesilyurt et al. [6], which also uses fitting parameters for on-the-fly Doppler broadening. However, the broadening coefficient K in the proposed method applies to all the energies in the corresponding energy window, whereas the method proposed by Yesilyurt et al. requires individual parameters for each energy point.

    To reduce the memory required, a network denoted as Network 1 is trained to calculate the cross section at temperature T0. The lower and upper bounds of the energy window are denoted as E_min and E_max, respectively. The corresponding relationship between the input and output of the network is expressed as

        σ'_T0(E) = F_1(E),   E_min ≤ E ≤ E_max,   (19)

    where σ'_T0 is the cross section calculated by Network 1 at T0 and F_1 is the mapping relationship between the input energy and output cross section of Network 1. It should be noted that some errors exist between these results and the actual cross section.

    Network 2 is trained to calculate the broadening coefficient K. If K is used directly as the input data for training Network 2 and Eq. (18) is used to calculate the broadened cross section, the error of the calculation result will be the superposition of the errors of the two networks.

    To reduce the error, the K value calculated by the following formula is used as the training target:

        K(E, T) = σ(E, T) / σ'_T0(E).   (20)

    The difference between Eqs. (20) and (18) is that the cross section at T0 in Eq. (18) is replaced by the output of Network 1 in Eq. (20). Therefore, the influence of the accuracy of Network 1 on the final result can be eliminated. However, the complexity of K will increase if the error of Network 1 is extremely large, which may affect the accuracy of Network 2. It is therefore necessary to control the error of Network 1.

    K in Eq. (20) is used as the input data to train Network 2. The temperature range of the network fitting is limited by the input data. A temperature range T_min ≤ T ≤ T_max that meets the actual requirements is selected; T_min should not be lower than T0. The corresponding relationship between the input and output can therefore be expressed as

        K'(E, T) = F_2(E, T),   E_min ≤ E ≤ E_max,  T_min ≤ T ≤ T_max,   (21)

    where K' is the broadening coefficient calculated by Network 2, and F_2 is the mapping relationship between the inputs (energy and temperature) and the output coefficient of Network 2. K' is a function of the independent variables E and T and is denoted as K'(E, T).

    By rewriting Eq. (20), the equation for on-the-fly Doppler broadening is obtained as

        σ(E, T) = K'(E, T) σ'_T0(E).   (22)

    Equation (22) describes the Doppler broadening process. Network 1 is first used to calculate the cross section at the given energy E and the base temperature T0. Then, the cross section is broadened to the temperature T using the broadening coefficient calculated by Network 2.
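    A minimal sketch of this two-step evaluation inside one energy window is given below; net1 and net2 stand for the trained Network 1 and Network 2 of the window containing E and are hypothetical callables introduced only for this sketch.

        def broadened_cross_section(E, T, net1, net2):
            # Network 1 gives the cross section at the base temperature T0, Eq. (19)
            sigma_T0 = net1(E)
            # Network 2 gives the broadening coefficient K'(E, T), Eq. (21)
            k = net2(E, T)
            # Eq. (22): on-the-fly Doppler-broadened cross section
            return k * sigma_T0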

    The values of T0, T_min, and T_max were then determined. The temperature ranges and their corresponding fields of study were summarized by Yesilyurt et al. [6], as shown in Table 1. Monte Carlo codes for reactors should be able to handle benchmarking calculations and reactor operation problems. Thus, T_min and T_max were set to 250 K and 1650 K, respectively. It should be noted that K approaches 1 as T approaches T0, which affects the accuracy of the neural network in the vicinity of T0. As a result, training using data at T = T0 should be avoided, and T0 should not be close to T_min. Considering the lower complexity of the cross-sectional curve at higher temperatures, the base temperature T0 was chosen as 200 K to reduce the difficulty of the network training.

    Table 1 Temperature ranges and their corresponding fields of study

    2.3 Window division and parameter determination

    The cross-sectional curve in the resolved resonance energy range of heavy nuclides at a single temperature is very complex because of the large number of resonance peaks. The addition of the temperature dimension further increases the complexity. Therefore, it is impractical to train the neural network directly using the ACE data over the entire resolved resonance energy range as the input.

    Dividing the resolved resonance energy range into energy windows can significantly reduce the difficulty of training in each window. Because similar processes are carried out for each window, the data should be divided evenly according to the training difficulty. Dividing the resonant peaks equally between the different windows is an easy and effective method. Each window has the same number of maximum points that correspond mainly to the positions of the resonance peaks within the resolved resonance energy range. The edges of the windows are set as the minimum points of the cross-sectional curve.

    Because of the Doppler broadening effect, the resonant peaks are broadened at temperatures above 0 K. At T0 = 200 K, some adjacent resonant peaks are combined into single peaks, leading to a reduction in the number of maximum points. The maximum points, rather than the resonant peaks at 0 K, were therefore used for the division of the cross-sectional curve at T0. Because the distributions of the resonant peaks differ for different heavy nuclides, the number of windows and the positions of the edges determined by this process will also be different.

    The number of maximum points in each window was carefully determined. A smaller number of points per network would result in a larger number of divided windows and a corresponding increase in the memory required. In addition, if a large number of peaks is included, the amount of data in each window will be extremely large, and the accuracy of the networks will decrease. The number of points was set to 15 based on experience and testing. The aforementioned process was applied to the total cross-sectional curve at the base temperature of T0 = 200 K for window division.
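    A minimal sketch of this window-division step is shown below, assuming the total cross section at T0 = 200 K is available as NumPy arrays on a point-wise energy grid; the use of scipy.signal.argrelextrema and the function name are illustrative assumptions, not the actual procedure implemented for this work.

        import numpy as np
        from scipy.signal import argrelextrema

        def divide_windows(energy, xs_total, peaks_per_window=15):
            # Indices of local maxima (resonance peaks at T0) and local minima (candidate edges)
            maxima = argrelextrema(xs_total, np.greater)[0]
            minima = argrelextrema(xs_total, np.less)[0]

            edges = [0]
            for n in range(peaks_per_window, len(maxima), peaks_per_window):
                # Place a window edge at the first local minimum after the n-th maximum
                pos = np.searchsorted(minima, maxima[n - 1])
                if pos >= len(minima):
                    break
                edges.append(minima[pos])
            edges.append(len(energy) - 1)

            # Return the (E_min, E_max) bounds of each window
            return [(energy[a], energy[b]) for a, b in zip(edges[:-1], edges[1:])]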

    Relatively large deviations near the boundaries of the windows were often observed during network training. To avoid this, the windows were extended along both edges, as shown in Fig. 2. The data in the extended window were used for neural network training, whereas the acquired neural network was used only within the range of the original window. The windows next to the unresolved resonance energy range and the thermal energy range were extended only in the direction of the resolved resonance energy range.

    Fig. 2 Extension of window

    The training results were greatly affected by the structure of the neural networks and the training parameters. The network parameters were determined according to the test results.

    The training function used for network training has the advantages of a faster convergence speed and better accuracy. The transfer function of the hidden-layer nodes is the tansig function in MATLAB, which is defined as

        tansig(x) = 2 / (1 + e^(-2x)) - 1.

    A BP network with one hidden layer is sufficient for solving many problems. Networks with multiple hidden layers may show better performance in some cases [19]. A one-hidden-layer BP network was used for Network 1 (described in Sect. 2.2) to reduce the memory requirements. For Network 2, a two-hidden-layer network was used to avoid overfitting in the training of K(E, T).

    Tests were performed to determine the number of neurons in each hidden layer. The number of neurons in the hidden layer of Network 1 was determined first. The total cross-sectional data of U-235 at 0 K were divided into 181 windows, and 19 equally spaced windows were selected. Networks with 80, 90, 100, 110, 120, and 130 nodes in the hidden layer were trained using the data in these windows. Each network was trained with the data from each window 10 times, and the minimum value of the maximum absolute relative error was calculated. The geometric means of the error values over the 19 windows are shown in Fig. 3.

    Fig. 3 Relationship between the number of nodes in the hidden layer and the mean minimum absolute relative error

    In general, the accuracy of the network increased with the number of nodes, and it was at an acceptable level when the number of nodes was 130. As shown in Fig. 3, a further increase in the number of nodes had a limited effect on the improvement of accuracy. Networks with 130 nodes in the hidden layer were used for most windows. A larger number of nodes was used in a few windows in which the relative error was extremely large.

    Training and comparisons were performed to determine the number of nodes in the two hidden layers of Network 2. A reasonable maximum epoch number, training time limit, and accuracy target were set for training. Data from the second window of the U-235 total cross section were used to evaluate networks with different combinations of node numbers. The performance was measured by the percentage of data points at which the absolute relative error with respect to the input data was less than 0.1%. The results are compared in Table 2 and indicate that the best combination has 40 nodes in the first hidden layer and 20 nodes in the second hidden layer.

    3 Numerical tests of HWN method

    The HWN method was implemented in the Reactor Monte Carlo (RMC) code [20] for on-the-fly Doppler broadening of U-235. Data from the ENDF/B-VII.0 database were processed by NJOY [21] with an accuracy of no less than 0.001 to obtain the training ACE data. The resolved resonance energy range was divided into 80 energy windows, and the networks were trained for each window. Cross-sectional data at 25-K intervals in the range of 250-1650 K were used to calculate the training target K(E, T) in each window. The microscopic cross sections and macroscopic results were compared to prove the feasibility and effectiveness of the HWN method.
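    For clarity, the assembly of the training target K(E, T) in a single window can be sketched as follows, assuming the NJOY-processed ACE cross sections are available on the window's energy grid at each broadened temperature; ace_xs_by_T and sigma_T0_net are hypothetical inputs standing for these data and for the output of the already-trained Network 1 (Eq. (20)).

        import numpy as np

        def build_training_set(energy, ace_xs_by_T, sigma_T0_net):
            # ace_xs_by_T: {T: cross sections on `energy`} for T = 250, 275, ..., 1650 K
            # sigma_T0_net: Network 1 cross sections on `energy` at the base temperature
            inputs, targets = [], []
            for T, xs in ace_xs_by_T.items():
                k = xs / sigma_T0_net            # Eq. (20): K = sigma(E, T) / sigma'_T0(E)
                inputs.append(np.column_stack([energy, np.full_like(energy, T)]))
                targets.append(k)
            # (E, T) pairs and the corresponding K values used to train Network 2
            return np.vstack(inputs), np.concatenate(targets)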

    In Sect. 3.1, the accuracy and memory requirements of the HWN method are demonstrated by comparing the calculated microscopic cross sections with the ACE data. In Sect. 3.2, the results of two macroscopic tests performed to verify the accuracy and efficiency of the method are presented.

    Table 2 Performance comparison of networks with different combinations of node numbers

    3.1 Microscopic accuracy and memory requirement comparison

    The HWN method was applied to U-235 and U-238, which are representative heavy nuclides with important resonances. The total cross section, elastic scattering cross section, and absorption cross section of U-235, as well as the total cross section of U-238, were compared with the ACE data at the same temperature. The cross-sectional results and the absolute relative errors with respect to the ACE data are plotted in Fig. 4, where the comparisons are made in energy regions with strong resonances at 300 K and 700 K. It can be observed that the errors are low for most data points and that the evaluated cross sections are accurate at both temperatures. The relative errors fluctuate with energy; these fluctuations are caused by the characteristics of neural networks with many nodes.

    Both the HWN method and the windowed multipole method store the cross-sectional data in new ways that do not rely on continuous-energy point-wise data; their memory requirements are therefore lower than that of the ACE data. The relative errors of the HWN and windowed multipole [1] methods are compared in Table 3. The results show that the maximum relative errors are similar and that the average relative errors of both methods are within 0.1%.

    The theoretical memory consumption of the network parameters and that of the ACE cross-sectional data are compared. In a continuous-energy Monte Carlo code using ACE data, the cross sections are stored as double-precision floating-point numbers, each of which occupies eight bytes of memory. Both the energy grid and the cross-sectional values are needed for cross-sectional evaluation. For the total cross section, elastic scattering cross section, and absorption cross section, four double-precision floating-point numbers, which occupy 32 bytes of memory, are needed for each energy point. In comparison, most of the parameters in the HWN method are stored as double-precision floating-point numbers, and a few parameters are integers. The memory consumption of the two methods is calculated from the number of parameters and the memory space needed for each parameter.
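    As a rough illustration of this bookkeeping, the short sketch below counts the double-precision parameters of one network pair sized as in Sect. 2.3 (130 hidden nodes for Network 1; 40 and 20 nodes for Network 2) and the ACE storage per energy point; it is an order-of-magnitude sketch only and does not reproduce the exact tallies of Table 4.

        def ace_bytes(n_energy_points):
            # Energy grid plus total, elastic scattering and absorption cross sections:
            # four 8-byte double-precision values per energy point (32 bytes)
            return 32 * n_energy_points

        def network_doubles(layer_sizes):
            # Weights plus biases of a fully connected network,
            # e.g. (1, 130, 1) for Network 1 or (2, 40, 20, 1) for Network 2
            return sum(n_prev * n_cur + n_cur
                       for n_prev, n_cur in zip(layer_sizes[:-1], layer_sizes[1:]))

        print(network_doubles((1, 130, 1)))     # 391 doubles, about 3.1 kB
        print(network_doubles((2, 40, 20, 1)))  # 961 doubles, about 7.7 kB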

    The ACE data at 0 K processed by NJOY were used for comparison. If on-the-fly Doppler broadening is not introduced into the Monte Carlo code, point-wise cross sections at more than a dozen temperatures are needed for thermal-hydraulics coupled analysis. The method proposed by Yesilyurt et al. uses parameters that require several times the memory needed for the ACE data at 0 K. Point-wise data are also needed in the TMS method. As a result, the HWN method achieves a significant reduction of the memory requirement compared with the 0 K ACE data.

    The comparison results are listed in Table 4. The results show that for the three cross sections of U-235, the HWN method could reduce the memory requirement for cross-sectional data in the resolved resonance energy range by 66.1% compared with the 0 K ACE data. The memory requirement reduction was 65.9% over the entire energy range.

    The networks for each window can be used independently because the training process of each window is independent. It is therefore easy to set the scope of the method when it is implemented in a Monte Carlo code. It is not necessary to store the 0 K ACE data within the selected windows if the HWN method is used; however, the speed of the cross-sectional evaluation in these windows will decrease. The efficiency drop is described in Sect. 4. Table 4 clearly shows that the resolved resonance energy range accounts for most of the nuclear data. The memory optimization ratio is significant even when the HWN method is used in only some of the windows. Users can decide how much efficiency to compromise for saving memory.

    Fig. 4 (Color online) Comparison of microscopic cross sections at 300 K and 700 K. a Total cross section of U-235. b Absorption cross section of U-235. c Elastic scattering cross section of U-235. d Total cross section of U-238

    Table 3 Comparison of relative errors of HWN and windowed multipole

    3.2 Comparison of macroscopic test results

    The HWN method was used for Doppler broadening in all windows of the resolved resonance energy range in the HWN cases, while ACE data from the processed ENDF/B-VII.0 database at the same temperature were used in the ACE cases. All calculations were performed using an Intel i7-9750H CPU without parallelism.

    3.2.1 Concentric spheres

    The first comparison example consisted of two concentric spheres, as shown in Fig. 5. The radii of the inner and outer spheres were 6.782 cm and 11.862 cm, respectively. The inner sphere was filled with Material 1, and the space between the two spheres was filled with Material 2. The region outside the outer sphere was set to vacuum. The nuclide compositions of the materials are listed in Table 5. The temperature of each material was set to 300 K.

    The calculation parameters are listed in Table 6, and the results are presented in Table 7. The deviation in k_eff is very small, and there was only a slight increase in the calculation time. The accuracy of the HWN method was therefore confirmed.

    3.2.2 PWR assembly

    The second comparison example is the PWR assembly model shown in Fig. 6. The model comprised an infinitely long assembly with a cross section of 21.42 cm × 21.42 cm. There were 264 fuel rods and 25 pipes arranged in a 17 × 17 square lattice. The fuel rods were cylinders with diameters of 0.8192 cm. Each fuel rod was surrounded by a 0.082 mm air layer and a 0.572 mm zirconium wall. The inner and outer diameters of the pipes were 1.138 cm and 1.2294 cm, respectively. The pipes were filled with water, and the material of their walls was zirconium. The remainder of the assembly was filled with water. The nuclide compositions of the fuels are listed in Table 8. The temperature of all the materials was set to 700 K.

    Table 4 Theoretical memory requirements of HWN and ACE data

    Fig. 5 (Color online) Concentric spheres example

    Table 5 Nuclide composition of materials in concentric spheres example

    Table 6 Calculation parameters for the example of concentric spheres

    Table 7 Calculation results of the concentric spheres example

    Fig. 6 (Color online) Schematic diagram of the PWR assembly

    Table 8 Nuclide composition of fuel in the PWR assembly

    Table 9 Calculation parameters of PWR assembly

    The calculation parameters are listed in Table 9, and the results are presented in Table 10. The difference in k_eff between the two cases is within twice the standard deviation, indicating that there is no significant difference. The calculation time of the HWN case is 1.39 times that of the ACE case, which indicates that using the HWN method prolongs the calculation time. The calculation time is shorter if the method is not used in all the windows.

    The neutron flux spectrum of the fuel in a fuel rod adjacent to the central pipe and the neutron flux inside each fuel rod and pipe were calculated for both the ACE and HWN cases. The results are presented in Fig. 7. The blocks in Fig. 7c do not represent the actual geometry; rather, they indicate the corresponding positions.

    Figure 7a shows that the neutron flux spectra of the fuel rods are the same. Figure 7b shows that the relative deviations of the fluxes in most statistical intervals are within three times the standard deviation of the ACE case. Figure 7c compares the neutron fluxes of all the fuel rods in the assembly; there is no significant difference between the ACE and HWN cases. The blank grid squares represent the pipes, whose fluxes are not shown in this figure. A numerical comparison shows that the deviation of most fuel rods and pipes is within twice the standard deviation, and the deviation of the remaining fuel rods and pipes, except for a few, is within three times the standard deviation. The comparison results therefore demonstrate the accuracy of the proposed HWN method.

    4 Conclusion

    In this study, a hybrid windowed networks method for on-the-fly Doppler broadening was proposed and implemented in the RMC code. The resolved resonance energy range is divided into energy windows. In each window, two BP networks are trained to calculate the cross section at the base temperature and to broaden it to any temperature within the range of 250-1600 K. The structures of the neural networks and the training parameters were determined through calculations. Networks for the total cross section, absorption cross section, and elastic scattering cross section of U-235 were trained.

    Table 10 Calculation results of PWR assembly

    Fig. 7 (Color online) Comparison results of the ACE and HWN cases. a Flux spectrum of the fuel rod beside the central pipe. b Relative error and standard deviation of the flux spectrum. c Flux of fuel rods

    Microscopic cross-sectional comparisons and macroscopic tests were performed to verify the utility and effectiveness of the HWN method. A comparison between the cross sections evaluated by this method and the ACE data shows the high accuracy of the proposed method. Macroscopic tests were conducted using RMC to verify the accuracy and efficiency of the method. The calculation time ratio between the HWN method and the ACE data for the PWR assembly calculation was 1.39. If the method is used in all the windows of the resolved resonance energy range, the theoretical memory consumption for the U-235 nuclide can be reduced to 33.9% of the memory needed for ACE cross-sectional interpolation at 0 K. The theoretical memory consumption is reduced to 34.1% of that of the ACE data at 0 K if the ACE data outside the resolved resonance energy range are also included.

    The HWN method can be combined with other on-the-fly Doppler broadening methods or with linear interpolation of point-wise nuclear data. Using the HWN method, users can trade efficiency for memory savings according to their requirements. If the predicted neutron flux is high in some windows, using this method only in the remaining windows can significantly reduce the memory cost without compromising efficiency to a great extent.

    The method proposed in this study should be further studied to improve its effectiveness. The calculation speed may be greatly improved by optimizing the calculation process, particularly the evaluation of the nonlinear transfer function. The accuracy of this method can be further improved by extending the training time or by choosing more suitable training parameters.

    The HWN method is applicable to any heavy nuclide with a resolved resonance energy range. Because the training is performed with point-wise data, the method can also be applied to nuclides for which the windowed multipole method is inapplicable. The method can be applied to more nuclides, especially those that are important in reactor simulations or those for which it is difficult to apply the windowed multipole method.

    The potential of the neural networks used in the HWN method for reducing the memory usage in the evaluation of complex parameters was demonstrated. The introduction of neural networks into other developed on-the-fly Doppler broadening methods may result in greater memory savings and higher accuracy.

    Author contributions All authors contributed to the study conception and design. Data collection and analysis were performed by Tian-Yi Huang and Ze-Guang Li. The programming and tests were strongly supported by Kan Wang, Xiao-Yu Guo, and Jin-Gang Liang. The first draft of the manuscript was written by Tian-Yi Huang, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
