
    Machine learning the nuclear mass

    2021-11-13
    Nuclear Science and Techniques, 2021, Issue 10

    Ze-Peng Gao · Yong-Jia Wang · Hong-Liang Lü · Qing-Feng Li · Cai-Wan Shen · Ling Liu

    Abstract

    Keywords Nuclear mass · Machine learning · Binding energy · Separation energy

    1 Introduction

    The mass of nuclei, which is of fundamental importance for exploring the nuclear landscape and the properties of the nuclear force, plays a crucial role in understanding many issues in both nuclear physics and astrophysics [1-6]. It is known that >7000 nuclei in the nuclear landscape from H (Z = 1) to Og (Z = 118) are predicted to exist according to various theoretical models, while ~3000 nuclei have been found or synthesized experimentally and ~2500 of them have been measured accurately [7, 8]. Exploring the masses of the remaining nuclei is of particular interest to both the experimental and theoretical nuclear physics communities. On the experimental side, facilities such as HIRFL-CSR in China, RIBF at RIKEN in Japan, the cooler-storage ring ESR and SHIPTRAP at GSI in Germany, CPT at Argonne and LEBIT at Michigan State University in the USA, ISOLTRAP at CERN, JYFLTRAP at Jyväskylä in Finland, and TITAN at TRIUMF in Canada are partly dedicated to measuring nuclear masses, especially for nuclei around the drip lines. On the theoretical side, various models have been developed to study the nuclear mass by considering different physics, such as the finite-range droplet model (FRDM) [9, 10], the Weizsäcker-Skyrme (WS) model [11], Hartree-Fock-Bogoliubov mass models [12-14], the relativistic mean-field (RMF) model [15], and relativistic continuum Hartree-Bogoliubov theory [16]. Although tremendous progress has been made from both the experimental and theoretical perspectives, exploring the mass of nuclei around the drip lines remains a great challenge.

    Machine learning (ML), which is a subset of artificial intelligence, has been widely applied to analyzing data in many branches of science, such as physics (see, e.g., Refs. [17-24]). In nuclear physics, a Bayesian neural network (BNN) has been applied to reduce the mass residuals between theory and experiment, and a significant improvement in the mass predictions of several theoretical models was obtained after BNN refinement [25-27]; for example, the root-mean-square deviation (RMSD) of the liquid-drop model (LDM) was reduced from ~3 to 0.8 MeV. Later, the BNN approach was also applied to study nuclear charge radii [28], β-decay half-lives [32], fission product yields [29], and fragment production in spallation reactions [30, 31]. In addition to the BNN, other machine learning or deep learning algorithms have been employed in the study of nuclear reactions (see, e.g., Refs. [33-38]). Focusing on nuclear mass, in addition to the BNN of Refs. [25-27], the Levenberg-Marquardt neural network approach [39], Gaussian processes [42, 43], the decision tree algorithm [44], and the multilayer perceptron (MLP) algorithm [45] have also been applied to refine nuclear mass models.

    Indeed, studying nuclear mass with machine learning algorithms is not a new topic; it can be traced back to at least 1993 (see Refs. [46-48] and references therein). Ref. [46] addressed the capability of multilayer feedforward neural networks to learn the systematics of atomic masses and nuclear spins and parities with high accuracy. This topic has flourished again because of the rapid development of computer science and artificial intelligence. In 2016, the Light Gradient Boosting Machine (LightGBM), a tree-based learning algorithm, was developed by Microsoft [65]. It is a state-of-the-art machine learning algorithm that has achieved better performance in many machine learning tasks. It is therefore interesting to explore whether the LightGBM algorithm can achieve better accuracy than the BNN for predicting nuclear mass. The BNN combines a statistical model with a neural network: it is a neural network with a probability distribution placed over its weights and biases. It can produce uncertainties in its predictions and generate the distribution of the parameters obtained by learning the data; consequently, the overfitting problem in the small-data regime can be partially avoided. LightGBM is a tree-based learning algorithm, and the advantages of its framework include (1) faster training speed and higher efficiency, (2) lower memory usage, (3) better accuracy, (4) support for parallel and graphics-processing-unit learning, and (5) the capability of handling large-scale data. More importantly for the present work, because of its tree-based nature, LightGBM has an excellent degree of explainability, which is important for studying physical problems. It is therefore worth using LightGBM to obtain physical insight into the nuclear mass.

    The remainder of this paper is organized as follows. In Sect. 2, we will introduce the LightGBM model and 10 input features. The predicted binding energy and neutron separation energy obtained with LightGBM are discussed in detail in Sect. 3. The conclusions and outlook are presented in Sect. 4.

    2 LightGBM and the input features

    LightGBM is a recent improvement of the gradient-boosting decision tree that provides an efficient implementation of gradient-boosting algorithms. It is becoming increasingly popular because of its efficiency and its capability of handling large amounts of data. LightGBM grows trees leaf-wise rather than level-wise: after the first partition, the next split is performed only on the leaf node that yields the largest information gain.

    The primary advantage of LightGBM is a change in the training algorithm that speeds up the optimization process dramatically and, in many cases, results in a more effective model. More specifically, to speed up training, LightGBM uses a histogram-based methodology to select the best segmentation: instead of individual values of a continuous variable being used, the values are divided into bins or buckets, which accelerates training and reduces memory usage. In addition, LightGBM employs two novel techniques: gradient-based one-side sampling, which keeps all the instances with large gradients and randomly samples the instances with small gradients, and exclusive feature bundling, which bundles multiple features into a single feature without losing information. Furthermore, as a decision-tree-based algorithm, LightGBM has a high level of interpretability, allowing the results obtained by the machine learning model to be checked against previous knowledge of the nuclear mass. For example, one can determine which feature is most important for predicting the nuclear mass, which will be helpful in further improving nuclear mass models.

    In this work, the binding energies of 2408 nuclei between 16O and 270Ds from the atomic mass evaluation (AME2016) [8] were employed as the training and test datasets. LightGBM was trained to learn the residual between the theoretical prediction and the experimental binding energy, δ(Z,A) = Bth(Z,A) − Bexp(Z,A). Four theoretical mass models were adopted to obtain Bth: the LDM [39], the Duflo-Zucker (DZ) mass model [57], the FRDM [9, 10], and the WS4 model [11]. After LightGBM learns the behavior of the residual δ(Z,A), the binding energy of a nucleus with unknown mass can be obtained as BLightGBM(Z,A) = Bth(Z,A) − δ(Z,A). The RMSD of each of these four theoretical mass models is significantly improved after LightGBM refinement.
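
    This residual-learning scheme can be sketched in a few lines; the numerical values below are hypothetical placeholders, not data from AME2016.

```python
# Sketch of the residual-learning scheme: a model is trained on the
# residual delta(Z,A) = B_th - B_exp, and its prediction is subtracted
# from the theoretical value, B_refined = B_th - delta_pred.
# All numbers below are hypothetical placeholders.

b_th = [128.0, 342.5, 1102.3]   # theoretical binding energies (MeV), hypothetical
b_exp = [127.6, 343.1, 1101.0]  # experimental binding energies (MeV), hypothetical

# Training target for the regressor:
delta = [t - e for t, e in zip(b_th, b_exp)]

# If the regressor reproduced the residuals exactly, refinement would
# recover the experimental values:
b_refined = [t - d for t, d in zip(b_th, delta)]
print(b_refined)
```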

    In the LDM, the nucleus is regarded as an incompressible droplet, and its binding energy contains the volume energy, the surface energy, the Coulomb energy of proton repulsion, the symmetry energy related to the neutron-to-proton ratio, and the pairing energy from the neutron-proton pairing effect. It can be described as follows:
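
    A common LDM parameterization consistent with the five terms just listed, and with the coefficients k_v and k_s appearing in Table 1, can be sketched as follows; the precise form and sign conventions here are our assumption, not necessarily those of Ref. [39]:

```latex
E_B(Z,A) = a_v\left(1 + k_v I^2\right) A
         + a_s\left(1 + k_s I^2\right) A^{2/3}
         + a_c \frac{Z^2}{A^{1/3}}
         + a_p \frac{\delta_{np}}{A^{1/2}},
\qquad I = \frac{N-Z}{A},
```

    with δnp = +1, 0, −1 for even-even, odd-A, and odd-odd nuclei; the signs of the fitted coefficients determine whether each term binds or unbinds.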

    This work mainly aims to find the relationship between the feature quantities of each nucleus and δ(Z,A) with the LightGBM model. For each nucleus, we selected 10 physical quantities (cf. Table 2) as input features, which are thought to be related to nuclear properties [49-56]. Because nuclear binding energy and nuclear structure are linked, we selected four physical quantities related to the shell structure. Among them, Zm and Nm are the shells in which the last proton and neutron are located, with the shell index given by the magic numbers: a proton number between 8, 20, 50, 82, and 126 corresponds to Zm = 1, 2, 3, 4, and a neutron number between 8, 20, 50, 82, 126, and 184 corresponds to Nm = 1, 2, 3, 4, 5. In addition, |Z-m| and |N-m| are the absolute values of the difference between the proton (neutron) number and the nearest magic number, that is, the distance of the proton (neutron) number from the nearest magic number. Npair is an index that encodes the proton-neutron pairing effect: 0 for odd-odd nuclei, 1 for odd-even or even-odd nuclei, and 2 for even-even nuclei.
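
    A minimal sketch of these shell-related features and Npair is given below; the treatment of a nucleon number sitting exactly at a magic number is our assumption, since the text does not state the boundary convention.

```python
# Sketch of the shell-related input features described in the text.
# The boundary convention at the magic numbers themselves is an assumption.
import bisect

Z_MAGIC = [8, 20, 50, 82, 126]
N_MAGIC = [8, 20, 50, 82, 126, 184]

def shell_index(n, magic):
    """Shell of the last nucleon: 1 for 8 < n <= 20, 2 for 20 < n <= 50, ..."""
    return bisect.bisect_left(magic, n)

def dist_to_magic(n, magic):
    """|n - m|: distance to the nearest magic number."""
    return min(abs(n - m) for m in magic)

def n_pair(Z, N):
    """Pairing index: 0 for odd-odd, 1 for odd-even/even-odd, 2 for even-even."""
    return (1 - Z % 2) + (1 - N % 2)

# Example: 132Sn (Z = 50, N = 82) is doubly magic and even-even.
print(shell_index(50, Z_MAGIC), dist_to_magic(50, Z_MAGIC),
      dist_to_magic(82, N_MAGIC), n_pair(50, 82))  # prints: 2 0 0 2
```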

    Table 1 Parameter settings. All units are in MeV, except for kv and ks

    Table 2 Selection of characteristic quantities

    In this study, the value of num_boost_round (maximum number of decision trees allowed) was 50,000, num_leaves (maximum number of leaves allowed per tree) was 10, and the corresponding total number of parameters per tree was 19. max_depth (maximum depth allowed per tree) was -1, and the other parameters were set to the default values of the LightGBM model. Varying these parameters did not significantly alter the results. During the training process, LightGBM generates decision trees based on the relevant information between the features of the training set and δ(Z,A). A total of 10,000 to 25,000 decision trees are generated during training, depending on the training set and learning rate, and the overall model contains 190,000 to 475,000 parameters. Tenfold cross-validation, a technique that evaluates models by partitioning the original dataset into 10 equal-sized subsamples, is also applied to prevent overfitting and selection bias. After training, the model makes predictions on the test set: each nucleus in the test set traverses the decision trees grown during training, each tree contributes to the predicted value according to the feature quantities of the nucleus, and the sum of the contributions of all trees is the final predicted value.
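
    These settings can be collected into a LightGBM parameter dictionary, and the tenfold partition sketched in plain Python. The dictionary keys are standard LightGBM parameter names; the actual call to lightgbm.train is omitted so that the sketch stays dependency-free.

```python
# Hyperparameters quoted in the text, in LightGBM's parameter format.
# num_boost_round is passed to the training function, not the params dict.
import random

params = {
    "objective": "regression",
    "num_leaves": 10,   # maximum number of leaves per tree
    "max_depth": -1,    # -1 means no depth limit
}
num_boost_round = 50_000  # maximum number of trees

def tenfold_indices(n, seed=0):
    """Partition indices 0..n-1 into 10 roughly equal, disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::10] for i in range(10)]

folds = tenfold_indices(2408)
print(len(folds), sum(len(f) for f in folds))  # prints: 10 2408
```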

    3 Results

    3.1 Predictions of the binding energy based on the LDM

    In this section, LightGBM is trained to learn the residual δ(Z,A) between the LDM and the experimental binding energies. For this purpose, the binding energies of the 2408 nuclei between 16O and 270Ds from AME2016 were split into training and test datasets. We note that nuclei with proton (neutron) numbers smaller than 8 and nuclei with relatively large experimental uncertainties in AME2016 were not used. First, the influence of the training-set size on the predicted binding energy was examined, as shown in Fig. 1. We randomly selected 482 nuclei (~20% of the 2408) to constitute the test set. The RMSD of the LDM for these 482 nuclei was ~2.458 MeV; after LightGBM refinement, the RMSD was reduced to 0.496, 0.272, and 0.233 MeV when 482, 1204, and 1926 nuclei, respectively, were used to train LightGBM. This means that LightGBM can capture the missing physics of the LDM and decode the correlation between the input features and the residual, further improving the agreement with the experimental data.

    In addition, the deviation between the experimental and LightGBM-refined LDM predictions is usually larger for nuclei with small proton and neutron numbers, possibly because microstructure effects are strong in light nuclei and because there are fewer light nuclei in the training set. The value of the RMSD fluctuates when the training and test datasets are randomly selected, because δ(Z,A) is large for some nuclei (e.g., nuclei around the magic numbers) and small for others. To evaluate this issue, we randomly split the 2408 nuclei into training and test datasets 500 times at each ratio (4:1, 1:1, and 1:4), and the RMSD and its density distribution are plotted in Figs. 2 and 3. As observed in Fig. 2, the fluctuation in the RMSD is largest when the ratio of training size to test size is 1:4. The RMSD for the 1926 test nuclei predicted by the LightGBM-refined LDM trained on the binding energies of 482 nuclei was ~0.508 ± 0.035 MeV, which is comparable to many physical mass models. With the training dataset built from 1926 nuclei and the remaining 482 nuclei constituting the test dataset, the RMSD was 0.234 ± 0.022 MeV, which is better than that of many physical mass models.
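
    The split-and-evaluate protocol can be sketched as follows; the RMSD helper implements the usual root-mean-square deviation, and repeating the split with different seeds yields an RMSD distribution of the kind shown in Figs. 2 and 3.

```python
# Sketch of one random 4:1 train/test split of the 2408 nuclei and of the
# RMSD used throughout the paper.
import math
import random

def rmsd(pred, true):
    """Root-mean-square deviation between predictions and reference values."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

def random_split(items, train_frac, rng):
    """Shuffle and split a sequence into train/test parts."""
    items = list(items)
    rng.shuffle(items)
    k = int(len(items) * train_frac)
    return items[:k], items[k:]

rng = random.Random(42)
train, test = random_split(range(2408), 0.8, rng)
print(len(train), len(test))  # prints: 1926 482
```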

    Fig. 1 (Color online) Upper panels: locations of training datasets with a 20%, b 50%, and c 80% of the 2408 nuclei from AME2016 in the N-Z plane. Lower panels: the error between the experimental and LightGBM-refined LDM binding energies for the test set (20% of the 2408 nuclei), obtained with the LightGBM-refined LDM trained with d 20%, e 50%, and f 80% of the 2408 nuclei, respectively. σpre is the RMSD of the original LDM, and σpost is the RMSD of the LightGBM-refined LDM

    Fig. 2 (Color online) RMSD for the test data from 100 runs. In each run, the 2408 nuclei were randomly split into training and test datasets at a ratio of 4:1 (blue), 1:1 (orange), and 1:4 (green)

    Figure 4 shows the residual δ(Z,A) obtained from the LDM and the LightGBM-refined LDM. Results from nine runs, each with a randomly selected 80% of the 2408 nuclei as the training set and the remaining 20% as the test set, are displayed. The residual δ(Z,A) obtained with the original LDM is large, especially for nuclei around the magic numbers, owing to the absence of shell effects in the LDM. After LightGBM refinement, δ(Z,A) is considerably reduced, especially for nuclei with mass numbers larger than 60. The performance of LightGBM for nuclei with mass numbers smaller than 60 is not as good as that for heavier nuclei, as already observed in Fig. 1; this could be improved by feeding more relevant features to LightGBM.

    Fig. 3 (Color online) Density distribution of the RMSD between the experimental and predicted binding energies. Results from 500 runs for each set are displayed. Dashed lines denote Gaussian fits to the distributions. The mean values of the RMSD are 0.508, 0.303, and 0.224 MeV, with standard deviations of 0.035, 0.020, and 0.022 MeV, for the three sets with different ratios of training to test size, respectively

    3.2 Predictions of the binding energy based on different mass models

    Fig. 4 (Color online) Residual δ(Z,A) plotted as a function of mass number. Nine runs with random splitting of the 2408 nuclei into training and test groups at a ratio of 4:1 are displayed. Blue and orange points denote δ(Z,A) for the test data obtained with the LDM and the LightGBM-refined LDM, respectively. σpre is the RMSD of the original LDM, and σpost is the RMSD of the LightGBM-refined LDM

    In the previous section, the capability of LightGBM to refine the LDM was demonstrated. In this section, three other popular mass models, namely DZ, WS4, and FRDM, are tested as well. To do so, the residual δ(Z,A) between the experimental binding energy and that obtained from each mass model is fed to LightGBM; the 2408 nuclei are randomly split into training and test groups at a ratio of 4:1, with 500 runs for each mass model. The distribution of the RMSD on the training and test datasets is shown in Fig. 5. In Table 3, the performance of several ML-refined mass models is compared. The typical RMSD on the training dataset is only ~0.05-0.1 MeV, which is, to the best of our knowledge, the smallest among mass models. The typical RMSD on the test dataset is ~0.2 MeV, which is also smaller than that of the others. In general, significant improvements of approximately 90%, 65%, 40%, and 60% after LightGBM refinement of the LDM, DZ, WS4, and FRDM, respectively, were obtained, indicating the strong capability of LightGBM to improve theoretical nuclear mass models. In addition, other approaches, such as the radial basis function (RBF) [11], two combinatorial radial basis functions (RBFs) [59], kernel ridge regression (KRR) [40], and the RBF approach with odd-even corrections (RBFoe) [41], can also be used to improve the performance of nuclear mass models.

    In principle, if there existed a function that could precisely predict nuclear mass and LightGBM could find it by learning the mapping between the mass and the input features, the improvements of different mass models after LightGBM refinement would be the same. However, the correlations between nuclear mass and the input features are very complicated. LightGBM can capture some of the missing ingredients of these mass models, thereby improving their performance in predicting nuclear mass. In general, the improvement in the RMSD is significant for nuclear mass models with large RMSD values and relatively slight for models with small RMSD values.

    Very recently, AME2020 was published; thus, it is interesting to see whether the LightGBM-refined mass models also work well for newly measured nuclei appearing in the AME2020 mass evaluation. A comparison of the binding energies obtained with the LDM, DZ, WS4, and FRDM and with the LightGBM-refined mass models for the 66 newly measured nuclei that appeared in AME2020 is illustrated in Fig. 6. The RMSD values of the original mass models for these newly measured nuclei were 2.468, 0.821, 0.350, and 0.778 MeV for the LDM, DZ, WS4, and FRDM, respectively. After LightGBM refinement, the RMSDs of the four mass models were significantly reduced to 0.452, 0.320, 0.222, and 0.292 MeV.

    By increasing the size of the training set, the model can learn more information, and the RMSD is reduced. However, the uncertainty of the RMSD increases with the percentage of the training set, because the fewer the tested nuclei, the larger the fluctuations. In addition, the RMSD for the 66 newly measured nuclei that appeared in AME2020 is also displayed in Fig. 7. When the percentage of the training set was larger than 90%, the RMSD for the 66 newly measured nuclei slightly increased, indicating that overfitting started to occur. However, this overfitting problem is not severe because LightGBM has a strong ability to prevent overfitting. In the present work, to avoid both a large RMSD value and a large uncertainty in the RMSD, the percentage of the training set was chosen as 80% in most cases.

    Fig. 5 (Color online) Density distribution of the RMSD for the training and test datasets. Results from 500 runs for each mass model (LDM, DZ, WS4, and FRDM) are displayed. Dashed lines denote Gaussian fits to the distributions. The corresponding mean values and standard deviations are listed in Table 3. In each run, the 2408 nuclei were randomly split into training and test datasets at a ratio of 4:1

    Fig. 6 (Color online) Difference between the theoretical and experimental binding energies (red horizontal line) obtained using the LDM, DZ, WS4, and FRDM (open diamonds) and the LightGBM-refined mass models (solid squares). The results for the 66 newly measured nuclei that appeared in the AME2020 mass evaluation are displayed. σpre and σpost denote the RMSD of the original and LightGBM-refined mass models for the newly measured nuclei, respectively. The error of the predictions obtained using the LightGBM-refined mass models is the standard deviation of the predicted binding energy, obtained by running LightGBM 500 times with the AME2016 data randomly split into training and test sets at a ratio of 4:1

    Fig. 7 (Color online) RMSD of the LightGBM-refined LDM plotted as a function of the percentage of the training set

    3.3 Extrapolation of neutron separation energy

    Single- and two-neutron separation energies are of particular interest because they provide information relevant to shell and subshell structures, nuclear deformation, pairing effects, and the boundary of the nuclear landscape. They can be calculated using the following formulas:
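
    In terms of the (positive) binding energy B(Z, N), these are the standard definitions:

```latex
S_n(Z,N)    = B(Z,N) - B(Z,N-1), \\
S_{2n}(Z,N) = B(Z,N) - B(Z,N-2).
```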

    Table 4 RMSD of Sn and S2n obtained with the LightGBM-refined mass models. All values are in units of MeV

    The good performance of the LightGBM-refined mass models in predicting nuclear binding energies has been shown; it is interesting to see whether the single- and two-neutron separation energies can also be reproduced well on the same footing. Based on the calculations of the LightGBM-refined models, the RMSDs of Sn for 2255 nuclei and of S2n for 2140 nuclei are displayed in Table 4. Figure 8 compares the single-neutron separation energies of the Ca, Zr, Sn, and Pb isotopic chains given by different theoretical models with the experimental data from AME2016. All predictions are in good agreement with the experimental data where data exist, while discrepancies appear as the neutron number increases. The general trend of Sn as a function of neutron number obtained with the LightGBM-refined LDM and WS4 is similar to that obtained with other nuclear mass models; for example, the odd-even staggering is also observed.
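
    Computing Sn and S2n from a binding-energy table follows directly from the standard definitions Sn = B(Z,N) − B(Z,N−1) and S2n = B(Z,N) − B(Z,N−2); the values in the table below are hypothetical placeholders, not AME2016 data.

```python
# Separation energies from a binding-energy table keyed by (Z, N).
# The binding energies below are hypothetical placeholders (MeV).
B = {
    (50, 80): 1095.0,
    (50, 81): 1100.4,
    (50, 82): 1108.1,
}

def s_n(Z, N, table):
    """Single-neutron separation energy: B(Z,N) - B(Z,N-1)."""
    return table[(Z, N)] - table[(Z, N - 1)]

def s_2n(Z, N, table):
    """Two-neutron separation energy: B(Z,N) - B(Z,N-2)."""
    return table[(Z, N)] - table[(Z, N - 2)]

print(round(s_n(50, 82, B), 1), round(s_2n(50, 82, B), 1))  # prints: 7.7 13.1
```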

    Table 3 Comparison of the RMSD for the ML-refined mass models. σpre denotes the RMSD of the original mass models, and σpost is the result obtained using the LightGBM-refined mass models. All values are in units of MeV

    Fig. 8 (Color online) Single-neutron separation energies of the Ca, Zr, Sn, and Pb isotopic chains given by different models. The results obtained using the LightGBM-refined LDM and WS4 are compared with those from the FRDM and WS4, as well as with recent theoretical calculations by Xia et al. [16], Ma et al. [59], and Yu et al. [60]

    Fig. 9 (Color online) Two-neutron separation energies of neutron-rich nuclei on the Ca, Ti, Pm, and Sm isotopic chains given by different models. The red and green dots represent the experimental data from AME2016 and the latest measurements from Refs. [61, 62], respectively. The neutron numbers of the predicted drip-line isotopes (Ca and Ti) for each nuclear mass model are also listed in the figure. Note that S2n obtained with WS4 and the LightGBM-refined WS4 almost completely overlap

    The latest experimental measurements of the two-neutron separation energies of four elements (Ca, Ti, Pm, Sm) are compared with various theoretical model calculations in Fig. 9. The newly measured S2n are well reproduced by the LightGBM-refined LDM and WS4 models; in particular, S2n obtained with the LightGBM-refined LDM is much closer to the experimental data than that obtained with the LDM. For example, the sharp decrease in S2n around the magic number cannot be reproduced by the LDM, while this issue is fixed after LightGBM refinement. The good performance of the LightGBM-refined mass models on both Sn and S2n again indicates the strong capacity of LightGBM for refining nuclear mass models.

    3.4 Prediction of the residual δ(Z,A)=BLDM-Bexp

    It is known that the residual δ(Z,A) = BLDM − Bexp, that is, the difference between the binding energy calculated with the LDM and the experimental one, is usually large in the vicinity of the magic numbers, and δ(Z,A) can reflect quantum-mechanical shell effects. It is always interesting to know whether new magic numbers exist for exotic nuclei. For this purpose, the residual δ(Z,A) predicted by LightGBM after learning the pattern of δ(Z,A) for the nuclei appearing in the AME2016 database is displayed in Fig. 10. As can be seen, δ(Z,A) for Z = 126 and N = 184 nuclei is relatively large, which might suggest that they are magic numbers. However, it should be noted that they were used in the input features, that is, in |Z-m| and |N-m|. After removing the four magic-number-related features (i.e., Zm, Nm, |Z-m|, and |N-m|) from the training, the predicted results are displayed in the upper subfigure of Fig. 10. It is interesting to note that δ(Z,A) has local minima around (Z = 20, N = 50), (Z = 28, N = 82), and (Z = 50, N = 126), but there is no shell structure in δ(Z,A) for nuclei with Z ≥ 82 or N ≥ 126. This is understandable because ML algorithms have a strong ability to address interpolation tasks but become less efficient and reliable for extrapolation, particularly for samples far from the training samples. In this context, one does not expect new magic numbers to be discovered by ML algorithms. Nevertheless, high accuracy in predicting the masses of nuclei near those with experimental data has been demonstrated, for example, by the RMSD of the LightGBM-refined mass models for the 66 newly measured nuclei that appeared in AME2020.

    Fig. 10 (Color online) Residual δ(Z,A) = BLDM − Bexp predicted with LightGBM. The 10 input features are listed in Table 2. The training set consisted of 80% of the nuclear masses appearing in the AME2016 database, which are displayed in the area encircled in red. The neutron drip lines obtained using the LDM, WS4, LDM_lgb, and Ma [59] are shown with different symbols. The upper subfigure shows the residual δ(Z,A) predicted with LightGBM using only six input features, that is, excluding the four magic-number-related features

    Fig. 11 (Color online) Importance ranking of the input features obtained with the SHAP package. Each row represents a feature, and the x-axis is the SHAP value, which shows the importance of a feature for a particular prediction. Each point represents a nucleus, and the color represents the feature value (red being high and blue being low)

    3.5 Interpretability of the model

    Fig. 12 (Color online) Upper panel: residual δ(Z,A) obtained from the LDM plotted against |N-m|, colored by neutron number. Each point represents a nucleus, and the color represents the number of neutrons in the nucleus. Lower panel: same as the upper panel, but with the SHAP value plotted instead of the residual δ(Z,A)

    As a decision-tree-based algorithm, one advantage of LightGBM is its excellent degree of explainability. This is important because a physicist expects an ML algorithm not only to perform well in refining nuclear mass models but also to provide some of the underlying physics absent from the original models. Understanding what happens when ML algorithms make predictions could help us further improve our knowledge of the relationship between the input feature quantities and the predicted value. One possible way to understand how the LightGBM algorithm arrives at a particular prediction is to identify the most important features driving the model. For this purpose, SHapley Additive exPlanations (SHAP) [63], one of the most popular feature attribution methods, was applied to obtain the contribution of each feature value to the prediction. Figure 11 illustrates the importance ranking of the 10 input features: the top row is the most important feature, while the bottom row is the least relevant feature for predicting the residual δ(Z,A) between the experimental and theoretical binding energies. The importance ranks of the input features differ for different mass models. Because shell effects are not included in the LDM, the residual δ(Z,A) around the magic numbers is usually larger (as can also be seen in Fig. 4); as a result, |N-m| and |Z-m| are more important for predicting δ(Z,A) between the LDM calculation and the experimental data. To illustrate the meaning of the SHAP value, the residual δ(Z,A) obtained from the LDM and the corresponding SHAP values are shown in Fig. 12. In the upper panel of Fig. 12, around the magic numbers (i.e., when |N-m| is close to 0), a larger difference between the LDM-calculated and experimental binding energies exists, especially for nuclei with larger neutron numbers. Very similar behavior of the SHAP value can be seen in the lower panel.
    This implies that by adding a |N-m|-related term to the LDM, the accuracy of the LDM for calculating the nuclear binding energy could be improved to some extent. For the FRDM, the neutron number N is the most relevant feature, and the SHAP value for smaller N is usually larger. Indeed, the observation that the residual δ(Z,A) is larger for nuclei with smaller neutron numbers was already made in the FRDM paper (Fig. 6 of Ref. [64]). In addition, Npair, Zm, and Nm are three of the least relevant features for predicting the residual δ(Z,A); we have checked that the RMSD increases by only ~5% when they are excluded. It is worth noting that the distribution of Bth − Bexp for nuclei with different Zm (Nm) is slightly different, implying that shell corrections to nuclear masses differ in different regions, which deserves further study with the inclusion of more relevant features.

    4 Conclusion and outlook

    To summarize, several features were fed into the LightGBM algorithm to study the residual δ(Z,A) between theoretical and experimental binding energies, and it was found that LightGBM can mimic the patterns of δ(Z,A) with high accuracy and thereby refine theoretical mass models. In this study, significant reductions in the RMSD, of approximately 90%, 65%, 40%, and 60% after LightGBM refinement of the LDM, DZ, WS4, and FRDM, respectively, were obtained, indicating the strong capability of LightGBM to improve theoretical nuclear mass models. In addition, the RMSD of the various mass models with respect to the 66 newly measured nuclei that appeared in AME2020 (compared with AME2016) was reduced to the same level. Furthermore, the single- and two-neutron separation energies obtained with the LightGBM-refined mass models are in good agreement with the newly measured experimental data. Using the SHAP package, the most relevant input features for predicting the residual δ(Z,A) for each mass model were determined, which may provide guidance for the further development of nuclear mass models.

    The good performance of the machine learning method in refining nuclear mass models gives us a new tool with which to investigate other properties of nuclei of interest, such as superheavy nuclei, halo nuclei, and nuclei around the drip lines. In addition, with the development of interpretable machine learning methods, more physical hints can be obtained, thereby improving our understanding of present nuclear models.

    Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/s41365-021-00956-1.

    Acknowledgements Fruitful discussions with Prof. Jie Meng, Prof. Hong-Fei Zhang, Prof. Yu-Min Zhao, and Dr. Na-Na Ma are greatly appreciated. The authors acknowledge support from the computing server C3S2 at Huzhou University. The mass table for the LightGBM-refined mass models is available in the Supplemental Material.

    Author Contributions All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by Ze-Peng Gao, Yong-Jia Wang, Hong-Liang Lü, Qing-Feng Li, Cai-Wan Shen, and Ling Liu. The first draft of the manuscript was written by Ze-Peng Gao, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. The contributions of Hong-Liang Lü are non-Huawei achievements.

国产一卡二卡三卡精品| 亚洲av日韩在线播放| 老司机午夜十八禁免费视频| 777久久人妻少妇嫩草av网站| 黑人欧美特级aaaaaa片| www日本在线高清视频| 成人国产一区最新在线观看 | 精品久久蜜臀av无| 国产av国产精品国产| 一区二区三区激情视频| 久热这里只有精品99| 高清黄色对白视频在线免费看| 一级毛片 在线播放| 亚洲精品av麻豆狂野| 免费看av在线观看网站| 久久天堂一区二区三区四区| 亚洲av欧美aⅴ国产| 国产精品人妻久久久影院| 国语对白做爰xxxⅹ性视频网站| 亚洲专区中文字幕在线| www.999成人在线观看| 一区二区三区精品91| 欧美精品av麻豆av| 丝袜脚勾引网站| 久久狼人影院| 永久免费av网站大全| 777米奇影视久久| 亚洲欧美一区二区三区久久| 国产爽快片一区二区三区| 老汉色av国产亚洲站长工具| 亚洲精品自拍成人| 国产片内射在线| 亚洲专区国产一区二区| 成人午夜精彩视频在线观看| 伊人亚洲综合成人网| 大片电影免费在线观看免费| 老司机亚洲免费影院| 考比视频在线观看| 这个男人来自地球电影免费观看| 多毛熟女@视频| 一级黄色大片毛片| 亚洲,欧美,日韩| 欧美精品一区二区免费开放| 午夜福利视频精品| videos熟女内射| 黄频高清免费视频| 精品久久蜜臀av无| 黄色视频在线播放观看不卡| 蜜桃在线观看..| 日本av免费视频播放| 看免费av毛片| 成年av动漫网址| 波野结衣二区三区在线| 可以免费在线观看a视频的电影网站| 亚洲自偷自拍图片 自拍| 亚洲国产看品久久| 十八禁人妻一区二区| 久久99热这里只频精品6学生| 蜜桃国产av成人99| 国产三级黄色录像| 色婷婷av一区二区三区视频| 一本大道久久a久久精品| av线在线观看网站| 你懂的网址亚洲精品在线观看| 可以免费在线观看a视频的电影网站| 免费在线观看视频国产中文字幕亚洲 | 亚洲欧美成人综合另类久久久| 校园人妻丝袜中文字幕| 婷婷成人精品国产| 午夜福利,免费看| 欧美日韩视频精品一区| 欧美少妇被猛烈插入视频| 在现免费观看毛片| 天天影视国产精品| 男女之事视频高清在线观看 | 色视频在线一区二区三区| 91字幕亚洲| 女性生殖器流出的白浆| 免费久久久久久久精品成人欧美视频| 亚洲成人免费电影在线观看 | 啦啦啦在线观看免费高清www| 日韩 亚洲 欧美在线| 人妻一区二区av| 精品国产乱码久久久久久小说| 午夜日韩欧美国产| 丝袜喷水一区| 免费看十八禁软件| 91九色精品人成在线观看| 亚洲国产看品久久| 久9热在线精品视频| 美女国产高潮福利片在线看| 亚洲精品一二三| 久久久精品国产亚洲av高清涩受| 一区二区三区乱码不卡18| 男女无遮挡免费网站观看| 免费在线观看黄色视频的| 欧美 亚洲 国产 日韩一| 成在线人永久免费视频| 国产精品一区二区免费欧美 | 少妇被粗大的猛进出69影院| 日韩中文字幕视频在线看片| 另类精品久久| 18在线观看网站| 久久久久国产精品人妻一区二区| 久久人妻熟女aⅴ| 欧美久久黑人一区二区| 91成人精品电影| 在线观看免费日韩欧美大片| 丰满少妇做爰视频| 国产欧美日韩精品亚洲av| 国产高清国产精品国产三级| 国产一区有黄有色的免费视频| 久9热在线精品视频| 国产精品熟女久久久久浪| 亚洲,欧美精品.| 国产亚洲欧美精品永久| 悠悠久久av| 久久久亚洲精品成人影院| 人妻人人澡人人爽人人| 男女床上黄色一级片免费看| 久久久国产一区二区| 婷婷成人精品国产| 欧美xxⅹ黑人| 午夜91福利影院| 亚洲一码二码三码区别大吗| 日本av手机在线免费观看| 国产午夜精品一二区理论片| 男人操女人黄网站| 美女国产高潮福利片在线看| 宅男免费午夜| 亚洲一卡2卡3卡4卡5卡精品中文| 久久精品国产综合久久久| 搡老岳熟女国产| 免费观看a级毛片全部| 女性生殖器流出的白浆| 亚洲专区国产一区二区| 国产高清不卡午夜福利| av一本久久久久| 真人做人爱边吃奶动态| 精品国产一区二区三区四区第35| 中文欧美无线码| 婷婷色综合www| 国产成人一区二区在线| 中文字幕色久视频| 老汉色∧v一级毛片| 国产伦人伦偷精品视频| 高清不卡的av网站| 1024香蕉在线观看| 午夜老司机福利片| 777久久人妻少妇嫩草av网站| 久久久久精品人妻al黑| 精品久久蜜臀av无| 亚洲精品国产区一区二| 亚洲国产精品一区二区三区在线| 亚洲欧美日韩高清在线视频 | 麻豆av在线久日| av国产久精品久网站免费入址| 熟女av电影| 国产精品亚洲av一区麻豆| 国产亚洲一区二区精品| 丝袜美腿诱惑在线| 亚洲国产中文字幕在线视频| 
国产爽快片一区二区三区| 国产精品 国内视频| 真人做人爱边吃奶动态| 80岁老熟妇乱子伦牲交| 少妇人妻久久综合中文| 久热爱精品视频在线9| 国产成人精品在线电影| 成人影院久久| 国产亚洲av片在线观看秒播厂| 亚洲色图 男人天堂 中文字幕| 国产老妇伦熟女老妇高清| 亚洲中文av在线| 免费av中文字幕在线| 男女午夜视频在线观看| 午夜av观看不卡| 97人妻天天添夜夜摸| 夫妻性生交免费视频一级片| 精品第一国产精品| 在线观看一区二区三区激情| 国产极品粉嫩免费观看在线| 少妇 在线观看| 你懂的网址亚洲精品在线观看| 黄色怎么调成土黄色| 成年动漫av网址| 免费不卡黄色视频| 超碰97精品在线观看| 亚洲av日韩在线播放| 啦啦啦视频在线资源免费观看| 日韩欧美一区视频在线观看| 国产一区二区在线观看av| 黄网站色视频无遮挡免费观看| 国产精品久久久久久人妻精品电影 | 人人妻人人添人人爽欧美一区卜| 欧美在线黄色| 菩萨蛮人人尽说江南好唐韦庄| 天堂俺去俺来也www色官网| 久久99一区二区三区| 美女福利国产在线| 人妻一区二区av| 国产亚洲午夜精品一区二区久久| 91成人精品电影| 中文字幕人妻熟女乱码| 亚洲精品第二区| 高清黄色对白视频在线免费看| 国产成人精品久久二区二区免费| 人人妻人人爽人人添夜夜欢视频| 国产激情久久老熟女| 久久久国产欧美日韩av| 日韩伦理黄色片| 性色av一级| 亚洲激情五月婷婷啪啪| 女性被躁到高潮视频| 宅男免费午夜| 精品国产乱码久久久久久小说| 欧美成人午夜精品| 国产福利在线免费观看视频| 久久精品国产综合久久久| 欧美人与善性xxx| 欧美日韩福利视频一区二区| av不卡在线播放| 久久女婷五月综合色啪小说| 最新在线观看一区二区三区 | 精品一区二区三卡| 两人在一起打扑克的视频| 每晚都被弄得嗷嗷叫到高潮| 免费高清在线观看视频在线观看| a 毛片基地| 在线看a的网站| 国产在视频线精品| 看十八女毛片水多多多| 日本av手机在线免费观看| 国产精品国产av在线观看| 国产男女内射视频| 天天躁狠狠躁夜夜躁狠狠躁| 别揉我奶头~嗯~啊~动态视频 | 成年人午夜在线观看视频| 大陆偷拍与自拍| 精品高清国产在线一区| 99re6热这里在线精品视频| 久久午夜综合久久蜜桃| 成年动漫av网址| xxx大片免费视频| 丁香六月天网| 操美女的视频在线观看| 日韩制服丝袜自拍偷拍| 最黄视频免费看| 婷婷色av中文字幕| 亚洲五月婷婷丁香| 又紧又爽又黄一区二区| 欧美亚洲 丝袜 人妻 在线| 每晚都被弄得嗷嗷叫到高潮| 亚洲av男天堂| 日韩电影二区| 久久热在线av| 国产欧美亚洲国产| 咕卡用的链子| 国产成人精品无人区| 最近手机中文字幕大全| 深夜精品福利| 久久热在线av| 亚洲av美国av| 亚洲情色 制服丝袜| 国产又色又爽无遮挡免| 中国国产av一级| 日本av免费视频播放| 午夜精品国产一区二区电影| 国产成人免费观看mmmm| 成人午夜精彩视频在线观看| 久久热在线av| 99精国产麻豆久久婷婷| 国产精品一区二区精品视频观看| 后天国语完整版免费观看| 午夜免费成人在线视频| 日本猛色少妇xxxxx猛交久久| 久久亚洲国产成人精品v| 亚洲色图综合在线观看| 你懂的网址亚洲精品在线观看| 在线 av 中文字幕| 久久久亚洲精品成人影院| 国产无遮挡羞羞视频在线观看| 在线观看免费午夜福利视频| 国产欧美日韩一区二区三区在线| 国产成人91sexporn| 丝袜喷水一区| 欧美中文综合在线视频| 亚洲视频免费观看视频| 校园人妻丝袜中文字幕| 成在线人永久免费视频| 国产精品久久久久久精品电影小说| 18在线观看网站| 好男人电影高清在线观看| 久久久久视频综合|