Impact of H in H 2 O thermal scattering data on criticality calculation: uncertainty and adjustment

In this paper, the impact of the thermal scattering data for H in H 2 O on criticality benchmarks is estimated, based on variations of the CAB model parameters. The Total Monte Carlo method for uncertainty propagation is applied to 63 k eff criticality cases sensitive to H in H 2 O. It is found that their impact is of a few tens of pcm, up to a maximum of 300 pcm, with highly non-linear distributions. In a second step, an adjustment of these thermal scattering data is proposed, leading to a better agreement between calculated and experimental k eff values, following an increase of the scattering contribution. This work falls within the global approach of combining advanced theoretical modelling of nuclear data with possible subsequent adjustment, in order to improve the performance of a nuclear data library.


Introduction
The evaluation of neutron-induced nuclear data, e.g. cross sections, fission yields, spectra, is a continuous process, mainly because the nuclear reaction theory is still improving, new measurements are frequently available, and because there is a need from the user community. Examples of recently evaluated nuclear data are the Nuclear Energy Agency JEFF-3.3 library [1], the US ENDF/B-VIII.0 library [2], the Japanese JENDL-4.0 library [3] and finally the TENDL libraries [4]. The need from users might concern better estimations of current reactor quantities (boron letdown curves, power maps) [5][6][7], spent fuel quantities (decay heat, source terms) [8][9][10][11][12], or quantities for advanced systems [13][14][15]. A commonly-used neutronics parameter is the neutron multiplication factor, or k eff , which plays for instance a crucial role in spent fuel management and in burnup credit [16][17][18]. The validation of existing simulation codes and the assessment of the uncertainties from various sources is a necessity, often imposed by national and international regulations (see Refs. [19][20][21] and references within). In this context, the impact of the nuclear data uncertainties needs to be evaluated on a regular basis, following the release of new nuclear data libraries and their associated covariance information.
This is precisely the first goal of this paper: providing uncertainties for a number of criticality-safety benchmarks [22], due to the H in H 2 O thermal scattering data (also called Thermal Scattering Laws, or TSL). These TSL are from the latest evaluations based on the CAB model [23], and are included in the ENDF/B-VIII.0 and TENDL-2021 libraries. Regarding their uncertainties, reference [24] is used as a starting point, with a number of modifications (see Sect. 2). The uncertainty propagation method is the Total Monte Carlo method [25], leading to k eff uncertainties (later called ∆k eff ), corresponding to one standard deviation of the observed probability density functions (pdf), as well as to higher moments. Similar works on uncertainty propagation and covariance generation for TSL can be found in references [26,27]. In the first reference, the TMC method (and fast TMC) was applied, whereas the second reference defines TSL covariance matrices, with an application to UO 2 and MOX fuels.
The second goal of this paper is to propose adjusted TSL, or at least to demonstrate the feasibility of such an adjustment, based on the results of the TMC method. This possibility was already explored almost ten years ago and presented in reference [28] for TSL based on a different evaluation model. The method, called the Petten method, simply proposes the best performing sampled (random) TSL, based on the uncertainty propagation for criticality benchmarks, as the new recommended evaluation. This method was also applied to Cu and 239 Pu [29,30]. In this approach, the best performing sampled TSL is the one leading to the smallest χ 2 . Other adjustment methods have been proposed since, apart from the traditional GLLS method (Generalized Linear Least Squares): methods based on non-linear Monte Carlo parameter search, see for instance references [31][32][33][34][35][36], and the Unified Monte Carlo method [37]; see references [38,39] for a comparison and review of GLLS and the Monte Carlo-based methods. Naturally, based on the first part of this study (uncertainty propagation), we calculate χ 2 values for each sampled TSL, and then propose the best performing one as an example of adjustment.
Details of the study are provided in the next sections: first a summary of the TSL sampling method, followed by uncertainty values for a number of criticality benchmarks. Additionally, other moments are obtained, such as the skewness and kurtosis, indicating a non-linear propagation of uncertainties from the TSL to k eff . Finally, the last section analyzes the χ 2 distribution, draws conclusions and proposes a potential adjustment of the TSL.

Variation of thermal scattering data
The same approach and sampled TSL files were presented and used in reference [40]. It is based on the sampling of the parameters of the CAB model [23]. The nominal (unperturbed) parameters of this model can be found in reference [41] and, as mentioned, such TSL were included in some of the modern nuclear data libraries. The variations of the model parameters are derived from reference [42], with a number of modifications, following the recommendations of the authors. Such variations are as follows:
- σ s (elastic cross section): ± 0.2%,
- ∆ (scaling factor): ± 10%,
- C (diffusion coefficient): ± 0.5%,
- ω t (translational weight): ± 15%,
- ω c (continuous spectrum weight), ω 1 and ω 2 (1st and 2nd oscillator weights): correlated with ω t based on equation (1),
- E 1 (first oscillator energy): ± 5%, and
- E 2 (second oscillator energy): ± 30%.
For the oscillator weights ω c , ω 1 and ω 2 , a full correlation with ω t is considered, following equation (1). Such an approach follows the recommendation of reference [24]. All variations follow Normal distributions, and the quoted uncertainties correspond to one standard deviation (1σ).
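Equation (1) itself is not reproduced in this excerpt. A plausible form, stated here purely as an assumption based on the usual normalisation of frequency-spectrum weights in LEAPR-type models, is that the non-translational weights share the remainder of the total weight:

```latex
\omega_c + \omega_1 + \omega_2 = 1 - \omega_t
```

Under this assumption, when ω t is perturbed, the three other weights are rescaled together (hence the full correlation) so that the total weight of the frequency spectrum remains unity.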
As in reference [40], 3000 sampled TSL files are produced and processed with NJOY into ACE files [43].
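The sampling step above can be sketched as follows. This is a minimal illustration, not the production chain: the parameter names follow the text, the perturbations are multiplicative Normal factors centred on 1, and the correlated weights (ω c , ω 1 , ω 2 ) are left out since equation (1) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2021)

# Relative 1-sigma uncertainties of the independently varied CAB
# parameters (Sect. 2 of the text).
rel_sigma = {
    "sigma_s": 0.002,  # elastic cross section: +/- 0.2 %
    "Delta":   0.10,   # scaling factor: +/- 10 %
    "C":       0.005,  # diffusion coefficient: +/- 0.5 %
    "omega_t": 0.15,   # translational weight: +/- 15 %
    "E1":      0.05,   # first oscillator energy: +/- 5 %
    "E2":      0.30,   # second oscillator energy: +/- 30 %
}

def sample_tsl_parameters(n_samples=3000):
    """One row per sampled TSL file: each independent parameter receives
    a Normal multiplicative perturbation factor centred on 1.  The
    remaining spectrum weights (omega_c, omega_1, omega_2) are not
    sampled here: they would be derived from omega_t through Eq. (1)."""
    keys = list(rel_sigma)
    sigmas = [rel_sigma[k] for k in keys]
    factors = rng.normal(1.0, sigmas, size=(n_samples, len(keys)))
    return keys, factors

keys, f = sample_tsl_parameters()
# Empirical relative spread of omega_t should come out close to 15 %.
print(round(f[:, keys.index("omega_t")].std(), 2))
```

Each row of `f` would then multiply the nominal CAB parameter set before a LEAPR/NJOY run producing one random ACE file.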

Results
The impact of the H in H 2 O thermal scattering data is here assessed on criticality-safety benchmarks, obtained from the ICSBEP handbook [22]. This handbook contains a large number of benchmarks (simulation input files and experimental quantities such as k eff ), but only some of them are sensitive to TSL. Those benchmarks are mostly thermal ones (the neutron flux is mainly in the thermal energy region), but not only. In the following, a total of 63 benchmarks are considered, among which only one is considered as fast (called pmf11-1) and one as intermediate (usi1-1); the other 61 benchmarks are thermal. The nomenclature used here follows the handbook: all benchmark names consist of three letters: the first one denotes the type of fuel ("l" for low-enriched 235 U, "h" for high-enriched 235 U, "u" for 233 U fuel, "m" for a mixture of 235 U and 239 Pu, and "p" for 239 Pu fuel), the second one defines the physical form ("s" for solution, "c" for compound, and "m" for metal, as in pmf11-1), and the last letter defines the neutron spectrum ("t" for thermal, "i" for intermediate and "f" for fast). Each benchmark name is completed with additional information (after a dash) indicating its sub-number, referring to the ICSBEP cases.
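The naming scheme above can be decoded mechanically; the small helper below is a hypothetical illustration, not an official ICSBEP tool.

```python
# Lookup tables following the nomenclature described in the text.
FUEL = {"l": "low-enriched 235U", "h": "high-enriched 235U",
        "u": "233U", "m": "mixed 235U/239Pu", "p": "239Pu"}
FORM = {"s": "solution", "c": "compound", "m": "metal"}  # "m" as in pmf11-1
SPECTRUM = {"t": "thermal", "i": "intermediate", "f": "fast"}

def decode_benchmark(name):
    """Split e.g. 'lct8-5' into fuel, form, spectrum, series and case."""
    letters, _, case = name.partition("-")
    series = letters[3:]  # ICSBEP evaluation number after the three letters
    return (FUEL[letters[0]], FORM[letters[1]],
            SPECTRUM[letters[2]], series, case)

print(decode_benchmark("pmf11-1"))
# → ('239Pu', 'metal', 'fast', '11', '1')
```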
The benchmark calculations are performed with MCNP6; apart from the sampled TSL ACE files, the nuclear data files correspond to the beta version of the JEFF-4.0 library. The method of uncertainty propagation is the Total Monte Carlo (or TMC) [25]; it was already presented in many publications and is summarized here as follows: it consists of repeating the same calculation i times, each time with a different sampled quantity of interest as input. In other words, each random file differs slightly from the others, and averaging over all of them returns a credible standard deviation and covariance matrix. The averaging, however, takes place after the applied calculations. In our case, the quantity of interest corresponds to the TSL data, as explained in Section 2. In this work, i is equal to 3000, which allows an acceptable convergence for the various moments of the observed k eff probability density functions. In all uncertainty values, the statistical uncertainty from MCNP6 is already removed, assuming that the observed variance is the sum of the TSL effect and of the statistical variance. Finally, together with the uncertainty ∆k eff (one standard deviation), the quantity MAD (Median Absolute Deviation) is also provided. This is another measure of the width of a distribution, less affected by extreme values (the standard deviation and MAD values will significantly differ for skewed distributions; for a Normal distribution, the MAD is equal to 0.67 times the standard deviation). The definition of the MAD used here is Median(|x i - x|), with x i the observed realization for sample i, and x the average over all i. Results for the average, uncertainty, skewness and kurtosis of each considered benchmark are presented in Table 1.
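The post-processing of the TMC realisations described above can be sketched as follows: subtract the statistical variance, and compute the MAD (with the text's definition, relative to the mean) and the higher moments. The synthetic data and the function name are illustrative assumptions.

```python
import numpy as np

def tmc_moments(keff, sigma_stat):
    """Summarise the TMC realisations of k_eff for one benchmark.  The
    observed variance is assumed to be the sum of the TSL effect and of
    the MCNP6 statistical variance, which is therefore subtracted."""
    keff = np.asarray(keff, float)
    mean = keff.mean()
    var_tsl = keff.var(ddof=1) - sigma_stat**2
    sigma_tsl = np.sqrt(max(var_tsl, 0.0))
    # MAD as defined in the text: Median(|x_i - xbar|).
    mad = np.median(np.abs(keff - mean))
    z = (keff - mean) / keff.std(ddof=0)
    skew = np.mean(z**3)
    kurt = np.mean(z**4) - 3.0  # excess kurtosis (0 for a Normal pdf)
    return mean, sigma_tsl, mad, skew, kurt

# Illustrative check on synthetic Normal data (total sigma = 100 pcm,
# of which 20 pcm is counting statistics): MAD/sigma should be ~0.67.
rng = np.random.default_rng(0)
x = rng.normal(1.0, 100e-5, size=3000)
mean, sig, mad, skew, kurt = tmc_moments(x, sigma_stat=20e-5)
print(round(mad / x.std(ddof=1), 2))
```

With 3000 realisations, the statistical noise on the recovered moments is at the few-percent level, consistent with the convergence claim in the text.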

Uncertainties and other moments
Compared to the uncertainties calculated in reference [28], the present values are in general smaller. This is not surprising, as the model parameters in reference [28] were modified in an ad-hoc way, being one of the first trials of the TMC method on TSL. As observed, the maximum uncertainties are less than 300 pcm (and in general less than 100 pcm). It is worth noticing that for skewed distributions with a strong kurtosis, as for the majority of cases presented here, one standard deviation does not cover 68% of the population: for lct8-5, 68.3% of the cases are within the mean ±1σ, but for lst2-1, 73% are within the mean ±1σ. One can also compare the present results for the lct and pst benchmarks with the values presented in reference [27]. In this reference, uncertainties due to TSL for the MISTRAL-1 experiment, similar to the lct benchmarks, range from 70 to 150 pcm. Such uncertainties were obtained by a direct perturbation approach, based on different model uncertainties, and are of the same order of magnitude as the present values. In the case of the MOX fuels, reference [27] reports uncertainties between 110 and 200 pcm for the MISTRAL-2 system, which can be compared to the present values for the pst benchmarks. In this case too, the uncertainties are relatively similar.
Generally, the uncertainties (as well as the skewness and kurtosis) are similar within a given type of benchmark: for instance, all lct benchmarks have negative skewnesses, and all lct7 cases present uncertainties close to the same values. Examples of pdf are presented in Figure 1. These distributions are noticeably different, with strongly different skewnesses (for the pst and uct cases), zero skewness (the lct case), or a low kurtosis (corresponding to a flat distribution, for mct2). Given that the model parameters for the TSL production were sampled from Normal distributions, such a variety of k eff pdf indicates a non-linear propagation of uncertainties.
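The coverage figures quoted above (68.3% for one benchmark, 73% for another) are easy to reproduce on any sampled pdf; the sketch below, on synthetic Normal and deliberately skewed data, is an illustration of the effect, not the paper's actual k eff samples.

```python
import numpy as np

def coverage_one_sigma(samples):
    """Fraction of realisations within mean +/- 1 standard deviation;
    this equals ~68.3 % only for a Normal distribution."""
    samples = np.asarray(samples, float)
    m, s = samples.mean(), samples.std(ddof=1)
    return np.mean(np.abs(samples - m) <= s)

rng = np.random.default_rng(1)
normal = rng.normal(size=3000)
skewed = rng.lognormal(mean=0.0, sigma=0.8, size=3000)  # strongly skewed
print(coverage_one_sigma(normal), coverage_one_sigma(skewed))
```

For the skewed sample, well over 68% of the realisations fall inside the ±1σ interval, which is why the text reports the MAD alongside the standard deviation.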
Similarities in pdf within families of benchmarks are certainly related to the fact that these benchmarks share similar geometries, enrichments, and possibly reflectors. This can be observed for the majority of cases, but with a number of exceptions, as for the uct1 benchmarks, cases 3 and 7: their uncertainties strongly differ, and their skewnesses are far from zero, with opposite signs. Both benchmarks are also strongly anti-correlated, with a Pearson correlation factor of −0.8. These two cases indeed correspond to very different physical systems (e.g. rectangular and hexagonal cores with different reflectors).
The values from the lct benchmarks (clusters of UO 2 rods in water) can be compared to the k ∞ uncertainties from reference [40] (for a UO 2 pincell, 3.5% enriched, burned up to 100 MWd/kgU; the same sampled TSL files were used), where the maximum uncertainty was about 30 pcm. In the case of the lct benchmarks, the maximum is about 80 pcm. These two values differ, but remain relatively close, given that a direct comparison is not possible (different k eff values, enrichments and geometries).
Based on these random k eff values (3000 for each benchmark), the adjustment step can be performed, as presented in the next section.

Adjustment
The previous results were obtained based on 3000 sampled TSL; they all differ and lead to different benchmark k eff values. Let i denote a specific sampled TSL, and n the benchmark case (n goes from 1 to N = 63). For each i, a χ 2 value can be calculated as follows:

χ 2 i = Σ n=1..N [(k i,n,calc − k n,exp )/∆k n,exp ] 2 ,

where k i,n,calc and k n,exp correspond to the calculated and experimental k eff values for benchmark n, respectively, and ∆k n,exp to the experimental uncertainty. For instance, for i = 0, which corresponds to the beta version of the future JEFF-4.0 library, called JEFF-4T0, χ 2 0 /N = 1.7. Considering all sampled cases i, the resulting χ 2 distribution can be calculated; it is presented in Figure 2. The shape of this distribution is typical of χ 2 values, as seen in references [28][29][30]. As can be easily understood, some sampled TSL do not perform well (high χ 2 ), whereas some of them perform better than the original evaluated TSL. Based on such a distribution, the Petten method is exceedingly simple: select the sampled TSL with the lowest χ 2 . Naturally, as the TSL have a limited effect on the calculated benchmark values, the low tail of the χ 2 distribution does not reach χ 2 = 0, but as observed, noticeable improvements in terms of C/E are possible. Out of the 3000 sampled TSL, the one with the smallest χ 2 is number 1871. The calculated k eff from this specific TSL are presented in Figure 3, compared with the nominal (unperturbed) TSL evaluation. The changes in k eff between the nominal calculation and TSL file number 1871 are not striking, but the χ 2 values are definitely reduced. One has to keep in mind that the TSL induce low uncertainties on k eff , meaning that their impact is nevertheless limited. One could also repeat this work with doubled uncertainties on the CAB parameters; this way, an even better solution would likely be found. Naturally, this would imply accepting parameter values further away from their recommended uncertainties. The sampled parameters are listed in Table 2 (the parameters ω c , ω 1 and ω 2 are fully correlated with ω t using Eq. (1)). The parameters of file number 1871 are one example of adjustment with integral benchmarks. Table 2 also shows the parameters of random files performing almost as well as number 1871. As observed, their parameters are sensibly different, and it is difficult to extract a pattern. The E 2 value for run 1871 might not be suitable, as such a value is not valid for the second oscillator, which should be closer to 400 meV. To avoid such a result, a rejection condition based on water physics could be used. This indicates that an adjustment solely based on this number of integral benchmarks might not be enough to obtain a unique solution, either because a unique solution does not exist, or because the number of benchmarks is insufficient. Additionally, the χ 2 definition could take into account differential data (measured cross sections), leading to a compromise between integral and differential agreement. More advanced methods can also be applied, based on Bayesian Monte Carlo, or on methods borrowed from Machine Learning, e.g. separating integral and differential data into a training set and a validation set. Such methods can be applied in future work.
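The χ 2 selection at the heart of the Petten method reduces to a few lines. The sketch below uses synthetic numbers; the weighting of each benchmark by its experimental uncertainty is an assumption, as the paper does not spell out its exact χ 2 definition.

```python
import numpy as np

def petten_select(k_calc, k_exp, sigma_exp):
    """k_calc: (n_samples, n_benchmarks) calculated k_eff values for all
    sampled TSL; k_exp, sigma_exp: experimental values and 1-sigma
    uncertainties.  Returns per-sample chi2/N and the index of the
    best-performing sampled TSL (the Petten-method pick)."""
    k_calc = np.asarray(k_calc, float)
    chi2_n = np.mean(((k_calc - k_exp) / sigma_exp) ** 2, axis=1)
    return chi2_n, int(np.argmin(chi2_n))

# Tiny synthetic example: 5 sampled TSL, 3 benchmarks, offsets in pcm.
k_exp = np.array([1.0000, 0.9985, 1.0010])
sigma_exp = np.array([0.0030, 0.0040, 0.0035])
k_calc = k_exp + np.array([[30], [-10], [5], [60], [-45]]) * 1e-5
chi2_n, best = petten_select(k_calc, k_exp, sigma_exp)
print(best)
# → 2
```

In the paper's case, `k_calc` would hold the 3000 × 63 MCNP6 results, and the returned index would be 1871.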

Conclusion
This work presents an estimation of the impact of the thermal scattering data for H in H 2 O (or TSL) on the k eff of criticality benchmarks. The method used is the sampling of the CAB model parameters, leading to sampled thermal scattering data in the form of ACE files, and finally to sampled k eff values. It was found that the impact of such TSL is modest compared to other nuclear data (less than 300 pcm). It was also found that the effect of TSL is highly non-linear. Based on these results, a selection of the "best performing" sampled TSL is proposed, using a simple χ 2 definition. In conclusion, this paper is an application of the TMC and Petten methods to nuclear data adjustment, more precisely thermal scattering data adjustment. If used in combination with traditional nuclear data evaluation methods, it can help to reach a better agreement between simulated and experimental integral values.
We would like to thank J.I. Marquez Damian from the European Spallation Source and G. Noguere from the CEA Cadarache for their support and efforts in helping with the production of the random TSL data. This work would not have been possible without their help.

Fig. 1. Probability density function for a number of criticality benchmarks, obtained by randomly changing the H in H2O thermal scattering data.

Fig. 2. Distribution of the χ 2 per degree of freedom, considering sampled (random) H in H2O thermal scattering data. The vertical black line indicates the position of χ 2 /N = 1.

Fig. 3. Calculated over experimental k eff values for the considered criticality benchmarks. The case "Best" corresponds to a selected H in H2O thermal scattering data case.

Table 1. Uncertainties (standard deviation and MAD), skewness and kurtosis on criticality benchmarks due to the thermal scattering data of H in H 2 O, obtained from MCNP6 calculations. The base library (except the thermal scattering data) is JEFF-4.0β. The statistical uncertainties are already removed.