EPJ Nuclear Sci. Technol., Volume 9, 2023
Topical issue: Templates of Expected Measurement Uncertainties: a CSEWG Effort
Article Number: 35
Number of pages: 10
DOI: https://doi.org/10.1051/epjn/2023014
Published online: 10 November 2023
Regular Article
Templates of expected measurement uncertainties
1 Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
2 Naval Nuclear Laboratory, Schenectady, NY, 12301-1072, USA
3 University of California Berkeley, Berkeley, CA, 94720, USA
4 U.S. Naval Academy, Annapolis, MD, 21402, USA
5 Argonne National Laboratory, Lemont, IL, 60439-4842, USA (retired)
6 Lancaster University, Lancaster, LA1 4YW, UK
7 National Institute of Standards and Technology, Gaithersburg, MD, 20899-8463, USA
8 Pacific Northwest National Laboratory, Richland, WA, 99352, USA
9 Helmholtz Centre Dresden Rossendorf, 01328 Dresden, Germany
10 Uppsala University, 75120 Uppsala, Sweden
11 Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA
12 International Atomic Energy Agency, A-1400 Vienna, Austria
13 University of Vienna, A-1010 Vienna, Austria
14 Duke University, Durham, NC, 27708-0308, USA
15 Triangle Universities Nuclear Laboratory, Durham, NC, 27708-0308, USA
16 CEA, DAM, DIF, F-91297 Arpajon, France
17 Univ. Bordeaux, LP2I, UMR5797, CNRS, F-33170 Gradignan, France
18 Brookhaven National Laboratory, Upton, NY, 11973-5000, USA
19 Lawrence Livermore National Laboratory, Livermore, CA, 94551-0808, USA
20 The University of Tennessee Knoxville, Knoxville, TN, 37996, USA
21 Université Paris-Saclay, CEA, LMCE, 91680 Bruyères-le-Châtel, France
* e-mail: dneudecker@lanl.gov
Received: 28 April 2023
Received in final form: 19 July 2023
Accepted: 7 August 2023
Published online: 10 November 2023
The covariance committee of CSEWG (Cross Section Evaluation Working Group) established templates of expected measurement uncertainties for neutron-induced total, (n,γ), neutron-induced charged-particle, and (n,xn) reaction cross sections as well as prompt fission neutron spectra, average prompt and total fission neutron multiplicities, and fission yields. Templates provide a list of the uncertainty sources expected for each measurement type and observable, and suggest typical ranges of these uncertainties and correlations based on a survey of experimental data, associated literature, and feedback from experimenters. Information needed to faithfully include the experimental data in the nuclear-data evaluation process is also provided. These templates could assist (a) experimenters and EXFOR compilers in delivering more complete uncertainties and measurement information relevant for evaluations of new experimental data, and (b) evaluators in achieving a more comprehensive uncertainty quantification for evaluation purposes. This effort might ultimately lead to more realistic evaluated covariances for nuclear-data applications. In this topical issue, we cover the templates coming out of this CSEWG effort, typically one observable per paper. This paper prefaces the topical issue by introducing the concept and mathematical framework of templates, discussing potential use cases, and giving an example of how they can be applied (estimating missing experimental uncertainties of 235U(n,f) average prompt fission neutron multiplicities) and of their impact on nuclear-data evaluations.
© D. Neudecker et al., Published by EDP Sciences, 2023
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Evaluated nuclear data are informed by experimental data, nuclear models, or both. The accuracy and precision of these nuclear data strongly rely on the quality of available experimental data and the magnitude of their associated covariances. Experimental covariances not only bound the evaluated uncertainties in many cases, but also assign a weight to individual experimental data points with respect to nuclear model calculations and, if applicable, other data sets. This weight can only be appropriate if all pertinent uncertainty sources are consistently provided for all experimental data used in the evaluation. However, more often than not, one or more uncertainty sources that are expected based on knowledge of similar measurements are missing for a specific data set in the EXFOR database [1–3], an experimental low- and intermediate-energy nuclear reaction data library and the go-to database for experimental data for nuclear-data evaluations. Up to now, it has been up to the evaluator to identify the missing uncertainties and estimate reasonable values for them. If evaluators choose not to estimate those values, they make the very strong, implicit assumption that such missing uncertainties are null. In some cases, this assumption might be justified. But if a non-negligible uncertainty source was forgotten, the experimental data set will receive an incorrectly high weight compared to other experimental data sets used for the evaluation, ultimately leading to a biased evaluation. References [4, 5] showed that neglecting these missing experimental uncertainties can change evaluated data and uncertainties to an extent that considerably affects application calculations.
To mitigate the issue of missing experimental uncertainties and their adverse impact on evaluated nuclear-data mean values and covariances, templates of expected measurement uncertainties were established as part of a multi-year effort led by the covariance committee of CSEWG (Cross Section Evaluation Working Group). Several international experts on nuclear-data experiments and evaluations joined this project, making it an international effort. Templates were established for neutron-induced total, capture, charged-particle, and (n,xn) cross sections as well as prompt fission neutron spectra, average prompt and total fission neutron multiplicities, and fission yields. They are described in detail in separate journal publications within this topical issue, which is prefaced by this paper, and were worked on for total, capture, and fission yields as part of Ph.D. theses [6–13]. All templates provide a comprehensive list of uncertainty sources, ranges of expected uncertainties, and suggested correlations for each of these sources. In addition to these templates, recommendations are given on what information should be supplied by experimenters for evaluation purposes.
The present paper serves as a preface to this topical issue on templates for various observables. Section 2 introduces the concept of templates and their history, and discusses potential use cases. Section 3 covers the mathematical basis for establishing templates. A specific example, 235U(n,f) average prompt fission neutron multiplicities, is given in Section 4 to illustrate how templates can be applied to estimate missing experimental uncertainties for evaluation purposes, and what their impact is on evaluated mean values and covariances. Section 5 provides conclusions and an outlook on future work.
2. Introduction to the concept of templates
2.1. What are templates of expected measurement uncertainties?
The templates cover many common direct measurement techniques typically employed to provide experimental data for a specific nuclear-data observable. A minimal set of uncertainty sources that are expected to be addressed in the uncertainty quantification of an experiment is then tabulated for each of these measurement types. They are subdivided according to measurement type, as some uncertainty sources might not apply to all measurement techniques. For instance, in a ratio measurement, both samples, the one under investigation and the monitor sample, are irradiated by the same neutron flux. Hence, the neutron flux does not need to be determined, and no associated uncertainties apply for ratio measurements. However, they do apply for measurements of the same observable where the neutron flux is quantified directly.
The template also provides reasonable, conservative ranges of uncertainties for each of those sources. We give, unless otherwise noted, conservative ranges because, if an uncertainty source was not quantified, it is possible that the associated correction was not investigated in great detail either. One also has to consider that some uncertainties reported in the literature might be overly optimistic, or very low because considerable effort was spent to reduce them. We choose here to give uncertainties for partial uncertainty sources that were selected to be mutually independent, where possible, following the mathematical framework discussed in Section 3.
In addition to that, the templates list estimates of correlation coefficients between uncertainties of a specific source within the same and between different experiments. While the former information is rarely found in the EXFOR database or the experimental literature, correlations between uncertainty sources of different experiments are even less likely to be provided by experimenters. Estimating the latter correlations is usually the evaluator’s task. To aid this endeavor, correlation values for templates were mostly established based on the discussion of the underlying physics processes of the correction and its impact on the final deduced experimental observable.
2.2. Use cases and users
Templates were designed with the following users in mind: nuclear-data experimenters, nuclear-data evaluators, EXFOR compilers, and journal editors or reviewers.
One aim of the template project is to maintain and enhance best practices on what should be documented (including uncertainties) for a measurement. Following these best practices should help ensure that experimental data can be used more efficiently by the community for a long time. More specifically, an experimenter can use these templates as a checklist when planning the experiment and during the data-reduction phase, or at least before publishing the final data set. Cross-checking their reported uncertainties with these lists makes sure that all relevant uncertainty sources and information needed by evaluators are provided. It is highly recommended that experimenters clearly state, in the EXFOR entry and/or the associated journal article, whether an uncertainty source is practically zero, as this might not be obvious to an evaluator.
EXFOR compilers can utilize these templates as guidelines sanctioned by the broad experimental and evaluator community on what information would be of high importance to include in an EXFOR entry. The templates can be used to request this information in a targeted manner as input to EXFOR entries. Of course, we acknowledge that in some cases such requests by the EXFOR compilers might not always be fully satisfied by the authors. However, if applied over time, these templates will help improve the completeness and usability of new EXFOR entries with the help of EXFOR compilers and experimenters.
Journal editors and reviewers can also use the guidelines presented here to ask for the information needed by evaluators to be presented in an article. This should increase the usability of the resulting journal article.
These templates were designed to help evaluators make better-informed choices when filling in missing uncertainty and correlation information comprehensively across the whole experimental database used for an evaluation. The template values are based on extensive literature reviews and discussions with the experimental community. Given that, the estimated values are expected to be better justified in many cases than those evaluators generate on their own. Templates can also help evaluators pinpoint missing uncertainty sources and estimate them, as well as identify unrealistically low uncertainties. Systematically using these templates for evaluations will lead to a more balanced uncertainty quantification across different data sets. In addition, if experimenters provide more complete uncertainties because they use these templates as checklists, the need for assumptions about experimental data on the part of the evaluator will decrease. Hence, applying these templates can ultimately lead to more realistic evaluated uncertainties for nuclear-data libraries and applications.
2.3. What a template should not be
It should be emphasized that these templates are neither applicable to all types of measurements, nor are they to be construed as immutable laws. They are rather guidelines that will change as our understanding of these measurements advances. For instance, if a novel measurement technique is designed to go beyond the standard methods discussed in references [6–13], some uncertainty sources might not apply at all for this new measurement type, or new ones might need to be considered. In such a case, it would be best practice to mention which uncertainty sources are practically zero compared to the standard technique. Also, the quantitative template values are not intended as target or acceptable uncertainty values for future measurements with the techniques discussed here. They are only guide values to be used when historic measurements are not described in full detail. We want to encourage continuous improvement in experimental methods.
It is equally important to understand that these templates should not be used by evaluators to replace a detailed analysis of experimental data. Template uncertainty values should only be used as a last resort if no experimental information is available that could enable the evaluator to estimate expected, but missing, uncertainty sources. This point is important as experimenters might spend considerable time analyzing their measurements and quantifying uncertainties. If template values are used instead of these carefully estimated uncertainties, the weight of the experimental data might, once again, become unjustified, thus adversely impacting evaluated nuclear data. Also, it is assumed that experimental data are updated to the latest monitor standard and decay data, and adjusted for obvious and easy-to-correct mistakes. Template uncertainties should not be used to cover differences between data sets due to failing to undertake straightforward corrections.
It is obvious that there are limitations to how much templates can help. If major corrections are simply missing for a data set, the experimental data might be too biased to use them for evaluation purposes. In such cases, it might be more adequate to reanalyze and correct historic data, or reject them, rather than add uncertainties accounting for the bias. The same applies if inadequate (e.g., only total uncertainties with no hint of their make-up) or no uncertainty information is provided for a measurement of an observable that needs to be evaluated to high precision. Then, the conclusion might be that new measurements are needed that avoid the pitfalls of previous ones and supply adequate information for evaluations.
2.4. History of templates and how they were established
The idea of these templates was based on the work of Schillebeeckx et al. [17, 18], who created templates for information that ought to be provided in EXFOR entries of transmission experiments. These templates included recommendations on how to document uncertainties and detailed associated procedures, but were not limited to uncertainty quantification. Helgesson et al. [19] created a first version of templates of experimental uncertainties for 59Ni a few years later. That work also encompasses tables of uncertainties and correlations encountered for selected observables ((n,tot), (n,p), (n,α), and (n,γ) cross sections) based on the limited set of EXFOR entries found for 59Ni. It did not distinguish between different measurement types when a template was created, did not include fission observables, and did not undertake a broad literature survey across many isotopes and measurement techniques as was done for references [6–13]. In the preparation of this paper, the uncertainty values obtained in reference [19] for the 59Ni observables (a rare isotope) were compared to pertinent template values.
Out of these ideas, and out of the need for comprehensive uncertainty quantification for Neutron Data Standards evaluations [16], a template of expected measurement uncertainties in fission-chamber measurements was developed and published in an early version in reference [20]. It was finalized and systematically applied to updating 239Pu(n,f) cross-section covariances in the database underlying the Neutron Data Standards evaluation. This work, presented in reference [4], highlighted clearly that considering all pertinent uncertainties in this database changed the evaluated 239Pu(n,f) mean values and covariances so significantly that application calculations were impacted. This conclusion motivated a larger CSEWG covariance committee effort to provide templates for more observables.
Initial versions of most templates in this issue and in the Ph.D. theses, references [6–13], were established based on discussions between experimenters, evaluators, and EXFOR representatives at a mini-CSEWG meeting that took place April 29–May 1, 2019, in Los Alamos, NM, USA.
After introductions to the observables at hand and the measurement techniques, evaluators stated what information they need to include data and their uncertainties correctly and effectively in evaluations. Experimenters provided input on typical measurement types, their typical uncertainty sources, and conservative ranges of these uncertainties and correlations. EXFOR representatives suggested how uncertainty information can realistically be stored given the current format. In addition, these templates were informed by multiple EXFOR entries, a broad literature review, and further in-depth discussions with experimenters and evaluators over a period of one year.
3. Mathematical basis of templates of expected measurement uncertainties
Here, we formally examine the various assumptions and mathematical approximations that support the use of the template approach in quantifying experimental uncertainties and their correlations, grounded in the mathematical ideas of references [21–23].
Let x = (x1, …, xi, …, xn) denote a set of all the primary parameters of an experiment. These should be mostly very basic parameters, not partially derived ones. Primary parameters, x, could, for instance, be counts of the main measurement, background counts, measured impurities, etc. Clearly n is likely to be a very large number, even for relatively simple experiments. Then, let y = (y1, …, yk, …, ym) represent the derived parameters that the experiment seeks to determine. The variable y could be, for instance, a collection of energy-dependent cross-sections for a particular reaction. The number m may or may not be large. The combined experimental and data-analysis procedures can be embodied in a specific experiment algorithm A. This algorithm is usually more complicated than can be expressed by simple mathematical functions. The process of deducing y from x can be symbolized by A{x}→y. The template approach seeks to assess uncertainties on the level of independent primary parameters x.
The elements of x and y are assumed to be random variables with statistical properties governed by two probability density functions (PDFs), px(x) and py(y). These functions are not known. However, it must be assumed that specific values x0 for the primary parameters x are known a priori, or are measurable. If not, the experiment is futile. The determination of x does not yield the true value of the underlying physical quantity but rather an estimate x0, because the determination process is affected by random errors of, e.g., measurement or Monte Carlo simulation. How realizations of x scatter around the true value is reflected in the PDF px(x). x0 should also not be confused with the PDF mean values ⟨x⟩. These can only be approximately approached by many repetitions of an otherwise unbiased experiment, which is very impractical. While it is often assumed that ⟨x⟩ ≈ x0 is an adequate approximation, the possible discrepancy should then be quantified by Vx, the covariance matrix for x. Given this assumption, the experiment is usually represented symbolically by A{x0}→y0. Again, y0 is not identical to the mean value ⟨y⟩, but we assume that ⟨y⟩ ≈ y0 and assess the discrepancy in Vy, the covariance matrix for y. Quantifying Vx is required before comparable values for y and Vy can be generated. The two methods for estimating Vy considered here are stochastic and deterministic.
Stochastic method for estimating Vy: if the knowledge of px(x) is limited to ⟨x⟩ and Vx, the Maximum Entropy Principle suggests that px(x) is most likely to be multivariate normal (Gaussian) [22]. The stochastic approach involves generating a large collection of samples {xλ} (λ = 1, Λ) by Monte Carlo governed by the PDF px(x). A collection {yλ} is produced by repeated application of A{xλ}→yλ. Then, ⟨y⟩ and Vy can be estimated from the sample statistics

⟨yk⟩ ≈ (1/Λ) Σλ yλk,  (1)

vykl ≈ [1/(Λ − 1)] Σλ (yλk − ⟨yk⟩)(yλl − ⟨yl⟩).  (2)

The vykl are elements of Vy. The stochastic method is conceptually simple but rarely undertaken due to the computational cost involved. Also, if applied to estimating total experimental uncertainties, it does not lend itself to the template approach of categorizing experimental uncertainties.
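As an illustration, the stochastic method can be sketched for a toy experiment algorithm; the algorithm and all numbers below are invented for this example and stand in for the real, much larger primary-parameter set of an actual experiment:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "experiment algorithm" A{x} -> y: two derived quantities built from
# three hypothetical primary parameters (counts, background counts, mass).
def A(x):
    counts, background, mass = x
    net = counts - background
    return np.array([net / mass, net])

x0 = np.array([1000.0, 50.0, 2.0])     # a-priori primary parameters
Vx = np.diag([1000.0, 50.0, 0.0004])   # covariance of x (Poisson counts, mass)

# Draw Lambda samples of x from the maximum-entropy (Gaussian) PDF px(x),
# push each through A, and form the sample mean and covariance of y.
samples_x = rng.multivariate_normal(x0, Vx, size=100_000)
samples_y = np.array([A(x) for x in samples_x])

y_mean = samples_y.mean(axis=0)        # estimate of <y>
Vy = np.cov(samples_y, rowvar=False)   # estimate of Vy
```

For this low-dimensional toy case, 10^5 samples converge quickly; for a realistic experiment with thousands of primary parameters, each application of A may be costly, which is why this route is rarely taken.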
Deterministic method for estimating Vy: this approach linearizes the relationships between all the yk and xi. Thus, a very small shift Δy of the derived y due to a very small shift Δx in x is calculated using the matrix expression Δy ≈ TΔx, where T is an m × n sensitivity matrix. An element tki is obtained by computing the small shift in yk due only to a shift Δxi in xi, with all other components of x held constant, and then dividing that shift in yk by Δxi. The m × m covariance matrix Vy for the derived results y is given approximately by

Vy ≈ T Vx T+,  (3)

where T+ is the transpose of T. This is the well-known Law of Error Propagation.
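A minimal numerical sketch of the deterministic route, building T by one-at-a-time finite differences for a toy algorithm (the algorithm and all numbers are hypothetical):

```python
import numpy as np

# Hypothetical experiment algorithm A{x} -> y with three primary parameters.
def A(x):
    counts, background, mass = x
    net = counts - background
    return np.array([net / mass, net])

x0 = np.array([1000.0, 50.0, 2.0])
Vx = np.diag([1000.0, 50.0, 0.0004])

# Build the m x n sensitivity matrix T: t_ki = dy_k/dx_i at x0,
# shifting one component of x at a time and holding the others constant.
def sensitivity(A, x0, rel_step=1e-6):
    y0 = A(x0)
    T = np.zeros((y0.size, x0.size))
    for i in range(x0.size):
        dx = rel_step * max(abs(x0[i]), 1.0)
        x_shift = x0.copy()
        x_shift[i] += dx
        T[:, i] = (A(x_shift) - y0) / dx
    return T

T = sensitivity(A, x0)
Vy = T @ Vx @ T.T   # Law of Error Propagation
```

For this toy case the propagated variance of the second derived quantity is simply the sum of the two count variances, which can be checked against the analytic result.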
We proceed further by examining the primary variable covariance matrix Vx whose elements are vxij = exi cxij exj (i, j = 1, n). The variables exi and exj are the uncertainties (standard deviations) in x, while the cxij are the correlation coefficients between all these uncertainties. Since n is likely to be a very large number, we seek ways to simplify Vx. Some uncertainties exi and/or sensitivity factors tki may be very small. Then, we can eliminate certain xi from the primary variable set for the purpose of estimating uncertainties, but not for calculating the derived y0k. Furthermore, many correlation coefficients cxij may be zero or nearly so, since many of the primary variables are mutually independent. Finally, reorganizing the primary variables (i.e., re-assigning the i positions) may enable grouping of the primary variables into Q (with Q < n) subsets, each one representing a mostly independent experiment attribute (such as sample mass, a specific background source, etc.). This partitioning exercise results in off-diagonal regions where the elements of Vx are either zero or nearly so. Then, we decompose Vx into a sum of Q n × n independent submatrices Vxq:

Vx ≈ Σq=1,Q Vxq.  (4)
Combining equations (3) and (4) leads to

Vy ≈ Σq=1,Q Tq Vxq Tq+.  (5)

The Tq are submatrices of T that interact in a significant way with the submatrices Vxq. All other elements of T multiply with zero values of Vx. Thus, the elements vykl of Vy are given by

vykl = Σq=1,Q eyqk cyqkl eyql,  (6)
where eyqk is the partial uncertainty in yk due to attribute q and the cyqkl are correlations only between these partial uncertainties. They are sometimes referred to as micro-correlations. The total uncertainty in yk is thus given by

eyk = (Σq=1,Q eyqk²)^1/2.  (7)
The correlation matrix for the derived results y has elements cykl which are given by

cykl = vykl / (eyk eyl).  (8)
Equations (6)–(8) provide formulas used to characterize the uncertainties and correlations for a derived experimental data set y in a manner compatible with applying the template approach in cataloging partial uncertainties and their correlations for all the significant sources of uncertainty in an experiment. The present discussion shows that many approximations and assumptions are required to reduce complex experiments to this level of simplicity.
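The following sketch shows how equations (6)–(8) are applied in practice to template-style inputs; the uncertainty sources, their values, and the micro-correlation patterns below are purely illustrative, not taken from any actual template:

```python
import numpy as np

# Template-style partial uncertainties (in %) for a 3-point data set y,
# one entry per mutually independent uncertainty source q (invented numbers):
# statistics (uncorrelated), detector efficiency (fully correlated),
# background (50% correlated between data points).
partial_unc = {
    "statistics": (np.array([1.0, 0.8, 1.2]), np.eye(3)),
    "efficiency": (np.array([2.0, 2.0, 2.0]), np.ones((3, 3))),
    "background": (np.array([0.5, 0.7, 0.6]),
                   np.full((3, 3), 0.5) + 0.5 * np.eye(3)),
}

# Covariance of y: sum over sources of e_qk * c_qkl * e_ql.
Vy = sum(np.outer(e, e) * c for e, c in partial_unc.values())

# Total uncertainty per point, and the correlation matrix of y.
e_tot = np.sqrt(np.diag(Vy))
Cy = Vy / np.outer(e_tot, e_tot)
```

The total uncertainty of the first point is sqrt(1.0² + 2.0² + 0.5²) ≈ 2.29%, and the fully correlated efficiency source dominates the off-diagonal correlations.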
It is instructive to consider an even stronger simplification in modeling an experiment for uncertainty-analysis purposes. First, assume that x = (x1, …, xq, …, xQ). Each xq is an independent positive variable that alone characterizes a particular independent experiment attribute q. Then, assume that the algorithm A{x}→y for deriving a single physical quantity y reduces to the formula

y = Πz=1,Z fz(x),  (9)

where the fz are Z simple functions of the parameters xq. Examples are: fz(x) = xq, or 1/xq, or (xq − xq′), or 1/(xq − xq′), or exp(−cq xq). Here, xq and xq′ signify two distinct primary variables and cq is a constant. Sensitivity parameters tq (q = 1, Q) are calculated analytically as the partial derivatives tq = ∂y/∂xq. Equation (9) symbolizes many of the formulas given in the various papers included in this issue for discussions of particular experimental types.
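For a formula of this multiplicative type, the sensitivities can be written down analytically, and relative partial uncertainties then combine in quadrature. A sketch for a generic ratio measurement (all variable names, values, and uncertainties below are invented for illustration):

```python
import math

# Hypothetical ratio measurement of the form
# y = counts_s * (1/counts_m) * mass_m * (1/mass_s),
# i.e. a product of simple functions of independent primary parameters.
x = {"counts_s": 9000.0, "counts_m": 12000.0, "mass_m": 1.5, "mass_s": 2.0}
rel_unc = {"counts_s": 0.011, "counts_m": 0.009,
           "mass_m": 0.004, "mass_s": 0.004}   # relative standard deviations

y = x["counts_s"] / x["counts_m"] * x["mass_m"] / x["mass_s"]

# For each factor, the analytic sensitivity is t_q = dy/dx_q = +/- y/x_q,
# so t_q * e_xq / y reduces to the relative uncertainty of x_q and the
# relative uncertainties add in quadrature:
rel_total = math.sqrt(sum(r**2 for r in rel_unc.values()))
abs_total = y * rel_total
```

This quadrature rule is exact for the linearized propagation of a pure product formula and underlies many of the uncertainty budgets tabulated in the template papers.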
In summary, many assumptions and approximations are involved in justifying the use of templates to catalog experimental uncertainties and correlations. The most important of these is the notion that significant sources of experimental uncertainty can be attributed to various attributes of the experiment treated as mutually independent. Hence, care was taken here to group uncertainty sources such that they are mutually independent where possible.
4. Examples of applying a template for evaluations
Templates of expected measurement uncertainties were applied to estimate missing experimental uncertainties for the measurements of neutron-induced 235U average prompt fission neutron multiplicities, ν̄, listed in Table 1. This analysis of experimental data was undertaken for an evaluation of the 235U ν̄ in the fast range for ENDF/B-VIII.1β1 [24]. Most measurements employed the ratio liquid-scintillator method. For the uncertainty analysis, the EXFOR entry and literature of each experiment were studied in detail to extract all possible uncertainty sources. The uncertainty sources found through these analyses were compared to those that should be supplied for ratio liquid-scintillator measurements according to Table 1 of reference [11]. This procedure aids in identifying missing uncertainty sources. If uncertainty values were found to be missing, they were estimated based on template values. It is obvious from the right-hand column of Table 1 (tabulating missing uncertainty sources) that uncertainty sources were missing for all of the measurements, which, unfortunately, is the rule rather than the exception. Correlation coefficients for specific uncertainty sources were only rarely provided in the experiment literature, and were thus estimated using templates of expected measurement uncertainties in the majority of cases.
Table 1. 235U(n,f) experimental data sets included in the example evaluation, identified by their EXFOR No., first author, and year of publication. The last column lists all uncertainty sources that were added based on the templates for ν̄ in Table 1 of [11]. The information presented here was assembled from parts of Table I of LANL Report [24].
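The bookkeeping step of cross-checking one data set's reported uncertainty sources against a template list and filling the gaps can be sketched as follows; the source names and default values below are illustrative placeholders, not the actual template values of reference [11]:

```python
# Hypothetical template for one measurement type: expected uncertainty
# sources with conservative default values (in %). Placeholder numbers.
template = {
    "statistics": 0.5,
    "252Cf nu-bar standard": 0.4,
    "false fissions": 0.3,
    "detector efficiency": 0.5,
    "delayed gamma-rays": 0.2,
}

# Sources actually reported for a (hypothetical) data set, from its
# EXFOR entry and associated literature.
reported = {"statistics": 0.35, "252Cf nu-bar standard": 0.42}

# Keep reported values where available, fall back to template defaults,
# and record which sources had to be filled in.
filled = {src: reported.get(src, default) for src, default in template.items()}
missing = sorted(set(template) - set(reported))
```

The `missing` list corresponds to the right-hand column of Table 1, while `filled` supplies the complete uncertainty budget passed on to the covariance estimation.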
The code ARIADNE [53] was then used to estimate total covariances for each correlated group of experiments. The code starts out by estimating covariances for each individual uncertainty source following the procedures outlined in Section 3. The same code was employed to perform two evaluations. One evaluation was undertaken with the generalized least-squares algorithm [54]; it was based on all experimental data and their covariances, including the template uncertainties listed in Table 1. ENDF/B-VIII.0 mean values [55] and a diagonal covariance matrix with 100% uncertainties were used as a non-informative prior. The second evaluation differed from the first one only in that no template uncertainties were considered for missing uncertainty sources.
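A minimal sketch of such a generalized least-squares update, with an identity design matrix and an invented two-point example (this is not the ARIADNE implementation nor the real 235U ν̄ database):

```python
import numpy as np

# Non-informative prior: mean values with 100% diagonal uncertainties.
x_prior = np.array([2.4, 2.5])                  # prior observable values
V_prior = np.diag((1.0 * x_prior) ** 2)         # 100% prior uncertainties

# Hypothetical experimental data with a correlated covariance matrix
# (e.g., built from template partial uncertainties).
y_exp = np.array([2.42, 2.55])
V_exp = np.array([[0.0012, 0.0006],
                  [0.0006, 0.0015]])

# GLS update: x_post = x_prior + K (y - G x_prior), V_post = V_prior - K G V_prior,
# with gain K = V_prior G^T (G V_prior G^T + V_exp)^-1. Here G is the identity
# because the data sit directly on the evaluation grid.
G = np.eye(2)
K = V_prior @ G.T @ np.linalg.inv(G @ V_prior @ G.T + V_exp)
x_post = x_prior + K @ (y_exp - G @ x_prior)
V_post = V_prior - K @ G @ V_prior
```

With a near-non-informative prior, the posterior essentially reproduces the experimental values and covariance; inflating V_exp for a data set (e.g., by adding template uncertainties for missing sources) correspondingly lowers that data set's weight in the update.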
The left-hand side of Figure 1 shows that including these missing uncertainties in the evaluation via the templates of expected measurement uncertainties changes the evaluated 235U ν̄ by up to 1% in the energy range from 0.1–15 MeV. Evaluated uncertainties are increased by up to 25% when template uncertainties are considered. These changes are especially large around 5–7 MeV, the energy region of the opening of second-chance fission, where we know today, thanks to improved nuclear data, that systematic prompt fission neutron spectra and angular distribution uncertainties for ν̄ measurements are usually the largest. There, template uncertainties incorporate our present-day knowledge into estimates of experimental uncertainties for measurements that largely took place from the 1960s to the 1980s, and significantly change evaluated mean values and uncertainties.
Fig. 1. Evaluated mean values (left-hand side) and uncertainties (right-hand side) of 235U(n,f) ν̄.
However, even the comparably smaller changes in 235U ν̄ from 0.1–4 MeV, maximally 0.5%, are significant when seen in the context of simulating the effective neutron multiplication factor, keff, of fast highly enriched uranium benchmarks [14]. To demonstrate this, the two 235U ν̄ evaluations were included in ENDF/B-VIII.0 data via ENDFtk [56], processed with NJOY-2016 [57], and then used to simulate keff of the Godiva and Flattop critical assemblies [14] using MCNP-6.2 [58]. If we use the 235U ν̄ evaluated with template uncertainties for missing uncertainty sources, the simulated Godiva keff is 262 pcm (0.262%) higher than if simulated with the 235U ν̄ evaluated without accounting for missing uncertainties by template values. This increase is 192 pcm (0.192%) for simulated Flattop keff values. These differences in simulated keff values are significant when compared to the experimental uncertainties of the Godiva and Flattop keff, 100 and 300 pcm, respectively.
So, in short, evaluated mean values and uncertainties can change significantly, both on the level of these observables and for application simulations, if missing experimental uncertainty sources are quantified via templates of expected measurement uncertainties. While the templates are a last resort in the absence of information from the experiment, they are a better option than assuming missing uncertainties are zero, which gives data sets with poorly estimated uncertainties an unjustified higher weight and biases evaluations.
5. Conclusions and outlook
Templates of expected measurement uncertainties were established by the CSEWG covariance committee with the help of many other international contributors. These templates list the expected uncertainty sources for each distinct measurement type of an observable. They also give ranges of uncertainties for most sources and estimates of correlation coefficients between uncertainties of each source. The uncertainty values were estimated conservatively based on information found for a broad range of experiments, from their literature, their respective EXFOR entries, and expert judgment from experimenters. In addition to these templates, we list what information (data, uncertainty information, data descriptors) evaluators need in order to include experimental data faithfully in the evaluation process. Templates were established in this effort for (n,total), (n,γ), (n,xn), and (n,cp) reaction cross sections as well as prompt fission neutron spectra, average prompt and total fission neutron multiplicities, and fission product yields; they are presented in separate papers in this issue and in Ph.D. theses [6–13]. Here, the general concept of these templates was introduced, as were their history, use cases (as a checklist for experimenters, EXFOR compilers, and journal editors; for evaluators, to identify and estimate missing uncertainties), and mathematical basis. An example was given using evaluations of 235U average prompt fission neutron multiplicities. It showcased that estimating missing uncertainties via templates of expected uncertainties, instead of assuming them to be zero, can significantly impact evaluated mean values and uncertainties. In fact, these differences in evaluated mean values lead to changes in the simulated effective neutron multiplication factors of the Godiva and Flattop critical assemblies that are about three times larger than, or two-thirds of, their respective experimental uncertainties.
In the near future, these templates and the information needed for evaluations will be concisely summarized on the NNDC (U.S. National Nuclear Data Center) homepage at BNL for easy access by both evaluators and experimenters. We also plan to engage journal editors in the hope that they may suggest that referees consult the templates when reviewing papers. This would help referees point out information missing from journal articles that could improve the usability of the data. If experimenters provided this information, the content of their papers could be used more easily by evaluators for evaluation purposes. Also, if all pertinent information is in the paper or its supplemental material, it is more likely to be recorded in the EXFOR database. In addition, we will provide these templates to EXFOR compilers in the hope that they ask experimenters for information that is missing relative to the templates, thus leading to more complete new EXFOR entries for evaluation purposes.
These templates will also be used as part of Subgroup 50 of the OECD NEA “Working Party on International Nuclear Data Evaluation Co-operation” [59]. This subgroup aims to develop an automatically readable, comprehensive, and curated experimental reaction database that will initially include a few selected EXFOR entries. The structure of this database (especially of the uncertainties) will be aligned such that it can contain the uncertainty sources expected from the templates and the information needed by evaluators. The template uncertainties might also be used to supply otherwise missing uncertainties for data sets in the database.
It should be emphasized that the templates should not be understood as immutable laws but rather as a snapshot in time. They will not apply to all possible measurement types; experimenters and evaluators should use their judgment as to how far they are applicable. They should also be understood as a “living document” insofar as our understanding of uncertainty sources evolves over time as we uncover new physics effects with ever-improving technology and research. Consequently, these templates will be updated periodically on the NNDC webpage based on this improved understanding, and templates for additional observables or measurement types might be established. The aim of this continuous updating is for the templates to enable the best possible uncertainty quantification of experimental data for evaluation purposes at a given time, resulting in more realistic evaluated mean values and covariances in our nuclear-data libraries.
We focus on standard measurement techniques and do not cover experiments specifically designed to validate them. While such experiments are key to ensuring that our standard measurement techniques yield reliable results, there is insufficient information (i.e., too few measurements of the same type) to establish templates for them. We also do not cover integral experiments. For some types of integral experiments (e.g., criticality experiments in the “International Handbook of Evaluated Criticality Safety Benchmark Experiments” (ICSBEP) [14]), stringent uncertainty-quantification rules and template-like tables have been developed [15]. They have been rigorously applied, as part of the review process, to data entering ICSBEP since the release of reference [15]. This process significantly enhances the quality of uncertainties in that database and serves as an example of the benefit of applying templates such as the ones presented here.
A monitor can be, for instance, a reference cross section into which considerable experimental and evaluation effort has been put [16]. Because these monitor cross sections are among the most accurately known, a comparable total uncertainty for a measurement made in ratio to the monitor can be achieved with far less effort.
The following uncertainty sources were identified as missing across several data sets: δcDG delayed gamma-ray, δb random-background, δcff false-fission, δω impurity, δτ deadtime, δχ prompt-fission-neutron-spectrum, δa fission-neutron angular-distribution, δd sample-thickness, and δds/m sample-displacement uncertainties.
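Assuming these sources are independent, such template-estimated components add in quadrature to a total uncertainty. A minimal sketch with purely illustrative percentage values (not taken from any template; the dictionary keys are shorthand labels for the symbols above):

```python
import math

# Hypothetical template-estimated relative uncertainties (%) for the
# sources listed above; the values are illustrative only.
sources = {
    "delayed_gammas": 0.3,
    "random_background": 0.2,
    "false_fissions": 0.1,
    "impurities": 0.2,
    "deadtime": 0.1,
    "pfns": 0.5,
    "angular_distribution": 0.2,
    "sample_thickness": 0.1,
    "sample_displacement": 0.2,
}

# Independent sources combine in quadrature (root-sum-square).
total = math.sqrt(sum(u**2 for u in sources.values()))
print(f"total relative uncertainty: {total:.2f} %")
```

This shows why even several small missing components can contribute noticeably to the total uncertainty of a data set.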
MCNP® and Monte Carlo N-Particle® are registered trademarks owned by Triad National Security, LLC, manager and operator of Los Alamos National Laboratory. Any third-party use of such registered marks should be properly attributed to Triad National Security, LLC, including the use of the designation as appropriate.
Conflict of interests
The authors declare that they have no competing interests to report.
Acknowledgments
DN thanks P. Schillebeeckx and S. Kopecky for discussions.
Funding
Work at LANL was carried out under the auspices of the National Nuclear Security Administration (NNSA) of the U.S. Department of Energy (DOE) under contract 89233218CNA000001. We gratefully acknowledge partial support of the Advanced Simulation and Computing program at LANL and the DOE Nuclear Criticality Safety Program, funded and managed by NNSA for the DOE. Part of this material is based upon work supported by the Department of Energy National Nuclear Security Administration through the Nuclear Science and Security Consortium under Award Number(s) DE-NA0003180 and the Office of Nuclear Physics under DE-20SSC000056 and DE-SC0021243. This work was performed in part under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Work at Brookhaven National Laboratory was sponsored by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC.
Data availability statement
All data created in association with this manuscript are contained within its main text, figures, and tables.
Author contribution statement
DN wrote the original draft and undertook the evaluation and simulation example. DS and SC contributed text to the first draft. All authors reviewed the manuscript. All authors were involved in the conception of the template idea, in the investigations and data curation associated with the article, and in discussions on uncertainty quantification, either by taking part in the mini-CSEWG workshop on templates in 2019 or in discussions afterward.
References
- Experimental Nuclear Reaction Data Library (EXFOR), IAEA Nuclear Data Section, See https://www-nds.iaea.org/exfor (accessed 2016-11-8), or for the NNDC at Brookhaven National Laboratory, the mirror site is http://www.nndc.bnl.gov/exfor (accessed 2016-11-8) [Google Scholar]
- N. Otuka et al., Towards a more complete and accurate experimental nuclear reaction data library (EXFOR): International collaboration between nuclear reaction data centres (NRDC), Nucl. Data Sheets 120, 272 (2014) [CrossRef] [Google Scholar]
- V.V. Zerkin, B. Pritychenko, The Experimental Nuclear Reaction Data (EXFOR): Extended Computer Database and Web Retrieval System, Nucl. Instrum. Meth. Phys. Res. Sec. A 888, 31 (2018) [CrossRef] [Google Scholar]
- D. Neudecker et al., Applying a template of expected uncertainties to updating 239Pu(n, f) cross-section covariances in the neutron data standards database, Nucl. Data Sheets 163, 228 (2020) [CrossRef] [Google Scholar]
- D. Neudecker et al., The need for precise and well-documented experimental data on prompt fission neutron spectra from neutron-induced fission of 239Pu, Nucl. Data Sheets 131, 289 (2016) [CrossRef] [Google Scholar]
- A.M. Lewis et al., Templates of expected measurement uncertainties for total cross section observables, EPJ Nuclear Sci. Technol. 9, 34 (2023) [CrossRef] [EDP Sciences] [Google Scholar]
- A.M. Lewis et al., Templates of expected measurement uncertainties for capture and charged-particle production cross section observables, EPJ Nuclear Sci. Technol. 9, 33 (2023) [CrossRef] [EDP Sciences] [Google Scholar]
- A. Lewis, Uncertainty analysis procedures for neutron-induced cross section measurements and evaluations, PhD thesis. Department of Nucl. Engineering, University of California, Berkeley, USA, 2020 [Google Scholar]
- J.R. Vanhoy et al., Templates of expected measurement uncertainties for (n, xn) cross sections, EPJ Nuclear Sci. Technol. 9, 31 (2023) [CrossRef] [EDP Sciences] [Google Scholar]
- D. Neudecker et al., Templates of expected measurement uncertainties for prompt fission neutron spectra, EPJ Nuclear Sci. Technol. 9, 32 (2023) [CrossRef] [EDP Sciences] [Google Scholar]
- D. Neudecker et al., Templates of expected measurement uncertainties for average prompt and total fission neutron multiplicities, EPJ Nuclear Sci. Technol. 9, 30 (2023) [CrossRef] [EDP Sciences] [Google Scholar]
- E.F. Matthews et al., Templates of expected measurement uncertainties for fission yields, EPJ Nuclear Sci. Technol. 9, 29 (2023) [Google Scholar]
- E.F. Matthews, Advancements in the nuclear data of fission yields, PhD thesis, Department of Nucl. Engineering, University of California, Berkeley, USA, 2021. [Google Scholar]
- J. Bess, editor, International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP), Organization for Economic Co-operation and Development-Nuclear Energy Agency Report NEA/NSC/DOC(95)03, 2019. [Google Scholar]
- V.F. Dean (Ed.), ICSBEP Guide to the Expression of Uncertainties, in [14], Organization for Economic Co-operation and Development-Nuclear Energy Agency Report NEA/NSC/DOC(95)03, 2019. [Google Scholar]
- A.D. Carlson et al., Evaluation of the neutron data standards, Nucl. Data Sheets 148, 143 (2018) [Google Scholar]
- P. Schillebeeckx et al., Determination of resonance parameters and their covariances from neutron induced reaction cross section data, Nucl. Data Sheets 113, 3054 (2012) [CrossRef] [Google Scholar]
- F. Gunsing, P. Schillebeeckx, V. Semkova, IAEA Report INDC(NDS)-0647, 2013. [Google Scholar]
- P. Helgesson, H. Sjöstrand, D. Rochman, Uncertainty-driven nuclear data evaluation including thermal (n,α) applied to 59Ni, Nucl. Data Sheets 145, 1 (2017) [CrossRef] [Google Scholar]
- D. Neudecker et al., Template for estimating uncertainties of measured neutron-induced fission cross-sections, EPJ Nuclear Sci. Technol. 4, 21 (2018) [CrossRef] [EDP Sciences] [Google Scholar]
- D.L. Smith, N. Otuka, Experimental nuclear reaction data uncertainties: basic concepts and documentation, Nucl. Data Sheets 113, 3006 (2012) [CrossRef] [Google Scholar]
- D.L. Smith, Probability, Statistics and Data Uncertainties in Nuclear Science and Technology (American Nucl. Society, LaGrange Park, IL, 1991) [Google Scholar]
- M. Drosg, Dealing with Uncertainties. A Guide to Error Analysis, 2nd enlarged edn. (Springer, Heidelberg/New York, 2009) [CrossRef] [Google Scholar]
- A.E. Lovell, D. Neudecker, P. Talou, Release of Evaluated 235U(n, f) Average Prompt Neutron Multiplicities Including the CGMF Model, Los Alamos National Laboratory Report LA-UR-22-23475, 2022. [Google Scholar]
- Ju.A. Bljumkina, I.I. Bondarenko, V.F. Kuznetsov, V.G. Nesterov, V.N. Okolovitch, G.N. Smirenkin, L.N. Usachev, Channel effects in the energy dependence of the number of prompt neutrons and the kinetic energy of fragments in the fission of U235 and U233 by neutrons, Nucl. Phys. 52, 648 (1964) [CrossRef] [Google Scholar]
- D.W. Colvin, M.G. Sowerby, Boron pile nu-bar measurements, Proc. Nucl. Data Reactors Conf., Paris 1966 1, 307 (1966). [Google Scholar]
- H. Conde, Average number of neutrons from the fission of U-235, Arkiv foer Fysik 29, 293 (1965) [Google Scholar]
- B.C. Diven, H.C. Martin, R.F. Taschek, Multiplicities of fission neutrons, Phys. Rev. 101, 1012 (1956) [CrossRef] [Google Scholar]
- B.C. Diven, J.C. Hopkins, Numbers of prompt neutrons per fission for U233, U235, Pu239 and Cf252, Proc. Reactor Physics Sem., Vienna 1961 1, 149 (1961). [Google Scholar]
- D.S. Mather, M.H. McTaggart, A. Moat, Revision of the Harwell 240Pu source strength and ν̅ for 235U and 252Cf, J. Nucl. Energy A/B 20, 549 (1966) [Google Scholar]
- M. Soleihac, J. Frehaut, J. Gauriau, Energy dependence of ν̅p for neutron-induced fission of 235U, 238U and 239Pu from 1.3 to 15 MeV, J. Nucl. Energy 23, 257 (1969) [CrossRef] [Google Scholar]
- J. Frehaut, G. Mosinski, M. Soleihac, Recent results in ν̅ measurements between 1.5 and 15 MeV, Topical Conference on ν̅, the Average Number of Neutrons Emitted in Fission, France, 1972, Report EANDC(E)-15 “U”, 1973. [Google Scholar]
- J. Frehaut, A. Bertin, R. Bois, Mesure de ν̅p et Ēγ pour la fission de 232Th, 235U et 237Np induite par des neutrons d’énergie comprise entre 1 et 15 MeV, Centre d’Etudes Nucléaires 2196 (1981). [Google Scholar]
- R. Gwin, R.R. Spencer, R.W. Ingle, Measurements of the energy dependence of prompt neutron emission from 233U, 235U, and 239Pu for En = 0.0005 to 10 MeV relative to emission from spontaneous fission of 252Cf, Nucl. Sci. Eng. 94, 365 (1986) [CrossRef] [Google Scholar]
- R. Gwin, R.R. Spencer, R.W. Ingle, Measurements of the energy dependence of prompt neutron emission from 233U, 235U, 239Pu, and 241Pu for En = 0.0005 to 10 eV relative to emission from spontaneous fission of 252Cf, Nucl. Sci. Eng. 87, 381 (1984) [CrossRef] [Google Scholar]
- R. Gwin et al., Measurements of the Average Number of Prompt Neutrons Emitted Per Fission of 239Pu and 235U (Oak Ridge National Laboratory ORNL/TM-6246, 1978). [Google Scholar]
- J.C. Hopkins, B.C. Diven, Prompt neutrons from fission, Nucl. Phys. 48, 433 (1963) [CrossRef] [Google Scholar]
- R.E. Howe, T.W. Phillips, Fission nu-bar measurements, Brookhaven Natl. Lab. Rep. 21501, 66 (1976) [Google Scholar]
- F. Kaeppeler, R.E. Bandl, The average number of prompt neutrons from neutron induced fission of U-235 between 0.2 and 1.4 MeV, Proc. Conf. Nucl. Cross-Sect. Techn., Washington 1975 2, 549 (1975). [Google Scholar]
- J.W. Meadows, J.F. Whalen, Energy dependence of prompt ν̅ for neutron-induced fission of U235, Phys. Rev. 126, 197 (1962) [CrossRef] [Google Scholar]
- J.W. Meadows, Measurement of ν̅p for 235U, U.S. Rep. EANDC 70, 9 (1964). [Google Scholar]
- J.W. Meadows, J.F. Whalen, Energy dependence of ν̅p for neutron-induced fission of U235 below 1.0 MeV, J. Nucl. Energy 21, 157 (1967) [CrossRef] [Google Scholar]
- L.I. Prokhorova et al., Yield of prompt neutrons ν̅tot in the fission of U235 by neutrons with energies up to 1.5 MeV, Atomnaya Énergiya 30, 250 (1971) [Google Scholar]
- L.I. Prokhorova, G.N. Smirenkin, Average number of prompt neutrons from U235 and Th232 fission induced by neutrons having energies up to 3.3 MeV, Yadernaya Fizika 7, 961 (1968) [Google Scholar]
- A.N. Protopopov, M.V. Blinov, Mean number of neutrons emitted in U235 fission induced by 14.8-MeV neutrons, Atomnaya Energiya 4 (1958) [Google Scholar]
- M.V. Savin et al., The Average Number of Prompt Neutrons in Fast Neutron Induced Fission of U-235, Pu-239 and Pu-240, IAEA Report IAEA-CN-26/40 (1970) [Google Scholar]
- M.V. Savin, Y.A. Khokhlov, A.E. Savelev, I.N. Paramonova, Energy dependence of ν̅ in the fission of U235 by fast neutrons, Proc. Third All Union Conf. Neutron Phys. Kiev, 9–13 Jun 1975 5, 186 (1975). [Google Scholar]
- M.V. Savin, Ju.A. Khokhlov, V.N. Ludin, Average number of prompt neutrons at the 235U fission by neutrons in the energy interval MeV, Proc. Sec. Conf. Neutron Phys. Kiev 1973 4, 63 (1973). [Google Scholar]
- G.N. Smirenkin et al., Mean number of prompt neutrons in the fission of U233, U235, Pu239 by 4 and 15 MeV neutrons, Sov. At. Energy 4, 253 (1958) [CrossRef] [Google Scholar]
- M. Soleihac et al., Average number of prompt neutrons and relative fission cross-sections of U-235 and Pu-239 in the 0.3 to 1.4 MeV Range, Proc. Conf. Nucl. Data Reactors, Helsinki, 2, 145 (1970) [Google Scholar]
- J.W. Boldeman, J. Fréhaut, R.L. Walsh, A reconciliation of measurements of ν̅p for neutron-induced fission of Uranium-235, Nucl. Sci. Eng. 63, 430 (1977) [CrossRef] [Google Scholar]
- R.L. Walsh, J.W. Boldeman, The energy dependence of ν̅p for 233U, 235U and 239Pu below 5.0 MeV, J. Nucl. Energy 25, 321 (1971) [CrossRef] [Google Scholar]
- D. Neudecker, ARIADNE–A program estimating covariances in detail for neutron experiments, EPJ Nuclear Sci. Technol. 4, 34 (2018) [CrossRef] [EDP Sciences] [Google Scholar]
- M.W. Herman (co-ordinator), Covariance Data in the Fast Neutron Region, Organization for Economic Co-operation and Development-Nuclear Energy Agency NEA/NSC/WPEC/DOC(2010), 427 (2011) [Google Scholar]
- D.A. Brown et al., ENDF/B-VIII.0: The 8th major release of the nuclear reaction data library with CIELO-project cross sections, new standards and thermal scattering data, Nucl. Data Sheets 148, 1 (2018) [CrossRef] [Google Scholar]
- W. Haeck et al., ENDFtk, https://github.com/njoy/ENDFtk (accessed 2023-3-31). [Google Scholar]
- R.E. MacFarlane et al., The NJOY Nuclear Data Processing System, Version 2016, Los Alamos National Laboratory Report LA-UR-17-20093, 2017 [CrossRef] [Google Scholar]
- C. Werner et al., MCNP Users Manual – Code Version 6.2, Los Alamos National Laboratory Report LA-UR-17-29981, 2017 [Google Scholar]
- SubGroup 50 (Co-ordinators: A. Lewis, D. Neudecker, Monitor: A. Koning), Developing an Automatically Readable, Comprehensive and Curated Experimental Reaction Database, https://www.oecd-nea.org/download/wpec/sg50/ (accessed 2023-3-31) [Google Scholar]
Cite this article as: Denise Neudecker, Amanda M. Lewis, Eric F. Matthews, Jeffrey Vanhoy, Robert C. Haight, Donald L. Smith, Patrick Talou, Stephen Croft, Allan D. Carlson, Bruce Pierson, Anton Wallner, Ali Al-Adili, Lee Bernstein, Roberto Capote, Matthew Devlin, Manfred Drosg, Dana L. Duke, Sean Finch, Michal W. Herman, Keegan J. Kelly, Arjan Koning, Amy E. Lovell, Paola Marini, Kristina Montoya, Gustavo P.A. Nobre, Mark Paris, Boris Pritychenko, Henrik Sjöstrand, Lucas Snyder, Vladimir Sobes, Andreas Solders, and Julien Taieb. Templates of expected measurement uncertainties, EPJ Nuclear Sci. Technol. 9, 35 (2023)
All Tables
235U(n,f) experimental data sets included in the example evaluation, identified by their EXFOR No., first author, and year of publication. The last column lists all uncertainty sources that were added based on the templates for ν̅p in Table 1 of [11]. The information presented here was assembled from parts of Table I of LANL Report [24].
All Figures
Fig. 1. Evaluated mean values (left-hand side) and uncertainties (right-hand side) of 235U(n,f) ν̅p.