Production and use of nuclear parameter covariance data: an overview of challenging cross cutting scientific issues

Nuclear data users’ requirements for uncertainty data date back to the seventies, when several fast reactor projects made extensive use of “statistical data adjustments” to meet data improvement needs for core and shielding design. However, it was only some 20–30 years later, and in particular since ∼2005, that a major effort started to produce scientifically based covariance data. Much work has been done since then, with spectacular achievements and an enhanced understanding both of the uncertainty evaluation process and of the use of the data in V&V. This paper summarizes some key developments and still open challenges.


Introduction
During the last two decades, "Nuclear Data Needs, UQ, and Assimilation" has been recognized as a key research area in the nuclear energy domain, with multiple motivations: reducing safety margins and optimizing designs; streamlining research and in particular new experiments; and enhancing multidisciplinary synergies among nuclear physics theoreticians, nuclear data evaluators, experimentalists, and reactor and fuel cycle physicists.
These issues have been systematically approached and supported by the NEA WPEC in the period 2005-2017. Although activities on data uncertainty quantification have been performed since the fast reactor (FR) design work of the seventies (see e.g. Ref. [1]), a revival and much more widespread efforts have been underway recently. However, new issues have been raised or revisited, in particular concerning the use of integral experiments in the evaluation process, and some are still under discussion: for example, the production of new a posteriori correlations is still to be fully exploited, despite its potential impact on uncertainty quantification, e.g. in reactor design and safety cases.

A starting point (∼2005)
Data needs assessment was first performed at the time when ADS were fashionable, since data requirements had multiplied without much justification or user involvement.
At that time a powerful initiative, GENERATION-IV, triggered a much wider effort over a broader range of innovative reactor systems.
This has been the case also for new nuclear fuel cycles and waste management issues.
In order to understand, rationalize and streamline potential needs, it was necessary to define target accuracies for the most important design parameters and to verify the availability both of data uncertainties/covariance data and of sensitivity tools for a meaningful sensitivity and uncertainty analysis (SUA).
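Such a sensitivity and uncertainty analysis rests on the standard "sandwich rule" for propagating nuclear data covariances through sensitivities. The following minimal sketch illustrates the propagation for a single integral parameter; all numbers are invented for illustration and are not taken from any evaluation.

```python
import numpy as np

# Sandwich rule of sensitivity/uncertainty analysis (SUA):
# the relative variance of an integral parameter R is propagated from a
# (relative) nuclear data covariance matrix M and a sensitivity vector S
# via  dR^2 = S^T M S.  All numbers below are hypothetical.

S = np.array([0.8, -0.3, 0.1])   # relative sensitivities (dR/R) / (dsigma/sigma)

# Hypothetical relative covariance matrix for 3 cross-section "groups":
# 5%, 10%, 20% standard deviations with some correlation.
std = np.array([0.05, 0.10, 0.20])
corr = np.array([[1.0, 0.3, 0.0],
                 [0.3, 1.0, 0.2],
                 [0.0, 0.2, 1.0]])
M = np.outer(std, std) * corr

var_R = S @ M @ S                # sandwich rule
print(f"relative uncertainty on R: {np.sqrt(var_R):.2%}")
```

With these invented numbers the propagated relative uncertainty on R is about 4.4%; the target-accuracy ("inverse") problem then asks which reductions of M would bring this value below a required design threshold.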
Users were consulted, and feedback was given both by R&D organizations and by some industry representatives. The first step in organizing the activity to meet the requirements was taken within the OECD-NEA WPEC working party, with wide expert participation. To perform the first systematic uncertainty analysis [2], not much was available in terms of data uncertainties and correlations, and rather "provocative" uncertainty data (based on expert judgement) were initially used [3].
This initiative triggered a large effort to assess uncertainty data systematically, see e.g. [4,5].
Another important outcome was a first list of updated priorities for GEN-IV reactors, which was established and implemented in the High Priority Request List at NEA.
Subsequently, new covariance data bases were actively developed and new requirements for their completeness were expressed.
The issue of how to meet data needs was revisited: the role of new microscopic experiments, new evaluations, and/or data assimilation/adjustments was again actively debated.

Next step: covariance data comparison (WPEC Subgroup 33; 2009-2013)
One objective of Subgroup 33 was to find an international common answer to a very basic question: do we understand data assimilation methods? As a result, a comprehensive compilation of methods was delivered [6], indicating that, despite different stages of development, there is an unambiguous understanding of the methods used in nuclear data adjustments.
A further objective was to investigate, from the point of view of the applications, the issue of how reliable covariance data are. First performance comparisons of different covariance data sets and of their impact on applications were summarized [7,8] and some important points were made as feedback to evaluators.
Finally, a comprehensive benchmark exercise was carried out to understand whether adjustments starting from different cross-section data bases and using different covariance data converge. Results and the related analysis were presented at ND2013 [9].

Next step: reliability issues (Subgroup 39; 2013 to present)
With the availability of new covariance data, it became urgent to revisit the fundamental question of how reliable adjustment trends are. Stress tests performed within the group (see e.g. [10]) once more pointed to potential inconsistencies when the integral data base was not carefully investigated and documented, in particular in terms of uncertainties, potential systematic errors and correlations.
The Subgroup tackled the key issue of finding methods to make the adjustment approach more robust, to make the best use of the information available, and to define scientific criteria for the selection of integral experiments, in particular to avoid compensations when modifications (i.e. adjustments) of cross sections of different isotope reactions are suggested. Some examples of methods and approaches to deal with these issues are described in [11] and in a dedicated report [12], and are still under further development.
As a significant example, new approaches to integral data selection were applied to a very comprehensive adjustment, and a first large-scale exercise was presented at ND2016 [13]. New types of experiments, besides the standard LANL and ANL criticals, were introduced in the adjustment, including neutron propagation experiments, variable-spectrum experiments devoted to MAs, and variable adjoint flux energy shape experiments.
Preliminary feedback was also provided (e.g. on the Fe-56 inelastic cross section) to new evaluations (e.g. CIELO-1, Ref. [14]). In fact, the observed C/E values would require a decrease of the Fe-56 inelastic cross section with respect to ENDF/B-VII, while the new evaluation of the inelastic scattering cross section in CIELO-1 is larger over the energy range from threshold up to 20 MeV, see Figure 1 (Fe-56 cross sections from Ref. [14]).

The future: effective feedbacks to evaluations (new Subgroup SG46)
Most recent progress in covariance data methodologies and data production will be reported and discussed at this meeting. However, to make further progress towards the best use both of covariance data and of integral experiments for a wide range of applications, there are still a number of key issues:
- improvement and extension of covariance data (cross correlations among reactions and among isotopes; angular distributions; secondary neutrons from inelastic scattering; photon production data; delayed neutron data);
- development of methods to assess the reliability of covariance data: are there criteria beyond mathematical requirements? In this respect, the new Subgroup set up at NEA (SG44, "Investigation of Covariance Data in General Purpose Nuclear Data Libraries") will play an important role;
- definition of updated target accuracies (by combining the inverse approach and integral experiments) for design, operation and fuel cycle parameters, and assessment of the impact of present covariance data on accuracy requirements (the case of Pu-239 fission);
- clarification of the respective roles of nuclear data evaluators and users in the use of integral experiments, e.g. in order to avoid double use of the same experiments for the same isotope/reaction, and definition of a widely agreed protocol for the use of integral experiments in evaluations;
- understanding how to exploit induced (i.e. a posteriori) correlations between nuclear data and experiments.
These correlations appear according to the scheme shown below.
The global "a posteriori" covariance matrix is given by:

M' = [ M'_s      M'_s S^T
       S M'_s    S M'_s S^T ]

In that expression, the "a posteriori" cross section covariance matrix is given by:

M'_s = M_s − M_s S^T (S M_s S^T + V_E)^{-1} S M_s

and the "a posteriori" integral parameter covariance matrix is given by:

M'_E = S M'_s S^T

where M_s is the a priori cross section covariance matrix, S is the sensitivity matrix of the integral experiments, and V_E their experimental covariance matrix. The above expressions indicate that the global "a posteriori" correlation matrix is fully correlated: a posteriori correlations are found between the cross sections and the integral parameters (the off-diagonal blocks M'_s S^T).

As far as integral experiment optimization is concerned, i.e. selecting and prioritizing appropriate experiments, in particular those that provide separate physics effects, this seems a reasonable goal for the short term. There are important issues that can easily benefit from new strategies and that could motivate progress:
- the performance or retrieval of past experiments related to the improvement of burn-up reactivity swing assessment [15], in particular for safety issues of metal-fueled reactor cores;
- the performance/retrieval of experiments able to separate capture from scattering in reactivity effects, both for actinides and fission products [16];
- the performance/retrieval of neutron-leakage experiments from single-material spheres for scattering data assessment [17];
- the investigation of how to perform generalized adjustments that provide unambiguous feedback to nuclear data evaluators; some approaches have been proposed (Yokoyama, Palmiotti, Pelloni and Ivanov, e.g. the PIA method, Ref. [11]) but are not yet finalized or widely used;
- the exploration of the potential of Continuous Energy Assimilation; recent results [18] provide feasibility indications and open interesting paths;
- finally, the need to define without ambiguity the application domain of any adjusted evaluation/library and of the a posteriori covariances/correlations.
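A minimal numerical sketch of these a posteriori expressions (a generalized least-squares adjustment with invented numbers: a diagonal prior covariance and two hypothetical integral experiments) illustrates how the adjustment induces correlations between cross sections and integral parameters:

```python
import numpy as np

# Hypothetical illustration (numbers invented, not from any library):
# prior cross-section covariance M_s, sensitivity matrix S of the integral
# experiments, experimental covariance V_E.  The a posteriori blocks are
# computed and assembled into the global a posteriori covariance matrix.

rng = np.random.default_rng(0)

n_sigma, n_exp = 4, 2
M_s = np.diag([0.04, 0.09, 0.01, 0.0025])   # prior (relative) covariance
S = rng.normal(size=(n_exp, n_sigma))        # sensitivities of 2 experiments
V_E = np.diag([0.001, 0.002])                # integral experiment covariance

G = S @ M_s @ S.T + V_E                      # innovation covariance
K = M_s @ S.T @ np.linalg.inv(G)             # "gain" matrix

M_s_post = M_s - K @ S @ M_s                 # a posteriori cross-section covariance
M_E_post = S @ M_s_post @ S.T                # a posteriori integral parameter covariance
M_sE_post = M_s_post @ S.T                   # induced sigma-E cross correlations

# Global a posteriori covariance matrix in block form:
M_global = np.block([[M_s_post, M_sE_post],
                     [M_sE_post.T, M_E_post]])

print("posterior variances reduced:", np.all(np.diag(M_s_post) <= np.diag(M_s)))
print("nonzero sigma-E correlations:", np.any(np.abs(M_sE_post) > 1e-12))
```

Even though the prior M_s here is diagonal, the adjustment produces non-zero off-diagonal blocks: the a posteriori cross sections and integral parameters are correlated, which is the "induced correlation" issue raised above.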
The notion of "representativity" of an experiment has been introduced [19] and used [20,21] in order to go beyond the simple comparison of one experiment with one specific reference system, by means of the "similarity" of the associated sensitivity profiles S_R and S_E. The "representativity" factor r_RE in the case of one experiment is given by:

r_RE = (S_R^T M_s S_E) / [(S_R^T M_s S_R) (S_E^T M_s S_E)]^{1/2}

where M_s is the nuclear data covariance matrix. It can be shown [19] that the uncertainty on the reference parameter R, ΔR_0^2, is reduced to:

ΔR'^2 = ΔR_0^2 (1 − r_RE^2)

If more than one experiment is available, the previous equation can be generalized. For example, in the case of two experiments, characterized by sensitivity matrices S_E1 and S_E2, the following expression can be derived:

ΔR'^2 = ΔR_0^2 (1 − r_RE1^2)(1 − r'_RE2^2)

where M'_s is the a posteriori covariance matrix obtained after taking the first experiment into account, and r'_RE2 is the representativity factor of the second experiment evaluated with M'_s. These expressions can be used to plan experiments giving an optimized contribution to the uncertainty reduction of a reference system, but also to verify the range of applicability (in terms of the capability to significantly reduce the uncertainties of a set of reference systems) of the adjustment performed.
Here too, the covariance data play a crucial role: in fact, one should assess the impact of the covariance data used on the "representativity" itself.
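The single-experiment representativity factor and the resulting uncertainty reduction can be illustrated numerically; the sensitivity profiles and the covariance matrix below are invented for illustration only.

```python
import numpy as np

# Hypothetical sketch of the representativity factor r_RE and the resulting
# uncertainty reduction; S_R, S_E and M_s are invented numbers.

M_s = np.array([[0.0025, 0.0010, 0.0],
                [0.0010, 0.0100, 0.0020],
                [0.0,    0.0020, 0.0400]])   # nuclear data covariance matrix
S_R = np.array([0.9, 0.4, 0.1])              # sensitivity profile, reference system
S_E = np.array([0.8, 0.5, 0.2])              # sensitivity profile, experiment

dR2_0 = S_R @ M_s @ S_R                      # prior (squared) uncertainty of R
r_RE = (S_R @ M_s @ S_E) / np.sqrt(dR2_0 * (S_E @ M_s @ S_E))

dR2_post = dR2_0 * (1.0 - r_RE**2)           # reduced uncertainty after using E
print(f"r_RE = {r_RE:.3f}, "
      f"uncertainty {np.sqrt(dR2_0):.2%} -> {np.sqrt(dR2_post):.2%}")
```

Because r_RE is a scalar product normalized through M_s, changing the covariance data can change both the ranking of candidate experiments and the predicted uncertainty reduction, which is precisely why the impact of the covariance data on the representativity must be assessed.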
- In order to keep track of improvements for applications, it has been suggested to quantify systematically the impact of new revised data on a list of selected target power reactors (thermal, epithermal, and fast spectrum reactors) and on a range of integral parameters beyond k_eff. This list of reactors/systems and parameters should be defined as far as possible with the help of industry representatives (TerraPower has been a good example through its participation in the activities in the UQ field within WPEC [22]), accounting for safety requirements, in a framework of data traceability. The revisited assessments of the Hot Channel Factor and of fuel assembly bowing, or the definition of safety margins during transients in FRs, point to a renewed interest in the role of credible nuclear data uncertainties.

Experiments perspective
The progress in methodologies and the availability of improved covariance data suggest new "smart" integral experiments, to be supported in the frame of wide international collaborations, e.g. the joint experiments on MAs proposed by the NEA Expert Group on experiments [23] in support both of waste management and of long burn-up reactivity swing assessment. As indicated previously, a very high priority should be given to experiments that enhance the separation of physics effects and depend on few nuclear data, such as neutron-leakage experiments from single-material spheres or substitution reactivity experiments with fuels of different isotopic composition. Many such experiments are already available and a few more could be defined, possibly in the frame of international collaborations.
Moreover, it is suggested to limit the use of criticality experiments mostly to a final verification of a series of specific data improvements, in order to avoid as much as possible misleading validation results related to possible compensations among modified data [24].
Finally, it seems timely to generalize the use of the "representativity" of a series of experiments aiming at a wider range of reference applications.

Author contribution statement
The first author, M. Salvatores, provided the historical perspective, the theoretical background and the recommendations, while the second author, G. Palmiotti, performed most of the calculations and the analysis of results.