Open Access
EPJ Nuclear Sci. Technol.
Volume 10, 2024
Article Number 11
Number of page(s) 10
DOI https://doi.org/10.1051/epjn/2024011
Published online 11 October 2024

© V. Vallet et al., Published by EDP Sciences, 2024

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

Decay heat experiments enable the experimental validation of decay data and fission yield data [1–3]. They complement post-irradiation experiments on fuel inventories [4, 5], since not all isotopes of interest for fuel cycle applications can be measured with satisfactory accuracy. To provide information on those specific nuclides individually, elementary fission bursts are also of great interest [6–8]. However, since they concern short cooling times – i.e. times for which a large number of fission products are involved, each with a small contribution to the total decay heat – the results are less useful as feedback to the nuclear data evaluators’ community. In Huyghe et al. [1], the MERCI-1 experiment was considered in a data assimilation process. The MERCI experiment consisted of the irradiation of a fresh UOX fuel pin, with a 3.7 wt.% 235U enrichment, in the periphery of the OSIRIS reactor up to 4 GWd/t. The decay heat was then measured in the MOSAIC calorimeter from 27 min to 45 days of cooling [9]. The main issue when analyzing the data in order to assimilate the C/E-1 discrepancies was to assess rigorous and realistic correlations between the experimental values as a function of the cooling time.

The work detailed in this paper was initiated to study the cumulative fission yields of four nuclides that are particularly important for spent fuel characterization: 235U(nth,f)133Cs (burnup credit and decay heat), 235U(nth,f)137Cs (burnup indicator and decay heat), 239Pu(nth,f)106Ru (decay heat) and 239Pu(nth,f)144Ce (decay heat). Those nuclides were already studied in a previous work [5] based on the JEFF-3.1.1 library [10], which consisted of the data assimilation of the spent fuel chemical assay data of the experimental database of the DARWIN2.3 package [13]. Table 1 shows that, for some yields, the trends from that work are opposite to the evolution observed between the JEFF-3.1.1 and JEFF-3.3 libraries:

  • The cumulative fission yield 235U(nth,f)137Cs was reduced by 2.1%, whereas [5] concluded that this yield should be increased by +(7.5 ± 1.9)%.

  • The cumulative fission yield 239Pu(nth,f)144Ce was almost unchanged between JEFF-3.1.1 and JEFF-3.3, whereas [5] recommends lowering this yield by (8.0 ± 1.3)%.

Table 1.

Prior values of the cumulative yields in the JEFF-3.1.1 library; the values in bold are relative to JEFF-3.1.1, and the values in italics are relative uncertainties.

The General Electric (GE) set of decay heat measurements [11, 12] has been selected for this study. The cooling times of the measured assemblies, between 3 and 8 years, cover the time range where the contribution of the fission products of the masses of interest is non-negligible (see Fig. 1). Moreover, the associated C/E discrepancies and uncertainties are consistent with each other. The interpretation of the GE experiment is detailed in Section 2. The data assimilation method is then briefly described in Section 3, with a focus on the definition of the correlation matrices. The results are discussed in Section 4.

Fig. 1.

Relative decay heat contributions – Point Beach 2 calculations.

2. Experimental validation of the GE decay heat experiment with DARWIN2.3

2.1. Description of the GE experiments

In the 1980s in the United States of America, two experimental programs of decay heat measurements were designed and performed to be representative of the storage of spent fuels with both higher 235U enrichments and higher discharge burnups. These programs consisted of measuring the heat of entire assemblies irradiated in PWRs and BWRs in two facilities equipped with calorimeters: the General Electric Morris Operation (GE) storage facility at Morris, Illinois, and the Engine Maintenance Assembly and Disassembly facility at the Hanford Engineering Development Laboratory (HEDL), on the Nevada Test Site. The data collected with the HEDL calorimeter are discarded in this study because of excessive experimental uncertainty, ranging from 5 to 10% at one standard deviation, compared to the 2–4% uncertainties of the GE calorimeter.

The main characteristics of the PWR data are summarized in Table 2, and a more detailed description is given in Gauld et al. [12]. Measurements were carried out on 14 × 14 assemblies irradiated in the Point Beach 2 and San Onofre 1 PWRs, with initial 235U enrichments ranging from 3.3 to 4.0 wt.% and burnups ranging from 26 to 40 GWd/t. The time between the end of irradiation and the measurement, called the “cooling time”, ranges from 1078 days (2.95 y) to 3012 days (8.25 y).

Table 2.

Main features of GE assemblies.

2.2. Description of the DARWIN2.3 package (PWR route)

In the PWR calculation route (see Fig. 2), the DARWIN2.3 package [13, 14] involves APOLLO2, a 2D lattice neutron transport code, and PEPIN2, a depletion code whose strength is to have almost complete filiation chains, describing more than 3800 radionuclides. The APOLLO2 neutron code [15] is used to produce an input data file for PEPIN2. This file gathers both multigroup self-shielded cross-sections and neutron spectra as a function of burnup for the studied fuel. The PEPIN2 depletion code uses this input data file to produce a collapsed library with burnup-dependent cross-sections. This library is completed with cross-sections from JEFF-3.1.1 for the isotopes missing from the APOLLO2 filiation chains, as well as with decay data and independent fission yields (FY) coming from the JEFF-3.1.1 library. A very detailed depletion history can be given to the PEPIN2 code, including, for instance, intra-cycle power variations and cooling periods.

Fig. 2.

Flow chart of the DARWIN2.3 package for PWR calculations [5].

2.3. Uncertainty budget and results

The systematic uncertainty associated with the GE calorimeter has been estimated at ±2% for a measured power of 700 W and ±4% for a measured power of 200 W. A random error of 16.7 W has been estimated from the dispersion of the 14 measurements performed on the CZ205 (BWR) assembly with the GE calorimeter over a 246-day period (the decay between measurements has been accounted for) [12].

In addition to the measurement uncertainties, other sources must be considered, originating from design tolerances and operating conditions:

  • Fuel temperature (±50°C). This uncertainty comes from the METEOR code [16] and is commonly used in the DARWIN2.3 experimental validation [13].

  • Moderator temperature (±2°C). This uncertainty also comes from the METEOR code.

  • Constant average boron concentration (±200 ppm). In a PWR loaded with UOX fuel, the boron concentration may vary from ∼1600 ppm to 0 over each burnup cycle. A value of 550 ppm of boron is given in the Point Beach reactor design specifications [12], and an uncertainty of 10 ppm is recommended when the average boron concentration is known [17]. Nevertheless, in the experimental validation of the DARWIN2.3 package, a larger uncertainty is recommended when the average boron content is not known precisely, which is why the value of ±200 ppm has been chosen in this study.

  • Initial 235U enrichment (±0.03%). No specifications were given in the documents characterizing the fuel, so a relative uncertainty of 1% has been assumed, which leads to a variation of ±0.03% on the 235U enrichment.

  • Burnup (±2%). This uncertainty is commonly used in the experimental validation of the DARWIN2.3 package [13] and comes from the uncertainty of the cumulated yields of 145, 146, 148, 150Nd, weighted by the fraction of fission of each fissile isotope.

Table 3 summarizes the propagation of these uncertainties to the decay heat computations, comparing calculations performed in nominal conditions with calculations performed in perturbed conditions (direct perturbation method): the total uncertainty is driven by the measurement uncertainties and by the uncertainty on the burnup knowledge.
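The direct perturbation and quadratic summation of these sources can be sketched as follows. This is a minimal Python illustration with invented decay heat values (the actual Table 3 numbers come from DARWIN2.3 runs); each parameter is varied by its 1σ tolerance, the decay heat is recomputed, and the independent relative effects are combined as a root-sum-square.

```python
import numpy as np

# Hypothetical nominal decay heat (W) of one assembly, for illustration only
P_nominal = 625.0

# (source of uncertainty, decay heat recomputed with the 1-sigma perturbation)
perturbed = {
    "fuel temperature (+50 C)":       626.1,
    "moderator temperature (+2 C)":   625.3,
    "boron concentration (+200 ppm)": 626.0,
    "U235 enrichment (+0.03%)":       624.2,
    "burnup (+2%)":                   633.8,
}

# Relative 1-sigma effect of each source, in %
effects = {k: 100.0 * (v - P_nominal) / P_nominal for k, v in perturbed.items()}

# Quadratic summation: sources assumed independent, so the total
# modeling/technological uncertainty is the root-sum-square of the effects
total = np.sqrt(sum(e**2 for e in effects.values()))
for k, e in effects.items():
    print(f"{k:32s} {e:+.3f} %")
print(f"total (quadratic summation)      {total:.3f} %")
```

With these invented numbers the burnup term dominates the total, mirroring the conclusion drawn from Table 3.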

Table 3.

Uncertainty budget for the analysis of the GE decay heat experiments (relative uncertainties in %).

In addition to the uncertainties coming from modeling and technological sources, the uncertainty due to nuclear data covariances has been evaluated by the direct perturbation and quadratic summation method [18]. This uncertainty propagation method allows the computation of the sensitivity profiles that will be used later in the data assimilation calculations. The uncertainties come from the JEFF-3.1.1 library for decay data and fission yields, and from the COMAC database for cross-sections [19]. Regarding the covariance data of independent fission yields and decay data:

  • Only variances are available in the JEFF-3.1.1 library; therefore, only variances were propagated. Four sets of covariance matrices, all consistent with the JEFF-3.1.1 library, have since been produced for fission yields within the OECD/NEA Working Party on International Nuclear Data Evaluation Cooperation (WPEC) subgroup 37 [20–23]. These covariances lead to a significant reduction in uncertainty [24]. Recently, new evaluations of the 235U(nth,f) and 239Pu(nth,f) fission yields have been produced with covariances for the JEFF-4.0 library, confirming this strong reduction of uncertainty [25]; it would be very interesting to revisit this study with the JEFF-4.0 library upon its release.

  • Many decay data have no uncertainties (about 12% of the half-lives and about 50% of the mean beta and gamma energies are without uncertainties). Sensitivity studies performed on these data showed almost no impact on the results, mainly because of the small contribution of the decay data to the total uncertainty. This conclusion could change when full covariance sets for fission yields are taken into account.

The computation results, obtained with the DARWIN2.3 package, are shown in Table 4, along with the nuclear data uncertainties. The calculation-to-experiment discrepancies are within the uncertainties at 2 standard deviations, except for the Point Beach 2 C67 assembly. The Point Beach 2 measurements were all performed at 4.5 years after irradiation, and the results are consistent with each other. Moreover, their experimental uncertainty is slightly lower than the uncertainty coming from the propagation of the nuclear data covariances, which is not the case for the San Onofre 1 measurement. For these reasons, only the Point Beach 2 experiments are used in the data assimilation process.

Table 4.

DARWIN2.3 computation results and associated uncertainties. C/E-1 stands for Calculation/Experiment − 1. The nuclear data uncertainties (not included in the uncertainty budget of the C/E-1) have been computed through covariance propagation with the quadratic summation method.

3. The data assimilation method

3.1. Description of the method

The evaluation code used for the data assimilation is CONRAD (COde for Nuclear Reaction Analysis and Data assimilation), developed at CEA Cadarache [26]. CONRAD enables, inter alia, the assimilation of calculation-to-experiment discrepancies (C/E) to provide feedback on nuclear data (mean values and uncertainties). For a precise and detailed description of the method and its hypotheses, see [1, 27, 28], which cover the first data assimilation work on decay heat experiments.

The adjustment method implemented in the CONRAD code is based on Bayesian inference. The prior situation is the decay heat calculated with the nuclear data of the JEFF-3.1.1 library. The computation of integral experiments and its comparison with measurements provide information on the performance and accuracy of the calculation tools. In particular, if the numerical biases are negligible, the C/E discrepancies are assumed to be entirely due to the nuclear data, which are the input parameters of these calculation tools. The goal of the data assimilation is therefore to capitalize on the experimental information (C/E discrepancies and uncertainties, including the correlation matrix) to derive a new set of nuclear data which minimizes the discrepancies with the experimental values. The posterior situation can be seen as an update of the knowledge, given the experimental data.

Let us assume that $\vec{E}$ is a vector of experimentally measured data (decay heat), $\vec{x}$ are the uncertain input parameters (nuclear data), and $U$ represents the background of information from which the $\vec{x}$ values were assumed ($\vec{E}$ is independent of $U$). Then, according to Bayes’ theorem:

$$ \mathrm{posterior}\left[p(\vec{x}\,\vert\,\vec{E},U)\right] \propto \mathrm{prior}\left[p(\vec{x},U)\right]\times \mathrm{likelihood}\left[p(\vec{E}\,\vert\,\vec{x},U)\right]. $$ (1)

The likelihood distribution contains the additional experimental information that has to be taken into account to update the input data; it therefore depends on the calculation-to-experiment discrepancies. The posterior, prior, and likelihood distributions are assumed to be Gaussian (the distribution that maximizes the entropy when only the mean values and covariances are known) and can be written as (2)–(4), where $M_{x_{\rm prior}}$ is the prior nuclear data covariance matrix, $M_{\rm C/E}$ is the covariance matrix associated with the (C/E-1) discrepancies (experimental and modeling uncertainties), $\vec{C}(\vec{x})$ is the decay heat computed with the DARWIN2.3 package with the nuclear data $\vec{x}$ as input parameters, and $\vec{E}$ is the vector of experimental decay heat values.

$$ \mathrm{prior}\left[p(\vec{x}\,\vert\,U)\right] \propto \exp\left(-\frac{1}{2}\left(\vec{x}-\vec{x}_{\rm prior}\right)^{T}M_{x_{\rm prior}}^{-1}\left(\vec{x}-\vec{x}_{\rm prior}\right)\right) $$ (2)

$$ \mathrm{likelihood}\left[p(\vec{E}\,\vert\,\vec{x},U)\right] \propto \exp\left(-\frac{1}{2}\left(\vec{C}(\vec{x})-\vec{E}\right)^{T}M_{\rm C/E}^{-1}\left(\vec{C}(\vec{x})-\vec{E}\right)\right) $$ (3)

$$ \begin{aligned} \mathrm{posterior}\left[p(\vec{x}\,\vert\,\vec{E},U)\right] &\propto \exp\left(-\frac{1}{2}\left(\vec{x}-\vec{x}_{\rm posterior}\right)^{T}M_{x_{\rm posterior}}^{-1}\left(\vec{x}-\vec{x}_{\rm posterior}\right)\right)\\ &\propto \exp\left(-\frac{1}{2}\left[\left(\vec{C}(\vec{x})-\vec{E}\right)^{T}M_{\rm C/E}^{-1}\left(\vec{C}(\vec{x})-\vec{E}\right)+\left(\vec{x}-\vec{x}_{\rm prior}\right)^{T}M_{x_{\rm prior}}^{-1}\left(\vec{x}-\vec{x}_{\rm prior}\right)\right]\right). \end{aligned} $$ (4)

The analytical solution is given by the minimization of the $\chi^2$ cost function (5), under the assumption that the function $\vec{C}$ (decay heat) can be approximated by a linear function of the nuclear data $\vec{x}$ (6). The linear coefficients are thus the sensitivity profiles $S^{T}$, computed through direct perturbation of the input parameters in the calculation codes.

$$ \chi^{2} = \left(\vec{C}(\vec{x})-\vec{E}\right)^{T}M_{\rm C/E}^{-1}\left(\vec{C}(\vec{x})-\vec{E}\right)+\left(\vec{x}-\vec{x}_{\rm prior}\right)^{T}M_{x_{\rm prior}}^{-1}\left(\vec{x}-\vec{x}_{\rm prior}\right) $$ (5)

$$ \frac{\vec{C}(\vec{x})-\vec{C}(\vec{x}_{\rm prior})}{\vec{C}(\vec{x}_{\rm prior})} = S^{T}\left(\frac{\vec{x}-\vec{x}_{\rm prior}}{\vec{x}_{\rm prior}}\right). $$ (6)

Finally, the posterior set of nuclear data and the associated covariance matrix after data assimilation, and more specifically the “trends” on nuclear data $(\vec{x}_{\rm posterior}/\vec{x}_{\rm prior}-1)$ and the posterior uncertainties $\mathrm{diag}(M_{x_{\rm posterior}})$, are given by formulas (7) and (8).

$$ \frac{\vec{x}_{\rm posterior}-\vec{x}_{\rm prior}}{\vec{x}_{\rm prior}} = M_{x_{\rm prior}}S^{T}\left(M_{\rm C/E}+S\,M_{x_{\rm prior}}S^{T}\right)^{-1}\left(\frac{\vec{E}-\vec{C}(\vec{x}_{\rm prior})}{\vec{C}(\vec{x}_{\rm prior})}\right) $$ (7)

$$ M_{x_{\rm posterior}} = M_{x_{\rm prior}} - M_{x_{\rm prior}}S^{T}\left(M_{\rm C/E}+S\,M_{x_{\rm prior}}S^{T}\right)^{-1}S\,M_{x_{\rm prior}}. $$ (8)
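The linearized update of formulas (7) and (8) is a generalized least-squares step, which can be sketched numerically as follows. This is a minimal Python illustration with invented dimensions and values (3 nuclear data, 2 measurements; all quantities relative), not the actual CONRAD implementation:

```python
import numpy as np

# Invented prior covariance of 3 nuclear data (relative, 1-sigma of 4%, 3%, 5%)
M_x_prior = np.diag([0.04**2, 0.03**2, 0.05**2])

# Invented sensitivity profiles S[i, j] = d(C_i/C_i) / d(x_j/x_j), eq. (6)
S = np.array([[0.5, 0.30, 0.20],
              [0.4, 0.35, 0.25]])

# Invented C/E covariance (experimental + modeling) and relative residuals
M_ce = np.diag([0.02**2, 0.025**2])
residual = np.array([0.034, 0.030])          # (E - C(x_prior)) / C(x_prior)

# Equation (7): trends on the nuclear data, x_post/x_prior - 1
gain = M_x_prior @ S.T @ np.linalg.inv(M_ce + S @ M_x_prior @ S.T)
trend = gain @ residual

# Equation (8): posterior covariance (always a reduction of the prior variances)
M_x_post = M_x_prior - gain @ S @ M_x_prior

print("trends on nuclear data:", trend)
print("posterior 1-sigma     :", np.sqrt(np.diag(M_x_post)))
```

With positive residuals (calculation below experiment) and positive sensitivities, all trends come out positive, which is the situation described in Section 4.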

3.2. Evaluation of the numerical bias

Before applying this method to the Point Beach 2 experiments, it was checked that the numerical biases due to deterministic approximations are indeed sufficiently low to be negligible compared to the uncertainties induced by nuclear data covariances.

The evaluation of the numerical bias comes from preliminary work on the design of a calculation scheme for fuel cycle applications with the Monte Carlo code TRIPOLI-4® coupled with the MENDEL depletion solver [29]. This bias evaluation is not as rigorous as in static calculations, where Monte Carlo codes are considered to give numerically unbiased results: here, the coupling with the MENDEL solver introduces possible sources of discrepancy. Nevertheless, the inter-comparison of the two different approaches gives a plausible estimate of the potential numerical biases.

The TRIPOLI-4®/MENDEL simulation has been performed on a PWR UOX fuel assembly in a 2D geometry with reflective boundary conditions. The Doppler Broadening Rejection Correction (DBRC) is activated for 238U, 240Pu, 242Pu, 1H, and 16O up to 360 eV. The depletion in burnup uses the MEAN temporal scheme: the nuclide concentrations at time step t + 1 are the mean of the nuclide concentrations calculated by MENDEL with the TRIPOLI-4® neutron flux at time steps t and t + 1, given the nuclide concentrations at time t (see formula (9)).

$$ \begin{aligned} &N(t,\Phi(t)) \xrightarrow{\mathrm{MENDEL}} N^{*}(t+1) \xrightarrow{\mathrm{TRIPOLI4}} \Phi(t+1)\\ &N(t,\Phi(t+1)) \xrightarrow{\mathrm{MENDEL}} N^{\prime}(t+1) \xrightarrow{\mathrm{TRIPOLI4}} \Phi^{\prime}(t+1)\\ &N(t+1) = \frac{N^{*}(t+1)+N^{\prime}(t+1)}{2}\cdot \end{aligned} $$ (9)

The burnup steps are 10, 30, 50, 75, 100, 200, 300, 400, 500, 750, 1000 MWd/t and every 1000 MWd/t. The depletion chain for the Boltzmann calculation includes 306 isotopes; the depletion chain for the Bateman calculation includes 2631 isotopes.
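The MEAN predictor-corrector scheme of formula (9) can be illustrated on a toy problem. The sketch below replaces the full MENDEL Bateman solve and TRIPOLI-4® flux solution with a single-nuclide depletion dN/dt = −σφN and an invented composition-dependent flux; all numbers are for illustration only:

```python
import math

SIGMA = 2.0e-24  # assumed one-group cross-section (cm^2), invented

def flux(n):
    # Stand-in for a TRIPOLI-4 flux solution that depends on the composition
    return 3.0e14 / (1.0 + 1.0e-22 * n)

def deplete(n, phi, dt):
    # Stand-in for the MENDEL Bateman solve at constant flux over dt seconds
    return n * math.exp(-SIGMA * phi * dt)

def mean_step(n, dt):
    phi_t = flux(n)
    n_star = deplete(n, phi_t, dt)     # predictor: flux evaluated at step t
    phi_t1 = flux(n_star)              # flux recomputed with predicted inventory
    n_prime = deplete(n, phi_t1, dt)   # corrector: flux evaluated at step t+1
    return 0.5 * (n_star + n_prime)    # MEAN: average of the two estimates

N = 1.0e22  # initial nuclide concentration (at/cm^3), invented
for _ in range(5):
    N = mean_step(N, dt=86400.0)       # five one-day burnup steps
print(N)
```

Averaging the predictor and corrector estimates damps the error made by holding the flux constant over each burnup step, which is the point of the MEAN scheme.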

Fig. 3.

Estimation of the numerical bias on decay heat calculations by comparison with a depleted Monte Carlo calculation on a UOX fuel assembly.

In order to assess the statistical uncertainty, the method of independent replicas has been adopted: the same calculation is repeated n times, each time with a different random seed. Therefore, 64 independent replicas of 505 batches (5 discarded) of 1000 neutrons have been calculated; the results are displayed in Figure 3 for a UOX fuel at a discharge burnup of 40 GWd/t. From this figure, one can derive a numerical bias for the DARWIN2.3 calculation of the decay heat of the Point Beach 2 assemblies at 4.5 years of cooling: −0.25 ± 0.02% (1σ).
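The independent-replica estimate can be sketched as follows. The 64 Monte Carlo results are replaced here by invented random draws centered on the quoted bias; the statistical uncertainty of the mean is then the dispersion of the replicas divided by √n:

```python
import numpy as np

rng = np.random.default_rng(0)
n_replicas = 64

# Stand-in for 64 replica results (relative decay heat bias in %), invented:
# in the real study each value comes from a TRIPOLI-4/MENDEL run with its own seed
replicas = rng.normal(loc=-0.25, scale=0.16, size=n_replicas)

mean = replicas.mean()
# Standard error of the mean, with the unbiased (n-1) sample variance
sem = replicas.std(ddof=1) / np.sqrt(n_replicas)
print(f"numerical bias = {mean:.2f} +/- {sem:.2f} % (1 sigma)")
```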

Table 5.

Covariance matrix M_C/E = D + S·S^T (in %², on the left) and correlation matrix (on the right) for the Point Beach 2 data with the AGS method.

3.3. Experimental correlation matrices

The M_C/E matrix contains the uncertainties and correlations associated with the C/E-1 (relative calculation-to-experiment discrepancies). The correlations may have two distinct origins: experimental (linked to the “E”) or technological (linked to the “C”). Two experiments may be correlated because they are performed in the same setup, because they share materials, or because the instrumentation is calibrated with the same source or procedure, etc. Technological correlations may come from the fact that several assemblies were irradiated in the same PWR during the same campaigns, that the fuel pellets may come from the same manufacturing batch, and that they share the same uncertainties on the burnup estimation, local temperatures, etc. In the particular case of the Point Beach 2 assemblies, all assemblies share the same initial composition and were irradiated during the same campaigns (the average burnup is 36.9 GWd/t with a dispersion of 7.8%). The decay heat measurements were performed within a 6-day period.

To be consistent with the earlier study performed with the DARWIN2.3 package and the assimilation of post-irradiation experiments, the AGS formalism was used [30]. Note that the definition of experimental correlations may be delicate, depends strongly on the accessible data describing the experiments, and is a research topic in its own right [31–33]. In the AGS method, systematic and random uncertainties are taken into account according to formula (10).

$$ M_{\rm C/E} = D + S\,S^{T}. $$ (10)

D is the uncorrelated part of the matrix: a diagonal matrix of dimension (n_exp × n_exp) of the random uncertainties associated with each measurement, whereas S·S^T is the correlated part. S is a matrix of size (n_exp × n_s), with n_s the number of systematic sources of uncertainty, obtained by direct perturbation of the parameters. Here n_s = 6 (fuel temperature, moderator temperature, boron concentration, initial 235U enrichment, burnup estimation, and the systematic part of the measurement uncertainty). The resulting covariance matrix is reported in Table 5. The decay heat measurements are strongly correlated (∼0.6), mainly through the burnup. These results are consistent with what was demonstrated by Shama et al. [34].
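The AGS construction M_C/E = D + S·S^T of formula (10) can be sketched numerically. This is a small Python illustration with invented numbers (3 measurements, 2 systematic sources instead of the 6 used in the paper); it shows how shared systematic effects generate strong off-diagonal correlations:

```python
import numpy as np

# Uncorrelated (random) part: 1-sigma random uncertainty of each measurement, %
random_unc = np.array([0.9, 1.1, 1.0])
D = np.diag(random_unc**2)

# S[i, j] = 1-sigma effect of systematic source j on measurement i (%), invented
# (e.g. column 0 = burnup estimation, column 1 = systematic calorimeter part)
S = np.array([[1.5, 1.2],
              [1.4, 1.2],
              [1.6, 1.2]])

# Equation (10): covariance in %^2, then the derived correlation matrix
M = D + S @ S.T
std = np.sqrt(np.diag(M))
corr = M / np.outer(std, std)
print(np.round(corr, 2))
```

Because every measurement feels the same systematic sources with similar sensitivities, the off-diagonal correlations are large, as observed in Table 5 for the Point Beach 2 data.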

Table 6.

Posterior biases and uncertainties on nuclear data after data assimilation of Point Beach 2 calculation-to-experiment discrepancies. IFYU5/PU9TH stands for Independent Fission Yield of U5 or PU9 after a thermal fission, A_BRANCHINGB stands for branching ratio of A decaying toward B, and CAPTURE stands for (n,γ) cross-section. Column 2 shows the percentage of the decay heat variance per nuclear data due to nuclear data covariance propagation. This list of 27 data is responsible for more than 90% of the total variance.

Table 7.

Expected C/E-1 discrepancies on GE experiments with the posterior biases and relative uncertainties (%) due to the data assimilation.

Table 8.

Trends on the four cumulative fission yields of interest after the data assimilation of the Point Beach 2 calculation-to-experiment discrepancies (values in parentheses and italics are relative uncertainties on the cumulative fission yields).

4. Data assimilation results and discussion

The computed decay heat of the Point Beach 2 assemblies at 4.5 years after irradiation is particularly sensitive to 27 nuclear data, mainly independent thermal fission yields of 235U and 239Pu. Those 27 nuclear data, selected because they are responsible for more than 90% of the total decay heat variance obtained by nuclear data uncertainty propagation, are adjusted with the CONRAD code; the results are shown in Table 6. The nuclear data in Table 6 are responsible for the build-up and disappearance of the main decay heat contributors at this cooling time. For example, 144Ce is mainly produced by the β decay of its precursors 144Ba and 144La (80% of its concentration), whereas only 2% of the 144Ce is directly produced by fission; that is why the two main nuclear data appearing in Table 6 are Ba144_IFYPU9TH and La144_IFYPU9TH.

The decay heat calculations are smaller than the measurements, which results in a global trend of increasing the main nuclear data involved at this cooling time. Increases of 1 to 7% are suggested, along with a very small uncertainty reduction. This is understandable, because the propagation of the prior nuclear data covariance matrix leads to uncertainties very close to the ones associated with the C/E-1.

Let us now change the 27 nuclear data according to the trends reported in Table 6 and recompute the decay heat of the Point Beach 2 and San Onofre 1 assemblies (see Tab. 7). The mean C/E-1 of the Point Beach 2 and San Onofre 1 assemblies is now −0.6% (compared to −3.4% before), with a dispersion of the values of 2.1%.

Finally, we can deduce the trends on the cumulative fission yields from the trends on the independent fission yields, given the decay paths of the nuclides of the mass chains A = 106, 133, 137, and 144. Table 8 compares the results to the JEFF-3.1.1 and JEFF-3.3 values and to the work of Rizzo et al. [5], which consisted of the data assimilation of the discrepancies on spent fuel assay data (chemical analysis of radionuclides).
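The propagation of independent-yield trends to a cumulative yield can be sketched on a simplified chain. For a purely linear β⁻ chain, the cumulative yield of the last nuclide is the sum of the independent yields of its precursors, so the posterior cumulative yield follows from weighting each independent yield by its trend. The chain, yields, and trends below are invented for illustration; the real calculation uses the full A = 106, 133, 137, 144 decay paths with branching ratios:

```python
# Invented independent thermal fission yields along a toy A = 144 chain
ify_prior = {"Ba144": 4.0e-2, "La144": 1.0e-2, "Ce144": 1.0e-3}

# Invented posterior/prior - 1 trends from the data assimilation
trend = {"Ba144": 0.03, "La144": 0.02, "Ce144": 0.0}

# Linear beta chain: CFY(last nuclide) = sum of the precursors' IFYs
cfy_prior = sum(ify_prior.values())
cfy_post = sum(y * (1.0 + trend[n]) for n, y in ify_prior.items())

trend_pct = 100.0 * (cfy_post / cfy_prior - 1.0)
print(f"trend on the cumulative yield: {trend_pct:+.2f} %")
```

The cumulative trend is the IFY-weighted average of the individual trends, so it is dominated by the largest precursor yields (here the stand-in for 144Ba).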

The 235U(nth,f)133Cs cumulative yield was slightly increased, by 0.9%, from JEFF-3.1.1 to JEFF-3.3. The assimilation procedure using decay heat measurements performed here confirms the need to increase this yield, but more moderately than what was proposed in [5]. The four values (JEFF-3.1.1, JEFF-3.3, Rizzo, and this work) are compatible within 2 standard deviations. The same conclusions can be drawn for the 239Pu(nth,f)106Ru cumulative yield.

Concerning the 239Pu(nth,f)144Ce cumulative fission yield, this work confirms the trend of a slight increase initiated in the JEFF-3.3 library, in disagreement with the work of Rizzo et al. [5]. Indeed, that previous data assimilation was based on chemical analyses of radionuclides and, for 144Ce in particular, only a few measurements were available, mainly from the GRAVELINES experiment. It can be seen in [13] that this isotope was strongly overestimated by the calculation. However, this measurement is also associated with a large experimental uncertainty: around 40 GWd/t, the C/E-1 discrepancy on the 144Ce inventory is +12.4% ± 8.5% (at one standard deviation). The resulting trend of this work is compatible with JEFF-3.1.1 and JEFF-3.3 at one standard deviation.

The most important result concerns the cumulative yield of 235U(nth,f)137Cs. This work confirms the trend highlighted by the previous study [5], in contradiction with what was proposed for JEFF-3.3 (and ENDF/B-VIII.0 and JENDL-4.0). Given the importance of this nuclide as a burnup indicator and a decay heat contributor, this trend should be confirmed with a new gamma spectrometry measurement of this yield and/or with the data assimilation of the CLAB experiments. Indeed, the contribution of the 137Cs/137mBa pair is maximal around 15 years of cooling, and the CLAB experiments cover that range of cooling times. Moreover, their discrepancies and experimental uncertainties are lower than those of the GE experiments.

5. Conclusion and perspectives

Integral decay heat measurements are a very interesting source of information for the nuclear data evaluators’ community, and for fission yield evaluators in particular. Indeed, post-irradiation experiments based on the dissolution and analysis of the composition of spent fuels depleted in power reactors are widely used to validate some particular nuclides, but they do not give information on all the nuclides of importance for the back-end of the cycle. Decay heat experiments fill that gap, but the drawbacks are that only a few sets of measurements are available in the literature, not covering all the cooling times of interest, and that the discrepancies and accuracies of the measurements vary from one set of data to another.

A first attempt to exploit decay heat measurements in order to give feedback on nuclear data was made on the MERCI-1 experiment, which enabled the extension of the capabilities of the CONRAD code to take into account decay data and fission yield data. In the meantime, work was initiated to perform the data assimilation of the post-irradiation experiments of the DARWIN2.3 experimental database, which enabled us to derive trends on nuclear data, especially on four cumulative fission yields of interest for the back-end of the cycle.

This paper is dedicated to the data assimilation of the calorimetric measurements of the decay heat of whole PWR assemblies in the GE-Morris facility. The results rely on the quality of the experiment and on our capability to assess the experimental and modeling uncertainties and appropriate correlations. Given that, the trends on 235U(nth,f)133Cs and 239Pu(nth,f)106Ru were confirmed. It was difficult to conclude on 239Pu(nth,f)144Ce because of the large discrepancies and uncertainties of the PIE used in [5]. Finally, the increase of 235U(nth,f)137Cs was confirmed, despite the decrease proposed in JEFF-3.3; a confirmation of the obtained trend would nevertheless be desirable, either through a new measurement of this yield or through the data assimilation of the CLAB experiments.

Funding

This work has been funded by the CEA, ORANO Cycle and EDF.

Conflicts of interest

The authors declare that they have no competing interests to report.

Data availability statement

This article has no associated data generated and/or analyzed/Data associated with this article cannot be disclosed due to legal/ethical/other reasons.

Author contribution statement

V. Vallet has performed the work described in this paper. M. Tiphine did the modeling of the GE experiments and was involved in the analysis of the results. A. Rizzo helped with the use of the CONRAD code and in the implementation of this method for decay heat experiments. T. Nicol did the depleted Monte Carlo calculation to quantify the numerical bias.

References

  1. J. Huyghe et al., Integral data assimilation of the MERCI-1 experiment for the nuclear data associated with the PWR decay heat computation, EPJ Web Conf. 211, 07004 (2019)
  2. G. Henning et al., Need for precise nuclear structure data for reactor studies, EPJ Nucl. Sci. Technol. 10, 6 (2024)
  3. H.R. Doran, A.J. Cresswell, D.C.W. Sanderson, G. Falcone, Nuclear data evaluation for decay heat analysis of spent nuclear fuel over 1–100 k year timescale, Eur. Phys. J. Plus 137, 665 (2022)
  4. D. Siefman et al., Data assimilation of post-irradiation examination data for fission yields from GEF, EPJ Nucl. Sci. Technol. 6, 52 (2020)
  5. A. Rizzo et al., Feedback from experimental isotopic compositions of used nuclear fuels on neutron cross sections and cumulative fission yields of the JEFF-3.1.1 library by using integral data assimilation, EPJ Nucl. Sci. Technol. 5, 24 (2019)
  6. A.L. Nichols et al., Improving fission-product decay data for reactor applications: part I – decay heat, Eur. Phys. J. A 59, 78 (2023)
  7. V. Vallet et al., Nuclear data uncertainty quantification for the decay heat of PWR MOX fuels using data assimilation of elementary fission bursts, EPJ Web Conf. 247, 10002 (2021)
  8. Y. Kawamoto, G. Chiba, Feasibility study of decay heat uncertainty reduction using nuclear data adjustment method with experimental data, J. Nucl. Sci. Technol. 54, 213 (2016)
  9. J.C. Jaboulay, S. Bourganel, Analysis of MERCI decay heat measurement for PWR UO2 fuel rod, Nucl. Technol. 177, 73 (2012)
  10. A. Santamarina et al., The JEFF-3.1.1 Nuclear Data Library, JEFF Report 22 (2009)
  11. F. Schmittroth et al., ORIGEN2 Calculations of PWR Spent Fuel Decay Heat Compared with Calorimeter Data (Hanford Engineering Development Laboratory, Richland, WA, 1984)
  12. I. Gauld et al., Validation of SCALE 5 decay heat predictions for LWR spent nuclear fuel, NUREG/CR-6972, ORNL/TM-2008-015, US NRC (2008)
  13. L. San Felice et al., Experimental validation of the DARWIN2.3 package for fuel cycle applications, Nucl. Technol. 184, 217 (2013)
  14. A. Tsilanizara et al., DARWIN: an evolution code system for a large range of applications, J. Nucl. Sci. Technol. 37, 845 (2000)
  15. A. Santamarina et al., APOLLO2.8: A Validated Code Package for PWR Calculations (Advances in Nuclear Fuel Management IV, Hilton Head Island, South Carolina, USA, 2009)
  16. C. Struzik, High burnup modelling of UO2 and MOX fuel with METEOR/TRANSURANUS 1.5 C, in ANS Light Water Fuel Performance Meeting, Portland, USA (1997)
  17. OECD/NEA, Evaluation Guide for the Evaluated Spent Nuclear Fuel Assay Database (SFCOMPO), NEA/NSC/R(2015)8 (2016)
  18. A. Tsilanizara, T. Huynh, New feature of DARWIN/PEPIN2 inventory code: Propagation of nuclear data uncertainties to decay heat and nuclide density, Ann. Nucl. Energy 164, 108579 (2021)
  19. P. Archier et al., COMAC. Nuclear Data Covariance Matrices Library for Reactor Applications (PHYSOR, Kyoto, Japan, 2014)
  20. N. Terranova et al., Covariance matrix evaluations for independent mass fission yields, Nucl. Data Sheets 123, 225 (2015)
  21. K.H. Schmidt, B. Jurado, C. Amouroux, C. Schmitt, General description of fission observables: GEF model code, Nucl. Data Sheets 131, 107 (2016)
  22. D. Rochman et al., A Bayesian Monte Carlo method for fission yield covariance information, Ann. Nucl. Energy 95, 125 (2016)
  23. L. Fiorito et al., Generation of fission yield covariances to correct discrepancies in the nuclear data libraries, Ann. Nucl. Energy 88, 12 (2016)
  24. M. Tiphine, V. Vallet, Impact of fission yield covariance matrices on decay heat uncertainty quantification with the DARWIN2 package, in PHYSOR 2022, Pittsburgh, United States, American Nuclear Society (2022)
  25. G. Kessedjian et al., Covariance analysis of 235U(nth,f) independent and cumulative fission yields: Propositions for JEFF4, EPJ Web Conf. 281, 00022 (2023)
  26. P. Archier et al., CONRAD Evaluation Code: Development Status and Perspectives (Nuclear Data for Science and Technology, Nice, France, 2007)
  27. NEA/WPEC-33, Assessment of Existing Nuclear Data Adjustment Methodologies, NSC/WPEC/DOC(2010)429 (2010)
  28. G. Palmiotti, M. Salvatores, G. Aliberti, A-priori and A-posteriori covariance data in nuclear cross section adjustments: Issues and challenges, Nucl. Data Sheets 123, 41 (2015)
  29. E. Brun et al., TRIPOLI-4®, CEA, EDF and AREVA Monte Carlo code, Ann. Nucl. Energy 82, 151 (2015)
  30. C. Bastian et al., AGS, A Computer Code for Uncertainty Propagation in Time-of-flight Cross-section Data (PHYSOR, Vancouver, Canada, 2006)
  31. T. Nicol, C. Carmouze, Impact of experimental correlation on transposition method carried out with critical integral experiments (ICNC, Paris, France, 2019)
  32. A. Fowler, S. Dance, J. Waller, On the interaction of observation and prior error correlations in data assimilation, Q. J. R. Meteorol. Soc. 144, 48 (2018)
  33. L. Stewart et al., Data assimilation with correlated observation errors: experiments with a 1-D shallow water model, Tellus A: Dyn. Meteorol. Oceanogr. 65, 19546 (2013)
  34. A. Shama et al., Uncertainty analyses of spent nuclear fuel decay heat calculations using SCALE modules, Nucl. Eng. Technol. 53, 2816 (2021)

Cite this article as: Vanessa Vallet, Axel Rizzo, Marion Tiphine, Tangi Nicol. Data assimilation of decay heat experiments for a feedback on nuclear data, EPJ Nuclear Sci. Technol. 10, 11 (2024)

All Tables

Table 1.

Prior values of the cumulative yields in the JEFF-3.1.1 library; the values in bold are relative to JEFF-3.1.1, and the values in italics are relative uncertainties.

Table 2.

Main features of GE assemblies.

Table 3.

Uncertainty budget for the analysis of the GE decay heat experiments (relative uncertainties in %).

Table 4.

DARWIN2.3 computation results and associated uncertainties. C/E-1 stands for Calculation/Experiment - 1. The nuclear data uncertainties (not included in the uncertainty budget of the C/E-1) have been computed by covariance propagation with the quadratic summation method.

Table 5.

Covariance matrix M_C/E = D + SS^T (in %², on the left) and correlation matrix (on the right) for the Point Beach 2 data with the AGS method.

Table 6.

Posterior biases and uncertainties on nuclear data after data assimilation of the Point Beach 2 calculation-to-experiment discrepancies. IFYU5/PU9TH stands for Independent Fission Yield of U5 or PU9 after a thermal fission, A_BRANCHINGB stands for the branching ratio of A decaying toward B, and CAPTURE stands for the (n,γ) cross-section. Column 2 shows the percentage of the decay heat variance attributed to each nuclear datum by covariance propagation. These 27 data are responsible for more than 90% of the total variance.

Table 7.

Expected C/E-1 discrepancies on GE experiments with the posterior biases and relative uncertainties (%) due to the data assimilation.

Table 8.

Trends on the four cumulative fission yields of interest after the data assimilation of the Point Beach 2 calculation-to-experiment discrepancies (values in parentheses and italics are relative uncertainties on the cumulative fission yields).

All Figures

Fig. 1.

Relative decay heat contributions – Point Beach 2 calculations.

Fig. 2.

Flow chart of the DARWIN2.3 package for PWR calculations [5].

Fig. 3.

Estimation of the numerical bias on decay heat calculations by comparison with a depleted Monte Carlo calculation on a UOX fuel assembly.

