How to obtain an enhanced extended uncertainty associated with decay heat calculations of industrial PWRs using the DARWIN2.3 package

Abstract. Decay heat is a crucial issue for in-core safety after reactor shutdown and for the back-end cycle. An accurate computation of its value is carried out at the CEA within the framework of the DARWIN2.3 package. The DARWIN2.3 package benefits from a Verification, Validation and Uncertainty Quantification (VVUQ) process, which ensures that the parameters of interest computed with the package have been validated against measurements and that biases and uncertainties have been quantified for a particular domain. For the parameter "decay heat", few integral experiments are available to ensure the experimental validation over the whole range of parameters needed to cover the French reactor infrastructure (fissile content, burnup, fuel, cooling time). The experimental validation currently covers PWR UOX fuels for cooling times only between 45 minutes and 42 days, and between 13 and 23 years. Therefore, the uncertainty quantification step is of paramount importance to increase the reliability and accuracy of decay heat calculations. This paper focuses on the strategy that could be used to resolve this issue through the completion and exploitation of the DARWIN2.3 experimental validation.


Introduction
Nuclear decay heat is released after reactor shutdown by the radioactive decay of unstable isotopes in both the fuel and the structural materials. The delayed fissions caused by delayed neutrons also contribute significantly to the decay heat up to 100 seconds after reactor shutdown. Decay heat reaches about 7% of the nominal power one second after reactor shutdown [1] and is still about 1.5% of the nominal power one hour later, i.e. 40 MW for a 900 MWe Pressurized Water Reactor (PWR).
Heat removal is one of the three key reactor safety functions, the other two being radioactivity containment and nuclear chain reaction control. Decay heat is thus an important parameter for the safety demonstration of reactor operation under normal or accidental conditions and for the back-end nuclear cycle. Indeed, decay heat is a dimensioning parameter for the normal and emergency cooling systems of the nuclear core after shutdown (up to 8 days) for a reactor in operation. It also imposes delays before the different stages of fuel unloading, storage and transportation (from 5 days to 10 years) until the reprocessing or vitrification processes and final storage (ranging from 4 years to more than 300 000 years). Therefore, accurate control of the decay heat calculation is essential for all the PWRs in the French reactor infrastructure (UOX and MOX fuels with 235U enrichments ranging from 1.0 to 5.0 wt.% and average plutonium contents ranging from 4.0 to 11.0 wt.%) over a wide range of cooling times (starting immediately after reactor shutdown and extending to more than 300 000 years).
The parameters required for fuel cycle applications (decay heat, but also fuel inventory, activity, neutron, gamma, alpha and beta sources and spectra, and radiotoxicity) are provided by the DARWIN2.3 [2] calculation package. This package is developed by the CEA with the support of its French partners (EDF, Orano and Framatome); it is the French reference for fuel cycle studies. The package has been extensively validated on a large number of experimental programs based on spent fuel chemical analyses carried out in France since 1993. It has also been validated for decay heat calculations on a more limited number of experimental programs based on elementary fission burst experiments and two integral calorimetric experiments, MERCI-1 and CLAB.
The objective of this paper is to present the methodological orientations for accurately determining the uncertainty associated with the DARWIN2.3 decay heat calculation. This uncertainty can be determined either by nuclear data covariance matrix propagation or by exploiting the experimental validation through the transposition of the Calculation/Experiment (C/E) discrepancies; these two points are explained in the following chapters, after a brief description of the DARWIN2.3 package for PWR fuel decay heat calculations. In this framework, the limitations on the exploitation of the DARWIN2.3 integral experimental validation are illustrated. Perspectives for the determination of an enhanced extended uncertainty associated with the decay heat calculations are discussed at the end of the paper. The DARWIN2.3 package is the French reference for fuel cycle studies [2]. It is used as a reference for the validation of the industrial tool CESAR [3] for nuclear fuel and waste characterization at the Orano La Hague reprocessing plant. It is dedicated to all fuel cycle studies for PWRs, Boiling Water Reactors and Material Testing Reactors (MTR), but also for sodium-cooled fast reactors. DARWIN2.3 estimates the physical quantities that characterize reactor spent fuels. In this chapter, the description of the DARWIN2.3 package focuses on the PWR application, with UOX and MOX fuels.
The PWR DARWIN2.3 calculation route is based on the chaining of the APOLLO2 [4] 2D transport code and the DARWIN/PEPIN2 [5] depletion solver in two successive steps (Fig. 1). The APOLLO2 calculation step solves the Boltzmann and Bateman equations with a simplified depletion chain containing 162 isotopes, corresponding to actinides, structural materials and 126 fission products. The APOLLO2 calculation route uses the European JEFF-3.1.1 nuclear data library [6], processed in the 281-group Santamarina-Hfaiedh Energy Mesh (SHEM) [7]. The neutron flux is calculated in a 2D assembly geometry using a Pij multi-cell model [8]: the UP1 interface current method, based on a linearly anisotropic interface flux. The fuel pellets are split into four rings in order to give an accurate representation of 238U absorption as well as of the fission product concentration profile. Space-dependent self-shielding, above 23 eV, is repeated at recommended burnup steps [9]. In the case of a MOX assembly calculation, the UOX environment is taken into account. At the end of the APOLLO2 calculation, a library called SAPHYB is generated; it contains, both in the 281-group energy structure, the spatially homogenized fuel microscopic self-shielded cross sections and the 2D scalar neutron flux, tabulated versus burnup. The APOLLO2 2D transport code is the reference code for neutron transport calculations in the DARWIN2.3 PWR calculation route.
The PWR DARWIN/PEPIN2 calculation step solves the Bateman equations, under irradiation or under cooling (i.e. with no neutron flux), for homogeneous mixtures whose compositions are imposed by the user or read from the SAPHYB file, with a depletion chain containing up to 3800 isotopes. Since PEPIN2 uses homogeneous mixtures, no spatial information is passed to PEPIN2, and the choice of the 2D APOLLO2 transport code instead of a 3D one (e.g. APOLLO3) does not really matter at this point. The equation solved under neutron flux (with the 4th-order Runge-Kutta method) uses the self-shielded cross sections and the 281-energy-group neutron flux transmitted by the APOLLO2 code through the SAPHYB. The irradiation history in DARWIN/PEPIN2 can be detailed with shutdown periods, boron concentration, and moderator and fuel temperature tracking. This makes it possible to calculate the depleted fuel inventory precisely. The convolution of the fuel inventory with nuclear data leads to physical quantities such as decay heat, activity and radiotoxicity. The decay heat formula used in the DARWIN/PEPIN2 module (neglecting here the contribution of the fissions induced by delayed neutrons) is recalled in (1):

P(t) = Σ_i (ln 2 / T_1/2,i) · Q_i · N_i(t)    (1)

where:
- Q_i is the average decay energy released by a decay of the nuclide i;
- T_1/2,i is the half-life of the nuclide i;
- N_i is the isotopic concentration of the nuclide i.
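For illustration, formula (1) can be sketched in a few lines of Python. The nuclide names and numerical values below are purely hypothetical, and this is of course only a minimal sketch of the convolution, not the DARWIN/PEPIN2 implementation:

```python
import math

def decay_heat(inventory, half_lives, decay_energies):
    """Formula (1): P = sum_i (ln 2 / T_1/2,i) * Q_i * N_i.

    inventory      : dict nuclide -> concentration N_i (atoms)
    half_lives     : dict nuclide -> half-life T_1/2,i (s)
    decay_energies : dict nuclide -> mean decay energy Q_i (MeV)
    Returns the decay heat in MeV/s.
    """
    return sum(
        (math.log(2.0) / half_lives[n]) * decay_energies[n] * inventory[n]
        for n in inventory
    )

# Illustrative two-nuclide mixture (invented values, not evaluated data):
P = decay_heat(
    inventory={"A": 1.0e20, "B": 5.0e19},
    half_lives={"A": 3.0e7, "B": 1.0e6},
    decay_energies={"A": 0.5, "B": 1.2},
)
```

The short-lived nuclide "B" dominates the sum, illustrating why short-lived fission products drive the decay heat at short cooling times.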

VVUQ process applied to DARWIN2.3 for the bias and uncertainty control
The DARWIN2.3 simulations are used to predict physical fuel cycle parameters with a quantifiable confidence across the PWR application domain. The rigorous VVUQ process, conventionally used in many disciplines of science and engineering [10][11][12], is implemented for the DARWIN2.3 calculations to assess the biases and uncertainties associated with the determination of the physical parameters; it gives strength to the results for R&D and industrial applications. This process requires the following four steps [13]:
- Verification: it shows that the calculation scheme does not present programming errors and gives the expected numerical results;
- numerical Validation: in this step, the DARWIN2.3 results are compared to a "reference calculation" integrating more accurate models than the calculations to be validated and using the same nuclear data library. For DARWIN2.3, this numerical validation relies on several elements. It relies on comparisons with TRIPOLI4® [14] Monte Carlo calculations at step 0 (stationary conditions, before depletion), which validate the APOLLO2 multigroup flux calculation and reaction rates, such as the 235U fission and 238U radiative capture rates at step 0. The recent coupling of the TRIPOLI4® stochastic transport code with the MENDEL [15] deterministic depletion solver also allows depletion calculations to be performed. Even though this process is not considered as a reference procedure for depletion calculation, the benchmarking between TRIPOLI4® and DARWIN2.3 has been investigated to provide first elements of modeling bias quantification [16] for fuel cycle calculations (material balance); this work tends to show that the modeling biases are limited in comparison with the biases coming from nuclear data;
- experimental Validation: it consists in comparing the calculation results of the set "nuclear data library + calculation scheme + codes" to the values measured in experiments, for the decay heat and for the material balance of the main actinides and fission products involved in fuel cycle calculations and burnup credit criticality calculations;
- Uncertainty and bias Quantification: it consists in associating with each parameter calculated by the DARWIN2.3 package a controlled uncertainty over a range of applications. Generally speaking, there are two ways to achieve this goal: the first one relies on the analysis of the experimental validation results when there is enough experimental data to cover the DARWIN2.3 application domain; in this case, the transposition method is applied to transpose the Calculation-over-Experiment (C/E) discrepancies. If the experimental validation cannot be exploited, the second way relies on nuclear data covariance propagation studies.

The next chapters are dedicated to the presentation of these two ways of uncertainty quantification applied to the DARWIN2.3 package.
Nuclear data covariance matrix propagation for the uncertainty determination of DARWIN2.3 decay heat calculation

Description of the deterministic propagation method implemented in the DARWIN2.3 package
The Uncertainty Quantification is currently carried out rigorously by covariance propagation using a deterministic approach. The covariance is a symmetric bilinear form on a vector space of random variables. As a consequence, for a parameter of interest Y that can be written as a linear combination of random variables X_i, for example Y = Σ_i a_i X_i, its variance is given by formula (2):

var(Y) = Σ_i Σ_j a_i a_j cov(X_i, X_j)    (2)

When Y is not directly a linear combination of random variables, a Taylor series expansion of Y, truncated at first order, allows the application of formula (2). The matrix form of formula (2) is called the "sandwich rule" (see formula (3)), where the a_i are the derivatives of Y with respect to the X_i (also called sensitivity coefficients).
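Formula (2) can be sketched directly in Python; the sensitivity coefficients and covariance entries below are invented for illustration only:

```python
def sandwich_variance(S, D):
    """Formula (2)/(3) in scalar form: var(Y) = sum_ij S_i * D_ij * S_j,
    with S the sensitivity vector and D the covariance matrix."""
    n = len(S)
    return sum(S[i] * D[i][j] * S[j] for i in range(n) for j in range(n))

# Two correlated inputs: variances 0.04 and 0.09, covariance 0.01,
# sensitivities 1.0 and 2.0 (hypothetical numbers).
var_Y = sandwich_variance([1.0, 2.0], [[0.04, 0.01], [0.01, 0.09]])
```

The positive off-diagonal term increases the variance here; anti-correlated data (negative covariance) would reduce it, which is why complete covariance matrices matter.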
The DARWIN2.3 package has recently been enriched with this covariance propagation method [17]. The DARWIN/PEPIN2 module which manages the sensitivity profile calculations and the nuclear data covariance propagation is called DARWIN/IncerD [18]. Input data are the covariances taken from the European evaluation JEFF-3.1.1 for the decay data (decay periods, branching ratios and mean decay energies) and fission yields, and the CEA/Cadarache covariance matrix database COMAC-V2 [19] for cross sections. The prior calculation uncertainty ε_A resulting from the propagation of the nuclear data covariances is given by formula (3):

ε_A² = S_A^T · D · S_A    (3)

where:
- S_A is the sensitivity vector to the nuclear data;
- S_A^T is its transpose;
- D is the nuclear data covariance matrix.
The choice of a deterministic approach for the uncertainty propagation with DARWIN2.3 is based on the underlying possibility to have access to sensitivity and variance analyses, enabling the identification of the nuclear data of influence for the decay heat and the main contributors to the decay heat variance.
When the linearity of the decay heat with respect to the nuclear data is not straightforward, it must be checked to ensure the legitimacy of this method [17]; this is done by comparing the results obtained with this quadratic summation method to those produced with a different uncertainty propagation method that does not need a linearity hypothesis [18]. This is the case of the sampling approach implemented in the URANIE/MENDEL code system developed at the CEA. URANIE [20] is an uncertainty platform and MENDEL is the CEA's new-generation depletion code. MENDEL can use the same input data as the DARWIN/PEPIN2 code (same SAPHYB input files, nuclear data libraries and depletion chains). The sampling approach consists in selecting a distribution law for each random input (most often, a Gaussian law) and sampling them with a Latin Hypercube Sampling technique in order to obtain n realizations of each variable. The MENDEL code is then called n times with the n different sets of input data produced by the sampling step. Eventually, the distribution of the decay heat is built and its moments are extracted.
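The sampling approach can be illustrated with a toy stand-in for the depletion calculation; the model, nominal values and relative uncertainties below are hypothetical, and the real URANIE/MENDEL chain is of course far richer:

```python
import random
from statistics import NormalDist, mean, stdev

def latin_hypercube(n, dims, rng):
    """n points in [0,1)^dims: one sample per equiprobable stratum
    in each dimension, strata shuffled independently per dimension."""
    cols = []
    for _ in range(dims):
        col = [(k + rng.random()) / n for k in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))

def propagate(model, nominals, rel_sigmas, n=500, seed=1):
    """Sample Gaussian inputs by LHS, call the model n times, and
    return the sample mean and standard deviation of its output."""
    rng = random.Random(seed)
    outs = []
    for point in latin_hypercube(n, len(nominals), rng):
        x = [NormalDist(mu, mu * s).inv_cdf(u)
             for u, mu, s in zip(point, nominals, rel_sigmas)]
        outs.append(model(x))
    return mean(outs), stdev(outs)

# Toy stand-in for the depletion code: decay heat linear in two nuclear
# data inputs, with 5% and 3% relative (1 sigma) uncertainties.
m, s = propagate(lambda x: 0.7 * x[0] + 0.3 * x[1],
                 nominals=[100.0, 100.0], rel_sigmas=[0.05, 0.03])
```

For this linear toy model the sampled standard deviation reproduces the quadratic summation result, which is exactly the kind of cross-check used to verify the linearity hypothesis.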
The linearity hypothesis has thus been verified by comparison with URANIE/MENDEL [21], which allows the sensitivity profiles to be calculated by direct perturbation and the nuclear data covariances to be propagated through the DARWIN/PEPIN2 calculation. For the calculation of the sensitivity profiles of the decay heat to the cross sections with DARWIN/IncerD, there is no exact Boltzmann/Bateman coupling. The flux and reaction rates are computed in a previous calculation with the neutron transport code APOLLO2 and then stored in a file which is an input for DARWIN/PEPIN2. When calculating the decay heat value resulting from the perturbation of a cross section with DARWIN/IncerD, it is the nominal value stored in the SAPHYB file that is used, instead of recalculating the neutron spectrum with the APOLLO2 code. However, studies have shown that the impact of this missing coupling on the uncertainty propagation calculation for the decay heat can be neglected for cooling times between 0.1 second and 300 years [22].
An illustration of the decay heat uncertainty estimated by the deterministic approach for standard PWR fuels is given in Figure 2; the considered fuels are a UOX fuel with 3.7 wt.% enriched uranium and a MOX fuel with a mean plutonium content of 9.5 wt.%, both at a 50 GWd/t discharge burnup. The uncertainty is below 3.5% (1σ) for the UOX fuel and below 4.5% (1σ) for the MOX fuel, regardless of the cooling time.
The uncertainty estimated with the stochastic approach is also presented in Figure 2. Good agreement between the deterministic and the stochastic methods is observed (the maximum discrepancy is ∼0.3%).
Once the sensitivity profiles are known, the main contributors to the uncertainty can be identified at a given time, as illustrated in Figure 3. In the MOX fuel case, it appears that cross section uncertainties are responsible for about 20% of the total decay heat uncertainty at 1.0 second and for more than 90% of the total uncertainty after 10^8 seconds. The independent fission yields are the main contributors to the uncertainty up to 10^8 seconds.
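Such a contribution analysis can be sketched by splitting the sandwich-rule variance among nuclear-data families; the grouping and the numerical values below are hypothetical, and attributing each cross term half-and-half to the two families involved is one possible convention, not necessarily the DARWIN/IncerD one:

```python
def variance_contributions(S, D, families):
    """Split var(Y) = S^T D S into per-family contributions, attributing
    each cross term half to each of the two families involved."""
    contrib = {f: 0.0 for f in families}
    n = len(S)
    for i in range(n):
        for j in range(n):
            term = S[i] * D[i][j] * S[j]
            contrib[families[i]] += 0.5 * term
            contrib[families[j]] += 0.5 * term
    return contrib

# Three inputs: two fission yields and one cross section (invented values).
S = [1.0, 2.0, 1.0]
D = [[0.04, 0.01, 0.0],
     [0.01, 0.09, 0.0],
     [0.0,  0.0,  0.01]]
shares = variance_contributions(
    S, D, ["fission yields", "fission yields", "cross sections"])
```

By construction the family contributions sum back to the total variance, so the breakdown can always be cross-checked against the sandwich rule.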

Restrictions due to nuclear data covariance matrix completeness and accuracy
The nuclear data covariance matrices play a major role in this uncertainty propagation; uncertainty propagation results highly depend on the quality, accuracy and availability of the covariance matrices. Covariance matrices are sometimes a subject for debate for experts in nuclear data. It is often hard to know precisely how a covariance matrix was produced and how to measure its reliability.
It must be emphasized that more than 40 000 nuclear data entries (cross sections, branching ratios, decay energies, half-lives, fission yields) are involved in decay heat calculations. In the JEFF-3.1.1 evaluation, more than 7000 nuclear data entries have no uncertainties, which is about 16% of the nuclear data in JEFF-3.1.1. These data mainly consist of branching ratios and mean energies (alpha, beta and gamma). Nevertheless, uncertainty values must be assigned to the parameters lacking them, values that are conservative yet realistic. Other nuclear data libraries are examined and expert advice is used for this task.
There are few covariance matrices for cross sections in the JEFF-3.1.1 library. These data are taken from the COMAC-V2 database, which benefits from constant improvement work. Even if the major part of the covariance matrices may be associated with the JEFF-3.1.1 library, some of them come from a different evaluation than the one giving the central values. Such is the case of 241Am, which was recently re-evaluated at the CEA for JEFF-3.2, with its capture cross section increased by about 15%. Another example is the case of 242Pu, which is a major contributor to the decay heat uncertainty through the build-up of 244Cm, a strong contributor to decay heat (about 40% of the total decay heat at about 10 years of cooling). The covariance matrix for the 242Pu radiative capture cross section actually comes from the ENDF/B-VII.1 evaluation. The collapsed uncertainty amounts to about 11% at one standard deviation and is responsible for more than 80% of the total decay heat variance of MOX fuels at 10 years of cooling. A new evaluation of the covariance matrix for this isotope would have an impact on the prior decay heat uncertainty of MOX fuels.
There is also a crucial lack of correlation matrices for independent fission yields, although they are strongly correlated by the physical constraints of conservation and normalization. A subgroup of the Working Party on International Nuclear Data Evaluation Co-operation (WPEC, NEA) [23], whose purpose was to come up with a new methodology to produce fission yield evaluations associated with covariance matrices, was proposed in 2013 and came to an end in 2016; as a conclusion, covariance matrices for fission yields will be produced for the next JEFF evaluation, based on the GEF code [24]. Covariance matrices associated with the JEFF-3.1.1 fission yields were also produced at the CEA [25]; their considerable impact on the decay heat uncertainty has also been quantified in [25].
Generally speaking, the nuclear data covariance propagation is the starting point for decay heat in the Uncertainty Quantification process defined in chapter II; indeed, it enables us to quantify the uncertainty associated with the DARWIN2.3 calculated decay heat, assuming that the modeling calculation biases are limited, before exploiting the experimental validation results with the representativity/transposition method. The nuclear data uncertainty propagation is also necessary to implement the representativity/transposition method, as shown in the next chapter. Spent fuel chemical analyses, based on the measurement of the nuclide concentrations contributing to the decay heat after fuel chemical dissolution, can also bring valuable elements for the DARWIN2.3 decay heat experimental validation [2]. This is especially true at long cooling times (typically over 6 months), when only a few isotopes (less than 50) contribute to the irradiated fuel decay heat. After the demonstration of the representativity of the analyzed experiment relative to PWR fuel decay heat, the experimental validation must be transposed to provide valuable information for the DARWIN2.3 Uncertainty Quantification step.

First results concerning the implementation of the representativity and transposition on the DARWIN2.3 integral experimental validation
The representativity and transposition method relies on an experimental data assimilation process. The transposition is possible when the similarity between the experiment and the "reactor" case is strong. This similarity is quantified by the representativity factor, introduced by Orlov [34]. The representativity factor is a correlation factor between an experiment and the "reactor" case regarding nuclear data uncertainty for a physical parameter (decay heat in our case). The representativity factor r is given by formula (4), where the index A stands for the reactor application and e for the experiment:

r = (S_A^T · D · S_e) / sqrt( (S_A^T · D · S_A) · (S_e^T · D · S_e) )    (4)

The vector S_A (respectively S_e) is the sensitivity to the nuclear data in the reactor application (respectively the experiment), and D is the covariance matrix.
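Formula (4) can be sketched as follows; the sensitivity vectors and covariance matrix are invented for illustration:

```python
import math

def bilinear(u, D, v):
    """u^T D v for plain Python lists."""
    return sum(u[i] * D[i][j] * v[j]
               for i in range(len(u)) for j in range(len(v)))

def representativity(S_A, S_e, D):
    """Formula (4): the correlation, induced by the nuclear data
    covariance matrix D, between the application and the experiment."""
    return bilinear(S_A, D, S_e) / math.sqrt(
        bilinear(S_A, D, S_A) * bilinear(S_e, D, S_e))

D = [[0.04, 0.0], [0.0, 0.09]]
r_identical = representativity([1.0, 2.0], [1.0, 2.0], D)  # identical profiles
r_disjoint = representativity([1.0, 0.0], [0.0, 1.0], D)   # disjoint contributors
```

Identical sensitivity profiles give r = 1, while an experiment and an application sensitive to disjoint, uncorrelated data give r = 0; this is exactly the behavior seen in Figure 4 when the decay heat contributors differ.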
One can also define a weight w (formula (5)), where s_e is the uncertainty associated with the experimental measurement and ε_e the prior uncertainty of the experiment due to nuclear data, calculated with formula (3):

w = ε_e² / (ε_e² + s_e²)    (5)

The weight w provides an indication of the interest of the experiment for a transposition application. In the ideal case where there is no experimental uncertainty (s_e = 0), the weight is maximum and the main source of uncertainty comes from the nuclear data. As a conclusion, the transposition method is the most efficient, that is to say it leads to the largest bias and uncertainty reduction due to nuclear data covariances, when r and w are the closest to unity. The transposition method applied to one experiment and one reactor application allows an indirect reassessment of the nuclear data, leading to a new calculation bias dR* and a posterior uncertainty ε*_A due to nuclear data covariances (formulas (6) and (7)):

dR* = r · w · (ε_A / ε_e) · dE_e    (6)
ε*_A² = ε_A² · (1 - w · r²)    (7)

where dE_e is the C/E discrepancy observed for the experiment. These formulas have been extended to more than one experiment [35,36], where:
- R_0 and R* are the prior and posterior values of the calculated quantity of interest;
- ε_A and ε*_A are the prior and posterior uncertainties due to nuclear data covariances for the reactor application;
- R_A = (r_1 ... r_i ... r_n) is the extended representativity vector of the reactor application A;
- R_E = (r_ij) is the extended representativity matrix between the experiments;
- dE_i is the experimental uncertainty of the experiment i, and dE_i,j is the experimental correlation between the experiments i and j.

At the CEA, transposition applications were initiated for fuel cycle applications in 2015. The first one concerns the transposition of the C/E discrepancies on the isotopic concentrations of a 17 × 17 square lattice PWR MOX fuel to a 15 × 15 square lattice PWR MOX fuel [37]. Indeed, the current experimental validation report of the DARWIN2.3 package focuses on 17 × 17 lattices.
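The limiting behavior of formulas (5) and (7) can be checked numerically; all the numbers below are illustrative:

```python
import math

def weight(eps_e, s_e):
    """Formula (5): w = eps_e^2 / (eps_e^2 + s_e^2).
    w tends to 1 as the experimental uncertainty s_e tends to 0."""
    return eps_e ** 2 / (eps_e ** 2 + s_e ** 2)

def posterior_uncertainty(eps_A, eps_e, s_e, r):
    """Formula (7): eps*_A = eps_A * sqrt(1 - w r^2).
    The reduction is largest when both r and w approach unity."""
    return eps_A * math.sqrt(1.0 - weight(eps_e, s_e) * r ** 2)

# Limiting cases: a perfect, noiseless experiment removes the prior
# uncertainty entirely; a non-representative one leaves it unchanged.
ideal = posterior_uncertainty(4.0, 4.0, 0.0, 1.0)   # r = 1, s_e = 0
useless = posterior_uncertainty(3.5, 3.0, 1.0, 0.0)  # r = 0
```

In between, e.g. r = 0.9 with w = 0.8, the posterior uncertainty drops to about 59% of the prior, which is the kind of reduction sought through the transposition method.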
The use of the representativity/transposition method on the isotopic concentrations was allowed by a strong representativity factor (r = 0.99) and led to an uncertainty reduction in nuclide concentration calculations ranging from 0 to 87%.
The second work conducted at the CEA is a prospective study involving the use of the transposition for decay heat [22]. The goal was to quantify to what extent a measurement at a given set of parameters (t_cooling, BU) could be used through the transposition at another set of parameters (t_cooling', BU'). Figure 4 shows the representativity factor obtained as a function of the cooling time for a UOX fuel (235U e% = 3.7%) reactor application and the CLAB experiment at a cooling time of 4724 days (i.e. ∼13 years) and a very similar discharge burnup. The decay heat mainly originates from both short-lived fission product decays and actinide decays. One can see in this illustration that the representativity factor drops sharply for cooling times below 2000 days (i.e. ∼5.5 years). Therefore, it can be anticipated that it would be difficult to use the transposition to quantify biases and uncertainties due to nuclear data at cooling times shorter than 2000 days. Bear in mind that the representativity is usually considered satisfactory when it is close to 0.9 or higher.
Another work is now underway concerning the exploitation of the MERCI-1 experiment analysis, the integral experiment that provides integral experimental data at the lowest cooling times (45 min), where the decay heat mainly originates from short-lived fission product decays. The representativity factor on the decay heat between MERCI-1 (at 45 minutes and 42 days respectively) and a standard PWR UOX fuel (235U e% = 3.7%) is presented in Figure 5 (on the left and right respectively). The MERCI-1 fuel sample was irradiated for three cycles in the OSIRIS experimental reactor, with a mean specific power of around 65 W/g and two inter-cycle shutdown periods; a fine irradiation history was taken into account for the decay heat calculations [32]. For the standard PWR fuel, a simplified mean irradiation history was modeled, with a conventional specific power (around 40 W/g). With this assumption, it is observed that the representativity is very good, even for low cooling times (5 min), as long as the burnup is close to the MERCI-1 fuel burnup (3.6 GWd/t); for higher burnups (>15 GWd/t), the experiment is not representative enough to use the transposition method.
Indeed, the low MERCI-1 representativity for a standard UOX spent fuel (BU > 15 GWd/t) is due to the contribution of 239Pu fissions to the decay heat, and thus to its associated uncertainty, from a burnup of 15 GWd/t onwards (see Fig. 6), a contribution that increases with burnup. The representativity also decreases as the cooling time of the standard UOX spent fuel increases, because of strong differences in the decay heat contributors and thus in the sensitivity profiles.
The bias transposition may now be analyzed in a second step for cases presenting a good representativity.

Perspectives for the determination of an enhanced extended uncertainty associated with the DARWIN2.3 decay heat calculations
The recent studies of nuclear data covariance propagation performed at the CEA over the last years on the decay heat of UOX and MOX fuels show that the total uncertainty is reduced, sometimes by a factor of 2, in comparison with the uncertainty determination of the end of the 1990s [38]. This is mainly due to the use of a more rigorous method of uncertainty propagation and to improvements in the content of the nuclear data libraries. However, this reduction emphasizes the fact that one should be careful when analyzing the results, and supports the interest of keeping a critical eye on nuclear data covariances.
Thus, the DARWIN2.3 experimental validation must be completed, considering the lack of representativity of the existing integral experiments. Its exploitation with the representativity and transposition method must also continue in order to provide elements that will enable us to validate the order of magnitude of the nuclear data uncertainty for the DARWIN2.3 calculated decay heat in the application domain. First, the DARWIN2.3 validation could be extended with the analysis of two sets of integral decay heat measurements found in the literature: the GE-Morris and HEDL measurements. They are used for the validation of other international codes dedicated to decay heat computation, such as SCALE/ORIGEN [39] or VESTA2.1 [40]. The GE-Morris and HEDL measurements cover lower cooling time ranges than the CLAB experiments; their characteristics are summarized in Figure 7. The experimental validation based on elementary fission bursts (EFBs) will also have to be added to the experimental data assimilation process, which currently relies only on the MERCI-1 and CLAB assimilation; the objective will be to assimilate all these data at the same time, as recommended by [48] (where only EFB data assimilation is performed), to obtain a final uncertainty covering the largest possible DARWIN2.3 application domain, much more extensive than the one currently covered by the experimental validation.

Conclusion
Decay heat is a crucial issue for in-core safety and the back-end cycle. In this framework, accurate control of the decay heat calculation is needed for all the PWRs in the French nuclear infrastructure (UOX and MOX fuels with 235U enrichments ranging from 1.0 to 5.0 wt.% and average plutonium contents ranging from 4.0 to 11.0 wt.%) over a wide range of cooling times (starting at reactor shutdown and extending to more than 300 000 years). The calculation of the decay heat is provided by the DARWIN2.3 package, which benefits from the implementation of the VVUQ process. Very few integral experiments are available for decay heat to ensure the experimental validation of the DARWIN2.3 package over the whole range of parameters needed. Today, the Uncertainty Quantification associated with the decay heat calculated by DARWIN2.3 relies mainly on deterministic nuclear data covariance propagation. The input data for this propagation are the covariances taken from the European evaluation JEFF-3.1.1 for the decay data and fission yields, and the COMAC-V2 database for cross sections. However, these covariances are often incomplete, considering the amount of nuclear data involved in decay heat calculations (more than 40 000 entries). Besides, their quality and accuracy are sometimes a subject of debate for experts in nuclear data (particularly concerning the fission yield correlation matrices). That is why the completion and exploitation of the DARWIN2.3 experimental validation is needed: the uncertainty level due to nuclear data uncertainties and associated with the decay heat calculation must be confirmed.
To accomplish this, a data assimilation work, with the implementation of the representativity and transposition method, must be carried out, integrating the C/E discrepancies coming from both the integral experiments (MERCI-1, CLAB, GE-Morris and HEDL) and the EFBs, to increase the reliability and accuracy of decay heat calculations. The interpretations of the GE-Morris and HEDL experiments are planned, so that it will be possible to study their representativity with respect to UOX and MOX fuels and possibly, depending on the representativity values, use the transposition method for cooling times between those of the MERCI-1 and CLAB experiments, that is, from over 6 months to around 5 years. Moreover, it seems essential to plan new measurements (EFBs or integral experiments like MERCI-1) to be able to use them through the representativity and transposition method for UOX and MOX fuel applications at high burnup values and short cooling times.