Open Access
EPJ Nuclear Sci. Technol.
Volume 5, 2019
Article Number 4
Number of page(s) 32
DOI https://doi.org/10.1051/epjn/2018050
Published online 28 February 2019

© J.-B. Blanchard et al., published by EDP Sciences, 2019

Licence: Creative Commons. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

Uncertainty quantification is the science of quantitative characterisation and reduction of uncertainties in both computational and real-world applications. This procedure usually requires a great number of code runs to obtain reliable results, which has long been a real drawback. In the past few years, many interesting developments have been proposed to overcome this limitation, coming both from the methodological and the computing side. Among the features often used to perform uncertainty quantification, one can cite, for instance, sensitivity analysis (SA), to get a rough ranking of the uncertainty sources, and surrogate model generation, to emulate the code and perform a complete analysis on it (uncertainty propagation, optimisation, calibration, etc.). Knowing this, and with the increasing number of resources available to assess complex computations (fluid evolution with a fine mesh, for instance), physicists should know whether it is useful to increase the mesh resolution, or whether it is more relevant to reduce a specific uncertainty source, or to add new locations to a learning database used to build a surrogate model.

1.1 The Uranie platform

The Uranie platform has been developed in order to gather methodological developments coming from both the academic and the industrial world and to provide them to the broadest audience possible. It is an open-source software dedicated to performing studies such as uncertainty propagation, SA, surrogate model generation and calibration, optimisation, etc. Given the wide range of methodologies it provides, it can be compared to a few other software packages, either commercial (COSSAN [1]) or free (Dakota [2], Open-Turns [3], UQLab [4], etc.). Its evolution has been driven by a few important aspects:

  • Open-source: the platform can be used by anyone, and every motivated person can investigate the code and propose improvements or corrections.

  • Accessibility: the platform is developed on Linux, but a Windows port is also available. Also, even though it is written in C++, it can be used either through a C++ interface or through a Python one.

  • Modularity: the platform is organised in modules, so that one only needs to load the requested modules and an analysis can be organised as a combination of fundamental bricks. The modules are introduced in Figure 1 and discussed later on.

    thumbnail Fig. 1

    Organisation of the Uranie modules (green boxes) in terms of inter-dependencies. The external dependencies are shown as light purple boxes.

  • Genericity: the platform can work on an explicit function but it can also handle a code, considering it as a black box (as long as communication can be done through files, for instance). This ensures that Uranie's methods are non-intrusive and can be applied to all scientific fields.

  • Universality: the platform is able to export data, but also surrogate models, in different formats (ASCII, XML, JSON, etc.) so that they can be used by Uranie users but also by users of other platforms (for cross-checks and validation, for instance).

Developed by the nuclear energy division of the CEA1 and written in C++, Uranie is based on the ROOT [5] platform (discussed along with its advantages in Sect. 3.1). It consists of a set of so-called technical libraries, usually referred to as modules (represented as green boxes in Fig. 1), each performing a specific task. Some of them are considered low-level, in the sense that they are the foundation bricks upon which the rest of the modules rely, the latter being more methodologically oriented (dedicated to a specific kind of analysis, see the discussion in Sect. 2.3).

Uranie has the particularity of storing both samples and results in a common object, namely the TDataServer, which is the centre of all the statistical treatments made by users. As stated above, the platform is able to launch different kinds of evaluators: functions (Python and C++ ones) and all kinds of codes (as long as one can communicate with them). On top of this, any combination of these evaluators can be created to define an estimation chain, the outputs of the ith evaluator being available as input arguments to the evaluators further down the chain.

1.2 Paper layout

This paper describes several typical analyses that can be run using the Uranie platform. It is not meant to be fully exhaustive, neither concerning the methodology behind the introduced techniques, nor concerning the methods and options implemented in Uranie. The next few sections complete the general description of our problem, from the general methodology used (see Sect. 2) and the way the platform is constructed (see Sect. 3) to the use-case presentation (see Sect. 4).

The first concept introduced will be the generation of design-of-experiments, the way to take into account the uncertainties of the various input variables, and how to propagate them into an uncertainty on the quantity of interest (see Sect. 5). Surrogate models will then be introduced (see Sect. 6) to show how to mimic a code or function with a simpler process. Three different techniques will be applied to a pre-produced design-of-experiments, called the training database, describing the lowest level of complexity of the use-case. The impact of every uncertainty source will be ranked and also numerically estimated thanks to various sensitivity analyses in Section 7. Finally, a calibration of some of the model parameters is performed using different techniques in Section 8, also questioning whether the thermal exchange coefficient h can really be considered constant.

From Sections 5 to 8, some of the methods under consideration contain a sub-part called To go further, which briefly presents important, yet not discussed, options offered to the user. In these sections, a final sub-part called More methodology and ongoing investigations introduces other already implemented solutions as well as the improvements and new methods currently under investigation. Finally, some important left-over concepts are discussed along with the actual perspectives in Section 9, before the conclusion in Section 10.

The code used to produce the figures of this paper can be found at https://sourceforge.net/projects/uranie/ along with the sources of the platform.

2 The uncertainty general methodology

2.1 Notation used throughout this paper

The following notation conventions will be used when dealing with generic methodology description.

  • Bold letters represent vectors.

  • Upper case letters represent random variables, so our general problem might be written as Y = f(X, d), where Y is the output variable under consideration, X is the vector of uncertain input variables (whose dimension is set to nX) and d a set of fixed parameters (whose dimension is set to nd).

  • Lower case letters are realisations of a random variable: y = f(x, d) is a given output value when considering a realisation x of the input variable vector X and the set of fixed parameters d.

  • xi represents the ith coordinate (i = 1, … , nX) of a realisation x of the random vector X .

  • When considering a set of realisations, one can write it as follows: {xj, j = 1, …, nS}, where xj is the jth realisation of X and nS is the size of this set.

2.2 Schematic description

Many issues related to uncertainty treatment of computer code simulations share the same framework. It can be sketched in a few key steps, gathered for illustration purpose in Figure 2 [6] and described below.

The problem specification (A): This step is the starting point of a great many studies, as it is when the number of input variables is defined, along with the variable of interest and the corresponding quantity of interest (a quantile, a mean, a standard deviation, etc.). All of these are linked through a model that can be a function, a code or even a surrogate model (which can be used instead of the code).

The quantification of uncertainty source (B): In this step, the statistical laws followed by the different input variables are chosen along with their characteristics (mean, standard deviation, etc.). The possible correlations between inputs can also be defined here.

The propagation of uncertainty sources (C): Given the choice made in steps A and B, the uncertainties on the input variables are propagated to get an estimation of the resulting uncertainty on the output under study. This can be performed, for instance, with analytic computation, using Monte-Carlo approach through a design-of-experiments, and so on.

The inverse quantification of sources (Bʹ): Given the definition of the problem in step A and a provided set of experiments, one can estimate the mean value and/or the uncertainty of the input variables, in order, for instance, to spot which experiment should be run to constrain the largest uncertainty source, or to calibrate the model.

The SA (Cʹ): Given the choice made in steps A and B, this analysis can be used to rank the input variables with respect to the impact of their uncertainty on the uncertainty of the variable of interest. Some methods even provide a quantitative illustration of this impact, for instance as a percentage of the output standard deviation.

This is a very broad description of the kind of analysis usually performed when discussing uncertainty quantification. All these steps can indeed be combined, or replaced, once or in an iterative way, to get a more refined analysis.

thumbnail Fig. 2

Diagram that represents in few boxes the different steps that can compose an uncertainty propagation or quantification analysis [6]. (This figure is subject to copyright protection and is not covered by a Creative Common license.)

2.3 Corresponding modules of the Uranie platform

As discussed in the introduction, the Uranie platform is constructed as a set of libraries, called modules, which are often dedicated to specific tasks. The rest of this section introduces the main ones used throughout this paper, starting with the DataServer one, which is the spine of the Uranie project, as shown in Figure 1.

2.3.1 DataServer module

The DataServer library is the core of the Uranie platform. It contains all the necessary information about the variables of a problem (such as names, units, probability laws, data files, and so on) and the data itself (if any information has been imported or generated), and it allows for very basic statistical operations (computing averages, standard deviations, quantiles, etc.).

2.3.2 Sampler module

The Sampler library makes it possible to create a large variety of design-of-experiments depending on the problem to be dealt with (uncertainty propagation, surrogate model construction, etc.). Some of these methods are mainly present to be embedded in more complicated methods (such as the designs developed in the Fourier-conjugate space, discussed later on in Sect. 7.3.1).

2.3.3 Modeler module

The Modeler library allows the construction of one or more surrogate models. The idea is to provide a simpler, and hence faster, function to emulate the specified output of a complex model (which is generally costly in terms of resources) for a given set of input factors. In this paper, the following surrogate models will be introduced: polynomial chaos expansion, artificial neural networks, and Gaussian process, also known as kriging.

2.3.4 Optimizer and Reoptimizer modules

The Optimizer and Reoptimizer libraries are dedicated to optimisation and model calibration. Model calibration basically consists in setting up the degrees of freedom of a model such that simulations are as close as possible to an experimental database. Optimisation is a complex procedure and several techniques are available to perform single-criterion or multi-criteria analyses, with and without constraints, using local or global approaches.

2.3.5 Sensitivity module

The Sensitivity library makes it possible to perform SA of one or several output responses of a code with respect to the chosen input factors. A glimpse of the very basic concepts of SA is given, along with the methods used throughout this paper: a screening one (the Morris method) and two different estimations of the Sobol coefficients.

3 Architecture and dependencies

This section introduces the different dependencies of the Uranie platform but also the way Uranie is developed, tested, and documented.

3.1 The Uranie platform dependencies

The dependencies on external packages (shown as light purple boxes in Fig. 1) are sorted into two categories: compulsory and optional ones. A missing optional dependency only prevents some methods from being used, while Uranie cannot work without the compulsory ones. Both types are listed and briefly discussed below.

3.1.1 Compulsory dependencies

 

  • ROOT: Discussed thoroughly below, the version used here is v6.14/00.

  • Cmake: Free and open-source software for managing the build process of compiled software, the version used here is v3.7.1 [7].

The ROOT system is an open-source object oriented framework for large scale data analysis. It started as a private project in 1995 at CERN2 and grew to be the officially supported LHC analysis toolkit. ROOT is written in C++, and contains, among others, an efficient hierarchical object-oriented database, a C++ interpreter, advanced statistical analysis (multi-dimensional histogramming, fitting, minimisation, cluster finding algorithms), and visualisation tools. The user interacts with ROOT via a graphical user interface, the command line or batch scripts. The object-oriented database design has been optimised for parallel access (reading as well as writing) by multiple processes.

Uranie is built as a layer on top of ROOT and, as a result, it benefits from numerous features of ROOT, among which:

  • the C++ on-the-fly compiler (CLING);

  • the Python automatic transcription;

  • an access to SQL databases;

  • an efficient and optimised data handling model;

  • many advanced data visualisation features;

  • and much more…

The C++ on-the-fly compiler provides a free speed-up of every macro, which can be written line-by-line like a Python one, and it comes along with a Jupyter kernel. This means that Uranie can be used in both languages by writing scripts, but also through the Jupyter notebook framework [8], benefiting from all its advantages (fast prototyping, rapid access to documentation, auto-completion, visual representation, etc.). Finally, as ROOT's community is very large (beyond 10 000 users), its documentation is very complete and many examples are provided either locally or on its very reactive web forum.

3.1.2 Optional dependencies

 

  • CPPUnit: unit testing framework for C++ programs, the version used here is v1.13.1 [9]. It allows running unit tests to check the quality of an installation.

  • OPT++: Libraries that include nonlinear optimisation algorithms written in C++, the version used here is v2.4 [10]. As this package is not maintained anymore, a patched (and recommended) version is included in the Uranie archive.

  • FFTW: Library that computes the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, the version used here is v3.3.4 [11].

  • NLopt: Library for nonlinear optimisation, the version used here is v2.2.4 [12].

  • PCL (Portable Coroutine Library): Implements the low level functionality for coroutines, the version used here is v2.2.4.

  • MPI (Message Passing Interface): Standardised and portable message-passing system needed to run parallel computing, the version used here is v1.6.5 [13].

  • CUDA (Compute Unified Device Architecture): Parallel computing platform and programming model invented by NVIDIA to harness the power of the graphics processing unit (GPU), the version used here is v8.0 [14]. If requested, it should be used with the boost library, with a version greater than v1.47.

3.2 Uranie documentation and installation

In order to check and guarantee the best possible portability, the Uranie platform (the version under discussion here being v4.1.0) is tested daily on seven different Linux distributions and also on Windows 10. The sources of the Uranie platform can be retrieved from the Sourceforge web page: https://sourceforge.net/projects/uranie/.

Once the sources have been retrieved, it is highly advised to follow the instructions listed in the README file to perform the installation. In addition to the code itself, this installation provides the Uranie documentation, among which:

  • a methodological manual (in both HTML and PDF formats). It gives a brief introduction to the main methods and algorithms from a mathematical point of view, and provides references for the interested reader to get a deeper insight into these topics.

  • a user manual (both html and pdf format). It gives explanations on the implementations of methods along with a large number of examples.

  • a developer manual. This is a description of methods, from the computing point of view, obtained thanks to the Doxygen platform [15].

  • more than 100 examples of self-working macros, showing how to run different kinds of analyses, both in C++ and Python.

In the case of the Windows version, an installation can be done from the previously introduced archive, but a dedicated free standing archive is specifically produced by the Uranie support team for every new release and is provided on request.3

4 Use case

4.1 The thermal exchange model

In this part, the physical equations of the use-case used throughout this paper are laid out in a simple way. This model will be detailed more precisely, and refined, as required by the studies performed in the following sections.

4.1.1 Introduction

The experimental setup is depicted in Figure 3 and is composed of a planar sheet whose width is 2e (along the x-direction), while its length is considered infinite (represented without boundaries along the y-direction). At t = 0 this sheet, whose initial temperature is set to Ti, is exposed to a warmer fluid (whose temperature is written T∞). The aim of this problem is to represent the temperature profiles, depending on the time and the position within the sheet, using different materials for the sheet, and to investigate the impact of various uncertainty sources on these temperature profiles.

Studying the evolution of the temperature within the sheet in fact consists in solving the heat equation which, for the mono-dimensional problem depicted in Figure 3, can be written as

\frac{\partial T}{\partial t} = \alpha \frac{\partial^2 T}{\partial x^2}. \quad (1)

In this equation α [m2 s−1] is the thermal diffusivity, defined by

\alpha = \frac{\lambda}{\rho C_\rho}, \quad (2)

where λ is the thermal conductivity [W m−1 K−1], Cρ is the massive thermal capacity [J kg−1 K−1], and ρ is the density [kg m−3]. Three conditions are used to solve the heat equation, the first one being the initial temperature

T(x, t=0) = T_i, \quad (3)

the second one stating that the flow is null at the centre of the sheet

\left.\frac{\partial T}{\partial x}\right|_{x=0} = 0, \quad (4)

while the last one expresses the thermal flow equilibrium at the surface of the sheet

-\lambda \left.\frac{\partial T}{\partial x}\right|_{x=e} = h \left( T(x=e, t) - T_\infty \right). \quad (5)

Usually, the thermal coupling between a fluid and a solid structure is characterised by the thermal exchange coefficient h [W m−2 K−1]. This coefficient makes it possible to dispense with a complete description of the fluid when one is only interested in the thermal evolution of the structure (and vice versa). Its value depends on the dimensions of the complete system, on the physical properties of both the fluid and the structure, on the liquid flow, on the temperature difference, etc. The thermal exchange coefficient is characterised by the Nusselt number (Nu), from the fluid point of view, and by the Biot number (Bi), from the structure point of view. In the rest of this paper, the latter will be discussed and used thanks to the relation

Bi = \frac{h e}{\lambda}. \quad (6)

thumbnail Fig. 3

Simplified sketch of the thermal exchange problem.

4.1.2 Analytic model

In the specific case where the thermal exchange coefficient h and the fluid temperature T∞ can be considered constant, equation (1) has an analytic solution for any initial condition (all the more so for the one stated in Eq. (3)) respecting the flow conditions defined in equations (4) and (5). The resulting analytic form is usually expressed in terms of the thermal gauge θ, which is defined as

\theta(x, t) = \frac{T(x, t) - T_i}{T_\infty - T_i}. \quad (7)

The complete form is the following infinite series

\theta(x_{ds}, t_{ds}) = 1 - \sum_{n=1}^{\infty} \theta_n(x_{ds}, t_{ds}), \quad (8)

where the original parameters have been changed to dimensionless ones

x_{ds} = \frac{x}{e}, \quad (9)

t_{ds} = \frac{\alpha t}{e^2}. \quad (10)

Given this, the elements in the series (Eq. (8)) can be written

\theta_n(x_{ds}, t_{ds}) = \gamma_n \cos(\omega_n x_{ds}) \exp\left(-\omega_n^2 t_{ds}\right), \quad (11)

where

\gamma_n = \frac{2 \sin(\omega_n)}{\omega_n + \sin(\omega_n)\cos(\omega_n)}, \quad (12)

and the ωn are the solutions of the following equation

\omega_n \tan(\omega_n) = Bi. \quad (13)

This model has been implemented in Uranie and tested with two kinds of material to get an idea of the temperature profile in the structure. To do so, the infinite series in equation (8) has been truncated, keeping only the first 41 elements.
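As a quick check of the truncated series above, the following Python sketch solves the transcendental equation (13) numerically and sums the first 41 terms of equations (8)–(12) as reconstructed here (the classical plane-wall solution). It is only an independent illustration, not the Uranie implementation, and the function names are ours.

import numpy as np
from scipy.optimize import brentq

def omega_roots(bi, n_terms=41):
    """First n_terms positive roots of omega * tan(omega) = Bi (Eq. (13))."""
    roots = []
    for n in range(n_terms):
        # one root per branch of tan, on (n*pi, n*pi + pi/2)
        lo, hi = n * np.pi + 1e-9, n * np.pi + np.pi / 2 - 1e-9
        roots.append(brentq(lambda w: w * np.tan(w) - bi, lo, hi))
    return np.array(roots)

def thermal_gauge(x_ds, t_ds, bi=4.0, n_terms=41):
    """Truncated series for the gauge theta(x_ds, t_ds), Eqs. (8)-(12)."""
    w = omega_roots(bi, n_terms)
    gamma = 2.0 * np.sin(w) / (w + np.sin(w) * np.cos(w))
    return 1.0 - np.sum(gamma * np.cos(w * x_ds) * np.exp(-w ** 2 * t_ds))

print(thermal_gauge(0.3, 1.0))  # PTFE-like case (Bi = 4) after one diffusion time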

4.1.3 Looking at PTFE and iron

In this part, two very different kinds of plate-sheets are compared: a composite one, made out of PTFE (whose best known brand name is Teflon), and an iron one. The main properties (of interest for our problem) of the sheets are gathered in Table 1, side-by-side for both PTFE and iron. The last column shows the relative uncertainty found in the literature (or chosen in the case of the thickness) for the iron case; these uncertainties are applied to the PTFE case as well. The last three lines are the properties computed from the first four ones, once the thermal exchange coefficient has been set to a constant value (here 100 W m−2 K−1), as stated in Section 4.1.2.

Given these properties (using the nominal values as reference), several plots have been produced to characterise the evolution of the temperature profiles in the sheet matter; they are gathered in Figure 4. Looking at these plots, a major difference can be drawn between the two sheets: in the PTFE case, the gauge is very different between two positions at the same time, and this difference also varies through time (see Figs. 4a and 4c). For iron, on the other hand, the differences through time and space are very small. This is all the more striking as the range over which the gauge is displayed is significantly reduced. The iron thermal gauge is actually far from reaching the value 1, even after 10 thermal diffusion times, whereas this is the case for PTFE.

These differences could have been foreseen by looking at the properties gathered in Table 1: the previously discussed observations are the ones expected once one considers the value of the Biot number. For a material with a Biot number greater than 1, such as the PTFE sheet, the thermal conduction is small within the sheet, leading to a temperature gradient within the structure. On the other hand, when the material has a Biot number significantly smaller than 1, as for the iron plate, the temperature is expected to be quite similar at the surface and in the centre of the sheet.

PTFE will be considered as the main use-case for the rest of this paper. This means that, using the convention defined in Section 2.1 and unless specified otherwise (for instance in Sect. 6), a realisation of the input variable vector will be written as x = (e, λ, Cρ, ρ) and the set of fixed parameters is simply d = (h) (see Sect. 4.1.2). The corresponding realisation of our output variable (which depends on position and time) will be written as y = θ(x_ds, t_ds).

Table 1

Summary of both PTFE and iron characteristics. The last column shows the relative uncertainty found in the literature (or chosen in the case of the width) for the iron case. They will be applied as well on the PTFE.

thumbnail Fig. 4

Evolution of the thermal gauge as a function of either the position for different time steps (a,b), the time for different positions (c,d), or depending on both parameters (e,f), for PTFE (left) and iron (right).

5 Uncertainty propagation

As already stated in Section 2.2, many analyses start in the same way, by defining the problem under investigation in terms of the number of input variables and their characteristics, setting their possible correlations, etc. From there, unless one already has a computed set of experiments (as will be the case in Sect. 6), it is common to generate a design-of-experiments, i.e. a set of input locations to be assessed by the code/function, chosen to be as representative as possible of the input phase space with respect to the aim of the study.

This section introduces the various mechanisms available in Uranie for sampling design-of-experiments, which then allow the uncertainties to be propagated from the input parameters to the quantity of interest, as shown in Figure 2.

5.1 Random variable definition

5.1.1 Defining a variable

Uranie implements more than 15 parametric (continuous) distributions to describe the behaviour of a given random variable. The list of available continuous laws is given in Table 2, along with their corresponding adjustable parameters. For a complete description of these laws and a set of variations of all these parameters, see [16]. The classes implementing these laws give access to the main (theoretical) mathematical properties, and they are designed to interface with the sampling methods discussed in Section 5.2 in order to produce a dedicated design-of-experiments.

These classes also offer methods to compute the probability density function (PDF), the cumulative distribution function (CDF), and the inverse CDF. Figure 5 shows examples of PDFs in Figure 5a, CDFs in Figure 5b, and inverse CDFs in Figure 5c, using a uniform (black), a normal (red), and a gumbelmax (blue) law.
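For readers who want to reproduce the quantities of Figure 5 outside Uranie, the same PDF, CDF and inverse-CDF values can be obtained with scipy.stats (shown below with the parameters quoted in the caption of Figure 5); this is an illustration with a generic library, not the Uranie classes.

from scipy import stats

laws = {
    "uniform(-1, 1)": stats.uniform(loc=-1.0, scale=2.0),
    "normal(0, 0.5)": stats.norm(loc=0.0, scale=0.5),
    "gumbelmax(-0.4, 0.4)": stats.gumbel_r(loc=-0.4, scale=0.4),
}
for name, law in laws.items():
    # PDF, CDF and inverse-CDF (percent point function) at arbitrary points
    print(name, law.pdf(0.2), law.cdf(0.2), law.ppf(0.5))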

On top of these definitions, it is also possible to create a new variable through a combination of already existing ones, for instance with a simple mathematical expression. This can be done independently of the origin of the original variables: either read from a set-of-experiments without any knowledge of the underlying law, or generated from a well-defined stochastic law.

Table 2

List of continuous available statistical laws in Uranie, along with their corresponding adjustable parameters.

thumbnail Fig. 5

Example of PDF (a), CDF (b), and inverse-CDF (c) for a uniform law (defined between −1 and 1, in black), a normal law (with μ = 0 and σ = 0.5, in red) and a gumbelmax one (with μ = −0.4 and β = 0.4 in blue).

5.1.2 Correlating the laws

Once the laws have been defined, one can introduce correlations between them. In Uranie, this can be done with different methods. Starting from the simplest one, one can introduce a correlation coefficient between two variables or provide the complete correlation matrix.

Instead of using a correlation matrix to get intricate variables, one can use methods relying on copulas to describe the dependencies. The idea of a copula is to define the interaction of variables using a parametric function that allows a broader range of entanglement than a correlation matrix alone (various shapes can be obtained). The copulas provided in the Uranie platform are Archimedean ones, with four pre-defined parametrisations: Ali–Mikhail–Haq, Clayton, Frank, and Plackett. Both methods are illustrated in the next section.

5.2 Design-of-experiments definition

5.2.1 Stochastic methods

In Uranie, different kinds of random-based algorithms can be used to generate design-of-experiments. Here is a brief introduction to the three main types, which are illustrated in Figure 6 where two independent uniformly distributed variables are used. This kind of plot (called a Tufte plot) is an example of the visualisation tools implemented in Uranie. The main pad, in the centre of the canvas, shows the dependence of the two variables under consideration, while the two other pads show the projections along each dimension, as mono-dimensional histograms.

Simple random sampling (SRS): This method consists in independently generating the samples for each parameter following its own probability density function. An example of this sampling when having two independent uniformly distributed variables is shown in Figure 6b. The random drawing is performed using a uniform law between 0 and 1 and getting the corresponding value by calling the inverse CDF function corresponding to the law under study.

Latin hypercube sampling (LHS): This method [17] consists in partitioning the variation interval of each variable to obtain equiprobable segments and then get, for each segment, a representative value. An example of this sampling when having two independent uniformly distributed variables is shown in Figure 6a. The random drawing is performed using a uniform law between 0 and 1, split into the requested number of points for the design-of-experiments. Thanks to this, a grid is prepared, assuring equi-probability in every axis-projection. Finally, a random drawing is performed in every sub-range. The obtained value is computed by means of the inverse CDF function corresponding to the law under study.
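The LHS construction just described (equiprobable segments, one random draw per segment, independent shuffling of each axis, then a call to the inverse CDF) can be sketched in a few lines of generic Python; this is only an illustration of the principle, not the Uranie implementation.

import numpy as np
from scipy import stats

def lhs(laws, n_s, seed=0):
    """Latin hypercube sample of size n_s for a list of scipy.stats marginals."""
    rng = np.random.default_rng(seed)
    n_x = len(laws)
    # one uniform draw inside each of the n_s equiprobable segments, per dimension
    u = (rng.random((n_s, n_x)) + np.arange(n_s)[:, None]) / n_s
    for j in range(n_x):
        u[:, j] = u[rng.permutation(n_s), j]   # decouple the axes
    return np.column_stack([law.ppf(u[:, j]) for j, law in enumerate(laws)])

design = lhs([stats.uniform(0, 1), stats.uniform(0, 1)], n_s=100)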

Maximin LHS: Considering the definition of an LHS sampling, it is clear that permuting a coordinate of two different points creates a different design-of-experiments that is still an LHS one. In Uranie, a new kind of LHS sampling, called maximin LHS, has recently been introduced with the purpose of maximising the minimal distance calculated between every pair of design locations [18]. The criterion under consideration is the mindist criterion: let D = {xj} be a design-of-experiments of size nS. It is written as

\mathrm{mindist}(D) = \min_{1 \le j < k \le n_S} \| x^j - x^k \|_2, \quad (14)

where ∥ ⋅ ∥2 is the Euclidean norm. The designs which maximise the mindist criterion are referred to as maximin LHS. It has been observed that the best designs in terms of maximising equation (14) can be constructed by minimising its Lp regularisation instead, ϕp, which can be written as

\phi_p(D) = \left( \sum_{1 \le j < k \le n_S} \| x^j - x^k \|_2^{-p} \right)^{1/p}. \quad (15)

The permutations done to go from a first LHS design-of-experiments to its maximin version are made through a simulated annealing method. An example is displayed, starting from the design-of-experiments in Figure 6a and resulting in the one in Figure 6c. Both have uniform projections along each axis but the locations are clearly more space-filling in Figure 6c.
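The two criteria of equations (14) and (15), as written above, can be evaluated directly from the matrix of design locations, as in the short sketch below (the simulated annealing search itself is not shown); the function names are ours.

import numpy as np
from scipy.spatial.distance import pdist

def mindist(design):
    """Eq. (14): smallest Euclidean distance between two design locations."""
    return pdist(design).min()

def phi_p(design, p=50):
    """Eq. (15): L^p regularisation of mindist, to be minimised."""
    return float(np.sum(pdist(design) ** (-p)) ** (1.0 / p))

doe = np.random.default_rng(1).random((100, 2))   # any design, here a plain SRS one
print(mindist(doe), phi_p(doe))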

The SRS method is a pure-random method which populates the region following the inverse-CDF of the considered probability law. In other words, if the objective is to obtain quantiles for extreme probability values, the size of the sample should be large for this method to be used. However, one should keep in mind that it is rather trivial to double the size of an existing SRS sampling, as no extra caution has to be taken apart from the random seed. On the other hand, the LHS method is built in a way that ensures that the domain of variation of each variable is totally covered in a homogeneous way. The drawback of this construction is that it is absolutely not possible to remove or add points to an LHS sampling without having to regenerate it completely.

From a theoretical perspective, using a maximin LHS to build a Gaussian process (GP) emulator can reduce the predictive variance when the distribution of the GP is exactly known. However, this is not often the case in real applications, where both the variance and the range parameters of the GP are actually estimated from a set of learning simulations run over the maximin LHS. Unfortunately, the locations of a maximin LHS are far from each other, which is not a good feature for estimating these parameters with precision. That is why maximin LHS should be used with care. Relevant discussions dealing with this issue can be found in [19,20].

Finally, as introduced in Section 5.1.2, an example of correlation is provided in Figure 7, both using a correlation coefficient and a copula. In the first case, instead of relying on the “Bravais–Pearson” correlation coefficient definition, which exclusively reflects the degree and sign of linearity between two variables Xi and Xj, the method used in Uranie [21] takes into account the correlation on ranks, i.e. the “Spearman” definition:

\rho_S(X_i, X_j) = \rho(R_{X_i}, R_{X_j}). \quad (16)

In this expression, ρS is the Spearman coefficient and ρ is the usual Bravais–Pearson definition, applied here to R, the rank of the variable under consideration. This method can be applied only if the correlation matrix provided by the user is positive definite. Figure 7a shows an example of correlation (set to a value of 0.9) between two uniform distributions.

The copula, introduced in Section 5.1.2, depends only on the input variables and a parameter ξ. An example using two uniform distributions is given in Figure 7b for the Ali–Mikhail–Haq copula.

thumbnail Fig. 6

Drawing of the design-of-experiments for two uniformly distributed variable x1 and x2, with an LHS sampling (a), an SRS one (b), and a maximin LHS one (c). Deterministic sampling are also shown with the Halton sequence (d), the Sobol one (e), and a Petras sparse grid (f).

thumbnail Fig. 7

Example of correlation introduced between two uniform distributions, using either the Spearman coefficient (a) or the Ali–Mikhail–Haq copula (b).

5.2.2 Quasi Monte-Carlo methods

The deterministic samplings can produce design-of-experiments with specific properties that can be very useful, for instance, to:

  • cover the space of the input variables as well as possible;

  • explore the extreme cases;

  • study combined or nonlinear effects.

There are two kinds of quasi Monte-Carlo sampling methods implemented in Uranie: the regular ones and the sparse grid ones. The former can be generated using two different sequences:

  1. Sequences of Halton [22]

  2. Sequences of Sobol [23].

Figures 6d and 6e show the design-of-experiments obtained with two independent uniformly distributed variables; they can be compared with the stochastic ones (Figs. 6a to 6c) already discussed in Section 5.2.1. The coverage is clearly more regular in the case of quasi Monte-Carlo sequences, but these methods can suffer from the appearance of spurious patterns when nX is greater than 10. On the other hand, sparse-grid sampling can be very useful for integration purposes and can be used in some of the surrogate-modelling definitions, see, for instance, Section 6.1.4. In Uranie, the Petras algorithm [24] can be used to produce these sparse grids (shown for a level set to 8 in Fig. 6f, which can be compared to the rest of the design-of-experiments in Fig. 6).

In practice, the main steps used to get one of the plots shown in Figure 6 are gathered in the following block:
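The original listing cannot be reproduced here; the sketch below only illustrates the typical sequence (declare a TDataServer, attach two uniform attributes, generate a 100-point LHS and draw it). The import convention and the exact signatures are assumptions based on the macros distributed with the platform and may differ between versions, so the user manual should be taken as the reference.

# Illustrative sketch only: class names and signatures are assumed from the
# Uranie example macros (DataServer and Sampler modules) and may differ.
from URANIE import DataServer, Sampler   # assumed Python binding name

tds = DataServer.TDataServer("tds", "two independent uniform variables")
tds.addAttribute(DataServer.TUniformDistribution("x1", 0.0, 1.0))
tds.addAttribute(DataServer.TUniformDistribution("x2", 0.0, 1.0))

sampler = Sampler.TSampling(tds, "lhs", 100)   # "srs" for simple random sampling
sampler.generateSample()

tds.draw("x2:x1")   # Tufte-like view of the design-of-experiments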

5.3 Focusing on the PTFE case

In this section, the basic building blocks introduced in Sections 5.1 and 5.2 are put together to perform the uncertainty propagation. The following steps are then:

  1. create the input variables by specifying for each and every one of them a probabilistic law and their corresponding parameters. Here, all the input variables have been modelled using normal distributions and their nominal values and standard deviations have been estimated and gathered in Table 3.

    Table 3

    Summary of the PTFE uncertain physical parameters and their corresponding uncertainty. The absolute value of the uncertainty is computed from the values in Table 1 (where units are also provided).

  2. sample an LHS to be as representative of the full input phase space as possible. No correlation between the parameters has been assumed. The size of this design-of-experiments has been set to 100 points.

  3. compute 11 absolute time steps for every location and for 4 different depths in the sheet. Every configuration (a configuration being a precise value of the time and depth) consists of 100 measurements, whose mean and standard deviation have been computed. These values are then represented in Figure 8.

    thumbnail Fig. 8

    Evolution of the thermal gauge (top pad) and its standard deviation (bottom pad), as a function of the absolute time, for four different values of depth within the sheet.

Given the distributions obtained in Figure 8, the user should decide what the next step of the analysis should be. The following list of actions illustrates the various possibilities (it is not meant to be exhaustive, being provided for illustration purposes only):

  • Compare this to already existing measurements, in order to:

    – check that the hypotheses are consistent with the model (in case of very surprising results, for instance);

    – move forward to a calibration or to the determination of the uncertainty of the physical model's parameters (through the Circe [25] method, for instance, in Uranie).

  • Move to a SA on the code, or on a surrogate model if the code is too resource consuming (as discussed in Sect. 7), to understand which input's uncertainty impacts the quantity of interest the most.

5.4 More methodology and ongoing investigations

This introduction to design-of-experiments sampling is very brief with respect to the underlying complexity and possibilities. It is indeed also possible to produce with Uranie:

  • design-of-experiments for integration in the conjugate Fourier space (used for instance in Sect. 7.3);

  • a representative set-of-points smaller than a given database to keep the main behaviour without having to run too many computations;

  • adaptive design-of-experiments. Unlike the ones previously discussed, new locations can be added sequentially, their values depending on the previous iterations (usually based on the use of surrogate models, for instance).

6 Surrogate model generation

In this part, different surrogate models will be introduced to reproduce the behaviour of a given code or function. The aim of this step is to obtain a simplified model able to mimic, within a reasonable acceptance margin, the output of both a training and a test database, along with an important improvement in terms of time and memory consumption [26].

The full analytic model, detailed in equation (8), plays the role of the complex model that should be approximated. To do so, a training database L = {(xj, yj), j = 1, …, nS} will be produced (as discussed in Sect. 2.1), for which nS is set to 40 points. In the case of our specific thermal example, x = (x_ds, t_ds) and y = θ(x_ds, t_ds). The Biot number is set to 4 as only the PTFE case will be considered (see Tab. 1).

Three different techniques will be applied: the polynomial chaos expansion, the artificial neural network, and the kriging approximation. Each method will have a brief introduction before being applied to our use-case. The interested reader is invited to go through the references for a more meticulous description. In all cases, the values estimated by the surrogate model will be called \hat{y} and it is possible to have a first estimation of the quality of the model by looking at criteria such as R2 or the mean square error (MSE), defined on L as

\mathrm{MSE} = \frac{1}{n_S} \sum_{j=1}^{n_S} \left( y^j - \hat{y}^j \right)^2 \quad \text{and} \quad R^2 = 1 - \frac{\sum_{j=1}^{n_S} \left( y^j - \hat{y}^j \right)^2}{\sum_{j=1}^{n_S} \left( y^j - \bar{y} \right)^2},

where \bar{y} is the mean of the quantity of interest on L.

Another quality criterion, called the predictivity coefficient Q2, is estimated using a test basis, meaning another set of realisations, called hereafter P, whose size is nP:

Q^2 = 1 - \frac{\sum_{j=1}^{n_P} \left( y^j - \hat{y}^j \right)^2}{\sum_{j=1}^{n_P} \left( y^j - \bar{y}_P \right)^2},

where \bar{y}_P is the mean of the quantity of interest on P. This solution will be applied to the three techniques below, fixing nP to 2000.
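The three criteria above translate directly into a few lines of generic Python; the helper functions below are ours and simply transcribe the MSE, R2 and Q2 expressions.

import numpy as np

def mse(y, y_hat):
    """Mean square error on the training database L."""
    return float(np.mean((y - y_hat) ** 2))

def r2(y, y_hat):
    """R2 on the training database L."""
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

def q2(y_test, y_hat_test):
    """Predictivity coefficient Q2: same expression, evaluated on the test basis P."""
    return 1.0 - np.sum((y_test - y_hat_test) ** 2) / np.sum((y_test - np.mean(y_test)) ** 2)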

Finally, intermediate methods can be used to estimate the validity and the quality of the surrogate model, using the training database in a specific way. Among these, one can find the K-fold approach (which is used for instance in the training of neural network, see Sect. 6.2) or the Leave-One-Out approach (discussed for instance in Sect. 6.3).

The starting point will always be the loading of the training database in a TDataServer object which is the spine of Uranie.

6.1 Polynomial chaos expansion

6.1.1 Introduction

The concept of polynomial chaos development relies on the homogeneous chaos theory introduced by Wiener [27] and further developed by Cameron and Martin [28]. Using polynomial chaos (later referred to as PC) in numerical simulation has been brought back to light by Ghanem and Spanos [29]. The basic idea is that any square-integrable function can be written as

f(x) = \sum_{\alpha} f_{\alpha} \Psi_{\alpha}(x), \quad (17)

where {fα} are the PC coefficients and {Ψα} is the orthogonal polynomial basis. The index over which the sum is done, α, corresponds to a multi-index whose dimension is equal to the dimension of the input vector x (i.e. here nX) and whose L1 norm |\alpha|_1 = \sum_{i=1}^{n_X} \alpha_i is the degree of the resulting polynomial.

From this development, it becomes clear that a threshold must be chosen on the order of the polynomials used, as the number of coefficients grows quickly, following the rule n_C = \frac{(n_X + p)!}{n_X! \, p!}, where p is the cut-off chosen on the polynomial degree (for instance, nX = 2 and p = 3 already lead to nC = 10 coefficients).

6.1.2 Implementation in Uranie

In Uranie, the polynomial chaos expansion method is implemented through the NISP library [30], NISP standing for Non-Intrusive Spectral Projection. Originally written to deal with normal laws, for which the natural orthogonal basis is the Hermite polynomials, this decomposition can be applied to a few other distributions, using other orthogonal polynomial bases, such as Legendre (for uniform and log-uniform laws), Laguerre (for exponential laws), Jacobi (for beta laws), etc.

The PC coefficients are estimated through a regression method, simply based on a least-squares approximation: given the training database L, the vector of outputs y(nS) is computed with the code. Calling H(nS, nC) the correspondence matrix and β the coefficient vector, the regression coefficients are estimated by minimising \| y^{(n_S)} - H\beta \|^2, where

H = \left( \Psi_i(x^j) \right)_{j=1,\ldots,n_S;\ i=1,\ldots,n_C}, \quad (18)

\beta = (f_1, \ldots, f_{n_C})^T, \quad (19)

y^{(n_S)} = (y^1, \ldots, y^{n_S})^T. \quad (20)

This leads to the general form of the solution \hat{\beta} = (H^T H)^{-1} H^T y^{(n_S)}, which means that the estimations of the points using the surrogate model are given through \hat{y} = H\hat{\beta} = P y^{(n_S)}, where P = H (H^T H)^{-1} H^T. Here, the P matrix links directly the output variable and its estimation through the surrogate model: this formula is useful as it can be used to compute the Leave-One-Out uncertainty.

6.1.3 Application to the use-case

Figure 9 shows the distribution of the thermal gauge values (as defined in Eq. (7)) estimated by the surrogate model (\hat{\theta}) as a function of the ones computed by the complete model (θ) in a test database containing 2000 locations not used for the training. A nice agreement is found over the whole range.

In practice, the main steps used to get the PC expansion are gathered in the following block:
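The original listing is not reproduced here. Instead, the generic Python sketch below illustrates the regression approach of Section 6.1.2 on a Legendre basis (the natural basis for uniform inputs), with a stand-in analytic function playing the role of the code; it is not the NISP-based Uranie implementation and all names are ours.

import numpy as np
from itertools import product
from numpy.polynomial import legendre

def pc_design_matrix(x, degree):
    """Correspondence matrix H: multivariate Legendre terms up to total degree p."""
    n_s, n_x = x.shape
    alphas = [a for a in product(range(degree + 1), repeat=n_x) if sum(a) <= degree]
    cols = []
    for alpha in alphas:
        col = np.ones(n_s)
        for i, a in enumerate(alpha):
            c = np.zeros(a + 1)
            c[a] = 1.0
            col *= legendre.legval(x[:, i], c)   # univariate Legendre polynomial P_a
        cols.append(col)
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, size=(40, 2))            # inputs scaled to [-1, 1]
y_train = np.cos(x_train[:, 0]) * np.exp(-x_train[:, 1])  # stand-in for the code

H = pc_design_matrix(x_train, degree=4)
beta, *_ = np.linalg.lstsq(H, y_train, rcond=None)        # least-squares PC coefficients
y_hat = H @ beta                                          # surrogate estimation on L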

thumbnail Fig. 9

Distribution of the thermal gauge values estimated by the surrogate model (\hat{\theta}) as a function of the ones computed by the complete model (θ) in a test database, not used for the training (with the estimated Q2).

6.1.4 To go further

There are several points not discussed in this section but which can be of interest for users:

  • Based on the regression method explained in Section 6.1.2, Uranie also provides a way to estimate the best possible degree, relying on the Leave-One-Out estimation and limiting the range of tested degrees given the size of the learning database (nS).

  • PC coefficients can be estimated using an integration method (instead of the regression) which relies on specific design-of-experiments (usually sparse grids) that are often smaller, in terms of computations, than the regularly tensorised approaches [30].

  • When the PC development is done on the natural polynomial basis of the stochastic laws (listed in Sect. 6.1.2), the PC coefficients can be combined and transformed into Sobol coefficients (discussed in Sect. 7) providing both a surrogate model and a SA.

6.2 Artificial neural networks

The artificial neural networks (ANN) in Uranie are multilayer perceptrons (MLP) with one or more hidden layers (each containing nH neurons), and they are not limited to one output variable.

6.2.1 Introduction

The concept of a formal neuron was proposed after observing the way biological neurons are intrinsically connected [31]. This model is a simplification of the wide range of functions handled by a biological neuron, the formal one (displayed in Fig. 10) being requested to satisfy only the two following purposes:

  • summing the weighted input values, leading to an output value called the neuron's activity, a = \sum_{i=1}^{n} \omega_i x_i, where n is the number of inputs and the ωi are the synaptic weights of the neuron.

  • emitting a signal (whether the output level goes beyond a chosen threshold or not) s = f(a + θ) where f and θ are respectively the transfer function and the bias of the neuron.

One can introduce a shadow input defined as x0 = 1 (or −1), which is a way to consider the bias as another synaptic weight ω0 = θ. The resulting emitted signal is written as s = f\left(\sum_{i=0}^{n} \omega_i x_i\right). There is a large variety of possible transfer functions, and a usual starting point is the sigmoid family, defined with three real parameters, c, r, and k. Setting these parameters to peculiar values leads to known functions such as the hyperbolic tangent and the logistic function, shown in Figure 11 and defined respectively as f(x) = \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}} and f(x) = \frac{1}{1 + e^{-x}}.

The first artificial neural network conception was proposed under the name of perceptron [32]. The architecture of a neural network is the description of the organisation of the formal neurons and of the way they are connected together. There are two main topologies:

  • complete: all the neurons are connected to the others.

  • by layer: neurons on a layer are connected to all those on the previous and following layer.

thumbnail Fig. 10

Schematic description of a formal neuron [31]. (This figure is subject to copyright protection and is not covered by a Creative Common license.)

thumbnail Fig. 11

Example of transfer functions: the hyperbolic tangent (top) and the logistical one (bottom).

6.2.2 Implementation in Uranie

The general organisation of Uranie's ANN is detailed in three steps in the following and displayed in Figure 12. The first layer, where the vector of entries is stored, is called the input layer. The last one is called the output layer, while in between lies the single hidden layer, composed of nH hidden neurons.

The first step is the definition of the problem: what are the input variables under study, how many neurons will be created in the hidden layer (or layers, if there is more than one), and what is the chosen activation function.

The second step is the training of the ANN. Using the full database L, two mechanisms are run simultaneously:

  • the learning itself. By varying all the synaptic weights contained in the parameter Ξ, the aim is to produce the output set \hat{y}(\Xi) = \{\hat{y}^j, j = 1, \ldots, n_S\} that would be as close as possible to the outputs stored in L, and then keep the best configuration (denoted as Ξ*). The difference between the real output and the estimated one is measured through a loss function which could be, in the case of regression, a quadratic loss function such as L(y, \hat{y}) = \frac{1}{2} (y - \hat{y})^2.

From there, one can define the risk function R(Ξ) used to transform the search for the optimal parameters into a minimisation problem. The empirical risk function can indeed be written as R(\Xi) = \frac{1}{n_S} \sum_{j=1}^{n_S} L\left( y^j, \hat{y}^j(\Xi) \right).

  • the regularisation. Since the ANN is trained only on the L ensemble, the surrogate model could be trained too specifically for this sub-part of the input space, which might not be representative of the overall input space. To avoid this, the learning database is split into two sub-parts: one for the training itself (see the previous bullet), and one to prevent over-fitting. For every newly tested parameter set Ξ, the generalised error (computed as the average error over the set of points not used in the training procedure) is determined. While the risk function is expected to decrease as the number of optimisation steps increases, the generalised error also decreases at first, but then it should stabilise and even get worse. This flattening or worsening is used to stop the optimisation.

This procedure is stochastic: the splitting of the L ensemble is done using a random generator, and so is the initialisation of the synaptic weights of all the formal neurons. It is therefore important to export the constructed neural network, as running the same method twice will not give the same performance. Both the splitting and the initialisation are reproduced many times, meaning that the neural network resulting from Uranie is the best network among all the tested models.

thumbnail Fig. 12

Schematic description of the working flow of an ANN as used in Uranie.

6.2.3 Application to the use-case

Figure 13 shows the distribution of the thermal gauge values (as defined in Eq. (7)) estimated by the surrogate model (\hat{\theta}) as a function of the ones computed by the complete model (θ) in a test database containing 2000 points not used for the training. A nice agreement is found over the whole range.

In practice, the main steps used to get the neural network trained are gathered in the following block:
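The original listing is not reproduced here. As an illustration of the workflow described above (one hidden layer, hyperbolic-tangent transfer function, part of the learning database held back to monitor the generalised error), the sketch below uses scikit-learn's MLPRegressor rather than Uranie's ANN classes, with a stand-in training database; Uranie additionally repeats the splitting and initialisation many times and keeps the best network.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
x_train = rng.uniform(0.0, 1.0, size=(40, 2))       # stand-in for (x_ds, t_ds)
y_train = 1.0 - np.exp(-4.0 * x_train[:, 1])        # stand-in for the code output

# one hidden layer of 6 neurons with a tanh transfer function; early_stopping holds
# back a fraction of the learning database to monitor the generalised error
ann = MLPRegressor(hidden_layer_sizes=(6,), activation="tanh", solver="adam",
                   early_stopping=True, validation_fraction=0.2,
                   n_iter_no_change=50, max_iter=5000, random_state=0)
ann.fit(x_train, y_train)
print(ann.score(x_train, y_train))                  # R2 on the learning database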

thumbnail Fig. 13

Distribution of the thermal gauge values estimated by the surrogate model (\hat{\theta}) as a function of the ones computed by the complete model (θ) in a test database, not used for the training (with the estimated Q2).

6.2.4 To go further

There are several points not discussed but which can be of interest for users:

  • The learning step can be run in parallel on GPUs which can boost it considerably.

  • It is possible to construct an ensemble of neural networks, with one global ANN which embeds the results of all the others.

6.3 Kriging

First developed for geostatistics needs, the kriging method, named after D. Krige and also called the Gaussian process method (denoted GP hereafter), is another way to construct a surrogate model (a large description of its usage can be found in [33]). It has recently become popular thanks to a series of interesting features:

  • it provides a prediction along with its uncertainty, which can then be used to plan simulations and therefore improve predictions of the surrogate model;

  • it relies on relatively simple mathematical principle;

  • some of its hyper-parameters can be estimated in a Bayesian fashion to take into account a priori knowledge.

Kriging is a family of interpolation methods developed for the mining industry [34]. It uses information about the “spatial” correlation between observations to make predictions, along with a confidence interval, at new locations. In order to produce the prediction model, the main task is to produce a spatial correlation model, which is done by choosing a correlation function and searching for its optimal set of parameters, based on a specific criterion.

6.3.1 Introduction

The metamodelling relies on the assumption that the deterministic output y(x) can be written as a realisation of a Gaussian process Y(x) that can be decomposed as Y(x) = m(x) + Z(x), where m(x) is the deterministic part, called hereafter the deterministic trend, that describes the expectation of the process, and Z(x) is the stochastic part that allows the interpolation. This method can also take into account the uncertainty coming from the measurements. In this case, the previously written Y(x) is referred to as YReal(x) and the Gaussian process is then decomposed as Y_{Real}(x) = m(x) + Z(x) + \epsilon(x), where ϵ(x) is the uncertainty introduced by the measurement.

To construct the model from the training database L, a parametric correlation function can be chosen along with a deterministic trend (to bring more information on the behaviour of the output expectation). These steps define the list of hyper-parameters (Ξ) to be estimated by the training procedure. The best estimated hyper-parameters (Ξ*) then constitute the kriging model, which can be used to predict the values of new points.

To end this introduction, it might be useful to show a very general correlation function: the Matérn function, called hereafter Kν. It uses the Gamma function Γ and the modified Bessel function of order ν. This ν parameter describes the regularity (or smoothness) of the trajectory (the larger, the smoother) and should be greater than 0.5. In one dimension, with δx the distance, this function can be written as

c(\delta x) = \frac{2^{1-\nu}}{\Gamma(\nu)} \left( \frac{\sqrt{2\nu}\,|\delta x|}{l} \right)^{\nu} K_{\nu}\!\left( \frac{\sqrt{2\nu}\,|\delta x|}{l} \right). \quad (21)

In this function, l is the correlation length parameter, which has to be positive. The larger l is, the more Y is correlated between two fixed locations x1 and x2 and, hence, the more slowly the trajectories of Y vary with respect to x.
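The Matérn correlation of equation (21), in the common √(2ν) parametrisation assumed in the reconstruction above, can be evaluated with scipy as follows; this only illustrates the formula and is not the gpLib implementation.

import numpy as np
from scipy.special import gamma, kv

def matern(dx, length=1.0, nu=1.5):
    """Matern correlation for an array of distances dx (Eq. (21))."""
    dx = np.abs(np.asarray(dx, dtype=float))
    z = np.sqrt(2.0 * nu) * dx / length
    out = np.ones_like(z)                       # limit value 1 at zero distance
    nz = z > 0.0
    out[nz] = (2.0 ** (1.0 - nu) / gamma(nu)) * z[nz] ** nu * kv(nu, z[nz])
    return out

print(matern([0.0, 0.5, 1.0, 2.0], length=1.0, nu=1.5))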

6.3.2 Implementation in Uranie

The kriging approximation in Uranie is provided through the gpLib library. Based on the Gaussian process properties of the kriging [35], this library can estimate the hyper-parameters of the chosen correlation function in several possible ways, then build the prediction model.

The first step is to construct the model from the training database L, by choosing a parametric correlation function amongst the list below, in which l is the vector of correlation lengths and ν is the vector of regularity parameters:

  • Gauss: defined with one parameter (a correlation length) per dimension.

  • Isogauss: defined with a single correlation length shared by all dimensions.

  • Exponential: defined with two parameters per dimension, a correlation length and a power p. If p = 2, the function is equivalent to the Gaussian correlation function.

  • MaternI: the most general form, defined with two parameters (a correlation length and a smoothness) per dimension.

  • MaternII: defined as maternI, with only one smoothness (leading to nX + 1 parameters).

  • MaternIII: the distance is put in equation (21) instead of δx (leading to nX + 1 parameters).

  • Matern3/2: equivalent to maternIII, when ν = 3/2.

  • Matern5/2: equivalent to maternIII, when ν = 5/2.

  • Matern7/2: equivalent to maternIII, when ν = 7/2.

The next step is to find the optimal hyper-parameters ( Ξ*) of the correlation function and the deterministic trend (if one is prescribed), which can be done in Uranie by choosing:

  • an optimisation criterion (in the example: the log-likelihood function);

  • the size of the design-of-experiments used to define the best starting point for the optimisation;

  • an optimisation algorithm configured with a maximum number of runs.

Once the “best” starting point is found, the chosen optimisation algorithm is used to seek an optimal solution. Depending on various conditions, convergence can be difficult to achieve. Once done, the kriging surrogate model can be applied to the testing database to get the predicted output values and their corresponding uncertainties.

It is, however, possible, even before using a testing database, to check the specified covariance function at hand, using the Leave-One-Out technique (LOO). This method consists in predicting a value \hat{y}_{-i}(x^i) for yi using the rest of the known values of the training site, i.e. {y^j, j ≠ i}, for i = 1, …, nS. From there, it is possible to use the Leave-One-Out prediction vector \hat{y}_{LOO} and the expectation to calculate two criteria, the MSE and a quality criterion, defined as

\mathrm{MSE}_{LOO} = \frac{1}{n_S} \sum_{i=1}^{n_S} \left( y^i - \hat{y}_{-i}(x^i) \right)^2 \quad \text{and} \quad Q^2_{LOO} = 1 - \frac{\sum_{i=1}^{n_S} \left( y^i - \hat{y}_{-i}(x^i) \right)^2}{\sum_{i=1}^{n_S} \left( y^i - \bar{y} \right)^2}.

The first criterion should be close to 0 while, if the covariance function is correctly specified, the second one should be close to 1. Another possible test to check whether the model seems reasonable consists in using the predictive variance vector \sigma^2_{-i}(x^i) to look at the distribution of the ratio \left( y^i - \hat{y}_{-i}(x^i) \right) / \sigma_{-i}(x^i) for every point in the training site. A good modelling should result in a standard normal distribution.

6.3.3 Application to the use-case

The kriging technique has been applied twice to illustrate its principle, and the results are gathered in Figure 14. In the first case, it is used on a mono-dimensional thermal gauge evolution as a function of the dimensionless time, see Figure 14a. In this figure, the black points represent the training database while the blue and red ones are respectively the real output values and their estimated counterparts from the kriging model, using the testing database. A good agreement is found and confirmed by the MSE and Q2 criteria. The red band represents the uncertainty on the estimation. The kriging approximation has also been applied to the two-dimensional case x = (x_ds, t_ds), as for the ANN and PC, and a nice agreement is found over the whole range, as shown in Figure 14b.

In practice, the main steps used to get the kriging model are gathered in the following block:
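The original listing is not reproduced here. The sketch below goes through the same steps (choice of a Matérn correlation, optimisation of its hyper-parameters from several starting points, prediction with its uncertainty) using scikit-learn's Gaussian process tools rather than Uranie's gpLib, with a stand-in training database.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern

rng = np.random.default_rng(2)
x_train = rng.uniform(0.0, 1.0, size=(40, 2))      # stand-in for (x_ds, t_ds)
y_train = 1.0 - np.exp(-4.0 * x_train[:, 1])       # stand-in for the code output

kernel = ConstantKernel(1.0) * Matern(length_scale=[0.2, 0.2], nu=2.5)  # Matern5/2
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=10, normalize_y=True)
gp.fit(x_train, y_train)

x_test = rng.uniform(0.0, 1.0, size=(5, 2))
mean, std = gp.predict(x_test, return_std=True)    # prediction and its uncertainty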

thumbnail Fig. 14

Distribution of the thermal gauge values, in a test database, computed with the code (in blue) and estimated by a kriging model (in red) whose training database is shown as black points (a). Distribution of the thermal gauge values estimated by the surrogate model (\hat{\theta}) as a function of the ones computed by the complete model (θ) in a test database, not used for the training (with the estimated Q2), in (b).

6.3.4 To go further

There are several points not discussed in this section but which can be of interest for users:

  • other optimisation criteria. Thanks to the linear nature of the kriging model, the Leave-One-Out error has an analytic formulation;

  • on top of the deterministic trend, an a priori knowledge on the mean and variance of the trend parameters can be used to perform a Bayesian study;

  • one can take into account measurement errors when looking for the optimal hyper-parameters.

6.4 More methodology and ongoing investigations

On top of the already introduced surrogate models, Uranie can provide a few other solutions, among which:

  • the regression method;

  • the k-nearest neighbour method;

  • the kernel method.

There are several kinds of evolution under investigation for this module, based on the following observations:

  • the prediction of a neural network does not provide an uncertainty on the estimated parameters so far. It is possible to use a specific kind of Markov Chain (called Hamiltonian or Hybrid Markov Chain [36]) to determine the synaptic weights, not as single values, but as distributions, bringing a statistical interpretation to the provided estimations.

  • the development of “deep learning” capacities. Several possible developments are considered, such as the use of recurrent neural networks (RNN [37]) or deep belief networks (DBN [38]), but also the usage of dedicated external deep learning packages (TensorFlow [39], N2D2, etc.).

7 Sensitivity analysis

In this section, we will briefly review different ways to measure the sensitivity of the output of a model to its inputs, starting from a screening method. A brief recap of the concept of SA will be given, before investigating the evolution of the Sobol indexes of our use-case through time, for two dimensionless positions: xds = 0.3 and xds = 0.8. For more complete methodological reviews, see [40–42] for instance.

The starting point will always be the definition of the input variables as Gaussian-modelled objects, stored in the TDataServer.

7.1 Screening method

A screening method is a constrained version of dimensional reduction where a subset of the original variables is retained. There is more than one screening method allowing one to rank the impact of input variables with respect to one another, based for instance on dependence measurements [43] or on sequential bifurcation [44]. The one presented in this section is the Morris method.

7.1.1 Introduction to the Morris method

The Morris method [18] is an effective screening procedure that robustly extends the One-factor-At-a-Time protocol (OAT). Instead of varying every input parameter only once (leading then to a minimum of nX + 1 assessments of the code/function with an OAT technique), the Morris method repeats this OAT principle r times (in practice, between 5 and 10 times), each repetition being called a trajectory or a replica. Every trajectory begins from a randomly chosen starting point (in the input parameter space). In order to do so, it computes elementary effects (later on called EE), defined as
$$EE_i^t = \frac{y\left(x_1^t, \ldots, x_i^t + \Delta^t, \ldots, x_{n_X}^t\right) - y\left(x_1^t, \ldots, x_i^t, \ldots, x_{n_X}^t\right)}{\Delta^t},$$
where Δt is the chosen variation in the trajectory t. The resulting cost (in terms of assessments) is then r(nX + 1). This method is schematised in Figure 15 for a problem with three inputs. The hyper-volume is normalised and transformed into a unit hyper-cube. The resulting volume is discretised with the requested level (here, p = 6) and two trajectories are drawn for different values of the elementary variation.

With the repetition of this procedure r times, it is possible to compute basic statistics on the elementary effects, for every input parameter, as
$$\mu_i = \frac{1}{r}\sum_{t=1}^{r} EE_i^t \qquad (22)$$
and
$$\sigma_i = \sqrt{\frac{1}{r}\sum_{t=1}^{r}\left(EE_i^t - \mu_i\right)^2}. \qquad (23)$$
The variables μi and σi represent respectively the mean and standard deviation of the elementary effects of the ith input parameter. In the case where the model is not monotonic, some elementary effects may cancel each other out, resulting in a low μi value even for an important factor. For that reason, a revised version called μi* has been created and defined as the mean of the absolute values of the EEit [45]. The results are usually visualised in the (μ*, σ) plane.
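A minimal sketch of these statistics, written here with NumPy only (the model f is assumed to accept an (n, nX) array of normalised inputs and to return the n corresponding outputs; Uranie's own classes are not used, and each factor is only increased by Δ, which is a simplification of the original scheme):

```python
import numpy as np

def morris_trajectory(n_x, p, rng):
    """One OAT trajectory on a p-level grid of the unit hyper-cube."""
    delta = p / (2.0 * (p - 1))                    # recommended variation
    levels = np.arange(p - 1) / (p - 1)            # admissible starting levels
    start = rng.choice(levels[levels <= 1 - delta + 1e-12], size=n_x)
    traj, x = [start.copy()], start.copy()
    for i in rng.permutation(n_x):                 # move one factor at a time
        x = x.copy()
        x[i] += delta
        traj.append(x)
    return np.array(traj), delta

def morris_statistics(f, n_x, r=10, p=6, seed=42):
    """mu, mu_star and sigma of the elementary effects (Eqs. (22)-(23))."""
    rng = np.random.default_rng(seed)
    ee = np.empty((r, n_x))
    for t in range(r):
        traj, delta = morris_trajectory(n_x, p, rng)
        y = f(traj)                                # r*(n_x + 1) runs in total
        for step in range(n_x):
            # exactly one factor changes between two consecutive points
            i = int(np.argmax(traj[step + 1] != traj[step]))
            ee[t, i] = (y[step + 1] - y[step]) / delta
    return ee.mean(axis=0), np.abs(ee).mean(axis=0), ee.std(axis=0)
```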

Even though the numerical results are not easily interpretable, their values can be used to rank the effect of one or several inputs with respect to others, the point being to spot a certain number of inputs that can safely be thrown away, given the underlying uncertainty model assessed.

thumbnail Fig. 15

Schematic view of two trajectories drawn randomly in the discretised hyper-volume (with p = 6) for two different values of the elementary variation (the optimal one in black and the smallest one in pink).

7.1.2 Implementation in Uranie

The Morris method has been implemented as explained previously. The variation parameter Δ can be set by the user, but the default (recommended, because it is said to be optimal [46]) value is Δ = p/(2(p − 1)), where p is the number of levels that describes into how many intervals the range should be split (see Fig. 15 for an illustration).

7.1.3 Application to the use-case

The method has been applied to the thermal exchange model introduced in Section 4.1, which has been slightly changed here for illustration purposes: a new input variable has been added, with the explicit name “useless”. The idea is to show that it is possible to spot an input whose impact on the output can be considered so small that it can be discarded for the rest of the analysis.

Figures 16a and 16c represent the (μ*, σ) plane introduced in Section 7.1.1, respectively for xds = 0.3 and xds = 0.8, measured when the time is set to 572 s (about two thermal diffusion times). In both cases, it is possible to split the plot in three parts:

  • factors that have negligible effect on the output: both μ and σ are very small. The “useless” input enters this category.

  • factors that have linear effects, without interaction with other inputs: μ is larger (all variations have an impact) but σ is small (the impact is the same independently of the starting point). The massive thermal capacity is a very good illustration of this (as the thermal conductivity or the density at a smaller scale).

  • factors that have nonlinear effects and/or interactions with other inputs: both μ and σ are large. The thickness of the sheet is a perfect illustration of this.

Figures 16b and 16d, on the other hand, show the evolution of both μ and σ as a function of time for the different inputs. Here also, the “useless” input can clearly be spotted as negligible through time. Comparing all the other curves, one has to decide how many of the other inputs should be kept under consideration, given the time and memory consumption of a single calculation, but also the physics underlying this behaviour. For the thermal exchange example, considering that the code is fast and the number of inputs is small, the only variable dropped thanks to this method is the “useless” one.

In practice, the main steps used to obtain these results are gathered in the following block:

thumbnail Fig. 16

Measurement of the Morris µ* and σ for xds = 0.3 (top) and xds = 0.8 (bottom). The (a) and (c) parts represent this measurement for a single value of the time, while the (b) and (d) parts show the evolution of µ* (top pad) and σ (bottom pad) as a function of the time.

7.2 Introduction to Sobol indexes

If one can consider that the inputs are independent one from another, it is possible to study how the output variance changes when fixing Xi to a certain value xi*.

This variance, denoted by Var(Y|Xi = xi*), is called the conditional variance and depends on the chosen value xi*. In order to study this dependence, one should consider Var(Y|Xi), the conditional variance over all possible values, which is a random variable and, as such, can have an expectation, E(Var(Y|Xi)). As the theorem of total variance states that Var(Y) = Var(E(Y|Xi)) + E(Var(Y|Xi)), under the assumption that Xi and Y are two jointly distributed random variables, it becomes clear that the variance of the conditional expectation can be a good estimator of the sensitivity of the output to the specific input Xi. The most common and practical normalised index used to define this sensitivity is given by
$$S_i = \frac{\mathrm{Var}\left(E\left(Y|X_i\right)\right)}{\mathrm{Var}(Y)}. \qquad (24)$$

This normalised index is often called the first order Sobol index and quantifies the part of the variance of Y explained by Xi alone, but does not take into account the amount of variance explained by interactions between inputs. Such indexes can actually be defined for the crossed impact of this particular input with any other variable or combination of variables, leading to a set of 2^{nX} − 1 indexes to compute. A full estimation of all these coefficients is possible and would lead to a perfect breakdown of the output variance. It has been proposed by many authors in the literature and is referred to with many names, such as functional decomposition, ANOVA method (ANalysis Of VAriance), HDMR (High-Dimensional Model Representation), Sobol's decomposition, Hoeffding's decomposition... A much simpler index, which takes into account the interaction of an input Xi with all the other inputs, is called the total order Sobol index, or STi [47], and can be computed as
$$S_{T_i} = 1 - \frac{\mathrm{Var}\left(E\left(Y|X_{\sim i}\right)\right)}{\mathrm{Var}(Y)}, \qquad (25)$$
where ∼i represents the group of indexes that does not contain the i index. These two indexes (the first order and total order) are referred to as the Sobol coefficients. They satisfy several properties and their values can be interpreted in several ways:

  • ∑ Si ≤ 1: this should always be true.

  • ∑ Si = 1: the model is purely additive, or in other words, there are no interactions between the inputs and STi = Si for every input.

  • 1 − ∑ Si is an indicator of the presence of interactions.

  • STi − Si is a direct estimate of the interaction of the ith input with all the other factors.

7.3 The fast method

7.3.1 Introduction

The Fourier amplitude sensitivity test, known as FAST [48,49], provides an efficient way to estimate the first order sensitivity indexes. Its main advantage is that the evaluation of sensitivity can be carried out independently for each input factor, using just a dedicated set of runs, because all the terms in a Fourier expansion are mutually orthogonal. To do so, it transforms the nX-dimensional integration into a single-dimension one, by using the transformation $X_i(s) = G_i(\sin(\omega_i s))$, where, ideally, {ωi} is a set of angular frequencies said to be incommensurate (meaning that no frequency can be obtained by linear combination of the other ones when using integer coefficients) and Gi is a transformation function chosen in order to ensure that the variable is sampled according to the PDF of Xi. The parametric variable s evolves in [−∞, ∞] and the vector (X1(s), …, XnX(s)) traces out a curve that fills the entire nX-dimensional research volume. When both Gi and ωi are properly chosen, one can approximate the following relations:
$$E(Y) \approx \frac{1}{2\pi}\int_{-\pi}^{\pi} f(s)\,\mathrm{d}s \qquad (26)$$
$$\mathrm{Var}(Y) \approx \frac{1}{2\pi}\int_{-\pi}^{\pi} f^2(s)\,\mathrm{d}s - E(Y)^2 \approx 2\sum_{k=1}^{\infty}\left(A_k^2 + B_k^2\right) \qquad (27)$$
where f(s) = Y(X1(s), …, XnX(s)) and Ak and Bk are the Fourier coefficients:
$$A_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(s)\cos(ks)\,\mathrm{d}s \qquad (28)$$
$$B_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(s)\sin(ks)\,\mathrm{d}s \qquad (29)$$

7.3.2 Implementation in Uranie

This method is implemented in Uranie. The first order coefficient is obtained by estimating the variance for a fundamental ωi and its harmonics, which can be done by using the second half of equation (27), running over p instead of k and replacing the index by p ⋅ ωi. A cut-off M has to be chosen for the sum; it is called the interference factor. Knowing that, the contribution to the output variance of a certain frequency, i.e. the first order sensitivity index, can be expressed from equations (28) and (29) as
$$S_i = \frac{\sum_{p=1}^{M}\left(A_{p\omega_i}^2 + B_{p\omega_i}^2\right)}{\sum_{k=1}^{\infty}\left(A_k^2 + B_k^2\right)}. \qquad (30)$$
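As an illustration of equation (30) only (not of Uranie's implementation), the ratio can be estimated from a discrete Fourier transform of the output sampled along the search curve; it is assumed here that f(s) has been evaluated on n equally spaced values of s over one period and that the frequency omega_i is an integer:

```python
import numpy as np

def fast_first_order(y, omega_i, M=4):
    """First order index of Eq. (30) from the output sampled along the curve."""
    n = len(y)
    coeffs = np.fft.rfft(y) / n            # discrete analogue of Eqs. (28)-(29)
    power = np.abs(coeffs[1:]) ** 2        # proportional to A_k^2 + B_k^2
    total = power.sum()                    # proportional to Var(Y), Eq. (27)
    harmonics = [p * omega_i for p in range(1, M + 1) if p * omega_i <= len(power)]
    return sum(power[k - 1] for k in harmonics) / total
```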

7.3.3 Application to the use-case

Once applied to the thermal exchange model, the results are gathered in Figure 17, which shows the evolution of the first order coefficients, as a function of time, for the four input variables of the model. The histograms are stacked, which means that the contribution of every input can be seen as the area represented by the corresponding colour, while the upper limit of the histograms is the sum of all the contributions. Figures 17a and 17b show the evolution as a function of time, respectively for xds = 0.3 and xds = 0.8. The conclusions drawn here are in agreement with the ones from the Morris method in Section 7.1.3:

  • the impact of the density uncertainty is negligible;

  • the two most important contributions are coming from the massive thermal capacity and thickness uncertainties;

  • the relative importance of the impact of the massive thermal capacity uncertainty with respect to the thickness one seems to increase once we are getting closer to the centre of the sheet.

On the other hand, by investigating the results in Figure 16, the only possible statement about the impact of the thickness uncertainty was that this factor had either a nonlinear effect and/or interactions with other inputs. Here, as the sum of the first order coefficients is equal to 1 for both dimensionless positions, it seems reasonable to state that the model has no strong interaction but that the impact of the thickness uncertainty might be a nonlinear effect.

In practice, the main steps used to obtain these results are gathered in the following block:

thumbnail Fig. 17

Measurement of the first order coefficients, with the FAST method, for xds = 0.3 (a) and xds = 0.8 (b), as a function of time.

7.4 The Sobol method

7.4.1 Introduction

The Sobol method is a Monte-Carlo-based estimation that provides the first and total order sensitivity indexes (respectively introduced in Eqs. (24) and (25)) at the cost of requiring a total of nS(nX + 2) code assessments. Instead of generating a single design-of-experiments, the idea is to produce two of them, called M and N, whose size is set to nS × nX (both matrices being different and independent random samplings). A schematic view of this method, called the pick-and-freeze method, can be found in Figure 18.

The first step is to compute the first order sensitivity index, based on the measurement of the numerator, Var(E(Y|Xi)), which can be written E(E(Y|Xi)²) − E(E(Y|Xi))². Since the second term of this formula is just the square of the output expectation, the calculation of the first order indexes requires estimates of E(E(Y|Xi)²), Var(Y), and E(Y). The matrix M is passed to the code and nS assessments are done to get a vector of outputs (shown as the first line of Fig. 18). The ith column of N is then replaced by M's one (pick), creating a new Ni matrix which is provided to the code, for an additional cost of nS × nX assessments. This step is represented by the second line and the right part of the third line in Figure 18, and the total cost for the first order indexes estimation is nS(nX + 1) code assessments.

Finally, the total order indexes are computed starting from the right-hand side of equation (25), which looks very much like equation (24) used to compute the first order, but instead of a condition on having i known (frozen), it is the exact opposite: the condition is to freeze all the columns but i. This is doable as this is the only difference between the N and Ni matrices. The total order indexes are thus obtained by passing the N matrix to the code, leading to nS additional code assessments, as shown by the left part of the third line in Figure 18.
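A minimal NumPy sketch of this pick-and-freeze scheme is given below (the model f is assumed to map an (n, nX) array of inputs to n outputs; this illustrates the principle described above, not Uranie's own estimators):

```python
import numpy as np

def sobol_pick_and_freeze(f, m_mat, n_mat):
    """First and total order Sobol indexes from two independent samplings."""
    y_m, y_n = f(m_mat), f(n_mat)          # 2 * nS code assessments
    mean2, var = y_m.mean() * y_n.mean(), y_m.var()
    n_x = m_mat.shape[1]
    s_first, s_total = np.empty(n_x), np.empty(n_x)
    for i in range(n_x):
        n_i = n_mat.copy()
        n_i[:, i] = m_mat[:, i]            # pick column i from M, freeze the rest
        y_ni = f(n_i)                      # nS assessments per input, nS*nX in total
        s_first[i] = (np.mean(y_m * y_ni) - mean2) / var   # Var(E(Y|Xi)) / Var(Y)
        s_total[i] = 1.0 - (np.mean(y_n * y_ni) - mean2) / var
    return s_first, s_total
```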

thumbnail Fig. 18

Description of the method used to compute the Sobol coefficients from two matrices.

7.4.2 Implementation in Uranie

Different implementations of the pick-and-freeze method have been proposed throughout the literature. In Uranie, a single dedicated method gathers the results from several of them ([50,51], etc.). One of them in particular provides the coefficient values along with an estimation of their 95% confidence level [52]. By rewriting a Sobol coefficient as a correlation coefficient, one can get, under certain hypotheses, a confidence level using Fisher's transformation rule that applies to the determination of empirical correlation coefficients.

7.4.3 Application to the use-case

As for all the methods detailed in this paper, this one has been applied to our thermal exchange model to compute both the first and total order coefficients. The results are gathered in Figure 19, which shows the evolution of both the first and total order coefficients, as a function of time, for the four input variables of the model, along with their 95% confidence intervals. In Figures 19a and 19b, the upper part (the first order coefficients) and the lower one (the total order coefficients) are displayed, and a reasonable agreement between both orders can be found. It leads, once more, to the conclusion that the model has no interaction, as already stated in Section 7.3.3 (for both xds = 0.3 and xds = 0.8).

In order to compare these results with the ones presented in Figure 17, the first order coefficients estimated with the Sobol method are represented as stacked histograms in Figure 20. Here again, the contribution of every input can be seen as the area represented by the corresponding colour, while the upper limit of the histograms is the sum of all the contributions. Figures 20a and 20b show the evolution as a function of time, respectively for xds = 0.3 and xds = 0.8.

In practice, the main steps used to obtain these results are gathered in the following block:

thumbnail Fig. 19

Evolution of both the first (top pad) and total order (bottom pad) coefficients, as a function of time, with the Sobol method, for xds = 0.3 (a) and xds = 0.8 (b), along with their 95% confidence interval.

thumbnail Fig. 20

Measurement of the first order coefficients, with the Sobol method, for xds = 0.3 (a) and xds = 0.8 (b), as a function of time.

7.5 More methodology and ongoing investigations

These methods to estimate either a ranking or more quantitative indicators, such as the Sobol coefficients, have dedicated options to change the way the computations are done. On top of this, there are other ways to get sensitivity indexes already implemented in Uranie, such as

  • the regression either on values, to get standard regression coefficient (SRC) and partial correlation coefficient (PCC), or on ranks, to get standard regression rank coefficient (SRRC) and partial correlation rank coefficient (PRCC). All indexes can be estimated at once thanks to the algorithm implemented in Uranie [53];

  • another Fourier-based algorithm, relying on a different paradigm, called random balance design (RBD) [54,55].

There are also several other ways to get Sobol indexes, or other kind of sensitivity indexes currently under investigation:

  • Sobol indexes measured using replicated Latin hypercube design-of-experiments. This provides an estimation whose cost is independent of the input space size [56,57];

  • Shapley indexes [58] that allow one to cover correlated variables;

  • HSIC indexes [43] (for Hilbert–Schmidt independence criterion) that allow one to capture complex dependencies;

  • PLI indexes [59] (for Perturbed–Law based Indices) that quantify the impact of statistical law mis-knowledge on a specific quantity of interest.

8 Optimisation

Each optimisation study has its own peculiarities and one often has to grope one's way forward before finding an interesting solution. Most commonly, when dealing with optimisation, there are:

  • one or more objectives that one wants to minimise (or maximise);

  • decision variables that have a clear influence on the objectives;

  • some constraints either on the decision variables, on combinations of some of them, or on the objectives (defining the search domain).

For every problem, it is compulsory to choose an optimisation algorithm, which is a crucial part of the optimisation procedure. It is possible to divide these algorithms into two different categories:

  • local ones: they allow mono-criterion optimisation, with or without constraints. They are generally computationally efficient, but cannot be used in parallel and tend to be trapped in local optima;

  • global ones: they allow multi-objective optimisation, with or without constraints. They are suitable for problems with many local optima, but are computationally expensive. However, they are easily parallelisable.

Uranie offers several possibilities, either by interfacing external libraries, as already stated in Section 3.1, or through the use of a dedicated package, called Vizir, developed at CEA, whose aim is to offer evolutionary algorithms to solve multi-objective problems.

8.1 Single-objective optimisation problem

8.1.1 Introduction

In the case of a single criterion problem, the optimisation procedure is equivalent to the minimisation of a function ξ(x) which is called the cost function or the objective function. The optimisation leads to the determination of a minimum (that can be called an optimum) that can either be global (there is no x in the research volume such that ξ(x) < ξ(xmin)) or local (same relation as before, but only in the vicinity of xmin). In the case where a maximum should be determined, all the techniques remain the same, but the objective is changed (inverted) to get back to a minimum search. In order to do so, Uranie offers many solutions thanks to its external dependencies:

Minuit: it is ROOT's package to perform single-objective optimisation problem, without constraint. It provides two algorithms:

  • Simplex: it does not use the first derivatives, it is insensitive to local optima, but without guarantee of convergence.

  • Migrad: a fairly sophisticated gradient descent one that is able to escape from some local optima.

NLopt: it is a library for nonlinear optimisation providing algorithms for single-objective optimisation problems, with or without constraints. The list of algorithms implemented in Uranie can be found in [16], along with a small description of their principle, taken from NLopt [12].

8.1.2 Application to the use-case

In this section, a calibration of some of the parameters of our thermal exchange model is performed. Indeed, performing the calibration of a code comes down to finding the optimal set of parameters of the code which minimises the distance between reference values and computations from the code. In Uranie, two distances are currently implemented:

  • the root mean square deviation;

  • the weighted root mean square deviation.

The starting point is the following: one has done a set of 30 computations or measurements on a PTFE sheet without keeping notes of the experimental conditions. Given that the sheet is made out of PTFE, several intrinsic properties are known, such as the thermal conductivity (λ), the massive thermal capacity (Cρ), and the density (ρ). On the other hand, there are two remaining unknown parameters: the thickness of the sheet (e) and the thermal exchange coefficient value (h).

The Simplex algorithm (from the Minuit optimisation package) is used to minimise the root mean square deviation between the reference thermal gauge values and the ones obtained at every optimisation step once the parameters under study have been changed. Since this is a local algorithm, the starting point in the (e, h) plane has to be chosen beforehand (it is represented with a red marker in Fig. 21b). A default step value is set for both parameters, and the tolerance threshold is chosen along with a maximum number of calculations, both being the optimisation stopping criteria. The optimisation is run, leading to the results presented in Figure 21.
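The same calibration loop can be sketched with SciPy's Nelder-Mead simplex used as a stand-in for Minuit's Simplex; theta_model(e, h, t), the reference times t_ref and the reference gauge values theta_ref are assumed to come from the thermal exchange model and the 30 reference computations, and the starting point below is purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# theta_model, t_ref and theta_ref are assumed to be defined by the use-case
def rms_deviation(params):
    """Root mean square deviation between reference and computed gauge values."""
    e, h = params
    return np.sqrt(np.mean((theta_model(e, h, t_ref) - theta_ref) ** 2))

result = minimize(rms_deviation, x0=[0.02, 50.0], method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-6, "maxfev": 200})
e_best, h_best = result.x
```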

Figure 21a shows the evolution of the objective function with respect to the iterations of the optimisation algorithm. This evolution can be investigated along with the parameter variations shown in Figure 21b: from the chosen starting point in red, every optimisation step is represented with a black marker and linked to the rest of the already-performed estimations through a black line. The optimisation has stopped after 52 steps, leading to the best estimated values for our parameters of ebest = 0.01 and hbest = 100.076 (the blue point in Fig. 21b). These values are in agreement with the reference ones which have produced the original set of points (these values are shown in Tab. 1).

thumbnail Fig. 21

Evolution of the cost function as a function of the considered optimisation step (a). Evolution of the parameters from the initial point (red marker) to the optimal found one (blue marker) in the objective (e,h)-plane (b).

8.1.3 Possible limitations

This solution is very efficient, mainly because the code to be run is quick. In the case of a very time-, memory- and/or CPU-consuming code, this might have been difficult: as the Simplex algorithm is sequential, no parallelism is possible. There are more refined techniques to perform optimisation with fewer code assessments (using a surrogate model for instance), as introduced in Section 8.3.

8.2 Multi-objectives optimisation

8.2.1 Introduction

The optimisation problem, in the multi-objective case, can then be expressed as the minimisation of the function Ξ(x) = (ξ1(x), …, ξn(x)), where n is the number of objectives imposed and Ξ is the complete cost function. In some cases, the objectives can be combined, for instance by doing a weighted (or not) sum, resulting in a new objective over which the optimisation is performed. This is what is done in the example above, where the differences between the 30 output values in the reference set and the newly computed ones, for a given set of parameters, are combined into a single objective. Unlike this case, the multi-objective hypothesis is that no overall optimum can be determined when it is not possible to quantify a relation between the objectives. In this case, when two solutions x1 and x2 are possible, x1 dominates x2 if it does as well as the latter for all the objectives and strictly better for at least one. The optimisation goal is then to get a group of solutions that are said to be not dominated: no solution outside this group dominates them, and no solution within the group dominates another one. There is no best point, unless an external constraint or preference is imposed, usually with hindsight.

The group of not-dominated solutions is called the Pareto set and its representation in the objective space is called the Pareto front.4 Figure 22a shows an academic example of a pure analytic model with two objectives depending only on one variable. In this simple case, the Pareto set is shown in pink, as the area between the minima of both criteria. Now looking in the objective space in Figure 22b, all the solutions are shown in black and the corresponding Pareto front is, once more, depicted in pink.

thumbnail Fig. 22

Naive example of an imaginary optimisation case relying on two objectives that only depend on a single input variable.

8.2.2 The Vizir package

In Uranie, multi-objective optimisation issues are dealt with using the Vizir package, which gathers several solutions, all developed at CEA, regarding the considered evolutionary algorithms and the way to make them evolve (genetic or swarm algorithm, single or island evolution, etc.). In any case, the aim is to get a certain number N of solutions that describe correctly both the Pareto set and front, and the analysis can be described in a few key steps (shown in Fig. 23) and detailed below.

  1. Initialisation: Create randomly, only using the research space definition, a population of the requested size (N). The first evaluation is performed for all candidates, meaning that the criteria and constraints are tested and the results stored in a vector for every candidate. This step is represented as a black box in Figure 23, followed by the evaluation shown as the orange box.

  2. Ranking: The rank assigned to a candidate under study corresponds to the number of other candidates that dominate it. The best candidates then have rank 0 (they are not dominated), followed by those with rank 1, rank 2, etc. (a minimal sketch of this ranking is given after this list).

  3. Convergence test: This test (green box in Fig. 23) can reach three possible states:

    • all the tested candidates are not dominated. The algorithm has converged and the loop is stopped;

    • not all candidates are not dominated but the maximum number of evaluations has been reached. The algorithm has stopped without having converged. The optimisation should be restarted (maybe changing the configuration);

    • not all candidates are not dominated and the maximum number of evaluations has not been reached.

  4. Re-generation: In the latter case of the convergence test, a fraction λ of the lowest-ranked (i.e. best) candidates is kept (purple box in Fig. 23) and used to produce a new generation, the crossing procedure depending on the chosen algorithm (blue box in Fig. 23). This resulting population, made out of the selected (λN) and re-generated ((1 − λ)N) candidates, is re-evaluated.

These steps are more thoroughly explained in [16]. Even though this library can be used on its own for multi-objective optimisation, the example provided below will embed it in the context of efficient global optimisation (EGO).
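As referenced in the ranking step above, a minimal sketch of the non-domination rank (written with NumPy, independently of Vizir) could read:

```python
import numpy as np

def pareto_ranks(objectives):
    """Rank of each candidate = number of other candidates that dominate it.

    objectives: (N, n_obj) array, every objective being minimised.
    """
    n = objectives.shape[0]
    ranks = np.zeros(n, dtype=int)
    for i in range(n):
        dominates_i = (np.all(objectives <= objectives[i], axis=1)
                       & np.any(objectives < objectives[i], axis=1))
        ranks[i] = int(np.sum(dominates_i))
    return ranks      # rank 0 candidates form the current non-dominated set
```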

thumbnail Fig. 23

Schematic description of the needed steps of an optimisation procedure, when this one is performed with Vizir.

8.3 Efficient global optimisation

This section lays out another optimisation possibility: looking for a minimum using a global technique. The EGO technique [60] is first introduced and then applied to a simple mono-dimensional example that fully illustrates its principle. Finally, the calibration problem discussed in Section 8.1.2 is investigated with this technique, to help appreciate the pros and cons of the method.

8.3.1 Introduction

The EGO technique is the perfect illustration of method combination, introduced in Section 2.2, where all the building blocks are already implemented in Uranie. It might be useful in the case where the code is highly time/CPU/memory consuming (so one wants to limit the number of computations as much as possible) and where there might be several local minima in which one does not want to fall (preventing the use of local optimisation techniques such as gradient-based ones, for instance). Instead of having to choose a starting point (as done in Sect. 8.1.2), the idea is to provide a training database which is supposed to be representative of the problem, and whose size therefore cannot be too small with respect to the dimension of the ongoing analysis. Once done, a kriging model is constructed.

Let fmin = min(y1, …, ynt) be the current best function value (nt being the size of the kriging training database). It is true that before computing the output of the code for a given input vector x, we are uncertain about the value y(x). On the other hand, y(x) is not completely unknown as we can assimilate it to its realisation through the kriging surrogate model, which is provided along with its standard deviation s(x). With this hypothesis, it is possible to compute the probability of the real value to be below the actual minimum fmin. Different distances below the line y = fmin are associated with different density values. If we weight all these possible improvements by the associated density value, we get what we call the “expected improvement” (EI). The improvement for a given point x is I = max(fmin − Y, 0), which is a random variable because Y is a random variable that models our uncertainty about the code's value at x. To get the EI, we simply take the expected value of this random variable:
$$EI(x) = E\left[\max(f_{\min} - Y, 0)\right] = \left(f_{\min} - \hat{y}\right)\,\Phi\!\left(\frac{f_{\min} - \hat{y}}{s}\right) + s\,\varphi\!\left(\frac{f_{\min} - \hat{y}}{s}\right). \qquad (31)$$
In this equation, we use the previously introduced notations ŷ and s to denote the kriging predictor and its standard error at x, knowing that Y follows a normal law N(ŷ, s²). φ(.) and Φ(.) are respectively the standard normal density and its cumulative distribution. These two contributions bring a trade-off between exploring within a promising area and exploring where the uncertainty of the surrogate model is large. The latter contribution brings back the global aspect requested.
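A direct transcription of equation (31), as a sketch only (y_hat and s are the kriging prediction and standard error at the candidate points, f_min the current best value):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(y_hat, s, f_min):
    """Expected improvement of Eq. (31) for a minimisation problem."""
    y_hat, s = np.asarray(y_hat, dtype=float), np.asarray(s, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        u = (f_min - y_hat) / s
        ei = (f_min - y_hat) * norm.cdf(u) + s * norm.pdf(u)
    return np.where(s > 0.0, ei, 0.0)   # no improvement where the model is exact
```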

Once done, the next step consists in searching for the maximum of the EI, which is a positive function. This is done by using the genetic algorithm, as this search is actually an optimisation: the aim is to minimise the opposite of the EI criterion, providing the best candidate. The code is then run at this specific location, which is then included in the training database. The kriging model is re-built using the updated training database, and a new location is determined, following the exact same recipe.

8.3.2 Application to the thermal exchange model

The idea here is to apply the EGO method to a mono-dimensional problem to get plots that perfectly illustrate the way this procedure works. This will be done by working on a simple function which is an extension of our thermal exchange model. In Section 4.1.2, one of the first hypotheses made to get an analytic solution was to fix the thermal exchange coefficient h to a constant value. This is known to be a rough approximation, and a more rigorous way to describe the evolution of this coefficient through time is actually given by equation (32), whose behaviour is represented in Figure 24.

This equation depends on four parameters: the initial and asymptotic values of the thermal exchange coefficient (which can be measured respectively at the very beginning and after a very long time of a given experiment), and the coordinates of the maximum, whose measurement can be turned into an optimisation problem by looking for the minimum of the opposite function −h(t). This is exactly what has been done, and this analysis can be summarised in Figure 25 where, in the upper part of every pad, the blue line is the real “unknown” function, the black points are the training database, the red line is the approximation given by the kriging model along with its uncertainty, and the green point is the latest point included in the training database from the previous iteration. The bottom pad of every plot shows the inverted EI (−EI), which is minimised using the evolutionary algorithm to determine the next location to be included in the training database.

Going through Figure 25, one can find back the different steps described in Section 8.3.1.

Figure 25a: Starting point: the training database is made out of 4 locations, and the uncertainties on the model at several places are large. One of the locations is close to the real minimum and gives the current fmin. From the EI, the next location to be computed will be on the other side of the real minimum (where the estimated values from the kriging model are small and the uncertainties are large).

Figure 25b: One more location has been computed and added to the training database (it is the current new fmin). The updated kriging model has changed tremendously and the lowest boundary is the next location to be computed. This is the perfect illustration of the global aspect of this method: a gradient-based method would have kept going down to look for a smaller minimum, disregarding the fact that other parts of the phase space might be badly modelled.

Figure 25c: One more location has been computed and added to the training database upon which the kriging model has been updated. The next location to be computed (from the EI curve) is very close to the real minimum.

Figure 25d: Two more locations have been added to the training database (among which a new fmin in agreement with the real minimum). As the model uncertainty band for the locations around fmin is small, the next location to be computed is the highest boundary, which, once more, shows the global behaviour of this protocol.

Figure 25e: Once the highest boundary has been included, a few more locations around the real minimum will be tested.

Figure 25f: The method will start computing the closest location to fmin with the largest uncertainty.

The steps described in Figure 25 converge quickly toward an estimation of the parameters which is in very good agreement with the injected values. The only sensitive aspect is to be able to stop the method: one cannot use a tolerance criterion as it could prevent the exploration needed to conserve the global behaviour. The only remaining options are to set a maximum number of locations to be added, or to put a threshold on the Leave-One-Out Q2 criterion of the kriging model. This criterion does not really focus on the description of the minimum, but more on the overall agreement of the kriging model.

For illustration purposes only, the same method has been applied to the calibration problem introduced in Section 8.1.2. The idea is to compare the results given in Figure 21 to the ones presented in Figure 26. A training database of 20 locations has been generated and the algorithm has been run over 32 more computations in order to get the same number of code assessments (52, as already computed in Sect. 8.1.2).

Figure 26 shows the training database (black points) and the 32 new locations computed using the EGO algorithm, along with the best minimum found (red box). The newly computed locations are split into three categories: the first ten in purple, the following ten in green, and the last twelve in blue. Thanks to this splitting, it is possible to check that the optimisation exploration remains global, as there can be green and even blue new locations computed far away from the global minimum. However, most of the locations included by the EGO method are along a clear line, showing the shape of the minimum valley in the (e,h)-plane (that was also visible in Fig. 21b). Here are the different levels of agreement obtained on the parameters, as a function of the number of locations computed (and thus of code assessments):

  • 8 new locations: the accuracy obtained on e and h is respectively ∼0.3% and ∼1.5%.

  • 21 new locations: the accuracy obtained on e and h is respectively ∼0.01% and ∼0.3%.

  • 28 new locations: the accuracy obtained on e and h is respectively ∼0.01% and ∼0.2%.

thumbnail Fig. 24

Modelling of the evolution of the thermal exchange coefficient as a function of time.

thumbnail Fig. 25

Evolution of the kriging surrogate model in red, compared to the real (supposedly unknown) function in blue, as a function of time, for different numbers of locations in the training database. These are represented as black dots, except for the latest-introduced one, spotted in green. The bottom part of every plot represents the evolution of the EI.

thumbnail Fig. 26

Training database (black points) and the 32 new points computed using the EGO algorithm (purple, green and blue points) for the calibration problem discussed in Section 8.1.2. The best minimum found is represented as the red box.

8.4 More methodology and ongoing investigations

In this section, we have focused mainly on mono-objective issues, only introducing some basic concepts of multi-objective analysis through an overall description of evolutionary algorithms. This discussion is of course too shallow to encapsulate the difficulties that can arise either from having a large number of objectives or from requesting a small population to describe the Pareto front. In these cases, sorting the elements of a population using only the non-dominance criterion might be irrelevant, as many solutions will be non-comparable.

To cope with this, many solutions have been proposed, usually referred to as many-objective algorithms, already implemented in Uranie. These algorithms use other strategies (on top of the Pareto-dominance criterion) to sort and rank the solutions of a given population, such as:

  • the knee-point driven evolutionary algorithm [61], which compares each solution's distance with respect to the hyper-plane defined by the extremal solutions;

  • the IBEA algorithm (Indicator Based Evolutionary Algorithm [62]), which computes an indicator to compare all the solutions in a given set;

  • the MOEAD one (Multi Objective Evolutionary Algorithm based on Decomposition [63]), which decomposes the domain first, in order to get a better initial coverage of the ensemble of solutions.

Concerning the EGO estimation, different methods are being considered to distribute the computations and thus speed up the optimisation. The main difference between the examined solutions lies in the choice of either a synchronous or an asynchronous approach.

9 Perspectives

In this section, important properties not discussed so far are briefly introduced, from implementation aspects to methodological ones. A non-exhaustive list of developments considered by the team is also provided as a perspective to close this section.

9.1 Parallelism

The fact that the code under consideration might be very consuming in terms of time, CPU, and/or memory has been raised several times throughout this paper. This, along with the fact that some methods might also need a great deal of internal computations, even without the external code, shows the clear necessity of having parallelism strategies to benefit from the recent hardware paradigm: the number of CPUs is no longer a limiting criterion while, on the other hand, CPU frequency has reached a plateau and memory access tends to become problematic [64].

In order to deal with this, the Uranie team is working on two different aspects:

Code assessments distribution: this is the way an external code is called. Several strategies are implemented in Uranie:

  • forking the code on a local node or on a list of resources declared as available at initialisation. The code is duplicated and the Launcher module (see Fig. 1) distributes the computations.

  • shared-memory distribution. This technique uses the pthread protocol to distribute the code assessments through the Relauncher module (see Fig. 1). This, as all shared-memory strategies, can suffer from race conditions (which ultimately depends on whether the code used is thread-safe).

  • split-memory distribution. This technique relies on an mpirun distribution of the computation through the Relauncher module (see Fig. 1). This has the advantage of not suffering from race conditions, but the variables used can only be numerical-based (Uranie can also handle string variables as input/output of an external code).

Internal calculation distribution: mainly available for the k-nearest neighbour and ANN methods, the idea is to use the very high number of GPU cores available in a standard graphics card to perform internal computations (in the already-introduced case, the training of the synaptic weights for instance). This is done using CUDA, provided by the NVIDIA company [14].

The proper use of these solutions allows one to benefit from the structure of the new computers and of the grid upon which the Uranie platform can be installed.

9.2 Ongoing methodological implementations

This paper has focused on some classical aspects of uncertainty quantification, not discussing other important methods of the general VVUQ5 approach. On top of the already mentioned possible evolutions (see Sects. 6.4, 7.5, and 8.4), there are many possible fields of interest that the development team is investigating.

Bayesian calibration or inference is often an important step of code validation [65]. From an “a priori” law set on parameters, one can use residuals between physical quantities and simulations to get back the “a posteriori” laws of uncertain inputs [66,67], through Markov Chain usage, such as a Gibbs [68] or Metropolis-Hastings [69] one.

A reliability analysis aims at quantifying the safety of a structure by estimating the frontier between the safe domain and the failure one. So far, the Uranie platform provides the FORM-SORM methodology [70], whose aim is to estimate the distance in the standard space between the origin and the most probable failure point [71]. Various improvements are currently under consideration, either based on Markov Chains [72] or on adaptive constructions, for instance using kriging techniques [73].

The possibility of comparing empirical distributions to reference ones is also provided by the Uranie platform, for instance with different tests of goodness of fit, such as Kolmogorov–Smirnov [74], Anderson–Darling [75], Cramer–Von Mises [76], etc.

Finally, the use-case introduced in Section 4 has also been chosen because its output can be considered as functional. The Uranie platform can deal with these functional variables by discretising them and, for instance, studying the sensitivity of the output for every time step (as done here in Sect. 7). Other approaches than discretisation are considered [77] and might quickly become of interest, as the Uranie platform is, for instance, part of a European project called Escape2 on uncertainty methodology in weather forecast prediction.

This list of leads is not exhaustive, and the priority with which they might be considered can also depend on the needs and requests of our community.

10 Conclusion

Uranie is an open-source platform which has been developed over the last ten years to make state-of-the-art methodological developments on uncertainty quantification from the academic world available for our industrial problems, but it can be used whatever the scientific field under consideration. To do so, it relies on a small number of external dependencies, providing both a Python and a C++ (compiled on the fly) interface that can be accessed either through simple scripts or through the Jupyter-notebook web application. It provides ways of launching various kinds of interactive functions and codes, with a black-box approach (to guarantee nonintrusive results), and even of linking them in a complete computational chain. As the computational budget is often a limit when considering UQ analysis, different computation distribution techniques have been implemented in Uranie, and a large number of visualisation tools are also provided along with the platform.

This paper introduces a pedagogical example, based on the thermal exchange between a plane sheet and a fluid at constant temperature, which is used to address different problems linked to uncertainty quantification: uncertainty propagation, SA, surrogate model construction, optimisation issues, etc. For every step, several methodologies have been introduced, first theoretically, then in the way they are implemented, and finally applied to the use-case using the Uranie platform. Possible ways of improvement (both technical and methodological) are also introduced, as considered so far by the team, bearing in mind that this list is not exhaustive and might change in the next few years, considering the needs and requests of the community.

Author contribution statement

Jean-Baptiste Blanchard has performed this study and has written the article. This analysis has been conducted with the Uranie platform, developed by Fabrice Gaudier, Jean-Marc Martinez, Gilles Arnaud and Guillaume Damblin. They all have contributed to this work by providing support and expert viewpoints.

References

  1. C.G. Bucher, H.J. Pradlwarter, G.I. Schuëller, Computational Stochastic Structural Analysis (COSSAN) (Springer, Berlin, Heidelberg, 1991), pp. 301–315
  2. B.M. Adams, W. Bohnhoff, K. Dalbey, J. Eddy, M. Eldred, D. Gay, K. Haskell, P.D. Hough, L.P. Swiler, Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 5.0 users manual, Technical Report SAND2010-2183, Sandia National Laboratories
  3. M. Baudin, R. Lebrun, B. Iooss, A.-L. Popelin, Openturns: An industrial software for uncertainty quantification in simulation, in Handbook of Uncertainty Quantification (Springer, Cham, 2017), pp. 2001–2038
  4. S. Marelli, B. Sudret, UQLab: A framework for uncertainty quantification in Matlab, in Proceedings, SIAM Conference on Uncertainty Quantification, Savannah, GA, USA (ETH-Zürich, 2014), pp. 2554–2563
  5. R. Brun, F. Rademakers, Nucl. Instrum. Methods A389, 81 (1997)
  6. E. de Rocquigny, N. Devictor, S. Tarantola, Uncertainty in Industrial Practice: A Guide to Quantitative Uncertainty Management (John Wiley & Sons, NJ, 2008)
  7. K. Martin, B. Hoffman, IEEE Software 24, 46 (2007)
  8. T. Kluyver, B. Ragan-Kelley, F. Pérez, B. Granger, M. Bussonnier, J. Frederic, K. Kelley, J. Hamrick, J. Grout, S. Corlay, P. Ivanov, D. Avila, S. Abdalla, C. Willing, Jupyter notebooks − a publishing format for reproducible computational workflows, in Positioning and Power in Academic Publishing: Players, Agents and Agendas, edited by F. Loizides, B. Schmidt (IOS Press, Amsterdam, 2016), pp. 87–90
  9. M. Feathers, B. Lepilleur, Cppunit cookbook (2002), http://cppunit.sourceforge.net/doc/1.8.0/cppunit_cookbook.html
  10. J.C. Meza, R.A. Oliva, P.D. Hough, P.J. Williams, ACM Trans. Math. Softw. 33, 12 (2007)
  11. M. Frigo, S.G. Johnson, Proc. IEEE 93, 216 (2005) (special issue on “Program Generation, Optimization, and Platform Adaptation”)
  12. S.G. Johnson, The nlopt nonlinear-optimization package, 2008, http://ab-initio.mit.edu/nlopt
  13. E. Gabriel, G.E. Fagg, G. Bosilca, T. Angskun, J.J. Dongarra, J.M. Squyres, V. Sahay, P. Kambadur, B. Barrett, A. Lumsdaine, R.H. Castain, D.J. Daniel, R.L. Graham, T.S. Woodall, Open MPI: Goals, concept, and design of a next generation MPI implementation, in Proceedings, 11th European PVM/MPI Users' Group Meeting, Budapest, Hungary, 2004, pp. 97–104
  14. C. Nvidia, Nvidia Corporation 120, 8 (2011)
  15. D. Van Heesch, Doxygen: Source code documentation generator tool, 2008, http://www.doxygen.org
  16. J.-B. Blanchard, Methodological reference guide for uranie v3.11.0, Technical Report, CEA, DEN/DANS/DM2S/STMF/LGLS/RT/17-006/A, updated version provided in the source of the Uranie platform for every new release
  17. M.D. McKay, R.J. Beckman, W.J. Conover, Technometrics 42, 55 (2000)
  18. D. Morris, J. Mitchell, J. Stat. Plan. Inference 43, 381 (1995)
  19. L. Pronzato, W. Muller, Stat. Comput. 22, 681 (2012)
  20. G. Damblin, M. Couplet, B. Iooss, J. Simul. 7, 276 (2013)
  21. R.L. Iman, W.J. Conover, Commun. Stat. Simul. Comput. 11, 311 (1982)
  22. J.H. Halton, Commun. ACM 7, 701 (1964)
  23. I. Sobol', USSR Comput. Math. Math. Phys. 7, 86 (1967)
  24. K. Petras, Numer. Algorithms 26, 93 (2001)
  25. A. De Crécy, P. Bazin, Determination of the uncertainties of the constitutive relationship of the CATHARE 2 code (M&C, 2001)
  26. K.-T. Fang, R. Li, A. Sudjianto, Design and Modeling for Computer Experiments, Computer Science & Data Analysis Series (Chapman & Hall/CRC, Boca Raton, 2005)
  27. N. Wiener, Am. J. Math. 60, 897 (1938)
  28. R.H. Cameron, W.T. Martin, Ann. Math. 48, 385 (1947)
  29. R.G. Ghanem, P.D. Spanos, Stochastic Finite Elements: A Spectral Approach (Springer-Verlag, New York, 1991)
  30. M. Baudin, J.-M. Martinez, Polynômes de chaos sous Scilab via la librairie NISP, in 42èmes Journées de Statistique, Marseille, France, 2010, https://hal.inria.fr/inria-00494680
  31. W. McCulloch, W. Pitts, Bull. Math. Biophys. 5, 115 (1943)
  32. F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Report (Cornell Aeronautical Laboratory) (Spartan Books, Washington DC, 1962)
  33. C.E. Rasmussen, C.K. Williams, Gaussian Process for Machine Learning (MIT Press, MA, 2006)
  34. G. Matheron, La théorie des variables régionalisées, et ses applications, Fasicule 5 in Les Cahiers du Centre de Morphologie Mathématique de Fontainebleau
  35. F. Bachoc, Estimation paramétrique de la fonction de covariance dans le modèle de krigeage par processus gaussiens: application à la quantification des incertitues en simulation numérique, Ph.D. thesis, Mathématiques appliquées, Paris 7, thèse de doctorat dirigée par Garnier, Josselin, 2013
  36. R.M. Neal et al., Mcmc using hamiltonian dynamics, in Handbook of Markov Chain Monte Carlo 2 (11) (Chapman and Hall/CRC)
  37. T. Robinson, F. Fallside, Comput. Speech Lang. 5, 259 (1991)
  38. G.E. Hinton, Prog. Brain Res. 165, 535 (2007)
  39. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G.S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, X. Zheng, TensorFlow: Large-scale machine learning on heterogeneous systems, software, available from tensorflow.org (2015), https://www.tensorflow.org/
  40. B. Iooss, P. Lemaître, A review on global sensitivity analysis methods, in Uncertainty Management in Simulation-Optimization of Complex Systems: Algorithms and Applications, edited by C. Meloni, G. Dellino (Springer, NY, 2015), pp. 101–122
  41. R. Ghanem, D. Higdon, H. Owhadi (Eds.), Springer Handbook on Uncertainty Quantification (Springer, Cham, 2017)
  42. A. Saltelli, K. Chan, E. Scott, Sensitivity Analysis (Wiley, New York, 2008)
  43. S. Da Veiga, J. Stat. Comput. Simul. 85, 1283 (2015)
  44. B. Bettonvil, J.P. Kleijnen, Eur. J. Oper. Res. 96, 180 (1997)
  45. A. Saltelli, S. Tarantola, F. Campolongo, M. Ratto, T. Andres, J. Cariboni, D. Gatelli, M. Saisana, Global Sensitivity Analysis: The Primer (Wiley, New York, 2008)
  46. A. Saltelli, S. Tarantola, F. Campolongo, M. Ratto, Sensitivity Analysis in Practice: A Guide to Assessing Scientific Models (Wiley, New York, 2004)
  47. T. Homma, A. Saltelli, Reliab. Eng. Syst. Saf. 52, 1 (1996)
  48. G. McRae, J. Tilden, J. Seinfeld, Comput. Chem. Eng. 6, 15 (1982)
  49. A. Saltelli, R. Bolado, Comput. Stat. Data Anal. 26, 445 (1998)
  50. A. Saltelli, Comput. Phys. Commun. 145, 280 (2002)
  51. H. Monod, C. Naud, D. Makowski, Uncertainty and sensitivity analysis for crop models, in Working with Dynamic Crop Models: Evaluation, Analysis, Parameterization, edited by D. Wallach, D. Makowski, J.W. Jones (Elsevier, Amsterdam, 2006)
  52. J.-M. Martinez, Analyse de sensibilité globale par décomposition de la variance, Technical Report, GdR Ondes et Mascot Num, institut Henri Poincaré, 2011
  53. R. Iman, M. Shortencarier, J. Johnson, FORTRAN 77 program and users guide for the calculation of partial correlation and standardized regression coefficients, Sandia National Laboratories, 1985
  54. S. Tarantola, D. Gatelli, T. Mara, Reliab. Eng. Syst. Saf. 91, 717 (2006)
  55. J.-Y. Tissot, C. Prieur, Reliab. Eng. Syst. Saf. 107, 205 (2012)
  56. M.D. McKay, J.D. Morrison, S.C. Upton et al., Comput. Phys. Commun. 117, 44 (1999)
  57. T. Alex Mara, O. Rakoto Joseph, J. Stat. Comput. Simul. 78, 167 (2008)
  58. A.B. Owen, SIAM/ASA J. Uncertain. Quantif. 2, 245 (2014)
  59. P. Lemaître, E. Sergienko, A. Arnaud, N. Bousquet, F. Gamboa, B. Iooss, J. Stat. Comput. Simul. 85, 1200 (2015)
  60. D.R. Jones, M. Schonlau, W.J. Welch, J. Glob. Optim. 13, 455 (1998)
  61. X. Zhang, Y. Tian, Y. Jin, IEEE Trans. Evol. Comput. 19, 761 (2015)
  62. E. Zitzler, S. Künzli, Indicator-based selection in multiobjective search, in International Conference on Parallel Problem Solving from Nature (Springer, Heidelberg, 2004), pp. 832–842
  63. Q. Zhang, H. Li, IEEE Trans. Evol. Comput. 11, 712 (2007)
  64. U. Drepper, What Every Programmer Should Know About Memory (2007)
  65. M.J. Bayarri, J.O. Berger, R. Paulo, J. Sacks, J.A. Cafeo, J. Cavendish, C.-H. Lin, J. Tu, Technometrics 49, 138 (2007)
  66. M.C. Kennedy, A. O’Hagan, J. R. Stat. Soc. 63, 425 (2001)
  67. F. Bachoc, G. Bois, J. Garnier, J.-M. Martinez, Nucl. Sci. Eng. 176, 81 (2014)
  68. G. Casella, E.I. George, Am. Stat. 46, 167 (1992)
  69. S. Chib, E. Greenberg, Am. Stat. 49, 327 (1995)
  70. Y.-G. Zhao, T. Ono, Struct. Saf. 21, 95 (1999)
  71. A.M. Hasofer, N.C. Lind, J. Eng. Mech. Div. 100, 111 (1974)
  72. S.-K. Au, J.L. Beck, Probab. Eng. Mech. 16, 263 (2001)
  73. X. Huang, J. Chen, H. Zhu, Struct. Saf. 59, 86 (2016)
  74. A.N. Kolmogorov, Giornale dell'Istituto Italiano degli Attuari 4, 83 (1933)
  75. T.W. Anderson, D.A. Darling, Ann. Math. Stat. 23, 193 (1952)
  76. T.W. Anderson, Ann. Math. Stat. 33, 1148 (1962)
  77. S. Nanty, C. Helbert, A. Marrel, N. Pérot, C. Prieur, Comput. Stat. 32, 559 (2017)

1. Alternative Energies and Atomic Energy Commission, Saclay, France. The nuclear energy division is usually referred to as Den.

2. European Organisation for Nuclear Research, Geneva, Switzerland.

4. Because of the discretisation, the obtained group is usually an approximation of the Pareto set.

5. Validation, verification and uncertainty quantification.

Cite this article as: Jean-Baptiste Blanchard, Guillaume Damblin, Jean-Marc Martinez, Gilles Arnaud, Fabrice Gaudier, The Uranie platform: an open-source software for optimisation, meta-modelling and uncertainty analysis, EPJ Nuclear Sci. Technol. 5, 4 (2019)

All Tables

Table 1: Summary of both PTFE and iron characteristics. The last column shows the relative uncertainty found in the literature (or chosen, in the case of the width) for the iron case. The same relative uncertainties are also applied to the PTFE case.

Table 2: List of the continuous statistical laws available in Uranie, along with their corresponding adjustable parameters.

Table 3: Summary of the PTFE uncertain physical parameters and their corresponding uncertainties. The absolute values of the uncertainties are computed from the values in Table 1 (where units are also provided).

All Figures

Fig. 1: Organisation of the Uranie modules (green boxes) in terms of inter-dependencies. The external dependencies are shown as light purple boxes.

Fig. 2: Diagram representing, in a few boxes, the different steps that can compose an uncertainty propagation or quantification analysis [6]. (This figure is subject to copyright protection and is not covered by a Creative Commons license.)

Fig. 3: Simplified sketch of the thermal exchange problem.

Fig. 4: Evolution of the thermal gauge as a function of either the position for different time steps (a,b), the time for different positions (c,d), or both parameters (e,f), for PTFE (left) and iron (right).

Fig. 5: Example of PDF (a), CDF (b), and inverse-CDF (c) for a uniform law (defined between −1 and 1, in black), a normal law (with μ = 0 and σ = 0.5, in red) and a Gumbel-max one (with μ = −0.4 and β = 0.4, in blue).

Fig. 6: Drawing of the design-of-experiments for two uniformly distributed variables x1 and x2, with an LHS sampling (a), an SRS one (b), and a maximin LHS one (c). Deterministic samplings are also shown with the Halton sequence (d), the Sobol one (e), and a Petras sparse grid (f).

Fig. 7: Example of correlation introduced between two uniform distributions, using either the Spearman coefficient (a) or the Ali–Mikhail–Haq copula (b).

Fig. 8: Evolution of the thermal gauge (top pad) and its standard deviation (bottom pad), as a function of the absolute time, for four different values of depth within the sheet.

Fig. 9: Distribution of the thermal gauge values estimated by the surrogate model (θ̂) as a function of the ones computed by the complete model (θ) in a test database, not used for the training (with the estimated Q2).

Fig. 10: Schematic description of a formal neuron [31]. (This figure is subject to copyright protection and is not covered by a Creative Commons license.)

Fig. 11: Example of transfer functions: the hyperbolic tangent (top) and the logistic one (bottom).

Fig. 12: Schematic description of the workflow of an ANN as used in Uranie.

Fig. 13: Distribution of the thermal gauge values estimated by the surrogate model (θ̂) as a function of the ones computed by the complete model (θ) in a test database, not used for the training (with the estimated Q2).

Fig. 14: Distribution of the thermal gauge values, in a test database, computed with the code (in blue) and estimated by a kriging model (in red) whose training database is shown as black points (a). Distribution of the thermal gauge values estimated by the surrogate model (θ̂) as a function of the ones computed by the complete model (θ) in a test database, not used for the training (with the estimated Q2, in (b)).

Fig. 15: Schematic view of two trajectories drawn randomly in the discretised hyper-volume (with p = 6) for two different values of the elementary variation (the optimal one in black and the smallest one in pink).

Fig. 16: Measurement of the Morris µ* and σ for xds = 0.3 (top) and xds = 0.8 (bottom). The (a) and (c) parts represent this measurement for a single value of the time, while the (b) and (d) parts show the evolution of µ* (top pad) and σ (bottom pad) as a function of the time.

Fig. 17: Measurement of the first-order coefficients, with the FAST method, for xds = 0.3 (a) and xds = 0.8 (b), as a function of time.

Fig. 18: Description of the method used to compute the Sobol coefficients from two matrices.

Fig. 19: Evolution of both the first-order (top pad) and total-order (bottom pad) coefficients, as a function of time, with the Sobol method, for xds = 0.3 (a) and xds = 0.8 (b), along with their 95% confidence intervals.

Fig. 20: Measurement of the first-order coefficients, with the Sobol method, for xds = 0.3 (a) and xds = 0.8 (b), as a function of time.

Fig. 21: Evolution of the cost function as a function of the considered optimisation step (a). Evolution of the parameters from the initial point (red marker) to the optimal one found (blue marker) in the objective (e,h)-plane (b).

Fig. 22: Naive example of an imaginary optimisation case relying on two objectives that only depend on a single input variable.

Fig. 23: Schematic description of the steps needed in an optimisation procedure when it is performed with Vizir.

Fig. 24: Modelling of the evolution of the thermal exchange coefficient as a function of time.

Fig. 25: Evolution of the kriging surrogate model (in red), compared to the real (supposedly unknown) function (in blue), as a function of time, for different numbers of locations in the training database. These are represented as black dots, except for the latest-introduced one, highlighted in green. The bottom part of every plot represents the evolution of the EI.

Fig. 26: Training database (black points) and the 32 new points computed using the EGO algorithm (purple, green and blue points) for the calibration problem discussed in Section 8.1.2. The best minimum found is represented as the red box.
