EPJ Nuclear Sci. Technol., Volume 3, 2017
Article Number: 23
Number of pages: 13
DOI: https://doi.org/10.1051/epjn/2017017
Regular Article
Probabilistic risk bounds for the characterization of radiological contamination
Géraud Blatman, Thibault Delage, Bertrand Iooss⁎ and Nadia Pérot
1 EDF Lab Les Renardières, Materials and Mechanics of Components Department, 77818 Moret-sur-Loing, France
2 EDF Lab Chatou, Department of Performance, Industrial Risk, Monitoring for Maintenance and Operations, 78401 Chatou, France
3 CEA Nuclear Energy Division, Centre de Cadarache, 13108 Saint-Paul-lès-Durance, France
⁎ e-mail: bertrand.iooss@edf.fr
Received: 9 December 2016 / Received in final form: 26 May 2017 / Accepted: 19 June 2017
Published online: 10 July 2017
The radiological characterization of contaminated elements (walls, grounds, objects) from nuclear facilities often suffers from too few measurements. In order to determine risk prediction bounds on the level of contamination, some classic statistical methods may therefore be unsuitable, as they rely upon strong assumptions (e.g., that the underlying distribution is Gaussian) which cannot be verified. Considering that a set of measurements or their average value come from a Gaussian distribution can sometimes lead to erroneous conclusions, possibly not sufficiently conservative. This paper presents several alternative statistical approaches which are based on much weaker hypotheses than the Gaussian one, which result from general probabilistic inequalities and order-statistic based formulas. Given a data sample, these inequalities make it possible to derive prediction intervals for a random variable which can be directly interpreted as probabilistic risk bounds. For the sake of validation, they are first applied to simulated data generated from several known theoretical distributions. Then, the proposed methods are applied to two data sets obtained from real radiological contamination measurements.
© G. Blatman et al., published by EDP Sciences, 2017
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1 Introduction
In nuclear engineering, as in most industrial domains, one often faces difficult decision-making processes, especially when safety issues are involved. In order to take into account, in a rigorous and consistent way, all environmental uncertainties in a decision process, a probabilistic framework offers invaluable help. For example, typical non-exhaustive sampling of a process or object induces uncertainty that needs to be understood in order to control its effects.
In particular, the estimation of risk prediction bounds is an important element of a comprehensive probabilistic risk assessment of radioactive elements (e.g., walls, grounds, objects) derived from the nuclear industry. The radiological characterization of contaminated elements in a nuclear facility may be difficult because of practical and/or strong operational constraints, often limiting the number of possible measurements. Nevertheless, the estimation of radioactivity levels is essential to assess the risk of exposure of nuclear dismantling operators, as well as the risk of environmental contamination [1].
Performing a realistic and reasonable risk estimate is essential, not only from an economic perspective, but also for public acceptance. On the one hand, this is important information for decision makers, allowing them to implement suitable measures that are financially acceptable. On the other hand, overestimating the risk often turns out to be counterproductive, inducing much uncertainty and a loss of public confidence in the authorities. We cite, for example, the management of the 2009 flu epidemic (H1N1), where many countries' authorities had a disproportionate reaction with regard to the real risk, due to the World Health Organization's (WHO) poorly managed forecasts [2]. This led to a public loss of confidence in the authorities, in vaccines, and in pharmaceutical companies, as well as an enormous waste of public resources.
Drawing up a radiological inventory based on a small number of measurements (e.g., in the order of 10) is a particularly difficult statistical problem. The shortage of data can lead either to a coarse over-estimation, which has a large impact on economic cost, or to a coarse under-estimation, which has an unacceptable impact in terms of public health and environment protection. In the past, several attempts have been made to deal with such problems. For instance, Perot and Iooss [3] focused on the problem of defining a sampling strategy and assessing the representativeness of the small samples at hand. In the context of irradiated graphite waste, Poncet and Petit [4] developed a method to assess the radionuclide inventory as precisely as possible with a 2.5% risk of under-assessment. In a recent work, Zaffora et al. [5] described several sampling methods to estimate the concentration of radionuclides in radioactive waste, by using correlations between different radionuclides' activities. When the contamination being characterized exhibits a certain spatial continuity and when the spatial localization of measurements can be chosen, geostatistical tools can be used, as shown in [6–8].
In this work, we focus on the difficult task of radiological characterization based on a small number of data which are assumed statistically independent (and non-spatially localized). This task belongs to a quite general class of problems: the statistical analysis of small data samples (see for example [9–11]). In this case, classical statistical tools turn out to be unsuitable. For example, assuming that a set of measurements or their average value arise from a Gaussian distribution can lead to erroneous and sometimes non-conservative conclusions. Indeed, if the estimation of the mean value is of interest, Gaussian distribution-based bounds may only be used in the asymptotic limit of a very large sample, and the convergence to this asymptotic regime may be very slow in the presence of a noticeably skewed actual data-generating distribution. Even if some solutions exist to correct this large-size sample requirement, the Gaussian distribution hypothesis may still be invalid or impossible to justify by rigorous statistical tests [12].
Alternative statistical tools, called concentration inequalities (but also denoted universal inequalities or robust inequalities), are applicable without knowing the probability distribution of the variable being studied. In general, from a data sample, statistics-based intervals allow the derivation of [13]:
Confidence intervals for the estimation of the mean (or other distribution parameters) of a random variable. For example, we can determine the size of the set of measurements to make in order to reach a given precision for calculating the average value of various contamination measures. This process allows us to optimize the sampling strategy and offers invaluable economic gains.
Prediction intervals for a random variable. For example, we can compute the probability that the value of a point contamination is larger than a given critical value. In practice, regulatory threshold values are set for different waste categories. Determination of the probability that the contaminant’s value is smaller than a given threshold can be used to predefine the volumes of waste by category.
Tolerance intervals, which extend prediction intervals to take into account uncertainty in the parameters of a variable’s distribution. A tolerance interval gives the statistical interval within which, with some confidence level, a specified proportion of a sampled population falls.
In this paper, we focus on the second and third intervals mentioned above (note that confidence intervals are also addressed in [14]). Easy to state and easy to use, the Bienaymé-Chebychev inequality [15] is the most famous probabilistic inequality. Unfortunately, it comes at the expense of extremely loose bounds, which make it unsuitable in practical situations, so it is not used often. From these considerations, Woo [16] has proposed to use the more efficient (but little-known) Guttman inequality [17]. Although it requires no additional distributional assumptions, it has the drawback of requiring an estimate of the kurtosis (i.e., the fourth-order statistical moment) of the variable being studied. In the small-dataset context (around ten points), a precise estimation of the kurtosis is an unrealistic goal.
In another context, that of the quality control domain, Pukelsheim [18] has developed narrower bounds than the Bienaymé-Chebychev ones, showing at the same time how the three-sigma rule can be justified (based on a unimodality assumption, proven for example in [19]). Starting with this statistical literature as a base, along with older results about unimodal and convex distributions (see [20] for a more recent example), several useful inequalities have been developed in [14]. While they also focused on the range of validity of each inequality, their results were preliminary; the present paper extends that work to estimate risk prediction bounds robustly, with several applications to real radiological characterization problems. Furthermore, we make a connection between this risk bound estimation problem and the problem of computing conservative estimates of a quantile, classically addressed in nuclear thermal-hydraulic safety using the so-called Wilks formula [21]. Comparisons are then performed between the various approaches.
The following section provides all of the probabilistic inequalities that we can use to attempt to solve our problems. For validation purposes, all of these are applied in Section 3 to simulated data samples generated from several known theoretical distributions. More specifically, the accuracy of the resulting prediction and tolerance intervals is compared to that obtained from standard methods such as the Gaussian approximation. Section 4 shows how the probabilistic inequalities can be used in practice, and more precisely, to analyze radiological contamination measures. A conclusion follows to summarize the results of this work.
2 Probabilistic inequalities for prediction and tolerance intervals
We are first interested in the determination of a unilateral prediction interval. This allows us to define a limit value that a variable cannot exceed (or reach, depending on the context) with a given probability level. In the real-life radiological context, this can then be used to estimate, on the basis of a small number of contaminant measures, the quantity of contaminant which does not exceed a safety threshold value.
Mathematically, a unilateral prediction interval for a random variable X is
\[ \mathbb{P}(X \le s) \ge 1 - \alpha = \gamma, \qquad (1) \]
where s is the threshold value and α ∈ [0, 1] is the risk probability. For an absolutely continuous random variable, this is equivalent to the following inequality:
\[ \mathbb{P}(X > s) \le \alpha. \qquad (2) \]
In other words, s is a quantile of X of order greater than or equal to γ.
In the following sections, we introduce some theoretical inequalities which require the existence (and sometimes the knowledge) of the mean μ and standard deviation σ of X. Such inequalities are of the following form:
\[ \mathbb{P}(X \ge \mu + t) \le k\,\frac{\sigma^2}{\sigma^2 + t^2}, \qquad (3) \]
where t ≥ 0 and k is a positive constant.
General hypothesis of all of these inequalities: X is absolutely continuous with finite mean and variance.
Hypothesis related to applying them: The sample from X is i.i.d. (independent and identically distributed).
2.1 The Gaussian approximation
Provided that the random variable X is normally distributed, the derivation of a unilateral prediction interval related to a given risk probability α depends on whether the mean and standard deviation of X are known or not.
2.1.1 Known moments
Denoting by zu the quantile of level u of the standard Gaussian distribution (whose value can be easily found in standard normal tables or basic statistical software), we get
\[ \mathbb{P}(X \le \mu + \sigma z_{1-\alpha}) = 1 - \alpha, \qquad (4) \]
which is equivalent to
\[ \mathbb{P}(X > \mu + \sigma z_{1-\alpha}) = \alpha. \qquad (5) \]
This relationship is a special case of equation (3) in which the right hand side is no longer an upper bound but the actual risk probability α, and where t = σz1−α. It is easy to show that the parameter k is then equal to α(1 + z1−α²).
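As a minimal numerical illustration of equations (4) and (5) (the values μ = 210 and σ = 20 are borrowed from the test cases of Section 3; the variable names are ours), the threshold associated with a given risk α and the equivalent constant k can be computed as follows:

```python
from scipy.stats import norm

mu, sigma = 210.0, 20.0      # assumed known mean and standard deviation (cf. Sect. 3)
alpha = 0.05                 # risk probability

z = norm.ppf(1 - alpha)      # z_{1-alpha}, quantile of the standard Gaussian
s = mu + sigma * z           # threshold exceeded with probability alpha (Eq. (5))
k = alpha * (1 + z**2)       # constant k of Eq. (3) matched exactly by the Gaussian case

print(f"threshold s = {s:.1f}, z = {z:.3f}, equivalent k = {k:.3f}")
```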
2.1.2 Unknown moments: the k-factor method
The k-factor method (also called Owen's method in the literature) develops corrected formulas in order to take into account the lack of knowledge of the mean and standard deviation of the variable [11,22]. It provides tolerance intervals for a normal distribution and, in the particular case of a unilateral tolerance interval, one can write:
\[ \mathbb{P}\big[\,\mathbb{P}(X \le \hat{\mu} + k\hat{\sigma}) \ge 1 - \alpha\,\big] = \beta, \qquad (6) \]
with
\[ k = \frac{t_{n-1,\beta,\delta}}{\sqrt{n}} \qquad (7) \]
and
\[ \delta = z_{1-\alpha}\sqrt{n}, \qquad (8) \]
where n is the sample size and tn−1,β,δ is the β-quantile of the non-central t-distribution with n − 1 degrees of freedom and non-centrality parameter δ. $\hat{\mu}$ and $\hat{\sigma}$ are respectively the empirical mean and standard deviation computed from the sample. It is quite easy to compute k knowing α and β. In our context, the problem is more difficult as we have to solve $\hat{\mu} + k(\alpha, \beta, n)\,\hat{\sigma} = s$ in order to find the risk probability α associated with the confidence β.
However, the k-factor method is also based on the assumption of normality, and strong care must be taken when applying it to distributions other than Gaussian. In such situations, the application of a Gaussian approximation may provide non-conservative bounds due to the presence of a small sample, especially in the case of a significantly skewed random variable X.
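A brief sketch of the one-sided k-factor computation described above, using the non-central t-distribution available in SciPy; the function name and the illustrative sample are ours, not taken from the paper. The inverse problem mentioned above (finding α for a given threshold s) would require a one-dimensional root search on this function.

```python
import numpy as np
from scipy.stats import norm, nct

def k_factor(n, alpha, beta):
    """One-sided tolerance factor: mu_hat + k * sigma_hat covers the (1 - alpha)
    quantile of a normal population with confidence beta (Eqs. (6)-(8))."""
    delta = norm.ppf(1 - alpha) * np.sqrt(n)                 # non-centrality parameter, Eq. (8)
    return nct.ppf(beta, df=n - 1, nc=delta) / np.sqrt(n)    # Eq. (7)

# Illustrative use on a synthetic sample (not the paper's data)
rng = np.random.default_rng(0)
sample = rng.normal(loc=210.0, scale=20.0, size=10)
mu_hat, sigma_hat = sample.mean(), sample.std(ddof=1)
k = k_factor(len(sample), alpha=0.05, beta=0.95)
print(f"k = {k:.2f}, upper tolerance limit = {mu_hat + k * sigma_hat:.1f}")
```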
2.1.3 Transformation to a Gaussian distribution
If the original data sample does not appear to follow a normal distribution, several transformations of the data can be tried to allow the data to be represented by a normal distribution. The k-factor method can then be applied to the transformed data in order to obtain either the risk probability or the prediction or tolerance interval (which has to be back-transformed to obtain the right values).
However, for our purpose, which is the statistical estimation of risk from small samples, we do not consider this solution satisfactory because it is often not applicable. First, the Box-Cox family of transformations [23], which is a family of power transformations including the logarithmic transformation, requires fitting the transformation tuning parameter by maximum likelihood. The maximum likelihood process is subject to caution for small data samples. Second, the iso-probabilistic transformation (see for example [24]), also called the Nataf transformation or the Gaussian anamorphosis, consists in mapping each observation x to Φ⁻¹(F(x)), where F is the distribution function of the data and Φ the standard normal distribution function. Such a transformation requires the cumulative distribution function of the data, which in practice is the empirical one built from the sample. With small-size samples, the empirical distribution function is a coarse approximation of the true distribution function, and the resulting transformation would be in doubt.
In conclusion, we consider this solution inadequate because we cannot guarantee that the transformation to a Gaussian distribution is valid, due to the small sample size. In particular, the normality of the tail of the probability distribution, which is our zone of interest, would be largely subject to caution. Moreover, for the same reason, validating the Gaussian distribution of the transformed data by statistical tests is unreliable [12]. We recall that the issue of validating the remaining hypotheses of each inequality is essential in our context. In the following, what are known as concentration inequalities are presented in order to help determine conservative intervals without the Gaussian distribution assumption.
2.2 Concentration inequalities
In probability theory, concentration inequalities relate the tail probabilities of a random variable to its statistical central moments. Therefore, they provide bounds on the deviation of a random variable away from a given value (for example its mean). The various inequalities arise from the information we have about the random variable (mean, variance, bounds, positiveness, etc.). This is a very old research topic in the statistics and probability fields. For example, [25] reviews thirteen such classic inequalities. New results have been obtained in recent decades, based on numerous mathematical works focused on concentration of measure principles (see for example [26]). We restrict our work to three classical inequalities that seem the most useful for radiological characterization problems with small samples. Indeed, they only require the mean and variance of the studied variable, which does not need to be bounded, with very weak assumptions on its probability distribution.
2.2.1 The Bienaymé-Chebychev inequality
The Bienaymé-Chebychev inequality is written [15]:
\[ \mathbb{P}(X \ge \mu + t) \le \frac{\sigma^2}{\sigma^2 + t^2}, \qquad t \ge 0, \qquad (9) \]
which corresponds to equation (3) with k = 1. As μ and σ are unknown in practical situations, they are replaced with their empirical counterparts $\hat{\mu}$ and $\hat{\sigma}$ (i.e., their estimates from the sample values). This inequality does not require any hypotheses on the probability distribution of X.
In fact, equation (9) is also known as the Cantelli inequality or Bienaymé-Chebychev-Cantelli inequality. It is an extension of the classical Bienaymé-Chebychev inequality (see [26]) where an absolute deviation is considered inside the probability term of equation (9). For the two following inequalities, the same rearrangement is made.
Hypotheses of the Bienaymé-Chebychev (BC) inequality: None
2.2.2 The Camp-Meidell inequality
The Camp-Meidell inequality is given by [18,27]:
\[ \mathbb{P}(X \ge \mu + t) \le \frac{4}{9}\,\frac{\sigma^2}{\sigma^2 + t^2}, \qquad t \ge 0, \qquad (10) \]
which corresponds to equation (3) with k = 4/9. As μ and σ are unknown in practical situations, they are replaced with their empirical counterparts $\hat{\mu}$ and $\hat{\sigma}$.
It is interesting to note that this inequality, in its two-sided version, justifies the so-called “three-sigma rule”. This rule is traditionally used in manufacturing processes, as it states that 95% of a scalar-valued product output X is found in the interval [μ − 3σ, μ + 3σ]. In fact, it has been shown in [19] that this rule is only valid for an output X following a unimodal distribution. Indeed, one proof of the expression (10) is based on the bounding of the distribution function by a linear one [28]. Furthermore, this inequality requires the hypothesis that the probability distribution of X is differentiable as well as unimodality of the probability density function (pdf) of X. If so, it can then be applied to all of the unimodal continuous probability distributions used in practice (uniform, Gaussian, triangular, log-normal, Weibull, Gumbel, etc.).
Hypothesis of the Camp-Meidell (CM) inequality: Unimodality of the pdf
2.2.3 Van Dantzig inequality
The Van Dantzig inequality is given by [28]:
\[ \mathbb{P}(X \ge \mu + t) \le \frac{3}{8}\,\frac{\sigma^2}{\sigma^2 + t^2}, \qquad t \ge 0, \qquad (11) \]
which corresponds to equation (3) with k = 3/8. As μ and σ are unknown in practical situations, they are replaced with their empirical counterparts $\hat{\mu}$ and $\hat{\sigma}$.
We note that this inequality is relatively unknown, which may be explained by the only minor improvement obtained with respect to the CM inequality. One proof of the expression (11) is based on bounding the distribution function by a quadratic function [28]. This inequality requires the hypothesis of second-order differentiability of the probability distribution of X, and convexity of the density function of X. In fact, it can be applied to all unimodal continuous probability distributions in their convex parts. Indeed, the tail of most classical pdfs is convex, including for example the exponential distribution’s density function (convex everywhere), the Gaussian, the log-normal, the Weibull, and so on. Note, however, that it is not valid for uniform variables.
Hypothesis of the Van Dantzig (VD) inequality: Convexity of the pdf’s tail
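The three bounds share the same functional form and differ only through the constant k. A minimal sketch (the function name and the example values are ours) is:

```python
def concentration_bound(mu, sigma, s, inequality="BC"):
    """Upper bound on P(X >= s) given by Eq. (3), for a threshold s >= mu."""
    k = {"BC": 1.0, "CM": 4.0 / 9.0, "VD": 3.0 / 8.0}[inequality]
    t = s - mu
    if t < 0:
        raise ValueError("the bound is only informative for s >= mu")
    return k * sigma**2 / (sigma**2 + t**2)

# Example: Gaussian case of Section 3 (mu = 210, sigma = 20), threshold at its 95% quantile
for ineq in ("BC", "CM", "VD"):
    print(ineq, round(concentration_bound(210.0, 20.0, 242.9, ineq), 3))
```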
2.2.4 Conservative estimates based on bootstrapping
Application of the three above concentration inequalities requires knowledge of the mean μ and standard deviation σ of the variable under consideration. In most practical situations, these quantities are unknown and are directly estimated from their sample counterparts. However, low confidence is associated with these estimates when dealing with small sample situations: substituting the actual moments by their sample estimates can in fact lead to overly optimistic results. To overcome this problem, we propose a penalized approach based on bootstrapping, a common tool in statistical practice [29].
The principle of the bootstrap variant used in this work is as follows: for a given sample of size n, we generate a large number B of resamples, i.e., samples made of n values selected randomly with replacement from the original sample. We then compute the empirical mean and standard deviation of each resample and apply the inequality of interest. We thus obtain B resulting values, from which we can compute certain statistics such as high quantiles (of order β, which is the confidence value). We can take, for example, the 95%-quantile (i.e., β = 0.95) to derive a large and conservative value.
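A sketch of this penalized procedure (shown here for the CM constant k = 4/9; the function and variable names are ours and the data are synthetic):

```python
import numpy as np

def bootstrap_penalized_bound(sample, s, k=4/9, B=10_000, beta=0.95, seed=0):
    """Conservative (beta-level) value of the bound k*sig^2 / (sig^2 + (s - mu)^2),
    obtained by bootstrapping the sample moments."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    bounds = np.empty(B)
    for b in range(B):
        resample = rng.choice(sample, size=sample.size, replace=True)
        mu_hat, sigma_hat = resample.mean(), resample.std(ddof=1)
        t = max(s - mu_hat, 0.0)        # the bound is only informative for s >= mu_hat
        bounds[b] = k * sigma_hat**2 / (sigma_hat**2 + t**2)
    return np.quantile(bounds, beta)    # penalized, conservative estimate

# Illustrative call on synthetic skewed data (not the paper's measurements)
rng = np.random.default_rng(1)
data = rng.lognormal(mean=3.0, sigma=0.8, size=10)
print(bootstrap_penalized_bound(data, s=50.0))
```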
2.3 Using Wilks’ formula
We consider the quantile estimation problem of a random variable as stated in equation (2), where γ = 1 − α is the order of the quantile. This problem is equivalent to the previous one of risk bound estimation (Eq. (1)). The classic (empirical) estimator is based on order-statistic derivations [30] from a Monte-Carlo sample. With small sample sizes (typically fewer than 100 observations), this estimator gives very imprecise quantile estimates (i.e., with large variance), especially for low (less than 5%) and large (more than 95%) quantile orders γ [31].
Another strategy consists of calculating a tolerance limit instead of a quantile, using certain order-statistics theorems [30]. For an upper bound, this provides an upper limit value of the desired quantile with a given confidence level (for example 95%). Based on this principle, Wilks' formula [21,32] allows us to determine precisely the sample size required in order to estimate, for a random variable, a quantile of order γ with confidence level β. This formula was introduced into the nuclear engineering community by the German nuclear safety institute (GRS) at the beginning of the 1990s [33], and has since been used for various safety assessment problems (see for example [3,34,35]).
We restrict our explanations below to the one-sided case. Suppose we have an i.i.d. n-sample X1, X2, …, Xn drawn from a random variable X. We note M = maxi(Xi). For M to be an upper bound for at least 100 × γ% of possible values of X with given confidence level β, we require
\[ \mathbb{P}\big[\,\mathbb{P}(X \le M) \ge \gamma\,\big] \ge \beta. \qquad (12) \]
Wilks' formula implies that the sample size n must therefore satisfy the following inequality:
\[ 1 - \gamma^{\,n} \ge \beta, \quad \text{i.e.,} \quad n \ge \frac{\ln(1-\beta)}{\ln(\gamma)}. \qquad (13) \]
In Table 1, we present several consistent combinations of the sample size n, the quantile order γ, and the confidence level β.
Equation (13) is a first-order equation because the upper bound is set equal to the maximum value of the sample. To extend Wilks' formula to higher orders, we consider the n-sample of the random variable X sorted into increasing order: X(1) ≤ X(2) ≤ … ≤ X(r) ≤ … ≤ X(n) (r is the rank). For all 1 ≤ r ≤ n, we set
\[ G(\gamma) = \mathbb{P}\big[\,\mathbb{P}(X \le X_{(r)}) \le \gamma\,\big]. \qquad (14) \]
According to Wilks' formula, the previous equation can be recast as
\[ G(\gamma) = \sum_{i=r}^{n} \binom{n}{i}\,\gamma^{\,i}\,(1-\gamma)^{\,n-i}. \qquad (15) \]
The value X(r) is an upper bound of the γ-quantile with confidence level β if 1 − G(γ) ≥ β.
Increasing the order when using Wilks’ formula helps reduce the variance in the quantile estimator, the price being the requirement of a larger n (according to formula (15) with β = 1 − G(γ) and fixed γ). Wilks’ formula can be used in two ways:
when the goal is to determine the sample size n to be measured for a given γ-quantile with a given level of confidence β, formula (13) (or its higher-order generalization (15)) can be used with a fixed order o (corresponding to the oth greatest value, o = n − r + 1): the first order (o = 1) gives r = n (the maximal value is taken for the quantile), the second order (o = 2) gives r = n − 1 (the second largest value is taken for the quantile), etc.;
when a sample of size n is already available, the same formulas can be used to determine the admissible pairs (α, β) and orders o for the estimation of the Wilks quantile.
Hypotheses of the Wilks formula: None.
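Both uses can be sketched with a few lines of code, based on the binomial form of equation (15); the function names below are ours.

```python
import math
from scipy.stats import binom

def wilks_sample_size(gamma, beta):
    """Smallest n such that the sample maximum bounds the gamma-quantile
    with confidence beta (first-order Wilks formula, Eq. (13))."""
    return math.ceil(math.log(1 - beta) / math.log(gamma))

def wilks_confidence(n, r, gamma):
    """Confidence 1 - G(gamma) that the r-th order statistic X_(r) of an
    i.i.d. n-sample exceeds the gamma-quantile (Eqs. (14)-(15))."""
    return binom.cdf(r - 1, n, gamma)

print(wilks_sample_size(0.95, 0.95))              # 59: the classical 95%/95% first-order size
print(wilks_confidence(n=30, r=30, gamma=0.95))   # confidence attached to the maximum of 30 values
```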
3 Numerical tests
3.1 Introduction
The goal of this section is to assess the degree of conservatism of the various approaches presented above, namely the Gaussian-approximation based inequality (Sect. 2.1), variations of the concentration inequalities (Sect. 2.2) and the Wilks method (Sect. 2.3). To this end, we consider four probability distributions which are assumed to generate the data:
one Gaussian distribution with mean 210 and standard deviation 20;
three log-normal distributions with standard deviations equal to 30, 50 and 70, and with means calculated in such a way that the density maxima (i.e., the modes) are all equal to 210.
We recall that a random variable X is said to be log-normal if log(X) is normal. Note also that the log-normal distribution is classically used for modeling environmental data, such as pollutant concentrations. Moreover, the hypotheses for all of the concentration inequalities above are valid for these distributions.
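Since only the common mode (210) and the standard deviations (30, 50 and 70) are specified, the underlying log-normal parameters have to be recovered numerically. The following sketch is our own reconstruction of this step (the function name is ours); it yields a mean close to the value 237.86 used later for the most skewed case.

```python
import numpy as np
from scipy.optimize import brentq

def lognormal_from_mode_sd(mode, sd):
    """Return (m, s) such that X = exp(N(m, s^2)) has the given mode and standard deviation."""
    # mode = exp(m - s^2) and var = (exp(s^2) - 1) * exp(2*m + s^2) = (exp(v) - 1) * mode^2 * exp(3*v)
    f = lambda v: (np.exp(v) - 1.0) * mode**2 * np.exp(3.0 * v) - sd**2
    v = brentq(f, 1e-12, 5.0)                 # v = s^2
    return np.log(mode) + v, np.sqrt(v)

for sd in (30, 50, 70):
    m, s = lognormal_from_mode_sd(210.0, sd)
    print(f"sd = {sd}: m = {m:.3f}, s = {s:.3f}, mean = {np.exp(m + s**2 / 2):.2f}")
```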
As shown in Figure 1, the distributions exhibit various levels of skewness. The tests carried out for the most skewed distributions are able to challenge the robustness of the probabilistic inequalities.
For each theoretical distribution, we want to estimate the minimum probability that a value randomly drawn from this same distribution exceeds a given threshold s. This probability corresponds to the variable α in equation (1), and the threshold s to the quantity μ + t in equations (9)–(11). In other words, the problem can be cast as follows: find an upper bound α such that
\[ \mathbb{P}(X > s) \le \alpha, \]
where X is a random variable following one of the four theoretical distributions.
The numerical analysis is organized into two parts:
The distribution’s moments are assumed to be perfectly known. Thus we can compute the exact values of the α-estimates given by the concentration inequalities (Eqs. (9)–(11)). We can also estimate α using the Gaussian assumption. However, it does not make sense to use Wilks’ formula or the k-factor method at this stage since the sample uncertainty is not taken into account.
The distribution’s moments are considered to be unknown (realistic case). The moments are estimated from the data sample at hand. Hence, the α estimates are affected by uncertainty in the sample. In other words, these estimates are random and can be characterized by their statistical distributions. In this context, it is relevant to quantify the probability that the estimates under-predict or over-predict the α value obtained theoretically for each method.
Fig. 1 Four different theoretical pdfs for the random variable X.
3.2 Analysis with known moments
The methods reviewed in this section are the three concentration inequalities and the Gaussian approximation. For the concentration inequalities, the risk α is estimated as follows:
\[ \hat{\alpha} = k\,\frac{\sigma^2}{\sigma^2 + (s-\mu)^2}, \qquad (16) \]
where μ and σ denote the exact values of the distribution's mean and standard deviation. For the Gaussian approximation, α is estimated through the evaluation of the cumulative distribution function of the Gaussian random variable with mean μ and standard deviation σ. The estimates are compared to the actual probabilities that any random output value exceeds the threshold s. A different value of s is given for each distribution, so that the probability α of exceeding the threshold is neither too low nor too high: s is chosen as the quantile of order 95% of the distribution under consideration (accordingly, the actual value of α is 5%).
The estimates of the risk α are shown in Table 2. We observe that in accordance with the theory, the concentration inequalities are always conservative, constantly overestimating the actual risk level. As expected, the BC formula is the most conservative, followed by the CM one and then the VD one. The CM case is particularly interesting because a factor of two is gained over the BC one. The degree of conservatism decreases as the distribution skewness increases. The Gaussian approximation is of course exact when the distribution is itself Gaussian, but it provides overly optimistic estimates of α when the distribution is not Gaussian.
Estimates of the risk α obtained from the Gaussian approximation and the concentration inequalities. The true risk is equal to 5%.
3.3 Analysis with unknown moments
3.3.1 Sample moments-based risk estimates
In practice, the distribution of the population from which the data were generated is unknown, and hence so are its mean and standard deviation. Therefore, these parameters have to be estimated from the data sample at hand in order to derive the risk of exceeding the threshold s. This induces some randomness in the estimates of the risk α, since the sample is itself random. As a consequence, α may be more or less noticeably underestimated in practical situations.
In order to study the level of conservatism of the various approaches when subject to sample randomness, we estimate the statistical distribution of the estimates and look at the proportion of non-conservative estimates, i.e., estimates less than the actual value α = 5%. We only consider the most skewed distribution in this section, i.e., the log-normal with mean 237.86 and standard deviation 70. From the theoretical distribution, we randomly draw a large number N of samples of a given size n. In this study, we choose N = 5000 and n ∈ {10, 30}. For a given distribution and a given sample size n, N risk estimates are computed from the various methods, leading to a sample of N estimates of α. The empirical distribution of this sample is compared to the true risk 5%, and the proportion of non-conservative estimates out of the N values is computed.
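A condensed sketch of this resampling experiment (here for the CM bound only, with a reduced number of replications so that it runs quickly; the log-normal parameters come from the mode/standard-deviation reconstruction sketched in Section 3.1 and are therefore approximate):

```python
import numpy as np
from scipy.stats import norm

def nonconservative_proportion(m, s_log, n=10, N=2000, k=4/9, seed=0):
    """Proportion of size-n log-normal datasets for which the plug-in bound of
    Eq. (16) falls below the true risk alpha = 5%."""
    rng = np.random.default_rng(seed)
    s_threshold = np.exp(m + s_log * norm.ppf(0.95))   # true 95% quantile of the log-normal
    misses = 0
    for _ in range(N):
        x = rng.lognormal(mean=m, sigma=s_log, size=n)
        mu_hat, sig_hat = x.mean(), x.std(ddof=1)
        t = max(s_threshold - mu_hat, 0.0)
        alpha_hat = k * sig_hat**2 / (sig_hat**2 + t**2)
        misses += alpha_hat < 0.05
    return misses / N

# parameters approximately matching the most skewed case (mode 210, standard deviation 70)
print(nonconservative_proportion(m=5.43, s_log=0.29, n=10))
```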
The results obtained with n = 10 and n = 30 are represented in Figures 2 and 3, respectively. It appears that the Gaussian approximation strongly underestimates the actual risk level, with about 70% of its results being smaller than the reference risk value 0.05 (note that the method provided many negative values for risk levels that were truncated to zero). The concentration inequalities turned out to be much more conservative, especially the BC one. Nonetheless, when dealing with samples of size n = 10, the CM and VD formulas yield a non-negligible proportion of overly optimistic estimates of α. The three types of inequality are more conservative when more data are available, i.e., when the sample size n is set to 30.
These results are compared to the first-order Wilks approach. The first-order approach (which means that the threshold is the sample maximum) is taken because the sample sizes are extremely small in these tests. According to formula (13), for an actual risk of level α = 0.05 and a sample size n = 10, the proportion of non-conservative estimates is less than 0.60 (this value corresponds to the quantity 1 − β). This value decreases to 0.21 when the sample size is n = 30. Thus the Wilks method lies between the Gaussian approximation and the concentration inequalities for this test case, in terms of conservatism. However, an advantage of the method is that it directly gives an upper bound of 1 − β of the risk of being non-conservative, in contrast to the other strategies.
Fig. 2 Statistical distributions of the α estimates based on samples of size n = 10, for the log-normal distribution with mean 237.86 and standard deviation 70. The actual risk is equal to 5%.
Fig. 3 Statistical distributions of the α estimates based on samples of size n = 30, for the log-normal distribution with mean 237.86 and standard deviation 70. The actual risk is equal to 5%.
3.3.2 Penalized risk estimates based on bootstrapping
We have shown that applying the Gaussian and robust methods by simply substituting the actual moments by their sample estimates can lead to overly optimistic results. To overcome this problem, we propose a penalized approach based on bootstrapping (see Sect. 2.2.4) that we compare to the k-factor method, i.e., the Gaussian approximation used to obtain tolerance intervals (see Sect. 2.1.2). The principles are as follows. For a given sample of size n, we generate a large number B of resamples (say, B = 500). We compute the empirical mean and standard deviation of each resample, and then derive in each case an estimate of α, as shown in the previous section. This results in a bootstrap set of B estimates of α. In a conservative way, we can compute a high quantile of this set, say the 95% quantile. The calculated value serves as a penalized estimate of the risk α.
As in the previous section, we focus on the log-normal distribution with the greatest skewness. The results related to sample sizes n equal to 10 and 30 are plotted in Figures 4 and 5, respectively. As expected, we see that the bootstrap-based estimators are significantly more conservative than their “moment-based” counterparts. In particular, for n = 10, the level of conservatism is roughly doubled. For n = 30, all of the concentration inequalities led to very small proportions (less than 1%) of under-prediction of the actual risk α. The Gaussian approximation, however, still remains unreliable for the skewed distribution under consideration.
The drawback of the conservatism of the robust approaches is that they can yield grossly exaggerated estimations of the risk of exceeding the threshold value, especially the BC formula. On the other hand, the VD formula relies upon assumptions about the density function of the population, which are not easy to check in practice. In the absence of further investigation, combination of the CM inequality with a bootstrap penalization may be a reasonable approach.
In conclusion, for all these tests based on small data samples, the inadequacy of the Gaussian approximation has been shown, while the concentration inequalities, used in a conservative manner (with a bootstrap technique), show strong robustness. Wilks' formula offers the advantage of directly giving an upper bound on the risk of being non-conservative, but is less advantageous when dealing with very small sample sizes (low conservatism). If their assumptions can be considered reasonable, the Camp-Meidell and Van Dantzig inequalities should be used preferentially, as the Bienaymé-Chebychev inequality usually gives overly conservative results. In the following section, we illustrate the practical usefulness of all these tools in real situations.
Fig. 4 Statistical distributions of the bootstrap α estimates based on samples of size n = 10, for the log-normal distribution with mean 237.86 and standard deviation 70. The actual risk is equal to 5%.
Fig. 5 Statistical distributions of the bootstrap α estimates based on samples of size n = 30, for the log-normal distribution with mean 237.86 and standard deviation 70. The actual risk is equal to 5%.
4 Applications
4.1 Case 1: Contamination characterization
This case study concerns the radiological activity (denoted X) of Cesium 137 in a large-sized population of waste objects. This characterization enables us to put each waste object in a suitable waste category, e.g., low-activity or high-activity. The reliability of this classification is all the more crucial as it directly affects the total cost of waste management. Indeed, putting objects in the high-activity waste category is much more expensive than in the low-activity one.
A complete characterization of the population of waste objects is impossible, and only 21 measurements were in fact made. Reasoning in terms of statistics, it is assumed that this small-sized sample (n = 21) has been randomly chosen from an unknown infinite population associated with some probability distribution. Each object of this sample has been characterized by its 137Cs activity measure (in Bq/cm2). The summary statistics estimated from these data are the following: mean , median = 15.4, standard deviation , Min = 0.83, Max = 156.67. Figure 6 shows the boxplot, histogram and smoothed-kernel density of these data. The distribution resembles a log-normal one, with high asymmetry, a mean much larger than the median, a standard deviation larger than the mean, many low values and a few high ones. The best quantile-quantile fit is obtained with respect to the log-normal distribution (see Fig. 6), which supports this intuition. However, the size of the sample is too small to be confident in the results of statistical tests which would confirm this [12]. The extreme value at 156.67 seems to be isolated from the rest of the sample values, but we have no argument allowing us to consider it as an outlier. Moreover, the actual data density is considered unimodal because there is no physical reason to believe that this high value comes from a second population with a different contamination type. Although it may be subject to discussion, the hypothesis of convexity of the density's tail can also be assumed.
From the 21 activity measures, we want to estimate the proportion of the total population which has a radiological activity larger than a given threshold. First, the quantity of waste objects whose activity exceeds the threshold s = 100 Bq/cm2 has to be determined. This could be an important issue in terms of predefining the volume of this waste category.
We were unsuccessful in fitting (with a high degree of confidence) a parametric statistical distribution (even a log-normal one) to these data. Thus, only distribution-free tools, such as those discussed in this paper, can be used to build prediction intervals with a sufficient degree of confidence. The probabilistic inequalities of type (3) were then applied, by replacing μ and σ by their estimates:
\[ \mathbb{P}(X \ge \hat{\mu} + t) \le k\,\frac{\hat{\sigma}^2}{\hat{\sigma}^2 + t^2}, \qquad (17) \]
where t ≥ 0 and the values of k depend on the inequality (k = 1 for BC, k = 4/9 for CM, and k = 3/8 for VD). This equation can also be expressed by using s:
\[ \mathbb{P}(X \ge s) \le k\,\frac{\hat{\sigma}^2}{\hat{\sigma}^2 + (s-\hat{\mu})^2}. \qquad (18) \]
The first row of Table 3 gives the risk bound results for the various inequalities for the threshold set equal to 100 Bq/cm2, and using the empirical estimates of μ and σ. The second row provides bootstrap-based conservative estimates of the risk bound by taking the 95% quantile of B = 10 000 risk bounds estimated from B replicas of the data sample. The interpretation of these two results reveals that:
by the BC inequality, we obtain from (18) that $\mathbb{P}(X \ge 100) \le 0.217$, so we coarsely estimate that less than 21.7% of the population has an activity larger than 100 Bq/cm2, and we can guarantee (at a 95% confidence level) that less than 44.8% of the population has an activity larger than 100 Bq/cm2;
by the CM inequality, we obtain from (18) that $\mathbb{P}(X \ge 100) \le 0.11$, so we coarsely estimate that less than 11% of the population has an activity larger than 100 Bq/cm2, and we can guarantee (at a 95% confidence level) that less than 26.5% of the population has an activity larger than 100 Bq/cm2;
by the VD inequality, we obtain from (18) that $\mathbb{P}(X \ge 100) \le 0.094$, so we coarsely estimate that less than 9.4% of the population has an activity larger than 100 Bq/cm2, and we can guarantee (at a 95% confidence level) that less than 23.3% of the population has an activity larger than 100 Bq/cm2.
This simple application illustrates the gain we can obtain by using the CM or VD inequalities instead of the BC one. Knowing from the bootstrap conservative estimates that at most 24% (instead of 45%) of the waste objects may have to be classified in the high-activity waste category helps to avoid an overly conservative estimate of the waste management cost.
Next, we use Wilks’ formula to illustrate what kind of statistical information can be inferred for the given data sample. For the sample size n = 21, we can estimate two types of quantile:
A unilateral first-order γ-quantile with a confidence level β, from which we deduce α = 1 − γ and β using equation (13). For n = 21, the solutions used in the comparison below are (α ≈ 13.3%, β ≈ 95%) and (α ≈ 7.1%, β ≈ 78%).
A unilateral second-order γ-quantile with a confidence level β, from which we deduce α = 1 − γ and β using equation (15) with o = 2 and r = n − 1. The corresponding solutions are reported in Table 3.
In Table 3 (rows 3 and 4), we compare these results with those of the concentration inequalities by adjusting the corresponding thresholds s. Indeed, comparisons cannot be made with s = 100 because Wilks' formula can only be applied with a quantile value taken from the data sample itself. This is the major drawback of Wilks' formula. For the low-quantile case (79.67, in row 4), i.e., large risk bounds, the relevance of Wilks' formula is clear because it always provides less penalized results than the BC, CM and VD inequalities; the gain is a factor of two. However, in the high-quantile case (156.67, in row 3), i.e., small risk bounds, the CM and VD inequalities provide less penalized results, with 8.3% and 7.1% of the population that may have an activity higher than 156.67 Bq/cm2, versus 13.3% for Wilks' method with a confidence β of 95% (resp. 5.6% and 4.8% for CM and VD, versus 7.1% for Wilks' method with a confidence β of 78%). The gain with VD is a factor of two. The same conclusion is reached when computing, via Wilks' formula, the confidence β associated with the conservative α obtained from the VD inequality.
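As a consistency check, the confidence level attached to the first-order Wilks bound quoted above can be recovered from the binomial form of equation (15), with n = 21 and the sample maximum as threshold; the variable names below are ours.

```python
from scipy.stats import binom

n, r = 21, 21                           # first order: the threshold is the sample maximum
alpha = 0.133                           # risk bound quoted above for s = 156.67 Bq/cm2
beta = binom.cdf(r - 1, n, 1 - alpha)   # confidence 1 - G(1 - alpha)
print(f"confidence beta = {beta:.2f}")  # approximately 0.95, as stated in the text
```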
Fig. 6 Case 1 (21 137Cs activity measures): boxplot (left), histogram with a smoothed-kernel density function (middle) and quantile-quantile plot with respect to a log-normal distribution.
Case 1: estimates of the risk α obtained from the concentration inequalities and Wilks’ formula, for different threshold values s.
4.2 Case 2: H2 flow rate characterization for drums of radioactive waste
Some categories of radioactive waste drums may produce hydrogen gas because of the radiolysis of organic matter such as PVC, polyethylene or cellulose mixed with α-emitters in the waste. The evaluation of the hydrogen flow rate (denoted X, in l/drum/year) produced by radioactive waste drums is required for their disposal in final waste repositories. However, considering the time required for the H2 flow rate measurement of a single drum (more than one month) and the need to characterize a population of several thousand drums, only a small (n = 38) randomly chosen sample has been measured.
The summary statistics estimated from the H2 flow rate data are the following: mean , median = 1.43, standard deviation , Min = 0.02, Max = 13.97. Figure 7 shows the boxplot, histogram and smoothed-kernel density of these data. As for Case 1, the distribution looks like a log-normal one, with high asymmetry, a mean much larger than the median, a standard deviation larger than the mean, many low values and a few high ones. The best quantile-quantile fit is obtained with respect to the log-normal distribution (see Fig. 7), which supports this intuition. The extreme value at 13.97 seems to be isolated from the rest of the sample values, but we have no argument that justifies considering it as an outlier. It makes the distribution appear to have a heavier tail than the log-normal one. We could directly estimate the 95%-quantile from the fitted theoretical log-normal distribution. However, due to the small number of data points used to fit the pdf, little confidence can be placed in this value, and justifying it to safety authorities could be difficult. Moreover, the log-normal distribution is rejected by the Shapiro-Wilk adequacy test (the most robust test for small sample sizes) at the 5% level. In any case, we are confident that the density can be considered unimodal, and the hypothesis of convexity of the density's tail can also be accepted.
The first row of Table 4 gives the risk bound results of the different inequalities for the threshold s = 10 l/drum/year, using the empirical estimates of μ and σ. The second row provides bootstrap-based conservative estimates of the risk bound by taking the 95% quantile of B = 10 000 risk bounds estimated from B replicas of the data sample. The interpretation of these two results reveals that:
by the BC inequality, we coarsely estimate that less than 10.5% of the population has a H2 flow rate larger than 10 l/drum/year, and we can guarantee (at a 95% confidence level) that less than 21.2% of the population has a H2 flow rate larger than 10 l/drum/year;
by the CM inequality, we coarsely estimate that less than 5% of the population has a H2 flow rate larger than 10 l/drum/year, and we can guarantee (at a 95% confidence level) that less than 10.7% of the population has a H2 flow rate larger than 10 l/drum/year;
by the VD inequality, we coarsely estimate that less than 4.2% of the population has a H2 flow rate larger than 10 l/drum/year, and we can guarantee (at a 95% confidence level) that less than 9.2% of the population has a H2 flow rate larger than 10 l/drum/year.
As for Case 1, the gain we can obtain (here, about 12%) by using the CM or VD inequality instead of the BC one is relatively large, in terms of estimating the waste management cost.
We now use Wilks’ formula to illustrate, for the given data sample, what kind of statistical information can be inferred. For the sample size n = 38, we can estimate two types of quantile:
A unilateral first-order γ-quantile with a confidence level β, from which we deduce α = 1 − γ and β using equation (13);
A unilateral second-order γ-quantile with a confidence level β, from which we deduce α = 1 − γ and β using equation (15) with o = 2 and r = n − 1. The corresponding solutions are reported in Table 4.
In Table 4 (rows 3 and 4), we compare these results with those of the concentration inequalities, by adjusting the corresponding thresholds s. Indeed, comparisons cannot be made with s = 10 because Wilks' formula can only be applied with a quantile value taken from the data sample itself. Again, this is the major drawback of Wilks' formula. For the low-quantile case (8.29, in row 4), i.e., large risk bounds, Wilks' formula is clearly relevant because it always gives less penalized results than the BC, CM and VD inequalities. However, in the high-quantile case (13.97, in row 3), i.e., small risk bounds, the CM and VD inequalities provide less penalized results; the gain with VD is a factor of two. The same conclusion is reached with the values of β obtained from Wilks' formula using the conservative α result given by the VD inequality.
Fig. 7 Case 2 (38 hydrogen flow rates): boxplot (left), histogram with a smoothed-kernel density function (middle) and quantile-quantile plot with respect to a log-normal distribution.
Case 2: estimates of the risk α obtained from the concentration inequalities and Wilks’ formula, for different threshold values s.
5 Conclusion
As explained in the introduction, a realistic assessment of risk is of major importance, both for improving risk management and for public acceptance. In this paper, we have presented a statistical approach which works towards this goal. We studied several statistical tools to derive risk prediction and tolerance bounds in the context of nuclear waste characterization. The main challenge was related to the small number of data which are usually available in real-world situations. In this context, the normality assumption is generally unfounded, especially in the case of strongly asymmetrical data distributions, which are common in real-world characterization studies. Much narrower bounds exist in the statistical literature, and this paper has highlighted them. Moreover, these are distribution-free tools: no strong assumptions are needed, e.g., with respect to the normality of the distribution of the variable under consideration. These distribution-free and order-statistics tools can provide practical confidence bounds for radiological probabilistic risk assessment.
Certain concentration inequalities, used in a conservative way (with a bootstrapping technique), have been shown to be strongly robust. However, the prediction and tolerance bounds given by the standard Bienaymé-Chebychev inequality are very loose; their use in risk assessment therefore leads to unnecessarily high conservatism. If their assumptions (unimodality and tail convexity of the pdf) can be justified, the Camp-Meidell and Van Dantzig inequalities should be considered first. In the absence of any assumptions, Wilks' formula offers the advantage of directly giving an upper bound on the risk of being non-conservative, but is not of great advantage when dealing with very small-sized samples or low risk bounds. Indeed, in such cases, the excessive conservatism can be greater than when using the concentration inequalities. Moreover, Wilks' formula can suffer from a lack of flexibility in practical situations.
In terms of future directions, more recent concentration inequalities [26,36] could be studied and may potentially give much narrower intervals. As an aside, it has also been shown in [14] how to use probabilistic inequalities to determine the precision in the estimation of the mean of a random variable from a measurement sample. With these kinds of inequalities, we can find the minimal number of measurements required in order to reach a given confidence level in estimating the mean. In conclusion, possible applications of these tools are numerous across all safety considerations based on expensive experimental processes. Further research and applied case studies could lead to the development of useful guides for practitioners, in particular in the nuclear dismantling context.
Acknowledgments
The authors wish to thank Hervé Lamotte, Alexandre Le Cocguen, Dominique Carré and Ingmar Pointeau from the CEA Department of Nuclear Services, and Thierry Advocat, head of the CEA GFDM research program, for allowing the use of the H2 flow rates data from drums of radioactive waste. We also thank an anonymous reviewer, Léandre Brault and Emmanuel Remy for many useful comments on this paper. Finally, we are grateful to Kevin Bleakley for the English language corrections.
References
- J. Attiogbe, E. Aubonnet, L. De Maquille, P. De Moura, Y. Desnoyers, D. Dubot, B. Feret, P. Fichet, G. Granier, B. Iooss, J.-G. Nokhamzon, C. Ollivier Dehaye, L. Pillette-Cousin, A. Savary, Soil radiological characterisation methodology, CEA-R-6386 (Commissariat à l'énergie atomique et aux énergies alternatives (CEA), CEA Marcoule Center, Nuclear Energy Division, Radiochemistry and Processes Department, Analytical Methods Committee (CETAMA), France, 2014)
- P. Flynn, La gestion de la pandémie H1N1 : nécessité de plus de transparence, in AS/Soc 12 (Council of Europe, 2010)
- N. Pérot, B. Iooss, Quelques problématiques d'échantillonnage statistique pour le démantèlement d'installations nucléaires, in Conférence λμ16, Avignon, France, October 2008 (2008)
- B. Poncet, L. Petit, Method to assess the radionuclide inventory of irradiated graphite waste from gas-cooled reactors, J. Radioanal. Nucl. Chem. 298, 941 (2013)
- B. Zaffora, M. Magistris, G. Saporta, F. La Torre, Statistical sampling applied to the radiological characterization of historical waste, EPJ Nuclear Sci. Technol. 2, 11 (2016)
- N. Jeannée, Y. Desnoyers, F. Lamadie, B. Iooss, Geostatistical sampling optimization of contaminated premises, in DEM – Decommissioning challenges: an industrial reality?, Avignon, France, 2008 (2008)
- Y. Desnoyers, J.-P. Chilès, D. Dubot, N. Jeannée, J.-M. Idasiak, Geostatistics for radiological evaluation: study of structuring of extreme values, Stoch. Environ. Res. Risk Assess. 25, 1031 (2011)
- A. Bechler, T. Romary, N. Jeannée, Y. Desnoyers, Geostatistical sampling optimization of contaminated facilities, Stoch. Environ. Res. Risk Assess. 27, 1967 (2013)
- A.R. Brazzale, A.C. Davison, N. Reid, Applied asymptotics – case studies in small-sample statistics (Cambridge University Press, 2007)
- P.-C. Pupion, G. Pupion, Méthodes statistiques applicables aux petits échantillons (Hermann, 2010)
- E.G. Schilling, D.V. Neubauer, Acceptance sampling in quality control, 2nd ed. (CRC Press, 2009)
- R.B. D'Agostino, M.A. Stephens, eds., Goodness-of-fit techniques (Dekker, 1986)
- G.J. Hahn, W.Q. Meeker, Statistical intervals: a guide for practitioners (Wiley-Interscience, 1991)
- G. Blatman, B. Iooss, Confidence bounds on risk assessments – application to radiological contamination, in Proceedings of the PSAM11 ESREL 2012 Conference, Helsinki, Finland, June 2012 (2012), pp. 1223–1232
- R. Nelson, Probability, stochastic processes, and queuing theory: the mathematics of computer performance modeling (Springer, 1995)
- G. Woo, Confidence bounds on risk assessments for underground nuclear waste repositories, Terra Res. 1, 79 (1988)
- L. Guttman, A distribution-free confidence interval for the mean, Ann. Math. Stat. 19, 410 (1948)
- F. Pukelsheim, The three sigma rule, Am. Stat. 48, 88 (1994)
- D.F. Vysochanskii, Y.I. Petunin, Justification of the 3σ rule for unimodal distributions, Theor. Probab. Math. Stat. 21, 25 (1980)
- S. Dharmadhikari, K. Joagdev, Unimodality, convexity, and applications (Academic Press, 1988)
- W.T. Nutt, G.B. Wallis, Evaluation of nuclear safety from the outputs of computer codes in the presence of uncertainties, Reliab. Eng. Syst. Saf. 83, 57 (2004)
- D.B. Owen, Factors for one-sided tolerance limits and for variables sampling plans, SCR-607 (Sandia Corporation Monograph, 1963)
- G.E.P. Box, D.R. Cox, An analysis of transformations, J. R. Stat. Soc. 26, 211 (1964)
- M. Lemaire, Structural reliability (Wiley, 2009)
- I.R. Savage, Probability inequalities of the Tchebycheff type, J. Res. Natl. Bur. Stand. B: Math. Math. Phys. 65B, 211 (1961)
- S. Boucheron, G. Lugosi, P. Massart, Concentration inequalities: a nonasymptotic theory of independence (Oxford University Press, Oxford, 2013)
- B. Meidell, Sur un problème du calcul des probabilités et les statistiques mathématiques, C. R. Acad. Sci. 175, 806 (1922)
- D. Van Dantzig, Une nouvelle généralisation de l'inégalité de Bienaymé (extrait d'une lettre à M.M. Fréchet), Ann. Inst. Henri Poincaré 12, 31 (1951), available at: http://archive.numdam.org
- B. Efron, R.J. Tibshirani, An introduction to the bootstrap (Chapman & Hall, 1993)
- H.A. David, H.N. Nagaraja, Order statistics, 3rd ed. (Wiley, New York, 2003)
- C. Cannamela, J. Garnier, B. Iooss, Controlled stratification for quantile estimation, Ann. Appl. Stat. 2, 1554 (2008)
- S.S. Wilks, Determination of sample sizes for setting tolerance limits, Ann. Math. Stat. 12, 91 (1941)
- E. Hofer, Probabilistische Unsicherheitsanalyse von Ergebnissen umfangreicher Rechenmodelle, GRS-A-2002 (1993)
- A. de Crécy, P. Bazin, H. Glaeser, T. Skorek, J. Joucla, P. Probst, K. Fujioka, B.D. Chung, D.Y. Oh, M. Kyncl, R. Pernica, J. Macek, R. Meca, R. Macian, F. D'Auria, A. Petruzzi, L. Batet, M. Perez, F. Reventos, Uncertainty and sensitivity analysis of the LOFT L2-5 test: results of the BEMUSE programme, Nucl. Eng. Des. 12, 3561 (2008)
- E. Zio, F. Di Maio, Bootstrap and order statistics for quantifying thermal-hydraulic code uncertainties in the estimation of safety margins, Sci. Technol. Nucl. Install. 9, 340164 (2008)
- W. Hoeffding, Probability inequalities for sums of bounded random variables, J. Am. Stat. Assoc. 58, 13 (1963)
Cite this article as: Géraud Blatman, Thibault Delage, Bertrand Iooss, Nadia Pérot, Probabilistic risk bounds for the characterization of radiological contamination, EPJ Nuclear Sci. Technol. 3, 23 (2017)