On February 21st, 2019, I was invited to give a keynote at the Joint EFSA/BfR International Conference on Uncertainty in Risk Analysis. You can watch the video of the talk here.

And I share the text of the talk below!


Uncertainty within and beyond the data: approaches to deal with different types of unknowns

Why do we measure uncertainty?

Uncertainty is very much a part of measurement. Quantitative measurements are often preferred because they are thought to be more precise than qualitative assessments, and thus to leave less room for interpretation. I want to make the point that precision does not reduce uncertainty.

I will use the example of the metric system to show that standard measurements themselves produce uncertainty. Before the adoption of the metric system, trade in grains and produce was much more localized, and the quantities exchanged were measured using baskets and pots. The term “bushel” refers to the use of a basket as a means to measure quantities. The use of bushels did not guarantee that different producers traded in similar quantities, as bushels came in many different sizes. For this reason, many towns in Europe during the Middle Ages had an official bushel, sometimes carved in stone and placed at the entrance of the town, which was to be used to measure the quantities of grain, fruits and liquids that were traded. Even with a single bushel, measurements could be meddled with, and for this reason rules were created establishing, for instance, whether the bushel should be filled until it was flat on top or until it had a small heap, and whether grains should be traded dry or wet, as some grains gained volume when wet, so that wetting or drying them could alter the quantity being traded.

Theodore Porter and James Scott explain that standardized measurements were introduced by central governments in order to allow for trade between villages and to create bigger markets. Standardisation required that all the different bushels, pots, stone carvings, etc. were translated into an equivalent measure in kilograms, or pounds in the UK and US. Standardisation required a great effort, as people were not familiar with abstract concepts such as the kilogram: a kilogram was not something observable or relatable to experience, the way a basket or a bushel is. A kilogram is not a description of reality, but an abstract convention. Conversion tables were used to help with these translations.

In this great effort to switch to standard measurements, the knowledge of how to fill a basket, of whether it was better to use dry or wet grains to maximize the volume, of how each grain was different: all this knowledge was lost. Standardisation made it possible for someone to trade without knowing what was being traded. The metric system makes grains legible at a distance, but it also implies an important loss of information. Standard measures therefore also caused a loss of knowledge: they created uncertainty. This example shows how quantification is the result of a convention about what to measure and how to measure it – but also how measurement entails loss of information, and therefore uncertainty.

Rather than measuring uncertainty, it may be better to talk of assessing uncertainty, which can be thought of as acknowledging the knowledge that is lost through measurement. As in the case of grains, uncertainty goes well beyond the standard error in the measurement of weight: it includes knowledge about the seasonality of grains, the quality of yields in the current year, the nutritional value of each type of grain, how they were handled, how far they travelled. Beyond the analogy with grains, uncertainty is a shorthand to signal the limits of knowledge.

There are different types of uncertainty. What I would like to do is to introduce some of the theory of uncertainty, and give an overview of some of the methods that can be used to analyse these different types.

Knight first introduced the distinction between risk and strict uncertainty in economics in the 1920s. (i) Risk is a situation in which the possible outcomes are known and the probabilities associated with each outcome can be calculated. An example of risk is a casino game of roulette. (ii) Strict uncertainty is a situation in which the possible outcomes are known but the associated probabilities cannot be calculated. An example of strict uncertainty is a hurricane: there are areas that are known to be prone to hurricanes, like the Gulf of Mexico, but this knowledge cannot be used to predict when the next hurricane will happen. The novelty introduced by Knight is that not all types of uncertainty can be quantified. Risk is quantifiable, through probabilities and statistics, but strict uncertainty is less amenable to quantification. As in the example of hurricanes, quantitative information about, say, the historical occurrence of hurricanes does not reduce uncertainty about the future.

Brian Wynne introduced two further types of uncertainty. (iii) Ignorance is a situation in which possible outcomes are unknown: “we don’t know what we don’t know”. An example of ignorance would be the existence of black swans before the so-called discovery of Australia; black swans were simply not considered a possibility. (iv) Indeterminacy, or systemic uncertainty, is a situation in which “causal chains or networks are open”. Indeterminacy is opposed to determinacy: it refers to the complexity of what is studied, the existence of non-linear causal relations, and the fact that knowledge cannot be used to make predictions about complex systems. Indeterminacy is associated with systemic change, which means that “that which we don’t know” may itself be changing and may not be knowable. The example I use is that of computers, which were supposed to make work faster and have instead changed the way we work. These different types of uncertainty speak not only of the limits of existing knowledge but also of the limits of what can be known.

Finally, Andy Stirling introduces ambiguity as an additional type of uncertainty. Ambiguity is a situation in which it is unclear which of the known outcomes will be realized, not because of a lack of knowledge – probabilities may even be known – but because the evidence is multiple, divergent and contested. BPA is an example: it is one of the most studied chemicals, yet this does not reduce the controversy. Ambiguity alerts us to the fact that more research may increase uncertainty rather than reduce it: new evidence may contradict previously existing knowledge, turning knowledge gaps into controversies.

Qualitative methods are necessary for two reasons. The first is that some types of uncertainty cannot be meaningfully quantified, or cannot be quantified at all: quantification does not help predict hurricanes and does not help resolve the controversy over BPA, and in the case of unknown unknowns quantification is not possible at all. The second is that a more qualitative assessment may help identify some of the knowledge that is lost, or not considered, in the problem definition.

There are many qualitative methods of uncertainty analysis, which I divide into two types of approaches: those that categorize uncertainty and those that deal with the non-quantifiable aspects of uncertainty. I will give an example of how these methods can be applied, referring to one method for each of these approaches.

The first example of qualitative method is NUSAP. The NUSAP system was introduced by Silvio Funtowicz and Jerome Ravetz, and developed by Jeroen van der Sluijs – my colleagues at the University of Bergen. NUSAP is aimed at the characterization of the uncertainty not explicitly communicated by numbers. The NUSAP system combines quantitative and qualitative assessments of the uncertainty present in scientific information. Numbers are classified in terms of Numeral (the quantity), Unit (the type of measure), Spread (the statistical error), Assessment (the quality of the information), and Pedigree (the quality of the process producing the information). The NUSAP system has been widely applied to a variety of case studies, such as the uncertainties surrounding climate change predictions (Wardekker et al. 2008), groundwater modeling (Refsgaard et al. 2006), and the monitoring of emissions and environmental assessment in the Netherlands (van der Sluijs et al. 2005). The Netherlands Environmental Assessment Agency has adopted NUSAP as part of its Guidance on Uncertainty Assessment and Communication, and it is also one of the methods in EFSA’s Guidance on Uncertainty Analysis in Scientific Assessments.

The various categories make it possible to assess the quality of the estimates used and the possible tradeoffs between, for example, assessment and spread: increasing the sample size, for instance, can help reduce the spread but may come at the expense of the quality of the study. The pedigree category gives an additional measure of the confidence of the experts involved in the data and measurement schemes adopted (van der Sluijs et al. 2005). Pedigree yields a series of scores given by different experts to the process through which results are produced: it evaluates the quality of the methods used, the quality of the data, and the degree of acceptance of the approach in a scientific field, and it may ask for an assessment of the quality of the theories used. Experts assess the assumptions behind the numbers analyzed. The measure of uncertainty is given by the degree to which experts’ scores diverge rather than by the actual scores themselves (van der Sluijs et al. 2005). This makes the vagueness and ambiguity associated with numbers explicit and easy to communicate to policy makers. By relying on expert elicitation, NUSAP captures the tacit knowledge of experts, the craft skills, the knowledge that comes with the skill of filling the bushel with different types of grain.
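To make the pedigree idea concrete, here is a minimal sketch, in Python, of how diverging expert scores might be summarized. The criteria, the 0–4 scale and the scores themselves are hypothetical and only illustrate the logic described above; they are not taken from any actual NUSAP study.

```python
# A minimal, hypothetical sketch of a NUSAP-style pedigree summary.
# The criteria, experts and scores below are invented for illustration.
from statistics import median, pstdev

# Pedigree scores on a 0 (weak) to 4 (strong) scale, one entry per expert.
scores = {
    "proxy":                 [3, 2, 4, 3],
    "empirical basis":       [2, 1, 3, 1],
    "methodological rigour": [4, 4, 3, 4],
    "validation":            [1, 3, 0, 2],
}

for criterion, expert_scores in scores.items():
    med = median(expert_scores)     # central tendency of the expert scores
    spread = pstdev(expert_scores)  # divergence among experts
    # In NUSAP it is the divergence, not the score itself, that signals
    # how contested the underlying assumptions are.
    flag = "contested" if spread >= 1.0 else "agreed"
    print(f"{criterion:22s} median={med:.1f} divergence={spread:.2f} ({flag})")
```

In this toy summary, a criterion with a high score but strong disagreement among experts would still be flagged as contested, which is exactly the kind of information that the numbers alone do not communicate.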

This type of analysis is being applied by my colleague Dafne Lemus to the BPA case. This is a highly controversial case, in which more studies often lead to more controversy, and more precision does not mean less uncertainty. BPA has become a textbook example of cases in which scientific studies produce different results from the same data. This calls for an analysis of uncertainty within and beyond the data. The pedigree analysis can help identify the uncertainties related to different theoretical assumptions and different models or combinations of models. In addition, I argue that it is useful to distinguish between the different types of uncertainty, which brings me to my second example, related to typology analysis.

One example of the second type of approach to uncertainty analysis is to distinguish between technical uncertainty, methodological uncertainty and epistemic uncertainty. I will refer to a case study to explain how this approach can be applied. The case study concerns Enkanini, an informal settlement in the city of Stellenbosch, South Africa. Enkanini was built on occupied land, without authorization from the local government, and it therefore lacks basic services such as electricity provision and waste collection. Because of the lack of electricity, residents mainly use paraffin for lighting, cooking and heating, which has terrible consequences for health when burned indoors. As a consequence, one of the requests that emerged from the residents of Enkanini was to be connected to Eskom, the national electric utility. In response to this demand, a local NGO set up a project that provides small rooftop solar panels. Although the solar panels apparently responded to the request for electricity, the project caused an important conflict in the settlement, and residents vandalized the infrastructure, bringing the project to a halt. What happened? I argue that the problem resides in the type of knowledge that was not considered, that is, in the poor management of uncertainty.

The project solved the technical uncertainty associated with the practical issue of how to provide electricity. This was solved through a technological fix. However, the electricity provided by the solar panels was not enough to meet the energy needs of the residents: electricity could only be used for lighting, did not substitute paraffin for cooking and heating, and did not meet the demand. Demand could not be easily estimated because of methodological uncertainty. The population doubled in four years, and changes in demand were due not only to population growth but also to changes in household composition: for fuels used for lighting, consumption does not change with more people, while for fuels used for cooking, more people means more consumption. So different fuels have different demand patterns, and these consumption patterns interact with each other, so that introducing electricity changed the energy mix as a whole.

Finally, and most importantly, the project only took into account technical knowledge and ignored the local knowledge of Enkanini’s dwellers. The request to be connected to Eskom was seen by Enkanini’s residents as a means to obtain official recognition of the settlement and the provision of public services in general. For this reason, they saw the solar panels as a threat to their requests, and this created a conflict. This is an instance of epistemic uncertainty: by privileging technical knowledge, the problem was misinterpreted and the solution misguided. Uncertainty in this case arises from the framing of the problem as a technical issue, and from ignoring its social and political dimensions.

Coming back to the more familiar example of BPA, this is also a case in which different types of uncertainty are present. Technical uncertainty is found in the way the practical problem is translated into a technical problem: how to assess the safety of BPA? Choosing to carry out a risk assessment does not eliminate the uncertainty: how should risk be assessed? By establishing a Tolerable Daily Intake (TDI)? Should we look at one chemical at a time or at cocktail effects? Should we look only at food exposure or at all kinds of exposure?

Methodological uncertainty refers to how the assessment is done. Also in this case, there are many questions: should we use standardized studies or independent studies? Standardized studies have statistical power but are not sensitive to non-standard side effects, take a long time to develop, are not easily updated and may not respond quickly enough to regulation needs. Independent studies, on the other hand, raise questions of reproducibility and do not have the same statistical power. If one uses multiple approaches, how should results be combined? How should evidence be weighted? What time frame should be used to capture long-term effects?

Finally, there is epistemic uncertainty linked to the different types of knowledge involved – toxicology and endocrinology. Which type of knowledge should be taken into account, since BPA raises concerns both for health and for agriculture? Epistemic uncertainty is related to what is at stake in the BPA risk assessment: is the health of future generations at stake? What are the ethical implications? The ban of BPA in some countries has led the industry to use other types of bisphenol, like BPB and BPF, which are also endocrine disruptors but are not regulated yet. So the challenge is bigger than establishing the TDI of one chemical. These questions make the assessment really difficult, which is probably why so many studies have not managed to decrease the controversy.

To conclude: I have argued that quantification is inevitably linked to uncertainty and that it is important to pay attention to the limits of knowledge and to the types of knowledge that are lost through quantification. I want to stress that there is a difference between speaking of “limits of knowledge” and doubting the validity of knowledge. The fact that scientific evidence does not give all the answers does not mean that it is not valid, or that one might just as well base decisions on reading the coffee grounds at the bottom of the cup. What it means is that scientific evidence should be complemented with additional types of knowledge, such as lay knowledge, local knowledge, and the practical knowledge that comes with the skills of craftsmanship. What this means for uncertainty analysis is that the choice of method is not simply a technical choice, but depends on what is at stake.

There is no blueprint for uncertainty analysis. A simple way of putting it is that qualitative analysis is not a second-best option for when quantification is not possible; it is rather a complementary type of assessment that should be done on top of the quantitative assessment, especially when producing evidence that is policy relevant and politically sensitive. A less simple way is that uncertainty analysis requires a critical reflection on the limits of knowledge, and that analyzing uncertainty does not tame the uncertainty.

