How one theorises uncertainty is central to how uncertainty is analysed. If uncertainty is conceptualised as incomplete knowledge, uncertainty analysis will focus on describing knowledge gaps. If uncertainty is conceptualised as the challenge of predicting the future, uncertainty analysis will focus on describing how present variables may change in the future. In this post, I argue that a theoretical understanding of uncertainty is necessary to guide decision-making, and that the analysis of uncertainty must be theoretically informed, not only solution-oriented.

Theory of uncertainty

Uncertainty may be defined in general terms as “any deviation from the unachievable ideal of completely deterministic knowledge of the relevant system” (Walker et al., 2003). I chose this definition because it explicitly references a discussion that is central to the theorisation of uncertainty: how uncertainty affects the ideal of scientific knowledge. The theory of uncertainty is thus closely related to the theory of science.

There are longstanding theoretical debates on uncertainty, going as far back as Knight’s seminal 1921 book. Knight stressed the distinction between risk and uncertainty to explain the divergence between actual and theoretical competition and thus develop a theory of profit (Knight, 1921). Risk refers to measurable uncertainty, whereas what Knight calls “true” uncertainty is non-quantifiable. This distinction has ramifications for decision-making, as non-quantifiable uncertainty leads economic agents to behave differently from what theoretical competition predicts. Knightian uncertainty is a matter of incomplete knowledge. Incompleteness matters for how knowledge is used, but it is not a consequence of the uses of knowledge.
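
To make the distinction concrete, here is a minimal sketch in Python (the payoffs and probabilities are invented for illustration, not taken from Knight): under risk, an agent can compute an expected value because the probabilities are known; under Knightian “true” uncertainty, no probability distribution exists, so the same computation is simply undefined.

    # Knightian risk: the odds of each outcome are known, so an
    # expected value can be computed and acted upon.
    payoffs = [100.0, -50.0]    # possible profits (invented numbers)
    probabilities = [0.7, 0.3]  # known odds -> this is "risk"

    expected_profit = sum(p * x for p, x in zip(probabilities, payoffs))
    print(f"Expected profit under risk: {expected_profit}")  # 55.0

    # Knightian "true" uncertainty: the outcomes may not even be
    # enumerable, and no probabilities exist. There is nothing to plug
    # into the formula above; the calculation is not hard, it is undefined.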

Funtowicz and Ravetz also distinguish between different types of uncertainty: technical uncertainty, methodological uncertainty and epistemic uncertainty (Funtowicz & Ravetz, 1990).

  • Technical uncertainty is related to the application of technical knowledge to known challenges, e.g. risk management.
  • Methodological uncertainty is about which analytical method to apply, e.g. when modelling complex systems.
  • Epistemic uncertainty concerns problem framing: not knowing which type of knowledge or which knowledge claim to apply. In this case, uncertainty relates to the use of knowledge.

Under all three types of uncertainty, knowledge is limited because of the choice of approach, disciplinary lens and expertise. Uncertainty is thus not necessarily a matter of “weak knowledge” but an acknowledgement that, in Taoist terms, what can be named is not all there is to know about the whole.

Wynne, in turn, proposes four types of uncertainty (Wynne, 1992):

  • Risk: the system behaviour is basically well known, and the chances of different outcomes can be defined and quantified through structured analysis of mechanisms and probabilities.
  • Uncertainty: we know the important system parameters but not the probability distributions (the sketch after this list illustrates how this differs from risk).
  • Ignorance: we don’t know what we don’t know. This is not so much a characteristic of knowledge itself as of the commitments made about the completeness and validity of that knowledge.
  • Indeterminacy: introduces the idea of contingent social behaviour into the definition of uncertainty – uncertainty is not just a matter of inadequate scientific knowledge, but also of how knowledge and the application of knowledge constitute each other.
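
A minimal sketch (in Python, with an invented toy system) may help to illustrate the difference between the first two categories: under risk, the input distribution is known and the chance of an outcome can be quantified; under uncertainty, we know which parameter matters but not its distribution, so at best we can bound the outcome over a plausible range.

    import random

    def outcome(load: float) -> float:
        """Toy system response; stands in for a well-known mechanism."""
        return 2.0 * load + 1.0

    # Risk: the distribution of the input is known, so the chance of
    # exceeding a threshold can be quantified by Monte Carlo simulation.
    random.seed(0)
    samples = [outcome(random.gauss(10.0, 2.0)) for _ in range(100_000)]
    p_exceed = sum(s > 25.0 for s in samples) / len(samples)
    print(f"Risk: P(outcome > 25) is roughly {p_exceed:.3f}")

    # Uncertainty: 'load' is known to be the important parameter, but
    # its distribution is not. No probability can be computed; we can
    # only bound the outcome over a plausible range of values.
    lo, hi = outcome(5.0), outcome(15.0)
    print(f"Uncertainty: the outcome lies somewhere in [{lo}, {hi}]")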

Wynne criticises the conceptualisation of uncertainty as a matter of levels, or degrees, of uncertainty, and argues that different types of uncertainty may co-exist. “There is indeterminacy underlying scientific knowledge even when ‘uncertainty’ is small” (Wynne, 1992). Which type of uncertainty is included in decision-making thus depends not on the level of uncertainty, but on the relationship between science and policy. This relationship is mutually constituted: when science advisors produce indicators that signal, for instance, climate-related risks, they enact an interpretation of policy-makers as “unable to cope with more than a modicum of scientific uncertainty” (Shackley & Wynne, 1997). Policy-makers, in turn, may “conclude from the presentation of a single value by advisory scientists that they do not need to grapple with the issue of scientific uncertainty” (Shackley & Wynne, 1997). Uncertainty in this case is not exogenous to the practices of knowledge use, but is a consequence of these very practices.

Decision-making under uncertainty

Much of the uncertainty literature focuses on the science-policy interface (Funtowicz & Ravetz, 1990; Maxim & van der Sluijs, 2011; Stirling, 2010; Walker et al., 2003; Wynne, 1992). It is therefore important to also understand how the relationship between science and policy is theorised and enacted. What is the role of science and science advisors? What is the role of policy-makers? Are there two such distinct categories of people? Are their activities fundamentally different? Who is responsible for understanding uncertainty? Who is responsible for communicating uncertainty?

There are many different answers to these questions. I use two examples to illustrate how understandings of the science-policy interface may differ in the context of uncertainty.

EFSA

The European Food Safety Authority (EFSA) is a pioneer in the institutionalisation of uncertainty analysis. EFSA “was set up in 2002 following a series of food crises in the late 1990s to be a source of scientific advice and communication on risks associated with the food chain” (https://www.efsa.europa.eu/en/aboutefsa). Uncertainty analysis is thus at the heart of EFSA’s mandate. The agency published revised guidelines for uncertainty analysis in 2018, with a comprehensive overview of state-of-the-art quantitative and qualitative methods. The guidelines are based on the assumption that decision-making needs precise knowledge, and that uncertainty should be communicated clearly, preferably through quantitative measures of probability, to avoid or minimise ambiguity. The relationship between science and policy is framed as a linear model in which science gives policy-makers the facts – both about what is known and about what is not known. Uncertainty is treated as something that should be precisely defined, measured and communicated. There is a desire for complete knowledge about uncertainty, and for a kind of knowledge of uncertainty that conforms to the standards of laboratory science: uncertainty should ideally be defined objectively, precisely and unambiguously. EFSA also defines roles and responsibilities very clearly: “Assessors are responsible for analysis of uncertainty; decision-makers are responsible for resolving the impact of uncertainty on decision-making” (EFSA, 2018: 5).
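
The guidelines’ preference for quantitative expression can be illustrated with a minimal sketch (the distribution, its parameters and the threshold below are invented for illustration, not taken from EFSA): the assessor’s uncertainty about a quantity is expressed as a probability distribution, and what is communicated is a single, precise probability statement.

    import random

    # A hypothetical assessor's uncertainty about a contaminant
    # concentration, expressed as a probability distribution
    # (lognormal parameters invented for illustration).
    random.seed(1)
    concentrations = [random.lognormvariate(0.0, 0.5) for _ in range(100_000)]

    SAFETY_THRESHOLD = 2.0  # invented regulatory limit

    # The style of communication the guidelines favour: a precise
    # probability statement rather than a vague verbal qualifier.
    p_safe = sum(c < SAFETY_THRESHOLD for c in concentrations) / len(concentrations)
    print(f"Assessors are {100 * p_safe:.0f}% certain that the "
          f"concentration is below the safety threshold.")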

Post-normal science

In post-normal science, the “limits of knowledge” are not used as an argument in support of what nowadays might be called alternative facts, but as the basis for a theoretically informed understanding of uncertainty as a challenge for decision-making (Funtowicz & Ravetz, 1993). Funtowicz and Ravetz distinguish between levels of uncertainty and decision stakes. When uncertainty and decision stakes are both low, normal science can be applied to the problem. Low uncertainty may refer to different types of uncertainty: about the definition of the problem, the approach to be used and the social actors to be involved in decision-making. As uncertainty and decision stakes increase, as may be the case in medical surgery, a consultancy approach may be used: more than one expert (and problem definition) may be consulted, more than one cure (method) may be applied, and more knowledge claims may be relevant. In the case of irreducible uncertainty and high decision stakes, scientific expertise may no longer suffice, and an “extended peer review” may be necessary to assess problem definitions, relevant sources of knowledge and approaches.

Funtowicz and Ravetz thus advance different conceptualisations of science – applied science, consultancy and post-normal science – and of the relationship between science and policy-making, one which takes into account not only “the facts” but also stakes and stakeholders. The concept of “extended peer community” also signals a diffusion of responsibility, and of the roles of knowledge production (in which experts and non-experts participate) and of decision-making.

Quantitative and qualitative analysis of uncertainty

Uncertainty raises a debate about the very understanding of what science is, and of how science can and should be used in decision-making. For this reason, choosing a method for uncertainty analysis is not only a practical question; it also carries value judgements. Quantitative uncertainty analysis that measures uncertainty as probability limits uncertainty to risk. Questions of indeterminacy and epistemic uncertainty call for approaches such as institutional analysis, policy analysis and narrative analysis. Quantitative and qualitative approaches thus reflect different theoretical understandings of uncertainty. Uncertainty analysis is not just about choosing the appropriate method, or combining quantitative and qualitative methods; it is part of the practice of defining the role of science and the use of scientific information in policy advice. The choice of analytical methods should thus be guided by questions such as: What is at stake in the analysis of uncertainty? Whose uncertainties are visible and whose are invisible? How does communicating uncertainty affect trust between science, policy and society? How does communicating uncertainty affect responsibility?
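
One way to see the limit of the quantitative framing is the following sketch (toy model and numbers invented for illustration): two equally defensible assumptions about the same unknown input yield two different “precise” exceedance probabilities, and nothing in either output signals that the choice of distribution was itself a judgement. The computed probability is the visible uncertainty; the framing behind it stays invisible.

    import random

    random.seed(2)
    THRESHOLD = 14.0  # invented decision threshold
    N = 100_000

    # Two equally plausible assumptions about the same unknown input.
    normal_inputs = (random.gauss(10.0, 2.0) for _ in range(N))
    uniform_inputs = (random.uniform(4.0, 16.0) for _ in range(N))

    p_normal = sum(x > THRESHOLD for x in normal_inputs) / N
    p_uniform = sum(x > THRESHOLD for x in uniform_inputs) / N

    # Both numbers look precise, but each answers a question whose
    # framing (the choice of distribution) was itself a judgement.
    print(f"P(exceedance | normal assumption):  {p_normal:.3f}")
    print(f"P(exceedance | uniform assumption): {p_uniform:.3f}")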

 

References

EFSA (European Food Safety Authority) Scientific Committee, Benford, D., Halldorsson, T., Jeger, M. J., Knutsen, H. K., More, S., Naegeli, H., Noteborn, H., Ockleford, C., Ricci, A., Rychen, G., Schlatter, J. R., Silano, V., Solecki, R., Turck, D., Younes, M., Craig, P., Hart, A., Von Goetz, N., Koutsoumanis, K., Mortensen, A., Ossendorp, B., Martino, L., Merten, C., Mosbach-Schulz, O., & Hardy, A. (2018). Guidance on uncertainty analysis in scientific assessments. EFSA Journal, 16(1), 5123.

Funtowicz, S. O., & Ravetz, J. R. (1990). Uncertainty and quality in science for policy. Dordrecht: Kluwer Academic Publishers.

Funtowicz, S. O., & Ravetz, J. R. (1993). Science for the post-normal age. Futures, 25(7), 739–755.

Knight, F. H. (1921). Risk, uncertainty and profit. Boston: Houghton Mifflin.

Maxim, L., & van der Sluijs, J. P. (2011). Quality in environmental science for policy: Assessing uncertainty as a component of policy analysis. Environmental Science and Policy, 14(4), 482–492. https://doi.org/10.1016/j.envsci.2011.01.003

Shackley, S., & Wynne, B. (1997). Global warming potentials: Ambiguity or precision as an aid to policy? Climate Research, 8(2), 89–106. https://doi.org/10.3354/cr008089

Stirling, A. (2010). Keep it complex. Nature, 468, 1029–1031.

Walker, W. E., Harremoës, P., Rotmans, J., van der Sluijs, J. P., van Asselt, M. B. A., Janssen, P., & Krayer von Krauss, M. P. (2003). Defining uncertainty: A conceptual basis for uncertainty management in model-based decision support. Integrated Assessment, 4(1), 5–17. https://doi.org/10.1076/iaij.4.1.5.16466

Wynne, B. (1992). Uncertainty and environmental learning: Reconceiving science and policy in the preventive paradigm. Global Environmental Change, 2(2), 111–127. https://doi.org/10.1016/0959-3780(92)90017-2

 

