How to provide good science advice to policymakers, under conditions of scientific complexity and uncertainty?

There are three points that I would like to make. 

First, it is important to draw a distinction between complexity and complicatedness. Something complicated is difficult to describe and to make sense of; it may have many variables and many variations. Complexity, on the contrary, is easy to understand and to describe. One of the classic examples of complexity is that of starlings: the beautiful formations of a group of starlings flying in the sky are the result of “non-aggregative interactions or self-organizing that derives from local rules of motion plus feedback among the individuals in group flight” (Mitchell, 2009: 35). This is easy to describe, but it does not lead to predictions. An example that may be closer to policy, and that can be described in a similar way, is that of subprime mortgages. At the individual level, economic agents respond to simple profit-maximization rules: a risky bond is sliced up into pieces and combined with less risky bonds to form a financial product that sells on the market. However, at a higher level of analysis, when considering the economy as a whole, the individual decisions of rational economic agents led, in 2008, to a distribution of risk across the whole economy and to a systemic loss of information about the location and quantity of risk. A rational choice at the individual level led to a financial crisis at the level of the economy. What characterizes complexity is that it cannot be described at only one scale (only the individual starlings or only the group as a whole). Complexity requires multiple levels of analysis; it requires information that cannot be compressed into one number, one indicator.
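The starling example can be made concrete with a few lines of code. The sketch below is a minimal, hypothetical boids-style simulation: each agent follows three purely local rules (cohesion, alignment, separation) with respect to its neighbours, and flock-level patterns emerge without any central coordination. The coefficients, radii, and agent counts are illustrative assumptions, not a model of real starlings.

```python
# Minimal boids-style sketch: local rules only, no global controller.
# All numeric parameters below are illustrative assumptions.
import random

def step(positions, velocities, radius=5.0, dt=0.1):
    """Advance every agent one time step using only local information."""
    new_vel = []
    for i, (px, py) in enumerate(positions):
        # Each agent perceives only neighbours within a local radius.
        nbrs = [j for j, (qx, qy) in enumerate(positions)
                if j != i and (qx - px) ** 2 + (qy - py) ** 2 < radius ** 2]
        vx, vy = velocities[i]
        if nbrs:
            # Cohesion: steer toward the local centre of mass.
            cx = sum(positions[j][0] for j in nbrs) / len(nbrs)
            cy = sum(positions[j][1] for j in nbrs) / len(nbrs)
            vx += 0.05 * (cx - px)
            vy += 0.05 * (cy - py)
            # Alignment: match the average heading of neighbours.
            ax = sum(velocities[j][0] for j in nbrs) / len(nbrs)
            ay = sum(velocities[j][1] for j in nbrs) / len(nbrs)
            vx += 0.05 * (ax - vx)
            vy += 0.05 * (ay - vy)
            # Separation: move away from agents that are too close.
            for j in nbrs:
                dx, dy = px - positions[j][0], py - positions[j][1]
                if dx * dx + dy * dy < 1.0:
                    vx += 0.1 * dx
                    vy += 0.1 * dy
        new_vel.append((vx, vy))
    new_pos = [(px + vx * dt, py + vy * dt)
               for (px, py), (vx, vy) in zip(positions, new_vel)]
    return new_pos, new_vel

random.seed(0)
pos = [(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(30)]
vel = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
for _ in range(100):
    pos, vel = step(pos, vel)
```

The point of the sketch is the asymmetry the essay describes: the rules fit on a page and are easy to communicate, yet predicting the shape the flock will take requires running the system, and the description cannot be compressed into a single number.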

The distinction between complexity and complicatedness matters for policy making, because it guards against the false impression that complexity is at odds with communicability. Complicated things are difficult to communicate. Complexity is not. This means that science need not strip complexity out of its policy recommendations. There is scientific jargon that is far more difficult to understand than complexity. Policy makers not only understand complexity, they experience it. Complexity makes more sense than a simple indicator. Let’s communicate complexity.

The second point is that complex systems are adaptive systems. An example that many will be familiar with is the Jevons paradox: although improvements in efficiency lead to savings in consumption in the short run, in the long run the system adapts. Jevons studied coal consumption in the UK and observed that as coal engines became more efficient, they could also be profitably deployed outside coal mines, and started being used for transportation. In the long run, coal engines revolutionized transport on rail and water, leading to greater coal consumption. Another example may be computers, which were meant to save time spent on calculations and have instead completely changed the way we work. If the system adapts to interventions, if changing one part generates feedback loops and trade-offs, if initial choices lead to path dependency down the line, then the capacity of policy to steer a complex system is limited. Complexity means that truth doesn’t always lead to action, determinism doesn’t always lead to prediction, rationality doesn’t always lead to certainty. Local rules can be described in deterministic terms, and can be rational, but at the level of the whole that does not make the system easy to steer.
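The rebound logic behind the Jevons paradox can be sketched as a toy calculation. The function below is a hypothetical illustration, not an empirical model: it assumes demand for an energy service responds to the effective cost of that service with a constant elasticity, so that when efficiency makes the service cheaper, demand rises. If demand is elastic enough (elasticity above 1), total fuel use increases despite the efficiency gain.

```python
# Toy rebound-effect calculation (illustrative assumptions throughout):
# service demanded S scales with effective cost^(-elasticity),
# and effective cost per unit of service falls as efficiency rises.
def fuel_use(efficiency, elasticity, base_service=100.0, base_eff=1.0):
    """Fuel consumed after demand adjusts to a new efficiency level."""
    # Demand response: cheaper service -> more service consumed.
    service = base_service * (efficiency / base_eff) ** elasticity
    # Fuel needed to deliver that much service at the new efficiency.
    return service / efficiency

before = fuel_use(1.0, 1.5)   # baseline efficiency
after = fuel_use(2.0, 1.5)    # efficiency doubles, demand is elastic
# With elastic demand, doubling efficiency *increases* total fuel use.
```

With inelastic demand (elasticity below 1) the same function shows fuel use falling, which is the short-run intuition the paradox overturns: whether efficiency saves fuel depends on how the system adapts, not on the engineering improvement alone.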

 

Adaptability poses a problem of uncertainty. There is irreducible uncertainty in complex systems: that is, uncertainty is not just a problem of missing data (which could be collected with more time, money, technology), not just a problem of models (which could be improved, refined, integrated, made interdisciplinary), not just a problem of ignorance (which could be reduced with more research, with the involvement of different people, cultures, practices). Uncertainty may also be due to an object of study that changes as we study it, that changes because we study it. Rather than governance of complexity, we may speak of governance in complexity (Rip, 2006). Governance in complexity entails letting go of predictability and control, letting go of precise science and allowing uncertainty to be part of the information that science produces. That is, governance in complexity could be seen as a shift from treating uncertainty as a temporary issue, as something that more research and more funds can fix, to treating it as something that is unavoidably part of science and of policy making.

What might governance in complexity look like?

The third point that I would like to make is that governance in complexity is not about shifting responsibility. It’s not about taking responsibility away from science and letting the policy maker deal with uncertainty. A fruitful parallel can be made with health care: a doctor who informs the patient or the patient’s family about their options with regard to a risky surgery does not make the decision any easier. Informing corresponds to a logic of choice: you know what the options are, now choose. The problem of complexity is not that it is difficult to communicate, but that in communicating complexity one shifts the burden of making an impossible choice onto policy makers. An alternative would be to think of the science-policy interface through the logic of care (Mol, 2008): difficult decisions are difficult to make.

The logic of care is not about being pessimistic and focusing on what cannot be done. Admitting the limits of knowledge does not mean that no policy advice can be given by science other than focusing on the environmental disasters that need to be stopped, on unintended consequences. The logic of care invites dialogue, openness, humility. It invites relationships that are protracted in time. Humility means letting go of the idea of infallibility, without tossing science out with the famous bath water. As a scientist, I have to be able to say that I don’t know, or that I didn’t get the result I expected, without losing funding. Care requires a cultural shift that moves beyond, and rejects, the easy-fix mentality. There are no silver bullets, but there may be long-term improvements that require a lot of work. The logic of care could be about understanding what science can contribute, where its strengths lie, and where its limits lie. While scientific knowledge cannot be used to predict the exact height that a child will grow up to, it can be used to say with certainty that no one will grow to 5 meters. Biological laws hold even if they don’t produce precise predictions. The logic of care means building trust and respect for the knowledge of others, interest in understanding policy processes, the challenges that policy makers face, the multiple needs they have to attend to. Building respect and trust means that even when the effects of policies are not predictable, this need not lead to a crisis of trust in science and in political institutions.

 

References

Mitchell, S. 2009. Unsimple truths. University of Chicago Press.

Mol, A. 2008. The Logic of Care: Health and the Problem of Patient Choice. New York: Routledge.

Rip, A. 2006. “A co-evolutionary approach to reflexive governance – and its ironies” in J-P. Voss, D. Bauknecht, R. Kemp (eds). Reflexive Governance for Sustainable Development. Edward Elgar.

