Consensus and Uncertainty
Since its origins, the IPCC has been open and explicit about seeking to generate a ‘scientific consensus’ around climate change, and especially around the role of humans in climate change. Yet this has been a source of both strength and vulnerability for the IPCC. Understanding consensus as a process of ‘truth creation’ (or the more nuanced ‘knowledge production’) which marginalises dissenting voices – as it has frequently been portrayed by some of the IPCC’s critics (see Edwards & Schneider, 2001; Petersen, 2010) – does not do justice to the process. Consensus-building in fact serves several different goals. As Horst and Irwin (2010) have explained, seeking consensus can be as much about building a community identity – what Haas (1992) refers to as an epistemic community – as it is about seeking the ‘truth’. Equally, as Yearley (2009) explains, IPCC consensus-making is an exercise in collective judgement about subjective (or Bayesian) likelihoods in areas of uncertain knowledge.

Consensus-making in the IPCC has been driven largely by the desire to communicate climate science coherently to a wide spectrum of policy users – ‘to construct knowledge’ (Weingart, 1999) – but in so doing the communication of uncertainties has been downplayed (van der Sluijs, 1998). As Oppenheimer et al. (2007: 1506) remark: “The establishment of consensus by the IPCC is no longer as critical to governments as [is] a full exploration of uncertainty.” Without a careful explanation of what it means, this drive for consensus can leave the IPCC vulnerable to outside criticism. Claims such as ‘2,500 of the world’s leading scientists have reached a consensus that human activities are having a significant influence on the climate’ are disingenuous. That particular consensus judgement, like many others in the IPCC reports, is reached by only a few dozen experts in the specific field of detection and attribution studies; other IPCC authors are experts in other fields.

But consensus-making can also attract criticism for being too conservative, as Hansen (2007) has most visibly argued. Was the IPCC AR4 too conservative in reaching its consensus about future sea-level rise? Many glaciologists and oceanographers think it was (Kerr, 2007; Rahmstorf, 2010), pointing to what Hansen attacks as ‘scientific reticence’. Solomon et al. (2008) offer a robust defence: far from reaching a premature consensus, the AR4 report stated that no consensus could be reached on the magnitude of the possible fast ice-sheet melt processes that some fear could lead to 1 or 2 metres of sea-level rise this century. These processes were therefore not included in the quantitative estimates.

This leads on to the question of how uncertainty more generally has been treated across the various IPCC Working Groups. As Ha-Duong et al. (2007) and Swart et al. (2009) explain, despite efforts by the IPCC leadership to introduce a consistent methodology for uncertainty communication (Moss & Schneider, 2000; Manning, 2006), it has in practice been impossible to police. Different Working Groups, familiar and comfortable with different epistemic traditions, construct and communicate uncertainty in different ways. This opens up possibilities for confusion and misunderstanding not just for policy-makers and the public, but among the experts within the IPCC itself (Risbey & Kandlikar, 2007).
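The calibrated language at the heart of that guidance can be made concrete. The sketch below encodes the likelihood scale used in the AR4 Working Group I report as a simple lookup from an assessed probability to a verbal term; the data structure and function name are illustrative conveniences only, not a representation of any IPCC procedure.

```python
# Illustrative sketch (not IPCC practice): the calibrated likelihood scale
# of the AR4 Working Group I report, expressed as probability thresholds.
AR4_LIKELIHOOD_SCALE = [
    (0.99, "virtually certain"),
    (0.95, "extremely likely"),
    (0.90, "very likely"),
    (0.66, "likely"),
    (0.50, "more likely than not"),
    (0.33, "about as likely as not"),
    (0.10, "unlikely"),
    (0.05, "very unlikely"),
    (0.01, "extremely unlikely"),
]

def likelihood_term(p: float) -> str:
    """Return the calibrated AR4 term for an assessed probability p."""
    for threshold, term in AR4_LIKELIHOOD_SCALE:
        if p > threshold:
            return term
    return "exceptionally unlikely"  # assessed probability below 1%

print(likelihood_term(0.92))  # -> "very likely"
```

Part of the difficulty identified in the literature is precisely that such conventions varied between Working Groups, and that readers cannot be assumed to recover the numeric thresholds from the words alone.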
For Ha-Duong et al. (2007) this diversity is an advantage: “The diverse, multidimensional approach to uncertainty communication used by IPCC author teams is not only legitimate, but enhances the quality of the assessment by providing information about the nature of the uncertainties” (p.10). This position reflects that of others who have thought hard about how best to construct uncertainty for policy-relevant assessments (van der Sluijs, 2005; van der Sluijs et al., 2005). For these authors, ‘taming the uncertainty monster’ requires combining quantitative and qualitative measures of uncertainty in model-based environmental assessment: the so-called NUSAP (Numeral, Unit, Spread, Assessment, Pedigree) system (Funtowicz & Ravetz, 1990). Webster (2009) agrees with regard to the IPCC: “Treatment of uncertainty will become more important than consensus if the IPCC is to stay relevant to the decisions that face us” (p.39). Yet Webster also argues that such diverse forms of uncertainty assessment will require much more careful explanation of how different uncertainty metrics are reached; for example, the difference between frequentist and Bayesian probabilities, and the necessity of expert, and therefore subjective, judgements in any assessment process (see also Hulme, 2009a; Guy & Estrada, 2010).

This suggests that more studies such as Petersen’s detailed investigation of the claim about detection and attribution in the IPCC Third Assessment Report (Petersen, 2010; see also Petersen, 2000, 2006) are to be welcomed. He examines the crafting of this statement in both scientific and policy contexts, exploring the way in which the IPCC mobilised Bayesian beliefs and how outside review comments were either resisted or embraced. While he concludes that the IPCC writing team did a reasonable job of reflecting the state of knowledge in this specific area, he is also critical of the inconsistencies and ambiguities in the ways the IPCC, more broadly, handled and presented uncertainty (cf. Swart et al., 2009). Betz (2009) offers a second detailed case study of how the IPCC constructs its knowledge claims, this time a more theoretical and methodological one. Betz contrasts two methodological principles which may guide the construction of the IPCC climate scenario range: modal inductivism and modal falsificationism. He argues that modal inductivism, the methodology implicitly underlying the IPCC assessments, is severely flawed, and advocates a radical overhaul of IPCC practice to embrace modal falsificationism.

Equally important for the IPCC is how the uncertainties embedded in its knowledge claims are communicated and received more widely. This too is an area where scholars have been at work. Patt (2007) and Budescu et al. (2009) approach the question empirically, drawing upon psychological theory to examine how different forms of uncertainty communication used by the IPCC – for example, uncertainties deriving from model differences versus disagreements between experts – affect how the respective knowledge claims are received. Patt (2007) found that these two framings of uncertainty did influence lay perceptions, and Budescu et al. found that respondents interpreted the IPCC’s quantitative uncertainties in ways rather different from those intended by the Assessments. Both call for the social features of uncertainty to be attended to more carefully in future IPCC assessments, and suggest some alternative formulations.
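Webster’s distinction between frequentist and Bayesian probabilities maps loosely onto the two framings Patt examines: spread across a model ensemble invites a relative-frequency reading, while disagreement among experts invites a degree-of-belief reading. The sketch below, with entirely hypothetical numbers, contrasts the two readings of a single claim; it illustrates the conceptual distinction only and does not reproduce any IPCC calculation.

```python
# Two readings of "the probability that warming exceeds 2 degrees C".
# All numbers are hypothetical, chosen only to illustrate the distinction.

# Frequentist reading: probability as a relative frequency across a
# (hypothetical) model ensemble of warming projections, in degrees C.
ensemble = [1.8, 2.4, 2.1, 1.6, 2.9, 2.2, 1.9, 2.5]
p_frequentist = sum(1 for t in ensemble if t > 2.0) / len(ensemble)

# Bayesian reading: probability as a degree of belief, updated from an
# expert's prior odds by a likelihood ratio expressing how much more
# probable the observed evidence is if the claim is true.
prior_odds = 1.0        # expert initially indifferent (P = 0.5)
likelihood_ratio = 4.0  # evidence judged four times more likely under the claim
posterior_odds = prior_odds * likelihood_ratio
p_bayesian = posterior_odds / (1.0 + posterior_odds)

print(f"frequentist (ensemble frequency): {p_frequentist:.2f}")  # 0.62
print(f"Bayesian (posterior belief):      {p_bayesian:.2f}")     # 0.80
```

The subjective elements here – the prior and the likelihood ratio – are exactly the expert judgements that Webster argues require more careful explanation in any assessment process.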
Schenk and Lensink (2007) and Fogel (2005) examine more specific instances of uncertainty communication from IPCC assessments: respectively, uncertainty about future emissions of greenhouse gases and uncertainties in national inventories of greenhouse gas emissions. Schenk and Lensink (2007), for example, suggest improving the communication of complex messages from the IPCC through clearer reasoning when communicating with non-scientists, making emissions scenarios explicitly normative, and increasing stakeholder participation in scenario development.

— Mike Hulme