At Medsin's 2013 National Health Conference in Leeds, Ruth Laurence-King and I ran Healthy Planet's stream session on climate change, health and the fossil fuel industry's role in fuelling the climate change denial machine. This is the text of part of the stream, on the meaning of good and bad science in the context of the IPCC and NIPCC reports.
A few weeks ago, as most of you are probably aware, the UN’s
Intergovernmental Panel on Climate Change released part I of its 5th
Assessment Report on climate change. The IPCC is a democratic body with
participants from over 150 nations, which invites hundreds of scientists from
across the globe to participate in constructing systematic reviews of the best
evidence on issues of climate change in their area of expertise. The process by
which these reports are constructed is laid bare for all to see on the IPCC’s
websites, and the peer review process is entirely open – anyone can register to participate as a reviewer. The AR5 Working Group I (WGI) report concluded that warming of the atmosphere and oceans was unequivocal, and that it was extremely likely – at least 95% probable – that human influence had been the dominant cause of observed warming since 1950.
With any luck, fewer of you will be aware of another report
of climate change that was released in the past few months. Calling itself the
second report of the Non-governmental International Panel on Climate Change
(NIPCC), it details a list of grievances with the scientific consensus embodied
in the IPCC reports, attempting to review evidence that weighs against the
extent of anthropogenic climate change. This report was compiled by 47 authors
(35 of them scientists from a variety of fields) working for the Heartland
Institute, a libertarian think-tank bankrolled by tobacco, fossil fuel and
pharmaceutical companies with a fine tradition of mounting ‘scientific’
resistance to evidence of the dangers of second-hand smoke, the existence of
acid rain, and the growing depletion of the ozone layer – ably assisted in many of these enterprises by the NIPCC report’s lead author, former rocket physicist S. Fred Singer. The
NIPCC report finds, in contrast, that CO2 is a mild greenhouse gas that may at most produce a fraction of a degree of global temperature increase, which in any case would probably be beneficial for the world overall.
Now, if you had to pick one of these as an exemplar of good
science, which would you go for? Not hard really, is it… But it turns out that
it’s actually kind of hard to put your finger on precisely what ‘good science’
is; and, as the climate change denial machine shows, making that distinction
can be vitally important. I want now to use the IPCC and NIPCC reports to
explore a little what we might mean by good science, and how it is opposed to
pseudoscience, pathological science, cargo cult science – whatever you want to
call it, just ‘bad’ science. In the process, hopefully we’ll see a bit more
about the inner workings of climate change denial and its relation to industry.
In other words, Ruth suggested that maybe I could talk a bit
about philosophy of science and I had a geek-out and got a little bit carried
away…
Now, a lot of people think that they have a pretty good
handle on how scientific research works, even what The Scientific Method
(always be wary of capitalised nouns) might be. A lot of philosophers of
science are enamoured with inductive inference based on some form of Bayes’
theorem; perhaps the more prevalent popular impression is along the lines of
Karl Popper’s falsificationism. There are many other views besides, each attempting to construct a prescriptive model of the process of scientific inference. The problem is, none of them bears much relation to how scientists actually make such inferences.
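For the uninitiated, the Bayesian picture just mentioned can be put in a single line: evidence E confirms a hypothesis H to the extent that E is more probable given H than it is overall. In its standard form:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

So observing E raises the probability of H – confirms it – exactly when P(E | H) > P(E); the debates are over whether scientists do, or should, update their beliefs this way.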
Why would I think that the attempt to find a purified essence of The Scientific Method as a standard to which we ought to hold all our scientists is a vain one? Well, for starters, as people such as Paul Feyerabend have noted, once you try to prescribe any given methodology as Science, you end up consigning an awful lot of good, useful work to the junk heap. Einstein used fudge factors in general relativity (but when he tried to be a good Popperian, he made one of the most spectacularly incorrect predictions of his career); Dirac told experimentalists to re-do their sums because his theory was too beautiful to be wrong. Perhaps closer to home, even Koch (that’s Robert the 19th-century pathological anatomist, not one of the dirty-energy billionaires…) recognised that Koch’s Postulates weren’t much good as a definition of sound causal inference – and such principles of good science served more than anything to postpone such groundbreaking work in medical science as John Snow’s on cholera and Ignaz Semmelweis’ on puerperal fever.
Moreover, once you demand a whiter-than-white standard of scientific conduct, assuming that scientists make their decisions according to some neat prescriptive algorithm, you begin to get pretty sceptical about whether any good science happens at all. It shouldn’t be controversial to point out that scientists are people too, doing all kinds of things for all kinds of reasons, and susceptible to the whole range of decisional heuristics and biases that govern the rest of our actions and perceptions. More fundamentally, it’s a fairly common observation that any amount of observational evidence is in theory compatible with multiple, incommensurable hypotheses; something else needs to be invoked to fill in the gaps between observation and theory. As Miriam Solomon argues, the assumption of individualistic normative perspectives in scientific methodology plays into the hands of the most radical scepticism – because there’s not a whole lot of evidence that scientists are much good at living up to any norm you might suggest (well, short of Paul Feyerabend’s ‘anything goes’).
But that scepticism itself flies in the face of some obvious
facts about scientific decision-making; at some level, it must work, ‘cos there
are some pretty damn successful theories out there. And moreover, there does
seem to be a pretty important difference between evolution and creationism,
pharmacology and homeopathy, quantum mechanics and whatever abuse of the word
‘quantum’ Deepak Chopra chooses to employ; there’s an astounding difference in empirical success – the predictive,
retrodictive, explanatory and technological abilities of the theories in
question – and somewhere along the line there must be something that accounts
for these differences.
Given that we don’t seem able to find that something at the level of the individual scientist – the motives of the pharmacologist are no purer than those of the homeopath – why not broaden the perspective a little? That’s the basic tenet of an approach to these
questions that particularly excites me, what tends to get called social epistemology. The basic idea of
social epistemology is that knowledge claims don’t have to be evaluated at the
level of the individual knower, but can be considered an emergent property of
communities – indeed, on some accounts it might even be said that those
communities (rather than the individuals comprising them) are the knowers. On
such views, we can attempt to explain the differences between good science and
pathological science, not by the purity of researchers’ motives, nor their
unbending adherence to the catechism of Method, but rather by certain facts
about the social structures in which such knowledge is generated that provide a
kind of emergent objectivity, conducive to empirical success.
In a very broad-brush and non-exhaustive framework, we can
look at two approaches to examining these kinds of social conditions. The
first, which I’ll call procedural,
attempts to analyse what kinds of community we could describe as ‘objective’ –
‘fair’, or ‘unbiased’, in an epistemic sense – by examining the procedures by
which it conducts itself. The second, a consequentialist
approach, evaluates community structures, decision-making procedures, and so
forth, by their conduciveness to achieving the goals of the enterprise –
empirical success, in the current context.
An example of the first, procedural, approach, is found in
Helen Longino’s critical contextual
empiricism. She suggests four communal norms for what constitutes
‘objective’ knowledge-producing communities:
1) ‘Tempered’ equality of intellectual authority;
2) Public forums for criticism (peer review, conferences, replies to papers);
3) Responsiveness to criticism;
4) Shared values, especially those of empirical success.
Longino offers these criteria as an explication of what we
might mean by a community norm of ‘objective’ knowledge production and
evaluation. Her position gains at least some plausibility by considering what
their negations would be like – communities in which certain voices are written
out of debate a priori; where positions
are held dogmatically, without any attempt to engage with dissenting voices;
where such dissenting voices aren’t even granted an opportunity to be heard;
where there isn’t even an agreed goal on what it is people should be working
towards.
By contrast, Miriam Solomon suggests a thoroughly
consequentialist epistemology for science. Her approach is to look at the
history of science and ask, firstly: what sorts of social conditions have
influenced the success or failure of different theories? And secondly: what
conditions have influenced such success or failure in ways that tend reliably
to promote the success of empirically successful theories, and the failure of
unsuccessful ones? These influences can range across anything from a
community’s valuing a theory’s ability to generate novel predictions, to the effects
of peer pressure or deference to authority in shaping unanimity of theoretical
position. And there’s no necessary a
priori conclusion as to which kinds of influence are the ‘good’ ones (that
promote empirical success) – as Philip Kitcher says, scientific communities can
make good epistemic use of the grubbiest of motives.
I don’t want today to suggest a substantive social
epistemology of science, or to adjudicate between the procedural and
consequentialist approaches. I’d rather actually return to the notional point
of this session – climate denialism and the influence of industry. But both
Longino’s and Solomon’s approaches provide frameworks through which we can
consider the IPCC and NIPCC projects – and the various influences driving them
– and examine their conflicting claims regarding their validity as good
science. I don’t think it’d be much of a spoiler to let you know who comes out
on top.
For starters, let’s look at Longino’s first two criteria of
objectivity. Tempered equality of authority means that in more objective
communities, there are no privileged voices: the opinions of lone experts are not taken as gospel; rather, all voices are heard. There is ‘tempering’,
however, in the acknowledgement that differing degrees of expertise do lend
different weightings to the utility of different perspectives – though along
with the acknowledgement that with expertise may come greater susceptibility to
group social dynamics such as peer pressure, or availability or confirmation
biases. Public fora for criticism, meanwhile, require there to be venues in
which these voices can be expressed and have influence in the course of
scientific endeavour.
The equality of authority condition speaks in favour of
diversity in the knowledge-creating community, and systematicity in evaluating
the available evidence. The public fora requirement demands engagement with
critical voices and the opportunity to review available evidence. These are
ideals the IPCC at least tries to live up to; the authors number in the thousands
across all the working groups; and, to avoid anchoring to dominant
perspectives, new authors are involved in each iteration (over 60% of AR5 authors had not been involved in previous IPCC reports). The reviewer community is the
most diverse possible – literally anyone may submit their own review. Authors,
meanwhile, are explicitly cautioned to be systematic in evaluating the
evidence, and to “consider all plausible sources of uncertainty arising from
incomplete understanding of or [sic] competing conceptual frameworks for
relevant systems and processes.”
The same cannot be said of the NIPCC. The perspectives of
less-industrialised countries are neglected completely, the 47 authors being
drawn from just 14 wealthy nations (over half of them from the US or
Australia). Most of these authors were those who wrote the first NIPCC report;
and their ‘reviews’ rely heavily on self-citation. There is no explicit
statement of how they unearthed the documents they choose to cite, but a
narrative structure is very much in evidence – and one focused on any narrative
that opposes the IPCC findings. Furthermore, the NIPCC’s peer review structure
has been described as more akin to ‘pal review’ – sending a paper to a select
few like-minded individuals to give the veneer of respectability coming from a
peer review-like process, without the inconvenience of exposing oneself to
substantive criticism. The pal review process is a technique that has played a
core part in the climate denial arsenal for some time – several of the papers
cited by the NIPCC come from the journal Climate
Research during the period when Chris de Freitas was editor. Seventeen papers by a small group of deniers were published in the journal during this period, all bar three of them edited by de Freitas himself.
The diversity of the respective authorial and critical
communities also comes heavily into play in evaluating where the IPCC and NIPCC
stand with reference to Longino’s third requirement, of responsiveness to
criticism. To evaluate the IPCC’s engagement with critical challenges levelled
against it, one need only look at the organisation’s website – the IPCC publishes detailed records of changes made in response to review comments, right down to each replacement of the word ‘period’ with ‘interval’ throughout the text. Mistakes in previous reports – of course there have been mistakes,
there’s no such thing as a finished science – have been acknowledged, worked on
and revised by the community of climate science researchers. Erroneous claims
aren’t propagated through numerous iterations of the report.
The same can’t be said for the NIPCC. Of the small sample of
relevant papers it actually deigns to cite, a number had had their claims resoundingly rebutted well before the compilation of the second review – but those claims still survived into the second report. Reports of sea level rises that even their own authors have since retracted, predictions that an end to El Niño-driven warming would make 2011 the coolest year since 1956, and
estimates of solar forcing that don’t survive even the most cursory of stress
testing, all find a second life in the NIPCC report.
So, lastly, we come to the question of shared values: what is it the authors are really working for in compiling these reports? Both, of course, claim to promote the values of
empirically successful science in their work. Whether they do so, however, is
another matter.
A brief comparison of the respective reports’ chapters on
climate modelling is illustrative. The IPCC AR5 chapter, ‘Evaluation of Climate Models’, opens – after an overview of the mechanics of climate modelling – with a detailed discussion of the criteria by which climate models are evaluated, of how modelling research was assessed by the report authors, and of the limitations of their approach. They
emphasise the values of predictive success, robustness, and other criteria of
empirical success in their account. The NIPCC CCR2, meanwhile, opens its chapter – entitled ‘Global Climate Models and Their Limitations’ – with a discussion of the failure of climate models to live up to the standards of a different field of prediction, weather forecasting; research highlighting the low success rate of political forecasting; and then, having just denigrated the opinions of experts, a few anecdotes from a handful of scientists expressing scepticism about the value of climate models. This opening reads like a
rogues’ gallery of the kinds of framing effects that IPCC authors are explicitly
counselled to avoid (to the extent that they are advised to state certain
claims in multiple equivalent ways, just to avoid the framing effects of e.g.
observing a 10% chance of death, rather than a 90% chance of survival). There’s
a reason that IPCC authors are counselled to avoid these effects – the
reasoning biases they elicit do not reliably promote the values of empirically
successful science. The tone of the NIPCC report, then, raises certain
questions about their commitment to those values. Then again, that would hardly be surprising, since the organisations and individuals behind the NIPCC
reports are largely also those involved in coordinating the Heartland
Institute’s International Climate Change Conferences – and as you just heard
from Ruth, they don’t even make the pretence of being interested in promoting
those values. And given many of the players’ associations with previous
pseudo-scientific endeavours where we now know the goal was precisely to
subvert the values prized most highly in scientific research – remembering, of
course, the ‘doubt is our product’ mantra of the tobacco strategy – one might
question to what extent such individuals hold those values dear.
So much for the procedural approach. What of the
consequentialist evaluation of the NIPCC methodology? Here, things are rather
simpler. We can thank many of the organisations and individuals involved in the
NIPCC for having conducted several trial runs – natural experiments, if you
like – testing the extent to which the research structures, funding models, and
so forth promote empirically successful science. As we’ve heard a bit from Ruth
– and as you can read about in Naomi Oreskes and Erik Conway’s excellent book Merchants of Doubt – the community
structures and methodologies responsible for producing the NIPCC are fundamentally
the same – even in some cases down to the individual scientists – as those
involved in work on the health impacts of tobacco and passive smoking, the
effects of industry on acid rain, the associations between CFC use and ozone
depletion, and a range of other topics besides. Unfortunately for the NIPCC,
their track record here isn’t great: the model – small
research groups, industry-funded scientists often working outside their
original field of expertise, a flexible approach to determining boundaries of
reasonable doubt and uncertainty, prominent use of non-academic venues to
circulate results – was consistently wrong in all of the above cases. And it’s not just the methodology that is similar here – with its implausible demands for proof of causality, its inappropriately stringent evidential standards, its selective highlighting of evidence, and its ad hoc choice of statistical inference techniques to best support favoured hypotheses. It is often the same institutions, the same funding, and
the same individuals involved. And the NIPCC methodology seems reliably to
produce unsuccessful science.
So there you have it. A brief insight into the workings of
the industry-funded denial machine, through the lens of social epistemology –
‘cos Ruth said I could do it. Thanks for listening.
Further reading
The IPCC AR5 WGI report, and its parody the NIPCC CCR2.
Merchants of Doubt, by Naomi Oreskes and Erik Conway, lays out in elaborate detail the personalities and corporations behind the industry denial machine, from tobacco to climate change via Star Wars, ozone depletion and acid rain. It's a thoroughly researched and terrifying read; you can also see Naomi Oreskes talking about the book on YouTube.
Dealing in Doubt, Greenpeace USA's report on fossil fuel funding of climate change denial and the small hardcore contingent of scientists cooperating with them, further explores the vested interests at work funding contemporary climate denial.
This post from the always-excellent Skeptical Science provides a lucid, non-expert outline of some of the key disparities between the IPCC and NIPCC. It's well complemented by this article exploring some of the scientific deficiencies of the NIPCC report.
The philosophical content of this talk draws mainly on the work of Helen Longino and Miriam Solomon. Longino's books Science as Social Knowledge and The Fate of Knowledge, and Solomon's Social Empiricism, are thorough, rigorous, but accessible works that I'd heartily recommend. Longino's SEP entry on the social epistemology of science is a nice intro to the topic.