Earlier this month, Nature published a piece by Daniel Sarewitz on emerging challenges facing science and research, which holds some useful lessons for the aid system.
The greatest threat to science is not the usual suspects of “inadequate funding, misconduct, political interference” and the like. Instead, according to Sarewitz, the problem is more fundamental: a widespread bias towards over-simplified models and positive results.

“Bias is an inescapable element of research, especially in fields… that strive to isolate cause–effect relations in complex systems in which relevant variables and phenomena can never be fully identified or characterized…”

The field Sarewitz is writing about here is biomedicine, but he could easily be describing development or humanitarian work. The fundamental problem, as he sees it, is that biases are not random but systematic: “if biases were random, then multiple studies ought to converge on truth [but] evidence is mounting that biases are not random.”
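Why does the random-versus-systematic distinction matter so much? A minimal simulation sketch makes the point (this is my own illustration, not from Sarewitz’s article, and the effect sizes and bias values are invented for demonstration): when each study’s bias is an independent random draw, pooling many studies still converges on the truth; when every study shares the same bias, pooling converges on the bias instead.

```python
import random

TRUE_EFFECT = 1.0   # the underlying "truth" every study tries to estimate
N_STUDIES = 10_000  # number of independent studies we pool

def run_study(bias, noise_sd=0.5):
    """One study's estimate: truth, plus its bias, plus random sampling error."""
    return TRUE_EFFECT + bias + random.gauss(0, noise_sd)

# Random biases: each study draws its own bias, centred on zero.
random_bias_results = [run_study(bias=random.gauss(0, 0.5)) for _ in range(N_STUDIES)]

# Systematic bias: every study shares the same positive bias.
systematic_bias_results = [run_study(bias=0.5) for _ in range(N_STUDIES)]

print(sum(random_bias_results) / N_STUDIES)      # ~1.0: pooling converges on the truth
print(sum(systematic_bias_results) / N_STUDIES)  # ~1.5: pooling converges on truth + bias
```

The point is that adding more studies does nothing to wash out a shared bias; only changing what and how we measure can.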

This claim is not new, of course. As the piece argues, systematic positive bias was identified in clinical trials funded by the pharmaceutical industry as far back as the mid-1990s. More recently, reviews of so-called ‘landmark’ studies in fields such as cancer research have shown that positive results could only be replicated in a minority of cases.

However, these previous assessments tended to assume that the problem lay not with science per se, but rather with the forces that sought to co-opt it: industry, government, special interests, and so on. Reduce the influence of these interests, the argument went, and you would eradicate such biases.

But it is now emerging that there are some serious underlying problems within science itself. The cases are wide-ranging across biomedicine: “evidence of systematic positive bias [is] turning up in research ranging from basic to clinical, and on subjects ranging from genetic disease markers to testing of traditional Chinese medical practices.”

The two major faultlines, according to Sarewitz, are the methodological narrowness of the approaches employed to generate evidence, and the culture and incentives of scientists and science funders.

The first is especially pertinent for readers of this blog. Researchers seek to reduce bias “through tightly controlled experimental investigations. In doing so, however, they are also moving farther away from the real-world complexity in which scientific results must be applied to solve problems.” Ironically, “the canonical tenets of ‘scientific excellence’” are threatening to undermine the whole enterprise. One rather shocking (for me, at least) example comes from research on mice, where considerable resources have been poured into cloning genetically identical animals in order to enable fully controlled, replicable experiments and rigorous hypothesis-testing. Any sense of moral repugnance aside, perhaps the worst thing about this endeavour is that the findings of the subsequent research have turned out to be useless when applied in the real world.

Sarewitz also writes about the lack of incentives to ‘report negative results, replicate experiments or recognize inconsistencies, ambiguities and uncertainties’. There are also challenges around the various cultural and attitudinal positions taken toward science among funders, scientists, the media and the public at large. Sound familiar?

It should – such issues are not a problem for biomedicine alone:

[they are] likely to be prevalent in any field that seeks to predict the behaviour of complex systems — economics, ecology, environmental science, epidemiology and so on. The cracks will be there, they are just harder to spot because it is harder to test research results through direct technological applications… and straightforward indicators of desired outcomes…

Sarewitz closes with one potential solution, which may also be of relevance for work in development and humanitarian fields:

Scientists rightly extol the capacity of research to self-correct. But the lesson coming from biomedicine is that this self-correction depends… on the close ties between science and its application that allow society to push back against biased and useless results.

So what can we in the aid sector do about such bias, if indeed it is present in our work?

The first idea is the one that Sarewitz suggests: “societal push back”. Sadly, despite the rhetoric and growing practice of participation, the scope for Southern stakeholders – especially aid recipients – to ‘push back’ against useless results in development and humanitarian research is still severely limited. This doesn’t mean we should stop the effort, however, and perhaps new technologies and feedback processes can help us here.

The second strategy might be to address the incentives and cultures which perpetuate such biases. But we seem to be far too preoccupied with the incentives and motivations of developing country actors to look at those within our own organisations. As one participant at a recent ODI event put it: “why do we always say that developing country leaders have mixed motives at best whereas the motives of donors [and other aid actors] are always considered impeccable?” We should find a way to ensure that these aid “physicians” first heal themselves.

The final course of action is to try to expand and adapt the concepts and models used in our work. This effort (of which this blog is one small part) is still very much a work-in-progress, but the growing interest among researchers and practitioners should give us some small cause for hope. After all, the key to paradigm shifts in science – and in other fields – is not just logical argument and experimental proof. In the words of Thomas Kuhn:

“as in political revolutions, so in paradigm choice—there is no standard higher than the assent of the relevant community.”

Comments

I think one thing that differentiates *some* aid and development related research from medical research is that negative findings (i.e. no positive result) are often treated as significant. See, for example, Roodman’s work on microfinance and the Rajan paper on the absence of an aid/growth relationship.



