“The fox knows many things, but the hedgehog knows one big thing.” ~ Archilochus

In 2005, Philip Tetlock published a widely acclaimed book, “Expert Political Judgment: How Good Is It? How Can We Know?”, which presented the findings of a study of a diverse group of almost 300 individuals, examining their decision-making processes over a number of years. The group was made up of high-profile and influential experts. And the findings were rather damning. As one review summarises it:

These are people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables. And they are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake.

Tetlock studied the expert individuals over almost two decades, reviewing over 80,000 judgements made across the group. He continuously asked questions about how the experts reached their judgements, how they responded to new contradictory information, how they thought about rival perspectives, and how often they changed their minds when their decisions were proved to be wrong. Following the philosopher Isaiah Berlin’s famous distinction, he identified two broad categories of thinkers, ‘foxes’ and ‘hedgehogs’ (see a 2013 Why Dev post on Berlin’s distinction and its relevance to development workers). Interestingly, Tetlock didn’t start off with this classification – it emerged from his data.

Hedgehog thinkers “know one big thing” and tend to extend the explanatory reach of that one big thing into new domains. They “relate everything to a single central vision … in terms of which all that they say has significance.” They often oversimplify, and don’t use diverse data sources. Tellingly, Tetlock found that “the accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge.”

Ironically, the more famous the expert, the less accurate his or her predictions tended to be. The less successful forecasters tended to have one big, beautiful idea that they loved to stretch, sometimes to the breaking point. They tended to be articulate and very persuasive as to why their idea explained everything… they are more entertaining… The media loves them… Experts in demand were more overconfident than their colleagues who eked out existences far from the limelight…

(Would it be a cheap shot to pause at this point, and ask if any specific development thinkers come to readers’ minds?)

By contrast, foxes are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hoc-ery” that require stitching together diverse sources of information, and are rather more modest about their own forecasting and decision-making prowess. They emphasise instead learning by doing, and “pursue many ends, often unrelated and even contradictory… entertain ideas that are centrifugal rather than centripetal… without seeking to fit them into, or exclude them from, any one all-embracing inner vision.”

In his analysis of how these different styles mapped onto effective decision making, Tetlock was able to draw a number of fascinating conclusions:

  • that there was no significant correlation between how experts think and what their politics are: both hedgehogs and foxes were liberal as well as conservative – although hedgehogs were more likely to be at the political extreme, right or left
  • that over-simplification was one of the gravest errors: “experts go wrong when they try to fit simple models to complex situations.”
  • that hedgehogs performed worse in areas in which they specialized, while foxes actually enjoyed a modest benefit from expertise
  • that hedgehogs routinely over-predicted, but that when they were right, they were often ‘spectacularly right’
  • that overall the foxes outstripped the hedgehogs in terms of their decision-making accuracy and effectiveness: “The better [forecasters] were eclectic thinkers who were willing to update their beliefs when faced with contrary evidence, were doubtful of grand schemes and were rather modest about their predictive ability.”
  • that the best decisions and forecasts were made by those who used a variety of formal ‘aggregating models’ to clarify and simulate the problems they faced: “whereas the best humans were hard-pressed to predict more than 20 percent of the total variability in outcomes… models explained on average 47 percent of the variance…” (the short sketch after this list illustrates what ‘variance explained’ means)
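
To make that last point concrete, here is a minimal, purely illustrative sketch of what ‘percentage of variance explained’ means. The data and the simple linear ‘aggregating model’ below are invented for illustration – they are not from Tetlock’s study – but the arithmetic of R-squared (the share of outcome variability a model’s predictions account for) is the same.

```python
# Illustrative only: toy data and a toy linear "aggregating model",
# not Tetlock's actual models or data.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two noisy indicators and an outcome they partly drive.
n = 200
indicators = rng.normal(size=(n, 2))
outcome = 0.8 * indicators[:, 0] - 0.5 * indicators[:, 1] + rng.normal(scale=1.0, size=n)

# A simple "aggregating model": ordinary least squares combining the indicators.
X = np.column_stack([np.ones(n), indicators])   # add an intercept column
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
predicted = X @ coef

# R-squared = 1 - residual variance / total variance:
# the fraction of the outcome's variability the model explains.
ss_res = np.sum((outcome - predicted) ** 2)
ss_tot = np.sum((outcome - outcome.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"Variance explained by the model: {r_squared:.0%}")
```

An unaided forecaster’s judgements can be scored in the same way – plug their predictions in place of the model’s – which is essentially the comparison behind the 20 percent versus 47 percent figures above.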

Relevance and ‘so whats’ for development? Three things stand out for me.

First, that there is an obvious tendency towards hedgehogs dominating the upper echelons of the development system, both intellectually and politically. What are the implications? Should we be worried?

Second, that the pathway to better decisions – which as I read it is to be humble, eclectic, and use better models – very much resonates with the key tenets of applying complex adaptive systems thinking to development.

Third, we are seldom explicit about the way in which we make decisions within the aid system: so many things are so very opaque. More transparency in aid shouldn’t come solely in the form of data, tables and figures – though of course these are important – but also in justifications and explanations. As Tetlock himself puts it: “we as a society would be better off if participants in policy debates stated their beliefs in testable forms… monitored [their] performance, and honored their reputational bets.”

Addendum: for those wanting to learn more, April Harding has shared this excellent podcast with Tetlock discussing his work.


Join the conversation! 6 Comments

  1. […] on the Edge of Chaos take a look at Philip Tetlock’s research on expert judgement and draws some compelling lessons for the development […]

  2. Ben,
    I’m glad you’ve brought Tetlock’s excellent research to the attention of aid and development folks. When I started reading his work, it struck me how relevant it is for the development assistance/aid effectiveness field.

    Tetlock’s work contributed to a rule-of-thumb I now apply when reading research: if a researcher’s findings across studies are very consistent, with a single “take” on an issue or are otherwise very consistent with a certain worldview, then I discount the likely trustworthiness of the findings. I started this practice in 2007, and have discovered that there are quite a few analysts in my fields (health systems analysis, global health and health aid effectiveness) who display a “worrying consistency”.

  3. PS: In case any of your readers’ interest is piqued, here is an excellent 73 minute podcast with Tetlock explaining this work http://download.fora.tv/rss_media/Long_Now_Podcasts/podcast-2007-01-26-tetlock.mp3

  4. Would love to chat. Will look for you on skype.

  5. Hi Ben,

    I would like to stress your first point for development: Yes we should be worried.

    Just now I was asked to look into the new MDG proposals. It looks like a “no foxes allowed” party.

    I am especially worried that population growth as a root cause of problems will be confirmed by consensus, because so many pundits believe it is. This would be a big step back for women’s rights. Apparently the women’s groups are more worried about getting their own pet indicator in than about getting this goal out.

    The old MDGs were not perfect, but they focused minds on goals, while leaving options open for getting there. Most of the goals were very good indicators of poverty and human suffering. The risk is getting a whole bunch of fuzzy parameters, essentially getting everybody off the hook.

    The bloggers should pay attention to this process, because it will shape funding for years to come. I know it is difficult to take it seriously (like the whole Busan story), but please do.



About Ben Ramalingam

I am a researcher and writer specialising on international development and humanitarian issues. I am currently working on a number of consulting and advisory assignments for international agencies. I am also writing a book on complexity sciences and international aid which will be published by Oxford University Press. I hold Senior Research Associate and Visiting Fellow positions at the Institute of Development Studies, the Overseas Development Institute, and the London School of Economics.
