When does crowdsourcing work best? New research from the Institute for Human Development in Berlin offers answers that may be relevant to aid projects and programmes.

There has been a lot written, spoken and blogged about the power of crowds in making decisions. In James Surowiecki’s bestselling Wisdom of Crowds, published in 2004, the central thesis was that diverse groups are likely to make certain types of decisions and predictions better than individuals – even those with specialist expertise. As Surowiecki noted:

“…under the right circumstances, groups are remarkably intelligent, and are often smarter than the smartest people in them.”

The six years since the Wisdom of Crowds was published have seen the rise and rise of online social networking and related technologies. Social media and the power of the crowd have been at the heart of everything from political resistance movements to presidential elections (and indeed, resistance movements following presidential elections). The term crowdsourcing was coined in 2006 to describe an organisational approach that harnesses the creative solutions of a distributed network of individuals. As one of the originators put it:

“Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers.”

There is a growing – some would say evangelistic – enthusiasm for crowdsourcing as the answer to a whole range of problems. Just a few initiatives off the top of my head: fundraising for socially responsible films, the development of transit planning in urban areas, combating corruption, creating markets for innovations, and expanding scientific peer review processes. A quick Google search illustrates just how expansive this agenda is.

The potential for crowdsourcing to contribute to international aid has also attracted a lot of attention, with perhaps the most prominent example being the role of innovative new technologies in the aftermath of disasters. The following is a typical example of the arguments made by the ‘pro-crowd’ camp:

The rapid proliferation of broadband, wireless and cell phones, coupled with new crowdsourcing technology, is completely changing the face of disaster relief. Everyone with a computer can provide crucial assistance, sifting through satellite photos, translating messages or updating maps, and most people are happy to do this free of charge — contributing to life-saving relief efforts is a powerful motivator… At a fraction of the cost of most relief budgets, crowdsourcing can solve coordination problems on the ground.

As many readers will be aware, crowdsourcing in disaster responses has been the focus of a passionate, sometimes vehement, and at times rather distracting debate.

My intention isn’t to retread ground that has already been well covered – and occasionally angrily stamped on – elsewhere. Instead, I want to explore evidence that tries to explain – following Surowiecki – the specific conditions under which a crowd is effective. Does recent research on decision-making yield any lessons or ideas worth a closer look?

Certainly, some of the crowdsourcing argument is borne out by the evidence. Numerous disciplines – anthropology, cognitive psychology and evolutionary biology among them – suggest that collective decision making can help group members cope more effectively with unfamiliar contexts, and it is almost a cliché to say that humanitarian disasters are the archetypal unfamiliar context. However, reviews of this literature suggest that many of these studies lack testable, well-structured concepts and hypotheses to explain exactly what collective decision making involves compared to other kinds of decision making. They also often fail to examine the implications of different kinds of decision-making processes for the accuracy of decisions. These issues echo the challenges that have been put to the crowdsourcing community.

One recent exception is simulation-based research undertaken by analysts at the Institute for Human Development (IHD) in Berlin. This work looks at a range of decision-making processes, and suggests that there are two distinct ways in which groups can work to provide solutions to a problem.

First, individuals can follow specific ‘leaders’ in the crowd. This usually means drawing on those experts with information particularly relevant to the decision at hand. This is comparable to the typical aid decision-making process.

Second, crowds can work to aggregate information from their members, which is then made available to the crowd itself or to a third party. This enables decision making to be enhanced through ‘collective cognition’, a concept that underpins many of the arguments for crowdsourcing. Collective cognition can be an unconscious emergent property, or it can be facilitated consciously through network interactions within the crowd.

The IHD work suggests a number of findings that are pertinent to the aid crowdsourcing debates:

  • a number of conditions influence whether groups use a ‘follow an expert’ or a ‘wisdom of the crowd’ strategy; specifically, the researchers found that the diversity of the group, the quality of individual information and the size of the group all have a bearing on which approach is chosen.
  • in so-called single-shot decisions, experts are almost always more accurate than the collective across a range of conditions; however, for repeated decisions – where individuals are able to consider the success of previous decision outcomes – the collective’s aggregated information is almost always superior (the toy sketch after this list illustrates the contrast).
  • regardless of the decision-making approach taken, groups must be able to acquire information through social interaction, respond positively to those who possess pertinent information, and update their approaches based on the success of previous decisions.
  • in ephemeral and unstable social groups that make collective decisions only occasionally, individuals tend to follow the most informed individual; stable social groups that encounter repeated decision points would do well to use some information-aggregating process.
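
To make the single-shot versus repeated-decision finding concrete, the toy simulation below runs a crowd of members with varying information quality through repeated estimates of a hidden quantity. To be clear, this is a minimal sketch of my own, not the Institute’s model: the crowd size, the noise levels and the inverse-error weighting rule are all assumptions made for illustration. It compares three strategies: follow the single best-informed member, naively average everyone, and weight members according to feedback on their past accuracy.

import numpy as np

rng = np.random.default_rng(0)
N, ROUNDS = 20, 200                  # crowd size and number of decisions (assumed values)
sigmas = rng.uniform(0.5, 8.0, N)    # each member's information quality (noise level)
expert = sigmas.argmin()             # the single best-informed member

mse = np.ones(N)                     # running estimate of each member's squared error
errors = {"follow expert": [], "naive average": [], "learned weights": []}

for t in range(ROUNDS):
    truth = rng.normal(0.0, 10.0)              # the quantity the group must estimate
    guesses = truth + rng.normal(0.0, sigmas)  # each member's noisy estimate

    weights = 1.0 / mse                        # trust members in proportion to
    weights /= weights.sum()                   # their observed accuracy so far

    errors["follow expert"].append(abs(guesses[expert] - truth))
    errors["naive average"].append(abs(guesses.mean() - truth))
    errors["learned weights"].append(abs(weights @ guesses - truth))

    # The feedback step: only available for repeated decisions, where each
    # round's outcome can be observed before the next decision is made.
    mse += ((guesses - truth) ** 2 - mse) / (t + 1)

for name, errs in errors.items():
    print(f"{name:15s} mean error, last 100 rounds: {np.mean(errs[-100:]):.2f}")

On a single shot (round one, before any feedback), the learned weights are still uniform, so the best-informed individual tends to win; over repeated rounds, the feedback-weighted aggregate typically overtakes even the best member, while the naive one-shot average does not. Inverse-error weighting is just one plausible aggregation rule – the research itself looks at a range of decision-making processes.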

At the risk of over-generalising, the above suggests an emerging hypothesis: for many simple or complicated issues where only one attempt is needed – ‘puzzles’ or ‘problems’, as a previous Aid on the Edge post put it – experts have the potential to outperform crowds. The best illustration is the set of problems Malcolm Gladwell covered in Blink – detecting whether a work of art is a fake, whether a teenager is carrying a gun, whether a fire will bring a building down, and so on.

In complex problems that require ‘multiple shots’, crowds can help augment expert perspectives by developing emergent solutions to evolving problems. The processes of information aggregation, transparent decision-making and effective feedback loops are essential here – all concepts that will be familiar to those interested in complex systems thinking.

Although the research is narrow, preliminary and based mostly on theoretical simulations, the IHD work does point towards a more structured way of understanding the limits and possibilities of crowdsourcing. As such, it could be a constructive way to start navigating some of the entrenched debates we have seen to date. Ultimately, the research suggests that we shouldn’t be asking ‘does crowdsourcing work or not?’, but rather ‘when does it work, why, how, and with what benefits?’

This is not to say the answers will always be clear-cut or unambiguous, but asking the right questions will surely get us closer.

Now all we need is for some aid researchers to pick these concepts and questions up and run with them.

Or maybe an aid crowd would be better?


6 Comments

  1. Excellent post – completely agree with your analysis about what questions we should be asking about crowdsourcing (it certainly should not be as simple as “does it work or not?”).

    I recently read an article on group vs. individual ability that has relevance to the debate:

    http://www.boston.com/bostonglobe/ideas/articles/2010/12/19/group_iq/?page=1

  2. […] Ramalingam has a good piece on his blog that looks at some of the latest research on the necessary conditions for crowds to be […]

  3. Great post on crowdsourcing and aid effectiveness. I recently wrote a post on the value of crowdsourcing for nonprofits, http://blog.guidestarinternational.org/2010/12/15/crowdsourcing-a-value-to-nonprofits/. I agree with what you said, which is that ‘we shouldn’t be asking ‘does crowdsourcing work or not?’, but rather ‘when does it work, why, how, and with what benefits?’ I do think that if there is a way to validate the information that is crowdsourced, the information can prove useful for helping with aid effectiveness. This will particularly be the case if you move away from a focus on aid experts and include the public, the ultimate beneficiary. (Linked data technology, for instance, may be able to assist with validation.) However, this will again tie into the who, what, why and how question.

  4. This is a great post. It seems to me that more and more NGOs and aid agencies are trying to put crowdsourcing to use in development practice. Ushahidi (www.ushahidi.com) created an application that allowed users in Kenya to report instances of post-election violence in 2007 on an interactive online map. Development Gateway and AidData just created “Development Loop” (http://www.youtube.com/watch?v=lMccrl1Eq1Q&feature=player_embedded#!), which allows users to view information on aid projects and development statistics, and upload their own comments on the same projects. These organizations are not necessarily looking for a “solution” to a problem; they are simply using crowdsourcing as a more efficient way to gather and disseminate important information. As Wikipedia attests, this seems to be the function that crowds perform best.

  5. IRIN did an article trying to sort the hype from the reality in crowd-sourcing.

    See what you think here: http://www.irinnews.org/Report.aspx?ReportId=89735

  6. […] a blog about complexity sciences and international aid (they intersect, apparently; who knew?), deftly touched on the topic of single-point versus (for lack of a better term) multi-point decisions…, when Ramalingam talked about James Surowiecki’s now famous book Wisdom of Crowds (emphasis is […]


