This week a breaking story in the UK focused on how unemployed jobseekers are being forced to complete bogus psychometric tests designed by the government’s Behavioural Insights Team (commonly known as the “nudge” unit). The story raises important issues about ethical experimentation that are highly pertinent to aid efforts.

The Guardian reported the story as follows:

The test called My Strength… has been exposed by bloggers as a sham with results having no relation to the answers given. Some of the 48 statements on the DWP test include: “I never go out of my way to visit museums,” and: “I have not created anything of beauty in the last year.” People are asked to grade their answers from “very much like me” to “very much unlike me”. When those being tested complete the official online questionnaire, they are assigned a set of five positive “strengths” including “love of learning” and “curiosity” and “originality”. However, those taking the supposed psychological survey have found that by clicking on the same answer repeatedly, users will get the same set of personality results as those entering a completely opposite set of answers.
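
As a technical aside, what the bloggers ran was effectively an input-independence test. The sketch below is a hypothetical reconstruction of the behaviour they reported, not the DWP’s actual code; the scoring function and the two placeholder strengths are assumptions for illustration only.

```python
def score_my_strengths(answers):
    """Hypothetical sham scorer: the output has no relation to the answers.

    The quote names three of the five 'strengths'; the last two entries
    here are placeholders, not the survey's real ones.
    """
    return ["love of learning", "curiosity", "originality",
            "placeholder strength 4", "placeholder strength 5"]

# The bloggers' check, in effect: two completely opposite answer sets...
all_like = ["very much like me"] * 48      # same answer to all 48 statements
all_unlike = ["very much unlike me"] * 48  # the opposite answer throughout

# ...produce identical 'personality results'.
assert score_my_strengths(all_like) == score_my_strengths(all_unlike)
print("Opposite answers, same strengths: the results ignore the input.")
```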

The Behavioural Insights Team, meanwhile, argue that their intervention was based on sound evidence and good intentions, and produced decent results. The latter include the finding from randomised controlled trials (RCTs) that the survey led to “building psychological resilience and wellbeing for those who are still claiming after 8 weeks through ‘expressive writing’ and strengths identification”.

For many critics, however, any potential benefits of the exercise were undermined by the fact that jobseekers were warned that the survey was compulsory and that not filling it out would lead to their allowances being curtailed. Instead of building wellbeing, the exercise simply gave the unemployed something else to worry about.

Clearly, there are some fundamental ethical problems with the way this whole effort was designed and implemented. And of course, this is not unique to ‘nudge’ efforts, but extends to all kinds of social policy interventions. But the experimental approach of nudge interventions does open up a range of ethical quandaries that we need to look at more closely.

What the admirable efforts of the UK blogger community highlight for me is that aid recipients in developed countries do at least have some means for addressing their grievances about such experimental processes – even if (as in this case) they are indirect and work through informal rather than formal channels of accountability.

However, the poor in developing countries have few such channels for voicing their grievances and issues. As one statistician put it back in 2010 in a review of RCTs:

In conducting research with people, the need for guidance and adherence to ethical standards is of the utmost importance. Most areas of research involving human subjects have compulsory or voluntary codes of conduct and ethical rules, and many countries have strict processes in place to ensure that ethical standards are met by any research involving human experimental units. There seems to be a gap, however, in research that involves human subjects carried out in the context of international development. We do not have a system of checks and balances that ensures adherence to high ethical standards. This may be because the jurisdiction of research committees does not extend to the areas where some of this research is conducted.
And this, specifically on RCTs:

When RCTs are proposed for impact evaluation, the issue of consent from participants is not discussed. Telling a group of people that they will be included in an experiment, but not implementing a development intervention that might benefit them, is something that most people working in international development would find difficult.

There is a lot of talk about feedback mechanisms at the moment as a means of addressing the long-observed ‘broken feedback loop’ in foreign aid. But without “a system of checks and balances that ensures adherence to high ethical standards”, such mechanisms will be prone to the same problems as other tools used in development.

In a study on innovation in aid that I wrote with Kim Scriven and Conor Foley a few years back, we argued that there was a need to find safe spaces for experimentation, and to establish mechanisms to promote “honourable risk”, if we wanted to see a more innovative, yet still principled, aid system. Even though other aspects of aid innovation have advanced considerably since that study, especially in terms of resources and policy attention, I am not sure we have seen much real progress on the issue of “honourable risk”. As a result, many of us in development aid run the risk of taking our experiments just a little too far.

In fact, we may be doing so already, and not know about it.

Comments

  1. This isn’t the first time you’ve raised general concerns about such approaches to figuring out what works through econometric analysis and RCTs. Where are the examples of unethical RCTs? Are you just generally worried, or have you actually seen unethical RCTs?
    I’ve had colleagues react quite viscerally when hearing about using randomization or even quasi-experimental designs to do such research, but they seem to be completely oblivious to the rather arbitrary, random or non-random, and unethical way in which the PRESENT approach to targeting assistance is done, without anyone being given a choice or consulted. Considering the typical way limited amounts of assistance are assigned to different groups of people (i.e. divided between districts or sub-districts, provinces or even countries), I’m rather perplexed at people’s sudden concern about trials with controls. A further absurdity is that the whole point of such research is that there isn’t really very much evidence that some of the things being done actually make any difference! At present, with no trials and no repeated controlled interventions with evaluation, we don’t really know if the assistance being given does any good anyway.
    I think people have formed a mistaken caricature of randomization. There are ways of randomizing that don’t change how a project is going to be delivered anyway: a wait-list or phased roll-out, for example, is a common design, and one we’re attempting to use in CFS research (see the sketch below). Innovations for Poverty Action has been doing many interesting studies with randomization, and they certainly do consider the ethics. The latest IPA study of the AVSI project in Uganda is a good example.
    http://poverty-action.org/project/0104
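
    To make the wait-list idea concrete, here is a minimal sketch of how such a phased assignment might be randomized. The village names, phase count, and seed are hypothetical illustrations, not IPA’s or AVSI’s actual protocol.

    ```python
    import random

    def waitlist_assignment(units, n_phases, seed=42):
        """Randomly assign units (e.g. villages) to roll-out phases.

        Every unit eventually receives the intervention; the randomized
        phase order is what creates the temporary comparison groups.
        """
        rng = random.Random(seed)  # fixed seed keeps the assignment auditable
        shuffled = list(units)
        rng.shuffle(shuffled)
        # Deal the shuffled units into phases of roughly equal size.
        phases = {p: [] for p in range(1, n_phases + 1)}
        for i, unit in enumerate(shuffled):
            phases[i % n_phases + 1].append(unit)
        return phases

    # Hypothetical example: 12 villages rolled out over 3 phases.
    villages = [f"village_{i:02d}" for i in range(1, 13)]
    for phase, assigned in sorted(waitlist_assignment(villages, 3).items()):
        print(f"Phase {phase}: {assigned}")
    ```

    The point of the design is that no one is permanently excluded: later phases serve as comparison groups only until their scheduled turn, which is why it can be layered onto a roll-out that was going to be staggered anyway.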

  2. Dear Ben,
    Many thanks for a very interesting post. There is a lot of discussion around tentative successes stemming from applying behavioral science insights to social problems. However, the ethical issues you mention often end up on the back burner, and it would be a pity if those advances were undermined by lax attitudes towards ethics. After all, the whole idea is that people should remain free to make choices themselves, and this should apply both to the decision whether to take part in an experiment and to the decisions they make while taking part.
    Indeed, there is a need for those safe spaces for experimentation. We development practitioners should handle them with care.
    I would also appreciate it if you could suggest further reading on this issue.

  3. Hi Ben, thanks for the post. Not yet able to create sufficient credibility among aid recipients, as you say. But I think digital technology is a game changer. As I blogged (http://www.humanicontrarian.com/2012/06/07/328/), we NGOs no longer have monopoly control over the narrative of who we are. Not here in our Western homelands, where people from Africa can read and comment on how we portray our work or their societies, and not in our missions. A recent example: western Myanmar, where highly politicized social media messages declare that MSF is not neutral or independent, but pro-Islamic and hence pro-Rohingya. Keep up the interesting posts. Marc

  4. […] of starting to think about how we can apply it to our work. (Keeping in mind that there is some criticism of the approach […]

  5. […] However, development practitioner beware, this approach does not go without a certain degree of criticism. […]

  6. […] are usually not consulted and no consent to be part of a social experiment is sought. (I recommend this post for a good overview of the current debate by Ben Ramalingam.) Innovations to end gender-based […]

About Ben Ramalingam

I am a researcher and writer specialising in international development and humanitarian issues. I am currently working on a number of consulting and advisory assignments for international agencies. I am also writing a book on complexity sciences and international aid which will be published by Oxford University Press. I hold Senior Research Associate and Visiting Fellow positions at the Institute of Development Studies, the Overseas Development Institute, and the London School of Economics.

Categories

Accountability, Evaluation, Evolution, Innovation, Knowledge and learning, Public Policy, Research