Thoughts from an agricultural development gal in Ghana

Evaluating Complexity

Warning for my non-development-worker readers: this post is a bit technical. But I hope I’ve explained everything clearly, and please ask questions if I haven’t! I would love to hear some new perspectives on this topic.

Also, I started writing this post a few months ago, when two big themes in the aid blogosphere were complexity and impact evaluation. People seem to be writing about them a bit less these days, but they’re still important topics (and we don’t have any clear answers), so it’s my turn to weigh in!

Where does all the money go?

So what is impact evaluation? It’s the attempt to use rigorous methods to understand what actually works, or has impact, in aid and development. Of course there is no “silver bullet” solution to global poverty, but which interventions are more effective than others? Which have no impact at all, or a negative impact? Ultimately, rigorous impact evaluation should lead funders and policy-makers to direct money towards the interventions that are most effective and reliable at improving people’s lives.

There is a huge push right now to “show what works” in foreign aid. Citizens can see that the Western world has spent trillions of dollars on foreign aid over the past 50 years, yet global inequality is worse than ever. Of course progress has been made, but have the results justified the spending? Citizens want accountability from their governments, and proof that their hard-earned tax dollars are actually improving the lives of the people aid targets in the developing world.

Many agree that the most rigorous form of impact evaluation today is the Randomized Controlled Trial (RCT). This technique takes a sample large enough to detect an effect and randomly assigns participants to two groups: control and treatment. The treatment group receives a development intervention – crop insurance, or microcredit, or whatever you are trying to test – while the control group receives no intervention. Both groups are evaluated over the course of the study (usually 1-5 years), and the results fall somewhere along the spectrum from “yes, this works” (and how well?) to “this has no effect”. Sometimes the results are inconclusive, or not easily generalizable, or there is further research to be done, but RCTs are generally considered the gold standard in evaluating development interventions.
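
For my more code-minded readers, here’s a toy sketch of the logic behind an RCT. Every number in it is invented for illustration – the sample size, the outcome, the “true effect” of 5 units – and I’m glossing over real-world design questions like clustering and baseline surveys. The point is just that random assignment makes the two groups comparable, so a simple difference in means estimates the impact.

```python
import random
import statistics

random.seed(42)

# Simulate 1,000 farming households with a baseline outcome,
# e.g. the value of last season's harvest (all numbers invented).
baselines = [random.gauss(100, 20) for _ in range(1000)]

# Randomly assign each household to treatment or control.
treated_flags = [random.random() < 0.5 for _ in baselines]

TRUE_EFFECT = 5.0  # assume the intervention raises the outcome by 5 units

# Measure the outcome at the end of the study, with some noise.
outcomes = [
    base + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 10)
    for base, treated in zip(baselines, treated_flags)
]

treatment_group = [y for y, t in zip(outcomes, treated_flags) if t]
control_group = [y for y, t in zip(outcomes, treated_flags) if not t]

# Because assignment was random, the difference in means is an
# unbiased estimate of the intervention's impact.
estimated_impact = statistics.mean(treatment_group) - statistics.mean(control_group)
print(f"Estimated impact: {estimated_impact:.1f} (true effect: {TRUE_EFFECT})")
```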

There is a lot of controversy around RCTs because of the high cost and time involved. These studies are not appropriate in all cases: they shouldn’t be used by organizations looking to evaluate past programs, or by small-scale projects looking for continued funding. Instead, RCTs should be used to inform development and foreign policy on a large scale. Citizens giving foreign aid want to know, for a fact, which development interventions give the best bang for their buck – and RCTs should, in theory, be able to tell them.

Inside the black box

The second trend these days in the blogosphere is complexity, or complex adaptive systems. Aid on the Edge of Chaos has a good round-up of complexity posts. The bottom line is that in a complex system, results are unpredictable. The system is not static, linear, or deterministic; it evolves over time, adapting and growing in response to both internal and external influences. When dealing with a complex system, you need to take a “systems approach”, monitoring the whole rather than the individual pieces.
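
To give a feel for why prediction is so hard, here’s a toy illustration of my own (not from the complexity literature linked above): the logistic map, a one-line rule that is completely deterministic. Start two runs from almost identical points and they soon diverge entirely. Real complex adaptive systems are messier still, since they also adapt and respond to outside influences.

```python
# Logistic map: x' = r * x * (1 - x), with r = 4 (the chaotic regime).
r = 4.0
x_a, x_b = 0.2000, 0.2001  # two nearly identical starting points

for step in range(1, 26):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 5 == 0:  # print every 5th step to show the divergence
        print(f"step {step:2d}: run A = {x_a:.4f}, run B = {x_b:.4f}")
```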

What does this mean for development? Well, people and communities and indeed the world are all complex systems. It is hard to predict when something will change, as we’ve seen with the recent wave of revolutions across the Middle East. From this perspective, it is hard to ever know which development interventions will achieve the results we want.

Most development interventions are designed around something like an impact chain – what will you do, and what results will it produce? However, complexity theory tells us that we should monitor a system for results generally, not just for our predicted or desired results. Our actions often have unintended results, sometimes negative and sometimes positive. In addition, it can be hard to attribute positive changes to our particular intervention in a complex system: with so much happening at once, and so many stimuli acting on the system, you never really know where a change originated. This also poses problems for the replicability of development interventions – just because something worked once, in a particular set of circumstances, doesn’t mean it will work again in a different setting.
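
The attribution problem is easy to demonstrate with one more toy simulation (again, all numbers are invented). Suppose the whole region improves for reasons that have nothing to do with our project – good rains, say, or rising crop prices. A naive before/after comparison credits that improvement to the project; a comparison against a control group does not.

```python
import random
import statistics

random.seed(7)

REGIONAL_TREND = 8.0  # everyone's outcome rises by ~8 units, project or not
TRUE_EFFECT = 0.0     # the intervention itself does nothing

def endline_outcome(treated: bool) -> float:
    """One household's outcome at the end of the project period."""
    return 100 + REGIONAL_TREND + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 5)

baseline = [100 + random.gauss(0, 5) for _ in range(500)]
participants = [endline_outcome(True) for _ in range(500)]
non_participants = [endline_outcome(False) for _ in range(500)]

# A naive before/after comparison credits the regional trend to the project:
before_after = statistics.mean(participants) - statistics.mean(baseline)
print(f"before/after 'impact': {before_after:.1f}")

# Comparing against a control group strips the shared trend out:
vs_control = statistics.mean(participants) - statistics.mean(non_participants)
print(f"impact vs control:     {vs_control:.1f}")
```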

So what does it all add up to?

Impact evaluation and complexity. Now it’s time to bring these two concepts together. The big question here is this: Can we understand “what works” in enough detail to be able to predict future results of our interventions?

My answer is yes, but maybe not in the way you’re thinking. In the past, evaluation has usually focused on the question “what intervention worked?”, where the answer is “fertilizer subsidies” or “school feeding programs”. I think we need to start looking more at HOW things work. Instead of looking for programs that we can replicate across entire regions, we should be asking “what worked?”, “under what conditions?” and “with what approach?”, which give answers more like “foster innovation”, “promote local ownership” and “give people a choice”. These conditions may be found across many different areas, but may have more of an effect on the success of an initiative than the WHAT of the initiative itself.

I generally support rigorous impact evaluation for two reasons:

  1. fostering a culture of accountability to donors and stakeholders (and taxpayers);
  2. fostering learning, so that we understand the conditions for success and can set future projects up for success.

I think the aid industry has learned something about what works (and can learn even more) – or, probably more often, about what doesn’t work. I also think aid can’t be prescriptive, since human beings are complex and our behaviour is irrational and unpredictable. But we can set the conditions for success when designing our interventions. And while the results may not be wholly predictable, at least the intervention will be more likely to succeed.

What does this look like in practice?

Our team is currently in transition and re-developing its strategy. Here are a few principles I’m looking to follow as we shape that strategy going forward:

  • always have a portfolio of initiatives on the go (don’t put all your eggs in one basket)
  • make sure these initiatives all contribute toward the bigger change we’re trying to make in the agric sector
  • range of timescales: short-, medium- and long-term changes, informing and building on each other
  • constant learning and iteration: testing, getting feedback, adapting, and testing again
  • focus on articulating our observations and learning to external audiences
  • high awareness of the system as a whole: what does it look like? where are the strongest influences? the most volatile players? who exerts the most force on the system?

What principles would you add to my list?