Student - February 7, 2008

‘Hard figures say little about development’

Huge amounts are spent on monitoring the success of development projects. But much of that money goes on attempting to measure the immeasurable, says recent PhD graduate Irene Guijt. She suggests that donors should be more open to surprises instead of demanding hard figures when evaluating development projects.

Learning from your mistakes is useful, and you want to know whether what you are doing is worthwhile: two important reasons for keeping an eye on a development project’s progress. The term used is monitoring and evaluation (M&E) – in itself a multi-million dollar business. Between five and ten percent of development cooperation budgets worldwide are spent on M&E. However, only a small part of that money is spent usefully, thinks communication scientist Dr Irene Guijt. Much of it goes on evaluations that change little in practice, or on monitoring that shows that something is happening, but not what or why.

Donors want to know precisely how their money is being spent, and increasingly so. The emphasis on accountability leads development organisations to draw up indicators intended to measure whether the planned objectives have been achieved. The result is a flood of often dubious facts and figures, says Guijt.

Policymakers and donors want policy that is based on proven successes elsewhere. All donors, says Guijt, demand impact evaluations. If an organisation carries out a project in a certain village, the donor may ask that the project’s effect be demonstrated by comparing the situation with a nearby village where no intervention took place. If a positive effect is demonstrated, the donor concludes that the approach was good and that it can be used elsewhere.

Many development projects used to have a relatively simple objective, such as digging wells, building a school or vaccinating people against measles. Now projects are often more complex, with aims such as democratisation, regional rural development, or emancipation of homosexuals in Guatemala City. According to Guijt, these objectives require changes in culture, behaviour and institutions. But monitoring of development projects is still based on the old-style projects and as a result is becoming less and less useful.

Guijt’s study shows that the protocols used all over the world for evaluations are based on a number of assumptions. Part of her dissertation is an examination of 33 projects carried out by the International Fund for Agricultural Development. ‘The most important assumption is that there is a linear relation between cause and effect, and it is therefore possible to indicate what factors contributed to things going wrong,’ says Guijt. ‘But when it comes to changes in values, norms, legislation, knowledge or power relations, there are often different long-term processes at play and no guarantee that there will be results.’

Another assumption in conventional monitoring is that everything can be measured, as long as you can devise a good indicator. Guijt: ‘The assumption is that the whole can be divided up into pieces of reality that are recognisable and measurable, and that these pieces of reality say something about the whole.’

But we shouldn’t waste money on measuring the immeasurable, says Guijt. When it comes to really complex matters, you learn more from a discussion with the people involved in the project than from ‘objective’ indicators. ‘The great thing is that good monitoring of complex processes isn’t at all boring or stuffy; it’s about discussion and debate between the target group, the executors and the donors involved in a project.’

However, a participatory approach isn’t the solution to everything. Guijt worked for many years on projects in Brazil in which small-scale farmers and their organisations were involved in the evaluations. ‘Many of the assumptions from conventional monitoring were adopted in the participatory form, but joint designing and implementing of this kind of process can mask inequality between parties.’

Guijt concludes that the monitoring approach should not be the same everywhere, but should depend on your objectives. ‘If your objective is to demonstrate accountability to financial donors, maybe monitoring doesn’t need to be very participatory. But if you want to learn how to support a process of democratisation, you’ll learn far more from a good discussion than from running after a table of figures.’ / Joris Tielens

Dr Irene Guijt received her PhD on 30 January 2008. Her promotor was Niels Röling, emeritus professor of Agricultural Knowledge Systems in Developing Countries.