Of Devils, Details and Multiple Perceptions
At the end of February we stopped taking stories from the crowdsourcing survey, having received a total of 151 responses and over 100 individual stories and snippets of experiences. We'll be working them up into a paper to share at next month's Conference, but here are a few observations as a trailer.
The first observation is that the vast majority of the snippets concern the day-to-day practice of small-e evidence (results and targets in management) rather than large-E evidence (establishing theories of change or development approaches). The stories are about concerns and opportunities in the nuts and bolts of evidence in practice: what 'e' is being collected, how it is used, and to what effect. They show that the devil is in the detail, the practice.
The second observation is that the stories reveal perceptions of the results agenda to be as multi-dimensional at the practical level as they are at the theoretical. Where one story says that the pressure to articulate measurable results has promoted a desirable realism, the next suggests it has generated perverse incentives to pursue easy gains. One snippet may show the problems of collecting meaningless, over-simplified data, only to be countered by another that enthuses about a new-found discipline in articulating results. These practical tensions replicate the polarised debate revealed in the survey responses, as well as the comments on January's wonkwars posts on Duncan Green's blog (posts 1, 2, 3 and summary).
The stories are also useful for identifying the circumstances that shape practitioners' perceptions of the results agenda as either constructive or a hindrance to their work. While the full report will deal with this in more detail, one issue worth highlighting is the funding recipient's starting capacity and interest in evidence and evaluation. Respondents in more sluggish bureaucracies spoke of having a fire set under them, helping them focus attention and opening a space for a results focus. In contrast, those with sophisticated existing systems spoke of reductionist data requests and wasted time. Those working with partners in the South, with limited experience or capacity for data collection and management, talked of steep learning curves and big challenges in meeting requirements. These different perspectives are reflected in the multiple ways in which agencies seem to be responding to the agenda, ranging from using it to leverage internal evaluation reforms, through pragmatic adoption, negotiation, and surface compliance, to subversion, resistance, and rejection.
The third observation is that the stories confirm a tension between learning and accountability, and suggest that in practice you have to choose one or the other. As one respondent observed and others affirmed, "accountability trumps learning". What this seems to mean is a shift in the locus of power, where key decisions move from programme staff to the funder: information is sent up, rather than power devolved down. As with other aspects of the agenda, this has good points and bad points. Several respondents from the donor and senior management perspective observe that the emphasis on accountability results in donors allocating funds more effectively, to those organisations that can demonstrate they achieve results. However, those at the coal face argue that the accountability processes block learning at the point of implementation, are costly in resources, and, in some cases, generate hopelessly reductionist information of very limited use.
The fourth observation follows from the third, and is drawn from one of the first responses we received: that the results agenda resembles the managerial reforms of New Public Management in the eighties (short pdf) as much as the Evidence-Based Policy focus of the nineties and noughties. This is reflected in a number of aspects of the reforms and the stories: the desire to make professions more accountable in their decisions; the confidence in management bureaucracy's ability to do this; the diverse reactions of the story-tellers; and the prioritisation of efficiency above other values. The stories likewise hint at both the useful and the dysfunctional outcomes of the New Public Management reforms, and will give us the opportunity to tease these out in more detail. Watch this space!
We're very grateful for the time and effort of all those who responded, and will make the full analysis of the responses public after the discussion at the Politics of Evidence conference – please check out the newly posted programme. We will, of course, fully respect the confidentiality commitments. Case studies to take into the discussions on the day are still welcome.