The Battlefields of Rigour
The weapons of mass destruction never existed, yet they unleashed a human tragedy that continues today. What can we learn for the results agenda from the evidence base surrounding the WMD myth?
Last week, Dr. Michael Patton was with us in Wageningen, the Netherlands. On March 23rd, over 50 people working on evaluation in international development debated the politics of evaluation and the battlefield of ‘rigour’. Two issues struck me in relation to rigour as a contested concept.
First, what is rigour all about? Patton shared an example from the intelligence community in the USA concerning the non-existent weapons of mass destruction for which there was supposedly so much evidence. The intelligence community was seriously discredited. In response, a community of practice emerged to redefine ‘rigour’. This resulted in the Rigour Attribute Model by Zelik, Patterson and Woods (2010). The model challenges the focus on prescribed standards and tight adherence to them, linking that practice to a failure of ‘intelligence’. The authors look at different sources of risk around evidence and offer eight attributes of analytical rigour.
These attributes of analytical rigour are (Zelik et al. 2010):
- Hypothesis exploration (multiple hypotheses about the data)
- Information search (depth and breadth)
- Information validation (checking and corroborating)
- Stance analysis (considering the source of the data and putting it into context)
- Sensitivity analysis (the analyst’s understanding of the assumptions and limitations of their analysis)
- Specialist collaboration (inclusion of perspectives of domain experts)
- Information synthesis (beyond listing data)
- Explanation critique (collaboration to incorporate different perspectives on primary hypotheses).
Take the attribute of ‘information synthesis’ – a task that challenges the most experienced of evaluators at times. Zelik et al. define this as: “the extent to which an analyst goes beyond simply collecting and listing data in “putting things together” into a cohesive assessment”. Moving from low to high rigour on this attribute leads them to question the lack of insight that accompanies much data (low rigour). High rigour occurs when the individuals involved are “‘reflexive’ in that they are attentive to the ways in which their cognitive processes may have hindered effective synthesis” (ibid.).
The second issue concerns the relative value of rigour. Rick Davies reminded us that there is more to evaluation quality than ‘rigour’. The American Evaluation Association upholds five agreed and regularly reviewed standards: utility, feasibility, propriety, accuracy and evaluation accountability. The sequence is not accidental. In the recent review process by the Joint Committee on Evaluation Standards of the AEA, Patton said that the most intense discussions were between those keen to elevate accuracy to the first standard and those waving the flag to keep ‘utility’ there. Utility won out in that space of the professional evaluator. Yet I know that in other spaces the ‘accuracy’ standard wins. Maybe there are ways to expand the discussion on rigour into one on ‘quality’ – one that encompasses the other standards as well.
With thanks to the defence community for challenging the myopia that clouds vision in the narrow results agenda.
Long live rigour! May quality of thought and utility long be valued.