Thursday, 19 November 2015

Challenges of monitoring and evaluating: attributing causality

Photo: DarkoStojanovic, Pixabay (Creative Commons CC0 Public Domain License)
In popular culture the laboratory is a place with largely negative connotations: the word conjures up images of beakers, test tubes and white coats. Yet, as a social scientist, I look upon it with a slight sense of envy, because the laboratory represents a degree of experimental control rarely possible in the social sciences. As a monitoring and evaluation specialist on the African Universities Research (AURA) programme, however, ours is a study of human behaviour in its natural surroundings. These natural surroundings happen to be diverse: the first year of the programme focuses on multiple stakeholders at several universities spread across Eastern Africa. This familiar terrain of bureaucracy and institutional politics places our work far from the controlled environment of the laboratory.

My argument is that the further your research moves from the laboratory, the harder it becomes to exercise control, and therefore the harder it becomes to attribute causal relations between variables. This is a problem given the prevalence of linearity-assuming quantitative indicators in donor reporting, particularly in logframes. So, given this degree of complexity in programmes such as AURA, it is sensible to adopt mixed methods approaches. This is the first in a series of blog posts making that case, using examples from my experience as an evaluator on the AURA programme.

Measuring short-term impact and long-term value
AURA is a capacity development programme aimed at shaping behavioural change. This will be achieved, in part, through the AURA suite of capacity development courses. One of the challenges of the evaluation effort lies in showing that these courses have been effective, and this is done at different levels for different attributes. So let's first consider an example in which the objective is to measure improvements in the research skills of our workshop participants. Firstly, this requires a benchmark, something to compare progress against. So, before participating in any sort of workshop, the participant takes a combination of test and self-assessment questions that give us an idea of their current skill set. The scores on these questions are then compared to scores obtained from a similar exercise immediately after the workshop. The difference between them is what we report in our logframe: this is the measured increase in skills, typically expressed as a percentage. How well this method works depends on how sophisticated the test questions are and how well it is supported by data obtained from other, typically qualitative, methods. At its best this builds a strong case for attributing an increase in skills to the intervention. But this snapshot covers only a short space of time: the immediate pre- and post-intervention periods. Yet the real value of the AURA programme will depend on how these skills develop over a longer time period and what is done with them. This will determine the impact the programme has had; and for good reason this is increasingly what donors are focusing on.
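The arithmetic behind that logframe figure is straightforward. The sketch below uses invented participant scores (not AURA data) to show the pre/post comparison and the percentage it yields:

```python
def percentage_increase(pre_score: float, post_score: float) -> float:
    """Percentage change in assessment score from before to after a workshop."""
    if pre_score == 0:
        raise ValueError("pre-workshop score must be non-zero")
    return (post_score - pre_score) / pre_score * 100

# Hypothetical (pre, post) scores out of 100 for three participants.
participants = {"A": (40, 55), "B": (60, 72), "C": (50, 50)}
for name, (pre, post) in participants.items():
    print(f"Participant {name}: {percentage_increase(pre, post):+.1f}%")
```

The figure reported upward is the aggregate of exactly this kind of per-participant difference; participant C, with no measured change, reminds us the number can also be zero or negative.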

Correlation does not equal causation
Let’s carry on with our example. Our participants, having completed their workshop, go back into their everyday lives at their respective institutions. The easiest and perhaps least rewarding evaluation exercise might be to repeat the same test given to them previously, but taken several months after the workshop, to determine whether they’ve sustained their skill set. However, we are now in less secure territory as far as causality goes. What made the first assessment so useful is the very thing that hinders this one: the lapse of time. Earlier we could more confidently attribute the increase in skills to our intervention, because the testing was done immediately before and after it. Now, however, several months have passed and each of our participants has done different things in that time; some of these activities might have advanced these skills further, while others might have hindered them. So even if we have another snapshot of the participants’ skill set several months down the line, how do we know whether any shifts are because of, or in spite of, our intervention? Even if we found an upward surge in our participants’ skill set, how can attribution be safe given the possibility of so many other variables? Perhaps we could correlate the rise in skills with the amount of post-workshop interaction we have had with the participant. This is the point at which I hear the voice of my statistics tutor, who reinforced over and over in his class that correlation does not equal causation. Anyone who’s studied the social sciences has their favourite examples of two independent patterns that are strongly correlated, and my favourites come from military intelligence analyst Tyler Vigen, in his book Spurious Correlations.
Here we learn that the per capita consumption of mozzarella cheese is strongly correlated with the number of doctorates awarded in civil engineering; and that the greater the consumption of sour cream, the more motorcycle riders killed in non-collision transport accidents. It makes you wonder: are the patterns I’m finding between our interventions and a participant’s long-term progress just as spurious as these examples?
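It is easy to manufacture such correlations. The small simulation below (the numbers are invented, not Vigen's actual data) generates two series that have nothing to do with each other but both happen to drift upwards over time, and therefore correlate strongly:

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)
years = range(2000, 2015)
# Two independently generated series, each with its own upward trend plus noise.
cheese = [30 + 0.5 * t + random.gauss(0, 0.5) for t, _ in enumerate(years)]   # kg per capita
phds   = [480 + 15 * t + random.gauss(0, 10) for t, _ in enumerate(years)]    # doctorates awarded
print(f"r = {pearson_r(cheese, phds):.2f}")  # strongly correlated despite no causal link
```

The shared upward trend is a confounder: time drives both series, so the correlation between them is high even though neither causes the other. A participant's skills and our post-workshop contact could co-trend in exactly the same way.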

Towards a mixed methods approach
This needs to be investigated further. The evaluator has a choice to make at this point: to move either towards or away from the laboratory. The move towards is taken by the econometrician, only this time it is a figurative laboratory, and instead of using actual instruments of control we use numerical values in a regression analysis. A regression analysis is a tool used to investigate the relationship between different variables, which in our case might be the relationship between high performance on one of our courses and, say, an increase in earnings. However, there are several other factors that might explain an increase in earnings that are independent of the AURA programme, such as the participant’s prior level of education, or income differentials between different geographical locations and different sectors of the economy. The aim of the regression equation is to home in on the relationship between the variables in which we are interested while controlling for those in which we are not. This approach, however, relies on being able to ascribe numerical values to your control variables, and this is not always possible. To use our example further: what if the difference in earnings is due not to what we’ve hypothesised or tried to control for, but rather simply down to the participant having the right family connections? It’s less clear how we can place numerical values on things like patronage, and knowing this and conceding the point allows us to move away from the laboratory and embrace a mixed methods approach.
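As a sketch of what such a regression looks like in practice, the simulation below uses entirely invented data: earnings are generated from education, location, course performance and an unobserved "connections" factor, and we then try to recover the course-score effect while controlling only for the observables:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical data. 'course_score' is the variable of interest;
# 'education' and 'region' are observable controls; 'connections'
# (family patronage) also drives earnings but cannot be quantified,
# so it is left out of the regression.
education    = rng.integers(12, 20, n)        # years of schooling
region       = rng.integers(0, 2, n)          # 0/1 location indicator
connections  = rng.integers(0, 2, n)          # unobserved in practice
course_score = rng.normal(60, 10, n)          # performance on the course

earnings = (200 * education + 3000 * region + 2000 * connections
            + 50 * course_score + rng.normal(0, 500, n))

# Regress earnings on course score plus the observable controls only.
X = np.column_stack([np.ones(n), course_score, education, region])
coef, *_ = np.linalg.lstsq(X, earnings, rcond=None)
print(f"estimated effect of course score: {coef[1]:.1f} (true value: 50)")
```

Here the omitted patronage factor simply inflates the noise, because it was generated independently of course performance; if connections also influenced who scored well on the course, the estimate itself would be biased, and no amount of regression on the observables would fix it. That is precisely the gap a mixed methods approach is meant to fill.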

Jagdeep Shokar is Monitoring and Evaluation Advisor for the African Universities’ Research (AURA) programme. He is an M&E specialist in the evaluation of capacity development programmes promoting research and communications capabilities. He has been involved in the monitoring and evaluation of programmes across South Asia and Sub-Saharan Africa.
