At Upaya, the process of collecting and assessing beneficiary data to track outcomes and measure impact is fundamental to our work. After all, four years ago the organization was founded on the hypothesis that providing jobs, not handouts, is the most efficient and effective way for the “ultra poor” to progress out of poverty. To date, we have sought to prove or disprove this hypothesis through social performance measurement (what we internally refer to as “SPM”): the systematic collection and analysis of beneficiary-level information.
I recently had the wonderful opportunity to attend the “Impact Measurement (IM) and Performance Management Training” in Bangalore, organized by the Global Impact Investing Network (GIIN) in collaboration with international consultancy Steward Redqueen and Social Value International, and to use it as a chance to reflect on Upaya’s methodology and revisit some of these ideas. The purpose of the training was to introduce practitioners to different IM frameworks and the ways in which these frameworks can be adapted to the context of an individual organization and its mission.
Some of my key takeaways from the training were:
- Don’t overstress the rigour; it should be “good enough”
- Data is not just for reporting but also for decision making
- Consider establishing a counterfactual in your setting
Financial and business decisions are made every day on the basis of imperfect information provided by financial accounting frameworks. Yet when it comes to evaluating impact, we tend to hold ourselves to a higher standard of rigour, one where randomized controlled trials are considered the gold standard. Too often, the fear of publishing imperfect impact data gets in the way of doing any IM activity at all. If the data is “good enough” to spot trends, aid decision-making, and support course correction, and does not overly tax human or financial resources, then it is worth collecting.
Secondly, impact data these days appears to be used mostly for reporting outwardly to stakeholders, such as philanthropists and investors. In doing this, we overlook the critical role data can play in helping an entrepreneur and/or management team make decisions about the business itself. The challenge most often cited among practitioners is one of constrained resources: securing the buy-in needed to undertake the exercise of data collection, and finding the bandwidth to do it on top of everything else. For an early-stage enterprise that is working to expand the business, build the team, formulate marketing strategy, and raise funding, IM is a weighty commitment.
Data collected through impact measurement, however, should not be seen as separate and distinct from business data. If an IM framework is designed to provide the business with useful insights about its beneficiaries, insights that can then inform refinements to the core product or service, there is a far higher probability of entrepreneur buy-in and, in turn, better quality data.
Last, but certainly not least, is the need to accommodate the counterfactual in the actual impact assessment. A counterfactual, simply put, means “what would have happened anyway, without the intervention in question.” A counterfactual could take the form of state and national averages based on government data, or it could be a more statistically rigorous “control group” or “non-intervention group.” When compared with data that depicts the outcomes of the intervention, the counterfactual gives us a clearer indication of impact, that is, the positive effects that can be attributed to the intervention. In essence, it provides the necessary context and helps us paint an honest picture of the impact that has occurred.
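To take a purely illustrative example (hypothetical figures, not Upaya data): suppose surveys showed beneficiary household incomes rising 25 percent over a year, while government data for comparable households in the same state showed incomes rising 10 percent over the same period. The 10 percent rise is the counterfactual, what likely would have happened anyway, and the roughly 15-percentage-point difference is a more defensible estimate of the impact attributable to the jobs themselves.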
In the coming months, we will integrate some of these learnings into our SPM framework. We hope to make our system of impact assessment more robust, so that it provides high-quality information on our beneficiaries’ progress out of poverty. Our goal is to simultaneously provide valuable insights and learnings to our entrepreneurs, supporting the continued growth of their businesses and the creation of hundreds more jobs.