

Young 1ove employs rigorous methodologies to monitor and evaluate the impact of our programs

Using evidence to ensure that all our activities have maximum impact is core to Young 1ove. We therefore apply multiple research methods throughout program design and implementation.



Randomized Control Trials (RCTs) are considered the “gold standard” of evidence for measuring the direct impact of an intervention. The premise of the evaluation is to measure the effectiveness of a program against a comparison group: the comparison tells us what would have happened without the program. The only way to create a fair comparison is through random assignment. The graphic on the right, provided by Innovations for Poverty Action, illustrates the method: one group receives the intervention, and the other does not. Because assignment to each group is random, each individual has an equal probability of receiving the program.
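As a simple illustration (a hypothetical sketch, not Young 1ove's actual tooling), random assignment can be expressed in a few lines of Python: shuffle the participant list and split it, so every individual has the same chance of landing in either group.

```python
import random

def randomly_assign(participants, seed=None):
    """Split participants into treatment and control groups at random.

    Shuffling before splitting gives every individual an equal
    probability of receiving the program.
    """
    rng = random.Random(seed)  # seed only to make the example reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "treatment": shuffled[:half],  # receives the intervention
        "control": shuffled[half:],    # does not; serves as the comparison
    }

# Hypothetical participant IDs
groups = randomly_assign(["P01", "P02", "P03", "P04", "P05", "P06"], seed=1)
```

Because neither the participants nor the program staff choose the groups, any later difference in outcomes between the two groups can be attributed to the program itself.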

While scouring the literature for ‘what works’ in health and education programming and deciding what to adapt, implement and scale as an organization, we pay particular attention to interventions that have been proven effective through Randomized Control Trials. An RCT in one place at one time, however, is no guarantee that the program tested will work everywhere, any time. We therefore also conduct our own RCTs to ensure that our version, adapted to context, still has the desired impact.


Although Randomized Control Trials are useful for testing the impact of a program, they do not always provide sufficient or timely insight into the mechanisms behind why it works. This is especially vital during the research and development phase of an intervention: while we are still designing, adapting and refining a program, we need to know which changes drive the most impact. To receive continuous feedback on the aspects of the program that work and those that do not, we use Rapid Impact Assessments (RIAs): a technique of delivering potential versions of a program to smaller groups of youth and measuring the effect on early signs of change. This allows us to refine our programs, test different versions and maximize the likelihood of eventual impact in a rapid and cost-effective manner.


Programs may be well designed and proven through RCTs, but still fail to have impact if they are poorly implemented. We therefore rigorously monitor our programs throughout all implementation phases to ensure they are not only delivered, but delivered at high quality.


We have built in-house monitoring systems to capture and analyze data and ensure that we are continuously driving towards the desired impact. This data takes many forms, from assessments of training quality and the availability of necessary resources to observations of program implementation.


For a program to have impact, it needs to address an existing need in the context and be adapted to suit that context. To this end, we conduct situational analyses of issues of interest: does the evidence-based program address a health or education challenge that exists here? To answer this, we use multiple methodologies and types of data.


Quantitative data from surveys tells us baseline rates of existing health or education outcomes (e.g. HIV prevalence or literacy levels) and whether there is a need. Qualitative data from interviews and focus groups gives depth and insight into what lies behind these numbers, helping us make an informed decision about whether a program may be suitable. If it is, such formative research is then also essential for ensuring that the program is tailored to the specific needs of the target population, and that its messages and activities are relatable and appropriate for the audience.
