“Matchmaking” Impact Evaluations with the J-PAL Research Network
Thank you for your interest in collaborating on an impact evaluation.
To help find programs that would be a suitable match, please provide the information below about projects that your organization is implementing or plans to implement which could potentially be evaluated via a randomized evaluation. Randomized evaluations involve randomly assigning participants to intervention and comparison groups, e.g. by a lottery.
The objective is to identify projects that could be evaluated using a randomized evaluation and that would benefit from the results of such an evaluation. We can also use this information to identify potential affiliated faculty to contact about a research partnership. Keep in mind that an evaluation could estimate the impact of (i) a new program, (ii) a change in the design or implementation of an existing program, or (iii) a particular feature of a multi-faceted program. Below are some additional questions to consider as you gauge your team's interest in participating in the matchmaking process. J-PAL Education sector staff will vet promising opportunities for rigorous impact evaluations and attempt to match them with researchers as such opportunities arise.
Considerations
Team capacity: Does your team generally have the time and capacity to engage with researchers, especially for the up-front work of setting up the research project?
Funding: Do you have sustained access to implementation funding? Researchers are typically responsible for securing funding for any costs related to the evaluation itself (data collection, etc.), but the implementing partner must commit to having the financial and logistical capacity to implement the program being evaluated.
Technical viability: While the researchers typically lead the technical design, a few components are particularly important to the partner.
- Number of beneficiaries: To meaningfully detect any difference in outcomes between those who receive the program and those in the comparison group, the study needs to include enough people. Exactly how many depends on the program design, expected impact, etc., but generally this means at the very least a few hundred participants, and often a few thousand or more.
- Timeline: Another key consideration is the project timeline. We cannot conduct a retrospective randomized evaluation; the research design must be in place before people in the study receive the program. So if a program has already started or is about to start, it usually makes sense to wait for a future iteration of the program to conduct a randomized evaluation.
- Willingness to randomize: A third key factor is the willingness of all partners to randomize who receives the program. There are many ways we can do this, but in principle, partners need to be open to randomization.
Please note that J-PAL's network of affiliated professors consists of independent academics with university affiliations. As such, they have full control over which projects they take on, and we can neither guarantee that any program will be suited to a rigorous impact evaluation with our network, nor that any particular impact evaluation opportunity can be “matched” to a J-PAL-affiliated investigator. That is, we do not operate a consulting model in which a program's interest in a particular research opportunity allows us to commit the time and partnership of our network investigators. However, J-PAL staff have experience vetting potential research opportunities, circulating a subset of promising opportunities to our network of affiliated investigators, and, where partnerships are formed, supporting research teams as they pursue funding jointly and move forward with an impact evaluation.
About LAI
The Abdul Latif Jameel Poverty Action Lab (J-PAL) Education sector has launched the Learning For All Initiative (LAI). LAI will generate research in key open areas related to foundational literacy and numeracy and holistic skills, and will summarize lessons for policymakers to incorporate into their decisions. Through this support, LAI aims to improve global learning outcomes by identifying the next wave of promising evidence-based strategies that policymakers can evaluate, replicate, and adapt to their own local circumstances. LAI’s research agenda presents promising overlaps with the work of many implementing organizations, inspiring us to connect implementing organizations with interested affiliates in the J-PAL network.
Policy and academic relevance: While in theory anyone can conduct a randomized evaluation of any sufficiently large program, there are two main considerations we keep in mind when deciding which proposals to fund through LAI's competitive proposal process, and that our affiliated academics weigh when deciding whether to partner on a project:
- Policy relevance: What would you or your partners do with the results? Is there a commitment to, or interest in, action? Could the research questions speak to broader global policy dialogues?
- Academic interest: Since the affiliates in our network are all independent academics based at universities around the world, they are often most interested in understanding the mechanisms behind why something does or does not work and in fitting the findings into the broader literature. As such, it is often most compelling when partners are open to adapting program components with a researcher to answer critical research questions.
If you have any questions, please note them in the final question box or reach out to lai@povertyactionlab.org.