3 Linking Spending and Outcomes: Some Lessons from Impact Evaluations in Education and Health
- International Monetary Fund
- Published Date: May 2011
Education and health are key dimensions of many of the Millennium Development Goals (MDGs). As 2015 approaches, calls for greater development effectiveness are taking on increased urgency for two reasons: development assistance for health and education has risen to unprecedented amounts, but has not led to expected improvements in outcomes; and the global crisis has forced a reexamination of social spending.
This chapter highlights lessons about this disconnect between spending and outcomes derived from recent impact evaluations in health and education. Although designing the right approach to improving outcomes is complex and context dependent, a systematic evidence base generated through impact evaluations can provide useful guidance for policy makers. Furthermore, the body of these evaluations has grown tremendously in recent years. The base of impact evaluations is still far from complete, and much more work is needed; but some interesting findings have already begun to emerge.
Existing impact evaluations show that efforts aimed at improving access to education and health services have met with some success. However, it has been much more difficult to improve the quality of education and health services, which remain low in many developing countries. Hence, the disconnect between development spending and learning and health outcomes. The disconnect stems in part from the narrow focus on financing inputs, while ignoring other parts of the causal chain that links public spending to changes in outcomes. The causal chain shows that incentives facing both service providers and consumers are important mediating variables in the link between development spending and outcomes, and policies have often failed to sufficiently account for them. Not surprisingly, therefore, one of the strong lessons emerging from impact evaluations is that continuing to increase provision of traditional inputs will have only limited impact if issues of service delivery and consumer behavior are not addressed. As highlighted in the 2004 World Development Report: Making Services Work for Poor People, improving service delivery in health and education, in addition to increasing resources, is vital to reach many MDGs.
Although this chapter points to the limited impact of input provision, it does not question the need for additional financing because improving quality also requires resources. It is important to ensure that institutions provide services efficiently and responsively—and that their potential clients have the ability and desire to use services efficiently and hold service providers accountable for quality. And when quality and institutional arrangements improve, the returns to additional investments are also greater. This focus on institutional arrangements is consistent with the findings presented in Chapter 1, which highlights policies and institutions in explaining the heterogeneity of MDG progress.
Disconnect between expenditure and outcomes
Budget allocations for health and education are at an all-time high in the developing world. Development assistance for health has quadrupled since 1990, peaking at $24 billion in 2008 (figure 3.1a). Health spending by developing-country governments has also peaked (albeit at a much lower level), nearly doubling to reach $240 million during 1995–2006. Similarly, development assistance for education has doubled since 2002, reaching a high of $10.8 billion in 2007 (figure 3.1b).
Despite these huge funding gains, progress toward development targets has been uneven. Lack of resources only partly explains many of the remaining gaps because links between spending and human development outcomes are weak,1 and good policies and strong institutions are central to improving the productivity of education and health spending.
True, access to education and health services has expanded sharply in recent years. For example, basic immunization coverage in low-income countries increased by 20 percentage points during 2000–08.2 Primary and secondary school enrollment rates have also improved, in some cases dramatically. But the quality (or outcomes) of education and health remains a grave concern. In education, this is made evident by the poor learning outcomes of schoolchildren.3 Often, countries that spend more on primary education (compared with the level predicted by per capita income) generate on average only a small improvement in test scores (compared with scores predicted by per capita income) (figure 3.2). In India, even though most children of primary-school age were enrolled in school, 35 percent of them could not read a simple paragraph and 41 percent could not do a simple subtraction.4
FIGURE 3.1. Government and donor spending on health and education is unprecedented
Note: Includes all on-budget and off-budget health-related disbursements from bilateral donor agencies, grant-making and loan-making institutions, United Nations agencies, and nongovernmental organizations. Development assistance to education comes not only as direct allocations to the education sector but also through general budget support.
FIGURE 3.2. The relationship between expenditure and learning outcomes is weak
Source: Bruns, Filmer, and Patrinos 2011.
Note: The graph shows the deviation of normalized test scores from that predicted by GDP per capita against the deviation of public spending on primary education per student (relative to GDP per capita) from that predicted by GDP per capita.
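As a purely illustrative sketch, the residual-on-residual comparison described in the note can be computed as follows (all numbers below are invented for the example, not actual country data from the report):

```python
# Purely illustrative sketch (invented numbers, not actual country data) of
# the method described in the note to figure 3.2: regress both test scores
# and per-student spending on GDP per capita, then compare the residuals,
# i.e., each country's deviation from the level predicted by its income.

def ols_residuals(x, y):
    """Residuals from a simple least-squares fit of y on x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    intercept = mean_y - slope * mean_x
    return [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

# Illustrative data: log GDP per capita, normalized test score, and primary
# spending per student (percent of GDP per capita) for six fictional countries.
gdp      = [6.5, 7.0, 7.5, 8.0, 8.5, 9.0]
scores   = [-1.2, -0.6, -0.5, 0.1, 0.4, 0.9]
spending = [8.0, 10.0, 13.0, 11.0, 15.0, 14.0]

score_dev = ols_residuals(gdp, scores)    # vertical axis of figure 3.2
spend_dev = ols_residuals(gdp, spending)  # horizontal axis of figure 3.2

# Each (spend_dev[i], score_dev[i]) pair is one point in the scatter plot;
# a weak association between the two series is the "weak relationship."
```

A weak or flat pattern in such a scatter indicates that countries spending more than their income level predicts do not systematically score better than their income level predicts.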
In health, basic immunization rates have expanded, but full immunization coverage—needed for the full impact on child mortality—is much lower.5 Likewise, the global average attendance at a minimum of one prenatal visit is 78 percent, but attendance at four visits—needed for the full impact on maternal mortality—is around 48 percent.6 Clinical competence may also be lacking (as in Senegal, where an assessment of service quality found that only 34 percent of cases were correctly diagnosed).7
So, more resources—even when increasing coverage—may not be enough to have a big effect on outcomes.
The disconnect between spending and outcomes partly reflects the failure of human development spending to reach poor people. Public spending on health and education is generally believed to be an effective means to reach the poor, but empirical evidence fails to support this assumption. Since the 1990s, numerous studies have found that public health spending is not concentrated among the poor and that the rich benefit disproportionately from public health subsidies.8 Apart from Latin America and the Caribbean, the rich receive far greater benefits from public health spending than do poor people (figure 3.3). In Sub-Saharan Africa, South Asia, and Europe and Central Asia, the incidence of health spending among the richest fifth of the populace was more than twice that of the poorest fifth.
So, what are some of the factors that cause the disconnect between public spending and changes in human development outcomes?
FIGURE 3.3. There are wide differences in the share of public health spending going to the rich and the poor
Source: Authors’ calculations, based on data from Filmer 2003.
Understanding the disconnect
Calls for greater development effectiveness are implicitly calls to strengthen the links in the causal chain between spending on inputs and human development outcomes. Each link of the chain, if weak, can prevent increased public financial and physical resources from being translated into improvements in human development outcomes: the link between resources allocated and services and goods generated, between goods and services generated and consumption, and between consumption of goods and services and human development outcomes (figure 3.4). The first link emphasizes supply-side factors, the third link demand-side factors. The middle link is the critical intermediate step where the demand-side and supply-side factors intersect for the delivery of health and education services.
FIGURE 3.4. Links between public expenditure and human development outcomes
Note: The service delivery triangle showing the interaction among clients, service providers, and policy makers has been adapted from the 2004 World Development Report (World Bank 2003). This diagram is a simplification of very complex relationships among spending, supply, service delivery, demand, and outcomes. There are nuances that are not captured here. One such nuance is that services provision implies use, whereas the generation of goods does not. Another is that different service providers (government, nongovernment, faith-based) have different motivations and incentives, so their interaction with clients and policy makers and the outcomes will be different. See, for example, Reinikka and Svensson (2010). Finally, incentives created by different financing arrangements (including prepayment mechanisms) and their impact on provider and consumer behavior are not fully captured here.
On the supply side, inequitable resource allocation, lack of complementary inputs (both sectoral and cross-sectoral), and crowding out of private sector provision can weaken the link between public spending and the amount of goods and services generated. And the consumption of these goods and services will depend critically on the quality of service delivery, a crucial intermediate step where the results chain frequently breaks down. This can often be traced to poor incentives for service providers and low accountability. The weak service delivery link can also reinforce existing inequalities in human development outcomes because the challenges of service delivery are often more pronounced in hard-to-reach areas and underserved populations.
The service delivery triangle in figure 3.4 outlines the relationships among different system actors. Citizens, especially poor people, who ultimately consume the education and health services generated by the public system are the clients. They have a direct relationship with frontline service providers, such as teachers in public schools and health care workers in public health facilities—the short route of accountability. Crucially, however, the service providers generally have no direct accountability to the consumers, unlike in a market transaction. Instead, they are accountable only to the government that employs them. The accountability route from consumers to service providers is therefore through the government—the long route. To hold service providers accountable for the quantity and quality of services provided, citizens must act through the government—a process that is especially difficult for poor people because they can seldom organize themselves and be heard by policy makers. Moreover, the government rarely has enough information—or indeed the mechanisms—to improve service provider performance.
Demand-side factors also mediate between public spending and human development outcomes. The lack of uptake from consumers resulting from the lack of demand, information, or other factors can undermine the positive impacts of public spending in health and education.
All this suggests that merely increasing the supply of human development inputs and services will not automatically improve human development outcomes. That is why the approach pervasive in global resource mobilization efforts (filling narrowly defined resource gaps through predominantly supply-side interventions, without strengthening service delivery and consumer uptake) has had limited impact.
What do impact evaluations say?
By understanding the causal pathways, practitioners can be better placed to improve development effectiveness and thus accelerate progress toward the MDGs (and secure more value for money).
Impact evaluations have emerged as an important tool to conduct such analyses. They assess the changes in the well-being of individuals that can be directly attributed to a particular project, program, or policy—and thereby provide robust and credible evidence on how a project has performed and what changes it has “caused.” At a broader level, the information they generate can contribute to a systematic evidence base on the kinds of interventions that work. Rigorous, project-specific impact evaluations can also shed light on the broader functioning of the education and health systems and on general household behavior. They have often forced a rethinking of assumptions regarding human behavior (as illustrated in box 3.1) or brought to light unexpected impacts of certain types of development interventions (as illustrated in box 3.2).
Because of the explicit focus on causality, the main methodological requirement for an impact evaluation, and what sets it apart from other types of monitoring and evaluation, is the need for a credible counterfactual (that is, what the outcome for project beneficiaries would have been without the project).9 Establishing a counterfactual retrospectively is hard. For this reason, it is best to design the impact evaluation alongside the program and to build it into the program.
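To make the counterfactual logic concrete, here is a minimal simulation (with invented numbers, not data from any actual evaluation) contrasting a naive enrolled-versus-not-enrolled comparison with a randomized design:

```python
# Minimal sketch of why a credible counterfactual matters. If better-off
# individuals are likelier to enroll in a program, a naive comparison of
# participants with nonparticipants overstates the program's true effect;
# randomized assignment recovers it. All numbers here are invented.
import random

random.seed(0)
TRUE_EFFECT = 5.0  # the true program impact we want to recover

def outcome(baseline, treated):
    """Observed outcome: baseline plus program effect plus noise."""
    return baseline + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 1)

def mean(xs):
    return sum(xs) / len(xs)

population = [random.gauss(50, 10) for _ in range(10_000)]

# Self-selection: better-off individuals opt into the program.
treated_naive = [outcome(b, True) for b in population if b > 50]
control_naive = [outcome(b, False) for b in population if b <= 50]
naive_diff = mean(treated_naive) - mean(control_naive)

# Randomization: the control group is a credible counterfactual.
coin = [random.random() < 0.5 for _ in population]
treated_rand = [outcome(b, True) for b, t in zip(population, coin) if t]
control_rand = [outcome(b, False) for b, t in zip(population, coin) if not t]
ate = mean(treated_rand) - mean(control_rand)

print(f"naive comparison: {naive_diff:.1f}")  # inflated by selection bias
print(f"randomized ATE:   {ate:.1f}")         # close to the true effect of 5
```

The naive comparison bundles the program effect with the preexisting advantage of those who enroll; the randomized control group strips that advantage out, which is exactly what a prospective, built-in evaluation design makes possible.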
Some caveats regarding impact evaluations need to be noted. Although extremely useful, impact evaluations cannot answer all development questions. Furthermore, the existing knowledge base of impact evaluations in education and health is far from complete. In many areas, the current evidence does not support as clear an answer as one would like. In others, it does not answer some key questions. The results of many individual impact evaluations cannot be generalized. In fact, the question of external validity of impact evaluations has been an important and often somewhat contentious one.10 Formulating general recommendations from the conclusions of impact evaluations is often hampered by lack of comparability among settings, different forms of the intervention, and varying levels of rigor in the evaluation. Also, very few impact evaluations inform us about the longer-term impact of programs or whether observed impacts persist after the program has ended. Another frequently heard critique of impact evaluations is that they do not focus enough on the reasons why certain projects work and others do not.11 It has also been argued that impact evaluations do not lend themselves easily to distributional analysis because they are focused on estimating average treatment effects.12 Finally, very few impact evaluations collect or report on program cost data; this represents a missed opportunity to gain insights into efficiency and cost-effectiveness, as well as longer-term financing issues such as sustainability.
Many of these objections can be resolved by expanding the base of rigorous impact evaluations on certain topics. Researchers should replicate successful models in different contexts to assess the extent to which programs work under varying circumstances. In addition, replicating interventions that have been successful in small-scale settings at a regional or national level will help contribute to knowledge of what works in the real world. In certain cases, such as incentives and accountability, there is a need to test various combinations and extensions of existing approaches. Conducting a series of related evaluations in a comparable setting with cross-cutting treatment designs generates considerable cost savings in data collection and allows cost-effectiveness comparisons across program variants. It is also important to collect cost and process data, data on intermediate outcomes, and good qualitative data to complement quantitative data. The World Bank’s Development Impact Evaluation (DIME) initiative is encouraging the use of impact evaluations and promoting their integration into government programs (box 3.3).
Although this chapter focuses on impact evaluations, such evaluations are not the only source of evidence concerning effective interventions to advance the education and health MDGs. Other evaluation methods also contribute to this debate—such as mixed methods, objectives-based approaches, qualitative methods, and theories of change.
BOX 3.1. Some impact evaluations have forced a rethinking of our assumptions about HIV/AIDS
Development interventions are often designed on the basis of certain assumptions, particularly for HIV/AIDS—an area where knowledge has grown by leaps and bounds, especially over the past decade. Impact evaluations have contributed to that learning, and some evaluations have shown how erroneous these assumptions can be. Here are a few examples.
Assumption: Promoting knowledge of HIV/AIDS will reduce risky behavior
Information and education campaigns have been the main focus of HIV/AIDS prevention campaigns. Although they have improved knowledge and self-reported behavior, their apparent impact on behaviors may actually be the result of a reporting bias fueled by people’s greater knowledge of what they would need to do to reduce their risk, rather than a reflection of substantial changes in actual behaviors. Two recent meta-analytic reviews show that only a small fraction of behavior change interventions had an impact on HIV or other sexually transmitted infections.a Information needs to be better targeted and packaged to have an impact on actual behavior. Nor is more information necessarily better. Consider a program that sent text-message reminders concerning adherence to antiretroviral treatment: weekly reminders improved adherence, but daily reminders did not, perhaps because people became habituated to them or found them intrusive.
Assumption: More HIV testing will lead to declines in risky behavior
HIV testing is one of the key policy responses to the HIV/AIDS epidemic in Africa, but there is little rigorous evidence on how individual sexual behavior responds to testing. Impact evaluations from East Africa produce three findings.b First, individuals surprised by an HIV-positive test are more than five times more likely to contract a sexually transmitted infection than are members of a similar untested control group, indicating an increase in risky sexual behavior. Second, individuals who believed they were at high risk for HIV have a 60 percent decrease in their likelihood of contracting a sexually transmitted infection following an HIV-negative test, indicating safer sexual behavior. Third, when HIV tests agree with a person’s belief of HIV infection, there is no statistically significant change in contracting a sexually transmitted infection. Using the distribution of beliefs of HIV infection and prevalence from the study, the evaluation finds the overall number of HIV infections to increase by 25 percent when people are tested, compared with when they are unaware of their status—an unintended consequence of testing.
Assumption: It is usually stigma that prevents people from picking up HIV test results
It is often argued that getting people to learn their HIV status is crucial for fighting HIV/AIDS, but that stigma and fear of obtaining a positive result create a major barrier preventing people from picking up their results at testing centers. An impact evaluation in Malawi found, however, that small incentives and deadlines were enough to induce people to do this.c Distance to the center was also a key determinant of attendance. These findings suggest that procrastination and the inconvenience of travel, rather than stigma, explain much of the problem.
Assumption: People who test positive will reduce their risky behavior
Although people can behave altruistically once they know they are HIV-positive, they may also show disinhibition because they have less to lose when engaging in risky behavior. A recent study in Mozambique found that risky sexual behaviors increase in response to the perceived changes in risk associated with greater access to antiretroviral therapy.d So, scaling up access to antiretroviral therapy without accompanying prevention programs may not be optimal if the objective is to contain the disease, because people adjust their sexual behavior in response to the perceived changes in risk.
BOX 3.2. Spillovers and unintended effects can be important
Impact evaluations have helped highlight positive spillovers of certain types of human development interventions. In Kenya, providing deworming drugs at treatment schools lowered the incidence of worm infections among nontreated children as a result of a fall in transmission rates.a Spillover effects are also shown to be strong in the provision of AIDS antiretroviral treatment. Also in Kenya, children’s weekly hours of school attendance increased more than 20 percent within six months of initiating antiretroviral treatment for any adult household member.b For boys in treatment households, these increases closely followed their reduced supply of paid labor. Similarly, young children’s short-term nutritional status—measured as weight for height—also improved dramatically.
Although Mexico’s conditional cash transfer (CCT) program Oportunidades/Progresa targeted poor children, school participation also increased among children above the poverty cutoff.c Scholarship programs for girls have demonstrated positive externalities for boys’ school participation and teachers’ attendance.d
Some projects, however, may have unintended negative consequences. For example, increasing the availability of affordable antimalarial drugs (as with artemisinin-based combination therapy, the only remaining effective class of first-line antimalarial drugs) may increase the resistance of the malaria parasite to those drugs and lead to overtreatment of individuals who are presumptively diagnosed but not actually infected with malaria, with negative health outcomes because the true cause of illness remains untreated. There are also concerns over the sustainability of the subsidy (as a result of inefficient targeting). Cohen, Dupas, and Schaner (2010) confirmed the fears of overtreatment and showed that subsidized access to malaria rapid diagnostic tests substantially reduced these unintended consequences. Another example of a negative spillover effect is that some CCT programs show that, in some contexts, parents compensate for the reduction in labor market work of one child (the CCT beneficiary who is required to attend school) by increasing the hours worked by other siblings.e
a. Miguel and Kremer 2004.
b. Zivin, Thirumurthy, and Goldstein 2006.
c. Lalive and Cattaneo 2006; Bobonis and Finan 2009.
d. Kremer, Miguel, and Thornton 2009.
e. Barrera-Osorio, Linden, and Urquiola 2007; Filmer and Schady 2009.
This chapter draws especially on the body of knowledge emerging from impact evaluations in the areas of health and education. Even though we have restricted the scope of this discussion to health and education, it is important to recognize that impact evaluations are also generating valuable policy-relevant evidence in the areas of other MDGs (such as water and sanitation)13 that are not discussed here. In synthesizing the evidence on health and education, we have restricted ourselves to the more rigorous empirical work that has relied on a viable identification strategy (that is, the counterfactual is reliable) and produced causal links between interventions and impacts—something that traditional monitoring and evaluation does not always accomplish. Acknowledging that not all rigorous evidence comes from impact evaluations alone, we focus only on them, partly because of limitations of space and partly because a lot of evidence coming from impact evaluations is fairly recent and highlights how evidence-based decision making can be operationalized (for examples, see boxes 3.1 and 3.2). As noted, the evidence base from impact evaluations in health and education sectors is still evolving and, at this stage, remains very far from complete. In light of that, the chapter refrains from making any explicit policy recommendations or engaging in detailed policy discussions.
BOX 3.3. The DIME initiative
There are several initiatives in the World Bank to improve the quality of evidence to inform development practice. One example is the Development Impact Evaluation (DIME) initiative. Created in 2005 in the Office of the Chief Economist, DIME was relaunched in 2009 as a broad-based, decentralized effort to mainstream impact evaluation in the Bank. The objective is to improve the quality of the Bank’s operations, strengthen country institutions for evidence-based policy making, and generate knowledge in 15 development areas.
With 185 completed studies and 280 active studies in 85 countries, the World Bank is attempting to enhance and expand knowledge of the what and the how of economic development by delivering precise estimates of the cause-effect relationship between policy action and outcomes.
The DIME approach centers on three aspects, in a dramatic shift in the practice of evaluation. First, DIME has been moving the evaluation practice from external and ex post evaluations toward internal, prospective, and operationally driven evaluations that are externally validated through the use of state-of-the-art methods. This framework promotes a results-based model and allows governments to own and set the research agenda, thus receiving just-in-time advice on their most pressing operational questions and genuinely transforming the way they make decisions for their programs.
Second, DIME builds teams that combine both operational and technical expertise. This ensures that researchers work in close collaboration with project teams from the planning stage onward to sustain a process of prospective and operationally driven knowledge generation. In this model, the project and research teams work together to embed a learning agenda in the project design.
Third, DIME organizes impact evaluations around development areas and offers tailored programmatic and strategic support. Cross-country events develop regional communities of practice around certain subthemes, and teams share knowledge and experience going beyond the impact evaluation agenda. This ensures that all studies share a coherent and strategic research agenda and that the results are policy relevant and are disseminated to a wide audience of practitioners. DIME sponsors 15 thematic impact evaluation programs: access to infrastructure, active labor market programs, agriculture adaptations, conditional cash transfers, early childhood development, education service delivery, energy mitigation, health systems and results-based financing, HIV/AIDS, finance and private sector growth, forestry adaptations, institutional reform, local development, malaria, and water resource management adaptations.
Source: Legovini 2010.
Efforts to help improve access have met with some success, but some questions remain
Reducing schooling costs has helped increase school enrollment
The driving focus of education policy making in developing countries has been a push to increase enrollments in primary and secondary schools. Interventions that reduce the out-of-pocket costs of schooling have been particularly successful.14 The reduction in schooling costs can take many direct or indirect forms. For instance, providing free uniforms and textbooks and building more classrooms in rural Kenya had a strong impact on total years of schooling completed.15 Similarly, school feeding interventions have increased school participation in a variety of settings.16
A rich evidence base also shows the positive role of financial incentives in promoting school participation in a variety of contexts. This evidence includes the evaluation of Mexico’s conditional cash transfer (CCT) program that documented the efficacy of CCTs as a mechanism for encouraging enrollment and attendance.17 These results have been replicated by many other researchers in many other countries.18 In fact, evidence from CCT programs in nine other countries (Bangladesh, Cambodia, Chile, Colombia, Ecuador, Honduras, Nicaragua, Pakistan, and Turkey) has consistently demonstrated significant impacts of these programs on school enrollment in the initial years of program operation. Reducing user fees can similarly increase enrollment.19 However, few of these programs have been around long enough to determine whether the impact is a short-term effect of a novel project or is more lasting. In addition, the higher enrollments induced by these programs were not accompanied by increased achievement.20
MAP 3.1. The maternal mortality rate is declining only slowly, even though the vast majority of deaths are avoidable
Source: World Bank staff calculations based on data from the World Development Indicators database.
Some impact evaluations have shown that improving school quality does not seem to be a major inducement for increasing school participation.21 Indeed, interventions improving the quality of education have, on average, generated no changes in participation.22 But much more evidence on this question is needed before the results can be generalized.
Health input provision programs have to grapple with uptake issues
All provision programs deal with the central issue of how to get citizens to take up the inputs and services they offer. In developing countries, health facilities are not always accessible to poor, rural, and remote communities. Other modes of delivery, such as school-based and community-based health interventions, have been popular. But we are only beginning to learn how well such programs work and under what conditions.
Impact evaluations have shown that schools can be an effective mode of delivery for health input provision and can address some uptake challenges because the families do not have to be persuaded. Some (but not all) school feeding programs have improved children’s nutritional intake23 and health. A recent study has shown improvements in anthropometric outcomes of the younger siblings of students who receive take-home rations.24
Other school-based health programs have also shown positive impacts. School-based mass treatment with inexpensive deworming drugs in Kenya (where the prevalence of intestinal worms among children is very high) improved health not only among treated students but also among untreated students at treatment schools and among untreated students at nearby nontreatment schools, thanks to reduced disease transmission.25 Also in Kenya, school-based intermittent preventive treatment helped reduce anemia prevalence.26 The delivery of such treatment was estimated to cost $2 per child treated per year, and the estimated cost per anemia case averted was $30. This type of intervention would be especially cost effective in areas with high rates of malaria transmission.27
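A quick back-of-the-envelope calculation shows what the reported cost figures imply (this is arithmetic on the two numbers quoted above, not additional data from the study):

```python
# Back-of-the-envelope check of the reported cost-effectiveness figures for
# school-based intermittent preventive treatment in Kenya: if treatment costs
# $2 per child per year and each anemia case averted costs $30, the implied
# number of cases averted per child treated follows directly.

cost_per_child = 2.0          # dollars per child treated per year (reported)
cost_per_case_averted = 30.0  # dollars per anemia case averted (reported)

cases_averted_per_child = cost_per_child / cost_per_case_averted
print(f"implied anemia cases averted per 100 children treated: "
      f"{100 * cases_averted_per_child:.1f}")  # prints 6.7
```

In other words, roughly 7 anemia cases averted per 100 children treated, which is why the intervention looks especially attractive where malaria transmission (and hence baseline anemia risk) is high.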
Cheap telecommunication technologies can address uptake, especially in resource-limited settings. An intervention in Kenya used text messages to remind antiretroviral therapy patients to adhere to the treatment guidelines, an important goal given concerns of widespread drug resistance when adherence is poor. Weekly reminders increased the percentage of participants achieving 90 percent adherence to the therapy by approximately 13–16 percent, compared with no reminder.28 They also reduced the frequency of treatment interruptions.
Community-based interventions have also become more popular, particularly as complements to facility-based care, linking communities with facility-based services. Despite showing great promise, community health worker programs have had mixed results, as reflected in the findings of one of the few rigorous impact evaluations of such programs in Ethiopia. The Ethiopian program significantly increased the proportion of children fully immunized and the proportions of children and women using insecticide-treated bednets for malaria protection. But the effect on prenatal and postnatal care was rather limited, as was the impact on the incidence and duration of diarrhea among children under 5 years of age.29
Because community-based interventions often come as a package comprising input provision, information, advocacy, and other components, it is hard to untangle the impacts of the different components. Impact evaluations may be useful in untangling them; but, so far, their number is quite limited. A meta-analysis of the health literature found that clean delivery had the most pronounced effect on reducing maternal mortality. In comparison, building community support and advocacy groups, involving other family members through community mobilization, or training community health workers and traditional birth attendants had little or no impact on reducing maternal mortality. But community interventions affected perinatal mortality through different channels: community support groups, advocacy through group sessions, and family involvement in care were especially effective in reducing perinatal deaths. Family involvement in care also showed a positive impact in reducing stillbirths. Additional impact evaluations are needed to confirm these findings.
Other programs have tried to deal with the uptake issue by combining demand-side and supply-side interventions. An impact evaluation of an immunization program in India shows that whereas pure supply-side improvements were associated with increases in immunization, uptake rates were much higher when supply-side improvements were combined with modest demand-side incentives in kind.30 There are few comparative studies of the relative impact of demand-side stimulation versus supply-side responses. For one of the few CCTs that combined interventions on both sides—Nicaragua’s Red de Proteccion Social—the impact evaluation was unable to untangle the demand-side and supply-side effects.31
Cost sharing does not necessarily improve the efficiency of health input subsidies
Many practitioners argue that for some types of health inputs (particularly those requiring the recipient to follow a treatment regime to see health benefits), cost sharing can improve the efficiency of the input subsidy.32 In this long-standing debate, others contend that cost sharing dampens demand, especially among the poor.
In Zambia, households paying a higher price for a water treatment product were more likely to report treating their drinking water two weeks later.33 Likewise, controlled studies in several countries record improvements in the use of services among poor people after copayments increased the transparency and accountability of providers to poor clients.34 In contrast, an impact evaluation in Kenya finds that cost sharing does not seem to improve targeting of insecticide-treated bednets for malaria prevention to those in greatest need, and that women who pay for their bednets are no more likely to use them than are those who receive them free.35 Cost sharing also dampens demand considerably, with uptake dropping by 75 percent when the price of bednets increases from zero to $0.75 (from a 100 percent subsidy to an 88 percent subsidy).
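The subsidy arithmetic behind the Kenya bednet figures is worth making explicit. The sketch below is illustrative only: it infers the implied full (unsubsidized) bednet price from the two subsidy rates reported above, and the function and variable names are my own.

```python
# Illustrative arithmetic for the Kenya bednet cost-sharing result:
# uptake fell 75 percent when the price rose from $0 (a 100 percent
# subsidy) to $0.75 (an 88 percent subsidy). The full price is inferred.

def subsidy_rate(price, full_price):
    """Share of the full price covered by the subsidy."""
    return 1 - price / full_price

full_price = 0.75 / (1 - 0.88)            # implied retail price, $6.25
free = subsidy_rate(0.00, full_price)     # 100 percent subsidy
shared = subsidy_rate(0.75, full_price)   # 88 percent subsidy

print(f"implied full price: ${full_price:.2f}")
print(f"subsidy falls {100 * (free - shared):.0f} points; uptake falls 75%")
```

A 12-point change in the subsidy rate thus coincided with a 75 percent drop in uptake, which is why the evaluations emphasize sensitivity to any positive price rather than to the price level itself.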
Another impact evaluation in Kenya shows that introducing a small cost-sharing component into a school-based deworming program dramatically reduced the uptake of deworming medication and raised little revenue, relative to administrative costs. Nor did the user fees help target treatment to the sickest students.36 Interestingly, one study shows that uptake is particularly sensitive to price at prices close to zero.37 In other words, uptake is found to be highly sensitive to having a positive price, but there is less evidence that it is sensitive to variation within the positive price range.
These findings suggest the need for more research, particularly to establish optimal levels of subsidization for different health inputs.38
CCT programs have often been effective in reaching the poor
CCTs have traditionally been designed to target poor people, so it is not surprising that a number of CCT programs have reduced poverty. But the impact varies, and there is still not much clarity about the conditions under which CCTs will be most effective in reducing poverty. Red de Proteccion Social in Nicaragua and Familias en Accion in Colombia show strong effects on poverty, whereas Programa de Asignacion Familiar in Honduras and Oportunidades/Progresa in Mexico had significant but more modest effects. Some of the larger CCTs from Brazil, Jamaica, and Mexico have also shown an impact on poverty at the national level.
But the CCTs in Cambodia and Ecuador did not affect median consumption or reduce national poverty.39
In Ecuador, cash transfers improved cognitive outcomes for the poorest children but not for children somewhat better off.40 In poor households, child work is a major cause of school-age children dropping out of school. Reduced child work by CCT beneficiaries has been found in Brazil, Cambodia, Ecuador, Mexico, and Nicaragua. In Cambodia, for example, the average child receiving the transfer was 10 percentage points less likely to work for pay. Results on this have been mixed, however, and more research is needed to understand the impact on siblings’ work—especially if the number of CCTs targeting individual children increases.
Despite being more likely to adopt improved child care practices, the worst-off households in Mexico had greater difficulties translating these improvements into improved nutritional outcomes—largely because of the lack of complementary inputs among poorly educated mothers, poor access to safe water, and poorer quality nutrients.41
Incentives do not need to be large
Some impact evaluations have shown that small benefits can often be enough to incentivize the desired change in behavior, and increasing the total size of the transfer has only small marginal effects on service uptake or behavior. In Cambodia, giving a child $45 quarterly to stay in school proved as effective as giving $60.42 Similarly, in Malawi, the school participation effects of giving a household $5 were not significantly different from giving $10. “[T]he key parameter in setting benefit levels is the size of the elasticity of the relevant outcomes to the benefit level.”43
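The elasticity quoted above can be made concrete with a simple arc-elasticity calculation. The sketch below is illustrative only; the participation rates are hypothetical numbers chosen to mirror the Cambodia finding that $45 and $60 transfers produced similar school participation.

```python
# Illustrative arc-elasticity calculation for transfer-size experiments.
# Elasticity of an outcome with respect to the benefit level:
#   e = (percent change in outcome) / (percent change in benefit)

def arc_elasticity(y0, y1, b0, b1):
    """Arc elasticity using midpoint percentage changes."""
    dy = (y1 - y0) / ((y0 + y1) / 2)
    db = (b1 - b0) / ((b0 + b1) / 2)
    return dy / db

# Hypothetical participation rates: 0.80 at a $45 transfer, 0.81 at $60.
e = arc_elasticity(0.80, 0.81, 45, 60)
print(f"elasticity = {e:.2f}")  # near zero: raising the benefit adds little
```

An elasticity near zero over this range is exactly the pattern the Cambodia and Malawi evaluations report: a larger transfer buys almost no additional participation.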
Elasticity also varies with the type of behavior being incentivized. In southwest Tanzania, the group receiving $60 saw reductions in sexually transmitted infections, but the $30 group had the same infection rate as the control group that received no payments. Not surprisingly, the program was more effective for people from poorer and rural areas.
Whether cash transfers should be “conditioned” on certain behaviors is context specific
Although CCTs have shown some positive impacts on schooling and other outcomes,44 unconditional cash transfers have also improved child health and increased school participation.45 Development practitioners and policy makers have therefore become very interested in whether such programs should be conditioned on certain behaviors.
A study shows that some households in Mexico and Ecuador did not realize that the cash transfers they were getting were conditional on school attendance; and, for these households, school enrollment was significantly lower than for those who thought that the transfers were conditional.46
In Malawi, the effect of conditioning differed for different outcomes. For school participation of adolescent girls, the impact of the transfers was significantly greater when they were conditioned on school attendance than when not so conditioned. But they had little effect on the likelihood of teenage pregnancies or marriages, while unconditional cash transfers were very effective in delaying marriage and childbearing.47 Therefore, the answer to the question of whether cash transfers should be conditioned on certain behaviors appears to be context specific—but, again, more research is needed on the subject.
Providing information can be a highly cost-effective way to change behavior
An impact evaluation from Madagascar shows that providing information to youths about returns to schooling improved children’s school performance and attendance in the first few months following the intervention.48 Similarly, providing information on returns to schooling to boys in eighth grade increased average years of schooling in the Dominican Republic, although the program did not have an impact on the poorest students.49
Providing information to adults has also shown some promising results. In India, informing households that their drinking water is contaminated increases the probability that they start purifying their water.50 In Bangladesh, informing households that the water in their wells has an unsafe concentration of arsenic raises the probability that they will switch to another well.51 But information provision alone does not change consumer behavior in all contexts. A study from Kenya shows that providing consumers information about the mortality and morbidity from malaria or about the financial gains from avoiding malaria infection had no impact on the uptake of malaria control devices.52 Similarly, evidence from the United States has shown that information alone is usually ineffective in changing risky health-related behavior.53
The experience with translating information and knowledge into HIV risk-reducing behavior change has had mixed results, as the AIDS literature suggests.54 There are, however, some impact evaluations of behavioral interventions that have shown a reduction in risky sexual behavior.55 In western Kenya, for example, female students were warned about the high HIV prevalence rates among older men, and they responded by dramatically reducing the incidence of teenage childbirths fathered by older men. Similarly, helping girls stay in school by giving them a free school uniform reduced teen pregnancy at a relatively low cost ($12 per girl) when compared with the cost per teenage childbirth averted ($750). Providing abstinence-only information about teenage sexual behavior had no impact on the likelihood of risky behavior. But providing detailed information on the risk of HIV led to a 28 percent decrease in the likelihood that girls started childbearing within a year—thus suggesting a reduction in unprotected sex among those girls.56
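The cost-effectiveness comparison in the uniform example follows from a standard ratio. The sketch below is illustrative only: it uses the two figures reported above, and the derived coverage rate is an implication of those figures, not a number reported by the study.

```python
# Illustrative cost-effectiveness arithmetic for the Kenya uniform program.
# cost per outcome averted = cost per participant / outcomes averted per
# participant, so the two reported figures imply a coverage rate.

cost_per_girl = 12.0             # reported cost of one free school uniform
cost_per_birth_averted = 750.0   # reported cost per teenage childbirth averted

# Girls who must receive a uniform, on average, to avert one childbirth:
girls_per_birth_averted = cost_per_birth_averted / cost_per_girl

print(f"~{girls_per_birth_averted:.1f} girls reached per childbirth averted")
```

Reading the two figures together, roughly one teenage childbirth is averted for every sixty-odd girls given a uniform; the intervention is judged cost effective because each uniform is so cheap, not because the per-girl effect is large.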
Despite improvements in access, it has been very difficult to improve learning and health outcomes
Providing increased traditional schooling inputs has often been ineffective in improving learning outcomes
Attempting to fill narrowly defined resource gaps in schooling by increasing the provision of traditional inputs has not been very successful for improving learning outcomes. Traditional inputs that have been tested on this dimension include textbooks, school meals, blackboards and other visual aids (like flip charts), teacher training, and even smaller class sizes.57 Some of these studies are discussed below.
Examples include a study in Kenya that found that providing textbooks to students did not raise average test scores; it raised only the scores of the best students.58 Also in Kenya, introducing complementary learning aids such as flip charts to schools had no impact on student performance.59 Similarly, school feeding interventions have had little or no impact on student performance.60 The Bolivian Social Investment Fund had a significant impact on school infrastructure but not on education outcomes within the evaluation period.61 An extra teacher in rural Indian nonformal education centers did not significantly improve test scores (although this was not measured with great precision).62 And, contrary to expectations, some studies have shown that simply reducing pupil–teacher ratios does little for student performance.63
In recent years, education practitioners have experimented with nontraditional inputs—mostly those to fill gaps left by teachers—and technology-based inputs have emerged as promising. A Nicaraguan program of radio instruction for students found strong positive impacts on mathematics performance.64 Computer-assisted learning in urban India increased test scores in mathematics.65 In another program in Indian schools, an electronic machine or flash card-based activities (to help teach English) increased test scores in English.66 Computers failed, however, to improve student performance in a program in Colombia.67
Newer approaches toward remedial education for poor-performing students have also shown promise. A reading intervention in rural India that gave four days of training to community volunteers with grade-10 or grade-12 educations to enable them to teach children how to read significantly improved reading.68
CCTs help increase the uptake of services, but their impact on health and learning outcomes is mixed
A positive impact on health care utilization (such as children’s visits to health centers) was reported in Honduras, Jamaica, Mexico, Nicaragua, and Paraguay (but not in Chile).69 Out of seven studies reporting immunization results, significant and sizable impacts on full immunization coverage (an intermediate outcome) were found only in Nicaragua and Turkey.70 In Mexico, however, the impact was small and not significant—results ascribed to the already high immunization rate at baseline (above 90 percent).
The impact of CCTs on health and nutritional outcomes is also mixed. One of the few studies that reported health impacts was Mexico’s Oportunidades/Progresa, where children under 3 who received CCTs were 22 percent less likely to report an illness episode in the previous four weeks than were the children in the comparison group. Children young enough to be exposed to the program for 24 months were 40 percent less likely to be reported ill—a finding that suggests the program generated cumulative health benefits. Conversely, no effect on child height was reported among children of any age in Brazil, Ecuador, Honduras, and Nicaragua;71 and, in Colombia, among children ages 3-7 years.72 Where studies have reported positive impacts, the impacts have been larger among younger children (that is, younger at the time of the baseline survey—in particular, younger than 2 years at baseline). One of the few CCTs that addressed maternal health outcomes, Janani Suraksha Yojana in India, had a small impact on the uptake of prenatal care; a large, positive impact on institutional deliveries; and a smaller effect on home delivery with a skilled birth attendant. There was also some evidence of a lower probability of perinatal and neonatal deaths, but not of maternal deaths.73
Although the positive impact of CCTs in expanding access to education is well documented,74 evidence on the impact of educational transfers (in-kind or cash transfers) on learning outcomes is not as encouraging.75
One example is a small cash transfer program in Malawi’s Zomba district, where the transfers improved school enrollment and attendance but did not significantly improve learning variables (which included a self-report on English literacy and a teacher’s evaluation of student progress).76 The lack of any discernible effect on learning outcomes (despite large impacts on school enrollment) may partly be caused by their drawing lower-ability students back to school.77
In a similar vein, CCTs have had a positive impact on the uptake of growth monitoring services.78 But the impact on growth indicators has been mixed, and the positive impacts were generally modest.79 CCTs in Colombia, Mexico, and Nicaragua were accompanied by a substantial positive impact on linear growth,80 although no impact was found in Brazil81 or Honduras.82 Nonetheless, despite the difficulty of cross-study comparisons (because of differences in age groups, exposure, and program design), it can be concluded that programs with larger cash transfers (such as those in Colombia, Mexico, and Nicaragua, where transfers represented 15-25 percent of total household expenditures) tend to have the largest impact. Younger age groups are likely to benefit more (consistent with the nutrition literature), and the effects are larger for height than for weight indicators.83
Only three of the CCT program evaluations (Honduras, Mexico, and Nicaragua) looked at the impact on micronutrient status. In Mexico, after one year, beneficiary children had higher mean hemoglobin levels and significantly lower rates of anemia than did nonbeneficiaries.84 But nearly half the children under 1 year were still anemic after one year of exposure to the intervention. No effect was found for the older children, and no differences were observed for other micronutrients (such as iron, vitamin A, and zinc). The modest impacts on micronutrient status were ascribed to low use or uptake of the fortified foods (Mexico)85 and supplements (Nicaragua),86 sharing foods with other beneficiary household members, over-dilution of the product, and program design weaknesses.87
MAP 3.2 Each year of a girl’s education reduces, by as much as 10 percent, the risk of her children dying before age five
Source: World Bank staff calculations based on data from the World Development Indicators database.
Nutritional improvements are expected to flow from the increased consumption arising from the income effect of the cash transfer and the better practices (such as breastfeeding, proper weaning, growth monitoring, and consumption of nutritious foods) that arise from health education during growth monitoring. The relative contribution of health education to each of these practices has not been assessed. The majority of impact evaluations have found that CCTs may disproportionately affect the consumption of particular items, such as food. It has been hypothesized that CCTs increase the bargaining power of women within the household, and that this drives the greater food expenditure.88 There is also some evidence of changes in diet composition. Not only do households increase dietary diversity, but they also shift toward higher-quality sources of calories. In Nicaragua, households that receive transfers from the Atencion a Crisis program spend significantly less on staples and more on animal protein, fruits, and vegetables.89
Identifying the reasons for increased use of health or education services is complicated. Sometimes it rises because services are physically more accessible. This is likely the case for such low-cost health services as bednets and treatment for sexually transmitted infections, where the nonservice costs of use—particularly transport and perhaps facility search costs—are a larger share of cost to the consumer than is the service cost. In education, a crucial element is always likely to be the opportunity cost to households (the forgone earnings from sending children to school). A further reason for increased uptake is that use is linked to finance, and thus it improves the providers’ incentive to increase the quantity of services.90
Intervening early in a child’s life can produce strong impacts
Evidence suggests that education and health programs targeting young children can have strong impacts on overall human development outcomes. Some of the most compelling evidence for this comes from Guatemala, where nutritional intervention and micronutrient supplementation before 3 years of age showed beneficial effects on schooling, reading, and intelligence tests in adulthood (25-42 years).91 Some evaluations show significant improvements in cognitive and learning outcomes among children who benefited from CCT programs before they entered school.92 At every level of income, households with young children receiving the cash transfers were more likely to read, tell stories, and sing to their children and to have books, paper, and pencils for them to use at home.93 This is an important result because early cognitive development was a strong predictor of school attainment in Brazil, Guatemala, Jamaica, the Philippines, and South Africa, even after controlling for wealth and mother’s education.94
Children who qualify for CCTs benefit most from schooling at an early age. In Ecuador, most children at age 3 in the sample were only modestly behind the reference population on a test of cognitive development.95 By age 6, however, when they entered first grade, children in the two poorest deciles of the national distribution of wealth were almost three standard deviations behind where they should be—and have almost no chance of catching up. Remedial investments targeting them therefore need to make equity-efficiency tradeoffs; investments in early childhood are more likely to avoid such tradeoffs.96 Similarly, the results reported above on the impact of nutrition programs on growth and micronutrient status suggest larger impacts among small children. Despite this evidence, there is some debate over which age group CCTs should target to ensure that school-age children also benefit.97
Service delivery often fails the poor
A recent study reports results from surveys in which enumerators made unannounced visits to primary schools and health clinics in Bangladesh, Ecuador, India, Indonesia, Peru, and Uganda. They recorded whether they found teachers and health workers in the facilities.98 On average, about 19 percent of teachers and 35 percent of health workers were absent, and many of those present in their facilities were not working. Across Indian government-run schools, only 45 percent of teachers assigned to a school were engaged in teaching at any given time. In Ghana, Morocco, Tunisia, and the Brazilian state of Pernambuco,99 the share of instructional time teachers allocated to learning tasks ranged from 78 percent in Tunisia to 39 percent in Ghana.
Observational data for a sample of doctors in New Delhi, India, showed differences between the levels of expertise and quality of care provided among private doctors who serve the rich and the poor. Poorer patients receive low-quality advice and spend a fair amount of money on unnecessary (and often substandard) drugs. Wealthier patients get better advice, both because they see more competent providers and because their providers put in more effort.100
Why are incentives weak and accountability low among service providers in developing countries? One reason is the highly centralized and bureaucratic education and health systems. Hiring, salaries, and promotion decisions are made at the center, determined largely by educational qualifications and seniority, with almost no scope for performance-based pay. Governments find it difficult to monitor the performance of health workers, especially those providing highly discretionary services such as clinical care. Furthermore, teachers and health workers are typically organized interest groups that can be politically influential. These factors imply that disciplinary action for poor performance is rare and that teachers and health workers are almost never fired. In addition, asymmetric information between client and provider (especially strong for health services) makes accountability difficult.
These problems are almost never addressed through the long route of accountability—through political pressure that consumers exert on the government. One possible reason poor performance of service providers is not on the political agenda: providers are an organized interest group, and clients, particularly in health, are diffuse. Those who are poor enough to use public schools and public clinics have less political power than do middle-class teachers and health workers. In many countries, even those moderately well off send their children to private schools and use private clinics. Even when part of the population chooses private providers instead of public ones, public money keeps flowing into public schools and hospitals. Hence, the cycle of weak accountability and weak incentives continues.
New approaches are being designed to improve the critical link of service delivery
As seen in figure 3.4, service delivery is the link between generation and consumption of goods and services. Empirical evidence from microeconomic studies has shown that this link is often weak in developing countries—an important reason why public spending on inputs and demand-side interventions does not translate into commensurate human development outcomes. What explains the service delivery weaknesses?
The following issues stand out: inequitably low allocations to services reaching low-income groups, leaks of funding between central ministries and frontline providers, and failures of frontline providers such as teachers and doctors to perform effectively. Impact evaluations have highlighted and explored solutions for the last of those three points.101 It has been argued that frontline service providers’ weak incentives and low accountability in developing countries often translate into ineffective service delivery.102 New service delivery approaches attempt to solve the incentive and accountability issues while reducing the need for (onerous) monitoring by the government and the citizenry. These approaches can be classified under three headings: information for accountability, greater citizen accountability and autonomy in service provision, and pay for performance.103
Information to increase accountability can be effective in some contexts
To hold their service providers accountable, poor people need information on their rights and on how well their service providers are performing. Furthermore, availability of information on service delivery performance provides the means to monitor progress. In recent years, programs have been designed to provide citizens with this information in the hope that it will enable them to hold service providers accountable. This has been easier to do in education than in health because of the nature of the services.104
Citizen report cards provide clients with simple information on how well local service providers are doing. In Pakistan, providing parents with report cards containing information about the relative performance of children and schools in the village (including private schools) improved student performance in public schools and lower-quality private schools and reduced fees at higher-quality private schools.105
Elsewhere, impact evaluations of similar programs have shown less direct results. For example, a Liberian program that publicized to parents the results of an early-grade reading assessment and showed teachers how to prepare quarterly report cards for parents showed very little impact on learning after two years. But when the information on student reading skills was combined with intensive teacher training, student performance improved dramatically.106 Similarly in Chile, publicizing rankings of school quality by region did not appear to affect school performance.107
A large information campaign in Uganda used school display boards and newspapers to provide information to parents and communities on the exact amount of operational funding each school could expect and the date when the funding transfer to the school would be made. The campaign was associated with a sharp reduction in the leakage of school grant funds.108 The researchers also documented that schools in districts with higher media exposure experienced greater increases in enrollments and higher test scores on the primary-school exit exam than did schools in districts with lower media exposure.
Another dimension of information for accountability, tested in two studies in India, yielded mixed results. In both studies, villagers were provided with information about their rights and responsibilities for education provision and oversight. One part of the country showed no impact from providing information alone,109 and a different part of the country showed some impact on student learning through improved service provision by teachers.110
It seems clear that, although providing information for accountability to citizens may be important, it is ineffective to simply give communities information on school quality without also increasing their ability to take action.
Greater citizen accountability and autonomy in service provision show promise
Parents and community leaders have information about local service providers and local service delivery that centralized systems tend to ignore. Involving them in decision making with system actors (teachers, head teachers, doctors, and nurses) can better align spending with local needs. Greater autonomy and accountability of local service delivery can increase incentives to adopt proven successful practices; evaluate the effectiveness of homegrown initiatives; and build a sense of commitment, ownership, and pride. But there are also risks, including limited information, narrow interests, elite capture, and inadequate representation of the disadvantaged. Increasingly, development practitioners are experimenting with ways to increase the autonomy of local stakeholders in service provision. Again, this model has been evaluated more in education than in health.
One way to increase accountability is to give local communities resources and training to manage service provision, an approach tried in school-based management programs that give local communities decision-making power and resources for school management. Studies show a positive impact on how schools are managed and how teachers, head teachers, and parents interact, but they reveal no consistent improvement in student performance.
For example, school-based management in Mexico has a positive impact on teacher behavior and school management,111 but little impact on student performance.112 A similar program in Nepal showed a significant impact on access and equity; the impact on school governance measures was mixed, with regional variation in outcomes; and there was no evidence that the changes were associated with improvements in learning outcomes.113
There has been considerable work on hospital autonomy and the impacts of decentralized management on hospital performance, but the analyses have rarely had any controlled or quasi-experimental design to control for a counterfactual. This is partly because hospital autonomy is usually implemented with many other management reforms, making it difficult to isolate autonomy from management capacity and system improvements.
Another approach is to give clients more control over service providers. In western Kenya, giving parents stronger oversight over teachers improved student performance when combined with other reforms. And the effect on student performance of hiring extra teachers can be greater when school committees are given training on how to supervise and manage teachers; the effect was largest among the students assigned to a contract teacher (as opposed to a civil servant). In India, local communities’ hiring of contract teachers (who were younger and less qualified) at lower salaries than those received by regular teachers improved student performance.114 But very little is known about the long-term effects of contract teachers, and such a system is unlikely to be sustainable over a long period.
Again, the evidence on school-based management is inconclusive. More impact evaluations are needed, particularly on the pathways that translate this kind of decentralization into better school quality.
Pay for performance holds great promise, but more evidence is needed
Performance-based bonuses that reward schools and teachers for improving student outcomes have been introduced in diverse settings (such as India, Israel, and Kenya), and all have had positive impacts on student learning outcomes. In some cases (such as in India),115 individual bonuses led to larger improvements in student outcomes, whereas group bonuses had lower implementation costs, suggesting some cost-benefit trade-offs between the two. In Chile’s National System of School Performance, group bonuses yielded quite modest impacts: one evaluation of the system found a modest cumulative positive effect on student achievement for schools that had a higher probability of winning the bonus,116 as well as a positive effect on student test scores.117
Some evaluations suggest uncertainty about the duration of benefits. In Kenya, group-based incentives for teachers improved student performance, but the gains were short-lived.118 Other settings suggest that targets matter. In the Brazilian state of Pernambuco, during the first year, schools with more ambitious targets made significantly larger test score gains than did similarly performing comparison schools assigned lower targets. In the second year—with controls for schools’ 2008 test results and other school characteristics—schools that barely missed the threshold for the bonus in 2008 improved more than schools that barely achieved it. It appears that both higher targets and narrowly missing the bonus created incentives that boosted school motivation and performance.119
Two other evaluated programs rewarded teachers for attendance. One allowed head teachers in rural Kenya to give individual teachers bonus pay for regular attendance (measured by unannounced random visits).120 But head teachers simply distributed the full bonus to all teachers, regardless of attendance. Their inability or unwillingness to enforce the performance bonus explained the lack of impact on teacher attendance, as well as the absence of change in teacher pedagogy, pupil attendance, and pupil test scores.
A program in rural India, however, produced very different results.121 In randomly selected rural schools run by nongovernmental organizations, a schedule of monthly teacher bonuses and fines based on attendance was monitored with daily date-stamped and time-stamped photographs. The maximum bonus for a teacher with no days absent was about 25 percent of a month’s salary. The program had a dramatic effect on teacher absenteeism over three years. Although there were no observed changes in teachers’ classroom behavior and pedagogy (other than greater presence), student test scores and rates of graduation to the next level of education rose significantly. The program was also quite cost effective.
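An attendance-linked bonus-and-fine schedule of this kind can be sketched as follows. The threshold, per-day rate, and base salary below are hypothetical assumptions chosen purely for illustration; the text specifies only that the maximum bonus was about 25 percent of a month’s salary.

```python
def monthly_pay(days_present, days_in_month, base_salary=1000.0):
    """Hypothetical attendance-linked pay schedule (illustrative only).

    Pay rises linearly with days present above an assumed attendance
    threshold (a bonus) and falls below it (a fine), with the adjustment
    capped at 25 percent of the base salary in either direction, matching
    the maximum bonus described in the text.
    """
    threshold = 0.5 * days_in_month      # assumed attendance threshold
    max_adjustment = 0.25 * base_salary  # max bonus ~25% of a month's salary
    # Linear bonus/fine, then clamp to the cap in both directions.
    adjustment = max_adjustment * (days_present - threshold) / (days_in_month - threshold)
    adjustment = max(-max_adjustment, min(max_adjustment, adjustment))
    return base_salary + adjustment

print(monthly_pay(24, 24))  # full attendance earns the full bonus: 1250.0
print(monthly_pay(12, 24))  # attendance at the threshold earns base pay: 1000.0
```

The monitoring technology (date- and time-stamped photographs) matters only insofar as it makes `days_present` verifiable; the schedule itself is just a capped linear rule.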
In health care, paying for performance is designed to provide financial rewards for providers who increase service delivery and improve the quality of care. To date, the experience has been promising, but there is little evidence from evaluations using credible comparison groups—in fact, a recent review of the literature finds methodological limitations that constrain the ability to interpret the results as causal impacts of pay for performance.122 Several studies without comparison groups found increases in service delivery indicators, such as immunization and attended deliveries (as in Haiti),123 and in the number of curative consultations and institutional deliveries (as in early pilot programs in Rwanda).124
A quasi-experimental study in Cambodia found service delivery improvements among contracted providers, but the providers received significantly more resources than did the control groups—suggesting that it was potentially the additional resources (the resource effect) rather than pay for performance (the incentive effect) that explained these findings.125 This raises an important policy issue: if pay for performance achieves its results from increased financial resources rather than incentives, the same results could be achieved from an increase in traditional inputs, and there would be no reason to incur the administrative costs associated with pay for performance.
The only impact evaluation in a low-resource setting that has been able to separate the incentive effect from the resource effect is a recent one of the Rwanda health center pay-for-performance program targeting maternal and child health outcomes. The incentive effect was isolated from the resource effect by increasing comparison facilities’ budgets by the average payments made to the treatment facilities. The evaluation shows the greatest impact on those services that had the highest payment rates (institutional deliveries) and that needed the lowest provider effort (preventive care visits by young children).126 Financial performance incentives improved both the use and quality of health services.
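The logic of separating the two effects can be sketched as a simple difference-in-differences calculation. The numbers below are hypothetical and purely illustrative (not taken from the Rwanda evaluation): because comparison facilities’ budgets were raised by the average performance payment, both groups had the same resources, so any remaining difference in changes over time reflects the incentive rather than the money.

```python
# Hypothetical mean service-use rates (illustrative only) for
# pay-for-performance (treatment) and budget-matched comparison facilities.
baseline = {"treatment": 0.40, "comparison": 0.41}
endline  = {"treatment": 0.55, "comparison": 0.48}

def diff_in_diff(baseline, endline):
    """Change in treatment minus change in comparison. With comparison
    budgets topped up to match average performance payments, this
    difference isolates the incentive effect from the resource effect."""
    return (endline["treatment"] - baseline["treatment"]) - \
           (endline["comparison"] - baseline["comparison"])

print(round(diff_in_diff(baseline, endline), 3))  # 0.08
```

In practice the evaluation estimated such effects in a regression framework with controls, but the identifying comparison is the one computed here.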
Development practitioners need to do much more rigorous empirical work—particularly, impact evaluations—to come closer to answering critical questions of development effectiveness. But the start has been promising. Indeed, a recent review of CCTs concludes that no new research is needed on whether households—particularly, poor households—respond to conditions or incentives.127
Other questions remain unresolved or underaddressed—for example, the size of the incentive and its influence on the size of the impact,128 complementary supply-side interventions or investments, payment frequency, means of payment, and other features of program design (such as how frequently conditions should be monitored, the gender of the cash transfer recipient, and whether to penalize noncomplying households).
A systematic base of impact evaluations of different interventions, in different contexts, will go a long way toward evidence-based decision making and toward making public spending more effective in closing the remaining gaps in education and health outcomes in developing countries. Some topics, such as maternal mortality, are underrepresented in the impact evaluation literature—a critical shortcoming, given that this topic coincides with the MDG least likely to be achieved. Also needed is more work on designing impact evaluations that have a basis in theory and provide useful information on whether and why programs work.
The evidence suggests that service delivery for education and health is weak in developing countries. But very little is known about the best ways to measure and strengthen this critical link in the results chain. Exploring new and different approaches to measure and improve service delivery is one of the most promising areas for future work—especially, comparative evaluations that assess alternative models of implementation.
Cost and cost effectiveness should be an explicit part of evaluation design. There is rarely any discussion of the costs of achieving particular impacts. What magnitude of impact justifies the cost, or is it impact at any cost? Information needed to implement programs at scale is often missing, with no indication of the institutional investments required to take smaller experiments to scale. This lack of information on program costs and investments hampers decision making and implementation at scale.
More work is also needed on the political economy surrounding the use of empirical evidence to inform policy. Should donors invest more in impact evaluation and intervention design to improve political incentives and government interventions? How can governments be encouraged to pursue policies that strengthen service delivery (by improving the incentives of providers, among other things)? Some evidence on these issues is emerging. For example, enabling less-educated people in Brazil to use electronic voting technologies increased their political participation, shifted public spending toward public health care, improved service use by less-educated mothers, and thus reduced low-weight births.129 Public spending on primary education rose in Africa when countries moved to greater multiparty competition.130 A similar association between electoral competition and education spending has been found in Indian states.131 But mobilizing communities may have little impact on public provider accountability if communities are severely constrained in taking public action.132
Interest is growing in demand-side financing approaches and in more explicit use of incentives for service providers to improve performance. On the demand side, the massive increase in CCTs has improved human development outcomes. Similarly, vouchers have received increased prominence in health and education. On the supply side, performance-based financing and pay for performance have improved the performance of service delivery units in education and health.
Policy makers are realizing that continuing to spend ever-greater funds on health and education without quality improvements will neither bring their countries closer to the MDGs nor greatly improve human development outcomes among poor people. More supply-side subsidies for health and education services are not always the answer. The problems of supply-side dominance are, among others, weak targeting of the poor, lack of consumer choice, absence of links between provider payment and performance, weak incentives for service quality, and weak accountability to consumers. Supply-side effects have dominated planning—and too many multilateral development banks, including the World Bank, are guilty of this. Resource-needs projection models too often ignore the evidence suggesting a tenuous relationship between increased health expenditure and outcomes.
The stronger and more explicit focus on incentives in the health and education literature is encouraging. The modest human development results do not stem from a shortage of technical solutions. But a principal finding of this review is that simply increasing the size of interventions may not be enough to make adequate progress. Even where access and use have been expanded, human development outcomes have not necessarily improved (as the experience with CCTs suggests). Establishing appropriate incentives for suppliers and consumers, ensuring adequate provision of services to meet demand, ensuring that suppliers are accountable to consumers for the quality of service, and providing sufficient resources to poor consumers are all necessary to have an appreciable impact on human development outcomes.
Abadzi, H. 2009. “Instructional Time Loss in Developing Countries: Concepts, Measurement, and Implications.” World Bank Research Observer 24 (2): 267-90.
Adato, M., and T. Roopnaraine. 2004. “Sistema de evaluación de la Red de Protección Social de Nicaragua: Un análisis social de la ‘Red de Protección Social’ (RPS) en Nicaragua.” International Food Policy Research Institute, Washington, DC.
Admassie, A., D. Abebaw, and A. Woldemichael. 2009. “Impact Evaluation of the Ethiopian Health Services Extension Programme.” Journal of Development Effectiveness 1 (4): 430-49.
Afridi, F. 2007. “Child Welfare Programs and Child Nutrition: Evidence from a Mandated School Meal Program in India.” Journal of Development Economics 92 (2): 152-65.
Ahmed, A., M. Adato, A. Kudat, D. Gilligan, and R. Colasan. 2006. “Interim Impact Evaluation of the Conditional Cash Transfer Program in Turkey. Final Report.” International Food Policy Research Institute, Washington, DC.
Alderman, H., ed. 2011. No Small Matter: The Impact of Poverty, Shocks, and Human Capital Investments in Early Childhood Development. Washington, DC: World Bank.
Andrabi, T., J. Das, and A. Khwaja. 2009. “Report Cards: The Impact of Providing School and Child Test Scores on Educational Markets.” Unpublished manuscript, World Bank, Washington, DC.
Angrist, J. D., and J. Pischke. 2010. “The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con Out of Econometrics.” Journal of Economic Perspectives 24 (2): 3-30.
Ashraf, N., J. Berry, and J. M. Shapiro. 2007. “Can Higher Prices Stimulate Product Use? Evidence from a Field Experiment in Zambia.” Working Paper 13247, National Bureau of Economic Research, Cambridge, MA.
Ashworth, A., R. Shrimpton, and K. Jamil. 2008. “Growth Monitoring and Promotion: Review of Evidence of Impact.” Maternal and Child Nutrition 4 (Suppl 1): 86-117.
Attanasio, O., E. Battistin, E. Fitzsimmons, A. Mesnard, and M. Vera-Hernández. 2005. “How Effective Are Conditional Cash Transfers? Evidence from Colombia.” Briefing Note 54, Institute for Fiscal Studies, London.
Baird, S., E. Chirwa, C. McIntosh, and B. Özler. 2010. “The Short-Term Impacts of a Schooling Conditional Cash Transfer Program on the Sexual Behavior of Young Women.” Health Economics 19: 55-68.
Baird, S., C. McIntosh, and B. Özler. 2011. “Cash or Condition? Evidence from a Cash Transfer Experiment.” Policy Research Working Paper 5259, World Bank, Washington, DC.
Baldacci, E., M. Guin-Siu, and L. de Mello. 2002. “More on the Effectiveness of Public Spending on Health Care and Education: A Covariance Structure Model.” Working Paper WP/02/90, International Monetary Fund, Washington, DC.
Banerjee, A. V., R. Banerji, E. Duflo, R. Glennerster, and S. Khemani. 2010. “Pitfalls of Participatory Programs: Evidence from a Randomized Evaluation in Education in India.” American Economic Journal: Economic Policy 2 (1): 1-30.
Banerjee, A., E. Duflo, S. Cole, and L. Linden. 2007. “Remedying Education: Evidence from Two Randomized Experiments in India.” Quarterly Journal of Economics 122 (3): 1235-64.
Banerjee, A., E. Duflo, R. Glennerster, and D. Kothari. 2010. “Improving Immunisation Coverage in Rural India: Clustered Randomised Controlled Evaluation of Immunisation Campaigns With and Without Incentives.” British Medical Journal 340 (1): c2220.
Banerjee, A., S. Jacob, and M. Kremer, with J. Lanjouw and P. Lanjouw. 2005. “Promoting School Participation in Rural Rajasthan: Results from Some Prospective Trials.” Unpublished manuscript, Massachusetts Institute of Technology, Cambridge, MA.
Barber, S. L., and P. J. Gertler. 2010. “Empowering Women: How Mexico’s Conditional Cash Transfer Program Raised Prenatal Care Quality and Birth Weight.” Journal of Development Effectiveness 2 (1): 51-73.
Barham, T., and J. Maluccio. 2008. “The Effect of Conditional Cash Transfers on Vaccination Coverage in Nicaragua.” Health and Society Working Paper HS2008-01, Institute of Behavioral Science, University of Colorado, Boulder, CO.
Barrera-Osorio, F., M. Bertrand, L. L. Linden, and F. Perez-Calle. 2008. “Conditional Cash Transfers in Education: Design Features, Peer and Sibling Effects Evidence from a Randomized Experiment in Colombia.” Policy Research Working Paper 4580, World Bank, Washington, DC.
Barrera-Osorio, F., and L. L. Linden. 2009. “The Use and Misuse of Computers in Education: Evidence from a Randomized Experiment in Colombia.” Policy Research Working Paper 4836, World Bank, Washington, DC.
Barrera-Osorio, F., L. L. Linden, and M. Urquiola. 2007. “The Effects of User Fee Reductions on Enrollment: Evidence from a Quasi-Experiment.” Working Paper, Columbia University, New York.
Basinga, P., P. J. Gertler, A. Binagwaho, A. Soucat, J. Sturdy, and C. Vermeersch. 2010. “Paying Primary Health Care Centers for Performance in Rwanda.” Policy Research Working Paper 5190, World Bank, Washington, DC.
Behrman, J., and J. Hoddinott. 2001. “An Evaluation of the Impact of Progresa on Pre-school Child Height.” Discussion Paper 104, International Food Policy Research Institute, Washington, DC.
Bertrand, J. T., K. O’Reilly, J. Denison, R. Anhang, and M. Sweat. 2006. “Systematic Review of the Effectiveness of Mass Communication Programs to Change HIV/AIDS-related Behaviors in Developing Countries.” Health Education Research 21 (4): 567-97.
Bhutta, Z., T. Ahmed, R. Black, S. Cousens, K. Dewey, E. Giugliani, B. Haider, B. Kirkwood, S. Morris, H. P. S. Sachdev, and M. Shekar. 2008. “What Works? Interventions for Maternal and Child Undernutrition and Survival.” Lancet 371: 417-40.
Bidani, B., and M. Ravallion. 1997. “Decomposing Social Indicators Using Distributional Data.” Journal of Econometrics 77 (1): 125-39.
Björkman, M. 2006. “Does Money Matter for Student Performance? Evidence from a Grant Program in Uganda.” Working Paper 326, Institute for International Economic Studies, Stockholm University, Sweden.
Björkman, M., and J. Svensson. 2009. “Power to the People: Evidence from a Randomized Experiment of a Community-based Monitoring Project in Uganda.” Quarterly Journal of Economics 124: 734-69.
Bobonis, G., and F. Finan. 2009. “Neighborhood Peer Effects in Secondary School Enrollment Decisions.” Review of Economics and Statistics 91 (4): 695-716.
Borkum, E. 2009. “Can Eliminating School Fees in Poor Districts Boost Enrollment? Evidence from South Africa.” Discussion Paper 0910-06, Columbia University, New York.
Bruns, B., D. Filmer, and H. A. Patrinos. 2011. Making Schools Work: New Evidence on Accountability Reforms. Washington, DC: World Bank.
Burnside, C., and D. Dollar. 1998. “Aid, the Incentive Regime, and Poverty Reduction.” Policy Research Working Paper 1937, World Bank, Washington, DC.
Caldés, N., D. Coady, and J. Maluccio. 2006. “The Cost of Poverty Alleviation Transfer Programs: A Comparative Analysis of Three Programs in Latin America.” World Development 34 (5): 818-37.
Cardoso, E., and A. Souza. 2004. “The Impact of Cash Transfers on Child Labor and School Attendance in Brazil.” Working Paper 04-W07, Department of Economics, Vanderbilt University, Nashville, TN.
Case, A., V. Hosegood, and F. Lund. 2005. “The Reach and Impact of Child Support Grants: Evidence from KwaZulu-Natal.” Development Southern Africa 22: 467-82.
Castro-Leal, F., J. Dayton, L. Demery, and K. Mehra. 1999. “Public Social Spending in Africa: Do the Poor Benefit?” World Bank Research Observer 14 (1): 49-72.
Chaudhury, N., J. Hammer, M. Kremer, K. Muralidharan, and F. H. Rogers. 2006. “Missing in Action: Teacher and Health Worker Absence in Developing Countries.” Journal of Economic Perspectives 20 (1): 91-116.
Chaudhury, N., and D. Parajuli. 2010. “Giving It Back: Evaluating the Impact of Devolution of School Management to Communities in Nepal.” Unpublished manuscript, World Bank, Washington, DC.
Clarke, S. E., M. C. H. Jukes, K. Njagi, L. Khasakhala, B. Cundill, J. Otido, C. Crudder, B. Estambale, and S. Brooker. 2008. “Effect of Intermittent Preventive Treatment on Health and Education in Schoolchildren: A Cluster-Randomised, Double-Blind, Placebo-Controlled Trial.” Lancet 372: 127-38.
Cohen, J., and P. Dupas. 2010. “Free Distribution or Cost-Sharing? Evidence from a Randomized Malaria Prevention Experiment.” Quarterly Journal of Economics 125 (1): 1-45.
Cohen, J., P. Dupas, and S. Schaner. 2010. “Prices, Diagnostic Tests and the Demand for Malaria Treatment: Evidence from a Randomized Trial.” Working Paper, Harvard University, Cambridge, MA.
Cunha, F., and J. Heckman. 2010. “Investing in Our Young People.” Working Paper 16201, National Bureau of Economic Research, Cambridge, MA.
Das, J., and J. Hammer. 2005. “Money for Nothing: The Dire Straits of Medical Practice in Delhi, India.” Policy Research Paper 3669, World Bank, Washington, DC.
Davis, K. C., M. C. Farrelly, P. Messeri, and J. Duke. 2009. “The Impact of National Smoking Prevention Campaigns on Tobacco-Related Beliefs, Intentions to Smoke and Smoking Initiation: Results from a Longitudinal Survey of Youth in the United States.” International Journal of Environmental Research and Public Health 6 (2): 722-40.
Deaton, A. 2009. “Instruments of Development: Randomization in the Tropics, and the Search for the Elusive Keys to Economic Development.” Working Paper 14690, National Bureau of Economic Research, Cambridge, MA.
de Brauw, A., and J. Hoddinott. 2008. “Must Conditional Cash Transfer Programs Be Conditioned to Be Effective? The Impact of Conditioning Transfers on School Enrollment in Mexico.” Discussion Paper 757, International Food Policy Research Institute, Washington, DC.
de Janvry, A., F. Finan, and E. Sadoulet. 2006. “Can Conditional Cash Transfer Programs Serve as Safety Nets in Keeping Children in School and from Working When Exposed to Shocks?” Journal of Development Economics 79: 349-73.
de Walque, D., H. Kazianga, and M. Over. 2010. “Antiretroviral Therapy Awareness and Risky Sexual Behaviors: Evidence from Mozambique.” Policy Research Working Paper 5486, World Bank, Washington, DC.
Downs, J. S., G. Loewenstein, and J. Wisdom. 2009. “Strategies for Promoting Healthier Food Choices.” American Economic Review 99 (2): 159-64.
Duflo, E. 2003. “Grandmothers and Granddaughters: Old-Age Pension and Intrahousehold Allocation in South Africa.” World Bank Economic Review 17: 1-25.
Duflo, E., P. Dupas, and M. Kremer. 2007. “Peer Effects, Pupil-teacher Ratios, and Teacher Incentives.” Unpublished manuscript, Harvard University, Cambridge, MA.
Duflo, E., P. Dupas, M. Kremer, and S. Sinei. 2006. “Education and HIV/AIDS Prevention: Evidence from a Randomized Evaluation in Western Kenya.” Policy Research Working Paper 4024, World Bank, Washington, DC.
Duflo, E., R. Hanna, and S. Ryan. 2010. “Incentives Work: Getting Teachers to Come to School.” Unpublished manuscript, University of Chicago, Chicago, IL.
Dupas, P. 2006. “Relative Risks and the Market for Sex: Teenagers, Sugar Daddies, and HIV in Kenya.” Unpublished manuscript, Dartmouth College, Hanover, NH.
Dupas, P. 2009. “Do Teenagers Respond to HIV Risk Information? Evidence from a Field Experiment in Kenya.” Working Paper 14707, National Bureau of Economic Research, Cambridge, MA.
Edmonds, E. V. 2006. “Child Labor and Schooling Responses to Anticipated Income Shock in South Africa.” Journal of Development Economics 81: 386-414.
Edmonds, E. V., and N. Schady. 2009. “Poverty Alleviation and Child Labor.” Working Paper 15345, National Bureau of Economic Research, Cambridge, MA.
Eichler, R., P. Auxila, A. Uder, and B. Desmangles. 2007. “Performance-Based Incentives for Health: Six Years of Results from Supply-Side Programs in Haiti.” Working Paper 121, Center for Global Development, Washington, DC.
Eldridge, C., and N. Palmer. 2009. “Performance-Based Payment: Some Reflections on the Discourse, Evidence and Unanswered Questions.” Health Policy and Planning 24 (3): 160-66.
Ferraz, C., and B. Bruns. Forthcoming. “Incentives to Teach: The Effects of Performance Pay in Brazilian Schools.” World Bank, Washington, DC.
Filmer, D. 2003. “The Incidence of Public Expenditure on Health and Education.” Background Note for World Development Report 2004, World Bank, Washington, DC.
Filmer, D., J. Hammer, and L. Pritchett. 1998. “Health Policy in Poor Countries: Weak Links in the Chain?” Policy Research Working Paper 1874, World Bank, Washington, DC.
Filmer, D., and N. Schady. 2008. “Getting Girls into School: Evidence from a Scholarship Program in Cambodia.” Economic Development and Cultural Change 56: 581-617.
Filmer, D., and N. Schady. 2009. “Are There Diminishing Returns to Transfer Size in Conditional Cash Transfers?” Policy Research Working Paper 4999, World Bank, Washington, DC.
Filmer, D., and N. Schady. 2010. “Does More Cash in Conditional Cash Transfer Programs Always Lead to Larger Impacts on School Attendance?” Journal of Development Economics.
Fiszbein, A., and N. Schady. 2009. Conditional Cash Transfers: Reducing Present and Future Poverty. Washington, DC: World Bank.
Fujiwara, T. 2010. “Voting Technology, Political Responsiveness, and Infant Health: Evidence from Brazil.” Department of Economics, University of British Columbia, Vancouver. http://grad.econ.ubc.ca/fujiwara/jmp.pdf.
Gaarder, M., A. Glassman, and J. Todd. 2010. “Conditional Cash Transfers and Health: Unpacking the Causal Chain.” Journal of Development Effectiveness 2 (1): 6-50.
Galasso, E., and N. Umapathi. 2007. “Improving Nutritional Status through Behavioral Change: Lessons from Madagascar.” Policy Research Working Paper 4424, World Bank, Washington, DC.
Garner, P., R. Panpanich, S. Logan, and D. Davis. 2000. “Is Routine Growth Monitoring Effective? A Systematic Review of Trials.” Archives of Diseases in Childhood 82: 197-201.
Gertler, P. J. 2004. “Do Conditional Cash Transfers Improve Child Health? Evidence from Progresa’s Control Randomized Experiment.” American Economic Review 94 (2): 336-41.
Gertler, P. J., and S. Boyce. 2001. “An Experiment in Incentive-Based Welfare: The Impact of Progresa on Health in Mexico.” Royal Economic Society Annual Conference 2003, No. 85.
Gertler, P. J., S. Martinez, P. Premand, L. B. Rawlings, and C. M. J. Vermeersch. 2010. Impact Evaluation in Practice. Washington, DC: World Bank.
Gertler, P. J., H. A. Patrinos, and M. Rubio-Codina. 2006. “Empowering Parents to Improve Education: Evidence from Rural Mexico.” Policy Research Working Paper 3935, World Bank, Washington, DC.
Glewwe, P., N. Ilias, and M. Kremer. 2003. “Teacher Incentives.” Working Paper 9671, National Bureau of Economic Research, Cambridge, MA.
Glewwe, P., M. Kremer, and S. Moulin. 2007. “Many Children Left Behind? Textbooks and Test Scores in Kenya.” Working Paper 13300, National Bureau of Economic Research, Cambridge, MA.
Glewwe, P., M. Kremer, S. Moulin, and E. Zitzewitz. 2004. “Retrospective vs. Prospective Analyses of School Inputs: The Case of Flip Charts in Kenya.” Journal of Development Economics 74: 251-68.
Go, D. 2010. “Global Monitoring Report 2011 Approach Paper.” Unpublished manuscript, World Bank, Washington, DC.
Gong, R. 2010. “HIV Testing and Risky Sexual Behavior.” Job Market Paper, University of California-Berkeley, Berkeley, CA.
Goyal, S., and P. Pandey. 2009. “How Do Government and Private Schools Differ? Findings from Two Large Indian States.” South Asia Human Development Sector Report No. 30, World Bank, Washington, DC.
Grantham-McGregor, S., Y. B. Cheung, S. Cueto, P. Glewwe, L. Richer, B. Trupp, and the International Child Development Steering Group. 2007. “Developmental Potential in the First 5 Years for Children in Developing Countries.” Lancet 369: 60-70.
Gupta, S., M. Verhoeven, and E. Tiongson. 2003. “Public Spending on Health Care and the Poor.” Health Economics 12: 685-96.
Hanushek, E. A. 2008. “Incentives for Efficiency and Equity in the School System.” Perspektiven der Wirtschaftspolitik 9 (Special Issue): 5-27.
Hanushek, E. A., V. Lavy, and K. Hitomi. 2008. “Do Students Care about School Quality? Determinants of Dropout Behavior in Developing Countries.” Journal of Human Capital 1 (2): 69-105.
Hanushek, E. A., and L. Woessmann. 2008. “The Role of Cognitive Skills in Economic Development.” Journal of Economic Literature 46 (3): 607-68.
He, F., L. Linden, and M. MacLeod. 2007. “Teaching What Teachers Don’t Know: An Assessment of the Pratham English Language Program.” Unpublished manuscript, Columbia University, New York.
Hoddinott, J. 2008. “Nutrition and Conditional Cash Transfer (CCT) Programs.” Unpublished manuscript, International Food Policy Research Institute, Washington, DC.
IHME (Institute for Health Metrics and Evaluation). 2010. “Financing Global Health 2009: Tracking Development Assistance for Health.” University of Washington, Seattle, WA.
Jacoby, E., S. Cueto, and E. Pollitt. 1996. “Benefits of a School Breakfast Program among Andean Children in Huaraz, Peru.” Food and Nutrition Bulletin 17 (1): 54-64.
Jalan, J., and E. Somanathan. 2008. “The Importance of Being Informed: Experimental Evidence on Demand for Environmental Quality.” Journal of Development Economics 87 (1): 14-28.
Jamison, D., B. Searle, K. Galda, and S. Heyneman. 1981. “Improving Elementary Mathematics Education in Nicaragua: An Experimental Study of the Impact of Textbooks and Radio on Achievement.” Journal of Educational Psychology 73 (4): 556-67.
Jensen, R. 2010. “The (Perceived) Returns to Education and Demand for Schooling.” Quarterly Journal of Economics 125 (2): 515-48.
Kazianga, H., D. de Walque, and H. Alderman. 2009. “Educational and Health Impacts of Two School Feeding Schemes: Evidence from a Randomized Trial in Rural Burkina Faso.” Policy Research Working Paper 4976, World Bank, Washington, DC.
Khemani, S. 2010. “Political Economy of Infrastructure Spending in India.” Policy Research Working Paper 5423, World Bank, Washington, DC.
Kremer, M. E., and D. Chen. 2001. “Interim Report on a Teacher Incentive Program in Kenya.” Unpublished manuscript, Harvard University, Cambridge, MA.
Kremer, M., and A. Holla. 2008. “Pricing and Access: Lessons from Randomized Evaluations in Education and Health.” Unpublished manuscript, Harvard University, Cambridge, MA.
Kremer, M., and A. Holla. 2009. “Improving Education in the Developing World: What Have We Learned from Randomized Evaluations?” Annual Review of Economics 1: 513-45.
Kremer, M., and E. Miguel. 2007. “The Illusion of Sustainability.” Quarterly Journal of Economics 122 (3): 1007-65.
Kremer, M., E. Miguel, and R. Thornton. 2009. “Incentives to Learn.” Review of Economics and Statistics 91 (3): 437-56.
Kremer, M., S. Moulin, and R. Namunyu. 2003. “Decentralization: A Cautionary Tale.” Unpublished manuscript, Harvard University, Cambridge, MA.
Kremer, M., and A. P. Zwane. 2007. “What Works in Fighting Diarrheal Diseases in Developing Countries? A Critical Review.” World Bank Research Observer 22: 1-24.
Lalive, R., and A. Cattaneo. 2006. “Social Interactions and Schooling Decisions.” Discussion Paper 2250, Institute for the Study of Labor (IZA), Bonn, Germany.
Legovini, A. 2010. “Development Impact Evaluation Initiative: A World Bank-Wide Strategic Approach to Enhancing Development Effectiveness.” Unpublished manuscript, World Bank, Washington, DC.
Leroy, J. L., R. Marie, and E. Verhofstadt. 2009. “The Impact of Conditional Cash Transfer Programmes on Child Nutrition: A Review of Evidence Using a Programme Theory Framework.” Journal of Development Effectiveness 1 (2): 103-29.
Levy, D., and J. Ohls. 2007. “Evaluation of Jamaica’s PATH Program: Final Report.” Mathematica Policy Research, Washington, DC.
Lim, S., L. Dandona, J. Hoisington, S. James, M. Hogan, and E. Gakidou. 2010. “India’s Janani Suraksha Yojana, a Conditional Cash Transfer Programme to Increase Births in Health Facilities: An Impact Evaluation.” Lancet 375: 2009-23.
Macours, K., N. Schady, and R. Vakis. 2008. “Cash Transfers, Behavioral Changes, and the Cognitive Development of Young Children: Evidence from a Randomized Experiment.” Policy Research Working Paper 4759, World Bank, Washington, DC.
Madajewicz, M., A. Pfaff, A. van Geen, J. Graziano, I. Hussein, H. Monotaj, R. Sylvi, and H. Ahsan. 2007. “Can Information Alone Change Behavior? Response to Arsenic Contamination of Groundwater in Bangladesh.” Journal of Development Economics 84 (2): 731-54.
Mahal, A., J. Sing, F. Afridi, V. Lamba, A. Gumber, and V. Selvaraju. 2000. “Who Benefits from Public Health Spending in India?” National Council for Applied Economic Research, New Delhi.
Maluccio, J., and R. Flores. 2005. “Impact Evaluation of a Conditional Cash Transfer Program: The Nicaraguan Red de Protección Social.” Research Report 141, International Food Policy Research Institute, Washington, DC.
Mavedzenge, S., A. Doyle, and D. Ross. 2010. “HIV Prevention in Young People in Sub-Saharan Africa: A Systematic Review.” Infectious Disease Epidemiology Unit, London School of Hygiene and Tropical Medicine, London.
McCoy, S., R. Kangwende, and N. Padian. 2009. “Behavior Change Interventions to Prevent HIV among Women Living in Low and Middle Income Countries.” Synthetic Review 008, International Initiative for Impact Evaluation, New Delhi, India.
Medley, A., C. Kennedy, K. O’Reilly, and M. Sweat. 2009. “Effectiveness of Peer Education Interventions for HIV Prevention in Developing Countries: A Systematic Review and Meta-Analysis.” AIDS Education and Prevention 21 (3): 181-206.
Meessen, B., J. P. Kashala, and L. Musango. 2007. “Output-Based Payment to Boost Staff Productivity in Public Health Centres: Contracting in Kabutare District, Rwanda.” Bulletin of the World Health Organization 85 (2): 108-15.
Meessen, B., L. Musango, J. P. Kashala, and J. Lemlin. 2006. “Reviewing Institutions of Rural Health Centres: The Performance Initiative in Butare, Rwanda.” Tropical Medicine & International Health 11 (8): 1303-17.
Meng, X., and J. Ryan. 2010. “Does a Food for Education Program Affect School Outcomes? The Bangladesh Case.” Journal of Population Economics 23 (2): 415-47.
Mesnard, A. 2005. “Evaluation of the Familias en Acción Program in Colombia: Do Conditional Subsidies Improve Education, Health and Nutritional Outcomes?” Institute for Fiscal Studies, London.
Miguel, E., and M. Kremer. 2004. “Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities.” Econometrica 72 (1): 159-217.
MizalaA.P.Romaguera andM.Urquiola.2007. “Socioeconomic Status or Noise? Tradeoffs in the Generation of School Quality Information.”Journal of Development Economics 84: 61-75.
MorrisS.P.OlintoR.FloresE.A.F.Nilson andA. C.Figueiró.2004. “Conditional Cash Transfers Are Associated with a Small Reduction in the Weight Gain of Preschool Children in Northeast Brazil.”Journal of Nutrition 134: 2336-41.
MuralidharanK. andV.Sundararaman.2009. “Teacher Performance Pay: Experimental Evidence from India.”Working Paper 15323National Bureau of Economic ResearchCambridge, MA.
NewmanJ. L.M.PradhanL. B.RawlingsG.RidderR.Coa andJ. L.Evia.2002. “An Impact Evaluation of Education, Health, and Water Supply Investments by the Bolivian Social Investment Fund.”World Bank Economic Review16 (2): 241-74.
NguyenT.2008. “Information, Role Models and Perceived Returns to Education: Experimental Evidence from Madagascar.”Working PaperMassachusetts Institute of TechnologyCambridge, MA.
O’DonnellO.E.van DoorslaerR.Rannan-EliyaA.SomanathanS.AdhikariD.HarbiantoC.GargP.HanvoravongchaiM.HuqA.KaranG.LeungC. W.NgB. R.PandeK.TinK.TisayaticomL.TrisnantoroY.Zhang andY.Zhao.2007. “The Incidence of Public Spending on Healthcare: Comparative Evidence from Asia.”World Bank Economic Review21 (1): 93-123.
OECD-DAC (Organisation for Economic Co-operation and Development-Development Assistance Committee). 2009. International Development Statistics: Online Databases on Aid and Other Resource Flows. Paris, France. http://www.oecd.org/dac/stats/idsonline.
Paxson, C., and N. Schady. 2007. "Cognitive Development among Young Children in Ecuador: The Roles of Wealth, Health, and Parenting." Journal of Human Resources 42 (1): 49-84.
Paxson, C., and N. Schady. 2008. "Does Money Matter? The Effects of Cash Transfers on Child Health and Development in Rural Ecuador." Unpublished manuscript, World Bank, Washington, DC.
Piper, B., and M. Korda. 2010. "EGRA Plus: Liberia. Program Evaluation Report. Draft." RTI International, Research Triangle Park, NC.
Pop-Eleches, C., H. Thirumurthy, J. P. Habyarimana, J. G. Zivin, M. P. Goldstein, D. de Walque, L. Mackeen, J. Haberer, S. Kimaiyo, J. Sidle, D. Ngare, and D. R. Bangsberg. 2011. "Mobile Phone Technologies Improve Adherence to Antiretroviral Treatment in a Resource-Limited Setting: A Randomized Controlled Trial of Text Message Reminders." AIDS 25 (6): 825-34.
Powell, C. A., S. P. Walker, S. M. Chang, and S. M. Grantham-McGregor. 1998. "Nutrition and Education: A Randomized Trial of the Effects of Breakfast in Rural Primary School Children." American Journal of Clinical Nutrition 68: 873-79.
Pratham Organization. 2005. "Annual Status of Education Report." Pratham Resource Center, Mumbai, India.
Ravallion, M. 2009. "Evaluation in the Practice of Development." World Bank Research Observer 24 (1): 29-53.
Regalía, F., and L. Castro. 2009. "Nicaragua: Combining Demand- and Supply-Side Incentives." In Performance Incentives for Global Health—Potential and Pitfalls, ed. R. Eichler, R. Levine, and the Performance-Based Incentives Working Group, 215-35. Washington, DC: Center for Global Development.
Reinikka, R., and J. Svensson. 2005. "Fighting Corruption to Improve Schooling: Evidence from a Newspaper Campaign in Uganda." Journal of the European Economic Association 3 (2-3): 259-67.
Reinikka, R., and J. Svensson. 2006. "The Power of Information: Evidence from a Newspaper Campaign to Reduce Capture of Public Funds." World Bank, Washington, DC; Institute for International Economic Studies, Stockholm University, Sweden.
Reinikka, R., and J. Svensson. 2010. "Working for God? Evidence from a Change in Financing of Nonprofit Health Care Providers in Uganda." Journal of the European Economic Association 8 (6): 1159-78.
Rivera, J. A., D. Sotres-Álvarez, J.-P. Habicht, T. Shamah, and S. Villalpando. 2004. "Impact of the Mexican Program for Education, Health, and Nutrition (Progresa) on Rates of Growth and Anemia in Infants and Young Children: A Randomized Effectiveness Study." JAMA 291 (21): 2563-70.
Roberfroid, D., P. Kolsteren, T. Hoerée, and B. Maire. 2005. "Do Growth Monitoring and Promotion Programs Answer the Performance Criteria of a Screening Program? A Critical Analysis Based on a Systematic Review." Tropical Medicine & International Health 10: 1121-33.
Rodrik, D. 2008. "The New Development Economics: We Shall Experiment, But How Shall We Learn?" Unpublished manuscript, John F. Kennedy School of Government, Harvard University, Cambridge, MA.
Sahn, D., and S. Younger. 2000. "Expenditure Incidence in Africa: Microeconomic Evidence." Fiscal Studies 21 (3): 321-48.
Schady, N., and M. C. Araujo. 2008. "Cash Transfers, Conditions, and School Enrollment in Ecuador." Economía 8 (2): 43-70.
Schady, N., and J. Rosero. 2008. "Are Cash Transfers Made to Women Spent Like Other Sources of Income?" Economics Letters 101 (3): 246-48.
Schultz, T. P. 2004. "School Subsidies for the Poor: Evaluating the Mexican Progresa Poverty Program." Journal of Development Economics 74 (1): 199-250.
Schwartz, J. B., and I. Bhushan. 2004. "Improving Equity in Immunization through a Public-Private Partnership in Cambodia." Bulletin of the World Health Organization 82: 661-67.
Sims, C. A. 2010. "But Economics Is Not an Experimental Science." Journal of Economic Perspectives 24 (2): 59-68.
Skoufias, E., and J. Shapiro. 2006. "Evaluating the Impact of Mexico's Quality Schools Program: The Pitfalls of Using Nonexperimental Data." Policy Research Working Paper 4036, World Bank, Washington, DC.
Soares, F. V., R. Ribas, and G. I. Hirata. 2008. "Achievements and Shortfalls of Conditional Cash Transfers: Impact Evaluation of Paraguay's Tekopora Program." International Poverty Center, Brasilia, Brazil.
Soeters, R., C. Habineza, and P. B. Peerenboom. 2006. "Performance-Based Financing and Changing the District Health System: Experience from Rwanda." Bulletin of the World Health Organization 84 (11): 884-89.
Stasavage, D. 2005. "Democracy and Education Spending in Africa." American Journal of Political Science 49 (2): 343-58.
Temperley, M., D. H. Mueller, J. Kiambo Njagi, W. Akhwale, S. E. Clarke, M. C. H. Jukes, B. B. A. Estambale, and S. Brooker. 2008. "Costs and Cost-Effectiveness of Delivering Intermittent Preventive Treatment through Schools in Western Kenya." Malaria Journal 7: 196-207.
Thornton, R. 2005. "The Demand for and Impact of HIV Testing: Evidence from a Field Experiment." Unpublished manuscript, Harvard University, Cambridge, MA.
UNESCO (United Nations Educational, Scientific and Cultural Organization). 2010. Education for All Global Monitoring Report 2010: Reaching the Marginalized. Oxford, UK: Oxford University Press.
Van de Walle, D. 1995. "The Distribution of Subsidies through Public Health Services in Indonesia, 1978-87." In Public Spending and the Poor: Theory and Evidence, ed. D. Van de Walle and K. Neads, 226-58. Washington, DC: World Bank.
Vermeersch, C., and M. Kremer. 2004. "School Meals, Educational Achievement, and School Competition: Evidence from a Randomized Evaluation." Policy Research Working Paper 3523, World Bank, Washington, DC.
Weinhard, L. S., M. P. Carey, B. T. Johnston, and N. L. Bickham. 1999. "Effects of HIV Counseling and Testing on Sexual Risk Behavior: A Meta-Analytic Review of Published Research, 1985-1997." American Journal of Public Health 89 (9): 1152-63.
WHO (World Health Organization). 2008. WHO Recommendations for Routine Immunization. Geneva, Switzerland. http://www.who.int/immunization/policy/WHO_EPI_Sum_tables_Def_200713.pdf.
WHO (World Health Organization). 2010. World Health Report. Health Systems Financing: The Path to Universal Coverage. Geneva, Switzerland.
World Bank. 2003. World Development Report 2004: Making Services Work for Poor People. Washington, DC.
World Bank/AERC (African Economic Research Consortium). 2010. "Service Delivery Indicators: Pilot in Education and Health in Africa." World Bank, Washington, DC.
Zivin, J., H. Thirumurthy, and M. Goldstein. 2006. "AIDS Treatment and Intrahousehold Resource Allocations: Children's Nutrition and Schooling in Kenya." Working Paper 12689, National Bureau of Economic Research, Cambridge, MA.