From Chris Walters:
Studies of small-scale “model” early-childhood programs show that high-quality preschool can have transformative effects on human capital and economic outcomes. Evidence on the Head Start program is more mixed. Inputs and practices vary widely across Head Start centers, however, and little is known about variation in effectiveness within Head Start. This paper uses data from a multi-site randomized evaluation to quantify and explain variation in effectiveness across Head Start childcare centers. I answer two questions: (1) Is there meaningful variation in short-run effectiveness across Head Start programs? and (2) Is variation in Head Start effectiveness related to observed inputs? To answer the first question, I develop an empirical Bayes instrumental variables procedure that measures variation in local average treatment effects (LATE), accounting for non-compliance with experimental assignments. I estimate that the cross-center standard deviation of cognitive effects is 0.3 test score standard deviations, which is substantially larger than typical estimates of variation in teacher or school effectiveness. Next, I assess the role of inputs in generating this variation, focusing on inputs commonly cited as drivers of the success of small-scale model programs. My results show that Head Start centers offering full-day service boost cognitive skills more than other centers, while Head Start centers offering frequent home visiting are especially effective at raising non-cognitive skills. Other key inputs, including the High/Scope curriculum, teacher education and certification, and class size, are not associated with increased effectiveness in Head Start. Together, observed inputs explain a small share of the variation in Head Start effectiveness. These findings suggest that replicating the effects of successful programs may be difficult, as the factors responsible for their success are largely unidentified.
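The empirical Bayes IV idea described above can be sketched in a few lines. The toy simulation below is a hypothetical illustration, not Walters' actual estimator: it generates site-level randomized offers with one-sided noncompliance, forms the Wald (IV) estimate of each site's LATE, and then separates true cross-site variation from sampling noise by a method-of-moments variance decomposition, shrinking noisy site estimates toward the grand mean. All parameter values (number of sites, compliance rate, effect distribution) are made up for illustration.

```python
# Hypothetical sketch of empirical Bayes shrinkage applied to site-level
# IV (Wald) estimates under one-sided noncompliance; not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
G, n, tau, p_comply = 200, 200, 0.3, 0.8   # sites, site size, true sd of effects, compliance rate
theta = rng.normal(0.25, tau, G)           # true site-level LATEs (illustrative values)

est = np.empty(G)   # site-level Wald/IV estimates
s2 = np.empty(G)    # their estimated sampling variances (delta method)
for g in range(G):
    z = rng.integers(0, 2, n)                  # randomized offer of a slot
    d = z * rng.binomial(1, p_comply, n)       # take-up only if offered
    y = theta[g] * d + rng.standard_normal(n)  # outcome in test-score sd units
    itt = y[z == 1].mean() - y[z == 0].mean()  # intent-to-treat effect
    fs = d[z == 1].mean() - d[z == 0].mean()   # first stage (compliance rate)
    est[g] = itt / fs                          # Wald estimator of the site LATE
    s2[g] = (y[z == 1].var(ddof=1) / (z == 1).sum()
             + y[z == 0].var(ddof=1) / (z == 0).sum()) / fs**2

# Method-of-moments decomposition: total dispersion of the site estimates
# minus their average sampling noise gives the cross-site effect variance.
mu_hat = est.mean()
tau2_hat = max(est.var(ddof=1) - s2.mean(), 0.0)

# Empirical Bayes posterior means: noisier sites shrink toward the grand mean.
shrink = tau2_hat / (tau2_hat + s2)
eb = shrink * est + (1 - shrink) * mu_hat

print(f"estimated cross-site sd of effects: {np.sqrt(tau2_hat):.3f}")
```

Under this design the raw spread of site estimates overstates true heterogeneity, so the noise correction matters: with the parameters above, the recovered cross-site standard deviation lands near the simulated 0.3 rather than at the larger dispersion of the raw Wald estimates.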