Scaling up successfully: Lessons from Kenya’s Tusome national literacy program
Piper, B. L., DeStefano, J., Kinyanjui, E., & Ong'ele, S. A. (2018). Scaling up successfully: Lessons from Kenya’s Tusome national literacy program. Journal of Educational Change, 19(3), 293-321. https://doi.org/10.1007/s10833-018-9325-4
Many programs that succeed as pilots fail when scaled up to the national level. In Kenya, which has a long history of ineffective implementation following successful pilot programs, the Tusome national literacy program, funded by the United States Agency for International Development, is a national-level scale-up of previous literacy and numeracy programs. We applied a scaling framework (Crouch & DeStefano, 2017) to examine whether Tusome was rolled out in ways that would enable government structures and officers to respond effectively to the new program. We found that Tusome clarified expectations for implementation and outcomes nationally, using benchmarks for Kiswahili and English learning outcomes, and that these expectations were communicated all the way down to the school level. The essential program inputs were provided fairly consistently across the nation. Our analyses also showed that Kenya developed functional, if simple, accountability and feedback mechanisms to track performance against the benchmark expectations, and that Tusome feedback data were used to encourage greater instructional support within Kenya’s county-level structures for education quality. Several of the key elements for successful scale-up were therefore in place. However, Tusome did not fully exploit the available classroom observation data to better target instructional support. Within this scaling framework, the program’s external evaluation showed impacts of 0.6–1.0 standard deviations on English and Kiswahili learning outcomes. The program implemented a functional classroom observation feedback system through existing government structures, although usage of that system varied widely across Kenya. Classroom visits, while still falling short of the desired rate, were far more frequent, focused on instructional quality, and included basic feedback and advice to teachers. These findings are promising with respect to the ability of countries facing quality problems to implement coherent instructional reform through government systems at scale.