This research report discusses how eight districts approach the program evaluation process. Specifically, the report explores how profiled districts prioritize programs to evaluate each year, determine evaluation goals and questions, and communicate evaluation results to different stakeholders. In addition, the report offers recommendations on increasing staff engagement and promoting a culture of continuous improvement in the program evaluation process. Keep reading for key observations from this research, and download the full report.
Selecting high-priority programs

At District H, department heads recommend programs for review to a program evaluation committee that includes the Director of Research and Evaluation and district administrators from the departments of Teaching and Learning and School Leadership. This committee scores all recommended programs across seven categories, such as alignment with the strategic plan, program cost, and community and stakeholder interest. The committee then proposes evaluations for the six to seven programs with the highest average scores, pending approval from the board and superintendent. This standardized approach allows departments to contribute to the evaluation process and thus mitigates potential perceptions of bias among program evaluation staff. By collecting feedback from departments, the program evaluation committee promotes collaboration with, and ultimately staff engagement in, the program evaluation process.
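In code terms, the committee's shortlisting step reduces to a rank-by-average computation. The following Python sketch is purely illustrative: the category names, example scores, and function names are hypothetical, since the report does not publish District H's actual rubric or weighting.

```python
# Illustrative sketch of District H's rank-by-average shortlisting.
# Category names and scores below are hypothetical, not the district's rubric.

def average_score(scores: dict[str, float]) -> float:
    """Average a program's committee scores across the rubric categories."""
    return sum(scores.values()) / len(scores)

def shortlist(programs: dict[str, dict[str, float]], n: int = 7) -> list[str]:
    """Rank recommended programs by average committee score and return the
    top n (six to seven at District H), pending board/superintendent approval."""
    return sorted(programs, key=lambda p: average_score(programs[p]), reverse=True)[:n]

# Example: two hypothetical programs scored on three of the seven categories.
programs = {
    "Literacy Intervention": {
        "strategic_plan_alignment": 4, "program_cost": 3, "community_interest": 5,
    },
    "STEM Afterschool": {
        "strategic_plan_alignment": 5, "program_cost": 2, "community_interest": 4,
    },
}
print(shortlist(programs, n=1))  # ['Literacy Intervention'] (average 4.0 vs. ~3.7)
```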
Customizing the program evaluation process
Program evaluation staff at most profiled districts develop customized evaluation processes, including customized success metrics, for each program under review based on program characteristics (e.g., program objectives, cost). Contacts at District C and District E note that, through a customized evaluation process, program evaluation staff can adapt the structure and scope of each evaluation to the context of the program. For example, program evaluation staff at District A, District D, District E, and District F create logic models during individual program evaluations that outline the relationships between program resources, activities, and desired outcomes. Program evaluation staff then develop evaluation questions related to specific logic model components.
Figure: Logic models outline the relationships among program resources, activities, and desired outcomes.
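As a concrete illustration, a logic model can be represented as a small data structure, with evaluation questions derived from each component. The sketch below is hypothetical: the class, fields, and question templates are inventions for this summary, not any profiled district's actual template.

```python
# Hypothetical sketch of a logic model as a data structure; the profiled
# districts' actual templates are not published in this summary.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    program: str
    resources: list[str] = field(default_factory=list)   # inputs: staff, funding, materials
    activities: list[str] = field(default_factory=list)  # what the program does
    outcomes: list[str] = field(default_factory=list)    # desired results

    def evaluation_questions(self) -> list[str]:
        """Tie one evaluation question to each logic model component,
        mirroring how evaluators link questions to specific components."""
        qs = [f"Were the planned resources ({r}) sufficient and used as intended?"
              for r in self.resources]
        qs += [f"Was the activity '{a}' implemented as designed?"
               for a in self.activities]
        qs += [f"Did the program achieve the desired outcome: {o}?"
               for o in self.outcomes]
        return qs

# Example usage with hypothetical program details.
model = LogicModel(
    program="Reading Tutoring",
    resources=["two reading specialists"],
    activities=["weekly small-group tutoring"],
    outcomes=["improved grade-level reading proficiency"],
)
for question in model.evaluation_questions():
    print(question)
```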

Structuring the program evaluation team
At profiled districts, district size correlates with the size of the program evaluation team. For example, administrators at District A, District C, District D, and District E (districts that serve between 14,000 and 91,000 students) dedicate one to three staff members to the program evaluation team. In contrast, administrators at District B, which serves 157,000 students, dedicate 13 to 16 staff members to the program evaluation team.
Communicating district program evaluation findings
Program evaluation staff at profiled districts use different formats to communicate evaluation results and recommendations to internal and external stakeholders. For example, program evaluation staff at multiple profiled districts create comprehensive reports to provide in-depth information on every step of the program evaluation process. In contrast, evaluation staff can create executive summaries that present a more concise, high-level overview for stakeholders who may possess varying levels of knowledge of and involvement in the program and have limited time to read a full, comprehensive report.
Access the research report
Learn how you can get access to this resource as well as hands-on support from our experts through District Leadership Forum.