Program evaluation is an invaluable aid in planning, developing, and managing programs. To be effective, however, program evaluation efforts must be placed within the broader context of program management. A flexible capacity for internal self-evaluation is fundamental to the management and ongoing improvement of programs. The sections below describe the evaluation process in more detail.
Framework for Evaluation
Program evaluation offers a way to understand and improve community health and development practice using methods that are useful, feasible, proper, and accurate. The framework described below is a practical non-prescriptive tool that summarizes in a logical order the important elements of program evaluation.
(Source: Community Tool Box)
The flowchart above describes the evaluation process. The evaluation standards are grouped around four important attributes: utility (serve the information needs of intended users); feasibility (be realistic, prudent, diplomatic, and frugal); propriety (behave legally, ethically, and with due regard for the welfare of those involved and those affected); and accuracy (evaluation is comprehensive and grounded in the data).
A strong evaluation approach ensures that the following questions will be addressed as part of the evaluation so that the value of program efforts can be determined and judgments about value can be made on the basis of evidence:
What will be evaluated? (i.e., what is "the program" and in what context does it exist?)
What aspects of the program will be considered when judging program performance?
What standards (i.e., type or level of performance) must be reached for the program to be considered successful?
What evidence will be used to indicate how the program has performed?
What conclusions regarding program performance are justified by comparing the available evidence to the selected standards?
How will the lessons learned from the inquiry be used to improve program and system effectiveness?
Evaluation Depends on Clear Goals and Objectives
Programs must have clearly specified goals and objectives before an evaluation can take place. A program goal is a broad statement of what the program hopes to accomplish or what changes it expects to produce. Examples include reducing reoffending among substance-abusing offenders served by the program, or reducing the crime rate in the neighborhood targeted by the program. An objective is a specific and measurable condition that must be attained in order to accomplish a particular program goal. There are many different ways to specify objectives; the program and evaluator should choose the method that works best for each situation. An objective, for example, would be to "assist substance-abusing offenders in abstaining from drug use," or "ensure that victims of crime feel compensated for their losses."
Types of Evaluations
Evaluation of program performance should be conducted on a continuing basis and should provide an overall framework through which all participants involved with the program can benefit from evaluation findings and recommendations.
The two main types of program and project evaluations are process and outcome. A process evaluation focuses on program implementation and operation. It identifies the procedures and the decisions made in developing the program, and it describes how the program operates, the services it delivers, and the functions it carries out. Outcome evaluation is used to identify the results of a program’s effort. It seeks to answer the question: “What difference did the program make?”
When to Evaluate? Conducting an Evaluability Assessment
Most state administering agencies (SAAs) and policymakers, with full plates and limited funds, will not have the capacity or resources to engage in or support rigorous longitudinal research projects that study a group of individuals over a relatively long period of time. Indeed, many will feel hard pressed to divert funds, which some will argue are better invested in the programs themselves, toward evaluating one or more of those programs. Therefore, before trying to determine which kind of evaluation approach best suits both the needs of important stakeholders and the nature of the project, determine whether to evaluate the project at all. This is known as an evaluability assessment. States that fund a number of different projects in several program areas will find it difficult to evaluate every project. Rather than attempting to do so, administrators should focus their evaluation resources so that they provide the most useful information possible. In deciding which projects to evaluate, consider the following questions:
How central is the project to the state's strategy?
How costly is it, relative to others?
Are the project's objectives such that progress towards meeting them is difficult to estimate accurately with existing monitoring procedures?
How much knowledge exists about the effectiveness of the type of project being supported? Other things being equal, where more uncertainty exists about a project's effects, the need for evaluation is greater.
Are evaluations underway elsewhere that are assessing similarly designed projects? If so, the administrator may choose to wait until the results of those evaluations are in, and to devote evaluation resources instead to projects about which less is known.
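The questions above can be thought of as a rough prioritization exercise. The sketch below is a hypothetical illustration of turning them into a simple ranking; the projects, scoring scale, and weights are all invented for the example, and a real decision would reflect the state's strategy and stakeholder input rather than a formula.

```python
# Hypothetical sketch: ranking candidate projects for evaluation using the
# four questions above. Scores of 1 (low) to 3 (high) on each question raise
# priority; an evaluation of a similar project elsewhere lowers it.
# All names and numbers here are illustrative, not real data.

projects = [
    {"name": "Drug court",      "central_to_strategy": 3, "relative_cost": 3,
     "hard_to_monitor": 2, "effectiveness_uncertain": 1, "similar_eval_underway": True},
    {"name": "Victim services", "central_to_strategy": 2, "relative_cost": 1,
     "hard_to_monitor": 3, "effectiveness_uncertain": 3, "similar_eval_underway": False},
]

def evaluation_priority(p):
    # Sum the four question scores; discount projects whose design is
    # already being evaluated elsewhere, since results may arrive for free.
    score = (p["central_to_strategy"] + p["relative_cost"]
             + p["hard_to_monitor"] + p["effectiveness_uncertain"])
    if p["similar_eval_underway"]:
        score -= 2
    return score

ranked = sorted(projects, key=evaluation_priority, reverse=True)
for p in ranked:
    print(p["name"], evaluation_priority(p))
```

In this invented example, the victim services project ranks higher: although the drug court is more central and more costly, greater uncertainty about the victim services project's effects, and the pending evaluation of a similar drug court elsewhere, tilt the scarce evaluation resources toward it.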
Selecting Appropriate Evaluation Methods
It is important to select the method(s) most appropriate to answer the evaluation question. The types of data needed should be reviewed and considered for credibility and feasibility. Depending on the methods chosen, you may need to draw on a variety of approaches, such as case studies, interviews, naturalistic inquiry, focus groups, standardized indicators, and surveys.
What is the overall purpose of the evaluation, and what measures and methods will be used to assess this? An evaluation of the success of a pretrial program, for example, may include measures such as appearance and safety rates, which can be calculated using administrative records. Measuring the success of a victim services program, on the other hand, may rely in part on client satisfaction surveys. Some evaluations employ a combination of quantitative and qualitative methods. (See the section on sample performance and outcome measures by program).
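As a concrete illustration of measures calculated from administrative records, the sketch below computes pretrial appearance and safety rates from case records. The field names and data are invented for the example; actual record layouts and operational definitions of these rates vary by jurisdiction.

```python
# Hypothetical illustration: computing pretrial appearance and safety rates
# from administrative records. Each dict stands in for one released
# defendant's case record; the fields and values are invented.

records = [
    {"appeared_all_hearings": True,  "new_arrest_pretrial": False},
    {"appeared_all_hearings": True,  "new_arrest_pretrial": True},
    {"appeared_all_hearings": False, "new_arrest_pretrial": False},
    {"appeared_all_hearings": True,  "new_arrest_pretrial": False},
]

n = len(records)
# Appearance rate: share of released defendants who made all scheduled court dates.
appearance_rate = sum(r["appeared_all_hearings"] for r in records) / n
# Safety rate: share with no new arrest while on pretrial release.
safety_rate = sum(not r["new_arrest_pretrial"] for r in records) / n

print(f"Appearance rate: {appearance_rate:.0%}")  # Appearance rate: 75%
print(f"Safety rate: {safety_rate:.0%}")          # Safety rate: 75%
```

Because both rates come straight from fields already captured in case management systems, they can be recomputed on a continuing basis without new data collection, unlike survey-based measures such as client satisfaction.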
No design is necessarily better than another. Evaluation methods should be selected because they provide the appropriate information to answer stakeholders' questions. The choice of methods has implications for what will count as evidence, how that evidence will be gathered, and what kind of claims can be made. Because each method has its own biases and limitations, evaluations that mix methods are generally more robust.
Over the course of an evaluation, methods may need to be revised or modified. Circumstances that make a particular approach useful can change. For example, the intended use of the evaluation could shift from discovering how to improve the program to helping decide whether the program should continue. Thus, methods may need to be adapted or redesigned to keep the evaluation on track. See the resources below for more information about conducting and developing evaluations.
Stakeholders can help (or hinder) an evaluation before it is conducted, while it is being conducted, and after the results are collected and ready for use. Use of results will be enhanced if you give priority to those stakeholders who:
Can increase the credibility of your efforts or your evaluation;
Are responsible for day-to-day implementation of the activities that are part of the program;
Will advocate for or authorize changes to the program that the evaluation may recommend; and
Will fund or authorize the continuation or expansion of the program.
In addition, to be proper/ethical and accurate, you need to include those who participate in the program and are affected by the program or its evaluation. Obviously, stakeholder input in "describing the program" ensures a clear and consensual understanding of the program's activities and outcomes. This is an important backdrop for even more valuable stakeholder input in "focusing the evaluation design" to ensure that the key questions of most importance will be included. Stakeholders may also have insights or preferences on the most effective and appropriate ways to collect data from target respondents. In "justifying conclusions," the perspectives and values that stakeholders bring to the project are explicitly acknowledged and honored in making judgments about evidence gathered. Finally, the considerable time and effort spent in engaging and building consensus among stakeholders pays off in the last step, "ensuring use," because stakeholder engagement has created a market for the evaluation results.

Stakeholders can be involved in the evaluation at various levels. For example, you may want to include coalition members on an evaluation team and engage them in developing questions, data collection, and analysis. Or consider ways to assess your partners' needs and interests in the evaluation, and develop means of keeping them informed of its progress and integrating their ideas into evaluation activities. Again, stakeholders are more likely to support the evaluation and act on results and recommendations if they are involved in the evaluation process.
In addition, it can be beneficial to engage your program’s critics in the evaluation. In some cases, these critics can help identify issues around your program strategies and evaluation information that could be attacked or discredited, thus helping you strengthen the evaluation process. This information might also help you and others understand the opposition’s rationale and could help you engage potential agents of change within the opposition. However, use caution: It is important to understand the motives of the opposition before engaging them in any meaningful way.
Visit the section on Strategic Planning to learn more about identifying and engaging key stakeholders.