Program Evaluation*


Program evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs to allow informed judgments about program improvement, program effectiveness, and decisions about future programming. The three primary purposes of evaluation are program planning, program development, and program accountability. There are many evaluation study approaches, including complex, multifaceted ones. A recent article in New Directions for Evaluation listed twenty-two widely used models.

Most commonly used by PEIS are formative evaluations, conducted by internal FSIS staff, that are intended to help program managers refine and improve their programs. "Process" evaluation aims to describe how the program is actually functioning; "normative" evaluation aims to determine the extent to which a program is implemented in the way it was meant to be; and "outcome" or "impact" evaluation aims to assess what effect the program had. Evaluators also use the terms "formative" versus "summative" evaluation to distinguish work that focuses on forming, planning, and improving a program from work that assesses its end results or summary effects.

Formative evaluations are conducted during the development or ongoing implementation of a program, with the intent of improving it; such process evaluations describe the program and its outcomes. In contrast, summative evaluations are conducted on well-established programs to allow policymakers to make major decisions about the program's future.

Evaluation Questions
The type and complexity of an evaluation depends on the evaluation questions it was designed to answer. For example, clients may ask one or more of the following kinds of questions, requiring one or more approaches:

- Program Planning Questions
- Program Monitoring Questions
- Impact Assessment Questions
- Economic Efficiency Questions
Evaluation Approaches
Among the many evaluation approaches available to answer evaluation questions such as those posed above are:

Case Study Evaluation is a method for learning about a complex instance through extensive description and analysis, yielding a comprehensive understanding. Most case studies are intended either to illustrate findings obtained via other techniques or to provide an in-depth description of a critical instance of unique interest. Case studies can also serve to explore new ideas for later investigation, to investigate program operations, and to examine cause-and-effect conclusions in depth. Multiple case studies can be used cumulatively to assess program effects.

Cost-Benefit and Cost-Effectiveness Evaluations use economic methods to assess the relationship between a program's costs and its outcomes: cost-benefit evaluations express both costs and outcomes in monetary terms, while cost-effectiveness evaluations express results as the cost per unit of outcome achieved.
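The cost-per-unit-of-outcome calculation can be sketched as follows. This is a minimal illustration only; the program names, costs, and outcome counts are invented for the example, not drawn from any actual FSIS data.

```python
# Illustrative sketch: comparing two hypothetical programs on cost-effectiveness.
# All figures below are made up for demonstration purposes.

def cost_effectiveness_ratio(total_cost, units_of_outcome):
    """Cost per unit of outcome achieved (e.g., dollars per inspection completed)."""
    return total_cost / units_of_outcome

programs = {
    "Program A": {"cost": 250_000, "outcomes": 1_000},
    "Program B": {"cost": 180_000, "outcomes": 600},
}

for name, p in programs.items():
    ratio = cost_effectiveness_ratio(p["cost"], p["outcomes"])
    print(f"{name}: ${ratio:.2f} per unit of outcome")
```

A lower ratio indicates that a program achieves each unit of outcome at lower cost, though such comparisons are only meaningful when the programs measure the same outcome.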

Evaluation Synthesis is appropriate when the evaluation questions have already been addressed by substantial research. Researchers aggregate the findings of many individual studies to reach conclusions more credible than any single study could support. This approach is most useful when the field is mature enough that sufficient data exist to support major conclusions.
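One common way such aggregation is done in practice is inverse-variance weighting, a standard fixed-effect pooling method in which more precise studies receive more weight. The sketch below is a generic illustration of that technique, not a method the text itself prescribes, and the study estimates used are hypothetical.

```python
# Illustrative sketch: pooling effect estimates from several studies via
# inverse-variance weighting (a standard fixed-effect synthesis method).
# The estimates and standard errors below are invented for illustration.

def pooled_estimate(estimates, std_errors):
    """Inverse-variance weighted mean and its standard error."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

estimates = [0.30, 0.45, 0.25]   # effect sizes from three hypothetical studies
std_errors = [0.10, 0.15, 0.08]  # their standard errors

est, se = pooled_estimate(estimates, std_errors)
print(f"pooled effect = {est:.3f} (SE = {se:.3f})")
```

Because the pooled standard error shrinks as studies accumulate, the combined conclusion is more precise than any single study's estimate, which is the core rationale for evaluation synthesis.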

Prospective Evaluations use methods suited to forward-looking, future-oriented evaluation questions, in contrast to the retrospective approaches discussed above, which explore what happened in the past.

Each of these types of evaluation can employ a number of research methods, such as quantitative analysis of program or survey data or qualitative analysis of observations and interviews.

In addition to developing and implementing evaluation studies, PEIS staff are available to consult on the type of evaluation to undertake and the design of the evaluation. Trained evaluators possess interdisciplinary skills in evaluation, quantitative and qualitative research methods, economics, management, public policy, writing, and interpersonal communication. They can serve as program consultants, group facilitators, observers, statisticians, writers, and trainers.


* See U.S.D.A.

<http://www.fsis.usda.gov/regulations_&_policies/Program_Evaluation/index.asp>