With just over one week left in my sabbatical, I am racing through as much reading as I can, and I have been riveted by December's American Journal of Evaluation. It opens, as each volume does, with the guiding principles for evaluators. These five principles guide our work and the work of all evaluators, and several of the subsequent articles explore how the final principle depends on each of the others.
The final guiding principle is Responsibility for General and Public Welfare. In her fourteen years running the Program Evaluation and Methodology Division of the Government Accountability Office, Eleanor Chelimsky had ample opportunity to reflect on clashes between evaluation and politics, and on how evaluation must serve the public in the face of those clashes. The clashes she identifies are ones our readers might expect:
* Agencies, faced with an externally imposed evaluation that could put them at risk, withhold information, dispute methods and findings, or try to suppress results
* Leaders request evaluations with questions designed to produce a particular slant (her example is receiving a list of 19 questions related to bovine growth hormones that specified the methods to use, the people to talk to, and the suggested answers)
* When an evaluation concludes, the results are uninteresting, too technical, or contrary to prevailing public opinion, and they are never used to improve programs or shape public policy
Chelimsky presents five suggestions for addressing these clashes. Several of them are incorporated into our work (or we've learned hard lessons when we failed to do these things), and I thought I'd share some of our own examples.
Suggestion 1: Expand the design phase to probe the values, stakeholders, methodological strengths and weaknesses, and potential credibility of the evaluation. We have found that when we give our clients, along with others who will be interested in the study, the opportunity to help us explore these issues, we come away with a stronger evaluation design and greater buy-in from everyone involved.
Suggestion 2: Include public groups in the evaluation. When evaluations rely exclusively on available data and interviews with staff or experts, the data are unbalanced and potentially biased. What we emphasize in our evaluation design is first-hand knowledge of the questions we are asking; if a question involves participants' experiences, knowledge, attitudes, or beliefs, then it is crucial that participants are the ones to answer it.
Suggestion 3: Lean heavily on negotiation to encourage all stakeholders to participate in the evaluation. Chelimsky encountered adversarial situations in which agencies withheld information. Although we rarely encounter the hostility she describes, we do meet people who are reluctant or fearful. Rather than negotiation, we more frequently use education - helping people understand how they can use the results and how findings will be shared. We also try to help stakeholders feel there is something "in it for me" - that they will have their own questions answered, access to the data, or an opportunity to respond to results.
Suggestion 4: Never stop thinking about credibility - evaluation must be technically competent and objective. Credibility is gained by matching methods to the evaluation questions, by honestly reporting both the results and the confidence readers can place in them, and by abstaining from advocacy for one issue or another.
Suggestion 5: Develop a dissemination strategy. This is one area where, at the Improve Group, we continue to learn. In one of our very first evaluations, of the statewide charter school system, the mixed results were never released to the public. The guiding principles hadn't been developed yet, and in our relative inexperience we didn't feel comfortable engaging the evaluation's sponsors in a discussion about the public benefit of the results. With more experience, we are better prepared to point to the guiding principles and help our clients feel comfortable and empowered to share findings, even when the results are mixed. Unlike evaluators at a public agency, however, we are usually contractually unable to release results ourselves, and we rely on our clients (with our assistance) to disseminate findings.
Do you have thoughts about the role of evaluation in fulfilling a public purpose? I'd love to hear them!