January 2, 2007 - 12:35pm
Clients often ask us how they can be sure they are getting the "true" answer to their research questions. This question comes up when developing a survey, reviewing participant files, or even recruiting for a focus group. The answer comes down to both sampling strategy and sample size.
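The post doesn't spell out how sample size relates to the certainty of an answer, but a standard illustration is the sample size needed to estimate a proportion within a given margin of error. The sketch below uses the conventional worst-case formula n = z²·p(1−p)/e²; the function name, the 1.96 z-score (~95% confidence), and the conservative p = 0.5 assumption are illustrative choices, not anything from the post.

```python
import math

def sample_size(margin_of_error, z=1.96, p=0.5):
    """Minimum sample size to estimate a proportion within +/- margin_of_error.

    z=1.96 corresponds to ~95% confidence; p=0.5 is the conservative
    (worst-case) guess for the true proportion.
    """
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

print(sample_size(0.05))  # 385 respondents for +/-5% at ~95% confidence
```

Note that this only addresses sample size; the sampling strategy (random, stratified, convenience) matters just as much, since no formula rescues a biased sample.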
December 26, 2006 - 4:09pm
Recently we had a data security issue in our office: a flash drive with client data was lost. Everyone involved was devastated; we felt horrible that it had happened and worried about the risks to our client and the people they serve. Although our client was understandably upset, we did the best we could under the circumstances to make amends. We actually received compliments (!) on how we handled the issue. Several people also told us that this sort of thing happens all the time.
December 14, 2006 - 6:36am
In our ongoing work with Performing Arts Workshop, we've been asked to conduct a quasi-experimental evaluation of their arts residency program, Artists-in-Schools. Based on results from the last three years, they created the ARISE program, designed to bring arts residencies to classrooms that include students with special needs (called inclusion classrooms in California).
November 13, 2006 - 1:02pm
In October I attended a session at the Minnesota Council of Non-Profits/Foundations conference on how to build on assets in rural communities. The session's panel shared some interesting examples of how their rural communities had uncovered new resources to help meet their needs. For instance, one non-profit organization described how it had secured a donated phone system from a local company that had upgraded to a new one; another organization described how it had built a fundraising event around board members' landscaping expertise.
October 20, 2006 - 2:33pm
The most recent issue of the American Journal of Evaluation (v. 27, n. 3, Sept. 2006) presents an interesting ethical scenario: when is an evaluator no longer external? In the scenario, an evaluator has been working with an agency for a number of years, and a significant portion of the evaluator's income comes from that single agency. A foundation, interested in funding a replication of a program the evaluator found to have promising results, is concerned that the evaluator is not truly external.
September 14, 2006 - 3:47pm
Today we had the opportunity to join Tom DeCaigny, executive director of Performing Arts Workshop, as he visited with corporate giving staff from St. Paul Travelers, one of the Workshop's funders. Although we've worked directly with funders who require evaluation, this was the first time we'd had the opportunity to sit down and learn about the interests of those who primarily support programs.
September 8, 2006 - 4:16pm
All good ideas... are difficult to implement! The Improve Group has some tools, however, that we are making better use of in our own planning. Thanks to a SharePoint site, our staff can host discussions, post ideas, and share lists of resources that are open to any contributors. We can also create sub-sites for any of our projects, which serves us well for managing some of our larger ones.
September 1, 2006 - 6:48am
We are in the process of piloting retrospective pre-test surveys for one of our clients. Research has shown the retrospective pre-test to be among the most valid ways to capture change resulting from a program or intervention, particularly among young people. It asks participants at the end of the program to rate both their status at the beginning of the program and their current status; so a sample item asks respondents to circle an answer to the same question twice, once for "before the program" and once for "now."
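Once retrospective pre-test responses are collected, the basic analysis is the average of each participant's "now" rating minus their recalled "before" rating. A minimal sketch, assuming paired 1-5 ratings per participant (the data values and function name below are hypothetical, not from any client survey):

```python
# Hypothetical retrospective pre-test data: each participant rates the same
# item twice at the end of the program -- "before" (recalled status at the
# start) and "now" (current status) -- on a 1-5 scale.
responses = [
    {"before": 2, "now": 4},
    {"before": 3, "now": 4},
    {"before": 1, "now": 3},
]

def mean_change(responses):
    """Average of (now - before) across all participants."""
    diffs = [r["now"] - r["before"] for r in responses]
    return sum(diffs) / len(diffs)

print(round(mean_change(responses), 2))  # average gain on the 1-5 scale
```

Because both ratings come from the same sitting, this design avoids the "response shift" problem of traditional pre/post surveys, where participants' frame of reference changes between the two administrations.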
August 14, 2006 - 2:13pm
One of the things we work very hard to do is collect data in the language that is most comfortable for respondents. We have translated print surveys into Spanish, Vietnamese, Hmong and Chinese; we've conducted focus groups in Spanish and Hmong, and conducted telephone interviews or surveys in a number of languages.
August 11, 2006 - 1:40pm
What do you do when your evaluation finds mixed results? How do you let your client know that you've demonstrated only modest success? This is one of the biggest challenges for external evaluators.