Have you ever surveyed your program participants and wanted to “fine-tune” your survey as you went? Or have you wondered how to take a retrospective look at results when your surveys have changed over time or when you have multiple survey occurrences? We often work with programs that have several iterations of participant surveys, administered to reach people who participated at different times. The survey structure might also change along the way: sometimes different response scales are used, and sometimes questions are added or dropped to stay relevant to particular programs. The steps we took in a recent project to clean and prepare pre- and post-test participant data for analysis may be useful in your own work:
  1. We carefully labeled and tracked each set of survey data to ensure that all survey cases were included and not duplicated.
  2. We created master spreadsheets, one for pre-test and one for post-test, each combining all of the data sets and including a field indicating which survey each case came from. These spreadsheets were used for additional coding, cleaning, and pre-post matching. (Steps 1 and 2 are illustrated in the first sketch after this list.)
  3. Because some questions had different response scales and others were not used on every survey, we determined which questions were consistent enough to evaluate across all survey iterations, keeping the program’s goal of college attainment in mind when deciding what to keep. (See the second sketch below.)
  4. When response scales changed between surveys, we created a hybrid scale that we could apply to all surveys using an “IF” calculation in Excel. For example, when “neutral” was the middle category on one survey but “ok” was the middle category on another, we used the IF function to fold both into one master “ok/neutral” category. (The third sketch below shows the same idea.)
  5. We created unique ID numbers for each individual so we could match pre- to post-test results. The IDs were based on participant names and birthdates gathered from the surveys. It took some trial and error to come up with an ID scheme that could be applied consistently to all survey cases, wouldn’t accidentally exclude any responses, and had a low likelihood of being duplicated for different respondents. (The final sketch below shows one approach.)
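For readers who would rather script this kind of bookkeeping than manage it by hand, here is a minimal sketch of steps 1 and 2 in Python with pandas. We did this project in Excel, and the file names and `source_survey` field below are hypothetical stand-ins, but the logic is the same: tag every case with its source survey, combine everything into one master data set, and check that nothing was lost or duplicated.

```python
import pandas as pd

# Hypothetical file names; each file is one iteration of the pre-test survey.
pre_files = {
    "spring_2012": "pre_spring_2012.csv",
    "fall_2012": "pre_fall_2012.csv",
}

# Step 1: load each file and tag every row with the survey it came from,
# so no case loses its origin once the sets are combined.
frames = []
for survey_name, path in pre_files.items():
    df = pd.read_csv(path)
    df["source_survey"] = survey_name
    frames.append(df)

# Step 2: one master data set for all pre-test responses.
pre_master = pd.concat(frames, ignore_index=True)

# Check that every case made it in...
assert len(pre_master) == sum(len(f) for f in frames)

# ...and flag cases that appear more than once across the sets.
answer_cols = [c for c in pre_master.columns if c != "source_survey"]
dupes = pre_master.duplicated(subset=answer_cols, keep=False)
print(f"{dupes.sum()} rows to review as possible duplicates")
```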
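Step 3 can also be checked mechanically. This second sketch (same caveat: toy column names, not our actual questionnaire) compares the questions that appear on each iteration; anything missing from even one survey goes on a review list rather than straight into the cross-survey analysis.

```python
import pandas as pd

# Toy stand-ins for two survey iterations; the column names are hypothetical.
survey_a = pd.DataFrame(columns=["name", "birthdate", "confidence", "college_plans"])
survey_b = pd.DataFrame(columns=["name", "birthdate", "confidence", "study_habits"])

column_sets = [set(survey_a.columns), set(survey_b.columns)]

# Questions asked on every iteration can be evaluated across all surveys.
consistent = set.intersection(*column_sets)

# Questions asked on only some iterations: keep them only if they speak
# to the college-attainment goal; otherwise set them aside.
partial = set.union(*column_sets) - consistent

print("Consistent across all surveys:", sorted(consistent))
print("Review before keeping:", sorted(partial))
```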
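In Excel, step 4 looks something like `=IF(OR(C2="ok",C2="neutral"),"ok/neutral",C2)` copied down the response column (the cell reference is illustrative). The third sketch shows the pandas equivalent: a simple relabeling that collapses the two middle categories into one hybrid scale.

```python
import pandas as pd

# Hypothetical responses: one iteration used "neutral" as its middle
# category, another used "ok".
responses = pd.Series(["agree", "neutral", "ok", "disagree", "ok"])

# Collapse both labels into one master "ok/neutral" category so the
# iterations share a single hybrid scale.
hybrid = responses.replace({"neutral": "ok/neutral", "ok": "ok/neutral"})
print(hybrid.tolist())
# ['agree', 'ok/neutral', 'ok/neutral', 'disagree', 'ok/neutral']
```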
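Finally, step 5. This last sketch builds an ID from a normalized name plus birthdate and keeps unmatched cases visible instead of silently dropping them. Our actual ID scheme went through more trial and error than this, and the records shown are invented.

```python
import pandas as pd

# Toy pre- and post-test records; names and birthdates are hypothetical.
pre = pd.DataFrame({
    "name": ["Ana Diaz", "Lee Park"],
    "birthdate": ["2001-04-12", "2000-11-03"],
    "score_pre": [3, 4],
})
post = pd.DataFrame({
    "name": ["ana diaz ", "Lee Park"],
    "birthdate": ["2001-04-12", "2000-11-03"],
    "score_post": [4, 5],
})

def make_id(df):
    # Normalize before combining, so stray spaces or capitalization
    # don't split one person into two IDs.
    name = df["name"].str.strip().str.lower().str.replace(r"\s+", " ", regex=True)
    return name + "|" + df["birthdate"]

pre["pid"] = make_id(pre)
post["pid"] = make_id(post)

# Match pre to post; how="outer" keeps unmatched cases visible for review.
matched = pre.merge(post, on="pid", how="outer", suffixes=("_pre", "_post"))
print(matched)
```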
Some of these steps could be avoided by using the same survey over time. Common survey software packages (like SurveyMonkey or SurveyGizmo) let you add, hide, and show specific questions, or use question and page routing, so your survey stays customized and relevant for specific groups of program participants. If you still need separate data sets, you can filter responses by a field such as date or program year when you export, then run targeted analysis and evaluation on each group. Working through these kinds of survey and data challenges is one of my favorite parts of my work as an evaluation consultant; I love helping you develop strategies that will make your life easier down the road. If you are feeling overwhelmed by your survey system or need some help setting up a clean and user-friendly system, contact me at deborahm@theimprovegroup.com or 651-315-8914 to see how we can work together.