It’s about not crashing the plane.
“Wow, I certainly missed that one” is not what you want to say at the end of a research project that crashed into the ground.
In the process of developing usability tests or surveys, we can come to believe we are experts about what users think. (For this post, I will apply pilot testing strictly to surveys, though the practice has many other applications.) We work long and hard at crafting every word, and that very diligence can be our undoing: we become so familiar with what we want to find out that we lose our objectivity and forget to ask ourselves:
- Are the pilot survey directions clear and easy to understand?
- Are any of the pilot survey questions vague or confusing?
- Does the order of the questions bias the respondents’ answers?
Pilot testing is kind of like telling your friends you are thinking about grabbing a bed sheet and jumping off the roof of your house to find out if you can float to the ground. You hope to find out from your friends whether it’s a good or bad idea before you actually jump. Pilot survey testing is a way to learn what you might not know before committing yourself wholeheartedly to the consequences of your actions.
How to Pilot Test Your Survey
1. Conduct your pilot test with people who are similar to the intended survey respondents
It is critical that you pilot test your survey in the same manner as you intend to administer it to your actual respondents. The degree to which the pilot survey varies from the actual survey is the degree to which you can’t trust your data. This is not a step to cut corners on: pilot testing establishes the confidence you can place in your user research data. You wouldn’t test the traction of your car’s tires around a corner on a sunny day to predict how they will perform in the rain.
2. Always pilot test using two or more people
If you observe the same issues with both people, you can likely resolve them and move on to releasing the survey. If, however, the two sets of results differ substantially, you will need a third or fourth tester to confirm which adjustments should be made.
3. Toss the pilot survey responses
Data from a pilot survey is always discarded and never included in the analysis of the main body of work, because you will almost always discover items that need to change to improve the quality of the survey. In pilot testing, you are not focused on the literal answers to the survey questions; the goal is to determine whether the survey is ready to be sent out in its current state. You are looking for insight into things like:
- Are respondents struggling with understanding the questions?
- Are they confused about how to answer?
- Does the survey give them a way to answer accurately and with confidence?
- Do they think the questions make sense?
4. Follow up with your test subjects
After the pilot survey is complete, ask the tester whether anything was unclear. Be careful not to fall into the trap of assuming you know what participants were thinking; if you are not sure, ask while the experience is still fresh in their minds. In the process of asking questions, you have the chance to discover insights you hadn’t considered.
5. Repeat the pilot survey process
Once you have resolved all the issues, run the pilot test one last time. Ideally, this final round will confirm that every issue has been corrected.
Help yourself to avoid saying “Wow, I certainly missed that one” after all your hard work. If you’re interested in additional help with pilot testing, seek out the experience of a Perficient Digital UX researcher to ensure that you are set up for success every step of the way.
Speaking of success, did your mom pilot test cookies for your school bake sale by asking you to try one? See? Moms know about pilot testing. Make Mom proud and pilot test your work before you send it off to the masses. Oh, and give your mom a call, it’s been way too long.