Editor's note: we offer our long-term sponsors the opportunity to write "Sponsor Posts" and tell their story. These posts are clearly marked as written by sponsors, but we also want them to be useful and interesting to our readers. We hope you like the posts and we encourage you to support our sponsors by trying out their products.

Online survey research is great for collecting deep, interesting data about your audience. It's the perfect complement to web analytics and syndicated audience measurement, and it can provide benefits to everybody in your organization, from the editorial staff to the ad sales folks. However, it's easy to get it wrong. In fact, we tend to think that most people do just that. And, depending on how it's used, bad data can often be worse than no data.
Most of the time, problems can be traced back to sampling error. In technical terms, sampling is the act of observing a small selection from a larger population in order to learn information about the entire population. In the case of online surveys, sampling refers to how you recruit respondents.
So, here are three things to think about that will help improve the quality of results the next time you survey your audience.
The way in which you engage website visitors affects the type of visitor you will attract. There are many important aspects to the design of an invitation, including tone of voice and branding, but perhaps the most important is the level of noisiness. At one end of the spectrum, there are invitations that are too subtle: a button in a blog post, a static feedback link in a footer, etc. These can result in a polarized, self-selected sample.
At the other end of the spectrum are noisy invitations. A typical example is the "TAKE A SURVEY, WIN AN IPOD" popup invitation. Crass promises of compensation can have many unintended effects on the sample and carry a high cost in terms of visitor experience.
In general, the goal should be an invitation that is measured, interesting, and respectful. It should be obvious enough so that every (hopefully randomly) selected visitor has an equal, but not annoying, opportunity to participate. At Crowd Science we tend to use text-based HTML overlays.
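The "equal opportunity to participate" idea boils down to giving every visitor the same probability of seeing the invitation, independent of which page they're on or how they arrived. As a minimal sketch (the function name and the 2% rate are illustrative, not Crowd Science's actual mechanism), it might look like this:

```python
import random

INVITE_RATE = 0.02  # hypothetical: invite roughly 2% of visitors


def should_invite(already_invited: bool) -> bool:
    """Give every visitor the same chance of seeing the survey overlay.

    Drawing a fresh random number per visitor avoids the self-selection
    bias of a static feedback link, where only motivated (often polarized)
    visitors opt in.
    """
    if already_invited:
        return False  # don't re-prompt someone who has already been asked
    return random.random() < INVITE_RATE
```

The key design choice is that the decision is made by the site, not the visitor: self-selection is what produces the polarized samples described above.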
Respondents feel and respond differently depending on when they are engaged. Catching someone in a particularly bad, or good, or busy, or anxious mood can affect the way they respond to a survey.
The obvious example here is Monday-itis. If you run editorial on a website, and your bonus is tied to visitor satisfaction scores, you definitely don't want pre-caffeinated, Monday-morning respondents overwhelming your sample. Similarly, if you're running an Apple blog, it's important to be cognizant of the effect of macro events, like Sir Steve unveiling the fabled tablet at Macworld.
Avoid errors due to timing by recruiting as evenly as possible across all meaningful time periods. The unusual visitors are important, but only in the right proportions.
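One simple way to keep any single time period from dominating the sample is to cap recruits per time bucket. This is a rough sketch under assumed names and numbers (the class, the hour-of-day bucketing, and the quota of 50 are all illustrative):

```python
from collections import Counter

HOURLY_QUOTA = 50  # hypothetical cap on recruits per hour-of-day bucket


class EvenRecruiter:
    """Stop inviting once a time slot fills up, so that (for example)
    pre-caffeinated Monday-morning visitors can't overwhelm the sample."""

    def __init__(self, quota: int = HOURLY_QUOTA):
        self.quota = quota
        self.counts = Counter()  # recruits seen per hour-of-day (0-23)

    def try_invite(self, hour_of_day: int) -> bool:
        if self.counts[hour_of_day] >= self.quota:
            return False  # this slot already has its share of respondents
        self.counts[hour_of_day] += 1
        return True
```

The same quota idea extends to day-of-week or to macro events: pause or down-weight recruiting during unusual traffic spikes so those visitors appear in the sample only in the right proportions.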
Respect respondents' time and they'll return the favor with good data. This is obvious, but it bears repeating. Even the most generous respondents will suffer respondent fatigue given a long enough questionnaire.
The sweet spot for online research is six to 12 questions. That's long enough to dig deeply into one or two topic areas and collect a few attributes upon which to segment, but well within the tolerance and attention span of most visitors, even without monetary compensation.
Online survey research is easy to do, but easy to screw up. Follow these tips and listen to your audience. What survey methods have worked for you? What hasn't worked? Let us know in the comments.
Photo credit: Dominik Gwarek.