How reliable are Online Market Research Surveys?
This newsletter is useful for anyone planning to conduct an online survey who wants to ensure their response data is truly clean and reliable.
If you are not rejecting 25-30% of your qualified online survey respondents, your survey results will be severely compromised.
Online Survey Panel providers tell us their panel is not only the largest (in some cases) but also the best (in almost all cases). The major 'Research Only Panel' providers in Australia certainly have large panels (e.g., Research Now has 6 million members worldwide, and My Opinions has over 300,000 members in Australia alone). But can we be sure the panels consist of real people, representative of the population, who are what they say they are (e.g., age, income level, location, product usage and shopping habits), who are not over-surveyed and not completing the same survey twice? And is each panel actively managed to keep respondent data up to date and respondents engaged?
I believe the larger Australian research-only panels are doing a reasonably good job of fulfilling the above requirements, with perhaps one exception: are they weeding out all respondents who give unreliable survey responses and removing them from the panel? The major panel providers say they expect no more than 1% to 5% of respondents who are invited and qualify to complete a survey to provide unreliable 'dirty' data. They also say that if a panel member continues to provide 'dirty' data, they remove that member from the panel after appropriate warnings. Is this good enough?
A recent survey experience
This survey was conducted amongst n=1387 'Main Grocery Buyer' respondents across Australia on the topic of 'Grocery Buying'. Hard quotas were set on location (by Capital Cities and Regions), and soft quotas on Age and Income groups. The Panel provider did an excellent job of matching the age and income groups to the ABS figures supplied. The survey's median length was 8.8 minutes, covering recent buyer experience (e.g., grocery shop brands used, approximate expenditure, frequency of shopping, and some attitudinal questions on 15 supermarket attributes measuring Importance, Performance and Change over the past year). It was a relatively short survey, not at all complex, and with no open-ended questions. So what was the unreliable or 'dirty' response rate where the survey was terminated?
A total of 386 respondents (27.8%) were terminated for unreliable responses, either part way through or even at the end of the survey, leaving us with the required n=1001 acceptable completions.
Reasons for terminating a total of 27.8% of ‘qualified’ participants
11.2% flat-lining (i.e., giving straight-line responses to one or more of the three '15 attribute' questions where it did not make sense to do so)
5.9% speeders, i.e., those who completed the survey in less than 4 minutes (versus the median length of 8.8 minutes). Note that we apply a tougher test here than the Panel providers' norm of less than 1/3 (some use less than 1/4) of the median completion time.
4.8% where the household composition did not match the number of people in the household (e.g., 'Couple with no children or other adults living in the household' but a total household size not equal to 2)
3.9% spending less than $80 per person in the household on all grocery spending in the last 4 weeks (i.e., less than $20 per person per week – highly dubious)
2.1% where the grocery spend was more than $1,250 per person in the household in the last 4 weeks, or household spending was exceptionally high relative to income
0% garbage responses to open-ended questions (e.g., 'adsf' or other rubbish) – not applicable, as there were no open-ended questions in this survey
This may seem like a high termination rate, but it means we have the highest possible confidence in the final analysis.
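To make the termination rules above concrete, here is a minimal sketch of how they might be expressed as automated checks. The field names, data layout, and the flat-lining test are illustrative assumptions (the real flat-lining rule only fires "where it did not make sense" to straight-line); the thresholds are taken from the figures quoted above.

```python
# Illustrative sketch of automated respondent-quality checks.
# Field names are assumptions; thresholds are from the rules described above.

SPEEDER_CUTOFF_MINUTES = 4.0   # tougher than the usual 1/3 of the 8.8-minute median
MIN_SPEND_PER_PERSON = 80      # minimum plausible grocery spend per person, per 4 weeks
MAX_SPEND_PER_PERSON = 1250    # maximum plausible grocery spend per person, per 4 weeks

def is_flat_liner(attribute_ratings):
    """Simplified test: flags identical ratings across every attribute."""
    return len(set(attribute_ratings)) == 1

def is_speeder(duration_minutes):
    return duration_minutes < SPEEDER_CUTOFF_MINUTES

def household_mismatch(composition, household_size):
    """E.g. 'couple with no children or other adults' must mean exactly 2 people."""
    expected = {"couple_no_children": 2, "single_person": 1}
    return composition in expected and household_size != expected[composition]

def spend_out_of_range(total_spend_4_weeks, household_size):
    per_person = total_spend_4_weeks / household_size
    return per_person < MIN_SPEND_PER_PERSON or per_person > MAX_SPEND_PER_PERSON

def terminate(resp):
    """True if the respondent should be terminated as unreliable."""
    return (
        any(is_flat_liner(q) for q in resp["attribute_questions"])
        or is_speeder(resp["duration_minutes"])
        or household_mismatch(resp["composition"], resp["household_size"])
        or spend_out_of_range(resp["grocery_spend_4_weeks"], resp["household_size"])
    )

# Example: flat-lines the first attribute question AND reports a
# 'couple, no children' household of 3 people - terminated.
resp = {
    "attribute_questions": [[4, 4, 4, 4, 4], [3, 5, 2, 4, 1], [2, 3, 4, 3, 2]],
    "duration_minutes": 7.5,
    "composition": "couple_no_children",
    "household_size": 3,
    "grocery_spend_4_weeks": 600,
}
print(terminate(resp))  # True
```

In practice these checks would run inside the survey engine's termination logic rather than as a post-hoc script, which is the distinction drawn in the "How best to clean the Data?" discussion below.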
How best to clean the Data?
So if the Panel Providers cannot guarantee the ‘truth’ and ‘purity’ of the responses, and we want to deliver accuracy and reliability in our analysis for our clients, then we researchers have to manage this ourselves.
There are two ways to do this:
1. Review every completed respondent’s data for the types of errors described above, flag the unreliable cases and, with the agreement of the panel provider, remove them from the results. The problem is that by this stage the respondent has most probably already been paid, and Panel Providers may demand unreasonable tests to limit the number of rejections. In addition, reviewing every response to every question takes an enormous amount of the research analyst’s time, and omissions will inevitably occur where complex conditions apply.
2. A better method (and the one we use at Solutions Marketing) is to build comprehensive logic tests into the questionnaire itself, so that termination occurs automatically while the respondent is completing the survey. The best way to achieve this is to think carefully about the termination logic up front, then run a pilot test of, say, the first 75-100 completions, review the results, and perhaps add further logic tests for the remainder of the survey.
Following the process above will ensure the cleanest possible data, with the best use of the research analyst’s time, and an analysis report with the highest confidence.
Want to know more? Contact Tony Nix on 02 9955 5133 for an obligation-free discussion about your online survey methodology. In our research we dedicate ourselves to uncovering key market insights. In our analysis and strategy planning we aim to find solutions 'beyond the obvious!' Our offer is simply 'more for less' - more consumer insight for less cost ...guaranteed!
Why not connect with us on LinkedIn – Tony Nix: https://au.linkedin.com/in/solutionsmarketing
Or follow us on our company page: https://www.linkedin.com/company/solutions-marketing-&-consulting