written by Daniel Somerset
Market research began with face-to-face interviewing; then, as telephone use gained popularity, researchers moved to phone surveys to reach people faster. Naturally, with the arrival of the Internet, researchers shifted away from telephone interviews, and today many – if not most – surveys are done online. In fact, millions are completed on desktops and laptops every year. Computer monitors may differ in size, but users receive the same (or a similar) user experience.
Recently, another shift has come along: mobile devices. According to an Adobe study reported by mediapost.com and thedroidguy.com, websites are getting more and more of their traffic from tablets and smartphones. The demand for mobile surveys is increasing too, and respondents expect the survey experience to be as good on a mobile device as on a laptop or desktop. While the surveys are still “online,” mobile devices present new challenges. Respondents are not always getting the same experience – and perhaps researchers are not getting the same results.
Respondents taking mobile surveys don’t want to struggle to read questionnaires on smaller screens, or have to scroll when given an 11-point scale. Researchers are concerned not only about the survey experience, but also about survey best practices. How do you administer a max-diff study on a smartphone screen? How long should a survey be? Is the survey competing with a TV commercial during the season finale of “The Bachelorette”?
The reality is that the transition from online to mobile resembles the earlier transition from face-to-face to phone, but it differs in some important ways: it is happening far faster, and this time it is the respondent – not the researcher – who is deciding to make the switch. It’s happening whether we are ready or not. The industry will adjust, but how quickly? There are best practices in designing a mobile survey, but there are still some unanswered questions. Is it better to use the tablet or smartphone’s web browser, or are respondents willing to download a stand-alone app? Where should development start: iOS or Android? There is no doubt that methods will be refined, but what does the future look like? Place your bets now.
Subscribe to SSI KnowledgeWatch for more thought-provoking views, time-proven tips and innovative research insights or visit our corporate website to find out how we can help you with your next research project.
written by Pete Cape
About the questions:
5. Ask individual questions: trying to get more than one piece of data at a time is a recipe for disaster
6. Avoid statements in a grid-style question; it is tedious, mentally exhausting, and leads to over-fast processing
7. Be careful with Yes/No questions – acquiescence bias leads to overuse of “Yes” – present alternatives to pick from instead (the respondent will not know which answer you think is ‘right’)
8. Pictures speak volumes and overwhelm the words – use them sparingly, not as illustrations or examples
9. Make sure your scaled questions/items are scalable and not dichotomies. Make sure your scale is appropriate to your question/item
10. If the answer is a number and they know it, they will give it to you. Do not use banded options unless you already know the distribution of the answers (the central band should contain the mean).
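That last tip can be made concrete. One way to build bands that respect a known distribution is to cut at quantiles of pilot data, which also guarantees the central band contains the bulk of the answers. This is only an illustration of the idea – the pilot numbers below are invented, not from any real study:

```python
from statistics import mean, quantiles

# Hypothetical pilot data: "How many hours of TV did you watch last week?"
pilot_answers = [2, 3, 4, 5, 5, 6, 7, 7, 8, 9, 10, 12, 14, 15, 20]

# Cut points at the quintiles give five roughly equal-sized bands.
cuts = quantiles(pilot_answers, n=5)  # four cut points -> five bands
bands = list(zip([min(pilot_answers)] + cuts, cuts + [max(pilot_answers)]))

m = mean(pilot_answers)
middle = bands[len(bands) // 2]  # the central band

print(f"mean = {m:.1f}, central band = {middle}")
# Per the tip above: the central band should contain the mean.
assert middle[0] <= m <= middle[1]
```

Bands invented without pilot data, by contrast, routinely lump most respondents into one option and waste the rest of the scale.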
written by Pete Cape
About the people:
- These are ordinary people; ask simple straightforward questions written in plain language. Avoid jargon and complexity.
- The respondent knows they are in an experiment and wants to look good to you, the experimenter; everything you write and show is thought to have meaning. Do you understand the meaning of what you wrote? It is easy to manipulate public opinion – don’t!
- People are unreliable witnesses to their own behaviour; they misremember because the behaviour was inconsequential, and they justify it after the event because they are human.
Check back next week to find out About the questionnaire and About the questions!
Just when we think we have figured out the world of smartphones, Google throws a curveball. Like something you would normally see in a sci-fi movie, Google has released a prototype of its new Google Glass. It is essentially like wearing your smartphone as a pair of eyeglasses. The controls are voice-activated and let you visit websites, snap photos and shoot video just by saying specific commands. Talk about hands-free. The images are shown on a small screen positioned in front of your eye.
Is this the wave of the future? According to reports from the Wall Street Journal, it isn’t too far away. Google Glass will be available to developers in early-to-mid 2013, and to the public shortly after that. As exciting as this is, how will it affect our surveys? Instead of making sure our radio buttons are displayed properly on a smartphone screen, we will need to make sure the programming accurately records and interprets the respondent’s voice prompts. How will background noise affect the quality of those voice commands?
But why focus on the negative? Let’s focus on the positive instead. What doors will Google Glass open in terms of market research? One thing that comes to mind is the video diary. While this isn’t a new avenue of research, it will certainly become easier when all the participant has to do is tell Google Glass to begin recording. They can then move about freely without worrying about holding a video camera. The researcher will see exactly what the respondent sees.
written by Caithlin Skultety
Does anyone else ever get the feeling that organizing your email inbox is like cleaning out your closet? You’re initially overwhelmed by what’s been accumulating, and then the fear sets in that it will take days to sort through.
Well, I experienced both this past weekend. I was in a Spring-cleaning mood and wanted to clean up my Yahoo account. I deleted the obvious spam and the 25 “deal of the day” emails that now just cloud my inbox on a daily basis. I made some folders and sorted the emails I needed. I even saved some emails I will probably never need again – like that favorite pair of jeans you’re not ready to throw away!
It was then that I came across an email from the car dealership where I had my car serviced three months ago. The email thanked me for coming in for my service and asked me to take a “quick” survey about my experience. I remember starting the “quick” survey, then dropping out after 8 minutes with no end in sight. It was false advertising! I would have been more inclined to finish the survey if it had been presented differently. As a respondent, I want to know what I’m getting into before I’m 8 minutes in.
I did end up deleting the email, but next time around I would take the survey, knowing I need a few more minutes to complete it. That might be partly because I work in the market research industry, but mostly because I think the dealership did a great job and I was looking forward to giving my feedback. It’s also important to me to give my opinion on something that could ultimately affect me.
But the company missed out on my opinion this time around, just because they didn’t take the time to let me know what to expect.
written by Rosie Greening
Oh dear. A few too many times recently, I have seen surveys which are just not fair. You know the ones I mean. You’ve taken surveys like this before and commented how badly written the question was. You have struggled to answer the same barrage of questions about ten different concepts one after the other. You have not understood the question – and (some of) you are researchers.
Surveys should be easy. We may not be able to make them enjoyable, (I mean, who really wants to talk about toilet paper?) but we shouldn’t make it hard for people to help us.
A couple of years ago, I went to a conference about the future of research, which also, unsurprisingly, included some comments on the past. Years ago, one company used to send all of its new hires, without exception, on compulsory fieldwork – face to face, in the street, testing the questionnaires they had designed. The speaker commented on just how much better it made him at simplifying things. I think this is a very good rule to apply: if you can’t read the question out loud and be easily understood, then perhaps it is too complex. Obviously this would not work for every survey, but perhaps it is an interesting (old) new way to pilot your 30-minute questionnaires?
Every few years the focus of research shifts, or cycles through various concerns: quality, response rates, response times, price and so on. Perhaps it is time the focus came back to the respondent?
At SSI we use the QUEST score to ensure everyone is having a good experience. We do this by giving our respondents the chance to give feedback at the end of every survey. Even with the most rigorous testing, we sometimes get questionnaires wrong, and with respondent feedback we can catch this after soft launch. Not only can we see where respondents are dropping out, but we can also see what they are saying about the survey. The scores are available when we send our debrief pack after we complete a study. It is so important to us that we even give prizes to the companies with the best QUEST scores at the end of the year!
written by Jackie Lorch
As researchers, we often seek quite complex information from people, and some of the most challenging questions we ask involve describing things with numbers.
I remember once seeing a survey which asked: “What is the capacity of your refrigerator in cubic feet?” I challenge anyone to get this right without looking up the model number online to find out.
If we think the capacity of a refrigerator is a confusing number to think about, it doesn’t remotely compare to the challenge of answering the question: How big is the universe?
No one knows how big the universe is, but just describing the size of the observable universe can quickly tie us up in knots, since it’s about 93,000,000,000 light years across.
A better way to think about it is to anchor it to something that’s easier to visualize. A recent article in Wired magazine suggests: “Place a penny down in front of you. If our sun were the size of that penny, the nearest star, Alpha Centauri, would be 350 miles away…at this scale, the Milky Way galaxy would be 7.5 million miles across.” That 7.5 million miles is the tiny dot at the center of the observable universe in the picture Wired published with the article.
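The arithmetic behind that analogy is easy to check. A quick back-of-the-envelope calculation – using approximate astronomical constants of my own, not figures from the Wired article – lands very close to the numbers quoted:

```python
# Rough sanity check of the penny-sized-sun scale model.
# Approximate values (assumptions, not from the Wired article):
LY_M = 9.461e15           # metres in one light year
SUN_DIAMETER_M = 1.39e9   # sun's diameter in metres
PENNY_M = 0.019           # a US penny is about 19 mm across
MILE_M = 1609.344         # metres in one mile

scale = PENNY_M / SUN_DIAMETER_M  # shrink factor of the model

# Alpha Centauri is about 4.37 light years away.
alpha_centauri_mi = 4.37 * LY_M * scale / MILE_M
# The Milky Way is roughly 100,000 light years across.
milky_way_mi = 1.0e5 * LY_M * scale / MILE_M

print(f"Alpha Centauri: ~{alpha_centauri_mi:.0f} miles")        # ~351 miles
print(f"Milky Way: ~{milky_way_mi / 1e6:.1f} million miles")
```

The penny-scale Alpha Centauri comes out at roughly 350 miles, matching the article; the Milky Way comes out around 8 million miles, in the same ballpark as Wired’s 7.5 million (the gap simply reflects which diameter estimate for the galaxy you plug in).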
The only way to start to grasp these enormities is by anchoring and comparison. The Hayden Planetarium at the American Museum of Natural History in New York does a fantastic job of this in its “Scales of the Universe” exhibit, where different-sized objects illustrate the relative size of cosmic and human items.
To get back to the “size-of-the-fridge” example: rather than asking people for an answer in cubic feet, we need to anchor the question in something concrete that a respondent can relate to. The most obvious anchor is a photo of different refrigerator models. Another might be the height and width of the refrigerator.
Whether we’re explaining the size of the universe or gathering the sizes of kitchen appliances, it’s worth taking the time to think about how we can make our concepts relatable to the people we’re talking to–we’re sure to get our point across better if we do.
written by Jackie Lorch
Here are a couple of real examples from surveys I saw this week (Note: some minor details have been changed to preserve anonymity).
A label on the scale which doesn’t match the question:
A yes/no question with an agree/disagree scale:
One wonders what the center point is supposed to mean in this example: “I neither agree nor disagree that I received a confirmation”? Is that like “taking the fifth” on a survey? That’s probably behavior that would get you branded as a “bad respondent”!
As sample providers we see these things all the time. Our conundrum is how to react.
Refuse to field until it’s fixed? But we don’t want to be difficult. And we’re conscious that we’re not researchers ourselves. Is speaking up overstepping our role?
We expect our participants to answer carefully. But really, why should they, when so often we don’t ask carefully?