3.1 What do you need to know?
Why are you making the survey? What do you want to know? Try to answer these questions first. Ideally you can identify some decision that would depend on the findings of the survey. That would be a good reason to conduct the survey, and even better if the result were also meaningful to the respondents.
If you cannot find a single good reason for the survey, maybe you should not do it. It is entirely possible that the survey would only produce useless information, or at best some nice-to-know data. The best solution might be not to do the survey unless you have a clear question in mind.
3.2 Who knows what?
The most overlooked point in designing surveys seems to be overestimating the respondents’ knowledge. A person can give meaningful answers only if the person has the knowledge. There are a lot of important things that would be extremely useful to know, BUT which are very difficult to study. A typical situation might involve embedded elements, e.g. an application inside a device. The user might not be able to tell the difference between the effects of hardware and software problems.
RULE NUMBER 1
Do not ask things people do not know or understand.
This is important; a lot of surveys fail by asking impossible questions. It is amazing what you can find in real-life surveys; here is a beauty (the survey had a long table of these questions, so I have simplified it a bit):
What is the reference value for number of calls per priority class?
There is no explanation of what they mean by a lower or higher reference value. They also want just one number, so what does “per priority class” mean?
Here is another beautiful example.
My 7-year old son had started school and the school wanted to hear the parents’ opinions.
One question was this: Does the Principal support the Teachers?
I did not know the principal; I had only met my son’s teacher, and we had not discussed matters like that. I doubt any parent had any useful and valuable knowledge on this. Unfortunately, there are many people who feel that they need to answer each question.
It is understandable that principals would like to get some feedback, and there is nothing wrong with the question. What is wrong is the audience; they should ask the teachers.
I have seen several IT Service Management surveys where the respondents are asked to estimate their process maturity. I think it is a silly notion that people would be able to judge their own maturity.
There are several ways of asking questions. A question can be open or closed. You can ask for an exact number or give the respondent a choice of options. Let us assume that you have released an application to a large group of people and would like to know if the users have had problems with it. The release process has taken four months and now all users have had their training and have started using the application.
A perfectly closed question could be this: 1 Have you had any problems with the new application? Yes or no.
You might find that 90 % answer yes and that would leave you wondering.
A different version of the question could be: 2 How many problems have you had with the new application? Unfortunately there is a problem with this question: some people have been using the new application longer than others. You could rephrase it like this: 3 How many problems per week have you had with the new application?
If you tested this question, you might find that 75 % of the people answer 0 problems and only 5 % give a number larger than 1. That seems to conflict with the answers to question 1, but it does not; maybe most people had one or two problems in the beginning but nothing after that.
Another way would be to ask:
4 How often do you have problems with this application?
- nearly daily
- every week
- a few times per month
- less than that
This question is better, but it would make it difficult to calculate the number of problems people have had. That could be solved with a slightly more detailed question:
5 How many problems have you had with this application?
_____ per week/month/year
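If you use a frequency scale like the one above, you can still get a rough problem count by assigning each answer an assumed rate. The per-bucket rates in this Python sketch are my own assumed midpoints, not values from the text:

```python
# Convert frequency-scale answers into a rough problems-per-month estimate.
# The rates per bucket are assumed midpoints (e.g. "nearly daily" ~ every
# working day), not measured values.
RATE_PER_MONTH = {
    "nearly daily": 20.0,
    "every week": 4.0,
    "a few times per month": 2.5,
    "less than that": 0.5,
}

def estimated_problems_per_month(answers):
    """Sum the assumed monthly rates over a list of respondents' answers."""
    return sum(RATE_PER_MONTH[a] for a in answers)

answers = ["every week", "less than that", "a few times per month"]
print(estimated_problems_per_month(answers))  # 7.0
```

The estimate is only as good as the assumed rates, which is exactly why the more detailed "how many per week/month/year" question is worth the extra effort.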
The problem with the last question is that it is a bit difficult to understand. Playing with several time scales in a single survey can lead to unrealistic results if people misunderstand the question. In one survey several time scales were in use, and the questions asked how much time a person spent supporting other users. Some people reported using more than 150 % of their available time for support; they had been confused by the different time scales.
Do not ask leading questions; it is too easy to influence opinions. In a classic example, two British papers polled the same group of people to find out their opinions. Surprisingly, the two polls gave the same numbers for opposite views.
Our local council wanted to know how it could improve its services and asked which street in our village needs better care. Sounds reasonable, but in my opinion they were overdoing their job. They should have asked whether I want more or less maintenance.
In some cases you may find many questions that you actually need answered. You might want to know which product or department is good and which needs improvement. You might know that there can be several reasons why customers choose you, and you would like to know how you are doing in those terms. If this is the case, you need to consider whether it would be possible to turn the question upside down. This is a bit tricky, so here are some examples:
You have ten different products which are sold globally and you want to know what customers in your important markets think of the products. Customers need to register so you have a file with the contact info and product for each customer.
Your staff would like to know what users think of the
- user interface,
- physical appearance,
and so on.
They have come up with an average of 5 questions on these 10 areas, totaling 50 questions. How could you ever pare that down to three questions?
One solution is this, ask these three questions:
- What were the most important reasons in your decision to select this product?
- What are the best features of the product?
- What could be improved?
Notice that these are open questions; you do not give the respondents a list. The reason is that a list always limits the responses. In the worst case your list might miss the most important reason. This could easily happen in multicultural environments, where your ideas of the reasons could be completely wrong.
This would be a simple survey to make, but the results could be quite complicated. Let’s assume you have 10 products and you operate in 5 important markets. To get some stability, you should have the answers of at least 20 customers per product and market, i.e. 1,000 people. In the end you would have a complex report analyzing the reasons why some of your products are doing better or worse in specific markets. In some cases the reasons might be complex, in some cases simple. You might find that it would be easy to adjust a product for a specific market, or that you would need to pull some products from the markets.
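The sample-size arithmetic above is worth making explicit, since it grows quickly with every product or market you add. A minimal sketch:

```python
# Sample-size arithmetic for a per-product, per-market survey.
products = 10
markets = 5
respondents_per_cell = 20  # minimum per product-market cell for some stability

total_respondents = products * markets * respondents_per_cell
print(total_respondents)  # 1000
```

Dropping a product or a market from the design removes a whole slice of respondents, which is often the easiest way to keep such a survey manageable.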
The key point here is that you need to get to the point. What is it that you really want to know?
Let’s continue now with the application example. A good 3 question survey would ask:
What are your experiences with the new application?
- What is good in it?
- What could be improved?
- How much time have you lost during the first weeks of using it?
In chapter 2 there were already some examples of useful questions.
3.4 Measuring customer satisfaction
The 3 question method is highly suitable for customer satisfaction measurement. The typical situation is that we know there can be many different things that annoy a customer, and the natural solution seems to be to ask about all of them. This creates a longish survey with a lot of questions and scales, as it is difficult to use a common scale for all questions. Using a single scale for all questions will make the design look good but may render some of the answers meaningless.
A simple 3QS will look like this:
1 How do you rate our service? (Use an appropriate scale with at least seven levels.)
2 What is best in our service?
3 What could be improved?
The last two questions are open, so DO NOT take a huge sample. A hundred responses will give you a lot of information.
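Once the open answers have been read and coded into recurring themes (a manual step), tallying them is trivial. A minimal sketch, assuming the answers have already been labeled with theme names of your own choosing:

```python
# Tally open-question answers after they have been manually coded into themes.
# The theme labels here are hypothetical examples, not from the text.
from collections import Counter

coded_answers = [
    "ease of use", "price", "ease of use",
    "support", "price", "ease of use",
]

tally = Counter(coded_answers)
print(tally.most_common(2))  # [('ease of use', 3), ('price', 2)]
```

With a hundred responses, a ranked tally like this is usually enough to show which one or two themes dominate, which is exactly what questions 2 and 3 are meant to surface.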