Webinar – How to Get Customer Feedback You Can Use

Here's the recording and slide deck from our webinar.

Thanks a bunch for taking the time to give us your feedback on our webinar, “How to Get Customer Feedback You Can Use.”

Your feedback will help us choose a topic for our next webinar—and make the experience even better!

We experienced technical difficulties during the webinar and were unable to see all of the questions that came in. We’re sorry for the inconvenience this may have caused you.

Please scroll past the video to see the webinar Q&A below!

Click here to download the slides >>

Watch/Listen to the recording below:

Q: Any recommendations on finding external benchmarks that don’t cost thousands via paid reports? 

Q: How do you Benchmark externally?

Q: How do you get competitors’ benchmarks?

Q: How do you know what your competitor’s NPS score is?

A: In order to access external benchmarking data, you’ve got to go through a service that collects the data from organizations across your industry. Remember: external benchmarking is when you compare your scores against your competitors’ scores. Many agencies will collect this information for you, so you’ll have to contact them directly to find out how much it costs. SurveyMonkey also provides benchmarking data at an affordable price. Click here to see if we have benchmarking data relevant to you.

For example, you can use the industry-standard Net Promoter Score (NPS) question type in SurveyMonkey, and we calculate your NPS for you. Let’s say it’s 23. You can get in touch with us to request that the industry standard appear alongside your own data, so you can see exactly where you stand.
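
For the curious, here’s a minimal Python sketch of how an NPS is calculated from the 0–10 ratings behind that question (the ratings below are made up for illustration):

    # Net Promoter Score: % promoters (ratings 9-10) minus % detractors (ratings 0-6)
    ratings = [10, 9, 8, 7, 6, 10, 9, 3, 8, 9]  # hypothetical 0-10 responses

    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)

    nps = 100 * (promoters - detractors) / len(ratings)
    print(f"NPS: {nps:.0f}")  # 5 promoters, 2 detractors -> NPS of 30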

(Just curious about the average NPS score across industries? We actually have NPS benchmarks available, which can be found in our NPS Benchmarks Report. You can download the report here.)

Q: Please elaborate more on touchpoint visits.

A: Here’s a great article on how to identify your customer touchpoints.

And here’s a post on how you can use those touchpoints to map the customer journey. It’s all about taking a walk in the customer’s shoes so you can empathize with (and troubleshoot) customer problems.

We’ve also got an entire eGuide on how to listen to your customers, which you can download for free here.

Q: I use two client surveys in my work. I think the surveys are good and I want them to be great. Does SM offer a service for reviewing and improving surveys? Can I pay a “survey scientist”, like Sarah, to look at and improve my surveys? Thank you.

A: Yes! Our staff, trained by our survey scientists, can absolutely help you with your surveys. Click here to get started.

Q: Are millennials responding to surveys? If not, what’s the best way to reach them?

A: Great question. This is a big concern for many folks since millennials are a key market for them. The good news is that millennials are responding to surveys. The bad news is that their response rates are generally lower than those of other age groups. But there are a few things that can increase their participation:

  • Keep the survey as short as possible to prevent drop-offs and keep the completion rate high.
  • Millennials are the group with the highest smartphone penetration, and they spend more time on their smartphones than any other age group, so they will likely respond to your survey on a mobile device. Make sure your survey’s formatting is mobile-friendly; that will make it easier for them to respond. This blog post has more information on how to do that.
  • We talked about incentives in the webinar, so try offering an incentive that is more appealing to them, like iTunes gift cards.
  • Another method is called “snowball sampling.” This wouldn’t give you a representative population per se, but you’d be able to reach a lot of millennials. With snowball sampling, you find a few millennials, ask them to complete the survey and to spread the word about it to a few friends, then ask those respondents to pass it along to a few of their friends (hence “snowball”).

Q: Are surveys built on SurveyMonkey responsive to the various devices?

A: SurveyMonkey surveys are mobile-optimized, which means that survey questions will be displayed in a mobile-friendly format. But as we mentioned in the webinar, mobile optimization can only do so much on screens that are a fraction of the size of desktop monitors. The best thing to do is to preview your survey on a mobile device and make sure it looks okay.

Note that while SurveyMonkey surveys are mobile-optimized, they are not mobile-responsive. A mobile-responsive survey would display questions in completely different formats depending on the device (e.g., a desktop respondent gets a matrix question, while a mobile respondent would see that same matrix automatically broken out into individual questions).

Q: Are you going to offer a quiz type survey which can be graded?

A: Our analysis tools are designed to calculate response counts and percentages per question—there isn’t a way to automatically calculate a grade or score for each respondent. Instead, you can export the survey results and grade the quiz manually.
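
If you export your results to CSV, a rough Python sketch of manual grading might look like this (the file name, column names, and answer key here are all hypothetical; adjust them to match your actual export):

    import csv

    # Hypothetical answer key: question column name -> correct answer
    answer_key = {"Q1": "Paris", "Q2": "42", "Q3": "True"}

    with open("survey_export.csv", newline="") as f:  # hypothetical file name
        for row in csv.DictReader(f):
            # Count how many of the respondent's answers match the key
            score = sum(1 for q, correct in answer_key.items()
                        if row.get(q, "").strip() == correct)
            print(f"{row['Respondent ID']}: {score}/{len(answer_key)}")  # hypothetical ID column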

While this functionality isn’t currently built in, it is a popular request, so we will make sure our product team hears your feedback!

Q: Do you have to get their permission to use their statements as testimonials?

A: Yes, you should get permission from your respondents if you plan on using their statements as testimonials. A simple question at the end of your survey would suffice. Try something like “Would you be willing to have your open-ended responses to this survey published on our website as a customer testimonial, or not?”

Q: For an internal survey, do you recommend keeping them anonymous?

Q: What is your opinion of anonymous surveys?

A: So, a few notes on anonymity. It is very difficult for many surveys to be completely anonymous, but it is possible for a survey to be confidential. Often, for internal surveys within your own company, you’ll be able to break down the data far enough to figure out, for example, who that “female, aged 39, in the Finance department” is.

But you can definitely keep survey responses confidential by only sharing them on a “need to know” basis. Plus, you should tell your respondents that their responses will be kept confidential, which will encourage them to answer your questions honestly.

Q: For double-barreled questions, is it also a possibility to reformulate the question in more general terms? Something like, ‘Generally speaking, how would you rate our business?’

A: You can of course do that! But, by making it more general, you lose the opportunity to dig a little deeper into the aspects of your business that are driving general satisfaction or dissatisfaction. So I’d ask both the general AND the specific questions about your business. It makes the survey longer (which is a trade-off), but often worth it in the end when you receive data you can make actionable decisions on.

Q: Thank you for the webinar, it is really valuable. Your biggest emphasis was on customer feedback, but how about market research? How can that data be interpreted? What are the differences? Thank you.

A: Market research is often a type of customer survey. Sometimes when you conduct market research (say, you want to see how much people are willing to pay for a new product you are about to release), you are surveying both your current customers AND people who you want to be future customers.

If you are surveying both these groups, one thing you’ll want to do in the analysis phase is to compare these two groups separately since their opinions on your product might differ. For example, your current customers may be willing to pay more for your new product because they are loyal fans. But, in the end, you may make more money by offering a lower price because you’ll gain more new customers!

It’s hard to figure out who your future customers might be, so I’d recommend finding those people either through your social media channels (like we mentioned in the webinar) or by looking into having your survey conducted on an online panel of respondents, like SurveyMonkey Audience. When you are collecting data, make sure you have a large enough sample size to compare the two groups.

There is a lot more that goes into market research, but hopefully that helps get you started.

Q: How do you measure a statistically significant response rate?

A: Unfortunately, there is no such thing as a “statistically significant response rate”. Statistical significance helps survey analysts determine whether two groups (or results from two different surveys) are meaningfully different from one another.

A few things factor into the formula for statistical significance (sample size and the percentage-point difference between the two numbers being the most important), but response rate is not one of them.
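
To make those factors concrete, here is a minimal Python sketch of one common check, a two-proportion z-test, using entirely hypothetical numbers:

    from math import sqrt

    # Hypothetical: 60% of 200 respondents were satisfied last survey,
    # 68% of 250 respondents were satisfied this survey.
    p1, n1 = 0.60, 200
    p2, n2 = 0.68, 250

    # Pooled proportion and standard error drive the test statistic
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

    z = (p2 - p1) / se
    print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 95% level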

As we mentioned in the webinar, it isn’t about having a certain response rate; a survey with a high response rate isn’t necessarily better than one with a low response rate. It is more about what your respondent population looks like. If it looks similar to your target population or customer base, then you probably have good data you can draw insights from. If it looks very different, you’ll want to rethink how you are collecting the data to get a respondent pool that looks more like your customer base.

Q: How many reminders to fill out the survey can you send out before you become a pest?

A: This is a hard question to answer, since it depends on which group of people you are surveying. But in general, you’ll get a majority of your responses from the initial survey invitation email. A second, reminder email will garner another large chunk of responses, but after that, reminders generally aren’t as successful.

So for most surveys, I’d consider sending out 3 emails: 1) initial email invitation; 2) reminder; 3) final reminder a day or two before you close out your survey. That is the sweet spot between spamming your respondents too much and still getting enough responses to be confident in your results.

Q: If I send out a survey twice a year is it good to send the same questions so I can compare the same data?

A: Yes, absolutely! This is the best way to ensure that differences in your results aren’t due to changes in question wording. Better yet, keep the order of the questions the same as well: which question comes first can often influence answers to subsequent questions, so a fixed order minimizes order effects.

Q: If providing a scaled response ranging from something like satisfied to dissatisfied, do you find it is best to provide 4 options (such that there is no neutral option) or 5 options so there is a neutral middle option?

A: It really depends on the goals of your survey. Generally, the standard is to provide 5 options with a midpoint of “neither satisfied nor dissatisfied”; that is the most common approach, particularly since neutrality is a valid opinion to have. But there are exceptions:

  • Trend: if you’ve already asked this same question with a 4-point scale and you want to be able to compare the results from one survey to the next, it is better to stick with the 4-point scale so you have a direct comparison.
  • Forced choice: if you want to force people to choose between the two opposites rather than give them the easy out of a neutral option, a 4-point scale might be a better fit for your survey. For example, news polls often do this when they ask about support or opposition to bills and proposals that Congress is considering; they omit a “no opinion” or neutral option and simply ask people whether they strongly support, somewhat support, somewhat oppose, or strongly oppose something.

Q: Is it a good idea to list the positive responses first to give the survey a positive spin? (Or is that like leading questions?) For example: how would you rate this webinar? Very Good, Good, etc., OR Very Poor, Poor, etc. I typically like to list the positive stuff FIRST in my answer set. Is that a good practice?

A: Most survey creators list the positive option first, so respondents are used to having it come first. Therefore, it isn’t leading to list the positive options first.

If you are concerned about “leading” respondents into a more positive response, you could consider using the “flip answer options” feature, which can be found under the “options” tab when you are creating a survey question. What this feature does is randomly flip the answer options for each respondent. So one respondent may see: Excellent, Very Good, Good, Fair, Poor. While another may see: Poor, Fair, Good, Very Good, Excellent.
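
Conceptually, the feature behaves something like this Python sketch (a simulation of the idea, not SurveyMonkey’s actual implementation):

    import random

    options = ["Excellent", "Very Good", "Good", "Fair", "Poor"]

    # Each respondent gets either the original or the reversed order, 50/50
    for respondent in range(1, 5):
        shown = options if random.random() < 0.5 else options[::-1]
        print(f"Respondent {respondent} sees: {', '.join(shown)}")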

Q: Is it a good idea to send a reminder if your initial response is lower than desired?

A: Yes, of course! But there is a fine line between sending so many reminders that you annoy folks and sending so few that you end up with a low response rate.

As with the earlier question on reminders, I’d recommend sending out 3 emails: 1) initial email invitation; 2) reminder; 3) final reminder a day or two before you close out your survey. That is the sweet spot between spamming your respondents too much and still getting enough responses to be confident in your results.

Q: Is there a tip to ensure that customers reply honestly and don’t just give random answers?

A: There are two big things that you can do:

  • Make sure the survey is easy to complete. Respondents often get “survey fatigue” when a survey is too long and start to click on random answers just to finish it. So keep your survey short. Another way to prevent random clicking or “straightlining” (where people click the same answer for every single question; see the sketch after this list) is to use matrix questions cautiously. Keep them small (no more than 5 rows and 5 columns), or better yet, break them out into individual questions.
  • Another easy thing you can do is ensure that their responses remain confidential. That way, they know their responses aren’t going to be shared with the world, and they’ll be more honest. Note, though, that while it is easy to ensure confidentiality, I wouldn’t go so far as to claim anonymity, since it is often easy to trace responses back to an individual by running a few crosstabs.
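
If you want to check your data for straightlining after the fact, here’s a rough Python sketch (it assumes each response is simply a list of answers to your scale questions; the data is made up):

    # Flag respondents who gave the identical answer to every scale question
    responses = {
        "resp_1": [4, 4, 4, 4, 4],  # straightliner
        "resp_2": [5, 3, 4, 2, 4],
        "resp_3": [2, 2, 2, 2, 2],  # straightliner
    }

    for rid, answers in responses.items():
        if len(set(answers)) == 1:
            print(f"{rid} may be straightlining")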

Q: When you ask for gender, is it helpful or not helpful to include “prefer not to answer”? Should you provide that option?

A: Generally I’d stay away from offering a “prefer not to answer” option because that could give people an easy way out. Instead, I would not make the question required, allowing the respondent to skip it if they don’t want to answer that question.

But of course, there are exceptions to this. For example, if you were doing a survey on gender identity, you’d want to include that option, along with options like “transgender male, transgender female, etc.” in addition to just “male, female”.

Q: What can you do to survey a customer base that is largely anti-technology (e.g. seniors)?

A: There are lots of ways to contact customers who aren’t internet-savvy. Here are a couple:

Telephone: If you have telephone numbers for your customers, you can call them and survey them directly. You can still use SurveyMonkey to collect this information: set up a collector to accept multiple responses, then have the person asking the questions open the survey you’ve set up in SurveyMonkey and click the answer options that the respondent indicates over the phone.

Paper: You can also export a PDF of your survey and then use the Manual Data Entry option to enter the data into SurveyMonkey. With a paper survey, you can print out a few copies and pass them out to your customers whenever you see them in person. Or, if you have access to their mailing addresses, you could mail them the survey and ask them to return it to you.

With these options, you can still use online surveys for those customers who are able to respond online, without limiting yourself to just that respondent pool, and analyze all the results together!

Q: Have you found any data that shows sending a survey increases the number of complaints that come in over the following time period?

A: It depends. If you have never collected customer feedback through a survey before, you may see an increase in the number of complaints. But you’ll at least have created your first internal benchmark that you can use for subsequent surveys.

But if you have sent the survey before, you probably won’t find an increase in the number of complaints just from sending it out again. If you do find an increase, it most likely indicates an issue you may need to address.

Q: I am in a service/education oriented non-profit organization… is there anything different I should keep in mind?

A: Some of the touchpoints mentioned during the webinar aren’t relevant (for example, sales data). But that doesn’t mean you can’t use surveys to gather feedback that will help your organization improve. You can still survey your employees and perhaps the people your non-profit serves (e.g., students).

The survey tips we mentioned are very broad and can be applied to any survey, so feel free to apply them to your use case as well.