
2 Tips for Writing Agree/Disagree Survey Questions

Your healthcare provider cares about you—and they also care that you come back. So after a doctor visit, many of you may receive a feedback survey asking about your experience. A key question might look like this:

In the field of survey research, this is called an agree/disagree question, named for the answer options respondents are asked to consider. This type of question has been extremely popular among survey researchers for decades. Why?

Well, they’re easy to write—and it’s a pretty standard question that’s used across industries. That being said, research has revealed some challenges with this question type. We’ll go over two such challenges and share our tips on how to overcome them in a snap.

The seemingly simple agree/disagree construct is prone to what's called acquiescence response bias. What we mean by that is, in general, people who answer surveys like to be seen as agreeable—so they'll say they agree when given the choice, regardless of the actual content of the question.

Another challenge is that the agree/disagree question appears to be so straightforward that researchers sometimes write a whole bunch of questions using the same answer choice options. Then, they put the questions into a matrix question type.

Since an agree/disagree matrix question type packs a lot of information in a small space—it’s essentially one question that asks respondents to agree or disagree with a series of statements—respondents may not be careful about how they answer these questions.

We call this phenomenon straight-lining. (Basically, straight-lining is when a respondent moves down a series of statements too quickly, selecting the same answer choice for all.)

Obviously, this is a major problem if you’re trying to collect accurate data. And in the case of your doctor’s office, straight-lined responses mean you won’t be able to use the data you collect to improve services and make informed decisions.
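If you’re handling the response data yourself, one rough way to spot straight-lining is to look for respondents who picked the identical answer for every statement in a matrix. Here’s a minimal sketch in Python using pandas, with made-up column names and coded answers—just an illustration of the idea, not a built-in SurveyMonkey feature:

```python
import pandas as pd

# Hypothetical matrix-question responses: one row per respondent,
# one column per statement, answers coded 1 (strongly disagree) to 5 (strongly agree).
responses = pd.DataFrame({
    "listened_carefully": [5, 4, 3, 5],
    "spent_enough_time":  [5, 2, 3, 5],
    "explained_clearly":  [5, 5, 3, 4],
})

# A respondent who gives the exact same answer to every statement is a
# candidate straight-liner (a crude signal, not proof of careless answering).
is_straight_liner = responses.nunique(axis=1) == 1

print(responses[is_straight_liner])
```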

In order to force all questions into the same format, survey designers need to come up with a statement to rate for each question. In the example above, respondents are asked to rate the statement that their health provider spent enough time with them. But the same underlying question could just as easily be framed around “not enough time” or “too much time.”

Say one healthcare clinic wants to compare its level of patient satisfaction with another’s—if their question wordings differ, it’s difficult to make a meaningful comparison.

So if you’re a health provider, how should you ask patients about their satisfaction levels? Let’s look at another question.

This type of question is called an item-specific question, meaning the response options are tailored to the question itself: different questions have different sets of response options.
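To make the contrast concrete, here’s a quick sketch of the same underlying question written both ways. The wording and answer choices below are our own illustration (assumed, not taken from any real questionnaire):

```python
# Agree/disagree format: every question shares the same generic scale.
agree_disagree_question = {
    "text": "My health provider spent enough time with me.",
    "options": [
        "Strongly agree", "Agree", "Neither agree nor disagree",
        "Disagree", "Strongly disagree",
    ],
}

# Item-specific format: the scale is tailored to what's actually being measured.
item_specific_question = {
    "text": "How much time did your health provider spend with you?",
    "options": [
        "Far too little", "Somewhat too little", "About the right amount",
        "Somewhat too much", "Far too much",
    ],
}
```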

Research has shown that item-specific rating scales are much less prone to acquiescence response bias. Also, a group of researchers from Spain and the U.S. experimentally compared item-specific and agree/disagree scales in 14 European countries and found that, in general, the reliability and validity of item-specific scales are superior to those of agree/disagree scales.

Now it’s your turn to design a rating scale. And don’t forget to check out the SurveyMonkey Question Bank when designing your survey. There’s a ton of item-specific scales for you to choose from out of the thousands of methodology-certified questions available!

As always, thanks for reading and leave your comments or questions for us below.