Writing survey questions
Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.
Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.
Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.
For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.
Question development
There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time, so we regularly update these trends to better understand whether people’s opinions are changing.
At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as focus groups, cognitive interviews, pretesting (often using an online, opt-in sample), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.
Measuring change over time
Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.
When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see question wording and question order for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.
The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.
Open- and closed-ended questions
One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.
For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.
When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see “High Marks for the Campaign, a High Bar for Obama” for more information.)

Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based on that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking and how they view a particular issue, or bring to light issues the researchers were not aware of.
When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.
In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used; in fact, more categories are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question in a self-administered survey because they can expect to see their religious group within that list.
In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).
Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized so that the options are not presented in the same order to each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. Because answers to questions are sometimes affected by the questions that precede them, presenting items in a different order to each respondent ensures that each item is asked in each context (first, last or any position in between) the same number of times. This does not eliminate order effects, but it does spread the bias randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents.
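To make the mechanics concrete, here is a minimal Python sketch of per-respondent option randomization. The issue list and the seed-by-respondent-ID scheme are illustrative assumptions, not a description of the Center’s actual survey software.

```python
import random

# Hypothetical option list, for illustration only.
ISSUES = ["The economy", "Health care", "Terrorism", "Energy policy", "Immigration"]

def randomized_options(options, respondent_id):
    """Shuffle the answer options independently for each respondent.

    Seeding the generator with the respondent ID keeps each respondent's
    order reproducible if the questionnaire has to be re-rendered.
    """
    rng = random.Random(respondent_id)
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

# Across many respondents, each option lands in each list position roughly
# equally often, which is what spreads any order effect randomly.
for respondent_id in (101, 102, 103):
    print(respondent_id, randomized_options(ISSUES, respondent_id))
```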
Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
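The half-sample reversal described above can be sketched the same way. Again, this is an illustration under assumed data structures, not production survey code.

```python
import random

# Ordinal categories in their natural order, following the abortion example.
SCALE = ["Legal in all cases", "Legal in most cases",
         "Illegal in most cases", "Illegal in all cases"]

def scale_order(rng):
    """Show the scale in its natural order to a random half of the sample
    and reversed to the other half. Either way the categories stay in
    order, so the continuum remains easy to follow."""
    return list(SCALE) if rng.random() < 0.5 else list(reversed(SCALE))

rng = random.Random(7)
for _ in range(3):
    print(scale_order(rng))
```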
Question wording
The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.
An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties, ” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.
There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:
First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.
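One way to think about the forced-choice alternative is as a mechanical transformation: each item in a check-all list becomes its own yes/no question. A minimal sketch, with a hypothetical item list and wording:

```python
# Hypothetical items that might otherwise form one "select all that apply"
# question; neither the topic nor the wording comes from an actual survey.
ITEMS = ["Television", "Radio", "Print newspapers", "Social media"]

def as_forced_choice(items):
    """Expand a check-all list into one forced-choice yes/no question per
    item, so every item receives an explicit judgment from the respondent."""
    return [{"prompt": f"Do you regularly get news from {item.lower()}?",
             "options": ["Yes", "No"]}
            for item in items]

for question in as_forced_choice(ITEMS):
    print(question["prompt"], question["options"])
```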
It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.
In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose not allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.
Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”
We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two forms of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
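In practice, a split-form experiment amounts to random assignment plus a comparison of results across forms. The sketch below shows one conventional way to test whether a difference between forms is statistically meaningful, a two-proportion z-test; the counts are invented for illustration, and this is not the Center’s analysis code.

```python
import math
import random

def assign_forms(respondent_ids):
    """Randomly assign each respondent to form A or form B so the two
    half-samples are statistically comparable."""
    rng = random.Random(42)
    return {rid: rng.choice("AB") for rid in respondent_ids}

def two_proportion_z(x_a, n_a, x_b, n_b):
    """z statistic comparing the share giving a particular answer on form A
    vs. form B; |z| > 1.96 indicates significance at the 5% level under the
    usual large-sample assumptions."""
    p_a, p_b = x_a / n_a, x_b / n_b
    pooled = (x_a + x_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented counts: 680 of 1,000 answer "favor" on form A vs. 430 of 1,000
# on form B.
print(round(two_proportion_z(680, 1000, 430, 1000), 1))  # ~11.2, clearly significant
```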

One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when an interviewer is present than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced-choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.
One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).
Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.
Question order
Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).
One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.
For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).

An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.

Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.
Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).
Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.
Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).
The order in which questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see measuring change over time for more information).
A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.
Great survey questions: How to write them & avoid common mistakes
Updated January 9, 2023
Learning how to write survey questions is both art and science. The wording you choose can make the difference between accurate, useful data and just the opposite. Fortunately, we’ve got a raft of tips to help.
Figuring out how to make a good survey that yields actionable insights is all about sweating the details. And writing effective questionnaire questions is the first step.
Essential for success is understanding the different types of survey questions and how they work. Each format needs a slightly different approach to question-writing.
In this article, we’ll share how to write survey questionnaires and list some common errors to avoid so you can improve your surveys and the data they provide.
Survey question types
Did you know that Qualtrics provides 23 question types you can use in your surveys? Some are very popular and used frequently by a wide range of people from students to market researchers, while others are more specialist and used to explore complex topics. Here’s an introduction to some basic survey question formats, and how to write them well.
Multiple choice
Familiar to many, multiple choice questions ask a respondent to pick from a range of options. You can set up the question so that only one selection is possible, or allow more than one to be ticked.
When writing a multiple choice question…
- Be clear about whether the survey taker should choose one (“pick only one”) or several (“select all that apply”).
- Think carefully about the options you provide, since these will shape your results data.
- The phrase “of the following” can be helpful for setting expectations. For example, if you ask “What is your favorite meal?” and provide only the options “hamburger and fries” and “spaghetti and meatballs”, there’s a good chance your respondent’s true favorite won’t be included. If you add “of the following”, the question makes more sense.
Rank order
Asking participants to rank things in order, whether it’s order of preference, frequency or perceived value, is done using a rank order structure. There can be a variety of interfaces, including drag-and-drop, radio buttons, text boxes and more.
When writing a rank order question…
- Explain how the interface works and what the respondent should do to indicate their choice. For example “drag and drop the items in this list to show your order of preference.”
- Be clear about which end of the scale is which. For example, “With the best at the top, rank these items from best to worst.”
- Be as specific as you can about how the respondent should consider the options and how to rank them. For example, “Thinking about the last 3 months’ viewing, rank these TV streaming services in order of quality, starting with the best.”
Slider
Slider structures ask the respondent to move a pointer or button along a scale, usually a numerical one, to indicate their answers.
When writing a slider question…
- Consider whether the question format will be intuitive to your respondents, and whether you should add help text such as “click/tap and drag on the bar to select your answer.”
- Qualtrics includes the option for an open field where your respondent can type their answer instead of using a slider. If you offer this, make sure to reference it in the survey question so the respondent understands its purpose.
Text entry
Also known as an open field question, this format allows survey-takers to answer in their own words by typing into a comments box.
When writing a text entry question…
- Use open-ended question structures like “How do you feel about…?”, “If you said x, why?” or “What makes a good x?”
- Open-ended questions take more effort to answer, so use these types of questions sparingly.
- Be as clear and specific as possible in how you frame the question. Give them as much context as you can to help make answering easier. For example, rather than “How is our customer service?”, write “Thinking about your experience with us today, in what areas could we do better?”
Matrix table
Matrix structures allow you to address several topics using the same rating system, for example a Likert scale (Very satisfied / satisfied / neither satisfied nor dissatisfied / dissatisfied / very dissatisfied).
When writing a matrix table question…
- Make sure the topics are clearly differentiated from each other, so that participants don’t get confused by similar questions placed side by side and answer the wrong one.
- Keep text brief and focused. A matrix includes a lot of information already, so make it easier for your survey-taker by using plain language and short, clear phrases in your matrix text.
- Add detail to the introductory static text if necessary to help keep the labels short. For example, if your introductory text says “In the Philadelphia store, how satisfied were you with the…” you can make the topic labels very brief, for example “staff friendliness” “signage” “price labeling” etc.
Now that you know your rating scales from your open fields, there is one more widely used format to cover before we turn to the 7 most common mistakes to avoid when you write questions. We’ve also added plenty of survey question examples to help illustrate the points.
Likert scale questions
Likert scales are commonly used in market research when dealing with single-topic surveys. They’re simple to answer and are among the more reliable formats for combating survey bias. For each question or statement, subjects choose from a range of possible responses, typically:
- Strongly agree
- Agree
- Neither agree nor disagree
- Disagree
- Strongly disagree
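If you plan to analyze Likert data quantitatively, a common convention is to code the categories 1 through 5 and summarize. A minimal sketch; the coding scheme is a general analysis convention, not a Qualtrics-specific feature:

```python
from collections import Counter

# Conventional 1-5 coding for a five-point agree-disagree scale.
CODES = {"Strongly agree": 5, "Agree": 4, "Neither agree nor disagree": 3,
         "Disagree": 2, "Strongly disagree": 1}

def summarize(responses):
    """Tally Likert responses and compute the mean code. The mean treats
    the points as equidistant, an assumption discussed under mistake #6
    below."""
    codes = [CODES[r] for r in responses]
    return {"counts": dict(Counter(responses)),
            "mean": sum(codes) / len(codes)}

print(summarize(["Agree", "Strongly agree", "Agree", "Neither agree nor disagree"]))
```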
7 survey question examples to avoid
There are countless great examples of well-written survey questions, but how do you know whether your survey questions will perform well? We’ve highlighted the 7 most common mistakes made when attempting to get customer feedback with online surveys.
Survey question mistake #1: Failing to avoid leading words / questions
Subtle wording differences can produce great differences in results. For example, non-specific words and ideas can cause a certain level of confusing ambiguity in your survey. “Could,” “should,” and “might” all sound about the same, but may produce a 20% difference in agreement to a question.
In addition, strong words such as “force” and “prohibit” represent control or action and can bias your results.
Example: The government should force you to pay higher taxes.
No one likes to be forced, and no one likes higher taxes. This agreement scale question makes it sound doubly bad to raise taxes. When survey questions read more like normative statements than questions looking for objective feedback, any ability to measure that feedback becomes difficult.
Wording alternatives can be developed. How about simple statements such as “The government should increase taxes” or “The government needs to increase taxes”?
Example: How would you rate the career of legendary outfielder Joe DiMaggio?
This survey question tells you Joe DiMaggio is a legendary outfielder. This type of wording can bias respondents.
How about replacing the word “legendary” with “baseball”, as in: How would you rate the career of baseball outfielder Joe DiMaggio? A rating scale question like this gets more accurate answers from the start.
Survey question mistake #2: Failing to give mutually exclusive choices
Multiple choice response options should be mutually exclusive so that respondents can make clear choices. Don’t create ambiguity for respondents.
Review your survey and identify ways respondents could get stuck with either too many or no single, correct answers to choose from.
Example: What is your age group?
- 0–10
- 10–20
- 20–30
- 30–40
What answer would you select if you were 10, 20, or 30? Survey questions like this will frustrate a respondent and invalidate your results.
Example: What type of vehicle do you own?
This question has the same problem. What if the respondent owns a truck, hybrid, convertible, cross-over, motorcycle, or no vehicle at all?
Survey question mistake #3: Not asking direct questions
Questions that are vague and do not communicate your intent can limit the usefulness of your results. Make sure respondents know what you’re asking.
Example: What suggestions do you have for improving Tom’s Tomato Juice?
This question may be intended to obtain suggestions about improving taste, but respondents will offer suggestions about texture, the type of can or bottle, about mixing juices, or even suggestions relating to using tomato juice as a mixer or in recipes.
Example: What do you like to do for fun?
Finding out that respondents like to play Scrabble isn’t what the researcher is looking for, but it may be the response received. Nothing in the question communicates that the researcher is asking about movies vs. other forms of paid entertainment, so a respondent could take this question in many directions.
Survey question mistake #4: Forgetting to add a “prefer not to answer” option
Sometimes respondents may not want you to collect certain types of information or may not want to provide you with the types of information requested.
Questions about income, occupation, personal health, finances, family life, personal hygiene, and personal, political, or religious beliefs can be too intrusive and be rejected by the respondent.
Privacy is an important issue to most people. Incentives and assurances of confidentiality can make it easier to obtain private information.
While current research does not support that PNA (Prefer Not to Answer) options increase data quality or response rates, many respondents appreciate this non-disclosure option.
Furthermore, different cultural groups may respond differently. One recent study found that while U.S. respondents skip sensitive questions, Asian respondents often discontinue the survey entirely.
Examples of questions that can feel too intrusive include:
- What is your race?
- What is your age?
- Did you vote in the last election?
- What are your religious beliefs?
- What are your political beliefs?
- What is your annual household income?
These types of questions should be asked only when absolutely necessary. In addition, they should always include an option not to answer (e.g., “Prefer not to answer”).
Survey question mistake #5: Failing to cover all possible answer choices
Do you have all of the options covered? If you are unsure, conduct a pretest version of your survey using “Other (please specify)” as an option.
If more than 10% of respondents (in a pretest or otherwise) select “other,” you are probably missing an answer. Review the “Other” text your test respondents have provided and add the most frequently mentioned new options to the list.
Example: You indicated that you eat at Joe’s fast food once every 3 months. Why don’t you eat at Joe’s more often?
- There isn’t a location near my house
- I don’t like the taste of the food
- Never heard of it
This question doesn’t include other options, such as healthiness of the food, price/value or some “other” reason. Over 10% of respondents would probably have a problem answering this question.
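The 10% rule above is easy to operationalize when reviewing pretest results. A minimal sketch with made-up pretest answers:

```python
def missing_option_check(pretest_answers, threshold=0.10):
    """Flag a closed-ended question whose pretest "Other" share exceeds the
    10% rule of thumb, suggesting a common answer choice is missing."""
    other_share = sum(a == "Other" for a in pretest_answers) / len(pretest_answers)
    return other_share, other_share > threshold

# Made-up pretest data for the Joe's fast food question above.
answers = ["Taste", "Other", "Location", "Other", "Taste", "Other", "Location", "Taste"]
share, flagged = missing_option_check(answers)
print(f"{share:.0%} chose Other -> revisit the option list: {flagged}")
```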
Survey question mistake #6: Not using unbalanced scales carefully
Unbalanced scales may be appropriate for some situations and promote bias in others.
For instance, a hospital might use an Excellent - Very Good - Good - Fair scale where “Fair” is the lowest customer satisfaction point because they believe “Fair” is absolutely unacceptable and requires correction.
The key is to correctly interpret your analysis of the scale. If “Fair” is the lowest point on a scale, then a result slightly better than fair is probably not a good one.
Additionally, scale points should represent equidistant points on a scale. That is, the conceptual distance from one point to the next should be the same.
For example, researchers have shown the points to be nearly equidistant on the strongly disagree–disagree–neutral–agree–strongly agree scale.
Set your bottom point as the worst possible situation and top point as the best possible, then evenly spread the labels for your scale points in-between.
Example: What is your opinion of Crazy Justin’s auto-repair?
- Pretty good
- The Best Ever
This question puts the center of the scale at fantastic, and the lowest possible rating as “Pretty Good.” This question is not capable of collecting true opinions of respondents.
Survey question mistake #7: Not asking only one question at a time
There is often a temptation to ask multiple questions at once. This can cause problems for respondents and influence their responses.
Review each question and make sure it asks only one clear question.
Example: What is the fastest and most economical internet service for you?
This is really asking two questions. The fastest is often not the most economical.
Example: How likely are you to go out for dinner and a movie this weekend?
Even though “dinner and a movie” is a common term, this is two questions as well. It is best to separate activities into different questions or give respondents options that cover the combinations:
- Dinner and movie
- Dinner only
- Movie only
- Neither
5 more tips on how to write a survey
Here are 5 easy ways to help ensure your survey results are unbiased and actionable.
1. Use the Funnel Technique
Structure your questionnaire using the “funnel” technique. Start with broad, general interest questions that are easy for the respondent to answer. These questions serve to warm up the respondent and get them involved in the survey before giving them a challenge. The most difficult questions are placed in the middle – those that take time to think about and those that are of less general interest. At the end, we again place general questions that are easier to answer and of broad interest and application. Typically, these last questions include demographic and other classification questions.
2. Use “Ringer” questions
In social settings, are you more introverted or more extroverted?
That was a ringer question and its purpose was to recapture your attention if you happened to lose focus earlier in this article.
Questionnaires often include “ringer” or “throw away” questions to increase interest and willingness to respond to a survey. These questions are about hot topics of the day and often have little to do with the survey. While these questions will definitely spice up a boring survey, they require valuable space that could be devoted to the main topic of interest. Use this type of question sparingly.
3. Keep your questionnaire short
Questionnaires should be kept short and to the point. Most long surveys are not completed, and the ones that are completed are often answered hastily. A quick look at a survey containing page after page of boring questions produces a response of “there is no way I’m going to complete this thing.” For someone to finish a long questionnaire, they must be very interested in the topic, be an employee, or be paid for their time. Web surveys have some advantages because the respondent often can’t view all of the survey questions at once. However, if your survey’s navigation sends them page after page of questions, your response rate will drop off dramatically.
How long is too long? The sweet spot is to keep the survey to less than five minutes. This translates into about 15 questions. The average respondent is able to complete about 3 multiple choice questions per minute. An open-ended text response question counts for about three multiple choice questions depending, of course, on the difficulty of the question. While only a rule of thumb, this formula will accurately predict the limits of your survey.
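That rule of thumb is simple enough to turn into a quick length check. A sketch using the constants from the paragraph above:

```python
def estimated_minutes(multiple_choice, open_ended):
    """Apply the rule of thumb: about 3 multiple choice questions per
    minute, with each open-ended question counting as roughly 3 multiple
    choice questions."""
    return (multiple_choice + 3 * open_ended) / 3

# 12 multiple choice + 2 open-ended = 18 question-equivalents,
# about 6 minutes: just past the 5-minute sweet spot.
print(estimated_minutes(12, 2))  # 6.0
```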
4. Watch your writing style
The best survey questions are always easy to read and understand. As a rule of thumb, the level of sophistication in your survey writing should be at the 9th to 11th grade level. Don’t use big words. Use simple sentences and simple choices for the answers. Simplicity is always best.
5. Use randomization
We know that being listed first on an election ballot increases a candidate’s chance of being elected. Similar bias occurs in all questionnaires when the same answer appears at the top of the list for each respondent. Randomization corrects this bias by rotating the order of answer choices (and of items in matrix questions) randomly for each respondent.
While not totally exhaustive, these seven mistakes are common offenders in survey question writing, and the five tips above should steer you in the right direction.
Focus on creating clear questions and having an understandable, appropriate, and complete set of answer choices. Great questions and great answer choices lead to great research success. To learn more about survey question design, download our eBook, The Qualtrics survey template guide, or get started with a free survey account with our world-class survey software.
Best practices for writing good survey and poll questions
Getting insightful and actionable answers to your surveys and polls starts with asking the right questions.
If you take the time to write good survey questions , you’ll be well on the path to getting the reliable responses you need to reach your goals. Writing good survey questions isn’t difficult. We’ve created this guide so you can understand best practices and get quick tips to help you create good survey and poll questions—ones that generate useful insights and data.
Using templates that include survey questions can speed up your survey creation process, ensuring you are asking good questions that elicit useful answers from the audiences and demographics you are targeting. You can then analyze and present your survey results in a variety of formats, such as a word cloud, which creates a visual representation of the most common words and phrases in your responses. Success in your surveys depends first on choosing the right types of survey questions.
Open-ended questions ask respondents to add personal comments in the form of a text box, whereas closed-ended questions give respondents a fixed set of options to choose from. These closed-ended response choices can be simple yes/no options, multiple choice options, Likert rating scales, and more.
Get a deep dive on the difference between open-ended and closed-ended questions, so you can use them with confidence.
Within this guide, you’ll learn how to ask your questions to elicit the most useful responses. To help you write a top-notch questionnaire, we’ll cover:
- Ways to write great survey questions using neutral answer options
- Examples of ensuring your surveys have a balanced set of answer options
- How to avoid asking for two things at once
- Creating good survey questions that are closed-ended
- Writing a survey that uses a diverse set of questions
- How to ensure you’re sending a good survey
How weak questions impact poll results
Good survey questions can help you achieve your goals, but poorly written questions can undermine your efforts and potentially skew your results, especially in single-question polls. Weak questions can range from those that confuse respondents to questions that are hampered by bias, or lead respondents toward a particular response.
Weak questions can reduce survey participation and make it more difficult to capture reliable data. Relying on straightforward multiple choice questions can serve as a strong foundation for crafting good survey questions that generate solid data. The bottom line: if you’re launching an online poll that only has one question, you’ve got to get it right.
Pro tip: Use customization features to brand your polls and add credibility with respondents. Designing polls that include your logo, brand colors, or a custom theme ensures the questionnaire is recognizable to your target audience.
7 tips for writing a great survey or poll
Whether you are conducting market research surveys, gathering large amounts of data to analyze, collecting feedback from employees, or running online polls, you can follow these tips to craft a survey, poll, or questionnaire that gets results.
1. Ask more closed-ended questions than open-ended questions
If you are looking for data that is easy to capture and analyze, closed-ended questions can be your ticket to success. Closed-ended questions generate quantitative data that can be used to measure variables, and the answers are objective and conclusive. Another benefit: the data derived from this question type can be presented in very accessible formats showing the overall percentages for each answer; graphs and charts work best.
Open-ended questions generate qualitative data, which requires more effort and time for respondents to provide than closed-ended answers. Qualitative data is also more time consuming to analyze because it does not generate clear-cut numerical results. When thinking about how to write a great survey, you should consider minimizing the use of open-ended questions. This will also help increase your completion rates: if respondents feel they have to spend too much time writing in their answers, they’ll leave your survey early.
In general, when writing a survey, you should try to avoid asking more than two open-ended questions per survey or poll. If possible, put them on a separate page at the end. That way, even if a respondent drops out of the survey, you’re able to collect their responses from the questions on previous pages. No doubt, open-ended questions can generate extremely useful insights, but it’s important to be strategic in the ways you use them to get the maximum benefit.
Get more survey guidelines to help you with survey creation.
2. Ensure your survey questions are neutral
Putting an opinion in your question prompt is asking a leading question. This can damage your survey data because it can influence respondents to answer in a way that doesn’t reflect how they really feel. Say you asked the leading question:
“We think our customer service representatives are really awesome. How awesome do you think they are?”
The question seems to convey an opinion that you want respondents to agree with. Do you know if your respondents actually feel like your customer service representatives are awesome? If you’re looking to get feedback on your customer service representatives, then this can be a serious problem because you’re not giving respondents the opportunity to refute the premise that the reps are awesome.
You can make the tone of your survey question more objective by editing it as follows:
“How helpful or unhelpful do you find our customer service representatives to be?”
Learn more about how to prevent bias from impacting your surveys.
3. Keep a balanced set of answer choices
Respondents need a way to provide honest and thoughtful feedback. Otherwise, the credibility of their responses is at risk.
The answer choices you include can be another potential source of bias. Let’s assume we included the following as answer options when asking respondents how helpful or unhelpful your customer service reps are:
- Extremely helpful
- Very helpful
You’ll notice that there isn’t an opportunity for respondents to say that the reps aren’t helpful. Writing good survey questions involves using an objective tone. This means adopting a more balanced set of answer options, like the following:
- Extremely helpful
- Very helpful
- Neither helpful nor unhelpful
- Very unhelpful
- Extremely unhelpful
4. Don’t ask for two things at once
Confusing respondents is equally as bad as influencing their answers. In both cases, they’ll choose an answer that doesn’t reflect their true opinions and preferences.
A common culprit in causing confusion is the double-barreled question. It asks respondents to assess two different things at the same time. For example:
“How would you rate our customer service and product reliability?”
Customer service and product reliability are two separate topics. Including both in the same question can push the respondent to either evaluate one or to skip the question altogether. Either way, you will be hard-pressed to get an answer that is useful or relevant. Your products may be extremely reliable, but what is weighing on a respondent’s mind is a recent bad customer service experience.
Fortunately, there’s an easy fix here. Simply separate these two topics into their own closed-ended questions:
- “How would you rate our customer service?”
- “How would you rate our product’s reliability?”
This approach helps you pinpoint problem areas while also getting a clear sense of where you are meeting or exceeding customer expectations.
5. Keep your questions different from each other
Imagine if someone asked you the same question over, and over, and over again. You’d probably get annoyed, right? That’s how respondents may feel if you repeatedly ask questions that use the same question prompt or answer choices. It leads respondents to either leave your survey or engage in straightlining, which is answering your questions without putting much thought into them.
A thoughtless answer can be more damaging than no answer at all, as it does not represent the true feelings of the respondent. You can proactively address this by varying the types of questions you ask, how you ask them, and by spacing out questions that look similar. Using one of our expert-written survey templates can help you present a variety of questions posed in different ways to avoid this pitfall.
6. Let most of your questions be optional to answer
Respondents may not know the answers to all of your questions. And there may be some questions they simply don’t feel comfortable answering. But, you still want them to take the survey, and provide valuable feedback.
Keep both of these things in mind when deciding which questions to require answers to. And when you’re unsure whether to make a certain question optional or required, lean toward making it optional. We’ve found that forcing respondents to answer your questions makes them more likely to quit your survey or select an answer at random.
7. Do a test drive
As a survey creator, there’s no worse feeling than finding mistakes in your survey once it has already been sent to respondents. In some instances, this may require you to scrap the survey altogether and start anew. Another option might be to send a revised survey, but this can reduce trust and participation among respondents, and can create a scenario in which some people complete the original survey while others respond to the revised version.
Prevent this situation from happening to you by sharing your survey in advance with colleagues, friends, and anyone else who can be a fresh set of eyes for you. An objective reviewer’s opinion can be all it takes to spot mistakes in your survey. Having others review the survey can also weed out any potential bias or wording that might be offensive or off-putting to a particular demographic.
Bonus: Writing poll questions for Zoom
You can make the most of your Zoom video conferences by adding poll questions to engage participants and capture valuable feedback. You can conduct icebreaker polls to get your audience quickly engaged, as well as multiple choice questions and quizzes.
An icebreaker poll is a simple, fun and engaging question that helps get people engaged from the start of your meeting. Icebreaker polls often feature “What’s your favorite …” questions that focus on favorite foods, activities, movies, etc.
You can also give participants options to help guide the meeting, asking them “Would you rather … ?” and then provide some choices. The key to writing good Zoom poll questions is to keep the questions brief and snappy. Don’t go overboard on polling, but include enough within your call to create an interactive environment and gather information that can be useful to you moving forward.
Ultimately, polling is a simple yet powerful way to gather attendee sentiment, and give everyone an equal voice. As responses come in, they will be displayed within Zoom chat so you can gauge their experiences instantly.
Learn more about how you can pair SurveyMonkey with Zoom to lead more productive virtual meetings.
Gain confidence writing good survey questions
Writing a good survey means asking questions in a way that lets respondents answer truthfully. At the same time, it means providing respondents with a quick and easy survey-taking experience. The better your surveys get, the better your responses become.
Explore our resources for creating and analyzing surveys, no matter who you’re trying to reach.
Tips for writing good survey questions
You may refer to our Question Library for suggestions. There are more than 300 sample questions covering topics like inclusion, open educational resources, classroom activities, and gathering midterm feedback.
Here are some suggestions when writing your own questions:
- Use simple, direct language
- Put easy questions first but be aware of the flow of the questions
- Ask one thing per question – avoid the use of the word “and”
- Ask questions that you need to know the answers to, not just ones you would like to know
- Make sure that only one of the answer choices can apply, rather than multiple selections
- Make sure that there is a response choice for every possible answer
- You may need to include an “Other” option
- Changing scale types and wording partway through a survey can be confusing; some respondents may not notice the change and thus answer erroneously
- It is a good idea to emphasize the scale and its direction in the instructions
- Survey fatigue can be a real issue
- It is a good idea to let students know how many questions the survey has and/or the anticipated time needed to complete it
- Asking questions that would reveal the identity of the student. Questions about program enrollment or expected grade are acceptable but be wary of how class size can affect student anonymity.
- Asking questions that are leading, emotional, or use evocative language
- Do not ask students if they would like to have or do something if it is an impossibility or something that you would not do
- Do not give false hope
How to Write Good (Even Great!) Survey Questions

Updated: June 15, 2021
Published: July 31, 2019
You know that customer feedback is important. You know that, in order to evaluate the happiness and loyalty of your customers, you need to know what they truly think about your product or service.

So, how do you get those insights? A customer survey.
Once you know how to create a survey , you might be wondering what types of questions you should ask. The first question is one you need to answer -- what information are you trying to find out?
Once you know the answer to that question, you can start writing your own survey questions.
How to Write a Survey
The best surveys are simple, concise, and organized in a logical order. They contain a user-friendly balance of short-answer and multiple-choice questions that derive specific information from the participant. Additionally, most questions should be optional and framed in a manner that avoids any bias.
Picking the right questions can be difficult because you want to make sure your survey contains an even balance of different question types.
Keep reading this blog post to learn about different survey questions types, what information they can tell you, and examples of each -- along with some hard and fast best practices to follow.
Types of Survey Questions
- Multiple Choice
- Rating Scale
- Likert Scale
- Ranking
- Semantic Differential
- Dichotomous
- Close-Ended
- Open-Ended
1. Multiple Choice
Multiple choice survey questions are questions that offer respondents a variety of different responses to choose from. These questions are usually accompanied by an "other" option that the respondent can fill in with a custom answer if the options don't apply to them.
Multiple choice survey questions are among the most popular types of survey questions because they're easy for respondents to fill out, and the results produce clean data that's easy to break out and analyze. Ask multiple-choice questions to learn about your customers' demographic information, product or service usage, and consumer priorities.
Single-Answer
Single-answer multiple choice questions only allow respondents to select one answer from a list of options. These frequently appear online as circular buttons respondents can click.
Multiple-Answer
Multiple-answer multiple choice questions allow respondents to select all responses that apply from a list of options. These frequently appear as checkboxes respondents can select.
2. Rating Scale
Rating scale questions (also known as ordinal questions) ask respondents to rate something on a numerical scale assigned to sentiment. The question might ask respondents to rate satisfaction or happiness on a scale of 1-10, and indicate which number is assigned to positive and negative sentiment.
Rating scale survey questions are helpful to measure progress over time. If you send the same group a rating scale several times throughout a time period, you can measure if the sentiment is trending positive or negative.
Use rating scale questions to gauge your company's Net Promoter Score® (NPS); the NPS question is a common example of a rating scale survey question.
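The NPS formula itself is standard: the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). Here's a minimal Python sketch of that calculation; the function name and sample data are ours for illustration, not part of any survey platform's API:

```python
def nps(scores):
    """Compute Net Promoter Score from 0-10 'how likely to recommend' ratings.

    Promoters score 9-10, detractors 0-6, passives 7-8; NPS is the
    percentage of promoters minus the percentage of detractors (-100 to 100).
    """
    if not scores:
        raise ValueError("no responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 5 promoters, 3 passives, 2 detractors out of 10 responses -> NPS of 30
print(nps([10, 9, 9, 10, 9, 7, 8, 7, 3, 6]))  # 30.0
```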
3. Likert Scale
Likert scale survey questions evaluate whether a respondent agrees or disagrees with a statement. Usually appearing on a five- or seven-point scale, the scale might range from "not at all likely" to "highly likely," or "strongly disagree" to "strongly agree."
Use Likert scale questions to evaluate customer satisfaction .
4. Ranking
Ranking survey questions ask respondents to rank a variety of different answer options in terms of relative priority or importance to them. Ranking questions provide qualitative feedback about the pool of respondents, but they don't offer the "why" behind the respondents' choice.
Use ranking questions to learn about customer needs and behavior to analyze how they're using your product or service, and what needs they might still have that your product doesn't serve.
5. Semantic Differential
Semantic differential survey questions also ask respondents to rate something on a scale, but each end of the scale is a different, opposing statement. So, instead of answering the question "Do you agree or disagree with X?" respondents must answer questions about how something makes them feel or how they perceive it.
For example, a semantic differential question might ask, "On a scale of 1 to 5, how would you evaluate the service you received?" with 1 being "terrible" and 5 being "exceptional." These questions are all about evaluating respondents' intuitive reactions, but they can be tougher to evaluate than more cut-and-dry responses, like agreement or disagreement.
Use semantic differential questions to get clear-cut qualitative feedback from your customers.
Likert Scale vs. Semantic Differential
Both Likert scale and semantic differential questions are asked on a scale respondents have to evaluate, but the difference lies in how the questions are asked. With Likert scale survey questions, respondents are presented with a statement they must agree or disagree with. With semantic differential survey questions, respondents are asked to complete a sentence, with each end of the scale consisting of different and opposing words or phrases.
6. Dichotomous
Dichotomous survey questions offer only two responses that respondents must choose between. These questions are quick and easy for the respondents to answer and for you to analyze, but they don't leave much room for interpretation, either.
Use dichotomous questions to get more clear-cut data that's quick and simple to analyze.
7. Close-Ended
Close-ended survey questions are questions that have a set number of answers that respondents must choose from. All of the questions above are examples of close-ended survey questions. Whether the choices are multiple or only two, close-ended questions must be answered from a set of options provided by the survey creator.
8. Open-Ended
Where the survey types above all have close-ended answers that you input as different options to choose from, open-ended questions are usually accompanied by an empty text box, where the respondent can write a custom answer to the question.
This qualitative feedback can be incredibly helpful to understand and interpret customer sentiment and challenges, but it isn't the easiest data to interpret if you want to analyze trends or changes in opinions. You need humans to interpret qualitative feedback to analyze for sentiment, tone, or spelling errors.
We suggest pairing open-ended questions with at least one closed-ended question to collect data you can analyze and forecast over time, as well as those valuable qualitative insights straight from the horse's mouth.
Survey Question Examples
The original post illustrates each question type with screenshots: a single-answer multiple choice question, a multiple-answer multiple choice question, a rating scale question in the frequently-used NPS format, five- and seven-point Likert scales, a ranking question, semantic differential questions, a dichotomous question, a close-ended multiple-choice question, and an open-ended question.
How to Write Survey Questions
- Write unbiased survey questions.
- Don't write loaded questions.
- Keep survey question phrasing neutral.
- Don't use jargon.
- Avoid double negatives.
- Don't write double-barreled questions.
- Encourage respondents to answer all questions.
- Always provide an alternative answer.
- Keep questions clear and concise.
- Test your survey.
1. Write unbiased survey questions.
Leading survey questions are questions that suggest what answer the respondent should select. Here's an example of a leading survey question:
"What's your favorite tool that HubSpot offers?"
This is a leading question because the survey respondent might not like using HubSpot, so a list of different software tools might not accurately reflect the true answer. This question could be improved by offering an answer that allows the respondent to not have a favorite tool.
These questions aren't objective, and will lead your respondents to answer a question in a certain way based solely on the wording of the question, making the results unreliable. To avoid this, keep survey questions clear and concise, leaving you little room to lead respondents to your preferred answer, or have someone unfamiliar with the survey or subject matter review it and get their feedback.
2. Don't write loaded questions.
Along the same lines, loaded questions force survey respondents to choose an answer that doesn't reflect their opinion, thereby making your data unreliable. Here's an example of a loaded question:
"Where do you enjoy watching sports games?"
This is a loaded question because the respondent might not watch sports games. The survey question would need to include an answer option along the lines of "I don't watch sports games" in order to be objective.
Loaded questions typically contain emotionally charged assumptions that can push a respondent toward answering in a specific way. Remove the emotion from your survey questions by keeping them (mostly) free of adjectives.
3. Keep survey question phrasing neutral.
You know what they say about assuming. Don't build assumptions about what the respondent knows or thinks into the questions -- rather, include details or additional information for them. Here's an example of a question based on assumptions:
"What brand of laptop computer do you own?"
This question assumes the respondent owns a laptop computer, when they could be responding to the survey via phone or a shared device. So, to amend the example question above, the answer options would have to include "I don't own a laptop computer" to avoid assumptions.
Instead, keep question phrasing neutral, and leave different options to account for variability in your survey respondents.
4. Don't use jargon.
Jargon can make respondents feel unintelligent. Use clear, straightforward language that doesn't require a respondent to consult a dictionary. Here's an example of a jargon-y survey question:
"What's your CAC:LTV ratio?"
This survey question assumes the respondent is familiar with both acronyms, as well as the ratio as a business metric, which may not be the case. If your survey question includes acronyms, abbreviations, or any words specific to your lexicon, simplify it to ensure greater understanding. This survey question should unpack the definitions of these terms, and provide an answer option that accounts for the respondent not having that data on hand.
5. Avoid double negatives.
Double negatives are confusing, and they irritate respondents -- possibly to the point of not wanting to complete your survey. Here's an example of a survey question that includes a double negative, or two negations that contradict each other:
"Do you not like using Google?"
Instead, the survey question should be phrased as "Do you like using Google?" so the meaning is crystal-clear.
These questions also make it tough for you to analyze the results when you get the survey back. Think about it: How can you know which statement the respondent is agreeing to if the question is structured confusingly?
6. Don't write double-barreled questions.
Double-barreled survey questions ask two questions at once. If you present two questions at the same time, respondents won't know which one to answer -- and your results will be misleading. Here's an example of a double-barreled survey question:
"Are you satisfied or unsatisfied with your compensation and career growth at your current employer?"
If the respondent is happy with their compensation, but unhappy with their career growth, they won't know if they should select "Satisfied" or "Unsatisfied" as their answer.
Instead, ask these distinct thoughts in the format of two distinct survey questions. That way, respondents won't be confused, and resulting data will be clear for you.
7. Encourage respondents to answer all questions.
You're running a survey to get people's opinions, and it's tough when they provide you with something like "No comment" or "Not relevant" for an answer. To avoid this, supply the participant with answer options that account for every possibility to reflect the best possible information. Use a more specific answer option, like "I'm not sure," to give you a better idea of your survey base.
8. Always provide an alternative answer.
The goal of your survey should be to obtain customer feedback . However, you don't want this process to come at the expense of your customers' comfort. When asking questions, be sure to include an "I prefer not to answer this question" option. While you'll forfeit the answer, customers won't feel forced to give up sensitive information.
The other benefit of including this option is that you can measure your ability to write a survey. If customers are consistently leaving questions blank, you'll know the problem is the phrasing or structure of your survey. You can then reassess your survey's layout and optimize it for engagement.
9. Keep questions clear and concise.
The best surveys are short and take only a few minutes to complete. In fact, studies show that your completion rate can drop up to 20% if your survey takes more than seven or eight minutes to finish. That's because customers have busy schedules and will be more interested in your survey if it's a shorter time commitment.
10. Test your survey.
Testing your questions is one of the best ways to see whether or not your survey is effective with your customer base . You can release early versions of your survey to see how participants react to your questions. If you have low engagement or poor feedback, you can tweak your survey and correct user roadblocks. That way, you can make sure your survey is perfect before it’s sent to all of your stakeholders.
To learn more, read everything you need to know about questionnaires next.
Net Promoter, Net Promoter System, Net Promoter Score, NPS and the NPS-related emoticons are registered trademarks of Bain & Company, Inc., Fred Reichheld and Satmetrix Systems, Inc.

The Essential Guide to Writing Effective Survey Questions
User surveys are popping up on websites and mobile apps everywhere. Well-designed ones yield helpful data. Poorly executed ones are a waste of time for users and researchers. Crafting a good questionnaire is a science, but luckily, substantial published research is available on the topic. It’s easy to write good questions when “survey best practices” are just a Google search away…
Indeed. There are about 68,300,000 results. All that information can be overwhelming to those new to survey writing. UX Booth is here to help.
In this article, I will focus solely on writing effective questions and answers—the critically important and challenging part between a proper introduction and conclusion. Skipping the basics of question types , I’ll dive into proven techniques researchers rely on to craft effective inquiries and response options to gather useful data and results.
(If you need help with survey planning, defining goals, or understanding the basics of structure and question types, check out How to Create an Effective Survey .)
Question Wording and Structure
The creation of effective survey questions is essential to accurately measure the opinions of the participants. If the questions are poorly worded, unclear or biased, the responses will be useless. A well-written question will mean the same thing to all respondents. It will communicate the desired information so that all participants interpret it the same way and understand the expected type of response.
Use these guidelines for writing survey questions to yield informative and accurate information.
Be clear, specific, and direct
Failure to clearly explain the intent of the question can lead to confusion and misinterpretation. Be very specific and avoid imprecise or vague words. Present the topic and define the behaviors, events, or timeframe. This will help ensure every participant is providing the same type of response.
Vague: What is your income?
For what time period? For just the respondent or the entire household? Before or after taxes?
Specific: What was your household’s yearly income before taxes in 2016?
Use the participants’ vocabulary
Consider the education level of the survey audience, and use words that will be easily understood. Avoid jargon, complex terms, undefined abbreviations and acronyms. Use simple language and never assume knowledge; always provide the necessary information for the respondent to understand what is being asked. Define any concepts or terms that the respondent needs to understand in order to answer. If referencing something participants might not be familiar with, be sure to add details to help explain it.
Unclear: How likely would you be to subscribe to UIE’s library?
Whose library? The International Union for Electricity? What kind of library–documentation, podcasts, ebooks?
Clear: User Interface Engineering’s library offers online seminars by experts in UX design. You can access the recordings anytime for only $25 a month. How likely would you be to subscribe?
Tip: If the question requires a lengthy explanation, consider separating it from the question itself to help make the information easier to digest.
Talk like a real person and treat the questions like a conversation
Group similar topics together and order the questions in a logical way to create a natural flow, as if having a conversation. The voice and tone of the survey should match who it is from and who it is designed for. The writing can be friendly and familiar, but don't sacrifice clarity for cutesy. Consider the MailChimp writing tone guideline , "It's always more important to be clear than entertaining."
Formal: Would you be willing to participate in our 10-question feedback survey? Your responses to this questionnaire will help us improve your experience with Corporation ABC’s website.
Informal: Hi! Are you willing to answer a few quick questions? It won't take more than five minutes. (And there's a really good prize!)
Tip: Although I’m focusing on introductions and not question writing, it’s worth noting that being up front about the time-investment and offering incentives can also help with response rates.
Ask only one question at a time
Each question should focus on a single item or concept. This generally means that questions should have one subject and verb. Double-barrel questions ask a respondent to evaluate more than one thing in a question yet only allow for a single response.
Double-barrel: Was the content marketing seminar worthwhile and entertaining?
What if the seminar was educational but the presenter was a dreadful bore, and the response options are Yes or No? A double-barrel question is also known as a compound question. This is a common mistake, which can be corrected by breaking the question into two. Let's look at an example and how to correct it:
Double-barrel: How satisfied are you with your work environment and compensation?
Single and separate:
- How satisfied are you with your work environment?
- How satisfied are you with your compensation?
By breaking the double-barrel question into two questions, one about satisfaction with the work environment and another question about pay, the participant is now able to provide a response to both inquiries separately.
Practice good grammar
Keep the questions simple and grammatically correct. Maintaining a parallel structure and consistently using the same words and phrases improves respondents’ comprehension. Avoid two-part or complex questions which can be hard to interpret, as can double negatives .
Double Negative: Do you agree or disagree that user interface designers should never not know how to code?
Better: User interface designers should know how to code.
An agreement scale goes well with this reworked question—more on that later.
Avoid bias and loaded words
A biased question will lead participants in the direction of a particular answer. Some phrases, particularly adjectives and adverbs, may add bias to questions. Depending on how a question is presented, people can react in different ways (for example, asking a question using the word “loss” versus “gain”). The use of emotional, politically-charged, or sensitive words can also trigger a different response. Remain neutral regardless of topic and watch for wording that skews positive or negative.
Biased: We think this UX Booth article on Survey Question Writing is very helpful. How helpful do you think this article is?
Unbiased: What do you think of this UX Booth article on Survey Question Writing?
Start with broad, general questions and progress to specific and harder ones
Beginning with basic, easier questions can encourage a respondent to continue. When possible, try to balance simple and complex inquiries. Interspersing easier questions among more challenging ones can make it seem less burdensome and help reduce abandonment. And remember to save sensitive questions like income for the end and make them optional.
Keep the survey short and don’t be greedy!
Don’t waste people’s time–only ask for what you really need. (Requiring answers to questions will slow people down, but it won’t necessarily get you want you and will increase drop off rates.) If there aren’t too many questions, and respondents can immediately understand what is being asked, they are more likely to be willing and able to provide useful information. If the answers are also well-crafted…
Answer Wording and Structure
Since this article is concentrated on writing, I’ll focus on answers to closed questions , where responses need to be crafted by the survey designer. When providing responses to closed-ended questions, how each answer is described, the number of options, and the order can all influence how people respond. Whether presented as multiple choice, checklists, or in scales, just like when writing questions, the answers should use precise, clear wording. Here’s how to make that happen.
Present all the possibilities
The number of answers should be kept relatively small but include all the possible choices. Answers need to be balanced both ways (e.g. positive to negative, high to low frequency).
All respondents need to be able to find an answer that fits their situation—including opting out. If there could be a situation where none of the answers apply, provide the option to select “don’t know,” “not applicable” or “prefer not to answer” for sensitive questions. Including an “Other,” with a free-form text field to provide a custom answer, is a great way to learn about alternative responses not provided in the defined answer set.
Incomplete and Unbalanced:
- Very Important
- Moderately important
- Slightly important
What if it is not important at all? Or not even applicable to the participant?
Complete and Balanced:
- Extremely important
- Very important
- Not at all important
- Not applicable
Say “no” only when necessary
Dichotomous questions present only two options that are clearly distinct. These answers, like yes/no and true/false, can produce less helpful data because they don't provide context or specificity. (Though when using skip logic, these responses can often be appropriate.) Formatting responses to use scales that measure things like attitudes or frequency yields more information-rich results. These answers can make a single question work harder.
Yes/No: Do you use the mobile app?
Frequency: How often do you use the mobile app?
Tip: These answers also follow the first guideline to cover all the possibilities in a balanced way, ranging from high to low or not at all. An even stronger set of choices would include references for the time period to clearly define what “sometimes” is versus “rarely.” Check out the UX Booth blog example below.
Keep answers mutually exclusive
If a participant can only select one response, then each answer should be distinct and not overlap. For example, options might be 0-5 or 6-10 rather than 0-5 or 5-10. Having the "5" in both answers makes them not mutually exclusive (see the sketch after the example below):
Not Distinct: Where is your work location?
- In an office building.
- From my home.
- In the city.
The person could work in an office building in the city or from their home in the city.
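Because overlap bugs like the "5 in both buckets" example are easy to miss by eye, it can help to check numeric answer ranges programmatically before a survey ships. A small illustrative sketch, assuming buckets are represented as inclusive (low, high) tuples; the helper and data format are hypothetical, not any survey tool's API:

```python
def overlapping_buckets(ranges):
    """Given inclusive (low, high) answer buckets, return overlapping pairs."""
    ordered = sorted(ranges)
    return [
        (a, b)
        for a, b in zip(ordered, ordered[1:])
        if b[0] <= a[1]  # next bucket starts before the previous one ends
    ]

print(overlapping_buckets([(0, 5), (5, 10)]))  # [((0, 5), (5, 10))] -- not exclusive
print(overlapping_buckets([(0, 5), (6, 10)]))  # [] -- mutually exclusive
```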
Remove universal statements
Words like “never, none, always, all” are extreme choices that respondents might be hesitant to commit to. Rather than absolute words, provide answers with specific references for behaviors or timeframes.
Absolute Statements:
- I always read UX Booth’s blog.
- I never read UX Booth’s blog.
Referenced Alternatives: I read UX Booth’s blog:
- Once a week
- 2-3 times a week
- 4 or more times a week
- I do not read UX Booth’s blog.
Use ratings and scales
The Likert Scale , where respondents indicate their level of agreement or disagreement, is the most commonly used approach to scaling options when measuring attitudes or behaviors. Likert scales should be symmetrical and balanced: they should contain equal numbers of positive and negative responses, with the distance between each item being the same.
Experts’ debates about scales—the number of levels (5, 7, 10) and the inclusion of a neutral midpoint (neither agree nor disagree)—are too overwhelming to tackle in this article. Consult the many resources for Likert Scale best practices . SurveyMonkey suggests five scale points for unipolar and seven for bipolar scales. (My personal opinion is that five to seven is best; the higher the number, the harder it is for people to gauge.) Always include word labels, not just numbers, to identify what each point on the scale means.
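To make "symmetrical and balanced" concrete, here's a small sketch in plain Python (no survey library assumed; the data layout is ours for illustration) of a five-point agreement scale where every point is labeled and the negative side mirrors the positive side:

```python
# A balanced five-point Likert agreement scale: equal numbers of positive
# and negative options around a neutral midpoint, with every point labeled.
AGREEMENT_5 = [
    (-2, "Strongly disagree"),
    (-1, "Disagree"),
    (0,  "Neither agree nor disagree"),
    (1,  "Agree"),
    (2,  "Strongly agree"),
]

# Symmetry check: the scale values mirror around the midpoint.
values = [v for v, _ in AGREEMENT_5]
assert values == [-v for v, _ in reversed(AGREEMENT_5)]
```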
Common Scales:
- Agreement: Disagree to Agree
- Familiarity: Not Familiar to Very Familiar
- Frequency: Never to Always
- Important: Not Important to Extremely Important
- Likelihood: Not Likely to Extremely Likely
- Quality: Poor to Excellent
- More Examples (Iowa State PDF)
Use the expected, “natural” order for answer scales because it is easier for people to respond. For ranges (e.g. excellent to poor) it’s okay to reverse the order, such as starting with the favorable answer and ending with the unfavorable one, but keep in mind that order can also influence choices.
Tip: Read “ There is a Right Way and Wrong Way to Number Rating Scales .”
Good survey design leads to good data.
The unfortunate result of bad survey design is bad data. Asking good questions and providing solid answers is not easy. Take advantage of what other researchers and academics have done and use starter templates when appropriate. It is the survey designer’s responsibility to be clear and unbiased. Make sure the questions will be informative, the answers accurate, and that the insight you gain will lead to actionable results.
Survey Questions 101: Question Types, Examples, and Tips
Need some help writing survey questions? We've got you. Dive into our survey question examples and write kick-ass survey questions.
Types of survey questions
This is what you came for—the good stuff.
Here are the types of survey questions you should be using to get more survey responses:
Open-ended questions
Closed-ended questions
Rating questions
Likert scale questions
Multiple choice questions
Picture choice questions
Demographic questions
Maybe that’s exactly what you needed—seven types of sample survey questions. Great. But maybe you’re looking for more. Maybe, you want some of our juiciest tips for writing better questions. The questions you should be asking to potential customers. Or maybe you want to hear what a psychology researcher-turned-marketer thinks you should do. Then you should read on. Definitely.

Open-ended questions open up a conversation. These are good survey questions for getting more meaningful answers, as people have the opportunity to give you more feedback through a text box. If you’re looking for a yes/no answer—you’ll need to use a closed-ended question.
Open-ended question examples:
What are you wearing today?
How did you meet your best friend?
What is it like to live in Barcelona?
Some questions just need a one-word answer. Like yes. Or no. You can use them for finding out some quick tidbits of information—then go on to segment your survey-filler-inners accordingly.
Closed-ended questions examples:
Did you order the chicken?
Do you like learning German?
Are you living in Australia?
Reach for the stars. Or the hearts. Or smiles. Send a rating question and find out how your survey-takers would rate something. It’s a super useful question to ask, as you can gauge peoples’ opinions across the board.
Rating questions examples:
How would you rate our service out of 5?
How many stars would you give our film?
Please rate how valuable our training was today.
Likert scale questions are good survey questions for finding out what people think about certain things. Generally, they come in 5, 7, or 9-point scales and you’ve probably filled one out before.
Likert scale questions examples:
Do you agree that channel 5 offers more comedy than channel 6?
How satisfied are you today with our customer service?
Do you feel affected by the recent changes in the office?
Sending out a test or quiz ? Multiple choice questions are your friend, friend. You can give a few answers and hide the real answer. Also, if you want to find out time periods, or dates for an event—multiple-choice questions are the one. Plus, you can bundle them up nice and neatly in a dropdown menu.
Multiple choice questions examples:
Facebook was launched in… 2003 | 2004 | 2005 | 2006
How many of our restaurants have you visited? 1 | 2 | 3 | 4+
What is the capital of Scotland? Perth | Glasgow | Aberdeen | Edinburgh
A picture paints a thousand words. But in a survey? It does so much more. Ask a picture choice question and make your survey even more interactive. Tell a story, and show rather than tell.
Picture choice questions example

Demographic survey questions are a mix of different forms of questions. It’s up to you whether you want to use a dropdown here or an open-ended question with them. They all talk about things which can be seen as a bit touchy, so take heed.
How old are you?
What’s your gender?
Which industry do you work in?
Ask the right survey questions and get better results
We live in the Information Age, a time where data is a source of capital. Surveys have become one of the time-tested ways of gathering data. But even with 200+ years of published research and experience, people still fail to obtain helpful information from their surveys. Lucky for you, it doesn’t take much time to learn how to carry out a successful survey.
If you're still stuck wondering what type of data you should be looking to collect, take a look at our guide to qualitative vs quantitative research methods.
Best practices for survey questions—in a nutshell
Your objective is to get as many responses as you can. Because then you can make the best decisions. But to do this, you need to follow a few basic “rules”. Here’s an overview of the best practices for writing survey questions:
Keep your language simple and specific. Unless you’re asking Ph.D. students about their opinions on string theory, there’s no need to include scientific or confusing language. Type like you’d talk.
Avoid leading questions. “How was our amazing customer service team today?” Emm, not actually that great… Don’t plant opinions in people’s heads before they answer.
One question = one idea. This happens often when people put two questions into one—“How was the food and ambiance?” Separate the questions to get better answers.
Don’t make the survey excessively long. If your survey is over 20 questions long, have a think. Some surveys are just, well, long. But if you can condense it, your audience will thank you.
Show how much longer there is. That said, if it’s long, let people know how long. Tell people when they’re halfway through your survey. And with typeforms, you can show them with the Progress Bar.
Make your surveys mobile friendly. We’re always on the move. So make sure your survey can be taken from the subway as well as from the office. Psst, typeforms are.
For big ideas, split them into multiple questions. If you’re asking for lots of opinions on one subject, try to split a huge question into several different questions—each covering a different angle. And consider using rating scale questions to see how people feel about different ideas.
Use open-ended questions sparingly. An open-ended survey question is a brilliant way to get honest and actionable feedback. But people get bored of typing in long answers. So vary your answer options and don’t stuff in open-ended questions.
How to write great survey questions
While there is an art to designing effective survey questions, there are also several principles of survey design that will help you get the information you need from your friends or customers.
In this section, we have distilled some of the most authoritative survey research into 6 tips for writing survey questions:
→ Not sure whether you need a survey or a questionnaire? Check out our guide on survey vs questionnaire
1. Define a clear purpose for every question.
Remember that the aim of conducting a survey isn’t just to get answers. We are interested in what the answers will tell us about something else, which is why it’s crucial to define a clear purpose for every question you ask in a survey.
“One of the biggest mistakes people make in designing survey questions is failing to translate the intention of the topic being asked into a meaningful and relevant survey question,” explains Robert Gray, President of Insightlink Communications.
“It is absolutely critical to have a clear understanding of the purpose and objectives of the survey and of the individual topics to be covered.”
Before you start writing survey questions, create a list of objectives that outlines the kind of information you’re trying to glean with each question. A plan for how you will use the data gathered from each response will help you ensure that the questions are targeted, relevant, and purposeful.
Example objective: Assess employee attitudes towards standing desks
Possible questions:
- In the past 12 months, have you used a standing desk?
- If yes: The standing desk improved my overall productivity. (Agree—Neutral—Disagree)
- If no: I like the idea of testing a standing desk at work. (Agree—Neutral—Disagree)
- Research has shown that standing desks result in fewer sick days and more productivity in the workplace. I believe the company should invest in standing desks for employees. (Agree—Neutral—Disagree)
2. Know the two types of survey questions.
While several articles expound various types of surveys, such as multiple choice, Likert scales, open-ended, and so on, these are actually types of responses. There are really only two survey question types: factual or objective questions, and attitude or subjective questions.
Factual questions are aimed at gathering data to categorize and quantify people or events. Hypothetically, people’s responses to factual survey questions can be independently verified and have right and wrong answers. Examples of what objective survey questions cover are things like how often someone exercises, where they were born, and what their purchase habits are.
Attitude questions, on the other hand, measure perceptions, feelings, and judgements. These are things that cannot be observed or objectively assessed because they are based on what individuals think or experience. Some examples of what subjective survey questions might cover include favorite brands, overall experience at a restaurant, or reasons for not voting for a certain candidate. With subjective survey questions, standardization is critical to ensure that people are interpreting and understanding the questions in the same way.
The type of questions you choose will be influenced by the objective of your overall survey. The question type also has an impact on the response format (e.g. agree—disagree versus single-answer multiple choice).
These two types of survey questions produce different kinds of data. Understanding the difference and how to treat each one will ensure you are producing meaningful information.
3. Ask questions people can answer.
This seems obvious, yet there are surveys filled with questions that participants are unequipped to respond to. There are three difficulties people have when answering survey questions:
They don’t have the information. Most people cannot answer with any accuracy how many times they get up from their desk in a day, but they can give a vague indication (rarely, sometimes, often, never).
They had the information but have forgotten. Some people might know their exact income from two years ago, but most won’t. Avoid asking questions that rely on long-term memory or calculations.
They have difficulty placing events in time. Participants may remember the last time they went to the movie theater, but they won’t remember whether it was six months ago or eight. If you must include questions that rely on long-term memory, use memory aids and association, e.g. have them play out a scenario in their minds.
Imagine your objective is to learn whether water conservation warnings were effective.
A poor survey question would be: How much water did you use in your home last month?
A better question is: In the last 30 days, how much water would you say your household used? (More than usual, less than usual, about the same as usual)
4. Account for social desirability bias.
Even in online surveys, people exhibit what social scientists call social desirability bias. This is the tendency for people to answer questions in socially acceptable ways. In some cases, it means overreporting good behaviors (‘I get up from my desk every hour’) or underreporting perceived negative behaviors (‘I drink alcohol once per week’).
Being aware of sensitive and taboo topics in the population you’re studying can help you anticipate these areas. To generate accurate responses, incorporate these strategies into the survey:
Include an introductory statement. Research shows that long preambles and concise questions can improve response rates. By explaining why you’re asking, you set up the question and help them understand the motivation behind it.
Example: To help us contextualize what you and your peers think about our new alcoholic beverage offerings, we’re going to ask a few questions about your alcohol consumption. How many alcoholic beverages have you drunk in the last seven days? (0, 1—2, 3—5, 6+)
Emphasize the anonymity of the survey. People who are confident their responses won’t be identified are more likely to respond honestly.
Put sensitive and demographic questions at the end. Starting a survey with intimidating or demographic questions like age and income can put people off. Your first survey question should be interesting, light, and easy to answer. Once they’ve started, they’re more likely to finish—and answer more sensitive questions.
Stress the importance of accuracy. Discourage dishonest answers by outlining the end goal of the survey. People who believe their answers will help are more likely to be truthful.
5. Write clear, specific questions.
Failing to write clear and specific questions can hinder your respondents’ ability to answer. The standard is that people should have a consistent understanding of what is being asked of them. If someone could interpret a question differently than you intended, the question can be improved. Avoid ambiguities. Don’t take for granted that people know what you mean in a survey question.
Poor survey question: In the past month, how many times have you visited a doctor?
There are two ambiguities in this question. First is the time frame: does ‘in the past month’ refer to the last 30 days or the most recent calendar month? The second is ‘doctor’. There is a lot of room for interpretation—do nutritionists, spiritual healers, or psychologists count as doctors?
Better survey question: We would like to understand how often you have visited a licensed medical professional, including dentists, psychologists, chiropractors, and nutritionists. In the past 30 days, how many times have you visited a medical professional?
If you need to define a term, be sure to put it first. Most people stop paying attention after the question has been asked.
Vague survey question: How would you rate your health?
The understanding of ‘health’ isn’t consistent. Some people consider good health the absence of health conditions. Other people may be thinking about the extent to which they lead a healthy lifestyle.
Better survey question: Do you think you eat enough vegetables? (I eat plenty, I eat just enough, I could eat more, I don’t eat vegetables at all)
This question gets people to respond more directly to your interpretation of ‘health’: a healthy lifestyle. It may require asking more questions, but it will give you better data to work with.
6. Avoid question types that produce bad data.
Here are six survey question examples that should be avoided for the best survey data:
Loaded question: Do you think there are more postgraduates (Master’s, PhD, MBA) because of the country’s weak economy?
This question includes a false premise: the participant is required to agree that the economy is weak in order to answer. It also imposes a causal relationship between the economy and postgraduate study that a person may not see. Loaded questions are inherently biased and push respondents into confirming a particular argument they may not agree with.

Double-barreled question: Would you like to be rich and famous?
Double-barreled questions are difficult for people to answer. A person might like to be rich but not famous and would thus have trouble responding to this question. Additionally, you don’t know whether they are responding to both parts of the question or just one.
Biased question: Do you agree that the President is doing a wonderful job on foreign policy?
Biased language that either triggers emotional responses or imposes your opinion can influence the results of your survey. Survey questions should be neutral, simple, and void of emotion.
Assumptive question: Do you have extra money after paying bills that you invest?
This question assumes that the participant has extra money after paying bills. When a person reads a question they feel is irrelevant to them, it can lead to attrition from the survey. This is why Logic Jump is useful—surveys should adapt to respondents’ answers so they can skip questions that don’t apply to them.
This question would be better asked in two parts: Do you have extra money after paying bills? If yes: Do you invest the extra money you have after paying bills?
Second-hand knowledge question: Does your community have a problem with crime?
Not only are ‘crime’ and ‘problem’ vague, but it’s also challenging for a layperson to report on something related to the community at large. The responses to the question wouldn’t be reliable. Stick to asking questions that cover people’s first-hand knowledge.
If you are trying to understand the prevalence of criminal acts, it would be better to ask: In the past 12 months, have you been the victim of a crime?
Hypothetical questions: If you received a $10,000 bonus at work, would you invest it?
People are terrible at predicting future behavior, particularly in situations they’ve never encountered. Behavior is deeply situational, so what a person might do upon receiving a bonus could depend on whether they had credit card debt, whether they needed to make an immediate purchase, the time of year, and so on.
Final Thoughts
“The goal of writing a survey question is for every potential respondent to interpret it in the same way, be able to respond accurately, and be willing to answer,” explains Tammy Duggan-Herd, PhD, a psychology researcher-turned-marketer.
She explains that poorly written survey questions often don’t measure what their authors think they do.
“Always evaluate questions for yourself and make adjustments where you see fit to get to the heart of what you want to know,” says Duggan-Herd.
Focus on creating great survey questions, and you’ll get the answers and insights you need to achieve your goals.
How you ask is everything.
How to write good survey questions
Good survey questions lead to good data. But what makes a survey question “good” and when is the right time to use specific types?

At Pollfish, we have distributed tens of thousands of surveys and manually review them all, so we know a thing or two about writing good survey questions. Our experts have compiled a list of the essentials below into a sort-of questionnaire template to make sure you have what you need to create great surveys and get the highest-quality data.
1. Have a goal in mind.
Consider what you’re trying to learn by conducting this survey. Do you have an idea that you want to validate, or are you hoping that you can disprove an assumption under which you have been operating? Surveys work best when they focus on one specific goal. When building the questionnaire for your survey, it is important to offer questions that support your goal.

2. Eliminate Jargon.
Just because a concept is clear to you doesn’t mean your target audience is always on the same page. A well-designed questionnaire contains good survey questions, to be sure. But it also uses plain language (no jargon) to explain concepts or acronyms that customers may be unfamiliar with and offers an opt-out for those who are unsure. Don’t be afraid to use more than one question or offer an example to ensure clarity on complex information in your questionnaire template—a confused audience leads to frustration and low-quality responses.
3. Make answer choices clear and distinct.
When multiple-choice answers are presented, the respondent must make a selection. If these responses overlap or are confusing for the respondent, the quality of the data decreases because they aren’t sure what is being asked of them. Make sure answers are distinct and specific whenever possible so the respondent can confidently choose the best answer.
4. Give users an “other” option.
Make sure that, in a multiple-choice sequence, you’ve given respondents the chance to opt out of the question if it doesn’t apply to them or if none of their answers fit. Provide an option like “no opinion,” “neutral,” or “none of the above.” You can also offer the option to select “other” and provide an open-ended response that can give you more context.
5. Avoid “yes/no” screening questions.
Screening questions help you connect with a qualified audience at the beginning of the survey. When respondents select a qualifying response, they will enter the rest of the survey. However, people are biased toward choosing “yes” or a positive response when presented with a yes/no question, even if their real opinion is more neutral. To reduce bias, provide a list of answer choices with no indication that one is preferred over the others.
6. Don’t ask two questions at once.
Each question should focus on obtaining a specific piece of information. When you ask two questions at once using “and” or “or,” you’re introducing another question, which may have a different answer. This will have one of two results: either you’ll confuse your respondents, who are forced to choose the right answer to one question; or your respondents will confuse you with their answers. Either way, make sure you write simple survey questions asking for separate pieces of information as separate questions.
7. Use skip logic when applicable.
Skip logic , or branching, allows you to create multiple question paths based on an earlier answer. This means more qualified respondents will be asked to answer more in-depth questions, which reduces answers like “don’t know” or “no opinion” later.
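Conceptually, skip logic is just a map from each answer to the next question. A minimal sketch of the idea in Python; the question IDs and structure here are illustrative, not Pollfish's actual implementation:

```python
# Hypothetical branching survey: each answer names the next question id,
# and "end" terminates the path for unqualified respondents.
SURVEY = {
    "q1": {"text": "Do you own a car?",
           "next": {"Yes": "q2", "No": "end"}},
    "q2": {"text": "How many days per week do you drive?",
           "next": {"0-2": "end", "3-5": "q3", "6-7": "q3"}},
    "q3": {"text": "What do you mostly use the car for?",
           "next": {"Commuting": "end", "Errands": "end", "Other": "end"}},
}

def next_question(current_id, answer):
    """Return the id of the next question for a given answer, or 'end'."""
    return SURVEY[current_id]["next"].get(answer, "end")

print(next_question("q1", "No"))   # end -- unqualified respondents skip ahead
print(next_question("q1", "Yes"))  # q2  -- qualified respondents go deeper
```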
8. Use different question types .
Respondents offer better and more thoughtful answers when they are engaged. And that means asking several types of research questions. Use ranking , matrix , open-ended, or multiple-choice questions to stimulate them and keep them interested, especially in a longer questionnaire. Different question types not only keep the respondents engaged (which can increase your completion rates ) but can also elicit different responses.
9. Shuffle answer choices for ranking, matrix, and multiple-choice questions.
We are naturally inclined toward the first information we are presented with—the top answer—in a series of answer options. Shuffling the order of the answer choices reduces this bias in responses. However, for answers that relate to one another—such as a Likert scale or timeline—it’s helpful to keep them in an order that flows logically to avoid confusion or misreading.
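A sketch of how that might look in code, shuffling nominal choices per respondent while leaving ordinal scales in their natural order (illustrative only; most survey platforms offer this as a built-in setting):

```python
import random

def presented_options(options, is_ordinal=False):
    """Return answer options in presentation order.

    Nominal choices are shuffled to counter primacy bias; ordinal scales
    (Likert points, timelines) keep their natural order.
    """
    if is_ordinal:
        return list(options)
    shuffled = list(options)
    random.shuffle(shuffled)
    return shuffled

print(presented_options(["Brand A", "Brand B", "Brand C", "Brand D"]))
print(presented_options(["Never", "Rarely", "Sometimes", "Often", "Always"],
                        is_ordinal=True))
```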
10. Add media or images to provide helpful context.
Media—such as images, video, or audio clips—provides another level of clarity to your survey questions. You can use these either to give added context to the question or offer media as an answer choice.
11. Always keep the audience in mind.
Remember as you are writing the questions to always keep your target audience in mind. The audience can be as broad as the “general population” or as narrow as you need it to be . The important thing is to know who you want to target so you can communicate with them effectively.

Regardless of the types of survey questions you select, questions should be short, clear, and to the point, while still engaging the respondent through multimedia and varied question types. Remember that the less confused your respondents are, the clearer your data will be.
These best practices will help you write good survey questions on any platform, including our own. If you have additional questions specific to Pollfish, check out our resource center or reach out to our Support team to learn more.
Do you want to distribute your survey? Pollfish offers you access to millions of targeted consumers to get survey responses from $0.95 per complete. Launch your survey today.
Five Tips for Designing an Effective Survey
Published January 22, 2018 under Research News

Chances are, you’ve recently taken a survey. People are being surveyed now more than ever before. Were you frustrated by any of the survey items? Did you feel limited or overwhelmed by response options? Rae Jean Proeschold-Bell , associate research professor and founding director of the Evidence Lab at the Duke Global Health Institute, recently gave a talk about best practices for writing survey items based on research by Stanford University professor Jon Krosnick and other survey experts. Here are five key takeaways from her talk.
1. Write Questions with the Answering Process in Mind
Answering a survey item, Proeschold-Bell says, is actually a five-step process. Respondents read the question, figure out what it’s trying to assess, search their memory for relevant information, integrate their thoughts into a single judgment and translate that judgment to the best response option on the survey. Keeping these steps in mind while writing questions not only makes the respondent’s job easier, but also increases the likelihood of an accurate response.
Survey respondents typically fall into one of two categories: optimizers and satisficers. Optimizers are respondents who are motivated and able to complete the survey and who put in work for all five steps of the answering process. Satisficers complete the survey less carefully—typically providing low-quality data by either responding too neutrally or by not reading the question closely. People typically “satisfice” due to task difficulty, respondent ability and respondent motivation, so these are key factors to consider in survey item design.
“You have to assume that everyone is going to be a satisficer,” said Proeschold-Bell, “so you need to make all the steps as easy as possible.”
To help ease the respondents’ burden, Proeschold-Bell recommends designing surveys with simple, concrete words, as well as consistent words and syntax. Response options should be exhaustive and mutually exclusive. Double negatives, leading questions and “double-barreled” items that touch upon more than one issue should be avoided.
2. Make it Easy for the Respondent to Agree or Disagree
Agree/disagree scales often place a high cognitive burden on respondents. Here’s a sample item that highlights this burden and ways to minimize it:
When I look at the world, I don’t see much to be grateful for.
Strongly disagree | Moderately disagree | Slightly disagree | Neutral | Slightly agree | Moderately agree | Strongly agree
Consider the problems respondents may face when answering this item. The three “disagree” response options, when combined with the word “don’t” in the question, introduce a double negative that can confuse respondents. Also, it’s unclear whether the item is trying to get at how grateful the respondent feels or how often the respondent feels grateful. Now, let’s take a look at an improved version of the same item:
When you look at the world, how grateful are you?
Extremely grateful | Very grateful | Moderately grateful | Slightly grateful | Not at all grateful
The rewritten item eliminates both the double negative and the confusion about how grateful the respondent feels versus how often he or she feels grateful.
3. Minimize Rating Scale Confusion
When designing a rating scale, it’s critical that each point is unique and means the same thing to both the researcher and the respondent. Proeschold-Bell encourages survey writers to consider carefully whether people actually make fine-grained distinctions about the construct before creating a rating scale with many options. Rating scales with five to seven response options are the most likely to be reliable and valid and to yield high-quality data.
And what about that nebulous “neutral” option? If you expect to have satisficers among your respondents, Proeschold-Bell says, it’s best not to include a neutral option. On the other hand, a neutral option can be good for optimizers. (And remember Proeschold-Bell’s earlier advice: You have to assume that everyone is going to be a satisficer.)
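To make these constraints concrete, here is a minimal sketch in Python that encodes them as validation rules. The Scale class is a hypothetical construct for illustration, not an API from any survey tool.

# Minimal sketch: a rating scale that enforces the guidelines above --
# five to seven points, each with a unique verbal label.

class Scale:
    def __init__(self, labels, include_neutral=False):
        if not 5 <= len(labels) <= 7:
            raise ValueError("aim for 5 to 7 points for reliable, valid data")
        if len(set(labels)) != len(labels):
            raise ValueError("every point needs a unique label")
        self.labels = list(labels)
        self.include_neutral = include_neutral

# The improved unipolar gratitude scale, with no neutral point --
# the safer default if you assume every respondent may satisfice.
gratitude = Scale([
    "Extremely grateful",
    "Very grateful",
    "Moderately grateful",
    "Slightly grateful",
    "Not at all grateful",
])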
4. Carefully Order Every Aspect of Your Survey
The order of survey items, and of the options within each question, can have surprising effects on the results. In a self-administered survey (either print or online), respondents are most likely to choose options listed first (a primacy effect), whereas in a survey administered verbally, respondents tend to choose options listed last (a recency effect).
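One common mitigation, when the options have no natural order, is to randomize their order for each respondent so that position effects average out across the sample. The sketch below assumes Python; the shuffled_options helper is hypothetical, and an ordered rating scale should never be shuffled.

# Minimal sketch: randomizing nominal (unordered) answer options per
# respondent so primacy and recency effects average out. Anchored
# options such as "Other" stay fixed at the end.

import random

def shuffled_options(options, anchored=("Other",)):
    movable = [o for o in options if o not in anchored]
    fixed = [o for o in options if o in anchored]
    random.shuffle(movable)
    return movable + fixed

sources = ["Newspaper", "Television", "Radio", "Social media", "Other"]
print(shuffled_options(sources))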
Items toward the end of self-administered surveys tend to be subject to more “satisficing.” “Because of this, I always put the demographic information at the end,” said Proeschold-Bell.
Proeschold-Bell also points out that optimizers improve accuracy as they progress through the survey. Since people learn about what the surveyor is trying to discern as they answer questions, responses closest to the end tend to be most accurate. Grouping questions by topic can aid in this learning and improve response accuracy.
5. Test Your Survey before Distributing It
Proeschold-Bell emphasizes the importance of pre-testing a survey. She recommends reviewing the survey with someone similar to your intended respondents to determine whether any questions may be confusing or unclear. Another helpful resource is the Question Understanding Aid (QUAID), an online application that gives feedback on the reading level and precision of the words used in survey questions.
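For a quick, offline first pass before turning to a tool like QUAID, a standard readability formula can flag questions that read above your audience’s level. Below is a minimal sketch using the Flesch-Kincaid grade-level formula with a crude vowel-group syllable heuristic; it approximates the kind of reading-level feedback such tools give and is not QUAID’s actual method.

# Minimal sketch: a rough Flesch-Kincaid grade-level estimate for a
# draft question. The vowel-group syllable count is a crude heuristic,
# good enough to flag items that read far above the target level.

import re

def syllables(word):
    # Count runs of vowels as syllables -- approximate but fast.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(1, len(re.findall(r"[.!?]", text)))
    syl = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syl / len(words)) - 15.59

# The improved gratitude item reads at roughly a fourth-grade level.
print(round(fk_grade("When you look at the world, how grateful are you?"), 1))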
The most useful pre-testing tool, according to Proeschold-Bell, is cognitive interviewing, in which the researcher gives a sample respondent an open-ended prompt and asks them to think out loud. This is a great way to see where respondents struggle in each of the five response steps. For example, they may have a hard time mapping their answer onto the existing response options. The researcher can also ask follow-up questions about specific words used, determining what words such as “stress” or “happy” might mean to a respondent. Sometimes words that mean one thing to researchers mean something else to respondents. The researcher can also see how long items take and conduct a respondent debrief to collect any additional feedback.
The bottom line? Creating effective survey questions is tough, but so is answering them. For best results, make every effort to ease the burden on your respondents.
Want more? Watch Proeschold-Bell's talk.