Don't Reinvent the Wheel:
If you are trying to measure an attitude, concept, or behavior, there’s a good chance someone has done this before. In the course of your literature review, pay careful attention to how others have measured the concept you want to measure: they may have already tested the reliability and validity of a measure. Reusing an identical question also allows for comparisons across surveys.
You might want to consult a resource such as the General Social Survey (GSS). The GSS has been a reliable source of data to help researchers, students, and journalists monitor and explain trends in American behaviors, demographics, and opinions. The complete GSS data set is available on its website, where you can also use the GSS Data Explorer to explore, analyze, extract, and share custom sets of GSS data.
The rest of this page explains how to shape questions for a questionnaire and how to create a questionnaire form using Google Forms. There are also many publicly available guides for other free resources such as Survey Monkey, another popular tool for creating surveys (including this tutorial on how to build an online survey, in which you might find some helpful advice), but we will focus on Google Forms in this guide.
Keep Your Questionnaire Short:
Respondents are less likely to answer a long questionnaire than a short one, and they often pay less attention to questionnaires that seem long, monotonous, or boring.
Keep Question Order in Mind:
Survey responses can be affected by previous questions. Think about the context in which respondents encounter your questions.
• Start a questionnaire with an introduction. Provide titles for each section if necessary.
• It’s usually best to start a survey with general questions that will be easy for a respondent to answer.
• Things mentioned early in a survey can affect answers later: once the survey raises a topic, respondents may be primed to think about it in subsequent questions.
• It’s usually best to ask any sensitive questions, including demographics (especially income), near the end of the survey.
• If you are asking a series of similar questions, randomizing the order in which respondents see or hear them can improve your data (see the sketch after this list).
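Here is a minimal sketch of per-respondent question randomization in Python; the question texts, respondent IDs, and function name are hypothetical, and survey tools such as Google Forms can often shuffle question order for you without any code.

```python
import random

# Hypothetical battery of similar questions (illustrative text only).
BATTERY = [
    "How satisfied are you with your neighborhood's schools?",
    "How satisfied are you with your neighborhood's parks?",
    "How satisfied are you with your neighborhood's roads?",
]

def ordered_battery(respondent_id: str) -> list[str]:
    """Return the battery in a random order that stays stable for a given
    respondent, so reloading the survey shows the same order."""
    rng = random.Random(respondent_id)  # seed the shuffle with the respondent ID
    return rng.sample(BATTERY, k=len(BATTERY))

# Across many respondents the orderings average out, so no single
# question always benefits (or suffers) from being asked first.
print(ordered_battery("resp-001"))
print(ordered_battery("resp-002"))
```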
Filtering and Branching:
Respondents should only be asked questions that apply to them. When a question is relevant to only some respondents, ask a short filter question first to determine whether it applies, then branch so that only qualifying respondents see the follow-up, as in the sketch below.
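A minimal sketch of a filter-and-branch flow (the questions, choices, and helper function are hypothetical; in Google Forms the same effect is achieved with the “Go to section based on answer” option):

```python
def ask(prompt: str, choices: list[str]) -> str:
    """Re-prompt until the respondent enters one of the allowed choices."""
    while True:
        answer = input(f"{prompt} {choices} ").strip().lower()
        if answer in choices:
            return answer

def run_survey() -> None:
    # Filter question: establish whether the follow-up applies at all.
    employed = ask("Are you currently employed?", ["yes", "no"])
    if employed == "yes":
        # Branch: only employed respondents see this follow-up.
        ask("Do you work full time or part time?", ["full", "part"])
    # Respondents who answered "no" skip straight to the next section.

run_survey()
```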
Open-Ended versus Closed-Ended Questions:
Open-ended questions ask respondents to answer in their own words. Closed-ended questions ask respondents to place themselves into one of a limited number of responses provided to them.
• Open-ended questions allow the greatest variety of responses, but are time consuming to ask and require a lot of work to analyze.
• Closed-ended questions, when well designed, ensure that respondents interpret questions the same way.
• Respondents are more likely to skip an open-ended question than a closed-ended one.
Rating Scales for Attitude Questions:
• Usually between five and seven points is best.
• Generally, including a middle category yields better data.
• Points on the scale should be labeled with clear, unambiguous words.
• Questions which use agree/disagree scales can be biased toward the “agree” side, so it’s usually best to avoid this wording.
• Try to write questions so that both positive and negative items are scored “high” and “low” on a scale.
• The order in which response categories are presented to a respondent can also influence their answer choices. Consider:
> Primacy Effect: Occurs in paper and Internet surveys; respondents tend to pick the first choice.
> Recency Effect: Occurs when questions are read to a respondent. Respondents tend to pick the choice they heard most recently.
> Randomizing or rotating response options is usually a good idea (see the sketch after this list).
> In Internet surveys, radio buttons work better than drop-down menus.
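A minimal sketch of rotating response options per respondent (the question topic, option labels, and IDs are hypothetical): the key design point is that only the display order changes, while answers are recorded against fixed canonical codes, so rotation never affects your analysis.

```python
import random

# Hypothetical nominal options: canonical code -> label.
OPTIONS = {
    1: "Newspapers",
    2: "Television",
    3: "Social media",
    4: "Radio",
}

def display_order(respondent_id: str) -> list[int]:
    """Rotate the canonical option order by a per-respondent offset."""
    codes = list(OPTIONS)
    offset = random.Random(respondent_id).randrange(len(codes))
    return codes[offset:] + codes[:offset]

# Each respondent starts from a different option, spreading any
# primacy effect evenly across the choices.
for code in display_order("resp-001"):
    print(code, OPTIONS[code])
```

Rotation like this suits nominal options; for ordinal rating scales, keep the points in order and instead consider reversing the scale’s direction for a random half of respondents.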
The ideal question accomplishes three goals:
• It measures the underlying concept it is intended to tap.
• It doesn’t measure other concepts.
• It means the same thing to all respondents.
The following rules help to accomplish this:
Avoid Technical Terms and Jargon. Words used in surveys should be easily understood by anyone taking the survey. Examples of questions that rely on jargon: “Do you support or oppose tort reform?” “Should people held on terror-related charges have the right of habeas corpus?”
Avoid Vague or Imprecise Terms. Usually, it’s best to use terms that will have the same specific meaning to all respondents. For example, it’s not clear what you get when you ask “How important is it that a candidate shares your values?” You might get a more consistent answer if you asked: “How important is it that a candidate shares your religious values?”
Define Things Very Specifically. For example, don’t ask: “What is your income?” A better question would be specific and might ask: “What was your total household income before taxes in 2005?”
Avoid Complex Sentences. Sentences with too many clauses or unusual constructions often confuse respondents. Scales that ask respondents to make complex calculations can cause problems. How easy will it be for a typical person to answer: “Do you think the increase in the rate of immigration, controlling for the economy, is higher or lower than the increase in the rate of crime in your area?”
Provide Reference Frames. Make sure all respondents are answering questions about the same time and place. For example, if you ask: “How often do you feel sad?” some people might provide an answer about their life’s experience, while others might only be thinking about today. Usually, it’s better to provide a reference frame: “How often have you felt sad during the past week?” Don’t ask: “How good is the economy these days?” and assume everyone is talking about the same economy. A better way might be to ask: “How good is the national economy these days?” or “How good is the economy in your community these days?”
Make Sure Scales Are Ordinal. If you are using a rating scale, each point should be clearly higher or lower than the others for all people. For example, don’t ask: “How many jobs are available in your town: many, a lot, some, or a few?” It’s not clear to everyone that “a lot” is less than “many.” A better scale might be: “A lot, some, only a few, or none at all.”
Avoid Double-Barreled Questions. Questions should measure one thing; double-barreled questions try to measure two (or more!) things. For example: “Do you think the president should lower taxes and spending?” Respondents who think the president should do only one of these things might be confused.
Answer Choices Should Anticipate All Possibilities. If a respondent could have more than one response to a question, it’s best to allow for multiple choices. If the categories you provide don’t anticipate all possible choices, it’s often a good idea to include an “Other-Specify” category.
If You Want a Single Answer, Make Sure Your Answer Choices Are Unique and Include All Possible Responses. If you are measuring something that falls on a continuum, word your categories as a range. For example, the following scale misses possible responses: “What punishment should this person receive: no punishment, five years in prison, ten years in prison, twenty years in prison, life in prison, or the death penalty?” A better scale might be worded: “What punishment should this person receive: no punishment, punishment not including jail time, up to five years in prison, more than five and up to ten years in prison, more than ten and up to 20 years in prison, more than 20 years but less than life in prison, life in prison, or the death penalty?”
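A minimal sketch of how half-open numeric ranges keep categories exhaustive and mutually exclusive (the cutoffs echo the example above; the function name is hypothetical, and years alone cannot distinguish the two zero-jail-time categories, so they are merged here):

```python
def sentence_category(years: float) -> str:
    """Map a sentence length in years to exactly one category.
    Each boundary (5, 10, 20) belongs to only one range, and every
    non-negative value falls into some category."""
    if years == 0:
        return "No punishment, or punishment without jail time"
    if years <= 5:
        return "Up to five years in prison"
    if years <= 10:
        return "More than five and up to ten years"
    if years <= 20:
        return "More than ten and up to 20 years"
    return "More than 20 years"

# Boundary values land in exactly one bucket each.
for value in (0, 5, 5.5, 10, 20, 21):
    print(value, "->", sentence_category(value))
```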
Avoid Questions Using Leading, Emotional, or Evocative Language. For example: “Do you believe the US should immediately withdraw troops from the failed war in Iraq?” “Do you support or oppose the death tax?” Sometimes the associations can be more subtle. For example: “Do you support or oppose President Bush’s plan to require standardized testing of all public school students?” Some people might support or oppose this because it is sponsored by President Bush, not because of their opinions toward the merits of the policy.
For more information, see the Harvard University Program on Survey Research Tip Sheet on Question Wording, from which most of this information was adapted.
You are welcome to contact us individually at:
Jaeda Calaway - Information Literacy Instructor and Student Research Support Specialist
217.245.3207
McKenna Jacquemet - Research Services and Information Literacy Librarian
217.245.3117
Bree Kirsch - Director of the Library
217.245.3573