Musings on rating scales

It’s the Christmas break, I’ve stopped work and I’m officially on holiday. I’m relaxing and enjoying myself and definitely not thinking about research issues. Except… yesterday while we were having our lunch my children mentioned that they had filled in a survey at school before they broke up for the holidays. Naturally, my ears pricked up. What was the survey about? ‘Stuff’, apparently. What sorts of questions were there? ‘Saying if you agreed with stuff.’ After several more questions that resulted in various answers concerning ‘stuff’, I determined that they were asked to tick boxes on a rating scale and that they had ticked the neutral box if the question ‘wasn’t relevant to me’. Hmmm. So, some of the questions weren’t relevant to everyone answering the survey, yet they still had to tick a box. This led me to muse on rating scales, even though I am, as I said earlier, on holiday.

Rating scale questions are common in surveys and ask the participant to say to what extent they agree or disagree with a statement or a series of statements, using strongly agree, agree, neutral, disagree and strongly disagree as the answer options. The answer options may also be tailored to ask participants how they rate a particular item, for example from 1 to 5 where 1 is very poor, 2 is poor, 3 is satisfactory, 4 is good and 5 is very good. No matter how the question and answer options are designed, the point is that the rating scale gives gradations of opinion rather than a simple yes/no.

There are usually an odd number of answer options on a rating scale, so there is often an option in the middle of the scale that is, in some way, neutral. It’s that neutral box in the middle of a rating scale that I want to consider first. It’s neither agree nor disagree. It’s ‘I can’t make up my mind’ or ‘I don’t know’ or worryingly ‘This question doesn’t apply to me but I have to tick a box, so I’ll tick this one’. There are, therefore, a couple of problems with this box – first, the researcher doesn’t know why it has been ticked and second, the researcher is then faced with the problem of how to include the data in the analysis. Do you remove these responses from the analysis? Or do you include them with the positive responses or include them with the negative responses? I’ve seen all three approaches used with appropriate caveats and justifications.
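If the responses were coded numerically, the three approaches to the neutral box might look like the following minimal Python sketch. The coding scheme (1 = strongly disagree, 3 = neutral, 5 = strongly agree) and the response data are invented purely for illustration:

```python
# Hypothetical responses to a 5-point agreement scale, coded 1-5
# (1 = strongly disagree, 3 = neutral, 5 = strongly agree).
responses = [5, 4, 3, 2, 3, 5, 1, 3, 4]

def drop_neutral(data, neutral=3):
    """Approach 1: exclude neutral responses from the analysis."""
    return [r for r in data if r != neutral]

def neutral_as_agree(data, neutral=3):
    """Approach 2: count neutral responses with the positive side."""
    return [4 if r == neutral else r for r in data]

def neutral_as_disagree(data, neutral=3):
    """Approach 3: count neutral responses with the negative side."""
    return [2 if r == neutral else r for r in data]

print(len(drop_neutral(responses)))                          # 6 responses remain
print(sum(r >= 4 for r in neutral_as_agree(responses)))      # 7 'agree' answers
print(sum(r <= 2 for r in neutral_as_disagree(responses)))   # 5 'disagree' answers
```

The point of spelling it out is that the three approaches give noticeably different headline figures from identical data, which is why whichever approach is chosen needs an explicit caveat and justification.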

There are some potential solutions to these problems. The easiest solution to the first problem is to include a ‘Don’t know’ or ‘N/A’ answer option. (But if your questions are not applicable to the participants, why are you asking them?) This should result in participants being able to tick a box truthfully. A potential, if controversial, solution to the second problem of how to analyse the data is to design the question to have an even number of options and therefore no neutral option. This forces participants either to agree or disagree with the statements or to say definitively that they think something is good or bad. It allows for no grey areas. While this would eliminate the uncertainty associated with a neutral option, it does put a lot of pressure on participants. Participants may feel very uncomfortable having to make a decision rather than opting for the central option. They may not wish to admit to holding particular views, even to themselves, and may not trust the researcher’s promises of confidentiality and anonymity. If participants feel uncomfortable in this way, they may simply skip the question entirely, which then yields no data. So this is not a solution without its own problems.

Another issue with rating scales is how many answer options to include. Five is very common, but I have seen up to 10 (sometimes including zero so that five becomes the neutral option). What is the value in a larger scale? On a 5-point scale it’s easy to make the difference between the options clear, so participants know exactly what each one means. On larger scales I’m not sure the difference between adjacent options is as clear, and the researcher may end up aggregating the responses into those above and below the neutral point (if there is one) anyway. Is it easier for a participant to judge something on a scale that goes up to 10? What is the qualitative difference between 6 and 7? Or between 3 and 4? Will participants be able to remember what they felt a 6 was in question 2 and apply the same distinction to a 6 in question 14? Does a 7 mean the same to all participants? There are elements of personal judgement involved here. I feel longer scales are, therefore, problematic and that it’s best to stick to shorter scales for clarity and ease of completion.
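The aggregation I mention, collapsing a longer scale into the bands above and below the neutral point, can be sketched in a few lines of Python. The 0–10 coding with 5 as the neutral point and the ratings themselves are assumptions made for the example:

```python
from collections import Counter

def collapse(score, neutral=5):
    """Collapse a 0-10 rating (5 = neutral) into three broad bands."""
    if score < neutral:
        return "negative"
    if score > neutral:
        return "positive"
    return "neutral"

# Hypothetical ratings from six participants on a 0-10 scale.
ratings = [7, 2, 5, 9, 4, 6]
counts = Counter(collapse(r) for r in ratings)
print(counts["positive"], counts["neutral"], counts["negative"])  # 3 1 2
```

Notice that once the data are collapsed like this, the fine distinctions between a 6 and a 7 vanish entirely, which rather undermines the case for having offered them in the first place.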

My final concern with rating scales is answer options that don’t match up grammatically with the statement that precedes them. If the opening rubric asks to what extent one agrees or disagrees with the following statements, the statements must be written such that they complete the opening statement and one can agree or disagree with them, for example:

Alison’s blog post about rating scales is:

Strongly disagree / Disagree / Neither agree nor disagree / Agree / Strongly agree

The following example doesn’t work, because not all the answer options follow grammatically from the opening statement.

Alison’s blog post about rating scales offers:

Strongly disagree / Disagree / Neither agree nor disagree / Agree / Strongly agree

I’ve seen this kind of grammatical mismatch quite a few times. Participants can probably still work out what is meant and complete their response, but researchers should always try to make responding as easy as possible. Perhaps I am the only one who worries about this, but some participants may be confused and not answer, or may sigh at the grammatical error and wonder about the quality of the research project.

Next time you design a rating scale question for your survey, consider these questions. How many points should I include in the scale – an even or an odd number? Five or more? Are all the questions relevant to participants? Do I need a ‘don’t know’ option alongside the scale? Do my questions make sense? How will I use the answers in the neutral box in my analysis, and how will I justify that use?

Finally, one last tip: don’t quiz your children about the survey they completed at school yesterday. They won’t be able to remember anything useful.
