Like most evaluators, I gather lots of Likert and numerical data. Over the years I have changed the way I report these data so that my clients can better grasp what the results mean.

*Likert Data*

Almost all of my Likert data are numerically anchored. So, for example, Strongly Disagree is anchored at 1 and Strongly Agree at 5, with all other qualitative options anchored numerically in between. On these occasions I tend to report the sample size (n), minimum value, maximum value, mean, and standard deviation, as well as the percent of persons who Agree or Strongly Agree (i.e., those who selected a 4 or 5). I can report the mean and standard deviation because I have numerically anchored my Likert scale. This allows my clients to see right away how persons responded, on average. By reporting the standard deviation I am able to show them how “spread out” the data are, another point of information for my clients. Last, by reporting the percent of persons selecting a 4 or 5, I provide my clients an additional data point to consider.
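The summary I describe above is easy to compute directly. Here is a minimal sketch using Python's built-in `statistics` module; the response values are invented for illustration, and the 1–5 anchoring matches the scale described above.

```python
from statistics import mean, stdev

# Hypothetical Likert responses, anchored 1 (Strongly Disagree) to 5 (Strongly Agree)
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

n = len(responses)
summary = {
    "n": n,
    "min": min(responses),
    "max": max(responses),
    "mean": round(mean(responses), 2),
    "sd": round(stdev(responses), 2),
    # Percent who Agree or Strongly Agree (i.e., selected a 4 or 5)
    "pct_agree": round(100 * sum(1 for r in responses if r >= 4) / n, 1),
}
print(summary)
```

For these invented data the mean is 3.9 with a standard deviation of about 0.99, and 70% of respondents selected a 4 or 5 — exactly the combination of statistics described above.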

*Numerical Data*

Like most evaluators reporting numerical data, I report the sample size (n), minimum value, maximum value, mean, and standard deviation. Lately I’ve also begun reviewing the median value in a dataset and reporting that as well. Oftentimes this data point is very revealing when compared to the mean, especially when the two differ substantially. Explore what that difference means for your dataset and share it with your client.
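The mean–median comparison is easiest to see with a skewed dataset. This is a small sketch with invented values, where a single extreme observation pulls the mean well above the median:

```python
from statistics import mean, median

# Hypothetical numerical data with one extreme value skewing the mean
values = [120, 135, 140, 150, 155, 160, 900]

print(f"mean:   {mean(values):.1f}")   # pulled upward by the outlier
print(f"median: {median(values):.1f}")  # resistant to the outlier
```

Here the mean is about 251.4 while the median is 150.0 — a gap that tells a client the “average” is not typical of most observations.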

**ALL DATA**

Even better than presenting these data in table format, consider providing them in graphical form. Excel can produce most of the graphs you might need. Using text boxes and shapes such as arrows, I can usually point out the mean and median values and the interval bounded by the standard deviation.

## 3 comments:

I like to look at the data using both averages and percentages. Just the other day I had a bunch of data where all the averages on items and scales were coming out to about 3 on a 5-point scale. However, using percentages we were able to see that only 50% of people tended toward the 'agree' side, and another 20% were 'neutral'. That characterization paints a pretty different picture for stakeholders, and I think it's much more useful — after all, that's a decent chunk of people who weren't so sure or were in disagreement with the program's strategy. Averages can hide all sorts of interesting distributions!
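The commenter's point — that identical averages can mask very different distributions — can be demonstrated with two invented response sets that share the same mean but tell different stories:

```python
from statistics import mean

# Two hypothetical items, both averaging 3.0 on a 5-point scale
consensus = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]  # everyone neutral
polarized = [1, 1, 1, 5, 5, 5, 3, 3, 2, 4]  # spread across the scale

for name, data in [("consensus", consensus), ("polarized", polarized)]:
    pct_agree = 100 * sum(1 for r in data if r >= 4) / len(data)
    print(f"{name}: mean = {mean(data):.1f}, {pct_agree:.0f}% agree")
```

Both items report a mean of 3.0, yet one has 0% agreement and the other 40% — reporting only the average would hide that difference entirely.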

Hi Amy- just subscribed to your blog and have it on my google homepage.

I also use Likert-type scales a lot and like your approach for mining the nuances that are often not present in the mean.

One thing I was curious about: I prefer to use a six-point Likert-type scale so that "neutral" is not an option — those "in the middle" must decide whether they "slightly agree" or "slightly disagree".

What are your thoughts?

The problem I have with interpreting Likert scales is with reporting the significance of a mean of, say, 1.12 when I have anchored 1 to Strongly Agree and 5 to Strongly Disagree, bearing in mind that I have other numerical anchors in between 1 and 5. I mostly prefer to use the mode, which is simpler and gives me a single digit that I have already defined in the scale.
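The mode-based approach this commenter prefers is straightforward to compute. A minimal sketch, using invented responses and the reversed anchoring the commenter describes (1 = Strongly Agree, 5 = Strongly Disagree):

```python
from statistics import mean, mode

# Hypothetical responses on a reversed scale: 1 = Strongly Agree, 5 = Strongly Disagree
responses = [1, 1, 1, 2, 1, 1, 3, 1, 1, 2]

print("mean:", round(mean(responses), 2))  # a fractional value with no defined anchor
print("mode:", mode(responses))            # the single most-selected anchor
```

Here the mean of 1.4 falls between two anchors and so has no direct label, while the mode of 1 maps cleanly back to "Strongly Agree" — which is the commenter's argument for reporting it.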
