Thursday, October 4, 2012


Nothing makes me more excited about blogging than when I have the opportunity to share new evaluation ideas. “Actionable Evaluation Basics: Getting succinct answers to the most important questions” by Jane Davidson provides insights, many of them new, about making evaluation more meaningful to stakeholders.

In this minibook Dr. Davidson defines actionable evaluation as: 
  • clearly relevant to the key actions, decisions, and thinking of those the evaluation needs to inform;
  • going right to the heart of what is really important without getting lost in the details;
  • favoring approximate answers to important questions over accuracy to four decimal places on trivia;
  • resisting being lured into a focus on the outcomes that are most easily measured;
  • presenting findings in a way that is simple, but not simplistic;
  • useful at both strategic and practical (or operational) levels;
  • influencing and clarifying thinking, action, and decision-making; and
  • providing insights that help people figure out what actions to take.
To get to actionable evaluation, she presents six critical elements and then details how an evaluator addresses each of them in an evaluation:
  1. a clear purpose for the evaluation;
  2. the right stakeholder engagement strategy;
  3. important, big-picture evaluation questions to guide the whole evaluation;
  4. well-reasoned answers to the big-picture questions, backed by a convincing mix of evidence;
  5. succinct, straight-to-the-point reporting that doesn't get lost in the details; and
  6. answers and insights that are actionable, that we can do something with.
Some readers of this minibook will assert that they already do actionable evaluation and that nothing new is presented here. I would argue that few evaluation studies I have read would qualify as actionable evaluation, for two main reasons: evaluators measure what they can measure rather than risk finding “approximate” answers to the right questions, and evaluators generate only evidence, not evaluative conclusions, telling us "what's so" (e.g., what the outcomes are) but not "so what" (how good, valuable, or worthwhile the outcomes are). Make your evaluation truly a measure of worth, merit, or value, and more actionable (i.e., usable), by reading this short publication and trying the simple methodologies it presents. It may be the best $3.99 you ever spend!


Sunday, January 15, 2012

Dealing with Small Sample Sizes


The most recent ATE newsletter developed by Western Michigan University for NSF Advanced Technological Education projects and centers has a nice article about how to handle small sample sizes (along with some other great articles).

Small sample sizes are something all evaluators face at some point in their professional lives. And while evaluators may want to ignore small sample sizes and treat such data as they would data from larger samples, they might instead consider small sample sizes as opportunities! Specifically, an opportunity to collect qualitative data that can substantiate results and potentially provide more powerful stories than numbers alone.

Other recommendations (these from Eboni Zamani-Gallaher) include:

•  Try to gather data on everyone by using a census, rather than a sample. Just remember to limit data analysis to descriptive statistics, rather than inferential.
•  Be upfront about the limitations, and document your sampling strategies, decisions, and criteria.
•  See it as an opportunity to keep evaluation costs low, recognizing that a large study without sufficient resources can under-power results.

Additionally:

•  Hesitate to report percentages, or avoid them altogether; report fractions instead, as percentages can be misleading and may overstate results when samples are small.

•  Do not conduct quantitative tests of statistical inference when their data requirements are at best ignored and at worst violated.

•  If small sample sizes are the result of missing data, then there may be other possibilities for dealing with the issue. One option is imputing missing data, for which there are multiple methods, including mean substitution, regression, and more intricate approaches such as multiple imputation using virtual datasets.
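For evaluators comfortable with a bit of Python, here is a minimal sketch of two of the imputation approaches mentioned above: mean substitution with pandas and an approximation of multiple imputation using scikit-learn's IterativeImputer. The data frame and column names ("pretest", "posttest") are invented purely for illustration.

    # Rough sketch of two imputation approaches; the data are hypothetical.
    import numpy as np
    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    df = pd.DataFrame({
        "pretest":  [12, 15, np.nan, 18, 14, np.nan, 16],
        "posttest": [20, 22, 19, np.nan, 21, 23, 25],
    })

    # 1. Mean substitution: simple, but it shrinks variance, so report it as a limitation.
    mean_imputed = df.fillna(df.mean(numeric_only=True))

    # 2. Multiple imputation, approximated here by running a model-based imputer
    #    several times with different random seeds and pooling the results.
    imputed_versions = []
    for seed in range(5):
        imputer = IterativeImputer(random_state=seed, sample_posterior=True)
        completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
        imputed_versions.append(completed)

    # Pool by averaging across the completed datasets (a full treatment would
    # also combine the variances; this sketch only pools the point estimates).
    pooled = pd.concat(imputed_versions).groupby(level=0).mean()
    print(mean_imputed)
    print(pooled.mean())

Whichever method you use, document it and note it among the evaluation's limitations.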

Monday, January 9, 2012

Perfect versus Interesting: Where should evaluation sit?


I read a lot of different blogs as part of my personal and professional learning, and last year I began reading Seth Godin's blog. An entrepreneur, marketer, and author, Godin wrote the following in April 2011:

"There are two jobs available to most of us:

You can be the person or the organization that's perfect. The one that always ships on time, without typos, that delivers flawlessly and dots every i. You can be the hosting company or the doctor that might be boring, but is always right.

Or you can be the person or the organization that's interesting. The thing about being interesting, making a ruckus, creating remarkable products and being magnetic is that you only have to be that way once in a while. No one is expected to be interesting all the time.

When an interesting person is momentarily not-interesting, I wait patiently. When a perfect organization, the boring one that's constantly using its policies to dumb things down, is imperfect, I get annoyed. Because perfect has to be perfect all the time."

This post made me wonder where evaluation and evaluators fit. Should evaluation and evaluators be perfect, or should we be interesting?

I would argue that good evaluation is perfect, whereas better evaluation is interesting, and thus most likely imperfect some of the time. Such imperfection is simply the cost that comes with taking risks.

Let me try to explain. One example of a huge hit has been evaluators' forays into data visualization and reporting. Attendees at the American Evaluation Association meeting in 2011 could attend multiple sessions on data visualization intended to improve clients' understanding of, interest in, and use of data and evaluation. The evaluators presenting these materials had all taken risks with their evaluations and clients by using new techniques to increase stakeholder buy-in. Examples included trying different reporting formats, using data dashboards, and creating new data visualizations.

Of course, there is always a flip side to the coin. I recently took a risk when sharing some data with clients by putting together a Prezi presentation, thinking, "why not try something different from PowerPoint?" While they loved the presentation, they were much more interested in what they could do with Prezi than in the actual data I presented. It took a lot of time and effort to steer the conversation back to the data!

But my imperfection had a few benefits. It reminded me of the value of my data message and the need to ensure that nothing obscures that message when sharing it with clients and/or stakeholders. It also allowed my clients to see that I was taking risks to reach them better, something they appreciated and knew I would do when helping them with their own outreach and messaging. So the old saying, 'nothing ventured, nothing gained,' held true in this case.

So the moral of the story, for me at least, is to try to be more interesting, even at the risk of not being perfect. I think risk is necessary to growing evaluation and meta-evaluation and truly making evaluation a trans-discipline, to use a term from Michael Scriven.

Wednesday, September 28, 2011

What I'm Reading

I'm always interested in what evaluators are reading, so I thought I would share below some of the books and blogs I’ve been reading lately. I would love to know what you are reading and how it is improving your evaluation and consulting practices. Please leave me a post!

Books

I’m constantly referring to this book for ideas – definitely a classic: Campbell, D. & Stanley, J. (1963). Experimental and quasi-experimental designs for research. New York, NY: Wadsworth Publishing.

Okay – not a straight read – but another go-to book: Scriven, M. (1991).  Evaluation thesaurus (4th Edition). Thousand Oaks, CA: Sage Publications, Inc.

Very useful for helping me understand the history of evaluation: Alkin, M. (2004).  Evaluation roots: Tracing theorists' views and influences. Thousand Oaks, CA: Sage Publications, Inc.

Because almost all evaluators are consultants as well: Block, P. (2011).  Flawless consulting: A guide to getting your expertise used (3rd Edition).  Hoboken, NJ: John Wiley & Sons.

Blogs

Stephen Few on data visualization (perceptualedge.com) and Nancy Duarte on presenting (blog.duarte.com/), because both are central to my evaluation work.

For Pleasure

A beautiful and haunting “evaluation” of the Mann Gulch Fire. This is probably the 5th time I’ve read it: Maclean, N. (1993). Young men and fire. Chicago, IL: University of Chicago Press.

What I Hope to Read Next

Michael Quinn Patton - Essentials of Utilization-Focused Evaluation. Just arrived!

Sunday, August 28, 2011

Analyzing and Reporting Likert and Numerical Data

Like most evaluators, I gather lots of Likert and numerical data.  Over the years I have changed the way I report these data so that my clients can better grasp what the results mean.

Likert Data

Almost all of my Likert data are numerically anchored. So, for example, Strongly Disagree is anchored at 1 and Strongly Agree is anchored at 5, with all other qualitative options anchored numerically in between. On these occasions I tend to report the sample size (n), minimum value, maximum value, mean, and standard deviation, as well as the percent of persons who Agree or Strongly Agree (i.e., those who selected a 4 or 5). I can report the mean and standard deviation because I have numerically anchored my Likert scale. This allows my clients to see right away how persons responded, on average. By reporting the standard deviation, I am able to show them how “spread out” the data are, another piece of information for my clients. Last, by reporting the percent of persons selecting a 4 or 5, I provide my clients with an additional data point to consider.
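To make that summary concrete, here is a small sketch of the same calculations in Python/pandas rather than a spreadsheet; the responses are made up for illustration.

    # Minimal Likert summary: n, min, max, mean, SD, and percent selecting 4 or 5.
    import pandas as pd

    # 1 = Strongly Disagree ... 5 = Strongly Agree (invented responses)
    responses = pd.Series([4, 5, 3, 4, 2, 5, 4, 4, 3, 5])

    summary = {
        "n": responses.count(),
        "min": responses.min(),
        "max": responses.max(),
        "mean": round(responses.mean(), 2),
        "sd": round(responses.std(), 2),                       # sample standard deviation
        "% agree (4 or 5)": round((responses >= 4).mean() * 100, 1),
    }
    print(pd.Series(summary))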

Numerical Data

Like most evaluators reporting numerical data, I report the sample size (n), minimum value, and maximum value, as well as the mean and standard deviation. Lately I’ve also begun reviewing the median value in a dataset and reporting that value as well. Oftentimes this data point is very revealing when compared to the mean, especially if the two are very different in value. Explore what this means for your dataset and share it with your client.
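As a quick illustration of why the median is worth reporting, here is a hypothetical example in Python/pandas in which one outlier pulls the mean well above the median.

    # Hypothetical data: hours of program participation, with one large outlier.
    import pandas as pd

    hours = pd.Series([2, 3, 3, 4, 5, 4, 3, 40])

    print("n:     ", hours.count())
    print("min:   ", hours.min())
    print("max:   ", hours.max())
    print("mean:  ", round(hours.mean(), 1))   # inflated by the outlier
    print("median:", hours.median())           # closer to the typical participant
    print("sd:    ", round(hours.std(), 1))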

ALL DATA

Even better than presenting these data in table format, consider providing them in graphical form. Excel allows you to create most graphs you might need. Using text boxes and shapes such as arrows, I can usually point out the mean and median values and the interval bounded by the standard deviation.
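If you work in Python rather than Excel, here is a rough sketch of the same idea using matplotlib, with the mean, the median, and the interval bounded by one standard deviation annotated directly on the graph; the data are invented.

    # Annotated histogram of invented Likert-style ratings.
    import matplotlib.pyplot as plt
    import pandas as pd

    scores = pd.Series([2, 3, 3, 4, 5, 4, 3, 5, 4, 2, 3, 4])
    mean, median, sd = scores.mean(), scores.median(), scores.std()

    fig, ax = plt.subplots()
    ax.hist(scores, bins=range(1, 7), edgecolor="white")

    # Shade the interval bounded by one standard deviation around the mean.
    ax.axvspan(mean - sd, mean + sd, alpha=0.2, label="mean ± 1 SD")
    ax.axvline(mean, linestyle="--", label=f"mean = {mean:.2f}")
    ax.axvline(median, linestyle=":", label=f"median = {median:.2f}")

    ax.set_xlabel("Rating (1 = Strongly Disagree, 5 = Strongly Agree)")
    ax.set_ylabel("Number of respondents")
    ax.legend()
    plt.tight_layout()
    plt.show()

The same annotations that I add with text boxes and arrows in Excel are handled here with reference lines and a legend, which keeps the graph easy to regenerate when the data change.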