Friday, April 1, 2011

Evaluation: Thinking Evaluatively

Recently there was a discussion on AEA's listserv EvalTalk about what it means to think evaluatively. Multiple people weighed in on the topic, but it surprised me how few seemed to grasp the distinction between simply using evidence and using evidence to make an estimation of merit, worth, or value. In essence, it's the same conundrum I've found in purported evaluation reports that detail "what is" without going one step further to address "what is the value (of what is)".

However, a few people did, it seems to me, accurately identify what is meant by thinking evaluatively. One response I really liked came from Eileen Stryker, who wrote, "It seems to me that thinking evaluatively is about how we arrive at or account for judgments about value or quality. Evaluative thinking means being tuned in to value judgments people make (e.g., listening for such language as: that's good, he's doing a good job, the program is working, they're really getting better, and words like effective, quality, good, bad, better, improving, etc.), and questioning how those judgments were arrived at and what evidence may exist to substantiate the value claim."

Eric Weir strongly agreed with Eileen's definition and added: "[Evaluative thinking] is either using evidence to support value judgments or assessing the extent to which value judgments are supported by evidence. ... Evidence does not support or undermine judgments by itself. Arguments are needed to connect the evidence to the judgment."

Lastly, Bob Williams summed it up nicely as "evaluative thinking is the informed judgment of value, merit or worth".

This was an incredibly interesting discussion, and it made me wonder a few things:
Why do we, as evaluators, so often seem insecure about evaluatively assessing outcomes?
What does this say about evaluation training and the need for more emphasis on teaching people to think evaluatively?
Are there differences in how evaluatively people think based on training, evaluation field, years of experience, and so on?

I'd love to know what others think about these questions. Feel free to comment!

3 comments:

KT said...

I wonder if it comes down to the struggle with the idea of research versus evaluation? Research is (traditionally?) more about reporting on and making sense of what's happening, without making any judgments. Evaluation of course draws on many similar methodologies but has that added or different mandate, as you and others have pointed out, to make a judgment.

Mohamad Hasan Mohaqeq Moein said...

Salaam,

Why don't you participate directly in the discussion mentioned?

http://www.aime.ua.edu/cgi-bin/wa?A2=ind1104a&L=evaltalk&T=0&F=P&S=&P=2563

Best

Moein

Kelci Price said...

One of the challenges I've found in my work is that many evaluators don't want to draw evaluative conclusions in reports. Sometimes this is because they want to encourage 'evaluative thinking' in stakeholders by presenting the evidence and then having stakeholders assess it and craft conclusions and evaluative judgments themselves. A plus of this approach is that it allows the 'evaluation' to come from the stakeholders themselves, so it reflects their point of view with regard to value and worth. Some minuses are that stakeholders are not always very good at crafting logical arguments based on evidence, they often fail to diagnose major programmatic issues suggested by the data, and it puts a large burden on stakeholders to spend lots of time engaging with the evidence (time and effort they may not put in, in which case the evaluation report won't be useful). My current preference is to include as many evaluative conclusions as appropriate, but to leave room for stakeholders to draw their own (especially where their expertise in assessing the meaning or implications of the evidence exceeds mine). Providing structured time for stakeholders to discuss findings and conclusions is key to promoting evaluative thinking.