Sunday, November 15, 2009

Evaluation: A Picture Equals a Thousand Words

Wordle.net is a website that lets you build word clouds based on how often certain words appear in text.

This is how the front page of my website, EvalWorks, looks in Wordle.

Imagine using Wordle to analyze qualitative data and display the analysis graphically....


Consulting: Five thoughts

For the attendees at my American Evaluation Association session "Starting and Succeeding as an Independent Consultant": as always, I enjoyed our conversation very much!

http://despair.com/consulting.html
Probably not a great motto. Wouldn't want it on a coffee cup, especially one you give to clients! But worth a laugh....

Please note that many of the following thoughts come from consulting or business books I have read.... Probably most are from Alan Weiss: http://www.summitconsulting.com/

1. A consultant’s job is not to respond, but to anticipate.

2. Emphasize value and results – not activities, outputs, or price.

3. It’s not important whether the number of sales calls increases, but whether the number of sales does.

4. The consulting profession is really the marketing profession.

5. The creation of new opportunities should be low volume, focused, and highly intense: rifle shot – not shotgun.

Tuesday, November 10, 2009

Consulting: How to collaborate with other consultants

Tomorrow I will travel to Orlando for the 2009 American Evaluation Association Annual Conference. http://www.eval.org/eval2009/default.htm One of the presentations I am involved in is a panel titled "Starting and Succeeding as an Independent Evaluation Consultant". Here's one of the things I will tell the audience, many of whom are new or young evaluators who have not worked as independent evaluators:

Statement: Collaborations are a good way to understand what it means to be an independent evaluation consultant.

However, I always present this as the caveat to the statement above:

Caveat: Collaboration implies direction both ways.

I always present this caveat because people sometimes misunderstand what is meant by collaboration. I've had people tell me that if I write the proposal, they are happy to do most of the work (and, I guess, get most of the money). In what way is that a collaboration, and why would I do that? What incentive is there for me to "collaborate" with this person? It feels more as if that person is trying to ride on my coattails. While I am flattered that he or she thinks that approach will go far, he or she will have to think again.

Before suggesting that someone collaborate with you, it may be important to ask, "What do I have to offer the other person?" before asking, "What does that other person have to offer me?" Then make an offer no one can refuse, but one that recognizes both your own value and the value of the other person instead of minimizing both. If you cannot answer why I should want to collaborate with you, then why would I?

If I were to graph the value of collaborations on an x-y axis, where x represents what Evaluator X brings to the collaboration and y represents what Evaluator Y contributes to the evaluation, the "value" is maximized when both x and y values are the highest. x and y do not have to be equal, but both should be high to see a benefit in collaboration.

One can see this as well by viewing a square divided into four equal parts. In one quadrant the contributions of both evaluators are low, and thus the overall value of the collaboration is low. In the diagonally opposite quadrant both evaluators contribute greatly to the evaluation and the value is high. The other two quadrants represent the cases where the contribution is unequal (low from one, high from the other), and thus the value of the collaboration is not maximized.

Thus both evaluators in a two-person collaboration need to contribute greatly to the process for the collaboration to be of great value.

I believe I bring value to collaborations and enjoy the opportunity to work with someone who also brings some other value to a collaboration, whether it is a new method, new way to engage stakeholders, new means of reporting data, etc. These persons need not be experts and can even be new to evaluation, as long as they can define what value they are able to contribute. Collaborations are great ways to learn, but beware of them being mentorships in disguise.

Monday, November 2, 2009

Evaluation: Evaluation Reports 2.0

Eventually an evaluator must write an evaluation report. While this may seem very apparent, I think how to write one is less apparent, especially how to write an interesting and informative one. A good starting place is to read different evaluation reports to get a feel for the way different authors structure their reports and for their writing styles. Unfortunately, what I think you may find is that too often evaluation reports are dry recountings of the methodologies and data collection activities undertaken and their results, with little effort made to pull the results together and show what they mean from a combined standpoint. Reports instead include a lot of raw data, or data that are never addressed or are contained in overly artful displays, included as if quantity were supposed to imply quality. Additionally, in many cases, evaluation reports fail to identify, much less answer, the evaluation questions that I presume guided the authors' efforts.

Luckily, there are two very good tools out there that are helpful for evaluators of all levels. The first is a checklist developed by Dr. Gary Miron (professor at Western Michigan University and former Chief of Staff at The Evaluation Center at WMU):

http://www.wmich.edu/evalctr/checklists/checklistmenu.htm

The "Evaluation Report Checklist," as it is called, can be used as a "tool to guide a discussion between evaluators and their clients regarding the preferred contents of evaluation reports and a tool to provide formative feedback to report writers". It provides a great outline of the eight main sections in an evaluation report (Title Page, Executive Summary, Table of Contents, Introduction and Background, Methodology, Results, Summary and Conclusion, References) and the various things that should be included in each. This checklist can help evaluators structure their reports and identify the strengths and weaknesses of their reports. However, as Dr. Miron notes, "Evaluation reports differ greatly in terms of purpose, budget, expectations, and needs of the client". Thus one may need to weigh the checkpoints within sections, and the relative importance and value of each section, when reviewing one's own writing (or someone else's).

However, before you download this checklist (as I hope you will), begin by reading the great article "Unlearning Some of our Social Scientist Habits" by Jane Davidson (independent consultant and evaluator extraordinaire), an article that, frankly, I think has been overlooked despite its valuable contributions.

http://davidsonconsulting.co.nz/index_files/pubs.htm

Among other great advice for evaluators (for example, about using models or theories but not using them evaluatively, and about leaping to measurement too quickly), she addresses these common pitfalls when reporting evaluation findings: (1) reporting results separately by data type or source and (2) ordering evaluation report sections like a Master’s thesis. This entertaining article (especially the parts about evaluation reporting) really makes a case for using the questions that guide the evaluation to guide the report as well. Using the Evaluation Report Checklist in conjunction with some of Dr. Davidson's suggestions has increased the quality and utility of my evaluation reports and should do the same for yours.

Sunday, November 1, 2009

Methodology: Logistic Regression and Relative Risk - Part 2

A friend of mine read my blog post on Logistic Regression and Relative Risk and asked a few questions I'll try to answer here.

In that blog I gave the following example:
Suppose that we have a group of students of which some are classified as ADD. Of 80 boys, 13 were classified as ADD and 67 were not. Of 100 girls, 6 were classified as ADD and 94 were not. The odds of a boy being classified as ADD (as the logistic regression output would report) are 13/67 = .194; the odds of a girl being so classified are 6/94 = .064.

I also provided information on how to calculate the estimated relative risk:
Estimated Relative Risk = Odds Ratio / ((1-Pr) + (Pr * Odds Ratio))
where Pr is the proportion of non-treated persons that exhibit the outcome of interest.
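
To make the arithmetic easy to play with, here is a minimal sketch of that formula in Python (the function and variable names are my own, purely for illustration):

def estimated_relative_risk(odds_ratio, pr_reference):
    # Convert an odds ratio into an estimated relative risk, where
    # pr_reference is the proportion of the non-treated (reference)
    # group that exhibits the outcome of interest.
    return odds_ratio / ((1 - pr_reference) + pr_reference * odds_ratio)

# For example, the estimated relative risk of a boy (relative to a girl)
# being classified as ADD, using the odds from the example above:
print(estimated_relative_risk(0.194 / 0.064, 6 / 100))   # about 2.70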

My friend asked about the following scenarios:

Can the estimated relative risk ever be 0?

Yes. This will only happen when the Odds Ratio is 0, or in this case, when the odds of a boy being classified as ADD are 0 (as is the probability), meaning no boys were classified as ADD.

Can the relative risk ever be 1?

Yes. This will only happen when the Odds Ratio is 1, that is, when the odds of a boy being classified as ADD equal the odds of a girl being so classified, meaning the two groups are at equal risk.

Can the relative risk ever be negative?

Thankfully not. Since the odds ratio and Pr can never be negative, the formula can never produce a negative value.
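
As a quick numeric check of the first two answers, using the estimated_relative_risk sketch above (again, an illustrative function of my own, not from the original post):

print(estimated_relative_risk(0.0, 6 / 100))   # 0.0: an odds ratio of 0 gives an estimated relative risk of 0
print(estimated_relative_risk(1.0, 6 / 100))   # 1.0: an odds ratio of 1 gives an estimated relative risk of 1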

Can the relative risk ever be between 0 and 1?

Yes. Let's look at the case of the relative risk of a girl (relative to a boy) being classified as ADD. It is:
Odds Ratio / ((1-Pr) + (Pr * Odds Ratio)), where Odds Ratio = .064/.194 = .3299
and Pr = 13/80 = .1625, the proportion of boys classified as ADD (boys are the reference group here)

Estimated Relative Risk = .3299 / ((1-.1625) + (.1625*.3299)) = .3299 / (.8375 + .0536) =
.3299/.8911 = .3702

Thus girls are about .37 times as likely as boys to be classified as ADD.
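
To double-check this, here is the same calculation in Python using the exact counts rather than the rounded odds (reusing the illustrative estimated_relative_risk function sketched earlier):

boy_odds = 13 / 67
girl_odds = 6 / 94
odds_ratio = girl_odds / boy_odds                       # about .33
pr_boys = 13 / 80                                       # risk in the reference group (boys)
print(estimated_relative_risk(odds_ratio, pr_boys))     # about .369
# For comparison, the relative risk computed directly from the counts:
print((6 / 100) / (13 / 80))                            # about .369
# The small difference from the .3702 above comes from rounding the odds
# to three decimal places before taking their ratio.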