Monday, May 31, 2010

Consulting: Test Yourself

Seth Kravitz was recently a guest writer for the Harvard Business Review website. In his article he identified twenty reality-check statements you might want to test yourself against when deciding whether to become an entrepreneur (think consultant). Warning: as this was written in response to another blog about consulting by someone wearing rose-colored glasses, these are relatively negative statements. However, they point to the other side of consulting: it isn't always easy to be a consultant. As one friend noted, "It's the best 80 hours a week you'll ever work," and those 80 hours may be tedious, frustrating, non-paying, and yet necessary. For me, consulting has been very rewarding personally and professionally, and I hope to continue as a consultant for a long time. But consulting is also scary at times, and I appreciate someone willing to bring conversations about consulting and entrepreneurship back to reality.

Here are Seth's statements. To read more, please see his article: 20 (More) Reality-Checking Questions for Would-Be Entrepreneurs

1. I am willing to lose everything.
2. I embrace failure.
3. I am always willing to do tedious work.
4. I can handle watching my dreams fall apart.
5. Even if I am puking my guts out with the flu and my mother passed away last week, there is nothing that will keep me from being ready to work.
6. My relationship/marriage is so strong, nothing work-related could ever damage it.
7. My family doesn't need an income.
8. This is a connected world and I don't need alone time. I want to be reachable 24/7 by my employees, customers, and business partners.
9. I like instability and I live for uncertainty.
10. I don't need a vacation for years at a time.
11. I accept that not everyone likes my ideas and that it's quite likely that many of my ideas are garbage.
12. If I go into business with friends or family, I am okay with losing that relationship forever if things end badly.
13. I don't have existing anxiety issues and I handle stress with ease.
14. I am willing to fire or lay off anyone no matter what — how good of a friend they are, if they are my own sibling, if they just had a baby, if they have worked with me for 20 years, if their spouse also just lost their job, if I know they might end up homeless, if they have cancer but no outside medical insurance, or any other horrible scenario millions of bosses and HR people have faced countless times.
15. I am okay with being socially cut off and walking away from my friends when work beckons.
16. I love naysayers and I won't explode or give up when a family member, friend, customer, business associate, partner, or anyone for that matter tells me my idea, product, or service is a terrible idea, a waste of time, will never work, or that I must be a moron.
17. I accept the fact that I can do everything right, can work 70 hours a week for years, can hire all the right people, can arrange amazing business deals, and still lose everything in a flash because of something out of my control.
18. I accept that I may hire people that are much better at my job than I am and I will get out of their way.
19. I realize and accept that I am wrong ten times more than I am right.
20. I am willing to walk away if it doesn't work out.

Friday, May 28, 2010

Evaluation - Esther Duflo and Experimental Design

Great article by Ian Parker in the May 17 New Yorker about Esther Duflo: economist, MacArthur genius, and, most recently, recipient of the John Bates Clark Medal. The article discusses her work with the Jameel Poverty Action Lab (J-PAL), a research network specializing in randomized evaluations of social programs. Very thought-provoking, as it makes me wonder how we can use more randomized evaluations in education, as well as whether we should.

See part of the article here (the rest is for New Yorker subscribers only): The Poverty Lab
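As a thought experiment, here is a minimal sketch of what a randomized evaluation boils down to. The program, the students, and the effect size below are all made up for illustration; this is not drawn from J-PAL's work:

```python
import random
import statistics

# Toy randomized evaluation (hypothetical numbers): randomly assign
# students to a tutoring program, then compare mean outcomes.
random.seed(42)

students = list(range(200))
random.shuffle(students)
treatment = set(students[:100])  # half get the program, at random

def test_score(student_id):
    # Hypothetical outcome: a base score plus noise, plus a small
    # boost if the student was in the treatment group.
    base = random.gauss(70, 10)
    return base + (5 if student_id in treatment else 0)

scores = {s: test_score(s) for s in students}
treated = [scores[s] for s in students if s in treatment]
control = [scores[s] for s in students if s not in treatment]

# Because assignment was random, the two groups are comparable, so
# the difference in means estimates the program's average effect.
effect = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated program effect: {effect:.1f} points")
```

The whole appeal of the design is in that last comment: random assignment means you don't have to untangle why the two groups might otherwise differ.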

Wednesday, May 19, 2010

Evaluation - Again, Saving Pie for Dessert

Although not a new concept (Edward Tufte addresses this same topic in many of his books), Stephen Few of Perceptual Edge takes on the overuse of pie and other circular charts, and more importantly their failure to convey the meaning in data, in his latest blog post (see Our Irresistible Fascination with All Things Circular). I find posts like these very helpful for rethinking ways to visualize data meaningfully, as opposed to just artistically, which seems to be all some people strive for. I also appreciate that Few finds one of the circular graphs well enough designed to keep, but I am still left wondering whether a pie chart is ever more useful as a graphic than a data table. If anyone can show me such an instance, I'll happily link to it from my website, as I've not seen one yet.
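In the meantime, a quick way to see the point for yourself is to plot the same numbers both ways. The data here is invented, and this is just an illustrative sketch, not a reproduction of Few's examples:

```python
import matplotlib.pyplot as plt

# Made-up program-enrollment shares (purely illustrative).
labels = ["Program A", "Program B", "Program C", "Program D", "Program E"]
values = [23, 21, 20, 19, 17]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# The pie: with five similar slices, readers must lean on the
# printed percentages rather than the shapes themselves.
ax1.pie(values, labels=labels, autopct="%1.0f%%")
ax1.set_title("As a pie chart")

# The bar chart: a common baseline and scale make the small
# differences between categories immediately visible.
ax2.barh(labels[::-1], values[::-1])
ax2.set_xlabel("Enrollment (%)")
ax2.set_title("As a bar chart")

plt.tight_layout()
plt.show()
```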

As a note, I track Stephen Few's blog regularly and find him as interesting and exciting as Tufte in terms of pushing data visualization. Another person very influential in that area, and in my reporting, is Garr Reynolds. Between those two, Tufte, and what Windows 7 offers in terms of graphing, I feel like I've moved my reporting to a whole new level. Just remember: simple can be elegant.

Tuesday, May 18, 2010

Evaluation: Seven Critical Books

I was recently asked by a friend to identify books I think every evaluator should read. It proved to be an interesting discussion, as we are both evaluators but have different training backgrounds (psychology versus education) and interests (qualitative versus quantitative). I thought it might be interesting to share my list; here it is, in no particular order:

Dillman, D., Smyth, J., & Christian, L. (2008). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). New York, NY: Wiley & Sons.

Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental designs for research. New York, NY: Wadsworth Publishing.

Davidson, E. J. (2004). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage Publications, Inc.

Scriven, M. (1991). Evaluation thesaurus (4th ed.). Thousand Oaks, CA: Sage Publications, Inc.

Alkin, M. (2004). Evaluation roots: Tracing theorists' views and influences. Thousand Oaks, CA: Sage Publications, Inc.

Patton, M. Q. (1990). Qualitative evaluation and research methods. Thousand Oaks, CA: Sage Publications, Inc.

Yin, R. (2002). Case study research: Design and methods (3rd ed., Applied Social Research Methods Series, Vol. 5). Thousand Oaks, CA: Sage Publications, Inc.

Interestingly, she agreed with my choices of the Dillman book and Campbell and Stanley ("a classic"), and she likes Patton but would have chosen his book Utilization-Focused Evaluation over the one I chose.

Which ones would you identify?

Tuesday, May 4, 2010

Evaluation: Evaluating Training

It's that time of year again when many training programs are delivered, whether around teachers' use of technology in instruction, health practitioners' use of medical records to reduce costs and save time, or training for business persons on increasing sales volume and frequency.

If asked to evaluate these training programs, many of us would turn to Kirkpatrick's four-level model for evaluating training efforts (see Kirkpatrick). We would assess participants' reactions, then their learning (unfortunately, often via a self-report survey rather than an actual assessment), then their behavior (again, using the time-tested and shown-to-be-inaccurate self-report survey), and would fail to have money or time left over to assess impacts.

Okay, I admit this is how I have done it in the past, and I know there are others out there who have done it this way as well! But let's not point fingers :) Let's ask instead:

How can we do it better?


I, for one, do like Kirkpatrick's model. Guskey's extension of it (assessing organizational support for change at the participant's organization) was a valuable addition. And, try as I might, I cannot think of anything that would enhance these models.

But do I really have to evaluate at all levels of the model? For the people who buy the training for their employees, Level 4 evaluation is generally all they care about, and we hardly ever do that. Instead we argue that we must start at Level 1 and make sure people react favorably to the training before moving on to evaluating Levels 2, 3, and 4.

Boehle (2006) cites multiple studies showing very little correlation between Level 1 evaluations and how well people actually perform when they return to their jobs (see Are You Too Nice to Train). That is, often enough, people do not like the training they received but perform better afterward anyway. So is evaluation at Level 1 even critical for a formative evaluation, much less a summative one?
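If you keep both kinds of data on your own projects, the check itself is simple. Here is a minimal sketch with invented scores; the numbers below are hypothetical, and statistics.correlation requires Python 3.10 or later:

```python
import statistics

# Hypothetical data: Level 1 reaction scores (1-5) paired with
# supervisor performance ratings (1-10) collected months later.
reactions   = [4.8, 4.5, 4.9, 3.1, 2.8, 4.2, 3.5, 2.5, 4.7, 3.0]
performance = [5.0, 7.5, 4.0, 8.0, 8.5, 6.0, 7.0, 9.0, 5.5, 7.5]

# Pearson correlation: a value near zero (or negative) would echo
# the studies Boehle cites, where liking the training tells you
# little about later job performance.
r = statistics.correlation(reactions, performance)
print(f"Correlation between reactions and performance: r = {r:.2f}")
```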

Given these findings, one may also question the importance of Level 2 evaluation. Kirkpatrick's model rests on the assumption that people must react favorably to something in order to learn from it, and that link seems tenuous at best given the research noted above. However, because behavior change follows changes in knowledge, Level 2 evaluations are still important.

Level 3 evaluation seems to me the most important, if the training really has merit. And that is a big IF. As long as the behavior changes that the training claims will produce results actually occur, one need only evaluate this level and not Level 4.

However, not evaluating Level 4 is a huge gamble, especially as that is what the buyer most cares about.

Given these thoughts, the next time I evaluate a training program I am going to put most of my money into evaluating Levels 3 and 4, and some into evaluating Level 2. As part of Level 2 I may ask some formative questions around Level 1, but overall I will put very little time and money into that effort.

I'll let you know how it goes.