Monday, February 22, 2010
One-Shot Pre-Test Only Design:
O
Ha-ha - I made this one up! Who would do this? Although I guess it could occur if you ran out of funding (or everyone dropped out of the treatment pool). I have seen stranger things proposed...
One-Shot Post-Test Only Design:
X O
Sigh. How do you know there was even a change? Enough said.
One-shot Pre-Post Test Design:
O X O
Double sigh. Okay - you may be able to detect a change, but how can you attribute it to the treatment? You need another group. However, I see this proposed WAY TOO OFTEN.
Post-Test Only Intact Group Design:
X O
-  O
Another sigh (but only one). Good that you now have two groups. However, you don't know whether your two groups started in the same place. For example, if your treatment group scores high on a test (a good thing) and your control group does not, you can't tell whether the treatment group would have outscored the control group to begin with (and whether its final score really represents NO CHANGE).
Pre-Test Post-Test Intact Group Design:
O X O
O - O
Now we're cooking! Two groups, hopefully equivalent. If persons were randomly assigned to a group, this would be an experimental design (and we would be cooking with gas!). Without random assignment it remains a quasi-experimental design. However, if we used propensity score matching to match the groups, it would be a HIGH QUALITY quasi-experimental design. Or we could use regression discontinuity, with a cut score determining the two groups. Again, that would result in a HIGH QUALITY quasi-experimental design.
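To make the propensity score idea concrete, here is a minimal sketch in Python (all numbers are simulated and hypothetical, not from any real study): students self-select into the treatment partly based on ability, so the naive treatment-vs-control difference is inflated; matching each treated student to the control with the nearest estimated propensity score pulls the estimate back toward the true effect.

```python
import math
import random

random.seed(42)

# Simulated quasi-experiment (hypothetical numbers): higher-ability students
# are more likely to self-select into treatment, and ability also raises
# test scores, so the intact groups did NOT start in the same place.
n = 400
TRUE_EFFECT = 5.0
ability = [random.gauss(0, 1) for _ in range(n)]
treated = [1 if random.random() < 1 / (1 + math.exp(-a)) else 0 for a in ability]
score = [50 + 10 * a + TRUE_EFFECT * t + random.gauss(0, 2)
         for a, t in zip(ability, treated)]

# Naive difference in group means: biased upward by selection on ability.
t_mean = sum(s for s, t in zip(score, treated) if t) / sum(treated)
c_mean = sum(s for s, t in zip(score, treated) if not t) / (n - sum(treated))
naive = t_mean - c_mean

# Estimate propensity scores P(treated | ability) with a hand-rolled
# logistic regression fit by gradient ascent (no external libraries).
b0 = b1 = 0.0
for _ in range(2000):
    g0 = g1 = 0.0
    for a, t in zip(ability, treated):
        p = 1 / (1 + math.exp(-(b0 + b1 * a)))
        g0 += t - p
        g1 += (t - p) * a
    b0 += 0.05 * g0 / n
    b1 += 0.05 * g1 / n
propensity = [1 / (1 + math.exp(-(b0 + b1 * a))) for a in ability]

# Match each treated unit to the control with the nearest propensity score,
# then average the matched score differences.
controls = [(p, s) for p, s, t in zip(propensity, score, treated) if not t]
gaps = [s - min(controls, key=lambda c: abs(c[0] - p))[1]
        for p, s, t in zip(propensity, score, treated) if t]
matched = sum(gaps) / len(gaps)

print(f"naive estimate:   {naive:5.1f}")    # inflated by self-selection
print(f"matched estimate: {matched:5.1f}")  # closer to the true effect of 5
```

The point of the sketch is the contrast: the naive comparison confuses selection with treatment, while matching compares treated students only to controls who were similarly likely to be treated.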
Now - go forth and design better evaluations!
Monday, February 15, 2010
Friday, February 12, 2010
1) Many causes for the same effect: An increase in x (teacher content knowledge) causes an increase in y (student achievement) in some cases but does not have this same effect in other cases, where y is caused by an entirely different set of causes (an example of such a cause could be increased time spent by student studying). This is one we evaluators see quite often, hence the need for a very thoughtful research design!
2) Cause dependency upon time: An increase in x (years as an educator) is associated with an increase in y (student achievement) at one point in time, but not another. Much research supports the opinion that at least some teachers become less effective as they near retirement, for multiple reasons.
3) Same cause but different outcomes: An increase in x (greater governance by school boards) causes outcome y (increased diversity across all schools in a system) in some cases, but outcome z in other cases (less diversity - more neighborhood schools). This is (unfortunately) happening in Wake County in NC, which is near where I live.
4) Outcomes are the effects of various causes that depend on each other: Outcome y depends on many other variables - v, w, and x - whose values are in turn jointly dependent on each other. I couldn't think of my own example for this one, so here is Hall's: y (successful wage coordination) depends on the values of v (union density), w (social democratic governance), and x (social policy regime), which are themselves jointly dependent on each other.
5) Circular causality: Increases in x (student achievement) increase y (student expectations), but increases in y (student expectations) also increase x (student achievement). In this case such causality is a good thing!
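Pattern 5 can be sketched with a toy simulation (the coefficients are made up purely for illustration): achievement and expectations each feed the other, and with damped feedback the pair climbs together toward a joint equilibrium instead of either variable driving the other one-way.

```python
# Toy reciprocal-causation model (hypothetical coefficients): each period,
# achievement and expectations each pull the other upward.
achievement, expectations = 30.0, 20.0
for _ in range(100):
    achievement, expectations = (
        0.5 * achievement + 0.4 * expectations + 5,  # expectations lift achievement
        0.5 * expectations + 0.4 * achievement + 5,  # achievement lifts expectations
    )
print(round(achievement, 2), round(expectations, 2))  # both settle near 50.0
```

With these coefficients the feedback is stable (the update matrix has eigenvalues 0.9 and 0.1), so both variables converge to the fixed point of 50; stronger mutual reinforcement would instead produce a runaway spiral.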
Source: Peter A. Hall. 2003. "Aligning Ontology and Methodology in Comparative Research." In J. Mahoney and D. Rueschemeyer, eds., Comparative Historical Analysis in the Social Sciences. New York, NY: Cambridge University Press, pp. 373-404.
Thursday, February 11, 2010
Table: A potential framework for conceptualising evaluative analysis

  Types of questions                     Analytic component
  1. What? (what worked, for whom,      FINDINGS - raw: description of findings
     under what circumstances)
  2. How and why?                       FINDINGS - analysed: analysis of findings;
                                        conclusions about findings
  3. So what do the findings say?       CONCLUSIONS about the policy / programme:
                                        merit or worth (quality or value) of the evaluand
  4. So what do the findings mean?      SIGNIFICANCE and implications of findings:
                                        policy and programme decision-making and/or ...
Anyway, as shown, its four analytic components are linked to broad evaluation questions and may provide another way to structure evaluation reports.
Question 1 is "What?" and is where one would address the findings to date of the evaluation. It's meant to be a place where raw data are presented, or, in other words, results are described.
Question 2 is "How and Why?" and is where data are actually analyzed and compared to generate conclusions about what resulted, for whom results were greatest, etc.
Question 3 is "So what do the findings say about the evaluand (i.e., program, policy, etc.)?" and relates to what most of us think about when we think evaluation - the merit or worth of the evaluand. Many reports stop short of answering this question even though the basis of evaluation is to draw such conclusions.
Question 4 is "So what do the findings mean?" and is where the significance of findings and implications for the evaluand are discussed. Again, based on my report reading, few reports address such a question, especially when results are not positive.
If anyone finds the source for this please let me know so I can give the author his or her due. Again, while it nicely identifies some of the questions driving evaluation analysis, it may also be a nice guide for framing reports!