This post is for research geeks, and it's really just an introduction -- maybe a gentle warning -- as I don't have the time or the statistical expertise to explore this deeply.
The Basics
When scientific experiments are done, researchers typically compare one experimental treatment to a second one (or to no treatment at all). So for example, we might compare two versions of the same elearning program, one that uses spaced repetitions and a second that uses unspaced repetitions. When we do such comparisons, we need to know two things before we can draw conclusions:
- Statistical Significance: How likely is it that the experimental results arose by random chance? Social scientists typically look for significance at the .05 level -- in other words, if there were no real effect, a difference as large as the one observed would show up by chance less than 5 times in 100.
- Effect Size: How different are the actual results? Are the differences sufficiently large to be meaningful?
If we don't take effect sizes into account, we can have an experiment that is statistically significant but not practically significant. That is, we can have statistical significance, but not effect-size significance. Without looking at effect-size calculations, we can be fooled into thinking that an experimental result is meaningful when it actually shows no substantial advantage for one learning method compared with another.
So for example, suppose that a new mobile-learning app improves learning by less than one-half of one percent, but costs $10,000 per learner...
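To make the distinction concrete, here is a minimal Python sketch using simulated, made-up data (not results from any real study). With two very large groups, a trivially small difference can be statistically significant even though the effect size (Cohen's d) is negligible:

```python
# A tiny sketch showing a result that is statistically significant
# but practically negligible. The data are simulated and hypothetical,
# not drawn from any real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two very large groups whose true means differ by a trivial amount
control = rng.normal(loc=70.0, scale=10.0, size=50_000)
treatment = rng.normal(loc=70.3, scale=10.0, size=50_000)  # +0.3 points

# Statistical significance: a two-sample t-test
t_stat, p_value = stats.ttest_ind(treatment, control)

# Effect size: Cohen's d = mean difference / pooled standard deviation
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

print(f"p-value:   {p_value:.2e}")   # tiny -> "statistically significant"
print(f"Cohen's d: {cohens_d:.3f}")  # around 0.03 -> negligible in practice
```

With samples this large, the p-value comes out far below .05 even though the standardized difference is only about 0.03 -- exactly the "significant but not meaningful" pattern described above.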
Meta-analyses are statistical studies that compile the results of many scientific studies, looking at the body of evidence as a whole. Meta-analyses have been a potent source of wisdom because they take complex results from a range of studies and combine them in a way that helps us make sense of the overall trends. Meta-analyses rely on effect sizes to calculate the overall importance of the factors being studied.
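As a rough illustration of the mechanics, here is a small sketch of a fixed-effect meta-analysis: each study's effect size is weighted by the inverse of its variance, so more precise studies count more toward the pooled estimate. The effect sizes and variances below are invented for illustration; real meta-analyses also involve random-effects models, heterogeneity checks, and more:

```python
# A minimal sketch of how a fixed-effect meta-analysis pools effect sizes.
# Each study's effect is weighted by the inverse of its variance, so more
# precise studies count more. The numbers below are made up for illustration.
import numpy as np

# Hypothetical per-study effect sizes (Cohen's d) and their variances
effect_sizes = np.array([0.45, 0.20, 0.60, 0.10])
variances    = np.array([0.02, 0.01, 0.05, 0.008])

weights = 1.0 / variances                      # inverse-variance weights
pooled_effect = np.sum(weights * effect_sizes) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))     # standard error of the pooled effect

print(f"Pooled effect size: {pooled_effect:.3f} (SE = {pooled_se:.3f})")
```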
Some Subtleties
As with all things in science, over time scientists make improvements and refinements in their work. Effect sizes are no different. Recently, researchers have found that meta-analyses have to be interpreted with wisdom, otherwise the results may not be what they seem. Of specific concern is the finding that published studies tend to report higher effect sizes than unpublished studies, and that quasi-experimental designs tend to report higher effect sizes than randomized controlled studies. Et cetera...
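To see why the published-versus-unpublished gap matters, here is a rough simulation (all numbers invented): many small studies of a modest true effect are run, but only the ones that reach p < .05 are treated as "published." The published subset overstates the true effect considerably:

```python
# A rough simulation of the publication-bias problem: if only studies that
# reach p < .05 get published, the average published effect size overstates
# the true effect. All numbers here are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n_per_group, n_studies = 0.20, 30, 2_000

observed_d, significant = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_d, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    observed_d.append((treatment.mean() - control.mean()) / pooled_sd)
    significant.append(p < 0.05)

observed_d = np.array(observed_d)
significant = np.array(significant)

print(f"True effect size:                 {true_d:.2f}")
print(f"Mean d across all studies:        {observed_d.mean():.2f}")
print(f"Mean d among 'published' studies: {observed_d[significant].mean():.2f}")
```

In this sketch the underpowered studies that happen to hit significance report effect sizes two to three times the true value -- which is why a meta-analysis built only on published studies can mislead.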
Here are some recommendations for researchers from Cheung and Slavin (2016), who are focused on educational research, but whose recommendations are widely applicable:
- In doing a meta-analysis, don't just look at published studies. Instead, work diligently to gather all studies that have been done, published or not.
- Researchers, in general, should utilize randomized trials whenever possible. Those doing meta-analyses should look at these separately because they are likely to have the least-biased data.
- Policy makers and educators (and I, Will Thalheimer, would add all workplace learning professionals) should "insist on large, randomized evaluations to validate promising programs."