Just a few thoughts after reading Schmidt’s *Detecting and Correcting the Lies That Data Tell* (2010) (see Bonnie’s October 14 post for link). In it Schmidt argues, with clarifying examples, that accurate interpretation of collected data suffers from “ … researchers’ continued reliance on the use of statistical significance testing in data analysis and interpretation and the failure to correct for the distorting effects of sampling error, measurement error, and other artifacts.” [from the Abstract] Schmidt recommends meta-analysis, improved by estimating and then eliminating those “distorting effects.”

Schmidt’s applications to meta-analysis are elegant, with a beauty (to me) similar to that of structural equation modeling; both distinguish between, and independently estimate, the constructs of interest and the errors necessarily attached to our measurements. This is valuable work providing a powerful tool for theory testing. But it also makes me uneasy. As we all know, sampling and measurement errors are always present in collected data. So when we get an estimate of an effect size after stripping away the intrinsic error, what does it mean? Schmidt presents as “the truth” what the results would look like if the data were different from what the data really are. I am reminded of what an editor once wrote to my co-author and me, criticizing an analysis we had done on transformed scores. He said he wanted to see what the subjects did, not what the experimenters did. We thought he had a point.
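To make concrete what “stripping away the intrinsic error” looks like, here is a minimal sketch of Spearman’s classic correction for attenuation, the kind of measurement-error correction used in Schmidt-style meta-analysis. The function name and the numbers are my own, chosen purely for illustration:

```python
import math

def disattenuate(r_obs, rel_x, rel_y):
    """Spearman's correction for attenuation: estimate the correlation
    between the underlying constructs from the observed correlation
    and the reliabilities of the two measures."""
    return r_obs / math.sqrt(rel_x * rel_y)

# Made-up numbers: observed r = .30, measure reliabilities .80 and .70.
print(round(disattenuate(0.30, 0.80, 0.70), 3))  # 0.401
```

The corrected value (about .40) is an estimate of what the correlation would have been with perfectly reliable measures, which is exactly the “data different from what the data really are” that gives me pause.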

Schmidt also argues against the use of statistical significance testing, citing a number of ways it has led to misinterpretations. I agree with him about the misinterpretations, and I agree with Bonnie (see her recent blog here) about what to do about those misguided uses of a significance test – don’t do that! But I do not agree that significance testing should therefore be abandoned. Meta-analysis is not necessarily appropriate for all research questions and studies. For a stand-alone study in which a researcher claims her independent variable has shown an effect, it is not unreasonable to ask for some evidence that the obtained difference is unlikely to have resulted by chance (i.e., from the effects of those pesky sampling and measurement errors). Good experimental design attempts to establish a cause-and-effect conclusion by eliminating all other “rival hypotheses.” The statistical significance test simply assesses the likelihood of the rival hypothesis of “chance.”