Jonas Ranstam's website


5. Analysis strategies to avoid


Analysis strategies based on dichotomising results as statistically significant or nonsignificant, without considering the source and collection of the data and the tested hypotheses, are common in medical research. These strategies reflect the methodological misconceptions that statistical significance indicates clinical relevance and that nonsignificant findings do not exist. However, interpreting p-values like this isn't meaningful. As stated by Ron Wasserstein, "The p-value was never intended to be a substitute for scientific reasoning". A p-value reflects the tested data's compatibility with a specific statistical hypothesis and is irrelevant when the tested hypothesis is irrelevant. Furthermore, a p-value doesn't reflect the inferential uncertainty of an estimated effect; the same estimated effect produces different p-values in studies with different sample sizes.
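The sample-size dependence can be illustrated with a minimal sketch: a two-sample z-test with a hypothetical mean difference of 0.5 (standard deviation 1) observed at two different group sizes. All numbers here are illustrative assumptions, not data from any study.

```python
from math import sqrt, erfc

def two_sided_p(effect, sd, n):
    """Two-sided p-value from a two-sample z-test with equal group sizes n."""
    se = sd * sqrt(2 / n)          # standard error of the mean difference
    z = effect / se                # test statistic
    return erfc(abs(z) / sqrt(2))  # two-sided normal tail probability

# The same estimated mean difference (0.5, sd 1) at two sample sizes:
print(two_sided_p(0.5, 1.0, 20))   # roughly 0.11 -- "nonsignificant"
print(two_sided_p(0.5, 1.0, 200))  # well below 0.001 -- "significant"
```

The identical estimate is "nonsignificant" in the small study and highly "significant" in the large one, which is why the p-value alone says nothing about the size or relevance of an effect.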

The investigator's task is to define what is or isn't clinically relevant. A minimal clinically relevant effect may already be known or may have to be determined. If the correct hypothesis is tested, a p-value may help show empirical support for the effect's existence. However, a confidence interval, which shows a range of plausible effects, is usually the better measure.
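A confidence interval for the same hypothetical estimate makes the point concrete: rather than a single yes/no verdict, it gives a range of plausible effects that can be compared with a minimal clinically relevant difference. The numbers are again illustrative assumptions, using a normal approximation.

```python
from math import sqrt

def ci95(effect, sd, n):
    """Approximate 95% confidence interval for a mean difference,
    assuming equal group sizes n and a normal approximation."""
    se = sd * sqrt(2 / n)
    return (effect - 1.96 * se, effect + 1.96 * se)

lo, hi = ci95(0.5, 1.0, 20)
# The interval spans the plausible effects; judge it against a minimal
# clinically relevant difference instead of only checking p < 0.05.
print(round(lo, 2), round(hi, 2))
```

If, say, a difference of 0.3 were the smallest effect of clinical interest, an interval covering both 0 and values well above 0.3 would show that the study is simply too imprecise to settle the question either way.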

Needless to say, valid and precise findings require more than the calculation of p-values and confidence intervals. Randomised trials rely on an adequate study design, and observational studies on careful consideration of and adjustment for different forms of bias.