## Jonas Ranstam's website

## Tips for statistical reviewers

An ideal manuscript starts with a specific research question and ends with an empirically supported answer. In practice, however, many manuscripts are difficult to read, with unclear explanations linking aims and interpretations. Many investigators are more interested in developing dogmas than evidence, and statistical methodology and results are in many cases used to disguise subjective opinions. Conflicts of interest are common: the investigator's personal status, as well as that of the affiliated research organisation and the scientific journal, depends on publications and impact factors.
## Reviewers and authors

One typical reviewer mistake is to give overly specific and detailed review comments. This may be driven by a wish to help, but it increases the risk of authorship issues and conflicts of interest. The different responsibilities of authors and reviewers should be respected. From a formal viewpoint, the reviewer is assigned the task by the editor-in-chief, to whom all review comments should be addressed, even if the corresponding author is copied on the comments. The reviewer is usually also requested to provide confidential comments to the editor. Recommendations about rejecting, revising, or accepting a manuscript should be directed only to the editor.

## General statistical problems

The most common mistake a statistical reviewer is likely to find is the misinterpretation of p-values and statistical significance. The reasons are not philosophical, such as differences between the Fisher and Neyman-Pearson approaches to hypothesis testing. On the contrary, the problem is much simpler: few medical investigators grasp the difference between description and inference, i.e. between describing findings in a sample and quantifying the uncertainty of these findings when generalised beyond the sample they were observed in. Sampling variation and sampling uncertainty are crucial phenomena to consider in empirical research, but the uncertainty measures themselves, confidence intervals and p-values, are often misinterpreted: confidence intervals as dispersion measures, and p-values as indicators that either demonstrate practically relevant differences (p < 0.05) or provide evidence of "no difference" (NS).
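The point about misread p-values can be made concrete with a minimal numerical sketch. All numbers below are invented for illustration: an observed mean difference of 4.0 with a standard error of 2.5 gives a "non-significant" p-value, yet the confidence interval is compatible with both no difference and a large one.

```python
import math

# Hypothetical summary statistics (invented for illustration):
# observed mean difference between two groups and its standard error.
diff = 4.0
se = 2.5

# Approximate 95% confidence interval using the normal quantile 1.96.
ci_low = diff - 1.96 * se
ci_high = diff + 1.96 * se

# Two-sided p-value from the standard normal distribution.
z = diff / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"95% CI: ({ci_low:.1f}, {ci_high:.1f}), p = {p:.2f}")
# The interval spans zero as well as substantial differences, so the
# "non-significant" p-value is not evidence of "no difference"; it only
# reflects the uncertainty of the estimate.
```

Here p ≈ 0.11 and the interval runs from about −0.9 to 8.9; reporting this as "no difference (NS)" would be exactly the misinterpretation described above.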
## Specific statistical problems

Several technical problems tend to appear in manuscript after manuscript. The terminology used often reveals that the authors are methodologically ignorant. "Assessing" and "determining" effects are commonly used instead of the more appropriate terms "estimating" or "evaluating". Misunderstood technical terms, such as "multivariate" instead of "multiple" or "multivariable" and "quartile" instead of "quarter", are ubiquitous. Authors also often tend to use technical terms such as "correlation" and "incidence" in nontechnical ways. The ICMJE recommendation is to "avoid nontechnical uses of technical terms", and statistical reviewers have a professional responsibility to care about the integrity and coherence of statistical terminology.
## Accept or reject

Whether to recommend accepting or rejecting a manuscript may be a difficult question, but it seems reasonable to require compliance with the ICMJE recommendations for acceptance. Noncompliance can perhaps be corrected in a revision, but for some manuscripts, the best advice may be to start over from scratch. Such a review outcome may be disappointing for the author, but it should not be taken personally. Statistical reviewing is about the evaluation of evidence. A well-performed statistical review can improve a manuscript substantially and help the author avoid publishing embarrassing mistakes.

This section of the website is still under development.