Jonas Ranstam's website


1. Three simple tips for better manuscripts


1. Comply with reporting guidelines

The main guidelines for manuscripts submitted to scientific medical journals are the ICMJE recommendations for manuscript preparation. However, different medical research projects, e.g. randomised trials, epidemiological studies, public health interventions, laboratory experiments, development of predictive scores, and systematic reviews, require different methodological approaches and practical arrangements. They also need to be reported differently, and specific reporting guidelines have therefore been developed for most study types (see the EQUATOR Network). Many scientific journals request compliance with these guidelines, and even when they don't, complying with them will probably improve the manuscript.

2. Do not try to impress with terminology you don't understand

One of the ICMJE's recommendations is to avoid non-technical use of statistical technical terms. Unfortunately, misunderstood technical terms are typical of many medical publications, and these misunderstandings neither improve communication with the reader nor bring the author any benefit. Statistical terms are specific and have clear definitions (see the International Statistical Institute's dictionary of statistical terms). First, statistics and mathematics have different terminologies. For example, do not confuse "parameter" with "variable". Second, longer words are not necessarily more scientific than shorter ones. For example, do not use "correlation" instead of "relation", "quartile" instead of "quarter", "efficacy" instead of "effectiveness", "adverse event" instead of "complication", or "endpoint" instead of "outcome". If you need to use a technical term, make sure you understand it and use it correctly.

3. Be rational

Another of the ICMJE's recommendations is to present results in terms of effect sizes with confidence intervals instead of p-values. The typical author of a medical research publication does not distinguish between statistical description of observed data and statistical inference for generalising findings, but uses p-values to classify results as significantly different or not significantly different. Statistical significance is then taken as an indication of practical importance, and "no difference" is used to prove equivalence. All this is, of course, a grotesque approach to scientific reasoning. A p-value is the result of a statistical hypothesis test, but only tests of meaningful hypotheses yield meaningful p-values. Testing merely to "compare data", without underlying scientific reasoning about the tested hypothesis, is futile. Effect size estimates with confidence intervals provide a better alternative for generalising findings. The point estimate shows the studied effect in the analysed sample of observed data, and the confidence interval shows a range of plausible values in the unobservable population to which the finding is generalised. In contrast to p-values, confidence interval bounds play an essential role in evaluating the practical importance of findings.
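As a small illustration of this recommendation, the following Python sketch reports a mean difference with its 95% confidence interval rather than a p-value alone. The data are hypothetical blood-pressure reductions, and the interval uses a simple normal approximation for brevity; a real analysis would typically use a t-based interval.

```python
import math
import statistics

# Hypothetical blood-pressure reductions (mmHg) in two groups.
treatment = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2, 12.7, 9.9, 11.8, 12.4]
control = [8.4, 7.9, 10.1, 9.2, 8.8, 7.5, 9.6, 8.1, 9.0, 8.7]

def effect_with_ci(a, b, z=1.96):
    """Mean difference with an approximate 95% confidence interval
    (normal approximation; illustrative only)."""
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return diff, diff - z * se, diff + z * se

diff, lo, hi = effect_with_ci(treatment, control)
print(f"Mean difference: {diff:.1f} mmHg, 95% CI {lo:.1f} to {hi:.1f}")
```

Reported this way, the reader sees both the size of the effect and the range of population values compatible with the data, which is exactly the information a bare p-value hides.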