The International Committee of Medical Journal Editors (ICMJE) recommendations for manuscript preparation (https://icmje.org) are essential for authors and reviewers. Two guidelines stand out for their critical role in scientific reporting: first, the clear and detailed description of the statistical methods used, and second, the distinction between clinical and statistical significance.
The guideline to “describe statistical methods with enough detail to enable a knowledgeable reader with access to the original data to judge its appropriateness for the study and to verify the reported results” underscores the importance of transparent reporting. Statistical methods are the bridge between data and conclusions. If the methods are inadequately described, readers can neither assess the validity of the findings nor replicate them, and the consequence is scepticism about the study’s credibility. The level of detail should therefore be sufficient for peers and reviewers to replicate the analysis.
The guideline, “link the conclusions with the goals of the study but avoid unqualified statements and conclusions not adequately supported by the data,” is equally vital. It warns researchers against the common conflation of inferential uncertainty with practical relevance. Statistical significance, typically defined as a p-value less than 0.05, quantifies the uncertainty due to sampling variability when testing a hypothesis. It does not imply that the results are meaningful in a practical context: whether a statistically significant test outcome is clinically relevant depends on the hypothesis tested, not on the p-value. Furthermore, statistical nonsignificance reflects uncertainty, not equivalence or similarity. The only rational conclusion that can be drawn from a statistically nonsignificant hypothesis test is that the sample size is too small to refute the tested hypothesis.
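The point that a small p-value need not indicate a practically meaningful effect can be illustrated with a small simulation (a hypothetical sketch; the sample size, effect size, and seed are invented for illustration):

```python
import random
from statistics import NormalDist

# Hypothetical example: two groups whose true means differ by only
# 0.05 standard deviations -- far below any plausible minimal
# clinically important difference -- compared in a very large sample.
random.seed(1)
n = 50_000
a = [random.gauss(0.00, 1.0) for _ in range(n)]
b = [random.gauss(0.05, 1.0) for _ in range(n)]

mean_a, mean_b = sum(a) / n, sum(b) / n
var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
se = (var_a / n + var_b / n) ** 0.5

# Two-sided z-test for the difference in means.
z = (mean_b - mean_a) / se
p = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"difference = {mean_b - mean_a:.3f}, p = {p:.2g}")
```

With a sample this large, the test is all but certain to be “significant” (p well below 0.05) even though the underlying difference is trivial, which is exactly why the p-value alone cannot establish clinical relevance.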
Evaluating the empirical support for a clinically significant effect requires defining what effects are clinically relevant (the minimal clinically important difference) and estimating the effect size together with its estimation uncertainty, e.g. a confidence interval.
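Such an evaluation can be sketched in a few lines. The numbers below are hypothetical: assume a minimal clinically important difference (MCID) of 5 points on some outcome scale, and a trial reporting an effect estimate of 3.2 points with standard error 1.1:

```python
# Hypothetical illustration: comparing a 95% confidence interval
# against an assumed minimal clinically important difference (MCID).
mcid = 5.0    # assumed smallest clinically relevant effect
effect = 3.2  # assumed point estimate of the treatment effect
se = 1.1      # assumed standard error of the estimate

# 95% confidence interval under a normal approximation.
z = 1.96
lower, upper = effect - z * se, effect + z * se
print(f"95% CI: ({lower:.2f}, {upper:.2f})")

# The interval excludes 0, so the result is statistically significant,
# yet it also contains values below the MCID: the data are compatible
# with effects both smaller and larger than 5 points, so clinical
# relevance is neither established nor refuted.
statistically_significant = lower > 0
relevance_established = lower >= mcid
relevance_ruled_out = upper < mcid
print(statistically_significant, relevance_established, relevance_ruled_out)
```

Judging the result thus means comparing the whole interval, not the p-value, against the predefined MCID.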
The ICMJE emphasizes aligning conclusions with the study’s goals. This requires critical reflection on whether the data support the authors’ claims. Overstating findings or speculating beyond the evidence is common but risks misleading readers and undermining the credibility of both the study and the journal. The main task for the statistical reviewer is therefore a critical evaluation of the evidence.
In practice, the conclusion presented in an empirical study is typically a restatement of what was observed in the studied data. A single empirical study can contribute to solving a scientific problem but is rarely sufficient to prove that a theory is true. The purpose of empirical research is primarily to develop evidence for further consideration, and research reports are published in scientific journals to document that the study has been performed.