
Friday 15 February 2019

Despite public pledges, leading scientific journals still allow statistical misconduct and refuse to correct it

A leading form of statistical malpractice in scientific studies is to retroactively comb through the data for "interesting" patterns. While such patterns may provide useful leads for future investigations, cherry-picking results that look significant out of a study that has otherwise failed to confirm the researcher's initial hypothesis can generate false, but plausible-seeming, conclusions.
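As a rough illustration of why that kind of retroactive pattern-hunting is dangerous, here is a minimal simulation sketch (a hypothetical example, not code from the COMPare project): a trial in which the treatment genuinely does nothing, analyzed with an ordinary t-test for each measured outcome. The more outcomes you test, the more likely at least one looks "significant" purely by chance.

```python
# Hypothetical illustration: how measuring many outcomes inflates false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def spurious_hit_rate(n_outcomes, n_patients=200, n_trials=2000, alpha=0.05):
    """Fraction of simulated null trials with at least one p < alpha outcome."""
    hits = 0
    for _ in range(n_trials):
        # Both arms are drawn from the same distribution, so any apparent
        # "effect" on any outcome is pure noise.
        control = rng.normal(size=(n_patients, n_outcomes))
        treated = rng.normal(size=(n_patients, n_outcomes))
        pvals = stats.ttest_ind(control, treated).pvalue  # one p-value per outcome
        if (pvals < alpha).any():
            hits += 1
    return hits / n_trials

for k in (1, 5, 20):
    print(f"{k:>2} outcomes measured -> ~{spurious_hit_rate(k):.0%} of null trials "
          f"show at least one 'significant' result")

# For independent outcomes this tracks 1 - (1 - 0.05)**k: roughly 5%, 23%, and 64%.
```

This is exactly the arithmetic that pre-registration guards against: if the analysis is fixed before the data are seen, a stray low p-value on outcome number seventeen can't quietly be promoted to the headline result.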

To combat this practice, many journals have signed up to the Consolidated Standards of Reporting Trials (CONSORT), a set of principles requiring researchers to publicly register, in advance, what data they will collect and how they will analyze it, and to prominently note when they change methodologies after the fact, so that readers can treat any conclusions drawn from those changes with additional care. CONSORT-compliant journals also promise to accept, review, and expeditiously publish correction letters when the papers they publish are found to have breached CONSORT standards.

Evidence-based medicine ninja Ben Goldacre and colleagues reviewed every trial report published over six weeks in five leading CONSORT-endorsing journals (the New England Journal of Medicine, The Lancet, the Journal of the American Medical Association, the British Medical Journal, and the Annals of Internal Medicine). When they detected reporting practices that violated CONSORT principles, they informed the journals in writing, recorded and published the replies, and tabulated the findings into a league table of the journals that do the most to live up to their commitments to good statistical practice. They also analyzed the reasons that journals (and researchers) gave for not publishing corrections, and point out the wide gaps in journal editors' and researchers' understanding of good statistical practice.

The results are very bad. Although pre-specified primary outcomes were, on average, reported correctly 76% of the time, most of the trials assessed breached CONSORT standards in some way; those out-of-compliance papers were very unlikely to be corrected, and when they were, corrections took a very long time (median: 99 days).

Two journals, JAMA and NEJM, refused outright to publish a single correction (by contrast, The Lancet published 75% of the correction letters it received).

All the underlying data, correspondence, and other materials have been published on an excellent website, and Goldacre and colleagues have presented their findings in a paper published by BioMed Central.

Before carrying out a clinical trial, all outcomes that will be measured (e.g. blood pressure after one year of treatment) should be pre-specified in a trial protocol, and on a clinical trial registry.

This is because if researchers measure lots of things, some of those things are likely to give a positive result by random chance (a false positive). A pre-specified outcome is much less likely to give a false-positive result.

Once the trial is complete, the trial report should then report all pre-specified outcomes. Where reported outcomes differ from those pre-specified, this must be declared in the report, along with an explanation of the timing and reason for the change. This ensures a fair picture of the trial results.

However, in reality, pre-specified outcomes are often left unreported, while outcomes that were not pre-specified are reported, without being declared as novel. This is an extremely common problem that distorts the evidence we use to make real-world clinical decisions.

COMPare: a prospective cohort study correcting and monitoring 58 misreported trials in real time [Ben Goldacre, Henry Drysdale, Aaron Dale, Ioan Milosevic, Eirion Slade, Philip Hartley, Cicely Marston, Anna Powell-Smith, Carl Heneghan and Kamal R. Mahtani/BioMed Central]

COMPare Trials

(via Neil Gaiman)
