bims-skolko Biomed News
on Scholarly communication
Issue of 2019-02-17
eighteen papers selected by
Thomas Krichel, Open Library Society



  1. J Clin Epidemiol. 2019 Feb 06. pii: S0895-4356(18)30606-1. [Epub ahead of print]
       OBJECTIVE: To analyze data sharing practices among authors of randomized controlled trials (RCTs) published in seven high-ranking anesthesiology journals from 2014 to 2016.
    STUDY DESIGN AND SETTING: We analyzed data sharing statements in 619 included RCTs and contacted their corresponding authors, asking them to share de-identified raw data from their trial.
    RESULTS: Of the 86 (14%) authors who responded to our query for data sharing, only 24 (4%) provided the requested data, and only one of those 24 had a data sharing statement in the published manuscript. Conversely, only 24 (4%) of the manuscripts contained statements suggesting a willingness to share trial data, and only one of those authors actually shared data. There was no difference in the proportion of data sharing between studies with commercial versus non-profit funding. Among the 62 authors who refused to provide data, reasons were seldom given; when they were, common themes included data ownership and participant privacy. Only one of the seven analyzed journals encouraged authors to share data.
    CONCLUSION: Willingness to share data among authors of anesthesiology RCTs is very low. To achieve widespread availability of de-identified trial data, journals should request publication of such data, rather than merely encouraging authors to share.
    Keywords:  anesthesiology; authors; data sharing; publication; randomized controlled trial; raw data
    DOI:  https://doi.org/10.1016/j.jclinepi.2019.01.012
  2. PLoS Biol. 2019 Feb;17(2): e3000116
      Science advances through rich, scholarly discussion. More than ever before, digital tools allow us to take that dialogue online. To chart a new future for open publishing, we must consider alternatives to the core features of the legacy print publishing system, such as an access paywall and editorial selection before publication. Although journals have their strengths, the traditional approach of selecting articles before publication ("curate first, publish second") forces a focus on "getting into the right journals," which can delay dissemination of scientific work, create opportunity costs for pushing science forward, and promote undesirable behaviors among scientists and the institutions that evaluate them. We believe that a "publish first, curate second" approach with the following features would be a strong alternative: authors decide when and what to publish; peer review reports are published, either anonymously or with attribution; and curation occurs after publication, incorporating community feedback and expert judgment to select articles for target audiences and to evaluate whether scientific work has stood the test of time. These proposed changes could optimize publishing practices for the digital age, emphasizing transparency, peer-mediated improvement, and post-publication appraisal of scientific articles.
    DOI:  https://doi.org/10.1371/journal.pbio.3000116
  3. Vet Anaesth Analg. 2018 Nov 28. pii: S1467-2987(18)30295-2. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.vaa.2018.11.002
  4. Nature. 2019 Feb;566(7743): 182
      
    Keywords:  Publishing; Research management
    DOI:  https://doi.org/10.1038/d41586-019-00548-5
  5. PLoS Biol. 2019 Feb;17(2): e3000117
      Although a case can be made for rewarding scientists for risky, novel science rather than for incremental, reliable science, novelty without reliability ceases to be science. The currently available evidence suggests that the most prestigious journals are no better at detecting unreliable science than other journals. In fact, some of the most convincing studies show a negative correlation, with the most prestigious journals publishing the least reliable science. With the credibility of science increasingly under siege, how much longer can we afford to reward novelty at the expense of reliability? Here, I argue for replacing the legacy journals with a modern information infrastructure that is governed by scholars. This infrastructure would allow renewed focus on scientific reliability, with improved sort, filter, and discovery functionalities, at massive cost savings. If these savings were invested in additional infrastructure for research data and scientific code and/or software, scientific reliability would receive additional support, and funding woes, for example for biological databases, would be a concern of the past.
    DOI:  https://doi.org/10.1371/journal.pbio.3000117
  6. Law Hum Behav. 2019 Feb;43(1): 1-8
      In this editorial, the authors note that a steady submission rate and a rejection rate hovering at 80% indicate that the journal is flourishing and provide them with the fortunate opportunity to make an excellent journal even better. To that end, they describe three initiatives they are working on and explain the changes readers can expect as they begin to implement them in the journal. Specifically, these initiatives include: (1) promoting transparency, openness, and reproducibility in published research; (2) improving author-reviewer fit; and (3) expanding the diversity of journal content and decision makers.
    DOI:  https://doi.org/10.1037/lhb0000322
  7. Eur J Case Rep Intern Med. 2019;6(1): 001005
      In this Letter to the Editor, Agrawal et al. debate the conflicts that can arise regarding the authorship of case reports. Like all other medical journals, EJCRIM has zero tolerance for the willful undisclosed re-submission of papers that have already been published elsewhere. However, this may occasionally happen by accident, especially in large healthcare institutions in which multiple teams of physicians may care for a patient throughout their illness. EJCRIM endorses and recommends to all potential authors the very sensible suggestions made by Agrawal et al. to avoid such an error occurring. EJCRIM would also encourage authors to consider the following: The first author should ensure that no one else involved in the case has reported it or plans to report it. This is especially important for physicians working in large healthcare centres, and/or for case reports of patients who have been under investigation or treatment for prolonged periods. On rare occasions EJCRIM will consider a case that has already been published, provided that this is fully and explicitly disclosed, and there is a clear reason why re-publication is justified. An example might be where new information has come to light that significantly changes the conclusions of the original report. As in all reports published by EJCRIM, the decision to publish will depend on the educational value, or learning points, of the case.
    DOI:  https://doi.org/10.12890/2019_001005
  8. Aesthet Surg J. 2019 Feb 14. pii: sjz042. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1093/asj/sjz042
  9. J Exp Anal Behav. 2019 Feb 12.
      Debates about the utility of p values and correct ways to analyze data have inspired new guidelines on statistical inference by the American Psychological Association (APA) and changes in the way results are reported in other scientific journals, but their impact on the Journal of the Experimental Analysis of Behavior (JEAB) has not previously been evaluated. A content analysis of empirical articles published in JEAB between 1992 and 2017 investigated whether statistical and graphing practices changed during that time period. The likelihood that a JEAB article reported a null hypothesis significance test, included a confidence interval, or depicted at least one figure with error bars has increased over time. Features of graphs in JEAB, including the proportion depicting single-subject data, have not changed systematically during the same period. Statistics and graphing trends in JEAB largely paralleled those in mainstream psychology journals, but there was no evidence that changes to APA style had any direct impact on JEAB. In the future, the onus will continue to be on authors, reviewers, and editors to ensure that statistical and graphing practices in JEAB continue to evolve without interfering with characteristics that set the journal apart from other scientific journals. (A brief illustration of confidence intervals and error bars follows this entry.)
    Keywords:  confidence intervals; error bars; graphs; null hypothesis significance testing; statistical reform
    DOI:  https://doi.org/10.1002/jeab.509
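    The article above tallies how often JEAB papers report confidence intervals and draw error bars. As a brief aside, the following is a minimal sketch of both practices; the data, condition labels, and axis label are invented for illustration and are not taken from the article, and the interval uses a simple normal approximation rather than whatever methods JEAB authors actually used.

      import numpy as np
      import matplotlib.pyplot as plt

      # Hypothetical response rates for two conditions (made-up data, not from the article).
      group_a = np.array([12.1, 9.8, 11.4, 10.6, 12.9, 11.0, 10.2, 11.7])
      group_b = np.array([14.3, 13.1, 15.0, 12.8, 14.6, 13.9, 13.2, 14.1])

      def mean_ci95(x):
          # Sample mean and half-width of an approximate 95% confidence interval
          # (normal approximation: 1.96 * standard error of the mean).
          sem = x.std(ddof=1) / np.sqrt(len(x))
          return x.mean(), 1.96 * sem

      means, halfwidths = zip(*(mean_ci95(g) for g in (group_a, group_b)))

      # Bar chart whose error bars mark the approximate 95% confidence intervals.
      plt.bar(["Condition A", "Condition B"], means, yerr=halfwidths, capsize=4)
      plt.ylabel("Responses per minute (hypothetical)")
      plt.savefig("error_bars.png")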
  10. Trials. 2019 Feb 14. 20(1): 118
       BACKGROUND: Discrepancies between pre-specified and reported outcomes are an important source of bias in trials. Despite legislation, guidelines and public commitments on correct reporting from journals, outcome misreporting continues to be prevalent. We aimed to document the extent of misreporting, establish whether it was possible to publish correction letters on all misreported trials as they were published, and monitor responses from editors and trialists to understand why outcome misreporting persists despite public commitments to address it.
    METHODS: We identified five high-impact journals endorsing Consolidated Standards of Reporting Trials (CONSORT) (New England Journal of Medicine, The Lancet, Journal of the American Medical Association, British Medical Journal, and Annals of Internal Medicine) and assessed all trials over a six-week period to identify every correctly and incorrectly reported outcome, comparing published reports against published protocols or registry entries, using CONSORT as the gold standard. A correction letter describing all discrepancies was submitted to the journal for all misreported trials, and detailed coding sheets were shared publicly. The proportion of letters published and delay to publication were assessed over 12 months of follow-up. Correspondence received from journals and authors was documented and themes were extracted.
    RESULTS: Sixty-seven trials were assessed in total. Outcome reporting was poor overall and there was wide variation between journals on pre-specified primary outcomes (mean 76% correctly reported, journal range 25-96%), secondary outcomes (mean 55%, range 31-72%), and number of undeclared additional outcomes per trial (mean 5.4, range 2.9-8.3). Fifty-eight trials had discrepancies requiring a correction letter (87%, journal range 67-100%). Twenty-three letters were published (40%) with extensive variation between journals (range 0-100%). Where letters were published, there were delays (median 99 days, range 0-257 days). Twenty-nine studies had a pre-trial protocol publicly available (43%, range 0-86%). Qualitative analysis demonstrated extensive misunderstandings among journal editors about correct outcome reporting and CONSORT. Some journals did not engage positively when provided correspondence that identified misreporting; we identified possible breaches of ethics and publishing guidelines.
    CONCLUSIONS: All five journals were listed as endorsing CONSORT, but all exhibited extensive breaches of this guidance, and most rejected correction letters documenting shortcomings. Readers are likely to be misled by this discrepancy. We discuss the advantages of prospective methodology research sharing all data openly and pro-actively in real time as feedback on critiqued studies. This is the first empirical study of major academic journals' willingness to publish a cohort of comparable and objective correction letters on misreported high-impact studies. Suggested improvements include changes to correspondence processes at journals, alternatives for indexed post-publication peer review, changes to CONSORT's mechanisms for enforcement, and novel strategies for research on methods and reporting.
    Keywords:  Audit; CONSORT; Editorial conduct; ICMJE; Misreporting; Outcomes; Trials
    DOI:  https://doi.org/10.1186/s13063-019-3173-2
  11. BMC Med Res Methodol. 2019 Feb 14. 19(1): 32
       BACKGROUND: Reporting quality of systematic review (SR) abstracts is important because the abstract is often the only information about a study that readers have. The aim of this study was to assess the adherence of SR abstracts in the field of anesthesiology to the reporting checklist PRISMA extension for Abstracts (PRISMA-A) and to analyze to what extent the use of PRISMA-A yields concordant ratings between two raters without prior experience with the checklist.
    METHODS: We analyzed the reporting quality of SRs with meta-analysis of randomized controlled trials of interventions published in the field of anesthesiology from 2012 to 2016, using the 12-item PRISMA-A checklist. After a calibration exercise, two authors without prior experience with PRISMA-A scored the abstracts. The primary outcome was median adherence to the PRISMA-A checklist; the secondary outcome was adherence to individual items of the checklist. We also analyzed whether reporting of SR abstracts improved over time and examined discrepancies between the two raters in scoring individual PRISMA-A items.
    RESULTS: Our search yielded 318 results, of which we included 244 SRs. Median adherence to the PRISMA-A checklist was 42% (5 of 12 items). The majority of analyzed SR abstracts (N = 148, 61%) had a total adherence score under 50%, and not a single one had adherence above 75%. Adherence to individual items was highly variable, ranging from 0% for reporting SR funding to 97% for interpreting SR findings. Overall adherence to PRISMA-A did not change over the five analyzed years, which spanned the publication of PRISMA-A in 2013. Even after the calibration exercise, discrepancies between the two raters were found in 275 (9.3%) of the 2928 analyzed PRISMA-A items; Cohen's kappa was 0.807 (a short calculation sketch follows this entry). For the item about the description of the effect, the two raters disagreed on 59% of the abstracts.
    CONCLUSION: Reporting quality of systematic review abstracts in the field of anesthesiology is suboptimal and did not improve after publication of the PRISMA-A checklist in 2013. Stricter adherence to reporting checklists by authors, editors, and peer reviewers is needed, as are interventions that help those stakeholders improve the reporting of systematic reviews. Some items of the PRISMA-A checklist are difficult to score.
    Keywords:  Abstract; PRISMA; Reporting; Systematic review
    DOI:  https://doi.org/10.1186/s12874-019-0675-2
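    In entry 11 above, inter-rater agreement on PRISMA-A items is summarized with Cohen's kappa (0.807), which corrects raw percent agreement for the agreement two raters would reach by chance: kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is chance agreement derived from each rater's marginal category frequencies. The following is a minimal sketch of that calculation; the item scores are made up for illustration and are not the study's data.

      from collections import Counter

      def cohen_kappa(rater_a, rater_b):
          # Cohen's kappa for two raters scoring the same items.
          n = len(rater_a)
          categories = set(rater_a) | set(rater_b)
          # Observed agreement: proportion of items on which the raters agree.
          p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
          # Chance agreement, from each rater's marginal category frequencies.
          freq_a, freq_b = Counter(rater_a), Counter(rater_b)
          p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
          return (p_o - p_e) / (1 - p_e)

      # Hypothetical scores for the 12 PRISMA-A items (1 = reported, 0 = not reported).
      rater_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
      rater_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1]
      print(round(cohen_kappa(rater_1, rater_2), 3))  # 0.657 for this made-up example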