bims-skolko Biomed News
on Scholarly communication
Issue of 2019-11-24
twenty papers selected by
Thomas Krichel, Open Library Society



  1. Bioessays. 2019 Nov 21. e1900189
      The ACcess to Transparent Statistics (ACTS) call to action assembles four measures that are rapidly achievable by journals and funding agencies to enhance the quality of statistical reporting. The ACTS call to action is an appeal for concrete actions from institutions that should spearhead the battle for reproducibility.
    Keywords:  biostatistics; good practices; publishing; reporting; reproducibility
    DOI:  https://doi.org/10.1002/bies.201900189
  2. Nature. 2019 Nov;575(7783): 430-433
      
    Keywords:  Ethics; Publishing; Research data; Research management
    DOI:  https://doi.org/10.1038/d41586-019-03529-w
  3. World Neurosurg. 2019 Nov 19. pii: S1878-8750(19)32927-4. [Epub ahead of print]
      
    Keywords:  Authorship; Conflicts of Interest; Consent; Ethics; Peer review
    DOI:  https://doi.org/10.1016/j.wneu.2019.11.087
  4. J Am Dent Assoc. 2019 Nov 18. pii: S0002-8177(19)30606-3. [Epub ahead of print]
       BACKGROUND: Spin in randomized controlled trial (RCT) abstracts can misguide clinicians. In this cross-sectional analysis, the authors assessed the prevalence of spin in RCT abstracts and explored the factors potentially influencing it.
    METHODS: In this cross-sectional analysis, the authors conducted a systematic search of the top 10 dental journals ranked by Eigenfactor score and selected RCTs published in 2015 with statistically nonsignificant primary outcomes. The dentistry disciplines covered in these journals include general dentistry, dental research, oral implantology, endodontics, oral surgery, periodontology, and oral oncology. In these RCT abstracts, the authors assessed the prevalence of 3 different categories of spin and examined factors that could influence its presence using the t test and χ2 test.
    RESULTS: Spin assessment of the 75 included RCTs revealed spin in 23 abstracts (30.7%). Associations between the presence of spin in abstracts and international collaboration, type of commercial support, number of treatment arms, and journal impact factor were statistically nonsignificant (P ≥ .05).
    CONCLUSIONS: Approximately one-third of the 75 RCT abstracts published in high-impact dental journals in 2015 with nonsignificant outcomes presented with some form of spin, irrespective of funding type and journal impact factor.
    PRACTICAL IMPLICATIONS: Clinicians should be aware of the potential existence of spin in abstracts and be diligent in reading and appraising the full trial before incorporating its recommendations in clinical practice.
    Keywords:  Evidence-based dentistry; abstract; clinical decision-making; randomized controlled trial; spin
    DOI:  https://doi.org/10.1016/j.adaj.2019.08.009
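      A minimal sketch (not from the paper) of the kind of χ2 comparison reported above, testing whether spin prevalence differs by a binary trial characteristic; the 2x2 counts below are invented for illustration only.

        # Illustrative chi-square test of spin presence vs. a binary trial characteristic.
        # Counts are hypothetical; the paper reports 23/75 abstracts with spin overall.
        from scipy.stats import chi2_contingency

        # rows: characteristic present / absent; columns: spin / no spin
        table = [[9, 21],
                 [14, 31]]

        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p >= .05 would mirror the nonsignificant associations reported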
  5. BMC Med. 2019 Nov 19. 17(1): 205
       BACKGROUND: The peer review process has been questioned because it may fail to allow the publication of high-quality articles. This study aimed to evaluate the accuracy of early career researchers (ECRs) in identifying inadequate reporting in RCT reports, using an online CONSORT-based peer-review tool (COBPeer), compared with the usual peer-review process.
    METHODS: We performed a cross-sectional diagnostic study of 119 manuscripts, from BMC series medical journals, BMJ, BMJ Open, and Annals of Emergency Medicine reporting the results of two-arm parallel-group RCTs. One hundred and nineteen ECRs who had never reviewed an RCT manuscript were recruited from December 2017 to January 2018. Each ECR assessed one manuscript. To assess accuracy in identifying inadequate reporting, we used two tests: (1) ECRs assessing a manuscript using the COBPeer tool (after completing an online training module) and (2) the usual peer-review process. The reference standard was the assessment of the manuscript by two systematic reviewers. Inadequate reporting was defined as incomplete reporting or a switch in primary outcome and considered nine domains: the eight most important CONSORT domains and a switch in primary outcome(s). The primary outcome was the mean number of domains accurately classified (scale from 0 to 9).
    RESULTS: The mean (SD) number of domains (0 to 9) accurately classified per manuscript was 6.39 (1.49) for ECRs using COBPeer versus 5.03 (1.84) for the journal's usual peer-review process, with a mean difference [95% CI] of 1.36 [0.88-1.84] (p < 0.001). Concerning secondary outcomes, the sensitivity of ECRs using COBPeer versus the usual peer-review process in detecting incompletely reported CONSORT items was 86% [95% CI 82-89] versus 20% [16-24] and in identifying a switch in primary outcome 61% [44-77] versus 11% [3-26]. The specificity of ECRs using COBPeer versus the usual process to detect incompletely reported CONSORT domains was 61% [57-65] versus 77% [74-81] and to identify a switch in primary outcome 77% [67-86] versus 98% [92-100].
    CONCLUSIONS: Trained ECRs using the COBPeer tool were more likely to detect inadequate reporting in RCTs than the usual peer review processes used by journals. Implementing a two-step peer-review process could help improve the quality of reporting.
    TRIAL REGISTRATION: ClinicalTrials.gov NCT03119376 (registered April 18, 2017).
    Keywords:  CONSORT statement; Peer reviewers; Randomized controlled trials; Reporting
    DOI:  https://doi.org/10.1186/s12916-019-1436-0
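      A minimal sketch of how sensitivity and specificity with 95% confidence intervals, as reported above, can be computed from classification counts; the counts here are hypothetical, and the Wilson interval is an assumption (the abstract does not name its CI method).

        # Illustrative sensitivity/specificity with Wilson 95% CIs; counts are hypothetical.
        from statsmodels.stats.proportion import proportion_confint

        def rate_with_ci(successes, total):
            est = successes / total
            lo, hi = proportion_confint(successes, total, alpha=0.05, method="wilson")
            return est, lo, hi

        # Hypothetical counts: truly incomplete CONSORT domains that reviewers flagged,
        # and adequately reported domains that reviewers wrongly flagged.
        flagged_incomplete, total_incomplete = 430, 500
        flagged_complete, total_complete = 195, 500

        sens, s_lo, s_hi = rate_with_ci(flagged_incomplete, total_incomplete)
        spec, sp_lo, sp_hi = rate_with_ci(total_complete - flagged_complete, total_complete)
        print(f"sensitivity = {sens:.0%} [{s_lo:.0%}-{s_hi:.0%}]")
        print(f"specificity = {spec:.0%} [{sp_lo:.0%}-{sp_hi:.0%}]")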
  6. Sci Data. 2018 Nov 20. 5(1): 180259
      This article presents a practical roadmap for scholarly publishers to implement data citation in accordance with the Joint Declaration of Data Citation Principles (JDDCP), a synopsis and harmonization of the recommendations of major science policy bodies. It was developed by the Publishers Early Adopters Expert Group as part of the Data Citation Implementation Pilot (DCIP) project, an initiative of FORCE11.org and the NIH BioCADDIE program. The structure of the roadmap presented here follows the "life of a paper" workflow and includes the categories Pre-submission, Submission, Production, and Publication. The roadmap is intended to be publisher-agnostic so that all publishers can use it as a starting point when implementing JDDCP-compliant data citation. Authors reading this roadmap will also better know what to expect from publishers and how to enable their own data citations to gain maximum impact, as well as how to comply with what will become increasingly common funder mandates on data transparency.
    DOI:  https://doi.org/10.1038/sdata.2018.259
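      The roadmap recommends citing datasets directly in the reference list with persistent identifiers. The sketch below assembles such a citation from DataCite-style metadata fields; the field names and the example record are assumptions for illustration, not taken from the roadmap itself.

        # Illustrative: build a reference-list data citation from DataCite-style metadata.
        # The example record and its DOI are hypothetical.
        def format_data_citation(meta):
            return (f"{meta['creators']} ({meta['year']}). {meta['title']} "
                    f"[Data set]. {meta['repository']}. https://doi.org/{meta['doi']}")

        example = {
            "creators": "Doe J, Roe R",
            "year": 2018,
            "title": "Example clinical trial dataset",
            "repository": "Example Data Repository",
            "doi": "10.1234/example.5678",  # hypothetical DOI
        }
        print(format_data_citation(example))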
  7. World J Clin Cases. 2019 Nov 06. 7(21): 3505-3516
       BACKGROUND: As an important part of clinical practice, the professional nursing process can be advanced in many ways. Although case reports are regarded as being of a lower quality grade in the hierarchy of evidence, one of the principles of evidence-based medicine is that decision-making should be based on a systematic summary of the evidence. However, evidence on the reporting characteristics of case reports in the nursing field is lacking.
    AIM: To use the CARE guidelines to assess reporting quality and factors influencing the quality of case reports in the nursing field.
    METHODS: Nursing journals indexed in the Science Citation Index (SCI) were identified from the professional website. Each of the identified journals was searched on its website for articles published before December 2017. Twenty-one sub-items on the CARE checklist were recorded as "YES", "PARTLY", or "NO" according to the information reported by the included studies. The responses were assigned corresponding scores of 1, 0.5, and 0, respectively. The overall score was the sum of the 21 sub-item scores and was classified as "high" (more than 15), "medium" (10.5 to 14.5), or "low" (less than 10). The means, standard deviations, odds ratios (OR), and the associated 95% confidence intervals (CI) were calculated using Stata 12.0 software.
    RESULTS: Ultimately, 184 case reports from 16 SCI-indexed journals were identified, with overall scores ranging from 6.5 to 18 (mean = 13.6 ± 2.3). Of the included case reports, 10.3% were regarded as low quality, 52.7% as medium quality, and 37% as high quality. There were no statistically significant differences in the mean overall scores of case reports with funding versus those without funding (14.2 ± 1.7 vs 13.6 ± 2.4, respectively; P = 0.4456) or of those published in journals with impact factor < 1.8 versus impact factor ≥ 1.8 (13.3 ± 2.3 vs 13.6 ± 2.4, respectively; P = 0.4977). Five items from the CARE guidelines, 5a (Patient), 6 (Clinical findings), 8c (Diagnostic reasoning), 9 (Therapeutic intervention), and 11d (The main take-away lessons), were well reported (reporting rate more than 90%) in most of the included case reports. However, only three items, 2 (Keywords, OR = 0.42, 95%CI: 0.19-0.92, P = 0.03), 4 (Introduction, OR = 0.35, 95%CI: 0.15-0.83, P = 0.017), and 11b (The relevant medical literature, OR = 0.19, 95%CI: 0.06-0.56, P = 0.003), were better reported after the CARE guidelines were published in 2013.
    CONCLUSION: The reporting quality of case reports in the nursing field apparently has not improved since the publication of the CARE guidelines.
    Keywords:  Case report guidelines; Case reports; Nursing; Reporting quality; Science citation indexed journals; Systematic review
    DOI:  https://doi.org/10.12998/wjcc.v7.i21.3505
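      The scoring rule described in the methods above (21 CARE sub-items scored YES = 1, PARTLY = 0.5, NO = 0, summed, then banded) can be sketched as follows; the example responses are invented, and the handling of scores falling exactly on 10 or 15 is an assumption, since the abstract's bands do not cover them.

        # Illustrative CARE checklist scoring as described in the abstract above.
        SCORES = {"YES": 1.0, "PARTLY": 0.5, "NO": 0.0}

        def care_overall_score(responses):
            assert len(responses) == 21, "CARE checklist has 21 sub-items"
            return sum(SCORES[r] for r in responses)

        def quality_band(score):
            # Bands as stated in the abstract; scores of exactly 10 or 15 are not
            # covered there, so returning "unclassified" for them is an assumption.
            if score > 15:
                return "high"
            if 10.5 <= score <= 14.5:
                return "medium"
            if score < 10:
                return "low"
            return "unclassified"

        # Hypothetical case report: 12 YES, 5 PARTLY, 4 NO
        score = care_overall_score(["YES"] * 12 + ["PARTLY"] * 5 + ["NO"] * 4)
        print(score, quality_band(score))  # 14.5 medium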
  8. Behav Res Ther. 2019 Oct 26;124: 103499. pii: S0005-7967(19)30185-8. [Epub ahead of print]
      Addressing the 'replication crisis' and questionable research practices are at the forefront of international research agendas in clinical psychological science. The aim of this paper is to consider how the quality of research practices can be improved by a specific focus on publication practices. Currently, the responsibility for documenting quality research practices is primarily placed on authors. However, barriers to improved quality publication practices cut across all levels of the research community and require a broader approach that shares the burden for ensuring the production of high quality publications. We describe a framework that is intended to be ambitious and aspirational and encourage discussion and adoption of strategies to improve quality publication practices (QPPs). The framework cuts across multiple stakeholders and is designed to enhance (a) the quality of reporting; (b) adherence to protocols and guidelines; (c) timely accessibility of study materials and data. We discuss how QPPs might be improved by (a) funding bodies considering formally supporting QPPs; (b) research institutions encouraging a research culture that espouses quality research practices, and internally supporting QPP review processes and professional development in QPPs; (c) journals expanding editorial teams to include reviewers with design and statistical expertise, considering strategies to enhance QPP adherence during the peer review process, and committing to ongoing assessment and development of QPP training for peer reviewers; and (d) authors and peer reviewers integrating QPPs during the manuscript preparation/peer review process, engaging in ongoing QPP training, and committing to openness and transparency initiatives. We discuss the current state and potential next steps within each stage of the framework and provide information and resources to enhance QPPs. We hope that the suggestions offered here inspire research institutions, leaders and faculty to discuss, reflect on, and take action towards, integrating these, or other, QPPs into their research practice and workplace.
    Keywords:  Publication process; Quality research practices; Reproducibility
    DOI:  https://doi.org/10.1016/j.brat.2019.103499
  9. Chin Clin Oncol. 2019 Nov 13. pii: cco.2019.10.04. [Epub ahead of print]
    Editorial Office
      
    DOI:  https://doi.org/10.21037/cco.2019.10.04
  10. Eur Neuropsychopharmacol. 2019 Nov 18. pii: S0924-977X(19)31719-5. [Epub ahead of print]
      Both positive and negative (null or neutral) results are essential for the progress of science and its self-correcting nature. However, there is general reluctance to publish negative results, and this may be due to a range of factors (e.g., the widely held perception that negative results are more difficult to publish, and the preference to publish positive findings that are more likely to generate citations and funding for additional research). It is particularly challenging to disclose negative results that are not consistent with previously published positive data, especially if the initial publication appeared in a high impact journal. Ideally, there should be both incentives and support to reduce the costs associated with investing efforts into preparing publications with negative results. We describe here a set of criteria that can help scientists, reviewers and editors to publish technically sound, scientifically high-impact negative (or null) results originating from rigorously designed and executed studies. Proposed criteria emphasize the importance of collaborative efforts and communication among scientists (also including the authors of original publications with positive results).
    Keywords:  Good research practice; Negative results; Publication bias; Reproducibility
    DOI:  https://doi.org/10.1016/j.euroneuro.2019.10.007
  11. J Med Internet Res. 2019 Nov 20. 21(11): e16259
      Clinical implementation of digital health is a major hurdle to overcome in the coming years. Considering the role of the Journal of Medical Internet Research in the past 20 years and looking toward the journal's future, this viewpoint acknowledges the vision of medicine and the role that digital health plays in that vision. It also highlights barriers to implementation of digital health as an obstacle to achieving that vision. In particular, this paper focuses on how digital health research must start looking toward implementation as an area of inquiry and the role that the Journal of Medical Internet Research and its sister journals from JMIR Publications can play in this process.
    Keywords:  digital health; digital medicine; ehealth; implementation; journalogy; knowledge translation; mhealth; open access; publishing
    DOI:  https://doi.org/10.2196/16259
  12. Can J Anaesth. 2019 Nov 18.
       PURPOSE: Our objective was to analyze the gender of reviewers of all manuscripts submitted to the Canadian Journal of Anesthesia in 2016 and 2017. We hypothesized that the percentage of reviewers who were women would be ≤ 25%, an estimate based on the investigators' expert opinion and much lower than the overall proportion of women in medicine.
    METHODS: Reviewers and authors of manuscripts submitted between 1 January 2016 and 31 December 2017 were coded as "woman", "man", or "unknown gender" according to an internet search of the person's name, address, medical registration, and/or first name. We also explored associations between reviewer gender and author gender, numbers and types of manuscripts assigned, as well as speed of acceptance and completion of reviews.
    RESULTS: Of the 1,300 manuscripts for which first and corresponding author gender were identified, 855 manuscripts (66%) were only assessed internally by the editor-in-chief and/or associate editors, and 445 manuscripts (34%) were sent for external peer review. Of the 280 reviewers for these manuscripts, 64 (22.9%; 95% confidence interval [CI], 18.3 to 28.1) were women (P = 0.40 compared with 25%). Women provided 174 (18%) and men provided 780 (82%) of the 954 external written reviews. Four hundred and seventy of the 1,300 manuscripts (36.2%; 95% CI, 33.6 to 38.8) had a woman as the first and/or corresponding author.
    CONCLUSIONS: Despite 36.2% of the authors being women, only 22.9% of reviewers were women and they represented only 18% of the individual written reviews gathered. Our results are consistent with previous reports of underrepresentation of women as reviewers in various disciplines. Formal policies that promote increased gender diversity should be considered.
    DOI:  https://doi.org/10.1007/s12630-019-01533-2
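      A minimal sketch of the headline proportion above: 64 of 280 reviewers were women, compared against the hypothesized 25%. The exact binomial test and Wilson interval used here are assumptions, as the abstract does not name its methods; the Wilson interval approximately reproduces the reported 18.3 to 28.1%.

        # Illustrative re-computation of the reviewer-gender proportion reported above.
        # Test and CI methods are assumptions; the abstract does not state them.
        from scipy.stats import binomtest
        from statsmodels.stats.proportion import proportion_confint

        women, reviewers = 64, 280
        result = binomtest(women, reviewers, p=0.25)
        lo, hi = proportion_confint(women, reviewers, alpha=0.05, method="wilson")
        print(f"proportion = {women / reviewers:.1%}, "
              f"95% CI [{lo:.1%}, {hi:.1%}], p = {result.pvalue:.2f}")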
  13. J Bodyw Mov Ther. 2019 Oct;23(4): 683-689. pii: S1360-8592(19)30308-0. [Epub ahead of print]
      Though the basics of peer review are common knowledge in the scientific community, to many authors the publication process is mystifying, frustrating, and often confusing. The purpose of this editorial is to lift the curtain between authors and editors and provide insight into the actual life-cycle of a manuscript from submission to publication, including practical tips regarding editorial processes, explanations of the most common reasons for rejection, and advice on how to avoid rejection. While the detail is specific to the editorial setup at JBMT, it aims to provide useful insight to all authors seeking publication in a scientific journal, and to function as a teaching tool for educators guiding their students towards publication.
    DOI:  https://doi.org/10.1016/j.jbmt.2019.09.007
  14. Eur J Clin Invest. 2019 Nov 21. e13186
      When it comes to publishing, researchers' stated norms for established best practices often do not align with their actual behaviour [1]. Consider the example of data sharing: failure to share research data when publishing is increasingly viewed as a barrier to research progress and as contributing to waste and inefficiency [2]. Policies seeking to maximise the value of public funding are at the heart of the data sharing movement. Patients, too, appear to be overwhelmingly agreeable to their data being shared [3].
    DOI:  https://doi.org/10.1111/eci.13186
  15. Nature. 2019 Nov;575(7783): S36-S37
      
    Keywords:  Publishing; Research management
    DOI:  https://doi.org/10.1038/d41586-019-03544-x
  16. Environ Sci Pollut Res Int. 2019 Nov 19.
      There have been numerous environmental geochemistry studies using chemical, geological, ecological, and toxicological methods, but each of these fields requires more subject-specialist rigour than has generally been applied so far. Field-specific terminology has been misused and the resulting interpretations rendered inaccurate. In this paper, we propose a series of suggestions, based on our experience as teachers, researchers, reviewers, and editorial board members, to help authors avoid pitfalls. Many scientific inaccuracies continue to go unchecked and are repeatedly republished by the scientific community. These recommendations should help our colleagues and editorial board members, as well as reviewers, to avoid the numerous inaccuracies and misconceptions currently in circulation and to establish a trend towards greater rigour in scientific writing.
    Keywords:  Ecology; Ecotoxicology; Fractionation; Modelling; Risk assessment; Speciation
    DOI:  https://doi.org/10.1007/s11356-019-06835-y