bims-skolko Biomed News
on Scholarly communication
Issue of 2021‒01‒31
twenty papers selected by
Thomas Krichel
Open Library Society


  1. Patterns (N Y). 2021 Jan 08. 2(1): 100191
      The author reviews the digital transformation of scholarly communication since the 1990s and explains how COVID-19 is accelerating open science, drawing an analogy with chemical reactions. Discussing the current state of preprints, the potential of peer review, and the essence of open science, the author argues that developing additional services and balancing incremental change with innovation during this transition state is crucial to fostering new trust among stakeholders.
    DOI:  https://doi.org/10.1016/j.patter.2020.100191
  2. JHEP Rep. 2021 Feb;3(1): 100196
      Background & Aims: In 2005, the registration of all randomised controlled trials (RCTs) before enrolment of participants became a condition for publication by the International Committee of Medical Journal Editors to increase transparency in trial reporting. Among RCTs on transarterial chemoembolisation (TACE) for the treatment of hepatocellular carcinoma (HCC) published after 2007, we assessed the proportion that were registered and compared registered primary outcomes (PO) with those reported in publications to determine whether primary outcome reporting bias favoured significant outcomes.
    Methods: We searched MEDLINE and EMBASE for reports of RCTs evaluating TACE for HCC treatment between 1 September 2007 and 31 March 2018. Registration and publication information for each included RCT was compared using a standardised data extraction form.
    Results: Thirteen out of 53 (25%) included RCTs were correctly registered (i.e. before the starting date of the RCT), 14 (26%) were registered after the RCT starting date, and 26 (49%) were not registered. Six of the 14 retrospectively registered RCTs (43%) were registered after their completion date. The PO was clearly reported in the published article of all registered RCTs, whereas reporting was unclear in 8/26 (31%) of the non-registered RCTs (p = 0.01). Among registered RCTs, 8/27 (30%) had major discrepancies between registered and published PO. The influence of these discrepancies could be assessed in 6 of them and favoured statistically significant results in 2.
    Conclusions: Registration and outcome reporting in RCTs on TACE for HCC are often inadequate. Registration should be reinforced because it is a key to transparency.
    Lay summary: Trial registration is fundamental to our understanding and interpretation of results, as it provides information on all relevant clinical trials (to place the results in a broader context) and on the details of their associated protocols (to ensure that the scientific plan is followed). Once a randomised controlled trial (RCT) is completed, the trial results are usually publicly shared via scientific articles that are expected to report them thoroughly and objectively. This study shows that half of the RCTs evaluating transarterial chemoembolisation for hepatocellular carcinoma were not registered, and identifies major discrepancies between registered and published primary outcomes that favoured significant results.
    Keywords:  Bias; CENTRAL, Cochrane Central Register of Controlled Trials; ChiCTR, Chinese Clinical Trial Registry; Embolisation; HCC, hepatocellular carcinoma; Hepatocellular carcinoma; ICMJE, International Committee of Medical Journal Editors; ICTRP, International Clinical Trials Registry Platform; PO, primary outcome; Primary outcome; RCTs, randomised controlled trials; Randomised controlled trial; Registration; TACE, transarterial chemoembolisation; UMIN-CTR, University Hospital Medical Information Network Clinical Trials registry
    DOI:  https://doi.org/10.1016/j.jhepr.2020.100196
  3. J Biomed Inform. 2021 Jan 21. pii: S1532-0464(21)00014-9. [Epub ahead of print] 103685
      The COVID-19 crisis led a group of scientific and informatics experts to accelerate development of an infrastructure for electronic data exchange for the identification, processing, and reporting of scientific findings. The Fast Healthcare Interoperability Resources (FHIR®) standard, which is overcoming the interoperability problems in health information exchange, was extended to evidence-based medicine (EBM) knowledge with the EBMonFHIR project. A 13-step Code System Development Protocol was created in September 2020 to support global development of terminologies for the exchange of scientific evidence. For Step 1, we assembled expert working groups with 55 people from 26 countries by October 2020. For Step 2, we identified 23 commonly used tools and systems for which the first version of code systems will be developed. For Step 3, a total of 368 non-redundant concepts were drafted to become display terms for four code systems (Statistic Type, Statistic Model, Study Design, Risk of Bias); a minimal illustrative sketch of such a code system follows this entry. Steps 4 through 13 will guide ongoing development and maintenance of these terminologies for scientific exchange. When completed, the code systems will facilitate identifying, processing, and reporting research results and the reliability of those results. More efficient and detailed scientific communication will reduce cost and burden and improve health outcomes, quality of life, and patient, caregiver, and healthcare professional satisfaction. We hope the achievements reached thus far will outlive COVID-19 and provide an infrastructure to make science computable for future generations. Anyone may join the effort at https://www.gps.health/covid19_knowledge_accelerator.html.
    Keywords:  Code System; Evidence-based medicine; Ontology; Research literature; Science communication; Terminology
    DOI:  https://doi.org/10.1016/j.jbi.2021.103685
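    The code systems described above are terminologies exchanged as FHIR resources. As a minimal, hypothetical sketch (not taken from the EBMonFHIR project itself), the snippet below shows how one drafted code system, e.g. Statistic Type, could be represented as a FHIR CodeSystem resource assembled in Python; the URL, codes, and display terms are invented placeholders.

      # Hypothetical sketch of a FHIR CodeSystem resource for a "Statistic Type"
      # terminology; the url, codes, and display terms are placeholders, not
      # content drafted by the EBMonFHIR working groups.
      import json

      statistic_type_code_system = {
          "resourceType": "CodeSystem",   # FHIR resource type used for terminologies
          "url": "http://example.org/fhir/CodeSystem/statistic-type",  # placeholder identifier
          "name": "StatisticTypeExample",
          "status": "draft",
          "content": "example",           # flags this as a partial, illustrative code system
          "concept": [                    # display terms drafted in Step 3 would go here
              {"code": "odds-ratio", "display": "Odds ratio"},
              {"code": "hazard-ratio", "display": "Hazard ratio"},
              {"code": "mean-difference", "display": "Mean difference"},
          ],
      }

      print(json.dumps(statistic_type_code_system, indent=2))

    Hosting such a resource on a FHIR terminology server could let the tools identified in Step 2 refer to statistic types by stable codes rather than free text.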
  4. medRxiv. 2021 Jan 22. pii: 2021.01.21.21250243. [Epub ahead of print]
      Introduction: The impact of policies on COVID-19 outcomes is one of the most important questions of our time. Unfortunately, there are substantial concerns about the strength and quality of the literature examining policy impacts. This study systematically assessed the currently published COVID-19 policy impact literature for a checklist of study design elements and methodological issues.
    Methods: We included studies that were primarily designed to estimate the quantitative impact of one or more implemented COVID-19 policies on direct SARS-CoV-2 and COVID-19 outcomes. After searching PubMed for peer-reviewed articles published on November 26 or earlier and screening, all studies were reviewed by three reviewers independently and in consensus. The review tool was based on review guidance for assessing COVID-19 health policy impact evaluation analyses, including first identifying the assumptions behind the methods used, followed by assessing graphical display of outcomes data, functional form for the outcomes, timing between policy and impact, concurrent changes to the outcomes, and an overall rating.
    Results: After 102 articles were identified as potentially meeting inclusion criteria, we identified 36 published articles that evaluated the quantitative impact of COVID-19 policies on direct COVID-19 outcomes. The majority (n=23/36) of studies in our sample examined the impact of stay-at-home requirements. Nine studies were set aside due to inappropriate study design (n=8 pre/post; n=1 cross-section), and 27 articles were given a full consensus assessment. 20/27 met criteria for graphical display of data, 5/27 for functional form, 19/27 for timing between policy implementation and impact, and only 3/27 for concurrent changes to the outcomes. Only 1/27 studies passed all of the above checks, and 4/27 were rated as overall appropriate. Including the 9 studies set aside, we found that only four (or by a stricter standard, only one) of the 36 identified published and peer-reviewed health policy impact evaluation studies passed a set of key design checks for identifying the causal impact of policies on COVID-19 outcomes.
    Discussion: The current literature directly evaluating the impact of COVID-19 policies largely fails to meet key design criteria for useful inference. This may be partially due to circumstances that make evaluation particularly difficult, as well as a context combining pressure for rapid publication, the importance of the topic, and weak peer-review processes. Importantly, weak evidence is non-informative and does not indicate how effective these policies were in changing COVID-19 outcomes.
    DOI:  https://doi.org/10.1101/2021.01.21.21250243
  5. J Indian Assoc Pediatr Surg. 2020 Nov-Dec;25(6): 349-351
      Published articles in scientific journals are a key method for knowledge-sharing. Researchers can face pressure to publish, and this can sometimes lead to breaches of ethical values, whether conscious or unconscious. Such practices are prevented by applying strict ethical guidelines to experiments involving human subjects or biological tissues. Editors too face ethical problems, including how best to handle peer-review bias and how to find reviewers with experience, probity, and professionalism. This article emphasizes that authors and their sponsoring organizations need to be informed of the importance of upholding research guidelines and ethical rules when reporting scientific work.
    Keywords:  Ethics; guidelines; medical research; scientific misconduct
    DOI:  https://doi.org/10.4103/jiaps.JIAPS_219_19
  6. Perspect Clin Res. 2020 Oct-Dec;11(4): 168-173
      Background: A conflict of interest (COI) in publication exists when the primary interest of publication is influenced by a secondary interest (financial or non-financial). International guidelines are available that journal editors can use to formulate their own COI policies. The present study was carried out with the objective of evaluating the COI policies of Indian biomedical journals.
    Materials and Methods: The MEDLINE/PubMed and MedIND/IndMed databases were searched. Journals that were active and indexed were included. Outcome measures were the proportion of journals: (a) mentioning a COI disclosure statement for authors, reviewers, and editors; (b) adequately explaining COI; (c) referring to three international guidelines; and (d) among PubMed-indexed versus other indexed journals, mentioning a COI policy for authors, reviewers, and editors and providing an adequate explanation of COI. Apart from descriptive statistics, associations between indexing and COI policy for all three stakeholders were evaluated.
    Results: A total of 106 journals formed the final sample. Among them, 82 (77%) were PubMed-indexed and 24 (23%) were MedIND/IndMed-indexed. A COI disclosure statement was mentioned in 93 (87.7%) journals for authors, 10 (9.4%) for reviewers, and 6 (5.6%) for editors. Only 35 (33%) journals adequately explained COI. A total of 61 (57.5%) journals endorsed all three international guidelines. PubMed indexing was associated with approximately 19 times the odds of COI policies being present on the journal's home page relative to journals indexed with other indexing agencies (crude odds ratio 18.8, 95% confidence interval [4.6, 77], P < 0.0001).
    Conclusion: Very few Indian biomedical journals had COI policies for reviewers and editors, and most did not explain COI adequately. Nearly a fifth of the journals we evaluated did not follow any guideline for disclosing COI.
    Keywords:  Authors; editor; medical journals; reviewer
    DOI:  https://doi.org/10.4103/picr.PICR_85_19
  7. J Contin Educ Nurs. 2021 Feb 01. 52(2): 64-66
      The peer-review process is a form of self-regulation by qualified members of the profession to evaluate works done by one or more individuals. However, without a clear structure, the peer-review process can be problematic. Rubrics have been shown to increase peer reviewer satisfaction and author compliance, but only when they convey clear and specific descriptions for task-specific criteria. Sigma developed a peer-review rubric to provide consistency in judging scientific abstracts. An asynchronous provider-directed, provider-paced educational activity can be used to successfully educate peer reviewers on the benefit and use of a peer-review rubric. [J Contin Educ Nurs. 2021;52(2):64-66.].
    DOI:  https://doi.org/10.3928/00220124-20210114-04
  8. Hippokratia. 2020 Apr-Jun;24(2): 94
      
    Keywords:  Publication; blinding; peer review; prestige bias
  9. Med Intensiva. 2021 Jan 25. pii: S0210-5691(20)30395-8. [Epub ahead of print]
      OBJECTIVE: To determine the fate of manuscripts rejected by the journal Medicina Intensiva (MI) from 2015 to 2017, with follow-up until 2019.
    DESIGN: Retrospective observational study.
    SETTING: Biomedical journal publishing.
    PARTICIPANTS: Manuscripts rejected by MI.
    INTERVENTIONS: None.
    MAIN VARIABLES OF INTEREST: Time to publication, impact factor (IF), citations generated, and variables associated with publication.
    RESULTS: Of the articles analyzed (344 original articles and 263 scientific letters), 69% (420) were rejected, and 205 of these (48.8%) were subsequently published, with 66 articles accruing 180 citations. The IF of the publishing journal was lower than that of MI for 173 (84.4%) articles. The number of IF-valid citations exceeded the IF of MI in 21 articles. Manuscript origin (OR 2.11, 95% CI 1.29-3.46), female authorship (OR 1.58, 95% CI 1.03-2.44), English language (OR 2.38, 95% CI 1.41-4.0), and reviewed papers (OR 1.71, 95% CI 1.10-2.66) were associated with publication in a PubMed-indexed journal.
    CONCLUSIONS: Articles rejected by MI show an average rate of subsequent publication in other journals. Most of these articles are published in journals with a lower IF and receive fewer citations than the IF of MI.
    Keywords:  Artículos rechazados; Bibliometrics; Bibliometría; Factor de impacto; Gender; Género; Impact factor; Peer review; Publication rate; Rejected articles; Revisión por pares; Tasa de publicación
    DOI:  https://doi.org/10.1016/j.medin.2020.11.006
  10. Nature. 2020 Jan 30.
      
    Keywords:  Infection; Publishing; Virology
    DOI:  https://doi.org/10.1038/d41586-020-00253-8
  11. Br J Sports Med. 2021 Jan 29. pii: bjsports-2020-103652. [Epub ahead of print]
      Misuse of statistics in medical and sports science research is common and may lead to detrimental consequences to healthcare. Many authors, editors and peer reviewers of medical papers will not have expert knowledge of statistics or may be unconvinced about the importance of applying correct statistics in medical research. Although there are guidelines on reporting statistics in medical papers, a checklist on the more general and commonly seen aspects of statistics to assess when peer-reviewing an article is needed. In this article, we propose a CHecklist for statistical Assessment of Medical Papers (CHAMP) comprising 30 items related to the design and conduct, data analysis, reporting and presentation, and interpretation of a research paper. While CHAMP is primarily aimed at editors and peer reviewers during the statistical assessment of a medical paper, we believe it will serve as a useful reference to improve authors' and readers' practice in their use of statistics in medical research. We strongly encourage editors and peer reviewers to consult CHAMP when assessing manuscripts for potential publication. Authors also may apply CHAMP to ensure the validity of their statistical approach and reporting of medical research, and readers may consider using CHAMP to enhance their statistical assessment of a paper.
    Keywords:  methodology; statistics
    DOI:  https://doi.org/10.1136/bjsports-2020-103652
  12. J Korean Med Sci. 2021 Jan 25. 36(4): e36
      The current digital era has led to a surge in the use of social media in academia. Worldwide connectivity has brought to the fore the scarce participation of Central Asia and adjoining regions in scientific discussions. Global perspectives in science may not be recorded due to such communication disparities. Equal representation of all ethnic groups is essential to have a rounded picture of the topic at hand. The extent of use of social media platforms in various regions is determined by social, economic, religious, political, cultural, and ethnic factors, which may limit participation. This paper examines the use of social media by academics in the Central Asian countries, China, and Mongolia. It also focuses on the linguistic skills of the Central Asian, Chinese, and Mongolian populations and their eagerness to be involved in global discussions. Understanding the factors limiting participation from specific regions is the first step in this direction.
    Keywords:  Access to Information; Central Asia; Internet Access; Language; Social Media
    DOI:  https://doi.org/10.3346/jkms.2021.36.e36
  13. F1000Res. 2020;9: 1257
      Software is as integral as a research paper, monograph, or dataset to the full understanding and dissemination of research. This article provides broadly applicable guidance on software citation for the communities and institutions publishing academic journals and conference proceedings. We expect those communities and institutions to produce versions of this document with software examples and citation styles appropriate for their intended audiences. This article and those community-specific versions are aimed at authors citing software, including software developed by the authors or by others. We also include brief instructions on how software can be made citable, directing readers to more comprehensive guidance published elsewhere. The guidance presented in this article helps to support proper attribution and credit, reproducibility, collaboration and reuse, and encourages building on the work of others to further research.
    Keywords:  Software citation; bibliometrics; guidelines; publishing; scholarly communication
    DOI:  https://doi.org/10.12688/f1000research.26932.1
  14. JAMA Netw Open. 2021 Jan 04. 4(1): e2033972
      Importance: The benefits of responsible sharing of individual-participant data (IPD) from clinical studies are well recognized, but stakeholders often disagree on how to align those benefits with privacy risks, costs, and incentives for clinical trialists and sponsors. The International Committee of Medical Journal Editors (ICMJE) required a data sharing statement (DSS) from submissions reporting clinical trials effective July 1, 2018. The required DSSs provide a window into current data sharing rates, practices, and norms among trialists and sponsors.
    Objective: To evaluate the implementation of the ICMJE DSS requirement in 3 leading medical journals: JAMA, Lancet, and New England Journal of Medicine (NEJM).
    Design, Setting, and Participants: This is a cross-sectional study of clinical trial reports published as articles in JAMA, Lancet, and NEJM between July 1, 2018, and April 4, 2020. Articles not eligible for DSS, including observational studies and letters or correspondence, were excluded. A MEDLINE/PubMed search identified 487 eligible clinical trials in JAMA (112 trials), Lancet (147 trials), and NEJM (228 trials). Two reviewers evaluated each of the 487 articles independently.
    Exposure: Publication of clinical trial reports in an ICMJE medical journal requiring a DSS.
    Main Outcomes and Measures: The primary outcomes of the study were declared data availability and actual data availability in repositories. Other captured outcomes were data type, access, and conditions and reasons for data availability or unavailability. Associations with funding sources were examined.
    Results: A total of 334 of 487 articles (68.6%; 95% CI, 64%-73%) declared data sharing, with nonindustry NIH-funded trials exhibiting the highest rate of declared data sharing (89%; 95% CI, 80%-98%) and industry-funded trials the lowest (61%; 95% CI, 54%-68%). However, only 2 IPD sets (0.6%; 95% CI, 0.0%-1.5%) were actually deidentified and publicly available as of April 10, 2020. The remainder were supposedly accessible via request to authors (143 of 334 articles [42.8%]), a repository (89 of 334 articles [26.6%]), or a company (78 of 334 articles [23.4%]). Among the 89 articles declaring that IPD would be stored in repositories, only 17 (19.1%) had deposited data; embargoes and regulatory approval were the most common reasons for non-deposit. An embargo was set in 47.3% of data-sharing articles (158 of 334), and in half of them the period exceeded 1 year or was unspecified.
    Conclusions and Relevance: Most trials published in JAMA, Lancet, and NEJM after the implementation of the ICMJE policy declared their intent to make clinical data available. However, a wide gap between declared and actual data sharing exists. To improve transparency and data reuse, journals should promote the use of unique pointers to data set location and standardized choices for embargo periods and access requirements.
    DOI:  https://doi.org/10.1001/jamanetworkopen.2020.33972
  15. Cureus. 2020 Dec 14. 12(12): e12069
      INTRODUCTION: Herein, we aimed to compare the scientometric data of hematology journals across publication models, in particular journals with all-open-access (OA) versus hybrid-OA publication models.
    METHODS: Data were obtained from the Scimago Journal & Country Rank and Clarivate Analytics InCites websites. Fifty-four journals indexed in Science Citation Index (SCI) and SCI-Expanded were evaluated. Bibliometric data and the impact factor (IF), scientific journal rank (SJR), eigenfactor score (ES), and Hirsch (h)-index of the journals were obtained. Requested article publishing charges (APC) were recorded in United States dollars (USD). The Statistical Package for the Social Sciences (SPSS, IBM Corp., Armonk, NY) version 23.0 was used for data analysis.
    RESULTS: Hybrid OA was the most common publication model. One journal was subscription-only, two journals had a free-OA model, nine had a mandatory-OA-with-APC model, and 42 used a hybrid model. The median OA fee was 3400 USD. Hybrid-OA journals had a significantly higher median h-index than all-OA journals (72 vs. 40, p=0.03); other scientometric indexes were similar. When APCs were compared, all-OA journals charged a median of 900 USD less than hybrid-OA journals (2490 vs. 3400 USD, p=0.019).
    CONCLUSION: The OA publication model is widely used among hematology journals. Although hybrid-OA journals have a higher h-index, other scientometric indexes are similar. All-OA journals are more economically feasible given their lower median APC. Further scientometric studies of hematology journals, following citations per publication according to OA model, would better shed light on the data in this area.
    Keywords:  economics; health scientometrics; hematology; open access; open access publishing; scientometrics
    DOI:  https://doi.org/10.7759/cureus.12069