bims-skolko Biomed News
on Scholarly communication
Issue of 2023‒07‒16
twenty-six papers selected by
Thomas Krichel
Open Library Society


  1. J Gen Intern Med. 2023 Jul 12.
      Medical journal publishing has changed dramatically over the past decade. The shift from print to electronic distribution altered the industry's economic model. This was followed by open access mandates from funding organizations and the subsequent imposition of article processing charges on authors. The medical publishing industry is large and while there is variation across journals, it is overall highly profitable. As journals have moved to digital dissemination, advertising revenues decreased and publishers shifted some of the losses onto authors by way of article processing charges. The number of open access journals has increased substantially in recent years. The open access model presents an equity paradox; while it liberates scientific knowledge for the consumer, it presents barriers to those who produce research. This emerging "pay-to-publish" system offers advantages to authors who work in countries and at institutes with more resources. Finally, the medical publishing industry represents an unusual business model; the people who provide both the content and the external peer review receive no payment from the publisher, who generates revenue from the content. The very unusual economic model of this industry makes it vulnerable to disruptive change. The economic model of medical publishing is rapidly evolving and this will lead to disruption of the industry. These changes will accelerate dissemination of science and may lead to a shift away from lower-impact journals towards pre-print servers.
    DOI:  https://doi.org/10.1007/s11606-023-08307-z
  2. HCA Healthc J Med. 2022 ;3(6): 355-362
      Among the pillars of science is the galvanizing process of peer review. Editors of medical and scientific publications recruit specialty leaders to evaluate the quality of manuscripts. These peer reviewers help to ensure that data are collected, analyzed, and interpreted as accurately as possible, thereby moving the field forward and ultimately improving patient care. As physician-scientists, we are given the opportunity and responsibility to participate in the peer review process. There are many benefits to engaging in the peer review process including exposure to cutting-edge research, growing your connection with the academic community, and fulfilling the scholarly activity requirements of your accrediting organization. In the present manuscript, we discuss the key components of the peer review process and hope that it will serve as a primer for the novice reviewer and as a useful guide for the experienced reviewer.
    Keywords:  GME; graduate medical education; journal; publishing; research peer review; scholarly activity; scholarly communication
    DOI:  https://doi.org/10.36518/2689-0216.1325
  3. PLoS One. 2023 ;18(7): e0287443
      Peer review is the backbone of academia and humans constitute a cornerstone of this process, being responsible for reviewing submissions and making the final acceptance/rejection decisions. Given that human decision-making is known to be susceptible to various cognitive biases, it is important to understand which (if any) biases are present in the peer-review process, and design the pipeline such that the impact of these biases is minimized. In this work, we focus on the dynamics of discussions between reviewers and investigate the presence of herding behaviour therein. Specifically, we aim to understand whether reviewers and discussion chairs get disproportionately influenced by the first argument presented in the discussion when (in the case of reviewers) they form an independent opinion about the paper before discussing it with others. In conjunction with the review process of a large, top-tier machine learning conference, we design and execute a randomized controlled trial that involves 1,544 papers and 2,797 reviewers with the goal of testing for the conditional causal effect of the discussion initiator's opinion on the outcome of a paper. Our experiment reveals no evidence of herding in peer-review discussions. This observation is in contrast with past work that has documented an undue influence of the first piece of information on the final decision (e.g., anchoring effect) and analyzed herding behaviour in other applications (e.g., financial markets). Regarding policy implications, the absence of the herding effect suggests that the current status quo of the absence of a unified policy towards discussion initiation does not result in an increased arbitrariness of the resulting decisions.
    DOI:  https://doi.org/10.1371/journal.pone.0287443
  4. PLoS One. 2023 ;18(7): e0287660
      BACKGROUND: Despite having a crucial role in scholarly publishing, peer reviewers do not typically require any training. The purpose of this study was to conduct an international survey on the current perceptions and motivations of researchers regarding peer review training.
    METHODS: A cross-sectional online survey of biomedical researchers was conducted. A total of 2000 corresponding authors from 100 randomly selected medical journals were invited via email. Quantitative items were reported using frequencies and percentages or means and SE, as appropriate. A thematic content analysis was conducted for qualitative items, in which two researchers independently assigned codes to the responses for each written-text question and subsequently grouped the codes into themes. A descriptive definition of each category was then created, and unique themes, as well as the number and frequency of codes within each theme, were reported.
    RESULTS: A total of 186 participants completed the survey, of whom 14 were excluded. The majority of participants indicated they were men (n = 97 of 170, 57.1%), independent researchers (n = 108 of 172, 62.8%), and primarily affiliated with an academic organization (n = 103 of 170, 62.8%). A total of 144 of 171 participants (84.2%) indicated they had never received formal training in peer review. Most participants (n = 128, 75.7%) agreed, of whom 41 (32.0%) agreed strongly, that peer reviewers should receive formal training in peer review prior to acting as a peer reviewer. The most preferred training formats were online courses, online lectures, and online modules. Most respondents (n = 111 of 147, 75.5%) stated that difficulty finding and/or accessing training was a barrier to completing training in peer review.
    CONCLUSION: Although formal training in peer review is desired, most biomedical researchers have not received it and indicated that such training was difficult to access or not available.
    DOI:  https://doi.org/10.1371/journal.pone.0287660
  5. J Clin Epidemiol. 2023 Jul 06. pii: S0895-4356(23)00170-1. [Epub ahead of print]
      OBJECTIVE: To create a comprehensive list of all openly available online trainings in scholarly peer review and to analyze their characteristics.
    STUDY DESIGN AND SETTING: A systematic review of online training material in scholarly peer review openly accessible between 2012 and 2022. Training characteristics were presented in evidence tables and summarized narratively. A risk of bias tool was purpose-built for this study to evaluate whether the included training material was evidence-based.
    RESULTS: Forty-two training opportunities in manuscript peer review were identified, of which only 20 were openly accessible. Most were online modules (n = 12, 60%) with an estimated completion time of less than one hour (n = 13, 65%). Using our ad hoc risk of bias tool, four sources (20%) met our criteria for being evidence-based.
    CONCLUSION: Our comprehensive search of the literature identified 20 openly accessible online training materials in manuscript peer review. For such a crucial step in the dissemination of literature, a lack of training could potentially explain disparities in the quality of scholarly publishing.
    DOI:  https://doi.org/10.1016/j.jclinepi.2023.06.023
  6. J Biomol Tech. 2023 07 01. pii: 3fc1f5fe.77a08d5d. [Epub ahead of print] 34(2):
      We are thrilled to share the latest developments at the Journal of Biomolecular Techniques (JBT), your esteemed peer-reviewed publication dedicated to advancing biotechnology research. Since its inception, JBT has been committed to promoting the pivotal role that biotechnology plays in contemporary scientific endeavors, fostering knowledge exchange among biomolecular resource facilities, and communicating the groundbreaking research conducted by the Association's Research Groups, members, and other investigators.
    DOI:  https://doi.org/10.7171/3fc1f5fe.77a08d5d
  7. Cureus. 2023 Jun;15(6): e40126
      Academic conference participation and publications serve as a litmus test to evaluate researchers irrespective of their scientific discipline. Predatory or fake conferences and journals exploit this issue and rebrand themselves through multiple methods. This paper serves to introduce rebranding as one of the features adopted by predatory journals and conferences and formulate some important measures that could be taken by academic libraries, researchers, and publishers to address this issue. We found that rebranding serves as an efficient measure to evade legal implications. However, empirical longitudinal studies addressing the issue are absent. We have explained rebranding, multiple ways of rebranding, issues surrounding predatory publishing, and the role of academic libraries and provided a five-point prevention strategy to protect researchers from academic malpractices. Dedicated tools, scientific prowess, and vigilance of academic libraries and researchers can safeguard the scientific community. Creating awareness, increasing transparency of available databases, and the support of academic libraries and publishing houses along with global support will serve as effective measures to tackle predatory malpractices.
    Keywords:  bootlegging; ethics; fake conference; predatory journals; rebranding; scientific publishing
    DOI:  https://doi.org/10.7759/cureus.40126
  8. J Prosthet Dent. 2023 Jul 10. pii: S0022-3913(23)00371-2. [Epub ahead of print]
      STATEMENT OF PROBLEM: Use of the ChatGPT software program by authors raises many questions, primarily regarding egregious issues such as plagiarism. Nevertheless, little is known about the extent to which artificial intelligence (AI) models can produce high-quality research publications and advance and shape the direction of a research topic.
    PURPOSE: The purpose of this study was to determine how well the ChatGPT software program, a writing tool powered by AI, could respond to questions about scientific or research writing and generate accurate references with academic examples.
    MATERIAL AND METHODS: The ChatGPT software program was asked to locate an abstract containing a particular keyword in the Journal of Prosthetic Dentistry (JPD). Whether the resulting articles existed or had been published was then determined. The program was queried 5 times to locate 5 JPD articles containing 2 specific keywords, bringing the total number of articles to 25. The process was repeated twice, each time with a different set of keywords, and the ChatGPT software program provided a total of 75 articles. The search was conducted at various times between April 1 and 4, 2023. Finally, 2 authors independently searched the JPD website and Google Scholar to determine whether the articles provided by the ChatGPT software program existed.
    RESULTS: When the authors searched the JPD and Google Scholar using the same sets of keywords, the results did not match the papers that the ChatGPT software program had generated. None of the 75 articles provided by the ChatGPT software program could be accurately located in the JPD or Google Scholar, and the references had to be added manually to ensure the accuracy of the relevant references.
    CONCLUSIONS: Researchers and academic scholars must be cautious when using the ChatGPT software program because AI-generated content cannot provide or analyze the same information as an author or researcher. In addition, the results indicated that granting writing credit for such content, or citing AI-generated references, in prestigious academic journals is not yet appropriate. At this time, scientific writing is only valid when performed manually by researchers.
    DOI:  https://doi.org/10.1016/j.prosdent.2023.05.023
  9. Nature. 2023 Jul 13.
      
    Keywords:  Computer science; Machine learning; Publishing
    DOI:  https://doi.org/10.1038/d41586-023-02270-9
  10. J Bone Joint Surg Am. 2023 Jul 12.
      ABSTRACT: ➢ Natural language processing with large language models is a subdivision of artificial intelligence (AI) that extracts meaning from text with use of linguistic rules, statistics, and machine learning to generate appropriate text responses. Its utilization in medicine and in the field of orthopaedic surgery is rapidly growing. ➢ Large language models can be utilized in generating scientific manuscript texts of a publishable quality; however, they suffer from AI hallucinations, in which untruths or half-truths are stated with misleading confidence. Their use raises considerable concerns regarding the potential for research misconduct and for hallucinations to insert misinformation into the clinical literature. ➢ Current editorial processes are insufficient for identifying the involvement of large language models in manuscripts. Academic publishing must adapt to encourage safe use of these tools by establishing clear guidelines for their use, which should be adopted across the orthopaedic literature, and by implementing additional steps in the editorial screening process to identify the use of these tools in submitted manuscripts.
    DOI:  https://doi.org/10.2106/JBJS.23.00473
  11. Nature. 2023 Jul 12.
      
    Keywords:  Ageing; Authorship; Climate change; Climate sciences; Publishing
    DOI:  https://doi.org/10.1038/d41586-023-02298-x
  12. Elife. 2023 07 11. pii: e85550. [Epub ahead of print] 12
      Nullius in verba ('trust no one'), chosen as the motto of the Royal Society in 1660, implies that independently verifiable observations, rather than authoritative claims, are a defining feature of empirical science. As the complexity of modern scientific instrumentation has made exact replications prohibitive, sharing data is now essential for ensuring the trustworthiness of one's findings. While embraced in spirit by many, in practice open data sharing remains the exception in contemporary systems neuroscience. Here, we take stock of the Allen Brain Observatory, an effort to share data and metadata associated with surveys of neuronal activity in the visual system of laboratory mice. Data from these surveys have been used to produce new discoveries, to validate computational algorithms, and as a benchmark for comparison with other data, resulting in over 100 publications and preprints to date. We distill some of the lessons learned about open surveys and data reuse, including remaining barriers to data sharing and what might be done to address these.
    Keywords:  data sharing; electrophysiology; mouse; neurophysiology; neuroscience; open science; two photon calcium imaging
    DOI:  https://doi.org/10.7554/eLife.85550
  13. Ann Otol Rhinol Laryngol. 2023 Jul 11. 34894231185642
      OBJECTIVE: Data-sharing plays an essential role in advancing scientific understanding. Here, we aim to identify the commonalities and differences in data-sharing policies endorsed by otolaryngology journals and to assess their adherence to the FAIR (findable, accessible, interoperable, reusable) principles.
    METHODS: Data-sharing policies were searched for among 111 otolaryngology journals, as listed by Scimago Journal & Country Rank. Policies extracted from the top biomedical journals, as ranked by Google Scholar metrics, were used as a comparison. The FAIR principles for scientific data management and stewardship were used as the extraction framework. Extraction occurred in a blind, masked, and independent fashion.
    RESULTS: Of the 111 ranked otolaryngology journals, 100 met inclusion criteria. Of those 100 journals, 79 provided data-sharing policies. There was a clear lack of standardization across policies, along with specific gaps in accessibility and reusability which need to be addressed. Seventy-two policies (of 79; 91%) designated that metadata should have globally unique and persistent identifiers. Seventy-one (of 79; 90%) policies specified that metadata should clearly include the identifier of the data they describe. Fifty-six policies (of 79; 71%) outlined that metadata should be richly described with a plurality of accurate and relevant attributes.
    CONCLUSION: Otolaryngology journals have varying data-sharing policies, and adherence to the FAIR principles appears to be moderate. This calls for increased data transparency, allowing for results to be reproduced, confirmed, and debated.
    Keywords:  FAIR; data-sharing; metadata; otolaryngology; reproducibility; transparency
    DOI:  https://doi.org/10.1177/00034894231185642
  14. BMJ. 2023 07 11. 382 e075767
      OBJECTIVES: To synthesise research investigating data and code sharing in medicine and health to establish an accurate representation of the prevalence of sharing, how this frequency has changed over time, and what factors influence availability.
    DESIGN: Systematic review with meta-analysis of individual participant data.
    DATA SOURCES: Ovid Medline, Ovid Embase, and the preprint servers medRxiv, bioRxiv, and MetaArXiv were searched from inception to 1 July 2021. Forward citation searches were also performed on 30 August 2022.
    REVIEW METHODS: Meta-research studies that investigated data or code sharing across a sample of scientific articles presenting original medical and health research were identified. Two authors screened records, assessed the risk of bias, and extracted summary data from study reports when individual participant data could not be retrieved. Key outcomes of interest were the prevalence of statements that declared that data or code were publicly or privately available (declared availability) and the success rates of retrieving these products (actual availability). The associations between data and code availability and several factors (eg, journal policy, type of data, trial design, and human participants) were also examined. A two-stage approach to meta-analysis of individual participant data was performed, with proportions and risk ratios pooled with the Hartung-Knapp-Sidik-Jonkman method for random effects meta-analysis.
    RESULTS: The review included 105 meta-research studies examining 2 121 580 articles across 31 specialties. Eligible studies examined a median of 195 primary articles (interquartile range 113-475), with a median publication year of 2015 (interquartile range 2012-2018). Only eight studies (8%) were classified as having a low risk of bias. Meta-analyses showed a prevalence of declared and actual public data availability of 8% (95% confidence interval 5% to 11%) and 2% (1% to 3%), respectively, between 2016 and 2021. For public code sharing, both the prevalence of declared and actual availability were estimated to be <0.5% since 2016. Meta-regressions indicated that only declared public data sharing prevalence estimates have increased over time. Compliance with mandatory data sharing policies ranged from 0% to 100% across journals and varied by type of data. In contrast, success in privately obtaining data and code from authors historically ranged between 0% and 37% and 0% and 23%, respectively.
    CONCLUSIONS: The review found that public code sharing was persistently low across medical research. Declarations of data sharing were also low and, although increasing over time, did not always correspond to actual sharing of data. The effectiveness of mandatory data sharing policies varied substantially by journal and type of data, a finding that might be informative for policy makers when designing policies and allocating resources to audit compliance.
    SYSTEMATIC REVIEW REGISTRATION: Open Science Framework doi:10.17605/OSF.IO/7SX8U.
    DOI:  https://doi.org/10.1136/bmj-2023-075767
  15. F1000Res. 2023 ;12 561
      The rate at which scientific information spreads has accelerated in recent years. In this context, many scientific disciplines are beginning to recognize the value and possibility of sharing open access (OA) online manuscripts in their preprint form. Preprints are academic papers that are posted publicly but have not yet been evaluated by peers. They have existed in research at least since the 1960s and the creation of arXiv in physics and mathematics. Since then, preprint platforms, which can be publisher- or community-driven, for-profit or not-for-profit, and based on proprietary or free and open source software, have gained popularity in many fields (for example, bioRxiv for the biological sciences). Today, there are many platforms that are either discipline-specific or cross-domain, with exponential development over the past ten years. Preprints as a whole still make up a very small portion of scholarly publishing, but a large group of early adopters is testing out these value-adding tools across a much wider range of disciplines than in the past. In this opinion article, we provide perspective on the three main options available for earth scientists, namely EarthArXiv, ESSOAr/ESS Open Archive and EGUsphere.
    Keywords:  Open Access; Open Science; Preprint
    DOI:  https://doi.org/10.12688/f1000research.133612.2
  16. Qual Res Med Healthc. 2022 Dec 31. 6(3): 11170
      
    Keywords:  Qualitative research; methodology; quotations
    DOI:  https://doi.org/10.4081/qrmh.2022.11170
  17. HCA Healthc J Med. 2023 ;4(1): 61-68
      Case reports play an essential role in the dissemination of knowledge in medicine. A published case is typically an unusual or unexpected presentation in which the outcomes, clinical course, and prognosis are linked to a literature review in order to place the case into the appropriate context. Case reports are a good option for new writers to generate scholarly output. This article can serve as a template for writing a case report, with instructions for creating the abstract and crafting the body of the case report: introduction, case presentation, and discussion. Instructions for writing an effective cover letter to the journal editor are also provided, as well as a checklist to help authors prepare their case reports for submission.
    Keywords:  GME; case report; graduate medical education; journal; publishing; scholarly activity; scholarly communication
    DOI:  https://doi.org/10.36518/2689-0216.1485
  18. Gland Surg. 2023 Jun 30. 12(6): 749-766
      Background: Surgical technique plays an essential role in achieving good health outcomes. However, the quality of surgical technique reporting remains heterogeneous. Reporting checklists could help authors describe the surgical technique more transparently and effectively, assist reviewers and editors in evaluating it more informatively, and help readers better understand the technique. We previously developed SUPER (surgical technique reporting checklist and standards) to assist authors in reporting research that contains surgical technique more transparently. However, further explanation and elaboration of each item are needed for better understanding and reporting practice.
    Methods: We searched the surgical literature in PubMed, Google Scholar and journal websites published up to January 2023 to find multidisciplinary examples in various article types for each SUPER item.
    Results: We explain the 22 items of the SUPER and provide rationales for each item. We provide 69 examples from 53 publications that present optimal reporting of the 22 items. Article types of the examples include pure surgical technique articles, as well as case reports, observational studies, and clinical trials that contain surgical technique. The examples are multidisciplinary, covering general surgery, orthopaedic surgery, cardiac surgery, thoracic surgery, gastrointestinal surgery, neurological surgery, oncological surgery, emergency surgery, and others.
    Conclusions: Along with the SUPER article, this explanation and elaboration document can promote a deeper understanding of the SUPER items. We hope that it will further guide surgeons and researchers in reporting, and assist editors and peer reviewers in reviewing manuscripts related to surgical technique.
    Keywords:  SUPER; Surgical technique; guideline; reporting checklist; surgery
    DOI:  https://doi.org/10.21037/gs-23-76