bims-skolko Biomed News
on Scholarly communication
Issue of 2023‒06‒11
thirty papers selected by
Thomas Krichel
Open Library Society


  1. Nature. 2023 Jun 08.
      
    Keywords:  Publishing; Research management; Scientific community
    DOI:  https://doi.org/10.1038/d41586-023-01846-9
  2. Waste Manag Res. 2023 Jun 05. 734242X231172104
      The rationale for this article is that decision-makers in waste management (wm) often tend to neglect goals and confuse them with means such as the circular economy or the waste hierarchy. Because clear goals are crucial for developing effective wm strategies, the objectives of this mini review are (1) to clarify wm goals in a historical context by a literature review, (2) to investigate how these goals have been observed (a) in general scientific publishing and (b) specifically in Waste Management and Research (WM&R), and (3) to recommend measures for better consideration of wm goals by the publication sector. Based on general as well as specific bibliographic analyses of databases in Scopus and Google Scholar, the study confirms that little attention has been given to wm goals in scientific publishing. For instance, during the first 40 years of WM&R, 63 publications and eight editorials were found containing terms related to wm goals, but only 14 publications and eight editorials, respectively, explicitly discuss wm goals. We recommend focussing more on wm goals. Editors, authors, reviewers and professional associations in the field of wm should become aware of this challenge and react. If WM&R decides to become a strong platform for the issue of wm goals, it will have a unique selling proposition, and more authors, articles and readers are likely to result. This article aims to give the starting signal for such an endeavour.
    Keywords:  Goals of waste management; Waste Management & Research; bibliographic analysis; effective waste management; literature review; waste management objectives; waste management strategy
    DOI:  https://doi.org/10.1177/0734242X231172104
  3. Am J Pharm Educ. 2023 May;pii: S0002-9459(23)00023-2. [Epub ahead of print]87(5): 100009
      Dissemination of information through publications is central to academic research, as well as professional advancement. Although seemingly a straightforward endeavor, publication authorship may present challenges. While the International Committee of Medical Journal Editors defines authorship based on 4 required criteria, contemporary interdisciplinary collaborations can complicate authorship determinations. However, communication that occurs early and frequently in the research and writing process can help to prevent or mitigate potential conflicts, while a process for defining authorship contributions can aid in awarding proper credit. The Contributor Roles Taxonomy (CRediT) defines 14 essential roles of manuscript authors that can be utilized to characterize individual author contributions toward any given publication. This information is useful for academic administrators when evaluating the contributions of faculty during promotion and tenure decisions. In the era of collaborative scientific, clinical, and pedagogical scholarship, providing faculty development, including statements of credit in the published work, and developing institutional systems to capture and assess contributions are key.
    Keywords:  Authorship; Collaborative; Conflict; Contributions; Credit
    DOI:  https://doi.org/10.1016/j.ajpe.2022.10.002
  4. Arthroscopy. 2023 07;pii: S0749-8063(23)00254-2. [Epub ahead of print]39(7): 1597-1599
      Biomedical research Infographics, a short-form neologism for "information graphics," illustrate medical educational information in an engaging manner by enhancing concise text with figures, tables, and data visualizations in the form of charts and graphs. Visual Abstracts present a graphic summary of the information contained in a medical research abstract. In addition to improving retention, both Infographics and Visual Abstracts allow for dissemination of medical information on social media and increase the breadth of medical journal readership. Moreover, these new methods of scientific communication increase citation rates, as well as social media attention as determined by Altmetrics (alternative metrics).
    DOI:  https://doi.org/10.1016/j.arthro.2023.03.015
  5. Nature. 2023 Jun 06.
      
    Keywords:  Careers; Funding; Government; Publishing
    DOI:  https://doi.org/10.1038/d41586-023-01729-z
  6. Nature. 2023 Jun 07.
      
    Keywords:  Careers; Funding; Research management
    DOI:  https://doi.org/10.1038/d41586-023-01881-6
  7. J Clin Epidemiol. 2023 Jun 05. pii: S0895-4356(23)00137-3. [Epub ahead of print]
      Traditional peer-review of clinical trials happens too late, after the trials are already done. However, lack of methodological rigor and the presence of many biases can be detected and remedied in advance. Here, we examine several options for review and improvement of trials before their conduct: protocol review by peers, sponsors, regulatory authorities, and institutional ethical committees; registration in registry sites; deposition of the protocol and/or the statistical analysis plan in a public repository; peer-review and publication of the protocol and/or the statistical analysis plan in a journal; and Registered Reports. Some practices are considered standard (e.g. registration in a trial registry), while others are still uncommon but are becoming more frequent (e.g. publication of full trial protocols and statistical analysis plans). Ongoing challenges hinder a large-scale implementation of some promising practices such as Registered Reports. Innovative ideas are necessary not only to advance peer-review efficiency and rigor in clinical trials but also to lower the cumulative burden on peer-reviewers. We make several suggestions to enhance pre-conduct peer-review. Making all steps of the research process public and open may reverse siloed environments. Pre-conduct peer-review may be improved by routinely making publicly available all protocols that have gone through review by institutional review boards and regulatory agencies.
    Keywords:  Clinical trials; Peer-review; methodology; preregistration; protocol; registered reports
    DOI:  https://doi.org/10.1016/j.jclinepi.2023.05.024
  8. Am J Physiol Regul Integr Comp Physiol. 2023 Jun 05.
      In Part 1 of this Perspective, I share my thoughts on several basic principles of scientific peer review for early career-stage investigators. I begin by defining scientific peer review and its primary goals, and briefly discuss the historical development of peer review. I then describe the reputed benefits of the process for science and society. Next, I characterize the "2-stage" structure of peer review, as well as the most prevalent evaluation formats used for determining scientific merit of peer reviewed documents, including grant applications and manuscripts. I then discuss the primary responsibilities and core values of scientific peer review and offer several general tips for how to be an effective scientific peer reviewer. I next share commonly voiced concerns about the peer review process and oft-cited suggestions for improving the system. I finish the commentary by emphasizing numerous benefits of having a sound working knowledge of peer review for enhancing research career development and describe various opportunities for obtaining experience in peer review. This discussion of general issues is intended to lay a proper foundation upon which to address specific aspects of peer review of manuscripts in Part 2 and grant applications in Part 3 of the Perspective.
    Keywords:  Peer review; career development; professional skills
    DOI:  https://doi.org/10.1152/ajpregu.00062.2023
  9. JAMA Netw Open. 2023 Jun 01. 6(6): e2317651
      Importance: Numerous studies have shown that adherence to reporting guidelines is suboptimal.
    Objective: To evaluate whether asking peer reviewers to check if specific reporting guideline items were adequately reported would improve adherence to reporting guidelines in published articles.
    Design, Setting, and Participants: Two parallel-group, superiority randomized trials were performed using manuscripts submitted to 7 biomedical journals (5 from the BMJ Publishing Group and 2 from the Public Library of Science) as the unit of randomization, with peer reviewers allocated to the intervention or control group.
    Interventions: The first trial (CONSORT-PR) focused on manuscripts that presented randomized clinical trial (RCT) results and reported following the Consolidated Standards of Reporting Trials (CONSORT) guideline, and the second trial (SPIRIT-PR) focused on manuscripts that presented RCT protocols and reported following the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) guideline. The CONSORT-PR trial included manuscripts that described RCT primary results (submitted July 2019 to July 2021). The SPIRIT-PR trial included manuscripts that contained RCT protocols (submitted June 2020 to May 2021). Manuscripts in both trials were randomized (1:1) to the intervention or control group; the control group received usual journal practice. In the intervention group of both trials, peer reviewers received an email from the journal that asked them to check whether the 10 most important and poorly reported CONSORT (for CONSORT-PR) or SPIRIT (for SPIRIT-PR) items were adequately reported in the manuscript. Peer reviewers and authors were not informed of the purpose of the study, and outcome assessors were blinded.
    Main Outcomes and Measures: The difference between the intervention and control groups in the mean proportion of the 10 CONSORT or SPIRIT items that were adequately reported in published articles.
    Results: In the CONSORT-PR trial, 510 manuscripts were randomized. Of those, 243 were published (122 in the intervention group and 121 in the control group). A mean proportion of 69.3% (95% CI, 66.0%-72.7%) of the 10 CONSORT items were adequately reported in the intervention group and 66.6% (95% CI, 62.5%-70.7%) in the control group (mean difference, 2.7%; 95% CI, -2.6% to 8.0%). In the SPIRIT-PR trial, of the 244 randomized manuscripts, 178 were published (90 in the intervention group and 88 in the control group). A mean proportion of 46.1% (95% CI, 41.8%-50.4%) of the 10 SPIRIT items were adequately reported in the intervention group and 45.6% (95% CI, 41.7% to 49.4%) in the control group (mean difference, 0.5%; 95% CI, -5.2% to 6.3%).
    Conclusions and Relevance: These 2 randomized trials found that it was not useful to implement the tested intervention to increase reporting completeness in published articles. Other interventions should be assessed and considered in the future.
    Trial Registration: ClinicalTrials.gov Identifiers: NCT05820971 (CONSORT-PR) and NCT05820984 (SPIRIT-PR).
    DOI:  https://doi.org/10.1001/jamanetworkopen.2023.17651
  10. Radiol Med. 2023 Jun 07.
      
    Keywords:  Accountability; Bias; Indexing; Open peer review; Open science
    DOI:  https://doi.org/10.1007/s11547-023-01656-z
  11. PLoS One. 2023 ;18(6): e0286908
      OBJECTIVES: To assess the extent to which peer reviewers and journal editors address study funding and authors' conflicts of interest (COI). We also aimed to assess the extent to which peer reviewers and journal editors reported and commented on their own or each other's COI.
    STUDY DESIGN AND METHODS: We conducted a systematic survey of original studies published in open access peer-reviewed journals that publish their peer review reports. Using REDCap, we collected data in duplicate and independently from journals' websites and articles' peer review reports.
    RESULTS: We included a sample of original studies (N = 144) and a second sample of randomized clinical trials (RCTs; N = 115). In both samples, and for the majority of studies, reviewers reported an absence of COI (70% and 66%), while substantial percentages of reviewers did not report on COI (28% and 30%) and only small percentages reported any COI (2% and 4%). In both samples, none of the editors whose names were publicly posted reported on COI. The percentages of peer reviewers commenting on the study funding, authors' COI, editors' COI, or their own COI ranged between 0% and 2% in either of the two samples. In the two samples, 25% and 7% of editors, respectively, commented on study funding, while none commented on authors' COI, peer reviewers' COI, or their own COI. The percentages of authors commenting in their response letters on the study funding, peer reviewers' COI, editors' COI, or their own COI ranged between 0% and 3% in either of the two samples.
    CONCLUSION: The percentages of peer reviewers and journal editors who addressed study funding and authors' COI were extremely low. In addition, peer reviewers and journal editors rarely reported their own COI or commented on their own or each other's COI.
    DOI:  https://doi.org/10.1371/journal.pone.0286908
  12. Front Psychol. 2023 ;14 1120938
      Psychology aims to capture the diversity of our human experience, yet racial inequity ensures only specific experiences are studied, peer-reviewed, and eventually published. Despite recent publications on racial bias in research topics, study samples, academic teams, and publication trends, bias in the peer review process remains largely unexamined. Drawing on compelling case study examples from APA and other leading international journals, this article proposes key mechanisms underlying racial bias and censorship in the editorial and peer review process, including bias in reviewer selection, devaluing of racialized expertise, censorship of critical perspectives, minimal consideration of harm to racialized people, and the publication of unscientific and racist studies. The field of psychology needs more diverse researchers, perspectives, and topics to reach its full potential and meet the mental health needs of communities of colour. Several recommendations are offered to ensure the APA can centre racial equity throughout the editorial and review process.
    Keywords:  bias; censorship; peer review; publication; racism
    DOI:  https://doi.org/10.3389/fpsyg.2023.1120938
  13. Am J Obstet Gynecol. 2023 Jun 06. pii: S0002-9378(23)00372-1. [Epub ahead of print]
      
    Keywords:  artificial intelligence; chatbot; paper significance; paper writing; reviewer
    DOI:  https://doi.org/10.1016/j.ajog.2023.06.001
  14. Korean J Radiol. 2023 06;24(6): 599
      
    Keywords:  Authorship; ChatGPT; Policy
    DOI:  https://doi.org/10.3348/kjr.2023.0383
  15. Trends Ecol Evol. 2023 Jun 01. pii: S0169-5347(23)00130-1. [Epub ahead of print]
      Scientific writing can prove challenging, particularly for those who are non-native English speakers writing in English. Here, we explore the potential of advanced artificial intelligence (AI) tools, guided by principles of second-language acquisition, to help scientists improve their scientific writing skills in numerous contexts.
    DOI:  https://doi.org/10.1016/j.tree.2023.05.007
  16. Nature. 2023 Jun;618(7964): 214
      
    Keywords:  Authorship; Education; Machine learning; Publishing
    DOI:  https://doi.org/10.1038/d41586-023-01546-4
  17. Ann Biomed Eng. 2023 Jun 07.
      Arguably, ChatGPT jeopardizes the integrity and validity of academic publications instead of ethically facilitating them. ChatGPT can apparently fulfill a portion of one of the four authorship criteria set by the International Committee of Medical Journal Editors (ICMJE), i.e., "drafting." However, the ICMJE authorship criteria must all be met collectively, not singly or partially. Many published manuscripts or preprints have credited ChatGPT by including it in the author byline, and the academic publishing enterprise seems to be unguided on how to handle such manuscripts. Interestingly, PLoS Digital Health removed ChatGPT from a paper that had initially listed ChatGPT in the author byline of its preprint version. Revised publishing policies are thus urgently required to guide a consistent stance regarding ChatGPT or similar artificial content generators. Publishing policies must be harmonized among publishers, preprint servers (https://asapbio.org/preprint-servers), universities, and research institutions worldwide and across different disciplines. Ideally, any declared contribution of ChatGPT to the writing of a scientific article should be immediately recognized as publishing misconduct and the article retracted. Meanwhile, all parties involved in scientific reporting and publishing must be educated about how ChatGPT fails to meet the essential authorship criteria, so that no author submits a manuscript with ChatGPT credited as a "co-author." Using ChatGPT to write laboratory reports or short summaries of experiments may be acceptable, but not for academic publishing or formal scientific reporting.
    Keywords:  Artificial intelligence; Authorship; ChatGPT; International Committee of Medical Journal Editors; Scientific misconduct
    DOI:  https://doi.org/10.1007/s10439-023-03260-8
  18. Nature. 2023 Jun;618(7964): 238
      
    Keywords:  Communication; Culture; Publishing
    DOI:  https://doi.org/10.1038/d41586-023-01852-x
  19. Ann Thorac Surg. 2023 Jun 02. pii: S0003-4975(23)00573-8. [Epub ahead of print]
      BACKGROUND: Amongst academic surgery publications, self-reporting of conflicts of interest (COI) has often proven to be inaccurate. Here, we review the accuracy of COI disclosures among studies related to the use of robotic technology in cardiothoracic surgery and evaluate factors associated with increased discrepancies.
    METHODS: A literature search identified robotic surgery-related studies with at least one American author, published between January 2015 and December 2020 in three major American cardiothoracic surgery journals (Journal of Thoracic and Cardiovascular Surgery, Annals of Thoracic Surgery, and Annals of Cardiothoracic Surgery). Industry payments from Intuitive Surgical™ (Intuitive™) were collected using the Centers for Medicare & Medicaid Services Open Payments database. COI discrepancies were identified by comparing author declaration statements with payments found for the year of publication and the year prior (a 24-month period).
    RESULTS: A total of 144 studies (764 authors) were identified. One hundred twelve studies (78%) had at least one author receive payments from Intuitive™. Ninety-eight studies (68%) had at least one author receive an undeclared payment from Intuitive™. Authors who accurately disclosed payments received significantly higher median payments compared to authors who did not ($16,511 [IQR: $6,389-$159,035] vs $1,762 [IQR: $338-$7,500], p<0.0001). Last authors were significantly more likely to have a COI discrepancy compared to middle and first authors (p = 0.018; p = 0.0015).
    CONCLUSIONS: The majority of studies investigating the use of robotic technology in cardiothoracic surgery did not accurately declare COI with Intuitive™. This study highlights the need for improved accuracy in the reporting of industry sponsorship by publishing authors.
    DOI:  https://doi.org/10.1016/j.athoracsur.2023.04.047
  20. J Surg Res. 2023 Jun 06. pii: S0022-4804(23)00169-5. [Epub ahead of print]
      INTRODUCTION: Open access publishing has exhibited rapid growth in recent years. However, there is uncertainty surrounding the quality of open access journals and their ability to reach target audiences. This study reviews and characterizes open access surgical journals.
    MATERIALS AND METHODS: The Directory of Open Access Journals was used to search for open access surgical journals. PubMed indexing status, impact factor, article processing charge (APC), initial year of open access publishing, average weeks from manuscript submission to publication, publisher, and peer-review processes were evaluated.
    RESULTS: Ninety-two open access surgical journals were identified. Most (n = 49, 53.3%) were indexed in PubMed. Journals established >10 y were more likely to be indexed in PubMed than journals established <5 y (28 of 41 [68.3%] versus 4 of 20 [20%], P < 0.001). Forty-four journals (47.8%) used a double-blind review method. Forty-nine journals (53.2%) received an impact factor for 2021, ranging from <0.1 to 10.2 (median 1.4). The median APC was US$362 [interquartile range, US$0-$1802]. Thirty-five journals (38%) did not charge a processing fee. There was a significant positive correlation between APC and impact factor (r = 0.61, P < 0.001). If accepted, the median time from manuscript submission to publication was 12 wk.
    CONCLUSIONS: Open access surgical journals are largely indexed in PubMed, have transparent review processes, employ variable APCs (including no publication fees), and proceed efficiently from submission to publication. These results should increase readers' confidence in the quality of surgical literature published in open access journals.
    Keywords:  Abstracting and indexing; Open access publishing; Peer review; Publishing
    DOI:  https://doi.org/10.1016/j.jss.2023.04.008
  21. BMC Infect Dis. 2023 Jun 08. 23(1): 383
      Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although these issues are extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of them and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy. A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses and those used to ultimately judge their work. Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application, and we emphasize that their endorsement does not substitute for in-depth methodological training. By highlighting best practices and their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.
    Keywords:  Certainty of evidence; Critical appraisal; Methodological quality; Risk of bias; Systematic review
    DOI:  https://doi.org/10.1186/s12879-023-08304-x
  22. Public Underst Sci. 2023 Jun 05. 9636625231176377
      Lay readers' trust in scientific texts can be shaped by perceived text easiness and scientificness. The two effects seem vital in a time of rapid science information sharing, yet have so far only been examined separately. A preregistered online study was conducted to assess them jointly, to probe for author and text trustworthiness overlap, and to investigate interindividual influences on the effects. N = 1467 lay readers read four short research summaries, with easiness and scientificness (high vs low) being experimentally varied. A more scientific writing style led to higher perceived author and text trustworthiness. Higher personal justification belief, lower justification by multiple-sources belief, and lower need for cognitive closure attenuated the influence of scientificness on trustworthiness. However, text easiness showed no influence on trustworthiness and no interaction with text scientificness. Implications for future studies and suggestions for enhancing the perceived trustworthiness of research summaries are discussed.
    Keywords:  easiness effect; epistemic justification beliefs; epistemic trust; need for cognitive closure; research summaries; science communication; scientificness effect
    DOI:  https://doi.org/10.1177/09636625231176377
  23. Sci Data. 2023 Jun 07. 10(1): 366
      This paper introduces CORE, a widely used scholarly service, which provides access to the world's largest collection of open access research publications, acquired from a global network of repositories and journals. CORE was created with the goal of enabling text and data mining of scientific literature and thus supporting scientific discovery, but it is now used in a wide range of use cases within higher education, industry, and not-for-profit organisations, as well as by the general public. Through the provided services, CORE powers innovative use cases, such as plagiarism detection, in market-leading third-party organisations. CORE has played a pivotal role in the global move towards universal open access by making scientific knowledge more easily and freely discoverable. In this paper, we describe CORE's continuously growing dataset and the motivation behind its creation, present the challenges associated with systematically gathering research papers from thousands of data providers worldwide at scale, and introduce the novel solutions that were developed to overcome these challenges. The paper then provides an in-depth discussion of the services and tools built on top of the aggregated data and finally examines several use cases that have leveraged the CORE dataset and services.
    DOI:  https://doi.org/10.1038/s41597-023-02208-w
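      Editor's note: for readers who want to experiment with the CORE service described in entry 23, the sketch below shows one way such an aggregator might be queried from Python. It is a minimal illustration under stated assumptions, not CORE's documented client: it assumes the v3 search endpoint (https://api.core.ac.uk/v3/search/works), a registered API key, and response fields named "title", "doi", and "downloadUrl"; all of these should be verified against CORE's current API documentation before use.

      import requests

      API_KEY = "YOUR_CORE_API_KEY"  # hypothetical placeholder; a real key must be requested from core.ac.uk

      def search_core(query, limit=5):
          """Return basic metadata for open access works matching `query` via the CORE aggregator."""
          response = requests.get(
              "https://api.core.ac.uk/v3/search/works",  # assumed v3 search endpoint
              params={"q": query, "limit": limit},
              headers={"Authorization": f"Bearer {API_KEY}"},
              timeout=30,
          )
          response.raise_for_status()
          # Keep only a few commonly used fields; adjust to the actual response schema.
          return [
              {"title": w.get("title"), "doi": w.get("doi"), "downloadUrl": w.get("downloadUrl")}
              for w in response.json().get("results", [])
          ]

      if __name__ == "__main__":
          for work in search_core("peer review AND conflicts of interest"):
              print(work["title"], "-", work["doi"])

      For larger-scale text and data mining of the kind the paper describes, the continuously growing CORE dataset mentioned in the abstract is likely a better starting point than repeated per-query API calls.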