bims-skolko Biomed News
on Scholarly communication
Issue of 2024‒01‒21
23 papers selected by
Thomas Krichel, Open Library Society



  1. Med Educ. 2024 Jan 17.
      INTRODUCTION: Much published research writing is dull and dry at best, impenetrable and off-putting at worst. This state of affairs both frustrates readers and impedes research uptake. Scientific conventions of objectivity and neutrality contribute to the problem, implying that 'good' research writing should have no discernible authorial 'voice'. Yet some research writers have a distinctive voice in their work that may contribute to their scholarly influence. In this study, we explore this notion of voice, examining what strong research writers aim for with their voice and what strategies they use.
    METHODS: Using a combination of purposive, snowball and theoretical sampling, we recruited 21 scholars working in health professions education or adjacent health research fields, representing varied career stages, research paradigms and geographical locations. We interviewed participants about their approaches to writing and asked each to provide one to three illustrative publications. Iterative data collection and analysis followed constructivist grounded theory principles. We analysed interview transcripts thematically and examined publications for evidence of the writers' described approaches.
    RESULTS: Participants shared goals of a voice that was clear and logical, and that engaged readers and held their attention. They accomplished these goals using approaches both conventional and unconventional. Conventional approaches included attention to coherence through signposting, symmetry and metacommentary. Unconventional approaches included using language that was evocative (metaphor, imagery), provocative (pointed critique), plainspoken ('non-academic' phrasing), playful (including humour) and lyrical (attending to cadence and sound). Unconventional elements were more prominent in non-standard genres (e.g. commentaries), but also appeared in empiric papers.
    DISCUSSION: What readers interpret as 'voice' reflects strategic use of a repertoire of writing techniques. Conventional techniques, used expertly, can make for compelling reading, but strong writers also draw on unconventional strategies. A broadened writing repertoire might assist health professions education research writers in effectively communicating their work.
    DOI:  https://doi.org/10.1111/medu.15298
  2. Nature. 2024 Jan;625(7995): 450
      
    Keywords:  Institutions; Policy; Publishing
    DOI:  https://doi.org/10.1038/d41586-024-00116-6
  3. Science. 2024 Jan 19. 383(6680): 252-255
    Retraction Watch
      In the latest twist of the publishing arms race, firms churning out fake papers have taken to bribing journal editors.
    DOI:  https://doi.org/10.1126/science.ado0309
  4. J Dent. 2024 Jan 12. pii: S0300-5712(24)00010-1. [Epub ahead of print] 104840
      OBJECTIVES: To assess whether ChatGPT can help to identify predatory biomedical and dental journals, analyze the content of its responses and compare the frequency of positive and negative indicators provided by ChatGPT concerning predatory and legitimate journals.
    METHODS: Four hundred predatory and legitimate biomedical and dental journals were selected from four sources: Beall's list, unsolicited emails, the Web of Science (WOS) journal list and the Directory of Open Access Journals (DOAJ). ChatGPT was asked to determine journal legitimacy. Journals were classified as legitimate or predatory. Pearson's Chi-squared test and logistic regression were conducted. Two machine learning algorithms determined the criteria most influential on the correct classification of journals.
    RESULTS: The data were categorized under 10 criteria with the most frequently coded criteria being the transparency of processes and policies. ChatGPT correctly classified predatory and legitimate journals in 92.5% and 71% of the sample, respectively. The accuracy of ChatGPT responses was 0.82. ChatGPT also demonstrated a high level of sensitivity (0.93). Additionally, the model exhibited a specificity of 0.71, accurately identifying true negatives. A highly significant association between ChatGPT verdicts and the classification based on known sources was observed (P <0.001). ChatGPT was 30.2 times more likely to correctly classify a predatory journal (95% confidence interval: 16.9-57.43, p-value: <0.001).
    CONCLUSIONS: ChatGPT distinguished predatory from legitimate journals with a high level of accuracy. While some false positive (29%) and false negative (7.5%) results were observed, it may be reasonable to harness ChatGPT to assist with the identification of predatory journals.
    CLINICAL SIGNIFICANCE STATEMENT: ChatGPT may effectively distinguish between predatory and legitimate journals, correctly classifying 92.5% of predatory and 71% of legitimate journals. The potential utility of large language models in exposing predatory publications is worthy of further consideration.
    Keywords:  Editorial policies; ethics in publication; medical ethics; open access publishing; scientific publishing; transparency
    DOI:  https://doi.org/10.1016/j.jdent.2024.104840
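The metrics reported in entry 4 can be sanity-checked with a few lines of arithmetic. A minimal sketch, assuming an even 200/200 split between predatory and legitimate journals (the abstract states 400 journals in total but not the exact split):

```python
# Sanity check of the reported metrics in entry 4, ASSUMING an even
# 200/200 split of predatory vs. legitimate journals (the abstract
# gives 400 journals total but does not state the split).
n_predatory = 200   # assumed
n_legitimate = 200  # assumed

true_positives = round(0.925 * n_predatory)   # predatory journals correctly flagged
true_negatives = round(0.71 * n_legitimate)   # legitimate journals correctly passed

sensitivity = true_positives / n_predatory
specificity = true_negatives / n_legitimate
accuracy = (true_positives + true_negatives) / (n_predatory + n_legitimate)

print(f"sensitivity={sensitivity}, specificity={specificity}, accuracy={accuracy}")
```

Under that assumed split, accuracy works out to (185 + 142) / 400 = 0.8175, consistent with the 0.82 reported in the abstract.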
  5. J Pharm Policy Pract. 2024;17(1): 2303759
      Generative AI can be a powerful research tool, but researchers must employ it ethically and transparently. This commentary addresses how the editors of pharmacy practice journals can identify manuscripts generated by generative AI and AI-assisted technologies. Editors and reviewers must stay well-informed about developments in AI technologies to effectively recognise AI-written papers. Editors should safeguard the reliability of journal publishing and sustain industry standards for pharmacy practice by implementing the crucial strategies outlined in this editorial. Although obstacles, including ignorance, time constraints, and protean AI strategies, might hinder detection efforts, several facilitators can help overcome them. Pharmacy practice journal editors and reviewers would benefit from educational programmes, collaborations with AI experts, and sophisticated plagiarism-detection techniques geared toward accurately identifying AI-generated text. Academics and practitioners can further uphold the integrity of published research through transparent reporting and ethical standards. Pharmacy practice journal staff can sustain academic rigour and guarantee the validity of scholarly work by recognising and addressing the relevant barriers and utilising the proper enablers. Navigating the changing world of AI-generated content and preserving standards of excellence in pharmaceutical research and practice requires a proactive strategy of constant learning and community participation.
    Keywords:  AI-assisted technologies; Artificial intelligence; pharmacy practice; detection; manuscript
    DOI:  https://doi.org/10.1080/20523211.2024.2303759
  6. Curr Osteoporos Rep. 2024 Jan 13.
      PURPOSE OF REVIEW: Three review articles have been written that discuss the roles of the central and peripheral nervous systems in fracture healing. While content among the articles is overlapping, there is a key difference between them: the use of artificial intelligence (AI). In one paper, the first draft was written solely by humans. In the second paper, the first draft was written solely by AI using ChatGPT 4.0 (AI-only or AIO). In the third paper, the first draft was written using ChatGPT 4.0 but the literature references were supplied from the human-written paper (AI-assisted or AIA). This project was done to evaluate the capacity of AI to conduct scientific writing. Importantly, all manuscripts were fact checked and extensively edited by all co-authors, rendering the final manuscript drafts significantly different from the first drafts.
    RECENT FINDINGS: Unsurprisingly, the use of AI decreased the time spent to write a review. The two AI-written reviews took less time to write than the human-written paper; however, the changes and editing required in all three manuscripts were extensive. The human-written paper was edited the most. On the other hand, the AI-only paper was the most inaccurate, with inappropriate reference usage, and the AI-assisted paper had the greatest incidence of plagiarism. These findings show that each style of writing presents its own unique set of challenges and advantages. While AI can theoretically write scientific reviews, the extent of subsequent editing required, the inaccuracy of its claims, and its plagiarism are all factors to be considered, and a primary reason why it may be several years before AI presents a viable alternative to traditional scientific writing.
    Keywords:  AI; Artificial intelligence; ChatGPT; Fracture healing; Neural regulation; Scientific writing
    DOI:  https://doi.org/10.1007/s11914-023-00854-y
  7. Curr Osteoporos Rep. 2024 Jan 16.
      PURPOSE OF REVIEW: With the recent explosion in the use of artificial intelligence (AI), and specifically ChatGPT, we sought to determine whether ChatGPT could be used to assist in writing credible, peer-reviewed, scientific review articles. We also sought to assess, in a scientific study, the advantages and limitations of using ChatGPT for this purpose. To accomplish this, 3 topics of importance in musculoskeletal research were selected: (1) the intersection of Alzheimer's disease and bone; (2) the neural regulation of fracture healing; and (3) COVID-19 and musculoskeletal health. For each of these topics, 3 approaches to write manuscript drafts were undertaken: (1) human only; (2) ChatGPT only (AI-only); and (3) a combination of #1 and #2 (AI-assisted). Articles were extensively fact checked and edited to ensure scientific quality, resulting in final manuscripts that were significantly different from the original drafts. Numerous parameters were measured throughout the process to quantify the advantages and disadvantages of each approach.
    RECENT FINDINGS: Overall, use of AI decreased the time spent to write the review article, but required more extensive fact checking. With the AI-only approach, up to 70% of the references cited were found to be inaccurate. Interestingly, the AI-assisted approach resulted in the highest similarity indices, suggesting a higher likelihood of plagiarism. Finally, although the technology is rapidly changing, at the time of study ChatGPT 4.0 had a cutoff date of September 2021, rendering identification of recent articles impossible. Therefore, all literature published past the cutoff date was manually provided to ChatGPT, rendering approaches #2 and #3 identical for contemporary citations. As a result, for the COVID-19 and musculoskeletal health topic, approach #2 was abandoned midstream due to the extensive overlap with approach #3.
    The main objective of this study was to determine whether AI could be used in a scientifically appropriate manner to improve the scientific writing process. Indeed, AI reduced the time for writing but introduced significant inaccuracies. The latter means that AI cannot currently be used alone, but could be used with careful human oversight to assist in writing scientific review articles.
    Keywords:  Alzheimer's disease; Artificial intelligence (AI); COVID-19; ChatGPT; Fracture healing; Musculoskeletal system; Neural regulation; Osteoporosis; SARS-CoV-2; Scientific writing
    DOI:  https://doi.org/10.1007/s11914-023-00852-0
  8. JOR Spine. 2024 Mar;7(1): e1296
      ChatGPT and AI chatbots are revolutionizing several science fields, including medical writing. However, the inadequate use of such advantageous tools can raise numerous methodological and ethical issues.
    DOI:  https://doi.org/10.1002/jsp2.1296
  9. Curr Opin Allergy Clin Immunol. 2024 Jan 18.
      PURPOSE OF REVIEW: The aim of this review was to present recent articles indicating the need to implement statistical recommendations in the daily work of biomedical journals.
    RECENT FINDINGS: The most recent literature shows that the percentage of journals using specialized statistical review has remained unchanged over 20 years. The difficulty of finding statistical reviewers, the impractical way in which biostatistics is taught and the failure to implement published statistical recommendations all contribute to the fact that only a small percentage of accepted manuscripts contain correctly performed analyses. The statistical recommendations published for authors and editorial board members in recent years contain important advice, but more emphasis should be placed on their practical and rigorous implementation. Otherwise, we will continue to experience low reproducibility of research.
    SUMMARY: The standard of statistical reporting remains low. Recommendations related to the statistical review of submitted manuscripts should be followed more rigorously.
    DOI:  https://doi.org/10.1097/ACI.0000000000000965
  10. BMC Proc. 2024 Jan 16. 18(Suppl 1): 4
      While the structure and composition of the scientific manuscript is well known within scientific communities, insider knowledge such as the tricks of the trade and editorial viewpoints of scientific publishing are often less known to early-career research scientists. This article focuses on the key aspects of scientific publishing, including tips for success geared towards senior postdocs and junior faculty. It also highlights important considerations for getting manuscripts published in an efficient and successful manner.
    Keywords:  Career development; Early-career scientists; Journal submission; Scientific writing
    DOI:  https://doi.org/10.1186/s12919-023-00286-7
  11. Clin Nutr ESPEN. 2024 Feb;59: 307-311. pii: S2405-4577(23)02248-9. [Epub ahead of print]
      We provide comprehensive insights into the peer review process and guide potential reviewers through the steps of reviewing scientific manuscripts. We discuss essential aspects such as the reviewer's responsibility in responding to invitations and maintaining confidentiality throughout the process, the criteria for accepting or rejecting papers, and efficient review of resubmissions. We emphasize the importance of prioritizing the review responsibility within other commitments, communication using professional and courteous language, and adherence to deadlines. We also offer practical tips on evaluating the abstract, introduction, materials and methods, results, and discussion section and summarizing the critiques in the review report.
    Keywords:  ESPEN LLL; Publication skills
    DOI:  https://doi.org/10.1016/j.clnesp.2023.12.023
  12. Res Integr Peer Rev. 2024 Jan 19. 9(1): 1
      BACKGROUND: The objectives of this study were to analyze the impact of including librarians and information specialists as methodological peer-reviewers. We sought to determine if and how librarians' comments differed from subject peer-reviewers'; whether there were differences in the implementation of their recommendations; how this impacted editorial decision-making; and the perceived utility of librarian peer-review to librarians and authors.
    METHODS: We used a mixed methods approach, conducting a qualitative analysis of reviewer reports, author replies and editors' decisions on submissions to the International Journal of Health Governance. Our content analysis used 16 thematic categories, so that methodological and subject peer-reviewers' comments, decisions and rejection rates could be compared. Categories were based on the standard areas covered in peer-review (e.g., title, originality) as well as additional in-depth categories relating to the methodology (e.g., search strategy, reporting guidelines). We developed and used criteria to judge reviewers' perspectives and code their comments. We conducted two online multiple-choice surveys, which were qualitatively analyzed: one on methodological peer-reviewers' perceptions of peer-reviewing, the other on published authors' views of the suggested revisions.
    RESULTS: Methodological and subject peer-reviewers assessed 13 literature reviews submitted between September 2020 and March 2023. Fifty-five reviewer reports were collected: 25 from methodological peer-reviewers and 30 from subject peer-reviewers (mean: 4.2 reviews per manuscript). Methodological peer-reviewers made more comments on methodologies, and authors were more likely to implement their suggested changes (52 of 65 changes, vs. 51 of 82 for subject peer-reviewers); they were also more likely to reject submissions (seven vs. four times, respectively). Where recommendations to editors differed, journal editors were more likely to follow methodological peer-reviewers (nine vs. three times, respectively). The survey of published authors (87.5% response rate) revealed that four of seven found comments on methodologies helpful. Librarians' survey responses (66.5% response rate) revealed that those who conducted peer-reviews felt they improved the quality of publications.
    CONCLUSIONS: Librarians can enhance evidence synthesis publications by ensuring methodologies have been conducted and reported appropriately. Their recommendations helped authors revise submissions and facilitated editorial decision-making. Further research could determine whether sharing reviews with subject peer-reviewers and journal editors could help them better understand evidence synthesis methodologies.
    Keywords:  Evidence synthesis; Health science librarians; Information specialists; Methodological peer-reviewers; Segmented peer-review
    DOI:  https://doi.org/10.1186/s41073-023-00142-4
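The uptake gap reported in entry 12 is easier to see as rates. A minimal sketch computing implementation rates from the counts in the abstract (no assumptions beyond those counts):

```python
# Implementation rates of reviewer recommendations, computed from the
# counts reported in the abstract above (entry 12).
implemented_methodological, suggested_methodological = 52, 65
implemented_subject, suggested_subject = 51, 82

rate_methodological = implemented_methodological / suggested_methodological  # 0.80
rate_subject = implemented_subject / suggested_subject                       # ~0.62

print(f"methodological: {rate_methodological:.0%}, subject: {rate_subject:.0%}")
```

Authors implemented roughly 80% of methodological peer-reviewers' suggested changes versus about 62% of subject peer-reviewers', consistent with the study's conclusion that librarian reviews facilitated revision.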
  13. J Am Acad Dermatol. 2024 Jan 13. pii: S0190-9622(24)00071-9. [Epub ahead of print]
      
    Keywords:  authorship; ethics; histology; pathology
    DOI:  https://doi.org/10.1016/j.jaad.2024.01.012
  14. Radiol Artif Intell. 2024 Jan;6(1): e230337
      
    Keywords:  Data Sharing; Open Science
    DOI:  https://doi.org/10.1148/ryai.230337
  15. Open Heart. 2024 Jan 17;11(1): e002433. [Epub ahead of print]
      OBJECTIVE: Open science is a movement and set of practices for conducting research more transparently. Implementing open science will significantly improve public access and support equity. It also has the potential to foster innovation and reduce duplication through data and materials sharing. Here, we survey an international group of researchers publishing in cardiovascular journals regarding their perceptions and practices related to open science.
    METHODS: We identified the top 100 'Cardiology and Cardiovascular Medicine' subject category journals from the SCImago journal ranking platform, a publicly available portal that draws from Scopus. We then extracted the corresponding author's name and email from all articles published in these journals between 1 March 2021 and 1 March 2022. Participants were sent a purpose-built survey about open science. The survey contained primarily multiple-choice and scale-based questions, for which we report count data and percentages. For the few text-based responses we conducted thematic content analysis.
    RESULTS: 198 participants responded to our survey. When asked how familiar they were with open science, participants gave a mean response of 6.8 (N=197, SD=1.8) on a 9-point scale anchored at not at all familiar (1) and extremely familiar (9). When asked where they obtained open science training, most participants indicated that it was self-initiated on the job while conducting research (n=103, 52%), or that they had no formal training with respect to open science (n=72, 36%). More than half of the participants indicated they would benefit from practical support from their institution on how to perform open science practices (N=106, 54%). Participants acknowledged a diversity of barriers to each of the open science practices presented to them, and indicated that funding was the most essential incentive to adopt open science.
    CONCLUSIONS: It is clear that policy alone will not lead to the effective implementation of open science. This survey serves as a baseline for the cardiovascular research community's open science performance and perception and can be used to inform future interventions and monitoring.
    Keywords:  Education, Medical; Ethics, Medical; Research Design; Translational Medical Research
    DOI:  https://doi.org/10.1136/openhrt-2023-002433