bims-skolko Biomed News
on Scholarly communication
Issue of 2023‒10‒15
34 papers selected by
Thomas Krichel, Open Library Society



  1. Lancet. 2023 Oct 07. pii: S0140-6736(23)02191-8. [Epub ahead of print] 402(10409): 1220-1221
      
    DOI:  https://doi.org/10.1016/S0140-6736(23)02191-8
  2. BMC Res Notes. 2023 Oct 13. 16(1): 269
      OBJECTIVES: Publication is one of the crucial parameters in research, and many medical students' projects go unpublished for a variety of reasons. This cross-sectional study aimed to determine the obstacles that prevented medical students at a health science university from publishing their research from 2018 to 2021. First, an online survey was distributed to assess the obstacles to publication perceived by the medical students. Second, a total of 81 research projects were evaluated by scientific reviewers, and their final decision about publication was recorded.
    RESULTS: In total, 162 students filled out the survey. The students faced a variety of barriers, including an unsupportive research supervisor, a lack of time, an insufficient sample size, and many others. In the reviewers' evaluation, 70 of the 81 projects (86.4%) were recommended for publication after minor or major modifications, while 11 projects (13.6%) were rejected due to poor writing style, poor interpretation of results, and incorrect methodology.
    CONCLUSION: Articulating the barriers to undergraduate medical research publication is important in boosting publication rates and research experience of graduating medical students. Medical research educators and research supervisors should strongly consider creating a framework that tackles existing obstacles and any future matters.
    Keywords:  Barriers; Medical research; Publication; Students; Undergraduate
    DOI:  https://doi.org/10.1186/s13104-023-06542-5
  3. Phys Ther. 2023 Oct 10. pii: pzad133. [Epub ahead of print]
      OBJECTIVE: The goals of this study were to evaluate the extent to which physical therapy journals support open science research practices by adhering to the Transparency and Openness Promotion guidelines and to assess the relationship between journal scores and their respective journal impact factor.
    METHODS: Scimago, mapping studies, the National Library of Medicine, and journal author guidelines were searched to identify physical therapy journals for inclusion. Journals were graded on 10 standards (29 available total points) related to transparency with data, code, research materials, study design and analysis, preregistration of studies and statistical analyses, replication, and open science badges. The relationship between journal transparency and openness scores and their journal impact factor was determined.
    RESULTS: Thirty-five journals' author guidelines were assigned transparency and openness factor scores. The median score (interquartile range) across journals was 3.00 out of 29 (3.00) points (across all journals, scores ranged from 0 to 8). The 2 standards with the highest degree of implementation were design and analysis transparency (reporting guidelines) and study preregistration. No journals reported on code transparency, materials transparency, replication, or open science badges. Transparency and openness promotion factor scores were a significant predictor of journal impact factor scores.
    CONCLUSION: There is low implementation of the transparency and openness promotion standards by physical therapy journals. Transparency and openness promotion factor scores demonstrated predictive abilities for journal impact factor scores. Policies from journals must improve to make open science practices the standard in research. Journals are in an influential position to guide practices that can improve the rigor of publication which, ultimately, enhances the evidence-based information used by physical therapists.
    IMPACT: Transparent, open, and reproducible research will move the profession forward by improving the quality of research and increasing confidence in results for implementation in clinical care.
    Keywords:  Openness; Reproducibility of Results; Research; Science; Transparency
    DOI:  https://doi.org/10.1093/ptj/pzad133
  4. Sci Rep. 2023 Oct 09. 13(1): 17034
      There is concern that preprint articles will lead to an increase in the amount of scientifically invalid work available. The objectives of this study were to determine the proportion of prevention preprints published within 12 months, the consistency of the effect estimates and conclusions between preprint and published articles, and the reasons for the nonpublication of preprints. Of the 329 prevention preprints that met our eligibility criteria, almost half (48.9%) were published in a peer-reviewed journal within 12 months of being posted. While 16.8% of published preprints showed some change in the magnitude of the primary outcome effect estimate, 4.4% were classified as having a major change. The style or wording of the conclusion changed in 42.2%, and the content in 3.1%. Preprints on chemoprevention, with a cross-sectional design, and with public and noncommercial funding had the highest probabilities of publication. The main reasons for the nonpublication of preprints were journal rejection or lack of time. The reliability of preprint articles for evidence-based decision-making is questionable. Less than half of the preprint articles on prevention research are published in a peer-reviewed journal within 12 months, and significant changes in effect sizes and/or conclusions are still possible during the peer-review process.
    DOI:  https://doi.org/10.1038/s41598-023-44291-4
  5. mBio. 2023 Oct 09. e0194823
      Peer review is considered by many to be a fundamental component of scientific publishing. In this context, open peer review (OPR) has gained popularity in recent years as a tool to increase transparency, rigor, and inclusivity in research. But how does OPR really affect the review process? How does OPR impact specific groups, such as early career researchers? This editorial explores and discusses these aspects as well as some suggested actions for journals.
    DOI:  https://doi.org/10.1128/mbio.01948-23
  6. Fertil Steril. 2023 Oct 10. pii: S0015-0282(23)01920-9. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.fertnstert.2023.10.005
  7. Surgery. 2023 Nov. pii: S0039-6060(23)00663-3. [Epub ahead of print] 174(5): 1099-1101
      
    DOI:  https://doi.org/10.1016/j.surg.2023.09.026
  8. Br J Neurosurg. 2023 Oct. 37(5): 961-962
      
    DOI:  https://doi.org/10.1080/02688697.2023.2253615
  9. PLoS Biol. 2023 Oct 13. 21(10): e3002364
      Journal authorship practices have not sufficiently evolved to reflect the way research is now done. Improvements to support teams, collaboration, and open science are urgently needed.
    DOI:  https://doi.org/10.1371/journal.pbio.3002364
  10. Front Res Metr Anal. 2023; 8: 1215401
      The use of citation counts (among other bibliometrics) as a facet of academic research evaluation can influence citation behavior in scientific publications. One possible unintended consequence of this bibliometric is excessive self-referencing, where an author favors referencing their own publications over related publications from different research groups. Peer reviewers are often prompted by journals to determine whether references listed in the manuscript under review are unbiased, but there is no consensus on what is considered "excessive" self-referencing. Here, self-referencing rates are examined across multiple journals in the fields of biology, genetics, computational biology, medicine, pathology, and cell biology. Median self-referencing rates are between 8% and 13% across a range of journals within these disciplines. However, self-referencing rates vary as a function of total number of references, number of authors, author status/rank, author position, and total number of publications for each author. Importantly, these relationships exhibit interdisciplinary and journal-dependent differences that are not captured by examining broader fields in aggregate (e.g., biology, chemistry, or physics). These results provide useful statistical guidelines for authors, editors, reviewers, and journals when considering referencing practices for individual publications, and highlight the effects of additional factors influencing self-referencing rates.
    Keywords:  bibliometrics; citation analysis; research incentives; self-citation; self-references; self-referencing
    DOI:  https://doi.org/10.3389/frma.2023.1215401
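    The self-referencing rate examined in this paper can be illustrated with a minimal sketch: the fraction of a paper's references that share at least one author with the paper itself. The author names and reference list below are hypothetical, and the study's actual methodology is more involved than this.

    ```python
    def self_referencing_rate(paper_authors, references):
        """Fraction of references sharing at least one author with the paper.

        references: list of author-name lists, one per cited work.
        """
        authors = set(paper_authors)
        self_refs = sum(1 for ref_authors in references if authors & set(ref_authors))
        return self_refs / len(references) if references else 0.0

    # Hypothetical reference list for a paper by Smith J and Doe A.
    refs = [
        ["Smith J", "Lee K"],    # shares the paper's first author -> self-reference
        ["Garcia M"],
        ["Smith J"],             # self-reference again
        ["Chen L", "Patel R"],
    ]
    rate = self_referencing_rate(["Smith J", "Doe A"], refs)
    print(f"{rate:.0%}")  # 2 of 4 references are self-references -> 50%
    ```

    A rate like this would sit well above the 8-13% medians reported in the paper, which is the kind of comparison the authors suggest reviewers and editors could make.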
  11. Nature. 2023 Oct 10.
      
    Keywords:  Authorship; Ethics; Lab life; Peer review; Policy
    DOI:  https://doi.org/10.1038/d41586-023-03196-y
  12. Account Res. 2023 Oct 09.
      We propose a type of DOI-based manuscript, the author expression of concern (AEOC), allowing authors to formally publish their concerns about legitimate procedural problems associated with editors, reviewers, journals, or publishers. Managed by a neutral third-party arbitrator or moderator, AEOCs would be limited in size and subjected to fair but strict screening of presented evidence. When an AEOC is approved for publication by an arbitrator, the criticized party would also need to formally respond within a reasonable period, as a "letter to the author(s)", which is also screened by the same arbitrator. Expanding the range of publishing options for authors to include AEOCs would allow them to voice their legitimate concerns about a journal's procedures in a formalized format. Although implementation might be challenging at first, it could demonstrate the fairness of editorial policies and democratize the publication process: taking into account authors' legitimate procedural grievances and their right of expression, and elevating these to formal article status, would allow for a more balanced two-way system of accountability and openness. Author empowerment that matches editorial and publisher empowerment is essential for a journal to truly claim to be fair, just, and accountable.
    Keywords:  Accountability; arbitrator; authors’ rights; conflict and conflict resolution; e-mails
    DOI:  https://doi.org/10.1080/08989621.2023.2258625
  13. Lancet. 2023 Oct 07. pii: S0140-6736(23)02220-1. [Epub ahead of print] 402(10409): 1219
      
    DOI:  https://doi.org/10.1016/S0140-6736(23)02220-1
  14. Hum Reprod. 2023 Oct 13. pii: dead207. [Epub ahead of print]
      Artificial intelligence (AI)-driven language models have the potential to serve as an educational tool, facilitate clinical decision-making, and support research and academic writing. The benefits of their use are yet to be evaluated, and concerns have been raised regarding the accuracy, transparency, and ethical implications of using this AI technology in academic publishing. At the moment, Chat Generative Pre-trained Transformer (ChatGPT) is one of the most powerful and widely debated AI language models. Here, we discuss its feasibility to answer scientific questions, identify relevant literature, and assist writing in the field of human reproduction. Given the scarcity of data on this topic, we assessed the feasibility of ChatGPT in academic writing using data from six meta-analyses published in a leading journal of human reproduction. The text generated by ChatGPT was evaluated and compared to the original text by blinded reviewers. While ChatGPT can produce high-quality text and summarize information efficiently, its current ability to interpret data and answer scientific questions is limited, and it cannot be relied upon for a literature search or accurate source citation due to the potential spread of incomplete or false information. We advocate for open discussions within the reproductive medicine research community to explore the advantages and disadvantages of implementing this AI technology. Researchers and reviewers should be informed about AI language models, and we encourage authors to transparently disclose their use.
    Keywords:  ChatGPT; academic writing; artificial intelligence; language models; reproductive medicine
    DOI:  https://doi.org/10.1093/humrep/dead207
  15. Nature. 2023 Oct 13.
      
    Keywords:  Authorship; Ethics; Funding; Peer review
    DOI:  https://doi.org/10.1038/d41586-023-03238-5
  16. Med Leg J. 2023 Oct 06. 258172231184548
      Since its launch, ChatGPT, an artificial intelligence-powered language model tool, has generated significant attention in research writing. The use of ChatGPT in medical research can be a double-edged sword. ChatGPT can expedite the research writing process by assisting with hypothesis formulation, literature review, data analysis and manuscript writing. On the other hand, using ChatGPT raises concerns regarding the originality and authenticity of content, the precision and potential bias of the tool's output, and the potential legal issues associated with privacy, confidentiality and plagiarism. The article also calls for adherence to stringent citation guidelines and the development of regulations promoting the responsible application of AI. Despite the revolutionary capabilities of ChatGPT, the article highlights its inability to replicate human thought and the difficulties in maintaining the integrity and reliability of ChatGPT-enabled research, particularly in complex fields such as medicine and law. AI tools can be used as supplementary aids rather than primary sources of analysis in medical research writing.
    Keywords:  ChatGPT; Medical research; accuracy; artificial intelligence; ethical considerations; research misconduct
    DOI:  https://doi.org/10.1177/00258172231184548
  17. J Allergy Clin Immunol Pract. 2023 Oct 11. pii: S2213-2198(23)01126-1. [Epub ahead of print]
      BACKGROUND: Review articles play a critical role in informing medical decisions and identifying avenues for future research. With the introduction of artificial intelligence (AI), there has been a growing interest in the potential of this technology to transform the synthesis of medical literature. OpenAI's GPT-4 tool provides access to advanced AI that can quickly produce medical literature from simple prompts. The accuracy of the generated articles requires review, especially in subspecialty fields like Allergy/Immunology.
    OBJECTIVE: To critically appraise AI-synthesized allergy-focused mini-reviews.
    METHODS: We tasked the GPT-4 Chatbot (Open AI Inc, San Francisco, CA) with generating two 1000-word reviews on the topics of hereditary angioedema and eosinophilic esophagitis. Authors critically appraised these articles using the Joanna Briggs Institute (JBI) tool for text and opinion, and additionally evaluated domains of interest such as language, reference quality, and accuracy of the content.
    RESULTS: The language of the AI-generated mini-reviews was carefully articulated and logically focused on the topic of interest; however, reviewers of the AI-generated articles indicated that the content lacked depth, did not appear to be the result of an analytical process, missed critical information, and contained inaccurate information. Despite being instructed to use scientific references, the AI chatbot relied mainly on freely available resources and fabricated references.
    CONCLUSION: AI holds the potential to change the landscape of synthesizing medical literature; however, the inaccurate and fabricated information observed here calls for rigorous evaluation and validation of AI tools in generating medical literature, especially on subjects with limited resources.
    Keywords:  AI; Artificial intelligence; Chat-GPT; GPT-4; fabrication; medical literature; review articles
    DOI:  https://doi.org/10.1016/j.jaip.2023.10.010
  18. J Infect Dev Ctries. 2023 Sep 30. 17(9): 1292-1299
      INTRODUCTION: The emergence of artificial intelligence (AI) has presented several opportunities to ease human work. AI applications are available for almost every domain of life. A new technology, Chat Generative Pre-Trained Transformer (ChatGPT), was introduced by OpenAI in November 2022 and has become a topic of discussion across the world. ChatGPT-3 has brought many opportunities, as well as ethical and privacy considerations. ChatGPT is a large language model (LLM) trained on data available up to 2021. The use of AI and AI-assisted technologies in scientific writing is against research and publication ethics. Therefore, policies and guidelines need to be developed for the use of such tools in scientific writing. The main objective of the present study was to highlight how the use of AI and AI-assisted technologies, such as ChatGPT and other chatbots, in scientific writing and research can result in bias, the spread of inaccurate information, and plagiarism.
    METHODOLOGY: Experiments were designed to test the accuracy of ChatGPT when used in research and academic writing.
    RESULTS: The information provided by ChatGPT was inaccurate and may have far-reaching implications in the field of medical science and engineering. Critical thinking should be encouraged among researchers to raise awareness about the associated privacy and ethical risks.
    CONCLUSIONS: Regulations for ethical and privacy concerns related to the use of ChatGPT in academics and research need to be developed.
    Keywords:  ChatGPT; Open AI; artificial intelligence; chatbot; privacy concerns; publication ethics
    DOI:  https://doi.org/10.3855/jidc.18738
  19. Health Info Libr J. 2023 Oct 08.
      The artificial intelligence (AI) tool ChatGPT, which is based on a large language model (LLM), is gaining popularity in academic institutions, notably in the medical field. This article provides a brief overview of ChatGPT's capabilities for medical writing and its implications for academic integrity. It lists AI generative tools, describes their common uses in medical writing, and lists tools for detecting AI-generated text. It offers recommendations for policymakers, information professionals, and medical faculty on the constructive use of AI generative tools and related technology, and highlights the role of health sciences librarians and educators in discouraging students from submitting ChatGPT-generated text in their academic work.
    Keywords:  artificial intelligence (AI); librarians, health science; libraries, academic; plagiarism; students, medical
    DOI:  https://doi.org/10.1111/hir.12509
  20. Malays Fam Physician. 2023; 18: 58
      ChatGPT, an artificial intelligence (AI) language model based on the GPT-3.5 architecture, is revolutionising scientific writing and medical research. Researchers employ ChatGPT for diverse tasks, including automated literature reviews, structured-outline generation and drafting/editing assistance. The tool adapts language for varied audiences, aids in citation management, supports collaborative writing and peer review and facilitates table/figure creation. While it enhances efficiency, concerns arise regarding ethics, bias, accuracy and originality. Transparent data sourcing and validation are crucial, as ChatGPT complements human efforts but does not replace critical thinking. Accordingly, researchers must uphold integrity, ensuring that AI-assisted content aligns with research principles. Acknowledgement of AI use in manuscripts, as recommended by the International Committee of Medical Journal Editors, ensures accountability. ChatGPT's transformative potential lies in harmonising its capabilities with researchers' expertise, fostering a symbiotic relationship that advances scientific progress and ethical standards.
    Keywords:  Artificial intelligence (AI); ChatGPT; Medical research; Scientific writing
    DOI:  https://doi.org/10.51866/cm0006
  21. Nature. 2023 Oct;622(7982): 234-236
      
    Keywords:  Computer science; Machine learning; Peer review; Publishing
    DOI:  https://doi.org/10.1038/d41586-023-03144-w
  22. J Obstet Gynaecol Can. 2023 Oct 10. pii: S1701-2163(23)00581-9. [Epub ahead of print] 102236
      For various reasons, journals may convert from subscription-based (SB) to open-access (OA), commonly referred to as flipping. In 2022, Acta Obstetricia et Gynecologica Scandinavica (AOGS) flipped to OA. We performed a bibliometric analysis of authorship patterns of the publications in this journal during the flipping period. A total of 898 research articles were included. In the OA period, there were more publications by authors from China (7.2% vs. 3.3%), p=.001. Flipping to OA in a leading obstetrics and gynecology journal is associated with a change in authorship.
    Keywords:  Bibliometrics; citations; flipping; metrics; open access
    DOI:  https://doi.org/10.1016/j.jogc.2023.102236