bims-skolko Biomed News
on Scholarly communication
Issue of 2024-04-21
28 papers selected by
Thomas Krichel, Open Library Society



  1. Nature. 2024 Apr 16.
      
    Keywords:  Politics; Publishing; SARS-CoV-2
    DOI:  https://doi.org/10.1038/d41586-024-01129-x
  2. Ann Surg. 2024 Apr 19.
       OBJECTIVE: To evaluate the accuracy of self-reported conflicts of interest (COIs) for articles published in prominent minimally invasive and general surgical journals.
    BACKGROUND: Accurate reporting of industry relationships and COIs is crucial for unbiased assessment of a particular study. Despite the enactment of COI laws, such as the Physician Payments Sunshine Act in 2010, prior work suggests that 40-70% of self-reported COIs have discrepancies.
    METHODS: We utilized three public databases -- Open Payments (USA), Disclosure UK, and Disclosure Australia -- to assess the accuracy of COI disclosures among authors of 918 published articles from these respective countries. Seven journals were utilized to review the COIs for authors of manuscripts published in 2022 - JAMA Surgery, Annals of Surgery, British Journal of Surgery (BJS), Journal of American College of Surgeons (JACS), Surgical Endoscopy, Obesity Surgery, and Surgery for Obesity and Related Diseases (SOARD).
    RESULTS: Among the 6206 authors analyzed, 5675 belonged to countries of interest: USA (4282), UK (718), and Australia (213). Of these, 774 authors (12.5%) self-reported a conflict of interest in their papers. Overall, only 4055 researchers (69.1%) reported COIs accurately. Authors from the US had the lowest accuracy of COI reporting at 69%, as opposed to the UK (93%) and Australia (96%). Inaccurate COI reporting was most common among corresponding/senior authors (39%) and least common among first authors (18%). Most payments in excess of $50,000 made to authors by an industry sponsor were not disclosed appropriately.
    CONCLUSIONS: Our study shows that the inaccuracy of self-reported COIs in general surgery journals remains high at 31%. While our findings should encourage authors to overreport any possible COI, journals should consider verifying authors' COIs to facilitate more accurate reporting.
    DOI:  https://doi.org/10.1097/SLA.0000000000006303
  3. Trials. 2024 Apr 19. 25(1): 271
    BACKGROUND: Informativeness, in the context of clinical trials, defines whether a study's results definitively answer its research questions with meaningful next steps. Many clinical trials end uninformatively. Clinical trial protocols are required to go through regulatory and ethical reviews: domains that focus on specifics outside of trial design, biostatistics, and research methods. Private foundations and government funders rarely require focused scientific design reviews in these areas. There are no documented standards, processes, or even best practices for funders to perform scientific design reviews after their peer-review process and prior to a funding commitment.
    MAIN BODY: Considering the investment in and standardization of ethical and regulatory reviews, and the prevalence of studies never finishing or failing to provide definitive results, it may be that scientific reviews of trial designs with a focus on informativeness offer the best chance for improved outcomes and return-on-investment in clinical trials. A maturity model is a helpful tool for knowledge transfer to help grow capabilities in a new area or for those looking to perform a self-assessment in an existing area. Such a model is offered for scientific design reviews of clinical trial protocols. This maturity model includes 11 process areas and 5 maturity levels. Each of the 55 process area levels is populated with descriptions on a continuum toward an optimal state to improve trial protocols in the areas of risk of failure or uninformativeness.
    CONCLUSION: This tool allows for prescriptive guidance on next investments to improve attributes of post-funding reviews of trials, with a focus on informativeness. Traditional pre-funding peer review has limited capacity for trial design review, especially for detailed biostatistical and methodological review. Select non-industry funders have begun to explore or invest in post-funding review programs of grantee protocols, based on exemplars of such programs. Funders with a desire to meet fiduciary responsibilities and mission goals can use the described model to enhance efforts supporting trial participant commitment and faster cures.
    Keywords:  Clinical trial; Design review; Informativeness; Maturity model; Trial methods
    DOI:  https://doi.org/10.1186/s13063-024-08099-5
  4. Cureus. 2024 Mar;16(3): e56193
    In the ever-evolving landscape of biomedical research and publishing, the International Committee of Medical Journal Editors (ICMJE) recommendations serve as a critical framework for maintaining ethical standards. By providing a framework that adapts to technological advancements, the ICMJE recommendations actively shape responsible and transparent practices, ensuring the integrity of scientific inquiry and fostering global collaboration in medical publishing. This editorial delves into key aspects of the latest changes to the ICMJE recommendations, focusing on authorship, conflict of interest disclosure, data sharing and reproducibility, medical publishing and carbon emissions, the use of artificial intelligence, and the challenges posed by predatory journals within the realm of open access. It emphasizes the importance of the new recommendations, which represent a beacon of ethical guidance in biomedical research and publishing.
    Keywords:  authorship; conflicts of interest; data-sharing; icmje; international committee of medical journal editors; predatory journal
    DOI:  https://doi.org/10.7759/cureus.56193
  5. J Prof Nurs. 2024 Mar-Apr;51: 1-8. pii: S8755-7223(24)00003-6.
       BACKGROUND: Selecting a journal with an appropriate scope and breadth, well-respected by other scholars in the field, and widely indexed and accessible to readers is an integral part of publishing. Academic publishing has recently seen a significant shift away from traditional print publications and toward open access journals and online publications.
    OBJECTIVE: The aim of this study was to investigate academic nurse researchers' knowledge, experience, and attitudes regarding predatory journals.
    METHODS: A descriptive cross-sectional quantitative study was conducted using the Predatory Journals Questionnaire to collect data from academic nurse educators working at X and XX University.
    RESULTS: Almost two-thirds (68.6%) of participants had previous knowledge of the term "predatory journal." In addition, the majority of academic educators had previous experience with predatory journals: they had been asked to publish in one (84.3%) or to serve on its editorial board (24.3%), and participants most commonly received requests to submit an article to a predatory journal (52.9%) via email, mail, or phone. Academic nurse researchers held a moderate perspective (mean = 3.87 ± 1.06; mean % score = 71.71) toward predatory journals.
    CONCLUSION: Publishing in a predatory journal, whether done knowingly or unknowingly, can harm authors' reputations as academics, their capacity to submit to other journals, and the quality of their work. Our results show that many researchers still lacked a thorough understanding of the predatory journal publishing model, despite having heard of the phenomenon and having previously attended training; this gap demands further research.
    Keywords:  Academic; Attitude; Experience; Knowledge; Nurse; Predatory journals; Researchers
    DOI:  https://doi.org/10.1016/j.profnurs.2024.01.003
  6. J Assist Reprod Genet. 2024 Apr 15.
       PURPOSE: To evaluate the ability of ChatGPT-4 to generate a biomedical review article on fertility preservation.
    METHODS: ChatGPT-4 was prompted to create an outline for a review on fertility preservation in men and prepubertal boys. The outline provided by ChatGPT-4 was subsequently used to prompt ChatGPT-4 to write the different parts of the review and provide five references for each section. The different parts of the article and the references provided were combined to create a single scientific review that was evaluated by the authors, who are experts in fertility preservation. The experts assessed the article and the references for accuracy and checked for plagiarism using online tools. In addition, both experts independently scored the relevance, depth, and currentness of the ChatGPT-4's article using a scoring matrix ranging from 0 to 5 where higher scores indicate higher quality.
    RESULTS: ChatGPT-4 successfully generated a relevant scientific article with references. Among 27 statements needing citations, four were inaccurate. Of 25 references, 36% were accurate, 48% had correct titles but other errors, and 16% were completely fabricated. Plagiarism was minimal (mean = 3%). Experts rated the article's relevance highly (5/5) but gave lower scores for depth (2-3/5) and currentness (3/5).
    CONCLUSION: ChatGPT-4 can produce a scientific review on fertility preservation with minimal plagiarism. While precise in content, it showed factual and contextual inaccuracies and inconsistent reference reliability. These issues limit ChatGPT-4 as a sole tool for scientific writing but suggest its potential as an aid in the writing process.
    Keywords:  Academic writing; Artificial intelligence (AI); ChatGPT; Fertility; Natural language processing
    DOI:  https://doi.org/10.1007/s10815-024-03089-7
  7. Proc (Bayl Univ Med Cent). 2024 ;37(3): 459-464
    Background: The retraction of medical articles occurs periodically in most medical journals and can involve multiple article types. These retractions are beneficial if they remove flawed or fraudulent information from the medical literature. However, retractions may also decrease confidence in the medical literature and require significant amounts of editors' time.
    Methods: One publisher (Hindawi) announced that it would retract over 1200 articles. Given this, the PubMed database was searched to identify retracted publications on or related to COVID-19, and articles retracted by journals sponsored by Hindawi were then identified.
    Results: These journals retracted 25 articles and, in most cases, did not provide an exact explanation of the particular problem(s) resulting in the retraction. The time to retraction was 468.7 ± 109.8 days (median = 446 days). These articles had 9.3 ± 9.9 citations.
    Conclusion: Analysis of the titles and abstracts of the articles suggests that their removal from the medical literature would have had limited effects on near-term management decisions during the COVID-19 pandemic. Nevertheless, the retraction of medical articles creates uncertainty in medical care and science, and among the public, regarding the validity of medical research and related publications and the professionalism of the individuals submitting these articles.
    Keywords:  COVID-19; Hindawi; medical publication; retraction
    DOI:  https://doi.org/10.1080/08998280.2024.2313333
  8. Int J Gynecol Cancer. 2024 Apr 16. pii: ijgc-2023-005162. [Epub ahead of print]
       OBJECTIVE: To determine if reviewer experience impacts the ability to discriminate between human-written and ChatGPT-written abstracts.
    METHODS: Thirty reviewers (10 seniors, 10 juniors, and 10 residents) were asked to differentiate between 10 ChatGPT-written and 10 human-written (fabricated) abstracts. The 10 gynecologic oncology abstracts were fabricated by the authors; for each human-written abstract, a matching ChatGPT abstract was generated using the same title and the same fabricated results. A web-based questionnaire was used to gather demographic data and to record the reviewers' evaluations of the 20 abstracts. Comparative statistics and multivariable regression were used to identify factors associated with a higher correct identification rate.
    RESULTS: The 30 reviewers each evaluated 20 abstracts, giving a total of 600 abstract evaluations. The reviewers correctly identified 300/600 (50%) of the abstracts: 139/300 (46.3%) of the ChatGPT-generated abstracts and 161/300 (53.7%) of the human-written abstracts (p=0.07). Human-written abstracts had a higher rate of correct identification (median (IQR) 56.7% (49.2-64.1%) vs 45.0% (43.2-48.3%), p=0.023). Senior reviewers had a higher correct identification rate (60%) than junior reviewers and residents (45% each; p=0.043 and p=0.002, respectively). In a linear regression model including the experience level of the reviewers, familiarity with artificial intelligence (AI), and the country in which the majority of medical training was completed (English speaking vs non-English speaking), reviewer experience (β=10.2 (95% CI 1.8 to 18.7)) and familiarity with AI (β=7.78 (95% CI 0.6 to 15.0)) were independently associated with the correct identification rate (p=0.019 and p=0.035, respectively). In a correlation analysis, the number of publications by the reviewer was positively correlated with the correct identification rate (r(28)=0.61, p<0.001).
    CONCLUSION: A total of 46.3% of abstracts written by ChatGPT were detected by reviewers. The correct identification rate increased with reviewer and publication experience.
    Keywords:  Gynecologic Surgical Procedures
    DOI:  https://doi.org/10.1136/ijgc-2023-005162
  9. Arch Iran Med. 2024 Feb 01. 27(2): 110-112
    Those who participate in and contribute to academic publishing are affected by its evolution. Funding bodies, academic institutions, researchers and peer reviewers, junior scholars, freelance language editors, language-editing services, and journal editors must all enforce and uphold the ethical norms on which academic publishing is founded. Deviating from these norms challenges and threatens scholarly reputations, academic careers, and institutional standing; reduces publishers' true impact; squanders public funding; and erodes public trust in the academic enterprise. Rigorous review is paramount because peer-review norms guarantee that scientific findings are scrutinized before being publicized. Volunteer peer reviewers and guest journal editors devote an immense amount of unremunerated time to reviewing papers, voluntarily serving the scientific community and benefiting the publishers. Some mega-journals are motivated to mass-produce publications and attract funded projects instead of maintaining scientific rigor. The rapid development of mega-journals may diminish some traditional journals by outcompeting them in impact. Artificial intelligence (AI) tools and algorithms such as ChatGPT may be misused to contribute to the mass production of publications that have not been rigorously revised or peer reviewed. Maintaining norms that guarantee scientific rigor and academic integrity enables the academic community to overcome new challenges such as mega-journals and AI tools.
    Keywords:  Academic publishing; ChatGPT; Ethical norms; Mega-journals; Mega-publishers
    DOI:  https://doi.org/10.34172/aim.2024.17
  10. J Am Psychiatr Nurses Assoc. 2024 Apr 14. 10783903241245423
      
    DOI:  https://doi.org/10.1177/10783903241245423
  11. Science. 2024 Apr 19. 384(6693): 261
      Researchers may be using generative artificial intelligence to help write 1%-5% of manuscripts.
    DOI:  https://doi.org/10.1126/science.adp8901
  12. Behav Res Methods. 2024 Apr 16.
    Computer code plays a vital role in modern science, from the conception and design of experiments through to final data analyses. Open sharing of code has been widely discussed as advantageous to the scientific process: it allows experiments to be replicated more easily, helps with error detection, and reduces wasted effort and resources. In psychology, the code used to present stimuli is a fundamental component of many experiments. It is not known, however, to what degree researchers share this type of code. To estimate this, we conducted a survey of 400 psychology papers published between 2016 and 2021 that used the open-source tools Psychtoolbox and PsychoPy, identifying those that openly shared stimulus presentation code. For those that did, we established whether the code would run following download and appraised its usability in terms of style and documentation. Only 8.4% of papers shared stimulus code, compared to 17.9% sharing analysis code and 31.7% sharing data. Of shared code, 70% ran directly or after minor corrections. For code that did not run, the main cause was missing dependencies (66.7%). The usability of the code was moderate, with low levels of code annotation and minimal documentation provided. These results suggest that stimulus presentation code sharing lags behind other forms of code and data sharing, potentially because such code receives less emphasis in open-science discussions and journal policies. The results also highlight a need for improved documentation to maximize code utility.
    Keywords:  Experiment code; Open science; Reproducibility
    DOI:  https://doi.org/10.3758/s13428-024-02390-8
  13. Nature. 2024 Apr;628(8008): 476
      
    Keywords:  Peer review; Scientific community
    DOI:  https://doi.org/10.1038/d41586-024-01101-9
  14. Nature. 2024 Apr 17.
      
    Keywords:  Authorship; Publishing; Research data; Research management
    DOI:  https://doi.org/10.1038/d41586-024-01135-z
  15. Reprod Biomed Online. 2024 Mar 05. pii: S1472-6483(24)00125-1. [Epub ahead of print] 103936
    Research in medicine is an indispensable tool to advance knowledge and improve patient care. This is of particular importance in an emerging field like human reproduction, where treatment options evolve fast. The cornerstone of evidence-based knowledge, leading to evidence-based treatment decisions, is the randomized controlled trial, as it explores the benefits of new treatment approaches. Study design and performance are crucial; if carried out correctly, solid conclusions can be drawn and implemented in daily clinical routine. New findings are disseminated throughout the scientific community as publications in scientific journals, and the importance of a journal is reflected in part by its impact factor. The peer-review process before publication is fundamental in preventing flaws in study design. Readers of journals with a high impact factor therefore usually rely on a thorough peer-review process and might not question the published data. However, even papers published in high-impact journals might not be free of flaws, so the aim of this paper is to encourage readers to be aware of this and to read scientific papers critically, as 'the devil lies in the details'.
    Keywords:  Critical assessment; High-impact journals; Randomized controlled trials; Research
    DOI:  https://doi.org/10.1016/j.rbmo.2024.103936
  16. Infect Dis Now. 2024 Apr 13. pii: S2666-9919(24)00064-2. [Epub ahead of print]54(4): 104909
    INTRODUCTION: While Open Access (OA) journals provide free access to articles, they entail high article processing charges (APCs), limiting opportunities for young researchers and those from low- and middle-income countries to publish OA.
    METHODS: Cross-sectional study, evaluating APC and academic impact of full OA (FOA) journals in infectious diseases (ID) and clinical microbiology (CM) compared to hybrid journals. Data were collected from Journal Citation Reports and journals' websites.
    RESULTS: Among 255 journals, the median APC was $2850 (interquartile range [IQR] $1325-3654). The median APC for 120 FOA journals was significantly lower than for 119 hybrid journals ($2000, IQR $648-2767 versus $3550, IQR $2948-4120, p < 0.001). FOA journals had lower citation numbers and impact metrics than hybrid journals.
    CONCLUSION: While FOA ID/CM journals have lower APCs, they also have lower academic impact than hybrid journals. These findings highlight the need for reforms in the publication process to achieve equitable data dissemination.
    Keywords:  Article production fee; Infectious diseases; LMIC; Open access; Publication
    DOI:  https://doi.org/10.1016/j.idnow.2024.104909
  17. Acad Med. 2024 Apr 15.
    PURPOSE: A preprint is a version of a research manuscript posted to a preprint server prior to peer review. Preprints enable authors to share research quickly and openly, afford opportunities for rapid feedback, and enable immediate listing of research on grant and promotion applications. In medical education, most journals welcome preprints, which suggests preprints play a role in the field's discourse. Yet, little is known about medical education preprints, including author characteristics, preprint use, and ultimate publication status. This study provides an overview of preprints in medical education to better understand their role in the field's discourse.
    METHOD: The authors queried medRxiv, a preprint repository, to identify preprints categorized as "medical education" and downloaded related metadata. CrossRef was queried to gather information on preprints later published in journals. Data were analyzed using descriptive statistics.
    RESULTS: Between 2019 and 2022, 204 preprints were classified in medRxiv as "medical education," with most deposited in 2021 (n = 76, 37.3%). On average, preprint full-texts were downloaded 1,875.2 times, and all were promoted on social media. Preprints were authored, on average, by 5.9 authors. Corresponding authors were based in 41 countries, with 45.6% in the United States, United Kingdom, and Canada. Almost half (n = 101, 49.5%) became published articles in predominantly peer-reviewed journals. Preprints appeared in 65 peer-reviewed journals, with BMC Medical Education (n = 9, 8.9%) most represented.
    CONCLUSIONS: Medical education research is being deposited as preprints, which are promoted, heavily accessed, and subsequently published in peer-reviewed journals, including medical education journals. Considering the benefits of preprints and the slowness of medical education publishing, it is likely that preprint depositing will increase and preprints will be integrated into the field's discourse. The authors propose next steps to facilitate responsible and effective creation and use of preprints.
    DOI:  https://doi.org/10.1097/ACM.0000000000005742
  18. J Bacteriol. 2024 Apr 16. e0011324
      
    Keywords:  Journal of Bacteriology; junior faculty; minireview; thesis
    DOI:  https://doi.org/10.1128/jb.00113-24
  19. Mult Scler Relat Disord. 2024 Apr 15. pii: S2211-0348(24)00202-5. [Epub ahead of print]86: 105625
      
    Keywords:  Artificial intelligence; ChatGPT; Multiple Sclerosis; Open access publishing
    DOI:  https://doi.org/10.1016/j.msard.2024.105625
  20. Ocul Surf. 2024 Apr 16. pii: S1542-0124(24)00041-7. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.jtos.2024.04.001
  21. Adv Health Sci Educ Theory Pract. 2024 Apr 18.
      This column is intended to address the kinds of knotty problems and dilemmas with which many scholars grapple in studying health professions education. In this article, the authors address the question of whether one should conduct a literature review or knowledge synthesis, considering the why, when, and how, as well as its potential pitfalls. The goal is to guide supervisors and students who are considering whether to embark on a literature review in education research.
    DOI:  https://doi.org/10.1007/s10459-024-10335-1
  22. J Prof Nurs. 2024 Mar-Apr;51: 45-50. pii: S8755-7223(24)00026-7.
    Nurses have valuable knowledge and expertise to share. Yet, for a variety of reasons, many nurses do not write for publication. Members of one Sigma Theta Tau International chapter requested information about publishing, so a writing for publication program (WPP) was convened. Ten nurses from diverse clinical and academic backgrounds participated. The goal of the WPP was to support a small group of nurses in advancing knowledge and developing practical skills through the development of a manuscript, with mentorship from doctorally prepared nurses with publishing experience. The anticipated effect was that participants would share what they learned with colleagues or mentor others to publish in the future. Beginning with informational sessions to lay the foundation for writing and publishing, the WPP included biweekly, two-hour online sessions over a seven-month period, during which participants wrote individually and in groups with embedded feedback from peers and WPP leaders. WPP participants gained proficiency in searching online databases, synthesizing published literature, and working as members of a writing team. The group successfully published a manuscript on a topic of interest. This article describes the structured support and mentorship provided during the WPP, with recommendations for overcoming publication barriers commonly described in the literature.
    Keywords:  Dissemination; Mentorship; Nursing; Scholarly writing; Writing for publication
    DOI:  https://doi.org/10.1016/j.profnurs.2024.01.013