bims-skolko Biomed News
on Scholarly communication
Issue of 2024-11-24
twenty-one papers selected by
Thomas Krichel, Open Library Society



  1. Science. 2024 Nov 22. 386(6724): 839
      Web of Science index pulls metric because of the publisher's unusual peer-review model.
    DOI:  https://doi.org/10.1126/science.adu7465
  2. Cureus. 2024 Oct;16(10): e71877
      Background Scholarly activity by trainees is required for US-accredited graduate medical education (GME) programs. Several factors, including financial barriers to open access (OA) journals, may impact trainees' successful completion of scholarly activity, but little is known about the extent of this impact, particularly for neurology trainees. Method The authors implemented a cross-sectional, web-based 17-item survey of US-accredited neurology residency and fellowship programs during the 2022-2023 academic year. Participant responses for producing scholarly activity during GME were analyzed by mixed methods and examined by trainee motivation and perceived barriers, available institutional research support, and OA awareness, and compared against socio-demographics (i.e., disadvantaged status history, underrepresented in medicine (URiM) status, international medical school graduate (IMG) status) and prior research experience. Results Seventy-two respondents from 63 neurology programs completed the survey. Participants represented all US census regions and were mostly from academic health centers and in advanced years of training. Overall, 17 (23.6%) self-reported as URiM and 20 (27.8%) as IMGs. Sixty-two (86.1%) were familiar with OA. Prior publications were associated with OA awareness (χ2 = 5.3, p<0.05), and 27 (37.5%) reported financial barriers to publishing. IMGs reported less motivation to publish based on a journal's impact factor (odds ratio [OR] = 0.15, 95% confidence interval [CI], 0.03-0.65, p<0.01) but were nearly 5 times more likely to report financial barriers to OA publishing (OR = 4.62, 95% CI 1.31-16.80, p<0.01). Trainees successfully publishing while training reported prior research experience (OR = 7.27, 95% CI 1.71-42.64, p<0.05) and access to mentors (OR = 4.67, 95% CI 1.52-14.64, p<0.001). Dedicated time for scholarly activity and publishing were reported as significant barriers in open-ended responses. 
Conclusions One-third of the study participants reported financial barriers to publishing scholarship, with these barriers disproportionately affecting IMGs. Prior research experience and access to mentors were associated with a higher likelihood of publishing.
    Keywords:  all neurology; gme scholarly activity; neurology medical education; post-grad medical education; publishing
    DOI:  https://doi.org/10.7759/cureus.71877
  3. J Vis Commun Med. 2024 Nov 16. 1-9
      This study assessed the impact of posting video abstracts on journal articles and authors' X engagement. European Journal of Sport Science articles were disseminated on X as animated video abstracts (AN), author-provided video abstracts (AU), or title-only (TO) posts. Metrics, including page views, Altmetric Attention Score (AAS), X engagements, impressions, link clicks, media engagements, and views, were compared at 7 and 30 days. Authors' X presence and video abstract creation were also examined. Page views did not differ between groups after 7 or 30 days. After 7 days, AN received significantly more AAS, impressions, media views and media engagements than AU or TO. After 30 days, AN received significantly more AAS, impressions, engagements, media views and media engagements than AU or TO. TO received significantly more link clicks than AN or AU after 7 and 30 days. Fifty percent of authors had an X account, and 11.2% indicated interest in creating a video abstract. Articles promoted using animated video abstracts received more attention, reach and impact on X than those promoted using author-provided video abstracts or title-only posts. Animated video abstracts can be used by authors to effectively promote their research findings and increase their visibility on social media.
    Keywords:  Social media; Twitter; dissemination; medical journals; video abstracts
    DOI:  https://doi.org/10.1080/17453054.2024.2423087
  4. Br J Anaesth. 2024 Dec;133(6): 1134-1136
      Authorship provides academic recognition for substantial intellectual contributions to scholarly articles. Beyond recognition, authorship has become a form of currency within the academic community, acting as an indicator of academic output and thus influencing standing within an institution and the general medical community. It might further impact salary as well as job and research grant funding opportunities. Unfortunately, this emphasis on authorship has also been linked to instances of misconduct. We discuss our personal experience with editorial misconduct hoping to highlight the issue and thereby increase awareness and peer-to-peer control to reduce future authorship misconduct and to encourage others to speak up.
    Keywords:  academic misconduct; authorship; medical publishing; speaking up; whistle-blower
    DOI:  https://doi.org/10.1016/j.bja.2024.08.015
  5. Eur J Orthod. 2024 Dec 01;46(6): cjae064
       AIM: To identify data sharing practices of authors of randomized-controlled trials (RCTs) in indexed orthodontic journals and explore associations between published reports and several publication characteristics.
    MATERIALS AND METHODS: RCTs from indexed orthodontic journals in major databases, namely PubMed® (Medline), Scopus®, EMBASE®, and Web of Science™, were included from January 2019 to December 2023. Data extraction was conducted for outcome and predictor variables such as data and statistical code sharing practices reported, protocol registration, funding sources, and other publication characteristics, including the year of publication, journal ranking, the origin of authorship, number of authors, design of the RCT, and outcome-related variables (e.g. efficacy/safety). Statistical analyses included descriptive statistics, cross-tabulations, and univariable and multivariable logistic regression.
    RESULTS: A total of 318 RCTs were included. A statement of the authors' intention to provide their data upon request was recorded in 51 of 318 RCTs (16.0%), while 6 of 318 (1.9%) openly provided their data in repositories. No RCT provided any code or script for statistical analysis. A significant association was found between data sharing practices and the year of publication, with the odds of data sharing increasing 1.56-fold per year (odds ratio [OR]: 1.56; 95% confidence interval [CI]: 1.22, 2.01; P < .001). RCTs reporting on safety outcomes had 62% lower odds of including positive data sharing statements compared to efficacy outcomes (OR: 0.38; 95% CI: 0.17, 0.88). There was evidence that funded RCTs were more likely to report on data sharing compared to non-funded RCTs (P = .02).
    CONCLUSIONS: Although progress has been made towards credibility and transparency in the presentation of findings from RCTs in orthodontics, less than 20% of published orthodontic trials include a positive data sharing statement, and less than 2% openly provide their data with publication.
    Keywords:  data sharing; individual participant data; orthodontic RCTs; registration; transparency
    DOI:  https://doi.org/10.1093/ejo/cjae064
  6. Adv Physiol Educ. 2024 Nov 21.
      An increase in scholarly publishing has been accompanied by a proliferation of potentially illegitimate publishers (PIP), commonly known as "predatory publishers". These PIP often engage in fraudulent practices and publish articles that are not subject to the same scrutiny as those published in journals from legitimate publishers (LP). This places academics at risk, in particular students who utilize journal articles for learning and assignments. This analysis sought to characterise PIP in physiology, as this has yet to be determined, and identify overlaps in lists of PIP and LP used to provide guidance on legitimacy of journals. Searching seven databases (2 of PIP, 5 of LP), this analysis identified 67 potentially illegitimate journals (PIJ) that explicitly include "physiology" in their titles, with 8801 articles being published in them. Of these articles, 39% claimed to be indexed in Google Scholar, and 9% were available on PubMed. This resulted in 17 publications 'infiltrating' PubMed and attracting >100 citations in the process. Overlap between lists of PIP and LP was present, with eight PIJ occurring in both LP and PIP lists. Two of these journals appeared to be 'phishing' journals, and six were genuine infiltrations into established databases, indicating that LP lists cannot be solely relied upon as proof a journal is legitimate. This analysis indicates that physiology is not immune to the threat of PIP, and that future work is required by educators to ensure students do not fall prey to their use.
    Keywords:  databases; legitimacy; physiology; publishing
    DOI:  https://doi.org/10.1152/advan.00162.2024
  7. Nature. 2024 Nov 21.
      
    Keywords:  Media; Publishing; Scientific community
    DOI:  https://doi.org/10.1038/d41586-024-03784-6
  8. PLoS One. 2024;19(11): e0301111
       OBJECTIVE: Peer review frequently follows a process where reviewers first provide initial reviews, authors respond to these reviews, then reviewers update their reviews based on the authors' response. There is mixed evidence regarding whether this process is useful, including frequent anecdotal complaints that reviewers insufficiently update their scores. In this study, we aim to investigate whether reviewers anchor to their original scores when updating their reviews, which serves as a potential explanation for the lack of updates in reviewer scores.
    DESIGN: We design a novel randomized controlled trial to test if reviewers exhibit anchoring. In the experimental condition, participants initially see a flawed version of a paper that is corrected after they submit their initial review, while in the control condition, participants only see the correct version. We take various measures to ensure that in the absence of anchoring, reviewers in the experimental group should revise their scores to be identically distributed to the scores from the control group. Furthermore, we construct the reviewed paper to maximize the difference between the flawed and corrected versions, and employ deception to hide the true experiment purpose.
    RESULTS: Our randomized controlled trial consists of 108 researchers as participants. First, we find that our intervention was successful at creating a difference in perceived paper quality between the flawed and corrected versions: Using a permutation test with the Mann-Whitney U statistic, we find that the experimental group's initial scores are lower than the control group's scores in both the Evaluation category (Vargha-Delaney A = 0.64, p = 0.0096) and Overall score (A = 0.59, p = 0.058). Next, we test for anchoring by comparing the experimental group's revised scores with the control group's scores. We find no significant evidence of anchoring in either the Overall (A = 0.50, p = 0.61) or Evaluation category (A = 0.49, p = 0.61). The Mann-Whitney U represents the number of individual pairwise comparisons across groups in which the value from the specified group is stochastically greater, while the Vargha-Delaney A is the normalized version in [0, 1].
    DOI:  https://doi.org/10.1371/journal.pone.0301111
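The Vargha-Delaney A and Mann-Whitney U defined in the closing sentences of the abstract above can be sketched directly from their pairwise definitions. The data and function names below are hypothetical illustrations (in practice, scipy.stats.mannwhitneyu computes U):

```python
# Sketch of the two statistics named in the abstract. The Mann-Whitney U
# counts, over all cross-group pairs, how often a value from the specified
# group is greater (a tie counts as half a win); the Vargha-Delaney A is
# that count normalized to [0, 1], so A = 0.5 means no stochastic difference.

def mann_whitney_u(group, other):
    """Pairwise wins of `group` over `other`; a tie counts as 0.5."""
    return sum(1.0 if g > o else 0.5 if g == o else 0.0
               for g in group for o in other)

def vargha_delaney_a(group, other):
    """U divided by the total number of cross-group pairs."""
    return mann_whitney_u(group, other) / (len(group) * len(other))

# Toy review scores: control group vs. experimental (flawed-first) group.
control = [5, 6, 6, 7, 8]
experimental = [4, 5, 5, 6, 7]
print(vargha_delaney_a(control, experimental))  # 0.74: control tends higher
```

An A near 0.5, as the study reports for the revised scores, means a score drawn from one group is no more likely to exceed a score from the other than vice versa.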
  9. JAACAP Open. 2023 Nov;1(3): 151-153
      A critical piece in the launch of JAACAP Open is the establishment of a high-quality and robust peer review process for incoming submissions. Indeed, peer review is the backbone of our scientific process. Here, we will discuss the importance of peer review, describe the process as we are expanding the JAACAP journal family, and explain why and how you can be involved in the peer review process.
    DOI:  https://doi.org/10.1016/j.jaacop.2023.09.003
  10. J Orthop. 2025 May;63: 98-100
      As artificial intelligence continues its ascent across numerous sectors, it presents both exciting opportunities and unique challenges for the future of academic publishing. Artificial intelligence (AI) refers to the ability of computer systems to perform tasks previously done by the human brain, such as learning, problem-solving, and decision-making. In the realm of medical writing, AI is being harnessed through various applications. The increasing amalgamation of AI into medical writing has ignited a fervent debate, with experts and stakeholders divided on whether it represents a valuable tool for progress or a potential threat to the integrity and quality of scientific publications. While proponents celebrate AI's potential to streamline research, enhance efficiency, and broaden access to knowledge, critics voice concerns about ethical implications, the risk of plagiarism, and the potential for deskilling among researchers. Therefore, it is pivotal to acknowledge that AI has the potential to be both a boon and a bane, and its ethical and practical implications must be carefully considered to ensure its responsible and beneficial integration into the spectrum of medical writing.
    DOI:  https://doi.org/10.1016/j.jor.2024.10.045
  11. Cureus. 2024 Oct;16(10): e71744
      Scientific medical manuscripts are fundamental to advancing research and enhancing patient care. With the emergence of artificial intelligence (AI), the process of composing such manuscripts has witnessed profound transformations. This review delves into the multifaceted role of AI in medical manuscript composition, analyzing its applications, benefits, drawbacks, and ethical implications. Employing a comprehensive narrative review methodology, we explored databases such as PubMed, Google Scholar, and Science Direct. The review charts the evolution of AI in medical writing, from basic word processing to sophisticated neural network-based models like GPT-3 and GPT-4. Various AI-powered tools such as ChatGPT, Google Bard, Elicit, and Consensus AI are examined in terms of their functionalities and contributions to research and medical writing. While AI technologies offer notable advantages in automating content creation and boosting research productivity, concerns persist regarding overreliance, potential homogenization of writing styles, and ethical considerations such as originality and authorship. Because of this concern, some companies are restricting the use of AI in peer review processes, medical examinations, etc. It is crucial to strike a balance in integrating AI tools, ensuring human oversight, conducting thorough algorithm audits, addressing financial implications, and upholding academic integrity. The review underscores the transformative potential of AI in medical manuscript composition while emphasizing the ongoing significance of human expertise, creativity, and ethical responsibility in scientific communication. Recommendations are provided for the effective integration of AI tools into medical writing processes, emphasizing collaborative efforts between AI developers, researchers, and journal editors to navigate ethical dilemmas and maximize the benefits of AI-driven advancements in scientific publishing.
    Keywords:  ai technologies; applications; artificial intelligence; ethical issues; medical manuscript writing
    DOI:  https://doi.org/10.7759/cureus.71744
  12. Patterns (N Y). 2024 Oct 11. 5(10): 101075
      Scientific writing is an essential skill for researchers to publish their work in respected peer-reviewed journals. While using AI-assisted tools can help researchers with spelling checks, grammar corrections, and even rephrasing of paragraphs to improve the language and meet journal standards, unethical use of these tools may raise research integrity concerns during this process. In this piece, three Patterns authors share their thoughts on three questions: how do you use AI tools ethically during manuscript writing? What benefits and risks do you believe AI tools will bring to scientific writing? Do you have any recommendations for better policies regulating AI tools' use in scientific writing?
    DOI:  https://doi.org/10.1016/j.patter.2024.101075
  13. Eur J Dent Educ. 2024 Nov 19.
       OBJECTIVES: To evaluate the performance of a Generative Pre-trained Transformer (GPT) in generating scientific abstracts in dentistry.
    METHODS: Ten scientific articles in dental radiology had their original abstracts collected, while another 10 articles had their methodology and results added to a ChatGPT prompt to generate an abstract. All abstracts were randomised and compiled into a single file for subsequent assessment. Five evaluators classified whether the abstract was generated by a human using a 5-point scale and provided justifications within seven aspects: formatting, information accuracy, orthography, punctuation, terminology, text fluency, and writing style. Furthermore, an online GPT detector provided "Human Score" values, and a plagiarism detector assessed similarity with existing literature.
    RESULTS: Sensitivity values for detecting human writing ranged from 0.20 to 0.70, with a mean of 0.58; specificity values ranged from 0.40 to 0.90, with a mean of 0.62; and accuracy values ranged from 0.50 to 0.80, with a mean of 0.60. Orthography and punctuation were the aspects most often cited as indicating that an abstract was generated by ChatGPT. The GPT detector revealed confidence levels for a "Human Score" of 16.9% for the AI-generated texts and plagiarism levels averaging 35%.
    CONCLUSION: The GPT exhibited commendable performance in generating scientific abstracts when evaluated by humans, as the generated abstracts were indistinguishable from those written by humans. When evaluated by an online GPT detector, however, the use of GPT became apparent.
    Keywords:  artificial intelligence; dental research; education; large language models; scientific writing; similarity detectors
    DOI:  https://doi.org/10.1111/eje.13057
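The sensitivity, specificity, and accuracy values reported above follow the standard confusion-matrix definitions, with "human-written" as the positive class. A minimal sketch with hypothetical counts (the function name and numbers are illustrative, not from the study):

```python
# Sensitivity: human abstracts correctly judged human (true positives).
# Specificity: ChatGPT abstracts correctly judged machine (true negatives).
# Accuracy: all correct judgments over all abstracts judged.

def detector_metrics(true_pos, false_neg, true_neg, false_pos):
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    total = true_pos + false_neg + true_neg + false_pos
    accuracy = (true_pos + true_neg) / total
    return sensitivity, specificity, accuracy

# One hypothetical evaluator judging 10 human and 10 AI abstracts:
# 7 human abstracts called human, 6 AI abstracts called AI.
sens, spec, acc = detector_metrics(true_pos=7, false_neg=3, true_neg=6, false_pos=4)
print(sens, spec, acc)  # 0.7 0.6 0.65
```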
  14. Am J Obstet Gynecol. 2024 Dec;231(6): e222
      
    DOI:  https://doi.org/10.1016/j.ajog.2024.06.030
  15. Tunis Med. 2024 Nov 05. 102(11): 858-865
       INTRODUCTION: The benchmark of a medical thesis' success is often its acceptance for publication in an indexed journal.
    AIM: To determine the publication rate of practice theses in the field of Cardiology at the Faculty of Medicine of Sousse (FMSo) in Tunisia and to identify predictive factors for successful publication.
    METHODS: We conducted a descriptive bibliometric analysis of Cardiology theses defended at FMSo from 2000 to 2019. Data were extracted from the theses' cover pages, abstracts, and conclusions. The publication status was ascertained via searches in "MEDLINE", "Scopus", and "Google Scholar". Predictive factors for publication were identified using multivariate analysis with a 90% Confidence Interval (CI).
    RESULTS: Of the 111 Cardiology theses defended at FMSo between 2000 and 2019, 36 were published, yielding 42 scientific articles (publication rate of 32%). Notably, 86% of these articles were indexed in "MEDLINE" and/or "Scopus". In 79% of cases, doctoral candidates were co-authors of the resultant publications. Publication was significantly influenced by three factors: scientific mentorship by an Assistant or Associate Professor (aOR=3.021; 90%CI: 1.06-10.14; p=0.082), a prospective study design (aOR=2.536; 90%CI: 1.07-6.02; p=0.076), and a satisfactory quality of writing (aOR=2.384; 90%CI: 1.11-5.11; p=0.061).
    CONCLUSION: The publication of Cardiology theses at FMSo was found to be associated with the prospective design of the study and the quality of writing. Thus, it is imperative to enhance the research methodology and scientific communication skills of medical thesis candidates and their mentors to facilitate the transition from academic dissertations to medical articles.
    Keywords:  Academic dissertation; Bibliometrics; Medical Writing; Medicine; Publications; Schools; Tunisia
    DOI:  https://doi.org/10.62438/tunismed.v102i11.5230
  16. Int J Epidemiol. 2024 Oct 13;53(6): dyae154
      Journal peer review is a gatekeeper in the scientific process, determining which papers are published in academic journals. It also supports authors in improving their papers before they go to press. Training for early-career researchers on how to conduct a high-quality peer review is scarce, however, and there are concerns about the quality of peer review in the health sciences. Standardized training and guidance may help reviewers to improve the quality of their feedback. In this paper, we approach peer review as a staged writing activity and apply writing process best practices to help early-career researchers and others learn to create a comprehensive and respectful peer-review report. The writing stages of reading, planning and composing are reflected in our three-step peer-review process. The first step involves reading the entire manuscript to get a sense of the paper as a whole. The second step is to comprehensively evaluate the paper. The third step, of writing the review, emphasizes a respectful tone, providing feedback that motivates revision as well as balance in pointing out strengths and making suggestions. Detailed checklists that are provided in the Supplementary material (available as Supplementary data at IJE online) aid in the paper evaluation process and examples demonstrate points about writing an effective review.
    Keywords:  Journal peer review; mentoring; professional development; publishing; teaching; writing
    DOI:  https://doi.org/10.1093/ije/dyae154
  17. R Soc Open Sci. 2024 Nov;11(11): 241311
      In the wake of the COVID-19 pandemic, many journals swiftly changed their editorial policies and peer-review processes to accelerate the provision of knowledge about COVID-related issues to a wide audience. These changes may have favoured speed at the cost of accuracy and methodological rigour. In this study, we compare 100 COVID-related articles published in four major psychological journals between 2020 and 2022 with 100 non-COVID articles from the same journal issues and 100 pre-COVID articles published between 2017 and 2019. Articles were coded with regard to design features, sampling and recruitment features, and openness and transparency practices. Even though COVID research was, by and large, more 'observational' in nature and less experimentally controlled than non- or pre-COVID research, we found that COVID-related studies were more likely to use 'stronger' (i.e. more longitudinal and fewer cross-sectional) designs, larger samples, justify their sample sizes based on a priori power analysis, pre-register their hypotheses and analysis plans and make their data, materials and code openly available. Thus, COVID-related psychological research does not appear to be less rigorous in these regards than non-COVID research.
    Keywords:  COVID-19 pandemic; meta-science; open science; psychology; research quality
    DOI:  https://doi.org/10.1098/rsos.241311
  18. N C Med J. 2024 Aug;85(6): 373
      With this issue, the North Carolina Medical Journal ceases to publish in print and will appear exclusively online. The NCMJ will reach back almost 175 years to our founding in 1849 and will once again focus on peer-reviewed original research. Dr. Ronny Bell assumes the role of Editor-in-Chief.
    Keywords:  introduction; oral health; publishing
    DOI:  https://doi.org/10.18043/001c.125108