bims-skolko Biomed News
on Scholarly communication
Issue of 2023‒12‒17
27 papers selected by
Thomas Krichel, Open Library Society



  1. Indian J Urol. 2023 Oct-Dec;39(4): 265-273
      Introduction: This bibliometric study investigates urology journals' access types and article processing charges (APCs) to assess the changing paradigm in urology publishing. Methods: Three major databases, the Master Journal List directory by Clarivate Analytics, Scopus®, and PubMed, were queried for relevant journals in urology and its subspecialties. Urology journals were characterized, and citation metrics and APCs were compared across access types. A partial sample was used to investigate the number of open access (OA) articles by access type and correlations with both APCs and CiteScore.
    Results: Seventy-seven journals were included in the study. Gold and diamond OA journals comprised 35.4% of urology journals in 2009, a share that increased to 49.3% in 2022. No significant difference in the change in CiteScore between 2017 and 2021 was found across access types, F(2,63) = 0.152, P = 0.859, η2 = 0.005. A moderate positive correlation was found between APCs and CiteScore for both hybrid (rs[27] = 0.431, P < 0.0005) and gold OA (rs[27] = 0.489, P = 0.007) journals. Authors need to pay $1175 more to publish their articles under the OA model in hybrid journals. The number of articles published under the OA model by hybrid journals was not correlated with APCs (rs = 0.332, P = 0.078) but was correlated with CiteScore (rs = 0.393, P = 0.035).
    Conclusions: A paradigm shift in urology publishing toward the OA model is occurring. Authors choose prestige, the OA model, rapid publication, and less rigorous peer review when deciding where to publish their articles. APCs bear only a moderate correlation with the citation metrics of urology journals.
    DOI:  https://doi.org/10.4103/iju.iju_159_23
  2. J Cutan Med Surg. 2023 Nov;27(6): 577-578
      
    DOI:  https://doi.org/10.1177/12034754231216791
  3. Clin Neuroradiol. 2023 Dec 14.
      PURPOSE: It is unclear whether undesired practices such as scientific fraud, publication bias, and honorary authorship are present in neuroradiology. The objective was therefore to explore the integrity of clinical neuroradiological research using a survey. METHODS: Corresponding authors who published in one of four top clinical neuroradiology journals were invited to complete a survey about integrity in clinical neuroradiology research.
    RESULTS: A total of 232 corresponding authors participated in the survey. Confidence in the integrity of published scientific work in clinical neuroradiology (0-10 point scale) was rated at a median score of 8 (range 3-10). In linear regression analysis, respondents from Asia had significantly higher confidence (beta coefficient 0.569, 95% confidence interval 0.049-1.088, P = 0.032). Of the respondents, 8 (3.4%) reported having committed scientific fraud in the past 5 years, whereas 66 (28.4%) reported having witnessed or suspected scientific fraud by someone from their department in the past 5 years. A total of 192 respondents (82.8%) thought that a study with positive results is more likely to be accepted by a journal than a similar study with negative results, and 96 respondents (41.4%) had had an honorary author on at least one of their publications in the past 5 years.
    CONCLUSION: Experts in the field have overall high confidence in published clinical neuroradiology research; however, scientific integrity concerns are not negligible, publication bias is a problem, and honorary authorship is common. The findings from this survey may help to increase awareness and vigilance among everyone involved in clinical neuroradiological research.
    Keywords:  Ethics; Fraud; Neurology; Radiology; Scientific misconduct
    DOI:  https://doi.org/10.1007/s00062-023-01280-4
  4. JAMA Netw Open. 2023 Dec 01. 6(12): e2347607
      Importance: High-quality peer reviews are often thought to be essential to ensuring the integrity of the scientific publication process, but measuring peer review quality is challenging. Although imperfect, review word count could potentially serve as a simple, objective metric of review quality. Objective: To determine the prevalence of very short peer reviews and how often they inform editorial decisions on research articles in 3 leading general medical journals.
    Design, Setting, and Participants: This cross-sectional study compiled a data set of peer reviews from published, full-length original research articles from 3 general medical journals (The BMJ, PLOS Medicine, and BMC Medicine) between 2003 and 2022. Eligible articles were those with peer review data; all peer reviews used to make the first editorial decision (ie, accept vs revise and resubmit) were included.
    Main Outcomes and Measures: The primary outcome was the prevalence of very short reviews, defined as reviews of fewer than 200 words. In secondary analyses, thresholds of fewer than 100 words and fewer than 300 words were used. Results were disaggregated by journal and year. The proportion of articles for which the first editorial decision was made based on a set of peer reviews in which very short reviews constituted 100%, 50% or more, 33% or more, and 20% or more of the reviews was calculated.
    Results: In this sample of 11 466 reviews (including 6086 in BMC Medicine, 3816 in The BMJ, and 1564 in PLOS Medicine) corresponding to 4038 published articles, the median (IQR) word count per review was 425 (253-575) words, and the mean (SD) word count was 520.0 (401.0) words. The overall prevalence of very short (<200 words) peer reviews was 1958 of 11 466 reviews (17.1%). Across the 3 journals, 843 of 4038 initial editorial decisions (20.9%) were based on review sets containing 50% or more very short reviews. The prevalence of very short reviews and the share of editorial decisions based on review sets containing 50% or more very short reviews were highest for BMC Medicine (693 of 2585 editorial decisions [26.8%]) and lowest for The BMJ (76 of 1040 editorial decisions [7.3%]).
    Conclusion and Relevance: In this study of 3 leading general medical journals, one-fifth of initial editorial decisions for published articles were likely based at least partially on reviews of such short length that they were unlikely to be of high quality. Future research could determine whether monitoring peer review length improves the quality of peer reviews and which interventions, such as incentives and norm-based interventions, may elicit more detailed reviews.
    DOI:  https://doi.org/10.1001/jamanetworkopen.2023.47607
  5. Account Res. 2023 Dec 11. 1-19
      This case study analyzes the expertise, potential conflicts of interest, and objectivity of editors, authors, and peer reviewers involved in a 2022 special journal issue on fertility, pregnancy, and mental health. Data were collected on the qualifications, organizational affiliations, and relationships among the six papers' authors, three guest editors, and twelve peer reviewers. Two articles were found to have undisclosed conflicts of interest among authors, an editor, and multiple peer reviewers affiliated with anti-abortion advocacy and lobbying groups, indicating compromised objectivity. This lack of transparency undermines the peer review process and enables the proliferation of biased research and disinformation. Our study is limited by several factors, including difficulty collecting peer reviewer data, potentially missing affiliations, and a small sample without comparisons. While this is a case study of a single special issue, we offer suggestions for increasing integrity.
    Keywords:  Research integrity; peer review bias; research ethics
    DOI:  https://doi.org/10.1080/08989621.2023.2292043
  6. BMC Med Res Methodol. 2023 Dec 13. 23(1): 292
      BACKGROUND: Complete reporting is essential for clinical research. However, the endorsement of reporting guidelines by radiological journals is still unclear. Further, because radiology extensively utilizes artificial intelligence (AI), the adoption of both general and AI-specific reporting guidelines is necessary to enhance the quality and transparency of radiological research. This study aims to investigate the endorsement of general reporting guidelines and of guidelines for AI applications in medical imaging by radiological journals, and to explore associated journal characteristics. METHODS: This meta-research study screened journals from the Radiology, Nuclear Medicine & Medical Imaging category of the Science Citation Index Expanded in the 2022 Journal Citation Reports, excluding journals that do not publish original research, are not published in English, or have no instructions for authors available. The endorsement of fifteen general reporting guidelines and ten AI reporting guidelines was rated using a five-level tool: "active strong", "active weak", "passive moderate", "passive weak", and "none". The association between endorsement and journal characteristics was evaluated by logistic regression analysis.
    RESULTS: We included 117 journals. The top five endorsed reporting guidelines were CONSORT (Consolidated Standards of Reporting Trials, 58.1%, 68/117), PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses, 54.7%, 64/117), STROBE (STrengthening the Reporting of Observational Studies in Epidemiology, 51.3%, 60/117), STARD (Standards for Reporting of Diagnostic Accuracy, 50.4%, 59/117), and ARRIVE (Animal Research Reporting of In Vivo Experiments, 35.9%, 42/117). The most implemented AI reporting guideline was CLAIM (Checklist for Artificial Intelligence in Medical Imaging, 1.7%, 2/117), while the other nine AI reporting guidelines were not mentioned. Journal Impact Factor quartile and publisher were associated with the endorsement of reporting guidelines in radiological journals.
    CONCLUSIONS: Endorsement of general reporting guidelines was suboptimal in radiological journals, and implementation of reporting guidelines for AI applications in medical imaging was extremely low. Their adoption should be strengthened to improve the quality and transparency of radiological study reporting.
    Keywords:  Artificial intelligence; Checklist; Guideline; Radiology; Research report
    DOI:  https://doi.org/10.1186/s12874-023-02117-x
  7. Colomb Med (Cali). 2023 Jul-Sep;54(3): e1015868
      This statement revises our earlier "WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications" (January 20, 2023). The revision reflects the proliferation of chatbots and their expanding use in scholarly publishing over the last few months, as well as emerging concerns regarding lack of authenticity of content when using chatbots. These recommendations are intended to inform editors and help them develop policies for the use of chatbots in papers published in their journals. They aim to help authors and reviewers understand how best to attribute the use of chatbots in their work and to address the need for all journal editors to have access to manuscript screening tools. In this rapidly evolving field, we will continue to modify these recommendations as the software and its applications develop.
    Keywords:  ChatGPT; artificial intelligence; authorship; scientific misconduct; chatbots; confidentiality; deep learning; disinformation; plagiarism; scholarly communication; scientific manuscript; WAME revised recommendation
    DOI:  https://doi.org/10.25100/cm.v54i3.5868
  8. Cell Rep Phys Sci. 2023 Nov;4(11): 101672. [Epub ahead of print]
      Large language models like ChatGPT can generate authentic-seeming text at lightning speed, but many journal publishers reject language models as authors on manuscripts. Thus, a means to accurately distinguish human-generated from artificial intelligence (AI)-generated text is immediately needed. We recently developed an accurate AI text detector for scientific journals and, herein, test its ability in a variety of challenging situations, including on human text from a wide variety of chemistry journals, on AI text from the most advanced publicly available language model (GPT-4), and, most importantly, on AI text generated using prompts designed to obfuscate AI use. In all cases, AI and human text were assigned with high accuracy. ChatGPT-generated text can be readily detected in chemistry journals; this advance is a fundamental prerequisite for understanding how automated text generation will impact scientific publishing from now into the future.
    DOI:  https://doi.org/10.1016/j.xcrp.2023.101672
  9. Cureus. 2023 Nov;15(11): e48366
      In the ever-evolving realm of scientific research, this letter underscores the vital role of ChatGPT as an invaluable ally in manuscript creation, focusing on its remarkable grammar and spelling error correction capabilities. Furthermore, it highlights ChatGPT's efficacy in expediting the manuscript preparation process by streamlining the collection and highlighting of critical scientific information. By elucidating the aim of this letter and the multifaceted benefits of ChatGPT, we aspire to illuminate the path toward a future where scientific writing achieves unparalleled efficiency and precision.
    Keywords:  artificial intelligence; chatgpt; future in medicine; research; scientific manuscript
    DOI:  https://doi.org/10.7759/cureus.48366
  10. Nature. 2023 Dec;624(7991): S13
      
    Keywords:  Authorship; Developing world; Funding; Publishing; Scientific community
    DOI:  https://doi.org/10.1038/d41586-023-03905-7
  11. Nature. 2023 Dec;624(7991): S8-S9
      
    Keywords:  Developing world; Publishing
    DOI:  https://doi.org/10.1038/d41586-023-03903-9
  12. Nature. 2023 Dec;624(7991): S1
      
    Keywords:  Developing world; Funding; Policy
    DOI:  https://doi.org/10.1038/d41586-023-03901-x
  13. mBio. 2023 Dec 13. e0199423
      In this editorial, written by early-career scientists, we advocate for the invaluable role of society journals in our scientific community. By choosing to support these journals as authors, peer reviewers, and editors, we can reinforce our academic growth and benefit from their re-investment back into the scientific ecosystem. Considering the numerous clear merits of this system for future generations of microbiologists and, more broadly, for society, we argue that early-career researchers should publish their high-quality research in society journals to shape the future of science and the scientific publishing landscape.
    Keywords:  early career scientists; impact; society journals
    DOI:  https://doi.org/10.1128/mbio.01994-23
  14. PLoS One. 2023;18(12): e0294805
      The fairness of decisions made at various stages of the publication process is an important topic in meta-research. Here, based on an analysis of data on the gender of authors, editors, and reviewers for 23,876 initial submissions and 7,192 full submissions to the journal eLife, we report on five stages of the publication process. We find that the board of reviewing editors (BRE) is dominated by men (69%) and that authors disproportionately suggest male editors when making an initial submission. We do not find evidence of gender bias when Senior Editors consult Reviewing Editors about initial submissions, but women Reviewing Editors are less engaged in discussions about these submissions than expected from their proportion. We find evidence of gender homophily when Senior Editors assign full submissions to Reviewing Editors: men are more likely to assign full submissions to other men (77%, compared with a base assignment rate to men REs of 70%), and likewise for women (41%, compared with a base assignment rate to women REs of 30%). This tendency was stronger in more gender-balanced scientific disciplines. However, we do not find evidence of gender bias when authors appeal editors' decisions to reject submissions. Together, our findings confirm that gender disparities exist along the editorial process and suggest that merely increasing the proportion of women might not be sufficient to eliminate this bias. Measures accounting for women's circumstances and needs (e.g., delaying discussions until all REs are engaged) and raising editorial awareness of women's needs may be essential to increasing gender equity and enhancing academic publishing.
    DOI:  https://doi.org/10.1371/journal.pone.0294805
  15. Nature. 2023 Dec;624(7991): S34-S36
      
    Keywords:  Developing world; Publishing; Research management
    DOI:  https://doi.org/10.1038/d41586-023-03913-7
  16. Am J Occup Ther. 2023 Nov 01;77(6): 7706070010. [Epub ahead of print]
      The American Journal of Occupational Therapy (AJOT) has maintained its top-ranking status in the field of occupational therapy, as evidenced by an increase in its 2-yr impact factor. As the Editor-in-Chief enters her second 3-yr term, the journal faces both challenges and opportunities stemming from trends in academic publishing. The editorial team seeks to navigate these shifts while fostering meaningful research dissemination and translation. Key outcomes for 2023 showcase the journal's dedication to addressing diverse occupational therapy needs. A special issue on autism and mental health in 2023 and upcoming themes on recovery after neurological injury and play in occupational therapy in 2024 exemplify AJOT's commitment to relevant topics. The AJOT Authors & Issues interview series and an active presence on social media platforms further bolster research engagement and translation. Despite challenges, AJOT's impact factor and rankings in the rehabilitation category have demonstrated its global influence and leadership. The journal's commitment to diversity, equity, and inclusion (DEI) is evident through initiatives such as AJOT's DEI Committee and DEI article collection, as well as AJOT's comprehensive approach to combating bias. As AJOT looks ahead to 2024, its goals include reviving State of the Science articles, updating our Author Guidelines to incorporate artificial intelligence and bias-free language policies, and fostering engagement through the AJOT Instagram account and monthly AJOT Authors & Issues discussions. With its dedication to rigorous research and meaningful translation, AJOT remains a crucial resource for occupational therapy professionals striving to make evidence-based decisions.
    DOI:  https://doi.org/10.5014/ajot.2023.077602