bims-skolko Biomed News
on Scholarly communication
Issue of 2024‒04‒14
twenty papers selected by
Thomas Krichel, Open Library Society



  1. J Eval Clin Pract. 2024 Apr 07.
      
    Keywords:  academia; medical science; publishing; research ethics; research integrity; technology
    DOI:  https://doi.org/10.1111/jep.13989
  2. PLoS One. 2024;19(4): e0300710
      How do author perceptions match up to the outcomes of the peer-review process and the perceptions of others? In a top-tier computer science conference (NeurIPS 2021) with more than 23,000 submitting authors and 9,000 submitted papers, we surveyed the authors on three questions: (i) their predicted probability of acceptance for each of their papers, (ii) their perceived ranking of their own papers based on scientific contribution, and (iii) the change in their perception of their own papers after seeing the reviews. The salient results are: (1) Authors had roughly a three-fold overestimate of the acceptance probability of their papers: the median prediction was 70% for an approximately 25% acceptance rate. (2) Female authors exhibited a marginally higher (statistically significant) miscalibration than male authors; predictions of authors invited to serve as meta-reviewers or reviewers were similarly calibrated to one another, but better calibrated than those of authors who were not invited to review. (3) Authors' relative ranking of the scientific contribution of two submissions they made generally agreed with their predicted acceptance probabilities (93% agreement), but in a notable 7% of responses authors predicted a worse outcome for their better paper. (4) The author-provided rankings disagreed with the peer-review decisions about a third of the time; when co-authors ranked their jointly authored papers, co-authors disagreed at a similar rate, about a third of the time. (5) For both accepted and rejected papers, at least 30% of respondents said that their perception of their own paper improved after the review process. The stakeholders in peer review should take these findings into account in setting their expectations of peer review.
    DOI:  https://doi.org/10.1371/journal.pone.0300710
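    The roughly three-fold overestimate reported above is simply the ratio of the median predicted acceptance probability to the observed acceptance rate. A minimal sketch of that comparison, in Python with invented survey responses rather than the NeurIPS 2021 data:

      # Sketch of the calibration comparison; the tuples are invented placeholders,
      # not NeurIPS 2021 responses: (predicted acceptance probability, 1 if accepted).
      import statistics

      predictions = [
          (0.80, 0), (0.70, 1), (0.65, 0), (0.90, 0),
          (0.50, 1), (0.75, 0), (0.60, 0), (0.70, 0),
      ]

      median_prediction = statistics.median(p for p, _ in predictions)
      acceptance_rate = sum(accepted for _, accepted in predictions) / len(predictions)
      overestimate_factor = median_prediction / acceptance_rate

      print(f"Median predicted acceptance: {median_prediction:.0%}")    # 70%
      print(f"Observed acceptance rate:    {acceptance_rate:.0%}")      # 25%
      print(f"Overestimate factor:         {overestimate_factor:.1f}x") # 2.8x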
  3. Neurosurgery. 2024 Apr 08.
      BACKGROUND AND OBJECTIVES: Financial conflicts of interest between editorial board members and industry could lead to bias and partial editorial decisions. We aimed to evaluate the frequency, amount, and characteristics of payments to editorial board members of neurosurgery journals over a 6-year period. METHODS: In this cross-sectional study, editorial board members were drawn from the top 10 neurosurgery journals based on Google Scholar metrics. The Open Payments database of the Centers for Medicare and Medicaid Services was accessed to evaluate industry payments to editorial board members from 2017 to 2022. Descriptive analyses were performed on payment data, adjusted for inflation using consumer price indices.
    RESULTS: We included 805 editorial board members. After excluding duplicate names, 342 (53.9%) of 634 had received payments between 2017 and 2022. Eight of the 10 journals had more than 50% of their editorial board members listed in the Open Payments database. Between 2017 and 2022, total payments to editorial board members amounted to $143 732 057, encompassing $1 323 936 in research payments, $69 122 067 in associated research funding, $5 380 926 in ownership and investment interests, and $67 905 128 in general payments. General payments decreased from $13 676 382 in 2017 to $8 528 003 in 2022. Royalties ($43 393 697) and consulting fees ($13 157 934) contributed the most to general payments between 2017 and 2022. Four journals showed an increase in total payments, whereas general payments decreased for six journals.
    CONCLUSION: Around 54% of editorial board members of neurosurgical journals received industry payments between 2017 and 2022. We identified journal-specific trends in industry payments and highlighted the importance of transparency and disclosure of financial conflicts of interest for neurosurgery journals.
    DOI:  https://doi.org/10.1227/neu.0000000000002934
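    The methods above state only that payment totals were adjusted for inflation using consumer price indices; the usual approach is to rescale each nominal payment to a reference year by the ratio of CPI values. A minimal sketch under that assumption, with approximate CPI figures and invented payment amounts:

      # CPI-based adjustment of nominal payments to reference-year (2022) dollars.
      # CPI values are approximate annual averages; the payments are invented examples.
      ANNUAL_CPI = {2017: 245.1, 2018: 251.1, 2019: 255.7,
                    2020: 258.8, 2021: 271.0, 2022: 292.7}
      REFERENCE_YEAR = 2022

      def adjust_to_reference(amount_usd: float, year: int) -> float:
          """Convert a nominal payment into reference-year dollars."""
          return amount_usd * ANNUAL_CPI[REFERENCE_YEAR] / ANNUAL_CPI[year]

      payments = [(2017, 12_500.00), (2019, 3_200.00), (2022, 8_000.00)]
      total_adjusted = sum(adjust_to_reference(amount, year) for year, amount in payments)
      print(f"Total in {REFERENCE_YEAR} dollars: ${total_adjusted:,.2f}")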
  4. West J Emerg Med. 2024 Mar;25(2): 254-263
      Introduction: Despite the importance of peer review to publications, there is no generally accepted approach for editorial evaluation of a peer review's value to a journal editor's decision-making. The graduate medical education editors of the Western Journal of Emergency Medicine Special Issue in Educational Research & Practice (Special Issue) developed and studied the holistic editor's scoring rubric (HESR) with the objective of assessing the quality of a review, with emphasis on the degree to which it informs a holistic appreciation of the submission under consideration. Methods: Using peer-review guidelines from several journals, the Special Issue's editors formulated the rubric as descriptions of peer reviews of varying degrees of quality, from the ideal to the unacceptable. Once a review was assessed by each editor using the rubric, the score was submitted to a third party for blinding purposes. We compared the performance of the new rubric to a previously used semantic differential scale instrument. Kane's validity framework guided the evaluation of the new scoring rubric around three basic assumptions: improved distribution of scores; relative consistency rather than absolute inter-rater reliability across editors; and statistical evidence that editors valued the peer reviews that contributed most to their decision-making.
    Results: Ninety peer reviews were the subject of this study; all were assessed by two editors. Compared to the highly skewed distribution of the prior rating scale, the distribution of the new scoring rubric was bell-shaped and demonstrated full use of the rubric scale. Absolute agreement between editors was low to moderate, while relative consistency between editors' rubric ratings was high. Finally, we showed that the recommendations of higher-rated peer reviews were more likely to concur with the editor's formal decision.
    Conclusion: Early evidence regarding the HESR supports the use of this instrument in determining the quality of peer reviews as well as its relative importance in informing editorial decision-making.
    DOI:  https://doi.org/10.5811/westjem.18432
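    The contrast above between low absolute agreement and high relative consistency is easy to see with two raters whose scores differ by a constant offset. A small sketch with invented rubric scores, not the study's data:

      # Absolute agreement vs. relative (rank-order) consistency for two raters.
      # The scores are invented for illustration only.
      from scipy.stats import spearmanr

      editor_a = [2, 3, 4, 5, 3, 4, 5, 2, 4, 3]
      editor_b = [3, 4, 5, 6, 4, 5, 6, 3, 5, 4]   # same ordering, shifted up by one point

      exact_agreement = sum(a == b for a, b in zip(editor_a, editor_b)) / len(editor_a)
      consistency, _ = spearmanr(editor_a, editor_b)

      print(f"Absolute agreement:   {exact_agreement:.0%}")   # 0% (no identical scores)
      print(f"Spearman consistency: {consistency:.2f}")       # 1.00 (same relative ordering)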
  5. Nature. 2024 Apr 10.
      
    Keywords:  Machine learning; Peer review; Publishing
    DOI:  https://doi.org/10.1038/d41586-024-01051-2
  6. Arthroscopy. 2024 Apr 09. pii: S0749-8063(24)00271-8. [Epub ahead of print]
      Authors may have the misconception that the purpose of peer review is to serve as an arbiter or referee, or in other words, to make a binary decision, Accept After Revision versus Reject, on whether an article will be published in our journal. In truth, while making that difficult decision is part of the process, it is only a part. The principal goal of peer review is to make articles better.
    DOI:  https://doi.org/10.1016/j.arthro.2024.04.003
  7. J Obstet Gynaecol Res. 2024 Apr 08.
      AIM: ChatGPT's role in medical writing is a topic of discussion. I tested whether ChatGPT can almost automatically generate a Correspondence or Letter addressed to a "translated" article, and thereby wish to stimulate discussion regarding ChatGPT use in medical writing. METHODS: I input an English article of mine into ChatGPT, tasking it with generating an English Disagreement Letter (Letter 1). Next, I tasked ChatGPT with translating the target manuscript from English to French to Spanish to German. Then, I once again tasked ChatGPT with generating an English Disagreement Letter, this time addressed to the German manuscript (the triply translated manuscript) (Letter 2).
    RESULTS: Letters 1 and 2 are readable and reasonable, hitting the very point that the author (myself) considered the weakness of the article. Letters addressed to the French (single translation) and Spanish (double translation) versions, as well as longer Letters (corresponding to Letters 1 and 2), are also readable and thus also stand.
    CONCLUSIONS: Based solely on this experiment, one may be able to write a letter even without understanding the meaning of the paper being addressed, let alone its language. Although this humble experiment does not allow firm conclusions, I call for a comprehensive discussion of the implications of these findings.
    Keywords:  ChatGPT; artificial intelligence; correspondence; manuscript; writing
    DOI:  https://doi.org/10.1111/jog.15948
  8. Food Technol Biotechnol. 2024 Mar;62(1): 127-129
      
  9. Invest Educ Enferm. 2023 Nov;41(3):
      Objective: Drawing on my experience as a member of the editorial board of the journal Investigación y Educación en Enfermería, I explain the implications and scope of participating in this body and the mutual, reciprocal benefits of the academic interaction between editorial board members and the journal. Content synthesis: The key elements of the operation, composition, tasks, and responsibilities of editorial boards in disseminating scientific research across disciplines are analyzed and described, highlighting the rigor and commitment to academic ethics that guarantee the credibility of the content published and the topics addressed by a journal, in a context of high competitiveness and risk of breaches of academic and scientific probity and ethics.
    Conclusion: Serving on an editorial board entails a fundamental role that carries a series of commitments and challenges, which must be met with professionalism and ethics to guarantee the quality and prestige of the academic publication. In this task, achievements and goals are attained for the journal, along with academic benefits for the editorial board members.
    Keywords:  editorial policies; periodicals as topic; scholarly communication; scientific and technical publications
    DOI:  https://doi.org/10.17533/udea.iee.v41n3e13
  10. Nature. 2024 Apr 08.
      
    Keywords:  Authorship; Careers; Machine learning; Peer review; Publishing
    DOI:  https://doi.org/10.1038/d41586-024-01042-3
  11. Adv Pharm Bull. 2024 Mar;14(1): 1-4
      Purpose: Academic and other researchers have limited tools with which to address the current proliferation of predatory and hijacked journals. These journals can have negative effects on science, research funding, and the dissemination of information. As most predatory and hijacked journals are not error free, this study used ChatGPT, an artificial intelligence (AI) technology tool, to conduct an evaluation of journal quality. Methods: Predatory and hijacked journals were analyzed using ChatGPT, and the reliability of the results is discussed.
    Results: The results show that ChatGPT is an unreliable tool for evaluating the quality of both hijacked and predatory journals.
    Conclusion: To show how this gap might be addressed, an early trial version of the Journal Checker Chatbot has been developed and is discussed as an alternative chatbot that can assist researchers in detecting hijacked journals.
    Keywords:  Artificial intelligence; ChatGPT; Hijacked journals; Language models; Predatory journals; Research ethics
    DOI:  https://doi.org/10.34172/apb.2024.020
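    The abstract does not describe how the Journal Checker Chatbot works internally. As one hedged illustration of the general idea, a hijacked-journal check can compare the website a researcher was given against a curated record of the journal's legitimate domain; the ISSN, title, and domains below are invented placeholders:

      # Hypothetical sketch of a hijacked-journal check against a curated whitelist.
      # The ISSN, title, and domains are invented placeholders, not real records.
      from urllib.parse import urlparse

      KNOWN_JOURNALS = {
          "1234-5678": {"title": "Example Journal of Science", "domain": "examplejournal.org"},
      }

      def check_journal(issn: str, claimed_url: str) -> str:
          record = KNOWN_JOURNALS.get(issn)
          if record is None:
              return "Unknown ISSN: verify the journal through an authoritative index."
          claimed_domain = urlparse(claimed_url).netloc.lower().removeprefix("www.")
          if claimed_domain != record["domain"]:
              return f"Warning: {claimed_url} does not match the recorded site for {record['title']}."
          return "Domain matches the curated record."

      print(check_journal("1234-5678", "https://examplejournal-online.com/submit"))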
  12. J Arthroplasty. 2024 Apr 05. pii: S0883-5403(24)00322-X. [Epub ahead of print]
      Open access journals provide wider and faster dissemination of information, but they also create the potential for readers to have more difficulty assessing the validity of publications. This rapid expansion of published research brings with it the risk of prioritizing quantity over quality and of posing risks to patients if decisions are influenced by invalid research. With the expansion of the orthopaedic literature in multiple formats, it is our responsibility to shape our evidence-based clinical decision-making in total joint arthroplasty according to the quality of the publications, relying on methodologically sound studies to guide our choices for diagnosis and treatment. This changing paradigm in medical journal publishing has the advantage of advancing knowledge, but it remains our obligation to be the guardians of the dissemination of accurate and valid information.
    DOI:  https://doi.org/10.1016/j.arth.2024.04.010
  13. J Clin Anesth. 2024 Apr 09. pii: S0952-8180(24)00084-9. [Epub ahead of print] 111455
      
    DOI:  https://doi.org/10.1016/j.jclinane.2024.111455
  14. OMICS. 2024 Apr 08.
      This concise review and analysis offers an initial unpacking of a previously under-recognized issue in the microRNA research and communications field: the inadvertent use of "has" instead of "hsa" in microRNA nomenclature in article titles. This subtle change, often the result of grammar autocorrection tools, introduces considerable ambiguity and confusion among readers and researchers in the reporting of microRNA-related discoveries. The impact of this issue should not be underestimated, as precise and consistent nomenclature is vital for science communication, for computational retrieval of relevant scientific literature, and for advancing science and innovation. We suggest that recognition and correction of these often inadvertent "hsa" to "has" substitution errors are timely and important so as to ensure a higher level of accuracy throughout the writing and publication process, in the microRNA field in particular. Doing so will also contribute to clarity and consistency in the field of microRNA research, ultimately improving scientific veracity, communication, and progress.
    Keywords:  knowledge translation; microRNA; nomenclature; omics; science communication; software autocorrection
    DOI:  https://doi.org/10.1089/omi.2023.0248
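    A simple automated screen can catch the "hsa" to "has" substitution discussed above before publication. The regex below is a minimal sketch written for this summary, not a tool from the article:

      # Flags likely autocorrection errors where the Homo sapiens prefix "hsa-"
      # was changed to "has-" in microRNA identifiers. Illustrative screen only.
      import re

      HAS_MIR_ERROR = re.compile(r"\bhas-(miR|let)-\d+[a-z]?(-[35]p)?\b", re.IGNORECASE)

      def flag_hsa_errors(text: str) -> list[str]:
          """Return suspicious 'has-' identifiers found in a title or abstract."""
          return [match.group(0) for match in HAS_MIR_ERROR.finditer(text)]

      title = "Role of has-miR-21-5p and hsa-let-7a in tumour suppression"
      print(flag_hsa_errors(title))   # ['has-miR-21-5p']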