bims-skolko Biomed News
on Scholarly communication
Issue of 2023-04-16
27 papers selected by
Thomas Krichel, Open Library Society



  1. Korean J Intern Med. 2023 Apr 13.
      Preprints are preliminary research reports that have not yet been peer-reviewed. They have been widely adopted to promote the timely dissemination of research across many scientific fields. In August 1991, Paul Ginsparg launched an electronic bulletin board intended to serve a few hundred colleagues working in a subfield of theoretical high-energy physics; this became arXiv, the first and largest preprint platform. Additional preprint servers have since been implemented in different academic fields, such as bioRxiv (2013, biology; www.biorxiv.org) and medRxiv (2019, health sciences; www.medrxiv.org). While preprint availability has made valuable research resources accessible to the general public, thus bridging the gap between academic and non-academic audiences, it has also facilitated the spread of unsupported conclusions through various media channels. Issues surrounding a journal's preprint policies must ultimately be addressed by editors; they include the acceptance of preprint manuscripts, allowing the citation of preprints, maintaining a double-blind peer review process, changes to a preprint's content and author list, scoop priorities, commenting on preprints, and preventing the influence of social media. Editors must be able to deal with these issues adequately to maintain the scientific integrity of their journal. In this review, the history, current status, strengths and weaknesses of preprints, and ongoing concerns regarding journal articles with preprints are discussed. An optimal approach to preprints is suggested for editorial board members, authors, and researchers.
    Keywords:  Peer review; Preprint; Research report; medRxiv
    DOI:  https://doi.org/10.3904/kjim.2023.099
  2. PLoS One. 2023;18(4): e0284212
      It is common in scientific publishing to ask authors to suggest reviewers for their own manuscripts. The question then arises: how many submissions are needed to discover friendly suggested reviewers? Because the data we would need are anonymized, we present an agent-based simulation of (single-blind) peer review to generate synthetic data. We then use a Bayesian framework to classify suggested reviewers. To establish a lower bound on the number of submissions needed, we create an optimistically simple model that should allow us to more readily deduce the degree of friendliness of a reviewer. Despite this model's optimistic conditions, we find that one would need hundreds of submissions to classify even a small reviewer subset. Thus, classification is virtually infeasible under realistic conditions. This ensures that the peer review system is sufficiently robust to allow authors to suggest their own reviewers.
    DOI:  https://doi.org/10.1371/journal.pone.0284212
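      A minimal Python sketch of the kind of Bayesian updating described in entry 2 (this is not the authors' actual simulation or classifier; the acceptance probabilities and the prior below are invented purely for illustration):

      P_ACCEPT_NEUTRAL = 0.3   # hypothetical acceptance rate of a neutral reviewer
      P_ACCEPT_FRIENDLY = 0.7  # hypothetical acceptance rate of a friendly reviewer
      PRIOR_FRIENDLY = 0.5     # prior belief that a suggested reviewer is friendly

      def posterior_friendly(recommendations, prior=PRIOR_FRIENDLY):
          """Update P(friendly) from a sequence of accept/reject recommendations.

          recommendations: iterable of booleans, True = recommended acceptance.
          """
          p = prior
          for accepted in recommendations:
              like_friendly = P_ACCEPT_FRIENDLY if accepted else 1 - P_ACCEPT_FRIENDLY
              like_neutral = P_ACCEPT_NEUTRAL if accepted else 1 - P_ACCEPT_NEUTRAL
              # Bayes' rule for the two competing hypotheses
              p = like_friendly * p / (like_friendly * p + like_neutral * (1 - p))
          return p

      # Five straight "accept" recommendations under these generous parameters
      print(posterior_friendly([True] * 5))                        # about 0.99
      # A mixed record is far less conclusive
      print(posterior_friendly([True, False, True, True, False]))  # about 0.70

      Under such generously separated parameters a handful of recommendations is informative; the paper's point is that with realistic, overlapping reviewer behaviour, hundreds of submissions would be required, making the classification practically infeasible.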
  3. Nature. 2023 Apr;616(7956): 219
      
    Keywords:  Institutions; Policy; Research management; Scientific community
    DOI:  https://doi.org/10.1038/d41586-023-00973-7
  4. Res Integr Peer Rev. 2023 Apr 11. 8(1): 1
      Integrity, and trust in that integrity, are fundamental to academic research. However, procedures for monitoring the trustworthiness of research, and for investigating cases where concerns about possible data fraud have been raised, are not well established. Here we suggest a practical approach, based on Benford's Law, for the investigation of work suspected of fraudulent data manipulation. This should be of value to both individual peer reviewers and academic institutions and journals. In this, we draw inspiration from well-established practices of financial auditing. We provide a synthesis of the literature on tests of adherence to Benford's Law, culminating in advice to apply a single initial test to the digits in each position of the numerical strings within a dataset. We also recommend further tests that may prove useful in the event that specific hypotheses regarding the nature of the data manipulation can be justified. Importantly, our advice differs from the most common current implementations of tests of Benford's Law. Furthermore, we apply the approach to previously published data, highlighting the efficacy of these tests in detecting known irregularities. Finally, we discuss the results of these tests, with reference to their strengths and limitations.
    Keywords:  Animal behaviour; Benford’s Law; Benford’s Law tests; Peer review; Retracted article testing; Scientific misconduct
    DOI:  https://doi.org/10.1186/s41073-022-00126-w
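      The article in entry 4 recommends a specific initial test applied to digits in each position; purely as a generic illustration of Benford-style screening (not the authors' recommended procedure), the following Python sketch compares the first-digit distribution of a dataset against the Benford expectation using a chi-square goodness-of-fit test. The data values are made up.

      import math
      from collections import Counter
      from scipy.stats import chisquare

      def first_digit(x):
          """Leading (nonzero) digit of a nonzero number."""
          x = abs(x)
          while x < 1:
              x *= 10
          while x >= 10:
              x /= 10
          return int(x)

      def benford_first_digit_test(values):
          """Chi-square test of observed first digits against Benford's Law."""
          observed_counts = Counter(first_digit(v) for v in values if v != 0)
          observed = [observed_counts.get(d, 0) for d in range(1, 10)]
          n = sum(observed)
          expected = [n * math.log10(1 + 1 / d) for d in range(1, 10)]
          return chisquare(observed, f_exp=expected)

      # Hypothetical values; in practice these would be the numbers reported in a
      # paper, and many more would be needed for the chi-square approximation to hold.
      data = [123.4, 17.2, 1890, 2.9, 31.5, 47.0, 112, 95.1, 6.4, 13.0,
              108, 222, 1.7, 18.4, 29.9, 150, 3.3, 77.7, 11.2, 19.5]
      stat, p = benford_first_digit_test(data)
      print(f"chi-square = {stat:.2f}, p = {p:.3f}")  # a small p flags deviation from Benford

      As the article stresses, a deviation from Benford's Law is a prompt for closer scrutiny, not proof of fraud.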
  5. Brain Behav Immun. 2023 Apr 08. pii: S0889-1591(23)00097-1. [Epub ahead of print] 111: 124
      
    DOI:  https://doi.org/10.1016/j.bbi.2023.04.004
  6. Ann Rheum Dis. 2023 Apr 11. pii: ard-2023-223936. [Epub ahead of print]
      In this editorial, we discuss the place of artificial intelligence (AI) in the writing of scientific articles, and especially editorials. We asked ChatGPT "to write an editorial for Annals of Rheumatic Diseases about how AI may replace the rheumatologist in editorial writing". ChatGPT's response is diplomatic and describes AI as a tool to help, but not replace, the rheumatologist. AI is already used in medicine, especially in image analysis, but the possible domains are countless, and it is possible that AI could quickly help or replace rheumatologists in the writing of scientific articles. We discuss the ethical aspects and the future role of rheumatologists.
    Keywords:  Health services research; Patient Care Team; Qualitative research; Social work
    DOI:  https://doi.org/10.1136/ard-2023-223936
  7. Clin Nurse Spec. 2023 May-Jun 01;37(3): 109-110
      
    DOI:  https://doi.org/10.1097/NUR.0000000000000750
  8. Ned Tijdschr Geneeskd. 2023 Apr 12. pii: D7496. [Epub ahead of print] 167
      In this article, we describe the process - from the first draft, through peer review, to a final manuscript - of writing a scientific article using only AI. We discuss the problems and questions that arise and make recommendations for how text-generative AI may be used in the medical-scientific world.
  9. J Pharm Pharm Sci. 2023;26: 11349
      
    Keywords:  ChatGPT; artificial; evolution; intelligence; pharmaceutical
    DOI:  https://doi.org/10.3389/jpps.2023.11349
  10. Resuscitation. 2023 Apr 12. pii: S0300-9572(23)00108-9. [Epub ahead of print] 109795
      
    Keywords:  CPR; ChatGPT; academic publishing; cardiopulmonary resuscitation; journal selection
    DOI:  https://doi.org/10.1016/j.resuscitation.2023.109795
  11. Ir J Med Sci. 2023 Apr 14.
      This letter to the editor points out weaknesses in the editorial policies of some academic journals regarding the use of ChatGPT-generated content. Editorial policies should specify in more detail which parts of an academic paper may use ChatGPT-generated content. If authors use ChatGPT-generated content in the conclusion or results section, it may harm the paper's originality and should therefore not be accepted.
    Keywords:  Academic journals; Research ethics; Research integrity; Research originality; Scholarly publishing
    DOI:  https://doi.org/10.1007/s11845-023-03374-x
  12. J Food Sci. 2023 Apr;88(4): 1219-1220
      
    DOI:  https://doi.org/10.1111/1750-3841.16568
  13. Adv Biomed Res. 2023;12: 41
       Background: With the rise of personalized medicine and the development of e-publishing, a large number of journals dedicated to case reports have emerged. However, the lack of integrated guidelines is a major obstacle to the quality of this evidence. The purpose of this study is to analyze the reporting requirements of case report-dedicated journals in order to update and strengthen the CARE guidelines.
    Material and Methods: A combined quantitative and qualitative research approach was used, based on the content analysis method. All case report-dedicated journals indexed in Scopus were selected (54 out of a total of 68 journals). On these journals' websites, the full contents of the authors' guideline section and two sample articles were examined as the unit of analysis. Quantitative data comprise frequencies and percentages; the qualitative analysis was conducted through open coding, category creation, and abstraction.
    Results: 51% of the journals belong to Elsevier and Hindawi. 14.8% of the journals were launched as companion journals. 52% of the journals endorse the CARE guidelines. Among the CARE elements, title and consent form (100%), discussion and abstract (94.4%), and introduction (90.7%) appeared most frequently in the authors' guidelines, whereas timeline and patient perspective appeared least often. In addition, 19 new reporting elements and 27 types of case reports were identified.
    Conclusions: Improving the reporting and content quality of case reports is essential if they are to benefit from knowledge synthesis services. Medical journals publishing case reports should follow a more integrated process, and an updated version of the reporting guidelines needs to be made available to publishers and journal editors.
    Keywords:  Authors’ guideline; CARE; authors’ instructions; case reports; guideline as topic; medical journalism; reporting guideline
    DOI:  https://doi.org/10.4103/abr.abr_391_21
  14. PLoS One. 2023;18(4): e0284243
      Sharing research data allows the scientific community to verify and build upon published work. However, data sharing is not yet common practice. The reasons for not sharing data are myriad: some are practical, others are more fear-related. One particular fear is that a reanalysis may expose errors. For this explanation, it would be interesting to know whether authors who do not share data genuinely made more errors than authors who do share data. Wicherts, Bakker and Molenaar (2011) examined errors that can be discovered from the published manuscript alone, because it is impossible to reanalyze unavailable data. They found a higher prevalence of such errors in papers for which the data were not shared. However, Nuijten et al. (2017) did not find support for this finding in three large studies. To shed more light on this relation, we conducted a replication of the study by Wicherts et al. (2011). Our study consisted of two parts. In the first part, we reproduced the analyses from Wicherts et al. (2011) to verify the results, and we carried out several alternative analytical approaches to evaluate the robustness of the results against other analytical decisions. In the second part, we used a unique and larger data set on data sharing upon request for reanalysis, originating from Vanpaemel et al. (2015), to replicate the findings of Wicherts et al. (2011). We applied statcheck to detect consistency errors in all included papers and manually corrected false positives. Finally, we again assessed the robustness of the replication results against other analytical decisions. Taken together, we found no robust empirical evidence for the claim that not sharing research data for reanalysis is associated with consistency errors.
    DOI:  https://doi.org/10.1371/journal.pone.0284243
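      statcheck, mentioned in entry 14, is an R package that extracts APA-style test statistics from papers and recomputes their p values. Purely as an illustration of the underlying consistency check (not statcheck's actual implementation, which accounts for rounding of the reported values), a Python sketch using a simple numeric tolerance might look like this:

      from scipy.stats import t as t_dist

      def check_t_report(t_value, df, reported_p, tol=0.005):
          """Recompute the two-tailed p value of a reported t test and flag mismatches."""
          recomputed_p = 2 * t_dist.sf(abs(t_value), df)
          return recomputed_p, abs(recomputed_p - reported_p) <= tol

      # Hypothetical consistent report: "t(28) = 2.20, p = .04"
      p, ok = check_t_report(t_value=2.20, df=28, reported_p=0.04)
      print(f"recomputed p = {p:.3f}, consistent: {ok}")   # about 0.036 -> consistent

      # Hypothetical inconsistent report: "t(28) = 2.20, p = .01"
      p, ok = check_t_report(t_value=2.20, df=28, reported_p=0.01)
      print(f"recomputed p = {p:.3f}, consistent: {ok}")   # flagged as inconsistent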
  15. Nat Methods. 2023 Apr;20(4): 471
      
    DOI:  https://doi.org/10.1038/s41592-023-01865-4
  16. Thorac Cardiovasc Surg. 2023 Apr 09.
       OBJECTIVE:  To evaluate the experience with a new peer review method, "Select Crowd Review" (SCR): anonymized PDFs of manuscripts are made accessible to a reviewer crowd via an online platform. The crowd has access for 10 days to enter anonymized comments directly into the manuscript. An SCR editor summarizes the annotations and gives a recommendation. Both the reviewed PDF and the summary are sent back to the authors. Upon submission, authors are given the choice to accept or decline SCR.
    DESIGN:  All manuscripts submitted from the method's introduction in July 2021 until July 2022 were analyzed regarding acceptance and quality. Manuscripts were sent to a crowd of 45 reviewers and to regular double-blind peer review at the same time. The efficiency and performance of the crowd's reviews were compared with those of regular review. For thoracic manuscripts, a crowd was not yet available.
    RESULTS:  SCR was accepted by the authors for 73/179 manuscripts (40.8%). After desk rejections, 51 cardiac manuscripts entered SCR. For five manuscripts, the crowd did not respond. For all remaining papers, the crowd's recommendation concurred with that of the regular reviewers. Regular peer review took up to 6 weeks. Twelve manuscripts underwent repeated SCR after revision. A median of 2 (range 0-9) crowd members sent in reviews; for revisions, on average one reviewer responded.
    CONCLUSIONS:  SCR met with good acceptance by authors. As this first experience showed recommendations concordant with those of traditional review, we have extended SCR to thoracic manuscripts to gain more experience. SCR may become the sole review method for eligible manuscripts. Efficiency should be increased, especially for re-review of revisions.
    DOI:  https://doi.org/10.1055/s-0043-1768032