bims-skolko Biomed News
on Scholarly communication
Issue of 2023‒09‒03
thirty-two papers selected by
Thomas Krichel, Open Library Society



  1. BMJ Evid Based Med. 2023 Sep 01. pii: bmjebm-2023-112456. [Epub ahead of print]
      
    Keywords:  Ethics; Neoplasms; Public health; Policy; Publishing
    DOI:  https://doi.org/10.1136/bmjebm-2023-112456
  2. Adv Sci (Weinh). 2023 Aug 30. e2303226
      There is growing recognition that animal methods bias, a preference for animal-based methods where they are not necessary or where nonanimal-based methods may already be suitable, can affect the likelihood or timeliness of a manuscript being accepted for publication. Following an April 2022 workshop on animal methods bias in scientific publishing, a coalition of scientists and advocates formed the Coalition to Illuminate and Address Animal Methods Bias (COLAAB). The COLAAB has developed this guide for authors who use nonanimal methods, to help them avoid and respond to animal methods bias from manuscript reviewers. It contains information that researchers may use during 1) study design, including how to find and select appropriate nonanimal methods and preregister a research plan, 2) manuscript preparation and submission, including tips for discussing methods and for choosing journals and reviewers that may be more receptive to nonanimal methods, and 3) the peer review process, providing suggested language and literature to aid authors in responding to biased reviews. The authors' guide for addressing animal methods bias in publishing is a living resource, also available online at animalmethodsbias.org, which aims to help ensure fair dissemination of research that uses nonanimal methods and to prevent unnecessary experiments on animals.
    Keywords:  alternatives to animal testing; animal methods bias; peer review; publishing
    DOI:  https://doi.org/10.1002/advs.202303226
  3. Account Res. 2023 Aug 31. 1-6
      Yamada and Teixeira da Silva voiced valid concerns about the inadequacies of an online machine learning-based tool for detecting predatory journals and stressed the urgent need for an automated, open, online semi-quantitative system that measures "predatoriness". We agree that the said machine learning-based tool lacks accuracy in its demarcation and identification of journals outside those already found within existing black and white lists, and that its use could have an undesirable impact on the community. We note further that the key characteristic of predatory journals, namely a lack of stringent peer review, would normally not have the visibility necessary for training and informing machine learning-based online tools. This, together with the gray zone of inadequate scholarly practice and the plurality of authors' perceptions of predatoriness, makes it desirable for any machine-based, quantitative assessment to be complemented or moderated by a community-based, qualitative assessment that would do more justice to both journals and authors.
    Keywords:  Predatoriness; peer review; predatory journals; scholarly publishing
    DOI:  https://doi.org/10.1080/08989621.2023.2253425
  4. JMIR Dermatol. 2022 Sep 14. 5(3): e39365
      BACKGROUND: Predatory publishing is a deceptive form of publishing that uses unethical business practices, minimal to no peer review, or limited editorial oversight to publish articles. It may be problematic for our highest standard of scientific evidence, the systematic review, through the inclusion of poor-quality and unusable data, which could distort results, challenge outcomes, and undermine confidence. Thus, there is growing concern about the effects predatory publishing may have on scientific research and clinical decision-making.
    OBJECTIVE: The objective of this study was to evaluate whether systematic reviews published in top dermatology journals contain primary studies published in suspected predatory journals (SPJs).
    METHODS: We searched PubMed for systematic reviews published in the top five dermatology journals (determined by 5-year h-indices) between January 1, 2019, and May 24, 2021. Primary studies were extracted from each systematic review, and the publishing journal of each primary study was cross-referenced against Beall's List and the Directory of Open Access Journals. Screening and data extraction were performed in a masked, duplicate fashion. We performed chi-square tests to determine possible associations between a systematic review's inclusion of a primary study published in an SPJ and particular study characteristics.
    RESULTS: Our randomized sample included 100 systematic reviews, of which 31 (31%) were found to contain a primary study published in an SPJ. Of the top five dermatology journals, the Journal of the American Academy of Dermatology had the most systematic reviews containing a primary study published in an SPJ. Systematic reviews containing a meta-analysis or a registered protocol were significantly less likely to contain a primary study published in an SPJ. No other study characteristics showed statistically significant associations.
    CONCLUSIONS: Studies published in SPJs are commonly included as primary studies in systematic reviews published in high-impact dermatology journals. Future research is needed to investigate the effects of including suspected predatory publications in scientific research.
    Keywords:  articles; data; dermatology; evidence synthesis; general dermatology; journals; meta-analysis; peer review; predatory journals; primary studies; publications; publishing; quality; research; scientific communication; systematic review
    DOI:  https://doi.org/10.2196/39365
  5. J Clin Epidemiol. 2023 Aug 25. pii: S0895-4356(23)00216-0. [Epub ahead of print]
      OBJECTIVE: Preprints became a major source of research communication during the COVID-19 pandemic. We aimed to evaluate whether summary treatment effect estimates differ between preprint and peer-reviewed journal trials.
    STUDY DESIGN AND SETTING: A meta-epidemiological study. Data were derived from the COVID-NMA living systematic review (covid-nma.com) up to July 20, 2022. We identified all meta-analyses evaluating pharmacological treatments vs. standard of care/placebo for patients with COVID-19 that included at least one preprint and one peer-reviewed journal article. Differences in effect estimates between preprint and peer-reviewed journal trials were estimated by the ratio of odds ratios (ROR); an ROR < 1 indicated larger effects in preprint trials.
    RESULTS: Thirty-seven meta-analyses including 114 trials (44 preprints, 70 peer-reviewed publications) were selected. The median number of RCTs per meta-analysis was 2 (IQR, 2-4; maximum, 11), and the median sample size of RCTs was 199 (IQR, 99-478). Overall, there was no statistically significant difference in summary effect estimates between preprint and peer-reviewed journal trials (ROR, 0.88; 95% CI, 0.71-1.09; I2 = 17.8%; τ2 = 0.06).
    CONCLUSION: We did not find an important difference between summary treatment effects of preprints and summary treatment effects of peer-reviewed publications. Systematic reviewers and guideline developers should assess preprint inclusion individually, accounting for risk of bias and completeness of reporting.
    Keywords:  COVID-19; Meta-analysis; Meta-epidemiology; Peer-review; Preprint; Randomized controlled trial
    DOI:  https://doi.org/10.1016/j.jclinepi.2023.08.011
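    Note (added in this digest, not part of the abstract): the measure above can be read, for each meta-analysis, as ROR = OR_preprint / OR_journal, i.e., the summary odds ratio from preprint trials divided by that from peer-reviewed journal trials, so an ROR below 1 means preprint trials reported larger apparent treatment effects; the pooled ROR of 0.88 (95% CI, 0.71-1.09) is therefore compatible with no systematic difference.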
  6. JAMA Netw Open. 2023 Aug 01. 6(8): e2331410
      Importance: Preprints have been increasingly used in biomedical science, and a key feature of many platforms is public commenting. The content of these comments, however, has not been well studied, and it is unclear whether they resemble those found in journal peer review.
    Objective: To describe the content of comments on the bioRxiv and medRxiv preprint platforms.
    Design, Setting, and Participants: In this cross-sectional study, preprints posted on the bioRxiv and medRxiv platforms in 2020 were accessed through each platform's application programming interface on March 29, 2021, and a random sample of preprints containing between 1 and 20 comments was evaluated independently by 3 evaluators using an instrument to assess their features and general content.
    Main Outcomes and Measures: The numbers and percentages of comments from authors and nonauthors were assessed, and nonauthor comments were coded for content: whether they included compliments, criticisms, corrections, suggestions, or questions; their topics (eg, relevance, interpretation, and methods); and whether they included references, provided a summary of the findings, or questioned the preprint's conclusions.
    Results: Of 52 736 preprints, 3850 (7.3%) received at least 1 comment (mean [SD] follow-up, 7.5 [3.6] months), and the 1921 assessed comments (from 1037 preprints) had a median length of 43 words (range, 1-3172 words). Criticisms, corrections, or suggestions were the most prevalent content, appearing in 694 of 1125 comments (61.7%), followed by compliments (n = 428 [38.0%]) and questions (n = 393 [35.0%]). Criticisms usually concerned interpretation (n = 286), methodological design (n = 267), and data collection (n = 238), while compliments were mainly about relevance (n = 111) and implications (n = 72).
    Conclusions and Relevance: In this cross-sectional study of preprint comments, topics commonly associated with journal peer review were frequent. However, only a small percentage of preprints posted on the bioRxiv and medRxiv platforms in 2020 received comments on these platforms. A clearer taxonomy of peer review roles would help to describe whether postpublication peer review fulfills them.
    DOI:  https://doi.org/10.1001/jamanetworkopen.2023.31410
  7. PeerJ. 2023 ;11 e15864
      The COVID-19 pandemic caused a rise in preprinting, triggered by the need for open and rapid dissemination of research outputs. We surveyed authors of COVID-19 preprints to learn about their experiences with preprinting their work and with publishing their work in a peer-reviewed journal. Our research had the following objectives: 1. to learn about authors' experiences with preprinting, their motivations, and future intentions; 2. to consider preprints in terms of their effectiveness in enabling authors to receive feedback on their work; 3. to compare the impact of feedback on preprints with the impact of comments from editors and reviewers on papers submitted to journals. In our survey, 78% of the new adopters of preprinting reported the intention to also preprint their future work. The boost in preprinting may therefore have a structural effect that lasts beyond the pandemic, although future developments will also depend on other factors, including the broader growth in the adoption of open science practices. A total of 53% of the respondents reported that they had received feedback on their preprints. However, more than half of the feedback was received through "closed" channels, that is, privately to the authors. This means that preprinting was a useful way to receive feedback on research, but the value of that feedback could be increased further by facilitating and promoting "open" channels for preprint feedback. Almost a quarter of the feedback received by respondents consisted of detailed comments, showing the potential of preprint feedback to provide valuable comments on research. Respondents also reported that, compared to preprint feedback, journal peer review was more likely to lead to major changes in their work, suggesting that journal peer review provides significant added value compared to feedback received on preprints.
    Keywords:  Covid-19 crisis; Feedback; Peer review; Preprints; Scholarly communication; Survey
    DOI:  https://doi.org/10.7717/peerj.15864
  8. Women Birth. 2023 Aug 25. pii: S1871-5192(23)00249-4. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.wombi.2023.08.002
  9. PLoS Biol. 2023 Aug;21(8): e3002238
      The Journal Impact Factor is often used as a proxy measure for journal quality, but empirical evidence for this is scarce. In particular, it is unclear how the peer review characteristics of a journal relate to its impact factor. We analysed 10,000 peer review reports submitted to 1,644 biomedical journals with impact factors ranging from 0.21 to 74.7. Two researchers hand-coded sentences using categories of content related to the thoroughness of the review (Materials and Methods, Presentation and Reporting, Results and Discussion, Importance and Relevance) and its helpfulness (Suggestion and Solution, Examples, Praise, Criticism). We fine-tuned and validated transformer machine learning language models to classify sentences. We then examined the association between the number and percentage of sentences addressing different content categories and 10 groups defined by the Journal Impact Factor. The median length of reviews increased with higher impact factor, from 185 words (group 1) to 387 words (group 10). The percentage of sentences addressing Materials and Methods was greater in the highest Journal Impact Factor journals than in the lowest Journal Impact Factor group. The results for Presentation and Reporting went in the opposite direction, with the highest Journal Impact Factor journals giving less emphasis to such content. For helpfulness, reviews for higher impact factor journals devoted relatively less attention to Suggestion and Solution than those for lower impact factor journals. In conclusion, peer review in journals with higher impact factors tends to be more thorough, particularly in addressing study methods, while giving relatively less emphasis to presentation or to suggesting solutions. Differences were modest and variability high, indicating that the Journal Impact Factor is a poor predictor of the quality of peer review of an individual manuscript.
    DOI:  https://doi.org/10.1371/journal.pbio.3002238
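    Note (added in this digest, not from the paper): for readers unfamiliar with the classification step described above, the sketch below outlines, in Python, how individual review sentences could be assigned to content categories with a transformer model from the Hugging Face transformers library. It is illustrative only: the checkpoint name is a placeholder base model whose classification head is untrained, and the label list simply mirrors the categories named in the abstract, so the printed prediction is meaningless until the model has been fine-tuned on hand-coded sentences.

      # Illustrative sketch, not the authors' pipeline; requires the
      # "transformers" and "torch" packages.
      import torch
      from transformers import AutoTokenizer, AutoModelForSequenceClassification

      LABELS = ["Materials and Methods", "Presentation and Reporting",
                "Results and Discussion", "Importance and Relevance",
                "Suggestion and Solution", "Examples", "Praise", "Criticism"]

      # Placeholder checkpoint; a real application would first fine-tune it
      # on sentences hand-coded with the labels above.
      name = "distilbert-base-uncased"
      tokenizer = AutoTokenizer.from_pretrained(name)
      model = AutoModelForSequenceClassification.from_pretrained(
          name, num_labels=len(LABELS))

      sentence = "The statistical methods should be described in more detail."
      inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
      with torch.no_grad():
          logits = model(**inputs).logits
      print(LABELS[int(logits.argmax(dim=-1))])  # placeholder prediction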
  10. Maturitas. 2023 Aug 25. pii: S0378-5122(23)00448-6. [Epub ahead of print] 107842
      
    DOI:  https://doi.org/10.1016/j.maturitas.2023.107842
  11. Nat Mater. 2023 Sep;22(9): 1047
      
    DOI:  https://doi.org/10.1038/s41563-023-01661-7
  12. Cont Lens Anterior Eye. 2023 Aug 27. pii: S1367-0484(23)00274-6. [Epub ahead of print] 102050
      
    DOI:  https://doi.org/10.1016/j.clae.2023.102050
  13. Korean J Radiol. 2023 Sep;24(9): 924-925
      
    Keywords:  Authorship; ChatGPT; Ethics; Large language model; Scientific writing
    DOI:  https://doi.org/10.3348/kjr.2023.0738
  14. J Med Internet Res. 2023 Aug 31. 25 e50591
      
    Keywords:  AI; Chat Generative Pre-trained Transformer; ChatGPT; artificial intelligence; ethics; fraudulent medical articles; language models; neurosurgery; publications
    DOI:  https://doi.org/10.2196/50591
  15. J Med Internet Res. 2023 Aug 31. 25 e51584
      The ethics of using generative artificial intelligence (AI) in scientific manuscript content creation has become a serious concern in the scientific publishing community. Generative AI is now computationally capable of elaborating research questions; refining programming code; generating text in scientific language; and generating images, graphics, or figures. However, this technology should be used with caution. In this editorial, we outline the current state of editorial policies on the use of generative AI or chatbots in authorship, peer review, and editorial processing of scientific and scholarly manuscripts. Additionally, we provide JMIR Publications' editorial policies on these issues. We further detail JMIR Publications' approach to the applications of AI in the editorial process for manuscripts under review in a JMIR Publications journal.
    Keywords:  AI; artificial intelligence; editorial; open access publishing; open science; publication policy; publishing; research; scholarly publishing; science editing; scientific publishing; scientific research
    DOI:  https://doi.org/10.2196/51584
  16. Front Artif Intell. 2023 ;6 1259407
      
    Keywords:  AI writing; culture; education; generative AI; hype
    DOI:  https://doi.org/10.3389/frai.2023.1259407
  17. J Korean Assoc Oral Maxillofac Surg. 2023 Aug 31. 49(4): 239-240
      
    DOI:  https://doi.org/10.5125/jkaoms.2023.49.4.239
  18. Am J Intellect Dev Disabil. 2023 Sep 01. 128(5): 386-387
      We respond to the recommendations made by Kover and Abbeduto in their article, "Toward Equity in Research on Intellectual and Developmental Disabilities," through a discussion of what journal editors should consider in advancing equitable processes for research with individuals with intellectual and developmental disabilities (IDD). We provide practical suggestions, drawn from our experience as co-editors, for promoting diversity in research partnerships with people with IDD.
    Keywords:  advocates; editor’s role; equity; research
    DOI:  https://doi.org/10.1352/1944-7558-128.5.386
  19. JMIR Dermatol. 2023 May 05. 6 e43256
      Gender disparities exist across all facets of academic medicine, including the editorial boards of dermatology journals. Women account for only 22% of the members of these editorial boards, even though 51% of full-time faculty dermatologists are women. When inviting academic dermatologists to our editorial board at JMIR Dermatology, we directed 50% of our invitations to women to reflect the gender distribution of academic dermatologists; however, we have not yet reached gender equity among accepted editorial board members. We will continue to strive toward the goal of gender equity on our editorial board and invite other dermatology journals to do the same.
    Keywords:  academia; dermatology; diversity; editorial board members; equality; equity; gender; gender equity; inclusion
    DOI:  https://doi.org/10.2196/43256
  20. Med Teach. 2023 Aug 26. 1-9
      PURPOSE OF THE ARTICLE: As editorial boards (EBs) of medical education journals (MEJs) hold substantial control over framing current medical education scholarship, we aimed to evaluate the representation of women as well as geographic and socioeconomic diversity on the EBs of these journals.
    MATERIALS AND METHODS: In our cross-sectional study, the Composite Editorial Board Diversity Score (CEBDS) was used to evaluate diversity in terms of gender, geographic region, and country income level. The websites of MEJs were screened for relevant information. Job titles were categorized into 3 editorial roles, and data were analyzed using SPSS version 26.
    RESULTS: Of the 42 MEJs, 19 (45.2%) were published from the Global South. Among 1219 editors, 57.5% were men. Of the 46 editors in chief (EICs), 34.7% were women and 60.9% were based in high-income countries; no EIC was based in a low-income country. The proportion of female advisory board members was positively correlated with the presence of a female EIC. Two journals achieved the maximum CEBDS. For 12 journals all editors belonged to the same World Bank income group, and for 8 journals all editors belonged to the same geographic region.
    CONCLUSIONS: If a truly global perspective in medical education is to prevail, diversity and inclusivity at these journals are important parameters to address. Promoting policies centered on improving diversity in all its aspects should therefore become a top priority.
    Keywords:  Medical education; gender equity; leadership; multiculturalism
    DOI:  https://doi.org/10.1080/0142159X.2023.2249212
  21. JMIR Dermatol. 2023 Mar 10. 6 e44217
      Dermatology as a whole suffers from minority underrepresentation. We searched the top 60 dermatology journals for any mention of an approach to increasing diversity, equity, and inclusion (DEI) within their publications through editorial board membership or peer-review processes. Of those 60 journals, only 5 had DEI statements or editorial board members dedicated to increasing DEI. Checklists and frameworks for increasing DEI have already been published in the literature. We propose that more journals implement these resources within their peer-review processes to increase diversity within their publications.
    Keywords:  dermatology; diversity; equity; inclusion
    DOI:  https://doi.org/10.2196/44217
  22. Indian J Dermatol Venereol Leprol. 2023 Sep-Oct;89(5): 645-646
      
    DOI:  https://doi.org/10.25259/IDJVL_809_2023
  23. J Nat Prod. 2023 Aug 28.
      Given that the essence of Science is a search for the truth, one might expect that those identifying as scientists would be conscientious and observant of the demands this places on them. However, that expectation is not fulfilled universally as, not too surprisingly, egregious examples of unethical behavior appear and are driven by money, personal ambition, performance pressure, and other incentives. The reproducibility-, fact-, and truth-oriented modus operandi of Science has come to face a variety of challenges. Organized into 11 cases, this article outlines examples of compromised integrity from borderline to blatant unethical behavior that disgrace our profession unnecessarily. Considering technological developments in neural networks/artificial intelligence, a host of factors are identified as impacting Good Ethical Practices. The goal is manifold: to raise awareness and offer perspectives for refocusing on Science and true scientific evidence; to trigger discussion and developments that strengthen ethical behavior; to foster the recognition of the beauty, simplicity, and rewarding nature of scientific integrity; and to highlight the originality of intelligence.
    Keywords:  Good Ethical Practices; dishonesty; ethics; honesty; integrity; reproducibility
    DOI:  https://doi.org/10.1021/acs.jnatprod.3c00165
  24. Sports Health. 2023 Sep-Oct;15(5): 629-630
      
    DOI:  https://doi.org/10.1177/19417381231192039
  25. Open Res Eur. 2023 ;3 22
      Registered reports are a publication format in which studies are peer reviewed both before and after the research procedures are carried out. Although registered reports were originally developed to address challenges in quantitative and confirmatory study designs, they are now also available for qualitative and exploratory work. This article provides a brief primer that aims to help researchers choose, design, and evaluate registered reports driven by qualitative methods.
    Keywords:  guidelines; open science; qualitative research; registered reports; transparency
    DOI:  https://doi.org/10.12688/openreseurope.15532.2
  26. Rev Infirm. 2023 Aug-Sep;72(293): 47-48
      
    DOI:  https://doi.org/10.1016/j.revinf.2023.07.014
  27. Plast Surg (Oakv). 2023 Aug;31(3): 306-310
      Credible clinical research is a precondition of evidence-based surgery. If clinical research is not conducted and reported properly, it can be unreliable, unclear, and misleading. Our journal, Plastic Surgery, aims to improve its quality and thus enhance interest, submissions, and readership. To do so, we must ensure that the articles published in our journal align with these goals. This article guides future clinical research contributors on how to design, conduct, and report valuable and reliable research. Readers are shown how to choose a title and keywords that properly reflect the content of an article. The proper organization of a manuscript and the information that belongs in each section are described. Valuable tools such as the EQUATOR Network guidelines, the FINER criteria, and the PICOT format are introduced; these resources help formulate a proper research question and ensure transparency in reporting. Commonly used study designs and the research questions they answer are presented, so that those engaged in research choose the right study design for their question. We outline the statistical information that should be presented in the Methods section and differentiate between the content that belongs in the Results and Discussion sections. As Plastic Surgery strives to publish high-quality, reliable research, these are the standards by which we will judge all manuscripts submitted for publication.
    Keywords:  FINER criteria; PICOT format; article structure; reporting guidelines; research question
    DOI:  https://doi.org/10.1177/22925503211054136