bims-skolko Biomed News
on Scholarly communication
Issue of 2024‒08‒18
fourteen papers selected by
Thomas Krichel, Open Library Society



  1. Nature. 2024 Aug 09.
      
    Keywords:  Careers; Lab life; Publishing
    DOI:  https://doi.org/10.1038/d41586-024-02596-y
  2. J Postgrad Med. 2024 Aug 09.
      ABSTRACT: The "publish and flourish" culture in the biomedical field has led to an increase in the number of publications worldwide, creating pressure on researchers to publish frequently. However, this focus on quantity over quality has resulted in an inflation of the number of authors listed on articles, leading to authorship issues and the rise of fraudulent or predatory scientific and medical journals. To maintain the credibility of scientific research, it is necessary to reform publication metrics and explore innovative ways of evaluating an author's contributions. Traditional metrics, such as publication counts, fail to capture the quality, significance, and impact of the research. This viewpoint therefore explores and highlights metrics and novel methods by which an author's productivity and impact can be assessed beyond traditional measures: the H index, i10 index, FWCI, HCP, ALEF, AIF, AAS, JIF, CNA, awards/honors, citation percentile, n-index, and ACI. By using multiple metrics, one can determine the true impact and productivity of an author, and other measures, such as awards and honors, research collaborations, research output diversity, and journal impact factors, can further serve this purpose. Accurately assessing an author's productivity and impact has significant implications for their academic career, their institution, and the broader scientific community. It can also help funding agencies make informed decisions, improve resource allocation, and enhance public trust in scientific research. It is therefore crucial to address these issues and continue the ongoing discussion on the best methods to evaluate and recognize the contributions of authors in today's rapidly changing academic landscape.
    DOI:  https://doi.org/10.4103/jpgm.jpgm_343_24
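    Two of the metrics named in this abstract have crisp, widely used definitions: the H index is the largest h such that an author has h papers with at least h citations each, and the i10 index counts papers with at least 10 citations. A minimal Python sketch of both, assuming nothing more than a list of per-paper citation counts (the sample counts below are invented for illustration):
      def h_index(citations):
          # Largest h such that h papers have at least h citations each.
          ranked = sorted(citations, reverse=True)
          h = 0
          for rank, cites in enumerate(ranked, start=1):
              if cites >= rank:
                  h = rank
              else:
                  break
          return h
      def i10_index(citations):
          # Number of papers with at least 10 citations.
          return sum(1 for cites in citations if cites >= 10)
      # Hypothetical record: seven papers with these citation counts.
      papers = [25, 18, 12, 7, 4, 2, 0]
      print(h_index(papers))    # 4: four papers have >= 4 citations
      print(i10_index(papers))  # 3: three papers have >= 10 citations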
  3. Cureus. 2024 Jul;16(7): e64674
      This systematic review aims to identify the countries most active in combatting predatory journals and their definitions of such practices. It also seeks to assess awareness within academic communities, examine the impact of predatory journals on research quality and integrity, and compile existing policies to mitigate their negative effects and strengthen global scholarly integrity. A systematic search was performed in the PubMed, Scopus, and Embase databases on February 7, 2024, in line with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The focus was solely on identifying studies that examined the unique experiences and interventions associated with predatory journals in distinct national contexts. The analysis included a presentation of quantitative results and a thematic examination of qualitative data. A total of 40 articles covering 19 countries were included. Twenty-four articles (60%) came from Asia, 11 (27.5%) from Africa, two (5%) from Europe, and one (2.5%) each from Australia, North America, and South America. Although not all articles cited Beall's list to identify predatory journals, the thematic analysis showed consistent topics across the various definitions and Beall's themes. Our analysis identified factors affecting perceptions of academic publishing globally, highlighting publication pressure, predatory practices, and the impact of policy on ethics and standards. This systematic review examined the literature on predatory publishing and identified the countries leading the fight against predatory publications. The analysis underscores a complex interplay of factors affecting academic publishing globally, from the push towards predatory journals as a response to publishing pressures to the critical role of government and institutional frameworks.
    Keywords:  education; ethics; predatory journals; scholarly integrity; scientific publishing
    DOI:  https://doi.org/10.7759/cureus.64674
  4. Trop Doct. 2024 Aug 16. 494755241271955
      With the majority of medical journals rejecting >80% of submitted manuscripts, rejection comes as a shock, and as grief, to the author who had great expectations before submission. Though most of the available literature mentions how to remedy the lacunae in a manuscript before resubmission to another journal, none addresses the mental agony and setback the author faces or the way to overcome them. Every author should develop resilience and be adequately prepared mentally to overcome this setback.
    Keywords:  Asia; environment; general health; location; other; surgery
    DOI:  https://doi.org/10.1177/00494755241271955
  5. Nature. 2024 Aug 14.
      
    Keywords:  Authorship; Lab life; Machine learning; Scientific community
    DOI:  https://doi.org/10.1038/d41586-024-02630-z
  6. Diagn Interv Imaging. 2024 Aug 08. pii: S2211-5684(24)00168-2. [Epub ahead of print]
      
    Keywords:  Bibliometry; Biomedical research; Impact factor; Publishing; Radiology
    DOI:  https://doi.org/10.1016/j.diii.2024.07.007
  7. Nurse Educ Pract. 2024 Aug 10. pii: S1471-5953(24)00219-1. [Epub ahead of print] 104090
      
    DOI:  https://doi.org/10.1016/j.nepr.2024.104090
  8. Psychol Sci. 2024 Aug 14. 9567976241254037
      Using publicly available data from 299 preregistered replications from the social sciences, we found that the language used to describe a study can predict its replicability above and beyond a large set of controls related to the article characteristics, study design and results, author information, and replication effort. To understand why, we analyzed the textual differences between replicable and nonreplicable studies. Our findings suggest that the language in replicable studies is transparent and confident, written in a detailed and complex manner, and generally exhibits markers of truthful communication, possibly demonstrating the researchers' confidence in the study. Nonreplicable studies, however, are vaguely written and have markers of persuasion techniques, such as the use of positivity and clout. Thus, our findings allude to the possibility that authors of nonreplicable studies are more likely to make an effort, through their writing, to persuade readers of their (possibly weaker) results.
    Keywords:  computational social sciences; machine-learning models; open data; open materials; open science; psychometric properties of language; replication prediction; text analysis
    DOI:  https://doi.org/10.1177/09567976241254037
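    The paper's own feature set (detail, confidence, clout, positivity markers) is bespoke, but the underlying technique, predicting a binary replication label from study text, can be sketched with standard tools. A minimal Python illustration using scikit-learn, in which the TF-IDF features, the classifier choice, and the toy texts and labels are all assumptions for demonstration, not the authors' pipeline:
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline
      # Invented snippets standing in for study texts; 1 = replicated, 0 = not.
      texts = [
          "We report the exact procedure, sample sizes, and all exclusions.",
          "These striking results clearly demonstrate a powerful effect.",
          "All materials, data, and analysis code are publicly available.",
          "Our groundbreaking findings will certainly transform the field.",
      ]
      labels = [1, 0, 1, 0]
      # Bag-of-words features plus a linear classifier: a crude stand-in for
      # the richer linguistic markers analyzed in the study itself.
      model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
      model.fit(texts, labels)
      print(model.predict(["We describe the protocol and share all data openly."]))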
  9. Account Res. 2024 Aug 13. 1-19
      The founders of PubPeer envisioned their website as an online form of a "journal club" that would facilitate post-publication peer review. Recently, PubPeer comments have led to a significant number of research misconduct proceedings - a development that could not have been anticipated when the current federal research misconduct regulations were drafted two decades ago. Yet the number, frequency, and velocity of PubPeer comments identifying data integrity concerns, and institutional and government practices that treat all such comments as potential research misconduct allegations, have overwhelmed institutions and threaten to divert attention and resources away from other research integrity initiatives. Recent high-profile research misconduct cases accentuate the increasing public interest in research integrity and make it inevitable that the use of platforms such as PubPeer to challenge research findings will intensify. This article examines the origins of PubPeer and its central role in the modern era of online scouring of scientific publications for potential problems, and outlines the challenges that institutions must manage in addressing issues identified on PubPeer. In conclusion, we discuss some potential enhancements to the investigatory process specified under federal regulations that could, if implemented, allow institutions to manage some of these challenges more efficiently.
    Keywords:  ORI; PubPeer; research integrity; research misconduct
    DOI:  https://doi.org/10.1080/08989621.2024.2390007
  10. J Med Educ Curric Dev. 2024 Jan-Dec;11: 23821205241269378
      Objectives: Proficiency in medical writing and publishing is essential for medical researchers. Workshops can play a valuable role in addressing these issues; however, there is a lack of systematic summaries of evidence evaluating their impact. In this systematic review, we therefore aimed to evaluate all articles published on the impact of such workshops worldwide.
    Methods: We searched Ovid EMBASE, Ovid Medline, ISI Web of Science, the ERIC database, and grey literature with no language, time period, or geographical location limitations. Randomized controlled trials, cohort studies, before-after studies, surveys, and program evaluation and development studies were included. We performed a meta-analysis on data related to knowledge increase after the workshops and descriptively reported the evaluations of articles that did not have sufficient data for a meta-analysis. All analyses were performed using Stata software, version 15.0.
    Results: Of 23 040 reports, 222 articles underwent full-text review, leading to 45 articles reporting the impacts of workshops. Overall, the reports on the impact of such workshops were incomplete or lacked the precision necessary to draw acceptable conclusions. The workshops were sporadic, and researchers used their own methods of assessment. Meta-analyses showed that the workshops increased the mean or percentage of participants' knowledge, but not significantly.
    Conclusion: In the absence of systematic academic courses on medical writing/publishing, workshops are conducted worldwide; however, reports on the educational activities during such workshops, the methods of presentation, and their curricula are incomplete and vary. Their impact is not evaluated using standardized methods, and no valid and reliable measurement tools have been employed for these assessments.
    Keywords:  Medical writing; impact; meta-analysis; publishing; systematic review; workshop
    DOI:  https://doi.org/10.1177/23821205241269378