bims-skolko Biomed News
on Scholarly communication
Issue of 2023‒10‒22
34 papers selected by
Thomas Krichel, Open Library Society



  1. Front Psychiatry. 2023;14: 1271229
      A core principle in the pursuit of scientific knowledge is that science is self-correcting and that important results should be replicable. Hypotheses need to be reinforced, adjusted, or rejected when novel results are obtained. Replication of results confirms hypotheses and enhances their integration into scientific practice. In contrast, publication of substantiated and replicated negative findings (i.e., non-significant or opposite findings) can be the basis for rejecting erroneous hypotheses or developing alternative strategies for investigation. Replication is a problem in all research fields. The Reproducibility Project: Psychology reported that only 36% of 'highly influential' studies published in highly ranked journals were reproduced. Like positive data, negative data can be flawed. Errors in a negative data set can stem from methodology, statistics, conceptual defects, and flawed peer review. The peer review process has come under progressive scrutiny. A large-scale review of the peer review of manuscripts submitted to the British Medical Journal group indicated that the process could be characterized as inconsistent, inaccurate, and biased. Further analysis indicated that the peer review process is easily manipulated, indicative of a failed system; that it is a major factor behind the lack of replication in science (through acceptance of flawed manuscripts); that it suppresses opposing scientific evidence and views; and that it creates gaps in, and stunts the growth of, science. The dual role of editors who are also researchers further complicates the integrity of scientific publication. Major publishing houses maintain ethical guidelines on editorial ethics, behavior, and practice.
    Keywords:  5HT-3 receptor; alcohol treatment; ethics; false negative; genotype; ondansetron; precision medicine; serotonin
    DOI:  https://doi.org/10.3389/fpsyt.2023.1271229
  2. J Oral Maxillofac Pathol. 2023 Apr-Jun;27(2): 254-256
      A 'Letter to the Editor' is an abbreviated form of communication in which readers can express a carefully considered scientific opinion about a recently published article in a journal. It is regarded as a form of 'post-publication peer review'. There are certain things that a letter writer and the editor need to keep in mind when preparing a 'Letter' for a journal. The editor needs to curate the contents of the 'Letter' and ensure that no misinformation is shared. The formatting, type, scope, and scientific quality of the 'Letter' depend on the journal that publishes it; hence, different publications may require their letter writers to present information in a particular way. The following article offers an overview of the role of editors and writers and of the guidelines, scope, and format of the 'Letter to the Editor'.
    Keywords:  Editorial; Letter; Short communication
    DOI:  https://doi.org/10.4103/jomfp.jomfp_65_23
  3. J Am Psychiatr Nurses Assoc. 2023 Oct 19. 10783903231205311
      
    DOI:  https://doi.org/10.1177/10783903231205311
  4. Br J Psychiatry. 2023 Oct;223(4): 453-455
      After thanking his predecessors, the newly appointed College Editor and Editor-in-Chief of The British Journal of Psychiatry, Professor Gin Malhi, outlines both the historical and personal significance of the journal in this proemial editorial.
    Keywords:  History; asylum; editors; mental science; publishing
    DOI:  https://doi.org/10.1192/bjp.2023.121
  5. J Korean Med Sci. 2023 Oct 16. 38(40): e324
      BACKGROUND: Retraction is an essential procedure for correcting the scientific literature and informing readers about articles that contain significant errors or omissions. Ethical violations are one of the major triggers of the retraction process. The objective of this study was to evaluate the characteristics of articles retracted from the medical literature due to ethical violations.
    METHODS: The Retraction Watch Database was utilized for this descriptive study. The 'ethical violations' and 'medicine' options were chosen, and the date range was set to 2010 to 2023. The collected data included the number of authors, the dates of publication and retraction, the journal of publication, the indexing status of the journal, the country of the corresponding author, the subject area of the article, and the particular reasons for retraction.
    RESULTS: A total of 177 articles were analyzed. The largest numbers of retractions occurred in 2019 (n = 29) and 2012 (n = 28). The median interval between an article's first publication date and its retraction was 647 days (range 0-4,295). The leading countries were China (n = 47), the USA (n = 25), South Korea (n = 23), Iran (n = 14), and India (n = 12). The main causes of retraction were ethical approval issues (n = 65), data-related concerns (n = 51), informed consent issues (n = 45), and fake or biased peer review (n = 30).
    CONCLUSION: Unethical behavior is one of the most significant obstacles to scientific advancement. Obtaining appropriate ethics committee approvals and informed consent forms is crucial in ensuring the ethical conduct of medical research. It is the responsibility of journal editors to ensure that raw data is controlled and peer review processes are conducted effectively. It is essential to educate young researchers on unethical practices and the negative outcomes that may result from them.
    Keywords:  Article; Ethics; Medicine; Peer Review; Plagiarism; Publishing; Retraction of Publication as Topic; Scientific Misconduct
    DOI:  https://doi.org/10.3346/jkms.2023.38.e324
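    As a minimal illustration of the summary statistics in item 5, the sketch below (Python) computes the publication-to-retraction lag and country counts from a Retraction Watch CSV export; the file name and column names are assumptions, not the database's actual schema.

      import pandas as pd

      # Hypothetical export of the Retraction Watch Database (column names assumed).
      df = pd.read_csv("retraction_watch_export.csv",
                       parse_dates=["OriginalPaperDate", "RetractionDate"])

      # Days between first publication and retraction, summarized as in the paper.
      lag = (df["RetractionDate"] - df["OriginalPaperDate"]).dt.days
      print(f"median lag: {lag.median():.0f} days (range {lag.min()}-{lag.max()})")

      # Leading corresponding-author countries.
      print(df["Country"].value_counts().head(5))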
  6. J Pain Symptom Manage. 2023 Oct 18. pii: S0885-3924(23)00739-X. [Epub ahead of print]
      CONTEXT: Scientific journals are the primary vehicle for disseminating research findings, a process that relies on rigorous editorial review and peer review. As part of continuing efforts by the Journal of Pain and Symptom Management (JPSM) to advance equity, diversity, and inclusion, JPSM's leadership requested an external evaluation of its publication decisions.
    OBJECTIVES: (1) Describe primary author characteristics associated with final decisions to accept or reject manuscripts submitted for publication; (2) report on whether there are potential publication biases in the JPSM editorial or peer-review processes.
    METHODS: Data consisted of self-reported primary author demographic characteristics associated with manuscript submissions between June 18, 2020, and December 31, 2022. Characteristics included region of residence, race, gender, and ethnicity. A multiple logistic regression model was used to estimate adjusted odds of rejection for each author characteristic.
    RESULTS: A total of 1,940 submissions were evaluated. Compared with authors residing in North America, authors residing in Asia had six-fold greater odds of rejection, authors residing in Europe had four-fold greater odds, and authors residing in other regions had two-fold greater odds. Female authors submitted 1.7 times more papers than male authors, but there was no difference in the acceptance rates of their papers in adjusted analyses.
    CONCLUSIONS: In this analysis of publication decisions by the JPSM, there were differences in acceptance rates by region of residence, ethnicity, and race but not by gender. Asian authors and authors residing in regions outside of North America had greater odds of rejection compared to White or North American authors.
    Keywords:  diversity; equity; inclusion; publication bias
    DOI:  https://doi.org/10.1016/j.jpainsymman.2023.10.014
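    The adjusted odds ratios in item 6 come from a multiple logistic regression; the sketch below (Python, statsmodels) shows the general form of such a model. The data file, variable names, and categories are assumptions for illustration; the JPSM submission data are not public.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("submissions.csv")  # hypothetical: one row per manuscript

      # rejected = 1 if rejected, 0 if accepted; C(...) treats each predictor as
      # categorical, with North America as the reference region.
      model = smf.logit(
          "rejected ~ C(region, Treatment(reference='North America'))"
          " + C(race) + C(gender)",
          data=df,
      ).fit()

      # Exponentiated coefficients are adjusted odds ratios; a value near 6 for
      # Asia would match the six-fold greater odds of rejection reported.
      print(np.exp(model.params))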
  7. Learn Health Syst. 2023 Oct;7(4): e10396
      Computable biomedical knowledge artifacts (CBKs) are software programs that transform input data into practical output. CBKs are expected to play a critical role in the future of learning health systems. While there has been rapid growth in the development of CBKs, broad adoption is hampered by limited verification, documentation, and dissemination channels. To address these issues, the Learning Health Systems journal created a track dedicated to publishing CBKs through a peer-review process. Peer review of CBKs should improve reproducibility, reuse, trust, and recognition in biomedical fields, contributing to learning health systems. This special issue introduces the CBK track with four manuscripts reporting a functioning CBK, and another four manuscripts tackling methodological, policy, deployment, and platform issues related to fostering a healthy ecosystem for CBKs. It is our hope that the potential of CBKs exemplified and highlighted by these quality publications will encourage scientists within learning health systems and related biomedical fields to engage with this new form of scientific discourse.
    DOI:  https://doi.org/10.1002/lrh2.10396
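    As a toy illustration (not taken from the journal) of the form a CBK takes, the sketch below shows a small, documented program that maps defined inputs to a practical output; real CBKs additionally ship metadata and verification tests to support peer review and reuse.

      def bmi_category(weight_kg: float, height_m: float) -> str:
          """Classify body-mass index using WHO adult cut-offs."""
          bmi = weight_kg / height_m ** 2
          if bmi < 18.5:
              return "underweight"
          if bmi < 25:
              return "normal"
          if bmi < 30:
              return "overweight"
          return "obese"

      print(bmi_category(70, 1.75))  # normal (BMI ~22.9)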
  8. EMBO Rep. 2023 Oct 18. e58271
      The stringent selection criteria applied at EMBO Reports result in the publication of around 16% of submitted papers. What do we select for and why is this a service to the scientific community? A counterpoint to EMBO Reports (2023) e58127.
    DOI:  https://doi.org/10.15252/embr.202358271
  9. Orthop Traumatol Surg Res. 2023 Oct 16. pii: S1877-0568(23)00227-X. [Epub ahead of print] 103709
      
    DOI:  https://doi.org/10.1016/j.otsr.2023.103709
  10. J Orthop Sports Phys Ther. 2023 Oct 20. 1-32
      OBJECTIVE: To investigate open science practices in research published in the top five sports medicine journals between 01 May 2022 and 01 October 2022. DESIGN: A meta-research systematic review. LITERATURE SEARCH: Open science practices were searched in MEDLINE. STUDY SELECTION CRITERIA: We included original scientific research published in 2022 in one of the top five sports medicine journals as ranked by Clarivate: (1) British Journal of Sports Medicine, (2) Journal of Sport and Health Science, (3) American Journal of Sports Medicine, (4) Medicine & Science in Sports & Exercise, and (5) Sports Medicine-Open. Studies were excluded if they were systematic reviews, qualitative research, grey literature, or animal or cadaver models. DATA SYNTHESIS: Open science practices were extracted in accordance with the Transparency and Openness Promotion (TOP) guidelines and patient and public involvement (PPI). RESULTS: 243 studies were included. The median number of open science practices per study was 2, out of a maximum of 12 (range: 0-8; IQR: 2). 234 studies (96%, 95% CI: 94-99%) provided an author conflict of interest statement and 163 (67%, 95% CI: 62-73%) reported funding. 21 studies (9%, 95% CI: 5-12%) provided open access data. Fifty-four studies (22%, 95% CI: 17-27%) included a data availability statement and 3 (1%, 95% CI: 0-3%) made code available. Seventy-six studies (32%, 95% CI: 25-37%) had transparent materials and 30 (12%, 95% CI: 8-16%) used a reporting guideline. Twenty-eight studies (12%, 95% CI: 8-16%) were pre-registered. Six studies (3%, 95% CI: 1-4%) published a protocol. Four studies (2%, 95% CI: 0-3%) reported an analysis plan a priori. Seven studies (3%, 95% CI: 1-5%) reported patient and public involvement. CONCLUSION: Open science practices in the sports medicine field are extremely limited. The least followed practices were sharing code, data, and analysis plans.
    Keywords:  Open Access; Open Code; Reporting Guideline; Study Protocol
    DOI:  https://doi.org/10.2519/jospt.2023.12016
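    The interval estimates quoted in item 10 are 95% confidence intervals for proportions; the sketch below (Python, statsmodels) reproduces one of them under the assumption of a normal-approximation interval, since the paper's exact method is not stated in the abstract.

      from statsmodels.stats.proportion import proportion_confint

      k, n = 234, 243  # studies with a conflict of interest statement, total studies
      low, high = proportion_confint(k, n, alpha=0.05, method="normal")
      print(f"{k/n:.0%} (95% CI: {low:.0%}-{high:.0%})")  # 96% (95% CI: 94%-99%)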
  11. J Hum Resour. 2023 Jul;58(4): 1307-1346
      Studying 5.6 million biomedical science articles published over three decades, we reconcile conflicts in a longstanding interdisciplinary literature on scientists' life-cycle productivity by controlling for selective attrition and distinguishing between research quantity and quality. While research quality declines monotonically over the career, this decline is easily overlooked because higher-"ability" authors have longer publishing careers. Our results have implications for broader questions of human capital accumulation over the career and for federal research policies that shift funding to early-career researchers: while such policies fund researchers at their most creative, they must be undertaken carefully because young researchers are less "able" on average.
    DOI:  https://doi.org/10.3368/jhr.59.2.1219-10630r1
  12. J Clin Epidemiol. 2023 Oct 16. pii: S0895-4356(23)00264-0. [Epub ahead of print]
      OBJECTIVE: To assess the endorsement of reporting guidelines by high impact factor journals over the period 2017 to 2022, with a specific focus on the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement.
    STUDY DESIGN AND SETTING: We searched the online 'Instructions to authors' of high impact factor medical journals in February 2017 and in January 2022 for any reference to reporting guidelines, and to TRIPOD in particular.
    RESULTS: In 2017, 205 of 337 (61%) journals mentioned a reporting guideline in their instructions to authors; by 2022 this had increased to 245 (73%) journals. A reference to TRIPOD was provided by 27 (8%) journals in 2017 and 67 (20%) in 2022. Of the journals mentioning TRIPOD in 2022, 22% provided a link to the TRIPOD website, 60% linked to TRIPOD information on the EQUATOR Network website, and 25% required adherence to TRIPOD.
    CONCLUSION: About three quarters of high-impact medical journals endorse the use of reporting guidelines and 20% endorse TRIPOD. Transparent reporting is important in enhancing the usefulness of health research and endorsement by journals plays a critical role in this.
    Keywords:  TRIPOD; adherence; endorsement; implementation; prediction model; reporting guideline
    DOI:  https://doi.org/10.1016/j.jclinepi.2023.10.004
  13. Lancet Infect Dis. 2023 Oct 11. pii: S1473-3099(23)00455-3. [Epub ahead of print]
      Non-timely reporting, selective reporting, and non-reporting of clinical trial results are prevalent and serious issues. WHO mandates that summary results be available in registries within 12 months of study completion and published in full text within 24 months. However, only a limited number of clinical trials in infectious diseases, including those conducted during the COVID-19 pandemic, have their results posted on ClinicalTrials.gov. An analysis of 50 trials of eight antiviral drugs tested against COVID-19, each with a completion date at least 2 years in the past, revealed that only 18% had their results published in the registry, and 40% had not published any results at all. Non-timely reporting and non-reporting undermine patient participation and are ethically unacceptable. Strategies should include obligatory reporting of summary results in clinical trial registries within 12 months, with progress towards peer-reviewed publication within 24 months clearly indicated. Timely publication of research papers should be encouraged through an automated flagging mechanism in clinical trial registries that draws attention to the status of results reporting, such as a green tick for trials that have reported summary results within 12 months and a red tick for those that have failed to do so. We propose the inclusion of mandatory clinical trial reporting standards in the International Conference on Harmonization Good Clinical Practice guidelines; these should prohibit sponsor contract clauses that restrict reporting (so-called gag clauses) and require timely reporting of results as part of ethics committees' clearance of clinical trial protocols.
    DOI:  https://doi.org/10.1016/S1473-3099(23)00455-3
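    The flagging mechanism proposed in item 13 amounts to a simple date rule; the sketch below (Python) is one hedged reading of it, with field names invented for illustration rather than drawn from any registry's schema.

      from datetime import date

      def results_flag(completion: date, results_posted: date | None,
                       today: date) -> str:
          """'green' if summary results were posted within 12 months of completion,
          'red' if that window lapsed without results, 'pending' otherwise."""
          deadline = completion.replace(year=completion.year + 1)  # naive 12-month rule
          if results_posted is not None and results_posted <= deadline:
              return "green"
          return "red" if today > deadline else "pending"

      print(results_flag(date(2021, 3, 1), None, date(2023, 10, 22)))              # red
      print(results_flag(date(2022, 6, 1), date(2023, 1, 15), date(2023, 10, 22)))  # green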
  14. Rheumatol Int. 2023 Oct 20.
      The purpose of this study was to investigate the instructions for authors of rheumatology journals and analyze their endorsement of reporting guidelines and clinical trial registration. Sixty rheumatology journals were selected by a research librarian and an investigator through the 2021 Scopus CiteScore tool. The instructions-for-authors subsection of each journal was assessed to determine endorsement of study design-specific reporting guidelines or clinical trial registration. Descriptive statistics were calculated using R (version 4.2.1) and RStudio. Of the 58 journals analyzed, 34 (34/58; 59%) mentioned the EQUATOR Network, an online compendium of best practice reporting guidelines. The most commonly mentioned reporting guidelines were CONSORT, with 44 journals (44/58; 75%), and PRISMA, with 35 journals (35/58; 60%). The least mentioned guidelines were QUOROM, which 56 journals did not mention (56/58; 97%), and SRQR, which 53 journals did not mention (53/57; 93%). Clinical trial registration was required by 38 journals (38/58; 66%) and recommended by 8 journals (8/58; 14%). Our study found that endorsement of reporting guidelines and clinical trial registration within rheumatology journals was suboptimal, with great room for improvement. Endorsement of reporting guidelines has been shown not only to mitigate bias but also to improve research methodologies. Therefore, we recommend that rheumatology journals broadly expand their endorsement of reporting guidelines and clinical trial registration to improve the quality of the evidence they publish.
    Keywords:  Clinical trial registration; Instructions for authors; Reporting guidelines; Reporting standardization
    DOI:  https://doi.org/10.1007/s00296-023-05474-4
  15. Orthop Traumatol Surg Res. 2023 Oct 12. pii: S1877-0568(23)00224-4. [Epub ahead of print] 103706
      BACKGROUND: Artificial intelligence (AI) tools, although beneficial for data collection and analysis, can also facilitate scientific fraud. AI detectors can help resolve this problem, but their effectiveness depends on their ability to track AI progress. In addition, many methods of evading AI detection exist, and their constantly evolving sophistication can make the task more difficult. Thus, starting from an AI-generated text, we wanted to (1) evaluate AI detection sites on a text generated entirely by AI, (2) test the methods described for evading AI detection, and (3) evaluate the effectiveness of these methods at evading AI detection on the sites tested previously.
    HYPOTHESIS: Not all AI detection tools are equally effective in detecting AI-generated text, and some techniques used to evade AI detection can make an AI-produced text almost undetectable.
    MATERIALS AND METHODS: We created a text with ChatGPT-4 (Chat Generative Pre-trained Transformer) and submitted it to 11 AI detection web tools (Originality, ZeroGPT, Writer, Copyleaks, Crossplag, GPTZero, Sapling, Content at Scale, Corrector, Writefull, and Quill), before and after applying strategies to minimize AI detection. The strategies used to minimize AI detection were refinement of the prompts given to ChatGPT, the introduction of minor grammatical errors such as comma deletion, paraphrasing, and the substitution of Latin letters with similar Cyrillic letters (a and o), a method also used elsewhere to evade plagiarism detection. We also tested the effectiveness of these tools in correctly identifying a scientific text written by a human in 1960.
    RESULTS: For the initial AI-generated text, 7 of the 11 detectors concluded that it was mainly written by a human. Subsequently, the introduction of simple modifications, such as the removal of commas or paraphrasing, effectively reduced AI detection and made the text appear human-written to all detectors. In addition, replacing certain Latin letters with Cyrillic letters can make an AI-generated text completely undetectable. Finally, paradoxically, certain sites detected a significant proportion of AI in a text written by a human in 1960.
    DISCUSSION: AI detectors have low efficiency, and simple modifications can allow even the most robust detectors to be easily bypassed. The rapid development of generative AI raises questions about the future of scientific writing but also about the detection of scientific fraud, such as data fabrication.
    LEVEL OF EVIDENCE: III; case-control study.
    DOI:  https://doi.org/10.1016/j.otsr.2023.103706
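    The letter-substitution evasion described in item 15 exploits Unicode homoglyphs; the minimal sketch below (Python) shows why it defeats naive text comparison (the sample sentence is invented, not from the study).

      # Map Latin 'a' and 'o' to the visually identical Cyrillic letters.
      HOMOGLYPHS = str.maketrans({"a": "\u0430", "o": "\u043e"})

      text = "This paragraph was generated by a language model."
      disguised = text.translate(HOMOGLYPHS)

      print(disguised)          # renders identically on screen
      print(text == disguised)  # False: the underlying code points differ
      print({hex(ord(c)) for c in disguised if ord(c) > 0x7f})  # {'0x430', '0x43e'}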
  16. PLoS Biol. 2023 Oct 19. 21(10): e3002377
    PLOS Biology staff editors
      Twenty years ago this month, PLOS Biology was launched, helping to catalyze a movement that has transformed publishing in the life sciences. In this issue, we explore how the community can continue innovating for positive change in the next decades.
    DOI:  https://doi.org/10.1371/journal.pbio.3002377
  17. Nature. 2023 Oct;622(7984): 693-696
      
    Keywords:  Computer science; Machine learning; Policy; Technology
    DOI:  https://doi.org/10.1038/d41586-023-03266-1
  18. R Soc Open Sci. 2023 Oct;10(10): 230677
      Questionable research practices (QRPs) have been the focus of the scientific community amid greater scrutiny and evidence highlighting issues with replicability across many fields of science. To capture the most impactful publications and the main thematic domains in the literature on QRPs, this study uses a document co-citation analysis. The analysis was conducted on a sample of 341 documents covering the past 50 years of research on QRPs. Nine major thematic clusters emerged. Statistical reporting and statistical power emerged as key areas of research, where systemic-level factors in how research is conducted are consistently raised as the precipitating factors for QRPs. There is also an encouraging shift in the focus of research towards open science practices designed to address engagement in QRPs. Such a shift is indicative of the growing momentum of the open science movement, and more research can be conducted on how these practices are employed on the ground and how their uptake by researchers can be further promoted. However, the results suggest that, while pre-registration and registered reports receive the most research interest, less attention has been paid to other open science practices (e.g. data sharing).
    Keywords:  ethics of research; questionable research practices; scientific integrity
    DOI:  https://doi.org/10.1098/rsos.230677
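    Document co-citation analysis, the method used in item 18, counts how often pairs of documents appear together in the same reference list; the sketch below (Python) shows the core computation on invented reference lists.

      from collections import Counter
      from itertools import combinations

      # Hypothetical input: each citing paper's reference list.
      reference_lists = [
          ["Ioannidis2005", "OSC2015", "Simmons2011"],
          ["OSC2015", "Simmons2011", "John2012"],
          ["Ioannidis2005", "OSC2015"],
      ]

      cocitations = Counter()
      for refs in reference_lists:
          for a, b in combinations(sorted(set(refs)), 2):
              cocitations[(a, b)] += 1

      # The strongest ties seed thematic clusters, e.g. via community detection.
      print(cocitations.most_common(3))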
  19. Science. 2023 Oct 20. 382(6668): 248-249
      Journal policy aims to boost tribal ties, but critics fear loss of academic freedom.
    DOI:  https://doi.org/10.1126/science.adl4274