bims-skolko Biomed News
on Scholarly communication
Issue of 2024‒01‒14
twenty papers selected by
Thomas Krichel, Open Library Society



  1. Autophagy. 2024 Jan 08.
      As a follow-up to that wildly popular (at least in my own mind) Editor's Corner, "Look youse guys and gals, dat just ain't right," published in 2021, I have put together a guide to some common grammatical mistakes that I encounter in papers submitted to Autophagy. This guide is meant in particular to help non-native English speakers write more clearly, but it may also benefit authors who grew up at a time when learning grammar was replaced by autocorrect functions in Word and other programs, by the desire to simplify sentences so that they fit within a tweet, or by the wish to ease thumb strain when typing on a smartphone. The guide is also meant to help editors by reducing the number of changes needed to bring papers up to the high standards of clarity that we strive to maintain at Autophagy.
    Keywords:  Autophagy; grammar; it’s all about clarity; meaning what you write; writing what you mean
    DOI:  https://doi.org/10.1080/15548627.2024.2301800
  2. Nature. 2024 Jan 10.
      
    Keywords:  Authorship; Policy; Publishing; Research data; Scientific community
    DOI:  https://doi.org/10.1038/d41586-024-00090-z
  3. PeerJ. 2024; 12: e16514
      Background: Optimizing access to high-quality scientific journals has become an important priority for academic departments, including the ability to read the scientific literature and the ability to afford to publish papers in those journals. In this contribution, we assess whether institutional investment in scientific journals aligns with the journals where researchers send their papers for publication, and where they serve as unpaid reviewers and editors.
    Methods: We assembled a unique suite of information about the publishing habits of our Department of Ecology and Evolutionary Biology, including summaries of 3,540 journal publications by 35 faculty members. These data include economic costs of journals to institutions and to authors, benefits to authors in terms of journal prestige and citation rates, and considerations of ease of reading access for individuals both inside and outside the university. This dataset included data on institutional costs, including subscription pricing (rarely visible to scholars), and "investment" by scholars in supporting journals, such as time spent as editors and reviewers.
    Results: Our results highlighted the complex set of relationships between these factors, and showed that institutional costs often do not match well with payoffs in terms of benefits to researchers (e.g., citation rate, prestige of journal, ease of access). Overall, we advocate for greater cost-benefit transparency to help compare different journals and different journal business models; such transparency would help both researchers and their institutions in wisely investing the limited resources available to academics.
    Keywords:  Article processing charges; Journals; Open access; Publication; Subscription
    DOI:  https://doi.org/10.7717/peerj.16514
  4. BMC Med Res Methodol. 2024 Jan 11. 24(1): 9
      BACKGROUND: Preprints are increasingly used to disseminate research results, providing multiple sources of information for the same study. We assessed the consistency in effect estimates between preprints and subsequent journal articles of COVID-19 randomized controlled trials.
    METHODS: The study utilized data from the COVID-NMA living systematic review of pharmacological treatments for COVID-19 (covid-nma.com) up to July 20, 2022. We identified randomized controlled trials (RCTs) evaluating pharmacological treatments vs. standard of care/placebo for patients with COVID-19 that were originally posted as preprints and subsequently published as journal articles. Trials that did not report the same analysis in both documents were excluded. Data were extracted independently by pairs of researchers, with consensus to resolve disagreements. Effect estimates extracted from the first preprint were compared to effect estimates from the journal article.
    RESULTS: The search identified 135 RCTs originally posted as a preprint and subsequently published as a journal article. We excluded 26 RCTs that did not meet the eligibility criteria, of which 13 reported an interim analysis in the preprint and a final analysis in the journal article. Overall, 109 preprint-article RCT pairs were included in the analysis. The median (interquartile range) delay between preprint and journal article was 121 (73-187) days, and the median sample size was 150 (71-464) participants; 76% of RCTs had been prospectively registered, 60% received industry or mixed funding, and 72% were multicentric trials. The overall risk of bias was rated as 'some concern' for 80% of RCTs. We found that 81 preprint-article pairs were consistent for all outcomes reported. Nine RCTs had at least one outcome with a discrepancy in the number of participants with outcome events or the number of participants analyzed, which yielded a minor change in the effect estimate. Furthermore, six RCTs had at least one outcome missing from the journal article, and 14 RCTs had at least one outcome added in the journal article compared with the preprint. There was a change in the direction of effect in one RCT. No changes in statistical significance or conclusions were found.
    CONCLUSIONS: Effect estimates were generally consistent between COVID-19 preprints and subsequent journal articles. The main results and interpretation did not change in any trial. Nevertheless, some outcomes were added and deleted in some journal articles.
    Keywords:  COVID-19; Discrepancy; Peer-review; Preprint; Randomized controlled trial
    DOI:  https://doi.org/10.1186/s12874-023-02136-8
  5. Neuropsychopharmacology. 2024 Jan 11.
      Neuropsychopharmacology (NPP) offers the option to publish articles in different tiers of an open access (OA) publishing system: Green, Bronze, or Hybrid. Green articles follow a standard access (SA) subscription model, in which readers must pay a subscription fee to access article content on the publisher's website. Bronze articles are selected at the publisher's discretion and are freely available to readers at the same article processing charge (APC) as Green articles. Hybrid articles are fully OA, but authors pay an increased APC to ensure public access. Here, we aimed to determine whether publishing tier affects the impact and reach of scientific articles in NPP. A sample of 6000 articles published between 2001 and 2021 was chosen for the analysis. Articles were separated by article type and publication year. Citation counts and Altmetric scores were compared between the three tiers. Bronze articles received significantly more citations than Green and Hybrid articles overall. However, when analyzed by year, Bronze and Hybrid articles received comparable citation counts within the past decade. Altmetric scores were comparable between all tiers, although this effect varied by year. Our findings indicate that free availability of article content on the publisher's website is associated with an increase in citations of NPP articles but may provide only a moderate boost in Altmetric score. Overall, our results suggest that easily accessible article content is cited most often by readers, but that the higher APCs of Hybrid tier publishing may not guarantee increased scholarly or social impact.
    DOI:  https://doi.org/10.1038/s41386-024-01796-4
  6. Immunol Cell Biol. 2024 Jan 11.
      Immunology & Cell Biology celebrated its 100th anniversary as a journal with an editorial workshop focused on how we can improve the author experience. In our renewed editorial policies, we articulate our editorial focus on the quality of the scientific question and the robustness of the conclusions, including a new "scoop protection" policy to live our values. The journal is dedicated to maintaining its relationship with reviewers, enabling rapid, high-quality peer review, but it is also opening new lines of submission with expedited cross-platform assessment of reviews and incorporation into the Review Commons submission pipeline. In 2024 we will expand our social media promotion of articles and build on the career development resource of Immunology Futures. Here we lay out the ethos, numbers and rationale behind ICB's renewed author-centric publication policies for 2024.
    Keywords:  academia; publication; research
    DOI:  https://doi.org/10.1111/imcb.12722
  7. Int J Periodontics Restorative Dent. 2024 Jan 10. 0(0): 1-12
      The use of Artificial Intelligence (AI) is rapidly expanding. While it comes with some drawbacks, it also offers numerous advantages. One significant application of AI is chatbots, which utilize natural language processing and machine learning to provide information, answer queries, and assist users. AI has various applications, and dentistry is no exception. The authors conducted an experiment to assess the application of AI, particularly OpenAI's ChatGPT and Google Apps Script, in various stages of information gathering and manuscript preparation in parallel with conventional human-driven approaches. AI can serve as a valuable instrument in manuscript preparation; however, relying solely or predominantly on AI for manuscript writing is insufficient if the goal is to produce a high-quality article for publication in a peer-reviewed, high-impact journal that can contribute to the advancement of science and society.
    DOI:  https://doi.org/10.11607/prd.7022
  8. Trends Cogn Sci. 2024 Jan 08. pii: S1364-6613(23)00288-7. [Epub ahead of print]
      The rapid adoption of artificial intelligence (AI) tools in academic research raises pressing ethical concerns. I examine major publishing policies in science and medicine, uncovering inconsistencies and limitations in guiding AI usage. To encourage responsible AI integration while upholding transparency, I propose an enabling framework with author and reviewer policy templates.
    Keywords:  generative artificial intelligence; large language models; policy; publishing; science
    DOI:  https://doi.org/10.1016/j.tics.2023.12.002
  9. Curr Probl Cardiol. 2024 Jan 05. pii: S0146-2806(24)00026-4. [Epub ahead of print]49(3): 102387
      BACKGROUND: Generative Artificial Intelligence (AI) tools have experienced rapid development over the last decade and are gaining increasing popularity as assistive models in academic writing. However, the ability of AI to generate reliable and accurate research articles is a topic of debate. Major scientific journals have issued policies regarding the contribution of AI tools to scientific writing.
    METHODS: We conducted a review of the author and peer reviewer guidelines of the top 25 Cardiology and Cardiovascular Medicine journals as per the 2023 SCImago rankings. Data were obtained through reviewing journal websites and directly emailing the editorial office. Descriptive data regarding journal characteristics were coded in SPSS. Subgroup analyses of the journal guidelines were conducted based on the publishing company policies.
    RESULTS: Our analysis revealed that all scientific journals in our study permitted the documented use of AI in scientific writing, with certain limitations as per the ICMJE recommendations. We found that AI tools cannot be included as authors or be used for image generation, and that all authors are required to assume full responsibility for their submitted and published work. The use of generative AI tools in the peer review process is strictly prohibited.
    CONCLUSION: Guidelines regarding the use of generative AI in scientific writing are standardized, detailed, and uniformly followed by all journals in our study, in accordance with the recommendations set forth by international forums. It is imperative to ensure that these policies are carefully followed and updated to maintain scientific integrity.
    Keywords:  Artificial Intelligence; Cardiology; ChatGPT; Editorial Policies; Large Language Models; Machine Learning; SCImago; Scientific Writing
    DOI:  https://doi.org/10.1016/j.cpcardiol.2024.102387
  10. PeerJ. 2024; 12: e16731
      Missing or inaccessible information about the methods used in scientific research slows the pace of discovery and hampers reproducibility. Yet little is known about how, why, and under what conditions researchers share detailed methods information, or about how such practices vary across social categories like career stage, field, and region. In this exploratory study, we surveyed 997 active researchers about their attitudes and behaviors with respect to methods sharing. The most common approach reported by respondents was private sharing upon request, but a substantial minority (33%) had publicly shared detailed methods information independently of their research findings. The most widely used channels for public sharing were connected to peer-reviewed publications, while the most significant barriers to public sharing were found to be lack of time and lack of awareness about how or where to share. Insofar as respondents were moderately satisfied with their ability to accomplish various goals associated with methods sharing, we conclude that efforts to increase public sharing may wish to focus on enhancing and building awareness of existing solutions, even as future research should seek to understand the needs of methods users and the extent to which they align with prevailing practices of sharing.
    Keywords:  Methods sharing; Open science; Protocol sharing; Research methods
    DOI:  https://doi.org/10.7717/peerj.16731
  11. JAMA Netw Open. 2024 Jan 02. 7(1): e2350688
      Importance: Publishing study protocols might reduce research waste caused by unclear methods or incomplete reporting; on the other hand, there might be few additional benefits of publishing protocols for registered trials that are never completed or published. No study has investigated the proportion of published protocols associated with published results.
    Objective: To estimate the proportion of published trial protocols for which there are no associated published results.
    Design, Setting, and Participants: This cross-sectional study used stratified random sampling to identify registered clinical trials with protocols published between January 2011 and August 2022 and indexed in PubMed Central. Ongoing studies and those within 1 year of the primary completion date on ClinicalTrials.gov were excluded. Published results were sought from August 2022 to March 2023 by searching ClinicalTrials.gov, emailing authors, and using an automated tool, as well as through incidental discovery.
    Main Outcomes and Measures: The primary outcome was a weighted estimate of the proportion of registered trials with published protocols that also had published main results. The proportion of trials with unpublished results was estimated using a weighted mean.
    Results: From 1500 citations that were screened, 308 clinical trial protocols were included, and it was found that 87 trials had not published their main results. Most included trials were investigator-initiated evaluations of nonregulated products. When published, results appeared a mean (SD) of 3.4 (2.0) years after protocol publication. With the use of a weighted mean, an estimated 4754 (95% CI, 4296-5226) eligible clinical trial protocols were published and indexed in PubMed Central between 2011 and 2022. In the weighted analysis, 1708 of those protocols (36%; 95% CI, 31%-41%) were not associated with publication of main results. In a sensitivity analysis excluding protocols published after 2019, an estimated 25% (95% CI, 20%-30%) of 3670 (95% CI, 3310-4032) protocol publications were not associated with publication of main results.
    Conclusions and Relevance: This cross-sectional study of clinical trial protocols published on PubMed Central between 2011 and 2022 suggests that many protocols were not associated with subsequent publication of results. The overall benefits of publishing study protocols might outweigh the research waste caused by unnecessary protocol publications.
    DOI:  https://doi.org/10.1001/jamanetworkopen.2023.50688
  12. Syst Rev. 2024 Jan 12. 13(1): 24
      BACKGROUND: This systematic review aimed to investigate the relationship between retraction status and methodological quality in retracted non-Cochrane systematic reviews.
    METHOD: The PubMed, Web of Science, and Scopus databases were searched through September 2023 with keywords including systematic review, meta-analysis, and retraction or retracted as a publication type. There were no time or language restrictions. Retracted non-Cochrane medical systematic reviews were included in the present study. Data related to the retraction status of the articles were extracted from the retraction notices and Retraction Watch, and the quality of the methodology was evaluated with the AMSTAR-2 checklist by two independent researchers. Data were analyzed in Excel 2019 and SPSS 21.
    RESULT: Of the 282 systematic reviews, the corresponding authors of 208 (73.75%) articles were from China. The average interval between publication and retraction was about 23 months, and about half of the non-Cochrane systematic reviews were retracted in the last 4 years. The most common reasons for retraction were fake peer reviews and unreliable data, respectively. Editors and publishers were the most frequent retractors or requestors of retractions. More than 86% of the retracted non-Cochrane SRs were published in journals with an impact factor above two and were of critically low quality. Items 7, 9, and 13 among the critical items of the AMSTAR-2 checklist received the lowest scores.
    DISCUSSION AND CONCLUSION: There was a significant relationship between the reasons for retraction and the quality of the methodology (P-value < 0.05). Plagiarism software and use of the COPE guidelines may shorten the time to retraction. In some countries, strict rules for promoting researchers increase the risk of misconduct. To avoid scientific errors and improve the quality of systematic reviews/meta-analyses (SRs/MAs), it would be better for each journal to establish protocol registration and retraction guidelines for SRs/MAs.
    Keywords:  AMSTAR-2; Methodology; Quality; Retraction; Systematic reviews
    DOI:  https://doi.org/10.1186/s13643-023-02439-3
  13. Jpn Dent Sci Rev. 2024 Dec; 60: 40-43
      The publication status of dental journals in Japan was examined, with a focus on metrics such as Journal Impact Factor (JIF), Eigenfactor, Article Influence Score, and percentage of open access. A total of 18 journals published by Japanese dental organizations were identified in the Journal Citation Reports (JCR), with JIF values ranging from 0.4 to 6.6. The highest JIF was observed in The Japanese Dental Science Review. Additionally, 16 journals were not listed on the JCR. The authors explored the implications of these findings on the visibility and impact of Japanese dental research, and discussed the potential benefits of embracing open-access publications for greater global dissemination. This study highlighted the opportunities for journals to enhance their international recognition by meeting the criteria for JIF inclusion and embracing open-access publications. By adopting effective publication strategies, the dental community in Japan will be able to contribute to the advancement of dentistry globally, ensuring broader accessibility and recognition of its research contributions.
    Keywords:  Article Influence Score; Dentistry; Eigenfactor; Journal Impact Factor; Open access; Publishing
    DOI:  https://doi.org/10.1016/j.jdsr.2023.12.001
  14. Dev World Bioeth. 2024 Jan 09.
      We aimed to conduct a scoping review to assess the profile of retracted health sciences articles authored by individuals affiliated with academic institutions in Latin America and the Caribbean (LAC). We systematically searched seven databases (PubMed, Scopus, Web of Science, Embase, Medline/Ovid, Scielo, and LILACS). We included articles published in peer-reviewed journals between 2003 and 2022 that had at least one author with an institutional affiliation in LAC. Data were collected on the year of publication, study design, authors' countries of origin, number of authors, subject matter of the manuscript, scientific journals of publication, retraction characteristics, and reasons for retraction. We included 147 articles, the majority being observational studies (41.5%). The LAC countries with the highest number of retractions were Brazil (n = 69), Colombia (n = 16), and Mexico (n = 15). The areas of study with the highest number of retractions were infectology (n = 21) and basic sciences (n = 15). A retraction label was applied to 89.1% of the articles, 70.7% were retracted by journal editors, and 89.1% followed international retraction guidelines. The primary reasons for retraction included errors in procedures or data collection (n = 39), inconsistency in results or conclusions (n = 37), plagiarism (n = 21), and suspected scientific fraud (n = 19). In conclusion, most retractions of scientific publications in health sciences in LAC adhered to international guidelines and were linked to methodological issues in execution and scientific misconduct. Efforts should be directed toward ensuring the integrity of scientific research in the field of health.
    Keywords:  health science; journal impact factor; plagiarism; research misconduct; retractions
    DOI:  https://doi.org/10.1111/dewb.12439