bims-skolko Biomed News
on Scholarly communication
Issue of 2023-10-08
35 papers selected by
Thomas Krichel, Open Library Society



  1. Complement Ther Med. 2023 Sep 29. pii: S0965-2299(23)00077-8. [Epub ahead of print] 102990
      It appears that, ever more frequently, the corresponding author of a multi-author manuscript is not what the role originally implied: the most involved researcher, with the best overview of the presented study. Numerous journals now use the term 'corresponding author' for the author who acts as a kind of secretary for the submitted manuscript, irrespective of his/her expertise in the subject. Another problem is that a significant number of universities give more scientific credit to the corresponding author than to his/her co-authors, which fairly commonly results in granting the corresponding authorship to the student or young scientist who most urgently needs scientific credit for his/her academic career. Consequently, readers of a multi-author article are nowadays hardly able to judge which author of an interesting article might best be contacted for additional information. An increasing number of journals seem unaware of the problems that this changing role of the corresponding author may cause. The present contribution outlines the main resulting problems and proposes possible solutions.
    Keywords:  author’s responsibility; editor’s responsibility; journal’s responsibility; publisher’s responsibility; science communication
    DOI:  https://doi.org/10.1016/j.ctim.2023.102990
  2. PLoS Biol. 2023 Oct 06. 21(10): e3002360
      Biomaterial sharing offers enormous benefits for research and for the scientific community. Individuals, funders, institutions, and journals can overcome the barriers to sharing and work together to promote a better sharing culture.
    DOI:  https://doi.org/10.1371/journal.pbio.3002360
  3. Am J Case Rep. 2023 Oct 01. 24 e942670
      Between 2012 and 2022, the American Journal of Case Reports published over 3,500 case reports and case series. In 2022-23, this journal achieved an impact factor (IF) of 1.2. The significant merits of published case reports include identifying rare diseases and syndromes, treatment complications or side effects, pharmacovigilance, and medical education. The limitations or cautions of the case report include the inability to generalize, the lack of establishment of a cause-effect relationship, and over-interpretation. Historically, case reports have served to identify new clinical conditions and syndromes. Since 2020, the COVID-19 pandemic has significantly impacted manuscript submissions and publications, as illustrated for this journal. This editorial aims to highlight the importance of case reports and series and recent publication trends, and includes recommendations on what to do and what not to do when preparing and writing the manuscript for a case report.
    DOI:  https://doi.org/10.12659/AJCR.942670
  4. Account Res. 2023 Oct 07.
      Questionable journal lists are often referred to as "blacklists" and conventionally used alongside "whitelists." Nevertheless, it is crucial to note that these terms carry historical connotations that can be perceived as racist, and their use should be actively avoided. This article proposes alternative terms, such as "watchlist" and "safelist," taking into consideration their etymology. Nonetheless, it should be emphasized that the quality of a journal cannot be adequately characterized in a dualistic manner, and this aspect is also of significant importance.
    Keywords:  Predatory publishing; academic journals; questionable journals; racism; research evaluation; research policy; scholarly publishing; terminology
    DOI:  https://doi.org/10.1080/08989621.2023.2267969
  5. J Med Internet Res. 2023 Oct 06. 25 e48529
      We examined the gender distribution of authors of retracted articles in 134 medical journals across 10 disciplines, compared it with the gender distribution of authors of all published articles, and found that women were underrepresented among authors of retracted articles, and, in particular, of articles retracted for misconduct.
    Keywords:  error; fraud; gender; inequality; integrity; journal; misconduct; plagiarism; publication; publish; publishing; research; research article; research study; retraction; retractions; retrospective; scientific integrity; scientific research; woman; women
    DOI:  https://doi.org/10.2196/48529
  6. Naunyn Schmiedebergs Arch Pharmacol. 2023 Oct 05.
      Honesty of publications is fundamental in science. Unfortunately, science has an increasing fake paper problem with multiple cases having surfaced in recent years, even in renowned journals. There are companies, the so-called paper mills, which professionally fake research data and papers. However, there is no easy way to systematically identify these papers. Here, we show that scanning for exchanged authors in resubmissions is a simple approach to detect potential fake papers. We investigated 2056 withdrawn or rejected submissions to Naunyn-Schmiedeberg's Archives of Pharmacology (NSAP), 952 of which were subsequently published in other journals. In six cases, the stated authors of the final publications differed by more than two thirds from those named in the submission to NSAP. In four cases, they differed completely. Our results reveal that paper mills take advantage of the fact that journals are unaware of submissions to other journals. Consequently, papers can be submitted multiple times (even simultaneously), and authors can be replaced if they withdraw from their purchased authorship. We suggest that publishers collaborate with each other by sharing titles, authors, and abstracts of their submissions. Doing so would allow the detection of suspicious changes in the authorship of submitted and already published papers. Independently of such collaboration across publishers, every scientific journal can make an important contribution to the integrity of the scientific record by analyzing its own pool of withdrawn and rejected papers versus published papers according to the simple algorithm proposed in the present paper.
    Keywords:  Fake paper; Naunyn–Schmiedeberg’s Archives of Pharmacology; Paper mill; Rejected; Scientific misconduct; Withdrawn
    DOI:  https://doi.org/10.1007/s00210-023-02741-w
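      The screening approach described above reduces to an author-overlap check between each withdrawn or rejected submission and its later publication elsewhere. A minimal sketch in Python, assuming records are matched by title (the paper does not detail its matching procedure) and using the reported two-thirds threshold:

        def normalize(name: str) -> str:
            # Crude author-name normalization: lowercase, drop punctuation.
            return "".join(c for c in name.lower() if c.isalnum() or c.isspace()).strip()

        def author_overlap(submitted: list[str], published: list[str]) -> float:
            # Fraction of the submission's authors still named on the publication.
            sub = {normalize(a) for a in submitted}
            pub = {normalize(a) for a in published}
            return len(sub & pub) / len(sub) if sub else 1.0

        def flag_suspicious(submissions, publications, min_retained=1/3):
            # Pair each withdrawn/rejected submission with a publication of the
            # same title; flag the pair if fewer than a third of the original
            # authors remain (i.e. more than two thirds were exchanged).
            by_title = {p["title"].strip().lower(): p for p in publications}
            flagged = []
            for s in submissions:
                p = by_title.get(s["title"].strip().lower())
                if p and author_overlap(s["authors"], p["authors"]) < min_retained:
                    flagged.append((s["title"], s["authors"], p["authors"]))
            return flagged

      Records here are plain dicts with "title" and "authors" keys; a production version would also need fuzzy title matching and abstract comparison, along the lines of the cross-publisher sharing the authors propose.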
  7. PLoS Biol. 2023 Oct;21(10): e3002255
      Open Peer Review is gaining prominence in both attention and use but, to open up peer review responsibly, there is an urgent need for additional evidence. Here, we propose a preliminary research agenda and issue a call to action.
    DOI:  https://doi.org/10.1371/journal.pbio.3002255
  8. Nature. 2023 Oct 03.
      
    Keywords:  Publishing; Scientific community; Software
    DOI:  https://doi.org/10.1038/d41586-023-02920-y
  9. Int J Radiat Oncol Biol Phys. 2023 Oct 01. pii: S0360-3016(23)06241-7. [Epub ahead of print] 117(2S): e528-e529
     PURPOSE/OBJECTIVE(S): Publishing and editorial policies differ substantially across the Radiation Oncology (RO) and Medical Physics (MedPhys) compendium of journals. Adoption of modern standards in scientific publishing and data sharing has the potential to improve the impact and reliability of the RO literature.
    MATERIALS/METHODS: We characterized the editorial, authorship and peer reviewer policies of various prominent clinical RO (N = 16) and medical physics (N = 9) peer-reviewed journals affiliated with professional societies for characteristics that are associated with improved reproducibility and rigorous review. A combination of tools including Enhancing the QUAlity and Transparency Of health Research (EQUATOR), Findability, Accessibility, Interoperability, and Reuse (FAIR), and Quality Output Checklist and Content Assessment (QuOCCA) principles were used to quantify the value and reproducibility of journal policies. Cohen's kappa coefficient was utilized to assess agreement between reviewers. Components of the above tools were regressed against various scientometric indices (H-index, IF, etc.) to identify factors that are associated with perceived relative importance within the field.
    RESULTS: Reviewer agreement (κ) for scientometric indices was highest (1.0) for criteria for statistical review and data submission standards and lowest (-0.246) for the various submission checklists. Data availability statements were endorsed (44%) or required (31%) in a higher proportion of RO journals than MedPhys journals (44% and 0%, respectively). Data repository submission was required in <10% of journals. FAIR adoption was poor in both RO (31%) and MedPhys (22%) journals. ≥1 EQUATOR guideline checklist was endorsed or required in 76% of journals. While there were no glaring differences in editorial policies between RO and MedPhys journals, there was substantial heterogeneity in the scientometrics evaluating the rigor of data submission, reproducibility standards, and statistical review criteria. Linear regression of journal impact factors indicated a predictive relationship between FAIR adoption, use of EQUATOR checklists, and more rigorous statistical method submission criteria.
    CONCLUSION: The present review documented and confirmed significant variation in submission, review, and publication policies across RO and MedPhys journals. Established scientometric standards, FAIR principle adoption, and more rigorous statistical methodology were predictive of increasing journal impact factor.
    DOI:  https://doi.org/10.1016/j.ijrobp.2023.06.1807
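      For illustration, the inter-rater agreement statistic used in this study, Cohen's kappa, is simple to compute; a toy example with made-up policy codings rather than the study's data:

        from sklearn.metrics import cohen_kappa_score

        # 1 = reviewer judged that the journal endorses/requires the policy item
        reviewer_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
        reviewer_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]

        kappa = cohen_kappa_score(reviewer_a, reviewer_b)
        print(f"Cohen's kappa = {kappa:.3f}")  # 1.0 = perfect agreement; <= 0 = chance or worse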
  10. Yale J Biol Med. 2023 Sep;96(3): 415-420
      The increasing volume of research submissions to academic journals poses a significant challenge for traditional peer-review processes. To address this issue, this study explores the potential of employing ChatGPT, an advanced large language model (LLM) developed by OpenAI, as an artificial intelligence (AI) reviewer for academic journals. By leveraging the vast knowledge and natural language processing capabilities of ChatGPT, we hypothesize it may be possible to enhance the efficiency, consistency, and quality of the peer-review process. This research investigated key aspects of integrating ChatGPT into the journal review workflow by comparing the critical analysis of ChatGPT, acting as an AI reviewer, with human reviews of a single published article. As this is a feasibility study, one article was reviewed: a case report on scurvy. The entire article was given to ChatGPT as input with the instruction "Please perform a review of the following article and give points for revision." Since this was a case report with a limited word count, the entire article fitted in one chat box. The output of ChatGPT was then compared with the comments of the human reviewers. Key performance metrics, including precision and overall agreement, were subjectively assessed to gauge the efficacy of ChatGPT as an AI reviewer in comparison with its human counterparts. The analysis showed that ChatGPT's critical analyses aligned with those of human reviewers, as evidenced by the inter-rater agreement. Notably, ChatGPT exhibited commendable capability in identifying methodological flaws, articulating insightful feedback on theoretical frameworks, and gauging the overall contribution of the article to its field. While the integration of ChatGPT shows promise, certain challenges and caveats surfaced. For example, ambiguities might arise with complex research articles, leading to nuanced discrepancies between AI and human reviews. Figures and images cannot be reviewed by ChatGPT, and lengthy articles need to be reviewed in parts, as the entire article will not fit in one chat/response. The benefits consist of a reduction in the time journals need to review submitted articles, as well as an AI assistant that offers a perspective on research papers different from that of the human reviewers. In conclusion, this research contributes a foundation for incorporating ChatGPT into the pool of journal reviewers. The delineated guidelines distill key insights into operationalizing ChatGPT as a proficient reviewer within academic journal frameworks, paving the way for a more efficient and insightful review process.
    Keywords:  chatGPT; journal review; review
    DOI:  https://doi.org/10.59249/SKDH9286
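      The reviewing workflow the study describes is straightforward to reproduce. A minimal sketch using the OpenAI Python client, with the prompt quoted from the abstract; the model name is an assumption, since the paper does not specify a version:

        from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

        client = OpenAI()

        def review_article(article_text: str) -> str:
            # Send the full (short) article together with the paper's prompt.
            prompt = ("Please perform a review of the following article "
                      "and give points for revision.\n\n" + article_text)
            response = client.chat.completions.create(
                model="gpt-4",  # assumption; any chat model fits the sketch
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content

      As the abstract notes, longer articles exceed the context window and must be split and reviewed in parts, and figures cannot be assessed at all this way.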
  11. Radiologia (Engl Ed). 2023 Sep-Oct;65(5): 389-391. pii: S2173-5107(23)00107-6.
      
    DOI:  https://doi.org/10.1016/j.rxeng.2023.05.004
  12. PLoS One. 2023;18(10): e0292306
      The allocation of public funds for research has been predominantly based on peer review, where reviewers are asked to rate an application on some form of ordinal scale from poor to excellent. The poor reliability and bias of peer-review ratings have led funding agencies to experiment with different approaches to assessing applications. In this study, we compared the reliability and potential sources of bias associated with application rating with those of application ranking in 3,156 applications to the Canadian Institutes of Health Research. Ranking was more reliable than rating and, in terms of both reliability and potential sources of bias, less susceptible to the characteristics of the review panel, such as level of expertise and experience. However, both rating and ranking penalized early-career investigators and favoured older applicants. Sex bias was only evident for rating, and only when the applicant's H-index was at the lower end of the H-index distribution. We conclude that, when compared to rating, ranking provides a more reliable assessment of the quality of research applications, is not as influenced by reviewer expertise or experience, and is associated with fewer sources of bias. Research funding agencies should consider adopting ranking methods to improve the quality of funding decisions in health research.
    DOI:  https://doi.org/10.1371/journal.pone.0292306
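      A toy simulation (not the study's analysis) illustrates one reason ranking can be more reliable than rating: coarse ordinal scales discard information that outright rankings preserve. Here two hypothetical panels, with identical noise levels, assess the same 30 applications:

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        quality = rng.normal(size=30)  # latent quality of 30 applications

        def rate(q):
            # 5-point rating: add reviewer noise, then bin coarsely into 1..5.
            return np.clip(np.round(q + rng.normal(scale=0.8, size=q.size)) + 3, 1, 5)

        def rank(q):
            # Outright ranking: same reviewer noise, but the full order is kept.
            return (q + rng.normal(scale=0.8, size=q.size)).argsort().argsort()

        r_rating, _ = spearmanr(rate(quality), rate(quality))
        r_ranking, _ = spearmanr(rank(quality), rank(quality))
        print(f"between-panel agreement, rating:  {r_rating:.2f}")
        print(f"between-panel agreement, ranking: {r_ranking:.2f}")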
  13. Knee Surg Sports Traumatol Arthrosc. 2023 Oct 04.
      Peer review is an essential process to ensure that scientific articles meet high standards of methodology, ethics and quality. The peer-review process is part of the academic mission for physicians in the university setting. The work of reviewers is of great value to authors, as it gives constructive criticism and improves manuscript quality before publication. Often, however, reviews are of suboptimal quality. Usually, reviewers receive no formal training either on how to perform a review or on the peer-review process. In addition, it is generally believed that experienced authors are great reviewers, but this may not always be true. The overarching goal of a review is to make the manuscript better: to help the authors. The purpose of this article is to offer relevant suggestions and provide a checklist on how to perform a useful review.
    DOI:  https://doi.org/10.1007/s00167-023-07595-6
  14. Hastings Cent Rep. 2023 Oct 01.
      Generative artificial intelligence (AI) has the potential to transform many aspects of scholarly publishing. Authors, peer reviewers, and editors might use AI in a variety of ways, and those uses might augment their existing work or might instead be intended to replace it. We are editors of bioethics and humanities journals who have been contemplating the implications of this ongoing transformation. We believe that generative AI may pose a threat to the goals that animate our work but could also be valuable for achieving those goals. In the interests of fostering a wider conversation about how generative AI may be used, we have developed a preliminary set of recommendations for its use in scholarly publishing. We hope that the recommendations and rationales set out here will help the scholarly community navigate toward a deeper understanding of the strengths, limits, and challenges of AI for responsible scholarly work.
    Keywords:  ChatGPT; accountability; bioethics; community of scholars; generative AI; humanities; journal publishing; large language models; transparency
    DOI:  https://doi.org/10.1002/hast.1507
  15. J Coll Physicians Surg Pak. 2023 Oct;33(10): 1198-1200
      Health and scientific researchers in non-English-speaking countries such as Pakistan are often not proficient in English, which limits their ability to communicate their ideas and findings to the international scientific community. ChatGPT is a large language model that can help non-native English speakers write high-quality scientific papers much faster by assisting them in conveying their ideas in a clear and understandable manner, as well as avoiding common language errors. In fact, ChatGPT has already been used in the publication of research papers, literature reviews, and editorials. However, it is imperative to recognise that ChatGPT is still in its early stages, and thus it is important to recognise its limitations. It is suggested that ChatGPT should be employed to complement writing and reviewing tasks but should not be relied on to generate original content or perform essential analysis, as it cannot replace human expertise, contextual knowledge, experience, and intelligence. Researchers should exercise caution and thoroughly scrutinise the generated text for accuracy and plagiarism before incorporating it into their work.
    Keywords:  Artificial intelligence; ChatGPT; Health research; Scientific research
    DOI:  https://doi.org/10.29271/jcpsp.2023.10.1198
  16. Korean J Radiol. 2023 Oct;24(10): 952-959
      Large language models (LLMs) such as ChatGPT have garnered considerable interest for their potential to aid non-native English-speaking researchers. These models can function as personal, round-the-clock English tutors, akin to how Prometheus in Greek mythology bestowed fire upon humans for their advancement. LLMs can be particularly helpful for non-native researchers in writing the Introduction and Discussion sections of manuscripts, where they often encounter challenges. However, using LLMs to generate text for research manuscripts entails concerns such as hallucination, plagiarism, and privacy issues; to mitigate these risks, authors should verify the accuracy of generated content, employ text similarity detectors, and avoid inputting sensitive information into their prompts. Consequently, it may be more prudent to utilize LLMs for editing and refining text rather than generating large portions of text. Journal policies concerning the use of LLMs vary, but transparency in disclosing artificial intelligence tool usage is emphasized. This paper aims to summarize how LLMs can lower the barrier to academic writing in English, enabling researchers to concentrate on domain-specific research, provided they are used responsibly and cautiously.
    Keywords:  Academic writing; Artificial intelligence; ChatGPT; Editing; Generative pretrained transformer; Large language model; Publication
    DOI:  https://doi.org/10.3348/kjr.2023.0773
  17. PLoS One. 2023;18(10): e0292279
     BACKGROUND: Publishing study results in scientific journals has been the standard way of disseminating science. However, getting results published may depend on their statistical significance. The consequence of this is that the representation of scientific knowledge might be biased. This type of bias has been called publication bias. The main objective of the present study is to gain more insight into publication bias by examining it at the author, reviewer, and editor level. Additionally, we make a direct comparison between publication bias induced by authors, by reviewers, and by editors. We approached our participants by e-mail, asking them to fill out an online survey.
    RESULTS: Our findings suggest that statistically significant findings have a higher likelihood to be published than statistically non-significant findings, because (1) authors (n = 65) are more likely to write up and submit articles with significant results compared to articles with non-significant results (median effect size 1.10, BF10 = 1.09 × 10^7); (2) reviewers (n = 60) give more favourable reviews to articles with significant results compared to articles with non-significant results (median effect size 0.58, BF10 = 4.73 × 10^2); and (3) editors (n = 171) are more likely to accept for publication articles with significant results compared to articles with non-significant results (median effect size 0.94, BF10 = 7.63 × 10^7). Evidence on differences in the relative contributions to publication bias by authors, reviewers, and editors is ambiguous (editors vs reviewers: BF10 = 0.31, reviewers vs authors: BF10 = 3.11, and editors vs authors: BF10 = 0.42).
    DISCUSSION: One of the main limitations was that rather than investigating publication bias directly, we studied potential for publication bias. Another limitation was the low response rate to the survey.
    DOI:  https://doi.org/10.1371/journal.pone.0292279
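      For readers unfamiliar with the BF10 values reported above: a Bayes factor quantifies the evidence for an effect relative to the null hypothesis. A hedged sketch of how such a value can be derived from a t statistic, using made-up data rather than the study's survey responses and pingouin's JZS Bayes factor with its default Cauchy prior (r = 0.707):

        import numpy as np
        from scipy.stats import ttest_ind
        import pingouin as pg  # pip install pingouin

        rng = np.random.default_rng(1)
        significant = rng.normal(loc=0.8, size=60)     # e.g. inclination to submit,
        nonsignificant = rng.normal(loc=0.0, size=60)  # significant vs null results

        t, _ = ttest_ind(significant, nonsignificant)
        bf10 = pg.bayesfactor_ttest(t, nx=60, ny=60)   # JZS Bayes factor
        print(f"t = {t:.2f}, BF10 = {bf10:.1f}")       # BF10 > 1 favours a difference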
  18. BMJ Evid Based Med. 2023 Sep 28. pii: bmjebm-2022-112126. [Epub ahead of print]
      
    Keywords:  Evidence-Based Practice; Information Storage and Retrieval; Methods; Policy; Publishing
    DOI:  https://doi.org/10.1136/bmjebm-2022-112126
  19. Jt Comm J Qual Patient Saf. 2023 Aug 30. pii: S1553-7250(23)00203-9. [Epub ahead of print]
       BACKGROUND: Improving quality and safety is a goal in health care, and sharing quality improvement (QI) work with internal and external audiences is key to spreading knowledge and ideas for change. Peer-reviewed journals are interested in manuscripts reporting QI work.
    METHODOLOGY: Although QI work is methodologically different from traditionally published research, it can be publishable if conducted in a way that is scholarly and well planned. The authors suggest that the key strategies for producing publishable, scholarly improvement work fall into two broad categories: rigorous work and compelling writing. Rigorous improvement work includes four key components: (1) understanding baseline processes, (2) developing a solid methodology and measurement plan, (3) analyzing and describing context, and (4) clearly explaining the intervention. Compelling writing starts with clear team expectations defined early in the process, including authorship and division of the work. The team should identify a journal early in the process and follow a clear plan for team writing that includes an outline and frequent feedback.
    CONCLUSION: Elements of rigorous QI work and compelling writing align to develop strong material for publishing scholarly QI work.
    DOI:  https://doi.org/10.1016/j.jcjq.2023.08.002
  20. Kans J Med. 2023;16 247-250
      
    Keywords:  case report; impact factor; journal impact factor; open-access publishing; predatory journals
    DOI:  https://doi.org/10.17161/kjm.vol16.21169
  21. Nurs Sci Q. 2023 Oct;36(4): 321-322
      
    Keywords:  literature reviews; research; systematic inquiry; thematic reviews
    DOI:  https://doi.org/10.1177/08943184221115132
  22. PLoS Biol. 2023 Oct;21(10): e3002234
      Academic journals have been publishing the results of biomedical research for more than 350 years. Reviewing their history reveals that the ways in which journals vet submissions have changed over time, culminating in the relatively recent appearance of the current peer-review process. Journal brand and Impact Factor have meanwhile become quality proxies that are widely used to filter articles and evaluate scientists in a hypercompetitive prestige economy. The Web created the potential for a more decoupled publishing system in which articles are initially disseminated by preprint servers and then undergo evaluation elsewhere. To build this future, we must first understand the roles journals currently play and consider what types of content screening and review are necessary and for which papers. A new, open ecosystem involving preprint servers, journals, independent content-vetting initiatives, and curation services could provide more multidimensional signals for papers and avoid the current conflation of trust, quality, and impact. Academia should strive to avoid the alternative scenario, however, in which stratified publisher silos lock in submissions and simply perpetuate this conflation.
    DOI:  https://doi.org/10.1371/journal.pbio.3002234
  23. Eur Ann Otorhinolaryngol Head Neck Dis. 2023 Sep 29. pii: S1879-7296(23)00121-7. [Epub ahead of print]
      Too many articles are still rejected by scientific medical journals due to poorly prepared manuscripts and lack of familiarity with the modern editorial rules that govern scientific medical writing. The editorial board of the European Annals of Otorhinolaryngology, Head & Neck Diseases therefore summarized studies published by its members since 2020 in the columns of the scientific journal of the French Society of Otorhinolaryngology and the International Francophone Society of Otorhinolaryngology, together with data from the PubMed-indexed literature dedicated to scientific medical writing in otolaryngology in the 21st century. The authors hope that this review, in the form of a list of "Dos and Don'ts", will provide authors with a practical guide facilitating publication of rigorous, reproducible and transparent scientific studies, in line with the movement toward better science that society as a whole has been striving for since the beginning of this century.
    Keywords:  Medical writing; Otorhinolaryngology; Scientific report
    DOI:  https://doi.org/10.1016/j.anorl.2023.09.005
  24. Naunyn Schmiedebergs Arch Pharmacol. 2023 Sep 30.
      Publications in peer-reviewed journals are the most important currency in science. But what about publications in non-peer-reviewed magazines? The objective of this study was to analyze the publications of scientists, with a focus on pharmacologists, in the non-peer-reviewed German science magazine Biospektrum from 1999 to 2021. Biospektrum is edited by five scientific societies in Germany, including the German Society for Experimental and Clinical Pharmacology and Toxicology (DGPT), and provides opportunities for researchers to showcase their research to a broad audience. We analyzed 3197 authors of 1326 articles. Compared to the fields of biochemistry, microbiology, and genetics, pharmacology was largely underrepresented. Just three institutions in Germany contributed most of the papers to Biospektrum. Researchers with a doctoral degree were the largest author group, followed by researchers with a habilitation degree. Among all major fields, women were underrepresented as authors, particularly as senior authors. The Covid pandemic led to a drop in publications by female first authors but not last authors. Compared to publications in the peer-reviewed journal Naunyn-Schmiedeberg's Archives of Pharmacology (Zehetbauer et al., Naunyn-Schmiedebergs Arch Pharmacol 395:39-50 (2022)), female pharmacologists were underrepresented in Biospektrum. Thus, German pharmacologists as a group do not value investing in "social impact" gained by publications in Biospektrum, and this attitude is even more prominent among female pharmacologists. Female pharmacologists' lower investment in "social impact" may result in reduced visibility on the academic job market and may contribute to reduced opportunities to achieve high academic positions.
    Keywords:  Biospektrum; Covid pandemic; Gender studies; Naunyn–Schmiedeberg’s Archives of Pharmacology; Pharmacology
    DOI:  https://doi.org/10.1007/s00210-023-02740-x
  25. Front Psychol. 2023;14 1249857
      Non-native language scholars often struggle to choose between English and their native language in scholarly publishing. This study aims to identify the mechanism by which journal attributes influence language choice by investigating the perspectives of 18 Chinese scholars through semi-structured interviews. Drawing on grounded theory, this study develops a model for how journal attributes influence researchers' language preferences. We find that journal attributes influence researchers' perceived value which, in turn, affects their particular language choice, with contextual factors playing a moderating role. By examining the motivations underlying Chinese scholars' language choice, this study provides a critical understanding of the factors shaping their decision-making processes. These findings have significant implications for Chinese scholars, policymakers, and journal operators, shedding light on the issue of discrimination in academic publishing. Addressing these concerns is crucial for fostering a fair and inclusive academic environment.
    Keywords:  Chinese researchers; academic language choice; grounded theory; journal attributes; non-native language
    DOI:  https://doi.org/10.3389/fpsyg.2023.1249857
  26. J Public Health Manag Pract. 2023 Oct 05.
       CONTEXT: The Centers for Disease Control and Prevention (CDC) has a long history of using high-quality science to drive public health action that has improved the health, safety, and well-being of people in the United States and globally. To ensure scientific quality, manuscripts authored by CDC staff are required to undergo an internal review and approval process known as clearance. During 2022, CDC launched a scientific clearance transformation initiative to improve the efficiency of the clearance process while ensuring scientific quality.
    PROGRAM: As part of the scientific clearance transformation initiative, a group of senior scientists across CDC developed a framework called the Domains of Excellence for High-Quality Publications (DOE framework). The framework includes 7 areas ("domains") that authors can consider when developing high-quality and impactful scientific manuscripts: Clarity, Scientific Rigor, Public Health Relevance, Policy Content, Ethical Standards, Collaboration, and Health Equity. Each domain includes multiple quality elements that highlight specific key considerations.
    IMPLEMENTATION: CDC scientists are expected to use the DOE framework when conceptualizing, developing, revising, and reviewing scientific products to support collaboration and to ensure the quality and impact of their scientific manuscripts.
    DISCUSSION: The DOE framework sets expectations for a consistent standard for scientific manuscripts across CDC and promotes collaboration among authors, partners, and other subject matter experts. Many aspects have broad applicability to the public health field at large and might be relevant for others developing high-quality manuscripts in public health science. The framework can serve as a useful reference document for CDC authors and others in the public health community as they prepare scientific manuscripts for publication and dissemination.
    DOI:  https://doi.org/10.1097/PHH.0000000000001815