bims-skolko Biomed News
on Scholarly communication
Issue of 2023‒12‒03
35 papers selected by
Thomas Krichel, Open Library Society



  1. Acta Med Port. 2023 Nov 30.
      
    Keywords:  Artificial Intelligence; Plagiarism; Scientific Integrity Review
    DOI:  https://doi.org/10.20344/amp.20233
  2. J Korean Med Sci. 2023 Nov 27. 38(46): e390
      BACKGROUND: Retraction is a correction process for the scientific literature that acts as a barrier to the dissemination of articles that have serious faults or misleading data. The purpose of this study was to investigate the characteristics of retracted papers from Kazakhstan.
    METHODS: Utilizing data from Retraction Watch, this cross-sectional descriptive analysis documented all retracted papers from Kazakhstan without regard to publication dates. The following data were recorded: publication title, DOI number, number of authors, publication date, retraction date, source, publication type, subject category of publication, collaborating country, and retraction reason. Source index status, Scopus citation value, and Altmetric Attention Score were obtained.
    RESULTS: Following the search, a total of 92 retracted papers were discovered. One duplicate article was excluded, leaving 91 publications for analysis. Most articles were retracted in 2022 (n = 22) and 2018 (n = 19). Among the identified publications, 49 (53.9%) were research articles, 39 (42.9%) were conference papers, 2 (2.2%) were review articles, and 1 (1.1%) was a book chapter. Russia (n = 24) and China (n = 5) were the most collaborative countries in the retracted publications. Fake-biased peer review (n = 38), plagiarism (n = 25), and duplication (n = 14) were the leading causes of retraction.
    CONCLUSION: The vast majority of the publications were research articles and conference papers. Russia was the leading collaborative country. The most prominent retraction reasons were fake-biased peer review, plagiarism, and duplication. Efforts to raise researchers' understanding of the grounds for retraction and ethical research techniques are required in Kazakhstan.
    Keywords:  Ethics; Kazakhstan; Peer Review; Plagiarism; Publications; Retraction of Publication; Scientific Misconduct
    DOI:  https://doi.org/10.3346/jkms.2023.38.e390
  3. Adv Pharm Bull. 2023 Nov;13(4): 627-634
      Purpose: Flattering emails are crucial in tempting authors to submit papers to predatory journals. Although there is ample literature regarding the questionable practices of predatory journals, the nature and detection of spam emails need more attention. The current research provides insight into fallacious calls for papers from potential predatory journals and develops a toolkit in this regard.
    Methods: In this study, we analyzed three datasets of calls for papers from potential predatory journals and legitimate journals using a text mining approach and the R programming language.
    Results: Overall, most potential predatory journals use similar language and templates in their calls for papers. Importantly, these journals praise themselves in glorious terms involving positive words that may be rarely seen in emails from legitimate journals. Based on these findings, we developed a lexicon for detecting unsolicited calls for papers from potential predatory journals.
    Conclusion: We conclude that calls for papers from potential predatory journals differ from those of legitimate journals, and that these differences can help distinguish the two. With an educational plan and easily usable tools, predatory journals can be dealt with more effectively than before.
    Keywords:  Academic ethics; Calls for papers; Data science; Journal publishing; Predatory journal; Sentiment analysis
    DOI:  https://doi.org/10.34172/apb.2023.068
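
    A minimal illustrative sketch of the lexicon-based detection idea described above, assuming a hypothetical flattery word list (the paper's actual lexicon and R implementation are not reproduced here):

```python
# Sketch of lexicon-based scoring of a call-for-papers email.
# The word list is an illustrative placeholder, not the lexicon from the paper.
import re

FLATTERY_LEXICON = {
    "esteemed", "eminent", "renowned", "prestigious", "honored",
    "distinguished", "outstanding", "impressive", "glorious",
}

def flattery_score(email_text: str) -> float:
    """Return the fraction of words found in the flattery lexicon."""
    words = re.findall(r"[a-z\-]+", email_text.lower())
    if not words:
        return 0.0
    return sum(w in FLATTERY_LEXICON for w in words) / len(words)

sample = ("Dear esteemed Dr. Smith, your impressive and outstanding work "
          "would honor our prestigious journal.")
print(f"flattery score: {flattery_score(sample):.3f}")
```
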
  4. Diagn Interv Imaging. 2023 Nov 30. pii: S2211-5684(23)00220-6. [Epub ahead of print]
      
    Keywords:  Peer review; Scholarly journals
    DOI:  https://doi.org/10.1016/j.diii.2023.11.003
  5. Korean J Radiol. 2023 Dec;24(12): 1179-1189
      OBJECTIVE: We aimed to evaluate the reporting quality of research articles that applied deep learning to medical imaging. Using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines and a journal with prominence in Asia as a sample, we intended to provide an insight into reporting quality in the Asian region and establish a journal-specific audit.
    MATERIALS AND METHODS: A total of 38 articles published in the Korean Journal of Radiology between June 2018 and January 2023 were analyzed. The analysis included calculating the percentage of studies that adhered to each CLAIM item and identifying items that were met by ≤ 50% of the studies. The article review was initially conducted independently by two reviewers, and the consensus results were used for the final analysis. We also compared adherence rates to CLAIM before and after December 2020.
    RESULTS: Of the 42 items in the CLAIM guidelines, 12 items (29%) were satisfied by ≤ 50% of the included articles. None of the studies reported handling missing data (item #13). Only one study each presented the use of de-identification methods (#12), intended sample size (#19), robustness or sensitivity analysis (#30), and the full study protocol (#41). Of the studies, 35% reported the selection of data subsets (#10), 40% reported registration information (#40), and 50% measured inter- and intrarater variability (#18). No significant changes were observed in the rates of adherence to these 12 items before and after December 2020.
    CONCLUSION: The reporting quality of artificial intelligence studies according to CLAIM guidelines, in our study sample, showed room for improvement. We recommend that the authors and reviewers have a solid understanding of the relevant reporting guidelines and ensure that the essential elements are adequately reported when writing and reviewing the manuscripts for publication.
    Keywords:  Artificial intelligence; Asia; CLAIM guidelines; Medical imaging; Reporting quality
    DOI:  https://doi.org/10.3348/kjr.2023.1027
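
    The audit above reduces to computing, for each checklist item, the percentage of articles that reported it and flagging items met by 50% or fewer of the studies. A minimal sketch of that calculation with toy data and hypothetical column names (not the authors' CLAIM dataset):

```python
# Per-item adherence summary for a reporting checklist (toy data).
import pandas as pd

# Rows = articles, columns = checklist items, values = 1 (reported) / 0 (not).
data = pd.DataFrame(
    {"item_10": [1, 0, 1, 0], "item_13": [0, 0, 0, 0], "item_18": [1, 1, 0, 1]},
    index=["article_1", "article_2", "article_3", "article_4"],
)

adherence = data.mean().mul(100).round(1)              # % of articles meeting each item
low_items = adherence[adherence <= 50].index.tolist()  # items met by <= 50% of studies

print(adherence)
print("Items met by <= 50% of studies:", low_items)
```
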
  6. Nature. 2023 Nov;623(7989): 916
      
    Keywords:  Communication; Machine learning; Publishing
    DOI:  https://doi.org/10.1038/d41586-023-03739-3
  7. Cochrane Database Syst Rev. 2023 Nov 28. 11: MR000056
      BACKGROUND: Funders and scientific journals use peer review to decide which projects to fund or articles to publish. Reviewer training is an intervention to improve the quality of peer review. However, studies on the effects of such training yield inconsistent results, and there are no up-to-date systematic reviews addressing this question.
    OBJECTIVES: To evaluate the effect of peer reviewer training on the quality of grant and journal peer review.
    SEARCH METHODS: We used standard, extensive Cochrane search methods. The latest search date was 27 April 2022.
    SELECTION CRITERIA: We included randomized controlled trials (RCTs; including cluster-RCTs) that evaluated peer review with training interventions versus usual processes, no training interventions, or other interventions to improve the quality of peer review.
    DATA COLLECTION AND ANALYSIS: We used standard Cochrane methods. Our primary outcomes were 1. completeness of reporting and 2. peer review detection of errors. Our secondary outcomes were 1. bibliometric scores, 2. stakeholders' assessment of peer review quality, 3. inter-reviewer agreement, 4. process-centred outcomes, 5. peer reviewer satisfaction, and 6. completion rate and speed of funded projects. We used the first version of the Cochrane risk of bias tool to assess the risk of bias, and we used GRADE to assess the certainty of evidence.
    MAIN RESULTS: We included 10 RCTs with a total of 1213 units of analysis. The unit of analysis was the individual reviewer in seven studies (722 reviewers in total), and the reviewed manuscript in three studies (491 manuscripts in total). In eight RCTs, participants were journal peer reviewers. In two studies, the participants were grant peer reviewers. The training interventions can be broadly divided into dialogue-based interventions (interactive workshop, face-to-face training, mentoring) and one-way communication (written information, video course, checklist, written feedback). Most studies were small. We found moderate-certainty evidence that emails reminding peer reviewers to check items of reporting checklists, compared with standard journal practice, have little or no effect on the completeness of reporting, measured as the proportion of items (from 0.00 to 1.00) that were adequately reported (mean difference (MD) 0.02, 95% confidence interval (CI) -0.02 to 0.06; 2 RCTs, 421 manuscripts). There was low-certainty evidence that reviewer training, compared with standard journal practice, slightly improves peer reviewer ability to detect errors (MD 0.55, 95% CI 0.20 to 0.90; 1 RCT, 418 reviewers). We found low-certainty evidence that reviewer training, compared with standard journal practice, has little or no effect on stakeholders' assessment of review quality in journal peer review (standardized mean difference (SMD) 0.13 standard deviations (SDs), 95% CI -0.07 to 0.33; 1 RCT, 418 reviewers), or change in stakeholders' assessment of review quality in journal peer review (SMD -0.15 SDs, 95% CI -0.39 to 0.10; 5 RCTs, 258 reviewers). We found very low-certainty evidence that a video course, compared with no video course, has little or no effect on inter-reviewer agreement in grant peer review (MD 0.14 points, 95% CI -0.07 to 0.35; 1 RCT, 75 reviewers). There was low-certainty evidence that structured individual feedback on scoring, compared with general information on scoring, has little or no effect on the change in inter-reviewer agreement in grant peer review (MD 0.18 points, 95% CI -0.14 to 0.50; 1 RCT, 41 reviewers).
    AUTHORS' CONCLUSIONS: Evidence from 10 RCTs suggests that training peer reviewers may lead to little or no improvement in the quality of peer review. There is a need for studies with more participants and a broader spectrum of valid and reliable outcome measures. Studies evaluating stakeholders' assessments of the quality of peer review should ensure that these instruments have sufficient levels of validity and reliability.
    DOI:  https://doi.org/10.1002/14651858.MR000056.pub2
  8. Nature. 2023 Nov;623(7989): 916
      
    Keywords:  Peer review; Publishing
    DOI:  https://doi.org/10.1038/d41586-023-03740-w
  9. Eur J Radiol. 2023 Nov 22. pii: S0720-048X(23)00526-0. [Epub ahead of print] 170: 111212
      There is a need to ensure the accuracy of linguistic descriptors in the medical literature, including that related to radiology, to allow peers and professionals to communicate ideas and scientific results in a clear and unambiguous manner. This letter highlights an issue that could undermine the clarity of scientific writing in radiology literature, namely the presence of non-standard terminology for established jargon, and emphasizes the need for authors to transparently declare the use of language editing services and AI-driven tools, such as ChatGPT, if these have been used to formulate text and ideas in their papers. Ultimately, clear radiology papers that are compliant with current publishing ethics will serve radiologists and patients well.
    Keywords:  Ethics; Jargon; Medical communication; Scientific ethos; Truth
    DOI:  https://doi.org/10.1016/j.ejrad.2023.111212
  10. Front Artif Intell. 2023; 6: 1283353
      The integration of large language models (LLMs) and artificial intelligence (AI) into scientific writing, especially in medical literature, presents both unprecedented opportunities and inherent challenges. This manuscript evaluates the transformative potential of LLMs for the synthesis of information, linguistic enhancements, and global knowledge dissemination. At the same time, it raises concerns about unintentional plagiarism, the risk of misinformation, data biases, and an over-reliance on AI. To address these, we propose governing principles for AI adoption that ensure integrity, transparency, validity, and accountability. Additionally, guidelines for reporting AI involvement in manuscript development are delineated, and a classification system to specify the level of AI assistance is introduced. This approach uniquely addresses the challenges of AI in scientific writing, emphasizing transparency in authorship, qualification of AI involvement, and ethical considerations. Concerns regarding access equity, potential biases in AI-generated content, authorship dynamics, and accountability are also explored, emphasizing the human author's continued responsibility. Recommendations are made for fostering collaboration between AI developers, researchers, and journal editors and for emphasizing the importance of AI's responsible use in academic writing. Regular evaluations of AI's impact on the quality and biases of medical manuscripts are also advocated. As we navigate the expanding realm of AI in scientific discourse, it is crucial to maintain the human element of creativity, ethics, and oversight, ensuring that the integrity of scientific literature remains uncompromised.
    Keywords:  artificial intelligence; ethics; guidelines and recommendations; innovation; large language model; medicine; natural language processing; scientific writing
    DOI:  https://doi.org/10.3389/frai.2023.1283353
  11. Clin Chem Lab Med. 2023 Nov 30.
      BACKGROUND: In the rapidly evolving landscape of artificial intelligence (AI), scientific publishing is experiencing significant transformations. AI tools, while offering unparalleled efficiencies in paper drafting and peer review, also introduce notable ethical concerns.
    CONTENT: This study delineates AI's dual role in scientific publishing: as a co-creator in the writing and review of scientific papers and as an ethical challenge. We first explore the potential of AI as an enhancer of efficiency, efficacy, and quality in creating scientific papers. A critical assessment follows, evaluating the risks vs. rewards for researchers, especially those early in their careers, emphasizing the need to maintain a balance between AI's capabilities and fostering independent reasoning and creativity. Subsequently, we delve into the ethical dilemmas of AI's involvement, particularly concerning originality, plagiarism, and preserving the genuine essence of scientific discourse. The evolving dynamics further highlight an overlooked aspect: the inadequate recognition of human reviewers in the academic community. With the increasing volume of scientific literature, tangible metrics and incentives for reviewers are proposed as essential to ensure a balanced academic environment.
    SUMMARY: AI's incorporation in scientific publishing is promising yet comes with significant ethical and operational challenges. The role of human reviewers is accentuated, ensuring authenticity in an AI-influenced environment.
    OUTLOOK: As the scientific community treads the path of AI integration, a balanced symbiosis between AI's efficiency and human discernment is pivotal. Emphasizing human expertise, while exploiting artificial intelligence responsibly, will determine the trajectory of an ethically sound and efficient AI-augmented future in scientific publishing.
    Keywords:  ChatGPT; artificial intelligence; authorship; ethical implications; peer review; scientific writing
    DOI:  https://doi.org/10.1515/cclm-2023-1136
  12. Nat Neurosci. 2023 Dec;26(12): 2038
      
    DOI:  https://doi.org/10.1038/s41593-023-01529-8
  13. J Am Acad Child Adolesc Psychiatry. 2023 Dec;62(12): 1295-1296. pii: S0890-8567(23)02063-4
      Five years ago, we wrote to you regarding our launching a new initiative for JAACAP: study registration.1 As we noted then, "study registration divides the peer review process into two stages. The first stage, preregistration, occurs at the time that the study is being planned, whereas the second occurs after the study is completed." To preregister their study, authors submit a manuscript consisting of the introduction and method sections for their study, along with a study synopsis, for peer review. If the study preregistration is approved after this initial peer review, the Journal will issue an in-principle acceptance to the authors, and the study synopsis will be published in JAACAP as a registered study protocol. When the study is completed, the authors will submit a complete manuscript, using the introduction and method sections that have already been reviewed and accepted (with an updated literature review) as well as their new results and discussion sections. This complete manuscript will undergo a second peer review focused on how consistent the manuscript is with the study's preregistration. If the paper is then accepted, it will be published as a Registered Report.1 We are pleased to report that with this issue of the Journal we have now published 2 such research articles, each demonstrating the strengths of this process.
    DOI:  https://doi.org/10.1016/j.jaac.2023.09.532
  14. Adv Health Sci Educ Theory Pract. 2023 Dec 01.
      This column is intended to address the kinds of knotty problems and dilemmas with which many scholars grapple in studying health professions education. In this article, the authors conclude their short series of articles on academic authorship by addressing the question of how to determine author order, including taking into account power dynamics that may be at play.
    DOI:  https://doi.org/10.1007/s10459-023-10308-w
  15. J Oral Maxillofac Pathol. 2023 Jul-Sep;27(3): 524-527
      Authors have a multitude of options for journals in which to publish their research. However, their choice is mostly based on the academic credits required for promotion, the cost of publication, the timeliness of the process, and so on. The purpose of this narrative review is to enlighten authors about other journal metrics used to assess journal ranking and quality in the international scenario. The main concepts discussed in this paper are the impact factor and CiteScore. The paper includes an explanation of terms such as Web of Science and Journal Citation Reports, and how they are related to the impact factor. This will help authors make the right decision about choosing the right journal for publishing their research. Along with the historical concepts, we have included the latest updates about the changes to the Journal Citation Reports and impact factor released in June 2023. We hope this review will encourage the inclusion of such concepts in the curriculum of postgraduate courses, considering that publishing a paper and choosing a journal are an integral part of a researcher's work life.
    Keywords:  Author; cite score; impact factor; journal quality; journal ranking; publish
    DOI:  https://doi.org/10.4103/jomfp.jomfp_316_23
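
    For orientation, the two headline metrics the review discusses follow simple ratios; a short worked sketch with invented citation counts (not figures from any real journal):

```python
# Worked example of the two-year Journal Impact Factor and the CiteScore ratio.
# All counts below are invented for illustration.

def impact_factor(citations_to_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    """JIF for year Y: citations received in Y to items published in Y-1 and Y-2,
    divided by the citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

def cite_score(citations_four_years: int, documents_four_years: int) -> float:
    """CiteScore for year Y: citations in Y-3..Y to documents published in Y-3..Y,
    divided by the number of documents published in Y-3..Y."""
    return citations_four_years / documents_four_years

print(f"JIF       = {impact_factor(450, 180):.2f}")   # 2.50
print(f"CiteScore = {cite_score(1200, 400):.2f}")     # 3.00
```
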
  16. Harv Data Sci Rev. 2022; 4(SI3)
      N-of-1 trials are multiple crossover trials done over time within a single person; they can also be done with a series of individuals. Their focus on the individual as the unit of analysis maintains statistical power while accommodating greater differences between patients than most standard clinical trials. This makes them particularly useful in rare diseases, while also being applicable across many health conditions and populations. Best practices recommend the use of reporting guidelines to publish research in a standardized and transparent fashion. N-of-1 trials have the SPIRIT extension for N-of-1 protocols (SPENT) and the CONSORT extension for N-of-1 trials (CENT). Open science is a recent movement focused on making scientific knowledge fully available to anyone, increasing collaboration, and sharing of scientific efforts. Open science goals increase research transparency, rigor, and reproducibility, and reduce research waste. Many organizations and articles focus on specific aspects of open science, for example, open access publishing. Throughout the trajectory of research (idea, development, running a trial, analysis, publication, dissemination, knowledge translation/reflection), many open science ideals are addressed by the individual-focused nature of N-of-1 trials, including issues such as patient perspectives in research development, personalization, and publications, enhanced equity from the broader inclusion criteria possible, and easier remote trials options. However, N-of-1 trials also help us understand areas of caution, such as monitoring of post hoc analyses and the nuances of confidentiality for rare diseases in open data sharing. The N-of-1 reporting guidelines encourage rigor and transparency of N-of-1 considerations for key aspects of the research trajectory.
    Keywords:  N-of-1 trials; open science; reporting guidelines; reproducible research; scholarly communication; single case experimental designs
    DOI:  https://doi.org/10.1162/99608f92.a65a257a
  17. Am J Sports Med. 2023 Dec;51(14): 3632-3633
      
    Keywords:  peer review; publication; research integrity
    DOI:  https://doi.org/10.1177/03635465231210848
  18. Adv Pharm Bull. 2023 Nov;13(4): 635-638
      Modern science has been transformed by open access (OA) publishing, which has levied a significant economic burden on authors. This article analyzes the discrepancies among OA publication fees in pharmacology, toxicology, and pharmaceutics. The observations comprise 160 OA journals and their corresponding Q ranking, SJR, H index, impact factor, country, and cost of publication. The OA fees were found to depend on these quality metrics, which was unexpected. Differences in OA fees raise ethical questions about whether the fees are meant to cover publishers' publication costs or to generate more revenue by taking advantage of authors' temptation to publish in high-impact journals. Although our findings are based on a limited sample size and a particular field (pharmacy), they shed considerable light on the issue of discrepancies among the APCs charged by OA journals.
    Keywords:  Impact factor; Open access; Publication fees; Q ranking
    DOI:  https://doi.org/10.34172/apb.2023.076
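
    A hedged sketch of the kind of analysis described above, relating article processing charges to journal quality indicators; the column names and values are hypothetical, and Spearman correlation is one reasonable choice rather than necessarily the authors' method:

```python
# Relating APCs to journal metrics (toy data, hypothetical column names).
import pandas as pd

journals = pd.DataFrame({
    "apc_usd":       [900, 1500, 2100, 2900, 3500],
    "impact_factor": [1.2, 2.4, 3.1, 4.8, 6.0],
    "h_index":       [25, 40, 55, 80, 110],
})

# Spearman handles rank-like quality indicators without assuming linearity.
print(journals.corr(method="spearman").round(2))
```
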
  19. J Allied Health. 2023; 52(4): 241
      Periodicals in the biomedical and natural sciences differ in fundamental ways, such as whether they use an impact factor. Peer review is considered another key element in scientific publications, but it can also be viewed as having various flaws: it is poor at detecting fraud, highly subjective, prone to bias, expensive, and easily abused. Single-blind peer review is the traditional model in which reviewers know the identity of authors, but the reverse is not true, thereby raising a related concern that there is a serious power imbalance. A recent study found that, after switching from single-blind to double-blind peer review, the quality of review reports, measured using the modified Review Quality Instrument (RQI), improved. The results indicate that double-blind peer review is a feasible model for a journal in a small language area without major downsides. The Journal of Allied Health uses double-blind peer review.
  20. J Infect. 2023 Nov 24. pii: S0163-4453(23)00582-0. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.jinf.2023.11.012
  21. Hum Reprod. 2023 Nov 28. pii: dead248. [Epub ahead of print]
      STUDY QUESTION: What were the frequency and temporal trends of reporting P-values and effect measures in the abstracts of reproductive medicine studies in 1990-2022, how were reported P-values distributed, and what proportion of articles presenting statistical inference reported statistically significant results, i.e. 'positive' results?
    SUMMARY ANSWER: Around one in six abstracts reported P-values alone without effect measures, while the prevalence of effect measures, whether reported alone or accompanied by P-values, has been increasing, especially in meta-analyses and randomized controlled trials (RCTs); the reported P-values were frequently observed around certain cut-off values, notably at 0.001, 0.01, or 0.05, and among abstracts presenting statistical inference (i.e. P-values, CIs, or significance terms), a large majority (77%) reported at least one statistically significant finding.
    WHAT IS KNOWN ALREADY: Publishing or reporting only results that show a 'positive' finding causes bias in evaluating interventions and risk factors and may incur adverse health outcomes for patients. Despite efforts to minimize publication reporting bias in medical research, it remains unclear whether the magnitude and patterns of the bias have changed over time.
    STUDY DESIGN, SIZE, DURATION: We studied abstracts of reproductive medicine studies from 1990 to 2022. The reproductive medicine studies were published in 23 first-quartile journals under the category of Obstetrics and Gynaecology and Reproductive Biology in Journal Citation Reports and 5 high-impact general medical journals (The Journal of the American Medical Association, The Lancet, The BMJ, The New England Journal of Medicine, and PLoS Medicine). Articles without abstracts, animal studies, and non-research articles, such as case reports or guidelines, were excluded.
    PARTICIPANTS/MATERIALS, SETTING, METHODS: Automated text-mining was used to extract three types of statistical significance reporting, including P-values, CIs, and text description. Meanwhile, abstracts were text-mined for the presence of effect size metrics and Bayes factors. Five hundred abstracts were randomly selected and manually checked for the accuracy of automatic text extraction. The extracted statistical significance information was then analysed for temporal trends and distribution in general as well as in subgroups of study designs and journals.
    MAIN RESULTS AND THE ROLE OF CHANCE: A total of 24 907 eligible reproductive medicine articles were identified from 170 739 screened articles published in 28 journals. The proportion of abstracts not reporting any statistical significance inference halved from 81% (95% CI, 76-84%) in 1990 to 40% (95% CI, 38-44%) in 2021, while reporting P-values alone remained relatively stable, at 15% (95% CI, 12-18%) in 1990 and 19% (95% CI, 16-22%) in 2021. By contrast, the proportion of abstracts reporting effect measures alone increased considerably from 4.1% (95% CI, 2.6-6.3%) in 1990 to 26% (95% CI, 23-29%) in 2021. Similarly, the proportion of abstracts reporting effect measures together with P-values showed substantial growth from 0.8% (95% CI, 0.3-2.2%) to 14% (95% CI, 12-17%) during the same timeframe. Of 30 182 statistical significance inferences, 56% (n = 17 077) conveyed statistical inferences via P-values alone, 30% (n = 8945) via text description alone such as significant or non-significant, 9.3% (n = 2820) via CIs alone, and 4.7% (n = 1340) via both CI and P-values. The reported P-values (n = 18 417), including both a continuum of P-values and dichotomized P-values, were frequently observed around common cut-off values such as 0.001 (20%), 0.05 (16%), and 0.01 (10%). Of the 13 200 reproductive medicine abstracts containing at least one statistical inference, 77% made at least one statistically significant statement. Among articles that reported statistical inference, a decline in the proportion making at least one statistically significant inference was only seen in RCTs, dropping from 71% (95% CI, 48-88%) in 1990 to 59% (95% CI, 42-73%) in 2021, whereas the proportion in the remaining study types remained almost constant over the years. Of abstracts that reported P-values, 87% (95% CI, 86-88%) reported at least one statistically significant P-value; it was 92% (95% CI, 82-97%) in 1990 and reached its peak at 97% (95% CI, 93-99%) in 2001 before declining to 81% (95% CI, 76-85%) in 2021.
    LIMITATIONS, REASONS FOR CAUTION: First, our analysis focused solely on reporting patterns in abstracts but not full-text papers; however, in principle, abstracts should include condensed impartial information and avoid selective reporting. Second, while we attempted to identify all types of statistical significance reporting, our text mining was not flawless. However, the manual assessment showed that inaccuracies were not frequent.
    WIDER IMPLICATIONS OF THE FINDINGS: There is a welcome trend that effect measures are increasingly reported in the abstracts of reproductive medicine studies, specifically in RCTs and meta-analyses. Publication reporting bias remains a major concern. Inflated estimates of interventions and risk factors could harm decisions built upon biased evidence, including clinical recommendations and planning of future research.
    STUDY FUNDING/COMPETING INTEREST(S): No funding was received for this study. B.W.M. is supported by an NHMRC Investigator grant (GNT1176437); B.W.M. reports research grants and travel support from Merck and consultancy from Merck and ObsEva. W.L. is supported by an NHMRC Investigator Grant (GNT2016729). Q.F. reports receiving a PhD scholarship from Merck. The other author has no conflict of interest to declare.
    TRIAL REGISTRATION NUMBER: N/A.
    Keywords:   P-values; publication bias; publication reporting bias; reporting quality; significance chasing
    DOI:  https://doi.org/10.1093/humrep/dead248
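
    The study above relies on automated text mining of abstracts. A minimal sketch of how P-values and confidence intervals might be extracted with regular expressions; the patterns are illustrative and are not the authors' actual pipeline:

```python
# Extracting P-values and 95% CIs from abstract text (illustrative patterns only).
import re

P_VALUE = re.compile(r"\bp\s*[<>=]\s*0?\.\d+", re.IGNORECASE)
CONF_INT = re.compile(r"\b95%\s*CI[^;)]*", re.IGNORECASE)

abstract = ("The intervention reduced risk (OR 0.72, 95% CI 0.58 to 0.90; "
            "P = 0.004), while the secondary outcome was not significant (P > 0.05).")

print(P_VALUE.findall(abstract))   # ['P = 0.004', 'P > 0.05']
print(CONF_INT.findall(abstract))  # ['95% CI 0.58 to 0.90']
```
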
  22. J Sports Sci. 2023 Nov 29. 1-11
      Two factors that decrease the replicability of studies in the scientific literature are publication bias and studies with underpowered designs. One way to ensure that studies have adequate statistical power to detect the effect size of interest is by conducting a-priori power analyses. Yet, a previous editorial published in the Journal of Sports Sciences reported a median sample size of 19 and the scarce usage of a-priori power analyses. We meta-analysed 89 studies from the same journal to assess the presence and extent of publication bias, as well as the average statistical power, by conducting a z-curve analysis. In a larger sample of 174 studies, we also examined a) the usage, reporting practices and reproducibility of a-priori power analyses; and b) the prevalence of reporting practices of t-statistic or F-ratio, degrees of freedom, exact p-values, effect sizes and confidence intervals. Our results indicate that there was some indication of publication bias and the average observed power was low (53% for significant and non-significant findings and 61% for only significant findings). Finally, the usage and reporting practices of a-priori power analyses as well as statistical results including test statistics, effect sizes and confidence intervals were suboptimal.
    Keywords:  Replicability; publication bias; reporting practices; reproducibility; statistical power
    DOI:  https://doi.org/10.1080/02640414.2023.2269357
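
    The entry above highlights the scarcity of a-priori power analyses and the low observed power of published designs. A minimal sketch of an a-priori power calculation for a two-group comparison, assuming a medium effect size and conventional alpha and target power (values are illustrative, not taken from the meta-analysis):

```python
# A-priori power analysis for an independent-samples t-test (illustrative values).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a medium effect (d = 0.5)
# with 80% power at alpha = 0.05 (two-sided).
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required n per group: {n_per_group:.1f}")      # roughly 64

# Achieved power for n = 19 per group (the median sample size cited above)
# under the same assumed effect size.
achieved = analysis.power(effect_size=0.5, nobs1=19, alpha=0.05)
print(f"power with n = 19 per group: {achieved:.2f}")  # well below 0.80
```
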
  23. Front Sociol. 2023; 8: 1277292
      
    Keywords:  adequacy for science; commercial scholarly publishers; copyright; extraterritorial state obligations; global South; right to science; scholarly knowledge commons; scholarly publications
    DOI:  https://doi.org/10.3389/fsoc.2023.1277292
  24. Online J Public Health Inform. 2023; 15: e50243
      Founded in 2009, the Online Journal of Public Health Informatics (OJPHI) strives to provide an unparalleled experience as the platform of choice to advance public and population health informatics. As a premier peer-reviewed journal in this field, OJPHI's mission is to serve as an advocate for the discipline through the dissemination of public health informatics research results and best practices among practitioners, researchers, policymakers, and educators. However, in the current environment, running an independent open access journal has not been without challenges. Judging from the low geographic spread of our current stakeholders, the overreliance on a small volunteer management staff, the limited scope of topics published by the journal, and the long article turnaround time, it is obvious that OJPHI requires a change in direction in order to fully achieve its mission. Fortunately, our new publisher JMIR Publications is the leading brand in this field, with a portfolio of top peer-reviewed journals covering innovation, technology, digital medicine and health services research in the internet age. Under the leadership of JMIR Publications, OJPHI plans to expand its scope to include new topics such as precision public health informatics, the use of artificial intelligence and machine learning in public health research and practice, and infodemiology in public health informatics.
    Keywords:  artificial intelligence; data science; disease prevention; health promotion; precision public health; public health informatics
    DOI:  https://doi.org/10.2196/50243