bims-skolko Biomed News
on Scholarly communication
Issue of 2022‒10‒02
seventeen papers selected by
Thomas Krichel
Open Library Society


  1. J Evid Based Dent Pract. 2022 09. pii: S1532-3382(21)00121-4. [Epub ahead of print] 22(3): 101646
      OBJECTIVES: To present the actual usage of different structure formats in abstracts of randomized controlled trials (RCTs) and systematic reviews (SRs) published in SCIE-indexed dental journals, and to assess the awareness, knowledge, and attitudes towards the structured formats of RCT and SR abstracts among editors-in-chief (EICs) of dental journals.
    METHODS: In the first part of this study, we selected SCIE-indexed dental journals and assessed their eligibility according to pre-determined criteria. All RCTs and SRs published in the included journals during January-June 2020 were identified through a hand-search. We extracted the actual usage of different structure formats and headings, as well as the relevant editorial policies. In the second part, we conducted an anonymous online survey among the EICs of the included journals.
    RESULTS: A total of 88 journals were included, from which 364 RCT abstracts and 130 SR abstracts were identified. For RCT abstracts, 86% were structured, with 83% in IMRaD format (Introduction, Methods, Results, and Discussion) and 3% in highly structured (HS) format. For SR abstracts, 80% were structured, including 73% in IMRaD and 7% in HS format. According to the "instructions to authors", most journals required either IMRaD (68%) or HS (5%) for RCTs, while fewer than half required either IMRaD (36%) or HS (9%) for SRs. Twenty-one (24%) EICs participated in our survey, among whom 18 agreed that structured formats could improve the reporting quality of RCT abstracts, while only 12 thought the HS format should be widely recommended in the dental field.
    CONCLUSIONS: Compared with the HS format, IMRaD was more frequently used and required among RCT and SR abstracts in dentistry. Structured formats held a relatively high degree of recognition among EICs of dental journals. Joint efforts are needed to improve awareness and usage of the HS format.
    Keywords:  Abstract; Editorial policy; Randomized controlled trial; Structure format; Systematic review
    DOI:  https://doi.org/10.1016/j.jebdp.2021.101646
  2. Dtsch Arztebl Int. 2022 Oct 07. pii: arztebl.m2022.0293. [Epub ahead of print]
      BACKGROUND: Pre-prints have become an increasingly prominent part of the biomedical landscape. For example, during its first month of operation, July 2019, medRxiv received 176 submissions; one year later, in June 2020, during the first months of COVID-19, it received 1866 submissions. The pressing question is how to ensure an accurate scientific record, given that there may be important differences between a pre-print and the peer-reviewed publication.
    METHODS: Based upon the experience of the authors, conversations with editors, and a focused selective review of the literature, including the recommendations of some professional groups, a limited number of practical recommendations were formulated.
    RESULTS: Peer-reviewed journals should request that authors indicate if the submitted manuscript has been posted on a pre-print server; ensure this is noted in the article if it is published, by including the digital object identifier (DOI); and detail any major differences in the conclusions between the pre-print and the article. Pre-print servers should ensure that all content is marked as not peer-reviewed, and should be prepared to retract, within days, any pre-print that is fundamentally flawed and could influence clinical or public health recommendations with therapeutic implications.
    CONCLUSION: Authors, those who operate pre-print servers, and editors of peer-reviewed journals are all responsible for ensuring an accurate scientific record.
    DOI:  https://doi.org/10.3238/arztebl.m2022.0293
  3. R Soc Open Sci. 2022 Sep;9(9): 220440
      Many publications on COVID-19 were released on preprint servers such as medRxiv and bioRxiv. It is unknown how reliable these preprints are, and which ones will eventually be published in scientific journals. In this study, we use crowdsourced human forecasts to predict publication outcomes and future citation counts for a sample of 400 preprints with high Altmetric scores. Most of these preprints were published within 1 year of upload on a preprint server (70%), with a considerable fraction (45%) appearing in a high-impact journal with a journal impact factor of at least 10. On average, the preprints received 162 citations within the first year. We found that forecasters can predict if preprints will be published after 1 year and if the publishing journal has high impact. Forecasts are also informative with respect to Google Scholar citations within 1 year of upload on a preprint server. For both types of assessment, we found statistically significant positive correlations between forecasts and observed outcomes. While the forecasts can help to provide a preliminary assessment of preprints at a faster pace than traditional peer review, it remains to be investigated whether such an assessment is suited to identifying methodological problems in preprints.
    Keywords:  forecasting; preprinting; science policy
    DOI:  https://doi.org/10.1098/rsos.220440
  4. J Gen Intern Med. 2022 Sep 26.
      BACKGROUND: Community members may provide useful perspectives on manuscripts submitted to medical journals.
    OBJECTIVE: To determine the impact of community members reviewing medical journal manuscripts.
    DESIGN: Randomized controlled trial involving 578 original research manuscripts submitted to two medical journals from June 2018 to November 2021.
    PARTICIPANTS: Twenty-eight community members who were trained, supervised, and compensated.
    INTERVENTIONS: A total of 289 randomly selected control manuscripts were reviewed by scientific reviewers only, while 289 randomly selected intervention manuscripts were reviewed by scientific reviewers and one community member. Journal editorial teams used all reviews to make decisions about acceptance, revision, or rejection of manuscripts.
    MAIN MEASURES: Usefulness of reviews to editors, content of community reviews, and changes made to published articles in response to community reviewer comments.
    KEY RESULTS: Editor ratings of community and scientific reviews averaged 3.1 and 3.3, respectively (difference 0.2, 95% confidence interval [CI] 0.1 to 0.3), on a 5-point scale where a higher score indicates a more useful review. Qualitative analysis of the content of community reviews identified two taxonomies of themes: study attributes and viewpoints. Study attributes are the sections, topics, and components of manuscripts commented on by reviewers. Viewpoints are reviewer perceptions and perspectives on the research described in manuscripts and consist of four major themes: (1) diversity of study participants, (2) relevance to patients and communities, (3) cultural considerations and social context, and (4) implementation of research by patients and communities. A total of 186 community reviewer comments were integrated into 64 published intervention group articles. Viewpoint themes were present more often in 66 published intervention articles compared to 54 published control articles (2.8 vs. 1.7 themes/article, difference 1.1, 95% CI 0.4 to 1.8).
    CONCLUSIONS: With training, supervision, and compensation, community members are able to review manuscripts submitted to medical journals. Their comments are useful to editors, address topics relevant to patients and communities, and are reflected in published articles.
    TRIAL REGISTRATION: ClinicalTrials.gov NCT03432143.
    DOI:  https://doi.org/10.1007/s11606-022-07802-z
  5. Indian J Psychol Med. 2022 Sep;44(5): 493-498
      Background: Little is known about the publication outcomes of submissions rejected by specialty psychiatry journals. We aimed to investigate the publication fate of original research manuscripts previously rejected by the Indian Journal of Psychological Medicine (IJPM).
    Methods: A random sample of manuscripts was drawn from all submissions rejected between January 1, 2018, and December 31, 2019. Using the titles of these papers and the author names, a systematic search of electronic databases was carried out to determine whether these manuscripts had been published elsewhere. We extracted data on a range of scientific and nonscientific parameters from the journal's manuscript management portal for every rejected manuscript. Multivariable analysis was used to detect factors associated with eventual publication.
    Results: Out of 302 manuscripts analyzed, 139 (46.0%) were published elsewhere; of these, only 18 articles (13.0%) were published in a journal with higher standing than IJPM. Manuscripts of foreign origin (odds ratio [OR] 1.77, 95% confidence interval [CI] = 1.06-2.97) and rejection following peer review or editorial re-review (OR 2.41, 95% CI = 1.22-4.74) were significantly associated with publication.
    Conclusion: Nearly half of the papers rejected by IJPM were eventually published in other journals, though such papers more often appeared in journals of lower standing. Manuscripts rejected following peer review were more likely to reach full publication than those that were desk rejected.
    Keywords:  Desk rejection; Editorial policy; Peer review; Publication; Triage
    DOI:  https://doi.org/10.1177/02537176211046470
  6. BMC Med. 2022 09 26. 20(1): 363
      BACKGROUND: In the context of the COVID-19 pandemic, randomized controlled trials (RCTs) are essential to support clinical decision-making. We aimed (1) to assess and compare the reporting characteristics of RCTs between preprints and peer-reviewed publications and (2) to assess whether reporting improves after the peer review process for all preprints subsequently published in peer-reviewed journals.
    METHODS: We searched the Cochrane COVID-19 Study Register and the L·OVE COVID-19 platform to identify all reports of RCTs assessing pharmacological treatments of COVID-19, up to May 2021. We extracted indicators of transparency (e.g., trial registration, data sharing intentions) and assessed the completeness of reporting (i.e., some important CONSORT items, conflict of interest, ethical approval) using a standardized data extraction form. We also identified paired reports published as preprints and as peer-reviewed publications.
    RESULTS: We identified 251 trial reports: 121 (48%) were first published in peer-reviewed journals, and 130 (52%) were first published as preprints. Transparency was poor. About half of the trials were prospectively registered (n = 140, 56%); 38% (n = 95) made their full protocols available, and 29% (n = 72) provided access to their statistical analysis plan. A data sharing statement was reported in 68% (n = 170) of the reports, of which 91% stated a willingness to share. Completeness of reporting was low: only 32% (n = 81) of trials completely defined the pre-specified primary outcome measures, and 57% (n = 143) reported the process of allocation concealment. Overall, 51% (n = 127) adequately reported the results for the primary outcomes, while only 14% (n = 36) of trials adequately described harms. Primary outcome(s) reported in trial registries and published reports were inconsistent in 49% (n = 104) of trials; of these, only 15% (n = 16) disclosed outcome switching in the report. There were no major differences between preprints and peer-reviewed publications. Of the 130 RCTs published as preprints, 78 were subsequently published in a peer-reviewed journal. There was no major improvement after the journal peer review process for most items.
    CONCLUSIONS: Transparency, completeness, and consistency of reporting of COVID-19 clinical trials were insufficient in both preprints and peer-reviewed publications. A comparison of paired reports published as preprints and as peer-reviewed publications did not indicate major improvement.
    Keywords:  CONSORT; COVID-19; Peer review; Completeness of reporting; Quality of reporting; Randomized controlled trial; Selection bias; Selective outcome reporting; Transparency
    DOI:  https://doi.org/10.1186/s12916-022-02567-y
  7. Nature. 2022 09;609(7929): 875-876
      
    Keywords:  Authorship; Ethics; Publishing; Research data; Research management; Society
    DOI:  https://doi.org/10.1038/d41586-022-03035-6
  8. Am J Vet Res. 2022 Sep 29. pii: ajvr.22.10.editorial. [Epub ahead of print] 83(10):
      
    DOI:  https://doi.org/10.2460/ajvr.22.10.editorial
  9. Sr Care Pharm. 2022 Oct 01. 37(10): 469-470
      
    DOI:  https://doi.org/10.4140/TCP.n.2022.469
  10. BMJ Open. 2022 Sep 28. 12(9): e066624
      OBJECTIVE: To test whether providing relevant clinical trial registry information to peer reviewers evaluating trial manuscripts decreases discrepancies between registered and published trial outcomes.
    DESIGN: Stepped wedge, cluster-randomised trial, with clusters comprising the eligible manuscripts submitted to each participating journal between 1 November 2018 and 31 October 2019.
    SETTING: Thirteen medical journals.
    PARTICIPANTS: Manuscripts were eligible for inclusion if they were submitted to a participating journal during the study period, presented results from the primary analysis of a clinical trial, and were peer reviewed.
    INTERVENTIONS: During the control phase, there were no changes to pre-existing peer review practices. After journals crossed over into the intervention phase, peer reviewers received a data sheet describing whether trials were registered, the initial registration and enrolment dates, and the primary outcome(s) as registered when enrolment began.
    MAIN OUTCOME MEASURE: The presence of a clearly defined, prospectively registered primary outcome consistent with the primary outcome in the published trial manuscript, as determined by two independent outcome assessors.
    RESULTS: We included 419 manuscripts (243 control and 176 intervention). Participating journals published 43% of control-phase manuscripts and 39% of intervention-phase manuscripts (model-estimated percentage difference between intervention and control trials = -10%, 95% CI -25% to 4%). Among the 173 accepted trials, published primary outcomes were consistent with clearly defined, prospectively registered primary outcomes in 40 of 105 (38%) control-phase trials and 27 of 68 (40%) intervention-phase trials. A linear mixed model did not show evidence of a statistically significant primary outcome effect from the intervention (estimated difference between intervention and control = -6%, 90% CI -27% to 15%; one-sided p value = 0.68).
    CONCLUSIONS: These results do not support use of the tested intervention as implemented here to increase agreement between prospectively registered and published trial outcomes. Other approaches are needed to improve the quality of outcome reporting of clinical trials.
    TRIAL REGISTRATION NUMBER: ISRCTN41225307.
    Keywords:  GENERAL MEDICINE (see Internal Medicine); STATISTICS & RESEARCH METHODS; World Wide Web technology
    DOI:  https://doi.org/10.1136/bmjopen-2022-066624