bims-skolko Biomed News
on Scholarly communication
Issue of 2022-06-19
twelve papers selected by
Thomas Krichel, Open Library Society



  1. Indian J Med Ethics. 2022 Mar 22. 1-8
      In this article, I argue that many of the ethical problems associated with the authorship of journal literature can usefully be clarified if authorship is placed within the broader concept of attribution, which extends beyond the author byline to encompass everything that readers are told about the work's origination and the parties responsible. I also suggest that as the attribution of literature has grown more complex, and the opportunities for misattribution have become more subtle and multifarious, attribution has become increasingly vulnerable to systematic bias. Accordingly, I define "credit bias" as the systematic distortion of attribution, frequently in the interests of those with influence over the publication. I present a four-step framework for evaluating publications, discuss misattribution in drug industry literature as an illustration of credit bias, and examine the role of editorial standards in mitigating, but also in assisting, credit bias. I also argue for an independent scientific standard to promote ethical conduct in the medical journal sector.
    DOI:  https://doi.org/10.20529/IJME.2022.023
  2. Indian J Med Ethics. 2022 Feb 23. 1-5
      The discovery of a case of data manipulation resulting in the retraction of a high-impact paper revived conversations about scientific misconduct in India. Such malpractice is neither new nor rare. When it is discovered, there is a tendency to push the blame onto a junior author. But what makes one eligible to be an author of a scientific manuscript? In a case of misconduct, which authors must take the blame, and how do we hold them accountable? In this essay, I use the case of the recent retraction mentioned above to highlight the contentious nature of authorship in science.
    DOI:  https://doi.org/10.20529/IJME.2022.015
  3. Clin Rheumatol. 2022 Jun 13.
       OBJECTIVES: To assess the quality and performance of manuscripts previously rejected by a rheumatology-focused journal.
    METHODS: This was a cross-sectional, audit-type, exploratory study of manuscripts submitted to Clinical Rheumatology (CLRH) and rejected by one associate editor in 2019. We used a 36-item quality assessment instrument (5-point ordinal scale, 1 being worst). Performance variables included whether a rejected manuscript was published in another PubMed-listed journal, the impact factor of the publishing journal (Scimago), the number of citations (Web of Science), and social media attention (Altmetric). Exploratory variables included the authors' past publications, use of reporting guidelines, and text structure; they were assessed using non-parametric tests.
    RESULTS: In total, 165 manuscripts were rejected. Reporting guidelines were used in only five (4%) manuscripts. The mean overall quality rating was 2.48 ± 0.73, with 54% of manuscripts rated 2; 40-80% were rated < 3 on crucial items. Over a 26-month follow-up, 79 (48%) of the rejected manuscripts were published in other journals, mostly with lower impact factors; 70% of these received at least one citation, compared with 90.5% of manuscripts published in CLRH. Altmetric scores were significantly lower for manuscripts published elsewhere than for those published in CLRH. As for text structure, the methods and results sections were shorter, and the discussion longer, than suggested. The corresponding authors' past experience and text structure were not associated with quality or acceptance.
    CONCLUSIONS: Research report quality is an area for improvement, mainly for items critical to explaining the research and findings. Journals should encourage the use of reporting guidelines.
    Key Points:
    • The quality of research reports (in rejected manuscripts) is insufficient.
    • Reporting guidelines are seldom used in rejected manuscripts.
    • A manuscript rejected by Clinical Rheumatology may subsequently be published in another journal with a lower impact factor and receive fewer citations and less social media attention than accepted manuscripts.
    Keywords:  Editorial policies; Journal impact factor; Manuscripts, medical as topic*; Peer review, research; Publishing/statistics & numerical data*; Rheumatology/statistics & numerical data
    DOI:  https://doi.org/10.1007/s10067-022-06238-4
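    To make the performance comparison concrete: citations and Altmetric attention are skewed, count-like outcomes, which is why non-parametric tests are the natural analysis choice here. Below is a minimal sketch in Python of such a comparison; the scores are invented for illustration, and scipy's mannwhitneyu is a standard choice rather than necessarily the authors' exact procedure.

        # Compare Altmetric attention between two groups of manuscripts.
        # The scores below are invented for illustration only.
        from scipy.stats import mannwhitneyu

        altmetric_clrh = [4, 7, 2, 11, 5, 9, 3, 6]       # hypothetical
        altmetric_elsewhere = [1, 0, 2, 3, 0, 1, 4, 2]   # hypothetical

        # Mann-Whitney U avoids the normality assumption a t-test would
        # need for skewed attention-score data.
        stat, p = mannwhitneyu(altmetric_clrh, altmetric_elsewhere,
                               alternative="two-sided")
        print(f"U = {stat:.1f}, p = {p:.4f}")
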
  4. BMC Res Notes. 2022 Jun 11. 15(1): 203
      The rising rate of preprints and publications, combined with persistent inadequate reporting practices and problems with study design and execution, has strained the traditional peer review system. Automated screening tools could potentially enhance peer review by helping authors, journal editors, and reviewers to identify beneficial practices and common problems in preprints or submitted manuscripts. Tools can screen many papers quickly, and may be particularly helpful in assessing compliance with journal policies and with straightforward items in reporting guidelines. However, existing tools cannot understand or interpret the paper in the context of the scientific literature. Tools cannot yet determine whether the methods used are suitable to answer the research question, or whether the data support the authors' conclusions. Editors and peer reviewers are essential for assessing journal fit and the overall quality of a paper, including the experimental design, the soundness of the study's conclusions, potential impact and innovation. Automated screening tools cannot replace peer review, but may aid authors, reviewers, and editors in improving scientific papers. Strategies for responsible use of automated tools in peer review may include setting performance criteria for tools, transparently reporting tool performance and use, and training users to interpret reports.
    Keywords:  Automated screening; Peer review; Reproducibility; Rigor; Transparency
    DOI:  https://doi.org/10.1186/s13104-022-06080-6
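    As an illustration of the "straightforward items" such tools can check, here is a toy rule-based screen in Python. It is not any existing tool; the checklist items and patterns are invented for the example.

        import re

        # Reporting items simple enough for automated screening; the
        # regular expressions are illustrative, not from a real tool.
        CHECKS = {
            "sample size justification": r"sample size|power (analysis|calculation)",
            "randomization": r"randomi[sz](ed|ation)",
            "blinding": r"blind(ed|ing)",
            "data availability": r"data (are|is) available|data availability",
            "ethics approval": r"ethic(s|al) (committee|approval|board)|\bIRB\b",
        }

        def screen(manuscript_text):
            """Flag which reporting items appear to be addressed."""
            return {item: bool(re.search(pat, manuscript_text, re.IGNORECASE))
                    for item, pat in CHECKS.items()}

        text = ("Patients were randomized and assessors were blinded. "
                "The study was approved by the ethics committee.")
        for item, found in screen(text).items():
            print(f"{'PASS' if found else 'FLAG'}: {item}")

    Note what the sketch cannot do, which echoes the abstract's point: it can tell you whether blinding is mentioned, not whether the design actually answers the research question.
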
  5. J Pediatr Rehabil Med. 2022 Jun 06.
    Keywords:  Peer review; fellows; pediatric rehabilitation medicine; residents
    DOI:  https://doi.org/10.3233/PRM-229005
  6. PeerJ. 2022; 10: e13539
      Reviewers not only help editors screen manuscripts for publication in academic journals; they also increase the rigor and value of manuscripts through constructive feedback. However, measuring this developmental function of peer review is difficult, as it requires fine-grained data on reports and journals and lacks any optimal benchmark. To fill this gap, we adapted a recently proposed quality assessment tool and tested it on a sample of 1.3 million reports submitted to 740 Elsevier journals in 2018-2020. Results showed that the developmental standards of peer review are shared across areas of research, yet with remarkable differences. Reports submitted to social science and economics journals show the highest developmental standards. Reports from junior reviewers, women, and reviewers based in Western Europe are generally more developmental than those from senior reviewers, men, and reviewers working at academic institutions outside Western regions. Our findings suggest that raising the standards of peer review at journals requires effort to assess interventions and measure practices with context-specific, multi-dimensional frameworks.
    Keywords:  Academic journals; Natural language processing; Peer review; Reviewers; Standards
    DOI:  https://doi.org/10.7717/peerj.13539
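    The instrument used in the study is multi-dimensional and NLP-based; as a crude, purely hypothetical stand-in, one can imagine scoring a report by the density of constructive-suggestion phrases:

        import re

        # Invented marker phrases; the study's actual tool is far richer.
        SUGGESTION_MARKERS = [
            r"\bI suggest\b", r"\bconsider\b",
            r"\bplease (add|clarify|report)\b",
            r"\bcould be (improved|strengthened)\b",
        ]

        def developmental_score(report):
            """Suggestion phrases per 100 words -- a toy proxy only."""
            words = len(report.split())
            hits = sum(len(re.findall(m, report, re.IGNORECASE))
                       for m in SUGGESTION_MARKERS)
            return 100 * hits / max(words, 1)

        print(developmental_score(
            "Consider reporting effect sizes; I suggest clarifying the "
            "sampling frame, which could be improved."))
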
  7. Cad Saude Publica. 2022; 38(6): e00089822. pii: S0102-311X2022000600101. [Epub ahead of print]
    DOI:  https://doi.org/10.1590/0102-311XPT089822
  8. Scientometrics. 2022 May 25. 1-18
      The role of preprints in scientific production and their share of citations have grown over the past 10 years. In this paper we study preprint citations in several respects: the progression of preprint citations over time, their relative frequencies across the IMRaD sections of articles, and their distributions over time, per preprint database, and per PLOS journal. We processed the PLOS corpus, which covers 7 journals and a total of about 240,000 articles up to January 2021, and produced a dataset of 8460 preprint citation contexts citing 12 different preprint databases. Our results show that preprint citations occur most frequently in the Methods section of articles, though small variations exist across journals. PLOS Computational Biology stands out, containing more than three times as many preprint citations as any other PLOS journal. The relative shares of the different preprint databases are also examined. While arXiv and bioRxiv are the most frequent citation sources, bioRxiv's disciplinary nature is visible in that it is the source of more than 70% of preprint citations in PLOS Biology, PLOS Genetics, and PLOS Pathogens. Finally, a lexicometric analysis shows that preprint citation contexts differ significantly from the citation contexts of peer-reviewed publications, confirming that authors use different lexical content when citing preprints.
    Keywords:  Citation contexts; Correspondence analysis; IMRaD; PLOS; Preprint
    DOI:  https://doi.org/10.1007/s11192-022-04388-5
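    A minimal sketch of how preprint citations can be flagged in reference strings, in the spirit of the corpus analysis above. The patterns cover only arXiv identifiers and the 10.1101 DOI prefix shared by bioRxiv and medRxiv, whereas the paper tracks 12 preprint databases; the reference strings are invented.

        import re

        PREPRINT_PATTERNS = {
            "arXiv": re.compile(r"arXiv:\s*(\d{4}\.\d{4,5}|[a-z\-]+/\d{7})", re.I),
            "bioRxiv/medRxiv": re.compile(r"10\.1101/[0-9.]+"),
        }

        def preprint_sources(reference):
            """Name the preprint servers a reference string appears to cite."""
            return [name for name, pat in PREPRINT_PATTERNS.items()
                    if pat.search(reference)]

        # Invented reference strings for illustration.
        refs = [
            "Smith J. Deep learning for X. arXiv:2006.01234, 2020.",
            "Doe A. A genomic atlas. bioRxiv doi:10.1101/2020.03.22.002386.",
            "Roe B. A reviewed article. J Important Res. 2019;12:34-56.",
        ]
        for r in refs:
            print(preprint_sources(r) or ["none"], "<-", r)
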
  9. Indian J Med Ethics. 2022 Jan-Mar; VII(1): 1-2
      Thank you for submitting your review comments on my diligently drafted manuscript. I appreciate the opportunity to evaluate your review, but unfortunately (highlighted so you get the gist of this email and don't jump for joy prematurely), your reviews seem to lack the optimism I was looking for.
    DOI:  https://doi.org/10.20529/IJME.2020.103
  10. Niger J Clin Pract. 2022 Jun; 25(6): 817-824
       Background: The publication rate of abstracts is a measure of the quality of scientific meetings.
    Aims: The present study aimed to determine the radiation oncologists' publication rates of abstracts presented at the National Radiation Oncology Congresses (UROK) and National Cancer Congresses (UKK) and identify the top journals that published these studies.
    Materials and Methods: We reviewed the abstracts presented at UROK and UKK, held between 2013 and 2017. To retrieve any publications originating from the presented abstracts, we searched for matching terms in public databases, including PubMed, Web of Science, Google Scholar, and the Turkish Academic Network and Information Center (ULAKBIM). We evaluated the articles' publication dates and peer-review history and noted the journals' impact factors.
    Results: Three thousand seven hundred six abstracts were accepted for presentation; 1178 papers met the study criteria and were included in the analyses. There were 297 oral and 881 poster presentations. The overall publication rate was 18.9%. The median time to publication was 12 months. The studies were published in 94 scientific journals with a median impact factor of 1.28. Breast cancer and lung cancer studies had the highest publication rates among all subspecialties (15.2%). Retrospective studies had higher publication rates than those with other study designs (P < 0.0001).
    Conclusion: Almost 20% of the abstracts presented at UROK and UKK were converted into full-text publications, most within 2 years of presentation. Oral presentations had a significantly higher publication rate than poster presentations, reflecting their higher quality. The authors' affiliations and the study designs were among the significant factors determining publication success.
    Keywords:  Abstract; annual congress; impact factor; publication rate; radiation oncology
    DOI:  https://doi.org/10.4103/njcp.njcp_1794_21
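    One step of this kind of follow-up search can be automated against PubMed's E-utilities esearch endpoint (a real, documented NCBI API); the other databases the authors searched, such as Web of Science and ULAKBIM, are not covered by this sketch, and the example title is hypothetical.

        import requests

        ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

        def pubmed_matches(title):
            """Return PubMed IDs whose Title field matches the given string."""
            params = {"db": "pubmed", "term": f"{title}[Title]",
                      "retmode": "json"}
            r = requests.get(ESEARCH, params=params, timeout=30)
            return r.json()["esearchresult"]["idlist"]

        # Hypothetical congress-abstract title:
        print(pubmed_matches("Publication rates of radiation oncology abstracts"))
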
  11. Sci Data. 2022 Jun 17. 9(1): 345
      Data sharing can accelerate scientific discovery while increasing the return on investment beyond the researcher or group that produced the data. Data repositories enable data sharing and preservation over the long term, but little is known about scientists' perceptions of repositories or their perspectives on data management and sharing practices. In focus groups with scientists from five disciplines (atmospheric and earth science, computer science, chemistry, ecology, and neuroscience), we asked questions about data management to lead into a discussion of the features that data repository systems and services need if they are to help scientists implement the data sharing and preservation parts of their data management plans. Participants identified metadata quality control and training as problem areas in data management. They also discussed several desired repository features, including metadata control, data traceability, security, stable infrastructure, and data use restrictions. We present these desired features as a rubric for the research community to encourage repository use. Future directions for research are discussed.
    DOI:  https://doi.org/10.1038/s41597-022-01428-w
  12. Gigascience. 2022 Jun 14. 11: giac058. [Epub ahead of print]
      Research resource identifiers (RRIDs) are persistent unique identifiers for the scientific resources, such as reagents and tools, that are used to conduct studies. Including these identifiers in the scientific literature has been shown to improve the reproducibility of papers because resources, such as antibodies, become easier to find, making methods easier to reproduce. RRIDs also dramatically reduce the use of problematic resources, such as contaminated cell lines. Adding RRIDs to a manuscript means that authors must look up information they may previously have omitted, or confront reported problems with their resources. The use of RRIDs is primarily driven by champion journals, such as GigaScience. Although still nascent, the practice lays important groundwork for citation types that can cover non-traditional scholarly output, such as software tools and key reagents, giving the authors of such tools scholarly credit for their contributions.
    DOI:  https://doi.org/10.1093/gigascience/giac058
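    The RRID:<REGISTRY>_<accession> syntax makes these identifiers easy to harvest from a methods section. A minimal extraction sketch follows; the registry mapping covers only three common prefixes, and the example RRIDs are illustrative rather than verified entries.

        import re

        RRID_RE = re.compile(r"RRID:\s*([A-Z]+)_([A-Za-z0-9]+)")
        REGISTRIES = {"AB": "antibody", "SCR": "software tool",
                      "CVCL": "cell line"}

        def extract_rrids(methods_text):
            """Return (rrid, resource type) pairs found in the text."""
            return [(f"RRID:{p}_{a}", REGISTRIES.get(p, "unknown registry"))
                    for p, a in RRID_RE.findall(methods_text)]

        # Illustrative methods sentence with example RRIDs.
        methods = ("Cells were stained with anti-GFAP (RRID:AB_2298772) and "
                   "analysed in GraphPad Prism (RRID:SCR_002798).")
        print(extract_rrids(methods))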