bims-skolko Biomed News
on Scholarly communication
Issue of 2022-02-13
25 papers selected by
Thomas Krichel, Open Library Society



  1. Curr Drug Res Rev. 2022 Feb 09.
      A megajournal is a peer-reviewed scientific open access journal designed, owing to its low selectivity in accepting articles, to be much larger than a classical traditional journal. Its major hallmarks are low-selectivity review criteria largely focused on the scientific soundness of the research methodology and on ethical issues, without regard to the importance and application of the results; advocated fast peer review; and a very broad scope, usually covering a whole discipline such as biomedicine or social science. This publishing model was pioneered by PLOS One and was soon followed by other publishers. A few years ago there was a belief that the academic journal landscape would become dominated by the megajournal model, but a decline has been registered in the last few years. This editorial aims to present the current state of the art of open-access megajournals (OAMJs) in the universe of scientific publications.
    Keywords:  broad scope; open-access megajournals; scientific publishing; scientific soundness
    DOI:  https://doi.org/10.2174/2589977514666220209101713
  2. Scientometrics. 2022 Jan 31. 1-14
      COVID-19-related (vs. non-related) articles appear to be more expeditiously processed and published in peer-reviewed journals. We aimed to evaluate: (i) whether COVID-19-related preprints were favored for publication, (ii) preprinting trends and public discussion of the preprints, and (iii) the relationship between the publication topic (COVID-19-related or not) and quality issues. Manuscripts deposited at bioRxiv and medRxiv between January 1 and September 27, 2020 were assessed for the probability of publication in peer-reviewed journals, and those published were evaluated for submission-to-acceptance time. The extent of public discussion was assessed based on Altmetric and Disqus data. The Retraction Watch Database and PubMed were used to explore the retraction of COVID-19 and non-COVID-19 articles and preprints. With adjustment for the preprinting server and number of deposited versions, COVID-19-related preprints were more likely to be published within 120 days of deposition of the first version (OR = 1.96, 95% CI: 1.80-2.14) as well as over the entire observed period (OR = 1.39, 95% CI: 1.31-1.48). Submission-to-acceptance time was 35.85 days (95% CI: 32.25-39.45) shorter for COVID-19 articles. Public discussion of preprints was modest, and COVID-19 articles were overrepresented in the pool of retracted articles in 2020. Current data suggest a preference for publication of COVID-19-related preprints over the observed period.
    Supplementary Information: The online version contains supplementary material available at 10.1007/s11192-021-04249-7.
    Keywords:  COVID19; Peer-review; Preprint; Publishing
    DOI:  https://doi.org/10.1007/s11192-021-04249-7
  3. United European Gastroenterol J. 2022 Feb;10(1): 130-133
      
    Keywords:  journal; open access; predatory; publish
    DOI:  https://doi.org/10.1002/ueg2.12198
  4. Heart Lung. 2022 Feb 03. pii: S0147-9563(22)00008-5. [Epub ahead of print]53 32-35
      Methodological transparency and reproducibility are essential for systematic reviews. Peer review of systematic review manuscripts ensures researchers achieve transparency and reproducibility. Using critical appraisal and quality assessment tools is a methodological way for peer reviewers to conduct a thorough critique to assess the rigor and transparency of the systematic review.
    Keywords:  AMSTAR 2; JBI; Meta-analysis; PRESS; Peer review; ROBIS; Systematic review
    DOI:  https://doi.org/10.1016/j.hrtlng.2022.01.008
  5. Front Res Metr Anal. 2021 ;6 751734
      A wide array of existing metrics quantifies a scientific paper's prominence or the author's prestige. Many who use these metrics make assumptions that higher citation counts or more public attention must indicate more reliable, better quality science. While current metrics offer valuable insight into scientific publications, they are an inadequate proxy for measuring the quality, transparency, and trustworthiness of published research. Three essential elements to establishing trust in a work include: trust in the paper, trust in the author, and trust in the data. To address these elements in a systematic and automated way, we propose the ripetaScore as a direct measurement of a paper's research practices, professionalism, and reproducibility. Using a sample of our current corpus of academic papers, we demonstrate the ripetaScore's efficacy in determining the quality, transparency, and trustworthiness of an academic work. In this paper, we aim to provide a metric to evaluate scientific reporting quality in terms of transparency and trustworthiness of the research, professionalism, and reproducibility.
    Keywords:  reproducibility; research integrity; research metrics; research quality; scientific indicators
    DOI:  https://doi.org/10.3389/frma.2021.751734
  6. Front Res Metr Anal. 2022 ;7 812312
      
    Keywords:  academic disciplines; academic publishing; bibliometrics; higher education research; research evaluation
    DOI:  https://doi.org/10.3389/frma.2022.812312
  7. J Am Acad Dermatol. 2022 Feb 04. pii: S0190-9622(22)00198-0. [Epub ahead of print]
      
    Keywords:  Frequency; editorial rejections; medical journals; patterns; peer review
    DOI:  https://doi.org/10.1016/j.jaad.2022.01.047
  8. Nurse Educ Pract. 2022 Feb 01. pii: S1471-5953(22)00018-X. [Epub ahead of print]59 103304
      
    DOI:  https://doi.org/10.1016/j.nepr.2022.103304
  9. J Occup Health Psychol. 2022 Feb;27(1): 1-2
      In this brief article, the editor of the Journal of Occupational Health Psychology notes that there has been a rapid increase in the visibility of occupational health psychology over the last 25 years, which has seen growing impact and importance of OHP topics. In this time, the nature of work has changed considerably due to significant societal and technological transformations and particularly over the last 2 years as the result of the coronavirus disease (COVID-19) global pandemic, which has impacted all of our lives, including our mental health, well-being, and safety in the context of work. The author welcomes the incoming editorial team, thanks members of the outgoing editorial team, and thanks the editorial board for their support as she starts her editorial tenure in 2022. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
    DOI:  https://doi.org/10.1037/ocp0000319
  10. Hum Brain Mapp. 2022 Feb 10.
      Sharing data is a scientific imperative that accelerates scientific discoveries, reinforces open science inquiry, and allows for efficient use of public investment and research resources. Considering these benefits, data sharing has been widely promoted in diverse fields and neuroscience has been no exception to this movement. For all its promise, however, the sharing of human neuroimaging data raises critical ethical and legal issues, such as data privacy. Recently, the heightened risks to data privacy posed by the rapid advances in artificial intelligence and machine learning techniques have made data sharing more challenging; the regulatory landscape around data sharing has also been evolving rapidly. Here we present an in-depth ethical and regulatory analysis that examines how neuroimaging data are currently shared against the backdrop of the relevant regulations and policies in the United States and how advanced software tools and algorithms might undermine subjects' privacy in neuroimaging data sharing. The implications of these novel technological threats to privacy in neuroimaging data sharing practices and policies will also be discussed. We then conclude with a proposal for a legal prohibition against malicious use of neuroscience data as a regulatory mechanism to address privacy risks associated with the data while maximizing the benefits of data sharing and open science practice in the field of neuroscience.
    Keywords:  data privacy; data re-identification; data sharing; data use agreement; neuroethics; neuroimaging
    DOI:  https://doi.org/10.1002/hbm.25803
  11. PLoS One. 2022 ;17(2): e0263725
      Social media has permeated every area of life, and social media platforms have become indispensable for today's communication. Many journals use social media actively to promote and disseminate new articles. Sharing articles this way brings many benefits, such as reaching more people and spreading information faster. However, there is no consensus among studies comparing tweeted and non-tweeted papers regarding their citation numbers. This study therefore aimed to show the effect of social media on the citations of articles in the top ten communication-based journals. For this purpose, this work evaluated original articles published in the top 10 communication journals in 2018. The top 10 communication-based journals were chosen based on SCImago Journal & Country Rank (cited in 2019). Afterward, the traditional citation numbers (Google Scholar and Thomson Reuters Web of Science) and social media exposure of the articles were recorded in January 2021 (nearly three years after the articles' publication date). It was assumed that this period would allow the impact of the published articles (the citations and Twitter mentions) to be fully observed. Based on this assessment, a positive correlation between exposure to social media and article citations was observed in this study.
    DOI:  https://doi.org/10.1371/journal.pone.0263725
  12. Behav Brain Sci. 2022 Feb 10. 45 e26
      Artificial intelligence (AI) shares many generalizability challenges with psychology. But the fields publish differently. AI publishes fast, through rapid preprint sharing and conference publications. Psychology publishes more slowly, but creates integrative reviews and meta-analyses. We discuss the complementary advantages of each strategy, and suggest that incorporating both types of strategies could lead to more generalizable research in both fields.
    DOI:  https://doi.org/10.1017/S0140525X21000224
  13. Am J Med Sci. 2022 Feb 02. pii: S0002-9629(22)00060-X. [Epub ahead of print]
      
    Keywords:  COVID-19; PubMed; publications; retractions
    DOI:  https://doi.org/10.1016/j.amjms.2022.01.014
  14. J Dev Behav Pediatr. 2022 Feb 02.
       OBJECTIVE: Individuals with developmental conditions, such as autism, experience stigma, which is reflected in derogatory language and labels. To limit stigma associated with disabilities, government agencies and medical organizations have adopted the use of person-centered language (PCL). This study investigated adherence to PCL guidelines among peer-reviewed research publications focused on autism. In addition, we investigated the co-occurrence of stigmatizing language in articles using person-first language (PFL) and identity-first language (IFL) styles.
    METHODS: We performed a systematic search of PubMed for autism-focused articles from January 2019 to May 2020. Articles from journals with more than 20 search returns were included, and a random sample of 700 publications were screened and examined for inclusion of prespecified, non-PCL terminology.
    RESULTS: Of the 315 publications, 156 (49.5%) were PCL compliant. Articles frequently used PCL and non-PCL terminology concomitantly, and 10% of publications included obsolete nomenclature. A logistic regression model showed that publications using IFL were more likely to include other stigmatizing terminology than publications using PFL (odds ratio = 2.03, 95% confidence interval: 1.15-3.58).
    CONCLUSION: Within medical research, the language used to describe individuals and populations needs to be chosen with intentionality and to acknowledge that individuals are more than the diagnosis under study. This may reduce the structural stigma that might otherwise be implied. Our study showed that when PFL is used to address individuals with autism, other stigmatizing language is often avoided, in line with medical education and clinical practice.
    DOI:  https://doi.org/10.1097/DBP.0000000000001038
  15. Eur J Orthod. 2022 Feb 11. pii: cjac001. [Epub ahead of print]
       AIM: To assess the extent of publication bias assessment in systematic reviews (SRs) across the orthodontic literature over the last 12 years and to identify the appropriateness of assessment and association with publication characteristics, including year of publication, journal, searching practices within unpublished literature or attempts to contact primary study authors and others.
    MATERIALS AND METHODS: We searched six journals and the Cochrane Database of Systematic Reviews for relevant articles, since January 2010, until November 2021. We recorded practices interrelated with publication bias assessment, at the SR and meta-analysis level. These pertained to reporting strategies for searching within unpublished literature, attempts to communicate with authors of primary studies and formal assessment of publication bias either graphically or statistically. Potential associations between publication bias assessment practices with variables such as journal, year, methodologist involvement, and others were sought at the meta-analysis level.
    RESULTS: A total of 289 SRs were ultimately included, 139 of which incorporated at least one available mathematical synthesis. Efforts to search within unpublished literature were reported in 191 of 289 reviews (66.1%), while efforts to communicate with primary study authors were recorded for 150 of 289 (51.9%). An appropriate strategy to address publication bias, conditional on the number of studies available and the methodology reported, was followed in 78 of the 139 meta-analyses (56.1%). Formal publication bias assessment was actually reported in 35 of 139 meta-analyses (25.2%), while only about half of those (19/35; 54.3%) followed an appropriately established methodology. Ten of the latter 19 studies detected the presence of publication bias (52.6%). Predictor variables of appropriate publication bias assessment did not reveal any significant effects.
    CONCLUSIONS: Appropriate methodology and rigorous practices for appraisal of publication bias have been underreported in SRs within the orthodontic literature from 2010 to date, while other established practices, including search strategies for unpublished data and communication with authors, currently appear suboptimal.
    DOI:  https://doi.org/10.1093/ejo/cjac001
  16. J Hip Preserv Surg. 2021 Jul;8(2): 143-144
      
    DOI:  https://doi.org/10.1093/jhps/hnab074
  17. Behav Brain Sci. 2022 Feb 10. 45 e30
      Improvements to the validity of psychological science depend upon more than the actions of individual researchers. Editors, journals, and publishers wield considerable power in shaping the incentives that have ushered in the generalizability crisis. These gatekeepers must raise their standards to ensure authors' claims are supported by evidence. Unless gatekeepers change, changes made by individual scientists will not be sustainable.
    DOI:  https://doi.org/10.1017/S0140525X21000546
  18. J Hand Surg Eur Vol. 2022 Feb 06. 17531934221076996
      
    DOI:  https://doi.org/10.1177/17531934221076996
  19. J Pediatr Urol. 2022 Jan 23. pii: S1477-5131(22)00017-1. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.jpurol.2021.12.020