bims-skolko Biomed News
on Scholarly communication
Issue of 2019‒10‒20
sixteen papers selected by
Thomas Krichel
Open Library Society


  1. J Clin Child Adolesc Psychol. 2019 Oct 16. 1-14
    Youngstrom EA, Salcedo S, Frazier TW, Perez Algorta G.
      In 2018, De Los Reyes and Langer expanded the scope of the Evidence Base Updates series to include reviews of psychological assessment techniques. In keeping with the goal of offering clear "take-home messages" about the evidence underlying the technique, experts have proposed a rubric for evaluating the reliability and validity support. Changes in the research environment and pressures in the peer review process, as well as a lack of familiarity with some statistical methods, have created a situation in which many findings that appear "excellent" in the rubric are likely to be "too good to be true," in the sense that they are unlikely to generalize to clinical settings or are unlikely to be reproduced in independent samples. We describe several common scenarios in which published results are often too good to be true, including internal consistency, interrater reliability, correlation, standardized mean differences, diagnostic accuracy, and global model fit statistics. Simple practices could go a long way toward improving design, reporting, and interpretation of findings. When effect sizes are in the "excellent" range for issues that have been challenging, scrutinize before celebrating. When benchmarks are available based on theory or meta-analyses, results that are moderately better than expected in the favorable direction (i.e., Cohen's q ≥ +.30) also invite critical appraisal and replication before application. If readers and reviewers pull for transparency and do not unduly penalize authors who provide it, then change in research quality will be faster and both generalizability and reproducibility are likely to benefit.
    DOI:  https://doi.org/10.1080/15374416.2019.1669158
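The Cohen's q ≥ +.30 benchmark mentioned in the abstract above compares two correlations on the Fisher z scale. A minimal sketch of that computation (the function name and example values are illustrative, not from the paper):

```python
import math

def cohens_q(r1: float, r2: float) -> float:
    """Cohen's q: difference between Fisher z-transformed correlations."""
    return math.atanh(r1) - math.atanh(r2)

# Illustrative values: an observed reliability of .95 against
# a hypothetical meta-analytic benchmark of .80
q = cohens_q(0.95, 0.80)
print(round(q, 2))  # 0.73, well past the +.30 threshold that invites scrutiny
```

On this reading, a result that beats its benchmark by q ≥ +.30 in the favorable direction is the kind of "too good to be true" finding the authors suggest appraising critically before application.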
  2. J Med Libr Assoc. 2019 Oct;107(4): 618-620
    Lapidow A, Scudder P.
      In most scientific communities, the order of author names on a publication serves to assign credit and responsibility. Unless authors are presented in alphabetical order, it is assumed that the first author contributes the most and the last author is the driving force, both intellectually and financially, behind the research. Many, but not all, journals individually delineate what it means to be a contributing author and the nature of each author's role. But what does this mean when a paper has co-first authors? How are academic librarians going to handle questions surrounding co-first authorship in an era in which author metrics are important for career advancement and tenure? In this commentary, the authors look at the growing trend of co-first authorship and what this means for database searchers.
    DOI:  https://doi.org/10.5195/jmla.2019.700
  3. Nature. 2019 Oct;574(7778): 441-442
    Savage V, Yeh P.
      
    Keywords:  Authorship; Careers; Publishing
    DOI:  https://doi.org/10.1038/d41586-019-02918-5
  4. Can J Anaesth. 2019 Oct 15.
    Nair S, Yean C, Yoo J, Leff J, Delphin E, Adams DC.
      BACKGROUND: Increasing awareness of scientific misconduct has prompted various fields of medicine, including orthopedic surgery, neurosurgery, and dentistry, to characterize the reasons for article retraction. The purpose of this review was to evaluate the reasons for and the rate of article retraction in the field of anesthesia within the last 30 years.
    METHODS: Based on a reproducible search strategy, two independent reviewers searched MEDLINE, EMBASE, and the Retraction Watch website to identify retracted anesthesiology articles. Extracted data included: author names, year of publication, year of retraction, journal name, journal five-year impact factor, research type (clinical, basic science, or review), reason for article retraction, number of citations, and presence of a watermark indicating article retraction.
    RESULTS: Three hundred and fifty articles were included for data extraction. Reasons for article retraction could be grouped into six broad categories. The most common reason for retraction was fraud (data fabrication or manipulation), which accounted for nearly half (49.4%) of all retractions, followed by lack of appropriate ethical approval (28%). Other reasons for retraction included publication issues (e.g., duplicate publications), plagiarism, and studies with methodologic or other non-fraud data issues. Four authors were associated with most of the retracted articles (59%). The majority (69%) of publications utilized a watermark on the original article to indicate that the article was retracted. Journal Citation Reports journal impact factors ranged from 0.9 to 48.1 (median [interquartile range (IQR)], 3.6 [2.5-4.0]), and the most cited article was referenced 197 times (median [IQR], 13 [5-26]). Most retracted articles (66%) were cited at least once by other journal articles after having been withdrawn.
    CONCLUSIONS: Most retracted articles in the anesthesiology literature were retracted because of research misconduct. Retraction notices often provide limited information, making it challenging to distinguish an honest error from research misconduct unless the reason is explicitly stated. A standardized reporting process with structured retraction notices is therefore desirable.
    DOI:  https://doi.org/10.1007/s12630-019-01508-3
  5. J Med Libr Assoc. 2019 Oct;107(4): 468-471
    Akers KG, Read KB, Amos L, Federer LM, Logan A, Plutchak TS.
      As librarians are generally advocates of open access and data sharing, it is a bit surprising that peer-reviewed journals in the field of librarianship have been slow to adopt data sharing policies. Starting October 1, 2019, the Journal of the Medical Library Association (JMLA) is taking a step forward and implementing a firm data sharing policy to increase the rigor and reproducibility of published research, enable data reuse, and promote open science. This editorial explains the data sharing policy, describes how compliance with the policy will fit into the journal's workflow, and provides further guidance for preparing for data sharing.
    DOI:  https://doi.org/10.5195/jmla.2019.801
  6. J Am Podiatr Med Assoc. 2019 Oct 17.
    Rushing CJ, Arena T, Spinner SM, Hardigan P.
      Not all abstracts accepted for oral presentation at the American Podiatric Medical Association's (APMA) annual conference ultimately navigate the peer review process to achieve journal publication, despite their apparent merits. The purpose of the present study was to identify the factors associated with, and barriers to, journal publication and time to publication for oral abstracts from the APMA conference from 2010 to 2014. Databases containing information on the abstracts were procured, and predictor variables were categorized as abstract- or author-specific. Bivariate analysis was conducted using the Mann-Whitney U test, Fisher's exact test, the chi-square test of independence, or Spearman's rank correlation. Multivariable logistic regression and generalized linear regression models were used to analyze predictor variables. A questionnaire was distributed to the primary authors of any unpublished abstracts to determine the current status of the abstract, as well as the reasons for the failure to pursue or achieve journal publication. Overall, oral abstracts by authors without a formal research degree were published more often than abstracts by authors with a research degree, as were funded projects (p=0.031). No other associations were identified between any of the abstract- and author-specific variables and the successful conversion of an oral abstract to a journal publication or the time to publication. Six barriers questionnaires were completed. At the time of the survey, 2 oral abstracts had since achieved publication, 2 had been submitted for publication but were rejected, and 2 had never been submitted. The principal reason cited by the authors for the failure to pursue or achieve journal publication was insufficient time for manuscript preparation.
    DOI:  https://doi.org/10.7547/19-009
  7. Rev Med Inst Mex Seguro Soc. 2019 Jul 31. 57(2): 59-61. http://revistamedica.imss.gob.mx/editorial/index.php/revista_medica/article/view/3500/3633
    Espinosa-Alarcón PA.
      Writing about advances in medical knowledge demands adherence to recommended practices in scientific research. It also requires clear communication, choosing the journal best suited to review the work, and following its instructions for authors to enable publication.
    Keywords:  Knowledge; Ethics, Research; Copyright; Conflict of Interest; Information Dissemination
  8. Nature. 2019 Oct;574(7778): 333
    D'Antuono P, Ciavarella M.
      
    Keywords:  Publishing
    DOI:  https://doi.org/10.1038/d41586-019-03119-w
  9. Acta Med Port. 2019 Oct 01. 32(10): 623-624
    Brito D, Villanueva T, Sousa C, Nunes AB, Duarte S, Reis M.
      
    Keywords:  Medical Writing; Periodicals; Publishing
    DOI:  https://doi.org/10.20344/amp.12605
  10. Acta Physiol (Oxf). 2019 Oct 15. e13405
    Persson PB.
      In the following, novel developments in biomedical publishing important to all of our authors, reviewers and editors involved in the publication process at Acta Physiologica will be highlighted. Acta Physiologica's recommendations for authors focus on (a) the current implications of the revised ICMJE Guidelines, (b) the still recent European General Data Protection Regulation (GDPR) and (c) guidelines for experimental biomedical research involving animals. In addition, (d) Acta Physiologica follows the COPE Guidelines, including the recently issued Guidelines for Managing the Relationships between Society-owned Journals, their Society, and Publishers.
    DOI:  https://doi.org/10.1111/apha.13405
  11. Turk Arch Otorhinolaryngol. 2019 Sep;57(3): 111-112
    Erdağ TK.
      
    DOI:  https://doi.org/10.5152/tao.2019.969878