bims-skolko Biomed News
on Scholarly communication
Issue of 2020‒12‒13
nineteen papers selected by
Thomas Krichel
Open Library Society


  1. BMC Bioinformatics. 2020 Dec 09. 21(1): 564
      BACKGROUND: A low replication rate has been reported in some scientific areas, motivating the creation of resource-intensive collaborations to estimate the replication rate by repeating individual studies. The substantial resources required by these projects limit the number of studies that can be repeated and, consequently, the generalizability of the findings. We extend the use of a method from Jager and Leek to estimate the false discovery rate for 94 journals over a 5-year period, using p values from over 30,000 abstracts, enabling us to study how the false discovery rate varies by journal characteristics.
    RESULTS: We find that the empirical false discovery rate is higher for cancer versus general medicine journals (p = 9.801E-07, 95% CI: 0.045, 0.097; adjusted mean false discovery rate cancer = 0.264 vs. general medicine = 0.194). We also find that the false discovery rate is negatively associated with log journal impact factor: a two-fold decrease in journal impact factor is associated with an average increase of 0.020 in FDR (p = 2.545E-04). Conversely, we find no statistically significant evidence of a higher false discovery rate, on average, for Open Access versus closed access journals (p = 0.320, 95% CI: -0.015, 0.046; adjusted mean false discovery rate Open Access = 0.241 vs. closed access = 0.225).
    CONCLUSIONS: Our results identify areas of research that may need additional scrutiny and support to facilitate replicable science. Given our publicly available R code and data, others can complete a broad assessment of the empirical false discovery rate across other subject areas and characteristics of published research (a simplified sketch of the underlying p-value mixture approach follows this entry).
    Keywords:  Cancer; False discovery rate; Impact factor; Open access; Replication; Reproducibility
    DOI:  https://doi.org/10.1186/s12859-020-03817-7
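
    A minimal sketch, in Python, of the p-value mixture idea behind the Jager and Leek estimator cited above: reported significant p-values are modeled as a mixture of a truncated uniform (null) and a truncated beta (alternative) density, and the fitted mixing weight is the empirical FDR. The published method (and the paper's R code) also handles rounding and censoring of reported p-values; the starting values, bounds, and simulated data below are illustrative assumptions only.

      import numpy as np
      from scipy import stats
      from scipy.optimize import minimize

      ALPHA = 0.05  # significance threshold; only p-values below this are modeled

      def neg_log_lik(params, p):
          # Mixture on (0, ALPHA): pi0 * truncated uniform + (1 - pi0) * truncated beta
          pi0, a, b = params
          null_pdf = 1.0 / ALPHA
          alt_pdf = stats.beta.pdf(p, a, b) / stats.beta.cdf(ALPHA, a, b)
          return -np.sum(np.log(pi0 * null_pdf + (1.0 - pi0) * alt_pdf))

      def estimate_fdr(p_values):
          # The fitted mixing weight pi0 is the empirical false discovery rate
          p = np.asarray(p_values)
          p = p[(p > 0) & (p < ALPHA)]
          fit = minimize(neg_log_lik, x0=[0.2, 0.5, 20.0], args=(p,),
                         method="L-BFGS-B",
                         bounds=[(1e-3, 1 - 1e-3), (0.05, 1.0), (1.0, 200.0)])
          return fit.x[0]

      # Toy check: 20% uniform "null" p-values mixed with 80% small "real" ones
      rng = np.random.default_rng(1)
      p_sim = np.concatenate([rng.uniform(0, ALPHA, 200),
                              stats.beta.rvs(0.3, 30, size=800, random_state=rng)])
      print(f"estimated empirical FDR ~ {estimate_fdr(p_sim):.2f}")
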
  2. Res Integr Peer Rev. 2020 Dec 01. 5(1): 16
      BACKGROUND: Preprint usage is growing rapidly in the life sciences; however, questions remain about the quality of preprints relative to published articles. An objective, readily measurable dimension of quality is completeness of reporting, as transparency can improve the reader's ability to independently interpret data and reproduce findings.
    METHODS: In this observational study, we initially compared independent samples of articles published in bioRxiv and in PubMed-indexed journals in 2016 using a quality-of-reporting questionnaire. We then performed paired comparisons between preprints from bioRxiv and their own peer-reviewed versions in journals.
    RESULTS: Peer-reviewed articles had, on average, higher quality of reporting than preprints, although the difference was small, with absolute differences of 5.0% [95% CI 1.4, 8.6] and 4.7% [95% CI 2.4, 7.0] of reported items in the independent samples and paired sample comparison, respectively. There were larger differences favoring peer-reviewed articles in subjective ratings of how clearly titles and abstracts presented the main findings and how easy it was to locate relevant reporting information. Changes in reporting from preprints to peer-reviewed versions did not correlate with the impact factor of the publication venue or with the time lag from bioRxiv to journal publication.
    CONCLUSIONS: Our results suggest that, on average, publication in a peer-reviewed journal is associated with improvement in quality of reporting. They also show that the quality of reporting in life-science preprints is in a similar range to that of peer-reviewed articles, albeit slightly lower on average, supporting the idea that preprints should be considered valid scientific contributions (a minimal sketch of the paired comparison follows this entry).
    Keywords:  Peer review; Preprint; Publication; Quality of reporting; Scientific journal; bioRxiv
    DOI:  https://doi.org/10.1186/s41073-020-00101-3
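
    A minimal sketch (not the authors' code) of the paired comparison reported above: the mean difference in the percentage of reporting items scored as present between each peer-reviewed article and its own preprint, with a t-based 95% confidence interval. The function name and the scores are hypothetical.

      import numpy as np
      from scipy import stats

      def paired_diff_ci(journal_scores, preprint_scores, conf=0.95):
          # Mean paired difference with a t-based confidence interval
          d = np.asarray(journal_scores, dtype=float) - np.asarray(preprint_scores, dtype=float)
          mean = d.mean()
          sem = d.std(ddof=1) / np.sqrt(len(d))
          tcrit = stats.t.ppf((1 + conf) / 2, df=len(d) - 1)
          return mean, (mean - tcrit * sem, mean + tcrit * sem)

      # Hypothetical % of applicable reporting items present in each version
      journal_version = [72.0, 65.0, 80.0, 58.0, 77.0, 69.0]
      preprint_version = [68.0, 63.0, 71.0, 55.0, 70.0, 66.0]
      mean, (lo, hi) = paired_diff_ci(journal_version, preprint_version)
      print(f"mean difference {mean:.1f} points, 95% CI [{lo:.1f}, {hi:.1f}]")
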
  3. Nature. 2020 Dec;588(7837): S136-S137
      
    Keywords:  Computer science; Lab life; Publishing; Technology
    DOI:  https://doi.org/10.1038/d41586-020-03415-w
  4. Nature. 2020 Dec;588(7837): S138-S141
      
    Keywords:  Computer science; Publishing; Technology
    DOI:  https://doi.org/10.1038/d41586-020-03416-9
  5. Nature. 2020 Dec 10.
      
    Keywords:  Careers; Immunology; Publishing
    DOI:  https://doi.org/10.1038/d41586-020-03498-5
  6. Clin Gastroenterol Hepatol. 2020 Dec 03. pii: S1542-3565(20)31639-6. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.cgh.2020.12.001
  7. Account Res. 2020 Dec 08.
      Despite widely used author contribution criteria, unethical authorship practices such as guest, ghost, and honorary authorship remain largely unresolved. By analyzing 78 published papers addressing unethical authorship practices, we identified six major reasons: lack of (i) awareness of and (ii) compliance with authorship criteria, (iii) a universal definition and scope for determining authorship, (iv) common mechanisms for positioning an author in the list, and (v) quantitative measures of intellectual contribution; and (vi) pressure to publish. As a possible countermeasure, we evaluated the adoption of an author categorization scheme, proposed according to the common understanding of how first, co-, principal, and corresponding authors are perceived. In an online opinion survey, the proposed scheme was supported by ~80% of the respondents (n=370). The impact of the proposed categorization was then evaluated using a novel mathematical tool, the "Author Performance Index (API)", which is higher for those who have authored more papers as primary and/or principal authors than for those contributing mainly as coauthors (a hypothetical illustration follows this entry). Hence, if adopted, the proposed author categorization scheme, together with the API, would provide a better way to evaluate an individual's credit as a primary and principal author.
    Keywords:  Authorship criteria; Hyperauthorship; corresponding author; primary author; principal author; relative intellectual contribution
    DOI:  https://doi.org/10.1080/08989621.2020.1860764
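
    The abstract does not give the API formula, so the sketch below is purely hypothetical: a per-paper credit weighting that is higher for primary/principal authorship than for mid-list coauthorship, illustrating the stated property of the index rather than reproducing the authors' tool. All weights and category names are assumptions.

      from collections import Counter

      # Hypothetical credit weights per authorship category (not from the paper)
      WEIGHTS = {"first": 1.0, "principal": 1.0, "corresponding": 0.8, "coauthor": 0.3}

      def author_performance_index(roles):
          # roles: one authorship category per paper for a given author
          counts = Counter(roles)
          credit = sum(WEIGHTS[role] * n for role, n in counts.items())
          return credit / len(roles)  # normalize by total paper count

      # Mostly primary/principal authorship scores higher than mostly coauthorship
      print(author_performance_index(["first"] * 6 + ["coauthor"] * 4))  # 0.72
      print(author_performance_index(["coauthor"] * 8 + ["first"] * 2))  # 0.44
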
  8. PLoS Biol. 2020 Dec;18(12): e3000937
      Researchers face many, often seemingly arbitrary, choices in formulating hypotheses, designing protocols, collecting data, analyzing data, and reporting results. Opportunistic use of "researcher degrees of freedom" aimed at obtaining statistical significance increases the likelihood of obtaining and publishing false-positive results and overestimated effect sizes. Preregistration is a mechanism for reducing such degrees of freedom by specifying designs and analysis plans before observing the research outcomes. The effectiveness of preregistration may depend, in part, on whether the process facilitates sufficiently specific articulation of such plans. In this preregistered study, we compared 2 formats of preregistration available on the OSF: Standard Pre-Data Collection Registration and Prereg Challenge Registration (now called "OSF Preregistration," http://osf.io/prereg/). The Prereg Challenge format was a "structured" workflow with detailed instructions and an independent review to confirm completeness; the "Standard" format was "unstructured," with minimal direct guidance, to give researchers flexibility in what to prespecify. Comparing random samples of 53 preregistrations from each format, we found that the "structured" format restricted the opportunistic use of researcher degrees of freedom better (Cliff's Delta = 0.49; a short implementation follows this entry) than the "unstructured" format, but neither eliminated all researcher degrees of freedom. We also observed very low concordance among coders about the number of hypotheses (14%), indicating that hypotheses are often not clearly stated. We conclude that effective preregistration is challenging, and that registration formats providing effective guidance may improve the quality of research.
    DOI:  https://doi.org/10.1371/journal.pbio.3000937
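
    Cliff's Delta, the effect size reported above (0.49), is a simple ordinal statistic: the probability that a score from one group exceeds a score from the other, minus the reverse. A direct implementation on hypothetical per-preregistration scores:

      import numpy as np

      def cliffs_delta(x, y):
          # delta = P(x > y) - P(x < y), in [-1, 1]; direct O(n*m) pairwise comparison
          x, y = np.asarray(x), np.asarray(y)
          greater = (x[:, None] > y[None, :]).sum()
          less = (x[:, None] < y[None, :]).sum()
          return (greater - less) / (len(x) * len(y))

      # Hypothetical scores for how well each format restricted degrees of freedom
      structured = [4, 5, 4, 3, 5, 4, 5]
      unstructured = [2, 3, 4, 2, 3, 3, 4]
      print(f"Cliff's Delta = {cliffs_delta(structured, unstructured):.2f}")
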
  9. Hist Sci. 2020 Dec;58(4): 354-368
      This introductory article frames our special issue in terms of how historicizing research integrity and fraud can benefit current discussions of scientific conduct and the need to improve public trust in science.
    Keywords:  Research integrity; fraud; public trust; scientific conduct; scientific values
    DOI:  https://doi.org/10.1177/0073275320952268
  10. PLoS One. 2020 ;15(12): e0242525
      In 1996, an international group of representatives from national archives and libraries, universities, industry, publishing offices, and other government and private-sector organizations first articulated the need for certified Trustworthy Digital Repositories (TDRs). Since then, multiple standards for TDRs have been developed worldwide, and their reviewers provide third-party audits of digital repositories. Even though hundreds of repositories are currently certified, we do not know whether audit and certification of TDRs actually matters. For example, we do not know whether digital repositories preserve digital information better after certification than they did before, nor whether TDRs preserve digital information better than their uncertified counterparts, although TDR standards promulgate this assumption. One way of assessing whether audit and certification of TDRs matters is to study its impact on TDRs' stakeholders (e.g., funders, data producers, data consumers). As an initial, critical step, this study examines what certification-related information repositories actually include on their websites, since repository websites are a means of disseminating such information. Using findings from a content analysis of 91 TDR-certified repository websites, this research examines: 1) written statements about TDR status, 2) the presence of TDR seals and their location, 3) whether the seals hyperlink to additional certification information, 4) the extent to which the certification process is explained, and 5) whether audit reports are shared. Nearly three-fourths of the repository websites provide TDR status statements and place seals in one or more locations; nearly 60% post audit reports and link seals to additional certification information; and over one-third explain the certification process. Directions for future research and practical applications of the results are discussed.
    DOI:  https://doi.org/10.1371/journal.pone.0242525
  11. Res Integr Peer Rev. 2020 Dec 11. 5(1): 17
      BACKGROUND: Research on research integrity has tended to focus on the frequency of research misconduct and the factors that might induce someone to commit it. A definitive answer to the first question has been elusive, but it remains clear that any research misconduct is too much. Answers to the second question are so diverse that it might be productive to ask a different question: what about how research is done allows research misconduct to occur?
    METHODS: With that question in mind, research integrity officers (RIOs) of the 62 members of the American Association of Universities were invited to complete a brief survey about their most recent instance of a finding of research misconduct. Respondents were asked whether one or more good practices of research (e.g., openness and transparency, keeping good research records) were present in their case of research misconduct.
    RESULTS: Twenty-four (24) of the respondents (39% response rate) indicated they had dealt with at least one finding of research misconduct and answered the survey questions. Over half of these RIOs reported that their case of research misconduct had occurred in an environment in which at least nine of the ten listed good practices of research were deficient.
    CONCLUSIONS: These results are not evidence of a causal effect of poor practices, but it is arguable that committing research misconduct would be more difficult, if not impossible, in research environments adhering to good practices of research.
    Keywords:  Good practices of research; Research integrity officer; Research misconduct; Responsible conduct of research
    DOI:  https://doi.org/10.1186/s41073-020-00103-1
  12. J Evid Based Soc Work (2019). 2020 Mar-Apr. 17(2): 137-148
      Social work has a longstanding commitment to sound research and to the development and dissemination of evidence-based practice. To that end, multiple professional groups have developed or refined guidelines for reporting research procedures and findings, with the objectives of enhancing transparency, integrity, and rigor in science. Such guidelines can also facilitate replication and systematic review. The Template for Intervention Description and Replication (TIDieR) checklist represents the culmination of a multi-stage process to expand upon existing reporting guidelines. As such, the checklist provides a framework for more transparent communication about empirically grounded interventions addressing a broad range of social and behavioral health issues. Use of this checklist can benefit researchers, practitioners, and recipients of social work interventions. After discussing selected background on the need for and benefits of reporting standards and describing the TIDieR checklist, we outline practical considerations for its use by those engaged in social work research.
    Keywords:  TIDieR; intervention methods; EQUATOR network; replication; reporting standards
    DOI:  https://doi.org/10.1080/26408066.2020.1724226
  13. Cien Saude Colet. 2020 Dec. 25(12): 4875-4886. pii: S1413-81232020001204875
      To celebrate the 25 years of the journal Ciência & Saúde Coletiva (C&SC), this paper analyzed 375 documents on collective oral health published in the journal between 2000 and 2019. The analysis aimed to understand how the oral health field appears in the journal's publications and how it has contributed to knowledge on the population's health-disease processes, specific public policies, education, and the management of oral health services in the SUS (Brazil's Unified Health System). The study employed bibliometric and documentary analysis. We mapped the authors' territorial distribution, their extensive collaboration network, and the reach of citations to these publications, including internationally. The Brazilian states most present in the publications were São Paulo and Minas Gerais, followed by authors from Pernambuco, Rio Grande do Sul, and Santa Catarina. Citations were most frequent in Brazil (85.14%), followed by the United States (2.31%), Portugal (1.34%), and Australia (1.34%). We concluded that, despite its limitations, C&SC has unequivocally proved a powerful instrument for disseminating scientific production from the perspective of collective oral health, enabling the exchange of information, facilitating integration among researchers, and paving a path toward the field's consolidation.
    DOI:  https://doi.org/10.1590/1413-812320202512.28362020
  14. Afr J Emerg Med. 2020 ;10(Suppl 2): S154-S157
      Clear and precise writing is a vital skill for healthcare providers and those involved in global emergency care research. It allows researchers to publish in the scientific literature and to present oral and written summaries of their work. However, writing skills for publication are rarely part of the curriculum in healthcare education. This review gives a step-by-step guide to writing successfully for scientific publication, following the IMRaD principle (Introduction, Methods, Results, and Discussion), with every part supporting the key message. There are specific benefits of writing for publication that justify the extra work involved: any lessons learned about improving global emergency care delivery can be useful to emergency clinicians, and the end result can change others' practice and pave the way for further research.
    Keywords:  Evidence-based medicine; Publishing; Scientific writing
    DOI:  https://doi.org/10.1016/j.afjem.2020.06.006
  15. J Arthroplasty. 2020 Nov 14. pii: S0883-5403(20)31205-5. [Epub ahead of print]
      BACKGROUND: Despite the importance of diversity in advancing scientific progress, diversity among leading authors in arthroplasty has not been examined. This study aimed to identify, characterize, and assess disparities among leading authors in the arthroplasty literature from 2002 to 2019.
    METHODS: Articles published between 2002 and 2019 in 12 academic journals that publish orthopedic and arthroplasty research were extracted from PubMed. Original articles containing keywords related to arthroplasty were analyzed. Author gender was assigned using the Genderize algorithm (a minimal sketch of such a lookup follows this entry). The top 100 male and female authors were characterized using available information from academic profiles.
    RESULTS: From the 14,692 articles that met inclusion criteria, the genders of 23,626 unique authors were identified. Women were less likely than men to publish 5 years after beginning their publishing careers (adjusted odds ratio 0.51, 95% confidence interval 0.45-0.57, P < .001). Of the top 100 authors, 96 were men and only 4 were women. Orthopedic surgeons made up 93 of the 100 top authors, of whom 92 were men and one was a woman. Among the top 10 publishing male and female authors, all 10 men were orthopedic surgeons, whereas only 2 of the 10 women were physicians and only one was an attending orthopedic surgeon.
    CONCLUSION: While the majority of authors with high arthroplasty publication volume were orthopedic surgeons, there were significant gender disparities among the leading researchers. We should continue working to increase gender representation and to support the research careers of women in arthroplasty.
    Keywords:  arthroplasty; authorship; disparities; gender; leadership; women
    DOI:  https://doi.org/10.1016/j.arth.2020.11.014
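
    A minimal sketch of name-based gender assignment with the genderize.io web API, which the study reports using; the exact request pattern, probability threshold, and function name below are assumptions, not the authors' pipeline.

      import requests

      def infer_gender(first_name, min_probability=0.8):
          # Query genderize.io; return "male"/"female", or None when uncertain
          resp = requests.get("https://api.genderize.io", params={"name": first_name})
          resp.raise_for_status()
          data = resp.json()  # e.g. {"name": "anna", "gender": "female", "probability": 0.98, ...}
          if data["gender"] and data["probability"] >= min_probability:
              return data["gender"]
          return None

      print(infer_gender("anna"))  # expected: "female"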