bims-skolko Biomed News
on Scholarly communication
Issue of 2020–06–07
34 papers selected by
Thomas Krichel, Open Library Society



  1. Med J Aust. 2020 May 31.
      
    Keywords:  COVID-19; Infectious diseases; Publishing; Respiratory tract infections
    DOI:  https://doi.org/10.5694/mja2.50617
  2. EMBO Rep. 2020 06 04. 21(6): e50817
      Pre-print servers have helped to rapidly publish important information during the COVID-19 pandemic. The downside, though, is the risk of spreading false information or fake news.
    DOI:  https://doi.org/10.15252/embr.202050817
  3. PLoS Biol. 2020 Jun 01. 18(6): e3000716
      Data-driven research in biomedical science requires structured, computable data. Increasingly, these data are created with support from automated text mining. Text-mining tools have rapidly matured: although not perfect, they now frequently provide outstanding results. We describe 10 straightforward writing tips, and a web tool, PubReCheck, that guide authors in addressing the most common cases that remain difficult for text-mining tools. We anticipate these guides will help authors' work be found more readily and used more widely, ultimately increasing the impact of their work and the overall benefit to both authors and readers. PubReCheck is available at http://www.ncbi.nlm.nih.gov/research/pubrecheck.
    DOI:  https://doi.org/10.1371/journal.pbio.3000716
  4. Gigascience. 2020 Jun 01. pii: giaa056. [Epub ahead of print]9(6):
      Biomedical research depends increasingly on computational tools, but mechanisms ensuring open data, open software, and reproducibility are variably enforced by academic institutions, funders, and publishers. Publications may present software for which source code or documentation are or become unavailable; this compromises the role of peer review in evaluating technical strength and scientific contribution. Incomplete ancillary information for an academic software package may bias or limit subsequent work. We provide 8 recommendations to improve reproducibility, transparency, and rigor in computational biology, precisely the values that should be emphasized in life science curricula. Our recommendations for improving software availability, usability, and archival stability aim to foster a sustainable data science ecosystem in life science research.
    Keywords:  archival stability; big data; installability; open science; reproducible research; rigor
    DOI:  https://doi.org/10.1093/gigascience/giaa056
  5. Nature. 2020 Jun 02.
      
    Keywords:  Conferences and meetings; SARS-CoV-2
    DOI:  https://doi.org/10.1038/d41586-020-01521-3
  6. Nature. 2020 Jun 03.
      
    Keywords:  Peer review; Publishing; SARS-CoV-2
    DOI:  https://doi.org/10.1038/d41586-020-01520-4
  7. PLoS One. 2020 ;15(6): e0233432
      The essential role of journals as registries of scientific activity in all areas of knowledge justifies concern about their ownership and type of access. The purpose of this research is to analyze the main characteristics of publishers with journals that have received the DOAJ Seal. The specific objectives are a) to identify publishers and journals registered with the DOAJ Seal; b) to characterize those publishers; and c) to analyze their article processing fees. The research method involved the use of the DOAJ database, the Seal option and the following indicators: publisher, title, country, number of articles, knowledge area, article processing charges in USD, time for publication in weeks, and year of indexing in DOAJ. The results reveal a fast-rising oligopoly, dominated by Springer with 35% of the titles and PLOS with more than 20% of the articles. We identified three models of expansion: a) a few titles with hundreds of articles; b) a high number of titles with a mix of big and small journals; and c) a high number of titles with medium-size journals. We also identified a high number of titles without APCs (27%) across all areas, while medicine was found to be the most expensive area. Commercial publishers clearly exercise control over the scope of journals and the creation of new titles, according to the interests of their companies, which are not necessarily the same as those of the scientific community or of society in general.
    DOI:  https://doi.org/10.1371/journal.pone.0233432
  8. Blood Cells Mol Dis. 2020 May 23. pii: S1079-9796(20)30227-8. [Epub ahead of print]84 102454
      The authorship of articles in biomedical journals has proliferated despite efforts of publishers and editors to require justification of authorship. A proposal is made, herein, to resolve that matter by applying the "one paper: one citation" concept, so as to adhere to the thermodynamic principle of conservation of mass. This proposal provides (i) a means to allow authors to agree on their relative contribution, (ii) an incentive to assign only significant contributors to authorship, and (iii) the appropriate fractional contribution of each author when there are multiple authors. As a result, the sum of citations for any one paper shall be one paper. I believe this proposal is the only method suggested, thus far, to make authorship of a biomedical paper authentic.
    Keywords:  Attribution; Authorship; Biomedical article; Citation; Citation index
    DOI:  https://doi.org/10.1016/j.bcmd.2020.102454
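      A minimal sketch, in Python, of the arithmetic behind the "one paper: one citation" idea described above: each author receives an agreed fraction of the credit, and the fractions must sum to exactly one. The allocation function and the example shares are hypothetical illustrations, not taken from the paper.

        # Hypothetical sketch of fractional authorship credit summing to one paper.
        def allocate_credit(contributions):
            """contributions: dict mapping author name -> agreed fractional share."""
            total = sum(contributions.values())
            if abs(total - 1.0) > 1e-9:
                raise ValueError(f"Fractions sum to {total}, not 1.0")
            return contributions

        # Example: three authors agree on their relative contributions (invented numbers).
        shares = allocate_credit({"Author A": 0.5, "Author B": 0.3, "Author C": 0.2})
        for author, share in shares.items():
            print(f"{author}: {share:.2f} of one citation")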
  9. Rev Esp Enferm Dig. 2020 Jun 04. 112 425
      The complexity and number of research publications have expanded exponentially. The roles of authors and collaborators need to be clarified in accordance with the standards of the International Committee of Medical Journal Editors (ICMJE). New projects such as the Contributor Roles Taxonomy (CRediT) have been launched in order to provide credit and transparency to authors, readers, and evaluation agencies.
    DOI:  https://doi.org/10.17235/reed.2020.7251/2020
  10. Account Res. 2020 Jun 04. 1-25
      Over the past several years, there has been a significant increase in the number of scientific articles with two or more authors claiming "Equal Co-First Authorship" (ECFA). This study provides a critical background to ECFA designations, discusses likely causes of their increased use, and explores arguments for and against the practice. Subsequently, it presents the results of a qualitative study that sought the opinions of 19 authors, listed among equal first authors of recent publications in leading scientific journals, about ECFA designations. Results show that the circumstances leading to ECFA designations vary considerably. While the development of policies for these situations would not be easy, participants suggested that the lack of clear and consistent policies regarding the attribution and evaluation of ECFA contributes to tensions amongst ECFA authors and obscures their preferred attributions of credit.
    Keywords:  Authorship; Equal Co-First Authorship; equal contributions; ethics; publications
    DOI:  https://doi.org/10.1080/08989621.2020.1776122
  11. BMC Med Ethics. 2020 Jun 01. 21(1): 44
       BACKGROUND: Plagiarism is considered serious research misconduct, together with data fabrication and falsification. However, little is known about biomedical researchers' views on plagiarism. Moreover, it has been argued, based on limited empirical evidence, that perceptions of plagiarism depend on cultural and other determinants. The authors explored, by means of an online survey among 46 reputable universities in Europe and China, how plagiarism is perceived by biomedical researchers in both regions.
     METHODS: We collected work e-mail addresses of biomedical researchers identified through the websites of 13 reputable universities in Europe and 33 reputable universities in China and invited them to participate in an anonymous online survey. Our questionnaire was designed to assess respondents' views about plagiarism by asking whether they considered specific practices as plagiarism. We analyzed whether respondents in China and Europe responded differently, using logistic regression analysis with adjustments for demographic and other relevant factors.
     RESULTS: The authors obtained valid responses from 204 researchers based in China (response rate 2.1%) and 826 researchers based in Europe (response rate 5.6%). Copying text from someone else's publication without crediting the source, using idea(s) from someone else's publication without crediting the source, and republishing one's own work in another language without crediting the source were considered plagiarism by 98%, 67% and 64% of respondents, respectively. About one-third of the respondents reported having been unsure whether they had been plagiarizing. Overall, the pattern of responses was similar among respondents based in Europe and China. Nevertheless, for some items significant differences did occur, to the disadvantage of China-based respondents.
     CONCLUSIONS: Findings indicate that nearly all biomedical researchers understand (and disapprove of) the most obvious forms of plagiarism, but uncertainties and doubts were apparent for many aspects. The minority of researchers who did not recognize some types of plagiarism as such was larger among China-based respondents than among Europe-based respondents. The authors conclude that biomedical researchers need clearer working definitions of plagiarism in order to deal with grey zones.
    Keywords:  Biomedicine; China; Europe; Plagiarism; Research misconduct; University researchers
    DOI:  https://doi.org/10.1186/s12910-020-00473-7
  12. J Obstet Gynaecol Can. 2020 Jun;pii: S1701-2163(20)30304-2. [Epub ahead of print]42(6): 705-706
      
    DOI:  https://doi.org/10.1016/j.jogc.2020.04.001
  13. Health Res Policy Syst. 2020 Jun 05. 18(1): 59
       BACKGROUND: Scientific journals play a critical role in research validation and dissemination and are increasingly vocal about the identification of research priorities and the targeting of research results to key audiences. No new journals specialising in health policy and systems research (HPSR) and focusing on the developing world, or on a specific developing-world region, have been established since the early 1980s. This paper compares the growth of publications on HPSR across Latin America and the world and explores the potential, feasibility and challenges of innovative publication strategies.
     METHODS: A bibliometric analysis was undertaken using HPSR MeSH terms with journals indexed in Medline. An online survey was undertaken among 2500 authors publishing on HPSR in Latin America (LA), with a 13.1% response rate. Aggregate indicators were constructed and validated, and two-way ANOVA tests were performed on key variables.
     RESULTS: HPSR publications on LA showed an average annual growth of 27.5% from 2000 to 2018, as against 11.4% worldwide, yet with a lag in papers published per capita. A total of 48 journals with an Impact Factor publish HPSR on LA, of which 5 non-specialised journals are published in the region and are ranked in the bottom quintile by Impact Factor. While the majority of HPSR papers worldwide are published in specialised HPSR journals, in LA only a minority are. Very few researchers from LA sit on the editorial boards of international journals. Researchers strongly support strengthening quality HPSR publication through open-access, online journals with a focus on the LA region and with peer reviewers specialised in the region. Researchers would support a new open-access journal specialising in the LA region and in HPSR, publishing in English. Open-access up-front costs and the disincentives of waiting for an Impact Factor can be overcome.
    CONCLUSION: Researchers publishing on HPSR in LA widely support the launching of a new specialised journal for the region with a vigorous editorial policy focusing on regional and country priorities. Strategies should be in place to support English-language publishing and to develop a community of practice around the publication process. In the first years, special issues should be promoted through a priority-setting process to attract prominent authors, develop the audience and attain an Impact Factor.
    Keywords:  Health policy and systems research; Health research capacity strengthening; Latin America; Scientometrics
    DOI:  https://doi.org/10.1186/s12961-020-00565-1
  14. BMJ Open. 2020 May 30. 10(5): e038887
       OBJECTIVE: To explore the implementation of the International Committee of Medical Journal Editors (ICMJE) data-sharing policy, which came into force on 1 July 2018, by ICMJE-member journals and by ICMJE-affiliated journals declaring that they follow the ICMJE recommendations.
    DESIGN: A cross-sectional survey of data-sharing policies in 2018 on journal websites and in data-sharing statements in randomised controlled trials (RCTs).
    SETTING: ICMJE website; PubMed/Medline.
    ELIGIBILITY CRITERIA: ICMJE-member journals and 489 ICMJE-affiliated journals that published an RCT in 2018, had an accessible online website and were not considered as predatory journals according to Beall's list. One hundred RCTs for member journals and 100 RCTs for affiliated journals with a data-sharing policy, submitted after 1 July 2018.
    MAIN OUTCOME MEASURES: The primary outcome for the policies was the existence of a data-sharing policy (explicit data-sharing policy, no data-sharing policy, policy merely referring to ICMJE recommendations) as reported on the journal website, especially in the instructions for authors. For RCTs, our primary outcome was the intention to share individual participant data set out in the data-sharing statement.
     RESULTS: Eight (out of 14; 57%) member journals had an explicit data-sharing policy on their website (three were more stringent than the ICMJE requirements, one was less demanding and four were compliant), five (36%) additional journals stated that they followed the ICMJE requirements, and one (7%) had no policy online. In RCTs published in these journals, there were data-sharing statements in 98 out of 100, with expressed intention to share individual patient data reaching 77 out of 100 (77%; 95% CI 67% to 85%). One hundred and forty-five (out of 489) ICMJE-affiliated journals (30%; 26% to 34%) had an explicit data-sharing policy on their website (11 were more stringent than the ICMJE requirements, 85 were less demanding and 49 were compliant) and 276 (56%; 52% to 61%) merely referred to the ICMJE requirements. In RCTs published in affiliated journals with an explicit data-sharing policy, data-sharing statements were rare (25%), and expressed intentions to share data were found in 22% (15% to 32%).
     CONCLUSION: The implementation of ICMJE data-sharing requirements in online journal policies was suboptimal for ICMJE-member journals and poor for ICMJE-affiliated journals. In the data-sharing statements of published RCTs, implementation of the policy was good for member journals but of concern for affiliated journals. We suggest continuous audits of medical journal data-sharing policies in the future.
    REGISTRATION: The protocol was registered before the start of the research on the Open Science Framework (https://osf.io/n6whd/).
    Keywords:  clinical trials; epidemiology; statistics & research methods
    DOI:  https://doi.org/10.1136/bmjopen-2020-038887
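      The interval quoted above for member journals (77 out of 100; 95% CI 67% to 85%) is consistent with an exact binomial (Clopper-Pearson) interval. A minimal Python sketch of that calculation follows; the assumption that this particular method was used is not stated in the abstract.

        # Exact (Clopper-Pearson) 95% confidence interval for a binomial proportion.
        # Illustrative only: the abstract does not state which interval method was used.
        from scipy.stats import beta

        def clopper_pearson(k, n, alpha=0.05):
            """Exact CI for k successes out of n trials."""
            lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
            upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
            return lower, upper

        lo, hi = clopper_pearson(77, 100)
        print(f"77/100: estimate 0.77, 95% CI {lo:.3f} to {hi:.3f}")  # about 0.675 to 0.849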
  15. J Obstet Gynaecol Can. 2020 Jun;pii: S1701-2163(20)30273-5. [Epub ahead of print]42(6): 703-704
      
    DOI:  https://doi.org/10.1016/j.jogc.2020.03.011
  16. BJPsych Open. 2020 Jun 01. 6(4): e52
      BJPsych Open has come of age. This editorial celebrates the journal's fifth anniversary by reviewing the history of BJPsych Open, what we have accomplished, where we strive to go (our planned trajectory) and the passion of being an Editor-in-Chief.
    Keywords:  BJPsych Open; Plan S; academic publishing; anniversary; authors; editorial board; global reach; history; methodologic rigor; metrics; passion of an Editor-in-Chief; publication integrity; research ethics; reviewers; thematic issues
    DOI:  https://doi.org/10.1192/bjo.2020.34
  17. J Can Chiropr Assoc. 2020 Apr;64(1): 82-87
       Introduction: Clinical trial registries are used to help improve transparency in trial reporting. Our study aimed to identify potential publication bias in chiropractic and spinal manipulation research by assessing data drawn from published studies listed in clinicaltrials.gov.
    Methods: We searched the clinicaltrials.gov registry database for completed trials tagged with the key indexing terms chiropractic or spinal manipulation. We assessed if the trial registry had been updated with data, then searched for publications corresponding to the registered trials. Finally, the frequency of positive or negative results was determined from published studies.
     Results: For the term 'chiropractic', 63% of published studies supported the intervention; for the term 'spinal manipulation', 52% did.
     Discussion: Publication bias appears to occur in chiropractic and spinal manipulation research listed in clinicaltrials.gov. Further work may help to explain why this happens and what may be done to mitigate it.
    Keywords:  chiropractic; publication bias; scientific journals
  18. Eur J Clin Invest. 2020 May 30. e13293
      COVID-19 has created the need to rapidly generate evidence to illuminate many blind spots surrounding the pandemic, from pathophysiology to management. Scientific journals have responded to this challenge in a timely manner by prioritizing COVID-19 research, with proactive editorial efforts favoring open access to articles, launching calls for papers, and implementing specific sections and special issues on COVID-19, among other measures. However, the impact of all these measures on the overall quality and adequacy of research is largely unknown, and several authors have expressed concern in this regard.
    Keywords:  COVID-19; EQUATOR reporting guidelines; open-access; quality of scientific reporting; research integrity; reviews
    DOI:  https://doi.org/10.1111/eci.13293
  19. Front Psychol. 2020 ;11 815
      Recent calls to end the practice of categorizing findings based on statistical significance have focused on what not to do. Practitioners who subscribe to the conceptual basis behind these calls may be unaccustomed to presenting results in the nuanced and integrative manner that has been recommended as an alternative. This alternative is often presented as a vague proposal. Here, we provide practical guidance and examples for adopting a research evaluation posture and communication style that operates without bright-line significance testing. Characteristics of the structure of results communications that are based on conventional significance testing are presented. Guidelines for writing results without the use of bright-line significance testing are then provided. Examples of conventional styles for communicating results are presented. These examples are then modified to conform to recent recommendations. These examples demonstrate that basic modifications to written scientific communications can increase the information content of scientific reports without a loss of rigor. The adoption of alternative approaches to results presentations can help researchers comply with multiple recommendations and standards for the communication and reporting of statistics in the psychological sciences.
    Keywords:  bright-line testing; confidence intervals; null hypothesis significance testing; scientific communication; statistical significance
    DOI:  https://doi.org/10.3389/fpsyg.2020.00815
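      As a concrete illustration of the reporting style the entry above recommends, the short Python sketch below (using invented data) computes a mean difference with a 95% confidence interval and reports it as an estimate with an interval rather than as a bright-line significant/non-significant verdict. The data, numbers and phrasing are hypothetical.

        # Hypothetical example of estimation-style reporting instead of a
        # "significant / not significant" verdict. Data are invented.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        group_a = rng.normal(10.0, 2.0, 30)
        group_b = rng.normal(11.2, 2.0, 30)

        diff = group_b.mean() - group_a.mean()
        se = np.sqrt(group_a.var(ddof=1) / 30 + group_b.var(ddof=1) / 30)
        ci = stats.t.interval(0.95, 58, loc=diff, scale=se)  # pooled df = 30 + 30 - 2

        # Instead of "the difference was significant (p < .05)":
        print(f"Mean difference {diff:.2f} units, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")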
  20. BMC Med Res Methodol. 2020 Jun 03. 20(1): 139
       BACKGROUND: We investigated the feasibility of using a machine learning tool's relevance predictions to expedite title and abstract screening.
     METHODS: We subjected 11 systematic reviews and six rapid reviews to four retrospective screening simulations (automated and semi-automated approaches to single-reviewer and dual independent screening) in Abstrackr, freely available machine-learning software. We calculated the proportion missed, workload savings, and time savings compared to single-reviewer and dual independent screening by human reviewers. We performed cited-reference searches to determine whether missed studies would be identified via reference list scanning.
     RESULTS: For systematic reviews, the semi-automated, dual independent screening approach provided the best balance of time savings (median (range) 20 (3-82) hours) and reliability (median (range) proportion of missed records, 1 (0-14)%). The cited-reference searches identified 59% (n = 10/17) of the missed records. For the rapid reviews, the fully and semi-automated approaches saved time (median (range) 9 (2-18) hours and 3 (1-10) hours, respectively), but less so than for the systematic reviews. The median (range) proportion of missed records for both approaches was 6 (0-22)%.
    CONCLUSION: Using Abstrackr to assist one of two reviewers in systematic reviews saves time with little risk of missing relevant records. Many missed records would be identified via other means.
    Keywords:  Automation; Efficiency; Machine learning; Rapid reviews; Systematic reviews
    DOI:  https://doi.org/10.1186/s12874-020-01031-w
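      A minimal Python sketch of how the evaluation metrics named above (proportion missed and workload savings) are commonly defined in screening-automation studies. The abstract does not spell out its formulas, so these definitions, and the example numbers, are assumptions for illustration.

        # Common definitions of screening-evaluation metrics (assumed, for illustration).
        def proportion_missed(relevant_ids, included_by_tool):
            """Share of truly relevant records excluded by the (semi-)automated screen."""
            missed = [r for r in relevant_ids if r not in included_by_tool]
            return len(missed) / len(relevant_ids)

        def workload_savings(total_records, records_screened_by_humans):
            """Share of records that human reviewers no longer had to screen."""
            return 1 - records_screened_by_humans / total_records

        # Invented numbers, for illustration only.
        print(proportion_missed({"r1", "r2", "r3"}, {"r1", "r3", "r9"}))              # 0.33...
        print(workload_savings(total_records=2000, records_screened_by_humans=1200))  # 0.4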
  21. J Am Soc Echocardiogr. 2020 Jun;pii: S0894-7317(20)30221-2. [Epub ahead of print]33(6): 647
      
    DOI:  https://doi.org/10.1016/j.echo.2020.04.010
  22. Elife. 2020 Jun 05. pii: e59636. [Epub ahead of print]9
      eLife, like the rest of science, must tackle the many inequalities experienced by Black scientists.
    Keywords:  careers in science; discrimination; eLife; equality, diversity and inclusion; racism in science; scientific publishing
    DOI:  https://doi.org/10.7554/eLife.59636