bims-skolko Biomed News
on Scholarly communication
Issue of 2024‒02‒11
twenty-six papers selected by
Thomas Krichel, Open Library Society



  1. PLoS One. 2024;19(2): e0296956
      BACKGROUND: Data sharing is commonly seen as beneficial for science but is not yet common practice. Research funding agencies are known to play a key role in promoting data sharing, but German funders' data sharing policies appear to lag behind in international comparison. This study aims to answer the question of how German data sharing experts inside and outside funding agencies perceive and evaluate German funders' data sharing policies and overall efforts to promote data sharing.
    METHODS: This study is based on sixteen guided expert interviews with representatives of German funders and German research data experts from stakeholder organisations, who shared their perceptions of German funders' efforts to promote data sharing. By applying the method of qualitative content analysis to our interview data, we categorise and describe noteworthy aspects of the German data sharing policy landscape and illustrate our findings with interview passages.
    RESULTS: We present our findings in five sections to distinguish our interviewees' perceptions on a) the status quo of German funders' data sharing policies, b) the role of funders in promoting data sharing, c) current and potential measures by funders to promote data sharing, d) general barriers to those measures, and e) the implementation of more binding data sharing requirements.
    DISCUSSION AND CONCLUSION: Although funders are perceived throughout our interviews to be important promoters and facilitators of data sharing, only a few German funding agencies have data sharing policies in place. Several interviewees stated that funders could do more, for example by providing incentives for data sharing or by introducing more concrete policies. Our interviews suggest that the academic freedom of grantees is widely perceived as an obstacle for German funders in introducing mandatory data sharing requirements. However, some interviewees stated that stricter data sharing requirements could be justified if data sharing is part of good scientific practice.
    DOI:  https://doi.org/10.1371/journal.pone.0296956
  2. Mol Cell Proteomics. 2024 Feb 06. pii: S1535-9476(24)00021-5. [Epub ahead of print] 100731
      Proteomics data sharing has profound benefits at the individual as well as the community level. While data sharing has increased over the years, mostly due to journal and funding agency requirements, the reluctance of researchers regarding data sharing is evident, as many share only the bare minimum dataset required to publish an article. In many cases, proper metadata is missing, essentially making the dataset useless. This behavior can be explained by a lack of incentives, insufficient awareness, or a lack of clarity surrounding ethical issues. Through adequate training at research institutes, researchers can realize the benefits associated with data sharing and can accelerate the norm of data sharing for the field of proteomics, as has been the standard in genomics for decades. In this article, we have put together various repository options available for proteomics data. We have also added the pros and cons of those repositories to help researchers select the repository most suitable for their data submission. It is also important to note that a few types of proteomics data have the potential to re-identify an individual in certain scenarios. In such cases, extra caution should be taken to remove any personal identifiers before sharing on public repositories. Datasets that would be useless without personal identifiers need to be shared in a controlled access repository so that only authorized researchers can access the data and personal identifiers are kept safe.
    Keywords:  Data privacy; Data sharing; Personal Identifiers; Proteomics techniques; Repository
    DOI:  https://doi.org/10.1016/j.mcpro.2024.100731
  3. Nature. 2024 Feb 05.
      
    Keywords:  Authorship; Careers; Ethics; Lab life; Machine learning
    DOI:  https://doi.org/10.1038/d41586-024-00349-5
  4. Phytopathology. 2024 Feb 08.
      The landscape of scientific publishing is experiencing a transformative shift towards open access (OA), a paradigm that mandates the availability of research outputs such as data, code, materials, and publications. OA provides increased reproducibility and allows for reuse of these resources. This article provides guidance for best publishing practices of scientific research, data, and associated resources, including code, in APS journals. Key areas such as diagnostic assays, experimental design, data sharing, and code deposition are explored in detail. This guidance is in line with that of other leading journals. We hope the information assembled in this paper will raise awareness of best practices and enable greater appraisal of the true effects of biological phenomena in plant pathology.
    Keywords:  Bioinformatics; Computational Biology; Data Science; Epidemiology; Genomics; Microbiome; Modelling; Pathogen Detection; Population Biology; Techniques
    DOI:  https://doi.org/10.1094/PHYTO-12-23-0483-IA
  5. Politics Life Sci. 2024 Feb 08. 1-4
      As the scientific community becomes aware of low replicability rates in the extant literature, peer-reviewed journals have begun implementing initiatives with the goal of improving replicability. Such initiatives center around various rules to which authors must adhere to demonstrate their engagement in best practices. Preliminary evidence in the psychological science literature demonstrates a degree of efficacy in these initiatives. Given this efficacy, it would be advantageous for other fields of behavioral sciences to adopt similar measures. This letter discusses lessons learned from psychological science while addressing the unique challenges other sciences face in adopting the measures most appropriate for their fields. We offer broad considerations for peer-reviewed journals in their implementation of specific policies and recommend that governing bodies of science prioritize the funding of research that addresses these measures.
    Keywords:  behavioral sciences; peer review; replication; research integrity; submission requirements
    DOI:  https://doi.org/10.1017/pls.2023.28
  6. Scientometrics. 2022 Oct;127(10): 5753-5771
      Although citations are used as a quantifiable, objective metric of academic influence, references could be added to a paper solely to inflate the perceived influence of a body of research. This reference list manipulation (RLM) could take place during the peer-review process, or prior to it. Surveys have estimated how many people may have been affected by coercive RLM at one time or another, but it is not known how many authors engage in RLM, nor to what degree. By examining a subset of active, highly published authors (n = 20,803) in PubMed, we find the frequency of non-self-citations (NSC) to one author coming from a single paper approximates Zipf's law. Author-centric deviations from it are approximately normally distributed, permitting deviations to be quantified statistically. Framed as an anomaly detection problem, statistical confidence increases when an author is an outlier by multiple metrics. Anomalies are not proof of RLM, but authors engaged in RLM will almost unavoidably create anomalies. We find the NSC Gini Index correlates highly with anomalous patterns across multiple "red flags", each suggestive of RLM. Between 81 (0.4%, FDR < 0.05) and 231 (1.1%, FDR < 0.10) authors are outliers on the curve, suggestive of chronic, repeated RLM. Approximately 16% of all authors may have engaged in RLM to some degree. Authors who use 18% or more of their references for self-citation are significantly more likely to have NSC Gini distortions, suggesting a potential willingness to coerce others to cite them.
    Keywords:  Citation analysis; Citation behavior; Scientific ethics
    DOI:  https://doi.org/10.1007/s11192-022-04503-6
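The NSC Gini index this abstract highlights as a "red flag" can be sketched in a few lines. The function below is a standard Gini coefficient over per-paper non-self-citation counts; the toy counts and names are illustrative, not the study's data or code.

```python
# Sketch: Gini index over non-self-citation (NSC) counts per citing paper.
# A high Gini means a few papers contribute a disproportionate share of an
# author's incoming citations -- the concentration pattern the study
# correlates with reference list manipulation (RLM).

def gini(counts):
    """Standard Gini coefficient for a list of non-negative counts."""
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Sorted-values identity: G = 2*sum(i * x_i) / (n * total) - (n + 1)/n
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Hypothetical NSC counts: citations to one author, grouped by citing paper.
even_spread = [1, 1, 1, 1, 1, 1, 1, 1]    # many papers, one citation each
concentrated = [1, 1, 1, 1, 1, 1, 1, 25]  # one paper cites the author 25 times

print(gini(even_spread))   # 0.0: citations spread evenly
print(gini(concentrated))  # 0.65625: citations highly concentrated
```

An author flagged by this metric would then be checked against the study's other red flags before any inference about RLM, since a skewed distribution alone is only an anomaly, not proof.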
  7. Nature. 2024 Feb 07.
      
    Keywords:  Ethics; Publishing; Scientific community
    DOI:  https://doi.org/10.1038/d41586-024-00344-w
  8. Res Eval. 2023 Oct;32(4): 648-657
      Previous studies of the use of peer review for the allocation of competitive funding have concentrated on questions of efficiency and how to make the 'best' decision, by ensuring that successful applicants are also the more productive or visible in the long term. This paper examines which components of the feedback received from an unsuccessful grant application are associated with motivating applicants' career decisions to persist (reapply for funding at T1) or to switch (not reapply, or else leave academia). This study combined data from interviews with unsuccessful ECR applicants (n = 19) to The Wellcome Trust 2009-19, and manual coding of reviewer comments received by applicants (n = 81). All applicants received feedback on their application at T0, with a large proportion of unsuccessful applicants reapplying for funding at T1. Here, peer-review-comments-as-feedback send signals that encourage applicants to persist (continue) or switch (not continue) even when the initial application has failed. Feedback identified by unsuccessful applicants as motivating their decision to resubmit had three characteristics: it was actionable, targeted, and fair. The results lead to the identification of standards of feedback for funding agencies and peer reviewers to uphold when providing reviewer feedback to applicants as part of the peer review process. The provision of quality reviewer-reports-as-feedback to applicants ensures that peer review acts as a participatory research governance tool focused on supporting the development of individuals and their future research plans.
    Keywords:  early career researchers; peer review; qualitative; research behaviour; research careers
    DOI:  https://doi.org/10.1093/reseval/rvad034
  9. J Imaging Inform Med. 2024 Feb 05.
      Peer review plays a crucial role in accreditation and credentialing processes as it can identify outliers and foster a peer learning approach, facilitating error analysis and knowledge sharing. However, traditional peer review methods may fall short in effectively addressing the interpretive variability among reviewing and primary reading radiologists, hindering scalability and effectiveness. Reducing this variability is key to enhancing the reliability of results and instilling confidence in the review process. In this paper, we propose a novel statistical approach called "Bayesian Inter-Reviewer Agreement Rate" (BIRAR) that integrates radiologist variability. By doing so, BIRAR aims to enhance the accuracy and consistency of peer review assessments, providing physicians involved in quality improvement and peer learning programs with valuable and reliable insights. A computer simulation was designed to assign predefined interpretive error rates to hypothetical interpreting and peer-reviewing radiologists. The Monte Carlo simulation then sampled (100 samples per experiment) the data that would be generated by peer reviews. The performances of BIRAR and four other peer review methods for measuring interpretive error rates were then evaluated, including a method that uses a gold standard diagnosis. Application of the BIRAR method resulted in 93% and 79% higher relative accuracy and 43% and 66% lower relative variability, compared to "Single/Standard" and "Majority Panel" peer review methods, respectively. Accuracy was defined by the median difference of Monte Carlo simulations between measured and pre-defined "actual" interpretive error rates. Variability was defined by the 95% CI around the median difference of Monte Carlo simulations between measured and pre-defined "actual" interpretive error rates. BIRAR is a practical and scalable peer review method that produces more accurate and less variable assessments of interpretive quality by accounting for variability among the radiologists in the group, implicitly applying a standard derived from the level of consensus within the group across various types of interpretive findings.
    Keywords:  Gold standard; Peer review; Reliability
    DOI:  https://doi.org/10.1007/s10278-024-00971-9
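The Monte Carlo setup the abstract describes can be illustrated with a minimal sketch. This is not BIRAR itself (the Bayesian model is not specified in the abstract); it only shows why the "Single/Standard" method is biased: a reviewer with their own error rate inflates the measured discrepancy rate above the predefined "actual" rate. All rates, sample sizes, and function names are assumptions for illustration.

```python
import random

# Sketch: a reading radiologist with a known "actual" error rate is peer
# reviewed by a single reviewer who also errs. The measured discrepancy rate
# conflates both error sources -- the bias BIRAR is designed to correct.

def simulate_single_review(reader_err, reviewer_err, n_cases, rng):
    """Return the fraction of cases a single reviewer flags as discrepant."""
    flagged = 0
    for _ in range(n_cases):
        reader_wrong = rng.random() < reader_err
        reviewer_wrong = rng.random() < reviewer_err
        # A discrepancy is flagged when exactly one of the two is wrong;
        # when both err we optimistically assume they happen to agree.
        if reader_wrong != reviewer_wrong:
            flagged += 1
    return flagged / n_cases

rng = random.Random(0)
actual = 0.05  # predefined "actual" interpretive error rate
# 100 Monte Carlo samples, mirroring the abstract's 100 samples per experiment.
measured = sorted(simulate_single_review(actual, 0.05, 2000, rng)
                  for _ in range(100))
median = measured[50]
print(f"actual {actual:.3f}, median measured {median:.3f}")
# The median measured rate sits near 0.095, not 0.05: with independent errors,
# P(exactly one wrong) = 0.05*0.95 + 0.95*0.05, so reviewer error inflates it.
```

Comparing this median against the predefined rate is exactly the accuracy definition used in the abstract (median difference between measured and "actual" rates).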
  10. Diabetes Metab Syndr. 2024 Feb 03. pii: S1871-4021(24)00007-9. [Epub ahead of print]18(2): 102946
      BACKGROUND: Peer review is the established method for evaluating the quality and validity of research manuscripts in scholarly publishing. However, scientific peer review faces challenges as the volume of submitted research has steadily increased in recent years. Time constraints and peer review quality assurance can place burdens on reviewers, potentially discouraging their participation. Some artificial intelligence (AI) tools might assist in relieving these pressures. This study explores the efficiency and effectiveness of one AI chatbot, ChatGPT (Generative Pre-trained Transformer), in the peer review process.
    METHODS: Twenty-one peer-reviewed research articles were anonymised to ensure unbiased evaluation. Each article was reviewed by two humans and by versions 3.5 and 4.0 of ChatGPT. The AI was instructed to provide three positive and three negative comments on the articles and recommend whether they should be accepted or rejected. The human and AI results were compared using a 5-point Likert scale to determine the level of agreement. The correlation between ChatGPT responses and the acceptance or rejection of the papers was also examined.
    RESULTS: Subjective review similarity between human reviewers and ChatGPT showed a mean score of 3.6/5 for ChatGPT 3.5 and 3.76/5 for ChatGPT 4.0. The correlation between human and AI review scores was statistically significant for ChatGPT 3.5, but not for ChatGPT 4.0.
    CONCLUSION: ChatGPT can complement human scientific peer review, enhancing efficiency and promptness in the editorial process. However, a fully automated AI review process is currently not advisable, and ChatGPT's role should be regarded as highly constrained for the present and near future.
    Keywords:  Algorithms; Artificial intelligence; ChatGPT; Computers; Manuscript writing; Peer review; Scientific writing
    DOI:  https://doi.org/10.1016/j.dsx.2024.102946
  11. J Osteopath Med. 2024 Feb 07.
      CONTEXT: Stigmatizing language or non-person-centered language (non-PCL) has been shown to impact patients negatively, especially in the case of obesity. This has led many associations, such as the American Medical Association (AMA) and the International Committee of Medical Journal Editors (ICMJE), to enact guidelines prohibiting the use of stigmatizing language in medical research. In 2018, the AMA adopted person-centered language (PCL) guidelines, including a specific obesity amendment to which all researchers should adhere. However, little research has been conducted to determine if these guidelines are being followed.
    OBJECTIVES: Our primary objective was to determine if PCL guidelines specific to obesity have been properly followed in the most-interacted sports medicine journals.
    METHODS: We searched within PubMed for obesity-related articles between 2019 and 2022 published in the top 10 most-interacted sports medicine journals based on Google Metrics data. A predetermined list of stigmatizing and non-PCL terms/language was searched within each article.
    RESULTS: A total of 198 articles were sampled, of which 58.6 % were found to be not compliant with PCL guidelines. The most common non-PCL terms were "obese" utilized in 49.5 % of articles, followed by "overweight" as the next most common stigmatizing term at 40.4 %. Stigmatizing labels such as "heavy, heavier, heaviness," "fat" as an adjective, and "morbid" appeared in articles but at a lower rate.
    CONCLUSIONS: Our study shows that there is a severe lack of adherence to PCL guidelines in the most-interacted sports medicine journals. Negative associations between stigmatizing language and individuals with obesity will only persist if a greater effort is not made to change this. All journals, including the most prestigious ones, should adopt and execute PCL guidelines to prevent the spread of demeaning language in the medical community.
    Keywords:  obesity; person-centered language; sports medicine; weight loss
    DOI:  https://doi.org/10.1515/jom-2023-0254
  12. F1000Res. 2023;12: 1398
      Background: As Artificial Intelligence (AI) technologies such as Generative AI (GenAI) have become more common in academic settings, it is necessary to examine how these tools interact with issues of authorship, academic integrity, and research methodologies. The current landscape lacks cohesive policies and guidelines for regulating AI's role in academic research, which has prompted discussions among publishers, authors, and institutions.
    Methods: This study employs inductive thematic analysis to explore publisher policies regarding AI-assisted authorship and academic work. Our methods involved a two-fold analysis using both AI-assisted and traditional unassisted techniques to examine the available policies from leading academic publishers and other publishing or academic entities. The framework was designed to offer multiple perspectives, harnessing the strengths of AI for pattern recognition while leveraging human expertise for nuanced interpretation. The results of these two analyses are combined to form the final themes.
    Results: Our findings indicate six overall themes, three of which were independently identified in both the AI-assisted and unassisted, manual analysis using common software tools. A broad consensus appears among publishers that human authorship remains paramount and that the use of GenAI tools is permissible but must be disclosed. However, GenAI tools are increasingly acknowledged for their supportive roles, including text generation and data analysis. The study also discusses the inherent limitations and biases of AI-assisted analysis, necessitating rigorous scrutiny by authors, reviewers, and editors.
    Conclusions: There is a growing recognition of AI's role as a valuable auxiliary tool in academic research, but one that comes with caveats pertaining to integrity, accountability, and interpretive limitations. This study used a novel analysis supported by GenAI tools to identify themes emerging in the policy landscape, underscoring the need for an informed, flexible approach to policy formulation that can adapt to the rapidly evolving landscape of AI technologies.
    Keywords:  AI-Assisted Authorship; Academic Integrity; Academic Publishing; Ethical Guidelines; Generative AI; Inductive Thematic Analysis; Publisher Policies; Research Methodologies
    DOI:  https://doi.org/10.12688/f1000research.142411.2
  13. BMJ. 2024 Jan 31. 384: e077192
      OBJECTIVES: To determine the extent and content of academic publishers' and scientific journals' guidance for authors on the use of generative artificial intelligence (GAI).
    DESIGN: Cross sectional, bibliometric study.
    SETTING: Websites of academic publishers and scientific journals, screened on 19-20 May 2023, with the search updated on 8-9 October 2023.
    PARTICIPANTS: Top 100 largest academic publishers and top 100 highly ranked scientific journals, regardless of subject, language, or country of origin. Publishers were identified by the total number of journals in their portfolio, and journals were identified through the Scimago journal rank using the Hirsch index (H index) as an indicator of journal productivity and impact.
    MAIN OUTCOME MEASURES: The primary outcomes were the content of GAI guidelines listed on the websites of the top 100 academic publishers and scientific journals, and the consistency of guidance between the publishers and their affiliated journals.
    RESULTS: Among the top 100 largest publishers, 24% provided guidance on the use of GAI, of which 15 (63%) were among the top 25 publishers. Among the top 100 highly ranked journals, 87% provided guidance on GAI. Of the publishers and journals with guidelines, the inclusion of GAI as an author was prohibited in 96% and 98%, respectively. Only one journal (1%) explicitly prohibited the use of GAI in the generation of a manuscript, and two (8%) publishers and 19 (22%) journals indicated that their guidelines exclusively applied to the writing process. When disclosing the use of GAI, 75% of publishers and 43% of journals included specific disclosure criteria. Where to disclose the use of GAI varied, including in the methods or acknowledgments, in the cover letter, or in a new section. Variability was also found in how to access GAI guidelines shared between journals and publishers. GAI guidelines in 12 journals directly conflicted with those developed by the publishers. The guidelines developed by top medical journals were broadly similar to those of academic journals.
    CONCLUSIONS: Guidelines by some top publishers and journals on the use of GAI by authors are lacking. Among those that provided guidelines, the allowable uses of GAI and how it should be disclosed varied substantially, with this heterogeneity persisting in some instances among affiliated publishers and journals. Lack of standardization places a burden on authors and could limit the effectiveness of the regulations. As GAI continues to grow in popularity, standardized guidelines to protect the integrity of scientific output are needed.
    DOI:  https://doi.org/10.1136/bmj-2023-077192
  14. Heliyon. 2024 Jan 15. 10(1): e22871
      This paper introduces Heliyon's Business and Management Section, established in 2023 as a platform committed to maintaining rigorous ethical and scientific publishing standards within the field. Prioritizing scientific correctness and technical soundness over mere novelty, it encompasses a wide range of research domains, encouraging contributions from scholars across diverse backgrounds. Within this guide, we provide insights into the process of preparing effective papers and offer constructive guidelines for the reviewing process. Authors will find valuable tools to align their work with the journal's expectations, incorporating current literature to enhance the probability of successful publication. Both aspiring authors and reviewers will benefit from this resource, which emphasizes academic and professional growth. By promoting collaboration and upholding high-quality standards, we aim to fortify the scholarly publishing community and advance knowledge in the field of business and management.
    DOI:  https://doi.org/10.1016/j.heliyon.2023.e22871
  15. Australas Psychiatry. 2024 Feb 08. 10398562241231460
      OBJECTIVE: This paper aims to provide an introductory resource for beginner peer reviewers in psychiatry and the broader biomedical science field. It will provide a concise overview of the peer review process, alongside some reviewing tips and tricks.
    CONCLUSION: The peer review process is a fundamental aspect of biomedical science publishing. The model of peer review offered varies between journals and usually relies on a pool of volunteers with differing levels of expertise and scope. The aim of peer review is to collaboratively leverage reviewers' collective knowledge with the objective of increasing the quality and merit of published works. The limitations, methodology and need for transparency in the peer review process are often poorly understood. Although imperfect, the peer review process provides some degree of scientific rigour by emphasising the need for an ethical, comprehensive and systematic approach to reviewing articles. Contributions from junior reviewers can add significant value to manuscripts.
    Keywords:  biomedical publishing; medical education; peer review; psychiatry
    DOI:  https://doi.org/10.1177/10398562241231460
  16. Int J Clin Pharm. 2024 Feb 08.
      Publishing in reputable peer-reviewed journals is an integral step of the clinical pharmacy research process, allowing for knowledge transfer and advancement in clinical pharmacy practice. Writing a manuscript for publication in a journal requires several careful considerations to ensure that research findings are communicated to the satisfaction of editors and reviewers, and effectively to the readers. This commentary provides a summary of the main points to consider, outlining how to: (1) select a suitable journal, (2) tailor the manuscript for the journal readership, (3) organise the content of the manuscript in line with the journal's guidelines, and (4) manage feedback from the peer review process. This commentary reviews the steps of the writing process, identifies common pitfalls, and proposes ways to overcome them. It aims to assist both novice and established researchers in the field of clinical pharmacy to enhance the quality of writing in a research paper to maximise impact.
    Keywords:  Clinical pharmacy; Journal article; Peer review; Publishing; Research; Writing
    DOI:  https://doi.org/10.1007/s11096-023-01695-6
  17. BMC Med Educ. 2024 Feb 06. 24(1): 115
      INTRODUCTION: Medical undergraduate students receive limited education on scholarly publishing. However, publishing experiences during this phase are known to influence study and career paths. The medical bachelor Honours Program (HP) at Utrecht University initiated a hands-on writing and publishing course, which resulted in nine reviews published in internationally peer reviewed academic journals. We wanted to share the project set-up, explore the academic development of the participating students and determine the impact of the reviews on the scientific community.
    METHODS: Thirty-one out of 50 alumni completed a digital retrospective questionnaire on, for example, the development of skills and the benefits for their studies and careers. Publication metrics of the HP review papers were retrieved from Web of Science.
    RESULTS: This hands-on project provides a clear teaching method on academic writing and scholarly publishing in the bachelor medical curriculum. Participants were able to obtain and improve writing and publishing skills. The output yielded well-recognized scientific papers and valuable learning experiences. 71% of the participating students published at least one additional paper following this project, and 55% of the students indicated the project influenced their academic study and/or career path. Nine manuscripts were published in journals with an average impact factor of 3.56 and cited on average 3.73 times per year.
    DISCUSSION: This course might inspire other medical educators to incorporate similar projects successfully into their curriculum. To this end, a number of recommendations with regard to supervision, time investment and group size are given.
    Keywords:  Bachelor/undergraduate education; Review writing; Scholarly publishing; Skill development
    DOI:  https://doi.org/10.1186/s12909-024-05098-7
  18. Trends Plant Sci. 2024 Feb;pii: S1360-1385(24)00001-3. [Epub ahead of print]29(2): 101-103
      
    DOI:  https://doi.org/10.1016/j.tplants.2024.01.001
  19. Sch Psychol. 2024 Jan;39(1): 1-3
      School Psychology is an outlet for research on children, youth, educators, and families that has scientific, practice, and policy implications for education and educational systems. In this editorial, annual updates are provided regarding journal impact, award winners, special topics, and editorial leadership, as well as reflections on how the journal engages in the open science process to promote transparency, rigor, and reproducibility in the science produced in the field of school psychology. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
    DOI:  https://doi.org/10.1037/spq0000623