bims-librar Biomed News
on Biomedical librarianship
Issue of 2024-11-24
29 papers selected by
Thomas Krichel, Open Library Society



  1. Health Inf Manag. 2024 Nov 19. 18333583241283518
      Background: An increasing number of people are exploring their genetic predisposition to many diseases, allowing them to make healthcare decisions with improved knowledge. Objectives: The aim of this study was to identify factors that influence individuals to consider genetic testing utilising a modified health belief model (HBM). Method: The authors tested the modified HBM using a convenience sample of individuals from across the United States after a pilot study was used to test the validity and reliability of the constructs. Using SmartPLS, the researchers determined that the modified HBM explains the decision-making process used to determine what influences individuals to consider genetic testing. Results: Results suggested that perceived susceptibility, perceived benefits, cues to action, self-efficacy, e-health literacy and normative belief all play a role in an individual's decision to test their genetics. Conclusion: By conducting genetic testing, individuals may benefit from knowing they are predisposed to certain cancers and other diseases. Yet, research results have indicated that most individuals are unaware of resources available online that will help them in understanding genetic test results and associated diseases. Implications: Since healthcare literacy is an issue reported by these individuals, health information management professionals are well qualified to support them in e-health literacy by assisting them to evaluate the trustworthiness of available resources, and to educate them about privacy rights regarding access to and protection of their genetic information.
    Keywords:  e-health literacy; genetic testing; health belief model (HBM); health information management; health information management professional
    DOI:  https://doi.org/10.1177/18333583241283518
  2. Stud Health Technol Inform. 2024 Nov 18. 320 141-148
      Inclusive library services rely on accessible buildings and services, and knowledgeable staff. This study explored whether a standard accessibility evaluation of physical library buildings is sufficient to reveal barriers that must be attended to for the library to be universally designed. Data were collected through a survey, two accessibility evaluations, and interviews with staff in the evaluated libraries. The findings show that evaluations based upon technical requirements for construction work are useful for identifying barriers related to access and safety in the library and certain conditions inside, such as staircases and lighting. However, there is a need for library-specific guidelines addressing the organization and presentation of the collection and signage, and for understanding what users need to be self-sufficient in the library when it is open outside staffed hours and no staff are present.
    Keywords:  Public libraries; accessibility evaluations; universal design
    DOI:  https://doi.org/10.3233/SHTI240995
  3. Nucleic Acids Res. 2024 Nov 18. pii: gkae1059. [Epub ahead of print]
      PubChem (https://pubchem.ncbi.nlm.nih.gov) is a large and highly-integrated public chemical database resource at NIH. In the past two years, significant updates were made to PubChem. With additions from over 130 new sources, PubChem contains >1000 data sources, 119 million compounds, 322 million substances and 295 million bioactivities. New interfaces, such as the consolidated literature panel and the patent knowledge panel, were developed. The consolidated literature panel combines all references about a compound into a single list, allowing users to easily find, sort, and export all relevant articles for a chemical in one place. The patent knowledge panels for a given query chemical or gene display chemicals, genes, and diseases co-mentioned with the query in patent documents, helping users to explore relationships between co-occurring entities within patent documents. PubChemRDF was expanded to include the co-occurrence data underlying the literature knowledge panel, enabling users to exploit semantic web technologies to explore entity relationships based on the co-occurrences in the scientific literature. The usability and accessibility of information on chemicals with non-discrete structures (e.g. biologics, minerals, polymers, UVCBs and glycans) were greatly improved with dedicated web pages that provide a comprehensive view of all available information in PubChem for these chemicals.
    DOI:  https://doi.org/10.1093/nar/gkae1059
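Beyond the web interfaces described above, PubChem data can be retrieved programmatically through its PUG REST interface. A minimal sketch of building such a request URL; the compound name and property list here are illustrative choices, not part of the abstract:

```python
# Sketch: building a PubChem PUG REST request URL.
# PUG REST paths follow the documented pattern
#   <base>/<domain>/<namespace>/<identifier>/<operation>/<output format>;
# the compound ("aspirin") and properties below are illustrative.
from urllib.parse import quote

PUG_BASE = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"

def compound_property_url(name: str, properties: list[str], fmt: str = "JSON") -> str:
    """Build a PUG REST URL requesting selected properties of a compound by name."""
    return f"{PUG_BASE}/compound/name/{quote(name)}/property/{','.join(properties)}/{fmt}"

url = compound_property_url("aspirin", ["MolecularFormula", "MolecularWeight"])
print(url)
# The resulting URL can be fetched with urllib.request.urlopen(url) or any HTTP client.
```
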
  4. Med Ref Serv Q. 2024 Nov 20. 1-14
      This study explores database selection for systematic reviews in medical informatics, addressing the challenge researchers face in maximizing retrieval of relevant articles. Systematic reviews from top medical informatics journals in 2021 were analyzed, divided into randomized control trial and non-randomized control trial groups. Four databases were evaluated based on recall, precision, and unique references. Findings revealed that for randomized control trials, the best combination was PubMed, Embase, and Web of Science, while for the non-randomized control trial group, the recommended combination included PubMed, Embase, Web of Science, and Scopus, highlighting effective literature search strategies.
    Keywords:  Databases; information seeking behavior; information services; information storage and retrieval; medical informatics
    DOI:  https://doi.org/10.1080/02763869.2024.2429066
  5. J Med Internet Res. 2024 Nov 19. 26 e53781
       BACKGROUND: The massive increase in the number of published scientific articles enhances knowledge but makes it more complicated to summarize results. The Medical Subject Headings (MeSH) thesaurus was created in the mid-20th century with the aim of systematizing article indexing and facilitating their retrieval. Despite the advent of search engines, few studies have questioned the relevance of the MeSH thesaurus, and none have done so systematically.
    OBJECTIVE: The objective of this study was to estimate the added value of using MeSH terms in PubMed queries for systematic reviews (SRs).
    METHODS: SRs published in 4 high-impact medical journals in general medicine over the past 10 years were selected. Only SRs for which a PubMed query was provided were included. Each query was transformed to obtain 3 versions: the original query (V1), the query with free-text terms only (V2), and the query with MeSH terms only (V3). These 3 queries were compared with each other based on their sensitivity and positive predictive values.
    RESULTS: In total, 59 SRs were included. The suppression of MeSH terms affected the number of relevant articles retrieved for 24 (41%) of the 59 SRs. The median (IQR) sensitivities of queries V1 and V2 were 77.8% (62.1%-95.2%) and 71.4% (42.6%-90%), respectively. V1 queries provided an average of 2.62 additional relevant papers per SR compared with V2 queries, but an additional 820.29 papers had to be screened. The cost of each additional relevant paper collected was therefore the screening of 313.09 papers, slightly more than triple the mean reading cost associated with V2 queries (88.67).
    CONCLUSIONS: Our results revealed that removing MeSH terms from a query decreases sensitivity while slightly increasing the positive predictive value. Queries containing both MeSH and free-text terms yielded more relevant articles but required screening many additional papers. Despite this additional workload, MeSH terms remain indispensable for SRs.
    Keywords:  MeSH; MeSH thesaurus; Medical Subject Headings; PPV; PubMed; comparative analysis; literature review; medical knowledge; positive predictive value; review; scientific knowledge; search strategy; systematic literature review; systematic review; utility
    DOI:  https://doi.org/10.2196/53781
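The screening-cost figure in the abstract above follows directly from the other reported averages; a small sketch reproducing the arithmetic, with the definitions of sensitivity and positive predictive value used to compare the query versions:

```python
# Reproducing the screening-cost arithmetic reported in the abstract:
# V1 (MeSH + free text) retrieved on average 2.62 more relevant papers
# per systematic review than V2 (free text only), at the price of
# screening 820.29 additional papers per review.
extra_relevant_per_sr = 2.62
extra_screened_per_sr = 820.29

# Papers screened per additional relevant paper gained by keeping MeSH terms:
cost_per_relevant = extra_screened_per_sr / extra_relevant_per_sr
print(round(cost_per_relevant, 2))  # matches the 313.09 quoted in the abstract

# The two metrics used to compare the query versions:
def sensitivity(relevant_retrieved: int, relevant_total: int) -> float:
    """Fraction of all relevant papers that the query retrieved."""
    return relevant_retrieved / relevant_total

def ppv(relevant_retrieved: int, total_retrieved: int) -> float:
    """Positive predictive value: fraction of retrieved papers that are relevant."""
    return relevant_retrieved / total_retrieved
```
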
  6. Stud Health Technol Inform. 2024 Nov 18. 320 67-73
      Libraries have been pinpointed as a possible hub for information and safety during a crisis. We present a workshop design that contributes to the education of librarians, with the goal of making them better able to provide inclusive and accessible information in a crisis. The workshop was carried out during a conference for librarians. The results indicate a need for further knowledge about methods and tools for making information accessible, as well as practical information about crisis preparedness. The workshop presented here could be further improved by creating tailored personas and spending more time on the activity.
    Keywords:  Workshop; crisis preparedness; personas; universal design; variation cards
    DOI:  https://doi.org/10.3233/SHTI240985
  7. Nucleic Acids Res. 2024 Nov 18. pii: gkae1010. [Epub ahead of print]
    UniProt Consortium
      The aim of the UniProt Knowledgebase (UniProtKB; https://www.uniprot.org/) is to provide users with a comprehensive, high-quality and freely accessible set of protein sequences annotated with functional information. In this publication, we describe ongoing changes to our production pipeline to limit the sequences available in UniProtKB to high-quality, non-redundant reference proteomes. We continue to manually curate the scientific literature to add the latest functional data and use machine learning techniques. We also encourage community curation to ensure key publications are not missed. We provide an update on the automatic annotation methods used by UniProtKB to predict information for unreviewed entries describing unstudied proteins. Finally, updates to the UniProt website are described, including a new tab linking protein to genomic information. In recognition of its value to the scientific community, the UniProt database has been awarded Global Core Biodata Resource status.
    DOI:  https://doi.org/10.1093/nar/gkae1010
  8. Trends Microbiol. 2024 Nov 21. pii: S0966-842X(24)00279-8. [Epub ahead of print]
      References in the published microbiology literature provide the foundation for current scientific knowledge within the field. However, reference errors can occur, as discussed here, including an illustrative example on the origin of the term 'pyroptosis'.
    DOI:  https://doi.org/10.1016/j.tim.2024.10.006
  9. Acad Radiol. 2024 Nov 16. pii: S1076-6332(24)00791-8. [Epub ahead of print]
       RATIONALE AND OBJECTIVES: It is crucial to inform the patient about potential complications and obtain consent before interventional radiology procedures. In this study, we investigated the accuracy, reliability, and readability of the information provided by ChatGPT-4 about potential complications of interventional radiology procedures.
    MATERIALS AND METHODS: Potential major and minor complications of 25 different interventional radiology procedures (8 non-vascular, 17 vascular) were asked to ChatGPT-4 chatbot. The responses were evaluated by two experienced interventional radiologists (>25 years and 10 years of experience) using a 5-point Likert scale according to Cardiovascular and Interventional Radiological Society of Europe guidelines. The correlation between the two interventional radiologists' scoring was evaluated by the Wilcoxon signed-rank test, Intraclass Correlation Coefficient (ICC), and Pearson correlation coefficient (PCC). In addition, readability and complexity were quantitatively assessed using the Flesch-Kincaid Grade Level, Flesch Reading Ease scores, and Simple Measure of Gobbledygook (SMOG) index.
    RESULTS: Interventional radiologist 1 (IR1) and interventional radiologist 2 (IR2) gave 104 and 109 points, respectively, out of a potential 125 points for the total of all procedures. There was no statistically significant difference between the total scores of the two IRs (p = 0.244). The IRs demonstrated high agreement across all procedure ratings (ICC = 0.928). Both IRs scored 34 out of 40 points for the eight non-vascular procedures. The 17 vascular procedures received 70 of 85 points from IR1 and 75 from IR2. The agreement between the two observers' assessments was good, with PCC values of 0.908 and 0.896 for non-vascular and vascular procedures, respectively. Readability levels were overall low. The mean Flesch-Kincaid Grade Level, Flesch Reading Ease score, and SMOG index were 12.51 ± 1.14 (college level), 30.27 ± 8.38 (college level), and 14.46 ± 0.76 (college level), respectively. There was no statistically significant difference in readability between non-vascular and vascular procedures (p = 0.16).
    CONCLUSION: ChatGPT-4 demonstrated remarkable performance, highlighting its potential to enhance accessibility to information about interventional radiology procedures and support the creation of educational materials for patients. Based on the findings of our study, while ChatGPT provides accurate information and shows no evidence of hallucinations, it is important to emphasize that a high level of education and health literacy are required to fully comprehend its responses.
    Keywords:  ChatGPT; Complication; Interventional radiology; Patient interviewing; Readability
    DOI:  https://doi.org/10.1016/j.acra.2024.10.028
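The Flesch Reading Ease and Flesch-Kincaid Grade Level scores used in this and several of the following entries are simple functions of average sentence length and syllables per word. A minimal sketch of both formulas; the syllable counter is a rough vowel-group heuristic, not the dictionary-based counting that real readability tools use:

```python
import re

def count_syllables(word: str) -> int:
    """Very rough heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for a text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl

fre, fkgl = flesch_scores("The cat sat on the mat.")
# Short, monosyllabic text scores as very easy: high FRE, grade level below 1.
```

Higher FRE means easier text (patient materials are usually targeted at 60+), while FKGL approximates the US school grade needed to understand the text.
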
  10. PRiMER. 2024;8:51
       Background: Artificial intelligence (AI)-generated explanations about medical topics may be clearer and more accessible than traditional evidence-based sources, enhancing patient understanding and autonomy. We evaluated different AI explanations for patients about common diagnoses to aid in patient care.
    Methods: We prompted ChatGPT 3.5, Google Bard, HuggingChat, and Claude 2 separately to generate a short patient education paragraph about seven common diagnoses. We used the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) to evaluate the readability and grade level of the responses. We used the Agency for Healthcare Research and Quality's Patient Education Materials Assessment Tool (PEMAT) grading rubric to evaluate the understandability and actionability of responses.
    Results: Claude 2 scored FRE 67.0, FKGL 7.4, and PEMAT 69% for understandability and 34% for actionability. ChatGPT scored FRE 58.5, FKGL 9.3, and PEMAT 69% and 31%, respectively. Google Bard scored FRE 50.1, FKGL 9.9, and PEMAT 52% and 23%. HuggingChat scored FRE 48.7, FKGL 11.6, and PEMAT 57% and 29%.
    Conclusion: Claude 2 and ChatGPT demonstrated superior readability and understandability, but practical application and patient outcomes need further exploration. This study is limited by the rapid development of these tools with newer improved models replacing the older ones. Additionally, the accuracy and clarity of AI responses is based on that of the user-generated response. The PEMAT grading rubric is also mainly used for patient information leaflets that include visual aids and may contain subjective evaluations.
    DOI:  https://doi.org/10.22454/PRiMER.2024.916089
  11. Eur Urol Open Sci. 2024 Dec;70 148-153
       Background and objective: Patients struggle to classify symptoms, which hinders timely medical presentation. With 35-75% of patients seeking information online before consulting a health care professional, generative language-based artificial intelligence (AI), exemplified by ChatGPT-3.5 (GPT-3.5) from OpenAI, has emerged as an important source. The aim of our study was to evaluate the role of GPT-3.5 in triaging acute urological conditions to address a gap in current research.
    Methods: We assessed GPT-3.5 performance in providing urological differential diagnoses (DD) and recommending a course of action (CoA). Six acute urological pathologies were identified for evaluation. Lay descriptions, sourced from patient forums, formed the basis for 472 queries that were independently entered by nine urologists. We evaluated the output in terms of compliance with the European Association of Urology (EAU) guidelines, the quality of the patient information using the validated DISCERN questionnaire, and a linguistic analysis.
    Key findings and limitations: The median GPT-3.5 ratings were 4/5 for DD and CoA, and 3/5 for overall information quality. English outputs received higher median ratings than German outputs for DD (4.27 vs 3.95; p < 0.001) and CoA (4.25 vs 4.05; p < 0.005). There was no difference in performance between urgent and non-urgent cases. Analysis of the information quality revealed notable underperformance for source indication, risk assessment, and influence on quality of life.
    Conclusion and clinical implications: Our results highlight the potential of GPT-3.5 as a triage system offering individualized, empathetic advice mostly aligned with the EAU guidelines, outscoring other online information. Relevant shortcomings in information quality, especially for risk assessment, need to be addressed to enhance reliability. Broader transparency and quality improvements are needed before integration into patient care, primarily in English-speaking settings.
    Patient summary: We looked at the performance of ChatGPT-3.5 for patients seeking urology advice. We entered more than 400 German and English inputs and assessed the possible diagnoses suggested by this artificial intelligence tool. ChatGPT-3.5 scored well in providing a complete list of possible diagnoses and recommending a course of action mostly in line with current guidelines. The quality of the information was good overall, but missing and unclear sources for the information can be a problem.
    Keywords:  Artificial intelligence; ChatGPT; Internet use; Triage; Urological emergency
    DOI:  https://doi.org/10.1016/j.euros.2024.10.015
  12. J Curr Glaucoma Pract. 2024 Jul-Sep;18(3): 110-116
       Aim and background: Patients are increasingly turning to the internet to learn more about their ocular disease. In this study, we sought (1) to compare the accuracy and readability of Google and ChatGPT responses to patients' glaucoma-related frequently asked questions (FAQs) and (2) to evaluate ChatGPT's capacity to improve glaucoma patient education materials by accurately reducing the grade level at which they are written.
    Materials and methods: We executed a Google search to identify the three most common FAQs related to 10 search terms associated with glaucoma diagnosis and treatment. Each of the 30 FAQs was inputted into both Google and ChatGPT and responses were recorded. The accuracy of responses was evaluated by three glaucoma specialists while readability was assessed using five validated readability indices. Subsequently, ChatGPT was instructed to generate patient education materials at specific reading levels to explain seven glaucoma procedures. The accuracy and readability of procedural explanations were measured.
    Results: ChatGPT responses to glaucoma FAQs were significantly more accurate than Google responses (97 vs 77% accuracy, respectively, p < 0.001). ChatGPT responses were also written at a significantly higher reading level (grade 14.3 vs 9.4, respectively, p < 0.001). When instructed to revise glaucoma procedural explanations to improve understandability, ChatGPT reduced the average reading level of educational materials from grade 16.6 (college level) to grade 9.4 (high school level) (p < 0.001) without reducing the accuracy of procedural explanations.
    Conclusion: ChatGPT is more accurate than Google search when responding to glaucoma patient FAQs. ChatGPT successfully reduced the reading level of glaucoma procedural explanations without sacrificing accuracy, with implications for the future of customized patient education for patients with varying health literacy.
    Clinical significance: Our study demonstrates the utility of ChatGPT for patients seeking information about glaucoma and for physicians when creating unique patient education materials at reading levels that optimize understanding by patients. An enhanced patient understanding of glaucoma may lead to informed decision-making and improve treatment compliance.
    How to cite this article: Cohen SA, Fisher AC, Xu BY, et al. Comparing the Accuracy and Readability of Glaucoma-related Question Responses and Educational Materials by Google and ChatGPT. J Curr Glaucoma Pract 2024;18(3):110-116.
    Keywords:  Artificial intelligence; ChatGPT; Glaucoma; Google; Patient education
    DOI:  https://doi.org/10.5005/jp-journals-10078-1448
  13. Indian J Otolaryngol Head Neck Surg. 2024 Dec;76(6): 5793-5798
      Tracheostomy is a surgical procedure that creates an opening in the neck through which a tube is inserted into the trachea to help a person breathe. Proper cleaning and care of the tracheostomy tube is vital to prevent infections. Patients frequently use the internet to learn about tracheostomy tube care before and after the procedure. This study assessed the readability and reliability of 50 websites providing patient information on tracheostomy tube care. The websites were evaluated using the Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease Score (FRES), Gunning Fog score (GF), DISCERN score, and JAMA benchmark criteria. The mean FKGL was 6.2, FRES was 61.9, and GF score was 7.2, indicating moderate overall readability. The reliability scores were likewise average, with mean DISCERN and JAMA scores of 3.2 and 1.8, respectively. There is immense scope for improving the readability and reliability of online resources on tracheostomy tube care so that patients can comprehend the information easily.
    Keywords:  Comprehension; Digital health; Internet; Trachea; Tracheostomy
    DOI:  https://doi.org/10.1007/s12070-024-05098-5
  14. BMC Med Educ. 2024 Nov 15. 24(1): 1318
       BACKGROUND: Tunneled catheters can be inserted for many reasons, and in most centers it is not clear who should insert them. Some anesthesiologists may not have seen first-hand the insertion of a tunneled catheter during their residency, depending on the policies of the institution. YouTube is one of the most commonly used online platforms for accessing medical information. The aim of this study was to investigate the reliability of YouTube videos on tunneled central venous catheter (Hickman and tunneled hemodialysis catheter) insertion for education.
    METHODS: The keywords "Tunneled catheter insertion" and "Tunneled central venous catheter insertion" were searched for on YouTube. The first 100 videos ranked by the YouTube algorithm were analysed. Animation and theoretical content videos, as well as videos that included only a part of the catheter insertion, were excluded. The sources of the videos were categorized as medical doctor or professional organization, medical device advertisement, and hospital. Two authors evaluated all videos via the Journal of the American Medical Association (JAMA) benchmark criteria, modified DISCERN scores, and the Global Quality Scale (GQS).
    RESULTS: Twenty-three videos were analysed in the study. The video quality scores were similar across the video sources. The number of views and the number of likes were significantly positively correlated. Furthermore, a significant correlation was found between the JAMA, Modified DISCERN, and GQS scores. Notably, none of the analysed videos achieved full points in all three scoring systems.
    CONCLUSIONS: Relying on a single criterion, such as the video source or number of likes, is not sufficient to determine a video's quality. Therefore, what is learned from videos needs to be double-checked. These platforms should be used as an additional tool, not as the primary source of education.
    Keywords:  Central venous catheters; Continuing medical education; Online education; Vascular access devices; Video sharing networks; YouTube
    DOI:  https://doi.org/10.1186/s12909-024-06330-0
  15. HSS J. 2024 May 26. 15563316241254056
      Background: Younger patients are more likely than older patients to experience shoulder instability and to rely on online educational resources. Although the Internet has increased patient access to medical information, this may not translate to increased health literacy. Purpose: We sought to analyze the quality and readability of online information on shoulder instability. Methods: We conducted a Google search using 6 terms related to shoulder instability. We collected the first 20 non-sponsored results for each term. Readability was evaluated using the Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), and Gunning Fog Index (GFI) instruments. Quality was assessed using a Quality Grading Sheet (QGS) and the validated DISCERN instrument. Results: A total of 64 of 120 patient educational materials (PEMs) met the inclusion criteria. The mean FKGL, FRE, and GFI scores were 9.45 ± 0.552, 50.51 ± 3.4, and 11.5 ± 0.6, respectively. The mean DISCERN score and QGS rating were 33.09 ± 2.02 and 10.52 ± 1.28, respectively. While 49 (76.6%) articles discussed operative treatment for persistent shoulder instability, only 4 (6.3%) mentioned risks associated with surgery. Non-institutional sources had higher DISCERN scores than those from medical institutions. Conclusions: This review of online shoulder instability-related PEMs suggests that many do not meet current recommendations, with an average quality rating of "poor" and a mean ninth-grade reading level. Surgeons should be aware of the relative paucity of information on the risks and outcomes associated with operative treatment of shoulder instability contained in these PEMs.
    Keywords:  health literacy; patient education; shoulder dislocation; shoulder instability
    DOI:  https://doi.org/10.1177/15563316241254056
  16. Plast Surg (Oakv). 2024 Apr 02. 22925503241234936
      Introduction: Popular video-sharing platforms YouTube and TikTok offer a plethora of information on the topic of breast implant illness (BII). As a largely patient-driven phenomenon, it is important to understand the influence of social media on patient knowledge regarding BII. This study sought to evaluate the quality, reliability, visibility, and popularity of YouTube and TikTok videos about BII. Methods: Two validated tools for health information, DISCERN and the Patient Education Materials Assessment Tool (PEMAT), were utilized to evaluate the quality of information regarding the topic of BII on YouTube and TikTok. High DISCERN score indicates content of superior quality and reliability, while elevated PEMAT scores signify content that is easily understandable and actionable for viewers. The search phrase "breast implant illness" was used to screen videos, which were sorted based on relevance and view count. The first 100 videos that fulfilled inclusion criteria were independently graded by three reviewers. Results: TikTok videos of longer duration, a higher number of shares, and in the patient education category were all significantly associated with a higher total DISCERN score (P < 0.05). YouTube videos that included a provider or a patient were significantly more likely to have a higher total DISCERN and PEMAT understandability score (all P < 0.05). Discussion of physician education, operation details, and patient experience was significantly associated with higher total DISCERN and PEMAT understandability scores (all P < 0.05). Conclusions: Total DISCERN and PEMAT scores for videos regarding BII on two popular social media platforms are low. Video length, patient experience categorization, and the presence of a provider are worth considering when developing high-quality online content for breast reconstruction and augmentation patients.
    Keywords:  DISCERN; PEMAT; TikTok; YouTube; breast implant illness; social media
    DOI:  https://doi.org/10.1177/22925503241234936
  17. Int J Med Inform. 2024 Nov 14. pii: S1386-5056(24)00354-X. [Epub ahead of print]194 105691
       INTRODUCTION: In the digital age, electronic health literacy (eHealth literacy) has become crucial for maintaining and improving health outcomes. As the population ages, developing and validating tools that accurately measure eHealth literacy levels among older adults in different cultures is essential.
    OBJECTIVES: This study aimed to validate the Hebrew version of the electronic Health Literacy scale among Israelis aged 65 and older by assessing its psychometric properties, including content validity, construct validity, age-based convergent validity, internal consistency reliability, and test-retest reliability.
    METHODS: A sample of 628 Israelis aged 65 and older was recruited using convenience sampling. Participants completed an online survey consisting of the HeHEALS, demographic questions, items related to participants' use of online health information sources, and measures of self-rated health, satisfaction with health, and perceived health compared to others. Psychometric properties were assessed using various statistical analyses.
    RESULTS: The HeHEALS demonstrated good content validity, construct validity, age-based convergent validity, internal consistency reliability, and test-retest reliability. Exploratory factor analysis supported a unidimensional structure of the HeHEALS. Significant positive correlations were found between HeHEALS and education, income, and subjective health measures. Users of online health information sources had significantly higher electronic health literacy scores than non-users.
    CONCLUSIONS: The HeHEALS is a valid and reliable tool for assessing eHealth literacy among older adults in Israel, with potential applications in research and practice to promote digital health equity.
    Keywords:  Electronic health literacy; Older adults; Psychometric validation; Questionnaire; eHealth
    DOI:  https://doi.org/10.1016/j.ijmedinf.2024.105691
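Internal-consistency reliability of a scale such as the HeHEALS is commonly summarized with Cronbach's alpha. A toy sketch of the standard formula, using made-up item responses (the abstract does not state which coefficient the authors used, so this illustrates the most common choice):

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)),
# where k is the number of items. Rows = respondents, columns = scale items.
from statistics import pvariance

def cronbach_alpha(rows: list[list[float]]) -> float:
    k = len(rows[0])                 # number of items
    items = list(zip(*rows))         # column-wise item scores
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Perfectly consistent items (every respondent answers all items identically)
# yield alpha = 1; inconsistent items pull alpha down.
perfect = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
print(round(cronbach_alpha(perfect), 3))
```
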
  18. Pediatr Dermatol. 2024 Nov-Dec;41(6): 1251-1252
      
    Keywords:  accessibility; artificial intelligence; atopic dermatitis; health literacy; patient education; readability
    DOI:  https://doi.org/10.1111/pde.15674
  19. World Neurosurg. 2024 Nov 20. pii: S1878-8750(24)01914-4. [Epub ahead of print]
       BACKGROUND AND OBJECTIVES: AtlasGPT represents an innovative generative pre-trained transformer (GPT), trained using neurosurgery literature. Its ability to contour its response according to the training level of the user is unique; however, whether its responses can be comprehended at each user's training level remains unknown. This study aimed to analyze the readability of responses provided by AtlasGPT.
    METHODS: Ten queries were presented to AtlasGPT across its four user profiles (i.e., surgeon, resident, medical student, patient). A readability analysis was performed using multiple instruments on Readability Studio. Readability scores of user-specific responses were compared using ANOVA testing and post-hoc pairwise t-tests with Bonferroni correction. P-value < 0.05 was considered to be significant.
    RESULTS: Across the readability instruments that were leveraged, significant differences in reading ease were observed across all user profiles on comparisons to the patient (p < 0.005). Readability scores for the medical student profile tended to show greater reading ease than the surgeon and resident profiles; these differences, however, were not significant. The mean grade levels for patient responses across multiple instruments ranged from 8.8-11.51. Only one output via the New Dale-Chall assessment was written at the level of 5th-6th grade.
    CONCLUSIONS: AtlasGPT-generated content demonstrates readability variations according to the user profile selected; however, the readability of patient content still exceeds recommendations set by United States departmental agencies, necessitating a call to action.
    Keywords:  AtlasGPT; ChatGPT; Education; Health Literacy; Neurosurgery; Readability
    DOI:  https://doi.org/10.1016/j.wneu.2024.11.052
  20. Arch Dermatol Res. 2024 Nov 22. 317(1): 43
      
    Keywords:  Dermatology education; Patient education; Patient-focused; Quality improvement
    DOI:  https://doi.org/10.1007/s00403-024-03518-8
  21. Am J Health Promot. 2024 Nov 20. 8901171241302011
       PURPOSE: We investigate how individuals with Limited English Proficiency (LEP) seek, access, and evaluate traditional and online sources they rely on for health information.
    DESIGN: Retrospective cross-sectional survey analysis from the United States.
    SETTING: Pooled Health Information National Trends Survey surveys (2013-2019).
    SUBJECTS: The sample comprised 15,316 respondents; 236/15,316 (1.54%) completed the survey in Spanish and 1727/14,734 (11.72%) had LEP (did not speak English "very well"). The sample was nationally representative across demographic categories.
    MEASURES: Independent and dependent variables were self-reported using validated measures.
    ANALYSIS: Multivariable logistic regression models using jackknife replicate weights for population estimates.
    RESULTS: Adults with LEP were less confident in their capacity to access health information (aOR = 0.59, CI: 0.47-0.75) and had less trust in health information from medical professionals (aOR = 0.57, CI: 0.46-0.72) than English-proficient (EP) adults. Although LEP and EP adults were both most likely to use the internet as their first source of information, LEP adults were more likely than EP adults to consult health professionals; print sources such as books, news, or brochures; family and friends; and television and radio. Respondents who took the survey in Spanish were more likely than those who took it in English to trust health information from government agencies (aOR = 1.99, CI: 1.09-3.62) and to watch health-related videos on the internet (aOR = 2.51, CI: 1.23-5.12).
    CONCLUSION: Our results show how language barriers may contribute to health disparities experienced by linguistic minorities. Government agencies and health care organizations need to promote health information dissemination in underserved communities and may need to embrace the use of alternative information sources such as television, radio, and the internet to reach LEP populations.
    Keywords:  confidence; health information seeking; internet; limited English proficiency; trust
    DOI:  https://doi.org/10.1177/08901171241302011
  22. Digit Health. 2024 Jan-Dec;10: 20552076241298480
     Introduction: Musculoskeletal pain is a significant public health concern in Europe. With the advent of the digital age, online health information-seeking behaviour has become increasingly important, influencing health outcomes and the ability of individuals to make well-informed decisions regarding their own well-being or that of those they are responsible for. This study protocol outlines an investigation into how individuals in five European countries (Austria, Denmark, Ireland, Italy, and Spain) seek online health information for musculoskeletal pain.
    Methods: The protocol adopts an exploratory and systematic two-phase approach to analyse online health information-seeking behaviour. Phase 1 involves four steps: (1) extraction of an extensive list of keywords using Google Ads Keyword Planner; (2) refinement of the list of keywords by an expert panel; (3) investigation of related topics and queries and their degree of association with keywords using Google Trends; and (4) creation of visual representations (word clouds and simplified network graphs) using R. These visual representations provide insights into how individuals search for online health information for musculoskeletal pain. Phase 2 identifies relevant online sources by conducting platform-specific searches on Google, X, Facebook, and Instagram using the refined list of keywords. These sources are then analysed and categorised with NVivo and R to understand the types of information that individuals encounter.
    Conclusions: This innovative protocol has significant potential to advance the state-of-the-art in digital health literacy and musculoskeletal pain through a comprehensive understanding of online health information-seeking behaviour. The results may enable the development of effective online health resources and interventions.
    Keywords:  Internet; Musculoskeletal pain; digital health; health communication; health literacy; information-seeking behaviour; social media
    DOI:  https://doi.org/10.1177/20552076241298480
  23. Int J Public Health. 2024;69: 1606850
       Objectives: During the COVID-19 pandemic, online health information search has been shown to influence the public's health beliefs, risk attitudes, and vaccination behavior. This study constructs a conditional process model to explore how online health information search impacts public vaccination behavior, considering critical factors like healthcare system satisfaction, vaccine risk perception, and the perceived usefulness of information.
    Methods: Data from the 2021 Chinese General Social Survey (N = 2,547) were analysed. The study utilized logistic regression, path analysis, and the Bootstrap method to test the conditional process model.
    Results: Increased online health information search promotes vaccination behavior, while increased vaccine risk perception hinders vaccination behavior. Higher satisfaction with the healthcare system encourages vaccination behavior, but online health information search reduces healthcare system satisfaction. Satisfaction with the healthcare system and vaccine risk perception play a chain mediating role between online health information search and vaccination behavior. Additionally, the perceived usefulness of information has a negative moderating effect on online health information search and healthcare system satisfaction.
    Conclusion: The research findings provide new insights for health information dissemination and vaccination interventions.
    Keywords:  online health information search; perceived usefulness of information; satisfaction with the healthcare system; vaccination behavior; vaccine risk perception
    DOI:  https://doi.org/10.3389/ijph.2024.1606850
  24. ORL J Otorhinolaryngol Relat Spec. 2024 Nov 18. 1-19
       INTRODUCTION: The most pressing questions patients ask about facial fillers and the sources to which patients are directed remain incompletely understood.
    METHODS: The search engine optimization tool Ahrefs was utilized to extract Google metadata on searches performed in the United States. The most frequently asked questions were categorized by topic, while websites were categorized by authoring organization. JAMA benchmark criteria were used for website information quality assessment.
    RESULTS: A total of 300 questions for the term "fillers" were extracted. The largest share of search queries (24.0%) and of monthly search volume (39.3%) pertained to procedural costs. The mean JAMA score for private practice sources (1.1 ± 0.57) was significantly lower than that of corporate sources (2.6 ± 0.55, p = 0.0003) but not significantly lower than that of academic pages (1.6 ± 1.34, p = 0.483). With respect to monthly search volume, queries concerning lip fillers have increased at a rate exceeding that of other injection sites.
    CONCLUSION: Online searches for facial fillers often involve the topic of cost, and frequently direct patients to websites that contain inadequate information on authorship, attribution, disclosure and currency. When compared to other anatomic sites, search queries involving lip fillers have increased over the last three years.
    DOI:  https://doi.org/10.1159/000541497