bims-librar Biomed News
on Biomedical librarianship
Issue of 2023‒08‒27
nineteen papers selected by
Thomas Krichel, Open Library Society



  1. Epidemiol Serv Saude. 2023 Aug 21. pii: S2237-96222023000200100. [Epub ahead of print] 32(2): e2022433
      
    DOI:  https://doi.org/10.1590/S2237-96222023000200001
  2. Health Info Libr J. 2023 Sep;40(3): 231-232
      In the first of two special collections of COVID-19-related manuscripts, this issue focuses on how college and university libraries and their users responded to the need for health information during the pandemic.
    Keywords:  health information needs; higher education; library outreach; pandemic; review; social media; surveys; systematic
    DOI:  https://doi.org/10.1111/hir.12501
  3. Health Info Libr J. 2023 Aug 22.
      BACKGROUND: Health professionals require up-to-date information in their pursuit of evidence-based practice in health care. There is a plethora of literature on the information behaviour of different user groups across disciplines in Malawi, but little evidence on health professionals.
    OBJECTIVES: The study investigated the information behaviour of health professionals at one of the three biggest hospitals in Malawi.
    METHODS: A descriptive survey design was used. Ninety-four health professionals participated in the study. SPSS was used for descriptive analysis to generate frequencies and percentages.
    RESULTS: Personal and professional development constituted the major information need among all the health professionals. Health professionals used books and colleagues as sources of information, but many preferred to use websites rather than print resources. The challenges that affected their information behaviour included inadequate information resources, limited access to the internet and e-databases, and a lack of information literacy skills.
    DISCUSSION: The study revealed various information needs of health professionals and their preferred information sources. Health professionals need adequate library and information services with both print and digital resources and support from information professionals. Nevertheless, use of the hospital library was very low among health professionals.
    CONCLUSION: Health professionals at MCH continue to face various challenges that hinder access and efficient use of information resources.
    Keywords:  Africa south; developing economies; health information needs; health professionals; information seeking behaviour; information sources
    DOI:  https://doi.org/10.1111/hir.12507
  4. Health Info Libr J. 2023 Aug 22.
      BACKGROUND: Gloucestershire Hospitals NHS Foundation Trust (GHNHSFT) is actively participating in the Magnet4Europe® research study, which aims to advance nursing excellence and promote evidence-based practice.
    OBJECTIVES: As part of this initiative, the Nursing, Allied Health Professional and Midwifery Research Council at GHNHSFT has been actively engaging colleagues in evidence-based practice and research.
    METHODS: This has been achieved through the development of sessions using the Critically Appraised Topics (CATs) framework, where clinical questions and relevant research articles are discussed.
    RESULTS AND DISCUSSION: This article describes the collaborative approach between the Lead Nurse for Continual Professional Development and the Deputy Manager of the Library and Knowledge Services to develop and run the sessions.
    CONCLUSION: Collaboration between clinical staff and library and knowledge teams can be useful in encouraging healthcare professionals' engagement with the evidence base in order to consider changes to practice.
    Keywords:  collaboration; critical appraisal; evidence-based nursing; information literacy
    DOI:  https://doi.org/10.1111/hir.12504
  5. Otolaryngol Head Neck Surg. 2023 Aug 25.
      OBJECTIVE: To quantitatively compare online patient education materials found using traditional search engines (Google) versus conversational Artificial Intelligence (AI) models (ChatGPT) for benign paroxysmal positional vertigo (BPPV).
    STUDY DESIGN: The top 30 Google search results for "benign paroxysmal positional vertigo" were compared to responses from the OpenAI conversational AI language model, ChatGPT, to 5 common patient questions about BPPV in February 2023. Metrics included readability, quality, understandability, and actionability.
    SETTING: Online information.
    METHODS: Validated online information metrics including Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease (FRE), DISCERN instrument score, and Patient Education Materials Assessment Tool for Printed Materials were analyzed and scored by reviewers.
    RESULTS: Mean readability scores, FKGL and FRE, for the Google webpages were 10.7 ± 2.6 and 46.5 ± 14.3, respectively. ChatGPT responses had a higher FKGL score of 13.9 ± 2.5 (P < .001) and a lower FRE score of 34.9 ± 11.2 (P = .005), both corresponding to lower readability. The Google webpages had a DISCERN part 2 score of 25.4 ± 7.5 compared to the individual ChatGPT responses with a score of 17.5 ± 3.9 (P = .001), and the combined ChatGPT responses with a score of 25.0 ± 0.9 (P = .928). The average scores of the reviewers for all ChatGPT responses for accuracy were 4.19 ± 0.82 and 4.31 ± 0.67 for currency.
    CONCLUSION: The results of this study suggest that the information on ChatGPT is more difficult to read, of lower quality, and more difficult to comprehend compared to information on Google searches.
    Keywords:  ChatGPT; Google; artificial intelligence; benign paroxysmal positional vertigo; online information; quality; readability; understandability
    DOI:  https://doi.org/10.1002/ohn.506
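    A note on the readability metrics reported in the study above (and in several other entries this issue): Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) are simple functions of average sentence length and syllables per word. The sketch below shows the standard formulas in Python; the syllable counter is a deliberately crude heuristic for illustration, not what validated readability tools use, and the sample sentence is invented.
      import re

      def count_syllables(word):
          # Crude heuristic: count groups of consecutive vowels; real tools use
          # dictionaries or more elaborate rules, so treat this as illustrative.
          groups = re.findall(r"[aeiouy]+", word.lower())
          return max(1, len(groups))

      def readability(text):
          sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
          words = re.findall(r"[A-Za-z']+", text)
          syllables = sum(count_syllables(w) for w in words)
          wps = len(words) / len(sentences)          # average words per sentence
          spw = syllables / len(words)               # average syllables per word
          fre = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease
          fkgl = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid Grade Level
          return fre, fkgl

      fre, fkgl = readability("Benign paroxysmal positional vertigo causes brief episodes of dizziness. "
                              "It is triggered by changes in head position.")
      print(f"FRE: {fre:.1f}, FKGL: {fkgl:.1f}")
    Higher FRE values indicate easier text, while higher FKGL values correspond to a higher US school grade, which is why the higher FKGL and lower FRE reported for ChatGPT above translate into lower readability.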
  6. ANZ J Surg. 2023 Aug 21.
      BACKGROUND: The COVID-19 pandemic has significantly disrupted the clinical experience and exposure of medical students and junior doctors. Artificial Intelligence (AI) integration in medical education has the potential to enhance learning and improve patient care. This study aimed to evaluate the effectiveness of three popular large language models (LLMs) in serving as clinical decision-making support tools for junior doctors.
    METHODS: A series of increasingly complex clinical scenarios were presented to ChatGPT, Google's Bard and Bing's AI. Their responses were evaluated against standard guidelines, and for readability and reliability by the Flesch Reading Ease Score, the Flesch-Kincaid Grade Level, the Coleman-Liau Index, and the modified DISCERN score for assessing suitability. Lastly, the LLMs' outputs were assessed for accuracy, informativeness, and accessibility by three experienced specialists using a Likert scale.
    RESULTS: In terms of readability and reliability, ChatGPT stood out among the three LLMs, recording the highest scores in Flesch Reading Ease (31.2 ± 3.5), Flesch-Kincaid Grade Level (13.5 ± 0.7), Coleman-Liau Index (13) and DISCERN (62 ± 4.4). These results suggest statistically significant superior comprehensibility and alignment with clinical guidelines in the medical advice given by ChatGPT. Bard followed closely behind, with BingAI trailing in all categories. The only non-significant statistical differences (P > 0.05) were found between ChatGPT and Bard's readability indices, and between the Flesch Reading Ease scores of ChatGPT/Bard and BingAI.
    CONCLUSION: This study demonstrates the potential utility of LLMs in fostering self-directed and personalized learning, as well as bolstering clinical decision-making support for junior doctors. However, further development is needed before they can be integrated into education.
    Keywords:  ChatGPT; artificial intelligence; junior doctor; large language model; surgical education
    DOI:  https://doi.org/10.1111/ans.18666
  7. JAMA Oncol. 2023 Aug 24.
      Importance: Consumers are increasingly using artificial intelligence (AI) chatbots as a source of information. However, the quality of the cancer information generated by these chatbots has not yet been evaluated using validated instruments.
    Objective: To characterize the quality of information and presence of misinformation about skin, lung, breast, colorectal, and prostate cancers generated by 4 AI chatbots.
    Design, Setting, and Participants: This cross-sectional study assessed AI chatbots' text responses to the 5 most commonly searched queries related to the 5 most common cancers using validated instruments. Search data were extracted from the publicly available Google Trends platform and identical prompts were used to generate responses from 4 AI chatbots: ChatGPT version 3.5 (OpenAI), Perplexity (Perplexity.AI), Chatsonic (Writesonic), and Bing AI (Microsoft).
    Exposures: Google Trends' top 5 search queries related to skin, lung, breast, colorectal, and prostate cancer from January 1, 2021, to January 1, 2023, were input into 4 AI chatbots.
    Main Outcomes and Measures: The primary outcomes were the quality of consumer health information based on the validated DISCERN instrument (scores from 1 [low] to 5 [high] for quality of information) and the understandability and actionability of this information based on the understandability and actionability domains of the Patient Education Materials Assessment Tool (PEMAT) (scores of 0%-100%, with higher scores indicating a higher level of understandability and actionability). Secondary outcomes included misinformation scored using a 5-item Likert scale (scores from 1 [no misinformation] to 5 [high misinformation]) and readability assessed using the Flesch-Kincaid Grade Level readability score.
    Results: The analysis included 100 responses from 4 chatbots about the 5 most common search queries for skin, lung, breast, colorectal, and prostate cancer. The quality of text responses generated by the 4 AI chatbots was good (median [range] DISCERN score, 5 [2-5]) and no misinformation was identified. Understandability was moderate (median [range] PEMAT Understandability score, 66.7% [33.3%-90.1%]), and actionability was poor (median [range] PEMAT Actionability score, 20.0% [0%-40.0%]). The responses were written at the college level based on the Flesch-Kincaid Grade Level score.
    Conclusions and Relevance: Findings of this cross-sectional study suggest that AI chatbots generally produce accurate information for the top cancer-related search queries, but the responses are not readily actionable and are written at a college reading level. These limitations suggest that AI chatbots should be used supplementarily and not as a primary source for medical information.
    DOI:  https://doi.org/10.1001/jamaoncol.2023.2947
  8. Cardiol Young. 2023 Jul;33(7): 1079-1085
      PURPOSE: Publicly available health information is increasingly important for patients and their families. While the average US citizen reads at an 8th-grade level, electronic educational materials for patients and families are often written at a more advanced level. We assessed the quality and readability of publicly available resources regarding hypoplastic left heart syndrome (HLHS).
    METHODS: We queried four search engines for "hypoplastic left heart syndrome", "HLHS", and "hypoplastic left ventricle". The top 30 websites from searches on Google, Yahoo!, Bing, and Dogpile were combined into a single list. Duplicates, commercial websites, physician-oriented resources, disability websites, and broken links were removed. Websites were graded for accountability, content, interactivity, and structure using a two-reviewer system. Nonparametric analysis of variance was performed.
    RESULTS: Fifty-two websites were analysed. Inter-rater agreement was high (Kappa = 0.874). Website types included 35 hospital/healthcare organisation (67.3%), 12 open access (23.1%), 4 governmental agency (7.7%), and 1 professional medical society (1.9%). Median total score was 19 of 39 (interquartile range = 15.8-25.3): accountability 5.5 of 17 (interquartile range = 2.0-9.3), content 8 of 12 (interquartile range = 6.4-10.0), interactivity 2 of 6 (interquartile range = 2.0-3.0), and structure 3 of 4 (interquartile range = 2.8-4.0). Accountability was low with 32.7% (n = 17) of sites disclosing authorship and 26.9% (n = 14) citing sources. Forty-two percent (n = 22) of websites were available in Spanish. Total score varied by website type (p = 0.03), with open access sites scoring highest (median = 26.5; interquartile range = 20.5-28.6) and hospital/healthcare organisation websites scoring lowest (median = 17.5; interquartile range = 13.5-21.5). Score differences were driven by differences in accountability (p = 0.001) - content scores were similar between groups (p = 0.25). Overall readability was low, with median Flesch-Kincaid Grade Level of 11th grade (interquartile range = 10th-12th grade).
    CONCLUSIONS: Our evaluation of popular websites about HLHS identifies multiple opportunities for improvement, including increasing accountability by disclosing authorship and citing sources, enhancing readability by providing material that is understandable to readers across the full spectrum of educational backgrounds, and providing information in languages other than English, all of which would enhance health equity.
    Keywords:  HLHS; Patient education; congenital heart disease; family education; hypoplastic left heart syndrome
    DOI:  https://doi.org/10.1017/S1047951123001294
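    The two-reviewer grading in the study above is summarised with a kappa statistic. For readers who want to see how such an agreement figure is obtained, here is a minimal sketch of Cohen's kappa for two raters assigning categorical grades; the grades are invented for illustration and are not the study's data.
      from collections import Counter

      def cohens_kappa(rater_a, rater_b):
          # kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
          # and p_e is the agreement expected by chance from each rater's marginals.
          assert len(rater_a) == len(rater_b)
          n = len(rater_a)
          p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
          freq_a, freq_b = Counter(rater_a), Counter(rater_b)
          labels = set(rater_a) | set(rater_b)
          p_e = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
          return (p_o - p_e) / (1 - p_e)

      # Hypothetical per-website grades ("low"/"med"/"high") from two reviewers.
      a = ["high", "med", "med", "low", "high", "med", "low", "low"]
      b = ["high", "med", "low", "low", "high", "med", "low", "med"]
      print(round(cohens_kappa(a, b), 3))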
  9. Clin Shoulder Elb. 2023 Aug 22.
      Background: Many patients use online resources to educate themselves on surgical procedures and make well-informed healthcare decisions. The aim of our study was to evaluate the quality and readability of online resources exploring shoulder arthroplasty.
    Methods: An internet search pertaining to shoulder arthroplasty (partial, anatomic, and reverse) was conducted using the three most popular online search engines. The top 25 results generated from each term in each search engine were included. Webpages were excluded if they were duplicates, advertised by search engines, subpages of other pages, required payments or subscription, or were irrelevant to our scope. Webpages were classified into different source categories. Quality of information was assessed by HONcode certification, Journal of the American Medical Association (JAMA) criteria, and DISCERN benchmark criteria. Webpage readability was assessed using the Flesch reading ease score (FRES).
    Results: Our final dataset included 125 web pages. Academic sources were the most common with 45 web pages (36.0%) followed by physician/private practice with 39 web pages (31.2%). The mean JAMA and DISCERN scores for all web pages were 1.96±1.31 and 51.4±10.7, respectively. The total mean FRES score was 44.0±11.0. Only nine web pages (7.2%) were HONcode certified. Websites specified for healthcare professionals had the highest JAMA and DISCERN scores with means of 2.92±0.90 and 57.96±8.91, respectively (P<0.001). HONcode-certified webpages had higher quality and readability scores than other web pages.
    Conclusions: Web-based patient resources for shoulder arthroplasty information did not show high quality scores or easy readability. When presenting medical information, sources should maintain a balance between readability and quality and should seek HONcode certification, as it helps establish the reliability and accessibility of the presented information. Level of evidence: IV.
    Keywords:  HONcode; Partial shoulder; Quality of online resource; Reverse shoulder; Shoulder replacement; patient education
    DOI:  https://doi.org/10.5397/cise.2023.00290
  10. J Laryngol Otol. 2023 Mar 08. 1-6
      OBJECTIVE: Complications of parotidectomy can have a massive impact on patients' quality of life. This study aimed to evaluate the readability and quality of online health information on parotidectomy.
    METHOD: The search terms 'parotidectomy', 'parotid surgery', 'parotidectomy patient information' and 'parotid surgery patient information' were parsed through three popular search engines.
    RESULTS: The websites were analysed using the readability scores of the Flesch Reading Ease test and the Gunning Fog Index, and the DISCERN instrument was used to assess quality and reliability. The average Flesch Reading Ease score was 50.2 ± 9.0, indicating that the materials were fairly difficult to read. The Gunning Fog Index score showed that the patient health information was suitable for an individual above 12th-grade level, and the DISCERN score indicated that the online patient health information was of fair quality. The Kruskal-Wallis test showed a significant difference in Flesch Reading Ease and DISCERN tool scores according to website category (p < 0.05).
    CONCLUSION: Current online patient health information on parotidectomy is too difficult for the public to understand, and it exceeds the reading levels recommended by Health Education England and the American Medical Association.
    Keywords:  Decision making; comprehension; health literacy; informed consent; vocabulary
    DOI:  https://doi.org/10.1017/S0022215123000336
  11. Eur Arch Paediatr Dent. 2023 Aug 23.
      PURPOSE: To assess the coverage of information about early childhood caries (ECC) in YouTube videos in three different languages, considering the technical characteristics of the videos and their interaction metrics.
    METHODS: Search strategies were developed in English, Spanish, and Portuguese to build a comprehensive collection of YouTube videos, comprising 60 videos per language across all video types. The videos were assessed with a thematic checklist covering 17 items on ECC. Videos were dichotomized according to the median of the thematic score and the nature of their authorship (health and non-health authors) to compare groups. The statistical analysis was performed using the Statistical Package for the Social Sciences (version 25.0), applying Spearman's rank correlation coefficient and the Mann-Whitney U test. P < 0.05 values were considered significant.
    RESULTS: Among 120 videos meeting inclusion criteria, ECC aetiology and prevention information proved incomplete, with a median score of 5 (Q1-Q3 = 3-7). No correlation emerged between this score and other video characteristics. However, interaction metrics like views, likes, dislikes, and viewing rates displayed significant correlations. Health authors primarily created these videos, yet non-health author channels had more subscribers. Surprisingly, videos focused on the impact of regular sugary food and beverage consumption on ECC progression received the most attention.
    CONCLUSIONS: Videos that presented information about the aetiology and prevention of ECC invariably focused on partial aspects of the disease. This highlights the need for better-quality educational videos and the importance of dental professionals in guiding patients toward reliable sources of information.
    Keywords:  Dental caries; Early childhood caries; Primary tooth; Social media; YouTube
    DOI:  https://doi.org/10.1007/s40368-023-00830-1
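    The analysis above combines Spearman's rank correlation with Mann-Whitney U comparisons between author groups, run in SPSS. An equivalent workflow can be sketched in a few lines of Python with SciPy; the video metrics below are randomly generated stand-ins, not the study's data.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Hypothetical data: thematic scores (0-17) and view counts for 60 videos.
      thematic = rng.integers(0, 18, size=60)
      views = rng.integers(100, 1_000_000, size=60)

      # Spearman's rank correlation between thematic score and views.
      rho, p_corr = stats.spearmanr(thematic, views)
      print(f"Spearman rho={rho:.2f}, p={p_corr:.3f}")

      # Compare thematic scores between health and non-health authors
      # (here simply the first and second half of the sample).
      health, non_health = thematic[:30], thematic[30:]
      u, p_mwu = stats.mannwhitneyu(health, non_health, alternative="two-sided")
      print(f"Mann-Whitney U={u:.1f}, p={p_mwu:.3f}")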
  12. J Cancer Educ. 2023 Aug 22.
      Pancreatic cancer is one of the most lethal diseases worldwide and incidence continues to rise, resulting in increased deaths each year. In the modern era, patients often turn to online sources like YouTube for information regarding their disease, which may be subject to a high degree of bias and misinformation; previous analyses have demonstrated low quality of other cancer-related YouTube videos. Thus, we sought to determine if patients can rely on educational YouTube videos for accurate and comprehensive information about pancreatic cancer diagnosis and treatment. We designed a search query and inclusion/exclusion criteria based on published studies evaluating YouTube user tendencies, which were used to identify videos most likely watched by patients. Videos were evaluated based on two well-known criteria, the DISCERN and JAMA tools, as well as a tool published by Sahin et al. to evaluate the comprehensiveness of YouTube videos. Statistical analyses were performed using Chi-square analysis to compare categorical variables. We used linear regression to assess for correlations between quantitative variables. Kruskal-Wallis and independent samples t-tests were used to compare means between groups. We assessed inter-rater reliability using Cronbach's alpha. After the initial search query, 39 videos were retrieved that met inclusion criteria. The comprehensiveness and quality of these materials were generally low to moderate, with only 7 videos being considered comprehensive. Pearson's R demonstrated strong correlations between video length and both comprehensiveness and quality. Higher-quality videos also tended to be newer. YouTube videos regarding pancreatic cancer are generally of low to moderate quality and lack comprehensiveness, which could affect patients' perceptions of their disease or understanding of treatment options. These videos, which have collectively been viewed over 6 million times, should be subject to some form of expert review before upload, and producers of this content should consider citing the sources used in the video.
    Keywords:  Misinformation; Oncology; Pancreatic cancer; Patient information; Social media; YouTube
    DOI:  https://doi.org/10.1007/s13187-023-02355-z
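    The study above reports inter-rater reliability as Cronbach's alpha. As a reminder of how that coefficient is computed from a cases-by-raters score matrix, here is a small sketch with invented ratings in place of the authors' data.
      import numpy as np

      def cronbach_alpha(scores):
          # scores: rows = videos (cases), columns = raters (items).
          # alpha = k/(k-1) * (1 - sum of per-rater variances / variance of total score)
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]
          item_var = scores.var(axis=0, ddof=1).sum()
          total_var = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      # Hypothetical DISCERN totals given to 6 videos by 3 raters.
      ratings = [[42, 45, 40],
                 [30, 28, 33],
                 [55, 52, 50],
                 [38, 40, 37],
                 [25, 27, 24],
                 [48, 50, 47]]
      print(round(cronbach_alpha(ratings), 3))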
  13. Turk J Gastroenterol. 2023 Aug 21.
      BACKGROUND/AIMS: The aim of this study is to evaluate the educational value of videos on the robotic right hemicolectomy procedure published on YouTube, an open-source video-sharing platform.
    MATERIALS AND METHODS: We searched the YouTube website to select video clips that included information about robotic right hemicolectomy for right colon cancer. All videos were analyzed according to criteria such as video quality, teaching quality, and the modified Laparoscopic Surgery Video Educational Guidelines.
    RESULTS: There were 16 complete mesocolic excision and 56 noncomplete mesocolic excision videos in the study. According to the Likert scale, the complete mesocolic excision videos scored better than the noncomplete mesocolic excision group, and this difference was statistically significant (P < .0001). The teaching quality scores of the complete mesocolic excision videos were higher than those of the noncomplete mesocolic excision group, and this result was statistically significant (P = .02). The videos were scored according to the modified Laparoscopic Surgery Video Educational Guideline, and the score difference was statistically significant between complete mesocolic excision and noncomplete mesocolic excision videos (P < .001). The video power index was higher in the complete mesocolic excision group (mean 5.52 ± 15.56 vs. mean 1.66 ± 3.41), but there was no statistically significant difference between the 2 groups (P = .086).
    CONCLUSIONS: Most of the robotic right hemicolectomy videos on the YouTube platform are insufficient in terms of educational capacity. Videos containing complete mesocolic excision are slightly superior in this respect to noncomplete mesocolic excision videos, perhaps because presenting a newer technique makes video presenters more attentive. In our opinion, if videos posted to these platforms are to be used for educational purposes, they must undergo an evaluation and screening process.
    DOI:  https://doi.org/10.5152/tjg.2023.22827
  14. Cureus. 2023 Aug;15(8): e43881
      Objective: YouTube (YouTube LLC, San Bruno, California, United States) is used as a primary resource for many patients looking to gain healthcare knowledge. Recently, YouTube made efforts to increase the quality of posted content by accrediting trusted healthcare sources. With an increasing emphasis being placed on minimally invasive options, this study was done to investigate the quality of YouTube videos on MitraClip™ (Abbott Laboratories, Chicago, Illinois, United States) with respect to patient education.
    Methods: YouTube was searched using the keyword "MitraClip". A total of 66 videos were evaluated, with 32 of those videos being included for final analysis after applying exclusionary criteria. Three independent reviewers separately scored the videos using the Global Quality Scale. Likes, dislikes, views, comments, and dates of upload were also recorded. Two-tailed t-tests were used to determine statistical significance.
    Results: MitraClip videos on YouTube proved to be of medium quality, receiving an average Global Quality Scale score of 3.39. When stratified by the new YouTube accreditation process, those with accreditation had a significantly higher Global Quality Scale score of 4.11, while non-accredited videos had an average Global Quality Scale score of 3.12 (p<0.01). Shorter and more patient-friendly videos were also significantly lower in quality (p<0.05).
    Conclusion: The YouTube accreditation process has demonstrated initial success at regulating the quality of MitraClip content, thereby reducing the spread of misinformation. However, this progress is undermined by the lack of unique videos present on the platform. Increasing the amount of original content about MitraClip may allow viewers to diversify their educational sources and ultimately gain a better understanding of the procedure.
    Keywords:  accreditation; cardiology; mitraclip; patient education; quality; social media; youtube
    DOI:  https://doi.org/10.7759/cureus.43881
  15. Sci Rep. 2023 Aug 21. 13(1): 13579
      More and more people use the internet, and especially YouTube, for medical information. Nevertheless, no study has been conducted to analyze the quality of YouTube videos about tinnitus in Korea. This study aims to review the contents and quality of YouTube videos on tinnitus. The top 100 Korean YouTube videos on tinnitus were reviewed by a tinnitus expert. This study assessed video details: title, creator, length, and popularity indicators (subscribers, views, and likes). The contents of the video clips were analyzed to determine the relevance, understandability, actionability, and quality of information. Out of 100 tinnitus videos, 27 were created by otolaryngologists, 25 by traditional Korean medicine doctors, 25 by other medical professionals, and 3 by lay persons. Sensorineural tinnitus was frequently discussed, and hearing loss, stress, and noise were introduced as the main causes of tinnitus. Otolaryngologists' videos covered verified treatments, but others suggested unproven therapies including herbal medicine or acupressure. Otolaryngologists' videos showed significantly higher understandability and quality of information compared to others (p < 0.001). This study found that tinnitus YouTube videos frequently present low-quality and incorrect material, which could have an adverse effect on patients. The results highlight the need for tinnitus specialists to provide accurate information.
    DOI:  https://doi.org/10.1038/s41598-023-40523-9
  16. Int J Dent Hyg. 2023 Aug 25.
      OBJECTIVES: Tooth sensitivity is a prevalent dental issue today, and an increasing amount of information concerning the subject is available to patients via social media. This study aimed to examine what patients may learn about tooth sensitivity from online videos on YouTube™ and to evaluate the accuracy of the information given.
    METHODS: In this cross-sectional investigation, two experienced periodontologists used the keyword 'tooth sensitivity' to conduct a structured search of YouTube videos containing information on dentin hypersensitivity. Each video's type, origin, number of days since upload, duration, number of views, likes and dislikes, and comments were noted; the viewing rate and interaction index were calculated. Videos were graded based on their content. The DISCERN and Global Quality Scales were used to rate each video's quality and reliability.
    RESULTS: After the initial 260 videos were examined, 199 were kept for additional study. Healthcare professionals, hospitals, and colleges posted the great majority of the videos. There was a significant positive relationship between the number of views and Total Content scores of the videos, the viewing rate, comments, and likes (p < 0.05). Significant relationships were obtained between total discernment, video type, source of upload, and global quality variables, and Total Content scores (p < 0.05).
    CONCLUSIONS: When looking for information on dentin hypersensitivity, patients might find watchable, reliable, and helpful videos on YouTube™.
    Keywords:  YouTube; dentin hypersensitivity; e-health; tooth sensitivity
    DOI:  https://doi.org/10.1111/idh.12723
  17. Health Informatics J. 2023 Jul-Sep;29(3): 14604582231198022
      This study assesses the quality of the health information in Arabic YouTube videos regarding herbal cancer treatment. It also provides an overview of how the quality of video content shapes user awareness by assessing users' engagement indicators. A simple Python tool was developed using YouTube API V3 to automate the YouTube search based on the recommendations of Google Trends. After applying inclusion and exclusion criteria, 110 YouTube videos were selected, of which 95% were uploaded by non-experts and had a total of 8,633,569 views. The analyzed videos presented more than 40 different herbs as sources of cancer treatment, for example, ephedra, garden cress, green tea, ginseng, rosemary, and thyme. 32.7% of the videos provided information about a single herb, 41% about herbal mixtures, and 26.3% provided testimonials and success stories without pointing to specific herbs. The videos were assessed by two experts using two reliable tools, DISCERN and PEMAT, which were produced mainly for assessing health information quality: DISCERN evaluates the reliability and quality of health information, while PEMAT assesses understandability and actionability. The qualitative and quantitative analyses of the videos revealed bias and poor health information quality, with a total score of 27 out of 80 for DISCERN and 31 out of 100 for PEMAT. The results also showed weak user awareness regarding the content of the videos, with no association between user engagement indicators (likes, dislikes, VPI, views, comments) and the dimensions of the two tools. The study concludes that YouTube, in its current form, is an inadequate Arabic source of herbal cancer treatment information. To overcome this, the study proposes the GAP framework for social media, which integrates Governance, Awareness, and Proficiency.
    Keywords:  Content analysis; cancer informatics; herbal treatment; social media; youtube
    DOI:  https://doi.org/10.1177/14604582231198022
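    The abstract above mentions a simple Python tool built on the YouTube Data API v3 to automate the search. The authors' code is not included with the abstract, but an automated search and statistics pull of the kind described might look roughly like the sketch below, using the google-api-python-client package; the API key, query string, and result handling are placeholders, not the study's implementation.
      from googleapiclient.discovery import build

      API_KEY = "YOUR_API_KEY"  # placeholder; obtain a key from the Google Cloud console
      youtube = build("youtube", "v3", developerKey=API_KEY)

      # Search for Arabic-language videos about herbal cancer treatment
      # (the actual query terms would come from Google Trends).
      search = youtube.search().list(
          part="snippet",
          q="علاج السرطان بالأعشاب",
          type="video",
          relevanceLanguage="ar",
          maxResults=50,
      ).execute()

      video_ids = [item["id"]["videoId"] for item in search["items"]]

      # Fetch engagement statistics (views, likes, comments) for the hits.
      stats = youtube.videos().list(part="statistics,snippet", id=",".join(video_ids)).execute()
      for video in stats["items"]:
          print(video["snippet"]["title"], video["statistics"].get("viewCount"))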
  18. JMIR Dermatol. 2023 Aug 09. 6: e48140
      
    Keywords:  Instagram; Instagram Reels; TikTok; YouTube; YouTube Shorts; acne; acne treatment; dermatologist; dermatology; general dermatology; health information; medical dermatology; online information; patient education; skin; social media; video
    DOI:  https://doi.org/10.2196/48140