bims-librar Biomed News
on Biomedical librarianship
Issue of 2024-09-08
nineteen papers selected by
Thomas Krichel, Open Library Society



  1. J Escience Librariansh. 2023; 12(3): e754. [Epub ahead of print]
      Committee work is a requisite job function for many in academia, yet designing a productive collaborative experience often remains a challenge. In this article, we reflect on our experiences as part of a successful cross-institutional working group and describe strategies to improve leadership structure, group dynamics, accountability, and incentives for collaborative projects. As of January 2023, the National Institutes of Health (NIH) Data Management & Sharing (DMS) Policy requires investigators applying for funding to submit a Data Management and Sharing Plan (DMS Plan) that describes how scientific data will be managed, preserved, and shared. In response to this new policy, a community of more than 30 librarians and other research data professionals convened the Working Group on NIH DMSP Guidance, collaboratively producing comprehensive guidance about the policy for researchers and research support staff. In less than a year, the working group produced glossaries of NIH and data management jargon, an example data management and sharing plan, a directory of existing example plans, checklists for researchers and librarians, and an interactive repository finder. This group was a successful grassroots effort by contributors with diverse expertise and backgrounds. We discuss practical strategies for each stage of activity throughout the lifecycle of the working group: recruiting members, designing pathways that encourage participation from busy professionals, structuring meetings to facilitate progress and productivity, and disseminating final products broadly. We invite fellow librarians, data professionals, and academics to apply and build upon these strategies to tackle cross-institutional challenges.
    DOI:  https://doi.org/10.7191/jeslib.754
  2. BMC Med Inform Decis Mak. 2024 Sep 02. 24(1): 243
       BACKGROUND: Data quality in health information systems has a complex structure and consists of several dimensions. This research was conducted to identify common data quality elements for health information systems.
    METHODS: A literature review was conducted, and search strategies were run in Web of Knowledge, Science Direct, Emerald, PubMed, and Scopus, with the Google Scholar search engine as an additional source for tracing references. We found 760 papers and excluded 314 duplicates, 339 on abstract review, and 167 on full-text review, leaving 58 papers for critical appraisal.
    RESULTS: The current review showed that 14 criteria are categorized as the main dimensions of data quality for health information systems: Accuracy, Consistency, Security, Timeliness, Completeness, Reliability, Accessibility, Objectivity, Relevancy, Understandability, Navigation, Reputation, Efficiency, and Value-added. Accuracy, Completeness, and Timeliness were the three most-used dimensions in the literature.
    CONCLUSIONS: At present, there is a lack of uniformity and potential applicability in the dimensions employed to evaluate the data quality of health information systems. Different approaches (qualitative, quantitative, and mixed methods) were typically used to evaluate data quality for health information systems in the publications reviewed. Consequently, owing to the inconsistency in defining dimensions and assessment methods, it became imperative to categorize the dimensions of data quality into a limited set of primary dimensions.
    Keywords:  Data quality; Health Information System; Systematic review
    DOI:  https://doi.org/10.1186/s12911-024-02644-7
  3. Stud Health Technol Inform. 2024 Aug 30. 317: 210-217
       INTRODUCTION: Human and veterinary medicine are practiced separately, but literature databases such as PubMed include articles from both fields. This impedes supporting clinical decisions with automated information retrieval, because treatment considerations cannot ignore the discipline from which retrieved sources originate. Here we investigate data-driven methods from computational linguistics for automatically distinguishing between human and veterinary medical texts.
    METHODS: For our experiments, we selected language models after a literature review of benchmark datasets and reported performances. We generated a dataset of around 48,000 samples for binary text classification, specifically designed to differentiate between human medical and veterinary subjects. Using this dataset, we trained and fine-tuned classifiers based on selected transformer-based models as well as support vector machines (SVMs); a sketch of such a baseline follows this entry.
    RESULTS: All trained classifiers achieved more than 99% accuracy, even though the transformer-based classifiers moderately outperformed the SVM-based one.
    DISCUSSION: Such classifiers could be applicable in clinical decision support functions that build on automated information retrieval.
    Keywords:  Human Medicine; Support Vector Machine; Text Classification; Transformers; Veterinary Medicine
    DOI:  https://doi.org/10.3233/SHTI240858
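    A minimal sketch of the SVM baseline named in this entry, assuming a TF-IDF bag-of-words representation and scikit-learn; the toy corpus and labels below are illustrative placeholders, not the authors' ~48,000-sample dataset, and the transformer fine-tuning step is omitted.

        # Hypothetical TF-IDF + linear-SVM classifier separating human-medical
        # from veterinary-medical text, standing in for the SVM baseline above.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        texts = [
            "the patient presented with chest pain and shortness of breath",
            "coronary angiography revealed stenosis of the left anterior artery",
            "insulin therapy was adjusted after recurrent hypoglycaemic episodes",
            "the patient reported chronic lower back pain radiating to the leg",
            "the canine patient presented with lameness of the left forelimb",
            "bovine respiratory disease was confirmed in three feedlot calves",
            "the cat was treated for chronic renal insufficiency with fluids",
            "equine colic was managed conservatively with analgesia and fluids",
        ]
        labels = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = human medicine, 1 = veterinary

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
        clf.fit(texts, labels)
        # With a realistic corpus, a held-out split would report accuracy;
        # the paper reports >99% for both SVM and transformer classifiers.
        print(clf.predict(["the dog was examined for persistent coughing"]))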
  4. BMC Womens Health. 2024 Sep 02. 24(1): 482
       BACKGROUND: Cervical cancer (CC) and breast cancer (BC) threaten women's well-being, influenced by health-related stigma and a lack of reliable information, which can cause late diagnosis and early death. ChatGPT is likely to become a key source of health information, although quality concerns could also influence health-seeking behaviours.
    METHODS: This cross-sectional online survey compared ChatGPT's responses with those of five physicians specializing in mammography and five specializing in gynaecology. Twenty frequently asked questions about CC and BC were asked on April 26 and 29, 2023. A panel of seven experts assessed the accuracy, consistency, and relevance of ChatGPT's responses using a 7-point Likert scale. Responses were analyzed for readability, reliability, and efficiency. ChatGPT's responses were synthesized, and findings are presented as a radar chart.
    RESULTS: ChatGPT had an accuracy score of 7.0 (range: 6.6-7.0) for CC and BC questions, surpassing the highest-scoring physicians (P < 0.05). ChatGPT took an average of 13.6 s (range: 7.6-24.0) to answer each of the 20 questions presented. Readability was comparable to that of the experts and physicians involved, but ChatGPT generated longer responses than the physicians. The consistency of repeated answers was 5.2 (range: 3.4-6.7). With different contexts combined, the overall ChatGPT relevance score was 6.5 (range: 4.8-7.0). Radar plot analysis indicated comparably good accuracy, efficiency, and, to a certain extent, relevance. However, there were apparent inconsistencies, and the reliability and readability were considered inadequate.
    CONCLUSIONS: ChatGPT shows promise as an initial source of information for CC and BC. ChatGPT is also highly functional, appears to be superior to physicians, and aligns with expert consensus, although there is room for improvement in readability, reliability, and consistency. Future efforts should focus on developing advanced ChatGPT models explicitly designed to improve medical practice and to serve those with concerns about symptoms.
    Keywords:  Artificial intelligence; Breast cancer; Cervical cancer; ChatGPT; Frequently asked question
    DOI:  https://doi.org/10.1186/s12905-024-03320-8
  5. Digit Health. 2024 Jan-Dec; 10: 20552076241277021
       Introduction: ChatGPT can serve as an adjunct informational tool for ophthalmologists and their patients. However, the reliability and readability of its responses to myopia-related queries in the Chinese language remain underexplored.
    Purpose: This study aimed to evaluate the ability of ChatGPT to address frequently asked questions (FAQs) about myopia by parents and caregivers.
    Method: Myopia-related FAQs were input three times into fresh ChatGPT sessions, and the responses were evaluated by 10 ophthalmologists using a Likert scale for appropriateness, usability, and clarity. The Chinese Readability Index Explorer (CRIE) was used to evaluate the readability of each response. Inter-rater reliability among the reviewers was examined using Cohen's kappa coefficient, and Spearman's rank correlation analysis and one-way analysis of variance were used to investigate the relationship between CRIE scores and each criterion.
    Results: Forty-five percent of ChatGPT's responses in the Chinese language were appropriate and usable, and only 35% met all the set criteria. The CRIE scores for 20 ChatGPT responses ranged from 7.29 to 12.09, indicating that the readability level was equivalent to a middle-to-high school level. Responses about treatment efficacy and side effects were deficient for all three criteria.
    Conclusions: The performance of ChatGPT in addressing pediatric myopia-related questions is currently suboptimal. As parents increasingly utilize digital resources to obtain health information, it has become crucial for eye care professionals to familiarize themselves with artificial intelligence-driven information on pediatric myopia.
    Keywords:  ChatGPT; health education; myopia; ophthalmologists; quality
    DOI:  https://doi.org/10.1177/20552076241277021
  6. J Endourol. 2024 Sep 06.
       Objective: To evaluate and compare the quality and comprehensibility of answers produced by five distinct artificial intelligence (AI) chatbots (GPT-4, Claude, Mistral, Google PaLM, and Grok) in response to the most frequently searched questions about kidney stones (KS).
    Materials and Methods: Google Trends facilitated the identification of pertinent terms related to KS. Each AI chatbot was provided with a unique sequence of 25 commonly searched phrases as input. The responses were assessed using DISCERN, the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P), the Flesch-Kincaid Grade Level (FKGL), and the Flesch-Kincaid Reading Ease (FKRE) criteria.
    Results: The three most frequently searched terms were "stone in kidney," "kidney stone pain," and "kidney pain." Nepal, India, and Trinidad and Tobago were the countries that performed the most searches on KS. None of the AI chatbots attained the requisite level of comprehensibility. Grok demonstrated the highest FKRE (55.6 ± 7.1) and lowest FKGL (10.0 ± 1.1) ratings (p = 0.001), whereas Claude outperformed the other chatbots in its DISCERN scores (47.6 ± 1.2) (p = 0.001). PEMAT-P understandability was the lowest in GPT-4 (53.2 ± 2.0), and actionability was the highest in Claude (61.8 ± 3.5) (p = 0.001).
    Conclusion: GPT-4 had the most complex language structure of the five chatbots, making it the most difficult to read and comprehend, whereas Grok was the simplest. Claude had the best KS text quality. Chatbot technology can improve healthcare material and make it easier to grasp.
    Keywords:  Claude; GPT-4; Google PaLM; Grok; Mistral; artificial intelligence; kidney stone
    DOI:  https://doi.org/10.1089/end.2024.0474
  7. Clin Transl Gastroenterol. 2024 Aug 30.
       BACKGROUND AND AIMS: The advent of artificial intelligence-powered large language models capable of generating interactive responses to intricate queries marks a groundbreaking development in how patients access medical information. Our aim was to evaluate the appropriateness and readability of gastroenterological information generated by ChatGPT.
    METHODS: We analyzed responses generated by ChatGPT to 16 dialogue-based queries assessing symptoms and treatments for gastrointestinal conditions and 13 definition-based queries on prevalent topics in gastroenterology. Three board-certified gastroenterologists evaluated output appropriateness with a 5-point Likert-scale proxy measurement of currency, relevance, accuracy, comprehensiveness, clarity, and urgency/next steps. Outputs with a score of 4 or 5 in all 6 categories were designated as "appropriate." Output readability was assessed with the Flesch Reading Ease score, the Flesch-Kincaid Reading Level, and the Simple Measure of Gobbledygook score.
    RESULTS: ChatGPT responses to 44% of the 16 dialogue-based and 69% of the 13 definition-based questions were deemed appropriate, and the proportion of appropriate responses within the 2 groups of questions was not significantly different (P = .17). Notably, none of ChatGPT's responses to questions related to gastrointestinal emergencies were designated appropriate. The mean readability scores showed that outputs were written at a college-level reading proficiency.
    CONCLUSION: ChatGPT can produce generally fitting responses to gastroenterological medical queries, but responses were constrained in appropriateness and readability, which limits the current utility of this large language model. Substantial development is essential before these models can be unequivocally endorsed as reliable sources of medical information.
    DOI:  https://doi.org/10.14309/ctg.0000000000000765
  8. J Am Dent Assoc. 2024 Aug 28. pii: S0002-8177(24)00393-3. [Epub ahead of print]
       BACKGROUND: Social networks have become a widely used and accessible source of health-related information for patients, but this material is not always accurate or appropriate. The purpose of this study was to evaluate the quality of orthodontic information available on 2 of the most popular social media platforms.
    STUDIES REVIEWED: The authors conducted a systematic search of the literature that analyzed the quality of information regarding orthodontics on social networks and used recognized quality-evaluation methods, such as DISCERN, modified DISCERN, and the Global Quality Scale or the Video Information Quality Index, in the electronic databases of PubMed, Embase, and Scopus and through a manual search of gray literature.
    RESULTS: The authors identified a total of 534 potentially eligible articles, of which 22 eventually were included in the qualitative analysis. The application of the scales revealed that most of the content was of insufficient quality and lacked scientific rigor, precision, and support from reliable sources. The authors observed marked heterogeneity in the nature of the publications analyzed, with the most recurrent topics being general orthodontic treatment and the use of clear aligners.
    PRACTICAL IMPLICATIONS: Social media platforms provide low-quality information to patients, which potentially can be harmful. These findings underscore the need to offer alternative ways to resolve patient queries before and during treatment and highlight the importance of promoting informed and responsible education regarding online information on orthodontic treatments.
    Keywords:  Orthodontics; information; internet; quality; reliability; social media platforms
    DOI:  https://doi.org/10.1016/j.adaj.2024.07.012
  9. J Hand Microsurg. 2024 Oct;16(4): 100119
       Background: Thumb carpometacarpal (CMC) joint osteoarthritis is a common degenerative condition that affects up to 15% of the population older than 30 years. Poor readability of online health resources has been associated with misinformation, inappropriate care, incorrect self-treatment, worse health outcomes, and increased healthcare resource waste. This study aims to assess the readability and quality of online information regarding thumb CMC joint replacement surgery.
    Methods: The terms "thumb joint replacement surgery", "thumb carpometacarpal joint replacement surgery", "thumb cmc joint replacement surgery", "thumb arthroplasty", "thumb carpometacarpal arthroplasty", and "thumb cmc arthroplasty" were searched in Google and Bing. Readability was determined using the Flesch Reading Ease Score (FRES) and the Flesch-Kincaid Reading Grade Level (FKGL); both formulas are sketched after this entry. A FRES >65 or a grade level of sixth grade or below was considered acceptable. Quality was assessed using the Patient Education Materials Assessment Tool (PEMAT) and a modified DISCERN tool. PEMAT scores below 70 were considered poorly understandable and poorly actionable.
    Results: A total of 34 websites underwent qualitative analysis. The average FRES was 54.60 ± 7.91 (range 30.30-67.80). Only 3 (8.82%) websites had a FRES score >65. The average FKGL score was 8.19 ± 1.80 (range 5.60-12.90). Only 3 (8.82%) websites were written at or below a sixth-grade level. The average PEMAT percentage scores for understandability and actionability were 76.82 ± 9.43 (range 61.54-93.75) and 36.18 ± 24.12 (range 0.00-60.00), respectively. Although 22 (64.71%) of the websites met the acceptable standard of 70% for understandability, none met the acceptable standard of 70% for actionability. The average total DISCERN score was 32.00 ± 4.29 (range 24.00-42.00).
    Conclusions: Most websites reviewed were written above recommended reading levels. Most showed acceptable understandability but none showed acceptable actionability. To avoid the negative outcomes of poor patient understanding of online resources, providers of these resources should optimise accessibility to the average reader by using simple words, avoiding jargon, and analysing texts with readability software before publishing the materials online. Websites should also utilise visual aids and provide clearer pre-operative and post-operative instructions.
    Keywords:  Hand surgery; Health literacy; Internet; Thumb carpometacarpal joint surgery; Thumb surgery
    DOI:  https://doi.org/10.1016/j.jham.2024.100119
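    For reference, the two Flesch measures used in this and several other entries are fixed linear functions of average sentence length and average syllables per word; a higher FRES means easier text, and FKGL approximates a US school grade. A minimal sketch with a crude vowel-group syllable counter follows (validated readability tools should be used in real studies):

        # Flesch Reading Ease (FRES) and Flesch-Kincaid Grade Level (FKGL).
        # The syllable counter is a rough heuristic for illustration only.
        import re

        def count_syllables(word):
            # Approximate syllables as runs of consecutive vowels.
            return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

        def flesch_scores(text):
            sentences = max(1, len(re.findall(r"[.!?]+", text)))
            words = re.findall(r"[A-Za-z']+", text)
            asl = len(words) / sentences  # average sentence length
            asw = sum(map(count_syllables, words)) / len(words)  # syllables/word
            fres = 206.835 - 1.015 * asl - 84.6 * asw
            fkgl = 0.39 * asl + 11.8 * asw - 15.59
            return fres, fkgl

        fres, fkgl = flesch_scores("The surgeon removes the worn joint. A tendon fills the gap.")
        print(f"FRES {fres:.1f}, FKGL {fkgl:.1f}")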
  10. Digit Health. 2024 Jan-Dec; 10: 20552076241277033
       Objective: The internet has become a preferred source for people seeking health information, including diet recommendations which are pivotal in the management of inflammatory bowel disease (IBD). Hence, we aimed to assess the quality of online information in China regarding IBD dietary recommendations.
    Methods: The search engines Baidu and Bing were used to screen for their top 25 webpages using the keywords "inflammatory bowel disease diet," "ulcerative colitis diet," "Crohn's disease diet," "inflammatory bowel disease nutrition," "ulcerative colitis nutrition," and "Crohn's disease nutrition." The quality of information was assessed by two physicians according to the Journal of the American Medical Association (JAMA) benchmark, the Global Quality Score (GQS), and the DISCERN instrument.
    Results: One hundred and eight webpages were selected for evaluation. The mean scores for JAMA, GQS, and DISCERN were 1.48, 3.11, and 36.20, respectively. Articles from professionals and non-profit organizations demonstrated superior quality compared to those from commercial and health portal websites. Many webpages failed to provide an explicit source of information or support for shared decision-making. The information on several pages lacked comprehensive descriptions of food types for IBD, with some pages even containing inaccuracies. No statistically significant differences in scores were observed between Baidu and Bing.
    Conclusions: The quality of online information on IBD dietary recommendations in China is moderate to low and exhibits significant variation across different sources. This warrants joint efforts from online authors, internet platforms, and regulators to improve the quality of popular medical information.
    Keywords:  Internet; diet; health information; inflammatory bowel disease; quality
    DOI:  https://doi.org/10.1177/20552076241277033
  11. Sci Rep. 2024 Sep 04. 14(1): 20604
      Lung cancer has emerged as a major global public health concern. With growing public interest in lung cancer, online searches for related information have surged. However, the credibility, quality, and value of lung cancer-related videos on digital media platforms have not been comprehensively evaluated. This study aimed to assess the informational quality and content of lung cancer-related videos on Douyin and Bilibili. A total of 200 lung cancer-related videos that met the criteria were selected from Douyin and Bilibili for evaluation and analysis. The first step involved recording and analyzing the basic information provided in the videos. Subsequently, the source and type of content for each video were identified. The educational content and quality of all videos were then evaluated using JAMA, GQS, and modified DISCERN. Douyin videos were found to be more popular in terms of likes, comments, favorites, and shares, whereas Bilibili videos were longer in duration (P < .001). The majority of video content on both platforms comprised lung cancer introductions (31/100, 31%), with medical professionals being the primary source of uploaded videos (Douyin, n = 55, 55%; Bilibili, n = 43, 43%). General users on Douyin scored the lowest on the JAMA scale, whereas for-profit businesses scored the highest (2.50 points). The results indicated that the videos' informational quality was insufficient. Videos from science communication sources and health professionals were deemed more reliable in terms of completeness and content quality compared to videos from other sources. The public should exercise caution and consider scientific validity when seeking healthcare information on short-video platforms.
    Keywords:  Bilibili; Douyin; Information quality; Lung cancer; Short videos; Social media
    DOI:  https://doi.org/10.1038/s41598-024-70640-y
  12. J Craniofac Surg. 2024 Sep 02.
      This study aimed to assess the quality, credibility, and readability of online health information concerning turbinoplasty, given the increasing reliance on internet resources for health education. Using four search terms related to turbinoplasty, we analyzed 71 text-based webpages from Google.com, Bing.com, and Yahoo.com. Readability was evaluated using the Flesch-Kincaid Grade Level, the Gunning-Fog Index, the SMOG Index, and the Coleman-Liau Index. Web page quality was assessed using the DISCERN instrument (DISCERN), the Journal of the American Medical Association benchmark criteria (JAMA), the Novel Turbinoplasty Index (NTI), and the presence of code certification by Health On the Net (HON). Information quality was measured by an average DISCERN score of 47.4±7.40, indicating "fair" quality (the conventional DISCERN banding is sketched after this entry). The average readability was a grade level of 9.7±1.57, notably higher than AMA and NIH recommendations. Of all web pages, only 11 (15.49%) proficiently met all 4 listed JAMA criteria. Significant correlations between web page classification and average DISCERN (P=0.0042), as well as JAMA score (P<0.001), were discovered. The web pages that had HON code certification showed significantly higher quality metrics, such as DISCERN scores (P<0.001), JAMA scores (P<0.001), and NTI scores (P=0.038). Online health information for turbinoplasty is of "fair" quality, and the average readability is several grade levels above current AMA and NIH recommendations. Health care providers should aim to guide their patients on finding appropriate educational resources and should improve the readability of their patient education materials.
    DOI:  https://doi.org/10.1097/SCS.0000000000010511
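    Many entries in this issue report DISCERN totals against verbal bands such as the "fair" rating above. DISCERN comprises 15 questions plus an overall rating, each scored 1-5, so question totals range from 15 to 75; the cut-offs below are the ones conventionally applied in quality-of-information studies, not taken from any single paper here.

        # Conventional banding of a DISCERN total (15 questions, 1-5 each).
        def discern_band(total):
            if not 15 <= total <= 75:
                raise ValueError("DISCERN total must be between 15 and 75")
            if total >= 63:
                return "excellent"
            if total >= 51:
                return "good"
            if total >= 39:
                return "fair"
            if total >= 27:
                return "poor"
            return "very poor"

        print(discern_band(47))  # "fair", consistent with the 47.4 reported above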
  13. Trends Psychiatry Psychother. 2024 Sep 05.
       AIM: This study evaluated the quality and reliability of information about autism spectrum disorder (ASD) available in Portuguese on YouTube, based on the growing demand for accessible information about ASD and the relevance of digital platforms as sources of health information.
    METHODS: Using a cross-sectional observational study design, videos published in the last 5 years with more than 50,000 views were selected. The analysis consisted of two stages: characterization of the profile of the selected videos and assessment of information quality with the DISCERN Questionnaire.
    RESULTS: A total of 48 videos, predominantly produced by healthcare providers and educators, were analyzed. The content of videos made by professionals was of higher quality and reliability compared to videos posted by laypersons and news reports. These findings highlight expertise in the field as a critical determinant of content quality, stressing the importance of relying on expert sources when disseminating information about ASD. The ICD-10 and DSM-5 were rarely mentioned, especially in videos by non-professionals, which is indicative of challenges in conveying diagnostic information.
    CONCLUSIONS: The findings of this study demonstrate the significant potential of YouTube as an educational tool to raise ASD awareness, but also highlight the need for a collaborative approach between content creators, healthcare providers, educators, and policymakers to ensure that the information made available is reliable, accurate, and of high quality. Therefore, we recommend the development of specific guidelines for content creators and the implementation of verification mechanisms for YouTube channels run by subject matter experts.
    Keywords:  Health promotion; autism spectrum disorder; social media
    DOI:  https://doi.org/10.47626/2237-6089-2024-0884
  14. Cureus. 2024 Jul;16(7): e65760
       AIM: Complete mesocolic excision (CME) is increasingly becoming an oncological surgical principle for right hemicolectomy. However, the procedure is technically difficult and carries a higher risk of complications than open surgery. In this study, the adequacy of YouTube videos that facilitate education for laparoscopic right hemicolectomy with complete mesocolic excision (LRHCME) was investigated.
    METHODS: In July 2024, in the search bar of the YouTube platform, the term "laparoscopic right hemicolectomy complete mesocolic excision" was searched. The first 100 videos in each search were evaluated. Animations, advertisements, lectures, non-surgical videos (pre-surgery, post-surgery vlog, etc.), and non-English videos were excluded from the study. Steps identified in the Delphi consensus were used to determine the reliability of the videos. The quality of the videos was measured using the Global Quality Scale (GQS) and the modified DISCERN score.
    RESULTS: Seventy videos were included in the evaluation. While 28 (40%) of these videos were classified as reliable, 42 (60%) were not found reliable. In reliable videos, video description, HD resolution, GQS, modified DISCERN, and duration were significantly higher (p-value <0.001, 0.012, <0.001, <0.001, 0.041 respectively). Reliable videos had a better rank than unreliable videos (p=0.046).
    CONCLUSION: When evaluated according to the Delphi consensus, most of the LRHCME videos on the YouTube platform were unreliable. We conclude that YouTube alone is insufficient for learning LRHCME without a professional instructor.
    Keywords:  laparoscopic right hemicolectomy; reliable; total mesocolic excision; unreliable; youtube videos
    DOI:  https://doi.org/10.7759/cureus.65760
  15. Am J Otolaryngol. 2024 Jul 21. 45(6): 104396. pii: S0196-0709(24)00182-0. [Epub ahead of print]
       PURPOSE: Patients often refer to online materials when researching surgical procedures. This study compares the educational quality of online videos about tympanostomy tubes on two popular video platforms: YouTube and Facebook. This study provides clinicians with context about the content and quality of information patients may possess after watching online videos on tympanostomy tubes.
    MATERIALS AND METHODS: YouTube and Facebook were searched using key terms related to tympanostomy tubes. Videos were screened and scored in triplicate. DISCERN quality, content, production, and alternative medicine scores were assigned. Statistical analysis was conducted using GraphPad Prism.
    RESULTS: 76 YouTube and 86 Facebook videos were analyzed. DISCERN quality scores (mean = 1.8 vs. 1.4, P < .0001), content scores (mean = 1.7 vs. 1.0, P < .0001), and production scores (mean = 4.8 vs. 4.6, P = .0327) were significantly higher on YouTube compared to Facebook. 33% of Facebook videos referenced alternative medicine, as compared with 0% of YouTube videos (P < .0001). Physician/hospital-generated videos had significantly higher DISCERN and content scores than parent-, product-, and chiropractor-generated videos. Views did not correlate with DISCERN or content scores.
    CONCLUSION: YouTube is a better platform than Facebook for educational videos about tympanostomy tubes. YouTube videos had higher educational quality, more comprehensive content, and less alternative medicine. One third of Facebook videos advocated for alternative treatments. Importantly, videos on both platforms were of limited educational quality as demonstrated through low DISCERN reliability scores and coverage of few important content areas.
    Keywords:  Acute otitis media; Facebook; Patient education; Pediatric otology; Social media; Tympanostomy tubes; YouTube
    DOI:  https://doi.org/10.1016/j.amjoto.2024.104396
  16. JMIR Form Res. 2024 Aug 29. 8: e48389
       BACKGROUND: Social media platforms like TikTok are a very popular source of information, especially for skin diseases. Topical steroid withdrawal (TSW) is a condition that is yet to be fully defined and understood. This did not stop the hashtag #topicalsteroidwithdrawal from amassing more than 600 million views on TikTok. It is of utmost importance to assess the quality and content of TikTok videos on TSW to prevent the spread of misinformation.
    OBJECTIVE: This study aims to assess the quality and content of the top 100 videos dedicated to the topic of TSW on TikTok.
    METHODS: This observational study assesses the content and quality of the top 100 videos about TSW on TikTok. A total of 3 independent scoring systems (DISCERN, the Journal of the American Medical Association benchmarks, and the Global Quality Scale) were used to assess video quality. The content of the videos was coded by 2 reviewers and analyzed for recurrent themes and topics.
    RESULTS: This study found that only 10.0% (n=10) of the videos clearly defined what TSW is. Videos were predominantly posted by White, middle-aged, female creators. Neither the cause nor the mechanism of the disease was described in the videos. The symptoms described (itching, peeling, and dryness) resembled those of atopic dermatitis. The videos failed to mention important information regarding the use of steroids, such as the reason the steroid was initially prescribed, the name of the drug, its concentration, the mode of use, and the method of discontinuation. Management techniques varied from hydration methods approved for treatment of atopic dermatitis to treatment options without scientific evidence. Overall, the videos had immense reach, with over 200 million views, 45 million likes, 90,000 comments, and 100,000 shares. Video quality was poor, with an average DISCERN score of 1.63 (SD 0.56) out of 5. Video length, total view count, and views/day were all associated with increased quality, indicating that patients were interacting more with higher-quality videos. However, videos were created exclusively by personal accounts, highlighting the absence of dermatologists on the platform to discuss this topic.
    CONCLUSIONS: The videos posted on TikTok are of low quality and lack pertinent information. The content is varied and not consistent. Health care professionals, including dermatologists and residents in the field, need to be more active on the topic, to spread proper information and prevent an increase in steroid phobia. Health care professionals are encouraged to ride the wave and produce high-quality videos discussing what is known about TSW to avoid the spread of misinformation.
    Keywords:  TikTok; content analysis; dermatology; drug response; information quality; medical dermatology; misinformation; skin; social media; steroid withdrawal; steroids; topical; videos
    DOI:  https://doi.org/10.2196/48389
  17. Digit Health. 2024 Jan-Dec; 10: 20552076241277688
       Purpose: Breast cancer, the most common cancer in women globally, highlights the need for patient education. Despite many breast cancer discussions on TikTok, their scientific evaluation is lacking. Our study seeks to assess the content quality and accuracy of popular TikTok videos on breast cancer, to improve the dissemination of health knowledge.
    Methods: On August 22, 2023, we collected the top 100 trending videos from TikTok's Chinese version using "breast cancer/breast nodule" as keywords. We noted their length, TikTok duration, likes, comments, favorites, reposts, uploader types, and topics. Four assessment tools were used: Goobie's six questions, the Patient Educational Material Assessment Tool (PEMAT; its percentage scoring is sketched after this entry), the Video Information and Quality Index (VIQI), and the Global Quality Score (GQS). These instruments evaluate videos based on content, informational integrity, and overall quality.
    Results: Among the 100 videos, content quality was low with Goobie's questions mostly scoring 0, except for management at 1.0 (QR 1.0). PEMAT scores were moderate: 54.1 (QR 1.6) for sum, 47.0 (QR 18.8) for PEMAT-A, and 52.3 (QR 11.7) for PEMAT-U. Regarding the quality of information, the VIQI (sum) median was 14.1 (QR 0.2). Additionally, the median GQS score was 3.5 (QR 0.1). Medical professionals' videos focused on breast cancer stages, while patient videos centered on personal experiences. Patient videos had lower content and overall quality compared to those by medical professionals (PEMAT, GQS: P < 0.001, P = 0.004) but received more comments, indicating higher engagement (all P < 0.05).
    Conclusion: TikTok's breast cancer content shows educational potential, but while informational quality is moderate, content quality needs improvement. Videos by medical professionals are of higher quality. We recommend increased involvement of healthcare professionals on TikTok to enhance content quality. Non-medical users should share verified information, and TikTok should strengthen its content vetting. Users must scrutinize the credibility of health information on social platforms.
    Keywords:  Breast cancer; TikTok; short video apps; social media; video quality assessment
    DOI:  https://doi.org/10.1177/20552076241277688
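    Entries 9 and 17 score materials with PEMAT, which rates each item Agree (1) or Disagree (0), drops non-applicable items, and reports the percentage of applicable items rated Agree; 70% is the bar commonly used for acceptable understandability or actionability. A minimal sketch with hypothetical ratings:

        # PEMAT percentage score: Agree = 1, Disagree = 0, None = not applicable.
        def pemat_score(ratings):
            applicable = [r for r in ratings if r is not None]  # drop N/A items
            return 100.0 * sum(applicable) / len(applicable)

        understandability = [1, 1, 0, 1, None, 1, 1, 0, 1, 1, 1, 0, 1]  # hypothetical
        score = pemat_score(understandability)
        print(f"{score:.1f}% ({'meets' if score >= 70 else 'below'} the 70% bar)")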
  18. JMIR Form Res. 2024 Sep 03. 8: e51513
       BACKGROUND: Coronary heart disease (CHD) is a leading cause of death worldwide and imposes a significant economic burden. TikTok has risen as a favored platform within the social media sphere for disseminating CHD-related information and stands as a pivotal resource for patients seeking knowledge about CHD. However, the quality of such content on TikTok remains largely unexplored.
    OBJECTIVE: This study aims to assess the quality of information conveyed in TikTok CHD-related videos.
    METHODS: A comprehensive cross-sectional study was undertaken on TikTok videos related to CHD. The sources of the videos were identified and analyzed. The comprehensiveness of content was assessed through 6 questions addressing the definition, signs and symptoms, risk factors, evaluation, management, and outcomes. The quality of the videos was assessed using 3 standardized evaluative instruments: DISCERN, the Journal of the American Medical Association (JAMA) benchmarks, and the Global Quality Scale (GQS). Furthermore, correlative analyses between video quality and characteristics of the uploaders and the videos themselves were conducted.
     RESULTS: The search yielded 145 CHD-related videos from TikTok, predominantly uploaded by health professionals (n=128, 88.3%), followed by news agencies (n=6, 4.1%), nonprofit organizations (n=10, 6.9%), and for-profit organizations (n=1, 0.7%). Content comprehensiveness achieved a median score of 3 (IQR 2-4). Median values for the DISCERN, JAMA, and GQS evaluations across all videos stood at 27 (IQR 24-32), 2 (IQR 2-2), and 2 (IQR 2-3), respectively. Videos from health professionals and nonprofit organizations attained significantly superior JAMA scores in comparison to those from news agencies (P<.001 and P=.02, respectively), whereas GQS scores for videos from health professionals were also notably higher than those from news agencies (P=.048). Within health professionals, cardiologists demonstrated discernibly enhanced performance over noncardiologists in both DISCERN and GQS assessments (P=.02). Correlative analyses unveiled positive correlations between video quality and uploader metrics (number of followers, total likes, and average likes per video) and the established quality indices (DISCERN, JAMA, and GQS scores). Similar investigations of video attributes showed correlations between user engagement factors (likes, comments, collections, and shares) and the aforementioned quality indicators. In contrast, a negative correlation emerged between the number of days since upload and the quality indices, while a longer video duration corresponded positively with higher DISCERN and GQS scores.
    CONCLUSIONS: The quality of the videos was generally poor, with significant disparities based on source category. The content comprehensiveness coverage proved insufficient, casting doubts on the reliability and quality of the information relayed through these videos. Among health professionals, video contributions from cardiologists exhibited superior quality compared to noncardiologists. As TikTok's role in health information dissemination expands, ensuring accurate and reliable content is crucial to better meet patients' needs for CHD information that conventional health education fails to fulfill.
    Keywords:  TikTok; content quality; coronary heart disease; short-video platform; social media
    DOI:  https://doi.org/10.2196/51513
  19. Psychol Health. 2024 Sep 01. 1-16
       OBJECTIVE: This study aimed to unravel micro-processes that link information seeking to subsequent affective well-being (i.e., positive and negative affect) at the within-person level, as well as the role of worry as a mediator in this relationship.
    METHODS AND MEASURES: Within the initial weeks following the Chinese government's relaxation of its epidemic control measures, 184 participants completed experience sampling methods on information seeking, COVID-related worry, and affective well-being three times a day for 14 days.
    RESULTS: According to dynamic structural equation models, information seeking was associated with high negative affect but not with low positive affect. COVID-related worry acted as a full mediator between information seeking at the previous time point (approximately 5 h earlier) and current negative affect, but not positive affect.
    CONCLUSION: These findings suggested that the impact of information seeking on affective well-being was different for the two dimensions of affect. Furthermore, the persistent impact of information seeking on negative affect was attributed to the indirect effect of worry, suggesting that worry should be a point of focus for intervention to mitigate the potentially negative effects of information seeking within the context of the public health crises.
    Keywords:  Affective well-being; information seeking; perseverative cognition hypothesis; worry
    DOI:  https://doi.org/10.1080/08870446.2024.2395867