bims-librar Biomed News
on Biomedical librarianship
Issue of 2023-12-03
23 papers selected by
Thomas Krichel, Open Library Society



  1. Health Info Libr J. 2023 Nov 27.
       BACKGROUND: Medication discontinuation studies explore the outcomes of stopping a medication compared to continuing it. Comprehensively identifying medication discontinuation articles in bibliographic databases remains challenging due to variability in terminology.
    OBJECTIVES: To develop and validate search filters to retrieve medication discontinuation articles in Medline and Embase.
    METHODS: We identified medication discontinuation articles in a convenience sample of systematic reviews. We used the primary articles to create two reference sets, one for Medline and one for Embase. Each reference set was randomly divided into a development set and a validation set. Terms relevant to discontinuation were identified by term frequency analysis in the development sets and combined to build two search filters that maximized relative recall. The filters were then validated against the validation sets. Relative recalls were calculated with their 95% confidence intervals (95% CI).
    RESULTS: We included 316 articles for Medline and 407 articles for Embase, drawn from 15 systematic reviews. The optimized Medline search filter combined 7 terms; the optimized Embase search filter combined 8 terms. The relative recalls were 92% (95% CI: 87-96) and 91% (95% CI: 86-94), respectively.
    CONCLUSIONS: We developed two search filters for retrieving medication discontinuation articles in Medline and Embase. Further research is needed to estimate precision and specificity of the filters.
    Keywords:  Embase; MEDLINE; database searching; medical subject headings (MeSH); methodological filters; research methodology; review and systematic search; search strategies
    DOI:  https://doi.org/10.1111/hir.12516
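Relative recall here is the fraction of reference-set articles that a filter retrieves. As a minimal illustration of the calculation reported above (the Wilson score interval is an assumption; the abstract does not state which CI method the authors used):

```python
# Relative recall of a search filter against a reference set, with a 95% CI.
# The Wilson score interval below is an assumption for illustration only.
from math import sqrt

def relative_recall_ci(retrieved_relevant: int, total_relevant: int, z: float = 1.96):
    """Return (recall, ci_low, ci_high) for a filter that retrieves
    retrieved_relevant of the total_relevant reference articles."""
    p = retrieved_relevant / total_relevant
    n = total_relevant
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half_width, centre + half_width

# Hypothetical example: a filter finding 145 of 158 validation-set articles
print(relative_recall_ci(145, 158))  # recall ~0.92 with its 95% CI
```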
  2. Cureus. 2023 Oct;15(10): e47976
      Introduction: Academic departments need to monitor their faculty's academic productivity for various purposes, such as reporting to the medical school dean, assessing the allocation of non-clinical research time, evaluating for rank promotion, and reporting to the Accreditation Council for Graduate Medical Education (ACGME). Our objective was to develop and validate a simple method that automatically generates query strings to identify and process distinct department faculty publications listed in PubMed and Scopus.
    Methods: We created a macro-enabled Excel workbook (Microsoft, Redmond, WA) to automate the retrieval of faculty publications from the PubMed and Scopus bibliometric databases (available at https://bit.ly/get-pubs). Where the returned reference includes the digital object identifier (doi), a link is provided in the workbook. Duplicate publications are removed automatically, and false attributions are managed.
    Results: At the University of Miami, between 2020 and 2021, there were 143 anesthesiology faculty-authored publications with a PubMed identifier (PMID), 95.8% identified by the query and 4.2% missed. At Vanderbilt University Medical Center, between 2019 and 2021, there were 760 anesthesiology faculty-authored publications with a PMID, 94.3% identified by the query and 5.7% missed. Recall, precision, and the F1 score were all above 93% at both medical centers.
    Conclusions: We developed a highly accurate, simple, transportable, scalable method to identify publications in PubMed and Scopus authored by anesthesiology faculty. Manual checking and faculty feedback are required because not all names can be disambiguated, and some references are missed. This process can greatly reduce the burden of curating a list of faculty publications. The methodology applies to other academic departments that track faculty publications.
    Keywords:  authorship; bibliometrics; library science; pubmed; scopus
    DOI:  https://doi.org/10.7759/cureus.47976
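The workbook above generates query strings against PubMed and Scopus; a rough, hypothetical Python analogue of the PubMed side is sketched below using the NCBI E-utilities (the author name, affiliation, and query wording are illustrative, not the published tool's exact queries):

```python
# Sketch only: retrieve PMIDs for one faculty member's publications by
# combining author, affiliation, and publication-date terms in an E-utilities
# esearch call. The published tool is a macro-enabled Excel workbook; this is
# an illustrative Python equivalent, not the authors' implementation.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def faculty_pmids(author: str, affiliation: str, start_year: int, end_year: int):
    term = (
        f"{author}[Author] AND {affiliation}[Affiliation] AND "
        f'("{start_year}"[Date - Publication] : "{end_year}"[Date - Publication])'
    )
    params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": 1000}
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

# Hypothetical example (the name is made up):
# pmids = faculty_pmids("Smith JD", "Vanderbilt University Medical Center", 2019, 2021)
```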
  3. Cureus. 2023 Nov;15(11): e49373
      Background: Artificial intelligence (AI)-based conversational models, such as Chat Generative Pre-trained Transformer (ChatGPT), Microsoft Bing, and Google Bard, have emerged as valuable sources of health information for lay individuals. However, the accuracy of the information provided by these AI models remains a significant concern. This pilot study aimed to test a new tool, referred to as "CLEAR", designed to assess the quality of health information delivered by AI-based models across five key themes: Completeness of content, Lack of false information in the content, Evidence supporting the content, Appropriateness of the content, and Relevance.
    Methods: Tool development involved a literature review on health information quality, followed by the initial establishment of the CLEAR tool, which comprised five items that aimed to assess the following: completeness, lack of false information, evidence support, appropriateness, and relevance. Each item was scored on a five-point Likert scale from excellent to poor. Content validity was checked by expert review. Pilot testing involved 32 healthcare professionals using the CLEAR tool to assess content on eight different health topics deliberately designed with varying qualities. Internal consistency was checked with Cronbach's alpha (α). Feedback from the pilot test resulted in language modifications to improve the clarity of the items. The final CLEAR tool was used to assess the quality of health information generated by four distinct AI models on five health topics. The AI models were ChatGPT 3.5, ChatGPT 4, Microsoft Bing, and Google Bard, and the generated content was scored by two independent raters with Cohen's kappa (κ) for inter-rater agreement.
    Results: The final five CLEAR items were: (1) Is the content sufficient?; (2) Is the content accurate?; (3) Is the content evidence-based?; (4) Is the content clear, concise, and easy to understand?; and (5) Is the content free from irrelevant information? Pilot testing on the eight health topics revealed acceptable internal consistency with a Cronbach's α range of 0.669-0.981. The use of the final CLEAR tool yielded the following average scores: Microsoft Bing (mean=24.4±0.42), ChatGPT-4 (mean=23.6±0.96), Google Bard (mean=21.2±1.79), and ChatGPT-3.5 (mean=20.6±5.20). The inter-rater agreement revealed the following Cohen κ values: ChatGPT-3.5 (κ=0.875, P<.001), ChatGPT-4 (κ=0.780, P<.001), Microsoft Bing (κ=0.348, P=.037), and Google Bard (κ=0.749, P<.001).
    Conclusions: The CLEAR tool is a brief yet helpful tool that can aid in standardizing testing of the quality of health information generated by AI-based models. Future studies are recommended to validate the utility of the CLEAR tool in the quality assessment of AI-generated health-related content using a larger sample across various complex health topics.
    Keywords:  ai in healthcare; ai-generated health information; assessment tool feasibility; health information reliability; quality of health information
    DOI:  https://doi.org/10.7759/cureus.49373
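Inter-rater agreement in the entry above is reported with Cohen's κ, which compares observed agreement between two raters with the agreement expected by chance. A minimal unweighted sketch (the study rated five-point Likert items, for which a weighted κ could also be used):

```python
# Unweighted Cohen's kappa for two raters scoring the same items.
# The scores below are invented; they only illustrate the calculation.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / n**2
    return (observed - expected) / (1 - expected)

print(cohens_kappa([5, 4, 4, 3, 5], [5, 4, 3, 3, 5]))  # ~0.71 (1.0 = perfect agreement)
```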
  4. Digit Health. 2023 Jan-Dec;9:20552076231212296
     Background: Given the large volume of online health information, much of it of dubious quality, understanding how artificial intelligence can be used to evaluate health information, and even surpass human-level performance, is crucial. However, the existing literature still lacks a comprehensive review highlighting the key machine and deep learning techniques for the automatic evaluation of health information.
    Objective: Therefore, this study outlines the most recent developments and the current state of the art regarding evaluating the quality of online health information on web pages and specifies the direction of future research.
    Methods: In this article, a systematic literature review is conducted according to the PRISMA statement in eight online databases (PubMed, Science Direct, Scopus, ACM, Springer Link, Wiley Online Library, Emerald Insight, and Web of Science) to identify all empirical studies that use machine and deep learning models for evaluating online health information quality. Furthermore, the selected techniques are compared based on their characteristics, such as health quality criteria, quality measurement tools, algorithm type, and achieved performance.
    Results: The included papers evaluate health information on web pages using over 100 quality criteria. The results show no universal quality dimensions used by health professionals and machine or deep learning practitioners while evaluating health information quality. In addition, the metrics used to assess the model performance are not the same as those used to evaluate human performance.
    Conclusions: This systematic review offers a novel perspective on approaching the quality of health information in web pages that can be used by machine and deep learning practitioners to tackle the problem more effectively.
    Keywords:  Machine learning; deep learning; online health information; quality assessment; quality metrics
    DOI:  https://doi.org/10.1177/20552076231212296
  5. PLoS One. 2023;18(11): e0294812
      Modern biological research depends on data resources. These resources archive difficult-to-reproduce data and provide added-value aggregation, curation, and analyses. Collectively, they constitute a global infrastructure of biodata resources. While the organic proliferation of biodata resources has enabled incredible research, sustained support for the individual resources that make up this distributed infrastructure is a challenge. The Global Biodata Coalition (GBC) was established by research funders in part to aid in developing sustainable funding strategies for biodata resources. An important component of this work is understanding the scope of the resource infrastructure; how many biodata resources there are, where they are, and how they are supported. Existing registries require self-registration and/or extensive curation, and we sought to develop a method for assembling a global inventory of biodata resources that could be periodically updated with minimal human intervention. The approach we developed identifies biodata resources using open data from the scientific literature. Specifically, we used a machine learning-enabled natural language processing approach to identify biodata resources from titles and abstracts of life sciences publications contained in Europe PMC. Pretrained BERT (Bidirectional Encoder Representations from Transformers) models were fine-tuned to classify publications as describing a biodata resource or not and to predict the resource name using named entity recognition. To improve the quality of the resulting inventory, low-confidence predictions and potential duplicates were manually reviewed. Further information about the resources was then obtained using article metadata, such as funder and geolocation information. These efforts yielded an inventory of 3112 unique biodata resources based on articles published from 2011-2021. The code was developed to facilitate reuse and includes automated pipelines. All products of this effort are released under permissive licensing, including the biodata resource inventory itself (CC0) and all associated code (BSD/MIT).
    DOI:  https://doi.org/10.1371/journal.pone.0294812
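The classification step described above, deciding from a title and abstract whether a paper describes a biodata resource, can be sketched with the Hugging Face transformers library. A generic BERT base checkpoint stands in for the project's fine-tuned model here, so the classification head is untrained and the labels are placeholders; the snippet only demonstrates the plumbing:

```python
# Sketch of sequence classification over a title + abstract. Without the
# project's fine-tuned weights the classification head is randomly initialized,
# so the prediction below is meaningless; only the pipeline wiring is real.
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

base = "bert-base-uncased"  # assumption: stands in for the authors' fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)
model.config.id2label = {0: "not_biodata_resource", 1: "biodata_resource"}

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
text = ("ExampleDB: a curated database of protein interactions. We describe a "
        "web-accessible resource that aggregates and annotates interaction data.")
print(classifier(text))  # e.g. [{'label': 'biodata_resource', 'score': 0.53}]
```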
  6. Thyroid. 2023 Nov 27.
       BACKGROUND: ChatGPT, an artificial intelligence (AI) chatbot, is the fastest growing consumer application in history. Given recent trends identifying increasing patient use of Internet sources for self-education, we seek to evaluate the quality of ChatGPT-generated responses for patient education on thyroid nodules.
    METHODS: ChatGPT was queried 4 times with 30 identical questions. Queries differed by initial chatbot prompting: no prompting, patient-friendly prompting, 8th-grade level prompting, and prompting for references. Answers were scored on a hierarchical score: incorrect, partially correct, correct, or correct with references. Proportions of responses at incremental score thresholds were compared by prompt type using chi-squared analysis. Flesch-Kincaid grade level was calculated for each answer. The relationship between prompt type and grade level was assessed using analysis of variance. References provided within ChatGPT answers were totaled and analyzed for veracity.
    RESULTS: Across all prompts (n=120 questions), 83 answers (69.2%) were at least correct. Proportions of responses that were at least partially correct (p=0.795) and correct (p=0.402) did not differ by prompt; responses that were correct with references did (p<0.0001). Responses from 8th-grade level prompting had the lowest mean grade level (13.43 ± 2.86), significantly lower than no prompting (14.97 ± 2.01, p=0.01) and prompting for references (16.43 ± 2.05, p<0.0001). Prompting for references generated 80/80 (100%) of referenced publications within answers. Seventy references (87.5%) were legitimate citations, and 58/80 (72.5%) provided accurately reported information from the referenced publications.
    CONCLUSION: ChatGPT overall provides appropriate answers to most questions on thyroid nodules regardless of prompting. Despite targeted prompting strategies, ChatGPT reliably generates responses corresponding to grade levels well above accepted recommendations for presenting medical information to patients. Significant rates of AI hallucination may preclude clinicians from recommending the current version of ChatGPT as an educational tool for patients at this time.
    DOI:  https://doi.org/10.1089/thy.2023.0491
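The entry above compares the proportion of correct answers across four prompt types with a chi-squared test; a minimal sketch of that kind of comparison with SciPy (the counts are invented for illustration and are not the study's data):

```python
# Chi-squared test of independence: correct vs. not-correct answer counts
# across four prompt types (illustrative numbers only).
from scipy.stats import chi2_contingency

#           correct  not correct
table = [
    [21, 9],   # no prompting
    [22, 8],   # patient-friendly prompting
    [20, 10],  # 8th-grade level prompting
    [20, 10],  # prompting for references
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```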
  7. Cornea. 2023 Nov 28.
       PURPOSE: ChatGPT is a commonly used source of information by patients and clinicians. However, it can be prone to error and requires validation. We sought to assess the quality and accuracy of information regarding corneal transplantation and Fuchs dystrophy from 2 iterations of ChatGPT, and whether its answers improve over time.
    METHODS: A total of 10 corneal specialists collaborated to assess the algorithm's responses to 10 commonly asked questions related to endothelial keratoplasty and Fuchs dystrophy. These questions were posed to both ChatGPT-3.5 and its newer generation, GPT-4. Assessments tested quality, safety, accuracy, and bias of information. Chi-squared tests, Fisher exact tests, and regression analyses were conducted.
    RESULTS: We analyzed 180 valid responses. On a 1 (A+) to 5 (F) scale, the average score given by all specialists across questions was 2.5 for ChatGPT-3.5 and 1.4 for GPT-4, a significant improvement (P < 0.0001). Most responses by both ChatGPT-3.5 (61%) and GPT-4 (89%) used correct facts, a proportion that significantly improved across iterations (P < 0.00001). Approximately a third (35%) of responses from ChatGPT-3.5 were considered against the scientific consensus, a notable rate of error that decreased to only 5% of answers from GPT-4 (P < 0.00001).
    CONCLUSIONS: The quality of responses in ChatGPT significantly improved between versions 3.5 and 4, and the odds of providing information against the scientific consensus decreased. However, the technology is still capable of producing inaccurate statements. Corneal specialists are uniquely positioned to assist users to discern the veracity and application of such information.
    DOI:  https://doi.org/10.1097/ICO.0000000000003439
  8. Foot Ankle Orthop. 2023 Oct;8(4): 24730114231209919
       Background: Artificial intelligence (AI) platforms, such as ChatGPT, have become increasingly popular outlets for the consumption and distribution of health care-related advice. Because of a lack of regulation and oversight, the reliability of health care-related responses has become a topic of controversy in the medical community. To date, no study has explored the quality of AI-derived information as it relates to common foot and ankle pathologies. This study aims to assess the quality and educational benefit of ChatGPT responses to common foot and ankle-related questions.
    Methods: ChatGPT was asked a series of 5 questions, including "What is the optimal treatment for ankle arthritis?" "How should I decide on ankle arthroplasty versus ankle arthrodesis?" "Do I need surgery for Jones fracture?" "How can I prevent Charcot arthropathy?" and "Do I need to see a doctor for my ankle sprain?" Five responses (1 per each question) were included after applying the exclusion criteria. The content was graded using DISCERN (a well-validated informational analysis tool) and AIRM (a self-designed tool for exercise evaluation).
    Results: Health care professionals graded the ChatGPT-generated responses as bottom tier 4.5% of the time, middle tier 27.3% of the time, and top tier 68.2% of the time.
    Conclusion: Although ChatGPT and other related AI platforms have become a popular means for medical information distribution, the educational value of the AI-generated responses related to foot and ankle pathologies was variable. With 4.5% of responses receiving a bottom-tier rating, 27.3% of responses receiving a middle-tier rating, and 68.2% of responses receiving a top-tier rating, health care professionals should be aware of the high viewership of variable-quality content easily accessible on ChatGPT.
    Level of Evidence: Level III, cross sectional study.
    Keywords:  Charcot arthropathy; ChatGPT; Jones fracture; ankle arthritis; ankle arthroplasty; ankle sprain; artificial intelligence; education
    DOI:  https://doi.org/10.1177/24730114231209919
  9. J Glaucoma. 2023 Nov 24.
     PRCIS: ChatGPT can help healthcare providers automate the quality assessment of online health information, but it does not produce easier-to-understand responses than existing online health information.
    PURPOSE: To compare the readability of ChatGPT-generated health information about glaucoma surgery to existing material online, and to evaluate ChatGPT's ability to analyze the quality of information found online about glaucoma surgery.
    METHODS: ChatGPT was asked to create patient handouts on glaucoma surgery using 7 independent prompts aiming to generate 6th grade level reading material. Existing patient-targeted online health information about glaucoma surgery was selected from the top 50 search results of three search engines, excluding advertisements, blog posts, information intended for health professionals, irrelevant content, and duplicate links. Four validated tools were used to assess readability, and the readability of the ChatGPT-generated material was compared with the readability of existing online information. The DISCERN instrument was used for quality assessment of online materials. The DISCERN instrument was also supplied to ChatGPT to evaluate its ability to analyze quality. The R software and descriptive statistics were used for data analysis.
    RESULTS: 35 webpages were included. There was no difference between the reading level of online webpages (12th grade) and the reading level of ChatGPT-generated responses (11th grade), despite the ChatGPT prompts asking for simple language and a 6th grade reading level. The quality of health content was "fair" with only 5 resources receiving an "excellent" score. ChatGPT scored the quality of health resources with high precision (r=0.725).
    CONCLUSIONS: Patient-targeted information on glaucoma surgery is beyond the reading level of the average patient, and therefore at risk of not being understood, and is of subpar quality, per DISCERN tool scoring. ChatGPT did not generate documents at a lower reading level as prompted, but this tool can aid in automating the time-consuming and subjective process of quality assessment.
    DOI:  https://doi.org/10.1097/IJG.0000000000002338
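The study above also supplied the DISCERN instrument to ChatGPT so the model could score webpages itself. A hypothetical sketch of that idea via the OpenAI Python client is shown below (the study used ChatGPT interactively; the model name and prompt wording are illustrative only):

```python
# Hypothetical analogue of having a chat model apply the DISCERN instrument
# to a webpage. Requires OPENAI_API_KEY in the environment; the model and
# prompt wording are assumptions, not the study's exact setup.
from openai import OpenAI

client = OpenAI()

DISCERN_PROMPT = (
    "You are rating a patient-education webpage with the DISCERN instrument. "
    "Score each of the 16 DISCERN questions from 1 (no) to 5 (yes), justify "
    "each score in one line, and report the total."
)

def discern_score(webpage_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": DISCERN_PROMPT},
            {"role": "user", "content": webpage_text},
        ],
    )
    return response.choices[0].message.content
```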
  10. Cureus. 2023 Oct;15(10): e46736
       AIM: We aimed to evaluate the performance of Chat Generative Pre-trained Transformer (ChatGPT) within the context of inflammatory bowel disease (IBD), which is expected to become an increasingly significant health issue in the future. In addition, the objective of the study was to assess whether ChatGPT serves as a reliable and useful resource for both patients and healthcare professionals.
    METHODS: For this study, 20 specific questions were identified for the two main components of IBD, which are Crohn's disease (CD) and ulcerative colitis (UC). The questions were divided into two sets: one set contained questions directed at healthcare professionals while the second set contained questions directed toward patients. The responses were evaluated with seven-point Likert-type reliability and usefulness scales.
    RESULTS: The distribution of the reliability and utility scores was calculated into four groups (two diseases and two question sources) by averaging the mean scores from both raters. The highest scores in both reliability and usefulness were obtained from professional sources (5.00±1.21 and 5.15±1.08, respectively). The ranking in terms of reliability and usefulness, respectively, was as follows: CD questions (4.70±1.26 and 4.75±1.06) and UC questions (4.40±1.21 and 4.55±1.31). The reliability scores of the answers for the professionals were significantly higher than those for the patients (both raters, p=0.032).
    CONCLUSION: Despite its capacity for reliability and usefulness in the context of IBD, ChatGPT still has some limitations and deficiencies. The correction of ChatGPT's deficiencies and its enhancement by developers with more detailed and up-to-date information could make it a significant source of information for both patients and medical professionals.
    Keywords:  artificial intelligence (ai); chatgpt; crohn’s disease (cd); healthcare research; inflammatory bowel diseases (ibd); large language model; ulcerative colitis (uc)
    DOI:  https://doi.org/10.7759/cureus.46736
  11. Cureus. 2023 Oct;15(10): e47080
      Objective: Complications of esophageal strictures have decreased in recent years due to evolved endoscopic methods, primarily esophageal dilation. This study examines the readability of patient information on esophageal dilation across 40 websites found via internet search.
    Methods: In this cross-sectional readability study, the content of the first 40 websites about "esophageal dilation" and "upper GI endoscopy" found via Google search was analyzed using WebFX (Harrisburg, PA), an established readability tool. Five readability indices, each having a unique mathematical formula, were used to analyze the online material. Outputs were then scored and averaged together.
    Results: The aggregate readability of online esophageal dilation information was 9.2, corresponding to a ninth-grade reading level. This average was based on the 38 unique, non-duplicated websites evaluated.
    Conclusions: The information currently available on the internet regarding esophageal dilation is at a difficult reading level for the average patient. Substantial work is still needed on information accessibility to enhance patient comprehension of the invasive procedures they are poised to undergo, and complex procedures must be articulated more clearly to prepare patients for forthcoming medical procedures.
    Keywords:  click-through rate; esophageal dilation; esophageal dysmotility; esophageal stricture; health literacy; online information; readability; upper gi endoscopy
    DOI:  https://doi.org/10.7759/cureus.47080
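Several entries in this issue report Flesch Reading Ease scores and Flesch-Kincaid grade levels. The sketch below applies the standard published formulas; the syllable counter is a crude vowel-group approximation, so the scores are only indicative:

```python
# Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) from word,
# sentence, and (approximate) syllable counts.
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / sentences
    syllables_per_word = syllables / len(words)
    fre = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    fkgl = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return fre, fkgl

print(readability("The doctor gently stretches the narrowed part of the food pipe."))
```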
  12. Endocrine. 2023 Nov 27.
     INTRODUCTION: Recent data show that many patients with neuroendocrine tumors (NETs) do not receive sufficient education regarding their diagnosis and therefore tend to search for information or literature independently. We sought to assess the readability of online patient information (OPI) for neuroendocrine tumors and to analyze compliance with NIH guidelines for OPI (readability level of 8th grade or below).
    METHODS: We performed a Google search to compile a list of the top ten OPI websites using the search term "neuroendocrine tumor". We calculated median readability scores for each website across nine readability scales as well as overall readability scores across all sites.
    RESULTS: A total of 10 websites were included for analysis. Of these, 6/10 (60%) belonged to academic institutions, 2/10 (20%) were from non-profit organizations, and 1 each (10%) came from a government site and a patient advocacy organization. The median readability score (with interquartile range, IQR) for all websites across the nine readability tests was 9.6 (IQR 8.8-11.2).
    CONCLUSION: Our findings underscore the need to develop online patient education material that is readable and therefore easily understandable for patients and caregivers dealing with this unique group of malignancies.
    DOI:  https://doi.org/10.1007/s12020-023-03607-0
  13. Cureus. 2023 Nov;15(11): e49184
      Introduction: A common complication of first-time or recurrent shoulder dislocations is bone loss at the humeral head and glenoid. Recurrent shoulder instability is often a result of bony defects in the glenoid following shoulder dislocations. In the setting of glenoid bone loss, surgical interventions are generally required to restore stability. The Latarjet procedure is a challenging operation and, due to its complexity, may be associated with operative complications. It can be difficult to explain the procedure to patients in a manner that is easily comprehensible, which may lead to confusion or being overwhelmed with information. Hence, it is important that the information available to patients is easily accessible and understandable to allow for adequate health literacy. Health literacy is defined as the ability to make health decisions in the context of everyday life.
    Methods: The search engines Google and Bing were accessed on a single day in July 2023, searching the terms "Latarjet surgery" and "Latarjet procedure." For each term on both search engines, the first three pages were evaluated, resulting in a total of 114 websites for review. Of these, 25 websites met the inclusion criteria and underwent further in-depth analysis through the online readability software WebFX. This software generated a Flesch Reading Ease Score (FRES) and a Reading Grade Level (RGL) for each website.
    Results: The mean FRES was 50.3 (SD ±12.5), categorizing the material as 'fairly difficult to read.' The mean RGL was 8.12 (SD ±2.35), which exceeds the recommended target.
    Conclusion: The results of this study demonstrate that the material available on the Internet about the Latarjet procedure is above the recommended readability levels for the majority of the population. Our findings align with similar studies assessing the readability of online patient information. Based on these findings, physicians should provide patients with vetted information to facilitate a better understanding of the procedure, thereby enabling patients to make more informed decisions regarding their health.
    Keywords:  health literacy; latarjet procedure; orthopaedics; readability; sports surgery
    DOI:  https://doi.org/10.7759/cureus.49184
  14. JMIR Form Res. 2023 Nov 27;7:e47762
       BACKGROUND: Nasopharyngeal carcinoma (NPC) is a rare disease that is strongly associated with exposure to the Epstein-Barr virus and is characterized by the formation of malignant cells in nasopharynx tissues. Early diagnosis of NPC is often difficult owing to the location of initial tumor sites and the nonspecificity of initial symptoms, resulting in a higher frequency of advanced-stage diagnoses and a poorer prognosis. Access to high-quality, readable information could improve the early detection of the disease and provide support to patients during disease management.
    OBJECTIVE: This study aims to assess the quality and readability of publicly available web-based information in the English language about NPC, using the most popular search engines.
    METHODS: Key terms relevant to NPC were searched across 3 of the most popular internet search engines: Google, Yahoo, and Bing. The top 25 results from each search engine were included in the analysis. Websites that contained text written in languages other than English, required paywall access, targeted medical professionals, or included nontext content were excluded. Readability for each website was assessed using the Flesch Reading Ease score and the Flesch-Kincaid grade level. Website quality was assessed using the Journal of the American Medical Association (JAMA) and DISCERN tools as well as the presence of a Health on the Net Foundation seal.
    RESULTS: Overall, 57 suitable websites were included in this study; 26% (15/57) of the websites were academic. The mean JAMA and DISCERN scores of all websites were 2.80 (IQR 3) and 57.60 (IQR 19), respectively, with a median of 3 (IQR 2-4) and 61 (IQR 49-68), respectively. Health care industry websites (n=3) had the highest mean JAMA score of 4 (SD 0). Academic websites (15/57, 26%) had the highest mean DISCERN score of 77.5. The Health on the Net Foundation seal was present on only 1 website, which also achieved a JAMA score of 3 and a DISCERN score of 50. Significant differences were observed between the JAMA score of hospital websites and the scores of industry websites (P=.04), news service websites (P<.048), charity and nongovernmental organization websites (P=.03). Despite being a vital source for patients, general practitioner websites were found to have significantly lower JAMA scores compared with charity websites (P=.05). The overall mean readability scores reflected an average reading age of 14.3 (SD 1.1) years.
    CONCLUSIONS: The results of this study suggest an inconsistent and suboptimal quality of information related to NPC on the internet. On average, websites presented readability challenges, as written information about NPC was above the recommended reading level of sixth grade. As such, web-based information requires improvement in both quality and accessibility, and healthcare providers should be selective about the information they recommend to patients, ensuring it is reliable and readable.
    Keywords:  AI; DISCERN; JAMA; Journal of the American Medical Association; artificial intelligence; internet information; nasopharyngeal cancer; readability
    DOI:  https://doi.org/10.2196/47762
  15. Cureus. 2023 Oct;15(10): e47132
      Background and aims: In the age of social media, a vast amount of information is widely and easily accessible. Platforms such as Instagram allow users to post pictures and videos that can reach millions of people. Healthcare providers could use this reach to educate a large share of the population about a disease such as hypothyroidism with easily digestible infographics. However, this easy accessibility comes with the risk of rampant misinformation. This study aimed to evaluate the characteristics of Instagram posts, the type of information, and the quality and reliability of the information posted about hypothyroidism.
    Methodology: This cross-sectional observational study was conducted on Instagram over the course of days. Top posts meeting the inclusion criteria under seven different hypothyroidism-related hashtags were surveyed for content and social media metrics by the authors using Google Forms. The quality and reliability of the posts were analyzed using the Global Quality Score (GQS) and DISCERN scales, respectively. The data were exported to an Excel sheet and analyzed using SPSS version 21.0 (Armonk, NY: IBM Corp.).
    Results: A total of 629 posts met the inclusion criteria, of which 62.5% were images and 37.5% were reels. The content focused heavily on the medical aspects of hypothyroidism, with posts about symptoms (46.1%), prevention (39.59%), cause/etiology (36.41%), and treatment (34.34%). The median DISCERN score, which reflects the reliability of the posts, was highest for doctors at 3, and the least reliable posts were uploaded by dieticians, naturopathic doctors, and patients. Posts uploaded by nutritionists and naturopathic doctors had a median GQS of 3.
    Conclusions: There is a need to establish a quality control body that regulates the quality and reliability of such posts to curb misinformation and help patients gain easy access to reliable resources that will aid their decision-making.
    Keywords:  discern; global quality scale; gqs; hypothyroidism; instagram; quality; reliability; social media
    DOI:  https://doi.org/10.7759/cureus.47132
  16. Cureus. 2023 Oct;15(10): e47340
      Introduction: YouTube, the world's largest video platform, hosts thousands of educational surgical videos that many trainees rely on to enhance their understanding and proficiency in various surgical procedures. Consequently, a crucial inquiry arises regarding the trustworthiness of these videos as a valuable resource for these trainees. In this article, we address this question by focusing on one of the most frequently performed surgical procedures in the field of urology and assessing the effectiveness of these videos as an educational tool for urology trainees (ST3+: Specialty Training Year 3 and above).
    Methodology: We conducted a comprehensive search on YouTube for all videos related to 'Testicular Exploration'. After applying specific inclusion and exclusion criteria, we identified a total of nine eligible videos for analysis. These videos were assessed using the LAParoscopic Surgery Video Educational GuidelineS (LAP-VEGaS) scoring system, which categorized them into two distinct groups. The first group, known as the 'high-quality group', included videos that scored 11 points or higher according to the LAP-VEGaS scoring criteria. The second group, termed the 'low-quality group', consisted of videos that scored less than 11 points using the LAP-VEGaS scoring tool. Additionally, we collected data on various metrics, such as video view counts, duration, likes and dislikes counts, comments count, like ratio, view ratio, and power index, and performed a comparative analysis between the two aforementioned groups.
    Results: Between April 2013 and September 2023, the selected videos exhibited an average total view count of 95,546±138,000. The videos had an average duration of 6.35±2.26 minutes. Furthermore, the mean values for likes and dislikes were 461.55±581 and 2.89±2.86, respectively. In contrast, the mean like ratio, view ratio, and power index were 0.98±0.0112, 176,00±13,100, and 173.80±131, respectively. The mean LAP-VEGaS score for videos related to testicular exploration was 9.94±2.05. It is noteworthy that the first group had a statistically higher number of dislikes; however, the view count, comments count, likes count, and view ratio were statistically lower in the same group.
    Conclusion: Videos related to testicular exploration on YouTube exhibit notably low quality and do not serve as a valuable resource for urology trainees. Key factors such as video duration, total view count, and viewer interactions (including likes, dislikes, and comments) should not be relied upon as indicators of educational video quality. Consequently, it is advisable for urology trainees to refrain from using YouTube as a primary source for learning about testicular exploration. Instead, they should seek guidance and support from experienced senior colleagues, educational supervisors, or consultants to explore more reliable sources of information for this surgical procedure.
    Keywords:  surgical skills; surgical-education; testicular exploration; urology; youtube®
    DOI:  https://doi.org/10.7759/cureus.47340
  17. Int J Impot Res. 2023 Nov 30.
      The aim of this study was to evaluate the accuracy and quality of videos published on YouTube on the subject of disorders of sexual development. The search was performed on YouTube using the terms 'disorder of sexual development', 'differences in sex development', 'variations in sex development' and 'intersex'. Videos in languages other than English or with poor sound or image quality were excluded from the study. The videos were evaluated in terms of source, content, intended audience, commercial bias, and accuracy of information, and video features were recorded. Journal of the American Medical Association (JAMA) criteria, the modified DISCERN scale, and the Global Quality Score (GQS) were used for quality evaluation. A total of 150 videos were evaluated. The source of 30% of the videos was medical education sites, the content of 43.3% was general information, and the target audience of 40.6% was patients/society. The rate of accurate information was 90% and the rate of commercial bias was 7.3%. The median JAMA, GQS, and modified DISCERN scores were 1 (IQR: 2, range: 0-2), 3 (IQR: 2, range: 2-4), and 3 (IQR: 2, range: 1-3), respectively. These scores were correlated with each other (rho = 0.834-0.909, p < 0.001). Scores of videos whose source was an academic journal or university were higher than those of other videos (p < 0.001). The median duration of good-quality videos was longer (p < 0.001). A negative correlation was found between all scoring systems and the number of views, likes, and comments, views and comments per day, and days since upload date (rho = -0.332, -0.273, -0.382, -0.249, -0.323, and -0.285, respectively; p < 0.05). YouTube is a good platform for learning about disorders of sexual development, but quality may vary depending on the video source.
    DOI:  https://doi.org/10.1038/s41443-023-00800-7
  18. Front Pharmacol. 2023;14:1264794
      Background: Due to the huge number of drugs available and the rapid growth and change in drug information, healthcare professionals, especially physicians, frequently require reliable, easily accessible, rapid, and accurate reference sources to obtain the necessary drug information. Several sources of information are available for physicians to use and select from; however, the information-seeking behaviour of healthcare providers varies, and this process can be challenging.
    Objectives: In this study, Jordanian physicians were approached to evaluate the drug information they require, the sources of information they use, the perceived credibility of the sources, and the challenges they face when searching for the most accurate and current information about drugs.
    Methods: This is an observational, cross-sectional study. A self-administered questionnaire was distributed to practising physicians in Jordan using a convenience sampling method (purposive sampling followed by snowball sampling) regardless of their speciality, age, gender, seniority, or place of employment.
    Results: Three hundred and eighty physicians participated in the study. Most participants responded that they performed drug information searches on a weekly (155, 40.8%) or daily basis (150, 39.5%). The drug-related information that physicians most frequently searched for concerned dosage regimens and adverse drug events. The majority of surveyed doctors (97.9%) reported using online websites to acquire drug information; UpToDate®, Medscape, and Drugs.com were the most frequently used online databases, although many participants did not consider online sources to be the most reliable. The most prevalent and recurrent challenges encountered were an inability to access subscription-only journals and websites (56.6%), difficulty identifying trusted and credible sources (41.8%), and the enormous number of available sources (35.3%). However, these challenges were less of a problem for physicians who currently work or have previously worked in academia (p < 0.001).
    Conclusion: This study demonstrated that Jordanian physicians frequently use online websites to look for drug information and that all doctors face challenges in this process, particularly those with no experience in academia. This suggests that being in academia makes information-seeking easier, which highlights the need for academics to transfer their knowledge and experience to their non-academic colleagues and to the upcoming generations of physicians.
    Keywords:  Jordan; challenges; drug information; information sources; information-seeking behaviour
    DOI:  https://doi.org/10.3389/fphar.2023.1264794
  19. Inquiry. 2023 Jan-Dec;60:469580231217982
      Few studies have investigated whether improving electronic health (eHealth) literacy can alleviate food neophobia in university students. We explored the associations among online health information (OHI)-seeking behaviors, eHealth literacy, and food neophobia. A questionnaire-based, cross-sectional study of 5151 university students in China was conducted from October to December 2022. The study used Chinese versions of the eHealth literacy scale (C-eHEALS) and the food neophobia scale (FNS-C), as well as an OHI-seeking behaviors scale. Data were collected through the Wenjuanxing software. Analysis of variance, t-tests, the Pearson correlation coefficient, and chi-square tests were performed for data analysis. The average (SD) scores of the C-eHEALS and FNS-C were 26.81 (5.83) and 38.86 (6.93), respectively. University students in China had low C-eHEALS and high FNS-C levels, and there were significant differences between the high and low C-eHEALS (P < .001) and FNS-C (P < .001) groups. There was also a significant correlation between eHealth literacy and food neophobia (P < .001), with lower eHealth literacy indicating a higher probability of food neophobia. University students with high FNS-C and low C-eHEALS scores showed more OHI-seeking behaviors. When schools, communities, and parents want to alleviate students' food neophobia, OHI-seeking training to improve eHealth literacy may be a good intervention.
    Keywords:  electronic health literacy; food neophobia; online health information seeking behaviors; university students
    DOI:  https://doi.org/10.1177/00469580231217982
  20. PEC Innov. 2023 Dec 15;3:100232
       Objective: To explore factors associated with communication and information-seeking after receipt of skin cancer prevention information among Hispanic individuals.
    Methods: Multivariable logistic regression was used to analyze existing data on demographics, personal experience, salience, and beliefs variables collected from Hispanic individuals to determine independent associations with sharing and seeking information about skin cancer prevention.
    Results: Of 578 participants, 53% reported any communication about skin cancer prevention behaviors or skin cancer genetic risk; and 31% and 21% sought additional information about preventive behaviors or genetic risk, respectively. Female sex, greater perceived severity, higher comparative chance of getting skin cancer, and lower health literacy were associated with greater communication, while having no idea of one's own skin cancer risk was related to less communication. Greater health numeracy and higher cancer worry were associated with information-seeking about prevention behaviors and genetic risk.
    Conclusion: Up to half of participants reported communication or information-seeking, although factors associated with specific activities differed. Future studies should evaluate how to promote communication behaviors in the Hispanic community and how sharing and seeking information influence an individual's network prevention practices.
    Innovation: Several factors related to communication behaviors among Hispanic people after obtaining skin cancer prevention information were identified.
    Trial registration: This trial was registered on clinicaltrials.gov (NCT03509467).
    Keywords:  Communication; Genetic risk; Hispanic people; Information-seeking; MC1R; Skin cancer prevention
    DOI:  https://doi.org/10.1016/j.pecinn.2023.100232
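The entry above identifies independent associations with multivariable logistic regression; a minimal sketch of that kind of model with statsmodels on simulated data (variable names and effect sizes are invented for illustration, not the study's):

```python
# Multivariable logistic regression: a binary "shared information" outcome
# regressed on several predictors. All data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "perceived_severity": rng.integers(1, 6, n),
    "health_literacy": rng.integers(1, 6, n),
})
linear_predictor = (-1.0 + 0.8 * df["female"]
                    + 0.4 * df["perceived_severity"]
                    - 0.3 * df["health_literacy"])
df["shared_info"] = rng.binomial(1, 1 / (1 + np.exp(-linear_predictor)))

model = smf.logit("shared_info ~ female + perceived_severity + health_literacy", data=df).fit()
print(model.summary())  # adjusted odds ratios are np.exp(model.params)
```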