Cureus. 2024 Nov;16(11):e73212
Introduction: Epilepsy is a chronic disorder whose management requires patient education to avoid triggers and complications. This study aimed to evaluate and compare the effectiveness of two artificial intelligence (AI) tools, ChatGPT (version 3.5, OpenAI, Inc., San Francisco, California, United States) and Google Gemini (version 1.5, Google LLC, Mountain View, California, United States), in generating patient education guides for epilepsy disorders.

Methodology: Patient education guides were generated with ChatGPT and Google Gemini. The study analyzed word count, sentence count, readability, and ease of understanding using the Flesch-Kincaid calculator; examined similarity using the QuillBot plagiarism tool; and assessed reliability using a modified DISCERN score. Statistical analysis used an unpaired t-test, with a p-value <0.05 considered significant.

Results: There was no statistically significant difference between ChatGPT and Google Gemini in word count (p=0.75), sentence count (p=0.96), average words per sentence (p=0.66), grade level (p=0.67), similarity percentage (p=0.57), or reliability score (p=0.42). Flesch Reading Ease scores for ChatGPT versus Google Gemini were 38.6 versus 43.6 for generalized tonic-clonic seizures (GTCS), 18.7 versus 45.5 for myoclonic seizures, and 22.4 versus 55.8 for status epilepticus, indicating that Google Gemini generated significantly more readable responses (p=0.0493). Average syllables per word (p=0.035) were also significantly lower for Google Gemini, at 1.8 for GTCS and myoclonic seizures and 1.7 for status epilepticus, versus 1.9, 2.0, and 2.1, respectively, for ChatGPT.

Conclusions: A significant difference was seen in only two parameters. Further improvement in AI tools is necessary for them to provide effective patient education guides.
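For reference, the standard Flesch readability formulas, which are assumed here to be what the Flesch-Kincaid calculator computes from the word, sentence, and syllable counts reported above, are:

\[
\text{Reading Ease} = 206.835 - 1.015\,\frac{\text{total words}}{\text{total sentences}} - 84.6\,\frac{\text{total syllables}}{\text{total words}}
\]

\[
\text{Grade Level} = 0.39\,\frac{\text{total words}}{\text{total sentences}} + 11.8\,\frac{\text{total syllables}}{\text{total words}} - 15.59
\]

Higher Reading Ease scores indicate easier text, which is consistent with Google Gemini's higher ease scores coinciding with its lower average syllables per word.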
Keywords: artificial intelligence; chatgpt; education guide; epilepsy; generalized tonic-clonic seizures (gtcs); google gemini; myoclonic seizures; seizures; status epilepticus