Assessing the quality and readability of online patient information: ENT UK patient information e-leaflets versus responses by a generative artificial intelligence

Journal article


Shamil, E., Ko, Tsz Ki, Fan, K., Schuster-Bruce, James, Jaafar, Mustafa, Khwaja, Sadie, Eynon-Lewis, Nicholas, D'Souza, Alwyn and Andrews, Peter 2024. Assessing the quality and readability of online patient information: ENT UK patient information e-leaflets versus responses by a generative artificial intelligence. Facial Plastic Surgery. https://doi.org/10.1055/a-2413-3675
Authors: Shamil, E., Ko, Tsz Ki, Fan, K., Schuster-Bruce, James, Jaafar, Mustafa, Khwaja, Sadie, Eynon-Lewis, Nicholas, D'Souza, Alwyn and Andrews, Peter
Abstract

Background 
The evolution of artificial intelligence has introduced new ways to disseminate health information, including natural language processing models such as ChatGPT. However, the quality and readability of such digitally generated information remain understudied. This study is the first to compare the quality and readability of digitally generated health information against leaflets produced by professionals.

Methodology 
Five ENT UK patient information leaflets and their corresponding ChatGPT responses were extracted from the Internet. Assessors with varying degrees of medical knowledge evaluated the content using the Ensuring Quality Information for Patients (EQIP) tool and readability measures including the Flesch-Kincaid Grade Level (FKGL). Statistical analysis was performed to identify differences between leaflets, assessors, and sources of information.
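
For context, the FKGL readability measure referenced above is a fixed formula over sentence, word, and syllable counts. The sketch below is a minimal illustration only, not the tooling used in the study; the syllable counter is an assumed vowel-group heuristic.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: one syllable per contiguous vowel group, minimum one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = "The evolution of artificial intelligence has introduced new ways to disseminate health information."
print(round(flesch_kincaid_grade(sample), 1))  # higher values indicate a higher required reading grade
```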

Results 
ENT UK leaflets were of moderate quality, with a median EQIP score of 23. Statistically significant differences in overall EQIP score were identified between ENT UK leaflets, whereas ChatGPT responses were of uniform quality. Nonspecialist doctors gave the highest EQIP scores, while medical students gave the lowest. The mean readability of ENT UK leaflets was higher than that of the ChatGPT responses. The information metrics of ENT UK leaflets were moderate and varied between topics. Equivalent ChatGPT information provided comparable content quality, but with reduced readability.

Conclusion 
ChatGPT patient information and professionally produced leaflets had comparable content, but large language model content required a higher reading age. With the increasing use of online health resources, this study highlights the need for a balanced approach that considers both the quality and readability of patient education materials.

Keywords: ChatGPT; Patient information leaflets; Rhinology leaflets; Facial plastic surgery leaflets; Patient information
Year: 2024
Journal: Facial Plastic Surgery
Publisher: Georg Thieme Verlag KG
ISSN: 0736-6825; 1098-8793
Digital Object Identifier (DOI): https://doi.org/10.1055/a-2413-3675
Official URL: https://www.thieme-connect.com/products/ejournals/abstract/10.1055/a-2413-3675
Publication dates
Online: 15 Oct 2024
Publication process dates
Deposited: 24 Oct 2024
Output status: Published
Permalink: https://repository.canterbury.ac.uk/item/99743/assessing-the-quality-and-readability-of-online-patient-information-ent-uk-patient-information-e-leaflets-versus-responses-by-a-generative-artificial-intelligence

Related outputs

“Comprehensive Rhinoplasty: Structural and Preservation Concepts” by Sam P. Most
Vansteelant, Géraldine and D'Souza, Alwyn Ray 2024. “Comprehensive Rhinoplasty: Structural and Preservation Concepts” by Sam P. Most. Facial Plastic Surgery. https://doi.org/10.1055/s-0044-1786186
Myomodulation using botulinum toxin in septorhinoplasty for crooked noses: Introducing the concept and application of Nasal Muscle Imbalance Theory (NMIT)
Wong, E. and D'Souza, Alwyn Ray 2023. Myomodulation using botulinum toxin in septorhinoplasty for crooked noses: Introducing the concept and application of Nasal Muscle Imbalance Theory (NMIT). Facial Plastic Surgery. 40 (01), pp. 052-060. https://doi.org/10.1055/a-2047-7179