
(021) ChatGPT's Ability to Assess Quality and Readability of Online Medical Information

Abstract

Introduction: Health literacy plays a crucial role in enabling patients to understand and effectively use medical information. As technology rapidly advances, the significance of health literacy becomes even more pronounced, particularly in comprehending complex medical information. Artificial Intelligence (AI) platforms have garnered significant attention for their remarkable ability to generate automated responses to a wide range of prompts. However, their capacity to assess the quality and readability of provided text remains uncertain. Given the growing prominence of AI web assistant tools, we hypothesized that integrating these tools into patients' web searches could enhance the retrieval of accurate medical information.

Objective: To evaluate the proficiency of the Conversational Generative Pre-Trained Transformer (ChatGPT) in assessing readability and in applying the DISCERN tool to assess the quality of online content regarding shock wave therapy for erectile dysfunction.

Methods: Websites were retrieved using a Google search of "shock wave therapy for erectile dysfunction" with location filters disabled. Readability was analyzed using the Readable software (Readable.com, Horsham, United Kingdom). Quality was assessed independently by three reviewers using the DISCERN tool. The same plain-text files were then input into ChatGPT to determine whether it produced comparable metrics for readability and quality.

Results: The results revealed a notable disparity between ChatGPT's readability assessment and that obtained from a reliable tool, Readable.com (p<0.05), indicating a lack of alignment between ChatGPT's algorithm and those of established tools such as Readable.com. Similarly, the DISCERN score generated by ChatGPT differed significantly from the scores generated manually by human evaluators (p<0.05), suggesting that ChatGPT may not be capable of accurately identifying poor-quality information sources regarding shock wave therapy as a treatment for erectile dysfunction.

Conclusions: ChatGPT's evaluation of the quality and readability of online text regarding shock wave therapy for erectile dysfunction differs from that of human raters and trusted tools. ChatGPT's current capabilities were not sufficient to reliably assess the quality and readability of textual content. Further research is needed to elucidate the role of AI in the objective evaluation of online medical content in other fields. Continued development of AI and the incorporation of tools such as DISCERN into AI software may enhance the way patients navigate the web in search of high-quality medical content in the future.

Disclosure: No.
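The abstract does not specify which readability indices Readable reported or how they were computed. As a point of reference only, the Flesch Reading Ease score, one of the standard indices such tools typically compute, follows a fixed formula; a minimal Python sketch is shown below (the naive syllable counter is an approximation, and commercial tools use more careful tokenization):

import re

def count_syllables(word):
    # Naive vowel-group heuristic; dedicated readability tools use dictionaries.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

print(round(flesch_reading_ease("Shock wave therapy may improve erectile function. Ask your doctor."), 1))

Higher scores indicate easier text; patient-education material is often targeted at roughly the 60-70 range.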
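The study does not report whether the pages were submitted to ChatGPT through the web interface or programmatically. For readers interested in reproducing this kind of comparison, a minimal sketch using the OpenAI Python client is given below; the prompt wording and the model name are illustrative assumptions, not those used in the study:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are rating a consumer health web page about shock wave therapy for "
    "erectile dysfunction. Apply the 16-item DISCERN instrument: score each "
    "item from 1 (no) to 5 (yes) and report the item scores and the total (16-80).\n\n"
    "TEXT:\n{text}"
)

def discern_score(page_text: str, model: str = "gpt-4o") -> str:
    # Model name is a placeholder; the abstract does not state which ChatGPT version was used.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(text=page_text)}],
        temperature=0,  # reduce run-to-run variation when scoring repeatedly
    )
    return response.choices[0].message.content

print(discern_score("Shock wave therapy uses low-intensity sound waves..."))

The model's item-by-item scores could then be compared against the three human reviewers' DISCERN ratings for the same plain-text files, as the study describes.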

Related Results

Exploring Large Language Models Integration in the Histopathologic Diagnosis of Skin Diseases: A Comparative Study
Abstract Introduction The exact manner in which large language models (LLMs) will be integrated into pathology is not yet fully comprehended. This study examines the accuracy, bene...
Assessment of Chat-GPT, Gemini, and Perplexity in Principle of Research Publication: A Comparative Study
Abstract Introduction Many researchers utilize artificial intelligence (AI) to aid their research endeavors. This study seeks to assess and contrast the performance of three sophis...
CHATGPT ASSISTANCE ON BIOCHEMISTRY LEARNING OUTCOMES OF PRE-SERVICE TEACHERS
This research investigates the effect of ChatGPT on the learning outcomes of pre-service biology teachers. Sampling was done by purposive sampling in class A (treated with ChatGPT)...
Appearance of ChatGPT and English Study
The purpose of this study is to examine the definition and characteristics of ChatGPT in order to present the direction of self-directed learning to learners, and to explore the po...
User Intentions to Use ChatGPT for Self-Diagnosis and Health-Related Purposes: Cross-sectional Survey Study (Preprint)
BACKGROUND With the rapid advancement of artificial intelligence (AI) technologies, AI-powered chatbots, such as Chat Generative Pretrained Transformer (Cha...
Global Healthcare Professionals’ Perceptions of Large Language Model Use In Practice (Preprint)
BACKGROUND Chat Generative Pre-Trained Transformer (ChatGPT™) is a large language model (LLM)-based chatbot developed by OpenAI™. ChatGPT has many potenti...
