ChatGPT vs. Humans: NYU Study Reveals Near-Identical Healthcare Responses

This article was originally published by AiThority.

It’s been almost a year since ChatGPT was launched to the public, and it is still impressing professionals across industries with its capabilities. One industry where ChatGPT has brought about a particularly broad transformation is healthcare, where it is helping transform patient communication, streamline administrative tasks, and draft medical advice.

According to a recent study published in JMIR Medical Education, researchers from New York University’s (NYU) Grossman School of Medicine and Tandon School of Engineering found that ChatGPT’s responses to healthcare-related queries were difficult for laypeople to reliably distinguish from those provided by human clinicians.

AI chatbots in healthcare, including ChatGPT, are being explored as tools to assist in drafting responses to patients’ questions. However, the study highlighted that the general population’s ability to distinguish between chatbot and human responses, along with patients’ trust in chatbots in this context, is not well-established.

To help bridge this gap, the researchers explored the feasibility of using ChatGPT and other AI-based chatbots to support patient-provider communication. Their findings shed light on the potential of AI-driven chatbots to play a meaningful role in healthcare interactions.

A Detailed Study on Healthcare Responses

The researchers recruited 392 participants, all aged 18 and above. Each participant was presented with a set of ten patient questions along with their corresponding responses. Half of these responses were written by human healthcare providers, while the other half came from ChatGPT, the large language model developed by OpenAI.

The participants’ task was twofold: first, to determine whether each response came from a human provider or from ChatGPT; and second, to rate their level of trust in the chatbot responses on a 5-point scale ranging from “completely untrustworthy” to “completely trustworthy.”

The researchers aimed to uncover whether people could distinguish between human and chatbot-generated responses. They also wanted to explore the level of trust participants placed in the healthcare advice offered by ChatGPT.

The study aimed to clarify whether ChatGPT could play a significant role in patient-provider communication. Understanding how users perceive and trust AI-generated responses is crucial to assessing the feasibility of integrating such technology into the healthcare landscape, and the results have the potential to shape the future of AI-powered healthcare assistance.

Results: Participants’ Perception of Human-Like Responses

Surprisingly, the study found that people have only a limited ability to distinguish between chatbot and human-generated responses. On average, participants correctly identified chatbot responses 65.5% of the time and provider responses 65.1% of the time, with accuracy ranging from 49.0% to 85.7% across the ten questions. These results were consistent across all demographic categories of respondents.

In terms of trust, participants displayed modest overall trust in chatbot responses, with an average score of 3.4 out of 5. However, the level of trust varied with the complexity of the health-related task in question. Logistical questions, such as scheduling appointments and insurance inquiries, garnered the highest trust rating, with an average score of 3.94. Preventative care, which covers topics like vaccines and cancer screenings, followed with an average score of 3.52.

Diagnostic and treatment advice received the least trust, with respective scores of 2.90 and 2.89.
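To make these figures concrete, here is a minimal sketch of how identification accuracy and category-level trust scores of this kind could be tabulated from survey records. The records, field names, and categories shown are hypothetical placeholders, not the study’s actual dataset.

```python
# Illustrative sketch only: the records, field names, and categories below are
# hypothetical placeholders, not the NYU study's actual data.
from collections import defaultdict

# One record per (participant, question) pair: the true source of the response,
# the participant's guess, the question category, and a 1-5 trust rating.
records = [
    {"source": "chatbot",  "guess": "chatbot",  "category": "logistical",   "trust": 4},
    {"source": "provider", "guess": "chatbot",  "category": "diagnostic",   "trust": 3},
    {"source": "chatbot",  "guess": "provider", "category": "preventative", "trust": 4},
    {"source": "provider", "guess": "provider", "category": "treatment",    "trust": 2},
]

# Identification accuracy, split by the true source of each response.
correct, total = defaultdict(int), defaultdict(int)
for r in records:
    total[r["source"]] += 1
    correct[r["source"]] += int(r["guess"] == r["source"])
for source, n in total.items():
    print(f"{source}: {100 * correct[source] / n:.1f}% correctly identified")

# Mean trust rating on the 5-point scale, grouped by question category.
trust_sum, trust_n = defaultdict(float), defaultdict(int)
for r in records:
    trust_sum[r["category"]] += r["trust"]
    trust_n[r["category"]] += 1
for category, n in trust_n.items():
    print(f"{category}: mean trust {trust_sum[category] / n:.2f} out of 5")
```

On a full dataset of participant ratings, aggregations of this shape would yield per-source accuracies and per-category trust averages of the kind quoted above.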

Implications for Healthcare Communication

The study’s results highlight the valuable role that chatbots can play in facilitating patient-provider interactions, particularly for administrative tasks and the management of common chronic diseases.

One notable finding is the comparatively high trust participants placed in chatbot responses concerning logistical and routine healthcare matters. Patients appear comfortable relying on chatbots for tasks like scheduling appointments, checking insurance details, and seeking general information. This trust could lead to greater patient engagement and satisfaction, as chatbots can provide quick responses, reduce waiting times, and streamline processes.

By taking on administrative tasks, chatbots can effectively assist healthcare providers, allowing them to focus more on personalized patient care. When administrative burdens are alleviated, providers have more time to concentrate on building strong relationships with their patients, offering empathy, and delivering tailored medical advice. Consequently, this can lead to improved patient outcomes and overall healthcare experiences.

In settings where patients might not need immediate attention from a human healthcare provider, chatbots can be a reliable first point of contact. They can efficiently handle routine inquiries and provide essential information, freeing up human providers to address more complex medical issues. This dynamic can enhance the efficiency of healthcare systems and optimize the allocation of medical resources.

Moreover, the integration of chatbots in healthcare communication has the potential to enhance accessibility and extend healthcare services to a broader population. Chatbots can be accessible 24/7, allowing patients to seek information and support outside regular clinic hours. This feature is especially beneficial for individuals with busy schedules or those residing in remote areas, as they can access reliable healthcare advice whenever they need it.

However, while chatbots demonstrate promise in streamlining administrative tasks and enhancing healthcare communication, the study’s findings also emphasize the need for cautious implementation. As chatbots expand their role in patient-provider interactions, it is crucial to strike a balance and ensure that certain medical decisions and complex healthcare matters remain within the purview of human healthcare professionals. The study underscores the importance of continuous research and refining chatbot capabilities to ensure safety, accuracy, and ethical considerations in healthcare settings.

The Road Ahead

Despite the promising results, the researchers caution against relying solely on chatbot-generated advice for more critical clinical roles. The study’s authors emphasize the need for further research in this area. Providers should approach the integration of chatbots into clinical decision-making with caution and critical judgment due to the inherent limitations and potential biases of AI models.

The study, aptly titled ‘Putting ChatGPT’s Medical Advice to the (Turing) Test: Survey Study,’ published in JMIR Medical Education, has shed light on the immense potential of chatbots in transforming the landscape of healthcare communication. The ability of ChatGPT to provide responses almost indistinguishable from human healthcare providers, combined with the overall trust expressed by patients, signifies a significant step forward in leveraging AI technology for the betterment of patient care. As further advancements and research unfold, chatbots are poised to become reliable allies to healthcare providers, enhancing patient experiences and optimizing healthcare delivery.
