
The Promise and Peril of AI in Medicine

AI is so convincing that it can even lead experts to second-guess themselves, which is dangerous. Therefore, deploying this technology in medicine demands…


This article was originally published by Clinical Omics

By Nephi Walton, MD, MS, FACMG, FAMIA


AI technology, designed to mimic the human brain, continues to astonish even those who understand it well with its increasingly human-like interactions. In my study of how medical information is disseminated, particularly for genetic conditions, I have found a significant error rate. Yet AI is so convincing that it can lead even experts to second-guess themselves, which is dangerous. Deploying this technology in medicine therefore demands extreme caution.

While medicine is still testing this technology and developing use cases, patients are already using it. ChatGPT readily gives advice on what to do with positive genetic testing results, often with erroneous information. For example, when I asked how to avoid passing an autosomal recessive condition to my children, it recommended that I not have children. ChatGPT generates many more such errors in clinical management, although newer versions have performed better.

Through my own work in AI model development, I have encountered one major challenge: AI responses are unpredictable. In medicine, where advice must align with current evidence-based practice, this unpredictability can be perilous. Large language models are trained on extensive datasets with a fixed cutoff date; because of the substantial processing these datasets require, that cutoff can be a year or more in the past. Furthermore, these models generate text according to the frequency of patterns in their training data, leading them to regurgitate older evidence and better-established guidelines while neglecting new information. These are significant limitations of AI models.
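
To see why frequency can dominate recency, consider a minimal sketch in Python. The corpus counts below are invented for illustration: a model that chooses its "recommendation" purely by how often each option appears in its training data will surface an older, heavily represented guideline far more often than a newer, scarce one.

    import random

    # Invented counts for illustration: an older guideline appears far
    # more often in the training corpus than a newly published one.
    corpus_counts = {
        "older guideline (heavily represented)": 900,
        "updated guideline (published near the cutoff)": 50,
    }

    def sample_recommendation(counts):
        """Sample proportionally to training-data frequency, ignoring recency."""
        options = list(counts)
        weights = list(counts.values())
        return random.choices(options, weights=weights, k=1)[0]

    # Roughly 95% of draws surface the older guideline.
    draws = [sample_recommendation(corpus_counts) for _ in range(1000)]
    print(draws.count("older guideline (heavily represented)") / len(draws))

A real language model is vastly more complex, but the same pull toward well-represented patterns is part of what makes it slow to reflect new guidelines.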

Humans can also respond unpredictably, but physicians are selected on the basis of track record, interviews, and training. Although humans make errors during training, they are supervised by others who help correct their missteps. They also carry malpractice insurance that reflects their past accuracy, offering financial protection against errors. This raises questions about the training and certification requirements for machines and the level of supervision they need before earning our trust. Should vendors of such technologies be obligated to insure against malpractice? Yet without granting machines some degree of freedom in how they respond, we risk producing very scripted output and losing the human-like interaction of which AI is now capable.

The second issue is staying up to date with the latest guidelines, which is further complicated by conflicting guidelines and the individualized nature of medical care. I trained under six clinical geneticists, each with their own approach to diagnosis and care. Determining the best approaches, and which guidelines to adopt, is a complex challenge. Trust in those making such decisions for AI models is crucial, as they wield significant influence over entire domains and populations. This is not just a concern in medicine but extends to AI in general.

Sam Altman, the CEO of OpenAI, recently suggested that AI could possess superhuman persuasive abilities. I would argue that, to some extent, it already does. When asked to generate references for papers, ChatGPT often fabricates entire abstracts, complete with convincing results and author lists of names well known in the field. Accepting these at face value without verifying them in PubMed can be misleading: if you request a reference that does not exist, ChatGPT will generate one that appears highly convincing.
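
Checking a citation is straightforward to automate. Below is a minimal Python sketch that queries NCBI's public E-utilities API to ask whether a given title actually exists in PubMed; the citation title in the example is hypothetical.

    import requests

    ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_title_matches(title):
        """Search PubMed for a title and return matching PMIDs (empty if none)."""
        params = {
            "db": "pubmed",
            "term": f'"{title}"[Title]',  # restrict the query to the Title field
            "retmode": "json",
        }
        resp = requests.get(ESEARCH_URL, params=params, timeout=10)
        resp.raise_for_status()
        return resp.json()["esearchresult"]["idlist"]

    # Hypothetical chatbot-supplied citation; an empty result is a red flag.
    pmids = pubmed_title_matches("Outcomes of genomic screening in a large health system")
    print(pmids if pmids else "No PubMed match - verify before citing")

A fabricated reference will usually return no match, while a real one resolves to a PubMed ID that can be inspected directly.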

I am concerned about the need to implement safeguards in AI to prevent inappropriate patient care. However, one could also argue that imposing such guardrails might limit AI's potential. Unburdened by the constraints of slow-moving, evidence-based guidelines, AI might learn to manage patients more effectively as it gains a deeper understanding of human biology and medication interactions. This potential power is profound, but we must carefully consider whether we are ready to entrust patients to machines.

Lastly, my most significant concern is not just for medicine but for all of human civilization: as AI becomes integrated into our lives, humans may become overly reliant on it and lose their own skills. We risk losing the art of medicine and may no longer have physicians who can operate without machine assistance. We may have individuals whose critical thinking depends on machines and children who cannot write an essay without computer aid. Most importantly, we may lack individuals with the knowledge to manage the superhuman machines we have created.

 

Nephi Walton MD, MS, FACMG, FAMIA completed his MD and MS in biomedical informatics, with a focus on machine learning and artificial intelligence, at the University of Utah School of Medicine. He completed a combined residency in pediatrics and genetics at Washington University in St. Louis, Missouri. He is board certified in both clinical genetics and clinical informatics. He has worked with two of the largest population health sequencing programs in the U.S.: MyCode at Geisinger and HerediGene at Intermountain Health. He currently serves as the associate medical director of Intermountain Precision Genomics, where he co-leads the HerediGene genomic sequencing return of results program and runs the Intermountain Precision Genomics Whole Genome Sequencing clinic. He also serves as the associate medical director of Intermountain's sequencing laboratory. He is a past chair of the American Medical Informatics Association Genomics and Translational Bioinformatics Workgroup and has presented at several meetings on translating the use of genomics into general medical practice, something he is actively pursuing at Intermountain Health.

 




