How Artificial Intelligence Is Driving Changes in Radiology

This article was originally published by Clinical Omics

Described simply, artificial intelligence (AI) is a field that combines computer science and robust data sets to enable problem-solving. The umbrella term encompasses the subfields of machine learning and the more recently developed deep learning, which is itself a subfield of machine learning. Both use AI algorithms to create expert systems that make predictions or classifications based on input data.

The first reports of AI use in radiology date back to 1992, when it was used to detect microcalcifications in mammography1 and the technology was more commonly known as computer-aided detection. It wasn’t until around the mid-2010s, however, that AI began to be seen as a potential solution to the daily challenges faced by radiologists, such as volume burden.

Ronald Summers, chief of the Imaging Biomarkers and Computer-Aided Diagnosis Laboratory at the National Institutes of Health Clinical Center

Ronald Summers, chief of the Imaging Biomarkers and Computer-Aided Diagnosis Laboratory at the National Institutes of Health Clinical Center in Bethesda, Maryland, describes the development of deep-learning techniques around this time as having a “democratizing influence” on the field. “The research is so much easier now,” he says. “You don’t have to have mathematical representations of disease; instead you feed in large amounts of data and the neural network that’s part of the deep-learning system learns the patterns on its own.”

With the advent of AI came the emergence of radiomics, in which digital images normally interpreted by radiologists in a qualitative way are transformed into quantitative data. Those data can then be used to train machine-learning algorithms to recognize features, invisible to the human eye, that may give insight into a diagnosis or prognosis.
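To make this concrete, the sketch below shows the kind of first-order feature extraction with which radiomics begins, reducing the voxels inside a region of interest to quantitative statistics. It is a minimal illustration using only NumPy, with a synthetic image standing in for a real scan; production pipelines such as PyRadiomics add many more feature classes and standardized preprocessing.

```python
# Minimal sketch of first-order radiomic feature extraction.
# Illustrative only: real pipelines add shape and texture features,
# standardized binning, and careful image preprocessing.
import numpy as np

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """Reduce the voxels inside a segmentation mask to simple statistics."""
    roi = image[mask > 0].astype(float)   # voxels inside the region of interest
    hist, _ = np.histogram(roi, bins=64)
    p = hist[hist > 0] / hist.sum()       # probabilities of non-empty bins
    return {
        "mean": roi.mean(),               # average intensity
        "std": roi.std(),                 # spread of intensities
        "skewness": ((roi - roi.mean()) ** 3).mean() / roi.std() ** 3,
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy
    }

# Toy example: a random "scan" with a circular "lesion" mask.
rng = np.random.default_rng(0)
scan = rng.normal(100, 20, size=(64, 64))
yy, xx = np.mgrid[:64, :64]
lesion = ((yy - 32) ** 2 + (xx - 32) ** 2) < 100
print(first_order_features(scan, lesion))
```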

In addition, AI can be applied to almost every part of a patient’s journey through a radiology department, with its use broadly categorized into three main areas.

Improving Hospital Workflow

The first of these is hospital operations. “There’s a lot of interest in AI applications to improve workflow,” says Summers. Operational tasks such as evaluating the appropriateness of imaging, scheduling patients, selecting examination protocols, and improving radiologists’ reporting workflow can all be automated with the assistance of AI.

Madhuri Sebastian, business leader of Enterprise Imaging at Philips

The advantages of these applications have not gone unnoticed by some of the biggest players in the field of radiology. “Workflow automation is a big opportunity and our customers recognize this need, especially with the staff shortages post-COVID,” says Madhuri Sebastian, business leader of enterprise imaging at Philips. “We are focusing on improving operational efficiency with solutions like the Workflow Orchestration in our Image Management offering that streamlines the worklist for the radiologist and other solutions driving efficiency in the imaging workflow.”

Philips offers a series of products across the magnetic resonance (MR) workflow such as VitalEye, which detects patient physiology and breathing movement, allowing routine MR examination setup to occur in less than a minute. There is also MR Workspace for AI-based protocol selection and SmartExam for examination planning.

Philips’ VitalEye technology and algorithms, compatible with the Ingenia Elition X and Ingenia Ambition X MRI scanners, process over 100 body locations in parallel to intelligently extract signs of breathing, allowing routine exam set-up to be completed in less than a minute.

Another industry leader, GE Healthcare, has its Effortless Workflow model, which similarly offers users AI-based tools to automate and simplify time-consuming tasks. Prior to scanning, the applications use machine learning to automatically suggest protocols for each exam, position patients in the scanner, and set the correct scan range for head, chest, abdomen, and pelvis scans, including multigroup scans. Post scanning, the tools help with image review and analysis on the Revolution Ascend computed tomography (CT) scanner.

Image Reconstruction

Post-scanning image reconstruction is the second big area that is benefitting from the use of AI. “One of the things that we’re proudest of here at GE Healthcare is that we’ve really put a lot of effort into the image reconstruction stage because it just gives a much better machine for the customers and the patients,” says Jan Makela, president and CEO of imaging at GE Healthcare.

Jan Makela, president and CEO of imaging at GE Healthcare

Improving image reconstruction with AI is typically done using deep-learning algorithms that increase the signal-to-noise ratio, which in turn cuts scan times, increases diagnostic confidence, and speeds up workflow. “You get 30% faster images, which are higher quality with more resolution,” notes Makela, describing GE Healthcare’s AIR Recon DL image reconstruction product. The tool, which can be used for 2D and 3D images, was recently recognized in the annual Best of What’s New awards by Popular Science magazine and is compatible with the vast majority of the company’s 16,000 MRI systems currently installed worldwide.

Makela says that installing the software can increase the number of patients scanned per day from 20–25 to 30–35. “It’s always about the patient outcomes and patient safety, but the next thing is more productivity and it’s [also] fairly cost-effective because you’re not building a new building or hiring more people; you’re just putting software in.”

MRI scanner with Philips SmartSpeed MR software.

Meanwhile, Philips reports that its SmartSpeed product, which is used with its MRI scanners, can produce scans up to three times faster than conventional techniques without sacrificing image quality. It combines a state-of-the-art speed engine with a deep-learning algorithm, implemented at the source of the MR signal, that has been trained to remove noise while preserving detail.

Other big manufacturers of MRI hardware, including Siemens and Canon, also incorporate their own deep-learning algorithms into their systems to speed up image reconstruction. Regulatory requirements mean that third-party companies are not generally involved at this stage.
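These vendor engines are proprietary, but the core idea they share, training a network to map low-SNR inputs to clean images, can be sketched generically. The following simplified residual denoiser in PyTorch is an illustration under stated assumptions, not AIR Recon DL or SmartSpeed, which operate on raw scanner data rather than plain magnitude images.

```python
# Generic sketch of a deep-learning denoiser (DnCNN-style residual CNN).
# Not any vendor's algorithm: purely an illustration of the technique.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    def __init__(self, channels: int = 1, width: int = 32, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        # Predict the noise and subtract it (residual learning), which is
        # generally easier to train than predicting the clean image directly.
        return noisy - self.net(noisy)

# Training-step sketch: pairs of (noisy, clean) images drive an L2 loss.
model = Denoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(8, 1, 64, 64)               # stand-in for fully sampled images
noisy = clean + 0.1 * torch.randn_like(clean)  # simulated low-SNR acquisitions
loss = nn.functional.mse_loss(model(noisy), clean)
opt.zero_grad()
loss.backward()
opt.step()
```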

Reading and Interpreting Images

Where third parties, often start-ups, do come into the picture is in the reading and interpreting of the images—the third big area to which AI is applied in radiology. Radiologists read numerous images each day. It’s a high-pressure job trying to spot something suspicious without reporting false positives; AI-based clinical decision support can help with this.

The digital images that radiologists read are produced in the standard DICOM (Digital Imaging and Communications in Medicine) format and are accessed via a picture archiving and communication system (PACS). The standardized nature of these images and the PACS makes it easier for third parties to use the data for algorithm development. They can then make those algorithms available to the widest possible customer base, as they can be applied regardless of the system used to capture the image.
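That vendor neutrality is easy to see in code. A short sketch using the open-source pydicom library (the file name here is hypothetical) reads any conforming image into an array an algorithm can consume:

```python
# Sketch of why DICOM's standardization helps algorithm developers:
# one reader works for images from any vendor's scanner.
# Requires the open-source pydicom package; the path is hypothetical.
import pydicom

ds = pydicom.dcmread("ct_slice.dcm")      # parse header and pixel data
print(ds.Modality, ds.Rows, ds.Columns)   # standard tags, vendor-independent
pixels = ds.pixel_array                   # NumPy array ready for model input
```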

In 2021, the market for AI in medical imaging was valued at $1.06 billion and is expected to reach $10.14 billion by 2027, growing at an average rate of 45.7% per year.2 The number of tools being developed by these companies to assist with image interpretation, not only for MRI but also X-ray, CT, and ultrasound, is vast, as are the potential applications.
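The two cited figures are mutually consistent: compounding the 2021 valuation at the stated annual rate for six years reproduces the 2027 forecast, as a quick check shows.

```python
# Quick check that the cited figures compound correctly:
# $1.06 billion in 2021 growing at 45.7% per year through 2027.
value = 1.06
for _ in range(2027 - 2021):
    value *= 1.457
print(round(value, 2))  # -> 10.14, matching the $10.14 billion forecast
```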

Shravya Shetty, engineering director at Google Health AI Research

Google Health began exploring the radiology space toward the end of 2017, and has since used its expertise in AI, particularly in computer vision, to develop tools with wide-ranging applications, including image interpretation. “The first work that we did was in lung cancer screening. But after that, we started moving on to other modalities such as breast cancer applications, chest X-ray, and more recently, we’ve had some pretty exciting work that we published in the use of ultrasounds for maternal care,” says Shravya Shetty, engineering director at Google Health AI Research.

She stresses, however, that Google Health’s goal is “not to go after the entire radiology space.” They instead are looking for areas with an obvious unmet need where they can make a significant difference, she says.

The ultrasound project is one such example. The Google Health team developed two deep-learning neural network models to predict gestational age and fetal malpresentation (non-cephalic vs cephalic) from ultrasound videos captured using a “blind sweep” method.3 The benefit of this method is that it can be applied in low-resource settings by healthcare workers who have had just 8 hours of training and no prior experience with ultrasound.

The system consists of a low-cost, battery-powered ultrasound device and an Android smartphone that can operate without internet connectivity or other infrastructure. Its accuracy was non-inferior to existing clinical standards and the approach “has the potential to improve access to ultrasound in low-resource settings,” write the study authors in Communications Medicine.3

Shetty notes that the Google Health team “is primarily an impact-driven research team,” and to get their solutions onto the market they are developing partnerships with companies already established in the field. They recently announced that they have licensed their mammography AI research model to medical technology provider iCAD and their model for lung nodule malignancy prediction on CT imaging to Aidence.

Louis Culot, innovation strategy leader in precision diagnosis at Philips

Partnerships seem to be of the utmost importance throughout the field. Sebastian says that the AI strategy at Philips “is a partnership strategy and an ecosystem strategy.” Her colleague Louis Culot, innovation strategy leader in precision diagnosis at Philips, adds that the company believes that “working together is going to move the field a lot further, faster.”

As an example, Philips partnered with Nicolab to enhance outcomes in stroke patients. They are using Nicolab’s StrokeViewer—a cloud-based, AI-driven stroke triage and management platform—to streamline the stroke workflow in combination with their Azurion image-guided therapy platform. The system works by assessing CT scan data using AI and identifying large vessel occlusion.

Triage tools are a popular choice for AI algorithm developers because they can help radiologists prioritize which images they should be looking at first, particularly in time-sensitive fields such as stroke. Viz.AI, Aidoc, and MaxQ AI have all received FDA clearance for triaging support tools for potentially life-threatening diagnoses including stroke, intracranial hemorrhage, pulmonary embolism, and spine fractures.1

Another area where AI is used for image interpretation is in segmentation. In this case, “the AI system analyzes the scan and measures the volume of an organ or abnormality,” says Summers. When done manually, segmentation is a time-consuming task that aims to divide the image into parts that represent normal tissue and those that are abnormal;4 AI cuts this down substantially.

The convolutional neural network U-Net5 specializes in the automated segmentation of medical images, including those of the brain and liver. Such segmentation enhances image analysis and can add confidence to a radiologist’s diagnosis.
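The sketch below illustrates the U-Net pattern at toy scale: an encoder-decoder with a skip connection that emits a per-pixel mask, from which a volume measurement of the kind Summers describes falls out directly. It is a two-level miniature of the published architecture,5 and the voxel spacing is an assumed value for illustration.

```python
# Toy two-level U-Net: encoder-decoder with a skip connection, emitting a
# per-pixel probability mask. The published network is deeper but follows
# the same pattern. The voxel spacing below is an illustrative assumption.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = block(1, 16)                 # encoder features
        self.down = nn.MaxPool2d(2)
        self.mid = block(16, 32)                # bottleneck
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)                # decoder after skip concatenation
        self.head = nn.Conv2d(16, 1, 1)         # per-pixel logit

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # skip connection
        return torch.sigmoid(self.head(d))

net = TinyUNet()
mask = net(torch.rand(1, 1, 64, 64)) > 0.5      # binary segmentation mask
voxel_mm3 = 1.0 * 1.0 * 5.0                     # assumed in-plane spacing x slice thickness
print("volume (mm^3):", mask.sum().item() * voxel_mm3)
```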

While both triage and segmentation look for things a radiologist might already be expecting, opportunistic screening on whole-body scans uses AI to estimate the risk of something that might happen in the future, such as a heart attack or stroke.

Summers says that he has been working with Perry Pickhardt, chief of gastrointestinal imaging at the University of Wisconsin School of Medicine and Public Health in Madison, for more than a decade, measuring biomarkers such as the amount and quality of abdominal muscle, the amount of plaque or hardening in the abdominal aorta, the presence or absence of fat on the liver, and the size of the spleen and the liver.

They developed fully automated AI body composition tools, together with John Garrett and B. Dustin Pooler, both also from the University of Wisconsin–Madison, that will be used in the Opportunistic Screening Consortium in Abdominal Radiology (OSCAR) project. The project aims, first, to show that CT-based opportunistic screening is feasible and generalizable at scale and, second, to demonstrate that models derived from these biomarkers can identify, early on, patients who may need intervention or lifestyle changes to avoid health issues down the road, without the need for additional patient time or dose exposure.7
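Downstream of the biomarker extraction, risk modeling of this kind can be sketched as a simple classifier over the measured features. The example below is purely illustrative, with synthetic data and hypothetical feature names and scales; it is not the OSCAR project’s model.

```python
# Illustrative sketch of opportunistic-screening risk modeling: CT-derived
# biomarkers feeding a logistic-regression classifier. Data and feature
# names are synthetic and hypothetical; not the OSCAR project's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
# Columns: muscle area (cm^2), aortic calcium score, liver attenuation proxy
X = np.column_stack([
    rng.normal(150, 30, n),
    rng.gamma(2.0, 100.0, n),
    rng.normal(50, 15, n),
])
# Synthetic outcome loosely tied to low muscle mass and high aortic calcium
risk = 0.02 * (160 - X[:, 0]) + 0.004 * X[:, 1]
y = (risk + rng.normal(0, 1, n) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)
patient = np.array([[120.0, 400.0, 70.0]])   # one hypothetical scan's biomarkers
print("predicted event probability:", model.predict_proba(patient)[0, 1])
```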

Fitting into Multiomics

Clinicians nowadays are faced with increasingly complex data sets that include not only radiologic information, but also genomic data, pathology reports, and blood analyses. There has also been a dramatic increase in therapeutic options, particularly in oncology.

Thierry Colin, vice president of multimodal research at SOPHiA GENETICS

Thierry Colin, vice president of multimodal research at SOPHiA GENETICS, says that there is a big demand from oncologists “to help them to integrate all this data and to help them really to take the decision.”

SOPHiA GENETICS started out by offering customers tools to assist them in the analysis of next-generation sequencing (NGS) data. The company’s SOPHiA DDM Platform uses machine learning algorithms to call, annotate, and pre-classify variants from raw NGS data, with the molecular profiles matched to therapeutic, diagnostic, and prognostic clinical associations.

However, the requests for improved data integration led company researchers to develop a multimodal platform that combines genomics information with radiomic data as well as proteomics, digital pathology, metabolomics, transcriptomics, epigenomics, hematologic, and clinical data from electronic health records.

They are now collaborating with GE Healthcare, among others, on a number of projects in the fields of digital oncology and radiogenomic analysis. The company is also working with big institutions such as the Mayo Clinic and UMass Chan Medical School on projects such as the DEEP-Lung-IV study, which will use deep learning–enabled analysis of real-world multimodal data to predict immunotherapy response in patients with stage IV non-small cell lung cancer.

Colin says, “My job is to use all the data that are collected to develop an algorithm that will be predictive to the response to the treatment, and to aid the clinician to identify patients that are good responders.” He adds that some patients may be “super responders” who experience an ongoing response. “The identification of these patients is very important to see what they have and what it means in the context of discovery of biomarkers.”

Increasing Acceptance

With the uses of AI expanding in radiology, it’s fair to wonder what radiologists think of it all. Early reports often cited radiologists’ reluctance to support the adoption of AI tools within their discipline for fear of being replaced by computers, but this no longer appears to be the case.

“About 5 or 6 years ago, I was seeing a lot more hesitation and the feedback was quite mixed,” says Shetty. “There were some people who were excited about what could happen with deep learning but there were also a lot of clinicians for whom there was some hesitation, either because they didn’t think this would work or because there were underlying concerns around whether AI was going to replace radiologists and so on.”

However, she notes that since then there has been a steady shift in attitude. “Most radiologists are at the point today where they don’t believe that the AI solutions are replacing what radiologists do but instead see that this could potentially make them more efficient—that it has ways of reducing workload and could be a second check or a second reader.”

A degree of hesitancy is not a bad thing though, according to Culot. “I think that the medical field is appropriately conservative when it comes to adopting these technologies, because you are dealing with patients,” he says.

An American College of Radiology survey suggests that AI adoption in radiology increased from zero to 30% between 2015 and 2020,8 and it is clear that close collaboration between AI developers and users is key to bolstering acceptance.

“I don’t think we can be successful in the space if we didn’t have clinicians working closely with us on a day-to-day basis,” says Shetty. And Summers notes that professional societies such as the Radiological Society of North America have “really done, in my view, a tremendous job at educating radiologists about what AI’s strengths and limitations are, about how to do the research, and about the factors involved in regulation of AI as a device.”

Challenges

Although AI use is becoming widespread in medical imaging research, there are still several challenges to be overcome before it can become more broadly adopted in the clinic. AI development is often restricted by a lack of high-quality, high-volume, longitudinal outcomes data.9 There are also challenges associated with the curation and labeling of medical imaging data.

In addition, regulatory approval, economic factors, ensuring the use of representative training and validation data sets, and patient privacy concerns over the use of their medical images in AI development all need to be addressed.

Standardization is another big issue, one that Google Health and other institutions have been investing in. Shetty says that there is a series of best practices evolving within the AI development industry. These are becoming accepted and even progressing into guidelines10,11 that give information on which metrics to use, how the test set should be kept independent of the training set, what kinds of labels should be collected, and the pitfalls that may be faced.

What Next?

Looking back over his 30-year career in radiology, Summers says that “AI software has gotten much more useful [and the] breadth of clinical applications has exploded. Not a day goes by where there isn’t another research paper looking at another clinical application of interest that I haven’t seen before.”

And Sebastian points out that “by the year 2025, the amount of healthcare data is expected to grow by 36% per year.”12 She adds that although “this is all tremendous for customers to develop and deliver a holistic view of the patient and drive outcomes, it’s also overwhelming, and that is where AI comes in.”

If the imaging companies and AI developers are doing their jobs right, radiologists will be “less burnt out” in 10 years’ time and their work will be more data driven, she says.

With collaboration between radiologists and industry, the possibilities for AI use appear to be endless, but Culot says that the field “is just at the beginning of optimization—right from the acquisition all the way through in any application of radiology.”

He suggests that the technology could eventually be used for biomarker detection, but at that stage you must question whether radiologic biomarkers provide more information than molecular tests. His view is that “before [AI use] will change therapy, it’ll change diagnostic strategy.”

References

  1. Driver CN, Bowles BS, Bartholmai BJ, et al. Artificial Intelligence in Radiology: A Call for Thoughtful Application. Clin Transl Sci 2020; 13: 216–218
  2. arizton.com/market-reports/artificial-intelligence-in-medical-imaging-market
  3. Gomes RG, Vwalika B, Lee C, et al. A mobile-optimized artificial intelligence system for gestational age and fetal malpresentation assessment. Commun Med 2022; 2: 128
  4. Koenigkam Santos M, Raniery Ferreira Júnior J, Tadao Wada D, et al. Artificial intelligence, machine learning, computer-aided diagnosis, and radiomics: advances in imaging towards precision medicine. Radiol Bras 2019; 52: 387–396
  5. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arxiv.org/abs/1505.04597
  6. www.med.upenn.edu/cbica/brats2020/data.html
  7. radiology.wisc.edu/news/opportunistic-screening-consortium-in-abdominal-radiology-oscar-project-wins-at-isct-meeting
  8. Allen B, Agarwal S, Coombs L, et al. 2020 ACR Data Science Institute Artificial Intelligence Survey. J Am Coll Radiol 2021; 18: 1153–1159
  9. Tang X. The role of artificial intelligence in medical imaging research. BJR Open 2020; 2: 20190031
  10. Sounderajah V, Ashrafian H, Rose S, et al. A quality assessment tool for artificial intelligence-centered diagnostic test accuracy studies: QUADAS-AI. Nat Med 2021; 27: 1663–1665
  11. Sounderajah V, Ashrafian H, Golub RM, et al. Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: the STARD-AI protocol. BMJ Open 2021; 11: e047709
  12. rbccm.com/en/gib/healthcare/episode/the_healthcare_data_explosion

Laura Cowen is a freelance medical journalist who has been covering healthcare news for over 10 years. Her main specialties are oncology and diabetes, but she has written about subjects ranging from cardiology to ophthalmology and is particularly interested in infectious diseases and public health.
