The Role of Medical Cybersecurity Regulations in Protecting Patients with the Rise of AI-Driven Healthcare

This article was originally published by AITHORITY

There’s no doubt AI is already becoming commonplace, even in the healthcare industry. Just recently, two tech giants announced new AI solutions for health and medical applications. Google presented the new functions of its Vertex AI Search tool designed for the healthcare and life sciences field. Microsoft, on the other hand, released some details about the healthcare function of its Fabric analytics solution.

A multitude of AI medical or healthcare products have already been deployed over the past years, ranging from patient monitoring wearables and implants to diagnostic imaging, digital pathology, and genomic sequencing solutions. These products have brought about numerous benefits in terms of healthcare facility operations and patient care. However, they have also resulted in the emergence of new vulnerabilities and risks, which in turn attracted the attention of regulators.

The medical and healthcare sector is already highly regulated, and it seems inevitable that more regulations will be imposed. However, there are currently no specific laws on AI use in medicine and health services, at least in major markets like the United States and Europe. There are no laws that target how AI is used, such as requiring a human doctor to verify AI-aided diagnoses or regulating the dispensation of AI-powered automated medical services.

The regulations in force now mostly concern medical cybersecurity and data privacy. They are not enough, and even these legal impositions face resistance to enforcement, especially among businesses.


Here’s a look at some of these regulations along with arguments on why they are important and the improvements needed.

Making sure that devices work as intended

AI-infused medical devices are seen as advanced and perform unprecedented functions. Patients are either excited to try them or hesitant to become pioneering users because of uncertainty about device performance. Fortunately, regulations exist to ensure that AI healthcare products are safe, effective, and free from cyber risks before they can be made available to the public.

A few medical cybersecurity regulations help allay concerns over device safety, effectiveness, and cybersecurity. They are not specifically aimed at the use of AI, but they include provisions that can compel device manufacturers to ensure that their AI systems behave in line with reasonable expectations.

In the United States, the Food and Drug Administration (FDA) has issued regulations and guidance covering different stages of the product lifecycle. There are pre-market requirements, particularly those outlined in FDA 21 CFR parts 807 and 814, that ensure products have been designed and manufactured in accordance with safety and effectiveness requirements.

There are also post-market rules (21 CFR part 803) that compel manufacturers to monitor their products for possible defects and other issues that may lead to user injury or other unwanted outcomes. Additionally, the FDA has regulations that apply across product lifecycle stages to ensure product quality (21 CFR part 820) and, through labeling requirements (21 CFR part 801), to keep consumers informed about the cybersecurity of their products.

Meanwhile, the European Union has the Medical Device Regulation (MDR) 2017/745, which replaced the EU Medical Device Directive (MDD). Just like the FDA regulations, this law requires medical devices to be safe, effective, and reliable. It sets mechanisms to ensure that defects and other problems are reported and addressed in a timely manner, and it includes post-market evaluation and continuous improvement provisions. It also requires device makers to compile readily available information about their products and to make sure consumers are adequately informed about them, especially when it comes to safety and effectiveness issues.

Again, these regulations do not specifically target the integration of artificial intelligence into medical products and services. However, they establish a way to compel manufacturers to maintain acceptable standards of quality in designing and producing their devices, and they give patients and consumers a role in the safety and effectiveness of the products available on the market.


Ensuring patient privacy and data security

Machine learning is all about accumulating data and using that data to continuously improve a system. In the process, it creates the risk of exposing patient data. Without mandated guardrails in place, AI healthcare systems such as patient-facing AI bots may reveal patient data to threat actors or expose information that should be kept confidential.

Fortunately, there are already existing laws that can be invoked to prevent any system from unnecessarily revealing information to unauthorized parties. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the European Union’s General Data Protection Regulation (GDPR), for example, have provisions that can be used to prevent AI systems from violating patient privacy and disclosing private data to unauthorized entities.

As happened with ChatGPT, it is not a remote possibility for AI bots used in healthcare to face privacy violation suits. Generative AI, the technology powering ChatGPT, will be used to create patient-facing bots. As such, it is crucial to implement ways to keep AI systems from oversharing information, and regulations are there to make this happen.
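To make the idea of such guardrails concrete, here is a minimal, illustrative sketch in Python of one common safeguard: redacting obvious identifiers from a patient message before it is forwarded to any external chatbot or logging backend. The patterns, the redact_phi function, and the example message are assumptions for illustration, not a compliance solution; HIPAA's Safe Harbor de-identification standard alone lists 18 identifier types, far more than a few regular expressions can cover.

import re

# Illustrative patterns only (an assumption for this sketch); production
# de-identification must cover many more identifier types and formats.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "mrn": re.compile(r"\bMRN[:#\s]*\d{6,10}\b", re.IGNORECASE),
}

def redact_phi(text: str) -> str:
    """Replace recognizable identifiers with placeholder tokens before the
    text leaves the trust boundary (e.g., is sent to a chatbot API or a log)."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

if __name__ == "__main__":
    message = "Patient (MRN 00123456, call 555-867-5309) asks about her dosage."
    print(redact_phi(message))
    # Prints: Patient ([MRN], call [PHONE]) asks about her dosage.

In practice, a filter like this would sit alongside access controls, audit logging, and contractual safeguards; laws such as HIPAA and the GDPR are what make those layers mandatory rather than optional.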

Informing patients about product reliability, safety, and security

Product labels that indicate the intended use, proper usage, safety, and cybersecurity of medical devices are a welcome addition to existing regulations. They help consumers find the best products for them by obliging product makers to provide accurate information on their labels, especially when it comes to effectiveness and safety.

There have been attempts by the US FDA to enforce medical device cybersecurity labeling, but these did not materialize, and the agency settled for a voluntary labeling system. It is not ideal, but it is better than nothing. The EU has some provisions on labeling, but these do not cover safety and cybersecurity concerns.

In summary

There is a need for clear and specific regulations on using artificial intelligence in medical devices and healthcare in general. While there are existing laws and regulations that may be used to address worries over the use of AI in medicine and healthcare, they are insufficient and not explicit enough to reassure patients and consumers.

Regulations help address product reliability, effectiveness, and cybersecurity. They also facilitate patient data protection. Additionally, they can make it imperative for device manufacturers to aid customer choice through useful, accurate information on product labels. FDA regulations, the EU MDR, the Association for the Advancement of Medical Instrumentation’s (AAMI) Technical Information Report 57, the PATCH Act, the International Medical Device Regulators Forum (IMDRF) guidance, and the Medical Device Coordination Group’s (MDCG) 2019 series of guidance documents need to be updated to reflect the growing prominence of AI in the healthcare field.





Tags: medicine, devices, artificial intelligence, machine learning, life sciences