
What Healthcare Organizations Can Learn from NIST’s AI Risk Management Framework

Author: admin_zeelivenews

Published: 25-03-2026, 5:06 PM

Compliance Is Not a Checklist

How can healthcare organizations manage the black box that is a third-party AI solution in their environment? This is where an innovative approach to risk mitigation is needed. Organizations can't think of compliance as merely a checklist to complete: in a list of 20 mandatory items, a single "no" can mean an organization isn't compliant. Risk management frameworks, by contrast, operate in the category of "-ish," meaning organizations can be "compliant-ish," or somewhat compliant. That is worlds better than complete noncompliance.

Still, whether an organization adopts a certain AI risk management framework comes down to the current culture for risk in its environment. Is it much more conservative? Does it have stricter requirements because of a previous data breach?

Vendors are given a lot of trust, especially because there isn’t a certification body that can offer a “badge” that a certain AI solution is set to handle a level of risk. This is a real challenge, because it’s unclear whether these solutions are fully vetted. It then becomes the responsibility of the healthcare organization to strengthen its risk mitigation strategy so that innovation can still flourish. It’s a tricky balance to find.

This is why it's useful for healthcare organizations to share the knowledge and experiences they've had with certain solutions, though of course it's harder to share experiences that may not have been so positive.


Organizations Need to Trust but Verify AI Solutions

When a mature organization wants to adopt new patient-facing devices, there’s commonly a lab or some sort of proving ground for a contained trial to monitor traffic and other aspects. For example, the technical teams will want to figure out how that device is communicating, because it will be in service in the environment for years, maybe decades, so there’s a thoughtful lifecycle management process behind it.

AI solutions should be treated the same way. Test out the ambient clinical documentation software in a controlled setting before wider deployment. Build governance structures that involve working closely with the partner or vendor. As a healthcare CISO, I would much rather go with product A, which fully discloses how it works but may not be top of the line, than product B, which may be the best solution technologically but doesn’t share anything about how it works. If I’m going to open my organization up to risk, I would rather have that working relationship with a partner who is willing to be more open about the product.

52%: The percentage of organizations that possess nonhuman identities (such as AI agents) with critical excessive permissions, compared with 37% for human users

Source: Tenable, Cloud and AI Security Risk Report 2026, February 2026

AI Risk Management Is a Team Effort 

Managing AI risk must involve cross-functional negotiations within the organization. It can't come from the clinical or the technology side alone. Legal, marketing, operations — most departments have to be in lockstep, because adopting a new solution may have a financial impact on the organization, so there must be clear communication about the risks.

Leaders must control the risk plane and have that visibility to know when use of the solution is veering away from what all the stakeholders have agreed upon. Have acceptable use policies with the vendor changed at a moment’s notice? Is the organization performing a point-in-time risk assessment, or must it establish a set of controls for ongoing compliance and risk measurement?

This is where organizations can start to build key risk indicators (KRIs) around AI solutions, so that monitoring is always on. If it's just a point-in-time snapshot that organizations look back at, it quickly becomes irrelevant. These solutions are likely changing more quickly than we expect, so they need real-time controls.
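To make the idea of always-on KRIs concrete, here is a minimal sketch of what a continuous check might look like in practice. The indicator names, thresholds, and observed values are all hypothetical illustrations, not part of any framework or vendor tooling; a real deployment would feed these from monitoring pipelines agreed upon with stakeholders.

```python
# Minimal sketch of an "always-on" key risk indicator (KRI) check.
# All indicator names, thresholds, and observed values are hypothetical.

from dataclasses import dataclass


@dataclass
class KRI:
    name: str
    threshold: float  # agreed-upon limit from the governance review


def evaluate_kris(observed: dict, kris: list) -> list:
    """Return (name, value, threshold) for each KRI exceeding its limit."""
    breaches = []
    for kri in kris:
        value = observed.get(kri.name, 0.0)
        if value > kri.threshold:
            breaches.append((kri.name, value, kri.threshold))
    return breaches


# Hypothetical indicators agreed with stakeholders during vendor onboarding
kris = [
    KRI("excessive_permission_ratio", 0.10),  # share of granted perms never used
    KRI("data_egress_gb_per_day", 5.0),       # data leaving the environment
    KRI("model_version_drift_days", 30),      # days since last vendor change review
]

# Values a monitoring pipeline might report for a deployed AI solution
observed = {
    "excessive_permission_ratio": 0.22,
    "data_egress_gb_per_day": 1.3,
    "model_version_drift_days": 45,
}

for name, value, limit in evaluate_kris(observed, kris):
    print(f"KRI breach: {name} = {value} (limit {limit})")
```

Run continuously (or on a schedule), a check like this turns a one-time risk assessment into the ongoing compliance measurement described above, flagging when a solution's behavior veers away from what stakeholders agreed upon.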

Ultimately, understanding and managing risk in healthcare is paramount. If organizations don’t get this right and patients receive a notification that their data has been compromised, they will seek care elsewhere. That violation of trust will have a long-term impact on a provider.

Similarly, the available AI risk frameworks need to garner more trustworthiness within healthcare. If providers are going to adopt and adapt whatever framework has been established, either from an independent body or a governmental agency, there needs to be more collaboration and transparency about how these controls will work in a healthcare setting.

As someone with decades of experience in healthcare security, I know how quickly an organization's posture can change. One moment, a provider is secure, and the next moment, there's a breach. Trustworthiness is not just earned, it's demonstrated. That is one of the biggest factors in these risk management frameworks and the platforms they reside within.
