S3 E37: Getting to Know NIST AI RMF


Transcript

Jordan Eisner  
Welcome back to another episode of Compliance Pointers. I’m joined by Chris Abacon today, AKA Bacon, as I like to call him around here. Chris, how are you doing?


Chris Abacon  

I’m doing great, Jordan. How are you doing?


Jordan Eisner  

Good. It’s been a little while since you’ve been on.


Chris Abacon  

Yeah, yeah. It was the CMMC episode, right? Sometime last year, I think. Yeah, it’s been a while.


Jordan Eisner  

Yeah, CMMC. Never heard of it.
Nothing new with that, right? There are no pressing November dates or anything.


Chris Abacon  

Yeah, it’s not like people have to get their programs and things like that going. We’ll see. A lot of things going on in CMMC.


Jordan Eisner  

Right.
When you’ve done a podcast with us in the past, has it been video or just audio? OK, all right. So it hasn’t been that long.


Chris Abacon  

It’s been video, absolutely. It’s been video each time, yeah.


Jordan Eisner  

It hasn’t been that long. It’s been within a year.


Chris Abacon  

But I remember it was video each time, because I got this specific mic for a podcast I did with my friend; you can see it on my page. I remember always having it when I’m on these podcasts, and it’s awesome.


Jordan Eisner  

That’s good. It’s legit.


Chris Abacon  

Yes, indeed.


Jordan Eisner  

So for those of you who don’t know, Chris has been with CompliancePoint for a while now. Remind me your official title. I know you’re senior. Yeah, senior security consultant. It’s been about two years now, or more.


Chris Abacon  

Senior security consultant, yeah.
As of the beginning of the year, a little bit over two years. I joined CompliancePoint in late July 2023, so two years in a few months.


Jordan Eisner  

A little bit over two.
OK, gotcha. That’s good tenure too. You military guys move around a lot.


Chris Abacon  

It’s flown by. It’s been a great time, I’m telling you.
Yeah, exactly. It’s either every three or four years, depending on where you worked, but three years seems to be kind of the standard.


Jordan Eisner  

Yeah, you’re used to that. I guess that makes sense.
So judging by that, you must have moved about four times, because, as we were just talking about, you were in the Navy for 11 years.


Chris Abacon  

Yeah, four times. And it’s, yeah, it’s Virginia, Florida, Florida, Florida.


Jordan Eisner  

Four times, OK.
Yeah, and you’ve been in IT and information security. You put blue team in your profile, probably no red team.


Chris Abacon 

Correct.
Yeah, so I was on a blue team. I was on the opposite side of the cool guys: network defense, analysts. We’d get the telemetry from the ships and things like that and do analysis on it back at home base, in simpler terms.


Jordan Eisner  

Yeah, yeah.


Chris Abacon  

But the red team guys were doing all the cool stuff, right?


Jordan Eisner  

Sure. Yeah, yeah. Well, I’m sure you’re not being entirely forthcoming with everything you did. But as I alluded to a little bit earlier, we were hinting around, joking about CMMC, and Chris’s areas are...


Chris Abacon  

Yeah, yeah.
All right on.


Jordan Eisner 

...CMMC, NIST, general cybersecurity, information security, but especially these government frameworks and government regulations. So today we’re talking about something I associate with government a lot of the time, NIST, specifically the NIST...


Chris Abacon  

Yes.


Jordan Eisner  

...AI RMF, which is focused on AI, NIST’s answer to AI, right?


Chris Abacon  

Right. So it’s the NIST AI RMF this time. For listeners, NIST falls under the Department of Commerce. They do a lot of the research on behalf of the government; they get all the requests for information from separate entities within the government and then they do a ton of research.


Jordan Eisner  

Hey, sorry, yeah, so bad.


Chris Abacon  

A lot of smart people at NIST, right? PhDs and whatnot that do everything. So they’re a very valuable resource to lean on.


Jordan Eisner  

That’s a great standard. It’s free to access, right? We like it a lot here at CompliancePoint. Well, I’m speaking more broadly right now, just about NIST as an organization, but when companies come to us and they need a risk assessment or a framework, and they’re not being held to a...


Chris Abacon  

Absolutely.
Right.


Jordan Eisner  

...certain standard or framework by an external party, we recommend this free-to-access framework to build a program off of. We really see the Cybersecurity Framework especially as a cornerstone framework. But this is their answer to AI, similar to ISO 42001 or the HITRUST AI cybersecurity framework.

I say similar; you might prove me wrong in this podcast and speak to some of the differences, but that’s what we’re going to be discussing today. So start us out with an overview of NIST AI RMF. Walk us through it.


Chris Abacon  

All right, absolutely. So the NIST AI RMF, the AI Risk Management Framework, is essentially a framework designed to help organizations across all industries identify and manage risks associated with AI. Now, in the framework documentation itself, NIST didn’t define AI, right? AI is very broad and they didn’t want to take on that problem; they didn’t want that on their hands. What they did instead is define AI systems: basically anything that generates outputs such as predictions, recommendations, or decisions that could influence any real or virtual environment, with varying levels of autonomy.

What’s also nice about NIST is they defined risk. Risk in this sense is the composite measure of an event’s probability of occurring and the magnitude or degree of the consequences of the corresponding event. These impacts could be positive, negative, or both, so they can be opportunities or threats.

This specific framework, though, is really all about developing and deploying trustworthy AI systems. I’ll keep saying trust throughout this podcast, because trust is essentially what this is all about. The big characteristics they’ve identified are that these AI systems are valid and reliable, meaning a system performs the way it’s advertised and expected, and that the system is safe, meaning it doesn’t endanger life, health, property, or the environment. Again, this is a very broad framework, so when you’re talking about safety of the system...


Jordan Eisner  

Right.


Chris Abacon  

...you can talk about anything: it could be something in healthcare, or an AI product used in the public sector to determine traffic flows, so that could be public safety. You’ve got to think about it not just for your organization; NIST defined these risks across organizations, throughout the industries, right?
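To make that composite measure of risk concrete, here is a minimal Python sketch of the idea; the event names, probabilities, and magnitude scale are invented for illustration and are not part of the AI RMF itself.

    # Risk as a composite measure: probability of an event times the magnitude
    # of its consequence. Impacts can be negative (threats) or, with a negative
    # sign here, positive (opportunities). All values are purely illustrative.
    def risk_score(probability: float, magnitude: float) -> float:
        # probability in [0, 1]; magnitude on an agreed scale such as 1-5
        return probability * magnitude

    events = {
        "chatbot exposes customer data": risk_score(0.10, 5),            # threat
        "model skewed by incomplete training data": risk_score(0.30, 4),
        "AI assistant speeds up support triage": risk_score(0.60, -3),   # opportunity
    }

    for name, score in sorted(events.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{score:+.2f}  {name}")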


Jordan Eisner  

You might be answering my follow-up question already. It sounds like you’ve got more to add on the basics of it, but I was going to ask when a company or an organization might think to engage...


Chris Abacon  

Oh yeah.


Jordan Eisner  

...or to apply this framework. Maybe get into that later, but it sounds like you pretty much said any time they’re going to be leveraging some sort of AI system, no matter the company or the type; there’s not a specific trigger that brings it in.


Chris Abacon  

Right, right. Absolutely. I’ll dive into that later on. But again, this is a framework, and it’s really a voluntary framework based on NIST’s best practices and the tons of research they do. I know I’m skipping ahead a little bit, but there are functions, outcomes, and subcategories that I’ll talk about later. You tailor those subcategories to your organization, right? If you’re a developer of AI, you’re going to have a totally different risk-based control set compared to somebody that’s an end user. So again, that’s all part of the security and resiliency around this AI. Moving on to the next trustworthiness characteristic: it’s got to be explainable.


Jordan Eisner  

Yeah.


Chris Abacon  

Right. So the users have to understand how the system operates at a high level. Now, I understand this is a huge undertaking for non-technical folks: understanding inferencing, understanding labels, all that stuff is not necessarily going to be daily business talk throughout organizations. But at least understand, I would say, where your data goes and where your data flows are, where you’re uploading it, where AI is within your organization. And the organizations that provide these services have to be able to explain it, right? That’s a whole rabbit hole when you think about it. That said, there are assurances that all the big AI organizations out there need to have; they need to be able to explain what they do with your data. Which turns into the next part, which is privacy. System and user privacy, as in, if you’re querying an AI, if you’re utilizing AI for some type of intellectual property work or whatever it might be, the AI system has got to protect your privacy. And then lastly, a big one: it’s got to account for bias...
...and discrimination, right. This is a big topic within the AI field, because NIST itself is currently drafting a definition of this bias in NIST SP 1270; it’s currently in draft. So SP 1270, if you Google that, there’s a big draft on it. It’s got a whole listing of all the types of bias that could happen when you’re utilizing AI. And I just have this pulled up here real quick: it also has the ISO definition of bias, the degree to which a reference value deviates from the truth. That’s a very broad statement, but that’s exactly what it is. It’s very broad, right? You’ve got many forms of AI and many forms of bias when it comes to AI.

So let’s take an example with activity. If you work out, you’ve got your smartwatch. Let’s say you’re wearing it throughout the day and you’re utilizing it for health reasons; you want some health metrics, you want some AI to give you some data points based on your health. But that means you’ve got to be wearing your wristwatch all the time, when you’re sleeping, when you’re showering, whatever. Otherwise it’s not going to be complete data, because at the core you’re not going to have a full picture of your heart rate and your health metrics. So bias means the data could be skewed a certain way. If you’re only wearing it when you’re working out, depending on the type of model, it’s going to be like, oh, Jordan works out all the time, because it only sees him when he’s wearing his watch, right? Yeah, exactly.
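A quick, hypothetical Python sketch of that smartwatch example: if the watch only collects data during workouts, the average the model sees looks nothing like the true daily average. The heart-rate numbers are made up purely for illustration.

    import random

    random.seed(0)

    # Hypothetical heart rates over one day, in beats per minute.
    resting = [random.gauss(65, 5) for _ in range(22 * 60)]   # ~22 hours not exercising
    workout = [random.gauss(150, 10) for _ in range(2 * 60)]  # ~2 hours exercising

    full_day = resting + workout       # what a complete picture would look like
    watch_sees = workout               # watch only worn while working out: sampling bias

    def average(samples):
        return sum(samples) / len(samples)

    print(f"true daily average heart rate: {average(full_day):.0f} bpm")
    print(f"what the model actually sees:  {average(watch_sees):.0f} bpm")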


Jordan Eisner  

Yeah, or he needs to get his heart checked out, because that’s a high heart rate all the time.


Chris Abacon  

Yeah, absolutely. And then there’s also something called sampling bias, or reporting bias. Let’s say you’re doing a health study across the population of the United States, which is very diverse. Your health metrics can be skewed based on reporting: certain demographics don’t necessarily have the reporting numbers, from a sample-size perspective, that other demographics do. So an AI could potentially suggest, oh, this demographic has less risk of heart disease and diabetes, but in actuality it’s just that this demographic doesn’t trust going to the doctor as much, or things like that. So bias itself is a massive topic, and it’s really at the core of the trust with AI. And when you think about it from a business perspective, these biases can be misleading as well. Say you’re a product person: a product person could get incorrect, biased data and output from an AI and then completely disrupt their marketing strategy moving forward. So there are all sorts of things that can go on with it. Again, it’s all about trust. Trust is everything.


Jordan Eisner  

Moving along, because I think each one of these things you could do sections and sections on, right? I mean, every conference you go to these days, workshop, whatever, there are AI tracks.


Chris Abacon  

Yeah, absolutely.


Jordan Eisner  

But with it being NIST, there’s a similar core function group, and you’re going to talk about that. I have it written down: govern, map, measure, manage. Other than the obvious with those, is there anything else you would add about those functions?


Chris Abacon  

Yeah, those are really the core functions. The CSF has its own functions, identify, protect, and so on, and this is kind of modeled after that. I think what’s nice about this framework is that they made it easy for us. Govern is at the center: if you think about it like a circle, govern is at the center, it’s the governance of the AI, and you’ve got map, measure...


Jordan Eisner  

Right.
Sure.


Chris Abacon  

...and manage surrounding it. So you’ve got a circle within a bigger circle. Consider that kind of the framework, and they all work together; there are all these cross-functional dependencies. But let’s talk about each of them specifically.

We’ve got govern, which is the fostering of a risk culture throughout an organization and identifying those risks at the high level of the organization. You’ve got your executive board; they’re less concerned about technology and more about organizational governance. So again, it’s critical to building trust, and also to empowering the employees of that organization to manage specific risks based on a broad set of perspectives.

That moves us on to map, and obviously these all work together. Mapping is a function that concentrates on risk at every phase of the AI life cycle: assessing the risks to potential stakeholders and end users, and operationalizing the socio-technical approaches, like establishing those contextual factors around risk. Really understanding that depends on the organization. Everybody’s organization is different from a risk perspective, and deploying AI can definitely create some difficulties in being able to map that. That’s why it’s important to understand your AI usage and use cases.

Next you’ve got manage, I’m sorry, measure, which is the development of quantitative and qualitative methods to assess AI risks. NIST describes it as repeatable and scalable test, evaluation, validation, and verification measures; that’s what they call it in the document. They defined risk, but we want to measure those risks based on the trustworthiness criteria I mentioned previously. There’s quantitative, where you can tie a number to something: you can talk about loss impacts or the total value of a building, perhaps, and calculate your loss expectancy. But you can also have qualitative, where you’ve got low, medium, high, very high. A lot of organizations have to choose, or generally I would recommend a combination of both, holistically as a company. But that’s where the measures come in.

And then we’ve got manage, which is the actual operation, focused on the resources dedicated to mitigating the identified risks. So it’s essentially the control implementation: risk prioritization, treatment, response, involving data management, decommissioning mechanisms like a kill switch, incident response, monitoring, compliance, and insurance, making sure you can get some money back. Essentially this is the operational management involved in AI. So those are the four functions explained at a high level.
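As a rough sketch of the quantitative and qualitative sides of measurement described here, the Python below pairs a classic annualized loss expectancy calculation with a simple low/medium/high bucketing; the dollar figures and thresholds are assumptions for the example, not NIST numbers.

    # Quantitative: annualized loss expectancy (ALE = single loss x occurrences per year).
    def annualized_loss_expectancy(single_loss_usd: float, occurrences_per_year: float) -> float:
        return single_loss_usd * occurrences_per_year

    # Qualitative: translate the same figure into a rating (thresholds are illustrative).
    def qualitative_rating(ale_usd: float) -> str:
        if ale_usd >= 1_000_000:
            return "very high"
        if ale_usd >= 250_000:
            return "high"
        if ale_usd >= 50_000:
            return "medium"
        return "low"

    # Example: a hypothetical AI pricing model that mis-prices orders, costing
    # about $40,000 per incident and expected to happen roughly three times a year.
    ale = annualized_loss_expectancy(40_000, 3)
    print(f"ALE: ${ale:,.0f} -> rating: {qualitative_rating(ale)}")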


Jordan Eisner  

OK. And maybe a tough question if you’re not super familiar with 42001, but that’s the other one I’ve seen out there gaining a lot of traction. There’s HITRUST too, but for whatever reason it seems to be tied to companies already doing HITRUST. So for somebody exploring this for the first time: I hear about this and I hear about ISO 42001. Have you looked at that? Do you have notes on a comparison between the two? What’s different?


Chris Abacon  

Correct.
Yeah, absolutely. So the big difference between them is that the NIST framework itself is voluntary and non-certifiable; it’s a US-based guideline. ISO 42001, on the other hand, is an international, certifiable standard that establishes something called the AIMS, or AI management system, which really focuses on structured governance, risk management, and compliance, similar to ISO 27001, but with a focus on transparency and accountability, specifically through external audits. That’s the big one, external audits...


Jordan Eisner  

Yeah.


Chris Abacon  

...audits to ensure compliance. The AI RMF is more adaptable. Like I said, you don’t have to pick every guideline; you look at the categories and subcategories, see where they apply to you, and you as an organization can decide to implement them if you need to, if you want to. But they can be very valuable, depending on where you position yourself in the market. That said, the AI RMF takes that risk-centric approach, that’s the big one, versus ISO 42001, which takes that prescriptive...


Jordan Eisner  

Yeah.


Chris Abacon 

...auditable management system approach. It also obviously aligns globally, with a formal verification process. Right now there’s no real crosswalk between the two, but I can definitely foresee a future where organizations would align to both. And it just presents your organization in a more technology-forward fashion, right?


Jordan Eisner  

It’s more about how you’re going to manage it as an organization moving forward, how you’re going to meet on it, how you’re going to assess risk against it. It’s not as prescriptive as NIST AI RMF, which is going to have specific controls and functions and things to abide by. So if you’re aligning with NIST AI RMF, and it looks like you’ve got some thoughts on this, perhaps you align with that first and then build your 42001 certification on top of it, with the management of your AIMS based on RMF controls.


Chris Abacon 

Yeah, absolutely. There are many ways to skin the cat in this regard. Again, I think if you do not have a strict adherence or requirement for ISO 47001, then the AI RMF is a great option to consider, especially because you’ve got a way to tailor your outcomes and controls. And for those controls, there’s no official mapping just yet, but there are applicable controls in NIST 800-53 that can align to the AI RMF. It’s all based on the subcategories.
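Since there is no official mapping yet, any crosswalk is something an organization maintains for itself. Below is a minimal sketch of how such a mapping could be kept; the function labels and the 800-53 control pairings are illustrative guesses, not a published mapping.

    # A home-grown crosswalk from AI RMF functions/subcategories to NIST SP 800-53
    # controls. The pairings below are illustrative, not an official NIST mapping.
    crosswalk: dict[str, list[str]] = {
        "MAP (context and AI system inventory)": ["CM-8 System Component Inventory"],
        "MEASURE (risk tracking and evaluation)": ["RA-3 Risk Assessment", "CA-7 Continuous Monitoring"],
        "MANAGE (responding to AI incidents)": ["IR-4 Incident Handling"],
    }

    for subcategory, controls in crosswalk.items():
        print(subcategory)
        for control in controls:
            print(f"  -> {control}")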


Jordan Eisner  

Yeah.
Yeah, I agree, I think, from a business standpoint. Because you’re right, if nobody’s asking you to get certified. And it’s 42001; you keep saying 47, but understandably, because of ISO 27001. There are so many ISO numbers, it’s crazy.


Chris Abacon  

Yeah, yeah, there are so many numbers. I keep mixing it up with 27001, I know, right?


Jordan Eisner  

Yeah, no, that’s a good one too. But with 42001, it’s a way to show an internationally recognized certification. But I agree, and I’m going to ask you about this too, so this is a good segue to the next question. If you’re on the fence, there are no external pressures, and you want to ensure you’re solving for risk around AI or AI systems, then absolutely take a look at the AI RMF, because you’re not committing to a program like you are with ISO, where you’re going to have certification every three years, surveillance audits in between, and a whole management program to build. I’m not saying any of that’s bad.

But it’s like anything else: a bigger house, another car, the more things you have, the more there is to maintain, to upkeep, more work. So it’s maybe good to dip your toe in first. So that’s a good question then: if you’re on the fence about it...


Chris Abacon  

Exactly.


Jordan Eisner  

...and whether or not you need to implement something like NIST AI RMF, what are some factors you should consider?


Chris Abacon  

Yeah, definitely. One of the first things to consider is, if you’re a European company, whether the EU AI Act applies to you. Now, it’s not mandatory to do the CSF, I’m sorry, the AI RMF, but the standard can help you comply with that law. It’s that natural...


Jordan Eisner  

Yeah.


Chris Abacon  

...step towards that 42001 certification.


Jordan Eisner  

And establishing best practices and repeatable practices.


Chris Abacon  

Repeatable, right. And specifically on the measure and manage sections: while there’s no formal mapping, those two will likely map very well to 42001 when you take a look at the subcategories.

Now, an additional factor is whether your customers are asking about your AI use and safeguards. A lot of modern tech companies are integrating AI into their processes, procedures, and products to position themselves better in the market. And now the customers are asking, how are you protecting my organization? How are you protecting my data? In this regard, if you as an organization choose to utilize the AI RMF, you can really weigh the advantages of incorporating the trustworthiness characteristics of the AI RMF and see how they can increase confidence in the marketplace, within your direct market space. You can position yourself more competitively, and I think that’s real value from a business standpoint. Why wait until it’s mandatory? Take it to the next level; the AI RMF can definitely assist in that. And yeah, exactly. And then lastly, consider what level of AI use you have currently. How are you using it? Is it ad hoc? Are you just using free accounts on...


Jordan Eisner  

Yeah, and sleep better at night.


Chris Abacon  

...you know, ChatGPT, Claude, whatever you might be using? Or do you have a fully formed Azure or AWS instance that focuses on training models, where you’ve got your own models? What are your expectations for that? If you’re more on the technical end, definitely consider some of these trustworthiness factors and how they can help your organization. So those are, I would say, the big factors.


Jordan Eisner  

And so if you need to do it, how do you get started?


Chris Abacon  

Let’s see here. I’d get started right at the top, with governance. For businesses, I’d recommend establishing roles and responsibilities. If you’ve got the resources, create an oversight group with a senior stakeholder, an executive sponsor that can help lead the charge for the AI RMF. Again, like anything, if you don’t have executive sponsorship for a venture or project, you’re not going to be able to implement it within your organization. So if you’re listening to this and you’re at that mid-to-senior level, definitely get an executive sponsor on board to help, and assign clear ownership of the AI RMF functions to specific people.

Next, I would identify and classify the types of AI systems that your organization uses. Are they in production? Are you a technical company? Or maybe it’s AI platforms that are embedded in vendor products. The AI RMF actually defines various stages of the AI life cycle in its text, from data and design and the creation of AI models all the way to the end users. So I would consider classifying everything applicable to your business and how it’s used currently. Make sure you get an honest, truthful version of it, because you never know if there’s shadow AI or shadow IT; you’ve probably heard of that. There are probably users out there using AI, potentially not within compliance.
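One way to picture that classification step is a simple inventory like the hypothetical sketch below, which records where each system sits in the life cycle, whether it is built in-house or embedded in a vendor product, and whether it is sanctioned; the fields and entries are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class AISystem:
        name: str
        lifecycle_stage: str   # e.g. "design", "training", "deployed", "end-user tool"
        source: str            # "in-house" or "vendor-embedded"
        owner: str             # accountable business owner
        sanctioned: bool       # False = candidate shadow AI

    # Hypothetical inventory entries for illustration only.
    inventory = [
        AISystem("support chatbot", "deployed", "vendor-embedded", "Customer Success", True),
        AISystem("fraud scoring model", "training", "in-house", "Risk Engineering", True),
        AISystem("free ChatGPT accounts", "end-user tool", "vendor-embedded", "unknown", False),
    ]

    for system in inventory:
        flag = "" if system.sanctioned else "  <-- review: possible shadow AI"
        print(f"{system.name} ({system.lifecycle_stage}, {system.source}, owner: {system.owner}){flag}")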

Right, more than probably. So that’s why governance is really a key factor. Next, again more high-level stuff, is defining risk appetite and principles, being able to document the organization’s tolerance. This is going to be done at the executive layer; risk tolerance is going to be set by the specific business process...


Jordan Eisner  

Oh yeah, more than probably, definitely.


Chris Abacon  

...owners and the executives in charge. There are going to have to be discussions on that. Identify where the harm could be: data handling, outputs, misuse. Next is defining risk metrics. What’s the risk to my organization? Do I have a risk register? I’m sure many mature, risk-centric organizations out there have risk registers, and they’ve quantified and categorized anything that could happen to, say, their manufacturing center. In this case, you want to be able to do the same thing with AI. Where is it in your systems? You’re having that discussion with your senior leadership, your CISOs, and your frontline information security managers, holding those working groups and making sure that you’ve got those risks defined.

And lastly, you’ve got implementation. This is definitely more on the manage side of the framework itself: implementation of risk controls, where each control is dedicated to a specific risk. So specific triggers, bias mitigation techniques, even a human review process; these little things can be controls that are very helpful and really critical to ensuring the trustworthiness of your AI systems, matching each control to the specific identified risk. So, at a high level, those are my general recommendations for a business to be able to implement the AI RMF.
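Pulling those last steps together, here is a minimal, assumed sketch of an AI risk register in which each identified risk carries a rating and the controls chosen to treat it; the risks, ratings, and controls are invented purely for illustration.

    # A tiny AI risk register: each identified risk has a rating and the controls
    # meant to treat it. All entries are illustrative.
    risk_register = [
        {
            "risk": "biased output from under-sampled demographics",
            "rating": "high",
            "controls": ["bias testing before release", "human review of high-impact decisions"],
        },
        {
            "risk": "sensitive data pasted into public AI tools",
            "rating": "medium",
            "controls": ["acceptable-use policy", "monitoring of AI endpoints"],
        },
        {
            "risk": "model behaves unexpectedly after an update",
            "rating": "high",
            "controls": ["pre-deployment evaluation", "kill switch and rollback procedure"],
        },
    ]

    for entry in risk_register:
        print(f"[{entry['rating'].upper()}] {entry['risk']}")
        for control in entry["controls"]:
            print(f"   control: {control}")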


Jordan Eisner  

OK, no, that’s helpful. I was thinking more along the lines of, talk to an expert if you want to get started. Well, maybe that’s a good wrapping point then. Chris, I think we got even deeper than we expected to on this. As you know, for our listeners, there’s just a lot.


Chris Abacon  

Yeah, right. You can talk to us.
Yeah, definitely.


Jordan Eisner  

There’s a lot to unpack. There’s a lot of uncertainty. There are still a lot of people figuring it out, and then we start bringing up things like bias and transparency and how to handle those with things that are changing every single day. It’s challenging.


Chris Abacon  

There is a lot of it.


Jordan Eisner  

You know, if you’re an organization on the fence about this, if you’re an organization doing business in Europe, if you’re an organization getting external pressures to have a program around AI, please don’t hesitate to reach out.
You can connect with us at connect@compliancepoint.com. Chris is active on LinkedIn, I’m on LinkedIn, so feel free to message us there. And check out the website; there’s plenty of content we’ve been putting out about this and other topics. Continue to subscribe and listen.


Chris Abacon  

Awesome.


Jordan Eisner  

Chris, thanks again. We’ll do it another time.

Let us help you identify any information security risks or compliance gaps that may be threatening your business or its valued data assets. Businesses in every industry face scrutiny for how they handle sensitive data including customer and prospect information.