S4 E07: The Intersection of AI Governance and Traditional Security Frameworks


Transcript

Jordan Eisner  
Hello and welcome to another episode of Compliance Pointers. This one comes fresh off our conference, CompliancePointExchange 2026, down in Orlando earlier this week, and I am joined once again by Brandon Breslin, our Director of Security Assurance Services. And he's got his new toy, he's got his new microphone on.


Brandon Breslin 

I do have a new microphone. I like it. I like it. Thanks. Thanks, Jordan, for having me on.


Jordan Eisner 

Absolutely. Thanks for joining, and for really spearheading the topic on this one. You're certainly the brains behind what we're going to be talking about here, with everything you're doing for us internally at CompliancePoint around AI, but also for our clients. So I'm excited to get into this topic. I know it's becoming more and more near and dear to your heart.

So for our listeners, today we are going to be talking about AI governance and where it's meeting compliance, and more specifically how PCI, SOC 2, HITRUST, and ISO are converging around AI in 2026. Brandon gave me some talking points to kick this off, so I'm going to steal some of his thunder here, because I've heard him say this a few times now, and I like it. I like how you say it: AI is no longer experimental. It's embedded.


Brandon Breslin  

Sure. Take it. We always go off script anyways, so go for it.


Jordan Eisner  

In enterprise operations and really even downstream from there.


Brandon Breslin  

Yeah, absolutely.


Jordan Eisner  

You know, traditional compliance frameworks were not written for AI. And so you're of the opinion, or I shouldn't say just of the opinion, you're sharing with your clients and prospects that in 2026 those frameworks I mentioned are going to start converging around AI. So we're going to unpack that today: what convergence means for governance specifically, and for executive accountability.


Brandon Breslin  

Absolutely.


Jordan Eisner 

So yeah, Brandon, you broke it up into different segments. I think that’s really helpful. So let’s start with segment one, the immediate reality. Why is AI governance intersecting with traditional compliance frameworks now?


Brandon Breslin  

Yeah, it's a good question. Even before I get to the core idea, it's really interesting, as you alluded to, that these governance frameworks were not built with AI in mind. It didn't even exist. Sure, it existed in maybe military operations and things like that, but not to the general public at the level it does now.
In 2026, AI has moved into the true production environments that assessors and auditors are evaluating. I think that's really the core idea when it comes to the intersection with compliance frameworks.
It's always been relevant, but now it's relevant for audits, which is a huge shift from relevance for efficiency gains in operational tasks or enterprise operations. From a control perspective, from a governance standpoint, it is now front and center.
And it's going to continue to be as it grows. It's impacting every industry. PCI, SOC 2, HITRUST, ISO 27001, ISO 42001: those are the frameworks that AI is already coming together in. It's now relevant.
And it's not just the controls you're looking at. We need to be thinking about it from a scope standpoint, and from an operations standpoint. There are so many layers now involved in evaluating environments when it comes to AI.


Jordan Eisner  

Yeah, OK. So you've used this term, and you've got me using it too: converging. Let's open that up. How are the frameworks converging?


Brandon Breslin  

Yeah, I would say it's happening in multiple layers. It's happening at the risk layer and at the accountability layer, and we'll get into the AI accountability issue, or gap, in a bit. But let's start broad.
AI is not just influencing a control, it's now part of the control environment. That is a fundamental shift for any piece of technology or tool set. In an audit, a tool may influence a control, but now that's flipped on its head.
AI is now a core principle of the environment, a core gateway within the environment, that you need to be evaluating. So that fundamental process has shifted.
We actually talked about this at our conference this week. Some of the attorney sessions were asking, how are we evaluating AI outputs? I think that's relevant as well. AI-driven decisions, or even humans making decisions based on their AI tool set, now exist within regulated environments. I think that's just as important as the control environment.
Governance fragmentation is now breaking down. With these frameworks, we need to be evaluating: where is AI in the process? It's not just, are you using AI. It's, are you governing it correctly, and is it relevant for the framework we're evaluating it against?


Jordan Eisner  

And you had talked about those four in particular: PCI, SOC 2, ISO, HITRUST.
Do you have examples from one to the other? Are you seeing a consistent theme among them? Is it like what you're talking about, that it's now embedded, now inherent in the control, and there needs to be governance around it? You know, maybe things that didn't exist


Brandon Breslin  

Yeah.


Jordan Eisner  

Two or three years ago in those frameworks, but do now, moving forward.


Brandon Breslin  

Sure, yeah, I can give some examples. But before even giving examples: I mentioned at the beginning of this question that the convergence is happening at the risk and accountability layers. That comes back to people. Before you even go down the path of governance, you first need to evaluate who, or what team, is responsible for the AI tool sets or AI decisions happening within the organization. You cannot let it be the wild, wild west out there. If it is, it's going to control you in the future. Are you going to be managing AI, or are you going to be managed by AI? That's the critical question to be thinking about. So if you want to stay ahead of the curve, you need to be thinking about the risk and accountability layers from the get-go. And that's where governance is intersecting, because, let's take ISO 42001 for example, each of the control sets within Annex A


Jordan Eisner  

Oh.


Brandon Breslin  

Are hitting each of those areas within those layers that are relevant to managing the AI tool stack you've established, or are looking to establish, within your environment. So I did want to clarify that: you need to make sure there's a human in the loop, that there's human involvement from the get-go.
You can't just allow somebody, or some team, to deploy a tool set in the environment with no guardrails around it, no governance around it, no strategy around it. You can't just jump in and execute. You need a plan for not just what tool, but how you're going to use it, why you need to use it, whether it's relevant, and whether it's governed well. Those are five key areas to be thinking about. So in regard to examples, maybe let's take PCI. There's...


Jordan Eisner  

Well, you know what, let me even add a second level to that question, and maybe I'm not thinking about it the right way. HITRUST has put forth AI controls and an AI risk framework. ISO 42001 is an AI management system. PCI and SOC 2, to my understanding, have not been


Brandon Breslin  

Right.


Jordan Eisner  

Direct in doing it, but more so have just incorporated it into their frameworks as a whole. At least PCI; I can't speak to SOC 2, you could better than me, and I know there's a bit more of a bring-your-own-controls approach to that framework. But I guess


Brandon Breslin  

Correct.
Right.


Jordan Eisner  

That's what I mean. They're just in the foundational layers of those four, beyond the specific AI frameworks they've created, just in what is legacy PCI, SOC 2, HITRUST, ISO. That's what I assume you're speaking of when you talk about how they're converging around AI in 2026.


Brandon Breslin  

They are, yeah. That's a great point. So there are two fundamentally different sets here. There are the frameworks, ISO 42001, the NIST AI RMF, the HITRUST AI additions, that are core related to AI governance specifically. And there are the traditional frameworks, if you will:
PCI; HITRUST; core SOC 2, where you develop your own controls against core principles from the AICPA; and then ISO 27001, the management system for your environment, but not necessarily AI-related. So there are two fundamentally different frameworks here, or core elements of the frameworks, that we need to be thinking about. The ones that are AI-driven are of course going to have an AI focus, because you are evaluating against those frameworks since you're thinking about deploying an AI system. You are strategizing on incorporating AI into your environment, or you just want to stay ahead of the game and be proactive: OK, before I develop AI, or before I use a third-party system or a platform that's out there, I need to be thinking about the guardrails, the security, the fence around my environment. Thinking about it with a defense-in-depth method.
The other side of the house, those core traditional frameworks, are not necessarily meant to evaluate AI. But the reason we're talking about them is this:
if you're an organization being evaluated against PCI, for example, and you have a payment solution, or you're a merchant accepting payments, or you're a service provider working with merchant customers, or even an issuer (it doesn't matter which), and you have AI embedded in that ecosystem,
you need to be thinking about the impacts to the controls being evaluated in your audit.


Jordan Eisner  

Uh-huh. OK, yep, that clears it up. That helps.


Brandon Breslin  

So you're absolutely right. You called it out that there are two fundamental areas here, but both are relevant when AI is included in your scope, or if it could be. We're assuming this podcast episode is relevant for organizations looking to incorporate AI into their enterprise operations. If you're just using it for smaller tasks, maybe you're in an alpha stage of dipping your toes into artificial intelligence, this is probably a little more advanced. Maybe it's not for your organization yet, but it could


Jordan Eisner  

We already have, right? To your point.


Brandon Breslin  

Could be in the future.


Jordan Eisner  

Yeah, well put. OK. So what are auditors in the market evaluating?


Brandon Breslin  

Yeah, I would say they’re looking at governance maturity, right? Yeah.


Jordan Eisner  

Sorry, sorry, let me clarify, because I didn't make that clear. A key point on that: during audits?


Brandon Breslin  

Yeah, sure, during audits. And it's funny you say that. I would say during audits, but it's also still relevant in the audit off-season, if you will, if you're not doing a year-round audit cycle. But you're absolutely right: if you're in an audit, it's going to be scrutinized a bit more than when you're out of the audit cycle.


Jordan Eisner  

Right, yeah.


Brandon Breslin  

I think governance maturity. If we put a scale on it: are you just dipping your toes into AI, or are you more on the advanced side of the house, where you've incorporated AI into your enterprise operations and it's a core fundamental business process or tool set you use to execute functions in your environment? So there's that scale we need to be thinking about, and that's where custom solutions come into play.
What can organizations expect to see? Is AI referenced in your risk assessments? Is it included in your change management process? Is it included in your vendor oversight or third-party management processes?
Is monitoring expanding to detect AI drift or output risk? Are you looking at biases in your data sets, whether you're developing your own models or using third-party models? Do you have a process to decide which outputs are used for business operations? Do you have guardrails? Do you have security controls in place: technical controls, operational controls, policy-based controls, documentation controls? All of these things are relevant when it comes to your security governance around AI.
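The checklist Brandon runs through here can be tracked very simply. As a rough illustration only, here is a minimal sketch in Python; the control names and the scoring rule are hypothetical examples for this episode, not items drawn from PCI, SOC 2, HITRUST, or ISO requirements:

```python
# Illustrative AI-governance readiness checklist. The item names and
# scoring below are hypothetical, not taken from any framework.
AI_GOVERNANCE_CHECKLIST = {
    "referenced_in_risk_assessment": False,
    "covered_by_change_management": False,
    "included_in_vendor_oversight": False,
    "monitoring_detects_model_drift": False,
    "output_review_process_defined": False,
    "guardrails_and_technical_controls": False,
    "policies_and_documentation": False,
}

def readiness_score(checklist: dict) -> float:
    """Fraction of checklist items satisfied, from 0.0 to 1.0."""
    return sum(checklist.values()) / len(checklist)

def gaps(checklist: dict) -> list:
    """Items still unaddressed -- the 'accountability gap' to close."""
    return [name for name, done in checklist.items() if not done]
```

The point of a structure like this is simply that each question Brandon raises becomes an explicit, assignable item rather than a vague intention; a real program would attach an owner and evidence to each entry.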


Jordan Eisner  

OK, helpful. Um.


Brandon Breslin  

And I would say that all falls within the ecosystem of governance maturity around AI. It's a fundamental shift from just saying, oh, I'd like to use AI in our environment, I'm going to go sign up for Claude Enterprise or ChatGPT Pro and start using it. That's an execution strategy, or not even a strategy, that's just executing. You want to have a plan, a strategic path forward. What's the right tool set? What are we even trying to achieve? What efficiencies are we trying to gain? What business processes are we trying to perform with a new tool? Is it right for us at this time? Are we at the risk maturity level to accept the risk of a new tool coming into our environment? And are we ready to govern it appropriately?


Jordan Eisner  

So one of the things you talked about and mentioned is the AI accountability gap. So where are organizations struggling with this?


Brandon Breslin 

Yeah, you know, I love talking about the AI accountability gap, because like I mentioned earlier, it's all about where you are managing the process as a human or as a leader. If you are a leader in your organization, or if you are charged with governance in your organization, where are you inserting yourself, or your team, into the process? You need to stay ahead of the wave, or the wave will crush you. AI adoption is moving faster than governance, so you need to put governance right at the forefront. Like I've said multiple times: don't just execute. Have a plan, strategize, evaluate, then execute.
Some of the common issues that come from the AI accountability gap, what we see out there in the market: no clear ownership. There's no team or individual responsible for, or charged with, governance of AI. An organization says, we want to start using a tool set, or we want to go down the path of using AI, because it's a buzzword, something shiny and new, but they don't think about it from a risk standpoint and an ownership perspective.


Jordan Eisner  

Yeah.


Brandon Breslin  

Other issues: it's spreading across security, IT, compliance, and business units. They're all using different solutions, or again, they're just executing with no plan. Or governance leaders and executive leadership don't even know
which tools are being used in their environment. That's even worse. That's where shadow AI comes into the mix. And boards, directors at the board level, are being asked about oversight by third parties or other governance structures they might be working with.
If you can't even answer that question, it's time to look in the mirror and say, what are we doing? We need to stop. We need to establish a committee, a process, for making sure that we're planning for this. We're


Jordan Eisner  

Yeah.


Brandon Breslin  

Strategizing and we’re going about this in the right way and we’re thinking about the security and the guardrails before we just execute.


Jordan Eisner  

I think that leads to a good wrap up point. So for those organizations that are like what you’ve talked about here, they’re beyond using it for minor tasks. They’re starting to really embed it more meaningfully inside their organization.


Brandon Breslin  

Right.


Jordan Eisner  

They're hearing these things you're talking about, and they're going, we've got some accountability gaps. We haven't really considered how that's going to impact at that scale, across the organization, and the ripple effect from there.


Brandon Breslin  

Right.


Jordan Eisner  

What’s a practical starting point?


Brandon Breslin  

Yeah. Before we even get to the starting point, one thing we haven't talked about yet is those that are past that stage. So maybe I can speak to two different audiences. For those more on the advanced side of the house: maybe you've already gone down the path of strategizing and you're executing on a strategy, or you're in the middle of planning and trying to figure out the best way to do it, or you're implementing autonomous agents or autonomous workflows, whatever it may be. Think about the process you're using the tool set for. What is your goal? Don't just say, we want to use AI because it's a buzzword, something new and shiny and exciting. Are you actually fulfilling a goal for your organization? Are you aligning business and IT with the tool set you've implemented? Is it moving the business forward? Is it helping you scale? Is it helping you drive better margins or additional revenue?
Is it actually solving a problem you've identified in your organization? I wanted to call that out first for those thinking, we're already past this, Brandon, we're already using AI. Everybody's using AI now. It's how you govern it, and what stage of implementation you're at. So I did want to speak to those folks.
For the ones that haven't started yet: identify a plan, and identify accountability. Who is in charge? Who's going to be your AI leader, or your AI team, in your organization? Who is going to be responsible for doing the research, identifying the problem, and developing the plan? Then get executive buy-in. That's just like any other project in an organization: if you don't get executive buy-in, it will fail. We see that; statistics and research validate that you need executive buy-in. And you need to come at it from the angle of security and governance. Don't just execute. Have a plan, and have a security mindset and an audit mindset when establishing your plan for implementing AI in your organization.
And then the third piece, I would say, is think about the frameworks you're currently being audited against, or considering for the future.
If there's a framework you're going to be evaluated against, how is AI going to affect those controls?


Jordan Eisner  

Yeah.


Brandon Breslin  

And if you don't know where your AI exists, or where your usage of AI exists, you can't govern it. So a discovery exercise is critical.
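That discovery exercise can start as simply as a pass over a software inventory. Here's a hedged sketch in Python; the inventory entries, field names, and the `ai_enabled` flag are hypothetical stand-ins for whatever a real CMDB, SSO log, or expense report would actually provide:

```python
# Minimal sketch of an AI-usage discovery pass over a software inventory.
# All tool names and fields below are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tool:
    name: str
    owner_team: str
    ai_enabled: bool
    governance_owner: Optional[str] = None  # None = nobody accountable

def shadow_ai(inventory):
    """AI-enabled tools with no assigned governance owner --
    the 'shadow AI' an auditor will ask about."""
    return [t for t in inventory if t.ai_enabled and t.governance_owner is None]

inventory = [
    Tool("TicketBot", "Support", ai_enabled=True),
    Tool("Payroll", "Finance", ai_enabled=False),
    Tool("CodeAssist", "Engineering", ai_enabled=True, governance_owner="Security"),
]

for tool in shadow_ai(inventory):
    print(f"Ungoverned AI tool: {tool.name} (owner team: {tool.owner_team})")
```

Run against this made-up inventory, only `TicketBot` is flagged: it is AI-enabled but has no governance owner, which is exactly the gap Brandon describes.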


Jordan Eisner  

I think that's a good bow on this, and I would leave it with one more recommendation for those in that position: don't be afraid to raise your hand. You're not alone. There are a lot of organizations going through this. I heard an ad on the radio this morning, and yes, I still occasionally listen to AM radio.


Brandon Breslin  

Yeah.


Jordan Eisner  

But they were really speaking to small local businesses: hey, are you trying to even understand how you can leverage AI in the first place? A whole industry has been built


Brandon Breslin  

Sure.


Jordan Eisner  

Overnight, which some people will worry about a little bit. But there are people out there that have spent a lot of time, you're one of them, Brandon, on AI, its power, its usage, how it can be leveraged for all sorts of purposes. So get online.


Brandon Breslin  

Sure.
Right.


Jordan Eisner  

You know, reach out, ask your network: what are you doing? How are you figuring it out? I'm seeing ads, I think there were Super Bowl ads, for big, huge consulting firms and how they're helping organizations leverage AI. IBM's doing it.


Brandon Breslin  

There were.


Jordan Eisner  

My point is, there's third-party support in this area, and I'm not just plugging CompliancePoint here. Yes, of course, we know security, compliance, and governance in these areas. Yes, we're working with clients on this, and have been for a while, not just auditing against a framework, but really on best practices around it. You know, Brandon, you've been


Brandon Breslin  

It is.


Jordan Eisner  

Hand in hand in this with a lot of clients, but in other areas too: how to leverage it, how to use it. So I would encourage those organizations and our listeners to seek third-party support, because I think the ROI on that could be huge.


Brandon Breslin  

Absolutely.
Yeah.
Yeah.
I think that's a great point. And again, not to plug CompliancePoint, but it is something we're currently helping organizations with right now. We've developed our own framework that takes key principles from ISO 42001, the NIST AI RMF, and HITRUST AI into a core set of controls. It gives you a good structured baseline
for identifying where you are on that maturity scale I talked about. When it comes to AI, where are you on that scale? You don't know until you go down the path of identifying where AI exists in your organization. What are your goals for your organization? What's your risk tolerance level? What's your risk maturity? And then finally, where are you
looking to go in the next year, five years, ten years, from both an organizational perspective and a security standpoint? What are your business goals, your IT goals, your security goals, and then the organizational goals for moving the business forward with AI involved? That's a big factor that tends to get overlooked. So get that executive buy-in, and do a gap or readiness assessment to understand where you are from a structured baseline perspective: an initial understanding of where your organization is right now, and where you're looking to go in the future.


Jordan Eisner  

Yep, well said. All right, I think that's a good stopping point. So for our listeners, if you have questions on this, please don't hesitate to reach out. Visit us at compliancepoint.com, or e-mail us at connect@compliancepoint.com. Brandon's very active on LinkedIn, and you can find me there as well, so if that's your channel, DM us there. We'd love to hear from you, hear what sort of issues you might be working through, and see if we can be a guiding hand. And until next time, everybody,
be well.
