S4 E03: AI Risk Management That Scales With Adoption
Transcript
Jordan Eisner
Hello and welcome to another episode of Compliance Pointers, sponsored by Compliance Point, of course. And who do we have on the podcast today? None other than, I’m sure the leader in guest appearances on Compliance Pointers, Brandon Breslin, our director of Security Assurance.
Brandon Breslin
Well, I don’t know. I’ve seen Matt Cagle a lot recently on some of these.
Jordan Eisner
Well, he’s a host. He doesn’t count. You know, he’s joined the ranks of hosts.
Brandon Breslin
Oh, that’s true. That’s a good point. That’s a good point. So hey, we’ve got, I’m loving that we’re putting so many of these episodes out there. I love it.
Jordan Eisner
And you’re liking it so much that you’re recommending topics as you did with this one today.
Brandon Breslin
Yep. I’m getting our subscriber count up there. I like it. Hopefully with some of these new topics.
Jordan Eisner
Yeah. And what we’re talking about today is timely, because we came into this podcast to talk about the OpenAI and ServiceNow partnerships around AI and how relevant ISO 42001 can be to some of that. But then just recently we’re seeing these layoffs, UPS, Dow, Amazon, and they’re leaning into AI. They’re saying the quiet part out loud, I guess: hey, we’re eliminating however many thousand jobs because we’re going to use AI.
Jordan Eisner
And so that, I think, yeah, yeah, exactly. So I think that’s timely today too. And, you know, we can talk about how some of that intertwines. Does that make 42001 more relevant? Does it make other AI certifications or frameworks…
Brandon Breslin
It’s changing everything.
Jordan Eisner
More relevant than they already are? Or do you expect that organizations will, I don’t know, I guess rally to those more so than they would have before? But before we get into that, for our listeners and watchers, in case you don’t know Brandon: he’s the Director of our Assurance Services group. That covers information security readiness and audit work against common InfoSec frameworks: PCI from the payment card industry, SOC 2, ISO 27001, 27701, 42001 and some extensions and adjacents, and HITRUST. But that’s just one piece of it, right? To be able to audit against those frameworks and work with organizations to certify that they’re in good standing with controls, you have to have technical expertise. You need to understand, you know, what’s out there from the enterprise level all the way down, and what’s being leveraged at these organizations from a systems and application standpoint. And that’s inclusive of AI now, right? So that’s why he’s such a good guest on this
podcast today to talk about these things. So this stuck out to you, Brandon. You saw these partnerships, you saw these announcements. What’s the big important signal here for you?
Brandon Breslin
Yeah, I think the… and I was just thrown off, I think my clock behind me just broke. I don’t know if you saw, the hand on the back of it just fell. And that’s OK, it’s not powered by AI, exactly. So I think the big signal is that we’re seeing a massive shift in, you know…
Jordan Eisner
Not powered by AI.
Brandon Breslin
Not only just tasks, but, you know, enterprise-wide infrastructure changes: things that used to rely on the traditional entry-level employees, but also middle-level management, are being completely replaced by AI, right? I think there’s a massive shift coming through, especially for executives, of looking at every single thing that’s done in the organization, all delivery lines, right, of where can we start to eliminate positions and where can we replace that with, you know, a more automated tool or artificial intelligence.
That doesn’t, you know, have issues, right, in that actual delivery of the process. I think there are of course still kinks to be worked out, but we’re already seeing that shift.
And we can talk about that ServiceNow, you know, partnership that you mentioned, that came out at the beginning of last week. You know, some of these integral workflows that used to just be manually created are already being, you know, radically changed by AI.
So even just down to like support tickets and change tickets, right? So it’s really interesting to see.
Jordan Eisner
Yeah. And so I guess, for our listeners who tune into this podcast, they’re probably saying: OK, yep, seen that, right? That’s good. That’s all in place. But what I’m here for is: how does this shift change risk and assurance conversations for organizations?
Brandon Breslin
Yeah, I think it opens Pandora’s box for managing risk, right, when you’re talking about, you know…
Jordan Eisner
Well, sorry to interrupt your talk track there, and I should let you finish that thought, and you will finish that thought. But maybe when you do, I’d be interested in… I think it does and doesn’t at the same time, right? Because humans are some of the biggest risk factors within an organization. But sorry, go ahead.
Brandon Breslin
You’re right, it’s a good point. It’s a great add, because it is interesting to think about, you know, humans. What’s the old saying, right? Humans are, you know, the weakest point of security. If there’s an opportunity for a control to fail with human intervention, it’s probably going to fail at some point, right? So I think there is that factor. But when you’re incorporating any new technology, not even AI, right, just look at the newer technologies that have been introduced over the last few decades: new risks are introduced. However, while new risks are introduced, there are new pillars that can be achieved, new business opportunities that can be expanded into at a rapid rate. And with AI, it’s a force multiplier at this point, right? There are new ventures, new opportunities that businesses can tap into that have never been explored at this scale, right? We’re not talking about 2X, 3X. We’re talking about 10X, 100X, 1000X productivity and opportunities, from a business scale, reaching customers, reaching delivery timelines that have never been achievable at this pace, right? There are so many things that have been unlocked now with AI. So I think there are kind of two paths, right? There are the output and scale opportunities with AI. But then there are the risks, some of them mitigated, like even taking humans out of the equation, right, that you brought up. And then there’s also introducing new risks: ensuring your AI data is locked down. If you’re going out to the Internet to have search capabilities and things like that, ensuring your AI data is not exfiltrated or sent out. You can reach out beyond the boundaries, but your data cannot get out of those boundaries, right?

So it’s kind of a one-directional, you know, process. I think that’s a big factor that a lot of people forget, too, when they’re developing, you know, different sandboxes and things like that: to ensure that the data cannot be exfiltrated out of that instance.
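That one-way sandbox boundary can be sketched in code. The following is a minimal illustration, not any particular vendor’s sandbox: outbound requests are allowed only to an approved list of destinations, and only when the request itself carries none of the sandbox’s sensitive markers. The domain names and patterns here are invented for the example.

```python
import re
from urllib.parse import urlparse

# Hypothetical egress policy for an AI sandbox: the tool may reach out to
# approved sources, but sandbox data must not travel in the other direction.
ALLOWED_DOMAINS = {"api.example-search.com", "docs.example.com"}  # assumption
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # SSN-shaped identifiers
    re.compile(r"(?i)internal-only|confidential"),  # tagged sandbox content
]

def egress_allowed(url: str, outbound_payload: str = "") -> bool:
    """Allow only if the destination is allowlisted AND the outbound
    payload carries none of the sandbox's sensitive markers."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_DOMAINS:
        return False
    return not any(p.search(outbound_payload) for p in SENSITIVE_PATTERNS)
```

With rules like these, the tool can still search approved sources, but the same destination is refused the moment sandbox data shows up in the outbound payload.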
So I think that’s a big risk. And I think when it comes to, you know, deployments, right, and I know we’ve touched on this a little bit in some of our other podcasts, there’s the governance piece. Organizations need to get to a point now where you’re assigning roles and departments to manage the new AI tools that are out there. Which is kind of ironic, because there are layoffs happening; however, AI is introducing new opportunities, right? New roles that need to be established. And I think some of the smaller organizations are using the…
Jordan Eisner
Jobs are being created.
Brandon Breslin
You know, network security teams, or server security teams, or even the CISO role, to govern the, you know, process for AI. I think it varies based on the organization, but it is something that you have to be thinking about from the governance perspective.
Jordan Eisner
I guess I should ask, just for you and how you manage the team: how are you keeping up, right, with AI, and just how quickly it’s changing, how fast it’s being adopted across so many organizations? I mean, year-over-year audits are not the same anymore, I would expect, for most of your
Brandon Breslin
No, they’re not.
Jordan Eisner
Organizations.
How is that factoring into how you look at the year, the 100-plus, maybe even, you know, close to 200 audits that we do a year as a business?
Brandon Breslin
Yeah.
Jordan Eisner
What are you anticipating and managing with your team around AI?
Brandon Breslin
Well, I think, if we’re talking in an audit, you know, vacuum, if you will… or maybe let’s just focus on the audit realm for a second. Yeah, the audit vacuum: the, you know, process of moving away from the year-over-year audits.
Jordan Eisner
Audit vacuum. Yeah, that’s fine.
Brandon Breslin
It’s the culture of the organization: for us to move from, you know, “oh, this is a one-time thing, this is a one-time audit, this is a one-time process” to constantly evolving, constantly shifting. Understanding, OK, what is the data that we have out there? What needs to be harvested, what needs to be mined, what needs to be protected?
Jordan Eisner
So constant validation becomes more and more important.
Brandon Breslin
The validation needs to be re-audited, exactly. And then you can look at it from a controls perspective: what needs to be, you know, continuously developed or refined, right? And I think the LLMs are actually pretty good at that when it comes to, you know, incorporating it. It just depends on the security and how close you want it to get to your data, right?
Jordan Eisner
So your audits maybe haven’t shifted totally, in that, you know, they still might be a one-time audit point in the year. But more of the controls, more of what you’re evaluating, more of what you’re even recommending for subsequent years, is that continuous monitoring or some sort of continuous checks need to start to take place.
Brandon Breslin
No doubt, no doubt. And a lot of the security tools that are purchased, whether they’re off the shelf or custom, a lot of them have, you know, AI modules built in. Some organizations divulge what they’re using on the back end. A lot of them are just LLMs, but a lot of them are more in-depth.
Jordan Eisner
Yeah.
Brandon Breslin
You know, true agent-based, taking workflows to a new level, right? So I think when it comes to control development and control refinement, we’re going to see that more continuous process. It’s not the “oh, it’s time to review our processes, it’s the beginning of the year” or “it’s the middle of the year,” right? We’re now shifting to continuously updating those and then continuously ensuring that they’re up to date, which I think, from a security perspective, significantly improves the posture of the organization and increases the maturity level of the security realm, if you will, or the security department, whatever the size is. When it comes to control validation, if it’s always up to date, the audit is not a big deal, right? I think where a lot of organizations struggle is if they have a one-time process for managing their program or checking their program, right, ensuring their controls are relevant and up to date.
If you only have that one-time check, then you end up with stale processes when the audit rolls around. You’re having to talk with the team: wait, who manages that? Who’s handling that, right? What’s out of date? What’s been patched, or what scans haven’t been run, right? All of these things.
But if you have, you know, AI modules built into your security program that are constantly checking your controls, checking your processes, doing recurring tasks for you, right? And I think we’re moving more into the agent side of the house. Having some of those workflows built out, even having agents run recurring tasks within your security program that security professionals used to do for you, that now can be handled by AI, I think that makes the audit process a lot smoother. Wouldn’t you agree?
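The recurring checks described here can be pictured as a small scheduler that tracks when each control was last verified and flags anything past its re-check window, so staleness surfaces continuously instead of at audit time. The control names and frequencies below are invented for illustration.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical continuous-validation policy: each control has a re-check
# window, and anything not verified within its window is flagged as stale
# long before an annual audit would catch it.
CHECK_WINDOWS = {                      # assumption: per-control frequencies
    "vuln-scan": timedelta(days=7),
    "access-review": timedelta(days=30),
    "patch-level": timedelta(days=14),
}

NEVER = datetime.min.replace(tzinfo=timezone.utc)  # "never verified" default

def stale_controls(last_verified: dict, now: datetime) -> list:
    """Return, sorted, the controls whose last verification is older
    than that control's re-check window."""
    return sorted(
        name for name, window in CHECK_WINDOWS.items()
        if now - last_verified.get(name, NEVER) > window
    )
```

An agent or scheduled job running this hourly would surface an overdue access review weeks before a year-end audit would.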
Jordan Eisner
Well, so let’s see if we can segue this into ISO 42001 then. Do you feel that it calls for more of a continuous nature, given what it is and what it’s certifying? I mean, ISO in general is about auditing a management system, not necessarily the area itself, right? So I guess maybe I’m answering my own question there, but maybe you could allude a little bit to how ISO 42001 is still relevant in the scenarios we’re talking about, where we’re seeing more and more wide AI adoption, and that, you know, maybe…
Brandon Breslin
It is. Correct.
Jordan Eisner
The models are already certified, but for the organization, is it relevant for them to get an AI or 42001 certification based on how they’re leveraging it?
Brandon Breslin
I would argue that as adoption of AI increases, the necessity for ISO 42001 also increases because, again, this is a governance framework. It’s built on the idea that as you are developing an AI management system on top of your information security management system, right, there are attributes, there are controls, there are relevant requirements that you need to be thinking about when it comes to the boundaries of your AI management system, and ISO 42001 touches those. The biggest risk, and we’ve been talking about risks, right? The biggest risk for AI adoption is not the speed. I think many people think the biggest risk is speed. I don’t think it’s speed. It’s the lack of awareness of the governance requirements that you need to be thinking about when it comes to your tool deployment, right? So that starts from the build process, if you’re building agents, if you’re building workflows.
Jordan Eisner
Oh.
Brandon Breslin
You need to be thinking about the security aspect and the governance elements that need to be incorporated into the development of that AI management system, right? And that goes for any technology. We were talking earlier about, you know, earlier eras of technology deployment, even the 2000s, the 2010s.
With any technology deployment, you wanted to incorporate security and compliance into the development, into the SDLC, into the deployment process, into all of that, you know, the A-to-Z, the soup-to-nuts process of deploying a new service or a new tool. That has not changed with AI, right? I think it’s just been enhanced to a greater level, and there are more things to think about, right, when it comes to that build process or the deployment process of that tool.
Jordan Eisner
That’s a great analogy. I was thinking exactly what you’re talking about: the software development lifecycle and having some of that security built in. I like how you put that. It’s not the speed, it’s the lack of governance, right? And the security mindset as you’re building out or adopting or creating these agents, or doing whatever you’re doing with the AI.
Brandon Breslin
Yep.
Jordan Eisner
But I still think there’s perhaps a little gap there, and it’s like, well, who knows how to do that, right? You might. If I’m building an agent for the sales team here at CompliancePoint…
Brandon Breslin
All right.
Sure.
It’s still new in the marketplace.
Jordan Eisner
I could absolutely build something that’s maybe not secure, ’cause I don’t know. So I guess a big component there is education and training. If you’re an organization that wants to adopt it, take the added step of getting some sort of education, training, awareness on how to incorporate security into the build-out.
Brandon Breslin
Yes, that’s a great point. The education and the training are fundamental, right? With any technology, you want the technology to empower the process or to enhance the process. When you’re developing a process, right, you want training and education incorporated directly into that before you get to the deployment process. So let’s say you’re doing user testing, or let’s say you’re doing pilot testing. You want to start educating the necessary individuals that are going to be both using the tool and managing the tool. And this applies to anything, right? Not just AI. This applies to any tool deployment. When you’re looking at secure, security-focused development of any tool, you want to incorporate the users and the administrators, the people that are managing the tool, right from the get-go, right from the beginning, so that they are aware of the tool’s capabilities, but also, and here’s the kicker, the guardrails. We’re talking about risks; the guardrails are just as important, if not more important, than the capabilities of the tool. Because when you’re talking about data security, data protection, and a lot of these governance elements that ISO 42001 focuses on, the guardrails are critical, right? You have to ensure that your AI data is protected, and that all starts with the guardrails that you develop within the tool. It’s great if you have all these capabilities with a tool, but once your AI data leaks out, that puts all of that to shame.
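One concrete shape a guardrail can take is an output filter that screens what a tool emits for protected data before anything leaves the boundary. This is a deliberately simple sketch; the two patterns are illustrative stand-ins for a real data-protection policy, not a complete one.

```python
import re

# Hypothetical guardrail: scan tool output for protected markers and redact
# them before anything is returned to a user or an external system.
GUARDRAIL_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{8,}\b"),  # invented key shape
}

def apply_guardrails(text: str) -> tuple:
    """Redact every match and report which guardrails fired."""
    fired = []
    for name, pattern in GUARDRAIL_PATTERNS.items():
        text, count = pattern.subn(f"[REDACTED:{name}]", text)
        if count:
            fired.append(name)
    return text, fired
```

Real deployments layer this with access controls and logging, but the principle is the one discussed above: capability gated by a guardrail.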
Jordan Eisner
Yeah, well said. I think that’s a good point to end on, unless you’ve got anything else you want to add.
Brandon Breslin
No, I think, just to go back to your first comment: as I think about some of these layoffs, right, UPS, Amazon, some of these other ones that are out there, it really is interesting to see how the landscape is changing. And I think, you know…
To, you know, society at large, to those that are charged with governance right now, I would say: be careful with not the speed but, again, the, you know, guardrails that you’re putting in, the risks that you are developing your tool with, right, that need to be thought about.
When you’re developing a new tool, if you’re thinking about using a new AI model or developing a new AI model, be careful of the risks. The capabilities are there, we know that, right? But be cognizant of some of these requirements, whether it’s ISO 42001 or some of these other frameworks that are continuing to come out in the marketplace. I know there’s the NIST AI RMF, and even if you’re doing HITRUST, there are HITRUST AI controls. There are so many out there now that are relevant to your organization, but just be cognizant of what needs to be
You know, considered when you’re deploying a new tool.
Jordan Eisner
Yeah, that’s great. This has been really good. I think listeners are going to get a lot out of it. And I would say, if they’re seeking those things, education, consulting, advisory on how to build in that security from the get-go, or how to get ready for an audit against, for instance, ISO 42001, to get a management system in place and then ultimately certified, please don’t hesitate to reach out. The easiest place to find us is compliancepoint.com. There’s an e-mail distro that should be easy to spot on there, but if you can’t find it for some reason, it’s connect@compliancepoint.com. You can e-mail that address, and it’s monitored pretty much, you know, all business hours of the day. Send an e-mail in and you’ll hear from us, usually within 5 minutes, and, you know, hopefully we can provide some value for you. But Brandon, thanks. I’m sure I’ll be talking to you again on this podcast. And now, to all our listeners and viewers, be well.
