S3 E14: Leveraging AI in PCI Assessments
Transcript
Jordan Eisner (00:00)
All right, welcome back. Another episode of Compliance Pointers. I am joined by our most frequent guest, Brandon Breslin. This is his ninth appearance on Compliance Pointers. Brandon, thanks for coming back and good to have you.
Brandon Breslin (00:14)
Yeah, thanks for having me Jordan. I appreciate it.
Jordan Eisner (00:17)
I expect even more of a wealth of information than you normally bring, which is usually a wealth of information, now that you're a director and not an associate director. So congrats on the recent promotion.
Brandon Breslin (00:28)
Thank you, I appreciate it.
Jordan Eisner (00:30)
I see you sporting the jacket of the season right now, the Masters apparel.
Brandon Breslin (00:36)
Best week of the year. Best week of the year. Should have perfect weather down in Augusta. Wish I was there, but maybe next time.
Jordan Eisner (00:43)
Yeah, yeah. So I think we'll just totally throw our marketing team off and talk about the Masters and tariffs.
Brandon Breslin (00:54)
Yeah, exactly. There you go.
Jordan Eisner (00:58)
No, we’re not going to be talking about either of them, but I am curious who you think is going to win it.
Brandon Breslin (01:03)
It's hard to go against Scotty, but maybe it's Rory's year. You never know. He's trying to get that career grand slam and he's just one event away, but it's Augusta National that gets him every year. So I'm cheering for Rory, but I always love to see Scotty do it. And you never know. There's a ton of golfers out there, some new ones this year.
Jordan Eisner (01:25)
So I don't follow golf too closely, so I might look really ignorant here, but is Jordan Spieth in it?
Brandon Breslin (01:35)
So Jordan Spieth has definitely dropped off the last few years. He's been trying to get his swing back, working on a few new things, but he's definitely not in his 2015 shape, when he won the Masters. But hey, you never know. I mean, he's playing. And yep, Russell Henley's in it. All the guys you see on TV, most of them are in it for sure. But you never know with Jordan Spieth; he could turn it on at any time. That's the beauty of Spieth. But it sucks that Tiger tore his Achilles and is not in it.
Jordan Eisner (02:11)
Yeah, alright, okay. I think Russell Henley won something recently. I have people that update me on that on occasion just because they know we're both from Macon. Russell Henley probably wouldn't remember me, but I squared off against him in basketball. He was actually a good big guy, a great basketball player. An awesome basketball player and soccer player. I knew he was going to be a pro golfer; I remember the other parents talking about that when we were like 12.
Brandon Breslin (02:40)
You see it with athletes. A lot of them are athletes in multiple sports or could have gone pro in multiple sports, which is incredible.
Jordan Eisner (02:48)
Yep, okay. DeChambeau, Dustin Johnson, these guys all in it?
Brandon Breslin (02:52)
Yeah, they're all in it for sure. Bryson always has a good chance. Brooks always has a good chance; he plays well in the majors. But it definitely seems to be Rory and Scotty as the two favorites. I haven't seen the latest Vegas odds, but I would guess those two.
Jordan Eisner (03:08)
I watched the documentary on last year's tournament on a Delta flight recently, and it got me a little excited about watching. Cool, man. Alright, well, we'll dive in. Sorry, viewers and listeners, if you were wanting more of that material; I'm sure there's no shortage of it. What we are talking about today is recent guidance from the PCI Security Standards Council on integrating AI into PCI assessments.
Hopefully we don't get a bunch of drop-off in listeners and viewers after announcing that. But I can tell you there's not as much information readily available on this as there is on what we were just discussing. So we're going to get Brandon's take and perspective as a QSA, a Qualified Security Assessor qualified by the PCI Council. So let's dive right in. What are some areas of emphasis in the guidance from the SSC?
Brandon Breslin (04:12)
Yeah, and I'll say before we even get into the highlights of the guidance, this one was a bit of a shock to all of us in the PCI community. Usually the PCI Security Standards Council takes a hands-off approach. Sure, they'll give some tactical guidance based on, say, scoping and segmentation or multi-factor authentication, whatever it may be. They do still give out guidance, right? But when it comes to assessments, how assessments are driven and how the assessor should run the assessment, they usually take a hands-off approach. There's the QSA program guide and some roles and responsibilities for what we have to do as assessors. However, we all thought they were just going to take the hands-off approach for integrating AI, or using AI, in assessments. At the PCI community meeting in Boston, they didn't give any heads up that this was in the works. So it was definitely a surprise when they just posted this online and sent everybody in the community an email about it, which is not normal for them. But as it relates to some of the highlights, I would say the big piece to walk away with is that AI is not a replacement for an assessor. It's just a tool. It enhances the process, in some ways positive, in some ways maybe more concerning.
But the gist of the guidance is, you cannot put the entire assessment into the AI. You can't run the entire assessment through AI, and you can't make judgment calls through AI. You have to keep the standard process that you would normally follow as a QSA firm or a QSA assessor, and then you can use AI to enhance that process, but it cannot be the end-all be-all. When it comes to reviewing evidence, you can't leverage the AI to say,
Is this compliant or not compliant? You can use it as a resource to help come to that conclusion, or maybe you can have it take the first pass at that conclusion if you have a decision-making AI. However, it cannot be the final say; you have to be the final say on the compliance status of that requirement. I would say that's the gist of it. The guidance does go into some more tactical elements of where AI can be used. It can, of course, automate tasks like reviewing documentation, taking a first pass at writing some of the ROC wording, report generation, work paper creation, those kinds of pieces. But at the end of the day, AI can just be an assistant. It cannot be the assessor.
Jordan Eisner (06:51)
Gotcha. Yeah, okay. It was pretty straightforward, and you summarized it very well. I remember reading about it when they announced it, and some of it seems like, okay, I think everybody would expect that. To your point, they don't normally comment on this, but they felt the need to, which was a little bit of a surprise. So what are the benefits and the risks identified in the guidance? Why do this? Are there any benefits or risks that you would want people to be aware of that weren't mentioned in the guidance? Does this just create further questions around it, or do you feel, okay, that's pretty good, that's what we expected, now it's confirmed and we can move on?
Brandon Breslin (07:38)
I would say I lean more towards the latter, right? I think there's a general sense in the community, not just in PCI but in the cybersecurity and compliance community overall, that AI can enhance the process in the right ways, and it can also hinder the process in some ways, or maybe it opens Pandora's box a little bit in some respects. There are more risks that you open up when leveraging an AI tool.
You want to be cognizant about client data. From our standpoint, we do not input any client data into an AI tool. There may be other companies that take a different stance on that, but that's the number one thing you want to be careful about. Then there's accuracy of data: are you willing to live with whatever result comes out of that AI tool? Most likely not. You want to make sure you take that extra step to review it.
If you're using an AI tool to review documentation, make sure you're taking that extra step to confirm that what it's telling you is accurate. If it's work paper creation or report writing, you don't want to just throw it into AI and say, okay, great, it's done, looks good. You want to take that extra step to ensure the accuracy of that output. Then also, from a security perspective, going back to the client data piece, even if it's not client or customer data, it could be your own organizational data. If you're not a QSA company and you're leveraging an AI tool, make sure your team members, your employees, any third parties you work with are aware, so they're not just openly dumping customer data or organizational data into that platform without any thought or policy or procedure. You want to make sure you have boundaries around what your company's stance on that data is.
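The boundary Brandon describes, keeping cardholder data out of AI prompts, can be made concrete with a small guardrail. The sketch below is purely illustrative and is not something the Council's guidance prescribes; the function names and patterns are assumptions. It screens outbound text for 13-to-19-digit runs that pass the Luhn checksum before a prompt is allowed through:

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: filters out most random digit runs that are not card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Candidate PANs: 13-19 digits, optionally separated by spaces or hyphens.
PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def contains_possible_pan(text: str) -> bool:
    """True if the text holds a 13-19 digit run that passes the Luhn check."""
    for match in PAN_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            return True
    return False

def guarded_prompt(text: str) -> str:
    """Refuse to forward a prompt that appears to contain cardholder data."""
    if contains_possible_pan(text):
        raise ValueError("Possible PAN detected; prompt blocked by policy.")
    return text
```

The Luhn check is what keeps this from flagging every long number (order IDs, timestamps); a production DLP control would go much further, but even a gate this small turns "don't paste card data into AI tools" from a policy sentence into an enforced step.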
Jordan Eisner (09:31)
Mm. Okay.
So that's good, even for those that maybe have stumbled upon this because of AI or because of the Masters content. Now they're getting a little bit of information beyond just PCI, even if that's not why they came.
Brandon Breslin (09:47)
Exactly, exactly. I think a lot of these concepts that the PCI Council put in their guidance, and it's a very small piece of guidance, almost like a blog post if you will, just a couple pages in a PDF, most of these core concepts can be applied to non-assessors, right? It's what we hear about all the time since AI has become a large facet of our lives and the workforce over the last 12 to 18 months: sensitivity of data, protection of data, handling of data, accuracy of the outputs, understanding the backend architecture of the tool you're using, what LLMs it's using, whether you're comfortable with how they're training on that data. So you really want to be cognizant of the inputs. The old saying of garbage in, garbage out still applies to AI as well. You want to be careful about what you're putting in those tools.
Jordan Eisner (10:51)
Yeah, I know when I search things on Google, and I'm not going to pretend I use any search engine other than Google, you get the AI overview. I'm talking about for work, but for personal life too. And it's so easy to just read that and go, okay, that's what it is. But then I'm like, wait, hold on a second, what's the source of that? I should resist the habit of just relying on whatever the AI churns out every single time.
Brandon Breslin (11:01)
Exactly. No doubt, no doubt. It's having that second thought, just what you portrayed, right? Not just accepting that as truth. Maybe it is truth, but having that second thought of, is this accurate? I want to double-check it. I want to make sure the source is accurate, especially for QSA companies, including CompliancePoint ourselves. We are ultimately responsible for ensuring the validation of compliance of each requirement, right?
When we're doing an assessment, of course, the entity being assessed is ultimately responsible for compliance in their environment. But as an assessor company, we want to be accurate in our compliance evaluation of each requirement. And if we're using a tool for that, we want to make sure it's accurate. Sure, we can leverage it a little bit for decision making, but we have to ultimately be the ones to make that judgment call as an assessor. Have that professional skepticism, that second thought, that professional judgment: is this data accurate? Is the output I'm receiving from this tool accurate? And something we haven't talked about: are we using the tools in the right way? Just because the guidance says, hey, these are some options, doesn't necessarily mean that's the right way for your company to go about it. We use tools in different facets for different reasons.
Maybe your company uses them in different ways, and that's okay. That's the beauty of having different assessor companies out there; we recognize every QSA is going to do things a little bit differently. So just have that second thought, have that initial gut reaction, and listen to your gut.
Jordan Eisner (13:03)
So maybe a closing question, and flipping it a little bit, for organizations considering adding AI: the guidance is more so around assessors and how they're leveraging AI in the assessment. But for a company where AI is now part of their scope for an assessment, what are some things they need to plan for?
Brandon Breslin (13:29)
Yeah, so you're talking about compliance implications for the assessment. I would say if you're an entity, whether in cycle or out of cycle, there are a couple of considerations. If you're out of cycle, think about any major change in your environment. If you deploy a new AI tool, whether you created it or you use a third party for it, it may constitute a significant change, which could prompt a few things, whether that's pen testing or additional changes to diagrams, inventories, whatever it may be. If you're using a solely third-party solution for AI, then you want to understand how it's integrated. Is there any shared API usage, or is it purely employees going to their workstations to type inputs into a third-party solution? That may not be a significant change to the environment, but it is something to understand.
Are we talking about payment data? If you're a service provider, are you entering customer data into that AI tool? That could be a massive concern that we want to evaluate. Are you entering payment data into that AI tool? Are you thinking about your customers' data when you're entering this information? Are your employees taking transaction data and doing bulk uploads into an AI tool? Those are some big considerations when it comes to data security, and they can also have scoping implications.
Whether you're using a third-party tool or an in-house tool, it's absolutely something that needs to be considered for the assessment. If it's a third-party tool, a lot of those vendor management elements come into play: due diligence, contracting, agreements or service-level agreements, whatever your legal entity uses. Where's the responsibility? Where's the liability?
Something to be thinking about is: what happens if there's a data breach and your employees have entered customer data into that tool? Keep a lot of those what-if scenarios in the back of your mind, what happens if an employee does XYZ. I think all of us in the cybersecurity community know that humans are the weakest part of security, right? So think about what your employees could be doing with the tool.
If you've taken the stance, as an organization, not an assessor, whether you're going down the PCI path or not, of, we're just not going to do anything with AI, that's fine from an operational perspective. However, you need to be cognizant that your employees might be using AI tools anyway. Maybe you take the stance of, we're not going to write a policy or a procedure on that, we're just going to be completely hands-off.
Your employees could be taking advantage of that situation, entering customer data, spreadsheets, documents into AI tools, and you may not have any idea about it. So at the very least, you need to have a policy or procedure that says, we do not allow the usage of any AI tool, or something of that sort, if you want to take that hands-off approach. You want to get ahead of this. It's not going away. It's continuing to grow.
Jordan Eisner (16:46)
Good answer. I mean, not that I'm the judge of the right or wrong answer on that, but it was good information. In fact, it got the wheels turning in my head. Maybe we've done this before, we've done a lot of podcasts, but we should revisit what organizations are doing. Because as you were saying all that, I was thinking, you're just going to continue to get more and more questions around this, right?
Brandon Breslin (16:53)
No, no!
Jordan Eisner (17:15)
How this is integrated into current assessments, how the assessors are using AI. AI is continuing to change rapidly: the expectations, the risks, the vulnerabilities associated with it. So I think for a lot of InfoSec folks and organizations, okay, you've got your annual attestation and more regular attestation requirements. You've got your cybersecurity posture and maturity, your policies, your technology and how you set that up. But there's probably a lot of questions around AI.
What should I be doing? What sort of policies? How should I be informing my organization? What sort of ownership should InfoSec have over how AI is implemented and used across the business? Ideally, pretty big ownership, to understand the impact assessment around it. So we'll put a pin in that one and schedule it, maybe have you back for that. Yeah, that's a good 10th episode.
Brandon Breslin (18:08)
That's a good one. One thing that immediately comes to mind is that it starts with data classification. You cannot protect data if you do not know where your data is. So discovery and classification is critical. You need to know where your data is, and then from there, once you have identified all the data, you need to classify it. What is most critical to our operations? What could most impact our company if there is a data breach or data leakage? What is most important from a customer perspective or a third-party standpoint? Going through an exercise of data discovery and data classification has to be a top priority for the organization before you even consider protecting the data.
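The discover-then-classify exercise described above can be sketched in a few lines. Everything below is an illustrative assumption, not anything prescribed by the Council's guidance or any particular tooling: the sensitivity labels, the patterns, and the text-file-only scan are all placeholders for whatever rules an organization actually defines.

```python
import re
from pathlib import Path

# Hypothetical sensitivity rules, checked from most to least sensitive.
RULES = [
    (re.compile(r"\b(?:\d[ -]?){13,19}\b"), "restricted"),        # possible payment card number
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "confidential"),  # email address
]

def classify_text(text: str) -> str:
    """Return the highest sensitivity label whose pattern appears in the text."""
    for pattern, label in RULES:
        if pattern.search(text):
            return label
    return "internal"

def discover(root: str) -> dict[str, str]:
    """Walk a directory tree and build an inventory: file path -> sensitivity label."""
    inventory = {}
    for path in Path(root).rglob("*.txt"):
        inventory[str(path)] = classify_text(path.read_text(errors="ignore"))
    return inventory
```

The point of even a toy version like this is the output: an inventory mapping every location to a label, which is exactly the artifact you need before you can decide what to protect and what your AI policy should allow.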
Jordan Eisner (18:53)
Mm-hmm.
Brandon Breslin (18:56)
Yeah, I like that. I like that for another topic. Let’s definitely do a topic on that. I like that.
Jordan Eisner (19:01)
Yeah, until then I think that's a wrap on the PCI Council and their guidance around AI. Brandon, appreciate you coming on and commenting on that. And for our viewers and listeners, thank you for watching and listening. If you have any more questions around AI and the guidance from the PCI Council, or just in general, please don't hesitate to reach out. You know where to find us by this point: compliancepoint.com. You can email us directly at connect@compliancepoint.com. And if you're a regular listener and have seen and heard from Brandon multiple times now, please be sure to leave us a review. If you're not, make sure you subscribe so you can catch Brandon's 10th episode and don't miss it. Until then, thanks everybody.
Brandon Breslin (19:44)
Thanks, Jordan.