S3 E41: AI and SOC 2 Compliance
Transcript
Jordan Eisner
Welcome to Compliance Pointers, where we take an in-depth look into the latest news, trends, and challenges surrounding information security, privacy, and marketing compliance. Let's dive in with your host, Jordan Eisner.

Hello, everybody. Here we are with another episode of Compliance Pointers by CompliancePoint, and a unique episode today, similar to some we've done in the past: we've got a guest, and we're delighted to have her. This is Mary Beth Marchione of Wipfli, and we, of course, are with CompliancePoint. We've had a good collaborative relationship for a number of years, mainly around SOC 2 but also other information security frameworks, going back at least five years, maybe even prior to that, and really strengthened in recent years. I may have already mentioned it, but Mary Beth is a partner with Wipfli and the global lead for SOC 2. You're also active with the AICPA, can you tell us about that?
Mary Beth Marchione
Yep, I'm part of the AICPA SOC 2 working group, so there's lots of fun stuff going on. There's some I can talk about, but yeah, I'm excited to be on. Thank you for inviting me.
Jordan Eisner
I always like when we get together and I'm talking to you about something going on in the industry, with this technology doing this or that, and you go, "Well, actually..." because you've got the fresh take from the AICPA, what they're looking at, and how they're governing this. And of course, you're the lead and have done a countless number of these engagements, so you're a great resource for this podcast. We're really excited to have you on, and we're going to be talking about none other than AI and how AI is going to be impacting SOC 2, now and, of course, in the future:
how it can be used to assist, I think, for the better, and also what factors need to be considered in terms of risk and how it may present challenges in compliance programs governed by SOC 2. So we'll jump in. We've got just a few questions. We're going to keep it mostly high level, but really leverage your expertise, share it with the audience, and hopefully provide some value for those watching and listening. So let's start with how organizations can ensure that their use of AI systems complies with their SOC 2 requirements for data security, privacy, really all the Trust Services criteria.
Mary Beth Marchione
Yeah, absolutely. It's really looking at the risk. When you take a step back, everything's really based on risk, and that's really what the SOC 2 framework goes through, right? The different security risks, privacy risks; it identifies the most pertinent criteria and then asks organizations to look at their processes and procedures and evaluate how they're meeting those criteria. So when you think of AI, if you want to keep it simple, you're really looking at the same concepts but through a new lens, and some of those risks are unique. A lot of times we're thinking about human behavior and the past, historical risks we've thought about, when now, with AI, we have to start to think about how machine models behave and what the access points are, because those can be quite different from how humans access a system, and about how these models are constructed and the infrastructure behind them. So I think the key is really taking a step back and thinking about the unique risks within the same framework, policy, procedure, and governance that a lot of organizations have in place.
Jordan Eisner
Do you prefer to evaluate the human nature side or the machine side?
Mary Beth Marchione
That’s so funny.
Jordan Eisner
One might be more predictable.
Mary Beth Marchione
I mean, a lot of companies, I don't know if you've seen this, are now coming out with their own AI models that you can ask questions. That's kind of a new thing I've seen, everything from schools to communities and associations, all the way to your banks and financial institutions. So there's a range of use cases, but what I've seen is different answers for the same question, right? And I think that's indicative of human behavior. But that goes back to the risk: what are the risks in the inputs and the outputs, and thinking about that a little bit differently.
Jordan Eisner
Yeah, a team member actually messaged my entire team the other day.
He had basically built a model with only what we were putting in, controlled within our organization, obviously, and based on our website:
how we present ourselves, how we should approach prospective customers and reach out to them. It's pretty fascinating. Okay, good overview. So again, it's similar, but instead of evaluating human nature and human processes, you're looking at the machines, right, the AI and the systems, and how that's going to present risk to the organization and how they're going to solve for it. So in doing this, what are some of the unique challenges you've seen on the AI side, maybe compared to historically looking at SOC 2 for what it was?
Mary Beth Marchione
Yeah, I think right now what we're guiding a lot of our clients to do is just start to incorporate AI into their thinking. It kind of depends where you're at, like you said, current versus future, what kind of company you are, and how you're leveraging AI. We have some companies where AI is really used for marketing purposes, maybe to help construct documents, but it's not holding any sensitive information and it's not part of any product. We have some companies that built large language models well before AI became this hot buzzword we're all using; they've been validating their data for a really, really long time, understand the risks deeply, and are very cautious. Then there are those in the middle, where maybe folks have played with AI and use it for efficiency purposes, but it's not consuming any sensitive data, and now you're saying, okay, we've gained some efficiencies, how are we going to leverage this in our products? That's where I think you have to be really careful, because once you start to introduce sensitive or proprietary data, the risk automatically starts to escalate, and you want to take it a step further than just asking, do we have a policy? Are we guiding our employees to understand proper versus improper uses? What technical safeguards are we really putting in place to prevent misuse or data leakage? I think data leakage is the biggest issue we've seen, or, depending on your use case, improper outputs.
Jordan Eisner
Yeah. I mean, it matters what you're putting in, right? Yep. I've seen, even in internal use, and I wouldn't say I leverage it a great deal, but you use it and then you're going through your file explorer, looking at all these files where it's been stored and shared here and there, and you think, where is this going? It's just setting it all up on the back end, but that's probably more a representation of my lack of control than anything general.
Well, let's flip it then. So AI is introducing new risk, especially given the type of data that's put into it. And for the most part, if it's not super sensitive data and it's not going beyond the organization, you can control where it's going; it's just another thing to manage as part of the process and demonstrate where you've got governance. But what about flipping it? What about organizations that have been going through SOC 2 for years and now want to leverage AI to assist in monitoring their ongoing requirements as part of their SOC 2 compliance?
Mary Beth Marchione
Yeah, we're seeing a lot of security tools building it in, so that's key. And if you haven't dug into that in your security group, that's something you should definitely look at, because it is a great, powerful tool to leverage. I do think as we see tools like Copilot and some project management tools get better, we may see the traditional use case for some governance, risk, and compliance tools become less heightened, and you can leverage some of these other tools you're already using: Outlook, calendars, Copilot tasks, different things that are just built into the project management suite, as opposed to buying additional software. I do think that could be a possibility. And weeding through the noise, right? You can use AI tools to summarize and tell you what's a priority and what's not, as opposed to reading through each and every alert. Of course, you have to validate, and that's the tough part with any AI, whether it's built into a product or you're using it yourself: you can't blindly trust it. You have to go back and validate and then provide that feedback loop. That's something you can't forget to do or stop doing. But that is something I think we'll see: tools becoming more ingrained in our everyday use, and maybe some efficiencies there, where we're not going out to 10 different tool sets to do some of this monitoring.
Jordan Eisner
Yeah, that makes a lot of sense. As the AI abilities of the tools and features you're already using increase and you become more familiar with them, you can leverage that for your SOC 2 compliance program. So, you know, we prepared these questions beforehand, but I'm thinking about actual technical controls, and maybe that leads into this a little bit, talking about documenting. In the situation where the AI in an organization is perhaps making decisions on its own, automating some of the information, and there is sensitive info involved, what considerations should be taken to document those AI processes and the decision-making to meet SOC 2 audit requirements?
Mary Beth Marchione
Yeah, I think there are a couple of different things there. One is really documenting the data flow and the model, right? How did you as a team validate that the model is performing and that the outputs are exactly as you'd expect, that there's no way to trip up the model or get a bad response? We've all seen those stories online, right? I think I read one about a dealership where somebody tricked the model into saying you could have a Chevy Tahoe for $1.
Jordan Eisner
I heard about the same one. Is it true that the dealership then had some sort of contractual obligation to sell the Tahoe for the dollar?
Mary Beth Marchione
I'm not sure, but I wouldn't be surprised, because that is a real-life outcome.
Jordan Eisner
Well, sorry to jump in in the middle of you answering that question, but it seems like that would require some extensive validation, because there are so many pathways it can go. How can you really verify? Is there an accepted sample amount, an accepted point where you can say, okay, they've tested this enough? What's the threshold?
Mary Beth Marchione
Yeah, obviously SOC 2 doesn't delve in as much as, say, a penetration test would, and I think that's where you have to leverage and collaborate with really good partners and make sure you're guiding your clients to really validate. There are some really interesting tactics I've read about from a penetration testing standpoint and a validation standpoint. If folks are truly serious about using AI, making it public facing, and relying on some of the processes and outputs, then you need to invest in some really good third-party testing, as well as a good internal team that can validate those things.
Jordan Eisner
Yeah, we're moving right along here; your answers are good, succinct.
No, don't be sorry. It's perfect. I think this will be great for our viewers and listeners. So, last question, and then I'll open it up to anything else you want to add. How does the use of AI impact risk assessments and management as part of SOC 2? Because that's an annual exercise that you like to see as part of it. So what sort of changes happen, if any, in the risk assessment when AI enters the picture?
Mary Beth Marchione
So I will say, I do volunteer with the AICPA as part of the SOC 2 working group, and there's a lot of talk around making sure the points of focus are considering AI risk. I don't know exactly what will happen, but I know that's being considered, and I think there'll be more done in the future to make sure the framework is all-encompassing and flexible. I think we all need guidance: the audit world needs guidance, and internal audit teams need guidance as well, to make sure all of those things are being considered.
I think risk assessment is one of those things where, no matter who you talk to, it looks a little different. Everyone has their own idea of how it should look and feel, and what success means; it's a little nebulous. I recently went online and saw that MIT has an entire risk register of AI risks, so you can go down a pretty big rabbit hole and end up spending hours upon hours doing a risk assessment of AI in your environment, whether you're using it at an entry level or you've ramped it up and it's part of your product set and making
important decisions. I'm always a person that likes to do things in the middle, right? You look at that big risk register, you prioritize what you think is most important to your product set and your company, and then you pick those to start with. And starting is better than anything. You can't just ignore it, and you can't get lost in all the information
and the noise. I think it's really important to just start, and that's where you can leverage something like the MIT risk register or some of the white papers that are coming out. I think it's the same risks we always see: data loss, data leakage; those are going to be super important to think about. And access points; the access is going to look a little different. There might be some API connections, like we've talked about. Can you trick a prompt into leaking information you wouldn't want exposed? Those are all things that are unique to AI, but it's the same risk any company would experience: data loss, data leakage, misuse of the technology. It's just, how can that misuse take place, and what access do you not want folks to have?
Jordan Eisner
This is a totally off-script question, but can you train an AI to say, "Hey, you need to abide by the Trust Services criteria"? Train it on SOC 2 and say, now abide by this, you can't do anything outside this realm. And then, for employees at a company, if they ask it something improper, it would say, well, that violates our...
Mary Beth Marchione
Yes, absolutely. And I think that's exactly what we're seeing. When models are trained that way, just like that example with the $1 Chevy Tahoe, they'll say, I can't answer that question because it goes against our code of ethics, or it violates...
Jordan Eisner
So they can be the most compliant. There, I answered the question for you on which you like better, the humans or the machines: the machines, because machines are going to be more compliant. They're going to be better to work with.
Mary Beth Marchione
Well, yeah, they might complain less too, right?
Jordan Eisner
Yeah, I'm convinced that's why people love dogs so much. I know I'm not going to gain any popularity with this, but they don't say anything; you just assume, right, that they love you. They might. I like dogs, sorry, I should disclose that. I realized at the top of this I talked about you being the global lead for Wipfli and serving with the AICPA, and the "AI" in AICPA is not artificial intelligence.
Mary Beth Marchione
That's correct. Yeah, it's the American Institute of Certified Public Accountants.
Jordan Eisner
But I didn't give you an opportunity to talk about Wipfli, right? So, as a wrap, please tell the listeners and viewers what you think is important to know about Wipfli.
Mary Beth Marchione
Yeah, of course. Wipfli is a top-25 accounting and consulting firm. We do all of the traditional services you would expect from a CPA firm: accounting, financial statement audits, back-office bookkeeping, tax-related services. And we do a number of different consulting engagements: strategic planning, help with data strategy, which is another key component of what we're talking about today that I forgot to mention. And then, of course, I lead our SOC practice, and we are a global firm.
Jordan Eisner
If somebody watches this video and they say, I gotta get in touch with Mary Beth, I have to pick this person’s brain on this. What’s the best way for them to contact you?
Mary Beth Marchione
LinkedIn, I’m very responsive. So if you want to find me on LinkedIn, I’m available.
Jordan Eisner
Okay, there you go. And if you're a regular viewer of this show, you should know where to find us at CompliancePoint, but any time you need a reminder, you can email us at connect@compliancepoint.com. We try to be active on LinkedIn, so feel free to message us that way, and I can put you in touch with Mary Beth if for some reason you can't find her, but you should be able to; she is very active on LinkedIn. So thanks, everybody, for watching. Until next time, be well.