S2 E9: The Impact of AI on Privacy Regulations and Compliance

Jordan Eisner: Well, welcome back everybody to Compliance Pointers. I’m your host, Jordan Eisner, and I will be speaking today with my friend again, and colleague of over 10 years now, Matt Dumiak, our Director of Privacy Services at CompliancePoint.

We’re going to be talking about just the most talked-about topic, probably in the world today, and that’s AI. But we’re going to be talking about it in the vein of data privacy. If you’ve been to a data privacy event or show or conference in the past year, probably 80 percent of the topics were AI. The questions are AI.

I might be alone in being rather sick of AI, but it’s here to stay, and companies need to embrace it to stay innovative, I think, to sustain, last, and grow in an emerging and radically changing landscape.

Matt, let’s dive right in. I think our listeners will know you from last time, but for those of you who don’t: Matt’s been with CompliancePoint going on 15 years and is Director of our Privacy Services. He’s also Director of a niche practice group we’ve got called Marketing Compliance, which helps organizations with the TCPA, do-not-call rules, do-not-text, email, and other direct-to-consumer contact methods that can sometimes play in the privacy space.

But Matt, with the emergence of AI, what are the privacy concerns that have been raised?

Matt Dumiak: Sure. Jordan, first and foremost, happy Valentine’s Day.

Jordan Eisner: Yes. How could I forget? How could I forget? At my local gym, the YMCA, they said the word of the week is love, yet I keep forgetting today is Valentine’s Day. Yes. What a special occasion to be having this podcast.

Matt Dumiak: It is. I think at that club, weren’t they also offering Girl Scout cookies at 7 AM?

Jordan Eisner: Pecan Swirls, my favorite. I was raised on them.

Matt Dumiak: Yeah. Perfect for pre-workout there at 7 or 6 AM.

Jordan Eisner: Pre and post-workout, so you can cancel out anything that you just did.

Matt Dumiak: Exactly right.

Going back to your question about some of the privacy concerns around artificial intelligence: I think, to your point, it’s a hot topic right now in the privacy space, but outside of privacy as well; organizations are really trying to leverage artificial intelligence in any manner they can.

Honestly, the industry hasn’t even really settled on a definition of what artificial intelligence is. Some folks think it could be a chatbot. That’s fine. That certainly is one.

There are other types of artificial intelligence, like generative AI, which can create pictures or voices, deepfakes, and all kinds of things.

The challenges or concerns for privacy are numerous, largely because artificial intelligence is driven by the data it consumes, and that data can include personal information or personal data.

Not only that, but even going beyond that, when we talk about generative AI and deepfakes, think about how those types of artificial intelligence could be used to really infringe on an individual’s privacy. That was a huge topic of conversation at the IAPP Summit in Washington last year. A lot of the keynote speakers were talking about the privacy concerns from that perspective: deepfakes and the different things individuals were using AI for, like putting pictures or videos online that made it look like a particular person when it wasn’t that person at all. So even that can be a real concern from a data privacy perspective.

But I think a lot of times it comes down to this: AI consumes data. Is it personal information? How is it getting it? Do individuals know the systems are using it? And then beyond that, and we’ll talk a little bit about this throughout this podcast, what organizations are doing with AI and what decisions they’re making are really critical. That’s why I think it’s starting to catch on, with regulations potentially upcoming in the EU, some things at the state level, and even this executive order from Biden, because there’s just a lot going on right now with AI.

Jordan Eisner: Okay.

Matt Dumiak: I know that was a lot.

Jordan Eisner: No, you answered a question I was going to ask as a follow-up: what are some of the early regulatory actions? I think you just mentioned three of them: the EU AI Act, state privacy laws, and the Biden executive order.

Matt Dumiak: Yes.

Jordan Eisner: There are a lot of state regulations that deal with profiling and automated decision-making. So let’s go beyond the ones you just mentioned and talk about ones that already exist. Can you tell our listeners how those are defined? If an organization is using profiling or automated decision-making, what actions are they required to take?

Matt Dumiak: Sure. That’s going to vary by state slightly, but we can crosswalk those and look at what those look like across the various states.

Profiling, you see it in law enforcement at times, but, and I’m looking at the formal definition here and I’ll paraphrase, it’s any type of automated processing of personal information to evaluate certain personal aspects relating to a person: to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, anything like that. So that’s from a profiling perspective.

Automated decision-making is a little more straightforward, but the regulations or laws are really going to look at it as a system, software, or process that uses statistics, other data processing, or artificial intelligence to make a decision about an individual. It also includes profiling, so the two work hand in hand.

Jordan Eisner: I know a lot of this is still just so new, and a lot of people are trying to wrap their minds around some of it, but can you give some examples of the common challenges organizations are facing with the use of AI? Some common use cases today, I guess, and what they’re struggling with.

Matt Dumiak: Yeah. There’s a good bit they’re struggling with. I think from a compliance or privacy side, they’re struggling with a few things. The business is really wanting to leverage artificial intelligence or AI.

And so from the compliance and privacy side, it’s working alongside the business to understand where AI might already be in use, where it needs to be used, and where the business wants to use it, without slowing the business down. We don’t want to stifle growth or innovation. So that’s the struggle on the compliance and privacy side: how do we find the balance of ensuring we’re not violating a law while also allowing the business to innovate, use something, and not fall behind?

I think you said that in your intro, which was perfect. An organization that isn’t using AI is falling behind. And so there’s not a power struggle, but a balance we’re trying to find: how do we engage the business, ensure they’re using AI responsibly, and not get too far out over our skis?

Also, I think another challenge is just understanding what it is, candidly. The term can be used broadly. I talked about it: there’s no on-the-books, accepted definition of AI. There have been proposed definitions, specifically under the EU AI Act, which looks to be relatively straightforward. They’ve tried to streamline that, and there were a lot of negotiations back and forth before that draft was approved.

You talked about the state level. The term artificial intelligence is brought up in the CCPA and other privacy legislation, but it’s not defined. It’s more that terms like profiling or automated decision-making usually encompass artificial intelligence.

And so that’s a challenge, I think, for a lot of organizations, both from a compliance and privacy side, but also a business side is to say, well, what is AI?

And maybe I should have started with that one: not just how do we avoid slowing down the business, but even understanding what AI is and considering the privacy concerns, of course, like what is it consuming to make the decisions that it makes?

Not all of that information is readily available for some of the solutions organizations are using. How do these systems make their decisions? Where is the data coming from? Some of the technology providers consider that their secret sauce. Others, and this is the kind of big and scary version, don’t even know where their data is coming from or how it’s being used. The machines are taking over, right? It’s smarter than the individuals feeding it.

So, you know, all of those are certainly a challenge right now, I think, in this space.

Jordan Eisner: Yes. I think you can maybe just reference the movie Terminator. Right. The more you feed it.

Matt Dumiak: You’re good at movies, Jordan, is it RoboCop or is it Terminator where they scan an individual’s face when he’s walking down the street and they say he’s likely to commit murder?

Jordan Eisner: Minority Report.

Matt Dumiak: OK. I just vaguely remember that. That’s where we’re heading: maybe we’re scanning individuals who are just walking down the street and profiling them as murderers or criminals, that type of thing. Hey, the good news is that under the EU AI Act, that type of use is prohibited. So we’ll see. Right.

Jordan Eisner: Yep. I’ll sleep better tonight.

Matt Dumiak: Except for law enforcement purposes, though, there is an exemption under that. So we’ll see if law enforcement maybe tries to go that route. We’ll see.

Jordan Eisner: Always some sort of exemption. Right.

Matt Dumiak: There is. But now I think that is a prohibited purpose even for that exemption. So I don’t think they’d get that far, hopefully, at least not in Europe. That’s specifically in the EU.

Jordan Eisner: So what you’ve talked about so far, and I think our listeners would agree with this, is still just abstract. What is it? How do we ensure we’re not violating privacy law without slowing innovation and advancement? How do we even wrap our minds around it in the first place, to then put rules and practices in place?

So let me ask it a different way, then. You’re the head of our consulting group for data privacy, where every client asks about this. What are you advising, right? What are you telling them to do when these questions pop up?

What are the recommendations we’re giving right now in this unprecedented time with AI and data privacy?

Matt Dumiak: To breathe. Take a step back, right? You can’t boil the ocean. You have to take a higher-level view of it. At times our clients come to us in a bit of a panic when they find out that a different area of the business is already in full-blown implementation mode or full-blown use of AI, and I think it’s taking that step back and saying, OK, AI isn’t all bad, and there may not be as much privacy risk as we might think.

And so taking that step back is our first recommendation. Look at it at a higher level. Then, and this is what I mean by not boiling the ocean, which I think is a pretty common consulting term but an effective one, it’s prioritizing the risk of the different types of AI the business might be using and their criticality to the business. So look at those and say, OK, how are these different solutions being leveraged? Build an inventory, if you will: how are they being leveraged, what are they consuming, what are they doing for the business, what is the impact on revenue, and what is the impact on the consumer? I think there are a lot of conversations within an organization about risk, but that needs to be a two-way street: risk to the business, but also risk to the consumer.

And then establish a priority or a road map at that point. Take a step back, look at it, compile these things, take that prioritized view, and then put an action plan in place, if you will. Because if AI is already being leveraged, it’s obvious there wasn’t a governance program to begin with, and that’s something that needs to be established, right, with how fast AI is expanding.

And from there, I think it’s all about communicating that with the business: training, guidelines for how AI can and cannot be engaged, and ensuring we’re working alongside the business. To your point, we don’t want to stifle innovation, but we do want to comply with what we have to comply with, without putting the business and consumers at risk.

Jordan Eisner: OK, you said some things there. You talked about conducting an inventory, right? I think that assumes an organization has already taken some of those data privacy steps, done an inventory and data mapping, and determined some of these things: where are the processes and activities, where does PII exist within the organization, and where might AI-type activities intertwine with PII in a way that could be a criticality or risk?

So this just comes to me as a question, but what about companies that haven’t done much from a data privacy standpoint? They don’t really have some of those foundational things. When you said that, my mind went to Michael Scott in that episode of The Office where the fire breaks out. I don’t know if you remember it: Dwight Schrute’s like, I’ve got this, follow the procedure. What’s the procedure? And Michael’s screaming, stay effing calm. Somebody with a privacy program can say, hey, we’ve got this inventory, we’ve labeled the risk, we’ve siloed it, it’s here in this department. Whereas if you didn’t have some of those foundational things from a privacy standpoint and don’t know where the data exists and the activities occur, you might be the Michael Scott screaming, stay calm, and running around.

Matt Dumiak: And that’s a cold open, right? Like they start the episode like that, right?

Jordan Eisner: Yes. So what about somebody listening who’s not there yet? I mean, not only are they trying to figure out data privacy, now you add AI into the equation.

Matt Dumiak: Yeah, it’s a good question. I think it’s going back to that first step: you always have to establish what the priority is for the organization. If you’re trying to figure out where AI is, I would not recommend starting by building a personal data inventory, because that’s going to take a lot of time. Instead, I think you do some business interviews with the likely usual suspects, the individuals who might be using AI: the website team, because they’re going to have the automated chat on the website.

You’re going to look at customer service as well, potentially marketing, maybe even sales. Talk to them about what solutions and products they’re using and where they’re innovating. Some of these are cutting-edge departments, right, with a lot of entrepreneurial people in those spaces. Ask what they’re doing and how they’re engaging. Start with that priority.

Start with those departments; that speaks to how to prioritize it, right? Because if you’re trying to solve the AI problem for the entire organization and you don’t have a nice baseline to start from, you’re going to get very overwhelmed very quickly, and so is the business.

So I think something as simple as doing some business interviews can be really helpful: understanding how they’re engaging with it and how they’re using it, and then going from there in terms of analyzing, OK, what does that mean from a risk and regulatory perspective?

Jordan Eisner: If you modify the engine of the car and soup it up, as they say, it’s not going to make any difference if you’re on three wheels.

Matt Dumiak: That’s right. Exactly right.

Jordan Eisner: OK, I use the car reference because I know you’re such a car guy because you wouldn’t understand it otherwise.

What frameworks are available out there to guide businesses, right? Tell us about some of their core elements and how the topic of AI could factor into them, or not.

Matt Dumiak: Yeah, we’re starting to see more of these. Fortunately, everybody’s looking for a framework to help guide them, and NIST has one, the AI Risk Management Framework. Those listening may know NIST for maintaining the atomic clock, but also for the NIST CSF, several cybersecurity frameworks beyond that, and, in even more detail, the NIST Privacy Framework. But then even beyond that, there are organizations and associations starting to come out with frameworks.

The Centre for Information Policy Leadership has one we really like. It operates from several components, which answers the second part of your question about what these frameworks include: how to get leadership oversight and buy-in, how to conduct an AI risk assessment, what the policies and procedures should look like, and some of the requirements around transparency, like doing data protection impact assessments, providing notice, and offering opt-outs.

I should also say training is huge in this space, along with monitoring and verification. With compliance, we always talk about where the rubber meets the road. You can’t just stand up a compliance program and think it’s going to operate effectively without audit and then enforcement. So if you find an issue, how do you document and memorialize that you fixed it?

Right, so a lot of those frameworks are going to follow very similar baselines of that.

Jordan Eisner: And when in doubt, just ask AI, right?

Matt Dumiak: Exactly, you could always do that. I think ChatGPT would be happy to craft an AI governance program for you and tell you what it should look like. You may think to yourself after you review it, did an AI chatbot write this thing, because it says nothing about training or transparency? Just kidding, it probably would have all of those things. But then you have to go implement it, you have to operationalize it, which is a real challenge.

Jordan Eisner: OK. Well, I think that’s helpful.

Maybe some of our listeners are struggling with these topics. The thing that sticks out to me is: just breathe, right? I don’t think there’s an overwhelming amount of enforcement coming out against organizations, at least not yet, for using AI the wrong way and violating data privacy.

It’s like you said, right? Everybody’s still trying to grasp what they’re doing and how they’re doing it. I think it’s important for organizations to be aware and to have policies, or at least start talking about what sort of policies they’re going to have, around the use of AI by the business, and how the supporting functions, whether that’s a legal department, compliance department, or the cybersecurity and data privacy groups, support their internal clients. But more to come on all of this.

Matt Dumiak: I just thought of something: even in the IT space, where we have a whitelist of what’s usable. What can an organization use from an AI perspective? Maybe that’s an option to make it quick and easy, right? Here’s a whitelist: we proactively reviewed these, and we’re comfortable with where we sit. And maybe also a list of no-for-now, right? That might even be a good approach.
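[Editor's note: a whitelist like the one Matt describes could be sketched as a simple lookup of reviewed tools. This is a hypothetical illustration; the tool names and status labels below are made up, not anything CompliancePoint publishes.]

```python
# Hypothetical AI tool whitelist: tools the organization has proactively
# reviewed and approved, plus a "no, for now" list, as described above.
ALLOWED = {"approved-chatbot", "approved-transcriber"}       # reviewed, comfortable
DENIED_FOR_NOW = {"unvetted-image-generator"}                # the "list of no, for now"

def check_tool(name: str) -> str:
    """Return the governance status for a requested AI tool."""
    if name in ALLOWED:
        return "approved"
    if name in DENIED_FOR_NOW:
        return "denied for now"
    # Anything unknown is routed through the review process rather than blocked outright.
    return "needs review"

print(check_tool("approved-chatbot"))         # approved
print(check_tool("unvetted-image-generator")) # denied for now
print(check_tool("new-ai-tool"))              # needs review
```

The point of the third branch is the governance process itself: new tools aren't silently allowed or denied, they trigger a review.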

Jordan Eisner: Yeah. So I’d say for our listeners, I know we spoke pretty broadly today in this podcast. If you have specific questions, right, maybe about how your company is using AI or contemplating using AI, drop a comment in the reviews. Talk about the topic; if we get enough, maybe we’ll do a podcast on that specifically. But you can also reach out to us directly, Matt or myself. We’re both on LinkedIn. Go to our website, compliancepoint.com. We have an email distro that hits the directors and others here within the organization, Connect@compliancepoint.com.

If you want to send your questions there, you can even schedule a meeting with us on our website. So we’d love to talk and walk through maybe some of the initiatives or objectives you’ve got with using AI and how that interplays with your data privacy program or where you want your data privacy program to be.

Thank you everyone. Thank you, Matt, for your time. See you guys.

Let us help you identify any information security risks or compliance gaps that may be threatening your business or its valued data assets. Businesses in every industry face scrutiny for how they handle sensitive data including customer and prospect information.