S3 E40: What is the EU AI Act?


Transcript

Jordan Eisner  

Another episode here with Matt Dumiak, who is the director of not only our data privacy practice here at…


Matthew Dumiak  

Yeah.


Jordan Eisner  

CompliancePoint, but also our marketing compliance practice. So he’s a go-to resource for EU and North American privacy regulations, whether it be
the historic work out of our marketing compliance group, centered around outbound marketing and direct-to-consumer contact activities, or the data privacy laws that have been emerging here in the States for the past few years, for which the GDPR was of course the catalyst.
So why is that relevant? Because we’re going to be talking about the EU AI Act today, and it could be GDPR all over again, in the sense that the EU puts it out first and then others start to follow. I know Colorado and maybe some other states already have AI laws, but we’ll…


Matthew Dumiak  

Yep, that’s right.


Jordan Eisner  

We’ll see with this EU AI Act if it’s an event similar to GDPR, where other regulation starts to follow. Now, there are some political feelings that might prevent some of that at the state level, at least for the near future.
But maybe that’s a topic for another day. For today, we’re just going to talk about the EU AI Act: what it is, how to know if it applies to you, what to do if it does apply to you, and ultimately
how to mitigate the risk associated with it, which is what we focus on here at CompliancePoint. So, Matt,
what is the EU AI Act, other than the obvious? And who does it apply to?


Matthew Dumiak  

Yeah, sure. Good place to start. And I know you brought up GDPR. The EU AI Act is like a cousin of the GDPR, or maybe it sits alongside it. It’s broadly applicable to organizations that market, or have brought to market, an AI solution.
We would call those providers. They really develop the AI technology, they market it, they sell it, that type of situation. Or organizations who are deploying AI. I bring up that it sits alongside GDPR because…


Jordan Eisner  

OK.


Matthew Dumiak  

The scope and applicability, or when it starts to apply, is not based on personal information plus artificial intelligence; it’s really just the artificial intelligence. So if it’s an AI system
and it’s in the EU market, that’s where these types of obligations start kicking in under the EU AI Act. Now, it does reference the GDPR.


Jordan Eisner  

That’s been created by the company or developed by the company, not necessarily where a company is leveraging already-established AI?


Matthew Dumiak  

That’s right. Well, no, maybe let me clarify that. There are a couple of things going on there, and we’ll talk about these with the deadlines, but this law passed in August of 2024. Some things have kicked in
surrounding some of the requirements, but nothing material outside of some obligations for general-purpose AI. We’ll get into the high-risk systems, but there’s a lag here.
In 2026, high-risk AI systems have some obligations that kick in, so a little under a year out: August 2026. Then in August 2027, legacy general-purpose AI systems that were on the market before
August 2nd, 2025, need to start complying with the EU AI Act. So I hope that clarifies your question or your point there: it doesn’t just apply to those who are developing or deploying it currently. It can apply to future states, certainly, but also to things that are
occurring now as well.


Jordan Eisner  

You’ve got some experience with
technology, future states, and “capacity.”


Matthew Dumiak  

Yeah.


Jordan Eisner  

With the TCPA, and that was complicated for years on years on years. Do you envision a similar thing here?


Matthew Dumiak  

Uh-huh. Yep. Yep, the “capacity” question.
We’ll see. I think they’ve done a good job of clarifying what the definition of an artificial intelligence system is. But it’s such a new and cutting-edge field, technology, and law that, yeah, absolutely there will be questions. And even
as we were preparing for this, I was listening to a podcast about artificial intelligence in general, and there were interested parties seeking an opinion from the European Data Protection Board, which really oversees the GDPR, about whether or not,
if there’s personal information involved in models, that would be covered by the GDPR. There’s a paper out of an organization in Germany, and this is kind of a controversial opinion, that looks at it like this: OK, with the model,
it involves some personal information, but by the time the model digests that, the output doesn’t include any personal information; it’s all aggregate. That paper concluded that any such model would be anonymous. The European Data Protection Board disagreed with that. And it’s not controversial per se, but
what came to light from it is that it’s going to be a case-by-case basis: the regulators in Europe will look at each form, like how the AI is leveraged and engaged, to determine whether or not privacy obligations apply. I just thought that was an interesting tidbit
showing that, you know, it’s so new that it’s still totally up in the air. Now, I don’t know about “capacity” and things like that, like under the TCPA, but we’ll see, absolutely.


Jordan Eisner  

So I can’t describe or define what AI is, but I know what it is when I see it.


Matthew Dumiak  

That’s right, absolutely.


Jordan Eisner  

If an American business, because that’s what we’re focused on here today for this podcast, if an American business does find out that applies to them or.


Matthew Dumiak  

Yep, that’s right.


Jordan Eisner  

They’re on the fence or it’s quite possible it does. What do they need to know? What should they do?


Matthew Dumiak  

Yeah, sure. And to your point, just to hopefully clarify: even if they’re based here in the US and they do not have a physical presence in the EU, if they’re a provider and they’re putting a tool out in the EU market, it likely applies. If they’re deploying it and it processes data or generates output in
the EU, it likely applies. So just like GDPR, it has a broadly applicable scope and application. It’s not just organizations that have a physical presence in the EU. The regulators are going to be looking at not just personal information but, again, what’s going on in the EU. They call it the EU market, right? So
this law, when you break it down, is all over the place. There’s a lot of terminology, there are a lot of definitions, and some are newer in concept. I’m happy to answer any questions about this, Jordan, or take a different route, but
My thought was really focusing on.
a couple of things. So there are prohibited practices, or prohibited AI systems, that we can touch on. Those carry unacceptable risk, and the deadline to cease all operations that would be considered prohibited
has already hit. Then there are high-risk AI systems, and then there’s, not high-risk, but general-purpose AI. So there are a few tiers there from a risk perspective: prohibited,
high-risk, and then general-purpose. And then there are two concepts, and I’ve already used the terms a couple of times, but there are providers and there are deployers. Providers you could think of like
OpenAI, right? Anyone who’s developing an AI model and pushing, selling, or offering it to the market, or even internally developing an AI model and using it without selling it. It doesn’t have to be that someone’s purchasing it for it to fall under the provider bucket.
Then there are your deployers. I think that the majority of obligations fall onto the provider.

In the same breath, I’ll say that I believe the majority of organizations listening to this podcast, or concerned about the EU AI Act, are going to be deployers: businesses that are looking to a provider to purchase their AI solution and
implement it in some form or fashion, right? Whether that be for advertising purposes or within the HR landscape or whatever that might be.


Jordan Eisner  

Got it.


Matthew Dumiak  

So yeah, when it’s unacceptable risk, again, these should not be occurring anymore.
We’ve all seen examples of these in movies and things like that, but they’re things like deploying subliminal, manipulative, or deceptive techniques to distort behavior, which, again, you could see how some of these AI systems might be falling under, even inadvertently. But also
AI used to evaluate or classify individuals, so things like social scoring. We’ve seen that in some countries; that would be prohibited in the EU and should not be occurring. And we’ll talk about the different levels of penalties under that. But also things like compiling facial recognition databases


Jordan Eisner  

Yeah.


Matthew Dumiak  

through untargeted scraping of facial images from the Internet or CCTV footage, that type of thing. Yeah, that should not be occurring. Well, again, it’s the untargeted scraping of facial images, and I think it’s also about the application of that as well.


Jordan Eisner  

Really.
Facial recognition? Oh.

Matthew Dumiak  

Yeah. So that’s one thing. Those are prohibited now; the deadline has already passed at this point, and that deadline was the 2nd of February this year.

Then you go into high-risk AI systems. Those include anything involving biometrics, but again, it’s about the purpose and use.
AI systems used to ensure that critical infrastructure is safe. Education: from an AI perspective, things like determining admission to an institution or a university,
that would be a high-risk AI system.
Employment. I think we’ve seen AI in use in the employment landscape, or in the employment context, for a long time, anything from using an algorithm to review resumes.
But we’ve also seen AI used, as an example, maybe in your context, Jordan, and I don’t think you all are using this, but we have clients that use AI in the sales context that will not just take notes, summarize calls, and provide action items, but will also
look at, OK, how did the salesperson perform? Compare them against their colleagues and help their managers make some coaching decisions, that type of thing, as an example.
So, you know, that’s kind of interesting, right? But again, on the HR side, we’ve been running into AI since 2017 or 2018 from an HR perspective. That’s right, exactly right. So it’s been around a long time.


Jordan Eisner  

2001 A Space Odyssey.


Matthew Dumiak  

Law enforcement, migration, asylum, border control, any of that stuff would be high-risk AI systems. So you can imagine that those are some pretty meaningful decisions that the artificial intelligence would be making,
and it’s dealing with some pretty personal or sensitive information. Again, maybe not all personal, but some of it would likely be personal, the types of things we’ve called out there.


Jordan Eisner  

Yeah, I think these next questions are going to be the important ones: what are the risks or penalties that come from violating the Act? Because
I hear you, maybe the definitions are clear, but
they’re also trying to
cage something that’s maybe not designed to be caged.


Matthew Dumiak  

Yeah. And it’s, I think they’ve.


Jordan Eisner

So and then.


Matthew Dumiak  

You’re exactly right. In the US, it’s more business-forward; it’s “move fast, break things.” In the EU, certainly, I think the analogy to the GDPR was good there, in terms of at least bringing everybody under the same umbrella with a federal, or
region-specific, AI law. I think they’re trying to strike a balance, but it is likely to stifle some things, because when you think about the requirements that we’ve just run through at a high level, it’s expensive to implement those types of requirements. So yeah, we’ll talk about the fines here, the penalties.
But even just building a program to comply with the EU AI Act, some organizations are going to look at that and say it’s cost-prohibitive for them; they’re not going to be capable of building it. So what they’ll do right now is at least pause in the EU market. So individuals within the EU may not see any benefit from


Jordan Eisner  

Yeah.


Matthew Dumiak  

what can be a really powerful tool or powerful solution, right? So yeah, I think that some organizations will likely do that and limit that exposure in the EU. They’ll likely do that in the US too, in California or Colorado, where they might say, yeah, we’re just not comfortable with building out that program. We’re a startup, we’re moving quickly,
and we’re not going to run the risk of violating those laws today.


Jordan Eisner  

But what is the risk? Give the people what they want.


Matthew Dumiak  

Well, and it’s not a scare tactic. I mean, they’re large penalty amounts. So for the prohibited practices, absolutely.


Jordan Eisner  

That’s like the GDPR, which has really only hit big companies. I think we should, you know… well, maybe everything.


Matthew Dumiak  

For the large fines, yes. Well, for the large fines, I think you’re absolutely right. You’re not going to see, you know… it’s the Amazons, the Metas, the Googles that are getting the hundreds-of-millions and billion-dollar fines. British Airways, millions of dollars, but they’re well-known brands. Exactly right. That’s who they’re going after. That’s who they’re going to position themselves to target.


Jordan Eisner  

British Airways.


Matthew Dumiak  

And those are the organizations or businesses processing the most personal information as well, right? Right. Yeah, they’re not the number one airline, but it is certainly something that smaller companies face risk from too, though it’s more like five-figure,


Jordan Eisner  

Yeah, they’re no Delta, but yeah, they’re well, they’re well, sure.


Matthew Dumiak  

kind of low five-figure fines under the GDPR. We’ll see where they go with the EU AI Act. But with the prohibited practices, those you can’t be doing: €35 million per violation, which is pretty high, or up to 7% of global revenue.


Jordan Eisner  

They love doing that. You know, I do respect that about them. Whichever’s higher, I’m guessing, right?


Matthew Dumiak  

To be significant, right? Yes.
Yep.
Yeah, exactly right, exactly right. They do, and they do something similar with the other tiers. When you get into the high-risk or general-purpose AI, it’s up to €15 million or 3%, and there are some tiers that go to €7.5 million or 1% as well.


Jordan Eisner  

Man, so even compared with the GDPR ones.
What? Repeat the last one?


Matthew Dumiak  

Of global turnover: €7.5 million or 1% of global revenue. So we could talk through the tiered requirements, but if you’re putting artificial intelligence on the market and this applies, there are going to be some
transparency obligations, and that’s the lowest tier of penalty. So if there were incomplete transparency requirements or disclosures, that’s going to be the €7.5 million, or 1% of global revenue.
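[Editor’s note: the tiered penalty structure Matt describes can be sketched in a few lines of code. This is an illustrative sketch based only on the figures mentioned in this conversation; treating each cap as the higher of a fixed amount and a percentage of global revenue reflects how the speakers describe it, and the tier names are our own labels, not terms from the Act.]

```python
# Illustrative sketch of the EU AI Act penalty tiers as described in this episode.
# Assumption: each fine is capped at whichever is HIGHER, the fixed amount
# or the percentage of worldwide annual revenue ("global turnover").

def penalty_cap(tier: str, global_revenue_eur: float) -> float:
    """Return the maximum possible fine in euros for a given violation tier."""
    tiers = {
        "prohibited": (35_000_000, 0.07),    # prohibited AI practices
        "high_risk": (15_000_000, 0.03),     # high-risk / general-purpose obligations
        "transparency": (7_500_000, 0.01),   # incomplete transparency disclosures
    }
    fixed_amount, revenue_pct = tiers[tier]
    return max(fixed_amount, revenue_pct * global_revenue_eur)

# Example: for a company with EUR 2 billion in global revenue, the 7% figure
# (EUR 140 million) exceeds the EUR 35 million fixed cap for prohibited practices.
print(penalty_cap("prohibited", 2_000_000_000))  # 140000000.0
```

This is why the percentage-based alternative matters mostly for very large companies: for a small firm, the fixed amount dominates.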


Jordan Eisner  

I’ll put you on the spot here. The GDPR fines, have any of them actually been the full amount? What was theirs, 2% of global revenue? Or three? What was the percentage?


Matthew Dumiak  

Yeah, 3.


Jordan Eisner  

3% Have any of them actually been 3% of global revenue?


Matthew Dumiak  

Three and five.
Not that I’m aware of. Maybe some of those smaller ones, but that would have to be egregious. But no, when you look at them…


Jordan Eisner  

Yeah.
It’s just outrageous. I’m trying not to let personal opinion, you know, factor in here, but…


Matthew Dumiak  

Yeah. Well, when you look at the Meta violations, those sound like really huge numbers. And then you look at their global revenue numbers, and the penalty amounts are something like half a percent, or, you know, 0.25% of their global revenue.


Jordan Eisner  

Still a lot there.


Matthew Dumiak  

It’s a ton. Oh, absolutely, absolutely.


Jordan Eisner  

We’ll do “how to comply,” you know, another day. I think that’s a follow-up to this, or a separate podcast in itself. But when you think of runaway AI, or AI turning on the people, what movies do you think of?


Matthew Dumiak  

OK. Yeah, sounds good.
Yeah.
Uh, RoboCop.


Jordan Eisner  

That’s a good one.


Matthew Dumiak  

Yeah, yeah. Um.
You got any for me? You’re the movie buff.


Jordan Eisner  

Well, 2001: A Space Odyssey. HAL? It’s just dark.


Matthew Dumiak  

Yeah.


Jordan Eisner  

Um, I was thinking Terminator.


Matthew Dumiak  

Terminator. Yeah, I always think of RoboCop for some reason.


Jordan Eisner  

Yeah.
Yeah, what’s the program in Terminator that they go back in time to stop? Spoiler alert.


Matthew Dumiak  

I don’t know. It’s been a long time since I’ve seen that.


Jordan Eisner  

You don’t remember.
What is it the?
Skynet.


Matthew Dumiak  

Uh, Skynet. So if you see that come out…


Jordan Eisner  

Yes, got it. What about Demolition Man? Or is that just, like, the future was too buttoned up? Sylvester Stallone, Sandra Bullock, Wesley Snipes.


Matthew Dumiak  

I’ve never seen it.
Sylvester Stallone. Never seen it.
I’ll add it to the list.


Jordan Eisner  

They freeze Sylvester Stallone. Wesley Snipes, I think, is also frozen, but he’s a criminal, and so they freeze Sylvester Stallone so that if Wesley Snipes ever comes back, they can bring him back too. But then he’s in the future. Might not be AI per se.


Matthew Dumiak  

Oh.


Jordan Eisner  

It’s just that society had a lot more rules then. Like, he gets a ticket for cussing; some recording device on the street gives him a ticket for using foul language.


Matthew Dumiak  

Oh.
Automatically. Probably some social scoring there too. So I think you’re dead on with the artificial intelligence, yeah.


Jordan Eisner  

All right. Well, yeah, there’s a lot with AI. It’s.
It’s a lot to take in, so I will.


Matthew Dumiak  

It is. This is a robust law. There are a lot of obligations, specifically for high-risk AI systems and specifically for providers of them,


Jordan Eisner  

Yeah.


Matthew Dumiak  

that we could certainly do another episode on, and really run through in detail some of those high-risk or hot-button items that organizations are really going to want to cover off at the provider level, but also at the deployer level too, where maybe those things mirror, mix with, or crosswalk each other nicely.


Jordan Eisner  

So we’ll end with this. If you’re listening to or watching this and have concerns about how you’re leveraging AI, whether the EU AI Act is applicable to you, or whether stateside laws are applicable to you,
or you’re seeking more control or governance around your program and, like me, you’re just not quite grasping it, reach out, because we’ve got people like Matt here who do have a deeper understanding of it. They do
understand how AI is going to impact the different controls you put in your organization, whether that’s around data governance, data security, or other factors within an organization. That is just


Matthew Dumiak  

Mhm.


Jordan Eisner  

Real impactful stuff when you talk about employing AI systems across the organization and everything that it touches. So reach out. You can find us at compliancepoint.com. That’s pretty easy. But also we have an e-mail distro connect@compliancepoint.com and we would welcome any inquiries or questions or concerns around AI or anything adjacent through that channel. And Matt is on LinkedIn. I’m on LinkedIn. If you have questions or other sort of topics in this area that you want to discuss, please don’t hesitate to reach out. Until next time. Thanks everybody.
