Navigate the complex landscape of AI regulations effortlessly with this insightful episode of Digital Labor Lab. Hosts Jenny and Brad Owens delve into the intricacies of AI laws in the U.S. and EU, shedding light on critical legal frameworks that are shaping artificial intelligence utilization worldwide. From the GDPR to the EU AI Act, understand how these regulations influence AI deployment across different sectors, including healthcare and HR.
Jenny shares her in-depth research on European legislative requirements, such as the necessity for server location within the European Economic Area, and highlights how the EU AI Act categorizes AI applications by risk. Meanwhile, Brad emphasizes the patchwork of state-led regulations in the U.S., cautioning business owners about the ambiguities and urging them to stay informed and adaptable in the ever-evolving legal environment.
Engage with this professional yet conversational guide that equips business owners and AI enthusiasts with practical advice on navigating regulatory challenges. Discover the importance of documenting AI processes and ethical principles as a means to safeguard your business. Tune in to gain actionable insights and take advantage of their offer for a guide that aids in documenting AI workflows. Email hello@digitallabourlab.com to access tools that can help ensure your compliance and strategic use of AI in your operations.
Links & Resources:
The EU Act is Coming to America – https://substack.com/home/post/p-157005561
Anthropic Economic Impacts paper – https://assets.anthropic.com/m/2e23255f1e84ca97/original/Economic_Tasks_AI_Paper.pdf
Watch the full episode:
AI Generated Full Transcript:
Brad Owens (00:00)
Jenny, you know, in doing this podcast, I realized that we’re in the US, but there’s likely a lot of difference in the use of AI around the world.
Jennifer Owens (00:07)
Man, AI use is a little bit different, but you know what’s really different is the regulations. Let’s talk about that.
Hello, and welcome to Digital Labor Lab, where we’re exploring the future of work one experiment at a time. I’m your host, Jenny Owens.
Brad Owens (00:28)
And I’m Brad Owens, and we’re going to continue on our theme of guardrails around AI use in your business. So last week we talked a little bit more about the ethical concerns with AI use in your business, but those ethical concerns kind of led to regulatory issues and some different types of rules that are in place. Things you just kind of need to know about if you’re going to use AI in your business. And we wanted to bring that to you so that you understand how to safely use AI in your business. Now I need to give a blanket statement here.
This in no way qualifies as legal advice. That’s not us. We may be able to hook you up with lawyers that are specialized in this, but we just wanted to give you an overview of all the topics, things you may want to consider. And Jenny went deep into organizing thoughts around all of these different types of regulations that are out there. Jenny, just give us an overview. What’d you find?
Jennifer Owens (01:01)
Not a lawyer. Yeah.
Sure, I went deep. The amount of stuff that I cut from the script from this episode is like this long. But briefly, we really focused our efforts here on legislation in the EU and in the United States. So the European Union, my research was primarily focused on the General Data Protection Regulation, or GDPR, and the EU AI Act. So GDPR, the first thing that often affects AI initiatives and digital labor initiatives in this space is that the servers must be physically located
in the European Economic Area. So please do not be thinking that you can start a global digital labor company from the United States and sell your services in the EU without having servers also located there. The GDPR principles include your usual suspects like data minimization, transparency and explainability, and data protection. And a lot of this is going to sound really familiar to our episode from last week. They’ve also got some interesting restrictions on what you can do with the data.
To an American, this sounds a little bit limiting, especially to an American who’s used to being in the health care space. So when you go to a hospital, when you receive medical care, you sign a release that says, hey, I’m giving you access to my medical records for these purposes: treatment, payment, and operations. And a lot of times, operations will include quality improvement, which covers a lot of secondary uses of your data. In the EU, not so much. Secondary uses, unless you’ve explicitly consented to them, are prohibited. Fascinating.
Then we have the EU AI Act, which is really aimed at preventing algorithmic discrimination. And I could talk for hours about this and probably will at some point, but the thing that I want to call out is that the EU AI Act really segments artificial intelligence into different tranches of risk. They call out high risk areas that pose significant risks to health, safety, or your fundamental rights through the use of algorithms such as automated hiring processes,
any sort of triage for healthcare, anything that poses a risk to your health, your safety, or your fundamental rights, which means, Brad, that both you and I are in this highly regulated bucket. So.
Brad Owens (03:25)
Yeah, so
great. So everything that we’re trying to do with our work is going to be incredibly regulated outside of the US for right now. So great.
Jennifer Owens (03:33)
I mean, to be fair, it’s also pretty regulated inside the US. And I have a theory that this actually puts us at an advantage rather than a disadvantage. Do you want to hear it?
Brad Owens (03:41)
I can understand where you’re trying to go with this one. So yes, I feel like we need to hear why this is an advantage for us here in the U.S.
Jennifer Owens (03:49)
So when you’re in a highly regulated industry like this, when you’ve got a legislative framework for how you have to deal with data and how you have to deal with customers, patients, applicants, whoever, you already have a framework for wrestling with artificial intelligence. You just have to funnel it through your existing regulatory framework, right? So you already have rules, Brad, about how you’re able to use the information that applicants provide to you. You already have rules about what you are and are not allowed to use to discriminate when you’re hiring.
I have rules about what I can do with patient data. All I have to do is add artificial intelligence flavoring to that as I’m continuing to think about how we’re going to incorporate AI and digital labor into healthcare or into HR. So, yes. Yeah.
Brad Owens (04:32)
So that’s a lot. We
understand all the different things that they’re doing with the EU AI Act and with the GDPR and everything else they’ve got over there. But something happened recently in Paris where we were trying to get a worldwide version of this AI Act, essentially, of the safe use of AI. And a couple of people came away without signing onto that. And one of those was the U.S. So what’s going on in the U.S. when it comes to AI?
Jennifer Owens (05:02)
Yeah, so JD Vance was our representative at the AI Summit that was in Paris. And he mentioned that he wasn’t in favor of the legislation, not the legislation, the treaty that was being signed, because he felt that it posed barriers to innovation in AI in general and to American innovation in AI in particular. So this is really interesting because at the federal level, we’ve been kind of wavering.
on regulation. So Biden era executive orders called for collaboration and guidance and a lot of training to really minimize harm. These have been modified by an executive order from President Trump that’s ordering a review of all of the policies and the guidance that took place under that previous executive order to make sure that it is not posing a barrier to America’s global AI dominance, which is really like strong and aggressive language, but it’s really interesting. So we seem to be really wrestling at a federal level.
with what we want to do about artificial intelligence, whether we want to remove barriers to innovation or whether we think that we should put guardrails in for what people can and cannot do safely. What makes that interesting, though, is that, of course, America is a confederacy of states. Brad, talk to me about state regulation.
Brad Owens (06:10)
So in the States, we’re kind of left with just a patchwork of a whole bunch of stuff. So at the federal level, there was some Biden-era guidance for what AI should be able to do, what it shouldn’t do, what we should watch out for, some of the risks that might be involved. And that may or may not stand depending on what the Trump administration decides to do on this stuff.
But the administrative things aside, because there has been no federal mandate for the U.S., kind of like the EU AI Act, it’s getting left to the states to do something about it. So we have states like Colorado, like California, like New Hampshire, this patchwork of states coming up with their own ideas of what they feel AI should and shouldn’t be able to do and putting those rules in place. But even those rules come with
very vague definitions, and it’s coming down to lower courts to start deciding, hey, what does it actually mean? What can and can’t you do? So this is leaving a lot of business owners very confused. I mean, Jenny, if you were a business owner right now thinking, I want to run my business with majority AI, how would you feel?
Jennifer Owens (07:19)
I mean, that’s what I’m thinking about as we’re talking. I have talked a lot about this. You have talked a lot about this. And the lack of clear guidance really does make it a little confusing and a little intimidating. I understand ethically the principles that I am supposed to be following. How can I make sure that I have the correct safeguards in place when I’m using artificial intelligence in my day job? Actually, let me take a step back.
Brad, you use artificial intelligence and digital labor. You use agents in your day job. Talk to me a little bit about that. What does that look like for you right now today?
Brad Owens (07:53)
Right now it’s really all assistive. It’s all, hey, I need help doing this thing, help me do this thing. Which, if we go back to what the EU AI Act was talking about and our risk levels, is very low risk, likely not going to cause any problems, nothing they would refer to as a risky AI endeavor. It’s not going to right now, because everything I’m doing is more assistive. There’s no
decision-making process. There’s no part of that that I’m going to take as gospel and say, okay, well, here’s what we’re going to do in our business. I mean, it comes down to how much of my business is actually going to be affected, how much of my decision-making is going to be affected, by what this AI is doing. And per the EU’s rules, there’s not a whole lot of risk to my business for doing it.
Jennifer Owens (08:38)
Mm-hmm.
I mean, I would say the same. Obviously, artificial intelligence is everywhere in health care, even before it was called artificial intelligence. You’ve got noise reduction in your CTs. You’ve got different calculators for calculating your risk post-surgery, et cetera. But there’s a lot of stuff that is there as information, ways to kind of chew up all of that medical data and surface it in different ways, but that still leaves a clinician’s medical judgment intact. Doctors and nurses
right now are not relying on AI to make decisions for them. They’re relying on AI to surface the correct information at the correct time and give them the information that they need to exercise their medical judgment. I think a crucial question in determining how you want to proceed with digital labor and AI in your business is: is it assistive, or is it autonomous? Is it doing things on its own?
Brad Owens (09:30)
So we’ve been painting this whole picture, and all of the media, the movies, and all the tech leaders out there are painting this picture too. They want to paint this picture of your business being run by AI, with AI making all these decisions. But sorry, y’all, that’s kind of BS. There’s a whole lot of rules in place today. Yeah. There are a whole lot of rules in place that will make that not possible. It’s just not going to happen.
Jennifer Owens (09:46)
Today. Today it is. Yeah.
Which is really interesting to me too, because I wonder, and I worry a little bit, about the future of artificial intelligence if our legislation is so geared towards keeping humans in the loop and towards transparency and explainability, which are ethical principles that I completely agree with, but which are not necessarily aligned with what the tech does. And I think this is another topic for another podcast, but I do wonder if we’re legislating ourselves into a small subset of what AI can be.
Anyway, another subject for another day. But I think what I really want to end our show with, Brad, is if you’re a business owner who wants to use AI and digital labor today, what does all this regulation mean for you? What do you need to be thinking about? And I want to be crystal clear here. We are not lawyers, but we are people who want to use automated labor. We are trying to automate as much of this podcast as we can.
So Brad, as the person who’s been trying to get this podcast automated, what does this mean for you? What are you taking away from this?
Brad Owens (10:59)
As a business owner, it’s up to me to at least make a concerted effort to understand the laws and regulations and things that are going to govern what I am able to do. So at the very least, I need to be able to do some research on my own to try and understand if there’s anything I may be stepping into by doing business this way.
Jennifer Owens (11:20)
Yeah, and from my perspective, I think it’s really important to document our principles when using AI. When we were talking about an automation flow for the podcast, we talked about the items that were important to us. So not only to document our principles, but also to document the process, what the process is intended to do, what the intended output is, and any sort of risk mitigation that we’ve put in place. That’s a start.
We’ve put together a reusable outline for you to help you start on your documentation journey as you’re thinking about principles, as you’re thinking about processes, as you’re thinking about governance. It’s a really helpful thought exercise and a worksheet so that you can start coming up with your own business’s approach to using AI and digital labor. If you’re interested in that,
Drop us a line at hello at digitallabourlab.com. We’d be happy to share that with you and to spark that conversation for you and for the other people working in your business.
Brad Owens (12:14)
So if I were to say my key takeaway, then: just pay attention, just document what you’re doing. Just have an idea of it. Because even if you look at the very restrictive parts of the EU AI Act and at what has come out of the U.S., it really comes down to, can you show your work? Can you, like a high school student in, I don’t know, calculus, show your work for how you came to that outcome?
That’s really what it’s going to come down to. So it’s all about documentation at this point. Someone’s going to say, hey, how did you do that? That was discriminatory, whatever. And you can say, look, here’s the entire process laid out for you and how we came to this decision. That’s what it’s going to come down to. So yes, take us up on this offer. Email us at hello at digitallabourlab.com. We’ll send you the guide and the document that’ll help you map out those workflows so that you have a solid understanding of how you’re using these things and why.
Jennifer Owens (13:06)
Until next time, I’m Jenny Owens. You can find us at digitallabourlab.com. You can shoot us an email, hello at digitallabourlab.com. Please feel free to like, subscribe, do whatever on whatever podcasting platform you choose to get your clearly superior podcasts. Catch you next week.
Brad Owens (13:08)
And I’m Brad Owens.