The Ethics of AI and Digital Labor: What Every Business Should Know



As businesses rapidly adopt AI and automation, the ethical implications of digital labor are becoming impossible to ignore. In this episode of Digital Labor Lab, Jennifer and Brad Owens explore the key guardrails that businesses need to consider when integrating AI into their workflows.

From regulatory challenges to the risks of algorithmic bias, this discussion highlights why ethical oversight is essential to prevent unintended consequences. With insights from industry leaders and real-world examples, this conversation offers a practical framework for businesses looking to balance innovation with responsibility.

Watch the full episode below to learn how companies can navigate the future of digital labor without sacrificing ethics or quality.

Links & Resources:
This Week in AI: Billionaires talk automating jobs away – https://techcrunch.com/2025/02/04/this-week-in-ai-billionaires-talk-automating-jobs-away/
Automation and New Tasks: How Technology Displaces and Reinstates Labor – https://www.aeaweb.org/articles?id=10.1257/jep.33.2.3

Watch the full episode:

AI-Generated Full Transcript:

Jennifer Owens (00:00)

Hey Brad, I think we really need to talk about the ethics of digital labor. Do you want to talk about that in our next episode?

Brad Owens (00:05)

Really?

Hey everyone and welcome to the Digital Labor Lab where we explore the future of work one experiment at a time. I’m Brad Owens.

Jennifer Owens (00:22)

I’m Jenny Owens. And in today’s episode, we’re going to talk about some guardrails for using artificial intelligence and digital labor in your business.

Brad Owens (00:30)

That doesn’t sound like a lot of fun.

Jennifer Owens (00:33)

So I think it’s really important to kind of lampshade the tension that we had when we were setting up this episode right away. Because as we were crystallizing the idea for a digital labor lab, I really wanted to talk about ethics. I wanted to talk about regulations. I wanted to talk about basically the boundaries of the box that are going to kind of determine what you can and cannot and should and should not do. And you had a different opinion.

Brad Owens (00:56)

I didn’t want to do that. Like, the whole thing about digital labor is that it’s fun, and we’re at the experimental phase, and look what this thing can do, and look what that thing can do. But because everyone in the industry is in the experimental phase, it’s also leading to a whole lot of conversations where people are just starting to say, well, hang on, holy crap. This is fun and all, but this is going to mean major changes to business. What should we actually be paying attention to? What should we actually do?

I think it’s important. I think we do need to get into, hey, this is all cool, but you’re about to make a significant change in your business.

Jennifer Owens (01:35)

So really two things sparked my desire to bring the ethics and regulatory discussions upfront. One is that if you are a business leader looking to incorporate digital labor into your business, you’re gonna face a lot of pushback, and a lot of that pushback is gonna be armed with these same concerns. So it’s best to get it out in the light and discuss it upfront. And secondly, we are in a great age of experimentation with artificial intelligence and digital labor. And just like when you’re experimenting in the lab, you have personal protective equipment, you have a list of lab safety rules, you have MSDS sheets on all your chemicals. Like, please don’t rub the ethidium bromide on your reproductive organs. So we need the same kind of very basic safety concerns, or at least safety discussions, around the use of digital labor. And I know it’s no fun. I know I feel like I’m telling you to eat your broccoli before you can have dessert, but we’re gonna do it, because broccoli, when properly prepared, can be delicious and fun for your brain.

So given that I’ve just told you that I’m gonna feed you a bunch of brain broccoli, here’s what we’re gonna discuss in this episode and then in future episodes. We’re gonna talk about three guardrails for using artificial intelligence and digital labor in your business. In this episode, we’re gonna talk about ethics and we’re not gonna get into a deep philosophical dive. We’re gonna talk very basically about how you can think about what you should and shouldn’t do with automation and with agentic labor.

In the next episode, we’re gonna talk about regulations. So that’s literally the laws that you’re going to be governed by, and how you’re gonna decide, okay, am I going to do things this way, or am I going to risk a fine and do things that way? And then in our last episode, we’re gonna talk about the ways in which digital labor and artificial intelligence, when improperly used, can actually result in kind of a downward spiral of your product and your business, and ways to avoid that. So, three guardrails. In this episode, we’re gonna take our time. We’re gonna really talk about ethics, and about the right and wrong ways in which we can use digital labor. And what kicked this off, and what really rocketed this up to the top of my priority list to cover on our podcast, was these comments that happened recently. Brad, what was the forum where the billionaires were talking about automating jobs away?

Brad Owens (03:38)

So there have been two conferences that happened, not so much back to back, but in the news cycle back to back. The first one happened in Davos, at the World Economic Forum, I think it’s called. Yep. So there we had Marc Benioff, and a few others from Workday, and all these others that are coming out with what they’re calling their digital labor platforms. They’re all talking about, hey, we’re going to be the last generation of leaders to lead only humans. That was the big one that took me by surprise.

Jennifer Owens (03:50)

I think, yeah.

Brad Owens (04:08)

Man, you’re completely right. There are going to be digital workers in the workforce. But then fast forward about a month later, just this past week, you had an AI summit happen in Paris, and we had a bunch of different news stories come out about what they were talking about and why this thing was going on. One particular news story had the headline that caught most everyone’s attention: that this was the meeting where the billionaires talk about automating all the jobs away. It caught everyone’s attention, and rightfully so, because yeah, that’s scary. And I agree, the potential is there that the technology would be able to automate a large percentage of the labor force right now. Should you? That’s the question.

Jennifer Owens (05:03)

So I want to lightly push back on that a little bit, even before we get to the should you point, because over the past week, we’ve been working on ways to automate the production of this podcast. We’ve been digging into, you know, what of the editing can we outsource to artificial intelligence? What of our social media can we outsource to artificial intelligence?

Although all of the technical pieces are there, there are still some bits missing that are keeping us from being able to fully automate this podcast. So we cannot just sit here in our chairs, talk into our cameras, hit a button, and then walk away and have our fully edited podcast corrected, exported, uploaded to YouTube, to LinkedIn, to Spotify, to all the places where our podcast is available.

We’re still exercising a degree of human editorial control because, A, some of the AI is not quite ready for prime time, right? We’ve discovered that our automatic editing gets really excited when we use our cut words, because we get really excited when we use our cut words. So there’s a little bit of training that’s needed there. But also, we don’t have the kind of API access into the social media platforms that would allow us to completely automate that workflow.

So while I do agree with you that a lot of the pieces are there for the replacement of many of our workday tasks, I don’t know that we’re really at a point where we can say, on a one-to-one basis, this FTE is going to be replaced by this particular agent who is going to do their whole job description. I think we are going to see more blending, where this FTE is now 80% agent and 20% human, and the other 80% of that human’s time goes to doing other things. We’re gonna have to be very careful with our language about this. We might see a dilution of our total human head count, but I do think that a one-to-one replacement of jobs is probably not in the future for at least, and this is gonna age like milk, five to ten years.

Brad Owens (06:53)

I’d go even faster.

Jennifer Owens (06:54)

I’m just saying, I’m not afraid to be wrong, okay? I’m not afraid. So I think here’s the really interesting part, right? We’ve said that the technology is there to replace the jobs, and people are nervous about it. I read this article recently, it’s actually from 2019, from Daron Acemoglu and Pascual Restrepo. They were continuing the discussion about robotics that we actually covered in our episode on what digital labor is. They were talking about the right and wrong kinds of artificial intelligence and their relation to the labor market. And this is what I thought was really interesting, and I think we’re gonna touch on this in our third episode in this series, on enshittification and that downward spiral where you do something that’s kind of crappy, and then you get like medium results, and then you do more crappy stuff, and that’s it. That’s the whole episode.

That’s a lie. Yeah, Brad didn’t believe me when I said that I wanted to cover enshittification, and he’s like, what are you talking about? And I’m like, well, it’s an actual word. It’s a fairly well-known paradigm in the tech space, right? Yes. So going back to the article from 2019 that I was reading, the authors argue that in an age of automation, labor’s standing deteriorates and workers are badly affected if the new technologies are just kind of okay.

Brad Owens (07:43)

And yes everyone, that is an actual word.

It’s an actual word.

Today I learned.

Jennifer Owens (08:12)

Right? If they’re doing a C-minus-student job where you could have a person doing an A-plus-level job, it’s a pretty easy scenario to envision, right? I’m a business leader. I can automate this job for, you know, 0.4X where I’m paying this person X, and I get roughly 60% of what they were doing. That’s fine. But that also kind of starts that downward spiral, not only for your product and your reputation, but also for labor, right? The demand for skilled labor, for A-plus labor, is no longer there. The demand is for C-minus labor, and the robots can do C-minus labor real easy.

So I think this is interesting when we return to our discussion about what you should do and what the correct uses of artificial intelligence and digital labor are. If you’re using digital labor to replace a human product at one level with an automated product at a lower level, just know that this is gonna drive not only your labor market downward, it’s gonna depress wages, it’s gonna make people bummed out. That’s the technical term.

Brad Owens (09:17)

Well, can I put this into terms that at least resonate with me?

Jennifer Owens (09:24)

Absolutely, because I can tell I’m up on my soapbox, I’m giving my TED talk, and I was just thinking like, Jenny, you got to shut up and let Brad talk here.

Brad Owens (09:30)

So I’m really excited that you’re into this the way that you are, because I feel like people need to pay attention to this. And when I think about how to separate the hype, and let’s be honest, everyone, right now all the AI and digital labor and agentic AI and all this stuff is still mostly hype. Yes, there are companies that are using bits and pieces of this for what we had chatbots doing a long time ago. We’re not thinking a ton here. This is not superintelligent. This is just: hey, if someone says this, look at this stuff and present whatever matches closely enough. It’s simply a math game still. We’re not truly automating these things away. We’re helping, and we’re starting to deflect things like customer cases, but we’re truly not replacing yet with what I would call intelligent labor.

So we’ll get to that, but I want to bring this into something that I experience almost all the time when I’m using AI, things like ChatGPT, for instance, when I’m going to, let’s say, use it for a social media post. I have spent tons of time training a custom GPT model to produce what most closely reflects my tone of voice, the way that I write. I’ve given it tons of content, tons of videos that I’ve recorded, lots of different things that it can learn from to try and get as close as possible to my tone of voice, what I typically say, the words that I typically use, my pace, my cadence, all that kind of stuff. And while it’s really good, and it works as a party trick to show people, hey, look how closely this can model what I am doing, it’s still not exactly the way that I speak and the way that I deliver. And it won’t truly reflect exactly who I am, but it’s close enough.

So then what we’re talking about with this downward spiral is: if I were to accept 90% of me, and then we continually replicate that, eventually we have only 90% of a Brad. And then if we try and add even more layers on top of that, each getting 90% of where that is, we end up with only about 80% of what Brad was. And you can see this is a downward spiral. If we just focus on more of this automation-type stuff, and take things off of people’s plates and replace them completely (that’s a key point of this: not just augment, but replace them completely with a 90% version), the math doesn’t work out. You will eventually end up with a garbage product.
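The compounding Brad describes can be sketched with quick arithmetic. This is purely illustrative; the 90% retention figure is his hypothetical, not a measured quantity:

```python
# Each generation of an AI "copy" retains roughly 90% of the fidelity
# of the generation it was trained on, so quality compounds downward:
# 0.9, 0.81, 0.729, ...
RETENTION = 0.9

def fidelity_after(generations: int, retention: float = RETENTION) -> float:
    """Fraction of the original left after n generations of copies."""
    return retention ** generations

for n in range(1, 6):
    print(f"generation {n}: {fidelity_after(n):.0%} of the original")
```

Two layers of 90% copies already leave 81%, which is the "only 80% of what Brad was" in the conversation; five layers leave under 60%.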

Jennifer Owens (11:59)

And I think it’s helpful, too, to bring this back to the ethical question, right? So if we’re replacing you with a 90% version of Brad that is a giant katamari of GPT and your YouTube videos and whatnot, I think it’s interesting to think about what that represents, and what learning opportunities that 90% version of Brad is missing out on.

And so I want to spend some time thinking, too, as we’re talking about replacing human beings: I think you do have an ethical obligation to consider, if we’re swapping out humans for automation, what biases are we introducing? What are these new digital labor resources missing? Or what might they have that a human being does not?

So, you mentioned a 90% version of Brad. I think it’s also important to say that at this point, that 90% version of Brad is kind of crystallized, stuck right where you were. You’re not going to have your wife talk to you about ethics and maybe get a new idea in your head; that version is right there where it is. I think your perspective is really interesting, and I’d love to get your take on this. When we talk about artificial intelligence and digital labor, particularly in the HR space, I wanna hear your thoughts on algorithmic bias. I mean, we can’t pretend that digital labor just springs fully formed from the head of Sam Altman, right? This is trained on real data. It’s built by real people. And so it has the opportunity to crystallize bias and other things that are already inherent in the system, and then codify them. I wanna hear your thoughts on this.

Brad Owens (13:39)

So when we start thinking about the right kind of AI, we truly cannot think about a single source of truth for AI. Let’s think about running this as a business. If we think about a single-source-of-truth kind of model, saying this is the one model that will run our business, hey, that’s a CEO.

So let’s think about what it takes to run a highly profitable, successful business. It typically takes a lot of different departments. And think about where human resources comes into this. They’re typically seen as the Tobys, as the watchdogs over the rest of the business. They’re there to protect it, to help identify biases, to help identify things that are going to stand in the way of the business succeeding or get it in a whole lot of trouble, all of those guardrail things.

So when we think about automating these jobs away and potentially introducing biases, or crystallizing a thought process that’s not aligned with how we want our business to actually operate, we can’t just think of it as this single model, this single person. That’s a CEO. We need to think of it as an entire business itself, which has departments. So the way that I like to think about putting guardrails in place around ethics, when it comes to potential biases, is: don’t use a single source. There’s always gotta be another source acting as a watchdog, that everything has to run through once it’s completed its process, to say, wait, what biases might actually be in this? And if there are, whoop, try again.
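The watchdog pattern Brad describes can be sketched in a few lines: no single model’s output ships directly; an independent reviewer checks it first and can send it back for another try. The generator and reviewer functions below are hypothetical stand-ins, not any real vendor’s API:

```python
# Minimal sketch of a "watchdog" guardrail: a second, independent step
# reviews every generated output before it is used, and rejected drafts
# trigger a retry (or, eventually, escalation to a human).
from typing import Callable

def produce_with_watchdog(
    generate: Callable[[str], str],
    review: Callable[[str], bool],   # True = passes the bias/quality check
    task: str,
    max_attempts: int = 3,
) -> str:
    for _ in range(max_attempts):
        draft = generate(task)
        if review(draft):
            return draft             # watchdog approved this draft
    raise RuntimeError("watchdog kept rejecting output; escalate to a human")

# Toy usage: a generator whose drafts are numbered, and a naive reviewer
# that rejects the first draft and accepts the second.
attempts = []
def toy_generate(task: str) -> str:
    attempts.append(task)
    return f"draft {len(attempts)} for {task}"

def toy_review(draft: str) -> bool:
    return "draft 2" in draft

result = produce_with_watchdog(toy_generate, toy_review, "job ad")
```

The important design choice, echoing the conversation, is that `review` comes from a different source than `generate`, so one model’s blind spots are not grading its own work.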

Jennifer Owens (15:08)

So I think you’ve raised a really interesting point about watchdogs and about guardrails. Because as I was thinking about this: how do we hold our digital labor resources to the same HR standards that we would hold a human being to? We can’t mandate that human beings not hold attitudes that we find objectionable, but we can manage their work, and we can manage their relationships with their coworkers. So really, truly, and we could spend a long time down the ethics rabbit hole, I’m not a philosopher.

I just really like The Good Place. And so I feel like The Good Place’s mandate is that, you know, we don’t have to ask things to be perfect. We do have to ask them to get better, or at least to try to get better. And I think that’s really the crucial thing here. As you’re managing digital labor, do you have mechanisms set up to identify when things are performing poorly, or when you might be serving a particular group of people or a particular set of needs differently than the rest of your customer base?

Are you able to surface that? Are you able to respond to it in a meaningful way? Do you have the mechanisms in place to get better? I will spare you my long discussion about consequentialism. I will spare you my other thoughts on what it means for digital labor to be moral when it is not mortal. That is a whole other conversation for at least three glasses of wine. But I do think that if we boil all of our ethics thoughts down to just one takeaway: if you’re looking at bringing digital labor into your business, do you have mechanisms set up to make sure that your digital labor is performing ethically, according to your standards? Are you able to see that? And then are you able to manage to that? That’s the crucial thing. Just like you would have HR for human beings, you need to have HR for your digital labor resources.

Brad Owens (16:54)

This comes back to why I thought we needed to start this podcast in the first place. There will need to be a digital labor leader in your business whose sole responsibility is to start thinking about these things: to start understanding what are we automating, what are the potential risks involved, how do we need to set up guardrails, and how do we need to think about this digital labor and how it’s going to affect our business. And that’s really the basis for this entire podcast. This can’t just be something where you think, oh, we’re going to automate this thing away.

Yes, it will be a whole lot of fun for the department to stand around a computer and say, look how close this got, it almost automated this thing away. But right now it’s still just hype, and you’ve really got to start thinking about how we are actually going to introduce this to our business. And I appreciate you bringing this up, Jenny, and allowing this to be a main topic for us to cover early in the episodes of this podcast, because I think it’s going to be really, really important.

Jennifer Owens (17:48)

I couldn’t agree more, especially given our two day jobs, right? You work in HR software, I work in healthcare. These are two really highly regulated areas that have a high ethical standard to uphold. And I think it’s really helpful to talk about it upfront.

Brad Owens (18:01)

So if you like this kind of content, there is plenty more. Wherever you’re watching or listening to this, click that subscribe button. We will be back with plenty of other episodes, or you can catch us at digitallaborlab.com. That is always going to be the home for this and many more pieces of content.

Jennifer Owens (18:16)

Until next time, I’m Jenny Owens. Thank you for listening to Digital Labor Lab.

Brad Owens (18:17)

and I’m Brad Owens.