The future just got real. Shopify’s CEO just made AI mandatory for everyone, and it’s sending shockwaves through the business world. Here’s what it means for you and your team—whether you’re ready or not.
Imagine showing up to work and getting this email from your CEO: Before hiring anyone new, you must first ask “what would this look like if autonomous AI agents were already on the team?”
That’s exactly what happened at Shopify last week when CEO Tobi Lütke dropped a memo heard ’round the tech world. His message was crystal clear: AI is no longer optional at Shopify. It’s the default.
Lütke didn’t mince words. AI isn’t just another tool—it’s now “our standard.” In his words, learning to use AI effectively is “a foundational expectation” for everyone at Shopify. Yes, everyone—including executives.
But this isn’t just some “Hey, try using ChatGPT” suggestion. This is a strategic mandate that fundamentally reshapes how one of the world’s largest e-commerce platforms operates:
AI must be your first reflex – Turn to AI by default before starting any task
AI proficiency is part of performance reviews – Your AI usage will be measured and evaluated
No new hires until you’ve prototyped with AI – Prove digital labor can’t handle it first
As Lütke bluntly puts it: “I don’t think it’s feasible to opt out of learning the skill of applying AI in your craft.”
In other words: adapt or get left behind.
What This Means For Every Team
Let’s be real—this changes everything. Suddenly every team at Shopify is scrambling to figure out what good AI usage actually looks like in their department.
For HR leaders, you’re now tasked with:
Defining what “good AI use” looks like across wildly different roles
Figuring out how to measure and evaluate AI proficiency
Creating entirely new performance metrics on the fly
For managers and team leads:
You’ve got a new “employee” to supervise and integrate into workflows
Your hiring process just got more complex (prove AI can’t do it first!)
You need to balance team productivity with AI experimentation time
And for individual contributors:
Your job description just expanded to include “AI expert in your domain”
You’re expected to tinker, learn, and share AI best practices
Your performance evaluations now include AI proficiency
It’s a massive shift that requires everyone to run faster just to stay in place. As Lütke colorfully describes it: “stagnation is slow motion failure.”
The Hidden Challenges Beneath the Surface
What makes this particularly interesting (and potentially chaotic) is the timing. Shopify has been through multiple rounds of layoffs recently—20% reduction in 2023 and more cuts in customer service this year.
That means teams are already:
Doing more with less
Feeling stretched thin
Dealing with restructuring fatigue
Now add “become AI experts overnight” to that list.
The memo encourages both self-directed learning and peer instruction, but here’s the million-dollar question: When exactly are people supposed to do this? As anyone who’s tried to master a new skill knows, this kind of learning takes significant time and mental bandwidth.
If AI is now part of your job description and performance evaluations, then:
When do you find time to experiment?
Who provides the training?
How does this impact your existing workload?
As the podcasters pointed out, building expertise in a new area “on the side” is tremendously stressful. It adds hours to your workday and forces you to maximize every gap in your schedule.
Why This Matters Beyond Shopify
Shopify isn’t just adopting AI—they’re institutionalizing it. This isn’t a pilot program or an initiative. It’s a fundamental rewiring of their entire organization around digital labor.
The public nature of this memo creates enormous pressure on competitors to follow suit. If you work in e-commerce, retail tech, or any adjacent industry, your company is almost certainly looking at this mandate and wondering if they need to do the same.
Even if you’re in a completely different industry, this represents a sea change in how businesses approach AI integration. It’s not just “let’s use AI for these specific tasks”—it’s “AI is now the default way we approach everything.”
What You Should Do Now
Whether your company follows Shopify’s lead or not, the writing is on the wall. AI proficiency is quickly becoming table stakes across industries. Here’s how to prepare:
Start small but start now – Pick a low-risk area where AI could help your workflow
Block dedicated learning time – Even 30 minutes a day for experimentation adds up
Document your experiments – Keep track of what works, what doesn’t, and lessons learned
Build a peer learning group – Share discoveries with colleagues to accelerate everyone’s progress
Frame this as labor enhancement, not replacement – Focus on how AI helps you work better, not just faster
Most importantly, if you’re in a leadership position, remember that learning AI is real work. It requires time, resources, and support—not just a mandate. If you’re making AI proficiency part of performance evaluations, then you need to provide the scaffolding for people to succeed.
The Bottom Line
Shopify has thrown down the gauntlet. AI is no longer a “nice to have” or a competitive advantage—it’s becoming table stakes. The question isn’t whether your organization will follow suit, but when and how.
Will it be a thoughtful integration that accounts for the very real human challenges of learning new skills under pressure? Or will it be a chaotic scramble that leaves people feeling overwhelmed and underequipped?
The future of work isn’t coming—it’s here. And it’s watching to see how we respond.
Is your organization grappling with how to implement AI effectively? We’re building digital labor readiness toolkits to help teams navigate this transition. Drop us a line and let us know: If you could hire an AI as your first digital colleague, what role would they fill?
Are your proprietary data and trade secrets at risk of becoming someone else's AI training fodder? In the latest episode of Digital Labor Lab, hosts Brad and Jennifer Owens dive deep into safeguarding your intellectual property against the ever-expanding appetite of AI. They address the crucial aspects of integrating security and privacy into your digital labor systems, discussing on-prem AI model deployment, private cloud solutions, and strategies for maintaining full control over your data.
As AI tools grow increasingly indispensable across industries from healthcare to finance, your internal documents, customer records, and strategy decks represent the crown jewels of your organization’s intellectual property. The discussion highlights the importance of building trusted digital labor systems that prevent your data from being unknowingly siphoned off to fine-tune external models. With significant entities like Samsung and Apple placing restrictions on AI usage due to similar concerns, the episode emphasizes constructing robust security frameworks and a secure architecture tailored to your organization’s needs.
From deploying AI models in-house to exploring the potential of locally fine-tuned open source models, such as Mistral and Falcon, the conversation explores practical steps to protect your sensitive information. The episode also covers the value of sandbox training environments and synthetic data sets to enhance security and operational efficiency. By incorporating audit trails, governance committees, and zero-trust architecture, organizations can safely harness digital labor and ensure compliance and ethical use of AI technologies.
Brad Owens (00:00) Last week, we uncovered how Meta's AI was trained on a massive, massive trove of pirated books. But the question we started taking away from that is: how then do you protect your own proprietary data from becoming someone else's training set?
Jennifer Owens (00:14) So on this episode of Digital Labor Lab, we’re going to talk strategies, on-prem AI, private model training, and how to bake security and privacy into your digital labor stack by design.
Brad Owens (00:33) Welcome to Digital Labor Lab, where we explore the future of work one experiment at a time. I'm one of your hosts, Brad Owens.
Jennifer Owens (00:34) I'm your other host, Jenny Owens.
Brad Owens (00:58) So whether you are a healthcare system, a law firm, or a product company, your data is your IP: your internal documents, your customer records, the research that you're doing, your strategy decks. All of those are your crown jewels. These AI tools that you're using, though, they're hungry for data.
Jennifer Owens (01:00) Yes, they are. So just a quick note: IP is intellectual property, for people who are not in this space all day, every day. I just want to clarify our acronyms. If you're using cloud-hosted or public large language model APIs without really strict controls, your proprietary information could be cached, stored, or used to fine-tune someone else's model, often without you knowing. They could be profiting off of your hard work, and you would have no way of knowing.
Brad Owens (01:33) Yeah, we're seeing headlines where companies are banning AI tools outright because of exactly that. We've got Samsung, Apple, JP Morgan; there's a lot. But blocking AI is not going to be the answer, because people are going to find a way to use AI regardless. So building out trusted digital labor systems, that's what you can really do to protect yourself into the future.
Jennifer Owens (01:55) Yeah, and that starts with knowing where your AI lives, knowing what it's trained on, and knowing how your data flows into and out of it. So we're gonna spend some time breaking down how you can really build some of that trust and reliability in your own organization. So first of all, let's talk architecture. One of the strongest moves you can make is deploying AI models on-prem, in your own systems. That means the model is running inside your firewall, on your servers, on your terms, subject to all of your company's technical controls.
Brad Owens (02:24) Yeah. And that's kind of the ideal world, right? No data is going to leave your network at all. No cloud vendor with some kind of fuzzy privacy policy is going to take all of that data. It's just complete control. However, we also understand that AI models right now take a whole lot of firepower, and there are not a lot of companies that have graphics processors stacked on graphics processors to run their own local AI. Things are changing so fast that this will be out of date in like a week. So it's not realistic right now to say: hey, host it all yourself, you're going to be an AI company now. We recognize that's really not something most companies can truly do.
Jennifer Owens (03:03) So, not today as of this recording, right? But look at how DeepSeek made so many waves just a few weeks ago. The reason they made so many waves is that DeepSeek performed really well on a lot of benchmarks without requiring the same processing power as a lot of the other models. DeepSeek also kind of failed on a lot of security points, so I wouldn't take that as your gold standard.
But I do think we're seeing efficiency become a key competitive mechanism in the large language model market, which is really interesting. Open source models like Mistral or Falcon can be fine-tuned locally, and you can fine-tune DeepSeek locally too if you want to do a local install. You can get those powerful generative AI capabilities while keeping your sensitive data within your own ecosystem.
Brad Owens (03:50) Yeah. And GPU advances are coming too. Nvidia just came out with a completely new chip last week, and that's allowing even midsize companies to run these models locally and really make sure they have secure private clouds to themselves. You know, you don't need Google's server farm to do this stuff anymore. You can do a lot of it locally.
Jennifer Owens (03:52) Yes.
Yeah, and a bonus that really makes my governance-loving heart happy, the local models and the fine-tuning that you do on those, those are auditable. You know exactly what data went into training, and you know how the outputs are generated, which is really crucial for any sort of robust AI governance or monitoring program, or for regulatory compliance if you’re in a highly regulated field like finance or healthcare or education or law or like any other of the million highly regulated fields out there.
Brad Owens (04:37) So I have to be honest, this goes a little against my shiny, bright, AI-can-do-everything instinct of ooh, I'm going to play with this, ooh, I'm going to play with this. It goes against my heart because I really like playing with the cool flashy stuff and all the new things, but I'm also not putting my entire corporation's data on the internet to do that. I'm just having fun with: oh look, it can spell strawberry now. The little things. Well, it did once, at least. So.
Jennifer Owens (05:00) I can’t it.
Brad Owens (05:06) Let’s dig into at least how companies should be thinking about this when they’re training their models, like how to actually not expose all your intellectual property. So you start with your infrastructure. You have all of that locked down. You think, okay, no one has access to all of our data. That’s great. But when you start bringing in model training into your organization, you open yourselves up to completely new things that you haven’t really exposed all your data to before. So how then do you.
customize all of this without compromising what’s possible with AI.
Jennifer Owens (05:38) Sure, and I’m bringing my bias to this as an output of all of my clinical research training and my role in healthcare, where my first reaction to any sort of like data sharing opportunity is to be like, no, no, you can’t have my data. I’m gonna put it all in this paper format so that nobody can read it, which is not super effective and it’s not a great way to innovate. So I’ve been fighting against that instinct for many years.
But a good way to indulge your conservative instincts while still getting to innovate is to start with something like differential privacy. Differential privacy refers to adding a certain amount of noise or randomness to a data set so that you could pull any real data point out of it without compromising the overall structure of the data set. I think about this like widening the bell curve, right? Say I'm looking at, I don't know, the average education level of people in our particular county. That's going to be skewed, because our county holds a lot of people with multiple post-graduate degrees: a lot of doctors, a lot of lawyers, a lot of folks with post-college education. So can we add a level of noise that preserves that bump at the higher education levels, while ensuring that if my neighbor who collects degrees because it's fun dropped out of the data set, we wouldn't see a sudden impact? That lets you still query your data set and preserve all of its useful features while protecting the privacy of any single individual. Because if we pulled a single individual out of the data set and the number of degrees suddenly dropped, we'd know it's somebody with five or six degrees in our county, and that's a smaller list of people. So you want to protect the privacy of the individuals who make up your data while not compromising your ability to work with that data in a way that's innovative and functional.
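To make that concrete, here is a minimal sketch of the standard Laplace mechanism, the textbook way to implement differential privacy for a numeric query. The function name, the epsilon value, and the clamping bounds are illustrative assumptions, not anything the hosts specify:

```python
import numpy as np

def private_mean(values, epsilon=1.0, lower=0.0, upper=10.0):
    # Clamp each record so one person can shift the mean by at most
    # (upper - lower) / n; that bound is the query's "sensitivity".
    values = np.clip(np.asarray(values, dtype=float), lower, upper)
    sensitivity = (upper - lower) / len(values)
    # Laplace noise scaled to sensitivity / epsilon masks any single record.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

# Average degrees per person in a county, with one outlier "degree collector".
degrees = [1, 1, 2, 1, 3, 1, 2, 6, 1, 2]
print(private_mean(degrees, epsilon=0.5))
```

The clamping is what widens the bell curve: the degree collector can no longer move the statistic enough to be identifiable, but the overall shape of the data survives.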
Brad Owens (07:28) Talking about AI employees here: when we get into digital labor, the human employees who work for our business all have individual, role-based access to our systems. We've had this for years. We say: all right, this system is proprietary to just this group, and that one to just that group. We do that through data classification. So we have these policies; maybe, just for the sake of argument, it's a label that you add to a document. This is an internal document. This is a publicly available document. This is a confidential document. You're adding data classification labels to your individual data, and you're opening it up to AI the exact same way you'd open it up to an employee. You're saying this AI model, this digital labor employee, has access to only this stuff because it's labeled a certain way.
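As a rough sketch of that idea (classification labels on documents, a clearance level on each agent), here is a hypothetical Python example; the label names and the agent's clearance are invented for illustration:

```python
from dataclasses import dataclass

# Ordered from least to most restricted.
CLASSIFICATIONS = ["public", "internal", "confidential"]

@dataclass
class Document:
    name: str
    classification: str

def accessible_docs(docs, agent_clearance):
    # An agent sees only documents at or below its clearance level.
    max_level = CLASSIFICATIONS.index(agent_clearance)
    return [d for d in docs if CLASSIFICATIONS.index(d.classification) <= max_level]

docs = [
    Document("pricing-faq.pdf", "public"),
    Document("roadmap.docx", "internal"),
    Document("acquisition-memo.docx", "confidential"),
]

# An AI agent cleared for "internal" never sees the confidential memo.
print([d.name for d in accessible_docs(docs, "internal")])
```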
Jennifer Owens (08:18) Right. I don't need access to all of our pharmacy records. I don't need access to all of our legal documents. I do need access to this particular production environment or that particular sandbox environment, which brings me to my next point: sandbox training environments. Boy, building this into your architecture will give you so much peace of mind. If you have a sandbox where you can do training, then there's no outbound data traffic; stuff only goes in and stays in. No external logging tools, so nothing is writing or reading that you don't know about. This is just a sealed lab where your model can learn from your data without spilling it outward.
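Real sandboxes are enforced at the infrastructure layer, with egress-blocked networks and isolated environments, but a toy process-level sketch captures the spirit: inside the sandbox, anything that tries to phone home fails loudly. This is an illustration of the idea, not a production control:

```python
import socket

class NetworkLockdown:
    """Blocks new sockets inside the `with` block, so a training job
    that tries to send data out fails instead of leaking it."""

    def __enter__(self):
        self._original = socket.socket
        def deny(*args, **kwargs):
            raise RuntimeError("outbound network disabled in training sandbox")
        socket.socket = deny
        return self

    def __exit__(self, *exc):
        socket.socket = self._original

with NetworkLockdown():
    # train_model(local_data)  # hypothetical training call
    try:
        socket.socket()  # any networking attempt inside the sandbox fails
    except RuntimeError as err:
        print(err)
```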
Brad Owens (08:56) Picture Jenny in her basement laboratory where she's doing all her AI experiments. Like, that's what we're talking about. We're talking about a corporate laboratory.
Jennifer Owens (09:05) Yeah, that’s it. Also, why are you exposing my basement laboratory? That is supposed to be my special space. Yeah, it was labeled. Not for podcast discussion.
Brad Owens (09:11) sorry, I didn’t look at the data classification label.
Love it. All right, one extra point I always like to throw out there: synthetic data sets. Coming from my HR world, I never want to put anything out there about an individual that could be exposed, so I'm always a big fan of synthetic data sets wherever possible. There are ones that you can actually license or purchase from other companies, though given our entire discussion, make sure you know where they got that data or how they generated it. Yep.
Jennifer Owens (09:44) Ask for provenance. Yep.
Brad Owens (09:46) So it gives you that ability to add in that realism without exposing any legitimate people or trade secrets.
Jennifer Owens (09:54) Yeah. So now that we've got a sense of how to build security and privacy into our architecture and our training approach, let's talk about digital labor strategy. If you're adding AI agents into your workforce, we need to think beyond the tech, right? This is not just a model that we need to train, and not just a security classification; we're also building a new layer of infrastructure. Your cybersecurity folks are going to have to secure that just like you would any other mission critical system. Or maybe it's not mission critical, right? Let's say you've got a digital labor agent who is in charge of maintaining a document library; maybe that's not mission critical. I always think about tiers when I think about security. Tier zero is the stuff that must absolutely be on 24/7. Tier one is the stuff that really has to be on, but we can handle a drop of maybe 30 seconds. So apply that tiering to your AI agents, and make sure your security structure matches the criticality of that agent.
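One hypothetical way to encode that tiering in code; the tier definitions and control values are invented for illustration, not a standard:

```python
from enum import IntEnum

class AgentTier(IntEnum):
    TIER_0 = 0  # mission critical: must be up 24/7
    TIER_1 = 1  # important: a ~30-second drop is tolerable
    TIER_2 = 2  # convenience: e.g. a document-library maintainer

# Controls tighten as criticality rises (values are made up for the example).
CONTROLS = {
    AgentTier.TIER_0: {"human_approval": True,  "review_every_days": 30},
    AgentTier.TIER_1: {"human_approval": False, "review_every_days": 90},
    AgentTier.TIER_2: {"human_approval": False, "review_every_days": 180},
}

def controls_for(tier: AgentTier) -> dict:
    return CONTROLS[tier]

print(controls_for(AgentTier.TIER_2))
```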
Brad Owens (10:55) So what we're talking about is security-first digital labor, right? The digital labor version of zero trust architecture. You start with no one having access, and you only give specific access to specific things or people when they need it. So every AI agent, every automation tool should have its own identity verification. How do we know this is actually the thing that's supposed to be accessing this data? And how do we make sure the rest of the data is restricted so that that thing cannot get access to it? And then we have behavioral monitoring as well.
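A default-deny check is simple to sketch. The agent name and grants below are hypothetical; the point is that an agent's access is empty until something is explicitly granted:

```python
# Each agent identity maps to the (resource, action) pairs it was granted.
AGENT_GRANTS = {
    "doc-librarian-01": {("document_library", "read"), ("document_library", "write")},
}

def authorize(agent_id, resource, action):
    # Zero trust: anything not explicitly granted is denied.
    return (resource, action) in AGENT_GRANTS.get(agent_id, set())

print(authorize("doc-librarian-01", "document_library", "read"))  # True
print(authorize("doc-librarian-01", "payroll_db", "read"))        # False by default
```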
Jennifer Owens (11:30) Yeah. Audit trails are really critical. We do this with humans all the time: we have a log of every login, every prompt, everything. We need to do this for our digital labor as well. You need to log: okay, I prompted the agent with this, these are the actions that were taken, this is the output that we got. Make sure you have a log of every time they touch a file and every time they make an edit. You would do this for humans; you should do this for your digital labor as well. That's your paper trail for compliance and for internal and external accountability.
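The audit trail itself can be as simple as an append-only log with one structured record per agent interaction. A minimal sketch, with hypothetical field names:

```python
import json
import time
import uuid

def log_agent_action(logfile, agent_id, prompt, actions, output):
    # One append-only record per interaction: who, when, what was asked,
    # what was touched, and what came back.
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent": agent_id,
        "prompt": prompt,
        "actions": actions,            # e.g. files read, edits made
        "output_preview": output[:200],
    }
    logfile.write(json.dumps(record) + "\n")  # JSON Lines, one record per row

with open("agent_audit.jsonl", "a") as f:
    log_agent_action(f, "doc-librarian-01", "Summarize the Q3 report",
                     ["read:reports/q3.docx"], "The Q3 report shows...")
```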
Brad Owens (11:59) And our favorite thing, can’t forget about governance.
Jennifer Owens (12:02) Our favorite thing? Aww.
Brad Owens (12:04) I’ve come around to your way of thinking. I want to use all the new shiny tools, but I realized that if it’s not just for me coming up with recipes, like our ranch chicken that we came up with a couple of weeks ago, which actually ended up being pretty good. if I’m using this for corporate things, we should probably think about governance. So think about an internal AI ethics and risk committee, define what is acceptable use of all of this data and you know,
Jennifer Owens (12:06) Okay, okay.
That was stellar, yes.
Yes.
Brad Owens (12:30) you’re not going to restrict what AI people can use. There’s going to be something that they find access to. But just make sure you have at least an acceptable use. Give them escalation paths, which is a word I can never really say, but it allows them to move things up the chain as needed and review all new AI deployments like you would just if you were going to have a new hire or sign a new vendor contract.
Jennifer Owens (12:53) Yeah, because at the end of the day, these digital labor resources are acting on behalf of your business. They are part of your labor force. You need to treat them like it. You would never hire a new person and just push them into the office and be like, I don’t know, you’ll figure it out. When I started at Cleveland Clinic, I got multiple days of what I lovingly refer to as Cleveland Clinic brainwashing. And it was everything from our mission, our vision, our guiding principles, all the way through here is training specific to your job role.
We need to do that for our digital labor assets immediately from day one.
Brad Owens (13:24) Let’s wrap this all up then. So what we are advocating for here is on-prem when possible, or at least a private cloud AI that’s going to give you complete control over how your data is being used. And then fine tune open source models with synthetic data or with privacy tools that are in place by your organization to make sure that your data doesn’t get exposed to train these actual big models that are out there and then build in security from day one. We’re talking governance.
talking monitoring actually what’s happening with these digital employees and then train all of those things to make sure that they have security in mind from day one.
Jennifer Owens (14:04) Yeah. The big scandal Meta AI is currently embroiled in, where it was revealed that their model was trained on copyrighted works, reminded us that AI is really only as ethical and secure as the system behind it. So you cannot, cannot, cannot outsource trust. You have to build it in.
Brad Owens (14:20) So if you want your own quick start guide to securing your AI deployment, email us at hello@digitallabourlab.com. We're happy to hook you up with that checklist so you can make sure that what you're doing with AI is safe.
Jennifer Owens (14:33) Yes, and if this episode helped you rethink your AI strategy, share it with somebody in your company, or drop us a comment. Give us a subscribe; we are everywhere the finest podcasts are sold. We're on LinkedIn, we're on YouTube, and we will see you next time.
What if the AI driving your business operations was built on stolen content? In a startling revelation, Meta's AI has come under fire for allegedly using millions of pirated books to train their large language models (LLMs). This controversy highlights the risks and ethical dilemmas businesses face when deploying AI-driven solutions. Unsealed court documents reveal that Meta's LLM was reportedly trained using Books3—an illicit dataset comprising copyrighted works from celebrated authors like Stephen King and Margaret Atwood—raising significant concerns about copyright infringement and trust.
The implications for companies relying on AI are profound. While Meta may not have directly committed the act of piracy, using such questionable data underscores the deeper issue of accountability in AI training. For businesses, this revelation is a crucial reminder to scrutinize their AI tools’ origins and ensure compliance with intellectual property laws. As AI models become more ubiquitous in customer service, content generation, and HR operations, the need for transparency and ethical considerations increases to prevent legal fallout and reputational damage.
As companies navigate the AI landscape, understanding the source of data used in AI models is paramount, especially in regulated sectors like healthcare and finance. This incident serves as a call to action for organizations to implement robust AI governance policies, demand vendor transparency, and push for ethical AI practices. Emphasizing trust and transparency in AI development not only safeguards business operations but positions companies strategically as regulatory landscapes evolve and ethical standards tighten.
Brad Owens (00:00) What if the AI that was driving your business was built on stolen content? We’re talking millions of pirated books that could be powering the LLM that you’re using.
Jennifer Owens (00:09) So this is exactly what happened with Meta’s AI according to court documents that were unsealed, I think in January. But what this really is is a wake up call for every company deploying AI-driven labor.
Brad Owens (00:19) Yeah, because it's not just about the copyright issues. Yes, that's bad. But this is more about trust and risk, and really what happens when your AI vendor takes some ethical shortcuts.
Jennifer Owens (00:28) Yeah,
so we’re gonna dive into this today on Digital Labor Lab. We’re gonna break down the scandal, we’re gonna break down what it means for the future of digital labor, and if we’re lucky, we’ll avoid me hopping on my soapbox to talk about authors and their intellectual property rights.
Welcome to Digital Labor Lab, where we are exploring the future of work one experiment at a time. I’m Jenny Owens.
Brad Owens (00:56) And I’m Brad Owens. So let’s get you all on, on the same playing field as we are just to understand what happened. So there is a case that was filed against Meta by several different authors. court documents showed that Meta used a data set that’s known as what was this called Jenny books three. Right. So there we go. And he was used to train that llama AI model.
Jennifer Owens (01:13) Books3, I believe. It's part of the LibGen database.
Mm-hmm.
Brad Owens (01:22) So this data set contains something like over 190,000 books. Most of them were pirated, and many of those books are still under copyright.
Jennifer Owens (01:32) Yeah, and just to set the stage: this is, like, minor, barely known authors like Stephen King, Margaret Atwood, Zadie Smith, Colson Whitehead. These are major contemporary writers, right? This is not training on Project Gutenberg, where stuff is in the public domain. This is living authors whose work is still protected by copyright, and they had their work ingested into that machine learning system without their consent, without compensation, and without any real oversight. And I just want to pause here for a second, because one of the things that is interesting to me about this is that Meta used a pirated database for the training. So the actual act of piracy, the crime committed, where someone strips the protections out of an ebook and loads it up, was done by somebody else. But Meta still used the products of that crime to train their machine learning. Yeah.
Brad Owens (02:19) And they didn’t disclose this,
which is big problem. came out kind of during discovery for this trial. And let’s kind of move beyond it. This is not just about taking pirated data and using it because it’s possible that meta didn’t even know that they were using that because what they were training their actual model on was the internet. So if something exists out there on the internet, it’s likely going to be ingested into one of these LLMs. And that could be copyrighted material or not copyrighted material.
So it’s not truly about just this one case. What it brought up to us and it got us thinking about is, wait, what does this actually mean for companies when they’re using things that their AI agents are going to take action from? I guess is the right word there. They’re going to take action from all of that copyrighted data. And now you’ve got your customer service, your content generation, your HR operations that are now interacting with your customers using pirated data.
Jennifer Owens (03:01) Mm-hmm.
Mm-hmm. It’s a huge red flag. I think we’re not just talking about biased data or inaccurate outputs. We’re talking about data that’s been stolen. And so I’m thinking about all the papers that came out that we’re comparing, like the llama model versus this model versus that model. And part of that performance is based on data that could now be triggering lawsuits.
or fines, or even just a PR firestorm, right? Like you can have some serious reputational loss if this comes out that I trained my chat bot on the Handmaid’s Tale and Kujo and like all this stuff. So I wanna take a minute and zoom out a little bit. So the whole idea behind the crux of this podcast, behind Digital Labor is using AI agents and automation to improve efficiency, to scale, to reduce our costs.
But if the systems that we’re relying on are trained on illegal or ethically gray content, then I think we need to be really honest and open about what it is that we’re building with.
Brad Owens (04:15) So think about it: if you are using AI that's summarizing or generating content that, unknowingly to you, comes from copyrighted work, that could really expose your company to legal risks. And even if you never touched that data set, you could be held liable for using a model that was trained irresponsibly. Think about it like knowingly accepting stolen goods. If you showed up to a meet-and-greet for Facebook Marketplace to buy something and thought, this kind of seems fishy, this seems like too good a deal I'm getting here, and it turned out to be stolen stuff: surprise, you committed a crime. Same sort of thing here, just the AI version.
Jennifer Owens (04:53) So, as we were discussing this podcast concept, we were talking about how if I rob a bank and I leave the money on the corner, taking that money is still a crime. And I think, just to extend your metaphor: if I go on Facebook Marketplace and I see this amazing deal on the Nintendo Switch 2 for like 75 bucks, and I go and purchase it, and it turns out that it is stolen, what happens to the Switch 2 that I just purchased? The police take it, right? I don't get to keep it. So.
I think it’s worth thinking about the source of the data that your AI models are trained on, especially in highly regulated sectors like healthcare, education, finance, law, where that intellectual property and the compliance with those regulations is really tightly, tightly controlled. So like for example, if I’m trying to put a model into a healthcare ecosystem that was trained on copyrighted textbooks or proprietary research, boy, that would make me really nervous. Like it’s making my palms sweat just to even think about it.
Brad Owens (05:52) And this fuels the arms race for all of these companies trying to win this zero-sum model game, right? They're thinking: we're going to build the best model, it's going to be the thing everyone bases all their other potential agents on, it's going to be incredible. But if companies like Meta are gaining an advantage, maybe performance-wise, maybe output-wise, whatever they're actually playing toward, by training on all of this copyrighted data, or we'll just say bad data for the sake of argument, maybe that creates an unfair advantage for them. Startups, or an ethical company that doesn't want to take that route, may end up falling behind, because all these other companies are just saying: well, the content was out there, I was just using it. I'm sorry they didn't password protect their data.
Jennifer Owens (06:37) Mm-hmm.
I mean, they did. It's just that somebody took the protections off. So I think the real kicker, the thing that is really getting me, is that there's no regulation yet. I feel like regulation is coming. We've covered this in previous episodes: we're not seeing a strong federal direction on this in the States. But the EU has the EU AI Act, and the states are starting to build their own legislation, some of which is interesting and some of which is really going to hamper innovation.
But the other flavor that is coming into play here is lawsuits, right? Do we really think that Stephen King is going to sit back and let people steal The Shining? The Shining is a masterpiece. It is a masterpiece, and it should not be used for free and without credit in training these models. So we're going to see additional regulation, but we're also going to see a lot of lawsuits. It's really going to impact how you use these tools.
Brad Owens (07:36) It’s gonna be a good time to be an AI lawyer.
Jennifer Owens (07:38) It would be a great time to be an AI lawyer. AI lawyers call us. We want to talk. Yeah.
Brad Owens (07:42) Yeah, absolutely we do. So, yes, all of this happened, and it's not surprising that it happened. We have these gigantic models that are trained on the internet, and there are a lot of good and bad things out there on the internet.
Jennifer Owens (07:55) My college blog is still out there, guys. You don’t wanna incorporate that.
Brad Owens (07:59) Ooh, $10 to the first person that could find that blog.
Jennifer Owens (08:01) No, it’s not hard to find.
No, do not put that out there.
Brad Owens (08:06) All right. So what should leaders do then? As business leaders, how do we take this thing that's happening over here and understand how we should be adjusting our AI policies, or what we're doing with AI in our business? What it comes down to is that you have to dig deep into the tools you're using. Don't just ask: hey, what can this AI do? Really look at it and ask: what is this AI trained on?
Jennifer Owens (08:35) Mm-hmm.
Brad Owens (08:35) So there’s kind of three questions that you can really ask of whether it’s your technology of how you’re using this. No, was this model trained using licensed or public domain content? That’s the easy one. If we were trying to, create a chat bot, we don’t want that chat bot to start spouting off Romeo and Juliet or something that’s actually in, sorry. Yeah, that’s out. so this actually currently licensed, yeah, no, I think that’s, that’s gone.
Jennifer Owens (08:56) public domain.
The Shakespeare estate is not suing people over that.
Brad Owens (09:05) Two: has the vendor provided you a data lineage? Can they show a provenance statement of where all of this came from? And three, something you can do before you sign on with a potential technology provider: are there known lawsuits already involving that company, that model, or the data set? That's something you can actually look up.
Jennifer Owens (09:27) Yep, it’s true. It’s great due diligence, as tempting as it is to stick your head in the sand and be like, la la, they say it’s going to help me. I don’t want to think about how it was built. The other thing I would add is that building your internal governance will also build these skills to be asking these questions and thinking about these kinds of topics in a way that benefits your business. So you can create an AI review board or assign responsibility to a group of people in compliance or in legal or in other areas.
to vet AI vendors that your teams are using or even thinking about using. I would love if we could govern both before the moment of deployment and then after.
Brad Owens (10:03) And this is a little bit more technical, but if you're developing your own models, just be transparent. Use synthetic data, all completely made up stuff; AI is really good at anticipating what should be in those data sets, though if you're using a synthetic data set, just be careful with it. Or license specific data sets; there are people out there who truly have data sets that you can license to train your model. Or you can participate in open source initiatives where the data sources are very clearly documented, out in the open: here's what we actually used. Start using more responsible data.
Jennifer Owens (10:37) Yeah. And another thing you can do is really push for vendor accountability. In health care, for example, the Coalition for Health AI has these model cards that demonstrate: here's what the model is intended to do, here's the data on which it was trained. You can ask for that kind of statement of how the model was trained, or you can ask for a third party audit. I think we're going to see a big uptick in AI certification; kind of like we see SOC 2 and other security documentation, we're going to see a huge uptick in AI documentation. If your vendor isn't willing to work with you on that, and it doesn't have to be formal, just a discussion, if they're not willing to open even the tiniest crack into their process, that's your signal to walk away.
Brad Owens (11:22) You wouldn’t trust a third party software in your enterprise if they didn’t give you a very clear term of service, right? If they weren’t actually giving you the, the what behind what they’re actually doing. So just treat AI the exact same way. This is just another technology that you’re adding to your enterprise software landscape. It’s, it’s not just the tech that you’re adding though, when it comes to this stuff, you’re adding in liability.
Jennifer Owens (11:47) Yep, every new tool comes with risk, right? And as more AI systems start generating content for your business, whether they’re generating, you know, like legal contracts or, you know, like summaries of things, you need to be confident that your tools aren’t built on a foundation of stolen intellectual property.
Brad Owens (12:01) Let’s wrap all this up then. So this meta story, it’s not just about a big company getting caught, right? Yes, that’s going to make headlines and my gosh, Meta did this thing. But what this is really doing is it’s exposing more, more diligence that we need to do as users of AI, as your organization who is using these foundational models to power the rest of the things going on in your business. It’s just kind of.
getting a gigantic flashlight on this of, man, we might want to pay attention to a little bit more than just, hey, this tool can do this cool stuff.
Jennifer Owens (12:33) Yeah, so our message and our position is very simple. Ethical AI is strategic AI. The faster you get ahead of this, the better positioned you are when regulations and when lawsuits hit and when your customers also start asking really tough questions.
Brad Owens (12:47) Yeah. It’s your moment to be able to lead responsibly. That’s what you want to be known for as an organization. And that’s how we’re trying to help frame this up for you. So your competitors, they might cut corners and they may do some things faster, but in the long run, trust and transparency are going to win this.
Jennifer Owens (13:04) So we’ll be watching in this space to see how this unfolds and sharing updates as new AI governance models and new lawsuits and new disclosures come out. Please do be sure to subscribe wherever you get your podcasts. We’re also on LinkedIn and on YouTube. And you can visit digitallabourlab.com for a full rundown of the content of this episode, as well as a really cool checklist that you can check out.
Brad Owens (13:28) Yeah. So if you found this valuable, please share it with your CTO, your general counsel, your procurement lead who's sourcing all of your AI tech. This affects everyone in that chain. So we'll catch you on the next episode. Thanks so much for watching.
Discover the intriguing paradox of AI at work—it’s meant to lighten our workload but might just be doing the opposite. In this eye-opening discussion from Digital Labor Lab, hosts Jennifer and Brad Owens dive into how AI tools, designed to tackle simple tasks, are leaving human employees with only the most complex and stressful responsibilities. Across industries like healthcare, HR, and customer service, automation takes care of the “low-hanging fruit,” transforming traditional roles and creating new challenges for workers.
As AI and automation evolve, they offer the promise of unprecedented productivity. However, Jennifer and Brad reveal a hidden consequence: increased cognitive loads for employees handling complicated tasks. Explore how this shift impacts mental health and job satisfaction, leading employees to face burnout, increased stress, and decreased performance.
With a focus on practical application, the discussion examines how businesses can better balance AI’s capabilities with human needs. By strategically automating not just easy tasks but also reducing complexity in challenging ones, companies can provide employees with opportunities for rest and growth. Join the conversation and discover how reshaping digital labor strategies can enhance productivity and employee well-being.
Jennifer Owens (00:00)
So, update on that AI tool that we were working on, by the way. I found that it responds to like 85% of the easy emails, which is terrific. Problem is, now I have to deal with only the complicated ones, and I don't like it.
Brad Owens (00:11)
I never thought about it that way. We should talk about that.
Jennifer Owens (00:15)
Yeah.
Welcome to Digital Labor Lab, where we are exploring the future of work one experiment at a time. I’m your host, Jenny Owens.
Brad Owens (00:31)
And I’m Brad Owens and on today’s episode, we’re gonna tackle a question of what happens when AI takes over all the simple work and leaves just humans with the only most stressful, exhausting tasks? Not something I would have anticipated, but a really good thing for us to get into.
Jennifer Owens (00:48)
I want to talk a little bit about this pattern that we’re seeing across industries from health care to HR to customer service, where we’re using these tools to automate the low hanging fruit, right? The easy stuff. I want to think about what that does to our human employees as we’re starting to see these digital agents come in and scoop up some of the workforce. Can we talk about this?
Brad Owens (01:06)
Yeah, so AI and automation were meant to make jobs easier. But in reality, what we're proposing here is that they're actually making them harder and more exhausting. So let's dig into the paradox: when you automate things, why do jobs actually get harder?
Jennifer Owens (01:24)
Yeah, so let’s think about an example here, right? So I recently spent some time on the phone with customer service. Customer service reps used to have a wide range of stuff that they were equipped to handle, right? Everything from the basics, like how do I pay my bill? Or what are your hours? All the way through the really complex, like I changed my name at 2 a.m. on daylight savings time and now my billing is all messed up.
But now we can use chatbots and we can use automated services to handle the easy stuff. And humans are dealing only with the name change in the middle of a daylight savings time type issues. So let’s talk about why does this make work harder?
Brad Owens (02:04)
Yeah. So let’s think about it in our, in our day jobs. So in HR and hiring, when we have AI that might screen applicants, we might have those applicants that are just the easy ones. Like when you have a unskilled potential role and all we’re really looking for is people who have high rates of replies and are interactive during the hiring process. Those are typically people that would make good, you know,
lower level type positions, maybe factory pickers or warehouse workers and things like that. Those are kind of easy for humans to do, but that also makes them easy for AI to do. Now we also have the flip side of that, where we have a highly competitive job, something that is really difficult to fill. Maybe it’s a high level executive or something like that. So in my day job, typically, if I were a recruiter, I’d be able to handle both of those and I would have kind of a good balanced day. But if AI has taken only the easy positions.
And now I just have to focus on those really hard positions. I’m going to be spent by the end of the day.
Jennifer Owens (03:05)
Yeah. So in my day job, I work in health care, at a teaching hospital, right? And this is something we actually think about a lot as we work on artificial intelligence algorithms to assign staffing appropriately to the more complex cases. At a teaching hospital, and it's not just Cleveland Clinic, it's any teaching hospital, let's say you're working with post-surgery care of patients. We want to give the healthiest, quickest-recovery patients to the youngest doctors, right? Your residents, your people who are fresh out of medical school. So they're getting all of the easy cases, and then the difficult, complex patients, the people with multiple chronic conditions, go to the more experienced clinicians. So as you work and gain experience, what you're gaining for all of your hard work is a more complex and more draining workload. That creates an overwhelming cognitive load, and humans, even highly trained doctors, aren't meant to deal only with high stress, high complexity tasks all day for a full 12 hour shift.
Brad Owens (04:04)
So then we’re talking about how AI is going to revolutionize business and make everything easier and take all this stuff off our plate. But what we’re discovering here, and I understand you actually did a lot of research on this too, this may not be the case. It may actually make business harder. And this isn’t something that all of the hype surrounding AI is really talking about right now. So we’re here to give it to you straight. So what then is the business impact then? What happens when work becomes too hard?
Jennifer Owens (04:27)
Yeah.
So the mental model I was working with, coming from a health care background, was nurses and doctors during COVID. All of a sudden, we had a new condition that was raising the complexity of patients, raising the risk that they would have serious long-term effects or even die. And we know in health care that when COVID happened, burnout skyrocketed. And I thought, OK, is that really true across multiple industries and multiple position types, or is it just healthcare during a global pandemic, which is stressful for many other reasons as well? What I found in my research is that it doesn't matter, right? If you're a teacher, a firefighter, a paramedic, a doctor, a nurse, a surgeon: as your cognitive load goes up (and there are nifty tools for measuring cognitive load that we're not going to spend time on), things get worse. And it's not just the complexity of the task. Can you get your work done in a reasonable amount of time? How emotionally difficult is that work? Is it really frustrating? Are you finding yourself butting heads with your coworkers, or is everybody collaborating together? As that cognitive load goes up, work performance across several metrics suffers. Your decision-making ability suffers. Your ability to switch focus between multiple tasks, and I don't know a single job these days that doesn't require some form of multitasking, is something humans aren't great at anyway; we're really just great at telling ourselves we're good at it. But as your cognitive load goes up, your ability to effectively multitask goes down. Your ability to sustain focus goes down. And when you're facing those tough, life-or-death decisions, your decision-making actually crumbles a little bit too. High cognitive load is terrible for generating high quality work.
Brad Owens (06:16)
Yeah. So think about those doctors, those individuals who, you know, get no break, and every single decision they make is potentially life or death. That is not something we want for our workers. So when we think about business as a whole, business needs to think about automation differently. They should think about this AI infusion, what tasks we take off people's plates, the types of things we automate, very, very closely. So what are some of the things we feel businesses should do differently?
Jennifer Owens (06:53)
So I think what’s interesting is we think about digital resources as having linear productivity. If you apply more resources, you apply more compute, you get more product out. If I’ve got an automation that will do one task for every 10 minutes, if I get 10 of those, then I’ll do 10 tasks every 10 minutes. Humans aren’t linear, first and foremost. And secondly, capacity does not scale.
linearly with the complexity of the task, right? We can maybe manage one high complexity task or three or four low complexity tasks. So as we’re thinking about how to incorporate digital labor into our businesses, we need to balance that automation so that you’re not sticking the humans just with the hard stuff. We need to make sure that humans have time built in their day to allow for that restorative mental process, to take off that cognitive load, to make sure that really to make sure that your human beings still have time to take their breaks and talk with their colleagues and do the things that make us human.
I keep thinking about that tweet that everybody likes to discuss in artificial intelligence that I don’t want AI to make music and art while I do laundry and dishes. I want AI to do the laundry and dishes so that I can make music and art. And that’s great. But even a person who is doing music and art all the time still needs those breaks to rest and refresh and bring a different perspective to their work. So we want to make sure that we’re using, we want to kind of balance how you’re using your artificial intelligence and your digital labor, right? You want to make sure that you’re using it
to remove complexity from the complex tasks as well as automating the easy stuff. So if I have like an FTE that is 100 % dedicated to something, can we take 20 % of the complexity out of their highest complexity work and automate maybe their 20 % of the work that’s just so easy that they’re frustrated with doing it. That’s a decent balanced automation platform. I like that approach quite a bit.
Brad Owens (08:41)
Sure. And one thing we keep coming back to, you've mentioned this, is hospital systems using AI in radiology. There is technology that helps doctors process and read scans faster. It's not making decisions, it's not looking at the scan and telling patients their outcomes; it's augmenting parts of the job to make them a little easier. So we're not talking about completely offloading an easy task. What we're doing is using AI to reduce some of the complexity of a task that was typically crazy complicated and making it a little bit easier. We're not leaving these doctors with only the hard decisions; we're allowing them an easier work day on things that used to be complex for them.
One of the problems that I see in business right now is that the ROI and the kind of metrics we're looking at for this AI stuff are productivity-focused metrics. Like, my gosh, they can do all this work, they never take breaks, they work 24/7, and we're talking about offloading all these repetitive tasks so that people in business can work on the harder stuff, the stuff AI can't do. But what we're talking about here is that that's not always fantastic. It may actually lead to a ton of burnout and reduce your business output significantly. So it's something you absolutely need to take into account.
Jennifer Owens (10:13)
So in one of our earlier episodes, I called out the two by two matrix that I use to think about AI use cases. And so it’s just a grid. And maybe we can even pop up a little schematic here. Maybe we might be able to draw that. But on one axis, we have what people are good at and what people are not good at. On the other axis, we have what AI is good at and what AI is not good at. And you really want to focus your work into the two corners of this matrix. You want people doing what people are good at and AI is not.
And you want AI doing what AI is good at and people are not. And then as we’re continuing our discussion, we want to think about, OK, well, if we have people doing only what people are good at, are there ways to use AI to reduce the complexity, to reduce the burnout of that particular workload? And then are there things that AI is doing that people might just enjoy doing? Maybe you’re the kind of person who really gets a kick out of filling out your own expense reports. That’s fine, right? You do you. That’s not my jam.
But if that’s something that provides you a welcome break in your day, you know, from maybe like from the rest of your work, then maybe that’s something that you retain and you don’t use a tool to do that. I think that’s a perfectly reasonable balance to strike.
Brad Owens (11:23)
So let’s wrap this all up then with some key takeaways then. From my perspective, I feel like AI at this point isn’t just changing jobs if we look at it that way. It’s making them more mentally and emotionally draining if we don’t put in check what we’re actually automating. And businesses that look at this automation just as a, we’re gonna increase productivity and allow people to do what people are good at. I mean, they’re gonna face burnout. They’re gonna face high turnover.
They’re going to have declining productivity from all their actual workers. They’re setting themselves up for a big, big problem.
So what I feel like businesses need to understand is they have to rethink how AI is going to reshape their work before you get into the period of burnout, high turnover, declining productivity. So put some effort into.
Jennifer Owens (12:12)
If you like today’s discussion, please subscribe, share, and tell us. I want to know, has automation made your job easier or harder? How do you think businesses should handle this shift? If you’re a business owner, if you’re thinking about adding AI into your labor pool, how are you thinking about what that’s going to do to your human employees, to their workload, and to their job satisfaction? If AI is reshaping your job in any other way, we want to hear your story. Please drop us a comment. Shoot us a message. We would love to hear from you.
Brad Owens (12:40)
If you like this kind of content, there is plenty more at digitallabourlab.com, where we explore the future of work one experiment at a time. You can follow us on your social media platform of choice at Digital Labor Lab. If you've got questions you want us to tackle, please email us at hello@digitallabourlab.com. Until next time, I'm Brad Owens. We'll see you next week.
Navigate the complex landscape of AI regulations effortlessly with this insightful episode of Digital Labor Lab. Hosts Jenny and Brad Owens delve into the intricacies of AI laws in the U.S. and EU, shedding light on critical legal frameworks that are shaping artificial intelligence utilization worldwide. From the GDPR to the EU AI Act, understand how these regulations influence AI deployment across different sectors, including healthcare and HR.
Jenny shares her in-depth research on European legislative requirements, such as the necessity for server location within the European Economic Area, and highlights how the EU AI Act categorizes AI applications by risk. Meanwhile, Brad emphasizes the patchwork of state-led regulations in the U.S., cautioning business owners about the ambiguities and urging them to stay informed and adaptable in the ever-evolving legal environment.
Engage with this professional yet conversational guide that equips business owners and AI enthusiasts with practical advice on navigating regulatory challenges. Discover the importance of documenting AI processes and ethical principles as a means to safeguard your business. Tune in to gain actionable insights, and take advantage of the hosts’ offer of a guide that aids in documenting AI workflows. Email hello@digitallaborlab.com to access tools that can help ensure your compliance and strategic use of AI in your operations.
Jenny, you know, in doing this podcast, I realized that we’re in the US, but there’s likely a lot of difference in the use of AI around the world.
Jennifer Owens (00:07)
Man, AI use is a little bit different, but you know what’s really different is the regulations. Let’s talk about that.
Hello, and welcome to Digital Labor Lab, where we’re exploring the future of work one experiment at a time. I’m your host, Jenny Owens.
Brad Owens (00:28)
And I’m Brad Owens, and we’re going to continue our theme of guardrails around AI use in your business. Last week we talked about the ethical concerns with AI use, but those ethical concerns lead into regulatory issues and some different types of rules that are in place, things you just kind of need to know about if you’re going to use AI in your business. We wanted to bring that to you so that you understand how to use AI safely. Now I need to give a blanket statement here.
This in no way qualifies as legal advice. That’s not us. We may be able to hook you up with lawyers who specialize in this, but we just wanted to give you an overview of the topics, things you may want to consider. And Jenny went deep into organizing thoughts around all of these different types of regulations that are out there. Jenny, just give us an overview. What’d you find?
Jennifer Owens (01:01)
Not a lawyer. Yeah.
Sure, I went deep. The amount of stuff that I cut from the script for this episode is like this long. But briefly, we really focused our efforts here on legislation in the EU and in the United States. So for the European Union, my research was primarily focused on the General Data Protection Regulation, or GDPR, and the EU AI Act. With GDPR, the first thing that often affects AI and digital labor initiatives in this space is that the servers must be physically located
in the European Economic Area. So please do not think that you can start a global digital labor company from the United States and sell your services in the EU without having servers located there too. The GDPR principles include your usual suspects like data minimization, transparency and explainability, and data protection. A lot of this is going to sound really familiar from our episode last week. They’ve also got some interesting restrictions on what you can do with the data.
To an American, this sounds a little bit limiting, especially to an American who’s used to the health care space. So when you go to a hospital, when you receive medical care, you sign a release that says, hey, I’m giving you access to my medical records for these purposes: treatment, payment, and operations. And a lot of times, operations will include quality improvement, which covers a lot of secondary uses of your data. In the EU, not so much. Secondary uses, unless you’ve explicitly consented to them, are prohibited. Fascinating.
Then we have the EU AI Act, which is really aimed at preventing algorithmic discrimination. And I could talk for hours about this and probably will at some point, but the thing that I want to call out is that the EU AI Act really segments artificial intelligence into different tranches of risk. They call out high risk areas that pose significant risks to health, safety, or your fundamental rights through the use of algorithms such as automated hiring processes,
any sort of triage for healthcare, anything that poses a risk to your health, your safety, or your fundamental rights. Which means, Brad, that both you and I are in this highly regulated bucket. So.
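For listeners who want to map their own use cases against the Act’s published four-tier structure (unacceptable, high, limited, and minimal risk), a rough sketch follows. The keyword lists are our own simplification for illustration, not legal advice, the same caveat the hosts give.

```python
# Rough, illustrative triage against the EU AI Act's four published risk
# tiers: unacceptable, high, limited, minimal. The keyword lists below are
# a simplification for illustration -- not legal advice.

RISK_TIERS = {
    "unacceptable": ["social scoring", "manipulative techniques"],
    "high": ["hiring", "credit scoring", "healthcare triage", "education access"],
    "limited": ["chatbot", "content generation"],  # transparency duties apply
    "minimal": ["spam filter", "inventory forecast"],
}

def triage(use_case: str) -> str:
    """Return the first tier whose keywords match the use-case description."""
    text = use_case.lower()
    for tier, keywords in RISK_TIERS.items():
        if any(keyword in text for keyword in keywords):
            return tier
    return "unclassified -- ask a lawyer"

print(triage("automated hiring screen"))       # high
print(triage("marketing content generation"))  # limited
```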
Brad Owens (03:25)
Yeah, so
great. So everything that we’re trying to do with our work is going to be incredibly regulated outside of the US for right now. Great.
Jennifer Owens (03:33)
I mean, to be fair, it’s also pretty regulated inside the US. And I have a theory that this actually puts us at an advantage rather than a disadvantage. Do you want to hear it?
Brad Owens (03:41)
I can understand where you’re trying to go with this one. So yes, I feel like we need to hear why this is an advantage for us here in the U.S.
Jennifer Owens (03:49)
So when you’re in a highly regulated industry like this, when you’ve got a legislative framework for how you have to deal with data and how you have to deal with customers, patients, applicants, whoever, you already have a framework for wrestling with artificial intelligence. You just have to funnel it through your existing regulatory framework, right? So you already have rules, Brad, about how you’re able to use the information that applicants provide to you. You already have rules about what you are and are not allowed to use to discriminate when you’re hiring.
I have rules about what I can do with patient data. All I have to do is add artificial intelligence flavoring to that as I’m continuing to think about how are we going to incorporate AI and digital labor into healthcare or into HR. So, yes. Yeah.
Brad Owens (04:32)
So that’s a lot. We
understand all the different things that they’re doing with the EU AI Act and the GDPR and everything else they’ve got over there. But something happened recently in Paris, where there was an attempt to get a worldwide version of this, essentially an agreement on the safe use of AI. And a couple of countries came away without signing onto that. One of those was the U.S. So what’s going on in the U.S. when it comes to AI?
Jennifer Owens (05:02)
Yeah, so JD Vance was our representative at the AI summit in Paris. And he mentioned that he wasn’t in favor of the treaty that was being signed, because he felt that it posed barriers to innovation in AI in general and to American innovation in AI in particular. So this is really interesting, because at the federal level, we’ve been kind of wavering
on regulation. Biden-era executive orders called for collaboration, guidance, and a lot of training to really minimize harm. These have been modified by an executive order from President Trump ordering a review of all of the policies and guidance under that previous executive order, to make sure none of it poses a barrier to America’s global AI dominance, which is really strong and aggressive language, but it’s really interesting. So we seem to be really wrestling at a federal level
with what we want to do about artificial intelligence, whether we want to remove barriers to innovation or whether we think we should put in guardrails for what people can and cannot do safely. What makes that interesting, though, is that America is, of course, a federation of states. Brad, talk to me about state regulation.
Brad Owens (06:10)
So in the states, we’re kind of left with a patchwork of a whole bunch of stuff. At the federal level, there was some Biden-era guidance on what AI should be able to do, what it shouldn’t do, what we should watch out for, some of the risks that might be involved. And that may or may not stand, depending on what the Trump administration decides to do.
But the administrative things aside, because there has been no federal mandate in the U.S. like the EU AI Act, it’s being left to the states to do something about it. So we have states like Colorado, California, and New Hampshire, this patchwork of states, coming up with their own ideas of what they feel AI should and shouldn’t be able to do and putting those rules in place. But even those rules come with
very vague definitions, and it’s coming down to lower courts to start deciding, hey, what does it actually mean? What can and can’t you do? So this is leaving a lot of business owners very confused. I mean, Jenny, if you were a business owner right now thinking, I want to run my business with majority AI, how would you feel?
Jennifer Owens (07:19)
I mean, that’s what I’m thinking about as we’re talking. I have talked a lot about this. You have talked a lot about this. And the lack of clear guidance really does make it a little confusing and a little intimidating. I understand, ethically, the principles that I’m supposed to be following. But how can I make sure that I have the correct safeguards in place when I’m using artificial intelligence in my day job? Actually, let me take a step back.
Brad, you use artificial intelligence and digital labor. You use agents in your day job. Talk to me a little bit about that. What does that look like for you right now today?
Brad Owens (07:53)
Right now it’s really all assistive. It’s all, hey, I need help doing this thing, help me do this thing. Which, if we go back to what the EU AI Act was talking about and its risk levels, is very low risk, likely not going to cause any problems, not what they would refer to as a risky AI endeavor. It’s not going to, right now, because everything I’m doing is more assistive. There’s no
decision-making process. There’s no part of it that I’m going to take as gospel and say, okay, well, here’s what we’re going to do in our business. I mean, it comes down to how much of my business is actually going to be affected, how much of my decision-making is going to be affected by what this AI is doing. And per the EU’s rules, there’s not a whole lot of risk to my business in doing it.
Jennifer Owens (08:38)
Mm-hmm.
I mean, I would say the same. Obviously, artificial intelligence is everywhere in health care, even before it was called artificial intelligence. You’ve got noise reduction in your CTs. You’ve got different calculators for calculating your risk post-surgery, et cetera. But a lot of that is there as information, ways to chew up all of that medical data and surface it in different ways, while still leaving a clinician’s medical judgment intact. Doctors and nurses
right now are not relying on AI to make decisions for them. They’re relying on AI to surface the correct information at the correct time and give them what they need to exercise their medical judgment. I think a crucial part of determining how you want to proceed with digital labor and AI in your business is: is it assistive or is it autonomous? Is it doing things on its own?
Brad Owens (09:30)
So we’ve been painting this whole picture, and all of the media is painting this picture, and the movies, and all the tech leaders out there. They want to paint this picture of, your business is going to be run by AI and it’s going to be making all these decisions. But sorry, y’all, that’s kind of BS. There are a whole lot of rules in place today. Yeah, there are a whole lot of rules in place that will make that not possible. It’s just not going to happen.
Jennifer Owens (09:46)
Today. Today it is. Yeah.
Which is really interesting to me too, because I wonder and I worry a little bit about the future of artificial intelligence. If our legislation is so geared towards keeping humans in the loop and towards transparency and explainability, which are ethical principles that I completely agree with, but they’re not necessarily aligned with what the tech does. And I think this is another topic for another podcast, but I do wonder if we’re legislating ourselves into a small subset of what AI can be.
Anyway, another subject for another day. But I think what I really want to end our show with, Brad, is if you’re a business owner who wants to use AI and digital labor today, what does all this regulation mean for you? What do you need to be thinking about? And I want to be crystal clear here. We are not lawyers, but we are people who want to use automated labor. We are trying to automate as much of this podcast as we can.
So Brad, as the person who’s been trying to get this podcast automated, what does this mean for you? What are you taking away from this?
Brad Owens (10:59)
As a business owner, it’s up to me to at least make a concerted effort to understand the laws and regulations that are going to govern what I’m able to do. So at the very least, I need to do some research on my own to try and understand if there’s anything I might step in by doing business this way.
Jennifer Owens (11:20)
Yeah, and from my perspective, I think it’s really important to document our principles when using AI. When we were talking about an automation flow for the podcast, we talked about the items that were important to us. So not only document your principles, but also document the process: what the process is intended to do, what the intended output is, and any risk mitigation you’ve put in place. That’s a start.
If you’re interested, we’ve put together a reusable outline to help you start on your documentation journey as you’re thinking about principles, processes, and governance. It’s a really helpful thought exercise and a worksheet so that you can start coming up with your own business’s approach to using AI and digital labor.
Drop us a line at hello@digitallaborlab.com. We’d be happy to share it with you and to spark that conversation for you and the other people working in your business.
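As a sketch of what that worksheet might boil down to in code, here is one possible record structure using exactly the fields Jenny lists. The field names and example values are ours, not the hosts’ actual outline.

```python
# A sketch of the documentation record described above: principles,
# the process, what it's intended to do, its intended output, and the
# risk mitigations in place. Field names and example values are invented.

from dataclasses import dataclass, field

@dataclass
class AIProcessRecord:
    name: str
    intended_purpose: str
    intended_output: str
    principles: list[str] = field(default_factory=list)
    risk_mitigations: list[str] = field(default_factory=list)

podcast_editing = AIProcessRecord(
    name="podcast rough-cut editing",
    intended_purpose="remove filler words and cut-word segments",
    intended_output="a draft edit a human reviews before publishing",
    principles=["human editorial control", "transparency about AI use"],
    risk_mitigations=["every cut reviewed before export"],
)

print(podcast_editing)
```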
Brad Owens (12:14)
So if I were to name my key takeaway, it’s: just pay attention, just document what you’re doing. Have an idea of it. Because even if you look at the very restrictive parts of the EU AI Act and of what has come out of the U.S., it really comes down to: can you show your work? Like a high school student in, I don’t know, calculus, can you show your work of how you came to that outcome?
That’s really what it’s going to come down to. So it’s all about documentation at this point. Someone’s going to say, hey, how did you do that, that was discriminatory, whatever, and you can be like, look, here’s the entire process laid out for you and how we came to this decision. That’s what it’s really going to come down to. So yes, take us up on this offer. Email us at hello@digitallaborlab.com and we’ll send you the guide that’ll help you document those workflows, so that you have a solid understanding of how you’re using these things and why.
Jennifer Owens (13:06)
Until next time, I’m Jenny Owens. You can find us at digitallaborlab.com, or shoot us an email at hello@digitallaborlab.com. Please feel free to like, subscribe, do whatever on whichever podcasting platform you’re getting your clearly superior podcasts from. Catch you next week.
As businesses rapidly adopt AI and automation, the ethical implications of digital labor are becoming impossible to ignore. In this episode of Digital Labor Lab, Jennifer and Brad Owens explore the key guardrails that businesses need to consider when integrating AI into their workflows.
From regulatory challenges to the risks of algorithmic bias, this discussion highlights why ethical oversight is essential to prevent unintended consequences. With insights from industry leaders and real-world examples, this conversation offers a practical framework for businesses looking to balance innovation with responsibility.
Watch the full episode below to learn how companies can navigate the future of digital labor without sacrificing ethics or quality.
Hey Brad, I think we really need to talk about the ethics of digital labor. Do you want to talk about that in our next episode?
Brad Owens (00:05)
really?
Hey everyone and welcome to the Digital Labor Lab where we explore the future of work one experiment at a time. I’m Brad Owens.
Jennifer Owens (00:22)
I’m Jenny Owens. And in today’s episode, we’re going to talk about some guardrails for using artificial intelligence and digital labor in your business.
Brad Owens (00:30)
It doesn’t sound like a lot of fun.
Jennifer Owens (00:33)
So I think it’s really important to kind of lampshade the tension that we had when we were setting up this episode right away. Because as we were crystallizing the idea for a digital labor lab, I really wanted to talk about ethics. I wanted to talk about regulations. I wanted to talk about basically the boundaries of the box that are going to kind of determine what you can and cannot and should and should not do. And you had a different opinion.
Brad Owens (00:56)
I didn’t want to do that. Like, the whole thing
about digital labor is that it’s fun and we’re at the experimental phase and look what this thing can do, and look what that thing can do. But because everyone in the industry is in the experimental phase, it’s also leading to a whole lot of conversations where people are just starting to be like, well, hang on, holy crap. This is fun and all, but this is going to mean major changes to business. What should we actually be paying attention to? What should we actually do? So.
I think it’s important. I think we do need to get into, hey, this is all cool, but you’re about to make a significant change in your business.
Jennifer Owens (01:35)
And so really two things kind of sparked my desire to bring the ethics and regulatory discussions kind of upfront. One is that if you are a business leader looking to incorporate digital labor into your business, you’re gonna face a lot of pushback and a lot of that pushback is gonna be armed with these same concerns. So it’s best to get it out in the light and discuss it upfront. And secondly, we are in a great age of experimentation with artificial intelligence and digital labor. And just like when you’re experimenting in the lab,
You have personal protective equipment. You have a list of lab safety rules. You have an MSDS for every chemical. Like, please don’t rub the ethidium bromide on your reproductive organs. So we need the same kind of very basic safety concerns, or at least safety discussions, around the use of digital labor. And I know it’s no fun. I know I feel like I’m telling you to eat your broccoli before you can have dessert, but we’re gonna do it, because broccoli, when properly prepared, can be delicious and fun for your brain.
So given that I’ve just told you that I’m gonna feed you a bunch of brain broccoli, here’s what we’re gonna discuss in this episode and then in future episodes. We’re gonna talk about three guardrails for using artificial intelligence and digital labor in your business. In this episode, we’re gonna talk about ethics and we’re not gonna get into a deep philosophical dive. We’re gonna talk very basically about how you can think about what you should and shouldn’t do with automation and with agentic labor.
In the next episode, we’re gonna talk about regulations. So that’s literally the laws that you’re going to be governed by and how you’re gonna decide, okay, am I going to do things this way or am I going to risk a fine and do things this way? And then in our last episode, we’re gonna talk about the ways in which digital labor and artificial intelligence when improperly used can actually result in kind of a downward spiral of your product and your business and ways to avoid that. So three guardrails in this episode, we’re gonna take our time. We’re gonna really talk about ethics and about…
the right and wrong ways in which we can use digital labor. And what kicked this off and what really rocketed this up to the top of my priority list to cover on our podcast was these comments that happened recently. Brad, what was the forum where the billionaires were talking about automating jobs away?
Brad Owens (03:38)
So there have been two conferences that happened, not so much back to back, but in the news cycle back to back. The first one happened in Davos, at the World Economic Forum, I think it’s called. Yep. There we had Marc Benioff, and we had a few others from Workday and all these other companies coming out with what they’re calling their digital labor platforms. They’re all talking about, hey, we’re going to be the last generation of leaders to lead only humans. That was the big one that took me by surprise.
Jennifer Owens (03:50)
I think, yeah.
Brad Owens (04:08)
man, you’re completely right, there are going to be digital workers in the workforce. But then fast forward about a month later, just this past week, you had an AI summit happen in Paris, and we had a bunch of different news stories come out about what they were talking about and why this thing was going on. One particular news story had the headline that caught most everyone’s attention: that was the meeting where the billionaires were talking about automating all the jobs away.
It caught everyone’s attention, and rightfully so, because yeah, that’s scary. And I agree, the potential is there that the technology would be able to automate a high level of jobs. We’ll just say a large percentage of the labor force right now. Should you? That’s the question.
Jennifer Owens (05:03)
So I want to lightly push back on that a little bit, even before we get to the should you point, because over the past week, we’ve been working on ways to automate the production of this podcast. We’ve been digging into, you know, what of the editing can we outsource to artificial intelligence? What of our social media can we outsource to artificial intelligence?
Although all of the technical pieces are there, there are still some bits missing that are keeping us from being able to fully automate this podcast. So we cannot just sit here in our chairs, talk into our cameras, hit a button, and then walk away and have our fully edited podcast corrected, exported, uploaded to YouTube, to LinkedIn, to Spotify, to all the places where our podcast is available.
We’re still exercising a degree of human editorial control because A, some of the AI is not quite ready for prime time, right? We’ve discovered that our automatic editing gets really excited when we use our cut words because we get really excited when we use our cut words. So there’s a little bit of training that’s needed there, but also that we don’t have the kind of API access into the social media platforms.
that would allow us to completely automate that workflow. So while I do agree with you that a lot of the pieces are there for the replacement of many of our workday tasks, I don’t know that we’re really at a point where we can say on a one-to-one basis, like this FTE is going to be replaced by this particular agent who is going to do all of their job descriptions. I think we are going to see more blending where this FTE…
is now 80% agent and 20% human, and the other 80% of that human’s time is spent doing other things. So we might see a dilution of the total head count, the total human head count. We’re gonna have to be very careful with our language about this. We might see a dilution of our total human head count, but I do think that a one-to-one replacement of jobs is probably not in the future for at least, and this is gonna age like milk, five to ten years.
Brad Owens (06:53)
I’d go even faster.
Jennifer Owens (06:54)
I mean, I’m just saying, I’m not afraid to be wrong, okay? I’m not afraid. So I think here’s the really interesting part, right? We’ve said that the technology is there to replace the jobs, and people are nervous about it. I read this article recently; it’s actually from 2019,
from Daron Acemoglu and Pascual Restrepo. They were continuing their discussion about robotics that we covered in our episode on what digital labor is. They were talking about the right and wrong kinds of artificial intelligence and their relation to the labor market. And this is what I thought was really interesting, and I think we’re gonna touch on this in our third episode in this series, on enshittification and that downward spiral where you do something that’s kind of crappy, and then you get medium results, and then you do more crappy stuff. And that’s it. That’s the whole episode.
That’s a lie. Yeah, Brad didn’t believe me when I said that I wanted to cover enshittification, and he’s like, what are you talking about? And I’m like, well, it’s an actual word. It’s a fairly well-known paradigm in the tech space, right? Yes. So going back to the article from 2019 that I was reading: the authors argue that in an age of automation, labor’s standing deteriorates and workers are badly affected if the new technologies are just kind of okay.
Brad Owens (07:43)
And yes everyone, that is an actual word.
It’s an actual word.
Today I learned.
Jennifer Owens (08:12)
Right? If they’re doing like a C-minus-student job where you could have a person doing an A-plus-level job, it’s a pretty easy scenario to envision, right? I’m a business leader. I can automate this job for, you know, 0.4x what I’m paying this person, and I get roughly 60% of what they were doing. That’s fine. But that also kind of starts
that downward spiral, not only for your product and your reputation, but also for labor, right? The demand for skilled labor, for A-plus labor, is no longer there. The demand is for C-minus labor, and the robots can do C-minus labor real easy. So I think this is interesting when we return to our discussion about what you should do and what the correct uses of artificial intelligence and digital labor are. If you’re using digital labor to replace a human product that is at this level
with an automated product that is at this level, just know that this is gonna drive your labor market downward, it’s gonna depress wages, and it’s gonna make people bummed out. That’s the technical term.
Brad Owens (09:17)
Well, can I put this
into terms that at least resonate with me?
Jennifer Owens (09:24)
Absolutely, because I can tell I’m up on my soapbox, I’m giving my TED talk, and I was just thinking like, Jenny, you got to shut up and let Brad talk here.
Brad Owens (09:30)
So I’m really excited that you’re into this the way that you are, because I feel like people need to pay attention to this. And when I think about how to separate the hype, which, let’s be honest, everyone, right now all the AI and digital labor and agentic AI and all this stuff is still mostly hype. Yes, there are companies that are using bits and pieces of this for what we had chatbots doing a long time ago. There’s not
a ton of thinking here. This is not superintelligence. This is just, hey, if someone says this, look at this stuff and present where it matches closely enough. It’s still just a math game. We’re not truly automating these things away. We’re helping, and we’re starting to deflect things like customer cases, but we’re truly not replacing yet with what I would call intelligent labor. So we’ll get to that, but I want to
bring this into something I experience almost all the time when I’m using AI, things like ChatGPT, for instance, when I’m going to, let’s say, use it for a social media post. I have spent tons of time training a custom GPT to produce what most closely reflects my tone of voice, the way that I write. I’ve given it tons of content. I’ve given it tons of videos that I’ve recorded, lots of different things it can learn from, to
try and get as close as possible to my tone of voice, what I typically say, the words I typically use, my pace, my cadence, all that kind of stuff. And while it’s really good, and it works as a party trick to show people, hey, look how closely this can model what I’m doing, it’s still not exactly the way that I speak and deliver. It won’t truly reflect exactly who I am, but it’s close enough.
So then what we’re talking about with this downward spiral is: if I were to accept 90% of me, and then we go to continually replicate that, eventually we have only 90% of 90% of a Brad. And if we add even more layers on top of that, each getting 90% of where the last one was, we end up with only about 80% of what Brad was. And you can see this is a downward spiral. If we just focus on more of this automation-type stuff,
taking things off of people’s plates and replacing them completely, and that’s a key point here, not just augmenting but replacing them completely with a 90% version, the math doesn’t work out. You will eventually end up with a garbage product.
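The arithmetic behind that spiral is plain compounding: 90% fidelity of 90% fidelity is 81%, then 73%, and so on. A tiny illustration:

```python
# Compounding fidelity loss: each generation keeps 90% of the last.
fidelity = 1.0
for generation in range(1, 6):
    fidelity *= 0.9
    print(f"generation {generation}: {fidelity:.0%} of the original Brad")
# Prints 90%, 81%, 73%, 66%, 59% -- the garbage-product spiral in five steps.
```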
Jennifer Owens (11:59)
And I think it’s helpful, too, to bring this back to the ethical question, right? So if we’re replacing you with a 90% version of Brad that is a giant katamari of GPT and your YouTube videos and whatnot, I think it’s interesting to think about what that represents and what learning opportunities that 90% version of Brad is missing out on.
And so I want to spend some time thinking, too, as we’re talking about replacing human beings: I think you have an ethical obligation to consider, if we’re swapping out humans for automation, what biases we are introducing. What are these new digital labor resources missing? Or what might they have that a human being does not?
So, you mentioned a 90% version of Brad. I think it’s also important to say that at this point, that 90% version of Brad is kind of crystallized, stuck right where you were. You’re not going to have your wife talk to you about ethics and maybe get a new idea in your head; that version stays right where it is. I think your perspective here is really interesting, and I’d love to get your take on this. When we talk about artificial intelligence and digital labor, particularly in the HR space, I wanna hear your thoughts on algorithmic bias. I wanna hear your thoughts on
how you handle that. I mean, we can’t pretend that digital labor just springs fully formed from the head of Sam Altman, right? It’s trained on real data. It’s built by real people. And so it has the opportunity to crystallize bias and other things that are already inherent in the system, and then codify them. I wanna hear your thoughts on this.
Brad Owens (13:39)
So when we start thinking about the right kind of AI, we truly cannot think about a single source of truth for AI. Let’s think about running this as a business. If we think about a single-source-of-truth kind of model, where this one model will run our business, hey, that’s a CEO.
So let’s think about what it takes to run a highly profitable, successful business. It typically takes a lot of different departments. And think about where human resources comes into this. They’re typically seen as the Tobys, the watchdogs over the rest of the business. They’re there to protect it. They’re there to help identify biases, to help identify things that are going to stand in the way of the business succeeding, or getting it in a whole lot of trouble, all of those guardrail-type things.
So when we think about automating these jobs away and potentially introducing biases, or crystallizing a thought process that’s not aligned with how we want our business to actually operate, we can’t just think of it as this single model, this single person. That’s a CEO. We need to think of it as an entire business, which has departments. So the way I like to think about putting ethics guardrails in place around potential biases is:
don’t use a single source. There’s always gotta be another source acting as a watchdog that everything has to run through once it’s completed its process, to say, wait, what biases might actually be in this? And if there are, whoop, try again.
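A minimal sketch of that no-single-source pattern follows. The two inner functions are hypothetical stand-ins for whatever producer and watchdog models you actually run; the shape of the loop is the point.

```python
# Sketch of the watchdog pattern: a second, independent reviewer checks
# every completed output before it ships. `generate` and `audit_for_bias`
# are hypothetical placeholders for your real models or services.

def generate(task: str) -> str:
    return f"draft output for: {task}"   # placeholder producer

def audit_for_bias(output: str) -> list[str]:
    return []                            # placeholder watchdog; returns findings

def run_with_watchdog(task: str, max_retries: int = 3) -> str:
    for attempt in range(max_retries):
        output = generate(task)
        findings = audit_for_bias(output)
        if not findings:
            return output                          # passed review
        task = f"{task} (avoid: {findings})"       # whoop, try again
    raise RuntimeError("escalate to a human reviewer")

print(run_with_watchdog("summarize applicant pool"))
```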
Jennifer Owens (15:08)
So I think you’ve raised a really interesting point about watchdogs and about guardrails. Because as I was thinking about this: how do we hold our digital labor resources to the same HR standards to which we would hold a human being? We can’t mandate that human beings not hold attitudes that we find objectionable, but we can manage their work and we can manage their relationships with their coworkers. So really, truly, and we could spend a long time down the ethics rabbit hole, I’m not a philosopher.
I just really like The Good Place. And so I feel like The Good Place’s mandate is that, you know, we don’t have to ask things to be perfect. We do have to ask them to get better, or to at least try to get better. And I think that’s really the crucial thing here. As you’re managing digital labor, do you have mechanisms set up to identify when things are performing poorly, or when you might be serving a particular group of people or a particular group of needs differently than the rest of your customer base?
Are you able to surface that? Are you able to respond to it in a meaningful way? Do you have the mechanisms in place to get better? I will spare you my long discussion about consequentialism. I will spare you my other thoughts on what it means for digital labor to be moral when it is not mortal. This is a whole other conversation for at least three glasses of wine. But I do think that if we boil all of our ethics thoughts down,
to just the one thing, one takeaway. If you’re looking at bringing digital labor into your business, do you have mechanisms set up to make sure that your digital labor is performing ethically
according to your standards? Are you able to see that? And then are you able to manage to that? That’s the crucial thing. Just like you would have HR for human beings, you need to have HR for your digital labor resources.
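One concrete way to “manage to that” is to track outcome quality by customer segment and flag gaps automatically. A toy sketch, with made-up segments and scores:

```python
# Sketch: surface whether the system serves one group worse than another.
# The segments and quality scores are invented for illustration.

from statistics import mean

outcomes = {
    "segment_a": [0.92, 0.88, 0.95],
    "segment_b": [0.71, 0.65, 0.70],   # this group is being served worse
}

overall = mean(score for scores in outcomes.values() for score in scores)
for segment, scores in outcomes.items():
    gap = mean(scores) - overall
    flag = "  <-- investigate" if gap < -0.05 else ""
    print(f"{segment}: {mean(scores):.2f} (gap {gap:+.2f}){flag}")
```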
Brad Owens (16:54)
This comes back to kind of why I thought we needed to start this podcast in the first place. There will need to be a digital labor leader in your business whose sole responsibility is to start thinking about these things, to start understanding: what are we automating? What are the potential risks involved? How do we need to set up guardrails? How do we need to think about this digital labor and how it’s going to affect our business? And that’s really the basis for this entire podcast. This can’t just be something where you think, oh, we’re going to automate this thing away.
Yes, it will be a whole lot of fun for the department to stand around a computer and show them, look how close this got, it almost automated this thing away. But right now it’s still just hype, and you’ve really got to start thinking about how you’re actually going to introduce this to your business. And I appreciate you bringing this up, Jenny, and letting it be a main topic for us to cover early in the episodes of this podcast, because I think it’s going to be really, really important.
Jennifer Owens (17:48)
I couldn’t agree more, especially given our two day jobs, right? You work in HR software, I work in healthcare. These are two really highly regulated areas that have a high ethical standard to uphold. And I think it’s really helpful to talk about it upfront.
Brad Owens (18:01)
So if you like this kind of content, there is plenty more wherever you’re watching or listening to this. Click that subscribe button. We will be back with plenty of other episodes, or you can catch us at digitallaborlab.com. That is always going to be the home for this and many more pieces of content.
Jennifer Owens (18:16)
Until next time, I’m Jenny Owens. Thank you for listening to Digital Labor Lab.
Are AI agents coming for your job? In this episode, Brad Owens and Jennifer Owens explore the history of automation, from the Industrial Revolution to the rise of digital labor and AI. What makes today’s AI revolution different? And how can businesses and workers adapt?
Key topics discussed:
The history of automation and labor shifts
AI’s role in today’s workforce
The difference between assistive AI and fully autonomous AI
Real-world examples of AI in business
The future of work and digital labor
AI is a tool—just like past technological advances—but how we integrate it into the workforce will define the future of jobs.
Brad Owens (00:00) Oh my god, Jenny, everyone’s gonna lose their jobs to AI agents?
Jennifer Owens (00:04) People have been worried that they’re gonna lose their jobs to automation since the industrial revolution. What is different about today?
Hi, everybody, and welcome to the Digital Labor Lab, where we are exploring the future of work one experiment at a time. I’m your host, Jenny Owens.
Brad Owens (00:27) And I’m Brad Owens and on today’s episode, we’re going to talk about the history of the future of work, kind of a back to the future kind of thing. Mostly what we need to dig into here is why is this such a big deal now? Why is everyone talking about this thing? But haven’t these sort of shifts happened in the past before? So Jenny, take us all the way back, back in time. Where do you feel like this sort of shift has already happened before?
Jennifer Owens (00:57) Since the dawn of time, human beings have labored in local agricultural-based units, you know,
Brad Owens (00:58) You
Jennifer Owens (01:04) in a small community, family members, multi-generational. And then the Industrial Revolution happened, right? So a couple of things that I want to call out. First of all, our discussion here is pretty specific to the labor market in the United States. So sorry, European listeners, I promise we’re going to get to you. But today we’re going to talk about close to home stuff. So the United States started off as a primarily agricultural-based
economy heavily dependent actually on unpaid labor in the form of slavery. The industrial revolution happened, right? We saw a vast migration of people out of the farms and into the cities where we started to see some condensed places to perform labor and a real switch from subsistence farming to wage-based labor, which is primarily the lens that we used when we’re thinking about digital labor, although we’re gonna talk about that. So the industrial revolution happened.
Everybody got really excited about like cotton and you know, like fabrics, stuff you can make in factories. And then I think the next real revolution was when we started to see some automation and robotics in our manufacturing. So I’m thinking, please, yes.
Brad Owens (02:07) Hang on, before we move off of this, do you mind if I dig into it for just a second?
So what was it about the industrial revolution that really started making work different? What do you feel like we really had that we didn’t have before?
Jennifer Owens (02:23) I think that the main change for me is scale. So instead of an artisan working and producing, for example, articles of clothing in a cottage industry, right, so you might have a few different farms and artisans working, somebody might be tending the sheep, somebody else might be spinning the wool, somebody else might be fabricating stuff from that wool.
Instead, we’re seeing massive increases in scale. So we’re buying sheep at scale. We’re processing wool at scale and we’re producing clothing for purchase, not on a bespoke like, I know my kid’s going to outgrow their sweater, so I got to make a new one. But we’re really producing things for purchase also at scale. That’s okay. Take me, take me on a journey.
Brad Owens (03:03) So I’m going to lead you down a path here.
So what made that scale possible?
Jennifer Owens (03:14) I we’ve kind of brought ourselves back into a loop, right? Because what was driving the industrialization was the ability to produce goods at scale. And I mean, because of factories, right? Like we’ve mechanized and automated part of that labor. So to go back to my example, right? Of the wool and the sweater, we’re no longer beholden to manual knitting, right? We’ve got machines that are capable of knitting and they’re capable of knitting much finer and different cloth than you can produce.
Brad Owens (03:20) Why? Why do we have that ability?
Jennifer Owens (03:44) So not only have we scaled up the production of a previously needed good, but we’ve also expanded the selection of goods that are available.
Brad Owens (03:52) So you’re saying is, we had access to a tool that not only helped us do our labor better, but was far more efficient at doing so.
Jennifer Owens (04:06) I see where you’re headed with this. Let’s follow this path. Tell me about your thoughts about how digitization and technical revolution can help us be more efficient in our labor and expand the options of what we’re able to produce.
Brad Owens (04:21) So here is where I want to keep this thread going throughout the entire episode. Let’s hang on to that industrial revolution. Then we started getting into the next phases: we didn’t just have all these people sitting at these mills, these refineries, whatever it was we were trying to revolutionize. We now had the ability to automate some of that work as well.
So now we had the ability to remove the human element from it. And let’s jump ahead a few years here: automotive manufacturing. The assembly line came along. We now had a better way of working. But then what happened in the auto industry? We started getting robotics. We started getting things that could duplicate the exact same movements, the exact same way, to make sure that we ended up with an end product that was what we had specified from the beginning.
But to be able to do that, we had to have the process down, locked into repeatable steps. I mean, let’s just take a robotic arm, right? Everyone knows that big yellow arm that sits in all the automotive factories. That one arm did one repeatable thing over and over again. So it was automated, it was robotics, but because it only had that one job, it was not intelligent.
Jennifer Owens (05:44) So we’ve entered into kind of a, let’s call it a golden-age-of-science-fiction relationship between
people and technology. The technology may be better than a human being, because a robot doesn’t get, for example, repetitive stress injuries, if you perform all the proper maintenance. But the robot is only doing what the people tell it to do. We’ve got a very directive relationship between humans and their technology. So when I think about technology assists,
Maybe the second or third thing I think about is Clippy. Do you remember Clippy? Like back in the days when word processing on the computer was still like exciting and sexy and fun. And you could have like the little, thanks Microsoft by the way for your permission to use this. This is not true, we didn’t get permission. But I’m thinking about the little grammar and spelling assistant that would pop up when you were first typing your document. And you’d be like, Clippy, look, I am not doing a resume right now. Can you please stop? Go away.
Because although the idea was quite sound, the technology was not really there for humans to interact with Clippy in the way that we really want to be interacting with an assistant. I also want to stop there and allow you to respond, because there are three different directions I want to go.
Brad Owens (07:02) Well, Clippy was a good first start, right? That was when we were trying to think of, how in a digital way could we assist someone in doing their job? And I keep using that word assist on purpose because it’s not doing the job. It’s assisting us in that job because we can dig into how we actually use AI ourselves at some point. But if you think about the majority of the ways that AI has been implemented up until this point, up until this next revolution that we’re going to get to.
AI has been used in such a way that it assists us in doing something. We’re giving this thing a task by asking it a question about something we need, or giving it one specific job. So I always come back to it when people are like, AI is going to take over everything. I’m like, yeah, how’s that autocorrect working out?
Jennifer Owens (07:46) Mm-hmm.
Brad Owens (07:54) We’ve had autocorrect for a long time.
Jennifer Owens (07:55) So I think it’s really interesting because I feel like in our current artificial intelligence landscape, the products that are available to the consumer are kind of chunky, right? So I’m thinking about our word processing metaphor here, right? I came of age before there were digital assistants, right? So I learned to use a paper dictionary and a thesaurus, and for example, an encyclopedia if I needed to look something up. So I am used to going to a specific reference.
for a spelling, a specific reference for another word choice, a specific reference for, wait, do I really know what happened in the student’s revolution of like 1863? Yeah, I don’t know, but the encyclopedia knows and I can go get that. This is kind of how I think of AI right now, right? We’ve got large language models, which are terrific at generating text. We have machine learning algorithms, which are capable of ingesting large amounts of data and deriving patterns and drawing conclusions from that.
At the consumer level, we don’t really have, and maybe I’m wrong here, but I have not yet seen an autonomous workflow that I would trust, even 60 % of the time.
Right, if I ask, you know, like chat GPT to say, okay, make me a meal plan, make me a grocery list from the meal plan. Okay, now go into my account and order these groceries for delivery. I don’t feel quite confident there yet. So.
Brad Owens (09:16) yet. And
the key distinction you said there was “at the consumer level.” These things are possible. It is completely possible, if you have a bit of deep technical knowledge and can jump into Python, or an exorbitant amount of money to spend on OpenAI’s new orchestrator or something like that. Some of these things are possible.
Jennifer Owens (09:20) Yes. Yeah, absolutely.
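For the curious, here is a bare-bones sketch of the kind of chaining Brad means, using the OpenAI Python SDK. The model name is a placeholder, and the final ordering step is deliberately left to a human, for exactly the trust reasons Jenny raises.

```python
# Bare-bones sketch of chaining assistive steps, in the spirit of the
# meal-plan example. Requires the `openai` package and an API key; the
# model name below is a placeholder. Note the deliberate human checkpoint:
# nothing here orders groceries autonomously.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

meal_plan = ask("Draft a simple three-day dinner meal plan.")
grocery_list = ask(f"Make a consolidated grocery list for this plan:\n{meal_plan}")

# Human checkpoint: review before anything acts on your behalf.
print(grocery_list)
input("Press Enter if this list looks right (the ordering is left to you)...")
```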
Mm-hmm. So today, right, we can use task-specific assistance. What about this is driving the conversation about what might be possible tomorrow? Let’s talk about autonomy. Let’s talk about how this might really shift our workforce. And then I want to take us back.
to that thing that you said earlier about we have a better way to work. As you were talking about automotive manufacturing and we’re talking about this, because I really want to probe that from a couple of different lenses. But first, let’s discuss the future.
Brad Owens (10:08) So when we, when I think about my day job, what I have been able to.
Jennifer Owens (10:14) Wait,
pause. For those who are listening to this episode first, what’s your day job, Brad?
Brad Owens (10:18) So in my day job, I work with Salesforce-based technology to help companies come up with a way to do their work better, without going into a ton of depth. That’s what my day job is all about. We have a consulting organization that helps companies do work better with the Salesforce platform. And because it’s on Salesforce, we all know Marc Benioff’s feelings about the future of digital labor. They are driving hard into agentic AI.
Jennifer Owens (10:47) Wait,
what if I’m learning about this for the very first time? What is Mark Benioff’s stance on agentic AI and digital labor?
Brad Owens (10:55) I like it. So Marc Benioff was on stage at the World Economic Forum in Davos, talking to all the other leadership on stage, and at one point said, hey, are you all aware that we are the last leaders to lead only human labor? And everyone kind of paused and looked at him, and he went into this detailed explanation: up to now, we as leaders and business owners have only had to lead human labor.
What he is picturing, and honestly what the Workday CEO, the Google CEO, everyone is starting to understand, is that in the very near future, it is highly likely that we will have digital employees, what we’re referring to on this show as digital labor. That’s made possible by the thing everyone is starting to talk about with, oh my God, it’s going to come take my job: agentic AI. That’s what we really need to dig into.
Jennifer Owens (11:53) So what’s interesting is that as we were conceiving and really refining the idea for this podcast, I started to do a deep dive and do some research. And one of the places I went was actually the golden age of science fiction. I went back to Asimov and the Three Laws of Robotics. I started to think about the relationship of humans and their technology, because this is something that’s really fascinating to me. When we approach these kinds of
changes from a place of fear, it feels like AI is coming to take my job. When in reality, I feel like a hybrid human-digital workforce looks a lot like our space program sending rovers to Mars and teaching them to sing happy birthday to themselves. I know, right? So I think it’s really interesting: the desire of human beings
Brad Owens (12:42) Just, bah, all right.
Jennifer Owens (12:48) to kind of have dominion over something, right? That very golden age sci-fi like the human is in charge of you robot, you robot do what I tell you to do versus humans desire to make pets out of stuff. And I think the future of digital labor really will succeed when we have labor that we can feel friendly about, that is truly an assistant, that we feel is working hand in hand with us. I have a lot of other thoughts on iRobot that I will…
leave out for the sake of time. But if anybody wants to talk about Asimov with me, I’d love to. I’m curious, though, because we’re talking a lot about assistive artificial intelligence. And we’re talking about the future of digital labor as being truly agentic, so autonomous, having that agency to generate a, you to respond to a prompt and then to go and take action on that. And it’s interesting, as I’m thinking about what are the impacts of this going to be on the labor force, right? So switching perspective,
from the perspective of the employer thinking about, how can I get more productivity out of the human capital that I have versus now I’m also the employee of an organization. How am I gonna experience this change as a member of the workforce? And this is really interesting to me because our show is called Digital Labor Lab, right? We wanna do research, we wanna do experiments, we wanna think about things. So I found this really interesting paper on automation and local labor markets from 2017.
This was from the National Bureau of Economic Research in 2017. The authors are Acemoglu and Restrepo. And I thought this was really fascinating, because they looked at the impact of robots and automation on local labor markets. The paper is 91 pages long, and it’s a fantastic read. I really do recommend it;
link down below. But the short-term takeaway is: when robots are competing with human labor on various tasks, the presence of robots in the labor market reduces employment and it reduces wages. Right? So when humans and robots are competing, absolutely, humans are going to lose. Robots don’t get repetitive stress injuries.
This is interesting to me because I want to think less about a labor market in which humans and agentic AI are competing for roles and more about a labor market in which humans and agentic AI are collaborating. So how can we use these to remove some of the stresses of our workforce? How can we use these to do the jobs that people aren’t great at at scale?
or individually? How can we use agentic AI to do what AI does best and what people don’t do great and free up people to do the things that people do really well? I keep thinking about the tweet that keeps circulating, right? I don’t want AI to do the art so that I can do more dishes and laundry. I want AI to do the dishes and laundry so that I can do more art. That’s what I love. I want to see artificial intelligence do the stuff that people either don’t want to do or aren’t great at doing to free us up to be the most human we possibly can be.
Brad Owens (15:50) So I was thinking about an analogy for this today. If my grandparents were still around and came to me asking, hey, what is this whole agentic AI thing all about? There’s the fear side of things, oh my God, this is going to come take my job, which we can dig into, because it may not right now. It is a tool. It is a tool to be used. And
if I were able to explain this in a very simple way to someone who may not understand what it’s capable of, and to get past their fear, I’m glad they’re calling this agentic AI. I think that was a deliberate and fantastic choice, because when I think of an agent I would interact with, something I have experience with, I think back to a travel agent. And I want to talk a little bit about
the experience of working with a travel agent. We did it once. We had a recent trip that we wanted to go on. All we knew was where we wanted to go, the types of experiences we wanted to have, and roughly when we could do it. That’s all we said. Then the travel agent went off, did all of the research, came up with the entire plan,
booked all of the vacation locations, helped us understand what travel arrangements would actually get us there on time, and made sure that we had experiences that fit what we wanted to do while we were there. We just sat back and had an amazing time. When we think about that sort of interaction with agentic AI, we’re essentially talking about the same sort of experience, but with our work. So think about the types of things in your work that may be annoying the ever-loving crap out of you,
and how it would feel to offload them to someone who could take them off your plate.
Jennifer Owens (17:46) So there’s two things that are interesting to me. One is that you spoke about our recent experience with the travel agent, and I agree, it was a wonderful experience. We had one 15-minute phone call in which we prompted the agent with the kinds of things that we wanted out of the trip. And the second thing that’s interesting to me is that we had that experience in 2022, at a time when travel agencies have really kind of suffered from the rise of Yelp and TripAdvisor and all the other stuff.
There’s a lot that you can successfully do yourself, but for a trip of this magnitude, right? This was really kind of a once in a lifetime trip for us. For a trip of this magnitude, we didn’t want to do it ourselves. We wanted an expert. So we went to the expert.
The second thing that this really brings to mind though is, you’re talking about our work and I wanna think about work not just as something that I’m doing in exchange for a paycheck, but also the other things that are enriching my life. So, I volunteer in a couple of different areas, right? But one of the things that often sticks out to me is the challenge of orchestrating volunteer labor because people are available when they’re available. Some people are available Wednesdays from like 10 to two, but not if the Wednesday is a prime number.
And like untangling all of that is a fantastic job for artificial intelligence. And it is a pain in the butt for a human being to figure out, okay, I have these six people, here’s their availabilities, I need to staff this location for these hours, go. I think it’s interesting to think about the gains that can be made from the use of digital labor in places that aren’t necessarily the exchange of labor for cash.
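That untangling is a classic constraint problem, which is exactly why it suits a machine. A toy greedy scheduler, with invented names, shifts, and availabilities:

```python
# Toy greedy scheduler for the volunteer problem described above.
# Names, availabilities, and shifts are invented for illustration;
# a real version would handle fairness, maximum hours, and swaps.

shifts = ["wed_10_12", "wed_12_14", "sat_10_12"]

availability = {
    "Priya": {"wed_10_12", "wed_12_14"},
    "Marcus": {"sat_10_12"},
    "Jo": {"wed_12_14", "sat_10_12"},
}

assignments = {}
for shift in shifts:
    # Pick the least-booked volunteer who is free for this shift.
    candidates = [v for v, free in availability.items() if shift in free]
    if not candidates:
        assignments[shift] = "UNFILLED -- recruit or reshuffle"
        continue
    candidates.sort(key=lambda v: sum(1 for s in assignments.values() if s == v))
    assignments[shift] = candidates[0]

for shift, volunteer in assignments.items():
    print(f"{shift}: {volunteer}")
```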
Brad Owens (19:26) So where I want to bring this back to, to give everyone a solid grounding, is where Jenny started this conversation. There is a lot about this that is still possible, but the majority of it is still science fiction. I can tell you, from seeing it in the actual wild,
we’re not at the point yet where this is going to produce some kind of mass layoff because of AI. It’s just not there.
Jennifer Owens (19:55) Remember that eating disorder association that laid off all of their phone line people because they had an AI tool that was going to do the job? And then it turned out their AI tool was giving fantastically irresponsible advice, so they hired their people back.
Brad Owens (20:08) So it's not there. The speed of change is such that it could eventually get to the point of replacing a large portion of the workforce, but where we want to keep the focus right now, because it's what's most applicable to the majority of business owners out there, is the assistive technology, which is getting to be pretty tremendous.
It's helping people do their jobs at a level that was unheard of before. But just like in the industrial revolution, this is a tool, one you can put to the best use for you and your business. And Jenny, I'm curious to get your take: I don't think the majority of businesses are at the point where they need to start thinking about how they're going to replace their people with digital labor.
Jennifer Owens (21:00) I think the other large unknown in all of this, and this is something I would love to dive into on a future episode, is how the automation of the workforce, the agentification of the workforce, shapes the consumer product or service you're providing. We just talked about that example with the eating disorder line, where the output was unacceptable, and we ended up seeing a resurgence of human labor. I keep thinking about the robot nail-painting machines you see in Vegas, or sometimes in the fancy airports. Where will consumers start to make a different choice? What part of the agentic workforce is acceptable to a consumer or a purchaser of services, and where do we want to interact with a human? That's interesting to me.
The other thing is this fear that AI is taking our jobs. I think that's interesting too, and I think we should lean into it, use it as a lens to explore how we feel about our work. What about our work is truly, uniquely us, uniquely human, and what can truly be automated without sacrificing quality or care? Feeling that fear and using it as a lens to explore is the direction that's going to be most productive.
Brad Owens (22:22) And that's why we started the Digital Labor Lab, and why we hope this will be exciting for you to listen to as well. Because at this point, these are all just experiments. Even the big headline-grabbing stories, like Salesforce handling 50% of its customer service caseload by moving it to a bot, are just experiments right now. They'll even say it themselves: it's just an experiment. That's why this is a digital labor lab. We want to give you the freedom to experiment with these things, to find what works best for you and your business, to help you grow and take advantage of what's out there.
So wherever you're listening to or watching this episode, hit that subscribe button for us, and make sure you follow along with this and all the episodes to come. Jenny, where can they find us?
Jennifer Owens (23:08) You can find us at digitallaborlab.com. You can also find our episodes and bits and clips on many social media platforms: Bluesky, LinkedIn, all sorts of places. We would love to have you engage in the conversation there as well.
Brad Owens (23:21) If you search Digital Labor Lab on your social media of choice, odds are we're going to be there. So until next time, I'm Brad Owens. We'll see you then.
Welcome to the Digital Labor Lab Podcast, where we explore the future of work, automation, AI, and the evolving role of digital labor in our economy. In this inaugural episode, hosts Brad Owens and Jennifer Owens discuss:
What digital labor really means – is it just robots?
Why this topic is more relevant than ever in today’s economy
Insights from industry leaders, including a key takeaway from Salesforce CEO Marc Benioff
How digital labor is transforming industries like healthcare, supply chains, and HR
What to expect from future episodes!
Join us as we dive into the ethics, challenges, and opportunities surrounding digital labor. Whether you’re an employer, employee, or just curious about how work is changing, this podcast is your guide!
AI-Generated Full Transcript:
Brad Owens (00:00) Hey Jenny, what’s digital labor?
Jennifer Owens (00:02) I have a feeling it's more than just robots. Can we discuss?
Brad Owens (00:15) Hey everyone and welcome into the Digital Labor Lab where we explore the future of work one experiment at a time. I’m Brad Owens.
Jennifer Owens (00:22) I'm Jenny Owens, and together we are your hosts. Welcome to our inaugural episode of the Digital Labor Lab. In this episode, we're going to talk about what digital labor is, why we're talking about it, and what you can expect from us as we continue our podcast episodes. So without further ado, I guess, Brad, can we talk about what digital labor is? Is it just robots, or is there more to it?
Brad Owens (00:44) Yeah, so digital labor can span a ton here. For the audience: yes, we get it, this could encompass a ton of different things. But what we want to focus on is this next wave of how we're going to work. Yes, there could be some automation involved. Yes, there could be handing off a portion of your business to some sort of automated AI agent, or whatever you want to call it. But this is, simply put, the future of work, where we focus on more digital types of workers for your business.
Jennifer Owens (01:19) Whether that digital worker is truly in the cloud, or whether it's digital made concrete through an actual physical robot, right? I'm thinking about the automation in automotive plants. I'm thinking about the robots that carry supply chain stuff: if you've got a large physical campus and you need to get supplies from one place to another, you may have robots doing that in response to supply chain software, responding to your real-time stocking needs. But Brad, can we talk a little bit about why we're talking about digital labor right now? What is it about our current economic setup that's making digital labor feel like such a hot topic?
Brad Owens (01:59) You know, it's really interesting, because at a recent conference, the World Economic Forum held in Davos, a lot of the biggest names in tech, in business, in thinking about the future were all in the same place together. A few choice quotes came out of those panels. In one of them, Salesforce CEO Marc Benioff asked the others: do you realize that we may be the last leaders to only lead humans? And everyone kind of gave him a look of, what?
He went on to explain that in the very near future, the workforce will no longer be just humans. It will also be humans augmented by digital labor. And he went very deep down that path of how we're actually set up to manage digital labor. It was very, very interesting.
Jennifer Owens (03:00) So this is really interesting to me, because it's where this intersects with my day job. For those of you who don't know me, I work in artificial intelligence at Cleveland Clinic. For the purposes of this podcast, all opinions and thoughts here represent me personally, not my employer. But I spend a lot of time thinking about the role of artificial intelligence in the health care landscape: what work truly can be automated? Where can artificial intelligence really be autonomous? When we start to think about agentic AI, we start to think about which workflows we trust to turn over to technology alone, with human oversight and human quality control, versus which workflows make us say: nope, gotta have a human involved in 100% of those tasks. So this is really fascinating to me. And you mentioned Marc Benioff of Salesforce, and I think it might be helpful for you to explain: what is your background here? Why are you interested in this topic?
Brad Owens (03:55) Sure. So I come from the world of HR, staffing, and recruiting, and I typically look at an organization and try to think about how it's running and why. When you think about human resources, you think about literally what it takes to run your business. Up until now, that has all been people, but that is completely changing. In my day job, we're actually using software from the likes of Salesforce to help businesses with digital labor right now. Whether it's augmenting pieces of their business process or fully automating an entire process through agentic AI or other means, we're truly helping people start down this journey. And I'm noticing there are a lot of people who, one, have absolutely no idea what this is all about; two, don't know what it's capable of; or three, don't know if they're even ready for it. The good news is no one knows the answers yet. So when I was talking with Jenny, we went: you know what? With our powers combined, we would make a pretty interesting show around all of this. And that's the Digital Labor Lab.
Jennifer Owens (05:11) Yeah, so on the show and in subsequent episodes, we're going to dive into a history of the labor market: how changes in industrialization and automation have affected the labor market to date, and what we can expect from digital labor's entry into the marketplace at its first really meaningful scale. We're going to talk about how to determine whether your business is ready for digital labor, and how you can manage a hybrid human and digital workforce.
We’re going to talk about the ethics of using digital labor. We’re also going to step back a little bit from the labor payment paradigm, because we often think about labor as something which is exchanged for money. But there’s a lot of ways in which labor is powering our economy that aren’t paid, thinking about volunteer labor, thinking about the kind of labor that it takes to run a household. So there’s all kinds of labor that can be digitized, and bits and pieces of all these workflows can be digitized as well. We’re going to talk about that.
We’re going to talk about the effects of digital labor on your human workforce and vice versa, and how you can, if you’re an employee, how you can use these tools to make yourself a more valuable prospect to an employer or to start your own business. There’s so much to cover. I don’t think we’re ever going to run out of topics.
Brad Owens (06:22) Right, and in doing that, we need to dig into the people, the ideas, the tech: what's truly behind this, what's making it work, what's pushing this space forward, and how businesses are actually using it. So it's not just going to be us jabbering in your ear for a half hour once a week; we're truly going to bring on the people involved in this. We're going to talk to people from large management consulting agencies that have been helping businesses through these kinds of transitions. We're going to bring in the people responsible for the tech that's making these changes, and people who understand the agentic AI frameworks, the actual platforms that make this happen. We're going to talk to you about the research and about what's out there, so that the Digital Labor Lab truly acts as your foundation, whether you just need to understand something at a basic level or you need to understand where this is going so your business doesn't fall behind. This show will be a fantastic guide.
Jennifer Owens (07:20) Yeah, absolutely. And so if you’re looking for resources, for references, for back archives of the show, where are we gonna have that, Brad?
Brad Owens (07:29) Right? So all of that is the home for this will be at digital labor lab.com. That’s going to be the hub. but that’s not the only place you are probably listening to this not on digital labor lab.com. we’re going to have the podcast on Spotify. It’ll be on YouTube. It’ll be on Apple podcasts. It’ll be wherever good podcasts are sold.
and it’ll also be all across social media. If you will follow us on LinkedIn, we’ll absolutely bombard you with content. If you wanted to be out there on X, if you want to be on Blue Sky, we’re going to make sure that this is available where you are so you don’t have to go somewhere special to consume this.
Jennifer Owens (08:01) So come along with us on this journey as we explore and discover together. We’re gonna experiment, we’re gonna learn, we’re gonna find experts, we’re gonna bring everything together in one place. I’m so excited to have you join us.
Brad Owens (08:11) Until next time, make sure you subscribe wherever you're currently listening to or watching this episode, because we will be back with plenty more episodes, and we hope you'll join us on this journey of the Digital Labor Lab.