AI Is Boosting Productivity and Burnout: Teams Must Build Hope

Many workers who use AI report gains in productivity, but many also notice an increase in stress.

Hope grows when people believe they still have influence inside the system.

AI should be one pathway among many to accomplish great work—not the only route forward.

By Jen Fisher and Paula Davis

Burnout has long been framed as a problem to solve with efficiency, endurance, or recovery strategies—but those approaches often miss something essential: People need hope. In a time when artificial intelligence (AI) is reshaping how we work, many conversations focus on productivity gains, job disruption, and speed. Yet AI may offer another, less discussed opportunity: the chance to reduce unnecessary friction, restore a sense of possibility, and help people feel more capable and supported in the face of growing demands. If burnout is fueled by chronic stress and helplessness, then hope may be one of the most important antidotes.

While many workers who use AI do report gains in productivity, many also notice an increase in stress, because the time AI frees up doesn’t translate into downtime. A 2025 report from the Upwork Research Institute showed that the most productive AI users were significantly more likely to report burnout and disengagement—and nearly twice as likely to say they were considering quitting. In the same study, 90 percent of workers described AI as a “coworker,” and more than half said they trusted AI more than their human teammates.

A separate eight-month study found something similar. When one tech company introduced AI tools (without mandating their use), employees worked faster, put in longer hours, and took on broader responsibilities. What looked like productivity gains also created scope creep, “work slop” that increased cognitive load, constant pressure to produce more, and reduced recovery time. Over time, leaders struggled to distinguish genuine performance from unsustainable intensity, which can become a precursor to burnout.

A manager at a large consulting firm describes his experience this way: “AI is my fastest teammate and my worst boss. It can execute anything I ask. But it has no wisdom about what is worth asking.” He’s required to use AI tools with little guidance, no clear expectations, and no organizational vision for what good AI-assisted work even looks like. That gap is exactly where work slop is born: more output, less judgment about which output matters. And when no one stops to ask whether the output is any good—when the work just gets accepted—that’s where hopelessness lives. Do I even matter here? Or am I just approving what the machine already decided?

Decades of research reveal that burnout has three dimensions: chronic exhaustion, chronic cynicism, and inefficacy—the “Why bother?” mentality that appears when people can no longer see how their work makes a difference.

Most organizations focus on exhaustion. But in the AI era, inefficacy may be the more dangerous signal. When employees start asking, “If the system can do this faster than I can, what value do I add?” they aren’t simply overwhelmed. They’re questioning their relevance. And when relevance erodes, agency follows.

Agency is the engine of hope.

What Hope Looks Like Now

Psychologist C. R. Snyder defined hope as more than optimism. It rests on three components: meaningful goals, agency (the belief that you can take action toward those goals), and pathways (the ability to see multiple routes forward). The problem is that hope often gets conflated with optimism or wishing, both of which are passive states. When you hope, you have both high expectations for the future and a realistic view of the obstacles that you may need to overcome to achieve your goal. Phrases like “Don’t lose hope” or “Think positively,” while often well-intentioned, are really wishing disguised as hope and can undermine your efforts.

AI doesn’t eliminate the need for human agency—it changes what agency looks like. The shift is from “Can I complete this task?” to “Can I direct this technology toward meaningful outcomes?” In addition, pathways thinking becomes essential. AI should be one pathway among many to accomplish great work—not the only route forward.

Leaders undermine agency during AI transformation when they send “You’re being replaced” messaging (explicitly or implicitly), roll out tools without employee input, or measure productivity without measuring impact.

Leaders build agency when they frame AI adoption as ongoing experimentation rather than a performance mandate, when they invite teams into decisions about where AI adds value and where human judgment remains essential, and when they publicly acknowledge their own learning curve with new tools.

Hope grows when people believe they still have influence inside the system.

As organizations integrate AI, some leaders are discovering that team dynamics can shift—not because people resist technology, but because trust changes.

When a human teammate makes a mistake, teams can ask questions, understand context, and learn together. AI doesn’t participate in that mutual learning process. It introduces what researchers call “trust ambiguity.”

The risk is that people stop asking colleagues for help, not because their colleagues aren’t capable, but because AI is faster and never makes anyone feel stupid for asking. And suddenly, without anyone noticing, the team has stopped talking.

Hope requires psychological safety. People need permission to say, “I don’t know how to use this yet,” without fearing career consequences.

Harvard Business School professor Amy Edmondson—whose foundational work on psychological safety has shaped how organizations think about learning and risk—and her co-author Jayshree Seth argue that AI integration should be treated as a learning challenge, not just a rollout. That means leaders can:

Frame AI use as experimentation, not expectation. Position deployment as ongoing, hands-on learning rather than a performance mandate.

Model fallibility by sharing their own AI missteps. When leaders acknowledge their own learning curve, they normalize growth instead of fear.

Distinguish between intelligent failures and preventable errors. Low-risk experiments that generate insight should be celebrated, not penalized.

Create space to discuss AI challenges—not just AI wins. Teams need room to name what isn’t working.

Without psychological safety, agency stays silent.

Monday Morning Actions

Hope isn’t built through speeches. It’s built through structure and good teaming practices.

Here are three actions you can take this week:

Establish clear AI norms. Define when AI should be used—and when human judgment is non-negotiable. Before defaulting to automation, ask: “What are three other pathways we could explore?”

Reframe one-on-ones around agency. Beyond productivity metrics, ask: “What meaningful impact are you creating right now? Where do you need more control in how we’re using AI?”

Share your own learning curve. Describe one way AI didn’t work as expected and what you learned. When leaders model curiosity instead of certainty, they normalize growth instead of fear.

Hope Is the Real Competitive Advantage

Hope isn’t naive optimism about AI’s potential. It’s the disciplined practice of protecting agency and preserving pathways—even as technology evolves. The organizations that thrive in the AI era won’t be the ones that automate the fastest. They’ll be the ones that help people see how they still matter inside the system.
