The AI Skill No One Is Talking About: Decision-Making
AI outputs can mislead by appearing more accurate than they are.
AI shifts the meaning of expertise from generating answers to evaluating them.
The key skill for the future of work is making decisions with AI, not letting AI make decisions for us.
For the past few years, the conversation about AI at work has focused primarily on knowing how to use AI. Initially, that made sense. Incorporating AI into everyday operations meant knowing how to prompt, automate, streamline, and generate content efficiently.
Now, as AI tools are becoming embedded in many individuals’ everyday workflow, a different, more consequential skill is emerging, one that is far less discussed: the ability to make decisions with what AI gives us.
From Manual Work to Automation Overload
Until recently, much of professional work was constrained by time-consuming, manual processes: filling out spreadsheets, analyzing data, putting together engaging presentations, and creating content. In recent years, AI has fundamentally altered what individuals spend time on at work, making menial tasks easy to streamline and outsource to our favorite chatbot.
This shift has increased reliance on AI as an efficiency tool, and with it, our trust in AI for increasingly complicated and high-stakes decisions. The reasoning feels intuitive: If AI can reliably handle simple tasks, why not use it to assist with more complex ones?
But there’s a problem: AI agents may sound confident, but they make mistakes (e.g., Peters & Gerster, 2025).
Confidence Is Not Competence
What makes this problem particularly difficult is not simply that AI makes errors, but that it presents information in ways that make those errors difficult to detect. AI-generated outputs are typically fluent, coherent, and well-structured—all qualities that we tend to associate with accuracy. Decades of research on the processing fluency effect show that information that is easier to process is more likely to be perceived as true, regardless of its actual validity (e.g., Reber & Schwarz, 1999; Alter & Oppenheimer, 2009). In the context of AI, the more polished and confident the output, the more likely it is to be accepted without sufficient scrutiny.
As AI becomes more proficient, so does trust in its outputs. After all, why wouldn’t we trust an intelligent agent that has access to more information than any single human being at any point in time? Why wouldn’t we use AI-generated output to inform our decisions, including in contexts that involve complex or high-stakes organizational choices?
The truth is: We should. But only after we learn how to make decisions with AI-generated information.
Decision-Making With AI, Not by AI
People still largely fall into two broad categories: those who view AI as omnipotent, and those who wouldn’t trust AI to write an email. Both groups are missing out. The latter group may spend time and effort on low-skill tasks and information searches that AI could handle far more efficiently. The former group is in far more dangerous territory: they may blindly incorporate AI-generated feedback into consequential organizational decisions without auditing it, evaluating it, and putting it into context. This is risky not only because AI has been shown, in research and in practice, to make errors repeatedly, but also because AI lacks something uniquely human: the ability to take history, context, cultural climates, emotions, and other situational factors into consideration.
Making decisions with AI requires more than simply verifying whether an answer is correct. It requires a shift in how decisions are approached altogether. Rather than treating AI outputs as conclusions, they should be treated as inputs into a broader decision-making process. Research on judgment and decision-making suggests that individuals are more accurate when they consider multiple alternatives, actively question initial impressions, and integrate contextual constraints (e.g., Clark, 2019; Kahneman & Lovallo, 2011).
In practice, this means resisting the tendency to accept the first coherent response and instead deliberately generating variation, interrogating assumptions, and evaluating how well an output aligns with the specific context in which a decision is being made. Even consulting multiple chatbots for the same question can reveal how variable individual outputs can be. The goal is not to identify the single “best” answer generated by AI, but to use AI-generated information to inform a more rigorous human judgment process.
By making answers instantaneous, AI removes an important feature of decision-making: the natural pauses during which we reflect on and evaluate our decisions. In doing so, it may accelerate work, but it also increases the risk of premature conclusions.
Rethinking Expertise in the Age of AI
There was a time when expertise meant knowing more than others. The internet changed that: expertise became about knowing where to find the right information.
AI changes it again. When answers are abundant, immediate, and often plausible, expertise is no longer about knowledge or access. It is about judgment and decision-making. The definition of workplace proficiency is shifting away from the ability to simply generate answers and toward determining which answers are worth trusting, and how to translate them into practice.
