A Jarred Consulting perspective on where human judgement still matters
We’ve noticed a quiet issue emerging in organisations. The most common fear in the press is that AI will replace humans overnight. But the real issue is far less dramatic, and much more subtle: AI should free people to think better, yet in practice it sometimes encourages them to think less. Across talent management, leadership, and organisational decision-making, we’ve found that human judgment is being applied too little, or too late. This isn’t because people don’t care. It’s because it’s easier to accept confident, AI-generated answers than to question them. And it’s something we can quite easily put right.

Where adoption outpaces judgment
The data backs up what we’re seeing on the ground.
- Over 75% of leaders now use AI regularly, while frontline usage lags at around 51%.
- Only 16% of organisations have effectively redesigned work to integrate AI properly.
- Two-thirds of executives admit they don’t yet have the leadership capability to fully leverage it.
- Meanwhile, 66% of employees don’t verify AI outputs before using them.
3 danger areas where judgment is slipping
In our work across talent and leadership, this is what we’ve noticed.

In hiring and talent decisions, tools now screen CVs, generate summaries, and even suggest interview questions. Useful as these are, managers often rely on the outputs as if they were neutral and complete. But because they’re based on patterns rather than context, they can miss potential that sits outside the mould. Your ideal candidate may be slipping through your fingers.

In performance reviews, AI can draft polished, fair feedback. But the nuanced understanding of what someone needs to hear, what helps their growth, and what might land badly still requires human judgment. Increasingly, this part is skipped as if it were unimportant, yet it can be the difference between damaging a relationship and an employee feeling valued enough to stay.

In decision-making, AI can offer options, analysis, and recommendations, but it doesn’t bear accountability or deal with the consequences. When leaders default to ‘what the model suggests’, they’re not just saving time; they’re passing on responsibility.

None of this is dramatic; it’s gradual, and that’s why it’s easy to overlook.

3 vital questions to ask

There’s a misconception that as AI becomes more capable, human skill will matter less. In reality, the opposite is true. We’ve found that the organisations getting the most from AI aren’t the ones using it the most, but those being most deliberate about its application. They ask:
- Where do we want people to rely on AI?
- At what stage should human judgment intervene?
- How do we develop people to challenge outputs rather than accept them blindly?
