Why performance reviews don’t work: bias, surprises, and the trust gap
Performance reviews don’t work when they measure perception instead of performance. They also don’t work when feedback shows up too late, as a surprise. That’s the moment a process stops feeling useful and starts feeling arbitrary.
In a fireside chat, Hebba Youssef (founder of I Hate It Here and Chief People Officer at Workweek) joined Workleap’s Senior HRBP, Sam Sadeghi, to talk about why performance reviews break down in real organizations. Not in theory. In the messy, human reality where managers forget half of what happened, employees brace for impact the second they hear the word “review,” and “performance” becomes a proxy for visibility.
Watch the full conversation on YouTube:
Prefer the highlights? Here’s the core of it, plus what to do next.
If you’re looking for practical ways to run a stronger performance review conversation, Workleap has a guide to making performance reviews more effective and another that covers how managers and employees can approach the review meeting itself. This article stays focused on the “why it breaks” story, because once you understand that, fixing it gets much easier.
The uncomfortable truth: reviews often measure perception
Hebba put it plainly: performance management doesn’t measure performance as reliably as we want it to. It measures someone’s perception of you. And perception is easy to distort.
The most common distortion is recency bias. In an annual cycle, a manager gets to the review and remembers the last few weeks. The rest of the year fades into “I think you were doing fine?” That’s not malicious. It’s human memory.
This is also why many teams start experimenting with a different review cadence, especially when they want feedback to reflect a fuller picture of the year instead of the last month. If you’re weighing that shift, here’s a helpful breakdown of common performance review cycles and what they change.
Then there’s proximity and visibility bias. When there’s no consistent record of work and feedback, visibility often replaces impact. People who are more present in the “right” conversations tend to accumulate credit. People who ship quietly, or contribute cross-functionally without a spotlight, get overlooked.
At that point, the review isn’t measuring performance. It’s measuring what the system can see.
The part that breaks trust is surprise
When HR teams talk about performance reviews, they often focus on the mechanics. The rating scale. The form. The calibration meeting. But when employees talk about performance reviews, they talk about how it feels.
“I didn’t see that rating coming.”
“I didn’t know this was a problem.”
“I was passed over and nobody told me why.”
Those moments turn a process into a trust issue. Over time, surprise turns into a belief that performance is arbitrary. Maybe even political.
Hebba’s non-negotiable is worth repeating because it’s the simplest litmus test you can apply to your own process:
If you’re going to reveal anything in a review that you haven’t already discussed, you’ve failed.
That doesn’t mean every review has to be positive, or comfortable, or easy. It means reviews should confirm what’s already been said and what’s already been documented. If your organization is stuck in an annual review rhythm, it’s worth watching out for the common traps that create surprise (and defensiveness), including what managers unintentionally do during high-stakes cycles. Here are a few of the biggest annual review mistakes to avoid.
Where systems really break is day-to-day manager behavior
The conversation kept coming back to something important. Performance management doesn’t live in a framework. It lives in manager behavior.
Expectations need to be clear, and they need to be restated when priorities change. Feedback needs to be specific, and it needs to happen when it can still be useful.
Hebba told a story about coaching a manager who said it felt “very boomer” to state expectations explicitly. Funny, but also revealing. Expectation setting doesn’t come naturally. Most managers were promoted because they were good at the work, not because they were trained to lead people through priorities, trade-offs, and standards.
What helps is reframing feedback as a skill. It’s not a personality trait. It’s a muscle.
A simple upgrade is moving away from feedback that’s anchored in feelings (“you made me mad when we missed the deadline”) and toward feedback that’s anchored in outcomes:
Here’s what happened. Here’s the impact. Here’s what good looks like next time.
If you want a practical framework for getting managers out of vague feedback and into actionable coaching, you can point them toward how to give performance feedback that improves engagement and retention. And if the bigger challenge is getting managers to apply a consistent approach (so reviews don’t feel like a personality contest), Workleap also lays out a manager-friendly way to implement performance reviews.
Ratings are rarely the point, but they become the headline
Ratings exist because companies need to make decisions about compensation, growth, and promotions. But ratings also hijack the conversation because humans are not neutral about numbers.
“Three out of five” can land as mediocre even when it means “meeting expectations at a high bar.” People also tend to translate ratings into money, whether or not the system is designed that way. Once the number becomes the headline, the narrative feedback becomes footnotes.
Hebba’s take was refreshingly honest. She has a love-hate relationship with ratings because they’ve been conflated with worth, and because narrative feedback often does the real developmental work.
If you need a more foundational overview of what reviews are supposed to include, rather than why they tend to fail, there's a deeper explainer on how employee performance reviews typically work.
AI can help, but it cannot replace judgment
AI came up as both an opportunity and a risk. Hebba’s framing is helpful because it cuts through the noise.
AI is useful when it summarizes what already exists. It can surface patterns across months of one-on-ones. It can pull together “forgotten wins” from earlier in the year. It can help a manager craft clearer wording, especially when the message is tough and the temptation is to soften it into confusion.
AI becomes dangerous when it replaces the human responsibility of feedback. Copying and pasting generic feedback into a review is a fast way to lose trust, especially in a moment that already carries emotional weight. “Garbage in, garbage out” is still the rule. A system with weak inputs doesn’t become strong because you added AI.
If you’re evaluating tools, it’s worth looking for systems that help you pull evidence from where work actually happens and reduce “end-of-cycle scramble.” That’s the direction Workleap is building in with Workleap Performance, and more broadly across the Workleap platform.
One small change that actually moves things
At the end of the conversation, Hebba didn’t prescribe a full process overhaul. She gave a small, practical starting point: build the habit of continuous feedback, and make sure your system has a way to capture it.
That’s what prevents surprise. That’s what reduces review-season fatigue. It also gives employees something they often want more than a score: a record showing that their work is seen.
And if you take nothing else from this: the review shouldn’t be the first time someone learns where they stand. It should be the moment you align on what’s next.
Watch the full conversation
If you’re rethinking your performance review process, or you’re trying to reduce surprise and rebuild trust, the full fireside chat goes deeper on bias, calibration, manager training, and practical implementation.
Watch on YouTube:
Bonus: What HR leaders asked us live
Answered by Sam Sadeghi, Senior HR Business Partner at Workleap
How do you train managers to give better feedback when they’re already underwater?
The best training is practice, not theory. Start with the one skill that drives everything: clear, actionable feedback. Build it into what leaders already do (check-ins, one-on-ones) using simple scripts, short role plays, and coaching on real situations. In high-pressure environments, training has to reduce friction, not add more work.
Performance management is seen as box-checking. How do you get buy-in for ongoing performance communication?
Frame it in business impact: missed development, retention risk, and slower outcomes (plus more surprise at review time). Then pilot it with a small group of leaders using a lightweight cadence and simple metrics. Leaders buy in faster when they see that frequent feedback makes their job easier and reduces conflict later.
In my org, performance and pay are tightly linked. Can you decouple them without chaos?
Yes. The cleanest approach is to separate performance calibration from compensation decisions. Calibrate performance first based on contribution and impact, then assign compensation. That protects trust because ratings aren’t implicitly “adjusted” to fit the year’s budget.
How do you stop feedback from feeling personal (like labeling someone “not engaged”)?
Help leaders separate behavior from identity. Labels feel personal, but performance conversations should focus on observable actions and outcomes. Anchor feedback in specifics, discuss context, and focus on development opportunities so feedback stays actionable and growth-oriented.
How do you bring “performance communication” to management instead of just annual reviews?
Connect it to what’s at stake: missed development, retention risk, and slower outcomes. Suggest a pilot with a small group of leaders to test cadence and format, with metrics to track impact. Frequent check-ins reduce surprises, improve role clarity, and strengthen manager-employee alignment.
Can continuous performance management work without formal talent assessments?
It can, but many orgs underestimate how much structure talent assessments provide. If you remove them without strengthening goal clarity, feedback quality, and calibration around decisions, continuous performance can quickly become informal and inconsistent, which may hurt trust.
How do you speak to a CEO who doesn’t believe in formal upward feedback?
Position it in business impact: it helps catch issues early, retain key talent, and align leaders to strategy. Data tends to convince most executives, so start small with a targeted upward feedback pilot tied to leadership outcomes.