7 Mistakes HR Makes While Running Assessment Centres


Here is a scenario that will feel familiar. Your organisation invested three months building what looked like a solid competency framework, ran a full day of assessment centre exercises, put six managers in the room as assessors, and produced a set of ratings. Then someone in the wash-up meeting asked: “Why did Panel A rate the same candidate a 4 and Panel B rate them a 2?” Nobody had a clean answer. The data was inconsistent. The decisions that followed were, by any honest standard, no more defensible than a panel interview would have been.

A 2022 study by the International Congress on Assessment Center Methods found that over 40% of organisations running assessment centres reported inconsistent or unusable assessor data in at least one programme per year. The methodology is not the problem: these organisations understood assessment centres, and many had certified their HR teams and bought reputable tools. The problem was execution, and execution failures in assessment centres follow patterns that are specific, repeatable, and fixable.

This article covers the seven mistakes that appear most consistently when running assessment centres in real organisations: not the obvious ones that get covered in induction training, but the ones that surface six months into a live programme when results start to disappoint and nobody is quite sure why. If you recognise your own programme in more than two of these, you are in better company than you think. More importantly, every one of them has a specific, actionable fix.

Mistake 1: Designing Exercises Without a Competency Framework Anchor

The Mistake

HR teams select exercises first and justify them with competencies afterwards. A group discussion is added because “it tests teamwork.” A presentation is included because “senior roles need communication skills.” The result is an assessment centre design built on convention rather than a job-specific competency framework. Each exercise generates data, but because the competency-to-exercise mapping was never explicit, assessors do not know what they are rating, and no two centres produce comparable outputs.

Why It Happens

Competency framework development takes time that most HR calendars do not protect. When a hiring deadline is close or a board request arrives, the path of least resistance is to pull an exercise set from a previous programme and assume it will transfer. It almost never does cleanly.

What It Costs the Organisation

The British Psychological Society’s Guidelines on Assessment Centres (2017 revision) note that exercises not anchored to specific behavioural indicators produce validity coefficients 30 to 40% lower than well-anchored designs. In plain terms: your programme is predictive at roughly the level of a structured interview, but with five times the cost and effort.

THE FIX

Before selecting a single exercise, complete a full competency mapping exercise for the target role. Each target competency should be observable in at least two independent exercises (the multi-trait, multi-method principle). Build the exercise set from that map, not the other way around.
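The two-exercise rule is mechanical enough to check before any materials are built. Here is a minimal Python sketch of that check; the competency and exercise names are hypothetical placeholders, not a recommended set:

```python
# Minimal design-stage check: every target competency must be observable
# in at least two independent exercises (the multi-trait, multi-method
# principle). Exercise and competency names are hypothetical.
from collections import defaultdict

exercise_map = {
    "group_discussion": ["collaboration", "influencing"],
    "in_tray":          ["planning", "decision_making"],
    "presentation":     ["communication", "influencing"],
    "case_analysis":    ["decision_making", "planning", "communication"],
    "role_play":        ["influencing", "communication"],
}

# Invert the map: competency -> exercises in which it can be observed.
coverage = defaultdict(list)
for exercise, competencies in exercise_map.items():
    for competency in competencies:
        coverage[competency].append(exercise)

# Flag any competency that would be rated from a single exercise only.
for competency, exercises in sorted(coverage.items()):
    status = "OK" if len(exercises) >= 2 else "UNDER-COVERED"
    print(f"{competency}: {len(exercises)} exercise(s) [{status}]")
```

Running a check like this against the draft design takes minutes and catches coverage gaps that are expensive to discover on the day.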

If your organisation does not have an existing competency framework for the target role, that is not a reason to delay the assessment centre. It is a reason to build one first. A framework built on job analysis conversations with three to five strong performers and their line managers takes two to three weeks. The assessment centre that follows will be measurably more valid.

Real-world scenario: A financial services firm running a mid-manager assessment centre used a group exercise designed for graduate selection. The competency it was meant to measure (influencing without authority) was developmentally appropriate for graduates. For mid-managers, it produced ceiling effects: everyone performed well and the exercise discriminated nothing. Three months of planning produced data that could not inform a single promotion decision.

Mistake 2: Treating Assessor Training as a One-Hour Briefing

The Mistake

Internal assessors receive a 45-minute brief the morning of the assessment centre. They are handed a scoring guide, told what the competencies mean, and paired with a more experienced colleague who is also, in many cases, too pressed for time to model the methodology properly. Assessor training is treated as a logistical step rather than a quality control mechanism.

Why It Happens

Professionals who serve as assessors are expensive. Pulling them out of operations for a full-day assessor training programme requires a business case that HR teams often struggle to make at pace. The compromise is the morning briefing, which satisfies the process checkbox but delivers almost none of the observational rigour the methodology requires.

What It Costs the Organisation

Research by Lievens and Christiansen (2010) found that untrained assessors introduce systematic halo error in up to 62% of group exercise ratings, meaning they give a candidate who performs strongly in one competency inflated scores across all others. In promotion decisions, this conflates likeability with competence and produces predictable patterns of bias.

THE FIX

Minimum viable assessor training for a first-time assessor is one full day, covering the ORCE methodology (Observe, Record, Classify, Evaluate), practice with video-recorded exercises, independent rating, and calibration against a benchmark standard. This is not optional for a programme producing legally defensible decisions.

For existing assessor pools, run a half-day annual calibration before each programme cycle. Assessors who rated in a previous cycle drift toward their personal interpretation of behavioural anchors. Annual recalibration restores inter-rater reliability to acceptable levels.

Able Ventures delivers an accredited Competency Assessor Certification programme for internal HR teams and professionals, including practice materials and calibration tools.

Real-world scenario: A professional services firm discovered during a post-programme audit that two of its six assessors had rated every candidate at least one point higher than the group norm across all exercises. Both were senior partners with strong relationships with the candidate group. Neither had received formal assessor training. Their ratings skewed the final wash-up data to the point that two candidates who should not have progressed were moved forward.

Are Your Assessors Trained to the Standard Your Programme Needs?

Mistake 3: Running One Assessment Centre Design Across All Role Levels

The Mistake

The same assessment centre design developed for a mid-manager programme gets reused for a graduate intake with minor cosmetic changes. Or a senior leadership centre is built by adding harder-sounding scenarios to a middle-management template. The exercise format stays the same. The competency framework stays the same. Only the name changes.

Why It Happens

Rebuilding from scratch for each level feels inefficient. If the exercises worked last time, the logic goes, they should work again. What HR teams underestimate is how significantly the observable behavioural indicators for a given competency change across role levels. Strategic thinking at graduate level looks nothing like strategic thinking at director level.

What It Costs the Organisation

Exercises calibrated for the wrong level produce either ceiling or floor effects. When 85% of a candidate group scores in the top two rating categories on an exercise, that exercise is contributing no discriminant information to your decisions. You have run a programme that cost significant budget and generated data with the statistical value of a coin flip.

THE FIX

Build a role-level matrix before design begins. Define what each target competency looks like in observable behavioural terms at the specific level you are assessing. A competency-based assessment for a first-line supervisor selection centre should include observable indicators that distinguish a strong supervisor from an average one, not a strong director from an average director.

Use role-specific exercise scenarios. The context matters more than the exercise format. A group exercise assessment centre scenario set in a manufacturing plant will surface very different behaviours from the same group exercise set in a client services context. Design for the actual world your candidates operate in.

Real-world scenario: A logistics company ran an identical assessment centre design for both their graduate intake and their first-line supervisors, changing only the candidate briefing language. The graduate exercises required strategic analysis well beyond the target role. The supervisor exercises were conceptually simple enough that most candidates scored at ceiling. Neither cohort’s data was usable. Both programmes were, in effect, expensive non-events.

Mistake 4: Ignoring Candidate Experience During the Assessment Process

The Mistake

Candidates receive inadequate briefings, unclear instructions, long unexplained waiting periods between exercises, and feedback that amounts to a result notification. The assessment centre is treated as an internal HR process. The candidate’s experience of it is an afterthought, addressed only when something goes badly wrong and someone complains formally.

Why It Happens

Assessment centre logistics are genuinely complex. When HR teams are managing assessor panels, exercise rotation schedules, materials distribution, and scoring timelines simultaneously, candidate communication tends to drop off the priority list. It is rarely a values failure. It is almost always a planning failure.

What It Costs the Organisation

LinkedIn’s 2023 Global Talent Trends report found that 83% of candidates say the interview and assessment experience strongly influences their decision to accept an offer. For senior and specialist roles where the candidate is likely fielding competing offers, a poorly run assessment centre is an active attrition event. You are not just measuring candidates. You are also being measured by them.

THE FIX

Build a candidate journey map for every assessment centre programme. This is a simple document that lists every touchpoint from pre-assessment communication through to post-centre feedback, and defines who is responsible, what information is shared, and on what timeline.

Specific non-negotiables: candidates should receive a written briefing pack at least 48 hours before the centre, clear instructions on the format and sequence of exercises, and a committed feedback timeline (not just ‘we will be in touch’). These cost nothing to implement and are correlated with higher candidate quality across programmes because strong candidates are more likely to follow through when they feel the process respects their time.

Real-world scenario: A technology firm lost three of its top five-ranked candidates from a senior engineering manager assessment centre because the day ran 90 minutes over schedule with no communication to candidates, two exercises were explained verbally with no written materials, and post-centre feedback was never delivered. Two of those candidates later joined competitors. One mentioned the assessment experience specifically in an industry peer forum.

Mistake 5: Failing to Standardise Assessor Observation Within a Competency Framework

The Mistake

Assessors observe the same exercise and record fundamentally different things. One assessor fills three pages with verbatim behavioural notes. Another writes “good communicator, seemed confident, held the group together.” A third writes nothing during the exercise and relies on memory during the wash-up discussion. The competency framework exists on paper. In the assessment room, it is not being applied consistently.

Why It Happens

Most assessment centre programmes invest in assessor training for the observation methodology but under-invest in standardising what observation looks like in practice. Assessors are told to use the ORCE methodology but are not given structured observation sheets, calibrated examples, or a shared understanding of what constitutes sufficient behavioural evidence for a rating.

What It Costs the Organisation

The 2018 International Congress on Assessment Center Methods survey found inter-rater reliability coefficients (ICC) averaging 0.51 across organisations without standardised observation protocols, compared to 0.73 in organisations with structured formats. An ICC below 0.6 means assessor disagreement is contributing more noise to your final ratings than signal. Your wash-up discussion is resolving that noise with influence rather than evidence.
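If you want to know where your own assessor pool sits against those benchmarks, the statistic is straightforward to estimate. Below is a minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater, following Shrout and Fleiss, 1979), computed from a candidates-by-assessors rating matrix; the ratings are invented for illustration:

```python
import numpy as np

# Candidates x assessors rating matrix (ratings are invented).
ratings = np.array([
    [3, 4, 3],
    [2, 2, 3],
    [4, 5, 4],
    [1, 2, 2],
    [3, 3, 4],
], dtype=float)

n, k = ratings.shape
grand = ratings.mean()

# Two-way ANOVA sums of squares: rows = candidates, columns = assessors.
ss_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2)
ss_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2)
ss_error = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols

ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_error = ss_error / ((n - 1) * (k - 1))

# ICC(2,1): two-way random effects, absolute agreement, single rater.
icc = (ms_rows - ms_error) / (
    ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
)
print(f"ICC(2,1) = {icc:.2f}")  # below 0.6, disagreement is mostly noise
```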

THE FIX

Design a behaviour observation record (BOR) sheet for every exercise, linked explicitly to the competency framework. The BOR should list the target competencies for that exercise with space for verbatim evidence, a prompt reminding assessors to record what they observed rather than what they concluded, and a separate section for the rating and justification.
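A BOR does not need specialist software; even a simple data structure enforces the separation between evidence and evaluation. The sketch below is purely illustrative, with hypothetical field names, assuming a five-point rating scale:

```python
from dataclasses import dataclass, field

@dataclass
class BOREntry:
    """One competency on a behaviour observation record (illustrative)."""
    competency: str
    evidence: list[str] = field(default_factory=list)  # verbatim behaviours only
    rating: int | None = None  # stays empty until the evaluation stage
    justification: str = ""    # must cite the recorded evidence

# During the exercise: record what was observed, not what was concluded.
entry = BOREntry(competency="influencing")
entry.evidence.append("Asked two quieter members for their view before summarising")
entry.evidence.append("Restated the group objective when the discussion drifted")

# After the exercise: evaluate, citing the evidence (5-point scale assumed).
entry.rating = 4
entry.justification = "Both recorded behaviours show proactive inclusion and refocusing."
```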

In the wash-up, require each assessor to read their evidence aloud before any rating is discussed. This single practice change reduces halo effect contamination and forces the discussion to stay grounded in behavioural competencies observed during the exercise rather than general impressions formed across the day.

Real-world scenario: During an assessor calibration exercise at a consumer goods company, two assessors watching the same recorded group discussion produced ratings that differed by two scale points on four of six competencies. When asked to read their notes, one had recorded three specific behavioural examples per competency. The other had recorded general impressions. The calibration session was the first time either assessor had been asked to share evidence before sharing a rating. It changed how both of them assessed for the remainder of the programme.

Mistake 6: Treating the Assessment Centre as an Event Rather Than a Data Source

The Mistake

The assessment centre runs. Decisions are made. The data is filed. Twelve months later, a new cohort goes through a new assessment centre and produces a new set of data that sits in a separate folder. No one has tracked how the candidates who passed performed in role. No one has compared the assessment centre ratings to appraisal outcomes, promotion velocity, or retention rates. The methodology is being used as a decision filter, not as an organisational intelligence system.

Why It Happens

Post-assessment tracking requires joining data across HR systems that are often not integrated, and it requires someone to own the analysis. In most organisations, the assessment centre sits under a talent team that does not own the performance management system, and the link is never made.

What It Costs the Organisation

Without criterion validity data, your competency-based assessment programme cannot improve. If your exercises are predictive, you will not know it. If they are not, you will not be able to diagnose why. A 2020 SHRM report found that 71% of organisations using assessment centres had never conducted a formal criterion validity study of their own programme, meaning the budget they spent had no evidence base beyond the face validity of the methodology itself.

THE FIX

Build a minimum viable criterion validity tracking process from your next cohort. At six months post-hire or post-promotion, compare assessment centre ratings (by competency, not just overall) against manager performance ratings on the same competencies. Even a sample of 15 to 20 participants produces directional signal about which exercises and competencies are predicting performance and which are not.
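At its simplest, that comparison is a per-competency Pearson correlation. The sketch below shows the whole computation with invented numbers; the thresholds in the comments echo the guidance in the FAQ at the end of this article:

```python
import numpy as np

# Assessment centre ratings vs. 6-month manager ratings on the same
# competencies, one cohort of eight participants (all numbers invented).
cohort = {
    "planning":        ([3, 2, 4, 5, 3, 2, 4, 3], [3, 3, 4, 5, 3, 2, 4, 4]),
    "influencing":     ([4, 3, 5, 2, 3, 4, 2, 3], [2, 4, 3, 3, 2, 3, 4, 2]),
    "decision_making": ([2, 4, 3, 5, 2, 3, 4, 4], [3, 4, 3, 4, 2, 3, 5, 4]),
}

for competency, (ac, performance) in cohort.items():
    r = np.corrcoef(ac, performance)[0, 1]
    # Rough reading: above ~0.35 is directionally useful; below ~0.25
    # suggests the exercise is not measuring what you think it measures.
    print(f"{competency}: r = {r:.2f}")
```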

At programme level, track: assessment rating by competency, 6-month and 12-month performance rating, retention at 18 months, and promotion velocity. This takes approximately four hours per cohort to compile and produces the organisational case for the programme’s continued investment or its redesign.

Real-world scenario: A retail business had run an annual assessment centre for store manager selection for four years. When a new HR director requested a validity review, the analysis revealed that the group discussion exercise, which was weighted most heavily in the wash-up, had a criterion correlation of 0.21 with 12-month performance. The in-tray exercise, which had been treated as supplementary, correlated at 0.48. The programme was redesigned in one quarter. Predictive validity improved measurably in the following cohort.

Is Your Assessment Centre Producing Data You Can Actually Use?

Mistake 7: Skipping Structured Participant Feedback After the Centre

The Mistake

Candidates who did not progress receive a form email. Candidates who did progress receive a call telling them so. Neither group receives structured developmental feedback tied to the competency framework and to specific behaviours observed during the exercises. The assessment centre was, from the participant’s perspective, a black box: they went in, something happened, and a decision came out.

Why It Happens

Delivering structured feedback is time-consuming, and after the wash-up meeting, the HR team’s attention moves to the next programme. Feedback is scheduled, then rescheduled, then compressed into a brief conversation that fails to reference the behavioural evidence at all. For internal assessment centre programmes, the problem is compounded by assessors’ reluctance to give direct developmental feedback to colleagues they will see the following Monday.

What It Costs the Organisation

Research by the Association for Talent Development (2021) found that participants who received structured, competency-level feedback following a development centre reported 34% higher engagement with subsequent development activities compared to those who received outcome-only feedback. For assessment centres used in development contexts, skipping structured feedback effectively halves the ROI of the programme.

THE FIX

Every participant in an assessment centre should receive a written feedback report within 10 working days of the programme. The report must reference specific behavioural evidence from the exercises (anonymised as to assessor), ratings by competency with the rating scale anchors, and two to three developmental actions linked to the competency framework.

For internal programmes, separate the feedback delivery from the decision communication. The feedback conversation should not open with the outcome: it should open with the developmental narrative and arrive at the outcome as a conclusion of that narrative. This protects the integrity of the feedback and reduces the likelihood that candidates disengage as soon as they hear the outcome.

Able Ventures builds structured post-centre feedback report templates into every assessment centre programme design as a non-negotiable deliverable, not an optional service.

Real-world scenario: A pharmaceutical company ran a high-potential identification centre with 24 participants. Twelve received structured feedback reports with competency-level data and development recommendations. Twelve received a brief verbal summary. At nine months, the structured feedback group had completed an average of 2.3 development activities each. The verbal summary group had completed 0.7. The cost difference between the two approaches was negligible. The outcome difference was not.

The 7 Mistakes at a Glance: Quick Reference

Use this table in your next assessment centre review meeting to identify where your programme has gaps.

| Mistake | Core Risk | One-Line Fix |
| --- | --- | --- |
| 1. No competency framework anchor | Exercises measure the wrong things | Map exercises to competencies before design begins |
| 2. Undertrained assessors | Halo error and bias undermine ratings | One full-day training + annual calibration |
| 3. One design across all levels | Ceiling/floor effects, unusable data | Build level-specific competency indicators |
| 4. Poor candidate experience | Attrition, employer brand damage | Candidate journey map with committed timelines |
| 5. Unstandardised observation | Inter-rater reliability below 0.6 | BOR sheet per exercise, evidence-first wash-up |
| 6. No criterion validity tracking | Programme cannot improve over time | Track AC ratings vs. 6-month performance ratings |
| 7. No structured feedback to participants | Development ROI is lost | Written feedback report within 10 working days |

What Assessment Centres Look Like When All Seven Mistakes Are Avoided

When a competency framework drives every design decision, when assessors have been trained and calibrated rather than briefed and assumed competent, when exercises are built for the specific role level and context, and when candidates experience a process that respects their time and communicates clearly throughout, the assessment centre produces something rare in talent practice: data you can actually defend.

Not defend in the sense of surviving a legal challenge (though well-run centres do that too), but defend in the sense that when a hiring manager asks why a particular candidate was or was not progressed, you can answer with specific, observed, competency-anchored evidence that does not rely on “the panel felt” or “there was something about them.”

That is the standard these programmes are capable of. The seven mistakes above are the most common reasons they do not reach it. None of them are inevitable. All of them are fixable. The assessment centre design, the assessor training, the candidate experience, and the post-centre data strategy are all within the direct control of the HR team running the programme. Getting them right is not a resource problem. It is a prioritisation and methodology problem, and those are always more tractable than they appear.

How Able Ventures Can Help

Able Ventures works with HR teams and L&D functions that are already running assessment centres and need to bring their programmes to a higher standard of validity, reliability, and defensibility. This is not about starting from scratch. It is about auditing what exists, identifying the specific gaps, and making targeted interventions that produce measurable improvement in programme quality.

Our work covers the full methodology: competency framework development, assessment centre design and exercise build, certified assessor training programmes, live delivery support, and post-centre developmental reporting. We also conduct programme audits for organisations that want an independent view of where their current methodology is losing validity.

If your programme is producing inconsistent data, your assessors are uncertain about how to apply your framework, or your post-centre feedback is not landing with participants, each of those is a diagnostic signal worth investigating before the next cohort runs. The Able Ventures assessment practice provides a direct entry point into that conversation.

Get an Independent Audit of Your Assessment Centre Programme

Frequently Asked Questions

What is the biggest mistake HR teams make when designing an assessment centre?

The most consequential and most common mistake is selecting exercises before defining the competency framework for the target role. When the exercise mix is not explicitly mapped to specific, role-level behavioural competencies, assessors do not know what they are rating, data across exercises is not comparable, and the final wash-up discussion produces decisions based on general impression rather than structured evidence. This single design failure undermines the entire validity advantage that assessment centres have over simpler selection methods.

How much assessor training is actually needed for a competency-based assessment centre?

A minimum of one full training day is required for assessors new to the role. This covers the ORCE methodology, practice with scored observation exercises, independent rating, and calibration against a group standard. For experienced assessors, a half-day annual calibration session before each programme cycle is the minimum to maintain inter-rater reliability. Assessor training is not a one-time event. Assessment skills drift between programmes in the same way that any skill degrades without practice. Organisations that treat the morning briefing as sufficient consistently produce inter-rater reliability coefficients below the 0.6 threshold that makes assessment data scientifically defensible.

How do you ensure consistency across assessors in an assessment centre?

Three practices, used together, produce measurable assessor consistency. First, give every assessor a behaviour observation record (BOR) sheet for each exercise, with the target competencies listed and space for verbatim behavioural evidence rather than evaluative language. Second, require assessors to make their ratings independently before any group discussion begins. Third, run the wash-up meeting on an evidence-first protocol: assessors read their recorded observations aloud before any rating is disclosed. These practices directly address the three main sources of assessor inconsistency: undefined observation targets, premature social influence on ratings, and reliance on memory rather than recorded evidence.

Why is candidate feedback important after an assessment centre, and what should it include?

Structured feedback after an assessment centre serves two purposes that are often treated as separate but are both commercially significant. For development centres, it is the primary mechanism through which the programme produces behavioural change. Research consistently shows that competency-level, evidence-backed feedback produces meaningfully higher engagement with development activities than outcome-only communication. For selection centres, structured feedback is an employer brand investment: candidates who receive substantive developmental feedback after a selection process, regardless of outcome, report significantly higher net promoter scores for the organisation. Feedback reports should reference specific observed behaviours, provide ratings by competency with scale anchors, and include two to three prioritised developmental recommendations.

How can organisations measure whether their assessment centre is actually predicting performance?

The process is called criterion validity tracking, and it does not require a research team to do at a basic level. For each cohort, record assessment centre ratings by competency. At six and twelve months post-hire or post-promotion, collect manager performance ratings on the same competency dimensions. Calculate the correlation between assessment ratings and performance ratings for each competency and for each exercise. Any correlation above 0.35 is directionally useful. Correlations below 0.25 for a specific exercise are a signal that the exercise is not measuring what you think it is measuring. This level of analysis takes approximately four hours per cohort and produces the internal evidence base that most organisations running assessment centres have never built.
