Tag: agent training

  • What is the Kirkpatrick Model? A Practical Guide for Contact Center Training

    Most contact centers believe their training is effective, but how many actually measure it?

    We might evaluate completion—agents complete onboarding, pass quizzes, get certified—but are we measuring true readiness? Once agents hit the floor, are they confident and ready to take difficult calls? 

    This gap isn’t solved by more training, but by understanding what kind of training (and what kind of measurement) actually translates into real performance improvement and readiness.

    That’s what the Kirkpatrick Model, used intelligently, is designed to do.

    What Is the Kirkpatrick Model?

    The Kirkpatrick Model has been around since the 1950s and remains one of the most widely used frameworks for evaluating the effectiveness of training programs.

    It breaks down learning into four levels:

    • Reaction: Did agents enjoy the training?
    • Learning: Did they understand the material?
    • Behavior: Did they apply the training on the job?
    • Results: Did the training drive business outcomes?

    It’s a simple and intuitive model, but easy to misapply, especially in fast-paced environments like contact centers. 

    How the Kirkpatrick Model is Applied in Contact Centers

    Level 1: Reaction

    In a contact center, Level 1 of the Kirkpatrick Model is usually evaluated through post-training surveys that ask agents to report their experience of a given training program. Questions like “Was this helpful?” or “Do you feel confident with your knowledge of this subject?” help evaluate whether or not agents were engaged during training. 

    But positive feedback doesn’t always predict performance. An agent can enjoy and actively participate during training and still struggle tremendously on live calls.

    Level 2: Learning

    Level 2 assesses whether agents understood the material provided during a training session. Most contact centers evaluate this through knowledge checks, certifications, exams, and role plays.

    At this stage, most agents can recite the right information. But knowing what to do isn’t the same as doing it under pressure. Level 2 is where most training programs begin to break down.

    Level 3: Behavior

    Level 3 of the Kirkpatrick Model assesses whether agents are applying what they learned during real interactions. In a contact center, this includes behaviors like proper objection handling, tool navigation, and soft skill demonstration.

    Have you ever had an agent ace training but struggle and lose their cool on the floor? If training isn’t converting to real behavior change, that is a symptom that something has gone wrong between Level 2 and Level 3.

    Level 4: Results

    Level 4 asks whether agent behavior is actually driving business outcomes. This level is what operational leadership ultimately cares about because it encompasses core business metrics like:

    • Average handle time (AHT)
    • First call resolution (FCR)
    • Conversion rate and revenue
    • Customer satisfaction (CSAT/NPS)
    • Renewals and churn

    These results are downstream of Behavior (Level 3), which in turn depends on strong Reaction (Level 1) and Learning (Level 2) results.

    If you can’t clearly see or influence your Level 3 behaviors, Level 4 becomes difficult to diagnose or fix.

    Where Most Contact Centers Get Stuck

    Here’s what the gap between Level 2 and Level 3 of the Kirkpatrick Model looks like:

    • An agent knows their script but forgets it during an intense call
    • An agent passes onboarding with flying colors but escalates too many calls
    • An agent knows your product inside and out but struggles with objections
    • An agent sounds confident during roleplays but freezes under pressure

    By the time this gap is identified, underperformance has already impacted the customer experience—and the agent experience, too. 

    A Better Way to Think About the Kirkpatrick Model

    The Kirkpatrick Model is often treated as an evaluation framework, when it’s really a design framework. The best training programs don’t start from content, but rather with Level 4: the business outcomes they want to drive. Then trainers work backward to understand how each Level has to operate in order to support those outcomes. 

    Ask yourself:

    • Level 4: What business outcomes are we trying to drive?
    • Level 3: Which agent behaviors lead to those outcomes?
    • Level 2: What do agents need to know and practice in order to confidently and consistently perform those behaviors?
    • Level 1: How should agents best learn that material?

    Let’s stop assuming that training completion means agents are ready, and start looking at the downstream performance metrics that matter. 

    Why Effective Training Matters More Than Ever

    AI and automation have not just raised the bar for human agents, but built an entirely new ladder. When routine interactions are increasingly handled by AI tools and self service, the conversations left for human agents become the hardest and most nuanced.

    There’s less room for error, and training matters more than ever. Learning design has to adapt alongside this new call mix; static certifications and scripted roleplays simply won’t prepare agents for the reality of being on the floor, and that gap between Levels 2 and 3 risks eating away at your bottom line. 

    Tools like TrueCX enable your agents to practice common scenarios and edge cases alike with Intelligent Virtual Customers (IVCs) that sound, respond, and object like your real customers. This not only lets agents get their sea legs on the phone, but lets you measure behavior change (Level 3) before real customers are at risk. 

    The Kirkpatrick Model has been around for decades, and its core tenets remain highly relevant and practical. The challenge is applying it consistently, thoughtfully, and with attention to the failure points between levels.

    Those gaps may be your greatest training obstacles, but they’re also your greatest opportunities for growth and real results. 

  • How to Stop the Self-Fulfilling Prophecy of Contact Center Agent Churn

    It’s Vivian’s first live shift at her contact center job. Her company’s IVR and AI tools have already absorbed the easy calls, leaving her with escalations, edge cases, and emotionally charged situations. 

    Frustrated customer after frustrated customer calls in: one customer had their power shut off; one had a billing dispute that had already failed twice; and another had already repeated their story three times before reaching a human.

    Vivian isn’t expected to perform well on her first day. And she isn’t set up to do so, either. The unspoken message is clear: let’s see if she makes it. 

    We call this “ramp,” but it’s more like throwing someone in the deep end and seeing if they sink or swim. 

    “On the first day of my first call, I had everything ready 30 minutes beforehand: connection, cubicle, headset, paper for notes… but I was so nervous about not knowing what would happen that just five minutes after logging in, I threw up all over the place.”

    — r/CallCenterWorkers on Reddit

    When we design the first 90 days on the job as a probation period instead of a support and incubation period, churn risks becoming a self-fulfilling prophecy. 

    The Signal We Send Agents on Day One

    At most contact centers, new agents have lower performance expectations, and aren’t eligible for bonuses during their first 90 days. 

    With no incentive to succeed, new agents absorb a powerful narrative: you’re not part of the team yet. We expect you to fail.

    When bonus incentives are delayed, one of your most powerful motivators is removed during the highest-effort, most stressful period of the job.

    Why should Vivian go above and beyond if she’s not going to be rewarded? Why shouldn’t she just quit, if her company doesn’t believe in her anyway? 

    How the Prophecy Becomes Reality

    Here’s how Vivian’s first 90 days go:

    • She struggles on some of her harder calls
    • Her mistakes are public and impact the company’s bottom line
    • Her confidence is eroded and her stress level is higher
    • This leads to more mistakes, more scrutiny, and more emotional fatigue
    • She doesn’t feel like her company cares about her development, performance, or whether she stays or goes
    • So she quits before the 90-day mark

    The first 90 days on the floor are when habits form; they determine whether an agent sees their job as a career path or a temporary stopover. 

    And once churn becomes normalized during an agent’s first 90 days, it reshapes a contact center’s entire culture. Supervisors expect attrition; operations teams bake it into their forecasts; and hiring plans are built up to account for it. Performance ceilings lower, and failure becomes the norm. 

    “I remember that I started half an hour earlier than the rest of my team and my manager didn’t get in until 1 1/2 hours into my shift. We had a support line but they too weren’t open right away. It was frustrating, being new on the phone and not having any support. I ended up absorbing info on the job like crazy because otherwise I wouldn’t get any help.”

    — r/CallCenterWorkers on Reddit

    Given the outsized cost of churn, contact centers need to question those norms more critically. Consider:

    • Recruiting and training costs
    • Lost productivity during ramp
    • Supervisor time spent on coaching and training
    • Forecast instability during high-volume periods

    Ramp time and churn are not just HR metrics – they’re operational efficiency metrics. 

    Calculate The Cost of Treating Ramp Like a Trial Period

    Use this simple calculator to estimate the financial impact of early churn during an agent’s ramp period:

    Ramp Cost Calculator

    Estimate the annual cost of treating ramp like a trial period.

    This calculator provides directional estimates only. It does not include secondary costs like QA volatility, supervisor bandwidth, lower CSAT, or scheduling disruption.
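    The arithmetic behind such a calculator is straightforward. Here is a minimal sketch in Python; every input value below is an illustrative assumption, not a benchmark, so substitute your own hiring, churn, and cost figures.

```python
# Directional ramp-cost estimate. All inputs are illustrative assumptions,
# not industry benchmarks; swap in your own numbers.

def ramp_churn_cost(
    new_hires_per_year: int,
    early_churn_rate: float,              # share of hires who leave during ramp
    recruiting_and_training_cost: float,  # cost to recruit and train one hire
    fully_loaded_monthly_cost: float,     # salary + overhead per agent per month
    ramp_months: float,                   # length of the ramp period
    ramp_productivity: float,             # avg output vs. a tenured agent during ramp
) -> float:
    """Annual cost of hires lost during ramp plus productivity lost while ramping."""
    # Cost of replacing agents who churn before finishing ramp
    lost_hires = new_hires_per_year * early_churn_rate
    replacement_cost = lost_hires * recruiting_and_training_cost
    # Wages paid during ramp that don't yet yield full productivity
    productivity_gap = (
        new_hires_per_year * ramp_months
        * fully_loaded_monthly_cost * (1 - ramp_productivity)
    )
    return replacement_cost + productivity_gap

cost = ramp_churn_cost(
    new_hires_per_year=100,
    early_churn_rate=0.30,
    recruiting_and_training_cost=5_000,
    fully_loaded_monthly_cost=4_000,
    ramp_months=3,
    ramp_productivity=0.6,
)
print(f"${cost:,.0f}")
```

    Even with these hypothetical inputs, the estimate lands well into six figures per year, which is why ramp design deserves the same scrutiny as any other operational cost.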

    How to Stop the Cycle

    Breaking the self-fulfilling prophecy of contact center churn doesn’t require a complete overhaul. Consider these four steps:

    1. Align Incentives from Day One

    Think about extending bonus eligibility to new agents during ramp. This signals belief and trust, and early financial wins reinforce effort and resilience.

    2. Redesign Call Exposure

    A new agent shouldn’t experience their first difficult call or escalation live and unprepared. Structured simulations like Intelligent Virtual Customers (IVCs) allow agents to practice calls in true-to-life environments without the pressure of real metrics and customers. 

    3. Measure Readiness, Not Just Completion

    Typical contact center metrics like AHT, FCR, and QA scores are lagging indicators. You need proactive ways to confirm an agent is ready to hit the phones, not reactive ones.

    Some leading indicators to consider measuring include:

    • Objection-handling confidence
    • Comfort with policy and tool navigation
    • Success rate when a call simulation goes off-script
    • Rate of improvement over time, especially on complex calls 

    4. Redefine Ramp

    Shift from viewing an agent’s first 90 days as a trial period to viewing them as an incubation period. Instead of “let’s see if they make it,” ask “how do we make sure they succeed?”

    Agents feel the difference when they are believed in and supported, and they will be more likely to achieve early wins and stay resilient through early losses. 

    The First 90 Days Predict The Next 900

    Contact centers don’t inherently have a churn problem. They have a ramp design problem. 

    When we expect churn, and design policies and cultures that reinforce it, we are creating a self-fulfilling prophecy that leads to heavy operational costs. 

    But when we design for support, readiness, and proficiency, we can achieve the opposite: stability, confidence, and real performance improvement. 

  • 5 Ways AI Has Made Contact Center Onboarding Harder

    Contact center agent onboarding has followed the same arc for decades: start new hires on simple calls and build confidence through repetition and gradual complexity. But with the introduction of AI, that arc is starting to feel unreliable.

    This shift isn’t happening everywhere, and it’s not happening all at once, but it’s happening often enough that onboarding feels harder than it used to for agents and contact center leaders alike.

    The opportunity to warm up on low-risk, simple calls is shrinking, and new agents face complex, emotionally charged conversations and edge cases early and often. This is the time to question long-held assumptions about what onboarding should look like.

    This post breaks down five ways AI is reshaping contact center onboarding, and what teams can do to adapt without sacrificing confidence, performance, or retention.

    Challenge #1: “Easy” Calls Are Disappearing First 

    AI and self-service usually absorb the simplest customer interactions first.

    Balance checks, password resets, shipping status, basic account updates. These were once the lowest rung of the onboarding ladder. They gave new hires repetition, rhythm, and a low-risk way to build confidence before handling more complex situations.

    Although AI adoption is uneven across industries, these entry-level questions are slowly disappearing as AI quietly redirects simple issues away from human agents.

    This means that agents have fewer low-stakes interactions to practice with, and they reach nuanced or complicated conversations sooner – before they feel fully settled into their roles. 

    Industry example

    The first call Ryan receives during his first day on the phones is from a customer whose power was shut off and is worried about losing refrigeration for his insulin.

    The routine questions Ryan practiced during onboarding are now automatically answered by IVR. The calls that reach him are edge cases, escalations, and emotional situations. He technically knows the utility company’s policies, but he hasn’t been able to practice in a low-risk environment and build confidence before things get personal.

    Challenge #2: Early Mistakes Carry More Risk 

    When “easy” calls disappear, so does the margin for error. Trust, compliance, and revenue are impacted – among other key metrics – when avoidable mistakes happen during high-stakes customer conversations.

    Onboarding completion, at face value, doesn’t say much about how an agent will actually perform under real stakes. Now that early performance matters more, teams need better ways to observe, assess, and support agents during onboarding itself. 

    Intelligent Virtual Customers (IVCs) make this possible by letting teams evaluate real performance, behavior, and training gaps before agents ever get on the phone with a live customer.

    Industry example

    Sam finishes his onboarding and passes all of his required knowledge checks. During his first week talking to real customers, he gets overwhelmed and misses an important compliance step. This leads to escalation, manager intervention, and a big confidence hit for Sam.

    In highly regulated industries like finance and healthcare, early mistakes often carry outsized consequences. The goal shouldn’t be to speed up agent time-to-floor, but to ensure agents are truly ready, so that readiness translates into compliance.

    Challenge #3: Confidence Falters Early

    When new agents struggle, it is easy to assume they lack knowledge, skill, or motivation. More often, the issue is overwhelm and cognitive load. 

    As first-call complexity increases, agents have to listen, interpret, decide, and respond under emotional pressure, all while navigating brand new tools, policies, and time constraints. 

    This pressure shows up quickly: agents hesitate mid-call, second-guess themselves, or over-rely on escalation. Stress rises, confidence drops, and what might have been a temporary wobble becomes a pattern. Over time, this can be one of the strongest predictors of early churn. 

    Industry example

    Leia is on back-to-back calls from stranded passengers during a severe storm. She knows her company’s policies, but the emotional pressure, time constraints, and sheer volume of calls slow her down.

    After several highly-emotional conversations, she begins hesitating, putting customers on hold, and escalating issues she knows she could normally resolve on her own – though she isn’t so sure anymore.

    Without regular reinforcement and training, even the most capable agents can start doubting themselves and making avoidable missteps.

    Challenge #4: The Training Ladder Doesn’t Match Reality

    Contact center onboarding programs have traditionally involved learning the basics before progressing toward more complex scenarios. That approach is less relevant now that basic calls are gradually being absorbed by AI at many contact centers, and complexity is the new status quo.

    This is not a training failure; it’s an opportunity to introduce new approaches, tools, and processes, and to train a new generation of flexible, prepared, and confident agents.

    Industry example

    Ray, a new agent, did great on his training scenarios during onboarding. Once on the floor, however, he was met with a mix of edge cases and emotional calls from day one. His reality didn’t match what the training ladder taught him to expect, and his confidence – and the customer experience – suffered as a result.

    Challenge #5: Readiness Signals Haven’t Kept Up 

    Even as customer conversations grow more complex, many onboarding metrics remain designed for a simpler era: completion rates and time-to-floor remain the main indicators of success. 

    While these metrics are easy to track, they don’t actually reflect how prepared an agent is for the calls they’ll face. 

    This gap affects culture, morale, and decision-making:

    • Leaders and tenured agents hesitate to trust new agents
    • Supervisors and managers are asked to make training longer without evidence it will help – or worse, they’re asked to accept a “churn and burn” norm
    • Agents can feel judged by outcomes that don’t reflect their learning curve

    Industry example

    Priya finishes her onboarding on schedule, but during her first week, she struggles to manage troubleshooting, compliance checks, and distressed customers.

    Her performance begins to slip, and escalations increase. Priya is taken off the phones and put back in training, slashing her motivation and morale because readiness was declared too early, using signals that measure completion rather than performance under real conditions.

    AI Can Make Contact Center Onboarding Easier, Too

    The same technologies that have changed the status quo and made onboarding feel harder also have the potential to make it more effective, predictable, and cost-efficient.

    Used intentionally, AI can reduce risk on the floor, and ensure agents are set up for success on day one. 

    The key is redefining readiness. When we have the right tools to adequately assess performance before agents get on calls, AI can become a way to move learning out of live queues and into lower-cost, lower-risk environments. 

    Intelligent Virtual Customers (IVCs), for example, allow agents to simulate real calls with an AI customer to see how they handle pressure, volume, objections, and edge cases before real metrics like CSAT and retention are at stake. 

    The payoff is real: fewer escalations, less agent churn, and a better customer and agent experience. AI gives operations leads a way to teach, measure, and improve readiness without paying for it in real time, with real customers.

  • Day One Readiness: A Practical Checklist for Contact Center Trainers

    An agent’s first day on the phones sets the tone for everything that follows. Confidence. Performance. And even retention. 

    Many companies struggle with the same issue: they confuse contact center agent training completion for true readiness. After completing training, agents may have memorized the material, but they still have no experience handling real conversations in real conditions. 

    That gap between contact center agent training and readiness is where Day One often breaks down.

    From Trained to Ready

    Teams that incorporate realistic, repeatable call practice with Intelligent Virtual Customers (IVCs) tend to see stronger Day One outcomes. 

    When agents can practice realistic conversations in a true-to-life environment without pressure from live customers, they build confidence faster and make fewer avoidable mistakes once they hit the floor.

    Day One Readiness Checklist

    To help learning and development teams assess readiness before agents go live, we put together a short and simple Day One Readiness Checklist.

    It focuses on four areas that help predict early success:

    • Agent Fundamentals: Systems, audio, documentation, and coaching plans are ready before Day One begins.
    • Call Readiness: Agents have practiced and aced real conversations, not just reviewed scripts or completed mock calls.
    • Floor Readiness: Agents know how to put calls on hold, handle escalations, and solve inevitable technical issues.
    • Support in the First 24 Hours: Call center agent training, coaching, feedback, and check-ins are clearly defined.

    The checklist is designed to be saved, shared, and used as a final readiness check. Before agents take their first live call, count how many boxes you can confidently check off:

    • Few boxes checked means high risk. Agents are likely to feel overwhelmed or stressed.
    • A moderate score means agents may survive Day One, but confidence will lag.
    • A strong score means agents are set up to perform and recover, even when things go wrong.
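    If you want to make that scoring concrete, a tiny sketch like the one below works. The checklist doesn’t define exact cutoffs for “few,” “moderate,” and “strong,” so the 50% and 80% thresholds here are illustrative assumptions.

```python
# Maps a Day One Readiness Checklist score to a risk band.
# The 50% / 80% cutoffs are illustrative assumptions, not recommendations.

def readiness_level(boxes_checked: int, total_boxes: int) -> str:
    share = boxes_checked / total_boxes
    if share < 0.5:
        return "high risk"  # agents likely to feel overwhelmed or stressed
    if share < 0.8:
        return "moderate"   # agents may survive Day One, but confidence will lag
    return "strong"         # agents set up to perform and recover

print(readiness_level(boxes_checked=5, total_boxes=12))
print(readiness_level(boxes_checked=11, total_boxes=12))
```

    However you set the thresholds, the point is the same: run the check before the first live call, not after the first escalation.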