Your AI Initiative Isn’t Stuck Where You Think It Is

You’ve run the pilots. You’ve bought the platform. You’ve trained a cohort of early adopters. And six months in, someone in a leadership meeting says the thing nobody wants to hear:

“Why aren’t more people using it?”

The room goes quiet. Then the usual suspects emerge. The innovation team says people need more training. IT says the tools need better integration. HR says change management needs more budget. Legal says the governance framework isn’t ready.

Everyone is prescribing solutions — and nobody has agreed on what’s actually broken.

I see this exact scene in every AI transformation I work with. Organizations stall not because they lack tools, budget, or executive support. They stall because they’re treating symptoms instead of diagnosing the system. The team that can’t get Legal approval doesn’t have a Legal problem — they have a governance legibility problem. The department where nobody experiments doesn’t have a motivation problem — they have a psychological safety problem.

The difference between organizations that push through this wall and those that stay stuck? A diagnostic that tells them where to look.

The Five Pillars: A Diagnostic, Not a Scorecard

The Five Pillars Framework isn’t a maturity model you fill out once and file away. It’s a diagnostic — a way to pinpoint which of five interdependent dimensions is holding your AI adoption hostage, so you stop treating symptoms and start fixing the system.

Every AI initiative — whether it’s a single IC experiment or an enterprise-wide transformation — depends on five pillars. Weaken one, and the others can’t compensate.

Pillar 1: Business Value & Strategy

Prove undeniable value in work leadership already cares about, fast enough that momentum outruns skepticism.

This is where most organizations think they’re strong — and where most are actually weakest. They’ve picked pilots. They’ve got an executive sponsor. They have a strategy slide deck.

But here’s the test: can every person working on an AI initiative articulate, in one sentence, what business outcome they’re proving? Not “we’re exploring AI for customer service.” Something specific: “We’re proving that AI-augmented triage can reduce first-response time by 40% for priority-one support tickets.”

If your teams can’t pass that test, you don’t have a strategy problem. You have a clarity problem. And unclear pilots don’t generate the evidence you need to scale.

Pillar 2: Data Foundation

Know exactly what data is safe to use, where it lives, and how to prepare it without triggering policy violations.

This is the silent killer. Teams want to experiment. They pull data into an AI tool. Nobody told them which data is safe and which will trigger a compliance incident.

The organizations that move fastest on AI aren’t the ones with the most data. They’re the ones where every team member knows the answer to three questions: What data can I use freely? What data requires approval? What data is off-limits, full stop?

Without clear data boundaries, your people face a binary choice: experiment recklessly or don’t experiment at all. Most choose the second. The ones who choose the first create the risk events that give Legal ammunition to shut everything down.

Pillar 3: GenAI Infrastructure — People, Process, Tools

Build the foundation for sustainable adoption: defined roles, experiment rules, approved platforms, and rituals that capture learning.

Notice the order: people, process, tools. Not the other way around.

Infrastructure isn’t just “which platform did we buy.” It’s the entire operating environment that makes experimentation repeatable and scalable. Who approves a new use case? Where do teams share what they’ve learned? What happens when an experiment fails?

Most organizations nail the tools part — they’ve licensed a platform, they’ve got an API. But they have no experiment protocol, no learning capture system, and no defined roles for who does what when a pilot moves to production. The tool works. The operating system around it doesn’t exist.

Pillar 4: Governance & Security

Set boundaries your teams can actually follow, plus incident response and audit trails that make risk observable and manageable.

Here’s the governance paradox: too little, and your organization is exposed. Too much, and nobody experiments at all.

The organizations winning at AI governance have stopped trying to write a comprehensive policy that covers every scenario. Instead, they’ve built something more practical: clear boundaries, a detection system for when things go wrong, and a response protocol fast enough to contain issues without shutting down the program.

Governance isn’t the enemy of innovation. Bad governance is. And “bad” doesn’t just mean too restrictive — it also means too vague. If your governance framework is a 40-page document that nobody reads, you don’t have governance. You have a liability shield that won’t hold up when something breaks.

Pillar 5: People Readiness

Create the mindset shifts, skills, and psychological safety that turn fear into action.

This is the pillar that underpins everything. But “People Readiness” isn’t a vague aspiration. It has three concrete dimensions:

  • Capability: Can your people evaluate AI outputs, integrate AI into workflows, and exercise judgment about when to trust and when to override?

  • Mindset: Do they believe AI makes them more valuable — or are they quietly convinced it makes them expendable?

  • Psychological safety: Can they experiment, fail, and share what they learned — without career risk?

Training programs address the first dimension. Almost nobody addresses the second and third. That’s why adoption stalls — the easy adopters jumped in, and everyone else is waiting for proof that it’s safe to try.

Why You Can’t Fix One Pillar at a Time

Here’s what trips up even sophisticated organizations: they diagnose the right pillar, then try to fix it in isolation. It doesn’t work. The pillars are a system — and in a system, the dependencies matter more than the components.

Consider the dependency chain that actually governs your AI transformation:

  • ICs can’t experiment safely without manager approval and clear data boundaries (Pillars 2 and 4)
  • Managers can’t scale pilots without executive funding and governance frameworks (Pillars 1 and 4)
  • Executives can’t allocate capital without IC-generated evidence of ROI (Pillar 1)
  • Legal can’t approve unless someone makes risk legible (Pillar 4)
  • IT can’t build infrastructure unless business articulates requirements (Pillar 3)

Every pillar depends on at least two others. This is why the “just pick a pilot and go” approach produces such disappointing results. You can run a brilliant pilot (Pillar 1) on data nobody vetted (Pillar 2 failure), with no governance protocol (Pillar 4 failure), and wonder why Legal killed it before it reached production.

Or you can build an airtight governance framework (Pillar 4) and a world-class data classification system (Pillar 2) — and watch nothing happen because nobody addressed the fear and resistance sitting inside Pillar 5.

The Five Pillars aren’t a checklist. They’re a system. And like any system, the weakest link determines the overall performance.

The Diagnostic: Find What’s Actually Stuck

The power of the Five Pillars isn’t in knowing what they are — it’s in using them to figure out what’s actually stuck.

Next time your AI initiative hits a wall, run it through the framework:

  • If adoption is low despite good tools — check Pillar 5. Your people problem is a readiness problem, not a training problem.

  • If pilots succeed but don’t scale — check Pillars 1 and 3. You may have proven value in a prototype but have no operating system to move it to production.

  • If Legal keeps blocking progress — check Pillar 4. You haven’t made risk legible enough for them to say yes.

  • If teams are experimenting in secret — check Pillar 2. They don’t know what’s safe, so they’re hiding what they’re doing.

  • If executives are losing patience — check Pillar 1. The evidence loop from experiments to business value isn’t closing fast enough.

Every stuck AI program has a pillar problem. Usually more than one. The diagnostic tells you where to focus — and, just as importantly, where not to waste energy.
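The symptom-to-pillar mapping above can be sketched as a simple lookup. This is purely illustrative — the symptom labels and the `diagnose` helper are invented for this example, not part of the framework itself:

```python
# Hypothetical mapping of observed symptoms to the pillars they implicate,
# mirroring the diagnostic list above. Labels are illustrative only.
SYMPTOM_TO_PILLARS = {
    "low adoption despite good tools": [5],
    "pilots succeed but don't scale": [1, 3],
    "legal keeps blocking progress": [4],
    "teams experimenting in secret": [2],
    "executives losing patience": [1],
}

def diagnose(symptoms):
    """Return the sorted set of pillars implicated by the observed symptoms."""
    pillars = set()
    for symptom in symptoms:
        pillars.update(SYMPTOM_TO_PILLARS.get(symptom, []))
    return sorted(pillars)

print(diagnose(["pilots succeed but don't scale",
                "legal keeps blocking progress"]))
# [1, 3, 4]
```

Note that a single symptom can implicate more than one pillar, and multiple symptoms usually overlap — which is the point: the diagnostic narrows where to look, but the pillars still have to be fixed as a system.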

Take Action

Run the Five Pillars diagnostic on your current AI initiative. For each pillar, answer one question:

  • Business Value & Strategy: Can every person on the team state the specific business outcome this initiative is proving?

  • Data Foundation: Does every team member know which data is safe to use, which requires approval, and which is off-limits?

  • GenAI Infrastructure: Do you have a defined process for how experiments become production workflows — or are you figuring it out as you go?

  • Governance & Security: If something goes wrong with an AI output tomorrow, does your team know exactly what to do?

  • People Readiness: When did someone in leadership last address — directly, not in a slide — what AI means for people’s careers and professional identity?

Any “no” is a pillar that needs attention before you invest another dollar in tools or training.

This is Post 10 of the People Readiness Playbook.

Disclaimer: All company examples, case studies, and references cited in this article are based solely on publicly available information. The author has no affiliation, partnership, or commercial relationship with any companies mentioned, nor does this content imply any endorsement or association on behalf of the author’s employer or clients. All opinions expressed are the author’s own.