IT Partnership 101: Why People Readiness Leaders and IT Must Be Best Friends — Not Frenemies
On the 4th floor, the People Readiness team is designing a beautiful AI adoption program. Change management workshops. Champion networks. Communication plans. Skill-building curricula. It’s thoughtful, well-researched, and completely disconnected from what IT is actually deploying.
On the 7th floor, the IT team is evaluating platforms, building integrations, hardening security, and rolling out tools. Their deployment timeline doesn’t mention training. Their architecture decisions don’t account for how humans will actually use the systems. And nobody told them about the champion network that’s supposed to launch the same week as their infrastructure migration.
Both teams are competent. Both teams care. But they’re not working together.
Why the Partnership Gap Exists
IT and People Readiness teams avoid each other because they speak different languages, operate on different timelines, and are measured on different outcomes.
The Language Gap
IT talks about APIs, latency, throughput, security protocols, data engineering and deployment pipelines. People Readiness talks about change curves, adoption metrics, resistance patterns, and communication cadences. When these teams sit in a room together, they often spend the first hour just trying to understand what the other side means.
Language gaps create assumption gaps. IT assumes that “deployment” means the work is done. People Readiness assumes that “training” means people will actually use the tool. Both assumptions are wrong.
The Timeline Gap
IT operates on sprint cycles — two-week increments with clear deliverables. People Readiness operates on change cycles — longer arcs of awareness, understanding, acceptance, and adoption. When IT says “we’ll have this ready in six weeks,” People Readiness hears “we have six weeks to prepare an entire organization for a behavioral shift.” Those are different six weeks.
The Measurement Gap
IT is measured on uptime, performance, security compliance, and delivery speed. People Readiness is measured on adoption rates, satisfaction scores, and capability assessments. Neither team’s KPIs include the other team’s outcomes. So neither team has a structural incentive to collaborate.
The Cost of the Disconnect
The consequences of the IT-People Readiness disconnect are predictable and expensive:
- Shadow IT multiplies. When the official tools are deployed without adequate training, users find workarounds. They use unauthorized AI tools, create security vulnerabilities, and fragment the organization’s AI ecosystem.
- Adoption stalls at 15-20%. IT celebrates a successful deployment. Two months later, usage data shows that 80% of licensed users have tried the tool once and abandoned it. The problem isn’t the technology — it’s that nobody taught people how to integrate it into their actual workflows.
- Champion networks have nothing to champion. People Readiness builds a beautiful champion network. But the champions don’t have early access to the tools, don’t understand the technical constraints, and can’t answer basic questions about why things work the way they do. They lose credibility fast.
- Security and governance collide with adoption. IT implements security controls that make perfect technical sense but create terrible user experiences. People Readiness didn’t know about the constraints until launch day. Now they’re explaining to frustrated users why the AI tool requires seven authentication steps.
The Partnership Framework: Five Structural Fixes
Telling IT and People Readiness to “collaborate more”? Good luck with that. You need structural mechanisms that make partnership mandatory, not optional.
Fix 1: Shared Planning Cadence
IT and People Readiness should share a single deployment calendar — not two separate calendars that occasionally sync. Every AI deployment should have both a technical milestone track and a readiness milestone track, visible to both teams, with dependencies explicitly mapped.
Practically, this means a weekly 30-minute sync between the IT deployment lead and the People Readiness lead for every active AI initiative. Not a monthly steering committee. A weekly operational sync where both sides share what’s changed, what’s blocked, and what’s coming.
Fix 2: Embedded Liaison Roles
Assign someone from People Readiness to sit in IT sprint reviews. Assign someone from IT to sit in People Readiness planning sessions. These aren’t permanent transfers but liaison roles, usually 2-4 hours per week. Their job is to translate between teams and catch disconnects before they become disasters.
The liaison doesn’t need to be an expert in both domains. They need to be curious, organized, and willing to ask “stupid” questions. The best liaisons are the ones who say, “Wait — when you say ‘deployment,’ do you mean the tool is technically available, or that users can actually do something useful with it?”
Fix 3: Joint User Journey Mapping
Before any AI tool deploys, IT and People Readiness should jointly map the user journey — from the moment a user first hears about the tool to the moment they’re using it independently and effectively.
This mapping reveals gaps that neither team would catch alone:
- IT discovers that their planned SSO integration creates a confusing experience for users in the field office
- People Readiness discovers that their training timeline assumes tool access two weeks before IT can actually provide it
- Both teams discover a gap in the “first 48 hours” experience — the critical window where a user decides whether this tool is worth their time
Fix 4: Shared Metrics Dashboard
Create a single dashboard that combines IT metrics (uptime, performance, error rates) with People Readiness metrics (adoption rate, usage depth, support ticket themes). Both teams look at the same data. Both teams are accountable for the complete picture.
When IT sees that their tool has 99.9% uptime but 22% adoption, they can’t declare victory and move on. When People Readiness sees high adoption but a spike in error-related support tickets, they can’t blame training gaps alone. The shared dashboard forces shared accountability.
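The logic of a shared dashboard row can be sketched in a few lines. Everything below is illustrative, not part of the playbook: the metric names, threshold values, and the `shared_dashboard_row` helper are hypothetical stand-ins for whatever your monitoring and analytics systems actually report.

```python
def shared_dashboard_row(it_metrics: dict, readiness_metrics: dict) -> dict:
    """Combine IT and People Readiness metrics into a single view.

    Metric names and thresholds are hypothetical examples, not a standard.
    """
    row = {**it_metrics, **readiness_metrics}
    # Flag the classic failure mode described above: healthy infrastructure,
    # stalled adoption. Thresholds here are illustrative only.
    row["healthy_but_unused"] = (
        it_metrics["uptime_pct"] >= 99.0
        and readiness_metrics["adoption_pct"] < 30.0
    )
    return row

row = shared_dashboard_row(
    {"uptime_pct": 99.9, "error_rate_pct": 0.4},
    {"adoption_pct": 22.0, "usage_depth": "tried once, abandoned"},
)
print(row["healthy_but_unused"])  # → True
```

The design point is simply that both teams’ numbers live in one record, so a “99.9% uptime, 22% adoption” situation surfaces as a single flag neither team can ignore.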
Fix 5: Joint Retrospectives
After every AI deployment, run a joint retrospective. Not separate retrospectives where each team analyzes its own performance. A single session where both teams examine what worked, what didn’t, and what they’ll do differently next time.
The most valuable output of a joint retrospective isn’t the action items; it’s the relationship. When IT and People Readiness teams regularly sit together, examine failures honestly, and build solutions together, they stop being two teams and start being one team with two specializations.
The Conversation Starter
If you’re reading this and thinking “we need to fix this,” here’s your first move. It’s deliberately simple because the hardest part isn’t the framework — it’s starting the conversation.
Schedule a coffee chat with your counterpart on the other team. If you’re in People Readiness, find the IT lead for your biggest AI initiative. If you’re in IT, find whoever is responsible for change management or training around AI.
Bring one visual: the Five Pillars of People Readiness. Walk through each pillar together and ask two questions:
- “Which of these pillars are you already owning?”
- “Which ones are falling through the cracks between our teams?”
That second question is where the real conversation starts. The pillars that fall through the cracks are your highest-risk areas — and your biggest opportunities for partnership.
Partnership is Structural
Organizations that figure this out — that build the structural mechanisms for IT and People Readiness to operate as genuine partners — will adopt AI faster, with less waste, less frustration, and more sustainable results than those that operate in separate silos.
It doesn’t require a reorganization. It doesn’t require a new budget. It requires five structural fixes and the willingness to have one honest conversation.
This is Post 7 of 365 in the People Readiness Playbook.