How to scale AI delivery when you can't hire fast enough

Everyone knows the uncomfortable truth: the AI talent market is brutal. You can spend six months trying to hire a Head of AI, make an offer that gets declined, then spend another six months starting over. Meanwhile, your AI roadmap sits gathering dust.

If your 2026 AI plan assumes you'll have a full team in place by Q2, you're probably setting yourself up for disappointment. But here's what I've learned: the organisations that are actually delivering are the ones who've built operating models that work with talent reality, not against it.

Why the traditional approach doesn't work anymore

The standard hiring approach assumes AI talent is available and you just need to find the right people. But in 2026, that assumption is demonstrably wrong. Experienced AI professionals are being actively headhunted and can be extremely selective about where they work.

The organisations adapting fastest are asking different questions:

  • How do we deliver our AI roadmap with the talent we can realistically access?

  • What can we do with a smaller, more senior team plus partners?

  • Where do we absolutely need permanent capability versus where we can use other models?

This is about being strategic in how you access and deploy AI capability.

Three operating models that actually work

I've seen three approaches consistently deliver results for enterprises facing talent constraints. Most successful organisations use some combination of all three rather than betting everything on one model.

Model 1: Small permanent core plus flexible capacity

Instead of building a large in-house team, you hire a small core of senior people (usually 3-5) who set strategy, own governance, make architectural decisions and manage delivery. Then you flex capacity up and down using partners, contractors and other flexible resources, depending on what you're actually building.

This works because you're only competing in the talent market for a handful of senior hires. Your permanent staff focus on the things that genuinely need institutional knowledge and continuity, with everything else delivered through other models.

Which priorities (usually) belong in the core team:

  • Strategic direction and use case prioritisation

  • Governance, risk and compliance oversight

  • Architecture and platform decisions

  • Vendor and partner management

  • Capability building and knowledge transfer

Which priorities can be outsourced:

  • Model development for specific use cases

  • Integration work

  • Initial platform builds

  • Surge capacity for major initiatives

Process: hire 3-5 people for your core team, then flex into partners and contractors as and when your projects demand it. This is the most linear of the three models - it may take a bit longer, but it gives you more control.

Model 2: Embedded partners who transfer capability

Rather than hiring first and then delivering, you partner with specialists who can start delivering immediately while you build your internal capability in parallel.

This works when you've got urgent delivery needs but you're still recruiting. Partners bring proven patterns, experienced teams and momentum while your permanent hires onboard and learn from them. Over time, you transition ownership internally.

The critical success factor is being explicit about capability transfer. This is a deliberate partnership where documentation, training, pair working and knowledge sharing are built into the delivery model.

The results when done properly: organisations deliver their first use cases in weeks rather than months, new hires onboard into live projects rather than theoretical training, and after 6-12 months they've got both delivered use cases and internal capability.

Process: start delivery immediately with a partner and recruit while the work is ongoing. Use the partner to onboard your new hires and gradually transfer capability over time. This approach is generally a little faster, but riskier: it hinges on having an excellent delivery partner and a clear capability transfer plan to execute once you have team members ready to onboard.

Model 3: Hub and spoke with federated delivery

Instead of centralising all AI capability in one team, you build a small central hub that sets standards, provides platforms and tooling, and offers expertise. Then you enable delivery teams across the business to build and deploy AI solutions within that framework.

This works when you've got multiple business units with their own priorities and you can't possibly build a central team large enough to serve everyone. The “hub” provides governance, infrastructure and expertise. The “spokes” (i.e. the business unit teams) do the actual delivery for their areas.

The challenge is capability. Your business unit teams need enough AI literacy to deliver safely and effectively, and that usually means the hub provides training, templates and support - not just governance and infrastructure, as might traditionally be the case.

This model scales well because you're not bottlenecked by central team capacity, but it only works if you've got the right enabling infrastructure and governance framework in place first.

Process: build your core team, then co-develop a roadmap with business units and deliver concurrently. This hinges on your core team having experience in education, enablement and AI literacy, but it allows faster scaling of AI projects across very large organisations.

The skills you actually need

If I'm building an AI team from scratch in 2026, here's what I prioritise (in order):

Senior technical leadership first. Someone who can make architectural decisions, manage technical delivery and bridge between AI and enterprise IT. This person must be your first hire.

Data and ML engineering before data science. You need people who can build robust pipelines, deploy models to production and integrate AI into existing systems. These skills are more often the bottleneck than model development capability.

Product and delivery management. Someone who can translate business needs into technical requirements, manage stakeholder expectations and keep delivery moving. AI projects fail more often from poor delivery management than poor models.

Data science and ML expertise. If you've got strong leadership and engineering, you can supplement data science capability through partnerships more easily than the reverse. Look for expertise in unexpected places.

Governance and risk capability. In highly-regulated industries, you need people who understand both AI and your regulatory environment. Don't assume your data scientists or engineers will naturally have this expertise.

What to do next

This article covers how to use operating models to close AI talent gaps during your AI roadmap planning. If you’d like more detail, including a “micro-assessment” set of questions to bring into planning meetings, sign up for our micro-assessment email series below - you’ll receive 5 weekly emails from me (Mark) or my Co-Founder, Ben, to support your 2026 AI planning in its entirety, with operating models and talent covered in week 5.

Want to assess your governance framework against 2026 delivery requirements? We’ve built this 40-question planning checklist to de-risk your entire AI roadmap for 2026, governance included. Download here.

If you want to talk through your specific situation, my DMs are always open on LinkedIn. I read and respond to every message.
