Why your SDLC is slowing down AI delivery (and what to do about it)

We’ve all seen it before: a team gets budget approved for an ambitious AI roadmap, but everything grinds to a halt at the SDLC.

Projects stretch into months and simple experiments require full development cycles, with more time spent navigating processes than actually delivering value. 

The problem is that most organisations are trying to deliver AI using development processes designed for traditional applications. 

If this sounds familiar, here's what's actually slowing you down and what to do about it.

Your current system isn’t built for AI

Your traditional SDLC was designed for certainty, moving in a straight line from requirements to deployment. But AI development starts with uncertainty, proceeds through experimentation and never really ends because models need ongoing monitoring, retraining and validation even after deployment.

Where are the bottlenecks?

A team identifies a promising use case, like using AI agents to automate parts of the customer onboarding process. Everyone agrees it has business value, and it's submitted through the standard SDLC.

The SDLC process begins. First, they need to write comprehensive requirements documentation. But they don't actually know all the requirements yet because they haven't experimented with the technology. So they make their best guess, knowing it'll change.

Then it goes through architecture review, which takes three weeks because the reviewers are used to evaluating traditional software and don't have a framework for assessing AI use cases. They ask questions about deterministic behaviour that don't really apply to probabilistic models.

Next, infrastructure provisioning. Two more weeks because the standard environments aren't set up for model training and the team needs special approvals for GPU compute.

Eventually, they get to actually build something. The model works in development but performs differently in production because the data distributions are different. Now they need to go back through the SDLC to make changes.

Six months later, they've delivered something. But it needs ongoing monitoring and retraining, which nobody planned for because the SDLC treats deployment as "done."

Here’s what you need to do.

The four changes to create an AI-enabled SDLC

What you need is to adapt your SDLC to accommodate the unique characteristics of AI development. Here are the four changes that make the biggest difference, while letting you keep the governance, testing and deployment processes that already work (especially if you're in a highly regulated industry):

1. Create a fast path for experimentation

The organisations moving quickly have carved out space for rapid experimentation outside the full SDLC. Small teams can spin up proof-of-concepts in 2-4 weeks with minimal budget and governance to validate whether an approach works.

If the experiment shows promise, it graduates into the formal development process. If it doesn't work, you've stopped quickly without wasting months in full development.

In practice, this means creating lightweight approval processes for low-risk experimentation, providing teams with pre-approved sandbox environments and setting clear criteria for when an experiment graduates to full development.
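Those graduation criteria work best when they're agreed and written down before the experiment starts. As a minimal sketch (the metric names and thresholds below are illustrative assumptions, not prescriptions), a gate might look like:

```python
# Hypothetical graduation gate for a sandboxed experiment.
# Field names and thresholds are illustrative assumptions.

GRADUATION_CRITERIA = {
    "min_accuracy": 0.85,       # quality bar agreed with the business up front
    "max_unit_cost_usd": 0.05,  # cost per prediction at expected volume
    "max_latency_ms": 500,      # acceptable response time for the use case
}

def should_graduate(results: dict) -> bool:
    """Return True if the experiment clears every agreed threshold."""
    return (
        results["accuracy"] >= GRADUATION_CRITERIA["min_accuracy"]
        and results["unit_cost_usd"] <= GRADUATION_CRITERIA["max_unit_cost_usd"]
        and results["latency_ms"] <= GRADUATION_CRITERIA["max_latency_ms"]
    )
```

The point is less the code than the discipline: a pass means the experiment enters the formal process; a fail means it stops cleanly, with the criteria on record.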

2. Build in iterative development with clear decision points

You can't accurately estimate how long it'll take to train a model that meets your performance requirements, because data quality issues inevitably appear halfway through and models that worked perfectly in development perform differently in production.

Embrace iterative approaches, with timeboxed experimentation phases, clear go/no-go decision points based on model performance and plans for multiple iterations.

Build your process around learning and adapting rather than following a fixed plan.
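A go/no-go decision point at the end of each timebox can be made explicit. This sketch assumes a single score per iteration and an illustrative "plateau" threshold; your metrics and cut-offs will differ:

```python
# Illustrative end-of-timebox decision. Thresholds are assumptions.

def iteration_decision(scores: list[float], target: float,
                       min_gain: float = 0.01) -> str:
    """Decide at the end of a timebox: ship, stop, or fund another iteration."""
    if scores[-1] >= target:
        return "go"        # performance target met: move to formal delivery
    if len(scores) >= 2 and scores[-1] - scores[-2] < min_gain:
        return "no-go"     # progress has plateaued: stop rather than drift on
    return "iterate"       # still improving: fund one more timebox
```

Encoding the decision this way forces the conversation about targets and stopping conditions to happen before the work starts, not after months of sunk cost.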

3. Treat deployment as the beginning, not the end

With AI, deployment is when the real work starts: models need ongoing monitoring for performance degradation, retraining as data distributions shift and continuous validation to ensure they're still working as intended. Without these considerations, you're creating technical debt and compliance risk. You need processes for ongoing model management (monitoring, retraining, versioning, rollback, decommissioning).

Build maintenance into project plans and budgets from the start, assign clear ownership for post-deployment model management and create runbooks for common maintenance tasks like retraining and rollback.
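As one concrete example of what post-deployment monitoring looks like, here is a deliberately minimal drift check: it compares a live feature's mean against the training baseline. The z-score threshold is an assumption for the sketch; production systems typically use per-feature tests such as PSI or Kolmogorov-Smirnov instead:

```python
import statistics

# Minimal drift check comparing a live feature against its training baseline.
# The z-score threshold is an illustrative assumption.

def drifted(baseline: list[float], live: list[float],
            z_threshold: float = 3.0) -> bool:
    """Flag drift when the live mean sits far outside the training distribution."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1e-9  # guard against zero variance
    z = abs(statistics.mean(live) - mu) / sigma
    return z > z_threshold
```

A drift alert like this should trigger the retraining runbook, not an ad-hoc fix, which is exactly why that runbook needs to exist before deployment.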

4. Adapt testing for AI-specific requirements

AI models need to be tested for accuracy, bias, robustness and performance across different scenarios. In highly-regulated industries, they need documentation that proves they work as intended.

Your SDLC needs to accommodate these additional testing requirements without becoming a bottleneck, with AI-specific testing and validation in the development process from the start.
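One practical technique for the bias part of that testing is to report accuracy per slice (region, segment, demographic group) rather than a single aggregate, so a weak slice can't hide behind a strong average. A minimal sketch, with illustrative field names:

```python
# Sketch of sliced accuracy checks. Each row is assumed to carry a
# prediction, a ground-truth label, and a slice name (all illustrative).

def accuracy(preds: list[int], labels: list[int]) -> float:
    """Fraction of predictions that match the labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def sliced_accuracy(rows: list[dict]) -> dict:
    """Accuracy per slice, e.g. per region or customer segment."""
    groups: dict = {}
    for row in rows:
        groups.setdefault(row["slice"], []).append(row)
    return {
        name: accuracy([r["pred"] for r in rs], [r["label"] for r in rs])
        for name, rs in groups.items()
    }
```

In a regulated context, the per-slice report doubles as evidence for the documentation requirement: it shows the model was validated across scenarios, not just on average.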

The broader benefits of an AI-enabled SDLC

  • You can effectively execute your AI roadmap in-house, or “build with” a partner.

  • You can enable rapid delivery of custom solutions.

  • You can innovate quickly through sandboxed experiments of new use cases and tools.

  • You only apply the most rigorous governance processes to the highest-risk use cases (which prevents low-risk use cases becoming bottlenecked).

What to do next

This article covers the four highest-impact changes you can make to adapt your SDLC to be AI-enabled. If you'd like more detail, including a "micro-assessment" set of questions to bring into planning meetings, sign up for our micro-assessment email series below. You'll receive 5 weekly emails from me (Mark) or my Co-Founder, Ben, to support your 2026 AI planning in its entirety, including AI-enabled SDLCs in week 2.

If you want to talk through your specific situation, my DMs are always open on LinkedIn. I read and respond to every message.
