From Figma to Functioning Frontend in Four Weeks

How We Used AI to Transform Financial Onboarding

At WeBuild-AI, we talk a lot about building faster and smarter with AI, but talk doesn’t prove anything. So when a leading UK insurance provider came to us and asked us to prove it, we had four weeks to deliver.

The brief was to take the consumer insurance policy onboarding journey, a multi-section, multi-step web form covering everything from personal details to KYC and financial information, and to demonstrate that an AI-augmented development process could move meaningfully faster than traditional methods whilst keeping quality the same, if not improving it. The mobile app had already been built the year before, so our job was to replicate and extend that journey for the web, using AI as a core part of how we worked.

Turning Designs into a Machine-Readable Spec

The first challenge was context. To move fast with AI-assisted development, you need to give the model as much structured information as possible upfront. We had Figma designs, but without a Figma dev seat we could not use the Figma MCP to extract design tokens programmatically, so we did it manually.

We systematically worked through every screen in the Figma files, copying out the question text, field labels, validation messages and supporting copy. The onboarding journey had five sections: About You, Your Identity, Contacting You, Your Finances and Your Account. Each section contained multiple screens, with branching logic depending on the user's answers. We took all of this raw content and used Claude to generate a structured YAML file that encoded the entire form flow, step by step, screen by screen, branch by branch.

That YAML file became the backbone of the project. It was our single source of truth for the application structure and because it was generated with AI assistance from the design artefacts, we could move from design to a working structural skeleton in a fraction of the usual time. Alongside this, we extracted every CSS value we could get hold of: colour tokens, typography scales, spacing, button styles. Even without a dev seat, we built up enough of a design system context to give Claude everything it needed to produce brand-accurate components. 
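To make that concrete, here is a rough sketch of how a YAML spec like this can drive the form flow in the frontend. It is illustrative only: the file name, step ids, field names and the use of js-yaml as the parser are our assumptions for this example, not the actual spec.

```ts
// Minimal sketch of a YAML form-flow spec driving step navigation.
// File name, ids, field names and the js-yaml parser are illustrative assumptions.
import { readFileSync } from "node:fs";
import { load } from "js-yaml";

interface BranchRule {
  goTo: string;    // id of the next step
  when?: string;   // field name the branch depends on (omit for the default route)
  equals?: string; // answer value that triggers this branch
}

interface Step {
  id: string;      // e.g. "about-you/date-of-birth"
  section: string; // e.g. "About You"
  fields: { name: string; label: string; validation?: string }[];
  next: BranchRule[]; // first matching rule wins
}

interface FormFlow {
  sections: string[];
  steps: Step[];
}

// The spec is loaded once and becomes the single source of truth for the flow.
const flow = load(readFileSync("onboarding-flow.yaml", "utf8")) as FormFlow;

// Resolve the next step id from the spec, honouring any branching logic.
function nextStepId(current: Step, answers: Record<string, string>): string {
  const rule = current.next.find((r) => !r.when || answers[r.when] === r.equals);
  if (!rule) throw new Error(`No route defined from step ${current.id}`);
  return rule.goTo;
}
```

Because the branching lives in the spec rather than in component code, changing the journey means editing the YAML, not rewriting the frontend.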

By the end of the first week or so, a substantial amount of the Next.js frontend was complete: a working flow through the form, built from our custom web components, each documented and tested in Storybook.
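For a sense of what that Storybook coverage looked like, here is a minimal story in the standard component story format. The component name and props are placeholders rather than the client's actual design system.

```ts
// Illustrative Storybook story for a hypothetical onboarding form field.
// Component name and props are placeholders, not the real design system.
import type { Meta, StoryObj } from "@storybook/react";
import { TextField } from "./TextField";

const meta: Meta<typeof TextField> = {
  title: "Onboarding/TextField",
  component: TextField,
};
export default meta;

type Story = StoryObj<typeof TextField>;

export const Default: Story = {
  args: { label: "First name", name: "firstName" },
};

export const WithValidationError: Story = {
  args: {
    label: "National Insurance number",
    name: "niNumber",
    error: "Enter a National Insurance number in the correct format",
  },
};
```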

Claude Code in the Development Loop

One of the things that made the biggest difference was not just using Claude to generate one-off code snippets, but integrating it deeply into the development lifecycle. We used Claude Code throughout, and you can see the evidence of this in the git history: the repository contains CLAUDE.md files committed directly alongside the application code. These are instruction files that give Claude persistent context about the codebase architecture, naming conventions and the specific patterns we were following, so that every subsequent interaction built on a shared understanding of the project rather than starting cold.

Two Prototypes, One PoC

In parallel with the form-based onboarding journey, we built a second prototype: a fully conversational, AI-powered chat interface for account opening. This was a deliberate experiment in the spirit of failing fast and learning quickly.

The chat prototype used a LangGraph orchestrator routing between specialised agents: a Product Discovery agent to help users find the right insurance product, a Form Collection agent to gather application data conversationally, an FAQ agent backed by a RAG pipeline with Qdrant as the vector store, and a Clarification agent for handling ambiguous inputs. Amazon Bedrock provided the LLM inference layer, with Langfuse wired in for tracing and observability so we could see exactly what each agent was doing at every step.
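The sketch below shows roughly how that routing pattern looks in LangGraph. It is a simplified illustration: the state shape, node bodies and routing labels are our assumptions, and the real agents called Bedrock models and the Qdrant-backed RAG pipeline rather than returning placeholder replies.

```ts
// Simplified sketch of a LangGraph orchestrator routing between agents.
// State shape, node bodies and routing labels are illustrative assumptions;
// the real agents called Bedrock models and a Qdrant-backed RAG pipeline.
import { StateGraph, Annotation, START, END } from "@langchain/langgraph";

const AgentState = Annotation.Root({
  userInput: Annotation<string>,
  intent: Annotation<string>,
  reply: Annotation<string>,
});

// The classifier node decides which specialised agent handles the turn.
// In the real prototype this would be an LLM call via Amazon Bedrock.
const classify = async (state: typeof AgentState.State) => {
  const intent = "faq"; // placeholder for the model's classification of state.userInput
  return { intent };
};

const graph = new StateGraph(AgentState)
  .addNode("classify", classify)
  .addNode("productDiscovery", async () => ({ reply: "..." }))
  .addNode("formCollection", async () => ({ reply: "..." }))
  .addNode("faq", async () => ({ reply: "..." }))
  .addNode("clarification", async () => ({ reply: "..." }))
  .addEdge(START, "classify")
  .addConditionalEdges("classify", (state) => state.intent, {
    product: "productDiscovery",
    form: "formCollection",
    faq: "faq",
    unclear: "clarification",
  })
  .addEdge("productDiscovery", END)
  .addEdge("formCollection", END)
  .addEdge("faq", END)
  .addEdge("clarification", END)
  .compile();

// const result = await graph.invoke({ userInput: "What does my policy cover?" });
```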

The user research from this prototype was genuinely valuable. Users found the conversational flow interesting, but struggled with trust, particularly around sharing personal and financial information with what felt like a chatbot. Perceived speed was another sticking point, and people felt less in control than with a traditional form. These are not surprising findings in the context of financial services, but having real user data to back them up was important. It told us exactly where AI deployment would and would not work at this stage.

The conclusion was pragmatic: lead with the form-based experience, which users trust, and use AI where it adds clear value without creating anxiety. The immediate opportunity is customer support, where AI can surface information and guide users without making decisions on their behalf.

AI Across the Whole SDLC

Beyond the product itself, we embedded AI across the broader development process in ways that genuinely changed how the team operated.

Snyk AI was integrated into the GitHub Actions pipeline for automated security scanning. We built a Playwright-based end-to-end testing framework, with the tests themselves generated and maintained with AI assistance via MCP. AI persona testing allowed us to simulate different user types moving through the form flow before any real users touched it. Requirements generation was also AI-assisted, helping us translate the existing mobile app behaviours into clear web-specific specifications quickly.
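To give a flavour of the AI-generated end-to-end tests, here is a minimal Playwright example covering one step of the journey. The URL, field labels and headings are placeholders, not the client's actual copy.

```ts
// Illustrative Playwright end-to-end test for one step of the onboarding flow.
// The URL, field labels and headings are placeholders, not the client's copy.
import { test, expect } from "@playwright/test";

test("completes the About You step and moves to Your Identity", async ({ page }) => {
  await page.goto("/onboarding/about-you");

  await page.getByLabel("First name").fill("Jane");
  await page.getByLabel("Last name").fill("Doe");
  await page.getByLabel("Date of birth").fill("01/01/1990");

  await page.getByRole("button", { name: "Continue" }).click();

  // Branching in the form-flow spec determines the next screen; here we
  // expect the happy path to land on the Your Identity section.
  await expect(
    page.getByRole("heading", { name: "Your Identity" })
  ).toBeVisible();
});
```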

The infrastructure was built with Terraform and deployed on AWS ECS Fargate, with the whole setup defined as code and version-controlled from the start. PingOne handled authentication, with Jumio integration planned for identity verification.

What We Learned About Process

Not everything went smoothly, and being honest about that is part of what makes the PoC valuable.

Security protocols and access controls caused a delay of around one and a half weeks at the start. This is a completely understandable reality of working within a regulated financial institution, but it is also avoidable if you plan for it. Our solution for the next phase is explicit: a dedicated Sprint 0, a mobilisation sprint that happens before development begins, used entirely to sort access, environments, tooling licences and governance approvals. No more losing development time to administrative overhead mid-sprint.

We also found that the standard supplier onboarding process was not designed with innovation-style engagements in mind. The process was built for large, long-running production contracts, not for a fast-moving four-week PoC. We have now mapped the specific constraints and know how to navigate them efficiently going forward.

The Bottom Line

In four weeks, we delivered:

  • A complete frontend for insurance policy onboarding. 

  • A conversational AI prototype with real user research. 

  • An AI-powered SDLC with automated security scanning and AI-generated testing. 

  • A credible path to production in a quarter of the originally planned timeline.

This is what AI-augmented development looks like in practice. Not AI replacing developers, but AI giving a skilled team the leverage to move at a much faster pace.

WeBuild-AI is now moving into the full delivery phase. If you’re thinking about what this kind of approach could mean for your organisation, we would love to talk. Get in touch.
