Insights
10 Ways to Build Future-Proofed AI Workflows
Several Azure OpenAI model versions will soon be retired. Here are 10 ways to build future-proofed LLM workflows and reduce migration risk when models are deprecated.
Why (and why not) train a language model from scratch
Learn why (and why not) to train a language model from scratch, plus what it would take to implement in practice.
Why Small Language Models Are the Key to Agent Independence
Open source small language models offer organisations a strategic path to building AI agents that avoid vendor lock-in, enable explainability for regulated industries and provide operational independence from the three dominant LLM providers.
Why Your Organisation Needs Agent Lifecycle Management
Explore why organisations should adopt full lifecycle management for AI agents: monitoring, governing, versioning and maintaining them within business systems.
Joining the 5% Inner Circle: Moving Beyond the AI Failure Narrative
What organisations must do to join the small group that succeeds in AI adoption: only 5%, according to 2025 MIT research.
What Metrics Matter for AI Agent Reliability and Performance
What are the key metrics and measurement strategies that organisations should monitor to ensure their AI agents behave reliably, safely, and usefully?
Why Prefect Is a Perfect Pick for AI Agent Monitoring
Exploring how Prefect (a workflow orchestration tool) fits naturally into AI agent monitoring and enables tracing, alerting and observability of agent operations.
Protecting Enterprise Data in the MCP Era
Covering the data governance, security and privacy challenges that arise when connecting AI agents to enterprise data via Model Context Protocol (MCP), as well as how to mitigate risks.
How MCP Transforms Enterprise Intelligence
How MCP enables AI systems to make insights more actionable, integrated and contextually aware, based on relevant enterprise data.
What is Model Context Protocol and Why Should You Care?
Model Context Protocol (MCP) lets AI systems securely interface with enterprise data, breaking silos and embedding context into AI outputs. Read on to find out more.
Our Principles for Building Enterprise-Grade Generative AI
The foundational principles WeBuild‑AI used for building our Pathway platform, from AI‑native design to guardrails, ethics and automation as code.
The Five Agent Types of Knowledge Work
Uncover the five key AI agent types reshaping knowledge work, from data wranglers to decision-makers, and how they each accelerate productivity.
The Evolution of Enterprise Apps in the Generative AI Era
Learn about how enterprise applications are evolving with GenAI to become more intelligent, adaptive and embedded into daily decision-making in business.
Why Your Enterprise Needs a Unified Approach To Generative AI
Discover why a unified, enterprise-wide AI strategy is essential to deliver real value, with tangible support, security and usability across the business.
Practical Security Guardrails for Large Language Models
Actionable techniques for secure LLM deployments that balance innovation with protection, from prompt-injection defences to access controls and ethical-use policies.