Why AI ROI is more volatile than classical IT projects

Recent posts

AI ROI without hype: a practical way to measure value using risk adjustment + Azure Copilot example

Most people know what ROI means, but it’s harder to calculate for AI projects. The numbers are less predictable than with traditional platforms because many AI projects never reach stable production. IDC says only about 44% of custom AI apps and 53% of third-party AI apps make it from proof of concept to production. That’s why it’s important to look at ROI through a risk lens, not just cost versus benefit.

One useful approach is a risk-adjusted formula:

AI ROI = (AI Business Value Income / (Initial Investment + Annual Costs)) × Success Probability

where:

- AI Business Value Income (over N years): consider a 2 to 3 year period and include both direct and indirect value. Direct: time saved, fewer tickets, higher conversion, lower fraud. Indirect: improved customer or employee experience and quicker decisions. For these, use measurable stand-ins like CSAT, churn, time to resolution, or hours saved, and estimate conservatively.
- Initial Investment: This covers more than just buil...
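As an illustration, the risk-adjusted formula above can be computed directly. The figures here are hypothetical, chosen only to show the mechanics; the ~50% success probability roughly reflects the IDC production rates quoted above.

```python
def risk_adjusted_ai_roi(business_value, initial_investment,
                         total_annual_costs, success_probability):
    """Risk-adjusted ROI: (business value / total cost) x success probability."""
    total_cost = initial_investment + total_annual_costs
    return (business_value / total_cost) * success_probability

# Hypothetical example: 1.2M in value over 3 years, 400k initial investment,
# 100k/year in running costs, ~50% chance of reaching stable production.
roi = risk_adjusted_ai_roi(
    business_value=1_200_000,
    initial_investment=400_000,
    total_annual_costs=300_000,   # 3 years x 100k
    success_probability=0.5,
)
print(f"{roi:.0%}")  # prints 86%
```

Note how the success probability scales the headline number down: the same project evaluated without the risk adjustment would show roughly double the ROI.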

Private doesn't mean invisible - What enterprise AI chats really mean

Many companies use AI tools such as ChatGPT Enterprise and Microsoft Copilot to raise efficiency and reduce repetitive tasks. However, it is essential to clarify what the “private” label actually means. In an enterprise setting, “private” typically refers to day-to-day sharing restrictions rather than absolute confidentiality. Organizations may still access these chats for governance, security, or legal reasons.

ChatGPT Enterprise

OpenAI states that, by default, ChatGPT Enterprise does not use business data (inputs and outputs) to train its models. Customers retain ownership and control over their data, including retention settings. OpenAI also maintains compliance with requirements such as GDPR through contractual agreements, such as a Data Processing Addendum (DPA). Within an enterprise workspace, “private chat” generally means chats are not shared with colleagues, but it does not guarantee that administrators cannot access them. Enterprise plans may use compliance tools such as the Compl...

AI-Native cloud reference architecture on Microsoft Azure

After 17 years working with cloud technology, I’ve seen a clear pattern. AI projects rarely fail because the model is weak. More often, the problem is that the platform was built for traditional applications, not for AI. GenAI and agents add extra demands on the architecture. AI also brings unpredictable traffic and new security and governance challenges.

Here’s a reference architecture I use when designing AI-native platforms on Microsoft Azure. It’s not a strict blueprint, but a practical structure to keep teams aligned and prevent surprises as the solution grows.

User and API entry layer

Start with a clear entry point. Focus on predictable performance, strong security, and access control. On Azure, many teams use Azure Front Door or Application Gateway for incoming traffic, then add Azure API Management to manage API exposure, throttling, authentication, and versioning. A common mistake is exposing AI endpoints directly to the internet. It might seem quick for a proof of concept, bu...

Azure Governance that scales: guardrails for fast and safe delivery

For large organizations, Azure success depends on solid governance, clear requirements, planned initiatives, and business priorities. Start with a clear hierarchy to apply rules consistently across the organization, not just to individual projects.

First, I set up the core elements: management groups, subscriptions, resource groups, and then resources. This structure is practical and important for scaling access and compliance controls.

Management groups matter if you have multiple subscriptions and want a uniform baseline. I keep them shallow, three to four levels, since more are hard to manage. Azure allows up to six (excluding the tenant root and subscription level). Assignments at higher levels cascade down, so hierarchy matters.

I use subscriptions as boundaries for billing and scaling. Splitting development, testing, and production into separate subscriptions isolates costs and risks. A dedicated subscription for shared network services, such as ExpressRoute or Virtual WAN, simp...
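The cascading behaviour of assignments can be sketched with a small conceptual model. This is an illustration only, not the Azure SDK, and the scope and policy names are hypothetical:

```python
# Conceptual sketch of a management-group hierarchy where assignments
# made at a higher scope cascade down to every child scope.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Scope:
    name: str
    parent: Optional["Scope"] = None
    policies: List[str] = field(default_factory=list)

    def effective_policies(self) -> List[str]:
        # A scope's effective policies = everything inherited from
        # its ancestors, plus its own direct assignments.
        inherited = self.parent.effective_policies() if self.parent else []
        return inherited + self.policies


# Hypothetical three-level hierarchy: root group -> platform group -> subscription.
root = Scope("mg-root", policies=["require-tags"])
platform = Scope("mg-platform", parent=root, policies=["deny-public-ip"])
prod_sub = Scope("sub-prod", parent=platform)

print(prod_sub.effective_policies())  # ['require-tags', 'deny-public-ip']
```

The subscription has no direct assignments, yet it is still governed by both higher-level policies, which is exactly why the shape of the hierarchy matters.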

What a company needs to deliver Cloud AI-Native solutions

Cloud AI-Native delivery means turning AI from a basic demonstration into a scalable platform. This requires modern cloud infrastructure, up-to-date and well-organized data, engineering practices suitable for operating AI at scale, and processes to ensure AI is used safely and responsibly. So, what does a company actually need to do to make this work?

Build platforms, not just projects

A company must design and build reusable foundations. Reliable frameworks have to support products and teams, rather than creating isolated projects. This means the company must be able to create reference architectures, standard templates, and clear processes for how teams work. Security, cost control, and operational monitoring must be built into the platform design from the start, not added later.

Modernise applications, not just move them

A company must migrate from lift-and-shift systems to cloud-native ones. This calls for skills in refactoring, containerisation, breaking mon...

Phase 5 of Intelligent Cloud Modernisation: Build-run-evolve of AI-Native solutions

By Phase 5, most organisations have working systems. Applications are refactored, data modernised, AI integrated, and governance established. It is tempting to think the journey is over. But AI-Native platforms are not classic IT. You don’t deploy and forget. Models drift, prompts evolve, embeddings go stale, costs shift, and user expectations change quickly. This is why Phase 5 is a continual Build–Run–Evolve cycle.

In the image I use for this phase, the cycle is simple: Build → Run → Evolve. Behind this simplicity lies a serious message: AI requires automation and operational discipline on par with software engineering.

Build: focus on making delivery repeatable, not dependent on individual effort. In AI projects, ‘heroic delivery’ is common: one team member deploys the model, another fixes the pipeline, and a few keep the platform alive. This does not scale. Build means we standardise how we build and release everything: infrastructure, applications, data pipelines, prompts, models, policie...