
Phase 4 of Intelligent Cloud Modernisation: AI brings more risks (Governance for AI)

During the first three phases of the AI-Native journey, we focus on major engineering tasks. We refactor applications to be cloud-native, update data for RAG and real-time context, and make AI workloads a core part of the platform. At this point, many organizations are excited because the technology is working. Demos are impressive, agents respond, and models help users.

However, a new challenge appears: intelligence also brings risk. Without proper governance, the same AI that creates value can also cause harm.
This leads to Phase 4, where governance becomes as important as architecture and data. It is not just a compliance task, but a practical way to enable safe AI scaling.


1. Data security and access control become more critical than ever

GenAI systems reveal information in ways that traditional applications never could. While a report or dashboard only shows what it is meant to, a GenAI assistant can combine data and produce unexpected answers. If access controls are weak, sensitive data might leak without anyone noticing.
That is why strict identity and access control are needed for data, prompts, and tools. For example, in Azure, use Entra ID (Azure AD) for identity, role-based access control for resources, and Azure Key Vault for secrets and keys. For data access and classification, Microsoft Purview applies consistent policies and provides visibility across both structured and unstructured sources.
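To make the principle concrete, here is a minimal sketch (all names hypothetical, plain Python rather than any Azure SDK) of document-level access control applied at retrieval time: each indexed chunk carries the ACL of its source document, and chunks are filtered against the caller's groups before anything reaches the prompt. In production, the group membership would come from Entra ID and the classifications from Purview.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    allowed_groups: frozenset  # ACL propagated from the source document

def retrieve_for_user(chunks, user_groups):
    """Return only chunks the caller is entitled to see.

    The check happens *before* retrieval results reach the prompt,
    so the model never sees data the user could not access directly.
    """
    return [c for c in chunks if c.allowed_groups & user_groups]

index = [
    Chunk("Q3 revenue summary", frozenset({"finance"})),
    Chunk("Public product FAQ", frozenset({"everyone"})),
    Chunk("Salary bands", frozenset({"hr"})),
]

visible = retrieve_for_user(index, {"everyone", "finance"})
```

The key design choice is that filtering is enforced in the retrieval layer, not left to the model or the prompt, so a clever question cannot talk the assistant into revealing HR data.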

2. Governance must include “model behaviour” and not only infrastructure

In traditional systems, governance focused on infrastructure like network boundaries, encryption, and compliance. With AI, governance also needs to address behaviour, such as hallucinations, harmful content, bias, and unsafe actions.
This is a real concern. If an agent gives incorrect instructions to an employee or produces biased results in HR or customer support, the business faces serious risks.
Use Azure AI Content Safety and responsible AI evaluation patterns, including prompt/output testing, with human-in-the-loop controls where needed. Governance defines what models can and can’t do.
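As a sketch of what such a behavioural gate can look like, the snippet below triages a model response based on per-category harm severities. The category names and thresholds are illustrative (loosely modelled on the severity scores a content-safety service returns), not the actual Azure AI Content Safety API contract.

```python
# Illustrative thresholds on a hypothetical 0-7 severity scale.
BLOCK_THRESHOLD = 4    # block the response outright
REVIEW_THRESHOLD = 2   # route the response to a human reviewer

def triage(severities: dict) -> str:
    """Decide what to do with a model response given harm severities.

    `severities` maps a harm category (e.g. "hate", "violence") to a
    score; the worst category drives the decision.
    """
    worst = max(severities.values(), default=0)
    if worst >= BLOCK_THRESHOLD:
        return "block"
    if worst >= REVIEW_THRESHOLD:
        return "human_review"  # human-in-the-loop for borderline output
    return "allow"
```

The point is that "allow / review / block" is a governance policy expressed in code: the thresholds can be tightened per use case (stricter for HR, looser for internal brainstorming) without touching the model.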

3. Tool calling and agent autonomy require guardrails

Agents are powerful because they can use tools to execute workflows, query systems, create tickets, and trigger automation. However, this is also where risks increase. If an agent has too many permissions, it can cause damage quickly.
That is why we need guardrails, such as limited toolsets, approval workflows, and complete audit trails.
For example, in Azure, use Azure API Management as a controlled gateway for agent tool access. Orchestrate sensitive workflows with Logic Apps or Azure Functions, including clear approval steps. Agents should be helpful, but also kept within limits.
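The guardrail pattern itself is simple enough to sketch in a few lines (tool names here are hypothetical): an explicit allowlist limits what the agent can call at all, and sensitive tools additionally require a recorded human approval before they execute.

```python
ALLOWED_TOOLS = {"search_kb", "create_ticket"}  # limited toolset
NEEDS_APPROVAL = {"create_ticket"}              # sensitive actions

def dispatch(tool: str, approved: bool = False) -> str:
    """Gate an agent's tool call before anything is executed."""
    if tool not in ALLOWED_TOOLS:
        return "denied: tool not in allowlist"
    if tool in NEEDS_APPROVAL and not approved:
        return "pending: waiting for human approval"
    return f"executed: {tool}"
```

In a real deployment this gate would live in the gateway (for example, an API Management policy), not inside the agent, so the agent cannot bypass it; every decision, including denials, should also be written to the audit trail.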

4. Monitoring, auditability, and traceability are mandatory

With AI systems, it is not enough to know that the service is running. We also need to know what the model answered, what sources it used, and why it produced a certain output. This is important for trust, troubleshooting, and meeting regulatory requirements.
Governance requires logging prompts, responses, retrieved documents, and actions taken, all with secure storage and access.
Azure case: Use Azure Monitor, Application Insights, and Log Analytics for end-to-end tracing. Store audit logs securely and, when needed, integrate them with security monitoring via Microsoft Defender for Cloud and Microsoft Sentinel.
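What does "log everything" mean in practice? A sketch of one audit record (field names are illustrative): capture who asked, what was asked, what the model answered, which sources it retrieved, and which actions the agent took, plus a digest that makes later tampering detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user, prompt, response, sources, actions):
    """Build one tamper-evident audit entry for an AI interaction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
        "retrieved_sources": sources,  # what the answer was grounded on
        "actions": actions,            # what the agent actually did
    }
    # Digest over the canonical serialization for tamper evidence.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record
```

Records like this would be shipped to Log Analytics (and on to Sentinel when security review is needed); because the retrieved sources are captured, you can answer the regulator's "why did the model say this?" question after the fact.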

5. Compliance and Responsible AI become part of the engineering lifecycle

Regulatory and audit expectations are rising; in the EU, for example, key obligations under the AI Act phase in through 2026. Organizations will need to show that their AI is fair, explainable, and respects privacy. This cannot be left until the end; it must be included from the beginning.
This is why governance is a phase that is built into CI/CD and MLOps through testing, evaluation, approval gates, and regular reviews.
In cloud, use policy enforcement with Azure Policy, data governance with Purview, and security posture management with Defender for Cloud to keep your platform compliant as it evolves.
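An approval gate in CI/CD can be as plain as a thresholded check over offline evaluation results. The metrics and limits below are illustrative: a release candidate only ships if every gated metric is within its limit, and a missing metric fails the gate by default.

```python
# Hypothetical offline evaluation results for a release candidate.
eval_results = {
    "groundedness": 0.93,   # share of answers supported by retrieved sources
    "harmful_rate": 0.004,  # share of outputs flagged by safety checks
}

# Gate definition: metric -> (comparison, limit).
GATES = {
    "groundedness": (">=", 0.90),
    "harmful_rate": ("<=", 0.01),
}

def release_allowed(results, gates):
    """Fail closed: a missing or out-of-bounds metric blocks the release."""
    for metric, (op, limit) in gates.items():
        value = results.get(metric)
        if value is None:
            return False
        if op == ">=" and value < limit:
            return False
        if op == "<=" and value > limit:
            return False
    return True
```

Wired into a pipeline step that exits non-zero when `release_allowed` is False, this turns Responsible AI from a review meeting into a build-breaking check, the same way unit tests gate ordinary code.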

Final thought

Phase 4 is when AI programs either scale safely or stop because of fear. Governance turns fear into confidence, making AI trustworthy for employees, customers, and regulators.
Intelligence brings power. Governance helps control, measure, and use it responsibly. Without Phase 4, AI is a risk. With Phase 4, AI becomes a sustainable capability.
