AI-Native on top of the 6 Migration Rs

For the last decade, the 6 Rs of cloud migration have been used to describe how enterprises should adopt the cloud: Rehost, Replatform, Refactor, Repurchase, Retain, and Retire.

However, with AI now central to digital transformation, these Rs alone are no longer sufficient. Cloud migration is just the first step; true AI-Native status requires a deeper cloud-native transformation.
Customers who label their migrations as cloud-native often have applications that still behave like on-premises systems, resulting in manual operations, static architectures, and locked-away data that hinder AI programs.
This is where a new perspective is required: building AI capabilities on top of the 6 Rs. Pure cloud-native rebuilds are difficult for large enterprises, so realistically we need to identify the gaps in each migration path and what is needed to prepare it for AI integration. In the rest of the article, each R is analysed in terms of AI-Native needs.

Rehost
The workloads are moved, but nothing changes inside the system. Rehost does not provide enough ground for structural change, but three things can still be achieved easily:
  • Telemetry to instrument the platform and get better insights
  • GPU to provide capabilities to run AI workloads
  • FinOps to track and get visibility regarding costs
This approach enables current and future workloads to adopt AIOps and provides a ground for AI.
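FinOps starts with attributing spend to workloads. As a minimal sketch, assuming a simple list of tagged cost records (the record shape and tag names are illustrative, not any specific cloud billing schema):

```python
from collections import defaultdict

# Illustrative cost records; real data would come from a cloud billing export.
cost_records = [
    {"resource": "vm-web-01", "team": "storefront", "cost_usd": 412.50},
    {"resource": "vm-web-02", "team": "storefront", "cost_usd": 398.10},
    {"resource": "gpu-train-01", "team": "ml-platform", "cost_usd": 1250.00},
    {"resource": "db-orders", "team": "storefront", "cost_usd": 610.75},
]

def cost_by_team(records):
    """Aggregate spend per owning team -- the first step toward cost visibility."""
    totals = defaultdict(float)
    for record in records:
        totals[record["team"]] += record["cost_usd"]
    return dict(totals)

print(cost_by_team(cost_records))  # → {'storefront': 1421.35, 'ml-platform': 1250.0}
```

Once spend is attributable like this, the savings and the GPU costs of AI workloads become visible per team rather than hidden in one cloud bill.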
AWS and Microsoft Azure already provide such tooling, ready for use in a Rehost scenario:
  • Azure Application Insights with Smart Detection, or Amazon DevOps Guru, can automatically detect abnormal workload patterns.
  • Azure Monitor dynamic thresholds or Amazon CloudWatch anomaly detection can learn from past metrics and use ML to identify and react to anomalies.
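Conceptually, these dynamic thresholds learn a baseline from past metrics and flag points that fall outside it. A minimal sketch of the idea (a rolling mean ± k·sigma band, far simpler than what Azure Monitor or CloudWatch actually run):

```python
import statistics

def detect_anomalies(values, window=10, k=3.0):
    """Flag indices whose value falls outside mean ± k*stdev of the preceding window."""
    anomalies = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        band = k * stdev if stdev > 0 else 1e-9
        if abs(values[i] - mean) > band:
            anomalies.append(i)
    return anomalies

# Steady latency with one spike at index 15.
latency_ms = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100,
              100, 101, 99, 100, 102, 400, 100, 101, 99, 100]
print(detect_anomalies(latency_ms))  # → [15]
```

The point is that no static threshold is configured: the band is derived from the workload's own recent history, which is exactly what makes it usable on rehosted systems nobody has re-engineered.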
Replatform
Workloads, storage, and databases are moved to native solutions, but without near-real-time integration or a data lake around them. Replatform provides limited capabilities for streaming, event integration, and data fabric, which are required by GenAI solutions.
To be closer to AI-Native, the Replatform should include:
  • Streaming and event-based capabilities for AI ingestion
  • Data fabric for analytics and ML operations
  • Hooks into existing data streams, using solutions like Azure Synapse Link, to avoid affecting current business flows.
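The event-based ingestion pattern above can be sketched without any cloud SDK: the business flow emits events onto a stream, and a separate AI ingestion consumer processes them without touching the business logic. Here a `queue.Queue` stands in for a streaming service such as Azure Event Hubs, and the names are illustrative:

```python
import queue
import threading

events = queue.Queue()  # stands in for an event stream such as Event Hubs

def business_flow(order_id, amount):
    """Existing business logic; emitting an event is its only extra duty."""
    events.put({"type": "order_placed", "order_id": order_id, "amount": amount})
    return f"order {order_id} accepted"

def ai_ingestion(sink):
    """Side consumer feeding an AI/analytics pipeline, decoupled from business code."""
    while True:
        event = events.get()
        if event is None:  # sentinel to stop the consumer
            break
        sink.append(event)  # real code would write to a data fabric / feature store

feature_store = []
consumer = threading.Thread(target=ai_ingestion, args=(feature_store,))
consumer.start()

business_flow("A-100", 42.0)
business_flow("A-101", 17.5)
events.put(None)
consumer.join()
print(len(feature_store))  # → 2
```

Because the consumer is fully decoupled, adding or changing the AI pipeline never risks the current business flow, which is the whole argument for hooking in via streams rather than via the application.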
Refactor
Breaking the monolith is no longer enough for AI. A fast, scalable microservice architecture responds to your business needs, but not to AI's. The applications we build need to consume AI services, and expose data to them, natively.
The outcome of the refactoring should provide a modular approach, with APIs and data streams as first-class citizens. That enables AI components to plug into the existing system to observe, predict and even optimise when possible.
Microsoft provides these capabilities by combining AKS, Azure Functions, and Event Hubs with the Azure OpenAI Service and Azure AI Foundry.
Repurchase
The system replacing the existing one should not only be SaaS but should also have AI-native capabilities. Depending on its maturity, those capabilities may be limited to hooks via APIs or data streaming; more mature solutions offer vector databases and native integration with AI platforms, enabling us to build flows between the two systems.
Salesforce Einstein GPT is a good example that can be combined with Azure OpenAI, Azure AI Search, and AI Agents to build an intelligent layer.
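The vector-database capability mentioned above boils down to similarity search over embeddings. A toy cosine-similarity search makes the idea concrete; a real solution would use a service like Azure AI Search with embeddings produced by a model, not hand-written three-dimensional vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(index, query_vector, top_k=1):
    """Return the top_k document ids most similar to the query vector."""
    scored = sorted(index.items(),
                    key=lambda item: cosine_similarity(item[1], query_vector),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

# Toy "embeddings"; real ones have hundreds of dimensions.
index = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-times": [0.1, 0.9, 0.1],
    "api-reference": [0.0, 0.2, 0.9],
}
print(search(index, [0.8, 0.2, 0.1]))  # → ['refund-policy']
```

When the SaaS product exposes this capability natively, the intelligent layer can retrieve the right records by meaning rather than by keyword, which is the prerequisite for grounding GenAI answers in the system's own data.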
Retain
Not all systems will reside in the cloud, and there are good reasons for this. That does not mean these systems need to be isolated from the rest of the ecosystem and its AI capabilities.
An AI-native approach, empowered by the cloud, can bring these capabilities closer by using APIs and hybrid gateways that fetch information from on-premises systems and provide intelligence.
AWS Outposts and Azure Stack HCI, combined with Bedrock Agents or Azure AI Agent Service, are just two examples of how on-premises systems can be part of your AI strategy.
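The hybrid-gateway idea reduces to a thin cloud-side function that fetches from the on-premises system and enriches the result before the AI layer sees it. Both the on-prem call and the enrichment are simulated here; in practice they would be an API call through a hybrid gateway and a model or agent invocation, and all names are illustrative:

```python
def fetch_from_on_prem(customer_id):
    """Simulated on-premises lookup; in reality an API call through a hybrid gateway."""
    on_prem_db = {
        "C-1": {"name": "Acme", "open_tickets": 7},
        "C-2": {"name": "Globex", "open_tickets": 0},
    }
    return dict(on_prem_db[customer_id])

def enrich_with_intelligence(record):
    """Simulated AI enrichment; in reality a call to a hosted model or agent."""
    record["churn_risk"] = "high" if record["open_tickets"] > 5 else "low"
    return record

def hybrid_gateway(customer_id):
    """Cloud-side entry point combining on-prem data with AI-derived signals."""
    return enrich_with_intelligence(fetch_from_on_prem(customer_id))

print(hybrid_gateway("C-1"))  # → {'name': 'Acme', 'open_tickets': 7, 'churn_risk': 'high'}
```

The on-premises system stays where it is and keeps its data; only the gateway and the intelligence live in the cloud.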
Retire
There is not much you can do with a retired workload, except one thing: reinvest the savings in AI capabilities and fund AI adoption programs within your organisation. This can trigger a domino effect, generating additional savings or new business in the end.
The 6 Rs remain essential and relevant in the AI-Native context; what matters is knowing how to address each of them from the AI perspective. The shift from managing infrastructure to managing intelligence forces us to rethink areas like resource optimisation, autonomous decision-making, and autonomous reaction to triggers. In the end, we are on a journey to reimagine how the platform behaves and operates.
