
AI-Native on top of the 6 Migration Rs

For the last decade, the 6 Rs of cloud migration have been used to describe how enterprises should adopt the cloud: Rehost, Replatform, Repurchase, Refactor, Retain, and Retire.

The 6 Rs of cloud migration have guided enterprises in adopting the cloud. However, with AI now central to digital transformation, these Rs alone are no longer sufficient. Cloud migration is just the first step; true AI-Native status requires a deeper cloud-native transformation.
Customers labelling their migrations as Cloud-Native often have applications that still behave like on-premises systems, resulting in manual operations, static systems, and locked data that hinder AI programs.
This is where a new perspective is required: building AI capabilities on top of the 6 Rs.
Pure cloud-native rebuilds are difficult for large enterprises. Realistically, we need to identify the gaps and what is needed to prepare each workload for AI integration.
In the rest of the article, each R is analysed in terms of AI-Native needs.

Rehost
Workloads are moved, but nothing changes inside the system. Rehost does not provide enough ground for structural change, but three items can be achieved easily:
  • Telemetry to instrument the platform and get better insights
  • GPU to provide capabilities to run AI workloads
  • FinOps to track and get visibility regarding costs
This approach enables current and future workloads to adopt AIOps and provides a ground for AI.
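The FinOps item above boils down to cost visibility: knowing which team or environment is generating which spend. A minimal sketch of that idea, aggregating a hypothetical billing export by tag (the resource names and figures below are invented for illustration):

```python
from collections import defaultdict

def cost_by_tag(resources, tag_key):
    """Aggregate monthly cost per value of a given tag (e.g. team, env)."""
    totals = defaultdict(float)
    for r in resources:
        label = r.get("tags", {}).get(tag_key, "untagged")
        totals[label] += r["monthly_cost"]
    return dict(totals)

# Hypothetical rows from a billing export: name, monthly cost, tags.
resources = [
    {"name": "vm-web-01", "monthly_cost": 310.0, "tags": {"team": "shop"}},
    {"name": "vm-batch-01", "monthly_cost": 120.5, "tags": {"team": "data"}},
    {"name": "disk-old", "monthly_cost": 45.0, "tags": {}},
]

print(cost_by_tag(resources, "team"))
# → {'shop': 310.0, 'data': 120.5, 'untagged': 45.0}
```

In practice this is what native tools such as AWS Cost Explorer or Azure Cost Management do at scale; the "untagged" bucket is usually the first FinOps finding after a Rehost.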
AWS and Microsoft Azure already offer tools for this, ready for use in a Rehost scenario:
  • Azure Application Insights with Smart Detection, or Amazon DevOps Guru, can automatically detect abnormal workload patterns.
  • Azure Monitor with dynamic thresholds, or CloudWatch anomaly detection, can learn from past metrics and use ML to identify and react to anomalies.
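The core idea behind dynamic thresholds is simple: instead of a fixed alert limit, the bound is learned from the metric's own history. A minimal sketch of that principle (mean plus k standard deviations — the managed services use far more sophisticated models, and the latency numbers below are invented):

```python
import statistics

def dynamic_threshold(history, k=3.0):
    """Learn an upper bound from past metric values: mean + k * stdev."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return mean + k * stdev

def detect_anomalies(history, new_points, k=3.0):
    """Flag incoming points that exceed the learned threshold."""
    bound = dynamic_threshold(history, k)
    return [p for p in new_points if p > bound]

# A hypothetical latency metric (ms): stable history, then a spike.
history = [102, 98, 101, 99, 105, 97, 100, 103, 96, 104]
print(detect_anomalies(history, [101, 99, 180]))  # → [180]
```

The point is that the threshold moves with the workload: a seasonal traffic increase raises the learned bound instead of producing a flood of false alerts.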
Replatform
Workloads, storage, and databases are moved to native solutions, but without near-real-time integration or a data lake around them. Replatform provides limited capabilities for streaming, event integration, and data fabric, all of which are required by GenAI solutions.
To be closer to AI-Native, the Replatform should include:
  • Streaming and event-based capabilities for AI ingestion
  • Data fabric for analytics and ML operations
  • Hooks into existing data stores, using solutions like Azure Synapse Link, to avoid affecting current business flows.
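The first item above is easier to picture with a sketch: business code emits events onto a stream as a side channel, and an AI-ingestion consumer reads from that stream without touching the business flow. An in-memory queue stands in for a real event hub here, and the event names are hypothetical:

```python
import json
import queue

# In-memory stand-in for a managed stream (e.g. Event Hubs, Kinesis).
stream = queue.Queue()

def publish(event_type, payload):
    """Business code emits events as a side channel; flows stay unchanged."""
    stream.put(json.dumps({"type": event_type, "payload": payload}))

def drain_for_ai():
    """AI-ingestion consumer: reads everything currently on the stream."""
    events = []
    while not stream.empty():
        events.append(json.loads(stream.get()))
    return events

publish("order.created", {"id": 1, "total": 40.0})
publish("order.created", {"id": 2, "total": 15.5})
print([e["payload"]["id"] for e in drain_for_ai()])  # → [1, 2]
```

The decoupling is the point: the producer never waits on the AI side, so ingestion can be added, scaled, or removed without changing the business flow.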
Refactor
Breaking the monolith is no longer enough for AI. A microservice architecture that is fast and scalable responds to your business needs, but not to AI's. The applications we build need to consume AI services, and expose data to them, natively.
The outcome of the refactoring should provide a modular approach, with APIs and data streams as first-class citizens. That enables AI components to plug into the existing system to observe, predict and even optimise when possible.
Microsoft provides these capabilities by combining AKS, Azure Functions, and Event Hubs with the Azure OpenAI Service and Azure AI Foundry stack.
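The "APIs and data streams as first-class citizens" idea can be sketched with a simple observer pattern: the refactored service publishes its events, and an AI component subscribes to observe and predict without the service knowing anything about it. The class and method names below are illustrative, and the "model" is just a moving average standing in for a real one:

```python
class OrderService:
    """A refactored service exposing its data stream as a first-class API."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def place_order(self, order_id, amount):
        # Business logic runs as before...
        # ...then the event is pushed to anyone observing the stream.
        for notify in self._subscribers:
            notify({"order_id": order_id, "amount": amount})

class DemandPredictor:
    """An AI component that plugs in only via the event stream."""
    def __init__(self):
        self.seen = []

    def observe(self, event):
        self.seen.append(event["amount"])

    def predict_next(self):
        # Placeholder "model": moving average of observed amounts.
        return sum(self.seen) / len(self.seen) if self.seen else 0.0

service = OrderService()
predictor = DemandPredictor()
service.subscribe(predictor.observe)
service.place_order(1, 10.0)
service.place_order(2, 30.0)
print(predictor.predict_next())  # → 20.0
```

Because the predictor attaches through the stream rather than through the service's internals, it can later be replaced by a managed AI service without another refactoring round.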
Repurchase
The systems replacing the existing ones should not only be SaaS but also have AI-native capabilities. Depending on the maturity level, capabilities may be limited, providing only hooks via APIs or data streaming. For more mature solutions, vector databases and native integration with AI platforms are desired, enabling us to build flows between the two systems.
Salesforce Einstein GPT is a good example that can be combined with Azure OpenAI, Azure AI Search, and AI Agents to build an intelligent layer.
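The vector-database capability mentioned above reduces, at its core, to similarity search over embeddings. A toy sketch of that mechanism — cosine similarity over two-dimensional vectors, with invented document names and embedding values (real systems use hundreds of dimensions and an indexed store such as Azure AI Search):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Hypothetical embeddings exported from the SaaS side, indexed locally.
index = {
    "refund policy": [0.9, 0.1],
    "shipping times": [0.1, 0.9],
}

def nearest(query_vec):
    """Return the indexed document most similar to the query embedding."""
    return max(index, key=lambda k: cosine(query_vec, index[k]))

print(nearest([0.8, 0.2]))  # → refund policy
```

When the purchased SaaS exposes its data as embeddings, this kind of lookup is what lets an external AI platform ground its answers in that system's content.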
Retain
Not all systems will reside in the cloud, and there are good reasons for this. That does not mean these systems need to be isolated from the rest of the ecosystem and from AI capabilities.
An AI-native approach, empowered by the cloud, can bring these capabilities closer by using APIs and hybrid gateways that fetch information from on-premises systems and provide intelligence.
AWS Outposts combined with Bedrock Agents, or Azure Stack HCI combined with the Azure AI Agent Service, are just two examples of how on-premises systems can be part of your AI strategy.
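The hybrid-gateway pattern can be sketched as a cloud-side facade: it fetches data from the retained on-premises system over a hybrid link and enriches it with an intelligence hint before exposing it to the rest of the estate. Everything below is hypothetical — the inventory backend, SKUs, and the rule-based "recommendation" standing in for a real model:

```python
class OnPremInventory:
    """Stand-in for a retained on-premises system behind the firewall."""
    def stock_level(self, sku):
        return {"SKU-1": 3, "SKU-2": 40}.get(sku, 0)

class HybridGateway:
    """Cloud-side facade: exposes on-prem data plus an intelligence hint."""
    def __init__(self, backend, reorder_threshold=5):
        self.backend = backend
        self.reorder_threshold = reorder_threshold

    def enriched_stock(self, sku):
        level = self.backend.stock_level(sku)  # fetched over a hybrid link
        return {
            "sku": sku,
            "level": level,
            "recommend_reorder": level < self.reorder_threshold,
        }

gw = HybridGateway(OnPremInventory())
print(gw.enriched_stock("SKU-1"))  # level 3 → reorder recommended
```

The on-premises system stays untouched; only the gateway layer knows about the AI side, which keeps the Retain decision intact while still feeding the AI strategy.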
Retire
There is not much you can do when you retire a system, except one thing: reinvest the savings in AI capabilities and fund AI adoption programmes within your organisation. This can trigger a domino effect, generating additional savings or business in the end.
The 6 Rs remain essential and relevant in the AI-Native context; what matters is knowing how to address each of them from the AI perspective. The shift from managing infrastructure to managing intelligence forces us to rethink areas like resource optimisation, autonomous decision-making, and autonomous reaction to triggers. In the end, we are on a journey to reimagine how the platform behaves and operates.
