Cloud! When we think of the cloud, we imagine unlimited scalability and power: an efficient system that responds automatically to demand. The reality is different, mainly because most systems were either migrated to the cloud as-is or built by teams with limited cloud skills.
Cloud modernization is rarely part of the lifecycle of systems running in the cloud, meaning a system often still uses the same cloud services after 5-10 years. When AI is added to such systems, performance bottlenecks emerge because of inefficient resource allocation, outdated architectural decisions, and a poor understanding of the AI model's demands.
The truth is that most systems running in the cloud today are not ready for the demands of an AI service. Microsoft, like other vendors, provides paths to navigate the AI journey, but they require customers to adopt new cloud technologies or change the way their systems work.
One of the mistakes organizations make is underestimating the complexity of an AI system. Cloud platforms like Microsoft Azure provide scalable services, powerful VMs, and an easy way to provision more resources. However, none of this guarantees a system that can handle the data, bandwidth, and security demands of an AI workload. Even dedicated GPU VMs like the Azure ND-series, designed for deep learning, do not distribute workloads efficiently on their own, which drives up cloud spend. Customers must use AKS (Azure Kubernetes Service) or similar solutions to implement smart workload balancing.
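To see why scattering jobs across dedicated GPU VMs wastes money, consider a minimal placement sketch. This is not how AKS actually schedules pods (the Kubernetes scheduler is far more sophisticated); it is an illustrative best-fit heuristic, and the node names and memory figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GpuNode:
    """A hypothetical GPU node (e.g. one ND-series VM) with a fixed GPU memory budget."""
    name: str
    capacity_gb: int
    allocated_gb: int = 0

    def free_gb(self) -> int:
        return self.capacity_gb - self.allocated_gb

def place_jobs(jobs_gb: list[int], nodes: list[GpuNode]) -> dict[str, list[int]]:
    """Best-fit placement: put each job on the node with the least remaining
    space that still fits it, so fewer GPUs sit half-idle."""
    placement: dict[str, list[int]] = {n.name: [] for n in nodes}
    for job in sorted(jobs_gb, reverse=True):      # largest jobs first
        candidates = [n for n in nodes if n.free_gb() >= job]
        if not candidates:
            raise RuntimeError(f"no node can fit a {job} GB job")
        target = min(candidates, key=lambda n: n.free_gb())
        target.allocated_gb += job
        placement[target.name].append(job)
    return placement

nodes = [GpuNode("nd-a", 16), GpuNode("nd-b", 16)]
print(place_jobs([10, 6, 5, 8], nodes))  # packs 4 jobs onto 2 nodes instead of 4
```

Without this kind of packing (which AKS, with the right requests and limits, does for you), each job tends to land on its own expensive GPU VM.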
The data pipeline is a common challenge for AI.
Organizations need to streamline data ingestion and processing, but they often fail because the existing pipeline relies on legacy components, creating a bottleneck at the data layer. Tools like ADF (Azure Data Factory) and Microsoft Fabric provide orchestration capabilities for data workflows, but they require the data to be stored close to the compute to avoid latency. To ensure fast queries, services with hierarchical namespace support, like Azure Data Lake Storage Gen2, must back the AI workload to provide high-speed access to structured data. Without them, AI models spend more time waiting for data than processing it.
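The "waiting for data" problem is easy to quantify. The sketch below uses made-up latency numbers (they are assumptions, not Azure benchmarks) to show what fraction of each training batch cycle is lost to storage when fetch and compute run sequentially:

```python
def data_wait_ratio(fetch_ms_per_batch: float, compute_ms_per_batch: float) -> float:
    """Fraction of each batch cycle spent waiting on storage when fetch
    and compute run sequentially (no prefetching or overlap)."""
    return fetch_ms_per_batch / (fetch_ms_per_batch + compute_ms_per_batch)

# Hypothetical numbers: slow cross-region reads vs co-located ADLS Gen2.
remote = data_wait_ratio(fetch_ms_per_batch=300, compute_ms_per_batch=100)  # 0.75
local = data_wait_ratio(fetch_ms_per_batch=20, compute_ms_per_batch=100)
print(f"remote storage: {remote:.0%} waiting, co-located: {local:.0%} waiting")
```

With the remote numbers, the GPU is idle three quarters of the time; co-locating storage and compute drops that to under a fifth, which is exactly the bottleneck the article describes.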
Traffic peaks are not unique to classical cloud applications. When we deploy an AI solution, peaks occur there too, which means we must stress-test our AI systems. Services like Azure Machine Learning can automatically scale up or down, but they require auto-scaling thresholds to be configured to avoid downtime or overprovisioning. For this purpose, it is important to include monitoring solutions like Azure Monitor and Application Insights that can track AI system performance, identify possible degradation, and trigger a scaling activity before the business is impacted.
Organizations expect that moving AI workloads to the cloud will automatically reduce costs. In reality, cloud costs are often higher because of inefficient scaling strategies, the wrong choice of cloud services, and a lack of cost control. Cost-management strategies such as reserved instances for predictable AI workloads and spot VMs for non-critical jobs are crucial to balancing the cost and performance of the end-to-end system. Organizations that fail to integrate FinOps into their cloud modernization journey for AI end up with budget overruns and no cost control.
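The reserved-versus-spot trade-off is simple arithmetic. The hourly rate and discount percentages below are placeholders, not Azure price-sheet values; the point is the shape of the comparison:

```python
def monthly_cost(hourly_rate: float, hours: float, discount: float = 0.0) -> float:
    """Cost of running a VM for `hours` at `hourly_rate`, with an optional
    fractional discount (reserved-instance or spot pricing)."""
    return hourly_rate * hours * (1 - discount)

HOURS_PER_MONTH = 730   # roughly one month of always-on runtime
ON_DEMAND_RATE = 3.00   # hypothetical $/hour for a GPU VM

always_on = monthly_cost(ON_DEMAND_RATE, HOURS_PER_MONTH)                  # pay-as-you-go, 24/7
reserved = monthly_cost(ON_DEMAND_RATE, HOURS_PER_MONTH, discount=0.40)    # assumed ~40% off for a commitment
spot_batch = monthly_cost(ON_DEMAND_RATE, 200, discount=0.70)              # interruptible batch, assumed ~70% off

print(f"on-demand ${always_on:,.0f} | reserved ${reserved:,.0f} | spot batch ${spot_batch:,.0f}")
```

Even with made-up numbers, the pattern holds: reserve what runs predictably, push interruptible work to spot capacity, and never leave a GPU VM running on-demand around the clock.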
A successful AI strategy must include cloud modernization, covering strategy, design, monitoring, and proactive scaling of both AI services and the surrounding cloud ecosystem. Cloud modernization for AI is not just about adding AI services and scaling the existing ones; it is about making cloud applications and AI services efficient, scalable, and cost-effective. This can be achieved with a cloud modernization strategy and a strong FinOps function.