
Resilience at Scale: Why Best Practices and AI Matter More Than We Think

In technology conversations, “best practices” are mentioned everywhere—architecture reviews, governance frameworks, and delivery checklists. They are part of how we design and operate digital platforms. But in many projects, especially those with low or moderate workloads, best practices may feel theoretical. They look good on paper, yet the business impact is not always visible.

I recently worked on a project that challenged this perception. We pushed Azure Batch to operate at over 100,000 vCores, stretching the service's limits and placing significant pressure on Azure Storage, Azure Container Registry, and the networking layer. At this scale, every detail matters. And suddenly, all those Microsoft recommendations that previously seemed optional became essential.

1. Best Practices Deliver Real Value When Systems Run at Real Scale

For smaller systems or early-stage products, it is easy to overlook best practices, because everything seems to work fine without them. Common examples include:

  • Using multiple storage accounts for distribution
  • Minimizing container image sizes for faster pull times
  • Structuring pools to avoid cold starts
  • Designing networks with high-throughput patterns
  • Implementing intelligent retry logic
  • Observing platform rate limits
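Two of the practices above, intelligent retries and respecting rate limits, can be sketched together. The helper below is a minimal illustration in Python, assuming a generic throttled platform call; the function and parameter names are hypothetical, not part of any Azure SDK.

```python
import random
import time

def with_retries(operation, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry a callable with exponential backoff and full jitter.

    'operation' is a stand-in for any throttled platform call,
    such as a storage upload or a container image pull.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise
            # Exponential backoff capped at max_delay, with full jitter
            # so thousands of concurrent workers do not retry in lockstep.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))
```

The jitter matters as much as the backoff: at 100,000 vCores, synchronized retries are themselves a denial-of-service pattern against your own dependencies.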

When the volume is low, ignoring these does not immediately create problems. The business sees no disruption, the team sees no errors, and delivery continues smoothly.

But scale changes everything. When you run tens of thousands of concurrent jobs, any slight inefficiency becomes amplified. A single suboptimal configuration can create delays, bottlenecks, or even system-wide failures. What looked like a “nice-to-have” quickly becomes a critical enabler of reliability and performance.
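As one concrete example of avoiding a single bottleneck, the "multiple storage accounts" practice from the list above can be implemented as deterministic hash partitioning. This is a minimal sketch; the account names are placeholders, not real resources.

```python
import hashlib

# Hypothetical pool of storage accounts; names are illustrative only.
STORAGE_ACCOUNTS = ["batchdata01", "batchdata02", "batchdata03", "batchdata04"]

def account_for_blob(blob_name: str) -> str:
    """Pick a storage account deterministically from the blob name.

    Spreading blobs across accounts spreads request load, and
    per-account throttling limits, across the whole pool instead
    of funneling every job through one account.
    """
    digest = hashlib.sha256(blob_name.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(STORAGE_ACCOUNTS)
    return STORAGE_ACCOUNTS[index]
```

Because the mapping is deterministic, any worker can compute where a blob lives without a central lookup service, which is exactly what you want when tens of thousands of nodes start at once.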

This project reinforced an important truth: best practices only generate visible value when systems are truly tested by scale. Their absence may go unnoticed at first, but it becomes evident as the system grows. This is why we encourage clients to focus not only on functionality but also on the readiness of their systems to scale responsibly—because growth should never come at the cost of stability.

2. How GitHub Copilot and AI Mentor Agents Can Support Better Engineering

The challenge is that implementing best practices requires time, experience, and attention. Delivery teams want to do the right thing, but operational constraints, deadlines, and context-switching often make deep optimization difficult.

This is where AI-assisted engineering brings meaningful support. Tools such as GitHub Copilot help teams follow best practices more naturally, reducing the cognitive load and effort required to “get it right.” They can:

  • Suggest code patterns that comply with Azure recommendations
  • Detect anti-patterns early
  • Accelerate Infrastructure-as-Code implementations
  • Generate validation scripts
  • Surface architectural risks before they become incidents
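The "validation scripts" item above can be as simple as a pre-deployment check that flags risky configurations. The sketch below is the kind of script an AI assistant can help generate; the thresholds and config keys are illustrative assumptions, not official Azure limits.

```python
# Minimal pre-deployment validation sketch. Thresholds are
# illustrative, not official Azure limits.
MAX_IMAGE_SIZE_MB = 2048   # keep container images small for fast pulls
MAX_NODES_PER_POOL = 1000  # hypothetical per-pool ceiling

def validate_pool(config: dict) -> list:
    """Return a list of human-readable findings for a pool config."""
    findings = []
    if config.get("image_size_mb", 0) > MAX_IMAGE_SIZE_MB:
        findings.append("container image exceeds size budget; slows scale-out")
    if config.get("target_nodes", 0) > MAX_NODES_PER_POOL:
        findings.append("pool exceeds node ceiling; split into multiple pools")
    if not config.get("retry_policy"):
        findings.append("no retry policy configured for transient failures")
    return findings
```

Running checks like this in CI turns best practices from tribal knowledge into an enforced gate, which is precisely where AI-generated tooling pays off.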

Looking ahead, the next evolution—AI mentor agents—can act as real-time architectural companions. These agents can provide tailored guidance by analyzing context, highlighting potential risks, and explaining the reasoning behind their suggestions. They will help engineers understand not only what to implement but why it matters. This guidance can make expertise more accessible across teams and regions, supporting delivery excellence at scale.

For me, this reinforces a long-held belief: human ingenuity and intelligent technology together deliver better results. AI tools don't replace experience; they make following best practices easier and more sustainable for teams.

Conclusion

Our experience scaling Azure Batch beyond 100,000 vCores reminded us of a simple but powerful lesson: best practices show their true value only when systems face real pressure. They are what delivers resilience, predictability, and performance at the moment the business needs them most.

AI-powered engineering tools make it easier to adopt best practices, helping teams build scalable, high-quality cloud solutions more efficiently. Combining disciplined engineering with intelligent automation is how we build resilient, future-ready platforms.
