Resilience at Scale: Why Best Practices and AI Matter More Than We Think

 In technology conversations, “best practices” are mentioned everywhere—architecture reviews, governance frameworks, and delivery checklists. They are part of how we design and operate digital platforms. But in many projects, especially those with low or moderate workloads, best practices may feel theoretical. They look good on paper, yet the business impact is not always visible.

I recently worked on a project that challenged this perception. We pushed Azure Batch to operate at over 100,000 vCores, stretching the service's limits and placing significant pressure on Azure Storage, Azure Container Registry, and the networking layer. At this scale, every detail matters. And suddenly, all those Microsoft recommendations that previously seemed optional became essential.

 1. Best Practices Deliver Real Value When Workloads Become Truly Intensive

For smaller systems or early-stage products, it is easy to overlook best practices because everything works fine without them. For example:

  • Using multiple storage accounts for distribution
  • Minimizing container image sizes for faster pull times
  • Structuring pools to avoid cold starts
  • Designing networks with high-throughput patterns
  • Implementing intelligent retry logic (see the sketch after this list)
  • Observing platform rate limits
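
To make the last two points concrete, here is a minimal sketch of the kind of retry logic we mean, written in plain Python with only the standard library. The exception type and the wrapped call are placeholders rather than a real Azure SDK API; production code would catch whatever throttling error its client library raises (for example, an HTTP 429 from Azure Storage) and honour any Retry-After hint the service returns.

```python
import random
import time


class ThrottledError(Exception):
    """Placeholder for whatever throttling error your client library raises."""

    def __init__(self, retry_after=None):
        super().__init__("request was throttled")
        self.retry_after = retry_after  # seconds suggested by the service, if any


def call_with_backoff(operation, max_attempts=6, base_delay=1.0, max_delay=60.0):
    """Run `operation`, retrying throttled calls with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ThrottledError as err:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Prefer the service's own hint; otherwise back off exponentially.
            delay = err.retry_after or min(max_delay, base_delay * 2 ** (attempt - 1))
            # Jitter keeps tens of thousands of workers from retrying in lockstep.
            time.sleep(delay * random.uniform(0.5, 1.5))


# Usage (hypothetical helper names):
# call_with_backoff(lambda: upload_chunk(blob_client, chunk))
```

The jitter is the part that matters most at scale: without it, thousands of nodes that were throttled at the same moment all retry at the same moment, and the spike simply repeats.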

When the volume is low, ignoring these does not immediately create problems. The business sees no disruption, the team sees no errors, and delivery continues smoothly.

But scale changes everything. When you run tens of thousands of concurrent jobs, any slight inefficiency becomes amplified. A single suboptimal configuration can create delays, bottlenecks, or even system-wide failures. What looked like a “nice-to-have” quickly becomes a critical enabler of reliability and performance.

This project reinforced an important truth: best practices only generate visible value when systems are truly tested by scale. Their absence may go unnoticed at first, but it becomes evident as the system grows. This is why we encourage clients to focus not only on functionality but also on the readiness of their systems to scale responsibly—because growth should never come at the cost of stability.

 2. How GitHub Copilot and AI Mentor Agents Can Support Better Engineering

The challenge is that implementing best practices requires time, experience, and attention. Delivery teams want to do the right thing, but operational constraints, deadlines, and context-switching often make deep optimization difficult.

This is where AI-assisted engineering brings meaningful support. Tools such as GitHub Copilot help teams follow best practices more naturally, reducing the cognitive load and effort required to “get it right.” They can:

  • Suggest code patterns that align with Azure recommendations
  • Detect anti-patterns early
  • Accelerate Infrastructure-as-Code implementations
  • Generate validation scripts (see the sketch after this list)
  • Surface architectural risks before they turn into production issues
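
As an illustration of the "validation scripts" point, here is the kind of small checker one might ask Copilot to draft. The configuration keys and thresholds are invented for this sketch; they stand in for whatever conventions a real project uses, and the numbers are not official Azure limits.

```python
import json
import sys

# Illustrative thresholds only; they are not official Azure limits.
MAX_IMAGE_SIZE_MB = 500
MIN_STORAGE_ACCOUNTS = 2


def validate(config):
    """Return human-readable findings for a (hypothetical) batch job configuration."""
    findings = []

    accounts = config.get("storage_accounts", [])
    if len(accounts) < MIN_STORAGE_ACCOUNTS:
        findings.append(
            f"Only {len(accounts)} storage account(s) configured; spreading I/O "
            "across several accounts avoids per-account throttling."
        )

    image = config.get("container_image", {})
    if image.get("size_mb", 0) > MAX_IMAGE_SIZE_MB:
        findings.append(
            f"Container image is {image['size_mb']} MB; large images slow node "
            "start-up when thousands of nodes pull them at once."
        )

    if not config.get("retry_policy"):
        findings.append("No retry policy defined for transient failures.")

    return findings


if __name__ == "__main__":
    with open(sys.argv[1]) as fh:
        for finding in validate(json.load(fh)):
            print("WARNING:", finding)
```

A script like this is trivial to write, which is exactly the point: the tooling lowers the cost of doing the diligent thing, so the diligent thing actually gets done.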

Looking ahead, the next evolution—AI mentor agents—can act as real-time architectural companions. These agents can provide tailored guidance by analyzing context, highlighting potential risks, and explaining the reasoning behind their suggestions. They will help engineers understand not only what to implement but why it matters. This guidance can make expertise more accessible across teams and regions, supporting delivery excellence at scale.

This reinforces my belief that human ingenuity and intelligent technology together deliver better results. AI tools don't replace experience; they make best practices easier to adopt and to sustain.

 Conclusion

Our experience scaling Azure Batch beyond 100,000 vCores reminded us of a simple but powerful lesson: Best practices show their true value only when systems face real pressure, ensuring resilience, predictability, and performance when businesses need them.

AI-powered engineering tools make best practices easier to adopt, helping teams build scalable, high-quality cloud solutions more efficiently. Combining disciplined engineering with intelligent automation is essential for building resilient platforms that are ready for tomorrow's demands.
