
Why back-off mechanisms are critical in cloud solutions

What is a back-off mechanism?
It is one of the most basic mechanisms used for communication between two systems. The core idea is to decrease the frequency of requests sent from system A to system B when there is no data to process or when communication issues are detected.
There are multiple implementations of it, but I'm sure that you already use it, directly or indirectly. A retry mechanism that increases the waiting period between attempts is a back-off mechanism.
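To make the idea concrete, here is a minimal sketch (in C#) of such a polling loop; TryReceiveMessage is a hypothetical stand-in for the real call to system B, and the doubling factor and the 30-second cap are illustrative values only:

using System;
using System.Threading.Tasks;

class BackOffPollingSketch
{
    static readonly TimeSpan InitialDelay = TimeSpan.FromMilliseconds(100);
    static readonly TimeSpan MaxDelay = TimeSpan.FromSeconds(30);

    static async Task PollAsync()
    {
        TimeSpan delay = InitialDelay;

        while (true)
        {
            // Hypothetical call to system B (Service Bus, Storage queue, HTTP endpoint, ...).
            bool gotData = TryReceiveMessage();

            if (gotData)
            {
                // Data was available: go back to the most aggressive polling interval.
                delay = InitialDelay;
            }
            else
            {
                // No data or a transient failure: double the wait, but never exceed the threshold.
                delay = TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, MaxDelay.Ticks));
            }

            await Task.Delay(delay);
        }
    }

    // Placeholder so the sketch compiles; a real implementation talks to the remote service.
    static bool TryReceiveMessage() => false;
}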

Why is it important in cloud solutions?
In contrast with a classical system, a cloud subscription comes at the end of the month with a detailed bill that contains all the costs.
People often discover too late that there are many services where you also pay for each request (transaction) that you make against that service. For example, each request to Azure Storage or Azure Service Bus is billable. Of course, the price is extremely low - Azure Service Bus costs around 4 cents per 1,000,000 requests - but when a system is not written in the right way, you end up with additional costs.
I remember that a few years ago we didn't have a back-off mechanism for an Azure Service Bus Queue, and even when there was no data we checked every 10ms. Guess what happened? At the end of the month, around 30% of the costs came only from this kind of request. Once we implemented the right back-off mechanism, we reduced the transaction costs to under $0.50.
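As a rough back-of-the-envelope calculation: polling every 10ms means about 100 requests per second, roughly 8.6 million requests per day and around 260 million per month for a single listener. At roughly 4 cents per million operations, that is already around $10 per month spent only on empty polling - multiplied by every queue and every instance that polls this way.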

Where is it applicable?
Don't forget that this topic is applicable not only when a cloud service is not reachable, but also in cases when there is no data and you still make requests too often.

How should we implement it?
If you are a developer, you might jump directly to the whiteboard and start designing an algorithm that increases the time interval at a specific rate when there is no data or there are connection problems.
Before doing something like this, check what kind of protocol and what libraries you are already using.
Solved by Protocol
Nowadays, there are many communication protocols that keep an open connection between the two parties. This means that you never have to specify a time interval: the moment there is data on the other end, your system is notified.
Solved by client libraries
All client libraries offered by Microsoft for Azure contain a retry policy mechanism that can be used and extended successfully. I have seen people using a back-off mechanism without knowing it - the client library was already applying it with default values (smile).
I think that in most cases the existing mechanisms are enough to solve our core problems.
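As an illustration, here is a minimal sketch of configuring such a retry policy for Service Bus; it assumes the newer Azure.Messaging.ServiceBus package (the older client libraries available when this post was written expose the same concept through their own retry policies), and the concrete values are examples only:

using System;
using Azure.Messaging.ServiceBus;

class RetryOptionsSketch
{
    static ServiceBusClient CreateClient(string connectionString)
    {
        var options = new ServiceBusClientOptions
        {
            RetryOptions = new ServiceBusRetryOptions
            {
                // Exponential mode: the delay between attempts grows with every retry.
                Mode = ServiceBusRetryMode.Exponential,
                Delay = TimeSpan.FromMilliseconds(200),   // initial delay
                MaxDelay = TimeSpan.FromSeconds(30),      // maximum threshold, driven by the NFRs
                MaxRetries = 5
            }
        };

        return new ServiceBusClient(connectionString, options);
    }
}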

As we can see in the above example, even if we keep increasing the time interval, there is a maximum threshold that we can set based on our business needs (NFRs).

Should I ignore it?
No, you should never ignore it. But don't add extra complexity if you don't need it. Start with a simple mechanism and, based on your needs, develop a more complex one.
If you need a custom back-off mechanism, ask yourself why you are different from the rest of the consumers. You don't want to invest in something that you will not need or will not use at full capacity. It is just extra effort.

References
An extremely useful resource for Azure clients is "Retry service specific guidance" - https://docs.microsoft.com/en-us/azure/best-practices-retry-service-specific


Comments

  1. Indeed, reducing costs might be an advantage; however, back-off algorithms are usually used to avoid congestion and contention in a system. Indeed, most of us are using them without knowing it, since most of our servers use the Ethernet protocol :)

