Auto scaling of App Services and Web Applications has been available in Microsoft Azure for some time. Besides standard metrics like CPU, Memory, and Data In/Out, there is a specific web metric that can be used for scaling – HTTP Queue Length.
Counter definition
It is important to understand from the beginning what this metric represents. Its name can create confusion, especially if you have worked with IIS or similar services in the past. The counter that can be accessed from the Azure Portal represents the total number of active requests inside W3WP. The technical path of the counter would be “W3SVC_W3WP – Active_Requests_ - _Total”.
Naming confusion
This metric created confusion in the past, so it was renamed from HTTP Queue Length to Requests; it shows the total number of requests at a specific moment in time. This change was made only in the metrics monitoring part of the Azure Portal.
Inside the “Scale out” auto scale section you will still find this metric called “HttpQueueLength”, but remember that both names represent the same counter.
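If you want to see what this counter looks like for your own App Service plan before wiring it into auto scale, you can query it through the Azure Monitor metrics API. The sketch below assumes the azure-mgmt-monitor and azure-identity Python packages; the subscription, resource group, plan name, and time range are placeholders, and the exact parameter types and response shapes may differ slightly between SDK versions.

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"        # placeholder
resource_group = "my-resource-group"         # placeholder
plan_name = "my-app-service-plan"            # placeholder

# Resource ID of the App Service plan that hosts the App Service.
plan_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Web/serverfarms/{plan_name}"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Ask for one hour of HttpQueueLength values, averaged per minute.
result = client.metrics.list(
    plan_id,
    metricnames="HttpQueueLength",
    timespan="2024-01-01T10:00:00Z/2024-01-01T11:00:00Z",  # placeholder range
    interval="PT1M",
    aggregation="Average",
)

for metric in result.value:
    for series in metric.timeseries:
        for point in series.data:
            print(point.time_stamp, point.average)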
What does this metric represent
This counter represents the total number of active requests to our App Service. For example, if 5 clients are connected to our App Service at a given moment, the value of HttpQueueLength will be 5.
This value is especially relevant for two kinds of applications. The first are applications that are hit by a high number of requests where the processing time of each request is short. The second case is when requests are executed for a long period of time, which can put pressure on our backend system.
Should I use it for Auto-Scaling?
Yes and no. This is a tricky question. It is the kind of counter that, analyzed on its own, does not provide much information about the current load on the system or whether you need to scale up or down.
Taking into account the duration of the requests, the impact on quality of service is very different when you have 100 requests in parallel that each take 10 minutes and consume 80% of the CPU, versus 1,000 requests in parallel that each take 0.2 seconds.
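To make the difference concrete, a back-of-the-envelope calculation based on Little's law (active requests ≈ arrival rate × request duration) shows how different the traffic behind a similar number of active requests can be. The figures below are just the illustrative numbers from the example above.

# Little's law: active requests ≈ arrival rate x request duration.

# Case 1: 100 requests in parallel, each taking 10 minutes (600 seconds).
arrival_rate_1 = 100 / 600      # ≈ 0.17 requests per second

# Case 2: 1,000 requests in parallel, each taking 0.2 seconds.
arrival_rate_2 = 1000 / 0.2     # = 5,000 requests per second

print(f"Case 1: ~{arrival_rate_1:.2f} req/s keep 100 requests active")
print(f"Case 2: ~{arrival_rate_2:.0f} req/s keep 1000 requests active")

In other words, the same counter value can hide a handful of heavy, CPU-bound requests or a flood of cheap ones, which is why it should not be read in isolation.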
Do not start using this metric from day 0. Gather historical data first and see how you can combine it with other counters provided by App Services. I recommend starting with simple counters like CPU or Memory. After a while, based on the historical data and how the system behaves, you can decide whether to add the Http Queue Length counter.
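As a starting point, an auto scale setting can combine a CPU rule with an HttpQueueLength rule, so the queue length only adds extra signal on top of a counter you already trust. The sketch below uses the azure-mgmt-monitor Python SDK; the resource names, region, thresholds, and capacities are placeholders, and the exact model shapes may vary between SDK versions.

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    AutoscaleProfile,
    AutoscaleSettingResource,
    MetricTrigger,
    ScaleAction,
    ScaleCapacity,
    ScaleRule,
)

subscription_id = "<subscription-id>"     # placeholder
resource_group = "my-resource-group"      # placeholder
plan_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    "/providers/Microsoft.Web/serverfarms/my-app-service-plan"
)

def scale_out_rule(metric_name, threshold):
    # Add one instance when the metric stays above the threshold for
    # 10 minutes (averaged over 1-minute grains), then cool down 10 minutes.
    return ScaleRule(
        metric_trigger=MetricTrigger(
            metric_name=metric_name,
            metric_resource_uri=plan_id,
            time_grain=timedelta(minutes=1),
            statistic="Average",
            time_window=timedelta(minutes=10),
            time_aggregation="Average",
            operator="GreaterThan",
            threshold=threshold,
        ),
        scale_action=ScaleAction(
            direction="Increase",
            type="ChangeCount",
            value="1",
            cooldown=timedelta(minutes=10),
        ),
    )

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

client.autoscale_settings.create_or_update(
    resource_group,
    "httpqueue-autoscale-sketch",         # placeholder setting name
    AutoscaleSettingResource(
        location="westeurope",            # placeholder region
        target_resource_uri=plan_id,
        enabled=True,
        profiles=[
            AutoscaleProfile(
                name="default",
                capacity=ScaleCapacity(minimum="2", maximum="10", default="2"),
                rules=[
                    scale_out_rule("CpuPercentage", 70),     # illustrative threshold
                    scale_out_rule("HttpQueueLength", 100),  # illustrative threshold
                ],
            )
        ],
    ),
)

Only scale-out rules are shown here to keep the sketch short; in practice you would pair each of them with a matching scale-in rule so the plan can shrink back when the load drops.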
Final thoughts
First, remember that Http Queue Length and Requests are the same metric; they represent the total number of requests at a specific moment in time. Be careful how you use this metric in combination with auto scale, because false-positive scaling actions might occur.