
Traffic Manager Overview

Starting from today we have a mechanism that gives us the possibility to control the traffic that comes to our Azure services. Its name is Traffic Manager.
What does this mean?

Performance Load Balancing
Well, the simplest scenario is when we have a service running in different data centers. In this case we want to be able to redirect users to the closest data center. We could write a service that identifies the location of the user and, based on it, redirects him to a specific data center. This problem is solved by the Traffic Manager service. Using the client IP, the service identifies the location of the client and redirects him to the closest data center (the one with the lowest latency).
To be able to monitor the performance of each endpoint you will need to specify a relative path to the resource that is monitored. The monitoring part is pretty simple: the latency of each endpoint resource is measured every 30 seconds. When a request exceeds 10 seconds, or the returned status code is different from 200, more than 4 times in a row, the endpoint will be considered down.
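The monitoring rule described above can be sketched in a few lines of Python. This is an illustration of the logic only, not Azure code; the names (`is_probe_failed`, `endpoint_is_down`, the constants) are my own, and the thresholds are taken from the description above.

```python
FAILURE_THRESHOLD = 4   # consecutive failed probes before "down"
TIMEOUT_SECONDS = 10    # a probe slower than this counts as failed

def is_probe_failed(status_code, elapsed_seconds):
    """A probe fails if it is too slow or returns anything but 200."""
    return elapsed_seconds > TIMEOUT_SECONDS or status_code != 200

def endpoint_is_down(probe_history):
    """probe_history: list of (status_code, elapsed_seconds), oldest first.
    The endpoint is down once the last FAILURE_THRESHOLD probes all failed."""
    recent = probe_history[-FAILURE_THRESHOLD:]
    return (len(recent) == FAILURE_THRESHOLD and
            all(is_probe_failed(code, t) for code, t in recent))
```

For example, an endpoint that answered 200 quickly once but then timed out or returned errors four times in a row would be reported as down, while a single slow response would not be enough.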

Failover Load Balancing
Another scenario that is covered by Traffic Manager is the case when one of our services in a data center goes down. Traffic Manager is able to detect the failure of the service and redirect the traffic to another data center. In this way all the traffic will be redirected to a backup service. We can define the order of the endpoints: if the first endpoint is down, Traffic Manager will try to redirect the traffic to the second endpoint; if the second endpoint is also down, the traffic will be redirected to the third one, and so on.
The performance load balancing policy also monitors the status of the endpoints and will not redirect traffic to an endpoint that is down.
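The failover policy boils down to "walk the endpoints in the configured order and pick the first healthy one". A minimal sketch of that selection, with hypothetical names (this is not an Azure API):

```python
def pick_failover_endpoint(endpoints, health):
    """endpoints: endpoint names in the configured priority order;
    health: dict mapping name -> True if the endpoint is up.
    Returns the first healthy endpoint, or None if all are down."""
    for endpoint in endpoints:
        if health.get(endpoint, False):
            return endpoint
    return None
```

With endpoints ordered `["eu", "us", "asia"]`, traffic goes to `eu` while it is healthy, and falls through to `us` (and then `asia`) only when the endpoints before it are down.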

Round Robin Load Balancing
This is the classic case of load balancing. We have 2 or more endpoints available: the first client is redirected to the first endpoint, the second client to the second one, and so on. This is a simple and very efficient way to do load balancing.
In this case too, the Traffic Manager monitoring component will route traffic only to the endpoints that are up and running.
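Round robin combined with health checks can be sketched as follows. Again, this is only an illustration of the behavior described above, not Azure's implementation; the class and method names are made up.

```python
class RoundRobinBalancer:
    """Cycles through endpoints in order, skipping any that are down."""

    def __init__(self, endpoints):
        self.endpoints = endpoints
        self._next = 0  # index of the endpoint to try first

    def pick(self, health):
        """health: dict mapping endpoint name -> True if the endpoint is up.
        Returns the next healthy endpoint, or None if all are down."""
        for _ in range(len(self.endpoints)):
            endpoint = self.endpoints[self._next]
            self._next = (self._next + 1) % len(self.endpoints)
            if health.get(endpoint, False):
                return endpoint
        return None
```

With three healthy endpoints, consecutive clients are sent to the first, second, and third endpoint in turn; if one goes down, it is simply skipped in the rotation.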

A natural question is where Traffic Manager appears in the request flow. For example, if we have a domain foo.com, we will create a Traffic Manager domain named foo.trafficmanager.net. When a request comes to our website's DNS name, it is redirected to foo.trafficmanager.net. Based on the policy that we use, Traffic Manager redirects the client request to one of our endpoints.
Of course, the latency of our system will increase on the first request, but this value will be very low. In normal cases I would consider it close to zero and not relevant for normal web applications.
Also, you should know that the endpoint resource that is used to check the health of the service needs to be exposed over the HTTP or HTTPS protocol. If your service works with a different protocol, you need to add an HTTP or HTTPS resource – this can be something as simple as a small file.
Another important thing to do after you configure Traffic Manager is to update the DNS resource record to redirect requests from foo.com to foo.trafficmanager.net.
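In practice this is a CNAME record. Using the example names from above, a BIND-style zone file entry could look like this (the exact syntax depends on your DNS provider; note that the zone apex itself usually cannot be a CNAME, so typically a subdomain such as www is pointed at the Traffic Manager name):

```
; zone file for foo.com
www.foo.com.    3600    IN    CNAME    foo.trafficmanager.net.
```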
What do you think about this service? Do you think that you will use it in the near future?

Comments

  1. It was about time to give Traffic Manager a new UI, since the old portal was a bit outdated.

    Anyway, why isn't it possible to tell Azure: just route the traffic automatically to the closest available data center? (if I have the money to host a service in multiple data centers.. :) )


