
Scale Units and Cloud

In this post we will talk about what a scale unit is and what the benefits of the scale unit concept are when we work with a system running in a cloud environment.
What is a scale unit?
We can see a scale unit as a group of resources that work together to serve a specific number of clients or requests. Each scale unit has a 'common' configuration that specifies the resources it needs.
Let’s assume that we have a scale unit that contains:

  • 2 Azure SQL databases
  • 4 Service Bus Namespaces (with 100 Queues per namespace)
  • 8 Worker Roles
  • 3 Web Roles
  • 2 Different storage accounts

Having all of them grouped together, we can test the environment at a specific scale. Otherwise we might try to scale our system indefinitely, but we all know that this is not possible. All the resources in the same scale unit work together for the same purpose.
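To make the 'fixed configuration' idea concrete, here is a minimal sketch in Python of a scale unit described as data. The class and the resource names are hypothetical illustrations of the example list above, not an Azure API:

```python
from dataclasses import dataclass, field
from typing import Dict


# Hypothetical sketch: a scale unit is a fixed, named set of resources.
# The counts mirror the example list above.
@dataclass(frozen=True)
class ScaleUnitDefinition:
    name: str
    resources: Dict[str, int] = field(default_factory=dict)


STANDARD_UNIT = ScaleUnitDefinition(
    name="standard-scale-unit",
    resources={
        "azure_sql_databases": 2,
        "service_bus_namespaces": 4,  # 100 queues per namespace
        "worker_roles": 8,
        "web_roles": 3,
        "storage_accounts": 2,
    },
)
```

Because the definition never changes, every deployed instance of the unit is identical, which is what makes load testing at a known scale meaningful.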
Each scale unit serves a specific number of clients (or requests). Because the scale unit is fixed, we know exactly what its throughput is - the number of requests per second, the number of messages that can be consumed, the number of storage accesses and so on.
In the end we will know exactly how many clients we can serve or manage with each scale unit and what a scale unit costs.
Scaling can be done easily, without affecting the performance of the system, by adding new scale units. Because we know exactly what each scale unit costs, we can also estimate the total cost easily.
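Because the throughput and the cost of one unit are measured once and then stay fixed, capacity planning becomes simple arithmetic. A small sketch with made-up numbers (10,000 clients and $2,500 per month per unit are assumptions for illustration, not real measurements):

```python
import math

# Assumed, illustrative numbers: one scale unit serves 10,000 clients
# and its resources cost $2,500 per month.
CLIENTS_PER_UNIT = 10_000
COST_PER_UNIT_PER_MONTH = 2_500.0


def plan_capacity(expected_clients: int) -> tuple[int, float]:
    """Return the number of scale units needed and the estimated monthly cost."""
    units = max(1, math.ceil(expected_clients / CLIENTS_PER_UNIT))
    return units, units * COST_PER_UNIT_PER_MONTH


units, cost = plan_capacity(34_000)
print(f"{units} scale units, ~${cost:,.0f}/month")  # 4 scale units, ~$10,000/month
```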
The hardest thing is to separate the scale units from each other. Between scale units there should be no communication and no central node (a master). This is hard to accomplish, because large systems are very complex, with a lot of dependencies.
I think that scale units can help us predict the required capacity and scale in a safe way.


In the above example we can see two instances of our scale unit. Each client is mapped to a specific scale unit. There is no communication between scale units. Each scale unit can be hosted in the same data center or in different data centers, based on our needs.
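One simple way to keep the units independent is to make the client-to-unit mapping deterministic, so any front end can compute it without asking a central node. A sketch with hypothetical endpoints:

```python
import hashlib

# Hypothetical scale unit endpoints; any front end with this list can
# compute the same mapping, so no central "master" node is needed.
SCALE_UNIT_ENDPOINTS = [
    "https://su0.contoso.example",
    "https://su1.contoso.example",
]


def scale_unit_for(client_id: str) -> str:
    """Deterministically map a client to one scale unit using a stable hash."""
    digest = hashlib.sha256(client_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(SCALE_UNIT_ENDPOINTS)
    return SCALE_UNIT_ENDPOINTS[index]


print(scale_unit_for("client-42"))
```

In a real system a lookup table or consistent hashing would probably be used instead, because a plain modulo remaps existing clients whenever a new scale unit is added.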

In the future, with the new portal, we will be able to create the provisioning for a scale unit and control it with a few clicks. Azure V2 will allow us to define a JSON file that can be used to provision all the components of our scale unit and connect them to each other, without having to pass the storage account name and key by hand to the worker roles that need this information (we will be able to do this using a script).
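Conceptually, such a provisioning file could look like the sketch below. This is a simplified, hypothetical structure meant only to illustrate declaring a whole scale unit in one place; it is not the real Azure Resource Manager template schema:

```python
import json

# Made-up descriptor: all components of one scale unit, plus the wiring
# between them, declared in a single document and provisioned as a whole.
scale_unit_template = {
    "scaleUnit": "su0",
    "resources": [
        {"type": "sqlDatabase", "count": 2},
        {"type": "serviceBusNamespace", "count": 4, "queuesPerNamespace": 100},
        {"type": "workerRole", "count": 8},
        {"type": "webRole", "count": 3},
        {"type": "storageAccount", "count": 2},
    ],
    # Connections replace hand-copied secrets such as storage account keys.
    "connections": [
        {"from": "workerRole", "to": "storageAccount"},
    ],
}

print(json.dumps(scale_unit_template, indent=2))
```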

In the next post we will look at how to map a system that requires 'some' communication between scale units.
