
Demystifying Azure SQL DTUs and vCore

The main purpose of this post is to describe the differences between DTUs and vCores and the main characteristics of each of them. In addition, we will discover the things that we need to be aware of when we want to migrate from one option to another (e.g., DTU to vCore).
DTUs and vCores are two different purchasing models for Azure SQL through which you get compute, memory, storage, and IO in different ways.

Let's start with the DTU concept and understand better what it represents. Each DTU unit is a blended measure of CPU, memory, and read and write operations. For each DTU unit, the amount of each resource type that is allocated is capped at a specific value. When you need more power, you increase the number of DTUs.
It's a perfect solution for clients that have a preconfigured resource configuration where the consumption of resources is balanced (CPU, memory, IO). When they reach the limit of the resources allocated to them, their requests are automatically throttled, which translates into slower performance or timeouts. In these scenarios, clients can increase the number of DTUs reserved for their instances. Remember that you will be throttled the moment you reach the maximum amount allocated for any one of the DTU components - CPU, memory, or IO.
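Before scaling up, it is worth checking which DTU component is actually the bottleneck. A query along the lines of the following, against the sys.dm_db_resource_stats DMV (run inside the user database; it keeps roughly the last hour of samples), can show this - the column near 100% is the one throttling you:

```sql
-- Recent resource consumption, expressed as a percentage of the
-- current service-tier limits for this database.
SELECT TOP (20)
    end_time,
    avg_cpu_percent,          -- CPU component of the DTU
    avg_data_io_percent,      -- data IO component
    avg_log_write_percent,    -- log write component
    avg_memory_usage_percent  -- memory component
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```

If, for example, avg_cpu_percent sits near 100% while the IO columns stay low, increasing DTUs buys you mostly the CPU headroom you need (plus resources you may not).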
The DTU concept is simple enough to let us understand and reason about the scalability of an Azure SQL Database: when we double the number of DTUs, we double the resources allocated to that database.
The downside of DTUs is that you don't have the flexibility to scale only a specific resource type, like memory or CPU. Because of this, you can end up paying for additional resources without needing or using them. The best example is storage: within a given tier you pay the same price per DTU regardless of whether you use 1GB or 250GB. Even if you see no additional cost for 1GB versus 250GB, the cost of storage is already included in the price of the DTUs.
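Scaling the number of DTUs is a single statement (or a slider in the portal). As a sketch - the database name and the S3 service objective (100 DTUs) below are just example values:

```sql
-- Move the database to the Standard S3 service objective (100 DTUs).
-- The operation is online, but connections may be dropped when it completes.
ALTER DATABASE [MyDb] MODIFY (SERVICE_OBJECTIVE = 'S3');
```

The same statement scales back down later, which makes it easy to script tier changes around predictable load peaks.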

vCores
This model is closer to the classical approach, where you would have a physical or virtual machine and you can scale each resource independently. In contrast to DTUs, where increasing the number of DTUs automatically increases CPU, memory, and IO together, with vCores you have the flexibility to scale each resource independently.
At this moment, scaling is supported on two different axes. One is storage space, where you can scale the database up and down based on how many GB of storage you need; the other is the number of cores (vCores). Each vCore automatically comes with 7GB or 5.5GB of memory (depending on what type of vCore you are using). This is the only limitation at this moment - you can't control the amount of memory independently; it always goes hand in hand with the number of cores that you are using.
At this moment in time, two types of vCore are available (Gen 4 and Gen 5), in combination with two tiers: General Purpose and Business Critical.
Keep in mind that in the case of vCores you pay separately for compute, number of IOs, backup, log storage, and database size.
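Moving a database to a vCore service objective uses the same ALTER DATABASE statement. As an illustrative sketch (the database name and the choice of 2 Gen 5 vCores are placeholders), General Purpose with Gen 5 hardware could look like:

```sql
-- General Purpose tier, Gen 5 hardware, 2 vCores.
-- Storage, IO, backup and log storage are billed separately in this model.
ALTER DATABASE [MyDb] MODIFY (EDITION = 'GeneralPurpose',
                              SERVICE_OBJECTIVE = 'GP_Gen5_2');
```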

A super cool thing about vCores is the ability to use the SQL Server licenses that you already have on-premises. It means that your on-premises licenses with Software Assurance can be applied to vCores, enabling you to save 25-30% of the vCore price. The concept is similar to bringing your own OS license when you use Azure VMs.

DTUs vs. vCores
In the table below you can find a comparison between them from different perspectives.
Relationship between DTUs and vCores
One of the first things that pops up in our minds when we want to migrate from the DTU model to the vCore model is: "How many DTUs are equal to a vCore?".
The generic recommendation from this perspective is the following:
  • 100 DTUs Standard = 1 vCore of General Purpose
  • 125 DTUs Premium = 1 vCore of Business Critical
This is just a starting point, from which you will need to run some performance tests to find the sweet spot for your application.
As you can see, the vCore model makes much more computation power available: 80 vCores are equal to roughly 10,000 DTUs, while in the DTU model you can have a maximum of 4,000 DTUs.

Should I migrate to vCores?
This is a good question. For small and medium applications that don't require too many resources, vCores might be a too-expensive option. In general, the base recommendation is to start looking at the vCore option once you already use 300-350 DTUs. From that point on, vCores might be a better option for you.
Also, for cases when the database size is much bigger than what is offered in the specific DTU tier that you use, vCores offer you the flexibility to have a bigger database with less compute reserved for it. This is perfect for situations where you have big, old databases that are not used too often but still need to be available to internal systems.

Final thoughts
We should keep in mind that vCores are not replacing DTUs. They are just the next level of Azure SQL Database, for complex scenarios where you want the ability to control how much CPU, storage, and memory you have allocated.
I still love DTUs, and for small and medium scenarios DTUs remain my favorite, offering good flexibility at a low price. For more complex situations, vCores will do their job and support our business needs.

