
Demystifying Azure SQL DTUs and vCores

The main purpose of this post is to describe the differences between DTUs and vCores and the main characteristics of each of them. In addition, we will look at the things we need to be aware of when we want to migrate from one option to the other (e.g., DTU to vCore).
DTUs and vCores are two different purchasing models for Azure SQL Database that package compute, memory, storage, and IO in different ways.

DTUs
Let's start with the DTU concept and understand better what it represents. Each DTU unit is a blended bundle of CPU, memory, and read/write operations. For each DTU, the amount allocated for each resource type is capped at a specific value. When you need more power, you increase the number of DTUs.
It's a perfect solution for clients with a preconfigured resource profile where consumption is balanced across CPU, memory, and IO. When they reach the limit of the resources allocated to them, their requests are automatically throttled, which translates into slower performance or timeouts. In these scenarios, clients can increase the number of DTUs reserved for their instances. Remember that you will be throttled the moment you reach the maximum amount allocated for any one of the DTU components - CPU, memory, or IO.
The DTU concept is simple enough to let us reason about the scalability of an Azure SQL Database: when we double the number of DTUs, we double the resources allocated to that database.
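To make the bundled-resources idea concrete, here is a minimal Python sketch. The per-DTU allocations are invented placeholders, not official Azure figures; the point is only that scaling DTUs scales every component together, and throttling triggers as soon as any single component hits its cap.

```python
from dataclasses import dataclass

@dataclass
class DtuBundle:
    dtus: int
    # Illustrative per-DTU allocations - made-up numbers, not Azure's.
    cpu_per_dtu: float = 0.01       # hypothetical CPU share per DTU
    memory_mb_per_dtu: float = 30   # hypothetical memory per DTU
    iops_per_dtu: float = 10        # hypothetical IO operations per DTU

    @property
    def cpu(self) -> float:
        return self.dtus * self.cpu_per_dtu

    @property
    def memory_mb(self) -> float:
        return self.dtus * self.memory_mb_per_dtu

    @property
    def iops(self) -> float:
        return self.dtus * self.iops_per_dtu

    def is_throttled(self, cpu_used: float, memory_used_mb: float,
                     iops_used: float) -> bool:
        # Throttling kicks in when ANY single component hits its cap,
        # even if the other two still have headroom.
        return (cpu_used >= self.cpu
                or memory_used_mb >= self.memory_mb
                or iops_used >= self.iops)

small = DtuBundle(dtus=100)
big = DtuBundle(dtus=200)  # doubling DTUs doubles every component
print(big.memory_mb == 2 * small.memory_mb)  # True
```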
The downside of DTUs is that you don't have the flexibility to scale only a specific resource type, like memory or CPU. Because of this, you can end up paying for resources without needing or using them. The best example is storage: within a given tier you pay the same price per DTU regardless of whether you use 1GB or 250GB. Even if you see no additional cost for 1GB versus 250GB, the cost of storage is already baked into the DTU price.

vCores
This model is closer to the classical approach where you have a physical or virtual machine and can scale each resource independently. In contrast with DTUs, where increasing the number of DTUs automatically increases CPU, memory, and IO together, with vCores you have the flexibility to scale each resource on its own.
At this moment, scaling is supported on two different axes. One is storage space, where you can scale the database up and down based on how many GB of storage you need; the other is the number of cores (vCores). Each vCore automatically comes with 7GB or 5.5GB of memory (depending on what type of vCore you are using). This is the only limitation at this moment - you can't control the amount of memory independently; it always goes hand in hand with the number of cores you are using.
At this moment in time, two types of vCore are available (Gen 4 and Gen 5), in combination with two tiers: General Purpose and Business Critical.
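A small sketch of how memory follows the core count, using the per-core figures quoted above (7GB for Gen 4, 5.5GB for Gen 5 - verify the exact values against the current Azure documentation):

```python
# Memory follows the vCore count; you cannot size it independently.
# Per-core figures are the ones quoted in this post - check the
# current Azure documentation for exact values.
MEMORY_GB_PER_VCORE = {
    "Gen4": 7.0,
    "Gen5": 5.5,
}

def vcore_memory_gb(generation: str, vcores: int) -> float:
    """Memory you get for a given hardware generation and core count."""
    return MEMORY_GB_PER_VCORE[generation] * vcores

print(vcore_memory_gb("Gen5", 8))  # 44.0 GB
print(vcore_memory_gb("Gen4", 8))  # 56.0 GB
```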
Keep in mind that in the case of vCores you pay separately for compute, number of IOs, backup, log storage, and database size.

A super cool thing about vCores is the ability to reuse the SQL Server licenses you already have on-premises. It means that you can apply your on-premises Software Assurance to vCores, saving 25-30% of the vCore price. The concept is similar to bringing your own OS license when you use Azure VMs.
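As a quick back-of-the-envelope illustration (the hourly rate below is a made-up placeholder, not a real Azure price; the 25-30% range is the one mentioned above):

```python
# Hypothetical hourly vCore rate - a placeholder, not a real Azure price.
hourly_rate = 0.50        # USD per vCore-hour (assumed)
vcores = 8
hours_per_month = 730

base_monthly = hourly_rate * vcores * hours_per_month
for savings in (0.25, 0.30):  # the 25-30% range quoted above
    discounted = base_monthly * (1 - savings)
    print(f"{savings:.0%} savings: ${base_monthly:,.2f} -> ${discounted:,.2f}")
```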

DTUs vs. vCores
In the table below, you can find a comparison between them from different perspectives.
Relationship between DTUs and vCores
One of the first things that pops into our minds when we want to migrate from the DTU model to the vCore model is: "How many DTUs are equal to a vCore?".
The generic recommendation from this perspective is the following:
  • 100 DTUs Standard = 1 vCore of General Purpose
  • 125 DTUs Premium = 1 vCore of Business Critical
This is just a starting point, from which you will need to run some performance tests to find the sweet spot for your application.
As you can see, vCores give you more than double the maximum compute power: 80 vCores are roughly equal to ~10,000 DTUs, while the DTU model tops out at 4,000 DTUs.
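Here is that rule of thumb captured as a small helper function - a starting-point estimate only, to be validated with the performance tests mentioned above:

```python
import math

# Rule-of-thumb ratios from the recommendation above:
# 100 Standard DTUs ~ 1 General Purpose vCore,
# 125 Premium DTUs ~ 1 Business Critical vCore.
DTUS_PER_VCORE = {
    ("Standard", "GeneralPurpose"): 100,
    ("Premium", "BusinessCritical"): 125,
}

def estimate_vcores(dtus: int, dtu_tier: str, vcore_tier: str) -> int:
    """Starting-point estimate only - validate with performance tests."""
    ratio = DTUS_PER_VCORE[(dtu_tier, vcore_tier)]
    return max(1, math.ceil(dtus / ratio))

print(estimate_vcores(400, "Standard", "GeneralPurpose"))   # 4
print(estimate_vcores(500, "Premium", "BusinessCritical"))  # 4
```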

Should I migrate to vCores?
This is a good question. For small and medium applications that don't require too many resources, vCores might be too expensive an option. In general, the base recommendation is to start looking at the vCore option once you already use 300-350 DTUs. From that point on, vCores might be a better option for you.
Also, for cases when the database size is much bigger than what is offered on the specific DTU tier you use, vCores offer you the flexibility to have a bigger database with less compute reserved for it. This is perfect for situations where you have big, old databases that are not used too often but still need to be available for internal systems.
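Putting the two heuristics from this section together in a rough sketch (the 300-DTU threshold from above, plus the case where the database outgrows the storage cap of its DTU tier; the numbers in the example are illustrative):

```python
def consider_vcores(current_dtus: int, db_size_gb: float,
                    dtu_tier_max_storage_gb: float) -> bool:
    """Rough heuristic from this post - not an official sizing rule.

    - Around 300+ DTUs, vCores start to be worth evaluating.
    - If the database outgrows the storage cap of its DTU tier,
      vCores let you buy storage without buying more compute.
    """
    heavy_compute = current_dtus >= 300
    outgrown_storage = db_size_gb > dtu_tier_max_storage_gb
    return heavy_compute or outgrown_storage

# Example: a 400GB archive database on a tier capped at 250GB.
print(consider_vcores(current_dtus=50, db_size_gb=400,
                      dtu_tier_max_storage_gb=250))  # True
```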

Final thoughts
We should keep in mind that vCores are not replacing DTUs. They are just the next level of Azure SQL Database, for complex scenarios where you want the ability to control how much CPU, storage, and memory you have allocated.
I still love DTUs, and for small and medium scenarios they remain my favorite, offering good flexibility at a low price. For more complex situations, vCores will do the job and support our business needs.
