
The real difference between an Azure VM with or without SSD

I want to talk about the real difference between an Azure VM with and without SSD. This is not a post with charts and artificial benchmarks; it is just a real story from the field.
A colleague from work came to me complaining about a performance issue related to SQL Server. They were running a SQL Server instance on an Azure VM with Linux. The database structure was not too complex and the database size was acceptable.
Every few hours a job has to be executed on the database. It does a lot of data processing and usually takes around 1 hour. This duration is not acceptable: there is a clear NFR that requires the task to finish in under 30 minutes.
An audit was done on the VM and the database, and it was pretty clear that there was a problem with read and write operations. A lot of activity was happening at that level, pushing memory and storage utilization to high levels.
The DB specialists reviewed the database structure and the job. Unfortunately, there were not many things they could optimize. They also tested different things, like enabling/disabling various cache levels and other SQL Server configurations, but without success.
The IT team also checked the Linux configuration and looked for an issue with the VM itself, but, as with the database, nothing relevant was found except that disk and memory usage were near their limits.
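As a side note for anyone chasing a similar problem: SQL Server exposes per-file I/O statistics through its DMVs, and a query along the following lines (a sketch, not the exact one used during our audit) shows the average read and write latency per database file. Averages of tens of milliseconds usually point at slow storage.

-- Average I/O latency per database file (values are cumulative since the last restart)
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name AS file_path,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;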
VM Resources
When we looked at the VM type, we discovered that it was an A4 with 8 cores and 14 GB of memory. In theory this should be more than enough, with plenty of memory and CPU.
When we looked at the storage, we noticed that behind the disk there was a normal HDD. Combined with the high disk and memory consumption reported by the DB and IT teams, this was the first real clue. A system like SQL Server typically tries to use more memory when the disk is too slow and cannot keep up with the load.
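One quick way to see this kind of memory pressure (again, just a sketch based on the standard performance counters, not something specific to this setup) is to check the Page Life Expectancy counter. When the disk cannot keep up, pages are evicted from the buffer pool quickly and this value stays low or keeps dropping.

-- Page Life Expectancy: how long (in seconds) a data page stays in the buffer pool.
-- A low, constantly dropping value suggests the buffer pool is being churned.
SELECT [object_name], counter_name, cntr_value AS seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy';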
We migrated the VM from the A4 to a D4s v3, which has only 4 cores and 16 GB of memory but sits on a powerful SSD offered by Premium Storage.
Surprise: the SQL job went from more than an hour down to 7 MINUTES. WOW! That is a big difference, driven mostly by the storage type.
Less than 50% of the memory is now consumed, and the boost from the SSD makes the job fly. The funny thing is that we even pay less: an A4 cost us around €255/month, while a D4s v3 is less than €150/month.

Lesson learned
Before deciding what kind of machine you want to use in Azure, put down on paper what kind of resources you will need most. Based on this, choose the VM that best suits your needs, and don't forget that Microsoft has good documentation on the different VM types (General Purpose, Compute Optimized, Memory Optimized, Storage Optimized, GPU and High Performance Compute).
And yes, play with different VM configurations to see what works best for your needs.


