
Encrypt binary content stored on Azure Storage out of the box - Azure Storage Service Encryption for Data at Rest

Small things make the real difference between a good product and a great one. In this post we will talk about why Azure Storage Service Encryption for Data at Rest is so important.

When you decide to store confidential data in the cloud or in any external data source, there has to be trust between you and the storage provider (in our case Microsoft Azure). Many times this is not enough. There are also laws that force you to encrypt the content from the moment the binary content leaves your network until it reaches the next destination or is accessed again.
In this context, even if you trust Microsoft or any other cloud provider enough, this will not be enough, and no paper or certification will stand in front of the law. For industries like banking or healthcare these scenarios are common, and migration to the cloud is hard, even impossible in some situations.

Encryption Layers
When we discuss encryption and security of data, there are multiple layers where we need to provide encryption. In general, when we build an application that moves data or accesses it from a secure client environment, we need to provide security at:

  • Transport Layer
  • Storage Layer

If we have a system that transfers data from location A to location B, then we might have:

  • Transport Layer
  • Transit Layer

But in the end, the Storage and Transit Layers are the same thing. In one case we persist content for a long time; in the other we store data only for a specific time interval.

Transport Layer
At the Transport Layer, Azure Storage supports HTTPS, which automatically secures our transport. Unfortunately, we are not allowed to use our own certificates; only Microsoft certificates are allowed. In most cases this is enough and is not a blocker.
Also, for normal consumers, cloud provider certificates are safer than custom client certificates.
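Forcing HTTPS is, for example, just a matter of the endpoints protocol in the storage connection string (the account name and key below are placeholders):

```
DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=<account-key>
```

With `DefaultEndpointsProtocol=https`, the client libraries will address the `https://` endpoints of the storage account by default.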

Storage Layer
What we had until now
Until now, we didn’t have any mechanism to encrypt content at the storage layer. The only mechanism available was to encrypt content before sending it on the wire. This solution works great when you have enough CPU power or don’t need to encrypt too much traffic. Otherwise, client-side encryption is expensive and requires the user to manage the encryption keys himself. This is an out-of-the-box feature offered by the Azure SDK: the client libraries allow us to encrypt content based on our own keys, and to secure and control who has access to these keys we can use Azure Key Vault. This mechanism is great, but we are the ones who need to manage everything, and the encryption is done on the client side.
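The client-side approach can be sketched with any AES-256 implementation. Here is a minimal illustration using the openssl command line (the file names and payload are made up, and a real solution would retrieve the key from Azure Key Vault instead of generating it locally):

```shell
# Client-side encryption sketch: encrypt the content locally with AES-256
# before it ever leaves our machine, using a key that we manage ourselves.
printf 'confidential payload' > plain.bin

# Generate a random 256-bit key and a 128-bit IV (hex-encoded).
KEY=$(openssl rand -hex 32)
IV=$(openssl rand -hex 16)

# Encrypt before uploading the blob...
openssl enc -aes-256-cbc -K "$KEY" -iv "$IV" -in plain.bin -out cipher.bin

# ...and decrypt after downloading it again.
openssl enc -d -aes-256-cbc -K "$KEY" -iv "$IV" -in cipher.bin -out roundtrip.bin

cmp -s plain.bin roundtrip.bin && echo "round-trip OK"
```

Only `cipher.bin` would ever be sent to Azure Storage; the key never leaves our control, which is exactly the burden (and the cost) of client-side encryption.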

What we have starting from now
Starting from now, Microsoft Azure allows us to do this encryption directly at the REST endpoint, using Azure Storage Service Encryption for Data at Rest.
Long name, but the idea is very simple. All content stored in Azure Storage will be encrypted automatically by Azure before being written to disk. The moment we want to access the information, the content is decrypted before being sent back to us.

All these activities are transparent to the user. The client doesn't need to do anything special. Once encryption is activated per Azure Storage Account from the Azure Portal, all content written from that moment on will be encrypted.

Encryption Algorithm
The encryption algorithm used by Azure Storage at this moment in time is AES-256 (Advanced Encryption Standard with a key length of 256 bits).
This is a well-known standard, accepted and used by governments and companies around the world. It is part of the ISO/IEC 18033-3 standard, being safe enough to be used in most industries.

Encryption Key Management
Key management is done fully by Microsoft. Clients are not allowed to bring their own keys.

Content Replication
If you have activated the geo-replication feature, all content written in the main region will be encrypted in the geo-replicas as well.

What we can encrypt
At this moment in time we can encrypt any kind of content that is stored in Blobs (Block Blobs, Append Blobs and Page Blobs), including VHDs and OS disks.
There is no way to encrypt content stored in Azure Tables, Azure Files or Azure Queues.

What happens for content that already exists on the Azure Storage
You are allowed to activate this feature at any time after you create an Azure Storage account. Once you activate it, all content written from that moment on will be encrypted. Existing content will remain in 'clear text'.
If you need to encrypt content that was already written to your storage, you need to read and write it again. Tools like AzCopy can be used with success in this scenario.
A similar thing happens when you disable encryption: from that moment on, all content will be written in 'plain text', while existing content remains encrypted until the next write.
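As a hypothetical sketch, re-writing a container with the Windows AzCopy tool of that era could look like the following (account, container, local paths and keys are all placeholders; check the flags of the AzCopy version you actually have):

```
AzCopy /Source:https://myaccount.blob.core.windows.net/data /Dest:D:\backup\data /SourceKey:<storage-key> /S
AzCopy /Source:D:\backup\data /Dest:https://myaccount.blob.core.windows.net/data /DestKey:<storage-key> /S
```

Downloading and re-uploading every blob counts as a new write, so after the second command the content is stored encrypted.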

Azure Storage Account Types
Only the new, ARM-based storage accounts support encryption. Azure Storage Accounts created in the classical format (Classic Storage Accounts) don't support encryption.

There is no additional fee for this service.

How to activate this feature
This feature can be activated from Azure Portal or using PowerShell.
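Using PowerShell, it might look like this (a sketch based on the AzureRm module of that era; the resource group and account names are placeholders):

```powershell
# Enable Storage Service Encryption for the Blob service
# on an existing ARM storage account.
Set-AzureRmStorageAccount -ResourceGroupName "my-resource-group" `
    -Name "mystorageaccount" `
    -EnableEncryptionService Blob
```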

This is a great feature that can be used with success to offer end-to-end encryption of data, from the moment data leaves your premises until you get it back, without the extra costs of a custom implementation.
Only by activating this feature and using HTTPS, you get all this out of the box. Cool!

