
Encrypt binary content stored in Azure Storage out of the box - Azure Storage Service Encryption for Data at Rest

Small things make the real difference between a good product and a great one. In this post we will talk about why Azure Storage Service Encryption for Data at Rest is so important.

Context
When you decide to store confidential data in the cloud or in any external data source, there is an implicit trust relationship between you and the storage provider (in our case Microsoft Azure). Many times this is not enough. There are also laws that force you to encrypt the content from the moment it leaves your network until it reaches its destination or is accessed again.
In this context, even if you trust Microsoft or any other cloud provider, trust alone will not be enough, and no paper or certification will stand in front of the law. For industries like banking or healthcare these kinds of scenarios are common, and migration to the cloud is hard or even impossible in some situations.

Encryption Layers
When we discuss encryption and data security, there are multiple layers where we need to provide encryption. In general, when we build an application that moves data or accesses it from a secure client environment, we need to provide security at:

  • Transport Layer
  • Storage Layer

If we have a system that transfers data from location A to location B, then we might have:

  • Transport Layer
  • Transit Layer

But in the end, the Storage and Transit Layers are the same thing. In one case we persist content for a long time; in the other we store data only for a specific time interval.

Transport Layer
At the Transport Layer, Azure Storage supports HTTPS, which automatically secures our transport. Unfortunately, we are not allowed to use our own certificates; only Microsoft certificates are used. In most cases this is enough and is not a blocker.
Also, for most consumers, the cloud provider's certificates are safer than custom client certificates.
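
Making sure every call goes over HTTPS is just a matter of the connection string used by the storage client. A minimal sketch (the account name and key below are placeholders):

    using Microsoft.WindowsAzure.Storage;

    // DefaultEndpointsProtocol=https makes every SDK call travel over TLS,
    // using the Microsoft certificate of *.core.windows.net.
    CloudStorageAccount account = CloudStorageAccount.Parse(
        "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<storage-key>");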

Storage Layer
What we had until now
Until now, we didn't have any mechanism to encrypt content at the storage layer. The only mechanism available was to encrypt the content before sending it on the wire. This solution works great when you have enough CPU power or you don't need to encrypt too much traffic. Otherwise, client-side encryption is expensive and requires the user to manage the encryption keys himself. This is an out-of-the-box feature offered by the Azure SDK: the client libraries allow us to encrypt content based on our own keys, and to secure and control who has access to these keys we can use Azure Key Vault. This mechanism is great, but we are the ones who need to manage everything, and the encryption is done on the client side.
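
As a rough sketch of this client-side approach (the account, container and key names below are made up; in a real system the RSA key would be kept in Azure Key Vault and resolved with KeyVaultKeyResolver, not generated on the fly):

    using Microsoft.Azure.KeyVault;                 // RsaKey
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    CloudStorageAccount account = CloudStorageAccount.Parse(
        "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<storage-key>");
    CloudBlockBlob blob = account.CreateCloudBlobClient()
                                 .GetContainerReference("documents")
                                 .GetBlockBlobReference("contract.pdf");

    // Our own key; the SDK generates a content key per blob and wraps it with it.
    RsaKey rsaKey = new RsaKey("private:contracts-key");

    // The content is encrypted locally, before a single byte leaves the machine.
    BlobRequestOptions uploadOptions = new BlobRequestOptions
    {
        EncryptionPolicy = new BlobEncryptionPolicy(rsaKey, null)
    };
    blob.UploadFromFile(@"contract.pdf", options: uploadOptions);

    // On download, the same key (or an IKeyResolver that can find it) must be
    // supplied, otherwise the SDK cannot unwrap the content key and decrypt.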

What we have starting from now
Starting from now, Microsoft Azure allows us to do this encryption directly at the REST endpoint, using Azure Storage Service Encryption for Data at Rest.
It is a long name, but the idea is very simple: all content stored in Azure Storage is encrypted automatically by Azure before being written to disk. The moment we want to access the information, the content is decrypted before being sent back to us.

All these activities are transparent to the user; the client doesn't need to do anything special. Once encryption is activated per Azure Storage Account from the Azure Portal, all content written from that moment on will be encrypted.
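
To illustrate the 'transparent' part, the sketch below (names are placeholders again) is a plain write and read, with no policy and no keys; with Storage Service Encryption enabled on the account, the bytes still end up encrypted on disk:

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    CloudStorageAccount account = CloudStorageAccount.Parse(
        "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<storage-key>");
    CloudBlockBlob blob = account.CreateCloudBlobClient()
                                 .GetContainerReference("documents")
                                 .GetBlockBlobReference("note.txt");

    // No encryption policy, no keys: Azure encrypts before writing to disk
    // and decrypts before sending the content back.
    blob.UploadText("confidential note");
    string roundTripped = blob.DownloadText();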

Encryption Algorithm
The encryption algorithm used by Azure Storage at this moment in time is AES-256 (Advanced Encryption Standard with a key length of 256 bits).
This is a well-known standard, accepted and used by governments and companies around the world. It is covered by the ISO/IEC 18033-3 standard and is safe enough to be used in most industries.

Encryption Key Management
Key management is done fully by Microsoft. Clients are not allowed to bring their own keys.

Content Replication
If you have activated the geo-replication feature, all content written in the main region and in the geo-replicas will be encrypted.

What we can encrypt
At this moment in time we can encrypt any kind of content that is stored in Blobs (Block Blobs, Append Blobs and Page Blobs), including VHDs and OS disks.
There is no way to encrypt content that is stored in Azure Tables, Azure Files or Azure Queues.

What happens to content that already exists in Azure Storage
You are allowed to activate this feature at any time after you create an Azure Storage account. Once you activate it, all content written from that moment on will be encrypted; existing content will remain in 'clear text'.
If you need to encrypt content that was already written to your storage, you need to read and write it again. Tools like AzCopy can be used with success in these scenarios, as in the sketch below.
A similar thing happens when you disable encryption: from that moment on, all content will be written in 'plain text', while existing content will remain encrypted until the next write.
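
One possible way to do that rewrite with AzCopy (the classic Windows AzCopy syntax; account, container names and keys below are placeholders) is to copy the blobs into a new container after encryption was enabled, so that every blob is written, and therefore encrypted, again:

    AzCopy /Source:https://myaccount.blob.core.windows.net/documents ^
           /Dest:https://myaccount.blob.core.windows.net/documents-encrypted ^
           /SourceKey:<storage-key> /DestKey:<storage-key> /S

After the copy completes, you can delete the old container and repoint your clients to the new one.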

Azure Storage Account Types
Only the new storage accounts, created with Azure Resource Manager (ARM), support encryption. Azure Storage Accounts created in the classic format (Classic Storage Accounts) don't support encryption.

Price
There is no additional fee for this service.

How to activate this feature
This feature can be activated from the Azure Portal or using PowerShell.
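
A minimal PowerShell sketch (the resource group and account names are placeholders, and the cmdlets assume the AzureRM module available at the time of writing):

    # Enable Storage Service Encryption for the Blob service of an ARM account.
    Login-AzureRmAccount
    Set-AzureRmStorageAccount -ResourceGroupName "MyResourceGroup" `
                              -Name "mystorageaccount" `
                              -EnableEncryptionService Blob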


Conclusion
This is a great feature that can be used with success to offer end-to-end encryption of data, from the moment it leaves your premises until you get it back, without the extra cost of a custom implementation.
Just by activating this feature and using HTTPS, you have all this out of the box. Cool!
