
Azure Storage - Archive Tier | A perfect solution for storing audit data

There are two new features of Azure Storage that will make the end of 2017 very interesting.

New Archiving Tier
A new archiving tier is now available for blob storage. In addition to the Cool and Hot access tiers, we now have an Archive tier. In contrast to the existing ones, it was designed for situations where you need to archive data for long periods.
An interesting fact, in comparison with Cool storage, is related to the SLA. The availability SLA is the same as for Cool storage – 99%. In other words, the Archive tier is as secure and durable as Cool storage, but much more cost-efficient. From what I can see now, it is more than 5 times cheaper than Cool storage.
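As a back-of-the-envelope comparison, here is a short Python sketch. The per-GB prices below are illustrative placeholders (real prices vary by region and change over time), chosen only so that the Archive/Cool ratio matches the rough 5x estimate above.

```python
# Back-of-the-envelope cost comparison for a 5-year audit archive.
# Prices are illustrative placeholders, not real Azure pricing.
cool_per_gb_month = 0.01      # assumed Cool price per GB per month
archive_per_gb_month = 0.002  # assumed Archive price (~5x cheaper)

size_gb = 10 * 1024           # 10 TB of audit data
months = 5 * 12               # 5-year retention period

cool_cost = size_gb * cool_per_gb_month * months
archive_cost = size_gb * archive_per_gb_month * months

print(f"Cool:    ${cool_cost:,.2f}")
print(f"Archive: ${archive_cost:,.2f}")
```

Even with placeholder prices, the point stands: over a multi-year retention window the tier choice dominates the storage bill.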

The new tier goes hand-in-hand with the current trend of moving existing infrastructures to Azure. In many situations, regulatory requirements mean you need an archiving solution for audit data.
Audit data needs to be stored for at least 5 years. With the current prices of the Hot and Cool tiers, it was hard to build something like this inside Azure. The Cool tier made our lives easier by reducing storage costs, but on-premises archive solutions remained competitive. This is because, in most cases, you almost never touch the audit archive; the situations where you need to access this data are few and isolated.
For situations like this, the Archive tier is a perfect fit. The only thing we need to keep an eye on is the price of this tier versus the TCO of on-premises archiving solutions.

Access tier switch
From now on, we can change the tier of our storage account without having to move the data from one storage account to another. In addition to this, we can specify and change the tier at the blob level.
This means we can have the full benefits of the Hot tier while we write the audit data. Once we finish writing, we can change the tier from Hot to Archive and forget about it.
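The write-Hot-then-Archive lifecycle can be sketched with a toy model. The Blob class below is purely illustrative (it is not the Azure SDK); it only mirrors the per-blob tier semantics described above.

```python
# Toy model of per-blob access tiers: write while Hot, then switch
# the finished blob to Archive. Tier names mirror Azure's; the class
# itself is illustrative, not the real SDK.

class Blob:
    def __init__(self, name, tier="Hot"):
        self.name = name
        self.tier = tier
        self.data = b""

    def append(self, chunk):
        # Archived content cannot be touched without rehydration.
        if self.tier == "Archive":
            raise RuntimeError("blob is archived; rehydrate it first")
        self.data += chunk

    def set_tier(self, tier):
        # With the new feature, the tier is set per blob,
        # not per storage account.
        self.tier = tier

audit = Blob("audit-2017.log")
audit.append(b"user=alice action=login\n")
audit.set_tier("Archive")  # done writing: archive it and forget about it
```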
The combination of these two new features makes us rethink the archiving solution and look to Azure as an end-to-end solution, including archiving capabilities.

Do not forget
Once you change a blob's tier from Cool to Archive, you will no longer be able to read its content. This means that if you need to execute read operations on that content, you need to change the tier from Archive back to Cool.
The operation of changing the tier back is called rehydration and, for now, can take up to 15 hours (for a blob under 50 GB). I would say this is the biggest difference between the Cool and Archive tiers.
This is acceptable in most cases, because in general you do not need to access audit data, and when you do, you can afford to wait half a day for it.
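The rehydration constraint can be sketched in the same toy style. The class below is illustrative only (not the SDK), and the synchronous rehydration_complete call stands in for an asynchronous operation that in reality can take hours.

```python
# Toy sketch of rehydration: content in the Archive tier cannot be
# read until the blob is moved back to Cool, which is asynchronous
# and slow in the real service.

class ArchivedBlob:
    def __init__(self, data):
        self._data = data
        self.tier = "Archive"
        self.rehydrating = False
        self._target = None

    def read(self):
        if self.tier == "Archive":
            raise RuntimeError("cannot read an Archive-tier blob; rehydrate first")
        return self._data

    def rehydrate(self, target="Cool"):
        # In Azure this is just a tier change request; completion
        # can take up to ~15 hours for blobs under 50 GB.
        self.rehydrating = True
        self._target = target

    def rehydration_complete(self):
        # Invoked when the service finishes moving the data
        # (instantaneous only in this toy model).
        self.tier = self._target
        self.rehydrating = False

blob = ArchivedBlob(b"audit records")
blob.rehydrate()             # request Archive -> Cool
blob.rehydration_complete()  # in reality: wait hours for this
assert blob.read() == b"audit records"
```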

The Archive tier promises not only better prices but also better solutions that can respond to our clients' needs. Let's keep a close eye on it.

