
(Part 2) Azure Storage - Cold Tier | Business use cases where we could use it

In the last post we talked about the Cold Storage tier. Together we identified its main features and advantages. At the same time we discovered that there are some things we need to take into account, such as the cost of switching between one tier and another, or the extra charge that applies when we replicate Cold Storage content to another region.
This post presents different use cases where the cold storage tier is a better option for storing content. Cases where cold storage might not be the best choice will also be taken into consideration.

Archiving logs and raw data
Because of different requirements we might need to store content for a long period of time – 1, 5 or even 20 years. For these scenarios we need to identify a cheap location where we can store this data at minimal cost, while at the same time being sure that the content will not be lost or damaged over time.
A good example is audit logs, which may need to be stored for longer than we can imagine. For a scenario like this, we need a cheap location where we can drop all the audit data and store it. There are two different approaches for this:
       1. (processing and analytics required) Drop the audit data in the Hot tier for a specific period of time. Once we no longer need to process the data, we can move it to the Cold tier.
       2. (no processing or analytics required) Dump all the audit data directly to the Cold tier, without storing it in an intermediate location like the Hot tier.
Another option for the first case, when we also need to run analytics or reporting over the audit data, is to dump the same content in two different locations:
  • Transfer the content to a storage that allows us to run analytics or reporting (Hot tier, Azure Data Lake, Azure SQL)
  • Dump the content to the Cold tier for archiving

Using this approach, there is no need to transfer data from one storage to another, and no changes are made to the storage that is responsible for archiving once the data is written.
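The dual-write approach above can be sketched as a small routing function. This is only an illustration – the function name and the destination labels are hypothetical, not a real API; in a real system each destination would map to an actual upload (for example with the Azure Storage SDK).

```python
# Minimal sketch of the dual-write idea: every audit record is always
# archived to the cold tier, and additionally sent to an analytics-friendly
# store (hot tier, Azure Data Lake, Azure SQL, ...) when analytics is needed.
# All names here are hypothetical illustrations.

def route_audit_record(record_name, analytics_required):
    """Return the list of (destination, blob_name) pairs for one record."""
    destinations = [("cold-archive", record_name)]  # archive copy, always written
    if analytics_required:
        # second copy goes to a store that supports processing/reporting
        destinations.append(("hot-analytics", record_name))
    return destinations

# Example: with analytics required, the record is written to both locations.
print(route_audit_record("audit/2016/03/entry-001.json", True))
```

Because the archive copy is written once and never rewritten, the cold-tier side of this routing never incurs tier-change charges.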

Video or image streaming
An interesting and attractive case for the cold tier is when we need to store the video stream of a surveillance camera. This kind of stream is usually not accessed once it has been created. Only in special situations will you want to analyze something that was recorded in the past.
This is the perfect situation to use cold storage – low storage cost, data is read rarely, and you will almost never want to read all of it.
On top of this, you can define policies that automatically delete the content of your storage based on how old the data is. For example, a simple folder structure might help you during the cleanup process.
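The folder-based cleanup idea can be sketched as follows. This assumes a date-based virtual folder layout (`yyyy/MM/dd/...` – a common convention, not something Azure enforces); the function only computes which day prefixes are past the retention period, and a real implementation would then list and delete the blobs under each returned prefix.

```python
from datetime import date, timedelta

def expired_day_prefixes(today, retention_days, oldest_day):
    """Return the 'yyyy/MM/dd/' prefixes whose content is past retention.

    Assumes recordings are stored under a date-based virtual folder layout,
    e.g. 'camera1/2016/03/05/clip-001.mp4' (hypothetical example name).
    """
    cutoff = today - timedelta(days=retention_days)
    prefixes = []
    day = oldest_day
    while day < cutoff:
        prefixes.append(day.strftime("%Y/%m/%d/"))
        day += timedelta(days=1)
    return prefixes

# Keep 7 days of footage; everything older is eligible for deletion.
print(expired_day_prefixes(date(2016, 3, 10), 7, date(2016, 3, 1)))
# → ['2016/03/01/', '2016/03/02/']
```

Listing by prefix keeps the cleanup cheap: you only enumerate the blobs that are actually going to be deleted, not the whole container.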
A similar case is in hospitals, where X-rays need to be archived, or in the banking industry, where all scanned documents need to be archived.
Dumping such content directly to the cold tier is the best option you have if you want to optimize costs and at the same time be sure that you don't lose the content during the required retention period.

Temporary storage until all data is collected
There are situations where we work with large data sets that need to be fully collected before we analyze them. A similar case is content that is collected slowly from different sources (for example sensors), where only after a few months do we want to process the data.
For situations like this it is less expensive to put the data in the cold tier and, when (and if) necessary, move it to the hot tier or to another system.
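A rough way to see why this is cheaper is to compare the two strategies over the collection period. The sketch below is a simplified cost model, not real Azure pricing – all rates are parameters you would fill in from the current price list, and it ignores transaction and bandwidth charges apart from the one-time tier-change cost mentioned in the previous post.

```python
def collection_phase_cost(gb, months, hot_gb_month, cold_gb_month, tier_change_gb):
    """Compare storing slowly collected data in hot vs. cold during collection.

    Simplification: the full volume `gb` is assumed present the whole time;
    a finer model would integrate over the growth curve. All rates are
    hypothetical parameters, not real Azure prices.
    """
    # Strategy A: keep everything in the hot tier until processing starts.
    hot_only = gb * hot_gb_month * months
    # Strategy B: collect in the cold tier, then pay once to move it to hot.
    cold_then_promote = gb * cold_gb_month * months + gb * tier_change_gb
    return hot_only, cold_then_promote

# Purely illustrative numbers, NOT real Azure prices:
hot, cold = collection_phase_cost(gb=1000, months=6,
                                  hot_gb_month=0.02, cold_gb_month=0.01,
                                  tier_change_gb=0.01)
print(hot, cold)  # the longer the collection phase, the better cold looks
```

The one-time promotion cost is paid per GB regardless of how long collection took, so the longer the collection phase, the more the cold-first strategy wins.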

Of course, there are many other situations where the Cold tier would help us a lot. It is impossible to cover all of them, but the three above are a good starting point.

When we should not use cold tier
This is great, but as we saw in the previous post, there are some operations on the cold tier that generate extra cost. For cases where we know that read operations will be very frequent, the hot tier is a better solution for us.
Another use case where the hot tier might be a better solution is when we collect data, store it, process it, and only at the end archive it for a long period of time. In this case, the best option is to start with the hot tier, where the data is persisted until we finish the processing part. Once the data has been processed, we can move it to the cold tier.
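The read-frequency trade-off can be made concrete with a small break-even calculation. Again, this is only a sketch with placeholder parameters – plug in the real per-GB storage and retrieval prices from the Azure price list; the point is that cold-tier reads carry the extra retrieval charge discussed in the previous post.

```python
def cheaper_tier(gb, reads_gb_month, hot_gb_month, cold_gb_month, cold_read_gb):
    """Pick the cheaper tier for one month, given how many GB are read.

    All rates are hypothetical parameters; hot-tier reads are treated as
    free here to keep the comparison focused on the cold-tier retrieval
    charge.
    """
    hot_cost = gb * hot_gb_month
    cold_cost = gb * cold_gb_month + reads_gb_month * cold_read_gb
    return "hot" if hot_cost < cold_cost else "cold"

# Rarely read archive: cold wins; frequently read data: hot wins.
# (Illustrative rates, NOT real Azure prices.)
print(cheaper_tier(1000, 10,   0.02, 0.01, 0.01))   # cold
print(cheaper_tier(1000, 5000, 0.02, 0.01, 0.01))   # hot
```

Once monthly reads push the retrieval charge above the per-GB storage saving, the hot tier becomes the cheaper option – exactly the "very frequent reads" case described above.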

A simple thing like storage can be used in very complex ways. Even if the price per GB is low, by using the wrong storage we can drastically increase the cost of hosting and running our solution. Before jumping in to pick a solution, it is pretty clear that we need to look at what kind of activities we need to perform, for how long, and so on.

