
(Part 2) Azure Storage - Cold Tier | Business use cases where we could use it

In the last post we talked about the Cold Storage tier. Together we identified the main features and advantages of this tier. At the same time, we discovered that there are some things we need to take into account, such as the price of switching between one tier and another, or the extra charge that is applied when we replicate Cold Storage content to another region.
This post presents different use cases where the cold storage tier is a better option for storing content. Cases when cold storage might not be the best choice will also be taken into consideration.

Archiving logs and raw data
Because of different requirements, we might need to store content for a long period of time: 1, 5 or even 20 years. For these scenarios we need to identify a cheap location where we can store this data at minimum cost and, at the same time, be sure that the content will not be lost or damaged over time.
A good example is audit logs, which often need to be stored for longer than we can imagine. For a scenario like this, we need a cheap location where we can drop all the audit data and store it. There are two different approaches for this:
       1. (processing and analytics required) Drop the audit data in the Hot tier for a specific period of time. Once we no longer need to process this data, we move it to the Cold tier.
       2. (no processing or analytics required) All the audit data can be dumped directly to the Cold tier, without storing it in an intermediate location like the Hot tier (see the sketch below).
Another option for the first case, when we also need to run some analytics or reporting over the audit data, is to dump the same content in two different locations:
  • Transfer content to a storage that allows us to run analytics or reporting (Hot tier, Azure Data Lake, Azure SQL)
  • Dump content to the Cold tier for archiving

Using this approach, there is no need to transfer data from one storage to another, and no changes are made to the storage that is responsible for archiving once the data is written.
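For the second approach, the tier can be set at upload time, so the audit data never touches the Hot tier. Below is a minimal sketch using the Python azure-storage-blob SDK (which names this access tier "Cool"); the connection string, container and blob names are placeholders.

from azure.storage.blob import BlobServiceClient, StandardBlobTier

# Placeholder connection string and container name.
service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("audit-logs")

# Upload the audit file directly into the Cool tier, skipping the Hot tier.
with open("audit-2017-01.json", "rb") as data:
    container.upload_blob(name="2017/01/audit-2017-01.json",
                          data=data,
                          standard_blob_tier=StandardBlobTier.COOL)

Because the tier is set when the blob is written, no tier-switch charge is paid later for this content.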

Video or image streaming
An interesting and attractive case for the Cold tier is when we need to store the video stream of a surveillance camera. This kind of stream is usually not accessed once it is created. Only in special situations will you want to analyze what was recorded sometime in the past.
This is the perfect situation to use cold storage: the storage cost is low, the data is rarely read, and you will almost never want to read all of it.
On top of this, you can define different policies that automatically delete the content of your storage based on how old the data is. For example, a simple folder structure might help you during the cleanup process.
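One way to implement such a policy is a small scheduled job. Here is a minimal sketch, again using the Python azure-storage-blob SDK, that deletes footage older than one year; the container name, folder prefix and retention period are assumptions.

from datetime import datetime, timedelta, timezone
from azure.storage.blob import ContainerClient

# Placeholder container that holds the surveillance footage.
container = ContainerClient.from_connection_string("<connection-string>", "camera-footage")

# Keep only the last 365 days of recordings; the folder-like prefix
# narrows the listing, which is where a simple folder structure helps.
cutoff = datetime.now(timezone.utc) - timedelta(days=365)
for blob in container.list_blobs(name_starts_with="camera-01/"):
    if blob.last_modified < cutoff:
        container.delete_blob(blob.name)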
A similar case is in hospitals, where you need to archive X-rays, or in the banking industry, where all scanned documents need to be archived.
Dumping such content directly to the Cold tier is the best option you have if you want to optimize costs and, at the same time, be sure that you don't lose it after a specific period of time.


Temporary storage until all data is collected
There are situations where we work with large data sets that need to be collected fully before we can analyze them. A similar case is content that is collected slowly from different sources (for example, sensors), where only after a few months do we want to process the data.
For situations like this, it is less expensive to put the data in the Cold tier and, when necessary (if it is ever necessary), move it to the Hot tier or to another system.
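Moving a blob between tiers is a single call once the full data set is in place. A minimal sketch with the Python azure-storage-blob SDK; the container and blob names are placeholders, and remember the tier-switch charge discussed in the previous post.

from azure.storage.blob import BlobClient, StandardBlobTier

# Placeholder blob that accumulated in the Cool tier while data was collected.
blob = BlobClient.from_connection_string("<connection-string>",
                                         container_name="sensor-data",
                                         blob_name="2017/05/readings.csv")

# Promote it to the Hot tier before the processing phase reads it heavily.
blob.set_standard_blob_tier(StandardBlobTier.HOT)

The same call, with StandardBlobTier.COOL, covers the reverse move as well, when processed data is archived out of the Hot tier.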

Of course, there are many other situations where the Cold tier would help us a lot. It is impossible to cover all of them, but the three above are a good starting point.

When we should not use cold tier
All this is great but, as we saw in the previous post, there are some activities on the Cold tier that generate extra cost. For cases when we know that read operations will be very frequent, the Hot tier is a better solution for us.
Another use case where the Hot tier might be the better solution is when we collect data, store it, process it and in the end archive it for a long period of time. In this case, the best option is to start with the Hot tier, where the data is persisted until we finish the processing part. Once the data has been processed, we can move it to the Cold tier.

Conclusion
A simple thing like storage can be used in very complex ways. Even if the price per GB is low, by using the wrong storage we can drastically increase the cost of hosting and running a solution. Before jumping to pick a solution, it is pretty clear that we need to look at what kind of activities we need to perform, for how long, and so on.
