(Part 2) Azure Storage - Cold Tier | Business use cases where we could use it

In the last post we talked about the Cold Storage tier. Together we identified its main features and advantages, and at the same time we discovered that there are some things we need to take into account, such as the price of switching between one tier and another, or the extra charge that applies when we replicate Cold Storage content to another region.
This post presents different use cases where the cold storage tier is a better option for storing content. Cases where cold storage might not be the best choice are also taken into consideration.

Archiving logs and raw data
Because of different requirements we might need to store content for a long period of time – 1 year, 5 years or even 20 years. For these scenarios we need to identify a cheap location where we can store this data at minimum cost, while being sure that the content will not be lost or damaged over time.
A good example is audit logs, which often need to be kept for longer than we can imagine. For a scenario like this, we need a cheap location where we can drop all the audit data and store it. There are two different approaches for this:
       1. (processing and analytics required) Drop the audit data in the Hot tier for a specific period of time. Once we no longer need to process this data, we can move it to the Cold tier.
       2. (no processing or analytics required) All the audit data can be dumped directly to the Cold tier, without storing it in an intermediate location like the Hot tier (a sketch of both approaches follows below).
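For a rough idea of how both approaches translate into code, here is a minimal sketch using the Python azure-storage-blob SDK. The connection string, container and blob names are placeholders, and the SDK's Cool access tier is used to stand in for the cold tier discussed here (newer SDK versions also expose a separate Cold tier):

```python
from azure.storage.blob import BlobServiceClient, StandardBlobTier

# Placeholder connection string, container and blob names - adapt to your account.
service = BlobServiceClient.from_connection_string("<your-connection-string>")
container = service.get_container_client("audit-logs")

# Approach 1: the blob already sits in the Hot tier; once processing is done,
# demote it to the cool tier instead of copying it somewhere else.
container.get_blob_client("2016/01/audit.json").set_standard_blob_tier(StandardBlobTier.COOL)

# Approach 2: dump the audit data straight into the cool tier at upload time,
# skipping the Hot tier completely.
with open("audit.json", "rb") as data:
    container.upload_blob(
        name="2016/02/audit.json",
        data=data,
        standard_blob_tier=StandardBlobTier.COOL,
        overwrite=True,
    )
```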
Another option for the first case, when we also need to do some analytics or reporting over the audit data, is to dump the same content in two different locations:
  • Transfer the content to a storage that allows us to run analytics or reporting (Hot tier, Azure Data Lake, Azure SQL)
  • Dump the content to the cold tier for archiving

Using this approach, there is no need to transfer data from one storage to another, and no changes are made to the storage that is responsible for archiving once the data is written.
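A minimal sketch of that dual-write idea, again with the Python SDK and placeholder container names (one container served from the account's default Hot tier for analytics, one whose blobs go straight to the cool tier for archiving):

```python
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient.from_connection_string("<your-connection-string>")
analytics = service.get_container_client("audit-analytics")  # queried for reports
archive = service.get_container_client("audit-archive")      # written once, rarely read

def dump_audit_record(blob_name: str, payload: bytes) -> None:
    # The copy used for analytics/reporting stays in the account's default (Hot) tier.
    analytics.upload_blob(name=blob_name, data=payload, overwrite=True)
    # The archived copy goes straight to the cool tier and is never moved again.
    archive.upload_blob(
        name=blob_name,
        data=payload,
        standard_blob_tier=StandardBlobTier.COOL,
        overwrite=True,
    )
```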

Video or image streaming
An interesting and attractive case for the cold tier is when we need to store the video streams of surveillance cameras. This kind of stream is usually not accessed after it has been created. Only in special situations will you want to analyze what was recorded at some point in the past.
This is the perfect situation in which to use cold storage – storage cost is low, data is rarely read, and you will almost never want to read all of it.
On top of this, you can define different policies that automatically delete the content of your storage based on how old the data is. For example, a simple folder structure might help you during the cleanup process.
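As a minimal illustration of such a cleanup, assuming recordings are written under a date-based folder structure and that anything older than a chosen retention period can be removed (the container name and retention value below are placeholders; Azure Blob Storage nowadays also offers built-in lifecycle management rules for exactly this):

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobServiceClient

RETENTION_DAYS = 365  # placeholder retention period

service = BlobServiceClient.from_connection_string("<your-connection-string>")
recordings = service.get_container_client("camera-recordings")

cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

# Blobs are assumed to be named like "camera-01/2016/01/15/stream-0001.mp4",
# so the blob's last-modified timestamp is enough to decide what can be deleted.
for blob in recordings.list_blobs():
    if blob.last_modified < cutoff:
        recordings.delete_blob(blob.name)
```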
A similar case is in hospitals, where you need to archive X-rays, or in the banking industry, where all scanned documents need to be archived.
Dumping such content directly to the cold tier is the best option you have if you want to optimize costs and at the same time be sure that you don’t lose it after a specific period of time.


Temporary storage until all data is collected
There are situations where we work with large data sets that need to be collected in full before we analyze them. A similar case is content that is collected slowly from different sources (for example, sensors) and that we only want to process after a few months.
For situations like this it is less expensive to put the data in the cold tier and, when it is necessary (if it ever is), move it to the hot tier or to another system.
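A small sketch of that promotion step, assuming the collected batch lives under a common prefix (all names below are placeholders):

```python
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient.from_connection_string("<your-connection-string>")
sensors = service.get_container_client("sensor-data")

# Once the full data set has been collected and processing is about to start,
# promote every blob in the batch from the cool tier back to the Hot tier.
for blob in sensors.list_blobs(name_starts_with="batch-2016-q1/"):
    sensors.get_blob_client(blob.name).set_standard_blob_tier(StandardBlobTier.HOT)
```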

Of course, there are many other situations where the Cold tier would help us a lot. It is impossible to cover all of them, but the three above are a good starting point.

When we should not use cold tier
All of this is great, but as we saw in the previous post, there are some activities on the cold tier that can generate extra cost. For cases where we know that read operations will be very frequent, the hot tier is a better solution for us.
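To make that trade-off concrete, here is a back-of-the-envelope comparison; the unit prices are deliberately made-up placeholders, not real Azure prices, so always check the current price sheet before deciding:

```python
# Illustrative only - placeholder prices, NOT real Azure pricing.
GB_STORED = 1000            # size of the data set, in GB
READS_PER_MONTH = 500_000   # number of read operations per month
GB_READ_PER_MONTH = 2000    # amount of data retrieved per month, in GB

hot  = {"per_gb": 0.020, "per_10k_reads": 0.004, "per_gb_retrieved": 0.000}
cool = {"per_gb": 0.010, "per_10k_reads": 0.010, "per_gb_retrieved": 0.010}

def monthly_cost(tier):
    return (GB_STORED * tier["per_gb"]
            + READS_PER_MONTH / 10_000 * tier["per_10k_reads"]
            + GB_READ_PER_MONTH * tier["per_gb_retrieved"])

# With heavy read traffic, the cool tier's read and retrieval charges can
# outweigh its cheaper per-GB storage rate.
print("hot :", monthly_cost(hot))
print("cool:", monthly_cost(cool))
```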
Another use case where the hot tier might be a better solution is when we collect data, store it, process it and in the end archive it for a long period of time. In this case, the best option is to start with the hot tier, where the data is persisted until we finish the processing part. Once the data has been processed, we can move it to the cold tier.

Conclusion
A simple thing like storage can be used in very complex ways. Even if the price per GB is low, by using the wrong storage we can drastically increase the cost of hosting and running our solution. Before jumping to pick a solution, it is pretty clear that we need to look at what kind of activities we need to perform, for how long, and so on.
