
(Part 2) Azure Storage - Cold Tier | Business use cases where we could use it

In the last post we talked about the Cold Storage tier. Together we identified the main features and advantages of this tier. At the same time, we discovered that there are some things we need to take into account, such as the price of switching from one tier to another or the extra charge that applies when we replicate Cold Storage content to another region.
This post presents different use cases where the cold storage tier is a better option for storing content. Cases where cold storage might not be the best choice are also taken into consideration.

Archiving Logs and Raw data
Because of different requirements, we might need to store content for long periods of time – 1 year, 5 years or even 20 years. For these scenarios we need to identify a cheap location where we can store this data at minimum cost and, at the same time, be sure that the content will not be lost or damaged after a period of time.
A good example is audit logs, which need to be stored for longer than we can imagine. For a scenario like this, we need a cheap location where we can drop all the audit data and store it. There are two different approaches for this (a sketch of both follows the list):
       1. (processing and analytics required) Drop the audit data in the Hot tier for a specific period of time. Once we no longer need to process the data, we move it to the Cold tier.
       2. (no processing or analytics required) All the audit data is dumped directly to the Cold tier, without storing it in an intermediate location such as the Hot tier.
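A minimal sketch of both approaches, using the Azure.Storage.Blobs .NET SDK, might look like the snippet below. The connection string, container and blob names are hypothetical, and it assumes a storage account that supports per-blob access tiers (the SDK names the cold tier "Cool").

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

string auditJson = "{ \"user\": \"demo\", \"action\": \"login\" }"; // sample payload

var container = new BlobContainerClient("<connection-string>", "audit-logs");
await container.CreateIfNotExistsAsync();

// Approach 1: the data lands in the Hot tier first and is demoted to the Cold (Cool)
// tier once we no longer need to process it.
BlobClient hotBlob = container.GetBlobClient("2024/03/01/audit.json");
await hotBlob.UploadAsync(BinaryData.FromString(auditJson));
// ... processing and analytics happen here ...
await hotBlob.SetAccessTierAsync(AccessTier.Cool);

// Approach 2: no processing is needed, so the blob is written directly to the Cold (Cool) tier.
BlobClient archiveBlob = container.GetBlobClient("2024/03/01/audit-archive.json");
await archiveBlob.UploadAsync(
    BinaryData.FromString(auditJson),
    new BlobUploadOptions { AccessTier = AccessTier.Cool });

With approach 1, the demotion is only a tier change on the same blob, so the URL and the content stay exactly the same after the move.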
Another option for the first case, when we also need to do some analytics or reporting over the audit data, is to dump the same content in two different locations:
  • Transfer the content to a storage that allows us to run analytics or reporting (Hot tier, Azure Data Lake, Azure SQL)
  • Dump the content to the Cold tier for archiving

Using this approach, there is no need to transfer data from one storage to another, and no changes are made to the storage that is responsible for archiving once the data is written.

Video or image streaming
An interesting and attractive case for the cold tier is when we need to store the video stream of a surveillance camera. This kind of stream is usually not accessed once it has been created. Only in special situations will you want to analyze what was recorded sometime in the past.
This is the perfect situation in which to prefer cold storage – low cost for storage, data is read rarely and you will almost never want to read all of it.
On top of this, you can define different policies that automatically delete the content of your storage based on how old the data is. For example, a simple folder structure might help you during the cleanup process.
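As a rough sketch, a cleanup job over such a folder structure could look like the snippet below, again using the Azure.Storage.Blobs .NET SDK; the container name, prefix and retention window are hypothetical.

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Camera footage is assumed to be written under "footage/yyyy/MM/dd/...".
var container = new BlobContainerClient("<connection-string>", "camera-footage");
DateTimeOffset cutoff = DateTimeOffset.UtcNow.AddDays(-365);

await foreach (BlobItem blob in container.GetBlobsAsync(prefix: "footage/"))
{
    // Delete everything that is older than the retention window.
    if (blob.Properties.LastModified < cutoff)
    {
        await container.DeleteBlobIfExistsAsync(blob.Name);
    }
}

Depending on the account type, Azure Blob Storage lifecycle management rules can also apply this kind of age-based deletion automatically, without any custom code.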
A similar case is in hospitals, where you need to archive X-rays, or in the banking industry, where all documents that were scanned need to be archived.
Dumping such content directly to the cold tier is the best option you have if you want to optimize costs and, at the same time, be sure that you don't lose the content over a long period of time.


Temporary storage until all data is collected
There are situations where we work with large data sets that need to be collected fully before analyzing them. A similar case is content that is collected slowly from different sources (for example sensors), where only after a few months do we want to process the data.
For situations like this, it is less expensive to put the data in the cold tier and, when it is necessary (if it is ever necessary), move it to the hot tier or to another system.
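A minimal sketch of that promotion step, again with hypothetical names and assuming per-blob access tiers, could look like this:

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Sensor readings were accumulated in the Cold (Cool) tier; the data set is now complete,
// so the blobs are promoted to the Hot tier before the processing job starts.
var container = new BlobContainerClient("<connection-string>", "sensor-readings");

await foreach (BlobItem blob in container.GetBlobsAsync(prefix: "2024/"))
{
    // Changing the tier keeps the blob at the same URL, so the processing job
    // does not need to care that the data used to be in the Cold tier.
    await container.GetBlobClient(blob.Name).SetAccessTierAsync(AccessTier.Hot);
}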

Of course, there are many other situations where the Cold tier would help us a lot. It is impossible to cover all of them, but the three above are a good starting point.

When we should not use cold tier
This is great but, as we saw in the previous post, there are some activities on the cold tier that can generate extra cost. For cases when we know that read operations will be very frequent, the hot tier is a better solution for us.
Another use case where the hot tier might be a better solution is when we collect data, store it, process it and, in the end, archive it for a long period of time. In this case, the best option is to start with the hot tier, where the data needs to be persisted until we finish the processing part. Once the data has been processed, we can move it to the cold tier.

Conclusion
A simple thing like storage can be used in very complex ways. Even if the price per GB is low, by using the wrong storage we can drastically increase the cost of hosting and running a solution. Before jumping to pick a solution, it is pretty clear that we need to look at what kind of activities we need to do, for how long, and so on.
