
(Part 1) Azure Storage - Cold Tier | Things that we should know

In this post we will talk about the things we need to consider if we want to use the Cold storage tier of Azure Storage. This post will be followed by another one that presents use cases where the Cold storage tier is useful and how we should integrate it into our system.
(Part 2) Azure Storage - Cold Tier | Business use cases where we could use it

Overview
The Cold storage tier of Azure Storage is a new tier for data that needs to be persisted for a long time at minimum cost. This tier should be used when data is not accessed frequently. Compared with the Hot tier, Cold storage latency can be a little higher, but it is still in the order of milliseconds.
Azure Storage contains multiple types of storage, such as Azure Queues, Azure Tables and Azure Blobs (block blob, append blob and page blob). Unfortunately, this tier is available only for block blobs and append blobs. It means that we cannot store an Azure Table or a page blob in the Cold tier.
The storage account that gives us access to this tier is only available when we create a new storage account and set the account kind to "Blob Storage" instead of "General Purpose". Only in this way do we get access to the Cold storage tier.
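Below is a minimal sketch of how such an account could be created from code, assuming the Microsoft.Azure.Management.Storage .NET SDK (in the API the cold tier is exposed as "Cool"). The resource group, account name, location and credentials are placeholders, not values from this post.

using System.Threading.Tasks;
using Microsoft.Azure.Management.Storage;
using Microsoft.Azure.Management.Storage.Models;
using Microsoft.Rest;

public static class ColdStorageSetup
{
    public static async Task CreateColdBlobStorageAccountAsync(ServiceClientCredentials credentials, string subscriptionId)
    {
        var client = new StorageManagementClient(credentials) { SubscriptionId = subscriptionId };

        var parameters = new StorageAccountCreateParameters
        {
            Location = "westeurope",
            Kind = Kind.BlobStorage,              // tiering is available only for the Blob Storage kind
            AccessTier = AccessTier.Cool,         // the cold tier, named "Cool" in the API
            Sku = new Sku(SkuName.StandardRAGRS)
        };

        await client.StorageAccounts.CreateAsync("my-resource-group", "mycoldaccount", parameters);
    }
}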

Things that we should take into account
There are also some things that you should take into account. Once you create an Azure Storage account that supports the Cold/Hot tiers, you can switch between them. Switching between these two tiers is not free of charge: the cost of migrating from one tier to the other is equivalent to the cost of reading all your data from that storage account.
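As a rough sketch of how the switch could be done from code, reusing the management client from the sketch above (again assuming the Microsoft.Azure.Management.Storage SDK), the access tier is just an update on the account:

var update = new StorageAccountUpdateParameters
{
    AccessTier = AccessTier.Hot   // or AccessTier.Cool to move back to the cold tier
};
// Keep in mind that this operation is billed roughly like a full read of the account's data.
await client.StorageAccounts.UpdateAsync("my-resource-group", "mycoldaccount", update);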
Existing storage accounts that are General Purpose, as well as old storage accounts (the so-called classic ones), cannot be changed to the Blob Storage kind. It means that for these accounts we cannot have the Cold and Hot tier functionality.

For these scenarios it is necessary to migrate all the content to a dedicated Blob Storage account. This can be done easily using the AzCopy tool, which will do the migration for you. The tool can be used directly from the command prompt, or we can write a small app on top of the Azure Data Movement library, which shares the same data movement core as AzCopy.
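For a programmatic migration of a single blob, a rough sketch based on the Data Movement library (the Microsoft.Azure.Storage.DataMovement NuGet package) could look like the code below. The connection strings, container and blob names are placeholders, and the exact CopyAsync overloads can differ between library versions.

using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.DataMovement;

public static class BlobMigration
{
    public static async Task MigrateBlobAsync(string sourceConnectionString, string destinationConnectionString)
    {
        CloudBlockBlob sourceBlob = CloudStorageAccount.Parse(sourceConnectionString)
            .CreateCloudBlobClient()
            .GetContainerReference("archive")
            .GetBlockBlobReference("report.zip");

        CloudBlobContainer destinationContainer = CloudStorageAccount.Parse(destinationConnectionString)
            .CreateCloudBlobClient()
            .GetContainerReference("archive");
        await destinationContainer.CreateIfNotExistsAsync();
        CloudBlockBlob destinationBlob = destinationContainer.GetBlockBlobReference("report.zip");

        // isServiceCopy = true asks the storage service to copy the data directly between accounts,
        // so the bytes do not flow through the machine that runs this code.
        await TransferManager.CopyAsync(sourceBlob, destinationBlob, true);
    }
}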
The API that can be used for block and append blobs is the same as for normal storage accounts. If you do such a migration, there are no changes inside your application code. Only the connection string that points to the Azure Storage account needs to be changed.
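A small sketch that illustrates this: the same block blob upload code works against a General Purpose account and against a Blob Storage (Hot/Cold) account, only the connection string passed in differs. The container and blob names are just examples.

using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ReportWriter
{
    public static async Task UploadReportAsync(string connectionString)
    {
        CloudBlobClient blobClient = CloudStorageAccount.Parse(connectionString).CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("reports");
        await container.CreateIfNotExistsAsync();

        CloudBlockBlob blob = container.GetBlockBlobReference("2016-10-report.json");
        await blob.UploadTextAsync("{ \"status\": \"archived\" }");
    }
}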

HOT vs COLD Tier
If we compare these two tiers, at the SLA level there is a small difference in the availability SLA: from 99.99% (RA-GRS) availability for the Hot tier, we go down to 99.9% availability for Cold storage.
From a cost perspective, the Hot tier storage price per GB is higher than the Cold one (more than double), but at the same time the transaction cost is almost double for Cold storage. This is an important thing that we need to take into account when we make the cost estimation. On top of this, there are data access charges for reading and writing data in the Cold tier, whereas for the Hot tier you pay only for outbound data that leaves the Azure region.
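To make the estimation concrete, a small helper like the one below can mirror this reasoning. The prices are deliberately left as parameters – they are placeholders to be filled in from the Azure Storage pricing page for your region and redundancy option, not real figures.

public static class TierCostEstimator
{
    // Rough monthly cost: storage + transactions + data access (data access applies mainly to the cold tier).
    public static decimal MonthlyCost(
        decimal storedGb,
        long transactions,
        decimal accessedGb,
        decimal pricePerGb,               // per-GB storage price (higher for Hot)
        decimal pricePer10KTransactions,  // transaction price (higher for Cold)
        decimal dataAccessPricePerGb)     // read/write access price (mainly for Cold)
    {
        return storedGb * pricePerGb
             + (transactions / 10000m) * pricePer10KTransactions
             + accessedGb * dataAccessPricePerGb;
    }
}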
The latency is the same for both of them – expressed in milliseconds.


When you are using the geo-replication feature with the Cold tier, the replication to the secondary region is charged as if you were reading the data from the primary location.

In the next post we will take a look at use cases where we should use the Cold tier and how the migration should be done.
(Part 2) Azure Storage - Cold Tier | Business use cases where we could use it
