
How does the Windows Azure in-memory cache work

Until the most recent release of Windows Azure, the caching option available to us was Windows Azure Cache. This was, and still is, a very powerful caching mechanism that can be used without any problem; almost all the problems related to synchronization are resolved by the framework. I wrote about Windows Azure Cache in an earlier blog post.
But today I don't want to talk about the old Windows Azure Cache. The new version of Windows Azure Cache has a new feature: in-memory cache. Why is it so important? First of all, it is free, and the synchronization between roles is done automatically by Windows Azure.
First of all, what should we know about the in-memory cache? Even though it is free, it has an indirect cost: it consumes memory and computation power from the role. Memory is the most important resource here, so based on the role size you need to be aware of how large the cache on each role can be. The good part is that we don't have any hard limitation. Because of this, you can define special roles used only for caching (or even a cache cluster of machines; in that case I think Windows Azure Cache is the better choice).
This new version of the cache eliminates the cache quotas and throttling, and it supports the memcached protocol, so we can integrate very easily with other systems. On the same role we can have more than one cache configured; each cache can have a different name and different settings.
From what I know, it is very similar to Windows Server AppFabric Caching, and it can be integrated with Windows Azure Cache very easily. When we want to configure an in-memory cache we have two options. The first is to define a role that is dedicated to caching. The other option is a role that, besides the in-memory cache, also hosts the application (the resources are shared between them); this type is named "co-located role caching".
Let's see how we can configure a dedicated role for in-memory caching. We need to create a new project of type "Cache Worker Role". This role defines a dedicated machine that will be used only for caching. After we create this new role, its properties page contains a new tab named "Caching". In this tab we can configure the names of the caches, the number of backup copies that each cache will keep, when an item in the cache will expire, and so on. Several expiration types can be set; in the current version we have 3 options:
  • None – the item will never expire (in this case the time to live needs to be set to 0)
  • Absolute – the item expires X minutes (the time-to-live value) after the moment it was added to the cache
  • Sliding Window – the item expires X minutes after the moment it was added to the cache or last accessed by someone (in this case, if the time to live is set to 5 minutes and someone reads the value, the item will live 5 more minutes in the cache)
At this step, don't forget to set a valid storage account that will be used to create the backup copies; configuring backup copies is not mandatory. The size of the in-memory cache on a dedicated role is determined by the size of the role.
Co-located role caching is very simple to activate. Open the properties of the role, go to the Caching tab and check "Enable Caching". From this moment the in-memory cache for your role is active. You have the possibility to set the cache size as a percentage of the role's memory. I recommend setting this value very carefully; the performance of your application can be affected if the in-memory cache is too big.
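Behind the scenes, what the Caching tab edits ends up as plugin settings in the service configuration file. A sketch of what the relevant `ServiceConfiguration.cscfg` entries can look like is below; the setting names come from the Caching plugin of that SDK, and the values (30% cache size, development storage) are just example choices:

```xml
<Role name="WebRole1">
  <ConfigurationSettings>
    <!-- Percentage of the role instance's memory reserved for the co-located cache -->
    <Setting name="Microsoft.WindowsAzure.Plugins.Caching.CacheSizePercentage" value="30" />
    <!-- Storage account used by the cache runtime (e.g. for configuration/backup) -->
    <Setting name="Microsoft.WindowsAzure.Plugins.Caching.ConfigStoreConnectionString"
             value="UseDevelopmentStorage=true" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Caching.DiagnosticLevel" value="1" />
  </ConfigurationSettings>
</Role>
```

Editing these values by hand and editing them through the Caching tab are equivalent; the tab is just a UI over this file.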
The in-memory cache can be accessed by any role from the same deployment; you will not be able to access the in-memory cache from a different deployment. For example, the Foo client will not be able to access the OOF in-memory cache. For security reasons this is a very good decision: you don't want someone from the internet to access and steal or change the data from your in-memory cache.
The next step is to access and consume the cache from your code. To do this you will need to install a package from NuGet: search for and install "Windows Azure Caching". This package helps us (the developers) configure and consume the in-memory cache very easily. By default, this package adds all the needed configuration sections to the configuration file. In the configuration file, don't forget to set the cache cluster name (the name that was set in the Caching tab of the role).
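For reference, the section that the NuGet package drops into `web.config`/`app.config` looks roughly like the sketch below; the role name `CacheWorkerRole1` is a hypothetical example and has to match the name of the role that hosts the cache (the cluster name mentioned above):

```xml
<configSections>
  <section name="dataCacheClients"
           type="Microsoft.ApplicationServer.Caching.DataCacheClientsSection, Microsoft.ApplicationServer.Caching.Core"
           allowLocation="true" allowDefinition="Everywhere" />
</configSections>
<dataCacheClients>
  <dataCacheClient name="default">
    <!-- identifier = the name of the role that hosts the in-memory cache -->
    <autoDiscover isEnabled="true" identifier="CacheWorkerRole1" />
  </dataCacheClient>
</dataCacheClients>
```

With `autoDiscover` enabled, the client finds the cache instances of that role automatically; no endpoints need to be listed by hand.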
To create a cache where you can get or save any kind of serializable data, you need to create a new instance of DataCache. You can use any name for the cache; if the cache was not created yet, it will be created automatically.
DataCache dataCache = new DataCache("FooCache");
The API to add or get items from the cache is very similar to Windows Azure Cache.
dataCache.Add("item1", new Size(100, 200));
dataCache.Put("item1", new Size(100, 200));
Size size = (Size)dataCache.Get("item1");
The main difference between Add and Put is that the Add method throws an exception if an item with the same key already exists in the cache, while Put overwrites it. When an item is not found in the cache, Get returns null.
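A small sketch of these semantics, assuming a reachable cache named "FooCache" as above (this only runs against a deployed role with caching enabled):

```csharp
using System.Drawing;
using Microsoft.ApplicationServer.Caching;

DataCache dataCache = new DataCache("FooCache");

dataCache.Add("item1", new Size(100, 200));      // first Add succeeds
try
{
    dataCache.Add("item1", new Size(300, 400));  // same key again: Add throws
}
catch (DataCacheException)
{
    // the key already exists in the cache
}

dataCache.Put("item1", new Size(300, 400));      // Put silently overwrites

Size size = (Size)dataCache.Get("item1");        // the value written by Put
object missing = dataCache.Get("no-such-key");   // null, no exception
```

In practice this means Put is the safe default for "save this value", while Add is useful when you want to detect that another instance already cached the item.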
When we configured the cache (in the Caching tab of the role) we set the lifetime of the items in the cache. This value can also be overridden when we add an item to the cache, as the third parameter of the Add method.
dataCache.Add("item1", new Size(100, 200), TimeSpan.FromHours(1));
Each item in the cache can also be retrieved as an object of type "DataCacheItem", which exposes details such as the region name, tags, timeout, value, version and so on.
DataCacheItem dataCacheItem = dataCache.GetCacheItem("item1");
TimeSpan timeout = dataCacheItem.Timeout; // how long until the item expires
A cache can contain 0 to n regions, which can be used to group cache items. Also, each item in the cache has a version that is automatically updated when we change the value of the item.
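Regions and versions can be combined for grouped invalidation and optimistic concurrency. A hedged sketch, with the region name "sizes" being a hypothetical example and again assuming the "FooCache" instance from above:

```csharp
using System.Drawing;
using Microsoft.ApplicationServer.Caching;

DataCache dataCache = new DataCache("FooCache");

// Regions group related items together.
dataCache.CreateRegion("sizes");
dataCache.Add("item1", new Size(100, 200), "sizes");

// Every item carries a version; a Put that passes the version
// succeeds only if the item was not changed in the meantime.
DataCacheItem cacheItem = dataCache.GetCacheItem("item1", "sizes");
DataCacheItemVersion version = cacheItem.Version;
dataCache.Put("item1", new Size(300, 400), version, "sizes");

// All items in a region can be dropped in one call.
dataCache.RemoveRegion("sizes");
```

The version-based Put is how the cache lets two role instances update the same item without one instance blindly overwriting the other's change.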
In conclusion, the in-memory cache is a very powerful feature of Windows Azure. A lot of people wanted this feature because, for sensitive data, the cache can be accessed only from your own deployment and not from the internet.

Comments

  1. Is this new kind of cache a distributed cache, or is it limited to the machine where each role instance is running?
    Ex.: for an Azure application with 20 web role instances I want the cache to be distributed over 20 or 30 machines too, otherwise the cache becomes a bottleneck.

    1. For the cache role, if you have two or more cache role instances, the cache will be automatically synchronized between those instances. The same thing happens with the co-located cache.
      The basic idea is this: the cache is distributed across all the cache locations (cache role or co-located instances).


