
How does the Windows Azure in-memory cache work

Until the latest release of Windows Azure, the caching option available to us was Windows Azure Cache. This was, and still is, a very powerful caching mechanism that can be used without any problem; almost all the synchronization problems are handled by the framework. I already talked about Windows Azure Cache in a previous blog post.
But today I don’t want to talk about the old Windows Azure Cache. The new version of Windows Azure Cache comes with a new capability: in-memory caching. Why is it so important? First of all, it is free, and the synchronization between roles is done automatically by Windows Azure.
First of all, what should we know about the in-memory cache? Even if it is free, the in-memory cache has an indirect cost: it consumes memory and computation power from the role, and memory is the most important resource. Because of this, you need to be aware of the cache size configured on each role, based on the role size. The good part is that there is no hard limitation, so you can define special roles used only for caching (or even a cache cluster of machines – although in that case I think that Windows Azure Cache is the better choice).
This new version of the cache eliminates the cache quotas and throttling and supports the memcache protocol, so it can be integrated very easily with other systems. On the same role we can have more than one cache configured; each cache can have a different name and different settings.
From what I know, it is very similar to Windows Cache and can be integrated with Windows Azure Cache very easily. When we want to configure an in-memory cache we have two options. The first is to define a role that is dedicated to caching. The other option is to have a role that, besides the in-memory cache, also hosts the application (the resources are shared between them); this option is called “co-located role caching”.
Let’s see how we can configure the dedicated role for in-memory caching. We will need to create a new role from the “Cache Worker Role” template. This role defines a dedicated machine that will be used only for caching. After we create this new role, if we go to its properties we will see a new tab named “Caching”. In this tab we can configure the names of the caches, the number of copies that each cache will have (for backup purposes), when an item from the cache will expire and so on. Several expiration types can be set; in the current version we have 3 options:
  • None – the item will not expire (in this case the time to live needs to be set to 0)
  • Absolute – the item will expire X minutes (the time to live value) after the moment when it is added to the cache
  • Sliding Window – the item will expire X minutes after the moment when it is added to the cache or last accessed (if the time to live is set to 5 minutes and someone reads the value, the item will live 5 more minutes in the cache)
At this step, don’t forget to set a valid storage account that will be used to create the backup copies; configuring backup copies is not mandatory. The size of the in-memory cache of a dedicated role is set automatically based on the size of the role.
The co-located role caching is very simple to activate. Open the properties of the role, go to the Caching tab and check “Enable Caching”. From this moment the in-memory cache for your role is activated. You have the possibility to set the cache size as a percentage of the role memory. I recommend setting this value very carefully; the performance of your application can be affected if the in-memory cache size is too big.
The in-memory cache can be accessed by any role from the same deployment, but you will not be able to access it from a different deployment. For example, the Foo client will not be able to access the OOF in-memory cache. From a security point of view this is a very good decision: you don’t want someone from the internet to access and steal or change the data from your in-memory cache.
The next step is to access and consume the cache from your code. To do this you will need to install a package from NuGet: search for and install “Windows Azure Caching”. This package helps us (the developers) configure and consume the in-memory cache very easily. By default, the package adds to the configuration file all the configuration sections that are needed. In the configuration file, don’t forget to set the cache cluster identifier (the name that was set in the Caching tab of the role).
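A minimal sketch of how the generated dataCacheClients section can look, assuming the cache is hosted by a role named “CacheWorkerRole1” (the role name below is only a placeholder):
<dataCacheClients>
  <dataCacheClient name="default">
    <!-- "identifier" is the name of the role that hosts the cache -->
    <autoDiscover isEnabled="true" identifier="CacheWorkerRole1" />
  </dataCacheClient>
</dataCacheClients>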
To create a cache where you can store and retrieve any kind of serializable data, you need to create a new instance of DataCache. You can use any name for the cache; if the cache does not exist yet, it will be created automatically.
DataCache dataCache = new DataCache("FooCache");
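If you prefer to resolve the cache through a factory (for example, to reuse the same factory for more than one named cache), an equivalent sketch, assuming the default client configuration from above, would be:
// reads the "default" dataCacheClient section from the configuration file
DataCacheFactory cacheFactory = new DataCacheFactory();
DataCache fooCache = cacheFactory.GetCache("FooCache");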
The API used to add or get items from the cache is very similar to the one from Windows Azure Cache.
dataCache.Add("item1", new Size(100, 200));
dataCache.Put("item1", new Size(100, 200));
Size size = (Size)dataCache.Get("item1");
The main difference between Add and Put is that the Add method throws an exception if an item with the same key already exists in the cache. When an item is not found in the cache, Get returns null.
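A short sketch of this behavior (the keys and values below are only placeholders):
try
{
    dataCache.Add("item1", new Size(100, 200));
    dataCache.Add("item1", new Size(300, 400)); // same key, so the second Add throws
}
catch (DataCacheException exception)
{
    // the error code tells us that the key is already in the cache
    bool keyAlreadyExists = exception.ErrorCode == DataCacheErrorCode.KeyAlreadyExists;
}
object missingItem = dataCache.Get("noSuchItem"); // null, because the key is not in the cache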
When we configured the cache (in the Caching tab of the role) we set the default lifetime of the items from the cache. This value can also be overridden when we add an item to the cache, as the third parameter of the Add method.
dataCache.Add("item1", new Size(100, 200), TimeSpan.FromHours(1));
Each item from the cache can also be retrieved as an object of type “DataCacheItem”, which exposes information such as the region name, tags, timeout, value, version and so on.
DataCacheItem dataCacheItem = dataCache.GetCacheItem("item1");
TimeSpan timeout = dataCacheItem.Timeout; // when the item will expire
A cache can contain 0 to n regions, which can be used to group cache items. Also, each item from the cache has a version that is automatically updated when we change the value of the item.
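A small sketch of how regions and versions can be used (the region and key names are only placeholders):
// regions must be created explicitly before items are added to them
dataCache.CreateRegion("sizes");
dataCache.Put("item1", new Size(100, 200), "sizes");
// read the current version of the item, then update it only if nobody changed it in the meantime
DataCacheItemVersion version = dataCache.GetCacheItem("item1", "sizes").Version;
dataCache.Put("item1", new Size(300, 400), version, "sizes");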
In conclusion, the in-memory cache is a very powerful feature of Windows Azure. A lot of people wanted this feature because, for sensitive data, the cache can be accessed only from the machines of your deployment and not from the internet.

Comments

  1. Is this new kind of cache a distributed cache, or is it limited to the machine where each role instance is running?
    Ex.: for an Azure application with 20 web role instances I want the cache to be distributed across those 20 or 30 machines too; otherwise the cache becomes a bottleneck.

    Replies
    1. For the cache role, if you have two or more cache roles, the cache will be automatically synchronized between these roles. The same thing happens with the co-located cache.
      The basic idea is this: the cache is distributed to all the cache locations (cache roles or co-located instances).


