
How does the Windows Azure in-memory cache work

Until the latest release of Windows Azure, for caching we could use Windows Azure Cache. This was, and still is, a very powerful caching mechanism that can be used without any problem. Almost all the problems related to synchronization are resolved by the framework. I talked about Windows Azure Cache in this blog post.
But today I don’t want to talk about the old Windows Azure Cache. The new version of Windows Azure Cache brings new functionality: in-memory cache. Why is it so important? First of all, it is free, and the synchronization between roles is done automatically by Windows Azure.
First of all, what should we know about in-memory cache? Even if it is free, in-memory cache has some indirect costs. It consumes memory space from the role and also computation power. The most important resource is memory. Because of this, based on the role size, you need to be aware of the cache size on each role. The good part is that we don’t have any limitation. Because of this, you can define special roles used only for caching (or even a cluster of cache machines – in this case I think that Windows Azure Cache is better).
This new version of the cache eliminates any cache quotas and throttling and supports the memcache protocol, so we can integrate very easily with other systems. On the same role we can have more than one cache configured; each cache can have a different name and different settings.
From what I know, it is very similar to Windows Cache and can be integrated with Windows Azure Cache very easily. When we want to configure an in-memory cache we have two options. The first is to define a role that is dedicated to caching. The other option is to have a role that, beside the in-memory cache, also hosts the application (the resources are shared between them). The name of this type is “Co-located Role Caching”.
Let’s see how we can configure the dedicated role for in-memory caching. We will need to create a new project named “Cache Worker Role”. This role defines a dedicated machine that will be used only for caching. After we create this new role, if we go to the properties of this role we will see a new tab named “Caching”. In this tab we can configure the names of the caches, the number of copies that each cache will have (for backup purposes), when an item from the cache will expire and so on. You can set different expiration types. In the current version we have 3 options:
  • None – the item will not expire (in this case the time to live needs to be set to 0)
  • Absolute – the item will expire X minutes (the time to live value) after the moment when it is added to the cache
  • Sliding Window – the item will expire X minutes after the moment when it was added to the cache or last accessed by someone (in this case, if we have the time to live set to 5 minutes and someone reads the value, the item will exist 5 more minutes in the cache)
At this step, don’t forget to set a valid storage account that will be used to create the backup copies. It is not mandatory to configure the backup copies. Based on the size of the role, the size of the in-memory cache of the dedicated server will be set.
The co-located role caching is very simple to activate. Click on the properties of the role and go to the Caching tab. There, check “Enable Caching”. From this moment the in-memory cache for your role is activated. You have the possibility to set the cache size as a percentage of the role’s memory. I recommend setting this value very carefully; the performance of your application can be affected if the in-memory cache size is too big.
The in-memory cache can be accessed by any role from the same deployment. You will not be able to access the in-memory cache from a different deployment. For example, the Foo client will not be able to access the OOF in-memory cache. For security reasons this is a very good decision; you don’t want someone from the internet to access and steal/change the data from your in-memory cache.
The next step is to access and consume the cache from your code. To do this you will need to install a package from NuGet: search for and install “Windows Azure Caching”. This package helps us (the developers) to configure and consume the in-memory cache very easily. By default this package adds to the configuration file all the configuration sections that are needed. In the configuration file, don’t forget to set the cache cluster name (this name was set in the Caching tab of the role).
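As a rough sketch, the client configuration added by the NuGet package looks like the fragment below; the identifier value “CacheWorkerRole” is a placeholder for your own cache role name:

```xml
<configSections>
  <section name="dataCacheClients"
           type="Microsoft.ApplicationServer.Caching.DataCacheClientsSection, Microsoft.ApplicationServer.Caching.Core"
           allowLocation="true" allowDefinition="Everywhere" />
</configSections>
<dataCacheClients>
  <dataCacheClient name="default">
    <!-- identifier = the name of the role that hosts the cache -->
    <autoDiscover isEnabled="true" identifier="CacheWorkerRole" />
  </dataCacheClient>
</dataCacheClients>
```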
To create a cache where you can save or get any kind of serializable data you need to create a new instance of DataCache. You can use any name for the cache. If the cache has not been created yet, it will be created automatically.
DataCache dataCache = new DataCache("FooCache");
The API to add or get items from the cache is very similar to Windows Azure Cache.
dataCache.Add("item1", new Size(100, 200));
dataCache.Put("item1", new Size(100, 200));
Size size = (Size) dataCache.Get("item1");
The main difference between Add and Put is that the Add method throws an exception if an item with the same key already exists in the cache. When an item is not found in the cache, Get returns null.
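The difference can be sketched like this (a sketch only – it assumes the Windows Azure Caching package is installed and a cache cluster is reachable; “FooCache” is just an example name):

```csharp
using System.Drawing;
using Microsoft.ApplicationServer.Caching;

DataCache dataCache = new DataCache("FooCache");

dataCache.Put("item1", new Size(100, 200));     // inserts or overwrites
try
{
    dataCache.Add("item1", new Size(300, 400)); // key already exists -> throws
}
catch (DataCacheException)
{
    // Add refuses to overwrite an existing key; Put would have succeeded.
}

object missing = dataCache.Get("no-such-key");  // null when the key is not found
```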
When we configured the cache (in the Caching tab of the role) we set the lifetime of an item from the cache. This value can also be overridden when we add an item to the cache, as the third parameter of the Add method.
dataCache.Add("item1", new Size(100, 200), TimeSpan.FromHours(1));
Each item from the cache can be retrieved as an object of type “DataCacheItem” that contains specific information such as the region name, tags, timeout, value, version and so on.
DataCacheItem dataCacheItem = dataCache.GetCacheItem("item1");
TimeSpan timeout = dataCacheItem.Timeout; // when the item will expire
A cache can contain 0 to n regions, which can be used to group cache items. Also, each item from the cache has a version that is automatically updated when we change the value of the item.
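Regions and versions can be sketched as below, continuing with the same DataCache instance; the region name "sizes" is a hypothetical example:

```csharp
// A region groups related items together inside one cache.
dataCache.CreateRegion("sizes");
dataCache.Put("item1", new Size(100, 200), "sizes");

// Every change to an item bumps its version automatically.
DataCacheItem before = dataCache.GetCacheItem("item1", "sizes");
dataCache.Put("item1", new Size(300, 400), "sizes");
DataCacheItem after = dataCache.GetCacheItem("item1", "sizes");
// before.Version and after.Version now differ.
```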
In conclusion, the in-memory cache is a very powerful feature of Windows Azure. A lot of people wanted this feature because, for sensitive data, the cache can be accessed only from your own deployment and not from the internet.


  1. Is this new kind of cache a distributed cache, or is it limited to the machine where each role instance is running?
    Ex.: for an Azure application with 20 web role instances I want the cache to be distributed too, on 20 or 30 machines, otherwise the cache becomes a bottleneck.

    1. For the cache role, if you have two or more cache roles, the cache will be automatically synchronized between these roles. The same thing happens with the co-located cache.
      The basic idea is this: the cache is distributed to all the cache locations (cache roles or co-located instances).


