
How does the Windows Azure in-memory cache work

Until the latest version of Windows Azure was released, the caching option available to us was Windows Azure Cache. This was, and still is, a very powerful caching mechanism that can be used without any problem; almost all the problems related to synchronization are resolved by the framework. I talked about Windows Azure Cache in this blog post.
But today I don’t want to talk about the old Windows Azure Cache. The new version of Windows Azure Cache comes with a new feature: in-memory cache. Why is it so important? First of all, it is free, and the synchronization between roles is done automatically by Windows Azure.
First of all, what should we know about the in-memory cache? Even if it is free, the in-memory cache has an indirect cost: it consumes memory and computation power from the role, and the most important resource is the memory. Because of this, based on the role size, you need to be aware of how large the cache on each role can be. The good part is that we don’t have any hard limitation, so you can define special roles used only for caching (or even a cluster of cache machines; in that case I think that Windows Azure Cache is the better option).
This new version of the cache eliminates the cache quotas and throttling and supports the memcache protocol, so we can integrate very easily with other systems. On the same role we can have more than one cache configured; each cache can have a different name and different settings.
From what I know, it is very similar to Windows Azure Cache and can be integrated with it very easily. When we want to configure an in-memory cache we have two options. The first one is to define a role that is dedicated to caching. The other option is to have a role that, beside the in-memory cache, also hosts the application (the resources are shared between them); this setup is named “Co-located Role Caching”.
Let’s see how we can configure a dedicated role for in-memory caching. We will need to create a new project of type “Cache Worker Role”. This role defines a dedicated machine that will be used only for caching. After we create this new role, if we go to its properties we will see a new tab named “Caching”. In this tab we can configure the names of the caches, the number of copies that each cache will have (for backup purposes), when an item from the cache will expire and so on. Several expiration types can be set; in the current version we have 3 options:
  • None – the item will never expire (in this case the time to live needs to be set to 0)
  • Absolute – the item will expire X minutes (the time-to-live value) after the moment when it is added to the cache
  • Sliding Window – the item will expire X minutes after the moment when it is added to the cache or last accessed (in this case, if the time to live is set to 5 minutes and someone reads the value, the item will live 5 more minutes in the cache)
At this step, don’t forget to set a valid storage account that will be used to create the backup copies. Configuring backup copies is not mandatory. The size of the in-memory cache of a dedicated role is set based on the size of the role.
Co-located role caching is very simple to activate. Open the properties of the role, go to the Caching tab and check “Enable Caching”. From this moment the in-memory cache for your role is activated. You have the possibility to set the cache size as a percentage of the role’s memory. I recommend setting this value very carefully; the performance of your application can be affected if the in-memory cache size is too big.
The in-memory cache can be accessed by any role from the same deployment, but you will not be able to access it from a different deployment. For example, the Foo client will not be able to access the OOF in-memory cache. For security reasons this is a very good decision: you don’t want someone from the internet to access and steal/change the data from your in-memory cache.
The next step is to access and consume the cache from your code. To do this you will need to install a package from NuGet: search for and install “Windows Azure Caching”. This package helps us (the developers) configure and consume the in-memory cache very easily. By default, this package adds to the configuration file all the configuration sections that are needed. In the configuration file, don’t forget to set the cache cluster name (the name that was set in the Caching tab of the role).
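As a sketch, the relevant section of the configuration file looks roughly like this (the exact markup added by the NuGet package may differ slightly between SDK versions; “CacheWorkerRole1” is a placeholder for the name of the role that hosts the cache):

```xml
<dataCacheClients>
  <dataCacheClient name="default">
    <!-- identifier must match the name of the role that hosts the cache,
         as configured in the Caching tab. -->
    <autoDiscover isEnabled="true" identifier="CacheWorkerRole1" />
  </dataCacheClient>
</dataCacheClients>
```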
To create a cache where you can get or save any kind of serializable data, you need to create a new instance of DataCache. You can use any name for the cache; if the cache does not exist yet, it will be created automatically.
DataCache dataCache = new DataCache("FooCache");
The API to add or get items from the cache is very similar to Windows Azure Cache.
dataCache.Add("item1", new Size(100, 200));
dataCache.Put("item1", new Size(100, 200));
Size size = (Size)dataCache.Get("item1");
The main difference between Add and Put is that the Add method will throw an exception if an item with the same key already exists in the cache. When an item is not found in the cache, a null value is returned.
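A minimal sketch of these semantics (it assumes a configured cache client; DataCacheErrorCode.KeyAlreadyExists is the error code I would expect for a duplicate Add, but verify it against your SDK version):

```csharp
using System;
using System.Drawing;
using Microsoft.ApplicationServer.Caching;

class AddPutSample
{
    static void Main()
    {
        DataCache dataCache = new DataCache("FooCache");

        dataCache.Add("item1", new Size(100, 200)); // first Add succeeds

        try
        {
            dataCache.Add("item1", new Size(300, 400)); // same key -> throws
        }
        catch (DataCacheException ex)
        {
            if (ex.ErrorCode == DataCacheErrorCode.KeyAlreadyExists)
            {
                // Put does not care whether the key exists; it overwrites.
                dataCache.Put("item1", new Size(300, 400));
            }
        }

        // Get returns null for a missing key instead of throwing.
        object missing = dataCache.Get("noSuchItem");
        Console.WriteLine(missing == null);
    }
}
```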
When we configured the cache (in the Caching tab of the role) we set the lifetime of an item in the cache. This value can also be specified when we add an item to the cache, as the third parameter of the Add method.
dataCache.Add("item1", new Size(100, 200), TimeSpan.FromHours(1));
Each item from the cache can also be retrieved as an object of type “DataCacheItem” that contains item-specific data such as the region name, tags, timeout, value, version and so on.
DataCacheItem dataCacheItem = dataCache.GetCacheItem("item1");
TimeSpan timeout = dataCacheItem.Timeout; // When the item will expire.
A cache can contain 0 to n regions, which can be used to group cache items. Also, each item from the cache has a version that is automatically updated when we change the value of the item.
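A short sketch of regions and versions (the region name “fooRegion” is just an example; the overloads shown are the ones I know from the Azure Caching assemblies, so check them against your SDK):

```csharp
using System.Drawing;
using Microsoft.ApplicationServer.Caching;

class RegionSample
{
    static void Main()
    {
        DataCache dataCache = new DataCache("FooCache");

        // Group related items into a region ("fooRegion" is an example name).
        dataCache.CreateRegion("fooRegion");
        dataCache.Add("item1", new Size(100, 200), "fooRegion");

        // Each item carries a version that changes on every update.
        DataCacheItem before = dataCache.GetCacheItem("item1", "fooRegion");

        // Optimistic concurrency: this Put succeeds only if nobody
        // changed the item since we read it.
        dataCache.Put("item1", new Size(300, 400), before.Version, "fooRegion");
    }
}
```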
In conclusion, the in-memory cache is a very powerful feature of Windows Azure. A lot of people wanted this feature, because for sensitive data the cache can be accessed only from your own deployment and not from the internet.

Comments

  1. Is this new kind of cache a distributed cache, or is it limited to the machine where each role instance is running?
    Ex.: for an Azure application with 20 web role instances I want the cache to be distributed too, on 20 or 30 machines; otherwise the cache becomes a bottleneck.

    1. For the cache roles: if you have two or more cache roles, the cache will be automatically synchronized between these roles. The same thing happens with the co-located cache.
      The basic idea is that the cache is distributed to all the cache locations (cache roles or co-located instances).
