
Azure Redis Cache (Day 1 of 31)

List of all posts from this series: http://vunvulearadu.blogspot.ro/2014/11/azure-blog-post-marathon-is-ready-to.html

Short Description
If you have already used Redis, you know that Microsoft Azure Redis Cache is based on this open-source caching solution. The cache is offered as a service, with an SLA of 99.9% uptime. Everything is managed by the Azure infrastructure; the only thing we need to do is use it.
There are two editions of Azure Redis Cache at this moment. The Basic edition consists of a single node and can be used successfully during the development phase or while working on a PoC. There is no SLA on the Basic edition.
The Standard edition consists of two nodes (Master/Slave) and comes with an SLA of 99.9% uptime. On top of this, because it is a multi-node configuration, this edition is ready for high availability.
The maximum size of an instance is 53 GB per unit; we will talk more about this in the next sections.

Main Features
It is a (key, value) store, which makes it very simple to use. On top of this, each operation is atomic, and after each operation you can be sure that the changes were persisted in the cache.
There are multiple data types that can be stored in Azure Redis Cache:

  • Strings – Maximum size is 512 MB (per string)
  • Lists – This concept is pretty interesting and allows us to store string items and control where content is pushed/popped (at the beginning or at the end of the list). The list is ordered by insertion time and can play the role of a stack or a queue at the same time.
  • Sets – A collection of unique strings. Adding the same element multiple times has no effect, so no membership check is required before adding it. There is built-in support for operations like union and intersection.
  • Hashes – Maps of fields to values, useful for representing objects
  • Sorted Sets – Similar to Sets, but each element has an associated score used to keep the set ordered. Multiple items can share the same score.
  • Bitmaps and HyperLogLogs – string-based types with their own semantics

We have support for transactions. A transaction contains one or more atomic operations. Be aware that if a command in the transaction fails, there is no rollback: the rest of the commands are still executed and the error is reported. This behavior is acceptable because commands typically fail only because of a programming error (for example, a syntax error or an operation applied to the wrong data type); Redis data structures are simple enough that there are few real reasons for a command to fail at runtime.
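With the StackExchange.Redis client (the same library used in the Code Sample section below), a transaction can be sketched like this; "localhost" and the key names are placeholders:

```csharp
using System;
using StackExchange.Redis;

class TransactionSample
{
    static void Main()
    {
        // Placeholder endpoint - replace with your own cache connection string
        ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("localhost");
        IDatabase db = connection.GetDatabase();

        // Commands are queued locally; nothing is sent until Execute()
        ITransaction transaction = db.CreateTransaction();
        transaction.StringSetAsync("user:1:name", "Radu");
        transaction.StringSetAsync("user:1:visits", "1");

        // Execute() wraps the queued commands in MULTI/EXEC on the server
        bool committed = transaction.Execute();
        Console.WriteLine("Committed: " + committed);
    }
}
```

Note that the *Async methods only queue the commands; their tasks complete once Execute() runs.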

Redis Cache has the notion of Publisher/Subscriber. This allows clients to send messages to a channel; every subscriber of that channel receives the message (a simplified Service Bus, if you like).
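A minimal Publisher/Subscriber sketch with StackExchange.Redis; the endpoint and channel name are placeholders:

```csharp
using System;
using StackExchange.Redis;

class PubSubSample
{
    static void Main()
    {
        ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("localhost");
        ISubscriber subscriber = connection.GetSubscriber();

        // Every client subscribed to "news" receives each published message
        subscriber.Subscribe("news", (channel, message) =>
            Console.WriteLine("Received: " + message));

        // Publish returns the number of subscribers that received the message
        long receivers = subscriber.Publish("news", "cache deployed");
        Console.WriteLine("Delivered to " + receivers + " subscriber(s)");
    }
}
```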
There is full support for key expiration, which allows us to specify how long a key should be kept in the cache. Keep in mind that the default behavior is to create keys that don't expire. This means that such a key remains in the cache until it is explicitly removed or evicted!
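With StackExchange.Redis, an expiration can be set at write time or attached to an existing key later; a sketch, where "localhost" and the key name are placeholders:

```csharp
using System;
using StackExchange.Redis;

class ExpirationSample
{
    static void Main()
    {
        IDatabase db = ConnectionMultiplexer.Connect("localhost").GetDatabase();

        db.StringSet("session:42", "some-state");             // no expiration by default
        db.KeyExpire("session:42", TimeSpan.FromMinutes(30)); // attach a TTL afterwards

        TimeSpan? ttl = db.KeyTimeToLive("session:42");       // null means the key never expires
        Console.WriteLine(ttl);
    }
}
```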
Each Azure Redis Cache access port is configurable. This means that you can specify which port to use and whether the channel is SSL or non-SSL.
There is also full support for LRU eviction, which automatically removes data when the memory limit is reached. I don't want to go deeper into this subject, but I would like to mention that all eviction policies from Redis are supported by Azure.
The last feature that I would like to present is the support for operations over cached items. For example, we can increment a value directly in the cache without having to fetch it first.
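With StackExchange.Redis the increment happens on the server, atomically, in a single round trip; a sketch with a placeholder endpoint and key:

```csharp
using System;
using StackExchange.Redis;

class IncrementSample
{
    static void Main()
    {
        IDatabase db = ConnectionMultiplexer.Connect("localhost").GetDatabase();

        db.StringSet("page:views", 10);
        long views = db.StringIncrement("page:views");    // INCR: no prior read needed
        views = db.StringIncrement("page:views", 5);      // INCRBY with an explicit delta
        Console.WriteLine(views);
    }
}
```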

Limitations
The first limitation people would point out is the cache size, which can be at most 53 GB. Yes, we could see this as a limitation, but it is also a good opportunity to split cache content across multiple cache units. For example, user data can go into one cache unit, product data into another, and so on.
The second limitation is the maximum number of items that we can store in a Hash, List, Set or Sorted Set: around 4 billion (2^32). There are use cases where this value can be too small, and it can become a limitation for us. Of course, this problem can be worked around on the client side.

Applicable Use Cases
There are many use cases that I have in mind. I will give 4 simple use cases where Redis Cache can become our best ally.

Top Product List
A top of the 100 most popular products on a web site. A use case like this can be implemented very easily using Sorted Sets, incrementing the score of a product each time someone buys or views it. We can use commands like ZRANK and ZRANGE to show only a part of the list, for pagination or other marketing strategies.
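A sketch of this use case with StackExchange.Redis; the endpoint, key and member names are illustrative:

```csharp
using System;
using StackExchange.Redis;

class TopProductsSample
{
    static void Main()
    {
        IDatabase db = ConnectionMultiplexer.Connect("localhost").GetDatabase();

        // ZINCRBY: bump the product score on every purchase/view
        db.SortedSetIncrement("top-products", "product:42", 1);

        // ZRANGE by rank, highest score first: the first page of 10 products
        SortedSetEntry[] page = db.SortedSetRangeByRankWithScores(
            "top-products", 0, 9, Order.Descending);
        foreach (SortedSetEntry entry in page)
            Console.WriteLine(entry.Element + " -> " + entry.Score);

        // ZRANK: position of a single product in the top
        long? rank = db.SortedSetRank("top-products", "product:42", Order.Descending);
    }
}
```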

Clients IPs
Using Sets we can track the IPs of all clients of our application and associate different information with them. We can even store a blacklist of IPs in Redis.
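A sketch with StackExchange.Redis; the endpoint, key names and IP are illustrative:

```csharp
using System;
using StackExchange.Redis;

class ClientIpsSample
{
    static void Main()
    {
        IDatabase db = ConnectionMultiplexer.Connect("localhost").GetDatabase();

        db.SetAdd("client-ips", "203.0.113.7");
        db.SetAdd("client-ips", "203.0.113.7");   // duplicate add: the set stays unchanged

        // SISMEMBER: cheap blacklist lookup, no full scan required
        bool blocked = db.SetContains("blacklist-ips", "203.0.113.7");
        Console.WriteLine(blocked);
    }
}
```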

Tracking system
A Redis list can be used per device to track its GPS position. You can cap the list at a maximum size and keep pushing new positions to it. In this way you will have the tracking history of each device for a specific period of time, and at any moment you can iterate over it and analyze it.
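Capping the list can be done with LTRIM after each push; a sketch, where the endpoint, the device key and the 1,000-item cap are assumptions:

```csharp
using System;
using StackExchange.Redis;

class TrackingSample
{
    static void Main()
    {
        IDatabase db = ConnectionMultiplexer.Connect("localhost").GetDatabase();

        // LPUSH the newest position, then LTRIM to keep only the latest 1000 entries
        db.ListLeftPush("gps:device-1", "{\"lat\":46.77,\"lng\":23.59}");
        db.ListTrim("gps:device-1", 0, 999);

        // Iterate the full history, newest first
        foreach (RedisValue position in db.ListRange("gps:device-1", 0, -1))
            Console.WriteLine(position);
    }
}
```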

Database Cache
Use Redis Cache as a caching layer over a database, storing information for a specific time period.
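This is the classic cache-aside pattern; a sketch, where the endpoint is a placeholder and LoadProductFromDatabase stands in for a hypothetical data-access call:

```csharp
using System;
using StackExchange.Redis;

class DatabaseCacheSample
{
    static readonly IDatabase Cache =
        ConnectionMultiplexer.Connect("localhost").GetDatabase();

    static string GetProduct(string id)
    {
        // 1. Try the cache first
        string cached = Cache.StringGet("product:" + id);
        if (cached != null)
            return cached;

        // 2. On a miss, load from the database (hypothetical call)...
        string fromDb = LoadProductFromDatabase(id);

        // 3. ...and cache it with a TTL so stale data eventually expires
        Cache.StringSet("product:" + id, fromDb, TimeSpan.FromMinutes(10));
        return fromDb;
    }

    static string LoadProductFromDatabase(string id)
    {
        return "product-" + id;   // stand-in for a real database query
    }
}
```

When the key is missing, StringGet returns a RedisValue that converts to a null string, which is what triggers the database fallback.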

Code Sample
To be able to use Azure Redis Cache in C# you will need the StackExchange.Redis NuGet package. Don't forget that you can use Azure Redis Cache from any language, because it speaks the standard Redis protocol and client libraries exist for most platforms.
First of all you need to create a connection to Redis Cache by passing the connection string to ConnectionMultiplexer. Once you have a connection instantiated, you can get a database instance and start adding/reading data from the cache.
Remember that when a key doesn't exist in the cache and you request it, a null value is returned.
using StackExchange.Redis;

// Get connection instance (the connection string is truncated here)
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("vunvulea.redis.cache.windows.net...");
// Get database
IDatabase databaseCache = connection.GetDatabase();
// Add items
databaseCache.StringSet("foo1", "1");
databaseCache.StringSet("foo2", "2");
// Add an item with an expiration value
databaseCache.StringSet("foo3", "3", TimeSpan.FromMinutes(20));

// Get item value
string foo1Value = databaseCache.StringGet("foo1");

Pros and Cons
If you need a caching solution, I would say that Azure Redis Cache is the best one currently available on Microsoft Azure. The following list of pros and cons applies to Redis in general, not only to Azure Redis Cache:

CONS:
  • Only a few data types exist by default, but they are enough for a cache system. You may have problems mapping complex data types; in that case you should use a document-oriented solution like MongoDB.
  • Don't try to store data with inheritance hierarchies in Redis Cache. It will be a nightmare; again, this is not a use case for a (key, value) store, and a solution like MongoDB is a better fit.

PROS:
  • Very fast, even with billions of cached items
  • Good scaling support
  • Commands and smart data types like Sorted Sets
  • Persistence to disk
  • String sizes up to 512 MB
  • Publisher and Subscriber support


Conclusion
Yes and yes. Azure Redis Cache is the best caching solution currently available on the market as a service. It has a lot of great features, and I'm happy that Microsoft Azure supports it.
In the last 4 years I have had the opportunity to use all the caching solutions offered by Microsoft Azure. Based on this experience, I recommend creating a separate layer for caching and being prepared to replace one cache system with another if the future brings us better solutions. Don't create monolithic systems.
