Recently I encountered an application deployed on Azure that had connectivity issues with Azure Redis Cache.
The application is web-based, with most of the logic inside Azure Functions. The system uses Azure Redis Cache to exchange data between the web application and the functions that crunch the data behind the scenes.
The deployment is stable and works as expected for 5-10 minutes. After that, it is down for the next 20-30 minutes. The cycle repeats over and over, with a generic error on both sides (the Azure Web App and the Azure Functions) indicating that the source of the problem is Azure Redis Cache.
The errors are similar to the one below:
No connection is available to service this operation: RPUSH ST; UnableToConnect on lola.redis.cache.windows.net:6380/Interactive, origin: ResetNonConnected, input-buffer: 0, outstanding: 0, last-read: 5s ago, last-write: 5s ago, unanswered-write: 459810s ago, keep-alive: 60s, pending: 0, state: Connecting, last-heartbeat: never, last-mbeat: -1s ago, global: 0s ago, mgr: Inactive, err: never; IOCP: (Busy=0,Free=1000,Min=6,Max=1000), WORKER: (Busy=3,Free=8188,Min=6,Max=8191)
The team's first reaction was to change the cache tier from C0 to C2 and then C3. The problem appeared less often, but the price of the cache increased drastically for a system that should cost less than €200 per month.
Their mistake was that nobody took a closer look at the error message or at the Azure Redis Cache metrics available in the Azure Portal. There, it was pretty clear that:
- The maximum number of connections allowed to the cache was reached
- During those periods, the cache miss rate was 100%
- Changing the tier of the cache instance only increased the number of allowed connections without resolving the underlying problem (the same metrics can also be pulled programmatically, as sketched after this list)
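For reference, the same counters can also be read outside the Portal. Below is a minimal sketch (not part of the original investigation) that assumes the Azure.Monitor.Query and Azure.Identity packages and a placeholder resource ID for the cache instance:

using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Monitor.Query;

class CheckRedisMetrics
{
    static async Task Main()
    {
        // Placeholder resource ID of the Azure Redis Cache instance.
        string resourceId =
            "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Cache/Redis/<cache-name>";

        var client = new MetricsQueryClient(new DefaultAzureCredential());

        // "connectedclients" and "cachemisses" are the standard metric names for Microsoft.Cache/Redis.
        var response = await client.QueryResourceAsync(
            resourceId,
            new[] { "connectedclients", "cachemisses" });

        foreach (var metric in response.Value.Metrics)
            foreach (var series in metric.TimeSeries)
                foreach (var point in series.Values)
                    Console.WriteLine($"{metric.Name} @ {point.TimeStamp}: avg={point.Average}, max={point.Maximum}");
    }
}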
Taking a look at the code, we noticed that inside the Azure Functions a new connection is opened every time data is written to the cache - but the connection is never closed. This means that each connection stays open until its idle timeout expires.
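The original Function code is not reproduced here, but based on that description the write path presumably looked roughly like the sketch below (hypothetical; RedisCacheConnectionString, key and value stand in for the real values):

// A new multiplexer is created on every write and never closed,
// so each call leaves a connection open until its idle timeout expires.
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(RedisCacheConnectionString);
IDatabase cacheDb = connection.GetDatabase();
cacheDb.ListRightPush(key, value);   // the RPUSH seen in the error above
// no connection.Close() or Dispose() anywhere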
Cause: improper use of the API. Each connection instance (ConnectionMultiplexer) needs to be closed explicitly, or used through the IDisposable pattern it already implements. Once the code was updated, the issues disappeared, and we were able to downgrade the cache tier back to the original one.
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(RedisCacheConnectionString);
IDatabase cacheDb = connection.GetDatabase();
cacheDb.ListRightPush(key, value);
connection.Close();

OR

using (ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(RedisCacheConnectionString))
{
    IDatabase cacheDb = connection.GetDatabase();
    cacheDb.ListRightPush(key, value);
}
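Closing or disposing the connection after each write fixes the leak. A common alternative (not what the fix above did, just a sketch of the usual StackExchange.Redis guidance) is to share one lazily created ConnectionMultiplexer across invocations, assuming the connection string is exposed as an app setting with the hypothetical name RedisCacheConnectionString:

using System;
using StackExchange.Redis;

public static class RedisConnection
{
    // Hypothetical app setting name; Azure Functions app settings surface as environment variables.
    private static readonly string RedisCacheConnectionString =
        Environment.GetEnvironmentVariable("RedisCacheConnectionString");

    // Created once on first use and reused by every invocation,
    // instead of opening (and leaking) a new connection per write.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(RedisCacheConnectionString));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}

// Inside the function body:
// IDatabase cacheDb = RedisConnection.Connection.GetDatabase();
// cacheDb.ListRightPush(key, value);

The multiplexer is designed to be shared and reused, so keeping a single instance also avoids paying the connection cost on every write.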