
Azure Redis Cache and connection management

Recently I encountered an application deployed on Azure that had connectivity issues with Azure Redis Cache.
The application is web-based, with most of the logic inside Azure Functions. The system uses Azure Redis Cache for data exchange between the web application and the functions that crunch the data behind the scenes.
The deployment is stable and works as expected for 5-10 minutes. After that, it is down for the next 20-30 minutes. The cycle repeats over and over, with a generic error on both sides (Azure Web App and Azure Functions) indicating that the source of the problem is Azure Redis Cache.

The errors are similar to the one below:
No connection is available to service this operation: RPUSH ST; UnableToConnect on lola.redis.cache.windows.net:6380/Interactive, origin: ResetNonConnected, input-buffer: 0, outstanding: 0, last-read: 5s ago, last-write: 5s ago, unanswered-write: 459810s ago, keep-alive: 60s, pending: 0, state: Connecting, last-heartbeat: never, last-mbeat: -1s ago, global: 0s ago, mgr: Inactive, err: never; IOCP: (Busy=0,Free=1000,Min=6,Max=1000), WORKER: (Busy=3,Free=8188,Min=6,Max=8191)

The team's first reaction was to change the cache tier from C0 to C2 and then C3. The problem appeared less often, but the price of the cache increased drastically for a system that should cost less than 200 per month.
Their mistake was that nobody took a closer look at the error message and at the Azure Redis Cache metrics available inside the Azure Portal. There, it was pretty clear that:
  • The maximum number of connections allowed to the cache was being reached
  • During those periods, 100% of the requests to the cache were missed
  • Changing the tier of the cache instance only increased the number of allowed connections without resolving the root problem
Such a high number of connections is not normal for this application: it has 5 different types of Azure Functions that are configured not to run in parallel, and the Azure Web App opens at most 4 connections to the cache at the same time.

Taking a look at the code, we noticed that inside the Azure Functions a new connection is opened each time data is written to the cache - but the connection is never closed. This means that each connection remains open until its timeout occurs.
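
The offending code probably looked like the sketch below - a hypothetical reconstruction, not the team's exact code (key, value and RedisCacheConnectionString are the same placeholder names used in the examples that follow):

// Leaky pattern: a new ConnectionMultiplexer is created on every
// write and never closed, so each call leaves a connection open
// until the server-side timeout kicks in.
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(RedisCacheConnectionString);
IDatabase cacheDb = connection.GetDatabase();
cacheDb.ListRightPush(key, value);
// Missing here: connection.Close() or connection.Dispose()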

Cause: Improper use of the API. Each connection instance (ConnectionMultiplexer) needs to be closed explicitly or disposed through the IDisposable pattern that it already implements. Once the code was updated, the issues disappeared, and we were able to downgrade the cache tier back to the original one.


// Option 1: close the connection explicitly after the write
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(RedisCacheConnectionString);

IDatabase cacheDb = connection.GetDatabase();
cacheDb.ListRightPush(key, value);

connection.Close();

OR

// Option 2: let the using block dispose the connection automatically
using (ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(RedisCacheConnectionString))
{
    IDatabase cacheDb = connection.GetDatabase();
    cacheDb.ListRightPush(key, value);
}
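
Even better for scenarios like this one, the StackExchange.Redis guidance is to create the ConnectionMultiplexer once and reuse it across the whole application, since the multiplexer is designed to be shared and each Connect call is expensive. A minimal sketch of that pattern, reusing the RedisCacheConnectionString from the examples above:

// Share a single ConnectionMultiplexer per process. Lazy<T> makes the
// initialization thread-safe, and the static field survives across
// Azure Functions invocations on a warm host, so connections are not leaked.
private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
    new Lazy<ConnectionMultiplexer>(() =>
        ConnectionMultiplexer.Connect(RedisCacheConnectionString));

public static ConnectionMultiplexer Connection => LazyConnection.Value;

// Usage - no Close() needed, the shared connection stays open:
IDatabase cacheDb = Connection.GetDatabase();
cacheDb.ListRightPush(key, value);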
