
Near real-time data processing using AWS Kinesis Data Firehose, AWS Kinesis Data Analytics, and AWS ElastiCache for Redis

One of the most common ways to increase the performance of an application is to add a cache layer. A caching mechanism protects the storage and database layers from throttling and intensive queries.

In this article, we analyze whether a standard cache solution is always the best option.

Context 

We are working on a web application with a frontend and an API consumed by the frontend and by third-party systems. We have no control over these third parties, and every 2 months the number of requests increases by 200%.

The current solution, built by the team, is written in .NET Core and runs on AWS EC2 instances managed by AWS Elastic Beanstalk. In-memory caching is used for content that is frequently retrieved from the database (AWS RDS).

New Gen 

The number of AWS EC2 instances increased, and to protect the database and backend, the team decides to disable sticky sessions (session affinity) and migrate to a distributed cache solution.

The team decides to use AWS ElastiCache and to split the cached content into 3 different cache repositories: 

  • The Common Cache, used to cache content that is not tenant- or user-specific 
  • The Tenant Cache, used to cache tenant-specific content 
  • The User Cache, used to cache user-specific content per session. This content is always encrypted.
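A simple way to keep the three repositories separate on a shared Redis cluster is a key-prefix convention. The sketch below is only an illustration of that idea; the prefix names and helper functions are hypothetical, not the team's actual key scheme:

```python
# Illustrative key-prefix helpers for the three cache repositories.
# "common:", "tenant:{id}:", and "user:{session}:" are assumed conventions.

def common_key(name: str) -> str:
    """Key for content that is not tenant- or user-specific."""
    return f"common:{name}"

def tenant_key(tenant_id: str, name: str) -> str:
    """Key for tenant-specific content."""
    return f"tenant:{tenant_id}:{name}"

def user_key(session_id: str, name: str) -> str:
    """Key for user-specific, per-session content (stored encrypted)."""
    return f"user:{session_id}:{name}"
```

With prefixes like these, each repository can also be evicted or monitored independently by scanning its own namespace.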
The solution is implemented in 2 weeks of work, and in 6 weeks it goes into production. It is a big success; the average response time decreases by 30%, and the load on the database and backend decreases drastically.

Team no. 2 
In parallel with our team, another team is working on new functionality that enables users to see, in real time, the current status of the devices in all the factories around the globe (1.5M devices). The status of each device is updated every 2s. 
Because the Tenant Cache was already planned on the roadmap, they decide to use the cache to store the state of the devices as well.

BOOM! 
The integration goes well, and all performance tests are a big success. Once they go into production, the response time of AWS ElastiCache for Redis increases from a few ms to 10–15s. All the read and write operations to the cache are slow. The cluster size of AWS ElastiCache for Redis is increased, but the running costs are too high for the business, and the real-time device monitoring feature is disabled.

What happened? 
AWS ElastiCache for Redis is amazing and works great as long as you use it for a high number of read operations and a low number of write operations. By updating the state of each device every 2s, the load on the tenant cache increased by 750k write operations per second. Even though the load is distributed over multiple regions around the globe, the number of write operations was over 150k per region. The performance tests run before going live weren't able to push the write/update load on AWS ElastiCache for Redis high enough to detect this problem.
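The numbers above follow from simple arithmetic. A back-of-envelope check (the region count of 5 is only an assumption to match the per-region figure; the article says just "multiple regions"):

```python
# Back-of-envelope write-load estimate for the device heartbeats.
devices = 1_500_000        # total devices worldwide
interval_s = 2             # each device reports its state every 2 seconds

writes_per_second = devices // interval_s   # 750,000 writes/s globally

regions = 5                # assumed, to illustrate the >150k/region figure
per_region = writes_per_second // regions   # 150,000 writes/s per region
```

At this rate, every heartbeat becomes a cache write, regardless of whether the state actually changed, which is exactly the workload Redis-as-a-read-cache was not sized for.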

Solution 
By design, it does not make sense to update the state of a device if it remains the same. Taking this into account, the state of the devices is still stored inside AWS ElastiCache for Redis, but updated only when it changes. 
Under normal conditions, the state of a device changes only every 4–16h, and within a 1h timeframe, at most 20% of device states change. Even so, from the business point of view, the 2s heartbeat is essential and cannot be changed, for example, to be pushed only when the device has a new state.
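The "write only on change" idea can be sketched as a compare-before-write. A plain dict stands in for the Redis client here; with real Redis, the same check could be a GET before SET (or a small Lua script to make the compare-and-set atomic under concurrency):

```python
# Sketch: write the device state only when it differs from the cached value.
# The dict simulates Redis; this is an illustration, not the team's code.

cache: dict = {}

def update_if_changed(device_id: str, new_state: str) -> bool:
    """Return True only when the cache was actually written."""
    if cache.get(device_id) == new_state:
        return False          # heartbeat confirms the old state; skip the write
    cache[device_id] = new_state
    return True
```

Given that at most 20% of device states change in an hour, this filter drops the overwhelming majority of the 750k writes per second before they ever reach the cache.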


The redesigned solution collects the device heartbeats into AWS Kinesis Data Firehose. The information is forwarded to AWS Kinesis Data Analytics, which is configured to detect when the state of a device changes. When a new state is detected, an AWS Lambda function is invoked that updates the value in AWS ElastiCache for Redis. In this way, the solution can ingest a high number of device notifications, analyze them in near real time, and store the device state in a storage layer optimized for a high number of read operations.
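The Lambda side of this pipeline could look like the sketch below. The event shape, field names, and key scheme are assumptions for illustration; `redis_client` is any object exposing a Redis-style `set()`, such as a `redis.Redis` instance in production or a fake in tests:

```python
import json

# Sketch of a Lambda handler invoked when Kinesis Data Analytics detects a
# state change. Event format and key naming are illustrative assumptions.

def handle_state_change(event: dict, redis_client) -> int:
    """Write each detected state change to the cache; return the count."""
    updated = 0
    for record in event.get("records", []):
        payload = json.loads(record["data"])
        key = f"device:{payload['device_id']}:state"  # assumed key scheme
        redis_client.set(key, payload["state"])
        updated += 1
    return updated
```

Because only genuine state changes reach this function, the write rate against ElastiCache stays proportional to the roughly 20%-per-hour change rate rather than to the 2s heartbeat.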
