
Azure Table Performance - 1 vs 100.000 Tables under the same Storage Account

In our system we use Azure Table storage to store the list of commands that need to be sent to each client and persisted until the client is available. Because the number of clients is high (more than 100.000), it would be very expensive to store the list of commands in other resources like Redis Cache or SQL Azure.
From a performance perspective, Azure Tables are amazing: very fast even at high throughput, when you store a lot of data inside them.

In the first version we did a simple mapping, where we had only one Azure Table for all our clients. For each client, we had a dedicated partition in that table. This works great because Azure Table storage partitions (scales) based on the partition key.
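As a rough illustration of this first mapping, the sketch below stores one command per row, with the client id as the partition key. It assumes the classic WindowsAzure.Storage .NET SDK; the type and table names (CommandEntity, ClientCommands) are illustrative, not the actual names from our system.

```csharp
using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// One row per pending command; the client id is the partition key,
// so all commands of a client live in the same partition.
public class CommandEntity : TableEntity
{
    public CommandEntity() { }

    public CommandEntity(string clientId, string commandId)
    {
        PartitionKey = clientId;
        RowKey = commandId;
    }

    public string Payload { get; set; }
}

public static class SingleTableStore
{
    public static void AddCommand(CloudTableClient tableClient, CommandEntity command)
    {
        // All clients share the same table; Azure scales it by partition.
        CloudTable table = tableClient.GetTableReference("ClientCommands");
        table.CreateIfNotExists();
        table.Execute(TableOperation.InsertOrReplace(command));
    }

    public static IEnumerable<CommandEntity> GetPendingCommands(CloudTableClient tableClient, string clientId)
    {
        CloudTable table = tableClient.GetTableReference("ClientCommands");
        TableQuery<CommandEntity> query = new TableQuery<CommandEntity>().Where(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, clientId));
        return table.ExecuteQuery(query);
    }
}
```

The CloudTableClient comes from the storage account, for example CloudStorageAccount.Parse(connectionString).CreateCloudTableClient().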


There is only a small problem with this approach, and it is related to maintenance and support. If a support engineer needs to look at the commands of a specific user, it is hard to navigate and access that data inside one huge table.

The second approach is to create a different Azure Table for each client. The current documentation specifies that we can have as many tables as we want under a Storage Account without affecting performance.
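Under this mapping, the table name itself identifies the client. A minimal sketch of what changes, reusing the CommandEntity type from the sketch above, could look like the code below; the naming scheme ("Commands" + client id) is only an assumption for illustration, and keep in mind that Azure Table names must be alphanumeric.

```csharp
using System.Linq;
using Microsoft.WindowsAzure.Storage.Table;

public static class TablePerClientStore
{
    // Illustrative naming scheme: one table per client, e.g. "Commands42".
    private static string TableNameFor(string clientId)
    {
        return "Commands" + clientId;
    }

    public static void AddCommand(CloudTableClient tableClient, string clientId, CommandEntity command)
    {
        CloudTable table = tableClient.GetTableReference(TableNameFor(clientId));
        table.CreateIfNotExists();
        table.Execute(TableOperation.InsertOrReplace(command));
    }

    public static bool HasPendingCommands(CloudTableClient tableClient, string clientId)
    {
        // A support engineer can now open the client's own table directly.
        CloudTable table = tableClient.GetTableReference(TableNameFor(clientId));
        TableQuery<CommandEntity> query = new TableQuery<CommandEntity>().Take(1);
        return table.ExecuteQuery(query).Any();
    }
}
```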


Before making such a change in our system, we decided to run a performance test to see whether performance is impacted in any way when we have one big table versus 100.000 tables.

We ran 3 different scenarios with the same load on Azure Table storage:
  • One big table with all the commands inside it
  • 100.000 empty tables (one per client), where clients only checked if they have commands
  • 100.000 tables (one per client), with 5 commands for each client
The source of the load was an on-premises machine, so don't focus on the base latency, but on the difference between these 3 scenarios. When we access Azure Table storage from within the Azure environment (for example from Worker Roles), the latency of a read operation is under 10ms.
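For reference, the latency in each scenario was measured on the client side. A simplified sketch of how a single read could be timed, assuming the same classic SDK and the illustrative CommandEntity from above, is shown here; it is not our actual test harness.

```csharp
using System.Diagnostics;
using System.Linq;
using Microsoft.WindowsAzure.Storage.Table;

public static class LatencyProbe
{
    // Times one read of a client's pending commands and returns the latency in milliseconds.
    public static double MeasureReadMilliseconds(CloudTable table, string clientId)
    {
        TableQuery<CommandEntity> query = new TableQuery<CommandEntity>()
            .Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, clientId))
            .Take(5);

        Stopwatch watch = Stopwatch.StartNew();
        var results = table.ExecuteQuery(query).ToList();   // force the request to run
        watch.Stop();

        return watch.Elapsed.TotalMilliseconds;
    }
}
```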

Results are expressed in milliseconds and are the average of multiple runs.



As we can see, there is no impact whether you have 100.000 tables or only one under an Azure Storage account. Based on your needs, it might be simpler to have multiple tables, especially when you need to run cleanup steps over large amounts of data. Accessing table partitions and deleting row by row is expensive and time-consuming, while deleting a whole Azure Table can be done with one simple request.
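To make the cleanup argument concrete, the sketch below contrasts the two options, again assuming the classic SDK and the illustrative names used above: dropping a per-client table is a single request, while cleaning the shared table means walking the client's partition and deleting entity by entity.

```csharp
using Microsoft.WindowsAzure.Storage.Table;

public static class Cleanup
{
    // One table per client: a single request removes all of the client's commands.
    public static void DropClientTable(CloudTableClient tableClient, string clientId)
    {
        CloudTable table = tableClient.GetTableReference("Commands" + clientId);
        table.DeleteIfExists();
    }

    // Shared table: every entity in the client's partition has to be deleted individually.
    public static void DeleteClientPartition(CloudTable sharedTable, string clientId)
    {
        TableQuery<CommandEntity> query = new TableQuery<CommandEntity>().Where(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, clientId));

        foreach (CommandEntity entity in sharedTable.ExecuteQuery(query))
        {
            // One storage request per row (entity group transactions can batch at most 100 rows).
            sharedTable.Execute(TableOperation.Delete(entity));
        }
    }
}
```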
We can even say, based on the current results, that you get better performance if you use multiple tables rather than only one.
