
Azure Blob Storage - More storage and throughput

One of the core services of Microsoft Azure is Azure Storage, which is used to store binary content (Azure Blobs), key-value pairs (Azure Tables), or message queues (Azure Queues). In today's post, we will discover how a small change in Azure Storage capabilities can change our lives and simplify our IT solutions.

Current solutions
Until now, the maximum capacity of an Azure Blob Storage account was 500TB. Even if this might sound like a lot, there are multiple cases where you had to work around this limit. If you have a system where devices and users upload content, you can easily reach 2-5TB per day, which would force you to start a new Azure Storage account roughly every 3 months.
To overcome this limitation, your solution needs to be able to manage Azure Storage accounts automatically. Besides being able to clean and archive content automatically, you need a system that can create a storage account on the fly and redirect traffic to it. When you use multiple Storage Accounts, you are forced to store not only information about what content you are storing, but also a mapping of which Storage Account holds each specific piece of content.
Even if creating and managing a Storage Account is not complicated, it adds extra complexity on top of your application. This translates into extra management costs and possible bugs (or even strange behaviors that are hard to reproduce).
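To make the bookkeeping concrete, here is a minimal sketch of the kind of routing and mapping logic described above. Everything here is hypothetical (the account naming scheme, the in-memory mapping, the size tracking); a real solution would call the Azure management APIs to create accounts and would persist the blob-to-account mapping in a durable store.

```python
class StorageAccountRouter:
    """Sketch of the workaround: rotate (hypothetical) storage accounts
    as each one approaches the old 500TB cap, and keep a mapping of
    which account holds which blob so content can be found again."""

    CAPACITY_BYTES = 500 * 10**12  # the old 500TB per-account limit

    def __init__(self):
        self.accounts = []          # list of {"name": ..., "used": bytes}
        self.blob_to_account = {}   # the mapping we are forced to maintain
        self._new_account()

    def _new_account(self):
        # Hypothetical naming scheme; real accounts are created via the API.
        name = f"mystorage{len(self.accounts):03d}"
        self.accounts.append({"name": name, "used": 0})
        return self.accounts[-1]

    def store(self, blob_name: str, size_bytes: int) -> str:
        """Pick an account with room, record the mapping, return the account name."""
        current = self.accounts[-1]
        if current["used"] + size_bytes > self.CAPACITY_BYTES:
            current = self._new_account()  # create a new account on the fly
        current["used"] += size_bytes
        self.blob_to_account[blob_name] = current["name"]
        return current["name"]

    def locate(self, blob_name: str) -> str:
        """Without this lookup table, we could not find the content later."""
        return self.blob_to_account[blob_name]
```

Notice that even this toy version needs its own state, naming rules, and lookup path; that is exactly the extra complexity the article is talking about.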

Small limitations like this make our lives a little harder and force us to add more complexity to our systems.

Blob Storage Capacity and Throughput Increase
I was happy to find out that these days might be over. At least for applications that are using less than 5PB of blob storage.
The Azure Team just announced that they increased the capacity of blob storage from 500TB to 5PB. This is a big step forward, allowing us to plan and design our systems more simply. Most applications will no longer need multiple storage accounts to increase their storage capacity.
Of course, when you increase the storage capacity, you also need to increase the bandwidth, together with the number of transactions that are allowed (TPS). These were also increased by at least 2.5 times.

Below you can find the thresholds that were increased:

  • Max capacity for Blob storage accounts - 5PB (10x increase)
  • Max TPS/IOPS for Blob storage accounts - 50K (2.5x increase) 
  • Max ingress for Blob storage accounts - 50Gbps (2.5-10x increase)
  • Max egress for Blob storage accounts - 50Gbps (2.5-5x increase)
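A quick back-of-the-envelope calculation shows what the 10x capacity increase means in practice for the ingest scenario from the beginning of the article (2-5TB per day):

```python
# How long does one account last at the ingest rates mentioned above?
OLD_CAP_TB = 500          # old 500TB limit
NEW_CAP_TB = 5 * 1000     # new 5PB limit, expressed in TB

for tb_per_day in (2, 5):
    old_days = OLD_CAP_TB // tb_per_day
    new_days = NEW_CAP_TB // tb_per_day
    print(f"{tb_per_day}TB/day: old limit reached in {old_days} days, "
          f"new limit in {new_days} days (~{new_days // 365} years)")
```

At 5TB per day, a single account used to fill up in about 100 days (the "every 3 months" scenario); with the new limit, the same workload fits in one account for well over two years.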

Remember that these new limits apply only to Azure Blob Storage. For the other services, the old limits remain the same.

Final thoughts
The current trend looks good and puts Microsoft Azure in a strong position. When this kind of update occurs, it is about more than increasing the threshold of a service. It is about caring about what customers need.
Yes, we had workarounds for these limitations, but Microsoft is making our lives much easier by offering us what we need.

