
Deep dive in Append Blob - A new type of Azure Storage Blob

In this post we will take a look at a new type of blob - the Append Blob.
Until now, we had two types of blobs in Azure Storage:
  • Page Blob
  • Block Blob

Block Blob is useful when we need to work with large files. Each block can have a different size (max 4 MB) and we can work with each block independently. Features similar to transactions are supported on blocks. The maximum size of a block blob is 195 GB.
Page Blobs are optimized for random access. A Page Blob is a collection of 512-byte pages and is useful when we need to support random access to the content (read and write). We can refer to a specific location of a Page Blob, very much like using a cursor. The maximum size of a Page Blob is 1 TB. When you create a Page Blob you need to specify its maximum size (and you will pay for it from the beginning, even if you don't use all the space).

Append Blob

This new type of blob is formed by multiple blocks. Each time you create a block, it is added to the end of the blob. There is no support for modifying or deleting an existing block. Once a block has been added to the end of the Append Blob, it automatically becomes available for read operations.
The size of each block is not fixed; the maximum size is 4 MB. The maximum size of an Append Blob is the same as for a Block Blob - 195 GB. We could even say that an Append Blob is like a Block Blob, but optimized for appending blocks at the end of the blob.
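As a sketch of the mechanics described above (container and blob names are invented for illustration), appending a raw block from a stream with the WindowsAzure.Storage client library might look like this - each AppendBlock call adds exactly one block, up to 4 MB, at the end of the blob:

```csharp
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Assumes a valid storage connection string is available.
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobContainer container = account.CreateCloudBlobClient()
    .GetContainerReference("foocontainer");
container.CreateIfNotExists();

CloudAppendBlob appendBlob = container.GetAppendBlobReference("events.log");
if (!appendBlob.Exists())
{
    // An Append Blob must be created before blocks can be appended to it.
    appendBlob.CreateOrReplace();
}

// Each AppendBlock call adds one block (max 4 MB) at the end of the blob;
// the block is immediately readable once the call returns.
byte[] payload = Encoding.UTF8.GetBytes("first block of data");
using (var stream = new MemoryStream(payload))
{
    appendBlob.AppendBlock(stream);
}
```

Note that a payload larger than 4 MB cannot go into a single append operation; it would have to be split into multiple blocks.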

Sample Code
The default NuGet library allows us to work with Append Blobs in a way very similar to how we work with Block Blobs. The library allows us to append a stream, an array of bytes, text or even the content of a specific file.
The sample below shows how we can append text to our blob.
CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(connectionString);
CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();
CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference("fooContainer");

CloudAppendBlob cloudAppendBlob = cloudBlobContainer.GetAppendBlobReference("fooBlob.txt");
cloudAppendBlob.CreateOrReplace();
cloudAppendBlob.AppendText("Content added");
cloudAppendBlob.AppendText("More content added");
cloudAppendBlob.AppendText("Even more content added");

string appendBlobContent = cloudAppendBlob.DownloadText();

When to use Append Blob?
Append Blob should be used when we need to append content to a blob and we don't care about the order. It is very useful in use cases like logging. For logging we can have multiple threads and processes that need to write content to the same blob. For a case like this, Append Blob is the perfect solution, allowing us to dump the logs in a fast and safe way.
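As an illustrative sketch of the logging scenario (the container and blob names are invented), several threads can append log lines to the same Append Blob; the service serializes the append operations, so the lines may interleave in any order but each block stays intact:

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Assumes a valid storage connection string is available.
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudAppendBlob logBlob = account.CreateCloudBlobClient()
    .GetContainerReference("logs")
    .GetAppendBlobReference("app.log");
logBlob.CreateOrReplace();

// Four workers write to the same blob concurrently. Each AppendText call
// is a single append operation, so writers cannot corrupt each other's
// blocks - only the ordering of lines across workers is undetermined.
Parallel.For(0, 4, workerId =>
{
    for (int i = 0; i < 10; i++)
    {
        logBlob.AppendText(string.Format("worker {0} - entry {1}\n", workerId, i));
    }
});
```

If a single writer needs strict ordering guarantees instead, an access condition on the append position can be used to reject out-of-order appends.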

In conclusion, we can say that Append Blob is a type of blob that:

  • Allows us to append content only at the end of the blob
  • Has a maximum size of 195 GB
  • Allows us to have multiple writers on different blocks
  • Is perfect for logging scenarios
  • Doesn't allow us to modify or delete an existing block
  • Makes a block available for read the moment it is written

