
Azure Queue Storage (Day 20 of 31)

List of all posts from this series: http://vunvulearadu.blogspot.ro/2014/11/azure-blog-post-marathon-is-ready-to.html

Short Description 
Azure Queue Storage is a service, part of Azure Storage, that offers queueing support. It is a simple queue service that can be accessed from anywhere over the HTTP and HTTPS protocols.


Main Features 
Size
The first thing that comes to my mind is the size of the queue. Because this service is built on top of Azure Storage, the maximum combined size of the queues can be hundreds of terabytes (under the same storage account). This can be useful when we need to store a large amount of messages in a queue.
RESTful
Like all Azure services, Queue Storage can be accessed and managed over a REST API. On top of this, there are client libraries for different programming languages like C#, Node.js, and so on.
Batch Support
When reading messages from the queue, we have the ability to consume them in batches of up to 32 messages.
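A minimal sketch of batch consumption, assuming a CloudQueue instance named queue created like in the code sample at the end of this post (Microsoft.WindowsAzure.Storage client library):

// Retrieve up to 32 messages in one call; each one stays invisible
// to other consumers for 5 minutes.
foreach (CloudQueueMessage batchMessage in queue.GetMessages(32, TimeSpan.FromMinutes(5)))
{
    Console.WriteLine(batchMessage.AsString);

    // Remove each message once it has been processed.
    queue.DeleteMessage(batchMessage);
}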
Transactions support (kind of)
When consuming messages from queues we have a 'kind of transaction support'. This means that for each message read from the queue, we need to send a 'delete' command to the queue to remove it. A message that is handed to a client for consumption cannot be received (is not visible) by other clients for a specific time, called the visibility timeout. If the message is not deleted within that time, it becomes visible to other clients again.
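A minimal sketch of this get/delete pattern (same assumed CloudQueue instance named queue as above):

// Receive one message; it stays invisible to other clients for 1 minute.
CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(1));
if (message != null)
{
    // ... process the message here ...
    Console.WriteLine(message.AsString);

    // 'Commit' by deleting the message. If we crash before this call,
    // the message becomes visible again after the visibility timeout.
    queue.DeleteMessage(message);
}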
Queue Length
You have the ability to get an estimate of the number of messages that are in the queue.
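A minimal sketch of reading this estimate (same assumed queue instance):

// Refresh the locally cached queue attributes from the service.
queue.FetchAttributes();

// ApproximateMessageCount is an estimate, not an exact value.
int? messageCount = queue.ApproximateMessageCount;
Console.WriteLine("Approximately {0} messages in the queue.", messageCount);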
TTL
For each message we can set the Time To Live (TTL) property. The maximum value that is accepted is 7 days. It can be very useful when you need to remove old messages automatically.
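A minimal sketch of setting the TTL when adding a message (same assumed queue instance):

// Add a message that is removed automatically after 2 days.
CloudQueueMessage expiringMessage = new CloudQueueMessage("I expire in 2 days");
queue.AddMessage(expiringMessage, TimeSpan.FromDays(2));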
XML or JSON format
The communication with the API can be made in both formats. Clients can specify in each request which format they want to use.
Polling Support
The current libraries and APIs allow us to create a polling mechanism and check if new messages are available.
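A naive polling loop sketch (same assumed queue instance; Thread comes from System.Threading):

// Consume messages while they are available, otherwise wait
// a few seconds before checking the queue again.
while (true)
{
    CloudQueueMessage message = queue.GetMessage();
    if (message != null)
    {
        Console.WriteLine(message.AsString);
        queue.DeleteMessage(message);
    }
    else
    {
        Thread.Sleep(TimeSpan.FromSeconds(5));
    }
}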
Redundant
Queue Storage gives us multiple options when we talk about redundancy. By default we have redundancy at the data center level – this means that at any moment there are 3 copies of the same content (and you pay for only one). On top of this there are other redundancy options that I will describe below (together with the default one that we already talked about):

  • LRS (Locally Redundant Storage) – Content is replicated 3 times in the same data center (facility unit within a region)
  • GRS (Geo-Redundant Storage) – Content is replicated 6 times across 2 regions (3 times in the primary region and 3 times in a secondary region)
  • RA-GRS (Read-Access Geo-Redundant Storage) – Content is replicated in the same way as for GRS, but you also have read-only access to the second region. With GRS, even if the data exists in the second region, you cannot access it directly.

Pay only what you use
At the end of the month you pay only for the space that you used. Because of this, clients don't pay in advance for space that will be used, nor for space that is not used anymore.
Tracing capabilities
Queue Storage has tracing capabilities over queues and messages. Information like access time, client IP and how the request ended can automatically be stored and accessed. In this way we can have a full audit over the storage content.
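A hedged sketch of how this logging could be turned on through Storage Analytics, assuming the queueClient instance from the code sample below and the types from Microsoft.WindowsAzure.Storage.Shared.Protocol:

// Read the current service properties, enable logging for all
// operations and keep the logs for 7 days.
ServiceProperties properties = queueClient.GetServiceProperties();
properties.Logging.LoggingOperations = LoggingOperations.All;
properties.Logging.RetentionDays = 7;
queueClient.SetServiceProperties(properties);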
Unlimited queues and messages
Because the maximum size limit is as high as 200 TB per storage account, we can say that in practice we have unlimited queues and messages. It is a very scalable solution.

Limitations 

  • Maximum size of a message is 64 KB.
  • Maximum number of messages per second in a single queue is 2,000.
  • Maximum Time To Live for each message is 7 days.
  • No ordering guarantee.
  • Only Peek & Lease mode is supported (for the receiver).
  • No batch support for the sender.



Applicable Use Cases 
Below you can find 3 use cases where I would use Azure Queue.
Storing large amounts of messages
If I need a system that can handle and store large amounts of messages and I can afford to lose data (because of the TTL), then Azure Queue can be a very good solution.
State Machine
We could use Azure Queue if we need to create a state machine – for example an Orders Management system where we need to be able to track different orders, change their state and so on.
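A hypothetical sketch of tracking state in the message itself using UpdateMessage (the order id and the state string are invented for illustration; same assumed queue instance):

// Receive the order message and move it to the next state.
CloudQueueMessage orderMessage = queue.GetMessage();
if (orderMessage != null)
{
    orderMessage.SetMessageContent("Order 1234: Shipped"); // hypothetical state
    queue.UpdateMessage(
        orderMessage,
        TimeSpan.FromMinutes(1), // keep it invisible while it is processed
        MessageUpdateFields.Content | MessageUpdateFields.Visibility);
}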
Distribute work between instances
We could use Azure Queue to distribute messages between different machines. We could use this messaging system to distribute the load to all our available resources, with each worker instance consuming messages using the get/delete pattern shown earlier.

Code Sample 
// Namespaces required by this sample (Microsoft.WindowsAzure.Storage NuGet package).
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client.
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue.
CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Create the queue if it doesn't already exist.
queue.CreateIfNotExists();

// Create a message and add it to the queue.
CloudQueueMessage message = new CloudQueueMessage("Hello, World");
queue.AddMessage(message);

// Peek at the next message
CloudQueueMessage peekedMessage = queue.PeekMessage();

// Display message.
Console.WriteLine(peekedMessage.AsString);

Source: http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-queues/

Pros and Cons 
Pros

  • Extremely scalable
  • Support for TTL 
  • Batch support for receiving messages

Cons

  • No real transaction support
  • Only one mechanism for consuming messages from the queue
  • Size of messages is limited to 64 KB


Pricing 
When you start to calculate the cost of Azure Queue Storage you should take into account the following things:

  • Capacity (size) 
  • Number of Transactions 
  • Outbound traffic 
  • Traffic between facilities (data centers)


Conclusion
Azure Queue is a good, scalable queuing and messaging system. It is very simple, but it can be perfect for different scenarios. If you need more features from a queueing system, then you should take a look at Service Bus Queues, Topics or Event Hubs.
