
Azure Queue Storage (Day 20 of 31)

List of all posts from this series: http://vunvulearadu.blogspot.ro/2014/11/azure-blog-post-marathon-is-ready-to.html

Short Description 
Azure Queue Storage is a service, part of Azure Storage, that offers queueing support. It is a simple queue service that can be accessed from anywhere over the HTTP and HTTPS protocols.


Main Features 
Size
The first thing that comes to my mind is the size of the queue. Because this service is built on top of Azure Storage, the maximum size of the queues can be hundreds of terabytes (under the same storage account). This can be useful when we need to store a large amount of messages in a queue.
RESTful
Like all Azure services, Queue Storage can be accessed and managed over a REST API. On top of this there are client libraries for different programming languages like C#, Node.js, and so on.
Batch Support
When reading messages from a queue, we have the ability to consume them in batches of up to 32 messages, as in the sketch below.
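As a rough sketch (assuming a CloudQueue reference named ‘queue’, obtained as in the code sample later in this post), a batch read could look like this:

// Read a batch of up to 32 messages in a single call.
foreach (CloudQueueMessage batchMessage in queue.GetMessages(32))
{
    Console.WriteLine(batchMessage.AsString);

    // Even when reading in a batch, each message must be deleted individually.
    queue.DeleteMessage(batchMessage);
}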
Transactions support (kind of)
When consuming messages from queues we have a ‘kind of’ transaction support. This means that for each message read from the queue, we need to send a ‘delete’ command to the queue to remove it. A message that is handed to a client for consumption cannot be received (is not visible) by other clients for a specific time. If the message is not deleted within that time, it becomes visible to other clients again.
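A minimal sketch of this behavior (the 30-second visibility timeout is an illustrative value, and ProcessMessage is a hypothetical helper, not part of the API):

// Receive a message and keep it invisible to other clients for 30 seconds.
CloudQueueMessage retrievedMessage = queue.GetMessage(TimeSpan.FromSeconds(30));
if (retrievedMessage != null)
{
    // ProcessMessage stands in for whatever work the consumer does.
    ProcessMessage(retrievedMessage.AsString);

    // Deleting the message ‘commits’ the work. If the client crashes before
    // this line, the message becomes visible again after 30 seconds.
    queue.DeleteMessage(retrievedMessage);
}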
Queue Length
You have the ability to get an estimate of the number of messages that are in the queue.
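For example (a minimal sketch, using the same ‘queue’ reference as above):

// FetchAttributes refreshes the locally cached queue properties.
queue.FetchAttributes();

// ApproximateMessageCount is an estimate, not an exact value.
int? messageCount = queue.ApproximateMessageCount;
Console.WriteLine("Approximately {0} messages in the queue.", messageCount);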
TTL
For each message we can set the Time To Live property. The maximum value that is accepted is 7 days. This can be very useful when you need old messages to be removed automatically.
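A short sketch (the 2-day value is only an example):

// Add a message that expires automatically after 2 days.
CloudQueueMessage expiringMessage = new CloudQueueMessage("I expire in 2 days");
queue.AddMessage(expiringMessage, timeToLive: TimeSpan.FromDays(2));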
XML or JSON format
Communication with the API can be made in both formats. Clients can specify in each request what format they want to use.
Polling Support
The current libraries and APIs allow us to create a polling mechanism and check if new messages are available.
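A very simple polling loop could look like this (the 5-second delay is an arbitrary choice; a real worker would typically back off when the queue is empty):

while (true)
{
    CloudQueueMessage polledMessage = queue.GetMessage();
    if (polledMessage != null)
    {
        Console.WriteLine("Received: " + polledMessage.AsString);
        queue.DeleteMessage(polledMessage);
    }
    else
    {
        // No message available; wait before polling again.
        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(5));
    }
}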
Redundant
Queue Storage gives us multiple options when we talk about redundancy. By default we have redundancy at data center level – this means that at any moment there are 3 copies of the same content (and you pay for only one). On top of this there are other redundancy options that I will describe below (together with the one that we already talked about):

  • LRS (Locally Redundant Storage) – Content is replicated 3 times in the same data center (facility unit from a region) 
  • GRS (Geo Redundant Storage) – Content is replicated 6 times across 2 regions (3 times in the primary region and 3 times in a secondary region)
  • RA-GRS (Read Access Geo Redundant Storage) – Content is replicated in the same way as for GRS, but you have read-only access to the second region. For GRS, even if the data exists in the second region, you cannot access it directly. 

Pay only what you use 
At the end of the month you pay only for the space that you used. Because of this, clients don't pay in advance for space that will be used, nor for space that is not used anymore.
Tracing capabilities 
Queue Storage has tracing capabilities over queues and messages. Information like access time, client IP and how a request ended can automatically be stored and accessed. In this way we can have a full audit over the storage content.
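This tracing is configured through Storage Analytics. A minimal sketch, assuming a CloudQueueClient named ‘queueClient’ (as in the code sample below), that logs all operations with a 7-day retention (both values are illustrative):

// Requires: using Microsoft.WindowsAzure.Storage.Shared.Protocol;
ServiceProperties serviceProperties = queueClient.GetServiceProperties();
serviceProperties.Logging.LoggingOperations = LoggingOperations.All;
serviceProperties.Logging.RetentionDays = 7;
queueClient.SetServiceProperties(serviceProperties);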
Unlimited queues and messages
Because the maximum size is limited only by the storage account cap of 200 TB, we can say that in practice we can have an unlimited number of queues and messages. It is a very scalable solution.

Limitations 

  • Maximum size of a message is 64 KB.
  • Maximum number of messages per second in a single queue is 2,000.
  • Maximum Time To Live for each message is 7 days.
  • No ordering guarantee.
  • Only Peek & Lease mode is supported (for the receiver).
  • No batch support for the sender.



Applicable Use Cases 
Below you can find 3 use cases where I would use Azure Queue.
Storing large amounts of messages
If I need a system that can handle and store large amounts of messages and I can afford to lose data (because of the TTL), then Azure Queue can be a very good solution.
State Machine
We could use Azure Queue if we need to create a state machine – for example an Orders Management system, where we need to be able to track different orders, change their state and so on.
Distribute work between instances
We could use Azure Queue to distribute messages between different machines. We could use this messaging system to distribute the load across all our available resources.

Code Sample 
// Required namespaces for this sample:
// using Microsoft.WindowsAzure.Storage;        // CloudStorageAccount
// using Microsoft.WindowsAzure.Storage.Queue;  // CloudQueueClient, CloudQueue
// using Microsoft.Azure;                       // CloudConfigurationManager

// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client.
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue.
CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Create the queue if it doesn't already exist.
queue.CreateIfNotExists();

// Create a message and add it to the queue.
CloudQueueMessage message = new CloudQueueMessage("Hello, World");
queue.AddMessage(message);

// Peek at the next message without removing it from the queue.
CloudQueueMessage peekedMessage = queue.PeekMessage();

// Display message.
Console.WriteLine(peekedMessage.AsString);

Source: http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-queues/

Pros and Cons 
Pros

  • Extremely scalable
  • Support for TTL 
  • Batch support on the receive side

Cons

  • No real transaction support
  • Only one mechanism for consuming messages from the queue
  • Size of messages is limited to 64KB


Pricing 
When you start to calculate the cost of Azure Queue Storage you should take into account the following things:

  • Capacity (size) 
  • Number of Transactions 
  • Outbound traffic 
  • Traffic between facilities (data centers)


Conclusion
Azure Queue is a scalable and solid queuing messaging system. It is very simple, but it can be perfect for different scenarios. If you need more features from a queueing system, then you should take a look at Service Bus Queues, Topics or Event Hubs.
