
Service Bus Queues from Windows Azure - Introduction

In the following post we will talk about Service Bus Queues. Basically, this type of service bus helps two separate modules communicate. We can have one or more producers (modules that create messages) that send messages to the service bus asynchronously. A receiver can then handle each message that is sent to the service bus.
Using Service Bus Queues, we are sure that each message sent to the service bus will be available to the consumer. We don’t need to wait until the consumer processes the message, or check whether the message was lost on its way to the consumer.
Another nice feature is the isolation that Service Bus Queues create between the consumer and the producers. The consumer doesn’t need to know who sent the message to the service bus; that information is not relevant to it (of course, we can mark messages with producer information if we want). The same applies to the message senders: they don’t know who will consume the message, they only know about the service bus. This type of communication is called brokered messaging.
As the name says, Service Bus Queues use a queue-like collection to store the messages. Because of this, the consumer receives the messages in the order in which they were sent to the queue (FIFO – first in, first out).
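The FIFO behaviour can be pictured with a toy queue model in Python. This is a sketch of the concept only, not the real Service Bus SDK:

```python
from collections import deque

class QueueSketch:
    """Toy brokered queue: producers enqueue, a consumer dequeues in FIFO order."""

    def __init__(self):
        self._messages = deque()

    def send(self, body):
        self._messages.append(body)  # producer side

    def receive(self):
        # consumer side: oldest message first (first in, first out)
        return self._messages.popleft() if self._messages else None

q = QueueSketch()
q.send("order-1")
q.send("order-2")
print(q.receive())  # order-1 — delivered in the order it was sent
```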
When you send a message to the queue or get one from it, the operation is transactional and atomic. What does this mean? When we send a message, or get one or more messages from the service bus, the infrastructure guarantees that no message will be lost during this process.
The maximum total size of the messages stored in a Service Bus Queue is 5 GB. From what I saw, when you try to add messages to the service bus after this limit has been exceeded, an error is thrown that can be handled in code. At this moment, each message added to the queue not only needs to be serializable, but its size also needs to be under 256 KB. In the near future the maximum message size may grow to 1 MB, but there is nothing official yet.
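A guard like the one below sketches that per-message check. The 256 KB figure is the cap quoted above; the exception name is my own, not something from the SDK:

```python
MAX_MESSAGE_BYTES = 256 * 1024   # per-message limit mentioned in the post

class MessageTooLargeError(Exception):
    """Raised when a payload would exceed the per-message size cap."""

def check_message_size(payload: bytes) -> bytes:
    if len(payload) > MAX_MESSAGE_BYTES:
        raise MessageTooLargeError(
            f"payload is {len(payload)} bytes, limit is {MAX_MESSAGE_BYTES}")
    return payload

check_message_size(b"small payload")          # fine, returns the payload
# check_message_size(b"x" * (300 * 1024))     # would raise MessageTooLargeError
```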
If you use this kind of mechanism, you also get automatic duplicate detection: duplicate messages can be removed automatically, without us checking for them manually. Each message that is added to the service bus has a lifetime. We can set how long a message can exist in the service bus before it expires. This setting is called TTL (Time To Live), and the maximum accepted value is unlimited. When a message expires, it is automatically removed from the service bus. The default value is 60 seconds, but it can be changed very easily.
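Expiry can be sketched as a pure function of the enqueue time and the TTL; the timestamps are passed in explicitly to keep the example deterministic:

```python
def is_expired(enqueued_at: float, ttl_seconds: float, now: float) -> bool:
    """A message expires once its TTL has fully elapsed since it was enqueued."""
    return now - enqueued_at >= ttl_seconds

# With the 60-second default mentioned above:
print(is_expired(enqueued_at=0, ttl_seconds=60, now=59))   # False — still alive
print(is_expired(enqueued_at=0, ttl_seconds=60, now=60))   # True — removed from the queue
```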
A producer or a consumer can add/get more than one message at a time, because Service Bus Queues support message batches. For example, a producer can add 10 messages with only one request.
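Batching simply means grouping messages so that one request carries several of them. As a sketch (the batch size here is chosen arbitrarily):

```python
def into_batches(messages, batch_size=10):
    """Split a list of messages into chunks that each fit in one send request."""
    return [messages[i:i + batch_size] for i in range(0, len(messages), batch_size)]

batches = into_batches([f"msg-{n}" for n in range(25)], batch_size=10)
print(len(batches))      # 3 — three requests instead of 25
print(len(batches[-1]))  # 5 — the last batch holds the remaining messages
```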
An interesting feature is the ability to change the delivery mechanism from the safe one to a receive-and-delete pattern. This means that the message is automatically deleted from the Service Bus Queue without waiting for the consumer to confirm that it received and processed the message. This can reduce some costs.
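The difference between the safe (lock-and-confirm) delivery and receive-and-delete can be sketched like this. The method names are mine, not the real API:

```python
from collections import deque

class DeliveryModeQueue:
    def __init__(self):
        self._messages = deque()
        self._locked = {}        # token -> message hidden while being processed
        self._next_token = 0

    def send(self, body):
        self._messages.append(body)

    def receive_and_delete(self):
        # cheaper, but the message is gone even if processing later fails
        return self._messages.popleft() if self._messages else None

    def peek_lock(self):
        # safe mode: hide the message until the consumer confirms it
        if not self._messages:
            return None
        self._next_token += 1
        self._locked[self._next_token] = self._messages.popleft()
        return self._next_token, self._locked[self._next_token]

    def complete(self, token):
        del self._locked[token]  # consumer confirmed: now really delete

    def abandon(self, token):
        # processing failed: put the message back for redelivery
        self._messages.appendleft(self._locked.pop(token))

q = DeliveryModeQueue()
q.send("invoice")
token, body = q.peek_lock()
q.abandon(token)               # simulate a crash: the message goes back
print(q.receive_and_delete())  # invoice — still available after the failed attempt
```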
Each message can have a delivery time property, so a message can be delivered to the consumer later on. This gives us the ability to schedule messages – to control when the consumer will be able to receive them. On the consumer side, a long-polling mechanism is implemented, so we can poll for a message for a long time without any problems.
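Scheduled delivery can be pictured as the broker keeping messages invisible until their delivery time has passed. A minimal sketch, with the clock passed in explicitly:

```python
import heapq

class ScheduledQueue:
    """Messages become visible only once their scheduled delivery time is reached."""

    def __init__(self):
        self._heap = []  # (deliver_at, body), soonest first

    def send(self, body, deliver_at=0):
        heapq.heappush(self._heap, (deliver_at, body))

    def receive(self, now):
        if self._heap and self._heap[0][0] <= now:
            return heapq.heappop(self._heap)[1]
        return None  # nothing visible yet

q = ScheduledQueue()
q.send("later", deliver_at=100)
q.send("now", deliver_at=0)
print(q.receive(now=0))    # now
print(q.receive(now=50))   # None — "later" is not visible yet
print(q.receive(now=100))  # later
```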
At the beginning of the post I said that we can have only one consumer. In fact, we can have as many consumers as we want, but only one of them will receive a given message. The same message will not be broadcast to all of them (for that functionality we can use Service Bus Topics). We can use a small hack to specify which consumer we want a message to be sent to: by setting the SessionId property on a message, only the specific consumers that listen to the given session will be able to receive that message. But be aware, broadcasting messages to more than one consumer is not possible using Service Bus Queues.
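The SessionId trick can be pictured as the broker keeping one sub-queue per session, with each consumer draining only the session it listens to (again a concept sketch, not the SDK):

```python
from collections import defaultdict, deque

class SessionedQueue:
    def __init__(self):
        self._sessions = defaultdict(deque)  # SessionId -> pending messages

    def send(self, body, session_id):
        self._sessions[session_id].append(body)

    def receive(self, session_id):
        # only a consumer listening on this session sees these messages
        pending = self._sessions[session_id]
        return pending.popleft() if pending else None

bus = SessionedQueue()
bus.send("for worker A", session_id="A")
bus.send("for worker B", session_id="B")
print(bus.receive("B"))  # for worker B
print(bus.receive("A"))  # for worker A
```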
This kind of service bus can be integrated with WCF very easily. The maximum number of queues in a Service Bus unit (service namespace) is 10,000, but if you need more, this limit can be increased. Theoretically, we can have an unlimited number of clients connected to a service bus; the only limit that exists is for the TCP communication protocol, where it is set to 100.
At this moment we can communicate with Service Bus Queues in different ways, and many programming languages are supported. The base communication is REST over HTTPS, or TCP with TLS. Because of this, we can find libraries that help us use Service Bus Queues written in .NET, PHP, Node.js and Java.
Let’s see what limitations we have on Service Bus Queues. First of all, the latency is around 100 ms, and the maximum throughput is around 2,000 messages per second. A queue name can have a maximum of 256 characters, and of course we cannot use special characters like @ or *; only letters, numbers and ‘.’, ‘_’, ‘-’ are allowed.
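Those naming rules can be checked up front with a small validator. The regex below is my own encoding of the rules quoted above, not an official one:

```python
import re

# Letters, digits and '.', '_', '-'; at most 256 characters, starting with a letter or digit.
QUEUE_NAME_RE = re.compile(r"[A-Za-z0-9][A-Za-z0-9._-]{0,255}")

def is_valid_queue_name(name: str) -> bool:
    return QUEUE_NAME_RE.fullmatch(name) is not None

print(is_valid_queue_name("orders.incoming-v2"))  # True
print(is_valid_queue_name("orders@incoming"))     # False — '@' is not allowed
```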
At this moment we don’t have any type of Shared Access Signature or public Service Bus namespace. Authentication is made using:
  • Identity provider federation
  • Role-based access control
  • ACS claims
There are three types of roles defined: a receiver, which can consume messages; a sender, which can only add messages, without consuming any; and an admin, which can add, get and iterate through the queue (and delete, of course).
In this post I presented some basic concepts of Service Bus Queues. In the next post I will show you how we can work with them.
