
Service Bus Queues from Windows Azure - How to retry consuming messages and the dead letter queue

In the last blog post, where I talked about poison messages that can exist in Service Bus Queues, I showed how simple it is to detect that a message was not processed successfully three times and move it to the dead letter sub-queue of Service Bus Queues. If you want to find more information about the dead letter queue, please follow this link.
As a recap, each queue has a sub-queue where we can move messages that could not be processed – the dead letter queue. Service Bus automatically moves messages to this queue when, for example, the TTL expires or the maximum delivery count is exceeded. But Service Bus is not the only one that can mark messages as dead letter. We can also do this from code, and there are situations where this is very useful.
Remark: A message that was moved to the dead letter queue cannot be added back to the original queue directly. We need to recreate it.
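Since a dead-lettered message cannot simply be moved back, recreating it means cloning the message and sending the copy to the main queue. A minimal sketch, assuming the qc and qcdl clients defined later in this post:
BrokeredMessage deadMessage = qcdl.Receive();
if (deadMessage != null)
{
    // Recreate the message: a new BrokeredMessage with the same body and properties.
    BrokeredMessage copy = deadMessage.Clone();
    qc.Send(copy);           // add the copy back to the main queue
    deadMessage.Complete();  // remove the original from the dead letter sub-queue
}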
Dead-lettering from code is very useful when we try to process messages from a queue but, for an unknown reason, we cannot process them successfully. If we use the Peek-Lock receive mode, a message is not removed from the queue until we call the Complete action, so these messages will remain in the queue until the TTL expires. In the meantime, consumers will try to process them over and over again. This is a waste of resources. There are cases when a message cannot be processed on the first attempt; that is fine, but if after 3 attempts we still cannot process it, we should mark it as a poison message.
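For reference, the receive mode can be specified when the client is created; a minimal sketch (Peek-Lock is already the default mode):
QueueClient qc =
    QueueClient.CreateFromConnectionString(
        myFooConnectionString, "FooQueue", ReceiveMode.PeekLock);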
In the following example I define a consumer that tries to consume messages from a Service Bus Queue. Messages that fail processing for the 3rd time are marked as dead letters. Another system can process them later and take more actions with them, or only log them.
QueueClient qc =
    QueueClient.CreateFromConnectionString(
        myFooConnectionString, "FooQueue");
while (true)
{
    BrokeredMessage message = qc.Receive();
    if (message == null)
    {
        // No message available; wait before polling again.
        Thread.Sleep(1000);
        continue;
    }

    try
    {
        // Process our message.
        message.Complete();
    }
    catch (Exception ex)
    {
        if (message.DeliveryCount >= 3)
        {
            // Third failed attempt - move the message to the dead letter sub-queue.
            message.DeadLetter();
        }
        else
        {
            // Release the lock so the message can be retried.
            message.Abandon();
        }
    }
}
In this code sample the key is DeliveryCount. This is a property of BrokeredMessage that is automatically incremented every time the message is delivered from the Service Bus Queue. When this value reaches 3 and processing fails again, we mark the message as dead letter and we will no longer be able to access it from the main queue.
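As a side note, Service Bus can also dead-letter a message automatically when the queue's MaxDeliveryCount is exceeded. A minimal sketch of configuring this at queue creation time, assuming a NamespaceManager built from the same connection string:
NamespaceManager ns =
    NamespaceManager.CreateFromConnectionString(myFooConnectionString);
if (!ns.QueueExists("FooQueue"))
{
    QueueDescription qd = new QueueDescription("FooQueue")
    {
        // Service Bus moves the message to the dead letter
        // sub-queue after 3 failed deliveries.
        MaxDeliveryCount = 3
    };
    ns.CreateQueue(qd);
}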
In the following code sample I retrieve all the messages that were marked as dead letter.
QueueClient qcdl =
    QueueClient.CreateFromConnectionString(
    myFooConnectionString, QueueClient.FormatDeadLetterPath("FooQueue"));
while(true)
{
    BrokeredMessage message = qcdl.Receive();
    if(message == null)
    {
        Thread.Sleep(1000);
        continue;
    }

    // Process the message, then remove it from the dead letter sub-queue.
    message.Complete();
}
Noticed how I get the name of the sub-queue where the dead letter messages are stored? QueueClient.FormatDeadLetterPath("FooQueue") – don't forget about the helper functions that the framework offers us.
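For illustration, the helper simply builds the path of the $DeadLetterQueue sub-queue:
string path = QueueClient.FormatDeadLetterPath("FooQueue");
// path is "FooQueue/$DeadLetterQueue"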
In this post we saw how we can mark messages as dead letter when we try 3 times to process them without success.
