
Azure Service Bus - How to extend the lock of a message | RenewLock

In this post we will discuss Azure Service Bus Topics and Queues, with a special focus on the Peek and Lock feature.

Introduction
Azure Service Bus is a messaging system that allows us to send messages between different systems in a reliable and easy way. A lot of concepts from ESB (Enterprise Service Bus) are implemented by Service Bus, allowing us to do magic stuff with messages.
There are two ways to consume messages from Service Bus (see the sketch below):
Peek and Lock - locks a message for a specific time interval and notifies Service Bus when we want to mark the message as processed (removed from Service Bus)
Receive and Delete - once a message is received from Service Bus, it is also deleted automatically from the messaging system
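As a rough sketch (assuming the classic WindowsAzure.ServiceBus SDK that exposes QueueClient and BrokeredMessage, and a placeholder connection string), the receive mode is chosen when the client is created:

string connectionString = "Endpoint=sb://...";   // placeholder connection string

// Peek and Lock - the message stays in Service Bus, locked, until we mark it as processed
QueueClient peekLockClient = QueueClient.CreateFromConnectionString(
    connectionString, "fooqueue", ReceiveMode.PeekLock);

// Receive and Delete - the message is removed from Service Bus as soon as it is received
QueueClient receiveAndDeleteClient = QueueClient.CreateFromConnectionString(
    connectionString, "fooqueue", ReceiveMode.ReceiveAndDelete);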

Peek and Lock
When using Peek and Lock, by default we lock the message for 60 seconds. This means that in this time interval the message is not available/visible to other consumers. Once we process the message, we can mark it as processed.
If we don't mark the message as processed, or something happens with it, the message becomes available to other consumers again.
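A minimal sketch of this flow, using the same QueueClient and BrokeredMessage types as the samples below (error handling reduced to the minimum):

QueueClient queueClient = QueueClient.Create("fooqueue");

BrokeredMessage brokeredMessage = queueClient.Receive();
try
{
    // process the message within the lock interval
    brokeredMessage.Complete();   // mark as processed - removed from Service Bus
}
catch (Exception)
{
    brokeredMessage.Abandon();    // release the lock - the message becomes visible again
}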

Problem
The default value for Peek and Lock is 60 seconds. We can change this value based on our needs. The highest value that is accepted is 5 minutes (300 seconds). This means that we should be able to execute our logic in this time interval and mark the message as consumed.
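The lock duration is a property of the queue (or subscription) itself. A hedged sketch of setting it when the queue is created, assuming the NamespaceManager API and the same placeholder connection string as above:

NamespaceManager namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

QueueDescription queueDescription = new QueueDescription("fooqueue")
{
    LockDuration = TimeSpan.FromMinutes(5)   // the maximum accepted value
};

namespaceManager.CreateQueue(queueDescription);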
But what happens when the execution takes more than 5 minutes?

What can we do to keep the functionality offered by Peek and Lock, but at the same time have more time for processing?

... kind of ... solution
The most common solution in this situation is to split the logic and extract something similar to a state machine. Yes, we can keep the state in storage, we can map it to other messages with different states - there are a lot of possibilities.

RenewLock
This simple but powerful command allows us to reset the timer and keep the lock on the message. We can call this method as many times as we need - once, twice... or even 100 times.
QueueClient queueClient = QueueClient.Create("fooqueue");

BrokeredMessage brokeredMessage = queueClient.Receive();
// ...
brokeredMessage.RenewLock();
// ...
brokeredMessage.RenewLock();
// ...
brokeredMessage.RenewLock();

Behind the scenes, "RenewLock" extends the lock and updates the "LockedUntilUtc" property of the message.
You can use the "LockedUntilUtc" property to check until when the lock is held.
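For example, a small sketch that renews the lock only when it is close to expiring (the 10-second margin is an arbitrary value, chosen only for illustration):

BrokeredMessage brokeredMessage = queueClient.Receive();
// ... long running work ...
if (brokeredMessage.LockedUntilUtc - DateTime.UtcNow < TimeSpan.FromSeconds(10))
{
    // the lock is about to expire, so extend it
    brokeredMessage.RenewLock();
}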


Scenarios
Yes, it is great that we have it. There are many cases when you don't know how long it takes to process a message. For example, when you need to persist the message in a database and call an external service, in general it could take 60 seconds, but in some situations the call to the external service could take 120 seconds. In this case you might want to keep the lock and not trigger the rollback process. For these cases this feature is perfect.
Another situation is when we don't know the complexity of the task that is triggered by a message - for example converting the encoding of a video. It can take 10 seconds or even 10 hours. In this case, this feature is great.
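One way to cover such a long-running task is to renew the lock periodically from a background task while the main work runs. The sketch below assumes a 60-second lock duration and renews every 40 seconds - both values are illustrative, not prescriptive:

BrokeredMessage brokeredMessage = queueClient.Receive();
CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();

Task renewTask = Task.Run(async () =>
{
    while (!cancellationTokenSource.IsCancellationRequested)
    {
        await Task.Delay(TimeSpan.FromSeconds(40));
        if (!cancellationTokenSource.IsCancellationRequested)
        {
            brokeredMessage.RenewLock();   // keep the lock while the long processing runs
        }
    }
});

// long running processing (e.g. converting the encoding of a video)
// ...

cancellationTokenSource.Cancel();      // stop renewing the lock
brokeredMessage.Complete();            // mark the message as processed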

Concerns
Personally, I'm not a big fan of this feature. Why?
First of all, I'm afraid of how people might use it, because it can be used as a hack to keep the lock on messages in cases when you would normally release the message. For example, when you have an error accessing an external resource and you have a retry mechanism that keeps waiting and waiting. You will keep the lock on the message for a long time, even if you are in a dead-end scenario and normally you would put the system in a 'freeze' state.

Precaution
We should always double-check that the lock renewal logic is done where we want it and WHEN we want it. We don't want to create an infinite cycle.
QueueClient queueClient = QueueClient.Create("fooqueue");
// ..
BrokeredMessage brokeredMessage = queueClient.Receive();
bool messageIsConsumed = false;
while (!messageIsConsumed)
{
    try
    {
        // some logic with brokeredMessage
        // something happens and an error is thrown
        throw new Exception();

        messageIsConsumed = true;
    }
    catch (Exception ex)
    {
        // ...
        brokeredMessage.RenewLock();
    }
}
In the above example, the renewal will be done over and over again, and the message will never be completed or released back to the queue.
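A safer variation is to put an upper bound on the number of renewals and release the message once that bound is reached. A sketch with a hypothetical limit of 3 attempts:

QueueClient queueClient = QueueClient.Create("fooqueue");

BrokeredMessage brokeredMessage = queueClient.Receive();
int attempts = 0;
const int maxAttempts = 3;   // illustrative limit

while (attempts < maxAttempts)
{
    try
    {
        // some logic with brokeredMessage
        brokeredMessage.Complete();
        break;
    }
    catch (Exception)
    {
        attempts++;
        if (attempts < maxAttempts)
        {
            brokeredMessage.RenewLock();   // keep the lock and retry
        }
        else
        {
            brokeredMessage.Abandon();     // give up and release the message
        }
    }
}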

Conclusion
This is a great feature that can help us in complex situations when we need a little more time to finish the work. Be aware of when and how you use it.
