
MQTT protocol inside Azure - RabbitMQ on top of AKS


In this post, we will talk about the options that are currently available when you need to support the MQTT protocol inside Azure.

Business requirement
Design a solution that can handle 10M messages per day, is deployed in a single Azure region, and supports the MQTT protocol.

Out-of-the-box options
Inside Microsoft Azure, the only option we have to use MQTT as a service is Azure IoT Hub. This service is designed to scale and manage a high number of devices with high throughput. There are situations, such as collecting telemetry data, when Azure IoT Hub is too complex and you would like a simpler solution: you just need an endpoint where you can push metrics. Inside AWS, we can use Amazon MQ, a managed message broker service built on top of Apache ActiveMQ.
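To make the requirement concrete, this is roughly what "just pushing a metric to an MQTT endpoint" looks like from a device. This is only a sketch using the paho-mqtt Python client; the broker hostname, credentials and topic are placeholders.

```python
import json
from paho.mqtt import publish

# Minimal telemetry publish; hostname, credentials and topic are hypothetical.
publish.single(
    topic="telemetry/sensor-001/temperature",
    payload=json.dumps({"value": 21.5, "unit": "C"}),
    qos=1,
    hostname="mqtt.example.com",  # the MQTT endpoint exposed by the broker
    port=1883,
    auth={"username": "device", "password": "secret"},
)
```

Any broker that speaks MQTT can receive this message; the rest of the post is about where such a broker can live inside Azure.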

Similar options are available inside Azure, but none of them supports MQTT. For example, Azure Event Hub would be a good candidate, but unfortunately it supports only the AMQP protocol, which is not an acceptable option for us. Other, more message-oriented options such as Azure Queues or Azure Service Bus are available, but none of them supports MQTT, and they are not designed to handle such loads while keeping costs low.

Message-based solutions
On the market, there are multiple message-based communication solutions that support MQTT. The most used are RabbitMQ, Apache Kafka and ActiveMQ. You will be able to find a lot of comparisons between these services, but personally, for this case, I prefer RabbitMQ, which offers:
  • (+) Excellent support for setup and configuration
  • (+) Fast, with low consumption of resources
  • (+) A cluster can be defined with strong support for partitioning
  • (+) Intuitive management interface and API (a small example of the HTTP API follows this list)
  • (+) Strong support from the community and good documentation
  • (–) Needs strong ops knowledge
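As an illustration of how approachable the API is, the sketch below polls queue depths through the RabbitMQ management plugin HTTP API (listening on port 15672 by default). The hostname and credentials are placeholders for whatever your cluster exposes.

```python
import requests

# Hypothetical management endpoint and monitoring credentials.
MGMT_URL = "http://rabbitmq.example.com:15672"
AUTH = ("monitoring", "secret")

# GET /api/queues returns one JSON object per queue, including the current depth.
resp = requests.get(f"{MGMT_URL}/api/queues", auth=AUTH, timeout=10)
resp.raise_for_status()

for queue in resp.json():
    print(f"{queue['vhost']}/{queue['name']}: {queue.get('messages', 0)} messages")
```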

Well-known users of RabbitMQ include MIT, Reddit, 9GAG, Zillow and Rainist.

Custom solutions
The solution is based on RabbitMQ hosted inside Azure Kubernetes Service (AKS). The RabbitMQ cluster runs on dedicated AKS nodes with a specific configuration (nodes with plenty of memory and fast SSDs). Bitnami offers Kubernetes images that are ready to be used.
In front of RabbitMQ, you will need an Azure Load Balancer (ALB) that handles name resolution and redirects traffic to the RabbitMQ cluster. Two public IPs are reserved inside the ALB for name resolution. For this scenario, there is no need to configure an API Gateway to handle the traffic; the ALB is a perfect match for it.
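As a sketch of how a backend service would drain the device messages out of this cluster, the snippet below assumes the RabbitMQ MQTT plugin is enabled (it routes publishes through the amq.topic exchange by default) and that the AMQP port is reachable through the load balancer. Hostnames, credentials and queue names are placeholders.

```python
import pika

# Hypothetical connection details: the AMQP port exposed through the Azure Load Balancer.
params = pika.ConnectionParameters(
    host="rabbitmq.example.com",
    port=5672,
    credentials=pika.PlainCredentials("consumer", "secret"),
)

connection = pika.BlockingConnection(params)
channel = connection.channel()

# The MQTT plugin translates topic levels ("telemetry/+/temperature") into AMQP
# routing keys ("telemetry.*.temperature") on the amq.topic exchange, so a plain
# AMQP consumer can read what the devices publish over MQTT.
channel.queue_declare(queue="telemetry", durable=False,
                      arguments={"x-max-length": 100_000})
channel.queue_bind(queue="telemetry", exchange="amq.topic", routing_key="telemetry.#")

def handle(ch, method, properties, body):
    print(f"{method.routing_key}: {body!r}")

channel.basic_consume(queue="telemetry", on_message_callback=handle, auto_ack=True)
channel.start_consuming()
```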

Tips to optimize a RabbitMQ cluster inside AKS
  • Try to use a single node if possible (no HA mirroring between nodes)
  • Allocate nodes with plenty of memory
  • Disable publisher confirms if you don't need them
  • Disable message acknowledgements if you don't need them
  • Enable RabbitMQ HiPE (precompiled Erlang code)
  • Disable plugins and features that you are not using
  • Ensure that messages do not sit in the queues for a long period of time
  • Don't forget to set a maximum queue size
  • It is mandatory to use transient messages (not written to disk)
  • Don't use lazy queues (to avoid writing to disk)
  • Use multiple queues
  • Use the consistent hash exchange plugin where the throughput is very high (see the sketch after this list)
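A rough sketch of how several of these tips translate into client code: transient messages, no publisher confirms or acknowledgements, a queue length cap, and a consistent-hash exchange spreading the load over multiple queues. It assumes the rabbitmq_consistent_hash_exchange plugin is enabled on the cluster; the endpoint, credentials and names are placeholders.

```python
import json
import pika

# Hypothetical endpoint and credentials.
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="rabbitmq.example.com",
                              credentials=pika.PlainCredentials("publisher", "secret"))
)
channel = connection.channel()

# Spread the load over several queues behind a consistent-hash exchange.
channel.exchange_declare(exchange="telemetry-hash", exchange_type="x-consistent-hash")
for i in range(4):
    queue = f"telemetry-{i}"
    # Non-durable, non-lazy queue with a maximum length.
    channel.queue_declare(queue=queue, durable=False,
                          arguments={"x-max-length": 100_000})
    # For a consistent-hash exchange, the binding key is the queue's weight in the ring.
    channel.queue_bind(queue=queue, exchange="telemetry-hash", routing_key="1")

# Transient message (delivery_mode=1, never written to disk), published without
# waiting for a publisher confirm; the routing key (device id) is used as the hash key.
channel.basic_publish(
    exchange="telemetry-hash",
    routing_key="sensor-001",
    body=json.dumps({"temperature": 21.5}),
    properties=pika.BasicProperties(delivery_mode=1),
)
connection.close()
```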


Conclusion
Even if inside Azure we don't have any PaaS/SaaS option besides Azure IoT Hub for the MQTT protocol, AKS can be a good place to run our RabbitMQ cluster using the Bitnami images.
