
Azure Functions - Things that we need to consider

In the last posts we discussed how we can write Azure Functions that process images from OneDrive, the CI support that comes with them, and the base functionality.

In this post we will take a look at the things we should avoid doing in Azure Functions and at the differences between Azure Functions and Azure Web Jobs.

Things that we need to consider

Avoid HTTP communication between Azure Functions
In complex scenarios, we can easily end up with Azure Functions that need to communicate with each other. In scenarios like this, the first thing we might do is use web hooks or HTTP requests between them.
This is not the recommended approach. Once you reach a high volume of data, you might run into problems.
In serverless systems the communication channel between components should be a messaging system built on top of some kind of queue. Not only is communication over queues cheaper, but the reliability and scaling mechanisms are faster and better.

There are multiple messaging services offered by Azure and supported by Azure Functions:

  • Azure Storage Queue (cheap)
  • Azure Service Bus (messages bigger than 64 KB)
  • Azure Event Hub (high volume)
Based on your needs (message size, cost, volume), you can decide which messaging system to use.
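
Below is a minimal sketch of what queue-based hand-over between two functions could look like, using the azure-storage-queue SDK for Python. The connection string setting and the queue name are hypothetical; adjust them to your own setup.

```python
# A minimal sketch, assuming a Storage account connection string in the
# QUEUE_CONNECTION_STRING app setting and a queue named "resize-requests"
# (both names are hypothetical).
import json
import os

from azure.storage.queue import QueueClient, TextBase64EncodePolicy


def hand_over_work(image_url: str) -> None:
    """Instead of calling the next function over HTTP, enqueue a message
    that a queue-triggered function will pick up later."""
    queue = QueueClient.from_connection_string(
        os.environ["QUEUE_CONNECTION_STRING"],
        queue_name="resize-requests",
        # The Functions queue trigger expects Base64-encoded messages by default,
        # so we encode them the same way here.
        message_encode_policy=TextBase64EncodePolicy(),
    )
    queue.send_message(json.dumps({"imageUrl": image_url}))
```

With this approach the producer does not care whether the consumer is up, slow, or scaling out; the queue absorbs the load.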

Cover edge cases
Don't expect your code to be perfect. Bad things can happen at any time. Inside the function, we should catch exceptions and react if possible.
The most common problem when you write applications on top of Lambda or Azure Functions is to hit an error for a specific trigger and enter an infinite loop.
In a case like this, the same message will be processed again and again. A dead-letter queue can be our ally here.
Another situation where we can run into this problem is when we process batches of data (messages, packages or anything else). One way or another, we need to cover the case when an error occurs in the middle of processing and make sure that the process will not start again from scratch, consuming the same content, where of course the same error will occur.
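
As a rough sketch of this idea, here is a queue-triggered function written with the Azure Functions Python v2 programming model (queue and setting names are assumptions). Recoverable errors are logged and swallowed so the message is not retried forever; unexpected exceptions are left to propagate, and after the configured number of dequeue attempts (maxDequeueCount in host.json) the runtime moves the message to the poison queue.

```python
import logging

import azure.functions as func

app = func.FunctionApp()


@app.queue_trigger(arg_name="msg", queue_name="resize-requests",
                   connection="QUEUE_CONNECTION_STRING")
def process_image(msg: func.QueueMessage) -> None:
    try:
        payload = msg.get_body().decode("utf-8")
        # ... do the actual processing here ...
        logging.info("Processed message %s", msg.id)
    except ValueError:
        # A malformed message will never succeed; log it and move on
        # instead of letting it loop forever.
        logging.warning("Skipping malformed message %s (dequeued %s times)",
                        msg.id, msg.dequeue_count)
    # Any other exception propagates; after maxDequeueCount attempts the
    # runtime moves the message to the <queue-name>-poison queue.
```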

Big and complex functions  
Writing code for Azure Functions requires a change in the way we think. The concept is closer to micro-services principles, where each service should do only one thing, be simple, and be isolated from the rest of the environment.
The same principles apply when we write an Azure Function. On top of this, we should keep in mind that a function needs to have all of its data from the beginning. If we write a function that makes HTTP requests to external resources, and the response time is 10s while the timeout is set to 30s, then the function will be slow and will consume a lot of resources.
  
If the input is missing when the trigger fires, the function has to go to an external source (an HTTP endpoint on-premises) to request that input (resource). This should be avoided, because we have no control over how long it will take an external resource to provide the data that we need.
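
The sketch below contrasts the two shapes of the same function, again using the Python v2 programming model with hypothetical queue and setting names: in the first one the function blocks on an on-premises endpoint it does not control; in the second one the producer puts the whole payload into the message, so everything is available the moment the trigger fires.

```python
import json
import logging

import requests  # only used by the anti-pattern below
import azure.functions as func

app = func.FunctionApp()


def handle(order: dict) -> None:
    # Placeholder for the real business logic.
    logging.info("Handling order %s", order.get("id"))


# Anti-pattern: the trigger arrives without its data, so the function waits
# on an external endpoint whose response time we do not control.
@app.queue_trigger(arg_name="msg", queue_name="orders",
                   connection="QUEUE_CONNECTION_STRING")
def process_order_slow(msg: func.QueueMessage) -> None:
    order_id = msg.get_body().decode("utf-8")
    order = requests.get(
        f"https://onprem.example.com/orders/{order_id}", timeout=30
    ).json()
    handle(order)


# Better: the producer sends a self-contained message, so the function has
# all of its input from the beginning and finishes quickly.
@app.queue_trigger(arg_name="msg", queue_name="orders-v2",
                   connection="QUEUE_CONNECTION_STRING")
def process_order(msg: func.QueueMessage) -> None:
    order = json.loads(msg.get_body())
    handle(order)
```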

Stateless
If during the design of an Azure Function we realize that there is information we need to share between different calls of the same function, then we need to stop. We should never share or persist state inside a function.
In this situation we might need to redesign the function, request more input data, or use an external store where this kind of data can be read as input and persisted as output.
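
If the state really has to live somewhere, it belongs outside the function. A minimal sketch of that idea follows, persisting state as a blob with the azure-storage-blob SDK; the container name and connection string setting are assumptions.

```python
# A minimal sketch, assuming a "function-state" container and a connection
# string in the STATE_CONNECTION_STRING app setting (both hypothetical).
import json
import os

from azure.core.exceptions import ResourceNotFoundError
from azure.storage.blob import BlobClient


def _state_blob(key: str) -> BlobClient:
    return BlobClient.from_connection_string(
        os.environ["STATE_CONNECTION_STRING"],
        container_name="function-state",
        blob_name=key,
    )


def load_state(key: str) -> dict:
    """Read the previously persisted state; empty dict if none exists yet."""
    try:
        return json.loads(_state_blob(key).download_blob().readall())
    except ResourceNotFoundError:
        return {}


def save_state(key: str, state: dict) -> None:
    """Persist the state so the next invocation can pick it up."""
    _state_blob(key).upload_blob(json.dumps(state), overwrite=True)
```

The function itself stays stateless: it loads what it needs at the start, does its work, and writes the result back at the end.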

One function for each environment
It is easy to fall into the trap of adding an input that tells the function whether it runs, for example, in the production or in the testing environment, and making it behave differently. But this is not how we should do it.
Azure Functions are well integrated with CI and can fetch code from different branches. For each environment we need a separate Azure Function app. This way we will never mix environments or let one environment influence the behavior of another.
 
As we can see above, each branch on GitHub has its own environment where the code is deployed, including the Azure Functions. Each environment can have a different configuration; for example, the logging level in the production environment is much lower, because being too verbose would affect the performance of the system.
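
In practice this means that the configuration lives in the app settings of each Function App, not in the code. A small sketch, with hypothetical setting names:

```python
# Each Function App (dev, test, prod) carries its own application settings,
# so the code never asks "which environment am I in?".
import logging
import os

# LOG_LEVEL is an app setting configured differently on each Function App,
# e.g. "DEBUG" on the dev app and "WARNING" on the production app.
logging.getLogger().setLevel(os.environ.get("LOG_LEVEL", "WARNING"))

# Resource names can differ per environment the same way.
SOURCE_CONTAINER = os.environ.get("SOURCE_CONTAINER", "images")
```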

What next
I have just realized that Azure Functions vs. Azure Web Jobs is too complex a topic to be included in this post. Because of this, I will write another post about it in the near future.
