
Azure Functions - Things that we need to consider

In the previous posts we discussed how we can write Azure Functions to process images from OneDrive, the CI support that comes with them, and their base functionality.

In this post we will take a look at things that we should avoid doing in Azure Functions, and at the differences between Azure Functions and Azure WebJobs.

Things that we need to consider

Avoid HTTP communication between Azure Functions
In complex scenarios, we can easily end up with Azure Functions that need to communicate with each other. In such scenarios, the first thing we might do is use webhooks or HTTP requests to communicate between them.
This is not the recommended approach for cases like this. Once you reach a high volume of data, you can run into reliability and scaling problems.
In serverless systems, the communication channel between components should be a messaging system built on top of a queue. Not only is communication over queues cheaper, but the reliability and scaling mechanisms are faster and better.

There are multiple messaging services offered by Azure and supported by Azure Functions:

  • Azure Storage Queue (cheap)
  • Azure Service Bus (messages bigger than 64 KB)
  • Azure Event Hub (high volume)
Based on your needs (message size, cost, volume), you can decide which messaging system to use.
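
As a minimal sketch of chaining two functions over a queue instead of HTTP, here is what the first function could look like using the Node.js v4 programming model (@azure/functions); the queue names and the work done inside the handler are assumptions made for this example:

```typescript
import { app, output, InvocationContext } from "@azure/functions";

// Output binding that writes the result to the next queue in the chain.
// The queue name "resize-results" is an example, not a fixed convention.
const resultQueue = output.storageQueue({
    queueName: "resize-results",
    connection: "AzureWebJobsStorage",
});

// Triggered by a queue message, never by an HTTP call from another function.
app.storageQueue("resizeImage", {
    queueName: "resize-requests",
    connection: "AzureWebJobsStorage",
    extraOutputs: [resultQueue],
    handler: async (message: unknown, context: InvocationContext): Promise<void> => {
        context.log(`Processing resize request: ${JSON.stringify(message)}`);

        // ... do the actual processing work here ...

        // Hand over to the next function by enqueueing a message,
        // not by calling it over HTTP.
        context.extraOutputs.set(resultQueue, { done: true, source: message });
    },
});
```

The second function would simply declare resize-results as its trigger queue, so the runtime, not our code, takes care of retries, scaling, and back-pressure between the two.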

Cover edge cases
Don't expect that your code is perfect. Bad things can happen at any time. Inside the function, we should catch exceptions and react when possible.
The most common problem when you write applications on top of Lambda or Azure Functions is to have an error for a specific trigger and enter an infinite loop: the same message is processed again and again. A dead-letter queue can be our ally in cases like this.
Another case where we can encounter this problem is when we process data in bulk (messages, packages or anything else). One way or another, we need to cover the case when an error occurs in the middle of processing and ensure that the process will not start again and consume the same content, where of course the same error will occur.
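
As a sketch of this pattern, a queue-triggered function can inspect how many times a message has already been dequeued and stop retrying after a threshold; dequeueCount is exposed in the trigger metadata for Storage Queue triggers, and the quarantine queue name is an assumption:

```typescript
import { app, output, InvocationContext } from "@azure/functions";

// Queue where we park messages that keep failing, instead of retrying forever.
// "orders-quarantine" is an example name for this sketch.
const quarantineQueue = output.storageQueue({
    queueName: "orders-quarantine",
    connection: "AzureWebJobsStorage",
});

app.storageQueue("processOrder", {
    queueName: "orders",
    connection: "AzureWebJobsStorage",
    extraOutputs: [quarantineQueue],
    handler: async (message: unknown, context: InvocationContext): Promise<void> => {
        // How many times this message has been picked up already.
        const dequeueCount = Number(context.triggerMetadata?.dequeueCount ?? 1);
        try {
            // ... process the message ...
        } catch (error) {
            if (dequeueCount >= 3) {
                // Break the loop: park the message and complete successfully.
                context.extraOutputs.set(quarantineQueue, message);
                context.error(`Giving up after ${dequeueCount} attempts`, error);
                return;
            }
            // Rethrow so the runtime retries the message.
            throw error;
        }
    },
});
```

For Storage Queues the runtime also does something similar on its own: after the maximum number of dequeue attempts (5 by default) it moves the message to a poison queue, which plays the role of the dead-letter queue mentioned above.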

Big and complex functions  
Writing code for Azure Functions requires a change in the way we think. The concept is close to the micro-services principles, where each service should do only one thing, stay simple, and be isolated from the rest of the environment.
The same principles are applicable when we write an Azure Function. On top of this, we should keep in mind that a function needs to have all its data from the beginning. If we write a function that makes HTTP requests to external resources, and the response time is 10s while the timeout is set to 30s, then the function will be slow and will consume a lot of resources while it waits.
  
Consider the case where the input is missing when the trigger comes. The function then has to go to an external source (for example, an HTTP endpoint on-premises) to request that input. This should be avoided, because we have no control over how long an external resource will take to provide the data that we need.
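
One way to avoid this, sketched below with the same Node.js model as above, is to hand the function its data through an input binding instead of fetching it inside the handler; the container name and the binding expression in the path are assumptions for this example:

```typescript
import { app, input, InvocationContext } from "@azure/functions";

// Input binding: the runtime loads the blob named by the queue message
// before the handler runs, so the function has its data from the start.
// The "images" container is an example name.
const imageBlob = input.storageBlob({
    path: "images/{queueTrigger}",
    connection: "AzureWebJobsStorage",
});

app.storageQueue("processImage", {
    queueName: "image-requests",
    connection: "AzureWebJobsStorage",
    extraInputs: [imageBlob],
    handler: async (message: unknown, context: InvocationContext): Promise<void> => {
        // No HTTP call to an external system: the payload is already here.
        const blobContent = context.extraInputs.get(imageBlob);
        context.log(`Message ${JSON.stringify(message)} arrived with its blob already loaded`);
        // ... process blobContent ...
    },
});
```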

Stateless
If, during the design of an Azure Function, we realize that there is information that needs to be shared between different calls of the same function, then we need to stop. We should never share or persist state inside a function.
In this situation we might need to redesign the function, request more input data, or use an external store where this kind of data can be provided as input and persisted as output.
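
A minimal sketch of the "state travels with the message" option: instead of keeping progress in a module-level variable (which is lost or duplicated as instances scale), the state is carried in the message itself; the message shape is invented for this example:

```typescript
import { app, output, InvocationContext } from "@azure/functions";

// Hypothetical shape of a multi-step job: the state travels with the
// message instead of living inside the function between calls.
interface JobStep {
    jobId: string;
    step: number;
    totalSteps: number;
}

const stepQueue = output.storageQueue({
    queueName: "job-steps",
    connection: "AzureWebJobsStorage",
});

app.storageQueue("runJobStep", {
    queueName: "job-steps",
    connection: "AzureWebJobsStorage",
    extraOutputs: [stepQueue],
    handler: async (message: unknown, context: InvocationContext): Promise<void> => {
        const job = message as JobStep;

        // ... do the work for job.step ...

        if (job.step < job.totalSteps) {
            // Re-enqueue with the updated state; nothing is kept in memory
            // between invocations.
            context.extraOutputs.set(stepQueue, { ...job, step: job.step + 1 });
        }
    },
});
```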

One function for each environment
It is easy to fall into the trap of adding an input that tells the function whether it runs, for example, in the production or the testing environment, and making it behave differently. But this is not the way we should do it.
Azure Functions are well integrated with CI and can fetch code from different branches. For each environment we need to have a separate Azure Function deployment. In this way we will never mix environments or influence the behavior of one environment from another.
 
Each branch on GitHub has its own environment where the code is deployed, including for Azure Functions. Each environment can have a different configuration; for example, the logging level in the production environment is much lower, because being too verbose would affect the performance of the system.
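
A small sketch of keeping such differences in configuration rather than in code; LOG_LEVEL is an invented app setting that each deployment would define with its own value:

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";

// Read the setting once; each deployment (dev, test, production) defines
// its own value in its app settings, so the code is identical everywhere.
const verboseLogging = (process.env.LOG_LEVEL ?? "warn") === "debug";

app.http("ping", {
    methods: ["GET"],
    authLevel: "function",
    handler: async (request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> => {
        if (verboseLogging) {
            // Only the environments configured for it pay the logging cost.
            context.debug(`Headers: ${JSON.stringify([...request.headers])}`);
        }
        return { status: 200, body: "pong" };
    },
});
```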

What next
I have just realized that Azure Functions vs Azure WebJobs is too complex a topic to be included in this post. Because of this, I will write another post on the topic in the near future.
