
Azure Functions - Things that we need to consider

In the previous posts we discussed how to write Azure Functions that process images from OneDrive, the CI support that comes with them, and the base functionality.

In this post we will take a look at things we should avoid doing in Azure Functions, and at the differences between Azure Functions and Azure Web Jobs.

Things that we need to consider

Avoid HTTP communication between Azure Functions
In complex scenarios, we can easily end up with Azure Functions that need to communicate with each other. In such scenarios, the first thing we might do is use web hooks or HTTP requests between them.
This is not the recommended approach. Once you reach a high volume of data, you might run into problems.
In serverless systems, the communication channel between components should be a messaging system built on some kind of queue. Not only is communication over queues cheaper, but the reliability and scaling mechanisms are faster and better.

There are multiple messaging services offered by Azure and supported by Azure Functions:

  • Azure Storage Queue (cheap)
  • Azure Service Bus (messages bigger than 64 KB)
  • Azure Event Hub (high volume)
Based on your needs (message size, cost, volume), you can decide which messaging system to use, as in the sketch below.
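To make the idea concrete, here is a minimal sketch in C# using the WebJobs attribute model; the function name and the queue names "incoming-orders" and "validated-orders" are my own assumptions. Instead of calling a second function over HTTP, the first function drops its result on a Storage Queue, and the second function simply declares a QueueTrigger on that queue:

    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    public static class OrderFunctions
    {
        // Function A: writes its output to a queue instead of calling Function B over HTTP.
        [FunctionName("ValidateOrder")]
        public static void ValidateOrder(
            [QueueTrigger("incoming-orders")] string order,
            [Queue("validated-orders")] out string validatedOrder,
            TraceWriter log)
        {
            log.Info($"Validating order: {order}");
            validatedOrder = order; // Function B is triggered by "validated-orders"
        }
    }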

Cover edge cases
Don't expect your code to be perfect. Bad things can happen at any time. Inside the function, we should catch exceptions and react when possible.
The most common problem when you write applications on top of Lambda or Azure Functions is an error on a specific trigger that puts the function into an infinite loop.
In that case, the same message will be processed again and again. A dead-letter queue can be our ally here.
Another case where we can encounter this problem is when we process data in bulk (messages, packages or anything else). One way or another, we need to cover the case where an error occurs in the middle of processing, and ensure that the process does not start again and consume the same content - where, of course, the same error will occur.
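As a sketch of how this can be handled with Storage Queues (the queue names and the retry threshold are my own assumptions): queue triggers expose a dequeueCount metadata value, so the function can park a repeatedly failing message on a side queue instead of letting it loop forever:

    using System;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    public static class WorkItemFunctions
    {
        [FunctionName("ProcessWorkItem")]
        public static void ProcessWorkItem(
            [QueueTrigger("work-items")] string workItem,
            int dequeueCount, // metadata: how many times this message was dequeued
            [Queue("work-items-deadletter")] out string deadLetter,
            TraceWriter log)
        {
            deadLetter = null;
            if (dequeueCount > 3)
            {
                // Park the poison message on a side queue instead of reprocessing it forever.
                deadLetter = workItem;
                return;
            }
            try
            {
                // ... actual processing of workItem goes here ...
            }
            catch (Exception ex)
            {
                log.Error($"Attempt {dequeueCount} failed: {ex.Message}");
                throw; // rethrow so the runtime retries; past the threshold we dead-letter
            }
        }
    }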

Big and complex functions  
Writing code for Azure Functions requires some changes in the way we think. The concept is closer to micro-services principles, where each service should do only one thing that is simple and isolated from the rest of the environment.
The same principles apply when we write an Azure Function. On top of this, we should keep in mind that a function needs to have all its data from the beginning. If we write a function that makes HTTP requests to external resources, and the response time is 10s while the timeout is set to 30s, then the function will be slow and will consume a lot of resources.
  
In such an example, the input is missing when the trigger fires. The function has to go to an external source (an HTTP endpoint on-premises) to request that input (resource). This should be avoided, because we have no control over how long an external resource takes to provide the data that we need.
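A minimal sketch of the better shape, assuming a hypothetical ResizeRequest type and queue name: the message itself carries everything the function needs, so no outbound request is required at run time:

    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    public class ResizeRequest
    {
        public string BlobUrl { get; set; }   // where the image lives
        public int TargetWidth { get; set; }  // every processing parameter...
        public int TargetHeight { get; set; } // ...travels with the message
    }

    public static class ImageFunctions
    {
        [FunctionName("ResizeImage")]
        public static void ResizeImage(
            [QueueTrigger("resize-requests")] ResizeRequest request, // JSON message deserialized by the runtime
            TraceWriter log)
        {
            // No call to an on-premises endpoint: the input is complete from the beginning.
            log.Info($"Resizing {request.BlobUrl} to {request.TargetWidth}x{request.TargetHeight}");
        }
    }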

Stateless
If, during the design of an Azure Function, we realize that there is information that needs to be shared between different calls of the same function, then we need to stop. We should never share or persist state inside a function.
In this situation we might need to redesign the function, request more input data, or use an external store where this kind of data can be provided as input and persisted as output, as in the sketch below.
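For example, a minimal sketch that keeps state in Blob Storage instead of inside the function (the blob path and queue name are my own assumptions): the previous state comes in through an input binding and the new state goes out through an output binding, so nothing lives in the function between invocations:

    using System.IO;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    public static class CounterFunctions
    {
        [FunctionName("UpdateCounter")]
        public static void UpdateCounter(
            [QueueTrigger("counter-events")] string counterEvent,
            [Blob("state/counter.txt", FileAccess.Read)] string currentState,  // state in...
            [Blob("state/counter.txt", FileAccess.Write)] out string newState, // ...state out
            TraceWriter log)
        {
            int counter;
            int.TryParse(currentState, out counter); // null/empty on first run -> 0
            newState = (counter + 1).ToString();
            log.Info($"Counter is now {newState}");
        }
    }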

One function for each environment
It is easy to fall into the trap of adding an input that tells the function whether it is running in, for example, the production or the testing environment, and making it behave differently. But this is not the way we should do it.
Azure Functions are well integrated with CI and can fetch code from different branches. For each environment we need a separate Azure Function deployment. This way we will never mix environments or let one environment influence the behavior of another.
 
Each branch on GitHub has its own environment where the code is deployed, including the Azure Functions. Each environment can have a different configuration; for example, the production logging level is much lower, because being too verbose would affect the performance of the system.
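A minimal sketch of how such per-environment configuration can be consumed (the "VerboseLogging" app setting name is my own assumption): each Function App carries its own application settings, so the code itself never branches on an "environment" parameter:

    using System;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    public static class EventFunctions
    {
        [FunctionName("ProcessEvent")]
        public static void ProcessEvent(
            [QueueTrigger("events")] string payload,
            TraceWriter log)
        {
            // Each deployment (one Function App per branch) has its own app settings.
            bool verbose = Environment.GetEnvironmentVariable("VerboseLogging") == "true";
            if (verbose)
            {
                log.Verbose($"Full payload: {payload}"); // only outside production
            }
            log.Info("Event processed");
        }
    }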

What's next
While writing this, I realized that Azure Functions vs. Azure Web Jobs is too complex a topic to be included in this post. Because of this, I will write a separate post about it in the near future.
