
Azure Functions - Things that we need to consider

In the last posts we discussed how to write Azure Functions that process images from OneDrive, the CI support that comes with them, and their base functionality.

In this post we will take a look at the things we should avoid doing in Azure Functions, and at the differences between Azure Functions and Azure Web Jobs.

Things that we need to consider

Avoid HTTP communication between Azure Functions
In complex scenarios we can easily end up with Azure Functions that need to communicate with each other. In this kind of scenario, the first thing we might reach for is webhooks or HTTP requests between them.
This is not the recommended approach for cases like this. Once you reach a high volume of data, you will run into problems.
In serverless systems, the communication channel between components should be a messaging system built on some kind of queue. Not only is communication over queues cheaper, but the reliability and scaling mechanisms are faster and better.

There are multiple messaging services offered by Azure and supported by Azure Functions:

  • Azure Storage Queue (cheap)
  • Azure Service Bus (messages bigger than 64 KB)
  • Azure Event Hub (high volume)
Based on your needs (message size, cost, volume), you can decide which messaging system to use.
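The decision above can be written down as a small rule of thumb. The sketch below is illustrative, not an official decision matrix: the thresholds (the 64 KB Storage Queue message cap, and an assumed "high volume" cut-off of 1,000 messages per second) stand in for whatever limits apply to your scenario.

```python
def pick_messaging_service(message_kb: float, messages_per_second: float) -> str:
    """Pick an Azure messaging service from rough message size and volume.

    Thresholds are assumptions for illustration:
    - Storage Queue messages are limited to 64 KB, so bigger payloads
      push us toward Service Bus.
    - Very high throughput is a better fit for Event Hub.
    """
    if messages_per_second > 1000:   # high-volume event streams
        return "Azure Event Hub"
    if message_kb > 64:              # over the Storage Queue message limit
        return "Azure Service Bus"
    return "Azure Storage Queue"     # cheapest option for small, modest traffic
```

For a small image-processing pipeline, `pick_messaging_service(10, 5)` lands on the cheap Storage Queue; only size or volume pushes us up the ladder.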

Cover edge cases
Don't expect your code to be perfect. Bad things can happen at any time. Inside the function we should catch exceptions and react when possible.
The most common problem when writing applications on top of Lambda or Azure Functions is hitting an error for a specific trigger and entering an infinite loop.
In that case, the same message will be processed again and again. A dead-letter queue can be our ally here.
Another place where we can encounter this problem is when we process data in bulk (messages, packages or anything else). One way or another, we need to cover the case where an error occurs in the middle of processing, and ensure that the process will not start over from the beginning, consuming the same content - where, of course, the same error will occur.
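The poison-message pattern above can be sketched locally. Real queue triggers (Storage Queue, Service Bus) track a delivery count for you and can dead-letter automatically; this standalone Python sketch just simulates a retry budget so a bad message cannot trap the function in an infinite loop, and the good messages still get through.

```python
MAX_ATTEMPTS = 3  # illustrative retry budget before dead-lettering


def process_queue(messages, handler):
    """Process each message, parking persistent failures in a dead-letter list.

    `handler` is whatever per-message work the function does; a message that
    fails MAX_ATTEMPTS times is moved aside instead of being retried forever.
    """
    processed, dead_letter = [], []
    for body in messages:
        for attempt in range(1, MAX_ATTEMPTS + 1):
            try:
                processed.append(handler(body))
                break
            except Exception:
                if attempt == MAX_ATTEMPTS:
                    dead_letter.append(body)  # park it for offline inspection
    return processed, dead_letter
```

A message that can never be parsed ends up in the dead-letter list after three attempts, while the rest of the batch is processed normally.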

Avoid big and complex functions
Writing code for Azure Functions requires a change in how we think. The concept is close to microservices principles, where each service should do only one thing that is simple and isolated from the rest of the environment.
The same principles apply when we write an Azure Function. On top of this, we should keep in mind that a function needs to have all its data from the beginning. If we write a function that makes HTTP requests to external resources, and the response time is 10s while the timeout is set to 30s, then the function will be slow and will consume a lot of resources.
  
As we can see in the above example, the input is missing when the trigger comes. The function has to go to an external source (an HTTP endpoint on-premises) to request that input. This should be avoided, because we have no control over how long an external resource will take to provide the data we need.
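One way to avoid that round-trip is to embed everything the function needs into the trigger message itself. The payload shape below (`orderId`, `customer`, `items`) is hypothetical; the point is that the producer serializes the complete input up front, so the function body never has to call back to an external endpoint.

```python
import json


def build_trigger_message(order_id, customer, items):
    """Producer side: put the full input into the queue message itself,
    instead of enqueueing just an id and forcing the function to fetch
    the rest over HTTP."""
    return json.dumps({"orderId": order_id, "customer": customer, "items": items})


def handle_order(message: str) -> int:
    """Function side: everything needed arrives with the trigger,
    so no external call (and no unpredictable wait) is required."""
    payload = json.loads(message)
    return len(payload["items"])  # e.g. number of line items to process
```

The function's running time now depends only on its own work, not on how fast an on-premises endpoint answers.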

Stateless
If, during the design of an Azure Function, we realize that there is information we need to share between different calls of the same function, then we need to stop. We should never share or persist state inside a function.
In this situation we might need to redesign the function, request more input data, or use an external store where this kind of data can be provided as input and persisted as output.
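The "state in, state out" shape looks like this. The `state` dictionary stands in for an external store (a blob, a table row) that the platform would bind as input and output; the function itself keeps nothing between calls, and never mutates what it was given.

```python
def count_events(event: str, state: dict) -> dict:
    """Stateless function: all state arrives as input and leaves as output.

    `state` is a stand-in for an externally persisted document; the function
    returns a new copy rather than holding anything in module-level variables.
    """
    new_state = dict(state)  # never mutate the input in place
    new_state[event] = new_state.get(event, 0) + 1
    return new_state
```

Because each call is a pure input-to-output transformation, the platform is free to run it on any instance, scale it out, or retry it without corrupting anything.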

One function for each environment
It is easy to fall into the trap of adding an input that tells the function whether it is running in the production or testing environment, for example, and making it behave differently. But this is not how we should do it.
Azure Functions are well integrated with CI and can fetch code from different branches. For each environment we need a separate Azure Function. This way we will never mix environments or let one environment influence the behavior of another.
 
As we can see above, each branch on GitHub has its own environment where the code is deployed, including the Azure Functions. Each environment can have a different configuration; for example, the production environment's logging level is much lower, because being too verbose would affect the performance of the system.
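A branch-to-environment mapping can be kept as plain configuration, outside the function's inputs. The branch names, function app names and log levels below are hypothetical; the point is that each branch deploys to its own function app, and production gets the least verbose logging.

```python
import logging

# Hypothetical mapping: one dedicated function app per branch/environment.
ENVIRONMENTS = {
    "master":  {"function_app": "img-fn-prod",  "log_level": logging.WARNING},
    "staging": {"function_app": "img-fn-stage", "log_level": logging.INFO},
    "dev":     {"function_app": "img-fn-dev",   "log_level": logging.DEBUG},
}


def settings_for(branch: str) -> dict:
    """Resolve deployment settings for a branch; unknown branches fail fast
    instead of silently deploying somewhere unexpected."""
    if branch not in ENVIRONMENTS:
        raise ValueError(f"no environment configured for branch {branch!r}")
    return ENVIRONMENTS[branch]
```

The function's code never carries an "am I in production?" flag; the environment is decided entirely by where the CI pipeline deploys it.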

What next
Only now do I realize that Azure Functions vs. Azure Web Jobs is too complex a topic to be included in this post. Because of this, I will write another post about it in the near future.
