
Azure Functions - Features Review

In the last two posts related to Azure Functions we talked about how we can write functions that process images from OneDrive and about the current integration with Visual Studio and Continuous Integration.
In this post we will take a look at the main features and the pricing model.

Pay-per-use
Personally, this is one of the strongest points of Azure Functions. You pay only for what you use; when your code is not running or is not called by an external trigger, you don't pay anything. You cannot ask for more from the hosting perspective.
When you have zero clients, you pay zero. When you have n clients, you pay for each of them. The pricing model is described later on.

Multiple-languages support
The number of supported languages is not just 1 or 2. At this moment there is support for C#, F#, JavaScript (via Node.js), PHP, Python, batch and bash.
Another interesting feature is that you can bring any executable and run it as an Azure Function. An interesting concept that I promise I will follow up on in the future.

Source Control and CI
There is full integration with TFS and Git. You can use triggers to deploy automatically from different branches. You can even deploy functions from OneDrive or Dropbox.

External Dependencies
I was surprised when I discovered that I can use NuGet (or npm) packages inside an Azure Function. In this way you can use predefined libraries; a small sketch is shown below.
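A minimal sketch, assuming the classic C# script (run.csx) model; Newtonsoft.Json is used here only as an example of an external library, and a NuGet package would normally be declared in the function's project.json so the host restores it.

```csharp
// run.csx - a minimal sketch, assuming the C# script model.
// Newtonsoft.Json is only an example; other NuGet packages would be
// declared in the function's project.json.
#r "Newtonsoft.Json"

using System;
using Newtonsoft.Json;

public static void Run(string input, TraceWriter log)
{
    // Wrap and serialize the incoming payload, just to show the external package is usable.
    var wrapped = JsonConvert.SerializeObject(new { ReceivedAt = DateTime.UtcNow, Payload = input });
    log.Info(wrapped);
}
```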

SaaS Integration fully supported
Even if the naming is not very clear, translated it means: integration and hooks with external services that may or may not be part of the Azure platform.
For example, we have integration with Azure Event Hub, DocumentDB, Storage (Object :) ) and so on, but also with external service providers like Google Drive.
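As a hedged illustration (the binding and parameter names below are my own, chosen for the example), a queue-triggered function could write straight into a DocumentDB collection through an output binding declared in function.json:

```csharp
// run.csx - a sketch only; assumes function.json declares a queue trigger bound to
// the "order" parameter and a DocumentDB output binding bound to "document".
using System;

public static void Run(string order, out object document, TraceWriter log)
{
    log.Info($"Processing order: {order}");

    // Whatever is assigned to the output parameter is persisted by the binding.
    document = new { id = Guid.NewGuid().ToString(), payload = order };
}
```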

Triggers
The list of triggers is pretty long. You will find everything from timer triggers to message or GitHub triggers. The one that I like most is the webhook trigger, which enables us to integrate an Azure Function with our own services.
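As a small sketch (again assuming the C# script model, with an HTTP/webhook trigger declared in function.json; the payload handling is illustrative), such a function looks roughly like this:

```csharp
// run.csx - a sketch of a generic webhook/HTTP-triggered function.
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Read the raw body posted by the external service that calls our hook.
    string body = await req.Content.ReadAsStringAsync();
    log.Info($"Webhook received: {body}");

    // Answer so the caller knows the hook was processed.
    return req.CreateResponse(HttpStatusCode.OK, "Processed");
}
```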

Security
There is full support for OAuth, with out-of-the-box integration with Facebook, Google, Twitter, Azure AD and Microsoft Accounts.

Service Plans
There are two types of service plans available for Azure Functions.

App Service Plan
The classical one - App Service Plan - is the one that we already know and are used to from Web Apps. When you select this plan, your Azure Functions will run inside your own App Service Plan and will use the resources that are available in it.
Azure Functions will scale automatically within your App Service Plan as long as there are enough resources. It is the right place to run your functions when the load on your functions is constant or when you want to have resource reservation.
The downside of this service plan is that you will pay for it even if you don't run the functions. This happens because with an App Service Plan you pay for the VMs that you have in the cluster.
This plan is very appealing if, besides Azure Functions, you have Web Apps and other applications that run. You could put all of them inside the same App Service Plan; it can be a great solution for systems where the load is constant and a good load forecast can be done.

Consumption Plan
The mindset is 180 degrees different in comparison with the App Service Plan. For the Consumption Plan you don't need an App Service Plan, VMs or anything else. You just deploy your code and run it. You pay for each time your function runs.
There are two types of units for which you'll have to pay:

  • Resource Consumption - a way to measure how many resources your function consumed, calculated based on how much memory you used during the time your functions were running
  • Execution - the number of times a function is triggered. The price is low: for 1 million executions you'll pay €0.1687
Similar to Web Jobs, the service is free until you reach a specific limit. For Azure Functions the service is free for the first 1 million executions and for the first 400,000 GB-s consumed (Resource Consumption).
The Consumption Plan scales automatically by monitoring the triggers. Based on the load of a queue or other triggers, the system can decide if it is necessary to scale up or down. For triggers that don't offer a count (size) - like a generic webhook - this might create some problems with scaling.
For situations when we know that we need to scale fast, we should try to use trigger sources that also offer a counter that can be monitored by Azure Functions. A rough cost estimate is sketched below.
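To make the two billing units concrete, here is a back-of-the-envelope sketch using the numbers quoted above. The workload figures (memory, duration, volume) are assumptions, and the per-GB-s price is left out because only the execution price is quoted in this post.

```csharp
// A rough consumption-plan estimate - a sketch, not an official pricing calculator.
using System;

class ConsumptionEstimate
{
    static void Main()
    {
        double memoryGb = 0.512;       // assumption: the function uses 512 MB while running
        double secondsPerRun = 1.0;    // assumption: each execution lasts one second
        long executions = 3000000;     // assumption: monthly execution volume

        // Resource Consumption is measured in GB-s: memory used multiplied by execution time.
        double gbSeconds = memoryGb * secondsPerRun * executions;
        double billableGbSeconds = Math.Max(0, gbSeconds - 400000);    // first 400,000 GB-s are free
        long billableExecutions = Math.Max(0, executions - 1000000);   // first 1 million executions are free

        // Execution cost based on the €0.1687 per million executions quoted in this post.
        double executionCost = billableExecutions / 1000000.0 * 0.1687;

        Console.WriteLine($"GB-s consumed: {gbSeconds}, billable GB-s: {billableGbSeconds}");
        Console.WriteLine($"Execution cost: {executionCost:F4} EUR (Resource Consumption is billed per GB-s on top)");
    }
}
```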

Monitoring - Function Pulse
A basic monitoring system is supported out of the box. Information like the invocation history, with the last status of each run, is available. All information that we log in a function using TraceWriter can be accessed, and the same goes for exceptions.
Besides this, a live stream is available, where we can see in real time which functions run and what their output is. This output can be redirected to a PowerShell or console app or to any other location we want.
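A small sketch of the kind of logging that feeds the invocation history and the live stream (assuming the C# script model, where the host injects a TraceWriter):

```csharp
// run.csx - sketch of TraceWriter logging that surfaces in the monitoring views.
using System;

public static void Run(string input, TraceWriter log)
{
    log.Info($"Started processing: {input}");

    try
    {
        // ... the actual work would go here ...
        log.Verbose("Work completed without issues.");
    }
    catch (Exception ex)
    {
        // Logged exceptions (or unhandled ones) show up in the invocation history too.
        log.Error("Processing failed.", ex);
        throw;
    }
}
```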

Testing
Azure Functions can be tested easily. Based on a well-known URL you can trigger your function (as long as you provide all inputs that are required to run it) - this is applicable for functions that have a webhook or an HTTP request as trigger. Based on your trigger, you'll need to adapt your tests.
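For example, an HTTP/webhook-triggered function could be smoke-tested with a plain HTTP call; the URL and function key below are placeholders, not real values.

```csharp
// A hedged testing sketch: call the function's well-known URL and inspect the result.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class FunctionSmokeTest
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Hypothetical URL; the "code" query parameter carries the function key.
            var url = "https://myfunctionapp.azurewebsites.net/api/MyFunction?code=<function-key>";
            var content = new StringContent("{ \"value\": 42 }", Encoding.UTF8, "application/json");

            var response = await client.PostAsync(url, content);

            Console.WriteLine($"Status: {response.StatusCode}");
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}
```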

Remote Debugging
As I presented in a previous post, remote debugging is fully supported from Visual Studio. You can easily attach to your functions, catch errors and add breakpoints. Life is simpler now.

CORS Support 
By default, this feature is disabled. From the configuration panel, we can activate it and specify the list of domains that are allowed to make calls.

Threading
Each function runs single-threaded. If multiple calls happen in parallel, the hosting plan can decide to run functions in parallel. This is much better in comparison with AWS Lambda, which executes only one function at a time. The parallelization level can vary based on the consumption level and trigger type.

Conclusion
From a features perspective, Azure Functions is a powerful service that enables us to develop and run code without thinking about infrastructure. Going forward, this service is a game changer that you need to take into consideration when you design a solution on top of Azure.
