Azure Functions - Features Review

In the last two posts about Azure Functions we talked about how to write functions that process images from OneDrive, and about the current integration with Visual Studio and Continuous Integration.
In this post we will take a look at the main features and the pricing model.

Pay-per-use
Personally, this is one of the strongest points of Azure Functions. You pay only for what you use; when your code is not running and is not called by an external trigger, you don't pay anything. You cannot ask for more from a hosting perspective.
When you have zero clients, you pay zero. When you have n clients, you pay for each of them. The pricing model is described later on.

Multiple-languages support
The number of supported languages is not just 1 or 2. At this moment there is support for C#, F#, JavaScript (via Node.js), PHP, Python, Batch and Bash.
Another interesting feature is that you can bring any executable and run it as an Azure Function - an interesting concept that I promise to follow up on in the future.
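To make this concrete, here is a minimal sketch of a C# script function (run.csx); it assumes a queue trigger bound to a parameter named myQueueItem in function.json:

    // run.csx - minimal C# script function (assumes function.json
    // declares a queue trigger bound to the "myQueueItem" parameter)
    using System;

    public static void Run(string myQueueItem, TraceWriter log)
    {
        // The runtime binds the queue message to myQueueItem
        log.Info($"C# queue trigger processed message: {myQueueItem}");
    }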

Source Control and CI
There is full integration with TFS and Git. You can use triggers to deploy automatically from different branches. You can even deploy functions from OneDrive or Dropbox.

External Dependencies
I was surprised when I discovered that I can use NuGet (or npm) packages inside an Azure Function. In this way you can reuse existing libraries.
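As a hedged example in the classic run.csx model: the Newtonsoft.Json assembly can be referenced with a #r directive, while other NuGet packages would be declared in a project.json placed next to the script:

    // run.csx - referencing an assembly shipped with the runtime;
    // other NuGet packages go in a project.json next to the script
    #r "Newtonsoft.Json"

    using Newtonsoft.Json;

    public static void Run(string myQueueItem, TraceWriter log)
    {
        // Deserialize the incoming message with the referenced package;
        // "id" is a hypothetical field of the message payload
        dynamic data = JsonConvert.DeserializeObject(myQueueItem);
        log.Info($"Received item with id: {data.id}");
    }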

SaaS Integration fully supported
Even if the naming is not clear, translated it means: integration and hooks with external services that may or may not be part of the Azure platform.
For example, there is integration with Azure Event Hubs, DocumentDB, Storage (Object :) ) and so on, but also with external service providers like Google Drive.
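For instance, here is a sketch of writing a document to DocumentDB through an output binding; it assumes function.json declares a DocumentDB output binding named "document":

    using System;

    public static void Run(string myQueueItem, out object document, TraceWriter log)
    {
        // Whatever is assigned to the output parameter is persisted
        // as a new document by the DocumentDB binding
        document = new { text = myQueueItem, processedAt = DateTime.UtcNow };
        log.Info("Document sent to DocumentDB");
    }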

Triggers
The list of triggers is pretty long. You will find everything from timer triggers to message or GitHub triggers. The one that I like most is the webhook trigger, which enables us to integrate an Azure Function with our own services.
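A minimal sketch of a webhook-style function, assuming an HTTP trigger is configured in function.json:

    using System.Net;

    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        // Read the raw payload posted by the external service
        string body = await req.Content.ReadAsStringAsync();
        log.Info($"Webhook received: {body}");

        return req.CreateResponse(HttpStatusCode.OK, "Processed");
    }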

Security
There is full support for OAuth, with out-of-the-box integration with Facebook, Google, Twitter, Azure AD and Microsoft accounts.

Service Plans
There are two types of service plans available for Azure Functions.

App Service Plan
The classical one - the App Service Plan - is the one that we already know and are used to from Web Apps. When you select this plan, your Azure Functions will run in your own App Service Plan and use the resources that are available in it.
Azure Functions will scale automatically within your App Service Plan as long as there are enough resources. It is the right place to run your functions when the load on them is constant or you want resource reservation.
The downside of this service plan is that you will pay for it even if you don't run the functions. This happens because with an App Service Plan you pay for the VMs that you have in the cluster.
This plan is very appealing if, besides Azure Functions, you have Web Apps and other applications running. You could put all of them inside the same App Service Plan; it can be a great solution for systems where the load is constant and a good load forecast can be made.

Consumption Plan
The mindset is 180 degrees different from the App Service Plan. For the Consumption Plan you don't need an App Service Plan, VMs or anything else. You just deploy your code and run it. You pay each time your function runs.
There are two types of units for which you'll have to pay:

  • Resource Consumption - a measure of how many resources your function consumed, calculated based on how much memory was used during the time your functions were running
  • Execution - the number of times a function is triggered. The price is low: for 1 million executions you'll pay €0.1687
Similar to Web Jobs, the service is free until you reach a specific limit. For Azure Functions the service is free for the first 1 million executions and for the first 400,000 GB-s consumed (Resource Consumption).
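As an illustrative back-of-the-envelope calculation using the numbers above: a function invoked 3 million times a month, each run using 0.5 GB of memory for one second, consumes 3,000,000 x 0.5 GB x 1 s = 1,500,000 GB-s. After subtracting the free grants, you would be billed for 2 million executions (2 x €0.1687, roughly €0.34) plus 1,100,000 GB-s at the current GB-s rate.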
The Consumption Plan will scale automatically by monitoring the triggers. Based on the load of a queue or other triggers, the system can decide whether it is necessary to scale up or down. For triggers that don't offer a count (size) - like a generic webhook - this might create some problems with scaling.
For situations where we know that we need to scale fast, we should try to use trigger sources that also offer a count that can be monitored by Azure Functions.

Monitoring - Function Pulse
A basic monitoring system is supported out of the box. Information like the invocation history, with the last status of each run, is available. All information that we log in a function using TraceWriter can be accessed; the same goes for exceptions.
Besides this, a live stream is available, where we can see in real time which functions run and what their output is. This output can be redirected to PowerShell, a console app or any other location we want.
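A short sketch of how logging through TraceWriter surfaces in the monitoring experience; everything written here appears in the invocation history and in the live stream:

    using System;

    public static void Run(string myQueueItem, TraceWriter log)
    {
        try
        {
            log.Info($"Start processing {myQueueItem}");
            // ... business logic ...
            log.Verbose("Finished without errors");
        }
        catch (Exception ex)
        {
            // Exceptions logged here show up against the failed invocation
            log.Error("Processing failed", ex);
            throw;
        }
    }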

Testing
Azure Functions can be tested easily. Based on a well-known URL you can trigger your function (as long as you provide all the inputs required to run it) - this is applicable to functions that have a webhook or an HTTP request as trigger. Based on your trigger, you'll need to adapt your tests.
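For example, an HTTP-triggered function can be exercised from a plain HttpClient call; the app name, function name and key below are placeholders:

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    public static async Task CallFunctionAsync()
    {
        using (var client = new HttpClient())
        {
            var payload = new StringContent(
                "{ \"name\": \"test\" }", Encoding.UTF8, "application/json");

            // <app> and <key> are placeholders for your function app and key
            var response = await client.PostAsync(
                "https://<app>.azurewebsites.net/api/MyFunction?code=<key>", payload);

            // Assert on the status code / body in your test framework of choice
            response.EnsureSuccessStatusCode();
        }
    }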

Remote Debugging
As I presented in a previous post, remote debugging is fully supported from Visual Studio. You can easily attach to your functions, catch errors and add breakpoints. Life is simpler now.

CORS Support 
By default, this feature is disabled. From the configuration panel, we can activate it and specify the list of domains that are allowed to make calls.

Threading
Each function call runs single-threaded. If multiple calls happen in parallel, the hosting plan can decide to run function instances in parallel. This is much better compared with AWS Lambda, which executes only one function at a time per instance. The parallelization level can vary based on the consumption level and trigger type.

Conclusion
From the features perspective, Azure Functions is a powerful service that enables us to develop and run code without thinking about infrastructure. Going forward, this service is a game changer that you need to take into consideration when you design a solution on top of Azure.
