
How to run Azure Functions on AWS and on-premises

Nowadays, cloud and hybrid cloud solutions are in high demand. In this post, we will talk about a technical solution for business logic that is supported by multiple cloud providers and can run on-premises without any issues.
The requirements for the given scenario are the following:
We need to expose an API that can execute tasks in parallel, each running for 2-5 minutes. The native platform shall be Microsoft Azure, but the solution should also be able to run on dedicated hardware in specific countries such as China and Russia and, with minimal effort, on the AWS platform.

The team we had available was a .NET Core team with good ASP.NET skills. There are numerous Azure services that can also run in other environments; the most attractive ones are those built on top of Kubernetes and microservices.
Even so, we decided to do things a little differently. We had to take into consideration that autoscaling would be important. In addition, the team needed to deliver the first version on top of Azure, its Kubernetes skills were limited, and the delivery timeline was strict.

Solution
Taking this into consideration, we decided to go with a two-step approach. The first version of the solution would be built on top of Azure Functions 2.x, fully hosted on Microsoft Azure, and implemented in C# on .NET Core 2.x.
To deliver what is required, the following bindings need to be used (a minimal sketch follows the list):

  • HTTP, which plays the role of the trigger. External systems use this binding to kick off the execution.
  • Blob Storage, which plays the role of data input. The data is delivered in JSON format by the external system and loaded into blob storage before execution starts.
  • Webhooks, which play the role of data output. An external system is notified and the result (scoring information) is provided.
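
To make the flow concrete, below is a minimal sketch of how the three bindings could fit together in C#. The route, the blob path and the webhook URL are hypothetical placeholders, not values from the real project:

```csharp
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ScoringFunction
{
    private static readonly HttpClient HttpClient = new HttpClient();

    [FunctionName("RunScoring")]
    public static async Task<IActionResult> Run(
        // HTTP binding: external systems call this endpoint to kick off execution.
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "score/{id}")] HttpRequest req,
        string id,
        // Blob Storage binding: the JSON chunk was uploaded by the external
        // system before the call; "tasks/{id}.json" is a placeholder path.
        [Blob("tasks/{id}.json", FileAccess.Read)] string payload,
        ILogger log)
    {
        log.LogInformation($"Scoring task {id} started.");

        // Placeholder for the real business logic that runs for 2-5 minutes.
        string result = await ExecuteScoringAsync(payload);

        // Webhook as data output: notify the external system with the result.
        await HttpClient.PostAsync(
            "https://external-system.example.com/webhook",
            new StringContent(result, Encoding.UTF8, "application/json"));

        return new OkObjectResult(new { id, status = "completed" });
    }

    private static Task<string> ExecuteScoringAsync(string payload) =>
        Task.FromResult("{\"score\": 0.0}");
}
```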

This is where the power of Azure Functions is revealed. The team can focus on implementation, with minimal effort spent on infrastructure. Initially, the solution will be hosted on the Consumption Plan, which enables us to scale automatically and pay per use.
Even if it might be a little more expensive than the App Service Plan, where dedicated resources are allocated, the Consumption Plan is a good starting point. Later, depending on the consumption level, the solution may or may not be migrated to the App Service Plan.
Execution Time
When you are using the Consumption Plan, a function can run for a maximum of 10 minutes. The initial estimate of a task's duration is 2-5 minutes, so there is a medium risk that the 10-minute limit will be reached. To mitigate this risk, during the implementation phase the team will run stress tests with real data to estimate the task duration in the Azure Functions context.
In addition, a custom report covering different KPIs related to execution times will be generated every week. A scatter chart combined with a clustering chart shall be more than enough. The reports are generated inside Power BI, fed by duration data collected as sketched below.
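
As a hedged sketch, the raw duration data could be captured from inside the function body, reusing the hypothetical names from the earlier example; LogMetric is the custom-metrics logging call available in Azure Functions 2.x:

```csharp
// Measure the task and emit a custom metric that Application Insights
// (and, from there, the weekly Power BI report) can aggregate.
var stopwatch = System.Diagnostics.Stopwatch.StartNew();
string result = await ExecuteScoringAsync(payload);
stopwatch.Stop();
log.LogMetric("ScoringDurationSeconds", stopwatch.Elapsed.TotalSeconds);
```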
The mitigation plan is to migrate to the App Service Plan, where you can control what resources are available and allocate dedicated resources for this workload alone. On top of this, the App Service Plan has no hard timeout limitation: you can run a function as long as you want.
There are also other mitigation mechanisms, such as optimizing the code or splitting the execution across multiple functions, but these will be considered later, only if the timeout becomes a problem.
Remarks: on the Consumption Plan, the execution timeout defaults to 5 minutes for both Azure Functions 1.x and 2.x and can be raised to a maximum of 10 minutes via the functionTimeout setting in host.json.
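
For reference, a minimal host.json that raises the timeout to the Consumption Plan maximum:

```json
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}
```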
On-premises support
For an on-premises system, if you want a stable setup that can handle high loads, the supported hosting option for Azure Functions is a Kubernetes cluster. The cool thing Microsoft offers us is the ability to run Azure Functions inside a Kubernetes cluster as a Docker image.
The tooling available on the market at this moment allows us to create, from an Azure Function, a Docker image that is already configured with a Horizontal Pod Autoscaler. This enables us, without any custom configuration, to host an Azure Function inside Kubernetes as a container that scales automatically based on load. Besides this, the deployment and service configuration is also generated.
The tool that allows us to do this is called Core Tools, and it is built by the Azure Functions team. Because it is a command-line tool, it can easily be integrated with the CI/CD systems we already have in place; a sketch of the workflow follows.
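
A sketch of that workflow, assuming a Docker-enabled function project and a cluster reachable from the current kubectl context; the flags follow the Core Tools 2.x CLI, and the name and registry values are placeholders:

```bash
# Add a Dockerfile to the existing Azure Functions project
func init --docker-only

# Build and push the image, then generate and apply the Kubernetes
# deployment, service and Horizontal Pod Autoscaler resources
func deploy --platform kubernetes --name scoring-func --registry <docker-registry-id>
```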

AWS Environment
The same solution used for on-premises can host our Azure Functions inside AWS EKS or any other Kubernetes-based service.

Official Core Tools support allows us to create Docker images and deploy them to Kubernetes using, or on top of, the following (an EKS example follows the list):

  • Virtual-Kubelet
  • Knative
  • Kubectl
  • AKS (Azure Kubernetes Service)
  • ACR (Azure Container Registry) – Image hosting
  • ACS (Azure Container Service)
  • AWS EKS
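
Because Core Tools deploys to whatever cluster the current kubectl context points to, moving to AWS EKS is mostly a matter of switching contexts. A sketch, with hypothetical cluster and function names:

```bash
# Point kubectl at the EKS cluster (assumes the AWS CLI is configured)
aws eks update-kubeconfig --name my-eks-cluster

# The same deployment command now targets EKS instead of an Azure cluster
func deploy --platform kubernetes --name scoring-func --registry <docker-registry-id>
```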

Azure Functions Core Tools is available for download from the following sources (install examples after the list):

  • Github - https://github.com/Azure/azure-functions-core-tools
  • npm - azure-functions-core-tools
  • choco – azure-functions-core-tools
  • Brew - azure-functions-core-tools
  • Ubuntu - https://packages.microsoft.com/config/ubuntu/XX.XX/packages-microsoft-prod.deb 
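
For example, typical installations look like this, using the package name listed above:

```bash
npm install -g azure-functions-core-tools    # npm

brew tap azure/functions                     # Homebrew
brew install azure-functions-core-tools

choco install azure-functions-core-tools     # Chocolatey
```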


Conclusion
As we can see, Azure Functions does not lock us into running our solution only inside Azure. We have the ability to take our functions and spin them up as a console application or even inside Kubernetes. Azure Functions Core Tools enables us to create Docker images and run them inside Kubernetes in any kind of environment.
