
[Cloud lock-in] Overview and pros and cons of cloud lock-in / Azure

This post starts a series about how we can avoid cloud lock-in and achieve interoperability between managed cloud services and external systems using different solutions available on the market.

Cloud lock-in Overview

There are multiple definitions on the market, but when we talk about cloud lock-in or vendor lock-in, we refer to the direct and indirect costs of moving a platform from one cloud vendor to another.

The lock-in makes customers more dependent on a cloud vendor, and migration to another vendor becomes expensive. There are multiple dimensions of cloud lock-in that we need to be aware of, such as:

  • Storage (Azure Storage)
  • Data (Azure SQL Database, Azure SQL Elastic Pools)
  • Workloads (Azure Kubernetes Service, Azure VMs, Azure Functions, Azure App Service)
  • Communication (Azure Service Bus, Azure Event Hubs)
  • Infrastructure (Azure VNets)
  • Management and Monitoring (Azure Monitor, Azure Application Insights)
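Each of these dimensions can be partially insulated behind an abstraction in the application code. As a minimal sketch (the `MessageBus` interface, `InMemoryBus`, and `place_order` below are hypothetical names for illustration, not part of any vendor SDK), the Communication dimension could be hidden behind a small port so that replacing, say, Azure Service Bus with another broker only means writing a new adapter:

```python
from abc import ABC, abstractmethod

class MessageBus(ABC):
    """Vendor-neutral port for the 'Communication' dimension."""
    @abstractmethod
    def publish(self, topic: str, payload: bytes) -> None: ...

class InMemoryBus(MessageBus):
    """Test double; a real adapter would wrap the Azure Service Bus or AWS SNS SDK."""
    def __init__(self) -> None:
        self.messages: dict[str, list[bytes]] = {}

    def publish(self, topic: str, payload: bytes) -> None:
        self.messages.setdefault(topic, []).append(payload)

def place_order(bus: MessageBus, order_id: str) -> None:
    # Application code depends only on the abstraction, never on a vendor SDK.
    bus.publish("orders", order_id.encode())

bus = InMemoryBus()
place_order(bus, "order-42")
print(bus.messages["orders"])  # prints [b'order-42']
```

A real adapter would wrap the vendor SDK behind the same interface; the application code in `place_order` never changes when the broker does.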

We need to be aware that cloud lock-in also comes with advantages, especially from a time-to-market and cloud-economics point of view.

Cloud managed services like Azure App Service, Azure Service Bus, and Azure Cache for Redis help us build a platform in no time. The effort required to build the infrastructure and implement the NFRs (e.g., availability, redundancy, backup and recovery) is drastically reduced, backed by the SLAs provided for each managed service. It is also much easier for the support team to manage the off-the-shelf services offered by cloud vendors like Azure or AWS.

Should I go ALL-IN?

Going all-in with a single cloud vendor provides a high level of agility, and the effort required to manage the infrastructure, stack components like caches or databases, and middleware is very low. This approach leaves more time to invest in security, scalability, DR and HA, and in building competencies around the cloud managed services themselves.

Pros and Cons

The most important cons of a cloud lock-in are:

  • Dependency outages - Using a high number of cloud managed services increases the chance of a service outage. Even though we are backed by SLAs and a public track record, we need to be aware of this risk and mitigate it when necessary.
  • Leverage - When you are highly dependent on one cloud vendor, it is harder to negotiate the costs or what you get for that amount of money.
  • Migration - Using a high number of PaaS and SaaS services makes it harder to move to another cloud vendor.

The pros of cloud lock-in are:

  • Speed to market - Cloud managed services save us weeks of hard work implementing and managing different bits of our systems (e.g., Azure SQL Database, Azure Storage, Azure ML Studio).
  • Technical debt - Especially because of SaaS services and the integration packages, the technical debt of our systems is much lower.
  • Innovation - It is much easier to integrate a serverless or a NoSQL solution when you can just spin it up from the dashboard, with ZERO installation and configuration effort. The best example here is Azure ML Studio, which you can spin up in just a few minutes, ready to run your models and scripts.
  • Consistency - You have the same approach across your entire system. From the security or monitoring point of view, you can use the same approach for all system components.
  • Knowledge competencies - It is much easier for teams to work with the same services across multiple systems and to consolidate best practices and recommendations on how to integrate different components.

The real challenges

Large organizations often have a strategy to avoid cloud lock-in. This can be achieved at the program level, by ensuring that different parts of the system run on different cloud vendors, or by building systems that run (more or less) seamlessly on multiple cloud vendors.

The real challenge is finding the right balance between cloud managed services (PaaS and SaaS) and the ones you manage yourself.

In the last few years, adopting container orchestration solutions like Kubernetes, combined with tools like Dapr, has enabled us to achieve a higher level of interoperability with less effort and a high degree of loose coupling between cloud services and the application.

The off-the-shelf integration and abstraction layer that Dapr offers gives us the ability to switch between AWS SNS and Azure Service Bus without making changes at the application layer. Kubernetes enables us to run the same application inside AWS EKS or Azure AKS seamlessly.
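To make that concrete, here is a minimal sketch of the application side of Dapr's pub/sub building block. The application only talks to the local Dapr sidecar over HTTP (port 3500 by default); the component name used below (`messagebus`) is a hypothetical example, and whether it maps to Azure Service Bus or AWS SNS is decided by the component YAML deployed next to the app, not by this code:

```python
import json
import urllib.request

DAPR_PORT = 3500  # default HTTP port of the Dapr sidecar

def publish_url(component: str, topic: str) -> str:
    # Dapr's pub/sub publish endpoint: the component name comes from
    # the component YAML, so the broker can be swapped without code changes.
    return f"http://localhost:{DAPR_PORT}/v1.0/publish/{component}/{topic}"

def publish(component: str, topic: str, payload: dict) -> None:
    # Requires a running Dapr sidecar; shown here only to illustrate the call shape.
    req = urllib.request.Request(
        publish_url(component, topic),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# 'messagebus' is a hypothetical component name configured in YAML.
print(publish_url("messagebus", "orders"))
# prints http://localhost:3500/v1.0/publish/messagebus/orders
```

Swapping AWS SNS for Azure Service Bus then means redeploying a different component definition; the publish call above stays identical.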

There are many other ways to handle cloud lock-in, and each of them will be discussed in upcoming articles of this series.

Is cloud lock-in so bad?

YES and NO

There are many aspects to take into account when you decide what level of cloud lock-in you are willing to accept. Below is a list of items to consider when making such a decision:

  • Organization strategy related to cloud vendors
  • Regulation and Compliance 
  • Time-to-market
  • Learning curve
  • Infrastructure skills to manage all the components (e.g. DB, Storage, Cache, FW, ...)
  • System lifecycle (1, 3, 5, or 15 years)
  • Geographical deployment locations
  • Cost of managing the platform
  • Functional requirements 
  • NFRs
  • Architecture approach and design
  • Running costs (a high volume of consumption on a cloud provider can facilitate getting a better price - e.g., Azure Reserved Virtual Machine Instances)
  • Transfer costs across cloud vendors 
  • Security and Monitoring across cloud vendors

