
Microservices and Serverless Computing | Pros and Cons

We have all heard about the success of Netflix, Uber or eBay. All these companies built fault-tolerant systems on top of microservices and serverless computing. Nowadays, many teams are trying to replicate their success and build systems that take advantage of this new way of designing software solutions.

Systems built today are far more complex than those built 10 or 20 years ago. Not only is the complexity higher, but the non-functional requirements (NFRs) and SLAs are also tighter. Customers expect availability as close as possible to 100%, delivered with less money and smaller teams. This can be achieved only with the new paradigm of software development built on top of microservices and the serverless approach.

We are at the moment in time when architects and technical leads are realising the advantages that the serverless and microservices approaches bring. They are trying to adopt these new ways of designing applications as fast as possible and to replicate the success of companies like Amazon or Netflix.
In most cases, there are two camps, one represented by microservices fans and the other by serverless advocates. These two camps see the world only in terms of microservices or only in terms of serverless: nothing in between, nothing in the grey zone. For this reason, solutions are forced to follow the strict rules of one camp or the other.
The reality is somewhere in between. Current trends and improvements in serverless computing allow us to do things that were not possible before. The border between these two approaches is becoming thinner and thinner, which can create confusion when we need to decide what the right approach is. A serverless system can now run on any cloud provider and even on on-premises systems. Combined with the possibility of running a serverless function for hours, this can lead us to the wrong approach. The feature gap between serverless and microservices is shrinking, and it is harder and harder to draw a clear border.

To achieve success, we need new tools and new ways of developing and deploying code. The transition from the traditional ways of writing code to the new ones is not smooth; it can succeed only by understanding the advantages and disadvantages of each new paradigm. Only then can we design systems that take full advantage of them, and define migration blueprints from the current systems that fulfil the business needs.

Serverless
Serverless is a new execution model that focuses on functionality and less on infrastructure. It involves writing a function that encapsulates a piece of business logic and pushing it to AWS Lambda, Google Cloud Functions, Azure Functions or even IBM Cloud Functions (built on Apache OpenWhisk). The function is event-triggered and, in general, its context exists only during the execution (it is ephemeral). The infrastructure behind the scenes is managed entirely by the provider (in general a cloud provider), and the client pays only for the number of executions and the compute power consumed.
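As a concrete illustration, here is a minimal Lambda-style handler in Python. The event shape and field names are made up for this example, but the pattern is the core of the model: a stateless function triggered by an event, returning a response.

```python
import json

# A minimal Lambda-style handler: all state comes in through the event,
# and the function returns a response instead of keeping local state.
def handler(event, context):
    items = event.get("items", [])
    total = sum(item["price"] * item["qty"] for item in items)
    return {"statusCode": 200, "body": json.dumps({"total": total})}
```

Locally, it can be exercised by calling it directly, e.g. `handler({"items": [{"price": 2.0, "qty": 3}]}, None)`; in the cloud, the provider wires the event source and the context for you.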

Microservices
This well-known architecture pattern, which has been a buzzword for the last few years, is an architectural style that structures the application as a collection of services that are loosely coupled, independently deployable, highly maintainable and testable, and organised around business needs and capabilities. It enables us to manage complex and large systems using a continuous delivery and deployment approach, where systems can evolve and transform at their own pace.

Pros and Cons of Serverless
One of the significant advantages of the serverless approach is the price. It tends to be lower in comparison with a microservices approach: we do not need to pay for all the cluster nodes; we pay only for what we consume. Even so, when you have functions that run for an extended period, the costs can be close to, or even higher than, a microservices setup. Cost forecasting is not an easy job when you do not have fixed costs. Estimations need to be based on assumptions that are as close as possible to reality; otherwise, costs can explode quickly.
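To make the forecasting point concrete, a back-of-the-envelope comparison can be scripted. The prices below are hypothetical placeholders, not real provider rates; always check your provider's rate card.

```python
# Hypothetical prices, for illustration only.
PRICE_PER_INVOCATION = 0.0000002    # per request
PRICE_PER_GB_SECOND = 0.0000166667  # per GB-second of compute
NODE_PRICE_PER_HOUR = 0.05          # per always-on cluster node
HOURS_PER_MONTH = 730

def serverless_monthly_cost(invocations, avg_duration_s, memory_gb):
    request_cost = invocations * PRICE_PER_INVOCATION
    compute_cost = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

def cluster_monthly_cost(nodes):
    return nodes * NODE_PRICE_PER_HOUR * HOURS_PER_MONTH
```

With these numbers, a million 200 ms invocations at 512 MB costs a couple of dollars per month, far below a small fixed cluster; but the same million invocations running for an hour each would dwarf the cluster price. The break-even point depends entirely on the assumptions you feed in.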

Short-lived functions work great in the serverless world. In general, the recommendation is to have functions that run for only a few seconds. Many cloud providers have hard limits of around 300 seconds. Even so, these hard limits are starting to be raised or even removed when you request a dedicated environment for your serverless solution. This can create confusion when you need to decide on the right approach for implementing your business logic.
Durable functions allow us to chain multiple functions that can wait for each other. This means that implementing logic with external dependencies is much simpler, and we don't need to rely only on message- and event-based communication.
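The chaining idea can be sketched without any cloud SDK: an orchestrator generator yields activity calls, and each result feeds the next step. Real platforms (for example, Azure Durable Functions) replay such an orchestrator with checkpointing; the activity names below are purely illustrative.

```python
# Illustrative sketch of the durable-functions chaining pattern: the
# orchestrator yields (activity_name, input) tuples and receives each
# activity's result back, so steps can wait for one another.
def validate_order(order):
    return {**order, "valid": order["amount"] > 0}

def charge_payment(order):
    return {**order, "charged": order["valid"]}

ACTIVITIES = {"ValidateOrder": validate_order, "ChargePayment": charge_payment}

def orchestrator(order):
    order = yield ("ValidateOrder", order)
    order = yield ("ChargePayment", order)
    return order

def run(orchestration, payload):
    gen = orchestration(payload)
    step = gen.send(None)
    try:
        while True:
            name, argument = step
            step = gen.send(ACTIVITIES[name](argument))
    except StopIteration as done:
        return done.value
```

The point of the pattern is that the orchestrator reads like sequential code while each activity remains an independently deployable function.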
A serverless function is available only as a private API. To make it publicly available, we need to configure an API gateway. Many cloud providers have started working on this, enabling us to expose our functions to the public internet over the HTTP(S) protocol.
The number of external dependencies on different libraries can be pretty high. On top of this, not all the libraries are written on the same technology stack. Managing them in a serverless approach can be a nightmare and can require hybrid approaches between microservices and serverless. When the level of dependencies is low and uncomplicated, a serverless approach can be a success story. However, when we need to deal with legacy components and dependencies, a plain serverless approach might not be our best option.
Setting up a serverless environment is easy and does not require many actions on our side, in comparison with a microservices approach, where we need to be aware of cluster size, system load, and how the microservices communicate with each other and with the external world. Even so, because spinning up new instances of a serverless function is so easy, we can easily end up forgetting about them and even losing control.
At this moment, most serverless platforms have a hard limit of around 300 seconds per execution. There are some business scenarios where this is not enough, forcing us to go with a microservices approach. Serverless computing on top of a dedicated environment is starting to allow us to run functions for as long as we want, breaking the timeout limit. At the same time, this makes it easier to design the system in the wrong way and end up with a 'monolithic' solution on top of serverless computing.
Serverless computing promises dynamic scaling. We need to focus only on the business problem and our code, and less on everything else. Dynamic scaling works excellently, enabling us to respond to customer needs without designing a complex solution. Even so, any error in our system can become a scaling problem, so scaling boundaries should be configured.
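One simple way to think about such a boundary is clamping the scaling decision between a minimum and a maximum. A hypothetical sketch, with made-up capacity numbers:

```python
def desired_instances(inflight_requests, per_instance_capacity,
                      min_instances=1, max_instances=50):
    # Ceiling division: how many instances the current load would need.
    needed = -(-inflight_requests // per_instance_capacity)
    # Clamp so a retry storm or a bug cannot scale the system without bound.
    return max(min_instances, min(max_instances, needed))
```

Managed platforms expose the same idea as configuration rather than code; the key point is that "scale to whatever the load asks for" must have an upper limit that you choose deliberately.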

Pros and Cons of Microservices
It is easy to misunderstand the concepts of containers and microservices. Using containers does not mean that you are doing microservices: inside a container, you can still have a classical monolithic application deployed.
Similar to the serverless approach, having multiple microservices to fulfil a business use case can create a dependency tree that is hard to maintain and can add performance issues. For example, jumping between 8-9 different microservices just to check user credentials is not always the best approach.
From the service instance perspective, instances run in the same sandbox in all environments. Issues caused by differences in environment or software configuration are not common in the microservices world: each service has the same software configuration inside the container in every environment.
Scaling such a system can be done and controlled at the level of each service. This helps us optimise how computing power is consumed. The machines that are part of the cluster can be managed independently, allowing us to change the size of the cluster or the machine types without affecting the services running on top of it.
The microservices approach enables us to change the implementation of a service, or run two different implementations of the same business functionality in parallel, without affecting the end customer. Combined with the load-balancer layer and the execution environment, this makes microservices flexible and robust. In comparison with serverless, the microservices approach requires a few extra layers to be configured and managed, but in most cases a microservice orchestration layer can solve this problem.
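Running two implementations in parallel usually comes down to weighted routing at the load-balancer layer. A deterministic sketch of the idea, where the version names and percentage are invented for the example:

```python
import zlib

def route(request_id: str, canary_percent: int = 10) -> str:
    # Hash the request id into 100 buckets so the same caller always
    # lands on the same implementation version.
    bucket = zlib.crc32(request_id.encode("utf-8")) % 100
    return "v2" if bucket < canary_percent else "v1"
```

Raising `canary_percent` gradually shifts traffic from the old implementation to the new one, and dropping it to zero rolls everything back without redeploying either service.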
Middleware is not supported out of the box in a microservices architecture. You often need to decide what kind of middleware you need for internal and external communication. There are so many approaches available, from direct calls to event- and message-based communication, that teams are not always able to make the right decision. In addition, cloud services like AWS API Gateway and Azure API Management offer external middleware that is cheap and scalable. In most cases, the problem is not the middleware itself but how it is used.
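The contrast between direct calls and message-based communication can be shown with a tiny in-process publish/subscribe bus. A real system would use a broker such as RabbitMQ or AWS SQS; the topic name below is invented.

```python
from collections import defaultdict

class MessageBus:
    """Tiny in-process stand-in for message-based middleware."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # The publisher does not know (or care) who consumes the message,
        # which is exactly the decoupling a direct call does not give you.
        for handler in self._subscribers[topic]:
            handler(message)
```

A billing service can subscribe to `"order.created"` without the ordering service ever holding a reference to it; adding a second consumer later requires no change to the publisher.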
Changing the mindset from a monolithic approach to microservices requires changing the way we design systems. Migrating from a single database to multiple databases, where data is duplicated, is a mindset change that requires time to understand how services should be coupled and how data dependencies should be managed.
In general, each service is independent of the rest. This enables us to manage each service as an individual project, with its own artefacts, pipeline and configuration. As with serverless approaches, cross-service communication is crucial, and having a stable interface is vital for the project's success.

Final comparison
An approach using microservices includes overhead generated by:
·         The operating system
·         Maintenance and support (e.g. operating system updates, security patches)
·         Monitoring of the operating system
·         The deployment mechanism for the operating system
·         Deployment and configuration of the application
·         Infrastructure management
Even so, a serverless approach can still have some limitations, such as:
·         The need to use external services to deliver the same functionality
·         The need to overcome limitations of disk space, RAM and execution duration
·         Legacy dependencies on different stacks or systems
Some of these can be overcome with new serverless functionality like durable functions, state machines, and the removal of execution time limits. Besides this, solutions like Kubeless are changing the way we look at serverless computing.

Hybrid solutions
A hybrid approach that combines microservices with serverless might be the key to success. It enables us to migrate legacy systems and run all our external dependencies in a controlled environment. In the meantime, we can use serverless and microservices architectures for the current and future needs of the system. Multiple technology stacks can be combined inside containers, and serverless can be used where necessary.

Kubernetes with Kubeless is just one example of how the two architecture styles are merging, enabling us to have a single physical infrastructure that supports both approaches. One of the downsides of such an approach is that combining two different architecture styles can create confusion at the team level.

Conclusion
Serverless architecture is exciting and interesting. Even so, it comes with some limitations. Microservices are already mature and can fulfil most business and technology requirements. The key to success is understanding the business requirements and the expectations of the system we design. Merging different architecture styles and technologies is acceptable as long as we are aware not only of the benefits but also of the trade-offs.
