A subjective feature comparison between Terraform and Azure ARM / AWS CloudFormation

The most common question when you start a cloud project is
“Shall I use Terraform or a native tool like ARM or CloudFormation?”

No single answer fits all needs, and there are many things to consider. In this article, we tackle this topic from multiple dimensions, raising awareness of the things we need to consider and the pros and cons of the different approaches. The feedback was collected from various teams that have experience with Terraform and ARM or CloudFormation.

Configuration Language 
The ARM template language is based on JSON, which is easy to use but can sometimes be a little cumbersome. Even so, because it is a notation language, you can split the configuration across multiple files (e.g. keeping variables separate, using nested templates). The support for conditions makes ARM very powerful - once you learn the notation language. CloudFormation uses YAML or JSON, which is also quite powerful, but compared with ARM, the feedback was that people would prefer to work with ARM because it is clearer.
CloudFormation's support for conditions and parameter references is better than HCL's. It also supports wait conditions and the ability to create custom policies, which cannot be found in HCL. Terraform uses HCL, an interpolation-based syntax. This, combined with the whole toolset offered by Terraform and other plugins, makes the experience feel very natural. Because of the interpolations, there are some extra capabilities, like conditional structures, that you cannot easily find in CloudFormation.
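As a minimal sketch of what such a conditional structure looks like in HCL: the example below uses the common count trick to create a resource only when a flag is set. The variable name and resource values are assumptions made for illustration, not taken from any real deployment.

```hcl
# Hypothetical example: create a public IP only when the (assumed)
# variable var.expose_public is true. count = 0 skips the resource entirely.
variable "expose_public" {
  type    = bool
  default = false
}

resource "azurerm_public_ip" "example" {
  count               = var.expose_public ? 1 : 0
  name                = "example-ip"
  location            = "westeurope"
  resource_group_name = "example-rg"
  allocation_method   = "Static"
}
```

The same effect in ARM or CloudFormation requires a declared condition element, which is more verbose but arguably more explicit.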

Code Readability 
In ARM you have limitations related to how you can add comments, but at the same time you don't have so many "$" signs and "{ }" pairs. ARM files sometimes end up being long, and developers even forget about the nesting ability. ARM can validate a script and provide strong feedback that goes beyond the syntax itself. The ability to validate CloudFormation scripts is limited and provides feedback only on the syntax, but you can use the change sets feature to review and apply only the changes. A similar change-set ability is also provided by Terraform.
Terraform's modularity enables people to write code that is more readable and easier to understand. In the beginning, the "$" signs can be annoying, but once you accept them, you can work with them pretty well. There are some limitations in the validation step: you get code (syntax) validation, but not logic validation. Because of this, you can often end up identifying issues only at the deployment step.
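One way to push some of that logic validation earlier is a variable validation block, which Terraform checks at plan time rather than at apply time. The variable name and the allowed values below are assumptions chosen for the sketch:

```hcl
# Hypothetical example: fail at plan time if the (assumed) environment
# variable is not one of the expected values, instead of failing mid-deploy.
variable "environment" {
  type = string

  validation {
    condition     = contains(["dev", "test", "prod"], var.environment)
    error_message = "environment must be one of: dev, test, prod."
  }
}
```

This does not replace full logic validation, but it moves a class of mistakes from deployment time to plan time.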

Exceptions and Error Tracking 
ARM and CloudFormation have clear error messages, and the chances of an error being hidden by another exception are low. In general, you will find official support from the cloud providers and the community. Terraform is not so friendly with error messages. If your journey with HCL is just beginning, you will find it hard to understand the message and identify the issue.

Audit and tracking capabilities 
In ARM, you have the full change history logged inside Azure Monitor Log Analytics. The same applies to AWS, where you have CloudTrail. All changes made from HCL are also logged inside Azure Monitor Log Analytics or CloudTrail, because you are changing the infrastructure or service configuration. In addition, you have the full API call log from Terraform, but in most cases you will prefer to use CloudTrail and Azure Monitor.

Deployment monitoring 
ARM provides the ability to track deployment progress from the Azure Portal, showing exactly which step you are at and what events happened. You get a similar experience from the AWS Console for CloudFormation. Both are well integrated with the audit and monitoring capabilities of the native systems.
Terraform is not able to provide this kind of insight into deployment progress, so the overview is limited. Some plugins offer a similar experience, but the full experience is available only with ARM and CloudFormation.

Tooling support 
The best experience with ARM is Visual Studio together with the ARM Tools plugin. The experience is consistent across the whole development and deployment cycle. CloudFormation offers a comparable experience if you use the IntelliJ IDEA plugin for CloudFormation. The plugin is pretty good, but it cannot be compared with the experience you have in Visual Studio with ARM.
Terraform is well integrated with IntelliJ IDEA and offers an experience similar to CloudFormation's, giving you recommendations for all features, parameters, local variables and more.

Modules
The module experience inside ARM is provided by nested templates. That works pretty nicely; you can even import them from different sources (e.g. files, external URLs). The downside, which ARM shares with CloudFormation, is the requirement that all nested templates be accessible to the ARM engine during deployment.
CloudFormation enables you to build modules using nested stacks and cross-stack references, but because you need to store them inside S3, this can increase complexity, especially when versioning is required. Terraform supports modularity natively, and the ability to work with modules is fantastic. It is one of the best options you have available when you want to build reusable modules.
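To make the Terraform side concrete, here is a minimal sketch of calling a local module. The module path, its input variable and its output are all assumptions invented for the example; a real module would define these in its own directory:

```hcl
# Hypothetical module call: "./modules/network" is an assumed local path
# containing its own variables.tf, main.tf and outputs.tf.
module "network" {
  source    = "./modules/network"
  vnet_cidr = "10.0.0.0/16"
}

# Consume a value the (assumed) module exposes as an output.
output "subnet_ids" {
  value = module.network.subnet_ids
}
```

Because the source can also point at a Git repository or a registry with a version constraint, versioning modules is considerably simpler than versioning nested stacks in S3.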

Versioning and management of the state
CloudFormation can keep a history of state changes, enabling us to roll back to a previous version. Even though ARM is not fully stateful, there is the ability to revert to the last working version. A similar capability is also provided by Terraform.
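For Terraform, keeping a history of the state usually comes down to where the state file lives. A common approach, sketched below with placeholder names, is a remote S3 backend on a bucket that has object versioning enabled, so every state change is retained and an earlier version can be restored:

```hcl
# Sketch of a remote state backend. The bucket name, key and region are
# placeholders; the bucket is assumed to have S3 object versioning enabled,
# which keeps every previous terraform.tfstate as a restorable version.
terraform {
  backend "s3" {
    bucket = "example-terraform-state"
    key    = "prod/terraform.tfstate"
    region = "eu-west-1"
  }
}
```
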

Validation mechanism 
Inside ARM, you can validate the syntax with a dedicated command, and you also have the ability to verify the template against its schema. The deployment itself is validated only during the deployment phase, so you might still have surprises at that point.
CloudFormation is similar to ARM, enabling us to validate the stack: the syntax itself together with a change review against an existing deployment. Terraform's validation mechanism is built on top of a deployment plan: it calculates the differences between the configuration and the current deployment and provides feedback.

You can do versioning on top of ARM templates, but because of the readability of JSON files, the experience is not perfect. The same goes for CloudFormation, where YAML and JSON are not so easy to read. By comparison, the ability to use functions inside Terraform makes maintenance easier and creates a better experience for the teams.

Multi-cloud support
From this point of view, it is pretty clear that the only tool that provides this ability is Terraform, where you can reuse your knowledge across cloud providers. It is very useful when you have a cross-cloud strategy and need to be able to deploy the same infrastructure on multiple cloud providers.

Overall comparison 
Below you can find a matrix that compares the available options from each point of view, providing a scoring board. The scoring is subjective; depending on the NFRs and the quality attributes that are relevant for your system, the score could look different.

I would not recommend comparing the total score of each tool. It is more important to compare the score for each feature and identify the ones that matter to you.
Remember that the decision should be taken by you, considering your specific needs and priorities, where team skills, multi-cloud requirements and the complexity of the IaC are important.

In the next post, we will discuss which solution to use in different situations.
