
Deep dive into cloud providers' SLAs

In the era of the cloud we are bombarded with different cloud services. New features from cloud providers are launched every day and prices are dropping every month. At this moment the best-known cloud providers are Amazon, Google and Microsoft.
Looking over their services we will see SLAs (Service Level Agreements) that reach 99.9% availability, 99.95% availability or even 99.99% availability. This article will dive into cloud providers' SLAs, trying to explain why SLAs are so important, what benefits they bring and, last but not least, how much money we could get back if a service goes down.

What does an SLA mean?
“A service level agreement (SLA) is a contract between a service provider (either internal or external) and the end user that defines the level of service expected from the service provider. SLAs are output-based in that their purpose is specifically to define what the customer will receive. SLAs do not define how the service itself is provided or delivered.”
Source: https://www.paloaltonetworks.com


An SLA is a contract between a service provider and the consumer that specifies the 'quality' of the service that will be provided to the consumer. For example, if we think about a service that gives you the exact time, the SLA will define how much of the year the service will be up and running (for example 99.99%).
Besides that, the SLA defines what warranties are provided if the SLA is not met. For example, if the exact time service is down more than 0.01% of the month, the service provider will reduce the total bill by 50%.
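To get a feeling for what these percentages mean in practice, here is a minimal sketch (the SLA levels used are only illustrative) that converts an uptime percentage into the maximum allowed downtime per month and per year:

    using System;

    class SlaDowntime
    {
        static void Main()
        {
            // Illustrative SLA levels; the real value depends on the concrete service.
            double[] slaLevels = { 99.0, 99.9, 99.95, 99.99 };

            const double minutesPerMonth = 30 * 24 * 60;   // ~43,200 minutes
            const double minutesPerYear = 365 * 24 * 60;   // ~525,600 minutes

            foreach (double sla in slaLevels)
            {
                double downFraction = (100.0 - sla) / 100.0;
                Console.WriteLine("{0}% uptime -> max {1:F1} min/month, {2:F1} min/year of downtime",
                    sla, downFraction * minutesPerMonth, downFraction * minutesPerYear);
            }
        }
    }

Running it shows, for example, that 99.9% still allows around 43 minutes of downtime per month, while 99.99% allows only around 4 minutes.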
What areas are covered?
Depending on what type of service or business we are talking about, the things that are covered can differ. It is very common for an SLA to cover the following attributes of a service:
Volume
Speed
Responsiveness
Efficiency
Quality of work
Looking again at our example with the exact time service, we could have an SLA that says: “The exact time service is up 99.99% of the year, the response time from the moment when a request hits the service is 0.0001 seconds and the time precision is 0.00000001 seconds.”

Cloud SLAs
In general, if we are talking about cloud providers and the services they offer, the one area covered by all of them is uptime. Besides uptime, other areas are covered as well, but they vary based on the service type.
Microsoft, Google and Amazon each have a clear SLA that specifies the uptime for every service they provide. Even if they are different companies, the SLAs are very similar to each other.
For example, if we look at a cloud storage service that is not replicated to different datacenters or nodes, we will discover that Google offers a 99.9% uptime SLA, Microsoft offers a 99.9% uptime SLA and Amazon offers a 99.95% uptime SLA (with the remark that if we use Read-Access Geo-Redundant Storage from Microsoft we can reach even 99.99%).
As we can see in the above example, the SLAs are very similar, within +/-0.05%.

How is the service measured?
This question is very important, because each cloud provider's SLA specifies very clearly how, by whom, when and based on what the service SLA is measured.
In all cases the service uptime is measured internally, by the provider's own systems. This doesn't mean that the measurement is not real. It is very real, but if the cause of the downtime is an external factor, like network issues on the client side, then it is not the provider's problem, and that is normal.
The SLA is also not applicable when the service is used outside specific boundaries. For example, an SLA may apply only when the number of requests per second is under 10 million, or when there are at least two instances of a deployment.

Warranty
Very often people assume that if they have a service in the cloud that generates $1,000 per hour, in the case of a downtime the provider will give them the amount of money that the service would have generated. Another wrong assumption is that a cloud provider will cover all the losses generated by a downtime.
Both assumptions are wrong. At this moment I don't know of a cloud provider or a service provider that would cover the losses caused by a downtime.
It might sound strange, but it is normal. First of all, it is hard to measure and calculate the loss, and secondly, the SLA refers to the service that you are using, not to the system and services that you provide on top of it.
Google, Microsoft and Amazon offer very similar warranties. Based on the downtime measured at the end of the month, a specific amount of service credit is offered to the customer. For example, if the uptime of a service was under 99.9%, the customer will receive 25% of that service's cost for that specific month as credit. This credit will be used to reduce the bill in the following month.
The SLAs also specify that if a specific incident or event causes downtime to more than one cloud service, the client can submit a claim for only one of the services affected by that event.
For example, if a datacenter goes down because of a software update that affects storage, computation and messaging systems, then the customer can claim the credit for only one of those services.

Amazon, Google and Microsoft warranties
Let's take a look at the warranties offered by these cloud providers in the case of a downtime of their storage services.
Amazon
Monthly Uptime Percentage                 Service Credit Percentage
Less than 99.95% but at least 99.0%       10%
Less than 99.0%                           30%

Google
Monthly Uptime Percentage                 Service Credit Percentage
At least 99.0% but less than 99.9%        10%
At least 95.0% but less than 99.0%        25%
Less than 95.0%                           50%

Microsoft
Monthly Uptime Percentage                 Service Credit Percentage
Less than 99.99% but at least 99%         10%
Less than 99%                             25%

The offers are not 100% the same, but they are pretty similar. Even if Google offers a 50% storage credit in the case of a downtime, I don't want to be in the situation where the uptime is only 90%, for example. The credit that is offered when the uptime is between 99% and 99.xx% is the same everywhere. Every '9' that is offered on top of 99% is very expensive and hard to obtain. Those nines represent the real battle and can make the difference between a service and a great service.
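To make the credit mechanism concrete, here is a minimal sketch that maps a measured monthly uptime to a service credit, using the Google storage tiers from the table above (the bill amount and the measured uptime are made-up values; for the other providers only the tier constants would change):

    using System;

    class ServiceCredit
    {
        // Credit tiers taken from the Google table above.
        static double CreditPercentage(double monthlyUptime)
        {
            if (monthlyUptime >= 99.9) return 0;    // SLA met, no credit
            if (monthlyUptime >= 99.0) return 10;
            if (monthlyUptime >= 95.0) return 25;
            return 50;
        }

        static void Main()
        {
            double monthlyBill = 1000.0;    // illustrative monthly storage cost, in dollars
            double measuredUptime = 98.7;   // illustrative measured uptime, in percent

            double credit = monthlyBill * CreditPercentage(measuredUptime) / 100.0;
            Console.WriteLine("Uptime of {0}% -> {1} dollars credit on next month's bill",
                measuredUptime, credit);
        }
    }

For a $1,000 monthly bill and a measured uptime of 98.7%, the sketch yields a credit of $250, not the revenue the service would have generated during the downtime.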

When and how do I receive the credit?
All cloud providers have different mechanisms to notify customers when a service is not working as expected (over a website, using an API or via email). In all these cases, even if a service is down for longer than what is specified in the SLA, the customer will not receive the credit we talked about above by default.
When customers are affected by an incident, they need to notify the cloud provider and open a ticket with customer support. They need to specify what service was affected and when. Based on this information, the cloud provider will check its internal audit system and the error rate during that specific time interval.

Trust
This is the key word around cloud providers and their customers. The most important thing is the trust that exists between them. We, the customers, trust that our cloud providers will respect their SLAs. The same thing happens with any external provider.
In general, all the SLAs offered by cloud providers are respected. The cases when incidents occur are very rare and isolated.

Cloud service uptime is not our product uptime
An important thing that we need to take into consideration is that when we build a product on top of the cloud, the uptime of our system is not the same as the uptime of the cloud services.
For example, if we have a product that is built using 20 cloud services, then the uptime of our system needs to be calculated taking into consideration the uptime of all of them. If each cloud service has an uptime of 99.9% and all of them are needed for the product to work, the uptime of our system drops to around 98%.
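A minimal sketch of that calculation, assuming the failures are independent and all 20 services sit in the critical path (both are simplifying assumptions):

    using System;

    class CompoundUptime
    {
        static void Main()
        {
            int serviceCount = 20;          // cloud services in the critical path
            double serviceUptime = 0.999;   // 99.9% uptime for each service

            // If every service must be up for the product to work and the
            // failures are independent, the individual uptimes multiply.
            double productUptime = Math.Pow(serviceUptime, serviceCount);

            Console.WriteLine("Product uptime: {0:P2}", productUptime);
            // Prints roughly 98.02%: two nines, even though each service has three.
        }
    }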

Conclusion
As we have seen above, the SLAs offered by different cloud providers are pretty similar. The most important thing is to know exactly what the SLA covers and how to handle downtime periods.

References
Amazon EC2 SLA: http://aws.amazon.com/ec2/sla/
Google SLA: https://cloud.google.com/storage/sla
Microsoft SLA: http://azure.microsoft.com/en-us/support/legal/sla/
