
Azure CDN – Things that are not available (yet)

Last week I had some interesting discussions around payload delivery and CDNs. I realized how easily people can misunderstand certain features or functionality, assuming behavior that is not actually there.
In this context, I will write three posts about Azure CDN, focusing on what is available, what we can do, and what we cannot do.


I was involved in many discussions where people assumed that certain features are available on Azure CDN and were ready to change their current architecture based on those wrong assumptions. Let's take a look at some features or functionality that we don't have on Azure CDN, but that many of us assume we have.

SAS Support for Blob Storage 
The most common functionality that people think is available on Azure CDN is Shared Access Signature (SAS) support for Blob Storage. SAS is one of the most used and powerful features of Blob Storage. On Azure CDN we have the ability to cache a blob using the full URL, including the SAS token.
Maybe because of this, people have the impression that Azure CDN will take the SAS validity into consideration and, once the SAS expires, will also invalidate the cache.
The reality is that, at this moment, Azure CDN treats a URL to an Azure Blob that carries a SAS like any other URL. If the content can be accessed, the CDN will copy the payload and replicate it to its edge nodes. The content will be removed from the CDN nodes only when the TTL value on the CDN expires, which has no connection with the SAS.
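To make this behavior concrete, here is a minimal Python sketch (a toy model, not Azure code; all names are invented for illustration) of an edge cache that keeps an entry for its TTL and never re-checks the SAS embedded in the cached URL:

```python
import time

class CdnCacheSketch:
    """Toy model of a CDN edge cache: an entry lives for a fixed TTL,
    regardless of any SAS expiry encoded in the cached URL."""

    def __init__(self, ttl_seconds, clock=time.time):
        self.ttl = ttl_seconds
        self.clock = clock
        self._cache = {}  # url -> (payload, cached_at)

    def get(self, url, origin_fetch):
        now = self.clock()
        entry = self._cache.get(url)
        if entry and now - entry[1] < self.ttl:
            return entry[0]          # served from the edge; the SAS is never re-validated
        payload = origin_fetch(url)  # the origin checks the SAS only on this first hit
        self._cache[url] = (payload, now)
        return payload
```

With a TTL of one hour and a SAS that expires after one minute, the second request below still succeeds from cache, even though the origin would already reject the expired SAS.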



HTTPS for Custom DNS Domains
Even if Azure CDN has support for custom DNS domains and also has support for HTTPS, it doesn't mean that you can use HTTPS with your own domain name. It sounds strange, but it can be confusing when two features are supported individually while the combination of them is not.
This means that if you need HTTPS, you will need to use the default Azure CDN domain name. The good news is that this is one of the most voted feature requests for Azure CDN on UserVoice, so we should have support for it very soon.

HTTPS Client Certificates
When you are using Azure CDN, it is important to remember that even though there is support for HTTPS, there is no support at this moment for bringing your own certificates. You can only use the SSL certificate provided by the CDN.
This is also why we don't yet have HTTPS support on custom DNS domains for Azure CDN, but things should get much better very soon.

HTTP/2
HTTP/2, the new protocol of the internet, if we can call it that, is not yet supported. In general, this is not a blocker, except if you have a complex web application where you want to minimize the number of requests made by the client browser.
For these situations, working with Azure CDN might not be so simple, but there are workarounds.
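One classic workaround is bundling: concatenating many small static assets into a single file so the browser issues one request instead of many, which is what HTTP/2 multiplexing would otherwise save you. A minimal sketch (the function name and the separator comments are my own, not any Azure API):

```python
from pathlib import Path

def bundle_assets(paths, out_path):
    """Concatenate several static assets into a single file so that the
    browser makes one request instead of many (an HTTP/1.1-era workaround
    for the missing HTTP/2 multiplexing)."""
    parts = []
    for p in paths:
        # Keep a marker comment so the origin of each chunk stays visible.
        parts.append(f"/* --- {p} --- */\n" + Path(p).read_text())
    Path(out_path).write_text("\n".join(parts))
    return out_path
```

The bundle can then be uploaded to Blob Storage and cached by the CDN as a single object.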

Fallback to custom URI
Some CDN providers give us the possibility to specify a URI that can be used as a fallback when the content is not found in the original location. It is useful when you are working with an application where availability is critical and you need to be able to provide a valid location for all content.
Of course, with a little imagination you can solve this problem pretty simply: put Azure Traffic Manager between Azure CDN and your content, which allows you to specify a prioritized list of endpoints.
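The same idea can also live in the client. Here is a small Python sketch (a hypothetical helper, not part of any Azure SDK) that tries each URI in order and returns the first successful response:

```python
def fetch_with_fallback(uris, fetch):
    """Try each URI in priority order and return the first payload that
    `fetch` delivers. `fetch` is any callable that returns the content
    or raises an exception on failure (e.g. a 404 from the origin)."""
    last_error = None
    for uri in uris:
        try:
            return fetch(uri)
        except Exception as err:
            last_error = err  # remember the failure and try the next URI
    raise last_error  # every location failed; surface the last error
```

In practice, `fetch` would wrap an HTTP GET, and the list would contain the primary CDN URL followed by one or more backup locations.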

Access logs of CDNs are raw data
Many people are requesting access to raw log data, similar to what Azure Storage offers. This might be a useful feature, but at the same time I have some concerns related to it.
The main purpose of a CDN is to cache and serve frequently requested content from a location as near as possible to the client. It is used for cases where the RPS (requests per second) is very high. Generating a file with raw data for every request could be expensive, and we could end up with logs that are very large. Processing such files might be expensive as well.
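A quick back-of-envelope calculation shows why; the request rate and log-line size below are assumptions for illustration only:

```python
# Rough estimate of raw access-log volume at CDN scale.
rps = 20_000        # assumed sustained requests per second
line_bytes = 200    # assumed size of one access-log line
seconds_per_day = 86_400

bytes_per_day = rps * line_bytes * seconds_per_day
print(f"~{bytes_per_day / 1e9:.0f} GB of raw logs per day")  # ~346 GB/day
```

At these (assumed) rates, a single busy endpoint would produce hundreds of gigabytes of logs per day, before any processing cost.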

It is important to remember that part of these features are already planned by the Azure team, and in the near future we might find them in Azure CDN. Also, don't forget that an out-of-the-box solution is hard to find, but with a little imagination we can work around most of these gaps.
