
Azure CDN (Day 2 of 31)


The service that I selected for today is CDN. For those who do not know, CDN stands for Content Delivery (or Distribution) Network. The main goal of a CDN is to serve end-users who need high availability and high performance. It is a caching system, similar to a reverse proxy, but for static content.

Short Description
The content that can be cached on the current CDN is any kind of static content, including content that is already stored in Azure blobs. For example, it is very easy to use the CDN for pictures that are stored in Azure blobs.
The first question a person asks about a CDN service is where the nodes of the service are located. For any CDN customer it is very important to have a CDN network that is very close to their end-users and can cope with high traffic. I’m happy to say that the current CDN network is spread worldwide across the most important internet hubs. I expect to see more CDN nodes added in the future.
Below you can find a picture with the current locations of Azure CDN nodes. And YES, the nodes are not only in Azure data centers.
Azure CDN works like a normal CDN. Content is requested from the CDN; if the content is found on the CDN node, it is served from there, otherwise the service that owns the resource is called and the content is cached on the CDN.
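The flow above can be sketched in a few lines. This is only a toy simulation of the hit/miss behaviour, not the Azure CDN implementation; `origin_fetch` is a hypothetical stand-in for the service that owns the resource:

```python
# Toy model of the CDN request flow: serve from cache on a hit,
# call the origin and fill the cache on a miss.
cache = {}

def origin_fetch(url):
    # Hypothetical stand-in for the origin service call.
    return f"content-of-{url}"

def cdn_get(url):
    if url in cache:                 # cache hit: served from the CDN node
        return cache[url], "HIT"
    content = origin_fetch(url)      # cache miss: origin is called
    cache[url] = content             # content is stored on the CDN node
    return content, "MISS"

print(cdn_get("http://example.com/logo.png"))  # first request: MISS
print(cdn_get("http://example.com/logo.png"))  # second request: HIT
```

The second request for the same URL never reaches the origin, which is exactly the load reduction a CDN is for.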

Main Features
The static content that can be cached does not mean only images. You can also use the CDN for content like static pages with query strings. Content can be accessed using HTTP or HTTPS.
We also have the possibility to specify how long an item is stored on the CDN (the expiration time).

When you cache items with query strings, don’t forget that URLs with parameters are stored as string literals. This means that ‘http://[path]?name=Radu&status=Ok’ and ‘http://[path]?status=Ok&name=Radu’ are different content from the CDN’s perspective.
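A small sketch of the consequence, using hypothetical URLs: the same parameters in a different order produce two separate cache entries, which you can avoid by canonicalizing the query string on the client side before building links:

```python
from urllib.parse import urlencode, urlparse, parse_qsl

# The CDN treats the full URL, query string included, as an opaque string,
# so these two URLs are cached as two different pieces of content:
url_a = "http://example.com/page?name=Radu&status=Ok"
url_b = "http://example.com/page?status=Ok&name=Radu"
print(url_a == url_b)  # False

def canonical(url):
    """Rebuild the URL with its query parameters in sorted order."""
    parts = urlparse(url)
    query = urlencode(sorted(parse_qsl(parts.query)))
    return parts._replace(query=query).geturl()

# After canonicalization both forms map to one cache key:
print(canonical(url_a) == canonical(url_b))  # True
```

Always generating URLs in one fixed parameter order keeps the hit ratio up and avoids storing the same content twice on the CDN.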
When you are using the CDN in combination with blobs, it is recommended that blobs be smaller than 10 GB.
Custom domains are supported only for HTTP. When you want to use the CDN over HTTPS, you need to use the certificate provided by the CDN – custom certificates are not supported at this moment. Custom CNAMEs are not supported either, so you need to use the CDN name.
When HTTPS is enabled on the CDN, the content can still be retrieved over HTTP as well.

Applicable Use Cases
Below, I will present four use cases where I would use Azure CDN.

Newsletters
I would use Azure CDN for all the content related to newsletters (HTML, images and so on). Why? Because this is static content that reaches a lot of people around the world in a very short period of time.

Image Sharing
If you have a web application used by users to share images, then Azure CDN can be used successfully to reduce the load on your backend.

SPA Application
A real SPA application contains only HTML/CSS/JavaScript and some content. All of this content can be stored on CDNs; you don’t need to serve JavaScript files from your own machines. You can use Azure Blobs and Azure CDN successfully here.

Software Updates
If you have updates for different systems that are publicly available, then this is the perfect way to deliver them to end-clients without a high load on your back-end infrastructure.

Code Sample
How to use CDNs in combination with Azure Blobs
I was impressed to discover how easily you can use the CDN service with blobs. Once you activate the CDN service, you only need a few small steps:
  • Make the blob container public (to be accessible by anybody)
  • Configure CDN to cache the blob content
  • Use CDN URL format or Blob URL format
CDN URL: http://[myCDNServiceName].vo.msecnd.net/[containerName]/[resourceName]
Azure Blob URL: http://[myAccount].blob.core.windows.net/[containerName]/[resourceName]
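With hypothetical names filled in (‘az123456’ as the CDN endpoint name and ‘mystorageaccount’ as the storage account – substitute your own values), the two formats can be compared side by side:

```python
# Hypothetical names, for illustration only.
cdn_endpoint = "az123456"       # assumed CDN endpoint name
account = "mystorageaccount"    # assumed storage account name
container = "images"
resource = "logo.png"

# Same container and resource, reachable through two hosts:
blob_url = f"http://{account}.blob.core.windows.net/{container}/{resource}"
cdn_url = f"http://{cdn_endpoint}.vo.msecnd.net/{container}/{resource}"
print(blob_url)
print(cdn_url)
```

Note that only the host part differs; the container and resource path stay the same, which makes switching existing links to the CDN straightforward.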

How to use CDNs in combination with a standard service
To be able to do this you need to have port 80 open for HTTP requests. On top of this, you will need to create a folder called ‘cdn’ under the service path, where you put all the content that you want to cache, or you can deliver it to the CDN directly. As for blobs, you will need to specify to the CDN what the service path is.
Once this is done, you can access the content through the CDN path.
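As an illustration of the ‘cdn’ folder convention (with hypothetical service and endpoint names): content served from /cdn on the origin appears at the root of the CDN path:

```python
# Hypothetical origin and CDN endpoint names, for illustration only.
origin = "http://myservice.cloudapp.net/cdn"
cdn_root = "http://az123456.vo.msecnd.net"

def cdn_path(origin_url):
    """Map an origin URL under the 'cdn' folder to its CDN URL."""
    assert origin_url.startswith(origin + "/")
    # Everything after the 'cdn' folder keeps its relative path.
    return cdn_root + origin_url[len(origin):]

print(cdn_path("http://myservice.cloudapp.net/cdn/scripts/app.js"))
# http://az123456.vo.msecnd.net/scripts/app.js
```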

How to specify the expiration time for content
If you want to cache content from a cloud service, then you have two options.
The first option is to use a web.config file where you can add the static content node. The second option is to set the expiration header in the HTTP response. This can be done from IIS or from code.
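A minimal web.config sketch for the first option (assuming an IIS-hosted ASP.NET web role; the ‘clientCache’ element controls the Cache-Control max-age sent for static content):

```xml
<!-- Sketch only: sets Cache-Control: public, max-age on static content. -->
<configuration>
  <system.webServer>
    <staticContent>
      <!-- Format is d.hh:mm:ss; 3 days here matches max-age=259200. -->
      <clientCache cacheControlMode="UseMaxAge"
                   cacheControlMaxAge="3.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>
```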
For blob content that is cached, things are a bit different. By default the content will be stored on the CDN for 7 days. If you want to specify the expiration time, you need to set the ‘x-ms-blob-cache-control’ property at the moment when you add or update blob content (Put Blob, Put Block Blob, Set Blob Properties):
x-ms-blob-cache-control: public, max-age=259200
This can also be done from code, using the storage client library:
blob.Properties.CacheControl = "public, max-age=259200";
blob.SetProperties();
Don’t forget that the value is expressed in seconds.
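As a quick check of the value used above, 259200 seconds is exactly 3 days:

```python
from datetime import timedelta

# 3 days expressed in seconds, as required by max-age.
max_age = int(timedelta(days=3).total_seconds())
print(max_age)  # 259200

cache_control = f"public, max-age={max_age}"
print(cache_control)  # public, max-age=259200
```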

Pros and Cons
  • Easy to use
  • Worldwide distribution
  • HTTP and HTTPS support
  • Configurable expiration time
  • Automatic redirection of users to the nearest CDN node based on their location

  • You don’t have the ability to specify which data centers serve users. When, for legal reasons, you can store content only in specific regions, you may have some issues.
  • Only anonymous access is supported.
  • No support for SAS (Shared Access Signature) in combination with the CDN.

You pay only for outbound traffic. You don’t pay for the storage that is used on the CDN nodes.

Azure CDN can be a very good option for you. The price is very attractive and it can be integrated very easily into existing systems. It works very well with systems hosted on Microsoft Azure.
It can be a great option when you need a low-latency and massively scalable CDN service for your systems.

