
Azure Cosmos DB | The perfect place for device topology in worldwide solutions

In the IoT world, devices are distributed all around the globe. The systems currently on the market offer scalable, distributed communication between devices and our backends.

It is nothing out of the ordinary for an IoT solution to be distributed to 3 or 5 locations around the globe. But behind these performant systems we need a storage solution that is flexible enough for different schemas and, at the same time, powerful enough to scale to whatever size we want.

Relational databases are often used to store data with a stable schema. Different devices across the globe, however, require different schemas and data formats, so storing their data in non-relational databases is more natural and simple. A key-value, graph, or document database is often more suitable for IoT scenarios than a relational one.

Current solutions
There are plenty of solutions on the market that are fast, powerful, and easy to use. You have probably already heard about at least one of the following databases: DocumentDB, MongoDB, or more technology-specific options like the Graph API or Table API.
Microsoft Azure has been offering these databases as-a-service for some time. Offering them as a service has big advantages for both customers and technical teams: the time invested in infrastructure, scalability, availability, and maintenance is drastically reduced.

As we saw before, an IoT solution spans regions. This requires multiple deployments across the globe that need to be maintained and, in many cases, synchronized. Creating a cluster of DocumentDB instances spread around the globe that needs to stay in sync is not an easy task.
It is not impossible, but the items below need to be considered:

  • Consistency between regions
  • Consistency strategy
  • Latency between regions
  • Data Partitioning
  • Failover
  • Availability
and many more. There are so many things to consider that the volume of work is considerable.

There are different solutions that can help us. Some of them are offered as tools and others as services, where the provider delivers an out-of-the-box solution that solves part of our problems.
One of the solutions on the market now is Azure Cosmos DB, a globally distributed database service.

Azure Cosmos DB
How it works
Azure Cosmos DB is a multi-model database service that scales independently and is distributed all around the world. The service is designed in such a way that we don't need to think about replication and scalability strategies.
Cosmos DB uses partitioning to offer high performance at the instance level (horizontal scaling). This power is combined with out-of-the-box cross-region replication capabilities, unlocking the possibility to distribute our data globally in real time without writing additional lines of code.
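To make the partitioning idea concrete, here is a minimal sketch of how a partition key can route a document to one of a fixed set of physical partitions. This is illustrative only: the hash function and partition count are made up for the example, not the actual algorithm Cosmos DB uses internally.

```javascript
// Illustrative horizontal partitioning: the document's partition key is
// hashed, and the hash selects one of N physical partitions (containers).
function hashCode(str) {
  var h = 0;
  for (var i = 0; i < str.length; i++) {
    h = (h * 31 + str.charCodeAt(i)) | 0; // simple 32-bit rolling hash
  }
  return Math.abs(h);
}

function partitionFor(partitionKey, partitionCount) {
  return hashCode(partitionKey) % partitionCount;
}

// Documents with the same partition key always land on the same partition,
// so lookups by key only need to touch a single container.
console.log(partitionFor('device-0042', 10) === partitionFor('device-0042', 10)); // true
```

Because the mapping is deterministic, adding more partitions spreads the load while reads by partition key remain a single-partition operation.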

What kind of database is Cosmos DB?
It is not a single database engine. Azure Cosmos DB is a distributed database service that supports 4 different types of databases (APIs):

  • DocumentDB (SQL) API – document database
  • MongoDB API – document database, compatible with the MongoDB wire protocol
  • Graph (Gremlin) API – graph database
  • Table API – key-value database

How does global distribution work?
At region (deployment) level, all stored resources are distributed across different partitions. Each partition can be seen as an independent container that is fully managed by Azure. All partitions are replicated to all other regions where the solution is deployed.
Using this approach we end up with something similar to a matrix: on one axis we have the partitions (containers) in each region, and on the other axis we have the regions where the partitions are replicated.
Users can specify the number of Azure Regions they need for their solution. Content will be automatically replicated to all of them, and on the fly we can add a new Azure Region to our solution or remove an existing one.
And that is not the only cool thing: we can also specify the failover order of the Azure Regions. This means we can specify that the North Europe Region should be the failover node for Cosmos DB in case West Europe goes down. This happens behind the scenes, without affecting our clients.
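The failover order can be thought of as a simple priority list: the first healthy region in the list serves the traffic. The sketch below models that idea; the region names and the helper function are illustrative, not part of any Cosmos DB SDK.

```javascript
// Illustrative failover: regions are listed in priority order and the
// first one that is not down becomes the active region for the account.
function activeRegion(priorityList, downRegions) {
  var down = new Set(downRegions);
  var active = priorityList.find(function (region) {
    return !down.has(region);
  });
  return active || null; // null if every region is down
}

var failoverOrder = ['West Europe', 'North Europe', 'East US'];

console.log(activeRegion(failoverOrder, []));              // → West Europe
console.log(activeRegion(failoverOrder, ['West Europe'])); // → North Europe
```

With Cosmos DB this selection happens on the service side, which is why clients are not affected when a region goes down.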

When we talk about cross-region replication, it is natural to ask ourselves about the consistency level. There are 5 different consistency levels that cover all our needs, from Strong, which guarantees linear consistency, to the most relaxed one, Eventual, where even reads can be out of order.
In addition, we can specify the consistency level per query. The default value at query level is the same as the one configured for our database account, but we can change it based on our needs.

Do I need to change the code?
No. There are zero changes that need to be made to your existing code. In some situations you will need to update the connection string, but apart from that you can keep using your existing libraries.
For example, in the code below the only Cosmos DB specific part is the connection string.
var MongoClient = require('mongodb').MongoClient;
var assert = require('assert');

// The only Cosmos DB specific part: the connection string points to the
// Azure Cosmos DB account (placeholder values) instead of a self-hosted
// MongoDB instance. Everything else is standard MongoDB driver code.
var url = 'mongodb://<account>:<key>@<account>.documents.azure.com:10255/?ssl=true';

var insertDocument = function(db, callback) {
    db.collection('car').insertOne({ model: 'Model S' }, function(err, result) {
        assert.equal(err, null);
        callback();
    });
};

var findCars = function(db, callback) {
    var cursor = db.collection('car').find();
    cursor.each(function(err, doc) {
        assert.equal(err, null);
        if (doc != null) {
            console.dir(doc);
        } else {
            callback(); // cursor exhausted
        }
    });
};

MongoClient.connect(url, function(err, db) {
    assert.equal(null, err);
    insertDocument(db, function() {
        findCars(db, function() {
            db.close();
        });
    });
});

Azure Cosmos DB in the IoT world
This service allows us to develop a cross-region platform where all data is replicated around the world without building complex systems ourselves. With just a few clicks we can end up with an IoT solution backed by a powerful database service that offers consistency and reliability. The device topology can be synchronized globally without a single point of failure such as a complex master node.

If you are a solution provider using a NoSQL database supported by this service and you need content replicated in two or more locations, then you should consider migrating to Azure Cosmos DB. It is one of the best services offered by Azure and a game changer for complex systems.

