
Azure Cosmos DB | The perfect place for device topology in worldwide solutions

In the world of IoT, devices are distributed all around the world. The systems currently on the market offer scalable, distributed communication between devices and our back ends.

It is not out of the ordinary for an IoT solution to be deployed in three or five locations around the globe. But behind these performant systems we need a storage solution that is flexible enough to handle different schemas, yet powerful enough to scale to whatever size we want.

Relational databases are often used to store data with a stable schema. Different devices across the globe, however, demand different schemas and data formats, so storing their data in non-relational databases is more natural and simple. A key-value, graph, or document database is often a better fit for IoT scenarios than a relational database.
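To make this concrete, here is a small sketch (device names and fields are illustrative, not from any real deployment) of two devices with completely different shapes living in the same document collection:

```javascript
// Two devices from the same IoT solution, stored in the same collection
// even though their schemas differ -- no schema migration is needed.
var thermostat = {
    deviceId: 'thermostat-eu-001',
    region: 'West Europe',
    firmware: '2.1.0',
    telemetry: { temperatureC: 21.5, humidity: 0.43 }
};

var gateway = {
    deviceId: 'gateway-us-007',
    region: 'East US',
    // A completely different shape: gateways track their child devices.
    children: ['sensor-1', 'sensor-2'],
    lastHeartbeat: '2017-06-01T12:00:00Z'
};

// A document database accepts both shapes side by side.
var devices = [thermostat, gateway];
devices.forEach(function (d) {
    console.log(d.deviceId + ' -> ' + d.region);
});
```

In a relational database the second shape would force either a schema change or a sparsely populated table; here each document simply carries its own structure.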

Current solutions
There are plenty of solutions on the market that are fast, powerful, and easy to use. You have probably already heard of at least one of the following databases: DocumentDB, MongoDB, or more technology-specific options like the Graph API or the Table API.
Microsoft Azure has offered these databases as a service for some time. Offering them as a service has big advantages for both the customer and the technical team: the time invested in infrastructure, scalability, availability, and maintenance is drastically reduced.

As we saw before, an IoT solution spans regions. This requires multiple deployments across the globe that need to be maintained and, in many cases, synchronized. Creating a cluster of DocumentDB instances spread around the globe that stays in sync is not an easy task.
It is not impossible, but the items below need to be considered:

  • Consistency between regions
  • Consistency strategy
  • Latency between regions
  • Data Partitioning
  • Failover
  • Availability
and many more. There are so many things to take into account that the volume of work is considerable.

There are different solutions that can help us. Some are offered as tools and others as services, where the provider delivers an out-of-the-box solution that solves part of our problems.
One of the solutions on the market now is Azure Cosmos DB, a globally distributed database service.

Azure Cosmos DB
How it works
Azure Cosmos DB is a multi-model database service that scales independently and is distributed all around the world. The service is designed in such a way that we don't need to think about replication and scalability strategies.
Cosmos DB uses partitioning to offer high performance at the instance level (horizontal scaling). This power is combined with out-of-the-box cross-region replication, unlocking the possibility to distribute our data globally in real time without writing an additional line of code.

What kind of database is Cosmos DB?
It is not a single database engine. Azure Cosmos DB is a distributed database service with support for four different types of databases:

  • Document databases (DocumentDB API)
  • Document databases with MongoDB compatibility (MongoDB API)
  • Graph databases (Graph API)
  • Key-value stores (Table API)

How does global distribution work?
At the region (deployment) level, all stored resources are distributed across different partitions. Each partition can be seen as an independent container that is fully managed by Azure. All partitions are replicated to all other regions where the solution is deployed.
Using this approach we end up with something similar to a matrix: on one axis we have the partitions (containers) in each region, and on the other axis the regions where those partitions are replicated.
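Conceptually, routing a document to a partition works like hash-partitioning on a partition key. The sketch below is a deliberate simplification (Cosmos DB uses its own internal hashing, not this function) that only shows the idea of the partition/region matrix:

```javascript
// Simplified hash-partitioning: map a partition key to one of N physical
// partitions. This is NOT the actual Cosmos DB hash, just the concept.
function partitionFor(partitionKey, partitionCount) {
    var hash = 0;
    for (var i = 0; i < partitionKey.length; i++) {
        hash = (hash * 31 + partitionKey.charCodeAt(i)) >>> 0; // unsigned 32-bit
    }
    return hash % partitionCount;
}

// Each region holds a replica of every partition, so the "matrix" is:
// rows = partitions, columns = regions replicating each row.
var regions = ['West Europe', 'North Europe', 'East US'];
var p = partitionFor('thermostat-eu-001', 10);
regions.forEach(function (region) {
    console.log('partition ' + p + ' replicated in ' + region);
});
```

The important property is that the same key always lands on the same partition, while the replication across regions is handled entirely by the service.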
Users can specify the number of Azure Regions they need for their solution. Content will be automatically replicated to all of them. On the fly, we can add a new Azure Region to our solution or remove one.
And that is not all: we can also specify the failover order of the Azure Regions. This means we can specify that the North Europe Region should be the failover node for Cosmos DB in case West Europe goes down. This happens behind the scenes, without affecting our clients.
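The failover order can be pictured as an ordered list of regions, where the first healthy region in the list takes over. A conceptual sketch of that rule (illustrative only, not the actual Azure failover mechanism):

```javascript
// Regions in failover priority order, as configured on the account.
var failoverOrder = ['West Europe', 'North Europe', 'East US'];

// Pick the first region in the priority list that is not down.
function activeRegion(order, downRegions) {
    for (var i = 0; i < order.length; i++) {
        if (downRegions.indexOf(order[i]) === -1) {
            return order[i];
        }
    }
    return null; // every region is down
}

console.log(activeRegion(failoverOrder, []));              // West Europe
console.log(activeRegion(failoverOrder, ['West Europe'])); // North Europe
```

Clients keep talking to the same endpoint; the service applies this priority order for them when a region becomes unavailable.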

When we talk about cross-region replication, it is natural to ask ourselves what the consistency level is. There are five different consistency levels that cover all our needs, from one that guarantees linear consistency to the most relaxed one, where even reads can be out of order.
In addition, we can specify the consistency level per query. The default value at the query level is the same as for our database, but we can change it based on our needs.
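The override rule is simple: a query uses its own consistency level when one is given, and otherwise falls back to the account default. A small sketch of that resolution (the level names are the real Cosmos DB ones; the function itself is illustrative, not SDK code):

```javascript
// The five Cosmos DB consistency levels, from strongest to most relaxed.
var LEVELS = ['Strong', 'BoundedStaleness', 'Session', 'ConsistentPrefix', 'Eventual'];

// Resolve the effective consistency for one request: a per-query
// override wins, otherwise the account default applies.
function effectiveConsistency(accountDefault, queryOverride) {
    var level = queryOverride || accountDefault;
    if (LEVELS.indexOf(level) === -1) {
        throw new Error('Unknown consistency level: ' + level);
    }
    return level;
}

console.log(effectiveConsistency('Session', null));       // Session
console.log(effectiveConsistency('Session', 'Eventual')); // Eventual
```

A typical pattern is to keep a relaxed account default for throughput and tighten it only on the few queries that really need stronger guarantees.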

Do I need to change the code?
No. There are zero changes that need to be made to your existing code. In some situations you will need to change the connection string, but apart from that you will be able to use your existing libraries.
For example, in the code below the only thing that differs is the connection string.
var MongoClient = require('mongodb').MongoClient;
var assert = require('assert');

// The Cosmos DB (MongoDB API) connection string goes here.
var url = 'mongodb://';

var findCars = function (db, callback) {
    var cursor = db.collection('car').find();
    cursor.each(function (err, doc) {
        assert.equal(err, null);
        if (doc != null) {
            console.dir(doc);
        } else {
            callback();
        }
    });
};

MongoClient.connect(url, function (err, db) {
    assert.equal(null, err);
    findCars(db, function () {
        db.close();
    });
});

Azure Cosmos DB in the IoT world
This service allows us to develop a cross-region platform where all data is replicated around the world without building complex systems ourselves. With just a few clicks we can end up with an IoT solution backed by a powerful database service that offers consistency and reliability. The device topology can be synchronized at the global level without a single point of failure such as a complex master node.

If you are a solution provider using a NoSQL solution supported by this service and you need content replicated in two or more locations, then you should consider migrating to Azure Cosmos DB. It is one of the best services offered by Azure and a game changer for complex systems.

