
DocumentDB (Day 5 of 31)

List of all posts from this series:

Short Description 
DocumentDB is a NoSQL document database offered as a service. It is fully managed by Microsoft Azure and is extremely scalable and fast.
It gives us the possibility to store any kind of data. The stored data don’t need to have a specific format or to respect a predefined model. We can store data with different models and formats in the same collection.

Main Features 
No Schema
Stored data don’t need to have a predefined schema.
A database can be spread across multiple machines. In this way we have the ability to scale up the computing power of our database.
Standard Capacity Unit (CU)
Each capacity unit comes with a specific storage capacity and throughput. Using CUs we can scale up or down very easily. In the current version each CU gives us the following throughput per second:

  • 2,000 read operations
  • 500 insert/update/delete operations
  • 1,000 queries
  • 20 stored procedures
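Assuming the per-CU figures above scale linearly with the number of capacity units (an assumption of this sketch, not a statement about the service's exact guarantees), the available throughput can be sketched as:

```javascript
// Per-second throughput for a given number of Capacity Units, assuming the
// per-CU figures above scale linearly (an assumption of this sketch).
function throughputFor(capacityUnits) {
    return {
        reads:            2000 * capacityUnits,
        writes:            500 * capacityUnits,  // insert/update/delete
        queries:          1000 * capacityUnits,
        storedProcedures:   20 * capacityUnits
    };
}
```

With 3 CUs this would give 6,000 reads and 1,500 writes per second.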

Users and Permissions
To control access to our database in a granular way, we can define users and different permission rules. For each database account we can define up to 500,000 users and 2 million permission rules.
JavaScript Object Notation (JSON)
All content stored in DocumentDB is in JSON format. On top of this, all custom actions or code that we want to run on the DocumentDB side are written in JavaScript. We will talk a little bit more about this later.
Data Model
The model is based on documents and collections that are stored in JSON format. Each document is formed from a collection of key/value pairs. The elements can be strings, integers, floating point numbers or any other JSON type.
In DocumentDB there is no need to define indexes. The database automatically defines indexes over all the properties of a document.
A collection is a set of documents that are grouped together. In the preview phase the maximum size of a collection is 10 GB, but there is no limit on the number of collections.
A collection can contain documents with different data models. In the example below we have two different documents in the same collection. This can be useful when we store data that is similar but has small differences.
        "id": "1",
        "name": "Radu Vunvulea",    
            "country": "Romania", 
            "city": "Cluj-napoca",
            "street" : "Plopilor"
        "id": "2",
        "name": "Pop Iliescu",    
            "country": "Romania", 
            "city": "Timisoara",
            "street" : "Lopitau"
There is transaction support at collection level. This means that we can execute transactions only within a single collection; we cannot define a transaction over multiple collections.
We can access DocumentDB from multiple languages (C#, JavaScript, Python). It is important to remember that the core language is JavaScript and all the services are exposed as REST API. Having a REST API we can access the database from any client, even from browsers.
Below you can find the list of HTTP verbs:

  • GET – retrieve resources
  • POST – create new resources (from documents to stored procedures or triggers – we will talk about them in a moment)
  • PUT – update (replace a document with a new one)
  • DELETE – remove existing resources

SQL Query
DocumentDB has support for a subset of SQL. We can use it to run different queries over a collection. The language is very simple and can be used with success by people who already know SQL.
SELECT {"name":, "country": person.address.country} AS SimplePerson
FROM Persons person
WHERE > 10
The result is returned as a collection of documents or key/value pairs (depends on the result).
Stored Procedures
Yes, we have the ability to execute stored procedures on the DocumentDB backend. Because we have CUs, we have direct control over how much throughput power we have and we can increase it when needed.
Stored procedures are defined in JavaScript. Once a stored procedure has been declared, a client can make a POST request asking for it to be executed. Each stored procedure runs in an isolated environment.
Stored procedures can be used with success when we have common logic that we want to execute in a controlled and managed way. We can put part of our domain logic in stored procedures.
On top of stored procedures we can define triggers that execute before or after specific commands. For example, we can have a trigger that runs every time after an insert on a specific collection is done.
The language used for defining triggers is the same as for stored procedures – JavaScript.
For example, triggers can be used with success to validate data before adding it. We cannot change the data from a trigger in an insert command, but we can reject the insert.
User Defined Functions
User defined functions are very similar to stored procedures, but they are used to extend the query language. We can define our own functions that can be used and accessed by anybody. In this way we don’t need to define the same query over and over again.
Attachments
You have the ability to 'attach' data to a document.
All the time, we have 3 copies of our database in the same data center. When we scale up we don’t scale only the CU power; we also duplicate our database. This is needed because we don’t want our storage to become a bottleneck.
Because the storage is replicated, problems can appear the moment we add a new document or update an existing one. Below you can find the 4 different consistency levels that DocumentDB offers us:

  • Eventual – Highest performance, but the client may read out-of-date data or see writes in a different order than they were executed
  • Session – A client always reads his own writes correctly, but other clients may read his data out of order or see older data. The balance between performance and correctness is in a sweet spot with this configuration and it can be used with success in many scenarios.
  • Bounded Staleness – Stronger than Session; clients can still see old data, but they see it in the order it was written. Clients can specify how old the data may be.
  • Strong – Clients always see only consistent data. But because synchronization is expensive, all reads and writes are slower.

Remember that for each of them there is a tradeoff between data correctness and performance. The beauty of these options is that each client can select the consistency level needed for their database.

Transactions cross collections
We cannot have a transaction across multiple collections. In the NoSQL world this is normal, and it would be very expensive to support.
Replication on different data center
This feature is not available at this moment. I see this feature as very important for any storage type (from blobs to SQL and DocumentDB).
Automatic Scale (Elastic)
I would really like to see an elastic scale of CU based on needs.
No versioning support
There are a lot of use cases when versioning over Documents would be very useful.

Applicable Use Cases 
Below you can find 4 use cases for DocumentDB:
Blog Application
DocumentDB can be used with success when we build a blog framework. We can define the posts, comments and list of users very easily, and we can let each user define his own custom properties and configuration (no-schema support). In this case we can configure the consistency level to Session with success.
Multi-player games
A DocumentDB with the consistency level configured to Bounded Staleness can be a good choice. Users will be able to retrieve data in timeline order.
e-Commerce – Products List
We can store the product list as document collections. It would be very easy to add custom characteristics to some products, version them and so on.
e-Commerce – User Cart
The user cart can be stored with success as a DocumentDB document. Managing it can be very simple and comes with minimal costs.

Code Sample 
using (DocumentClient client = new DocumentClient(new Uri(endpoint), authKey))
{
    // Create (or connect to) a database
    Database database = new Database { Id = "radudb" };
    database = await client.CreateDatabaseAsync(database);

    // Create a document collection
    DocumentCollection collection = new DocumentCollection { Id = "Persons" };
    collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);

    // Get persons with an id bigger than 10
    var query = client.CreateDocumentQuery(collection.SelfLink, "SELECT * FROM Persons person WHERE > 10");
    var persons = query.AsEnumerable();
    foreach (dynamic person in persons)
    {
        Console.WriteLine(string.Format("Person name: {0}",;
    }

    // Add a new person
    await client.CreateDocumentAsync(collection.SelfLink, new { id = "3", name = "Stefan Pop" });
}

Pros and Cons 

  • Easy to use
  • JavaScript support
  • JSON format
  • Triggers
  • Stored procedures
  • CU scaling (very smart)
  • Database size that can be very big
  • User Defined Functions
  • Multiple consistency levels
  • Rich queries over schema-free data
  • Scalable storage and throughput
  • Rapid development with familiar technologies
  • Blazingly fast and write-optimized database service


  • No Elastic Scale support
  • No custom user management
  • No mechanism to fetch stored procedures and triggers from source control

When we calculate the price we should take into account the following components:

  • Capacity Units
  • Database Size
  • Storage
  • Outbound traffic

DocumentDB is a very powerful NoSQL solution that can be used with success in many scenarios. It is very simple to use and scales to TBs of data. With atomic transactions per collection, triggers and stored procedures, it can easily become the best option for any developer who needs a document store.
JavaScript, no schema, JSON and simplicity convince me to use this database on projects.

