How to audit an Azure Cosmos DB

In this post, we will talk about how we can audit an Azure Cosmos DB database. Before jumping into the problem, let us define the business requirement:
As an administrator, I want to be able to audit all changes that were made to a specific collection inside my Azure Cosmos DB account.

The requirement is simple, but it can be a little tricky to implement fully. First of all, when you are using Azure Cosmos DB or any other storage solution, the odds are very high that more than one system writes data to it. This means that you may or may not have control over the systems that perform create/update/delete operations.

Solution 1: Diagnostic Logs
Cosmos DB allows us to activate diagnostic logs and stream the output to a storage account for archiving, or to other systems such as Event Hubs or Log Analytics. These logs tell us who accessed our Cosmos DB account, when, what resource was touched, what the response code was and how the operation was performed. Besides this, there is a field that specifies what operation was done on our document (e.g. create, read).
This is fairly basic information, but even though it is produced by diagnostic logs it can be used for auditing, as long as we do not need to know what changes were made to each document. That information cannot be found in the diagnostic logs.
Don’t forget to check the maximum value of the retention policy and push the data to long-term storage before that period expires.
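If the export-to-storage route is used, the downloaded logs can be post-processed with a few lines of code. Below is a minimal sketch, assuming the records were saved as one JSON object per line; the property names of the model (OperationName, CallerIdentity and so on) are illustrative only and must be aligned with the actual schema of the diagnostic log category you enable.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text.Json;

// Illustrative shape of one exported log record; the real property names of the
// data-plane log category may differ, so adjust the model to the actual JSON.
public record CosmosAuditRecord(
    DateTimeOffset Time,
    string OperationName,   // e.g. "Create", "Read", "Delete"
    string ResourceId,
    string StatusCode,
    string CallerIdentity); // "who" – depends on the authentication model in use

public static class DiagnosticLogAudit
{
    // Parse a downloaded blob (one JSON record per line) and keep only write operations.
    public static IEnumerable<CosmosAuditRecord> ExtractWrites(string jsonLinesFile)
    {
        var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };

        return File.ReadLines(jsonLinesFile)
            .Where(line => !string.IsNullOrWhiteSpace(line))
            .Select(line => JsonSerializer.Deserialize<CosmosAuditRecord>(line, options)!)
            .Where(r => r.OperationName is "Create" or "Upsert" or "Replace" or "Delete");
    }
}
```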

Solution 2: Change feed
This feature of Azure Cosmos DB allows us to stream to an external system any change operation that happens on our collections. It is built into Azure Cosmos DB and lets us capture all the changes made to a collection, regardless of which system made them.
The feed generated by Cosmos DB can be ingested by multiple systems like:

  • Azure Function
  • App Services
  • Azure Notification Hubs
  • Azure HDInsight
  • Apache Spark
  • Apache Storm
  • Azure Stream Analytics
  • Azure Cosmos DB
  • Azure Storage (Table, Blobs)
  • Azure Data Lake

For an audit system, we might want to push the data into the cheapest storage solution available – Azure Blob Storage or Azure Data Lake. Why? When we collect audit data, we do not need to do anything with it except store it for later use.
When data processing is required, we can use Azure Functions or any stream analytics solution available to ingest and crunch the audit data. From a scalability perspective, a consumer is available for each collection partition, enabling us to scale out the solution with multiple nodes for the same collection. This is done using the lease and consumer group concepts that are also used by Azure Event Hubs.
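As a sketch of this scale-out model, the change feed processor from the v3 .NET SDK (Microsoft.Azure.Cosmos) can be wired up roughly like this; the processor name, the lease container and the auditSink delegate are placeholders you would replace with your own.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class AuditChangeFeed
{
    // Starts a change feed processor that forwards every changed document to an
    // audit sink (for example, code that appends the documents to Blob Storage).
    public static async Task<ChangeFeedProcessor> StartAsync(
        Container monitoredContainer,
        Container leaseContainer,
        Func<IReadOnlyCollection<dynamic>, Task> auditSink)
    {
        ChangeFeedProcessor processor = monitoredContainer
            .GetChangeFeedProcessorBuilder<dynamic>(
                "auditProcessor",
                async (IReadOnlyCollection<dynamic> changes, CancellationToken ct) =>
                {
                    // Each batch contains the latest version of the documents that changed.
                    await auditSink(changes);
                })
            .WithInstanceName(Environment.MachineName) // leases split the partitions between instances
            .WithLeaseContainer(leaseContainer)
            .Build();

        await processor.StartAsync();
        return processor;
    }
}
```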

The change feed is available not only in the write region, but also in the read regions, giving us the possibility to decide in which region (replica) we want to do this work. By default, only create and update operations are written to the feed. Delete operations can also be captured, but you need to request this explicitly (set a specific flag). Once you activate it, you will also catch the deletion of documents that have a TTL set and are removed automatically when they expire.
Another cool feature of the change feed is that there is no expiration time. You can request the change feed for the last 12 months at any time, because there is no retention policy.
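For example, recent versions of the v3 .NET SDK expose a pull model that can replay the feed starting from an arbitrary moment in the past; a rough sketch is below (member names such as ChangeFeedStartFrom and ChangeFeedMode can differ slightly between SDK versions).

```csharp
using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ChangeFeedReplay
{
    // Replays the change feed from an arbitrary point in the past (pull model).
    public static async Task ReplayAsync(Container container, DateTime sinceUtc)
    {
        FeedIterator<dynamic> iterator = container.GetChangeFeedIterator<dynamic>(
            ChangeFeedStartFrom.Time(sinceUtc),
            ChangeFeedMode.Incremental);

        while (iterator.HasMoreResults)
        {
            FeedResponse<dynamic> page = await iterator.ReadNextAsync();

            // 304 means we caught up with the feed; stop instead of polling forever.
            if (page.StatusCode == HttpStatusCode.NotModified)
            {
                break;
            }

            foreach (var doc in page)
            {
                Console.WriteLine(doc); // push to the audit store instead
            }
        }
    }
}
```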

There are some things related to the change feed that you should know before jumping into this solution as an audit system. First of all, you will not be able to know who made the change. This means that you will know the new state of a document, but without adding extra fields to the document it is impossible to know who changed it.

The biggest problem with the change feed from the audit perspective is that not all changes can be found in the feed. Only the most recent version of a document is available, so we know that the document was changed, but not how many changes were made to it in between.
The change feed can be an interesting feature if we need to audit an Azure Cosmos DB account, but we need to make sure that all business requirements of the audit system can be covered.

Solution 3: Custom proxy
The first two solutions are out-of-the-box capabilities of Azure Cosmos DB. They offer good support, but might lack some of the information you need to fulfil all the requirements of an audit system.
For example, using the Diagnostic Logs of Azure Cosmos DB you know who made a change and the operation type, but not what changes were made to the document. Using the change feed you know that a document was changed, but not what the changes were or who made them.
Adding a proxy on top of Azure Cosmos DB would fulfil all the requirements you can have for an audit system, but it can kill performance. Having a proxy on top of Azure Cosmos DB means that the proxy needs to be thin and to have minimal impact on latency.
This proxy might be required when you expose the Azure Cosmos DB repository to multiple systems and you don’t have control over what actions each external system performs.
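A possible shape for such a proxy is sketched below: a thin wrapper that performs the write and then records a hypothetical AuditEntry document in a separate audit container. The audit container, its partition key and the way the caller identity is passed in are assumptions for illustration, not part of any SDK.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Hypothetical audit document written next to every mutation.
public record AuditEntry(string id, string DocumentId, string Operation, string User, DateTimeOffset When);

public class AuditingCosmosProxy
{
    private readonly Container _data;
    private readonly Container _audit;

    public AuditingCosmosProxy(Container data, Container audit)
    {
        _data = data;
        _audit = audit;
    }

    // All callers go through the proxy, so we can record who did what and when.
    public async Task<ItemResponse<T>> UpsertAsync<T>(T item, string documentId, PartitionKey partitionKey, string user)
    {
        ItemResponse<T> response = await _data.UpsertItemAsync(item, partitionKey);

        // Assumes the audit container is partitioned by the audited document id.
        var entry = new AuditEntry(Guid.NewGuid().ToString(), documentId, "Upsert", user, DateTimeOffset.UtcNow);
        await _audit.CreateItemAsync(entry, new PartitionKey(entry.DocumentId));

        return response;
    }
}
```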

Solution 4: Triggers
Azure Cosmos DB has good support for triggers and they are extremely powerful. They can be used successfully to collect audit information, as long as you do not need user information (the person who triggered the operation); there is no way to obtain it. Besides this, you need to define triggers for each collection and each operation, which increases complexity and the chance of making a mistake.
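A rough sketch of how such a trigger could be registered through the v3 .NET SDK is shown below. The JavaScript body and the names are illustrative; also keep in mind that a Cosmos DB trigger only runs when the caller explicitly references it in the request options, so every client has to opt in.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Scripts;

public static class AuditTrigger
{
    // Server-side JavaScript that writes a small audit document after each write.
    private const string TriggerBody = @"
function auditTrigger() {
    var context = getContext();
    var container = context.getCollection();
    var created = context.getResponse().getBody(); // the document that was just written
    if (!created) return; // e.g. delete operations may have no response body
    var audit = { auditedId: created.id, when: new Date().toISOString() };
    if (!container.createDocument(container.getSelfLink(), audit)) {
        throw new Error('Unable to write the audit document.');
    }
}";

    public static async Task RegisterAsync(Container container)
    {
        await container.Scripts.CreateTriggerAsync(new TriggerProperties
        {
            Id = "auditTrigger",
            Body = TriggerBody,
            TriggerType = TriggerType.Post,       // runs after the write succeeds
            TriggerOperation = TriggerOperation.All
        });
    }

    // Triggers are not fired automatically: each request has to reference them.
    public static Task<ItemResponse<T>> CreateWithAuditAsync<T>(Container container, T item, PartitionKey pk) =>
        container.CreateItemAsync(item, pk, new ItemRequestOptions
        {
            PostTriggers = new List<string> { "auditTrigger" }
        });
}
```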

Each of these solutions comes with pros and cons. The third one is the most powerful, but if you ask me, I would do anything possible to avoid having a proxy between the systems and Azure Cosmos DB. It would increase latency and could easily become a bottleneck.
The change feed is nice and useful when you need to stream changes, but it might not cover all the changes that happen on your storage. In addition, you might lose information about who made the change.

My main preference is Diagnostic Logs, which can provide enough information to fulfil the audit requirements. If this is not an option, then using a proxy or something similar might be the only way forward.

