
Logging on external storage... lessons learned

Logging and auditing are a must-have for all applications. Without this information, the monitoring and support teams cannot know what is happening in the system, whether it works correctly and what happened at a specific point in time.
On top of this, from a security perspective, you need to audit, at different levels of your system, who is accessing it, what the action is and when it happened.

There are many out-of-the-box solutions on the market that help us do logging and auditing in our systems. I suppose all of us have used log4net or NLog at least once. There are situations when you need to persist logs in storage that is not on the same machine where your system runs. For example, a common use case is writing all this information to:

  • SQL instance
  • Azure Blob Storage
  • Azure Event Hub

But did you ask yourself what happens when this storage cannot be reached? This post covers exactly that case: what if … the storage where I persist logs and audit data cannot be reached?
Let’s imagine a system that writes all its logs to Azure Blob Storage, while audit information is sent directly to Azure Event Hub, from where it is analyzed in real time to detect any security problems or instability in the system.

To reduce network load, improve performance (speed) and control costs, each component has a buffer where logs and audit data are written. Once the buffer reaches a specified size, its content is flushed automatically. This works perfectly as long as Azure Blob Storage (for logs) and Azure Event Hub (for audit) are available.
Remarks: We will go forward with the case when you are using log4net, but similar behavior exists in other logging frameworks.
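The buffering behavior described above can be sketched in a few lines. This is a minimal, language-agnostic illustration (not the log4net internals); the class and parameter names are invented for the example:

```python
import threading

class BufferedLogWriter:
    """In-memory buffer that flushes a whole batch once it reaches
    a configured size (a minimal sketch of a buffering appender)."""

    def __init__(self, flush_target, buffer_size=100):
        self._flush_target = flush_target   # callable that persists one batch remotely
        self._buffer_size = buffer_size
        self._buffer = []
        self._lock = threading.Lock()       # writers may come from many threads

    def write(self, entry):
        batch = None
        with self._lock:
            self._buffer.append(entry)
            if len(self._buffer) >= self._buffer_size:
                # Swap the full buffer out so new writes stay fast.
                batch, self._buffer = self._buffer, []
        if batch is not None:               # flush outside the lock
            self._flush_target(batch)
```

Note that `flush_target` is exactly the call that fails when the remote storage is down, which is the scenario discussed next.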

What happens when one of these storages cannot be reached?
What do you think?


The buffer will become bigger and bigger. Normally, this buffer lives in memory, because you want low latency for write operations.

You will start to consume more and more memory, and there is a high probability of ending up with an out-of-memory exception that will not only block your component or application, but also cause you to lose the current logs and audit data.
Losing this data will not help you when you need to trace what happened, why the component (application) is not working or why logs and audit data were not persisted.
Another thing to take into account: whenever you write to a destination other than the default one (especially an external location), you need to think about this situation and how you should handle it.


What should I do?
There are three important actions that you should take:

Event Log
You should ensure that all exceptions or odd behaviors that appear at logger level are written to the Event Log. Writing logger exceptions to this location guarantees that you can trace and identify any kind of issue with your logging mechanism.
Another possibility is to write directly to disk, as files, but the Event Log is a very powerful tool: the monitoring and support teams can aggregate event logs automatically, define alerts over them and so on.
This should be your last safety net when your logging system is not working as expected. Don’t forget to think about where to log data from the moment your component (application) starts until the moment your logging component is initialized: what happens if that initialization fails, and how can you detect it? These are two questions you should ask yourself.
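One way to apply this is to wrap every remote flush so that its failures land in the fallback channel instead of crashing the application. A small sketch, with invented names; on Windows the fallback handler would typically be `logging.handlers.NTEventLogHandler` (or the log4net `EventLogAppender`), and a local file stands in for it here:

```python
import logging
import os
import tempfile

# Fallback channel for failures of the primary logging pipeline.
# On Windows this would be the Event Log; a local file is a portable stand-in.
FALLBACK_FILE = os.path.join(tempfile.gettempdir(), "logging-failures.log")
fallback = logging.getLogger("logging-failures")
fallback.addHandler(logging.FileHandler(FALLBACK_FILE))

def safe_flush(flush, batch):
    """Attempt a remote flush; on failure, record the error (with stack
    trace) in the fallback channel instead of letting it propagate."""
    try:
        flush(batch)
        return True
    except Exception:
        fallback.exception("Remote flush failed for %d entries", len(batch))
        return False
```

The caller can then react to a `False` result (retry, spill to disk) while the failure itself is already traceable by the support team.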

Temporary Storage
The moment the logging mechanism tries to flush but detects that the remote storage cannot be reached, a custom action should be triggered.
One possibility is a retry mechanism that retries the flush, but:

  • For how long?
  • What should you do with extra data?

For how long should you retry? There is no out-of-the-box answer. I highly recommend retrying only a few times and after that falling back to a backup solution. For example, I would make 3 retries with exponential backoff (1s, 2s and 4s). If the logging or audit storage is still down after that, I would switch to the backup solution (presented below).
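The 3-retry policy with 1s/2s/4s backoff could look like this (a sketch; the function and parameter names are invented for illustration):

```python
import time

def flush_with_retry(flush, batch, delays=(1, 2, 4)):
    """Try the remote flush once, then retry after 1s, 2s and 4s.
    Returns True on success, False if every attempt failed (the caller
    should then fall back to the local backup solution)."""
    for delay in (0,) + tuple(delays):
        if delay:
            time.sleep(delay)
        try:
            flush(batch)
            return True
        except OSError:
            continue  # storage still unreachable, back off and retry
    return False
```

Keeping the delays short and bounded matters: while this function waits, the in-memory buffer keeps growing, so you do not want minutes of retries here.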

What should you do with extra data?
You cannot keep the data in the buffer, because the buffer is already full. It might be possible to do so for a few seconds or minutes, but it is not feasible for 1 or 4 hours. In this situation you should flush all the data in the buffer to your local disk.
This way you clear the buffer, and new logs and audit data can be persisted without running into the out-of-memory scenario. On top of this, once you succeed in writing logs or audit data to the external storage again (in our case Blob Storage and Event Hub), you should also push all the data that was stored locally to the external storage.
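A minimal sketch of this spill-and-replay idea, assuming JSON-serializable log entries and one timestamped file per failed batch (directory and function names are invented):

```python
import json
import pathlib
import tempfile
import time

# Hypothetical local spill directory used while the remote storage is down.
SPILL_DIR = pathlib.Path(tempfile.gettempdir()) / "log-spill"

def spill_to_disk(batch):
    """Persist an unflushable batch to local disk so the in-memory
    buffer can be cleared immediately."""
    SPILL_DIR.mkdir(exist_ok=True)
    path = SPILL_DIR / f"batch-{time.time_ns()}.json"
    path.write_text(json.dumps(batch))
    return path

def replay_spilled(flush):
    """When the remote storage is reachable again, push every spilled
    batch (oldest first) and delete the local copy on success."""
    for path in sorted(SPILL_DIR.glob("batch-*.json")):
        flush(json.loads(path.read_text()))
        path.unlink()
```

Replaying oldest-first preserves the rough ordering of events, which matters when the audit trail is analyzed later.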
With this solution, you need to take into account the case when the external storage is down for a long period of time and you run out of space on your local storage. A cleanup mechanism should be in place for these situations. The simplest solution, which can be implemented with success, is to delete all log or audit files older than X hours or days.
Using such a cleanup mechanism, the local space needed by your system can be forecast easily, and you avoid the special case where log and audit files eat up the disk space needed by other systems running on the same machine.
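The age-based cleanup is a few lines (a sketch; run it periodically, e.g. from a scheduled task, against the spill directory):

```python
import pathlib
import time

def cleanup_old_files(directory, max_age_hours):
    """Delete spilled log/audit files older than max_age_hours so that
    local disk usage stays bounded while the remote storage is down.
    Returns the number of files removed."""
    cutoff = time.time() - max_age_hours * 3600
    removed = 0
    for path in pathlib.Path(directory).iterdir():
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed += 1
    return removed
```

With a fixed retention of X hours and a known logging rate, the worst-case disk usage becomes a simple multiplication, which is what makes the forecast easy.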

Passive Storage
In comparison with the previous solutions, this one is optional and should be used only when it is critical to receive logs or audit data within a specific time interval. It comes with extra costs and can also add a little complexity to the system that processes and analyzes the logs and audit trails.
This solution does not exclude the previous ones, because both the active and the passive storage can, of course, go down.

This solution involves using two different storages: an active one that is used all the time and a secondary (passive) one that is used only when the first is not available.
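The failover decision itself can be sketched like this (illustrative names; both targets would typically be wrapped with the retry logic from the previous section):

```python
def flush_with_failover(active_flush, passive_flush, batch):
    """Try the active storage first; if it is unreachable, write to the
    passive one. Returns which target accepted the batch, or None if
    both failed (the local temporary storage is then the last resort)."""
    for name, flush in (("active", active_flush), ("passive", passive_flush)):
        try:
            flush(batch)
            return name
        except OSError:
            continue  # this target is down, try the next one
    return None
```

Returning which target was used is deliberate: the consuming side needs to know it must also read from the passive storage while the active one is down.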

In conclusion, I highly recommend putting this case on the table for all your systems. At the very least, write any errors or strange behavior of your logging mechanism to the Event Log.
