
How to track who is accessing your blob content

In this post we'll look at how we can monitor our blobs in Windows Azure.
When hosting content in a storage account, one of the most common requests coming from clients is:

  • How can I monitor each request that reaches my storage account?

For clients it is very important to know:

  • Who downloaded the content?
  • When was the content downloaded?
  • How did the request end (with success or not)?

I have seen different solutions to this problem. Usually they involve a service that the client calls after the download ends. There is nothing wrong with having a confirmation service, but it means one more service that you need to maintain. It will also be pretty hard to identify why a specific device cannot download your content.
Right now, Windows Azure has a built-in feature that gives us the possibility to track all the requests made to the storage account. In this way you will be able to provide all the information related to the download process.
Monitoring
In the Windows Azure portal, go to your storage account and navigate to the Monitoring section. For blobs, you will need to set the monitoring level to Verbose. With the monitoring level set to Verbose, all metrics related to your storage account will be persisted. The main difference between Minimal and Verbose is the granularity: Minimal captures only aggregated, service-level metrics, while Verbose also captures metrics for each individual storage API operation.
This data can be retained from 1 day to 1 year. Based on your needs and how often you collect the data, you can set the value that suits you best. If your storage account is used very often, I recommend setting at most 7 days. You can define a simple process that extracts the monitoring information from the last 7 days, stores it in a different location and analyzes it using your own rules. For example, you may want to raise an alert to your admins if requests coming from the same source failed more than 10 times.
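If you prefer to configure this from code instead of the portal, the storage client library exposes the same settings. Below is a minimal sketch, assuming the classic Microsoft.WindowsAzure.Storage library and a hypothetical connection string:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;
    using Microsoft.WindowsAzure.Storage.Shared.Protocol;

    // Hypothetical connection string - replace it with your own account.
    CloudStorageAccount account = CloudStorageAccount.Parse(
        "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>");
    CloudBlobClient blobClient = account.CreateCloudBlobClient();

    ServiceProperties properties = blobClient.GetServiceProperties();

    // MetricsLevel.ServiceAndApi corresponds to the Verbose level in the portal.
    properties.HourMetrics.MetricsLevel = MetricsLevel.ServiceAndApi;
    properties.HourMetrics.RetentionDays = 7;

    blobClient.SetServiceProperties(properties);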
All this information is persisted in an Azure table in your storage account named '$MetricsCapacityBlob'. This table contains the monitoring information for your storage account. At this moment there is no support for writing the monitoring data of a specific blob to a specific table, but we can query this table and select only the information that we need.
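Querying it works like querying any other Azure table. A sketch, reusing the account from the previous snippet and assuming the documented capacity-metrics schema (PartitionKey is the day in 'yyyyMMddT0000' format and the 'Capacity' property holds the size in bytes; verify these against your own data):

    using Microsoft.WindowsAzure.Storage.Table;

    CloudTableClient tableClient = account.CreateCloudTableClient();
    CloudTable metricsTable = tableClient.GetTableReference("$MetricsCapacityBlob");

    // Select only the days we are interested in.
    TableQuery<DynamicTableEntity> query = new TableQuery<DynamicTableEntity>()
        .Where(TableQuery.GenerateFilterCondition(
            "PartitionKey", QueryComparisons.GreaterThanOrEqual, "20140301T0000"));

    foreach (DynamicTableEntity entry in metricsTable.ExecuteQuery(query))
    {
        Console.WriteLine("{0} ({1}): {2} bytes",
            entry.PartitionKey,                       // the day
            entry.RowKey,                             // 'data' or 'analytics'
            entry.Properties["Capacity"].Int64Value); // total blob capacity
    }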
Logging
The feature that we really need is logging. Using the logging functionality, we can trace the full history of requests. This feature can be activated from the portal, under the Logging section. You can activate logging for the 3 main operations that can be made on a blob: Read/Write/Delete.
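The same setting can be applied from code. A sketch, again using the classic storage client library and the blobClient from the earlier snippet:

    ServiceProperties properties = blobClient.GetServiceProperties();

    // Log all three operation types and keep the logs for 7 days.
    properties.Logging.LoggingOperations = LoggingOperations.Read
                                         | LoggingOperations.Write
                                         | LoggingOperations.Delete;
    properties.Logging.RetentionDays = 7;
    properties.Logging.Version = "1.0";

    blobClient.SetServiceProperties(properties);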
All this data is stored under the $logs container:
https://<accountname>.blob.core.windows.net/$logs/blob/YYYY/MM/DD/hhmm/counter.log
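Because $logs is an ordinary (hidden) container, the log files can be downloaded with the regular blob API. A sketch that lists the blob-service logs for one day, reusing the blobClient from above (the date prefix is only an example):

    CloudBlobContainer logsContainer = blobClient.GetContainerReference("$logs");

    // Logs are organized by service and date: blob/YYYY/MM/DD/hhmm/counter.log
    foreach (IListBlobItem item in logsContainer.ListBlobs(
        "blob/2014/03/01", useFlatBlobListing: true))
    {
        CloudBlockBlob logBlob = (CloudBlockBlob)item;
        string logText = logBlob.DownloadText();
        // Each line in logText is one logged request, semicolon-delimited.
    }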
Almost everything that we can imagine can be found in these logs:

  • Successful and failed requests with or without Shared Access Signature
  • Server errors
  • Timeout errors
  • Authorization, network, throttling errors
  • ...

Each log entry contains helpful information like:

  • LogType (write, read, delete)
  • StartTime 
  • EndTime
  • LogVersion (reserved for the future; at this moment there is only one version – 1.0)
  • Request URL
  • Client IP

The most useful information is usually found under 'Client IP' and 'Request URL'. You may ask yourself why we have both a start time and an end time. This can be very useful for a read request, for example: from these two values we can tell how long the download process took.
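Extracting these fields is a matter of splitting each log line on ';'. A rough sketch, assuming the field order of the 1.0 log format (the positions below should be checked against the Storage Analytics log format documentation before relying on them):

    string[] fields = logLine.Split(';');

    string operationType   = fields[2];  // e.g. GetBlob
    string httpStatusCode  = fields[4];  // e.g. 200
    string endToEndLatency = fields[5];  // duration of the request, in ms
    string requestUrl      = fields[11]; // the full request URL
    string clientIp        = fields[15]; // requester IP (may include the port)

From here, grouping the entries by client IP and counting the failed status codes gives you exactly the kind of 'more than 10 failures from the same source' alert mentioned earlier.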

I invite you to explore this feature when you need to track the clients that access your blob resources.
