
How to use Diagnostic Monitor on Windows Azure

The last update of Windows Azure (from June 2012) brought us a lot of new features, from virtual machines that can run Linux without any problem to a distributed cache that can be stored in-memory or on a dedicated machine.
One new feature that can help us after we deploy a product to Windows Azure is the ability to monitor and track resource utilization in real time. Until now, when we wanted to monitor the resources consumed on Windows Azure, we had to redirect all the Diagnostics Monitor data to Windows Azure tables or blobs. After all this data was written to the data store, we needed tools to interpret it. A lot of free tools exist for this, but they don’t always work as we expect, and every time the information is loaded a lot of data is transferred between Windows Azure and us – this means extra cost.
With the new version of Windows Azure, this mechanism is built in. From the portal we can monitor and track resource utilization in real time. We don’t need to search for tools to view the data and generate reports; the portal does all of this for us.
The moment we create a new cloud service, minimal monitoring is provided to us. This means that we have the following metrics by default:
  • CPU Percentage
  • Data In
  • Data Out
  • Disk Read Throughput
  • Disk Write Throughput
We can add or remove metrics from this configuration at any moment. When we want to make these configurations at role start, we need to use a diagnostics configuration file. All the metric data is stored in our own data store, so we need to specify the storage account that will be used to hold this information.
In the service definition of our role, we need to import the Diagnostics module:
<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="FooService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="FooWebRole">
    <Imports>
      <Import moduleName="Diagnostics" />
    </Imports>
  </WebRole>
</ServiceDefinition>
After this step, in the service configuration of our role we can specify the data store connection string:
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="FooService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="FooWebRole">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
               value="DefaultEndpointsProtocol=https;AccountName=AccountName;AccountKey=AccountKey"/>
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
After this step, all that remains is to create a diagnostics monitor configuration file. For more information, please follow this link: http://msdn.microsoft.com/en-us/library/hh411551.aspx
By default, if we set the diagnostics monitor level to verbose, data aggregation is done at the following intervals: 5 minutes, 1 hour and 12 hours. All information older than 10 days is deleted automatically. The information is transferred from the role instances every 3 minutes.
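As a sketch of what such a file can contain, a diagnostics.wadcfg placed in the role project might look like the example below. The counter names, quotas and sample rates here are only illustrative values, not the defaults:
<?xml version="1.0" encoding="utf-8"?>
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    configurationChangePollInterval="PT1M"
    overallQuotaInMB="4096">
  <!-- Transfer collected counters to the storage account every 3 minutes -->
  <PerformanceCounters bufferQuotaInMB="512" scheduledTransferPeriod="PT3M">
    <!-- CPU Percentage, sampled every minute -->
    <PerformanceCounterConfiguration
        counterSpecifier="\Processor(_Total)\% Processor Time"
        sampleRate="PT1M" />
    <!-- Available memory, an extra metric beyond the default ones -->
    <PerformanceCounterConfiguration
        counterSpecifier="\Memory\Available MBytes"
        sampleRate="PT1M" />
  </PerformanceCounters>
</DiagnosticMonitorConfiguration>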

With all these configurations in place on our instances, we can access the portal at any time and change the configuration. For each cloud service we can go to the “CONFIGURE” panel and change the monitoring settings. From there we can change the level of monitoring, the retention interval (how many days data is stored before it is purged) and the connection string to the data store. Different configurations can be made for the production and staging environments. All these configurations can also be done from the configuration file before deploying the solution to the cloud. Note that any configuration made on the cloud service will not persist when we make a new deployment to a new service instance.

A nice feature of the monitoring dashboard in Windows Azure is the ability to add any kind of metric. This is very useful when we are debugging a cloud service and realize that we need to monitor additional metrics; at that moment we can open the monitoring dashboard and add the metrics that we need.
Let’s see how this data is stored in Azure tables. A separate table is created per aggregation interval and aggregation level. The naming format of each table can look pretty odd at first, but once we understand it we won’t have any problem reading it:
WAD[deploymentId]PT[aggregationInterval][R|RI]Table
where:
  • WAD – the default prefix added to the table name
  • deploymentId – the id of the deployment
  • PT – a delimiter (also the ISO 8601 duration prefix, as in PT5M)
  • aggregationInterval – the aggregation interval (possible values: 5M, 1H, 12H)
  • R|RI – the aggregation type (R for aggregation at role level, RI for aggregation per role instance)
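For example, for a hypothetical deployment id of 1a2b3c, we could see tables named like this:
WAD1a2b3cPT5MRTable – role-level aggregation over 5 minute intervals
WAD1a2b3cPT1HRITable – per role instance aggregation over 1 hour intervals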
From now on, we shouldn’t be intimidated when we see a monitoring table name in Windows Azure Tables.
The easiest thing to do is to look at the monitoring metrics report displayed in the monitoring panel. I don’t think anyone will have problems using and understanding this information.
In this post we saw how easy it is to use the monitoring features of Windows Azure. With the new version of Windows Azure, not only is configuring diagnostics monitoring simple, but reading all this data is very easy too. On top of this, the diagnostics configuration can be changed at runtime without any problem.
