
At a glance: Azure Load Testing Service

Quality metrics are important and need to be measured every time we build and deploy a new version of a system. Resilience, response time, scalability, and application performance are not easy to test when building a cloud solution.
Today's post covers how you can run performance tests on top of Azure using Azure Load Testing. This service, provided by Microsoft, lets us run large performance, scalability, or application quality tests in a controlled and straightforward manner.
With the Azure Load Testing service, a developer or tester can configure and run a load test in just a few minutes, collect the output, and identify the system's bottlenecks.

Azure Load Testing components
Four main components are built around the Azure load and performance testing capability:
(1) Azure Monitor – used to collect information from Azure services
(2) Azure Application Insights – used to collect application data and to provide an easy way to display and track application metrics
(3) Azure Container Insights – used to collect container data and integrate the output with other monitoring systems (Azure Monitor, Azure Application Insights)
(4) Azure Load Testing – used to run the performance and stress tests and orchestrate the load test(s)
One of the nice things the new service provides is a set of dashboards that consolidate all client-side and server-side metrics in one location, including HTTP responses, database load and reads, container resource consumption, and so on.
As expected, there is full integration with Azure Pipelines and GitHub Actions. We can specify a performance baseline as a set of failure criteria and fail the pipeline run when those criteria are not met.
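As a rough sketch of what such criteria look like (the metric names and thresholds below are illustrative and assume the YAML failure-criteria syntax documented for Azure Load Testing):

    failureCriteria:
      - avg(response_time_ms) > 500   # fail the run if the average response time exceeds 500 ms
      - percentage(error) > 5         # fail the run if more than 5% of the requests fail

These entries live in the load test configuration file described in the next section.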

How to run a load test?
The load tests are built using Apache JMeter scripts. We can easily reuse the JMX files we already use for other environments or for on-premises systems if we migrate from on-premises. The JMX files can be uploaded directly in the Azure Portal or pulled from the repository. Another way to run the load tests is based on the classic 'Test method'.
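A minimal sketch, assuming the documented YAML test configuration format of Azure Load Testing (the file names and values below are made up for illustration), of a test definition kept in the repository next to its JMeter script:

    version: v0.1
    testName: webapp-home-page
    testPlan: webapp-home-page.jmx   # the JMeter script uploaded together with the test
    description: Load test for the web application home page
    engineInstances: 2               # number of test engines generating the load
    failureCriteria:                 # the baseline criteria shown earlier plug in here
      - avg(response_time_ms) > 500
      - percentage(error) > 5

The same file can be uploaded through the Azure Portal or referenced from the repository in a CI/CD pipeline, as shown in the next section.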

Pipeline integration
Azure Load Testing offers us the ability to integrate with CI/CD workflows. We can trigger the load testing step after we build and deploy the application into the testing environment.
This step loads the test plan configuration and calls the Azure Load Testing service. Once the load tests have run, the test results, together with all the collected metrics, are pushed back to the CI/CD workflow, and the workflow step passes or fails depending on the test results. The Azure Load Testing dashboards are available after each run to be analyzed by the testing or development team, and additional actions can be registered based on the test results.
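A minimal sketch of this step in an Azure DevOps YAML pipeline, assuming the AzureLoadTest task from the Azure Load Testing marketplace extension (the service connection, resource, and file names are placeholders):

    - task: AzureLoadTest@1
      inputs:
        azureSubscription: 'my-azure-service-connection'       # service connection to the subscription
        loadTestConfigFile: 'loadtests/webapp-home-page.yaml'  # the YAML test configuration from the repo
        loadTestResource: 'my-load-testing-resource'
        resourceGroup: 'my-resource-group'

If the failure criteria defined in the configuration file are not met, the task fails and, with it, the pipeline run. A similar azure/load-testing action is available for GitHub Actions.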

Pricing model
The pricing model has three main components (a rough usage example follows the list):
(1) Resources that are used by your system while you run the load test
(2) Resources used by Azure Load Testing to generate the load and run the test engines
(3) Additional usage of Virtual User Hours
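As a rough, illustrative calculation of the third component (not an official quote): a test that runs 2 engine instances with 100 virtual users each for 30 minutes consumes about 2 x 100 x 0.5 = 100 Virtual User Hours; whatever exceeds the quota included in the monthly base price is billed per additional Virtual User Hour, at the rates listed on the official pricing page.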

Final conclusion

Should we consider reviewing and integrating this service? YES. In the long term, the impact of Azure Load Testing on how you monitor the quality metrics is enormous. Do the integration in small steps and make sure you have defined the business and quality metrics of the system before you start running performance and load tests. Start from business drivers and expectations before jumping into stress tests and quality metric tests.

