
Why our Integration Tests were down

For some time, one of our projects has had its Integration Tests on RED. Some tests seemed to fail randomly, and this usually happened only on the CI machine (Visual Studio Team Services).
The strangest thing was the behavior: it could not be reproduced on the development machine. The reported errors were caused by asserts or strange exceptions - for example, that an external resource doesn't exist.
From time to time, the error could be reproduced on the local machine, but only once. The failure could not be reproduced twice on the same machine. After a few sprints and a lot of time invested in this issue, we still had the same problem.

After a review of the problem, the code, and how the tests were written, the root causes were identified and isolated. Let's see what the steps and the causes of this problem were.

Step: Isolate the integration tests in a dedicated build.
Why: You want to reduce the build time as much as possible and run only the tests that you want, without affecting the rest of the team. You need an 'isolated' environment where you can play and test different configurations without sending hundreds of email notifications to the team.
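
One way to get this isolation with MSTest is to tag the integration tests with a test category and filter on it in the dedicated build definition (for example, a 'TestCategory=Integration' filter in the VSTS test task). The class and method names below are only illustrative:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderProcessingIntegrationTests
{
    // Runs only in the dedicated integration build; the normal CI build
    // excludes this category, so it stays fast and quiet for the team.
    [TestMethod]
    [TestCategory("Integration")]
    public void ProcessOrder_PersistsTheOrder()
    {
        // ... integration test body ...
    }
}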

Step: Don't try to resolve all the issues at the same time.
Why: Take each problem separately, analyze it, and fix it. Only after you have fixed it, move on to the next one.

Step: Understand what the test is testing.
Why: If you want to resolve a problem, you need to understand what you want to achieve - in our case, what the purpose of the test is.

Cause: Refactoring Unit Tests too much to reduce duplication.
Why: Yes, it is nice to have fewer lines of code, but in tests this can be a killer. Once you have reduced the number of lines of code in the unit tests by extracting common methods, every change that you make to that shared code can affect or alter the behavior of all the tests that are using it.
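
A hypothetical sketch of this trap, assuming a simple Order type: two tests share an extracted CreateOrder() helper, so a change made to satisfy one test silently changes what the other test is verifying.

private static Order CreateOrder()
{
    // Changed from Quantity = 1 to make the discount test pass...
    return new Order { Quantity = 10, UnitPrice = 5m };
}

[TestMethod]
public void TotalPrice_IsQuantityTimesUnitPrice()
{
    Order order = CreateOrder();
    // ...but this assertion was written against Quantity = 1 and now fails.
    Assert.AreEqual(5m, order.TotalPrice);
}

[TestMethod]
public void VolumeDiscount_IsAppliedForLargeOrders()
{
    Order order = CreateOrder();
    Assert.IsTrue(order.HasVolumeDiscount);
}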

Cause: Sharing resources between tests from the same test class.
Why: You will end up with one test removing or altering a resource that is needed by another test. A good example here is a test that covers a special flow that ends with an error. To be able to trigger that error, you might change a configuration that is needed by another test.
To avoid this kind of scenario when sharing resources between tests, you might want to run a preparation or cleanup step before and after each test - the main purpose of this step is to reset the environment to the 'standard' setup, as in the sketch below.
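
With MSTest, this reset step can live in [TestInitialize] and [TestCleanup] methods, which run before and after each test in the class; TestConfiguration.ResetToDefaults() below is a hypothetical helper:

[TestClass]
public class ErrorFlowTests
{
    // Runs before EACH test: start from the 'standard' setup.
    [TestInitialize]
    public void Setup()
    {
        TestConfiguration.ResetToDefaults(); // hypothetical helper
    }

    // Runs after EACH test: undo any change the test made,
    // e.g. a configuration altered to force an error flow.
    [TestCleanup]
    public void Cleanup()
    {
        TestConfiguration.ResetToDefaults();
    }
}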

Cause: Trying to fix the tests before understanding what the test is testing.
Why: Very often we jump to the solution and change the code - the classical trial & error mode - before doing the investigation. This can cost us a lot of time and, on top of that, it can add more bugs and problems to the system, making it even more unstable.

Cause: Testing an async method in the same way as a sync method.
Why: In this case, this was the root of the strange behavior of the system. Depending on the load of the machine and the test order, the async method that was called did or did not have enough time to execute before the test ended. A few years ago I wrote a post about this topic, where you can find more information about how you can test an async method: http://vunvulearadu.blogspot.ro/2013/04/how-to-write-unit-tests-for-async.html
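
A minimal sketch of the problematic pattern, where service.DoWorkAsync() is a hypothetical stand-in for the async method under test: the test method returns before the asynchronous work completes, so the outcome depends on machine load and test order.

[TestMethod]
public void SomeTest_WrongWay()
{
    // Fire-and-forget: the returned Task is never awaited, so the work
    // (and any exception it throws) may happen after the test has ended.
    service.DoWorkAsync();
}

The fix is to declare the test as async Task and await the call: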

[TestMethod]
public async Task SomeTest()
{
    // Awaiting keeps the test running until the asynchronous
    // work has actually completed.
    await service.DoWorkAsync();
}

In our case the main causes were:

  • Testing async methods in the wrong way
  • Refactoring Unit Tests too much

In conclusion, I would say that if you run into strange behavior, don't try to resolve it before understanding the root cause of the problem.
