
Bugs that cover each other

This week I had the opportunity to work on a PoC that was pretty challenging. In a very short period of time we had to test our ideas and come up with some results. When you need to do something like this, you have to:

  • Design
  • Implement
  • Test
  • Measure (Performance Test)

The first two steps were pretty straightforward; we didn't have any kind of problems. Testing of the solution also went well: we found a few small issues that were resolved easily.
Then we started the performance test, and we were hit by a strange behavior. The database server would run at 100% load for 3-4 minutes, after which the load would drop to 0% for 5-6 minutes. This cycle kept repeating indefinitely.
The database load should have been at 100% all the time… We looked at the backend servers and everything seemed okay: requests from the client bots were being received and processed, and based on the backend load everything should have been fine.
The next step was to look at the client bot machines. Based on the tracking information everything seemed fine there too… But we still had strange behavior in the database; something was not right.
We started to take each part of the solution and debug it, from the SQL stored procedures to the backend and the client bots. When we looked at the client bots we noticed strange behavior there: for each request we received at least two different responses.
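That pattern is easy to flag once you know to look for it. Below is a minimal sketch in Python, assuming a hypothetical log format of one request id per response received, of the kind of check that would have exposed the duplicate responses on the client bots:

    from collections import Counter

    def find_duplicate_responses(response_log):
        # response_log: iterable of request ids, one entry per response received.
        counts = Counter(response_log)
        return {request_id: n for request_id, n in counts.items() if n > 1}

    # Example: request "r2" was answered twice, which is exactly
    # the kind of anomaly we saw on the client bots.
    log = ["r1", "r2", "r3", "r2"]
    print(find_duplicate_responses(log))   # {'r2': 2}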
After an hour of debugging we found out that we had two different bugs. The interesting part was that one of the bugs created behavior that masked the other one. Because of this, on the backend we had the impression that everything behaved as expected and that the clients were working well.
The second bug, the one masked by the first, was big and pretty ugly.
In conclusion, I would say that even when you write a PoC and you don't have enough time, try to test with one, two, and three clients in parallel. We tested with 1, 10, and 100 clients, and because of the flood of logs at 10 and 100 clients we were not able to observe the strange behavior before the performance testing started.
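For what it's worth, here is a sketch of the kind of low-volume smoke test I mean. It is Python, and send_request is a hypothetical stand-in for the real client bot call; the point is only that with 1, 2, and 3 clients the output stays small enough to assert that every request gets exactly one response:

    import concurrent.futures

    def send_request(client_id, request_id):
        # Hypothetical stand-in for the real client bot call;
        # returns the list of responses received for one request.
        return ["response-to-" + request_id]

    def run_clients(num_clients, requests_per_client=5):
        with concurrent.futures.ThreadPoolExecutor(max_workers=num_clients) as pool:
            futures = {
                pool.submit(send_request, c, "c%d-r%d" % (c, r)): "c%d-r%d" % (c, r)
                for c in range(num_clients)
                for r in range(requests_per_client)
            }
            for future, request_id in futures.items():
                responses = future.result()
                # With so few clients the logs stay readable, so a duplicate
                # (or missing) response is immediately visible.
                assert len(responses) == 1, request_id + " did not get exactly one response"

    for n in (1, 2, 3):
        run_clients(n)
        print("%d client(s): OK" % n)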
