This week I had the opportunity to work on a PoC that was pretty challenging. In a very short period of time we had to test our ideas and come up with some results. When you need to do something like this, you have to:
- Design
- Implement
- Test
- Measure (Performance Test)
The first two steps were pretty straightforward; we didn't have any kind of problems. Testing of the solution also went well – we found only some small issues that were resolved easily.
Then we started the performance test and were hit by a strange behavior. The database server ran at 100% load for 3-4 minutes, after which the load would drop to 0% for 5-6 minutes. This cycle repeated endlessly.
The load of the database should have been 100% all the time… We looked over the backend server and everything seemed okay: requests were being received from the client bots and processed. Based on the load of the backend, everything should have been fine.
The next step was to look over the client bot machines. Based on the tracking information, everything should have been fine… but we still had the strange behavior in the database; something was not right.
We started to take each part of the solution and debug it – the SQL stored procedures, the backend and the client bots. When we looked at the client bots we noticed something strange there: for each request we received at least two different responses.
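The post doesn't show the bot code, but the kind of check that would have caught this early is simple: tag every request with an ID and flag any request that gets more than one response. A minimal sketch in Python (the `request_id` field and the response list are hypothetical stand-ins, not the actual PoC code):

```python
# Minimal sketch: detect requests that receive more than one response.
# The request_id field and the response records are hypothetical;
# the real PoC stack (client bots / backend) is not shown in the post.
from collections import Counter

def find_duplicate_responses(incoming_responses):
    """Count responses per request id and report the ones seen more than once."""
    counts = Counter(resp["request_id"] for resp in incoming_responses)
    return {req_id: n for req_id, n in counts.items() if n > 1}

if __name__ == "__main__":
    responses = [
        {"request_id": "r1", "payload": "..."},
        {"request_id": "r2", "payload": "..."},
        {"request_id": "r1", "payload": "..."},  # second response for the same request
    ]
    print(find_duplicate_responses(responses))  # {'r1': 2}
```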
After an hour of debugging we found out that we had two different bugs. The interesting part was that one of the bugs created a behavior that masked the other one. Because of this, on the backend we had the impression that we had the expected behavior and that the clients worked well.
The second bug, the one masked by the first, was big and pretty ugly.
In conclusion, I would say that even when you write a PoC and you don't have enough time, try to test with one, two and three clients in parallel. We tested with 1, 10 and 100 clients. Because of the flood of logs for 10 and 100 clients, we were not able to observe the strange behavior before starting the performance testing.
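To make the "start small" advice concrete, here is a rough sketch of ramping the client count from 1 to 2 to 3 before jumping to 10 or 100, so the logs of a single misbehaving client stay readable. `run_client` is a hypothetical stand-in for a real client bot, not the code used in the PoC:

```python
# Run the same scenario with 1, 2 and 3 parallel clients before scaling up,
# so a single client's misbehavior is still visible in the logs.
import concurrent.futures

def run_client(client_id: int) -> str:
    # Placeholder for the real bot logic (send requests, record responses).
    return f"client {client_id}: ok"

def run_scenario(num_clients: int) -> None:
    print(f"--- running with {num_clients} client(s) ---")
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_clients) as pool:
        for result in pool.map(run_client, range(num_clients)):
            print(result)

if __name__ == "__main__":
    for n in (1, 2, 3):  # ramp up slowly before trying 10 and 100
        run_scenario(n)
```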