Refactoring in the Maintenance phase

In this blog post I would like to talk about the maintenance phase of an application, especially from a developer's perspective.
Usually, a software product has the following life cycle:

  • Requirements Definition
  • System and Software Design 
  • Implementation and Unit Testing
  • Integration and System Testing
  • Operation and Maintenance

In the last phase, you already have a working product that was delivered to the client and is being used by real users. A big problem at this stage is the technical debt, which not only exists but will increase with every bug that you fix.
When a bug is found, almost everybody recommends touching only the part of the application where the issue was found and fixing the problem with the minimum amount of changes. The problem with this approach is the technical debt and the garbage that you have to carry with you.
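A hypothetical illustration of such a "minimum change" fix in C# (the invoice types and the bug itself are invented for the example): the symptom is clamped at the call site instead of correcting the discount rule, and that shortcut is exactly the debt that stays behind after the bug is closed.

```csharp
using System.Collections.Generic;
using System.Linq;

public class InvoiceLine
{
    public int Quantity { get; set; }
    public decimal UnitPrice { get; set; }
}

public class Invoice
{
    public List<InvoiceLine> Lines { get; set; } = new List<InvoiceLine>();
    public decimal Discount { get; set; }
}

public class BillingService
{
    public decimal GetInvoiceTotal(Invoice invoice)
    {
        var total = invoice.Lines.Sum(l => l.Quantity * l.UnitPrice) - invoice.Discount;

        // Quick fix for the reported bug: a discount bigger than the subtotal
        // produced a negative total. The value is clamped here instead of
        // correcting the discount rule itself - this shortcut is the debt
        // we carry forward with every release.
        if (total < 0)
        {
            total = 0;
        }

        return total;
    }
}
```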
What happens with the dirty fixes that are made at this step? Who will refactor these parts of the application and clean up the code, and when?
A non-technical client will say that he doesn't care about this; it is your problem to resolve these issues and everything related to them. In the end, he will not pay for additional tasks.
At the same time, a refactoring or a clean-up would mean changing a part of the code (system), which would require a full test of the application. This may not be possible.
Going back to the original question: when and how should we do this refactoring?
There are two possible things that can be done.
The first one is the refactoring that should be done at the moment when a solution for the bug is found. At that moment, before going live and sending the client a notification that the bug was fixed, you should look over the code and refactor the code that was changed for the current issue, as in the sketch below.
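Continuing the hypothetical invoice example from above (same Invoice and InvoiceLine types), a small, local refactoring of only the code that was touched by the fix might look like this: the clamp is extracted into a named method so the rule is visible and testable, without changing the behaviour that was just shipped.

```csharp
using System;
using System.Linq;

public class BillingService
{
    public decimal GetInvoiceTotal(Invoice invoice)
    {
        var subtotal = invoice.Lines.Sum(l => l.Quantity * l.UnitPrice);
        return ApplyDiscount(subtotal, invoice.Discount);
    }

    // The fix now lives in one named place: a discount can never push the
    // total below zero. The behaviour is the same as the quick fix above,
    // but the intent is visible and easy to cover with a unit test.
    private static decimal ApplyDiscount(decimal subtotal, decimal discount)
    {
        return Math.Max(0, subtotal - discount);
    }
}
```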
The second thing that can be done, when we observe technical debt that doesn't affect the application's functionality, is to add the refactoring task to a queue. All the tasks in this queue should be prioritized and estimated, and the risks calculated (see the sketch below).
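A minimal sketch of what such a queue could look like in C#; the field names, the risk scale and the ordering rule are assumptions for illustration, not a prescribed format.

```csharp
using System.Collections.Generic;
using System.Linq;

public enum RefactoringRisk { Low, Medium, High }

public class RefactoringTask
{
    public string Description { get; set; }
    public int Priority { get; set; }          // 1 = most urgent
    public double EstimateInDays { get; set; }
    public RefactoringRisk Risk { get; set; }
    public bool ApprovedByClient { get; set; }
}

public static class RefactoringBacklog
{
    // Only client-approved items are planned: highest priority first,
    // lower-risk items before riskier ones when priorities are equal.
    public static IEnumerable<RefactoringTask> NextCandidates(IEnumerable<RefactoringTask> backlog)
    {
        return backlog
            .Where(t => t.ApprovedByClient)
            .OrderBy(t => t.Priority)
            .ThenBy(t => t.Risk);
    }
}
```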
For example, let's suppose that your application contains an algorithm that is critical for your client (it calculates the winning chances in a casino) and is working as expected, but you don't like the way it was written. In this particular case you should not touch and change it just because you think it would be better another way. You could cause a lot of damage on the client side.
Also, don't forget that all refactoring should be approved by the client; in the end, it is his code and not yours. We can compare the application code with a garden: you cannot move a tree or cut the grass without the approval of the owner. The same goes for the code.
In conclusion, I would say that even if the temptation to change and refactor the code is high, you should take a step back and wait for the client's approval. It is very good to be proactive and to come up with new ideas and designs, but at the same time you should wait for the green light from the client.
