
Configuration files horror

Nowadays, working with services is pretty simple. Almost anybody can create a WCF service and expose functionality. The same goes for WCF clients: the .NET development environment can generate the client proxy very easily.
When we generate the client proxy, part of the configuration is added to the configuration file. There we will find the URL of the service, which will be changed many times during the development phase: we may have a testing service, a mock service, a development service and so on.
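A typical client endpoint generated by the proxy looks roughly like this (the service address, contract and endpoint names below are illustrative, not from a real project):

<system.serviceModel>
  <client>
    <!-- The address attribute is what developers keep changing:
         localhost, a machine name, a test server, a mock... -->
    <endpoint address="http://localhost:8080/OrderService.svc"
              binding="basicHttpBinding"
              contract="Orders.IOrderService"
              name="BasicHttpBinding_IOrderService" />
  </client>
</system.serviceModel>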
If the client and the service can be hosted on the same machine, then developers will be happy, but the configuration files will be a mess. They will forget that they changed the URL and will commit this change. Some of them will use “localhost”, others will use the machine name.
When you end up in a project with 40, 50 or almost 100 configuration files, changing the URL can become a time-consuming process. On top of that, you cannot use find and replace, because each developer used a different machine name.
What can we do in this case? The simplest solution is to use different configuration files. For some time now, Visual Studio has supported separate configuration files for debug and release, and we can even define custom build configurations. Each developer can have a version of the configuration file that never ends up in source control.
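For web projects this is covered by the built-in XDT config transforms (for app.config-based projects an extension such as SlowCheetah is needed). A sketch of a Web.Release.config transform that swaps only the endpoint address, assuming the illustrative endpoint name and URL from above:

<!-- Web.Release.config: rewrites only the address of the matching endpoint -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.serviceModel>
    <client>
      <endpoint name="BasicHttpBinding_IOrderService"
                address="http://orders.example.com/OrderService.svc"
                xdt:Transform="SetAttributes(address)"
                xdt:Locator="Match(name)" />
    </client>
  </system.serviceModel>
</configuration>

The Locator matches the endpoint by name, so the transform touches nothing else in the file.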
To help developers, we can create a script that generates the local configuration files for them. This way they will have no excuse for changing the shared configuration files.
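A minimal sketch of such a generator in C#, assuming a checked-in template named App.config.template that contains a {ServiceHost} placeholder (both the file name and the placeholder are hypothetical conventions, and the generated App.config would be excluded from source control):

// GenerateLocalConfig: produces a per-developer config from a template.
using System;
using System.IO;

class GenerateLocalConfig
{
    static void Main(string[] args)
    {
        // Template checked into source control; the generated file is not.
        string template = args.Length > 0 ? args[0] : "App.config.template";
        string output = Path.ChangeExtension(template, null); // strips ".template"

        // Each developer's machine name replaces the placeholder, so nobody
        // edits the file by hand and nobody commits a wrong URL.
        string host = Environment.MachineName;

        string content = File.ReadAllText(template)
                             .Replace("{ServiceHost}", host);
        File.WriteAllText(output, content);
        Console.WriteLine($"Generated {output} for host {host}");
    }
}

Running this as a pre-build step or from a setup script means a fresh checkout always produces a working local configuration.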
