
Payload Replication - Temporary and Master Copies of Data

There are moments when you focus on small things and might miss the bigger, more important ones.

Working on a solution that is distributed across multiple Azure Regions around the world, we had to replicate binary content (files) to all data centers where our solution runs.
The system that pushes this content is an on-premises system. In this context, we decided to use a temporary location where the content is copied first and from which we replicate it to the other regions.

As we can see, the Temp location is used to replicate the content all around the globe. The system is smart enough to balance the load, and if multiple replicas are needed in the same Azure Region, only one copy is made from the Temp location.
Once the replication is done, the binary content in the Temp location is removed automatically.
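
To make the flow more concrete, here is a minimal sketch of the copy-then-cleanup pattern using the Python azure-storage-blob SDK. The account names, container name and connection strings are hypothetical, and a real implementation would wait for each copy to complete before deleting the Temp blob.

```python
# Minimal sketch of the Temp -> regional replicas flow (hypothetical names).
# Assumes the azure-storage-blob package and that the source blob URL is
# readable by the copy operation (in practice, via a SAS token).
from azure.storage.blob import BlobServiceClient

TEMP_CONN = "<temp-storage-connection-string>"        # Temp location, single region
REPLICA_CONNS = {
    "westeurope": "<west-europe-connection-string>",  # one entry per Azure Region
    "eastus": "<east-us-connection-string>",
}
CONTAINER = "payloads"

temp_container = BlobServiceClient.from_connection_string(TEMP_CONN) \
    .get_container_client(CONTAINER)

for blob in temp_container.list_blobs():
    source_url = temp_container.get_blob_client(blob.name).url
    # Fan out: one server-side copy per region that needs the payload.
    for region, conn in REPLICA_CONNS.items():
        replica_blob = BlobServiceClient.from_connection_string(conn) \
            .get_blob_client(CONTAINER, blob.name)
        replica_blob.start_copy_from_url(source_url)
    # Once every replica has the payload, the Temp copy is removed.
    # (A real implementation would poll the copy status before deleting.)
    temp_container.delete_blob(blob.name)
```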

Nothing special so far, until the moment you ask yourself what happens if one of the replicas is corrupted, or something else happens and you lose data (partially or totally).
In such a case, you can of course go back to the master, the on-premises system. But when you work with hundreds of terabytes, checking and recovering the lost replica will take a while.
There is no way for our replicas to fetch the data from an Azure location, even though there was a time when we had this content inside Azure (at the moment when we copied it to the Temp folder).

There is a clear mistake in this flow. We don't have a location that can play the role of the master inside our own system. The on-premises system is an external system that pushes data; we should still keep a master for this data in our own system.

One solution, complex and with development and maintenance costs, is to track all the replicas and, when one of them fails, provide the content from another replica. In this way we don't need extra storage space. On the other hand, the solution is complex and more likely to be buggy.
At the same time, if we take into consideration that there will be a lot of download activity from these replicas, the real cost is not the storage, it is the download traffic.

The simplest solution is to keep, in the same location as the Temp location, a 'Master' for our payloads. This Master can be used in situations when one of our replicas is out of sync. The Master can also be secured easily against unexpected situations if we activate geo-replication.

As we can see, we now have a Master for our content that can be used when one of our replicas gets out of sync.
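
A hedged sketch of what the recovery path could look like with this Master in place, using the same hypothetical names and azure-storage-blob assumptions as above; the point is that an out-of-sync replica is re-filled from the Master inside Azure, without going back to the on-premises system:

```python
# Sketch of re-syncing an out-of-sync replica from the Master (hypothetical names).
from azure.storage.blob import BlobServiceClient

MASTER_CONN = "<master-storage-connection-string>"    # lives next to the Temp location
REPLICA_CONN = "<out-of-sync-replica-connection-string>"
CONTAINER = "payloads"

master = BlobServiceClient.from_connection_string(MASTER_CONN).get_container_client(CONTAINER)
replica = BlobServiceClient.from_connection_string(REPLICA_CONN).get_container_client(CONTAINER)

already_present = {b.name for b in replica.list_blobs()}
for blob in master.list_blobs():
    if blob.name not in already_present:
        # Server-side copy from the Master; no round-trip to on-premises.
        # (As before, the source URL would need a SAS token or public read access.)
        replica.get_blob_client(blob.name).start_copy_from_url(
            master.get_blob_client(blob.name).url
        )
```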
