
Azure Storage - Data protection from deletion

Let me start with a story. "In 2015 we were running a two-week performance test on our solution, which was hosted in Microsoft Azure. After the performance test finished, we cleaned up all the resources. The clean-up script was built in such a way as to not delete the logs storage. A few days later, I was notified that the subscription was still accruing charges. Forgetting that those were the storage accounts holding our logs, I purged the subscription. Two weeks of performance logs and metrics were lost forever."

Nowadays, we have plenty of mechanisms to avoid such a loss. Let's look at what we can do to avoid ending up in the same situation I was in back in 2015.

(1) Resource Lock

The first thing we should configure is a "Do Not Delete" resource lock at the Azure Storage account level. As long as the resource carries this lock, nobody can delete it.
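A minimal sketch of configuring such a lock with the Azure CLI is shown below. It requires a live subscription, and the resource group and storage account names (`perf-test-rg`, `perflogsstorage`) are placeholders for illustration:

```shell
# Create a CanNotDelete ("Do Not Delete") lock on a storage account.
# Resource group and account names below are placeholders.
az lock create \
  --name do-not-delete-logs \
  --lock-type CanNotDelete \
  --resource-group perf-test-rg \
  --resource-name perflogsstorage \
  --resource-type Microsoft.Storage/storageAccounts
```

Any delete attempt on the account now fails until somebody with the right permissions removes the lock first (`az lock delete --name do-not-delete-logs ...`).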

(1.1) RBAC Configuration - restrict lock access

Having the lock configured ensures that neither you nor anybody else can remove the storage account without removing the lock first. The next step is to ensure that the "Do Not Delete" lock itself can be modified only by the right people.

Using RBAC, we can create roles that do or do not include the Microsoft.Authorization/* or Microsoft.Authorization/locks/* actions. Only users who have these actions assigned can create or modify a lock (the built-in Owner and User Access Administrator roles already include them).
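As an illustration, the sketch below defines a custom role that can manage storage but is explicitly denied the lock actions. The role name, scope, and `<subscription-id>` are placeholders, not values from the original setup:

```shell
# Custom role: manage storage accounts, but no access to resource locks.
# Name, description, and subscription scope are placeholders.
cat > storage-operator-role.json <<'EOF'
{
  "Name": "Storage Operator (No Locks)",
  "IsCustom": true,
  "Description": "Can manage storage accounts but cannot create or delete resource locks.",
  "Actions": [ "Microsoft.Storage/*" ],
  "NotActions": [ "Microsoft.Authorization/locks/*" ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
EOF

az role definition create --role-definition @storage-operator-role.json
```

Strictly speaking, a role whose Actions do not include Microsoft.Authorization/* already lacks lock permissions; listing `Microsoft.Authorization/locks/*` under NotActions makes the intent explicit and keeps the role safe if the Actions list is later broadened.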

(2) Soft Delete

This feature helps us prevent accidental deletion of, or changes to, blobs and containers. Once it is enabled, deleted content is only soft deleted, meaning that if you later decide you want to recover the content, you can do so within the retention period.

There is a retention policy that can be configured depending on your needs. By default, soft-deleted objects are not visible; to view them, you need to pass a specific include attribute to the list operation.
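A sketch of the soft delete workflow with the Azure CLI, again using placeholder account, container, and blob names:

```shell
# Enable blob soft delete with a 14-day retention period (names are placeholders).
az storage account blob-service-properties update \
  --account-name perflogsstorage \
  --resource-group perf-test-rg \
  --enable-delete-retention true \
  --delete-retention-days 14

# Soft-deleted blobs are hidden by default; include them explicitly when listing.
az storage blob list \
  --account-name perflogsstorage \
  --container-name logs \
  --include d

# Recover a soft-deleted blob while it is still within the retention window.
az storage blob undelete \
  --account-name perflogsstorage \
  --container-name logs \
  --name perf-run-01.log
```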

(3) Versioning 

Versioning provides the ability to keep multiple versions of the same storage object. Each time a storage object is modified, a new version of it is created. In the case of deletion, we can restore a previous version of the object.

When blob versioning is active, a deletion is just another event in the blob's history; the full version tree of the deleted blob is retained.
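Enabling versioning and inspecting a blob's version history could look like the following sketch (placeholder names, live subscription required):

```shell
# Enable blob versioning on the storage account (names are placeholders).
az storage account blob-service-properties update \
  --account-name perflogsstorage \
  --resource-group perf-test-rg \
  --enable-versioning true

# List a blob together with its previous versions.
az storage blob list \
  --account-name perflogsstorage \
  --container-name logs \
  --prefix perf-run-01.log \
  --include v
```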

(4) Snapshots

Snapshots provide the capability to create a 'hard', read-only copy of the blob at a point in time. All the content is copied and can be accessed later on. To access a specific snapshot, we append the snapshot's date and time to the blob URI.

Even with snapshots available, we need to be aware that access must be configured so that others cannot remove them.
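A sketch of taking and addressing a snapshot, with placeholder names and an example timestamp for illustration:

```shell
# Take a snapshot of a blob; the service returns the snapshot's timestamp.
az storage blob snapshot \
  --account-name perflogsstorage \
  --container-name logs \
  --name perf-run-01.log

# A specific snapshot is addressed by appending its timestamp as a query
# parameter to the blob URI, e.g. (example timestamp):
# https://perflogsstorage.blob.core.windows.net/logs/perf-run-01.log?snapshot=2021-05-01T10:00:00.0000000Z
```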

(5) Change feed

Used for systems where we need to be notified when a blob is modified. The change feed provides an ordered, durable log of changes that consumers can read to react to blob modifications.

The change feed needs to be used carefully because it can generate a high load on the consumers, and much of the time we don't need to be notified of every change.
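Enabling the change feed is a one-line account update; the feed itself is persisted as append blobs in a dedicated `$blobchangefeed` container. Names below are placeholders:

```shell
# Enable the change feed on the storage account (names are placeholders).
az storage account blob-service-properties update \
  --account-name perflogsstorage \
  --resource-group perf-test-rg \
  --enable-change-feed true

# The change records land as append blobs in the $blobchangefeed container.
az storage blob list \
  --account-name perflogsstorage \
  --container-name '$blobchangefeed'
```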

(6) Point-in-time restore

It gives us the ability to configure a restore policy that protects us from accidental deletion or corruption. Once we know the data was in a consistent state at a given moment, we can restore the block blob data back to that point in time later on.

The functionality works in combination with other features that must be enabled first: Soft Delete, blob versioning, and the change feed.
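A sketch of enabling all the prerequisites together with the restore policy, then performing a restore; all names and the timestamp are placeholders. Note that the restore window (`--restore-days`) must be shorter than the soft delete retention:

```shell
# Enable point-in-time restore together with its prerequisites
# (soft delete, versioning, change feed). Names are placeholders.
az storage account blob-service-properties update \
  --account-name perflogsstorage \
  --resource-group perf-test-rg \
  --enable-delete-retention true --delete-retention-days 14 \
  --enable-versioning true \
  --enable-change-feed true \
  --enable-restore-policy true --restore-days 7

# Restore block blob data to its state at a given moment (example timestamp).
az storage blob restore \
  --account-name perflogsstorage \
  --resource-group perf-test-rg \
  --time-to-restore 2021-05-01T10:00:00Z
```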


Conclusion

Coming back to the story I presented initially, I would have preferred to have a Resource Lock in place and a snapshot created after the performance tests finished. These two items would have let me configure the storage so that accidental deletion could not happen, and the data loss would have been avoided.
