Where can I back up my content - Azure File Storage

Nowadays taking pictures is so easy that you can end up with 20-30 shots per day. After a few years of using external HDDs as backups, I realized that it is time for a change.
Until now I used OneDrive to back up the pictures from my phone, plus 3 external HDDs kept in mirror. This setup works great as long as you have enough storage on OneDrive and on your external HDDs.
But now my OneDrive storage is 99.8% full (out of 30GB) and the external storage is almost full as well.

What do I need?
I looked for a solution that is cheap, can be accessed from anywhere and doesn't require me to install custom software on my machine to access it. The ideal solution would let me attach the storage as a partition.
Buying external HDDs is not a solution for me anymore. I would prefer a cloud storage provider that can offer me reliable storage for a good price.

Azure Offer
An interesting offer comes from Microsoft Azure, more exactly from Azure File Storage. The service is part of Azure Storage and allows us to access Azure Storage as a file system, with the classic folders, files and so on.

Azure File
The concept is simple and powerful. Under your Azure Subscription you can create a Storage Account that can be used to store files under a directory structure. You can upload, access and remove content from any location.
From a technical perspective, each 'drive' (if we can call it that) that you create under a Storage Account is called a Share. The maximum size of a Share is 5TB, which is more than enough. Under a Share you can have as many directories as you want. The number of files is not limited, but the maximum size of a single file is 1TB. When you create a new file share you need to specify a quota (the maximum size of the file share). This quota can be changed at any time without having to recopy the content, as the sketch below shows.
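
To give an idea of how this looks in practice, here is a minimal sketch using the cross-platform Azure CLI (assuming it is installed and you are logged in; the account name, share name and quotas are placeholders of my choosing):

# Create a file share named "pictures" with a 1TB (1024GB) quota;
# the account key can be passed with --account-key or the AZURE_STORAGE_KEY variable
az storage share create --account-name mystorageaccount --name pictures --quota 1024

# Raise the quota later, up to the 5TB limit, without recopying any content
az storage share update --account-name mystorageaccount --name pictures --quota 5120
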
For my personal needs this is more than enough, and if I ever need more than 5TB, I can create multiple Shares under the same Storage Account.

Price
This is one of the things that attracted me. There are multiple tiers you can use: from 3 replicas in the same Azure Region, you can go up to more complex ones where the content is also replicated in another Azure Region, with read access available there (RA-GRS).
Based on how often you access your data, there are two tiers to choose from: Cool or Hot. The Hot tier is used when the content is accessed and updated often. For backup scenarios like mine, the Cool tier is the better option, because this kind of backup is not accessed often; in my case, I don't read the content more than once per month.
For this kind of backup the best solution is Cool storage that is Geo-Redundant (GRS). This means the storage is replicated in two different Azure Regions, and in each Azure Region there are 3 different copies of your content.
When you calculate the cost you need to take into account 3 different things:

  • Storage cost (€1.69 per month for 100GB)
  • Transaction cost (€0.16 or €0.0084 per 100,000 transactions, depending on the transaction type)
  • Data read (€0.0084 per GB) or data write (€0.0042 per GB)
For normal use I calculated that I will never pay more than about €1.85 per month for 100GB. This means the cost per year is around €22, in a context where the content is replicated in two different locations and is accessible from anywhere. A quick back-of-the-envelope check is below.
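
A rough calculation based on the prices above (my own assumptions: one initial upload of 100GB and roughly 100,000 transactions per month, billed at the more expensive rate):

# Back-of-the-envelope monthly cost for 100GB in the Cool GRS tier
awk 'BEGIN {
    storage = 100 * 0.0169      # EUR 1.69 per 100GB per month
    tx      = 0.16              # ~100,000 transactions at the expensive rate
    write   = 100 * 0.0042      # one-time cost of uploading the 100GB
    printf "first month: %.2f EUR, after that: %.2f EUR/month, ~%.2f EUR/year\n",
           storage + tx + write, storage + tx, (storage + tx) * 12
}'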

Content access
The content can be accessed directly from the Azure Portal, but of course you don't want to do something like this when you have 50,000 files.
On top of this we have Microsoft Azure Storage Explorer. This nice tool allows us to access the content directly from the desktop, with full support for Windows, Linux and Mac. The experience is nice and similar to the one you have in File Explorer.
To access your content from this tool you can use your Azure account credentials or the Azure Storage account name and access key. If this is your first time using Azure Storage, I recommend checking the nice tutorial provided by Microsoft - About Azure storage accounts.
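
The same operations can also be scripted. A small sketch with the Azure CLI (again, the account name, share name and paths are placeholders; the target directory has to exist before the upload):

# Create a directory on the share, upload one file into it, then list it
az storage directory create --account-name mystorageaccount --share-name pictures --name 2016-summer
az storage file upload --account-name mystorageaccount --share-name pictures --source ./IMG_0001.jpg --path 2016-summer/IMG_0001.jpg
az storage file list --account-name mystorageaccount --share-name pictures --path 2016-summer --output table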

This is not all. You can attach the file storage as a partition on your system, with only one command executed at the command line. This magic can happen because Azure File Storage has support for the SMB 3.0 protocol. This means that you can mount the file share on Windows 7+ or Windows Server 2008+ operating systems, or on Linux machines.
// Windows machine
net use v: \\zzzz.file.core.windows.net\pictures /u:zzzz azurestoragekey==

// Linux machine (requires the cifs-utils package)
sudo mount -t cifs //zzz.file.core.windows.net/pictures [mount point] \
     -o vers=3.0,username=zzzzz,password=azurestoragekey==,dir_mode=0777,file_mode=0777

Simple as that, you can attach Azure File Storage as a partition.
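
If you want the Linux mount to survive a reboot, a common approach (my assumption here - this is standard CIFS practice rather than something Azure-specific) is an /etc/fstab entry; storing the key in a root-only credentials file is safer than putting it inline:

# /etc/fstab entry, all on a single line; [mount point] stays whatever you chose above
//zzz.file.core.windows.net/pictures  [mount point]  cifs  vers=3.0,username=zzzzz,password=azurestoragekey==,dir_mode=0777,file_mode=0777  0  0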

Conclusion
Azure File Storage is a good option if you need to back up your content. The price per month is low, and being able to attach it as a partition on my computer makes this solution, for me, the best one on the market.

Comments

  1. Sounds too good to be true - I wonder what the terms & conditions say about long-term data storage - are they committed to storing that data for ~10 years, or was this designed just as a short-term storage solution? :)

  2. As long as this service exists we will be able to back up our content in this way. I'm pretty excited about this backup solution; it is not as powerful and easy to use as OneDrive, which is aimed at consumers, but it is perfect for me.

  3. Flickr offers 1TB for free. Then again, who knows what will happen to it with all the turbulence around Yahoo...

     Reply: Yes, they are a good option if you only have pictures. If you also have documents or other materials, then you might want to use something else.
     There are a lot of options on the market, free or cheap. I tried to look for a solution where the backup is guaranteed.

