
Azure Site Recovery (Day 30 of 31)

List of all posts from this series: http://vunvulearadu.blogspot.ro/2014/11/azure-blog-post-marathon-is-ready-to.html

Short Description 
Azure Site Recovery offers us a way to manage and control the replication and recovery of our machines. We have the ability to replicate our entire data center. Using this service you can replicate your data center to Azure, without having to reserve resources in advance.


Main Features 
Hybrid Solution
You have the ability to replicate virtual machines from on-premises data centers to Azure or to another hosting provider.
Custom Replication Policy
You have full control over the policies that are used to create the replications. In this way we can manage the replication use cases.
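As a hint of what such a policy can contain, here is a minimal C# sketch with the kind of settings typically involved (copy frequency, extra recovery points, application-consistent snapshots). The class and property names are hypothetical, chosen only for illustration; this is not the real Site Recovery object model.

using System;

public class ReplicationPolicy
{
    public string Name { get; set; }
    public TimeSpan CopyFrequency { get; set; }                  // how often changes are replicated
    public int AdditionalRecoveryPoints { get; set; }            // extra restore points to keep
    public TimeSpan AppConsistentSnapshotFrequency { get; set; } // snapshots usable for app-level recovery
    public bool CompressDataInTransit { get; set; }
}

class Program
{
    static void Main()
    {
        // Example: replicate every 5 minutes, keep 24 extra recovery points,
        // take an application-consistent snapshot every hour.
        var policy = new ReplicationPolicy
        {
            Name = "production-vms",
            CopyFrequency = TimeSpan.FromMinutes(5),
            AdditionalRecoveryPoints = 24,
            AppConsistentSnapshotFrequency = TimeSpan.FromHours(1),
            CompressDataInTransit = true
        };

        Console.WriteLine("Policy '" + policy.Name + "' replicates every "
            + policy.CopyFrequency.TotalMinutes + " minutes.");
    }
}
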
Cross-Technologies 
Site Recovery does not add another replication mechanism on top of the existing ones; it only orchestrates the existing ones. Technologies like SQL Server AlwaysOn, System Center, SAN replication or Hyper-V can be used.
Ability to set Azure as disaster recovery site
Having a second site for disaster recovery can be extremely expensive. Using Azure as the second site for recovery purposes, we can keep the cost at a low level.
Real time health monitoring
Site Recovery monitors the current health of your data centers and virtual machines. In this way you know what the current state of your system is and what kind of actions you should trigger.
Encrypted Connection and Content
All the communication between your data centers and Azure is encrypted. In this way nobody can steal the content that is sent over the wire. We also have the ability to encrypt the content that is stored on Azure; we are the only ones who can access it.
Recovery Orchestration
Site Recovery allows us to define different actions in a specific order. This feature allows us to recover our system in less time and in the order that we want. The recovery flow that we define can run any kind of script and can also require human intervention at different steps of the recovery plan.
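To illustrate how such an orchestration can be thought of, below is a minimal C# sketch that models a recovery plan as ordered groups of steps, where each step is either an automated script or a manual action that waits for an operator. All type and member names are hypothetical and only conceptual; this is not the actual Site Recovery API.

using System;
using System.Collections.Generic;

public enum StepKind { Script, Manual }

public class RecoveryStep
{
    public string Name { get; set; }
    public StepKind Kind { get; set; }
    public Action Execute { get; set; } // the script to run, used only for Script steps
}

public class RecoveryPlan
{
    // Groups run strictly in order, matching the ordered recovery described above.
    public List<List<RecoveryStep>> Groups { get; } = new List<List<RecoveryStep>>();

    public void Run()
    {
        foreach (var group in Groups)
        {
            foreach (var step in group)
            {
                if (step.Kind == StepKind.Manual)
                {
                    // Pause the plan until an operator confirms the manual action.
                    Console.WriteLine("Manual action required: " + step.Name + " (press Enter when done)");
                    Console.ReadLine();
                }
                else
                {
                    Console.WriteLine("Running script step: " + step.Name);
                    step.Execute();
                }
            }
        }
    }
}

class Program
{
    static void Main()
    {
        var plan = new RecoveryPlan();
        plan.Groups.Add(new List<RecoveryStep>
        {
            new RecoveryStep { Name = "Fail over database tier", Kind = StepKind.Script,
                               Execute = () => Console.WriteLine("databases online") }
        });
        plan.Groups.Add(new List<RecoveryStep>
        {
            new RecoveryStep { Name = "Verify database consistency", Kind = StepKind.Manual },
            new RecoveryStep { Name = "Fail over web tier", Kind = StepKind.Script,
                               Execute = () => Console.WriteLine("web tier online") }
        });
        plan.Run();
    }
}
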
Site for on-premises or hosted on Azure
Azure Site Recovery can be used not only for on-premises sites. We can also use it for recovery plans for sites that are hosted on Microsoft Azure.

Limitations 
The only thing that could create problems is the location where metadata is stored when we replicate on-premises content to another on-premises location. In this use case the metadata is stored by Azure Site Recovery (in the cloud). It is normal to send metadata to this location because Site Recovery orchestrates all the actions, but people may say that we increase the risk of a failure during the recovery action because we have another node that may fail.

Applicable Use Cases 
Below you can find some use cases where I would use Azure Site Recovery.
On-premises Site
When you have an on-premises site and you need a good recovery plan, you can use Azure Site Recovery without any kind of problem to replicate your content to Azure. In this way, if something goes wrong with your on-premises site, you will be able to recover it from Azure.
On-premises Site in Life Care industry
Because of strict regulations in life care, many countries don't allow companies to store data outside the country itself (patient information, for example). In these scenarios we could use Azure Site Recovery to orchestrate the recovery plan on another data center in the same country (region).

Code Sample 
-

Pros and Cons 
Pros

  • Encrypted Content
  • Encrypted Connection
  • Works with on-premises sites
  • 99.9% SLA
  • Supports multiple technologies

Cons
-

Pricing 
If you calculate the costs of Azure Site Recovery, you should take into account the following (a rough cost sketch follows the list below):

  • Number of instances that you want to protect
  • Storage Size
  • Storage Transactions count 
  • Outbound traffic (from Azure data centers)
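
As a rough illustration of how these factors combine, the C# sketch below multiplies each factor by a unit price. All the prices are placeholders, not actual Azure rates; always check the official pricing page for current values.

using System;

class SiteRecoveryCostEstimate
{
    static void Main()
    {
        int protectedInstances = 20;            // number of machines that we want to protect
        double pricePerInstance = 25.0;         // placeholder: monthly price per protected instance
        double storageGb = 500;                 // replicated data kept in Azure Storage
        double pricePerGb = 0.05;               // placeholder: storage price per GB per month
        double transactions = 3000000;          // storage transactions generated by replication
        double pricePer100KTransactions = 0.01; // placeholder: price per 100,000 transactions
        double outboundGb = 100;                // traffic leaving the Azure data center
        double pricePerOutboundGb = 0.08;       // placeholder: egress price per GB

        double total = protectedInstances * pricePerInstance
                     + storageGb * pricePerGb
                     + (transactions / 100000) * pricePer100KTransactions
                     + outboundGb * pricePerOutboundGb;

        Console.WriteLine("Estimated monthly cost: {0:F2} USD", total);
    }
}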


Conclusion
Azure Site Recovery, in combination with Azure Backup, can be an interesting solution when we want to create a system where data is not lost and recovery is simple and easy.
