
Migrate data from Azure to on-premises data center using Azure Data Factory

In today's post we will talk about Azure Data Factory, but with a different approach. Instead of going through the core features of Azure Data Factory and when you should use it, let's take a real-life example where Azure Data Factory can make our life easier.

Scenario
We have a system hosted on Azure that produces 100 GB of audit data each day. Our client requires this data to be moved to his own data center, where the information will be archived on tape.

Classical implementation    
The use case is pretty simple, but at the same time complex. To offer reliable communication between these two endpoints, you need to ensure that all the content is copied to the client's data center. For this purpose you could use different tools, depending on what kind of storage you have (SQL, binary/raw data, documents and so on).
In general, audit data is stored as raw data in binary files. In Azure you would use Blob Storage; other types of storage like Azure SQL or DocumentDB might be too expensive, especially because you need to store a large amount of data and in 99% of the cases you will never need to run queries over it.
To support this requirement you need a process that copies the content from Azure to on-premises. There is a high chance that you would run this process on the client side, because it is simpler to grant access to Azure Storage using Shared Access Signature (SAS) tokens than to expose the local storage where you want to copy the content.
You can develop your own migration solution, or you can integrate an existing one, paying license costs plus the cost of the VM that runs the migration. A minimal sketch of the do-it-yourself approach follows.
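To make the classical approach concrete, here is a minimal sketch using the WindowsAzure.Storage .NET SDK. The container name ("auditdata"), the target folder and the 24-hour token lifetime are assumptions for illustration, not values from the scenario:

using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Sketch of the do-it-yourself copy process. Container name, folder
// and expiry time are hypothetical placeholders.
class AuditCopyJob
{
    // Executed by the party that owns the storage account: issues a
    // read/list SAS token so the client never receives the account key.
    static string GetAuditContainerSas(string connectionString)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobContainer container = account
            .CreateCloudBlobClient()
            .GetContainerReference("auditdata");

        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read
                        | SharedAccessBlobPermissions.List,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(24)
        };
        return container.GetSharedAccessSignature(policy);
    }

    // Executed inside the client data center: downloads every blob
    // from the container using only the SAS token.
    static void DownloadAll(string containerUrl, string sasToken, string targetFolder)
    {
        var container = new CloudBlobContainer(new Uri(containerUrl + sasToken));
        foreach (IListBlobItem item in container.ListBlobs(null, useFlatBlobListing: true))
        {
            var blob = (CloudBlockBlob)item;
            // Flatten virtual directories ('/') into plain file names.
            string target = Path.Combine(targetFolder, blob.Name.Replace('/', '_'));
            blob.DownloadToFile(target, FileMode.Create);
        }
    }
}

Everything around this sketch is what you would have to build and operate yourself: retries, tracking which blobs were already copied, alerting on failures and so on.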

Azure Data Factory in Action
Another approach is to use Azure Data Factory. This Azure service provides a 'Copy Wizard' that can automatically copy content from one place to another.
At this moment there is full support for the following data stores as the source of the copy:
  • Azure Blob
  • Azure Table
  • Azure SQL Database
  • Azure SQL Data Warehouse
  • Azure DocumentDB
  • Azure Data Lake Store
  • SQL Server On-premises/Azure IaaS
  • File System On-premises/Azure IaaS
  • Oracle Database On-premises/Azure IaaS
  • MySQL Database On-premises/Azure IaaS
  • DB2 Database On-premises/Azure IaaS
  • Teradata Database On-premises/Azure IaaS
  • Sybase Database On-premises/Azure IaaS
  • PostgreSQL Database On-premises/Azure IaaS
  • ODBC data sources on-premises/Azure IaaS
  • Hadoop Distributed File System (HDFS) On-premises/Azure IaaS
  • OData sources
  • Web table
and for the following as the destination:

  • Azure Blob
  • Azure Table
  • Azure SQL Database
  • Azure SQL Data Warehouse
  • Azure DocumentDB
  • Azure Data Lake Store
  • SQL Server On-premises/Azure IaaS
  • File System On-premises/Azure IaaS

For our scenario, the relevant pair is Azure Blob as the source and File System on-premises as the destination; a sketch of such a copy pipeline follows.
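As a rough illustration (not taken from the original scenario), a Copy activity in Azure Data Factory's JSON authoring format could look like the one below. The dataset names and the active period are hypothetical, and the on-premises side additionally requires the Data Management Gateway to be installed:

{
  "name": "CopyAuditToOnPrem",
  "properties": {
    "activities": [
      {
        "name": "BlobToFileShare",
        "type": "Copy",
        "inputs": [ { "name": "AuditBlobDataset" } ],
        "outputs": [ { "name": "OnPremFileDataset" } ],
        "typeProperties": {
          "source": { "type": "BlobSource" },
          "sink": { "type": "FileSystemSink" }
        },
        "scheduler": { "frequency": "Day", "interval": 1 }
      }
    ],
    "start": "2016-08-01T00:00:00Z",
    "end": "2017-08-01T00:00:00Z"
  }
}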

All the mapping and configuration can be done directly from the Azure Data Factory portal: the mapping between source and destination storage, as well as which part of the data we want to move.
On top of this, we can give Azure Data Factory a schedule that controls in which part of the day the content is moved from one place to another. A pipeline task can be executed only once or on a specific recurrence, as sketched below.
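The recurrence is driven by the 'scheduler' element of the activity together with the 'availability' section of the output dataset, while a one-time pipeline sets 'pipelineMode' to 'OneTime' instead. A hypothetical daily output dataset pointing to the on-premises file share could look like this (folder path and linked service name are placeholders):

{
  "name": "OnPremFileDataset",
  "properties": {
    "type": "FileShare",
    "linkedServiceName": "OnPremFileServerLinkedService",
    "typeProperties": {
      "folderPath": "D:\\archive\\audit\\{Year}\\{Month}\\{Day}",
      "partitionedBy": [
        { "name": "Year", "value": { "type": "DateTime", "date": "SliceStart", "format": "yyyy" } },
        { "name": "Month", "value": { "type": "DateTime", "date": "SliceStart", "format": "MM" } },
        { "name": "Day", "value": { "type": "DateTime", "date": "SliceStart", "format": "dd" } }
      ]
    },
    "availability": { "frequency": "Day", "interval": 1 }
  }
}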

Monitoring
Every action is audited; state changes and other events are logged and can be monitored from the monitoring component. This web component allows us to see the history of the activities that ran in our Azure Data Factory and to detect anomalies (in our case, a failed copy). This monitoring tool is powerful and can also be used successfully for reporting.
For example, for activities that are running at the moment you access the app, you can see the real status of each activity (Waiting, InProgress, Failed, Ready, Skipped, None).

Conclusion
Azure Data Factory is a better solution than the classical implementation for copying and moving content from one location to another. It is a native, out-of-the-box service that can be configured in a few minutes, with no maintenance, licensing or extra costs: you pay only for what you use, when you use it.
