
Migrate data from Azure to on-premises data center using Azure Data Factory

In today's post we will talk about Azure Data Factory, but we will try a different approach. Instead of looking at the core features of Azure Data Factory and when you should use it, let's take a real-life example where Azure Data Factory can make our life easier.

We have a system hosted on Azure that produces 100GB of audit data each day. Our client requires this data to be moved to his own data center, where the information will be archived on tape.

Classical implementation    
The use case is pretty simple, but at the same time complex. To offer reliable communication between these two endpoints, you need to ensure that all content is copied to the client's data center. For this purpose you could use different tools, based on what kind of storage you have (SQL, binary/raw data, documents and so on).
In general, audit data is kept as raw data in binary files; in Azure you would use Blob Storage. Other types of storage, like Azure SQL or DocumentDB, might be too expensive, especially because you need to store a large amount of data and in 99% of the cases you will never need to execute queries over it.
To support this feature you need a process that copies the content from Azure to on-premises. There is a high chance that you would run this process on the client side, because it is simpler to grant access to Azure Storage using Shared Access Signature (SAS) tokens than to expose the local storage where you want to copy the content.
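As an illustration only, here is a minimal sketch of how the Azure side could issue a time-limited, read-only SAS over the container that holds the audit files, using the classic WindowsAzure.Storage SDK. The container name 'audit-data' and the connection string are placeholders invented for this post.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class AuditSasGenerator
{
    // Issues a read/list SAS for the container that stores the audit files,
    // so the on-premises copy process never needs the storage account key.
    static string GetAuditContainerSasUri(string connectionString)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobContainer container = account.CreateCloudBlobClient()
                                              .GetContainerReference("audit-data"); // placeholder name

        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.List,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddDays(1) // valid for one day
        };

        // The SAS query string is appended to the container URI and handed to the client.
        return container.Uri + container.GetSharedAccessSignature(policy);
    }
}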
You can develop your own migration solution or integrate an existing one, which means licence costs plus the costs of the VMs that run the migration.
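The on-premises half of such a hand-rolled solution could look roughly like the sketch below: it receives the container SAS URI, lists the audit blobs and downloads them to the local folder that is later archived on tape. Folder layout and error handling are left out; everything here is illustrative, not a production implementation.

using System;
using System.IO;
using Microsoft.WindowsAzure.Storage.Blob;

class AuditDownloader
{
    // Runs in the client's data center: pulls every audit blob reachable through
    // the SAS URI and writes it under the given local folder.
    static void DownloadAuditBlobs(string containerSasUri, string localFolder)
    {
        var container = new CloudBlobContainer(new Uri(containerSasUri));

        foreach (IListBlobItem item in container.ListBlobs(useFlatBlobListing: true))
        {
            var blob = (CloudBlob)item;
            string target = Path.Combine(localFolder,
                                         blob.Name.Replace('/', Path.DirectorySeparatorChar));

            Directory.CreateDirectory(Path.GetDirectoryName(target));
            blob.DownloadToFile(target, FileMode.Create); // overwrite any partial copy
        }
    }
}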

Azure Data Factory in Action
Another approach is to use Azure Data Factory. This Azure service offers a so-called 'Copy Wizard' that allows us to define activities that automatically copy content from one place to another.
At this moment the following data stores are fully supported as the source of the copy:
  • Azure Blob
  • Azure Table
  • Azure SQL Database
  • Azure SQL Data Warehouse
  • Azure DocumentDB (see note below)
  • Azure Data Lake Store
  • SQL Server On-premises/Azure IaaS
  • File System On-premises/Azure IaaS
  • Oracle Database On-premises/Azure IaaS
  • MySQL Database On-premises/Azure IaaS
  • DB2 Database On-premises/Azure IaaS
  • Teradata Database On-premises/Azure IaaS
  • Sybase Database On-premises/Azure IaaS
  • PostgreSQL Database On-premises/Azure IaaS
  • ODBC data sources on-premises/Azure IaaS
  • Hadoop Distributed File System (HDFS) On-premises/Azure IaaS
  • OData sources
  • Web table
and the following as the destination:

  • Azure Blob
  • Azure Table
  • Azure SQL Database
  • Azure SQL Data Warehouse
  • Azure DocumentDB (see note below)
  • Azure Data Lake Store
  • SQL Server On-premises/Azure IaaS
  • File System On-premises/Azure IaaS


All the mapping and configuration can be done directly from the Azure Data Factory portal: which storage is the source, which is the destination and what part of the data we want to move.
On top of this, we can give Azure Data Factory a schedule that specifies in which part of the day the content should be moved from one place to another. We can have a pipeline activity that is executed only once or on a specific recurrence, as in the sketch below.
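To give an idea of what comes out of the Copy Wizard, a copy pipeline ends up as a JSON definition roughly like the following sketch (Data Factory v1 style). The pipeline name, dataset names, active period and daily schedule are invented for this example; the real definition is generated by the wizard and references the linked services you configured.

{
  "name": "CopyAuditDataToOnPremises",
  "properties": {
    "description": "Daily copy of the audit blobs to the on-premises file share",
    "activities": [
      {
        "name": "AuditBlobToFileShare",
        "type": "Copy",
        "inputs": [ { "name": "AuditBlobDataset" } ],
        "outputs": [ { "name": "OnPremisesFileDataset" } ],
        "typeProperties": {
          "source": { "type": "BlobSource" },
          "sink": { "type": "FileSystemSink" }
        },
        "scheduler": { "frequency": "Day", "interval": 1 }
      }
    ],
    "start": "2016-09-01T00:00:00Z",
    "end": "2016-12-31T00:00:00Z"
  }
}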

Every action is audited; state changes and other events are logged and can be monitored from the monitoring component. This web component allows us to see the history of the activities that ran on our Azure Data Factory and to detect anomalies (in our case, a failed copy). This monitoring tool is extremely powerful and can also be used successfully for reporting.
For example, for activities that are running at the moment when you access the app, you can see the real status of each activity (Waiting, InProgress, Failed, Ready, Skipped, None).

Azure Data Factory is a better solution for copying and moving content from one location to another. It is a native, out-of-the-box service that can be configured in a few minutes, with no maintenance, licensing or extra costs. You pay only for what you use, when you use it.

