Migrate data from Azure to an on-premises data center using Azure Data Factory

In today's post we will talk about Azure Data Factory, but we will try a different approach. Instead of looking at the core features of Azure Data Factory and when you should use it, let's take a real-life example where Azure Data Factory can make our life easier.

Scenario
We have a system hosted on Azure that produces 100 GB of audit data each day. Our client requires this data to be moved to their own data center, where the information will be archived on tape.

Classical implementation    
The use case is pretty simple, but at the same time complex. To offer reliable communication between these two endpoints, you need to ensure that all content is copied to the client's data center. For this purpose you could use different tools, based on what kind of storage you have (SQL, binary/raw data, documents and so on).
In general, audit data comes as raw data in binary files. In Azure you would use Blob Storage for it. Other types of storage like Azure SQL or DocumentDB might be too expensive, especially because you need to store a large amount of data and in 99% of cases you will never need to execute queries over it.
To support this scenario you need a process that is able to copy the content from Azure to on-premises. There is a high chance that you would run this process on the client side, because it is simpler to grant access to Azure Storage using Shared Access Signature (SAS) tokens than to expose the local storage where the content needs to land.
You can develop your own migration solution, or you can integrate an existing one, which means license costs plus the cost of the VMs that run the migration.

Azure Data Factory in Action
Another approach is to use Azure Data Factory. This Azure service offers a so-called 'Copy Wizard' that lets us define a pipeline that automatically copies content from one place to another.
At this moment the following data stores are fully supported as the source of a copy:
  • Azure Blob
  • Azure Table
  • Azure SQL Database
  • Azure SQL Data Warehouse
  • Azure DocumentDB
  • Azure Data Lake Store
  • SQL Server On-premises/Azure IaaS
  • File System On-premises/Azure IaaS
  • Oracle Database On-premises/Azure IaaS
  • MySQL Database On-premises/Azure IaaS
  • DB2 Database On-premises/Azure IaaS
  • Teradata Database On-premises/Azure IaaS
  • Sybase Database On-premises/Azure IaaS
  • PostgreSQL Database On-premises/Azure IaaS
  • ODBC data sources On-premises/Azure IaaS
  • Hadoop Distributed File System (HDFS) On-premises/Azure IaaS
  • OData sources
  • Web table
and the following as the destination:

  • Azure Blob
  • Azure Table
  • Azure SQL Database
  • Azure SQL Data Warehouse
  • Azure DocumentDB
  • Azure Data Lake Store
  • SQL Server On-premises/Azure IaaS
  • File System On-premises/Azure IaaS

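Behind the wizard, everything is described by a handful of JSON definitions. As a rough sketch of our scenario (the names, storage account and gateway are illustrative, and the snippets follow the Azure Data Factory JSON schema at the time of writing), the two endpoints are registered as linked services. The on-premises File System destination requires the Data Management Gateway to be installed in the client data center, while the Azure Blob source can be referenced through a SAS URI, exactly like in the classical implementation:

    {
        "name": "AuditStorageLinkedService",
        "properties": {
            "type": "AzureStorageSas",
            "typeProperties": {
                "sasUri": "https://<account>.blob.core.windows.net/?<sas-token>"
            }
        }
    }

    {
        "name": "ClientFileServerLinkedService",
        "properties": {
            "type": "OnPremisesFileServer",
            "typeProperties": {
                "host": "\\\\archiveserver",
                "gatewayName": "ClientDataCenterGateway",
                "userId": "CLIENTDOMAIN\\adfcopy",
                "password": "<password>"
            }
        }
    }

A nice side effect of the SAS-based linked service is that the client never needs to know the storage account key.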

All the mapping and configuration can be done directly from the Azure Data Factory portal: which storage plays the role of source and destination, and what part of the data we want to move.
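What part of the data moves in each run is expressed through datasets. A minimal sketch, assuming the audit files are written to a container partitioned by day (the container and folder names are assumptions for this scenario); leaving out the format section in both datasets tells the service to copy the files as-is, as a binary copy, which is exactly what we want for raw audit data:

    {
        "name": "AuditBlobDataset",
        "properties": {
            "type": "AzureBlob",
            "linkedServiceName": "AuditStorageLinkedService",
            "typeProperties": {
                "folderPath": "auditdata/{Year}/{Month}/{Day}",
                "partitionedBy": [
                    { "name": "Year", "value": { "type": "DateTime", "date": "SliceStart", "format": "yyyy" } },
                    { "name": "Month", "value": { "type": "DateTime", "date": "SliceStart", "format": "MM" } },
                    { "name": "Day", "value": { "type": "DateTime", "date": "SliceStart", "format": "dd" } }
                ]
            },
            "external": true,
            "availability": { "frequency": "Day", "interval": 1 }
        }
    }

    {
        "name": "ClientArchiveDataset",
        "properties": {
            "type": "FileShare",
            "linkedServiceName": "ClientFileServerLinkedService",
            "typeProperties": {
                "folderPath": "audit-archive"
            },
            "availability": { "frequency": "Day", "interval": 1 }
        }
    }

In practice you would probably partition the destination folderPath by day as well, using the same partitionedBy mechanism.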
On top of this, we can give Azure Data Factory a schedule that specifies in which part of the day the content should be moved from one place to another. A pipeline can be executed only once or on a specific recurrence, as in the sketch below.
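The schedule lives on the copy activity itself: the scheduler section (which has to match the availability of the output dataset) produces one slice per day, and the start and end properties bound the active period of the pipeline. For a one-time copy, the pipeline would instead be marked with "pipelineMode": "OneTime". A sketch for our daily scenario, with the same illustrative names as above:

    {
        "name": "CopyAuditDataPipeline",
        "properties": {
            "activities": [
                {
                    "name": "BlobToClientFileServer",
                    "type": "Copy",
                    "inputs": [ { "name": "AuditBlobDataset" } ],
                    "outputs": [ { "name": "ClientArchiveDataset" } ],
                    "typeProperties": {
                        "source": { "type": "BlobSource" },
                        "sink": { "type": "FileSystemSink" }
                    },
                    "scheduler": { "frequency": "Day", "interval": 1 },
                    "policy": { "concurrency": 1, "retry": 3, "timeout": "02:00:00" }
                }
            ],
            "start": "2016-09-01T00:00:00Z",
            "end": "2017-09-01T00:00:00Z"
        }
    }

The retry policy is what gives us the reliability the scenario asks for: a failed daily slice is retried automatically and surfaces in monitoring if it keeps failing.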

Monitoring
Every action is audited; state changes and other events are logged and can be monitored from the monitoring component. This web component allows us to see the history of the activities that ran in our Azure Data Factory and to detect anomalies (in our case, when a copy failed). The monitoring tool is powerful and can also be used successfully for reporting.
For example, for activities that are running at the moment you access the app, you can see the real status of each one (Waiting, InProgress, Failed, Ready, Skipped, None).

Conclusion
Azure Data Factory is a better solution for copying and moving content from one location to another. It is a native, out-of-the-box service that can be configured in a few minutes, with no maintenance, licensing or extra costs. You pay only for what you use, when you use it.
