
How to back up a SQL Azure database using blobs or DacImportExportCli

If you have an application running on Windows Azure, chances are pretty high that you have also worked with SQL Azure. There are several ways to back up a SQL Azure database. The backup can be made to another SQL Azure instance, but also to an on-premise server.
Another option is to back up the data to a blob in Windows Azure. At the moment this feature (backing up data to a blob) does not support versioning, but I don't think we would need that in this case.
In this post we will talk about this way of importing/exporting data, and in the following days we will look at other solutions as well. It is good to know that when the data is exported to a blob, it is saved in a binary format (the DAC format).
To run an import or an export from the Windows Azure portal you need the user and password of the SQL Azure instance, plus a storage account. The storage account does not have to be under the same subscription as the database.
Because rumor has it that the graphical interface of the Windows Azure portal is about to change, I will not add screenshots for each step.
For the export process, you need to select on the portal the database you want to export. Once a database is selected, the "Export" action becomes available in the "Import and Export" area. After you select this action you will have to enter the database user and password, together with the storage account details.
For "Blob Url" make sure you enter a valid URL that contains both the protocol (http, https) and the blob name, not just the container. The blob must not already exist; it will be created automatically. The container you specify can also be private, it does not have to be public.
For the import operation (restoring the database) you will need the same information. The bad news at this step is that the restore is done into a new database. You cannot use an existing one. All the data that existed in the exported database will be restored.
A pretty useful tool for doing this outside the Windows Azure portal is DacImportExportCli.exe. With it you can do the import and export directly from the console, and you can also write scripts that do this automatically. If you decide to use DacImportExportCli.exe you can do operations such as generating a backup from an on-premise server and importing it into SQL Azure, or the other way around. The files generated by DacImportExportCli.exe have the .bacpac extension. I want to mention here that, by default, DacImportExportCli.exe does not write to a blob; it writes a file directly to disk. Behind the scenes, DacImportExportCli.exe uses the Microsoft.SqlServer library, on top of which you can easily write your own program that imports/exports any database. Compared with the portal tool, if you use DacImportExportCli.exe you can import from a file into a SQL Azure database without being forced to create a new database; you can use an existing one.
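As a rough illustration of what such a program could look like, here is a minimal C# sketch. It is not the code behind DacImportExportCli.exe; it assumes the DacFx managed API (the Microsoft.SqlServer.Dac package) and the Azure storage client library, and all server, database, credential, path and container names are placeholders.

    using System.IO;
    using Microsoft.SqlServer.Dac;
    using Microsoft.WindowsAzure.Storage;

    class BackupSketch
    {
        static void Main()
        {
            // 1. Export the database to a local .bacpac file (DacImportExportCli.exe also
            //    writes to disk by default).
            var dac = new DacServices(
                "Server=tcp:myserver.database.windows.net;Database=MyDb;User ID=myuser;Password=mypassword;");
            dac.ExportBacpac(@"C:\backup\MyDb.bacpac", "MyDb");

            // 2. Upload the .bacpac to a blob so the backup also lives in Windows Azure Storage.
            var account = CloudStorageAccount.Parse("<storage account connection string>");
            var container = account.CreateCloudBlobClient().GetContainerReference("backups");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference("MyDb.bacpac");
            using (var stream = File.OpenRead(@"C:\backup\MyDb.bacpac"))
            {
                blob.UploadFromStream(stream);
            }

            // 3. Restore: load the package and import it into a database. Here the target is a
            //    new database, but with this API the target name is under your control.
            var package = BacPackage.Load(@"C:\backup\MyDb.bacpac");
            dac.ImportBacpac(package, "MyDb_Restored");
        }
    }

Running something like this from a scheduled task gives you the same kind of backup as the portal, but with full control over where the file lands and into which database it is restored.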
In conclusion, these ways of backing up a database are quite primitive, but they can be extremely useful when we have a simple backup process. Don't forget that every transaction costs money, and the import/export process can raise the monthly bill of your subscription quite a lot if it is done too often.
