
Posts

Showing posts from August, 2018

Using an external email provider with the Episerver DXC environment inside Azure

In this post, we talk about the options we have when we need to integrate an email service with an eCommerce application developed using Episerver and hosted inside DXC. As a side note, DXC is a SaaS offering on top of Microsoft Azure where clients can host web applications developed on top of Episerver. More about the DXC environment will be covered in another post. What do we want to achieve? We want to integrate an external email service provider so that we can send emails to our clients for marketing purposes. Our web application is already hosted inside DXC, and even though we could write custom code that runs inside DXC and communicates with the email service provider, there are some limitations that we need to be aware of. The authentication and authorization mechanism offered by the email service provider is based on IP whitelisting. Only the IPs on that whitelist are allowed to make calls to the service and send emails. Limitations At this mom...
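
To make the integration concrete, here is a minimal sketch of what sending through such a provider could look like from application code, assuming the provider exposes plain SMTP. The host name, port, and credentials below are placeholders, and because the provider authorizes by source IP, the call only succeeds when the caller's outbound IP is on the whitelist:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical provider endpoint and credentials; the real values come
# from the email service provider's dashboard.
SMTP_HOST = "smtp.example-provider.com"
SMTP_PORT = 587

def send_marketing_email(recipient: str, subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"] = "noreply@ourshop.example"
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)

    # The provider authorizes by source IP: this succeeds only when the
    # outbound IP of the machine running this code is whitelisted.
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
        smtp.starttls()
        smtp.login("api-user", "api-key")  # placeholder credentials
        smtp.send_message(msg)
```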

Data moving to/from Azure Storage using Azure Data Factory and Copy Activity

This post covers how we can use Azure Data Factory to copy our data from one location to another. Even if the copy activity might sound like a basic task, when you need to move 10 TB or 50 TB of data from one storage location to another, things become interesting. Reliability, resume and continue, task parallelization, and data consistency become essential for your project. There are multiple solutions on the market that help you move data to and from Azure or between different Azure locations. Today we will focus on Azure Data Factory and how it can help us do our job better. Azure Data Factory is a service created to help us work with data when we need to store, process, and move it, without having to think about anything else. Inside Azure Data Factory we have an activity called Copy Activity that can be used to move (copy) data from one source to another. When we talk about Azure Data Factory, we call: Source the repositor...
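
As a rough sketch of how a Copy Activity wires a source to a sink, here is a pipeline defined from Python with the azure-mgmt-datafactory package. All resource names are placeholders, and the two blob datasets referenced here are assumed to already exist in the factory (exact model signatures vary a little between SDK versions):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobSink, BlobSource, CopyActivity, DatasetReference, PipelineResource)

# Assumed: the data factory, resource group, and the two datasets pointing
# at the source and destination storage accounts already exist.
adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

copy = CopyActivity(
    name="CopyBlobToBlob",
    inputs=[DatasetReference(type="DatasetReference",
                             reference_name="SourceDataset")],
    outputs=[DatasetReference(type="DatasetReference",
                              reference_name="SinkDataset")],
    source=BlobSource(),   # where the data is read from
    sink=BlobSink())       # where the data is written to

adf.pipelines.create_or_update(
    "my-resource-group", "my-data-factory", "CopyPipeline",
    PipelineResource(activities=[copy]))
```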

Azure Design patterns: Self-Healing & Transient fault handling

I will start a series of posts about the core design patterns that you need to take into consideration when you start to work with Microsoft Azure and the cloud in general. These principles are important in a cloud application, even if most of them are known from classical on-premises development. Even though most of them are known, we don't always apply them to on-premises systems. In a cloud environment, it is possible, for example, to have a 10ms connectivity problem with another service that you are using. This means that you need a retry policy in place that automatically tries to reconnect to the service. Cases like this need to be covered by an application that runs in the cloud, and you need to take all of this into consideration for lift-and-shift cases, too. Self-Healing & Transient fault handling Today's post is dedicated to the self-healing topic. This is not SF, and you don't need to be Netflix to have a self-healing...
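
To make the retry policy concrete, here is a minimal sketch of retry with exponential backoff in Python. The operation and the retriable exception type are placeholders; in a real .NET application you would typically reach for a library such as Polly instead of hand-rolling this:

```python
import time

def with_retry(operation, retries=3, base_delay=0.1,
               retriable=(ConnectionError,)):
    """Run operation(), retrying transient failures with exponential backoff."""
    for attempt in range(retries):
        try:
            return operation()
        except retriable:
            if attempt == retries - 1:
                raise  # retry budget exhausted: surface the failure
            # Back off 0.1s, 0.2s, 0.4s, ... before trying to reconnect.
            time.sleep(base_delay * (2 ** attempt))
```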

Why use Azure Databricks to run your Apache Spark Notebooks

A few months ago (almost one year) a new flavor of Apache Spark appeared on Microsoft Azure. Azure Databricks is a platform optimized for Azure where Apache Spark can run. The consumer no longer needs to configure the Apache Spark cluster (VM creation, configuration, network, security, storage, and many more). Azure Databricks already has a cluster that is configured and ready to be used. We can focus on our application and business requirements and less on the infrastructure part. Capabilities and Features All the features that we have inside Apache Spark can also be found inside Azure Databricks. The same version of Spark that you have on-premises runs on top of Azure Databricks; the only difference is at the infrastructure level, where the system is already preconfigured. As a user, you are free to scale your cluster up or down in a 'drag and drop' manner, without having to pay attention to or do anything else. All the 5 main components of Spark can b...
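
As an illustration of that portability, a notebook cell like the sketch below (a made-up word count over a text file; the input path is a placeholder) runs unchanged on an on-premises Spark cluster or on Azure Databricks; only the cluster setup around it differs:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

# On Databricks a SparkSession is already provided as `spark`;
# getOrCreate() returns it there and builds one locally elsewhere.
spark = SparkSession.builder.appName("word-count").getOrCreate()

lines = spark.read.text("/data/sample.txt")  # placeholder input path
counts = (lines
          .select(explode(split(lines.value, r"\s+")).alias("word"))
          .groupBy("word")
          .count()
          .orderBy("count", ascending=False))
counts.show(10)
```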

Azure Dev Spaces - A free tool for developing apps on top of AKS

In this post, we will talk about Azure Dev Spaces and how it can improve the development and support experience when you are working with AKS (Azure Kubernetes Service). Working with containers and AKS is a great experience when you develop the application on your local machine, or when you are in production and everything works as expected. The friction appears the moment you have an issue with your application and you want to debug it. Recreating a dedicated environment just for you can be complicated, time-consuming, and expensive. This is where Azure Dev Spaces comes in to fill the gap. It enables us to deploy and remotely debug our services directly inside the AKS cluster. Directly from our preferred IDE we can deploy our container inside AKS and run a remote debugging session against it, adding breakpoints directly in Visual Studio. How is this possible? The Azure Dev Spaces tools that are installed on your local machine create a secure tunnel between your ...

Demystifying Azure SQL DTUs and vCore

The main purpose of this post is to describe the differences between DTUs and vCores and the main characteristics of each of them. In addition, we will discover the things that we need to be aware of when we want to migrate from one option to another (e.g., DTU to vCore). DTUs and vCores are two different purchasing models for Azure SQL through which you can get compute, memory, storage, and IO in different ways. DTUs Let's start with the DTU concept and understand better what it represents. Each DTU unit is a combination of CPU, memory, and read and write operations. For each DTU unit, the amount allocated for each resource type is limited to a specific value. When you need more power, you need to increase the number of DTUs.  It's a perfect solution for clients that have a preconfigured resource configuration where the consumption of resources is balanced (CPU, memory, IO). When they reach the limit of resources allocated to them, ...
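
As a sketch of what such a migration can look like in practice, an existing database can be switched from a DTU tier (e.g., Standard S3) to a vCore tier (e.g., General Purpose, Gen5, 2 vCores) by updating its SKU. The snippet below uses the azure-mgmt-sql package; all resource names are placeholders, and the exact method names depend on the SDK version:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import Database, Sku

sql = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Move the database from the DTU model to the vCore model by changing
# its SKU. Placeholder resource names throughout.
poller = sql.databases.begin_create_or_update(
    "my-resource-group", "my-sql-server", "my-database",
    Database(location="westeurope",
             sku=Sku(name="GP_Gen5_2", tier="GeneralPurpose")))
poller.result()  # block until the scale operation completes
```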