
Using an external email provider with Episerver DXC environment inside Azure

In this post, we discuss the options available when we need to integrate an email service with an eCommerce application developed with Episerver and hosted inside DXC.
As a side note, DXC is a SaaS offering built on top of Microsoft Azure, where clients can host their web applications developed on top of Episerver. More about the DXC environment will be covered in another post.

What do we want to achieve?
We want to integrate an external email service provider so that we can send emails to our clients for marketing purposes. Our web application is already hosted inside DXC, and even though we could write custom code that runs inside DXC to communicate with the email service provider, there are some limitations that we need to be aware of.
The authentication and authorization mechanism offered by the email service provider is based on IP whitelisting. Only the IPs on that whitelist are allowed to call the service and send emails.

At this moment in time, Microsoft Azure offers the possibility to assign a static IP to your resources. Even so, because the DXC environment offers a high-availability SLA, the public endpoints for consumers and clients are based on CNAMEs and not on IPs. In addition, resources might be shared between different deployments and customers.
This means that there is no way to get a static IP for our DXC environment that can be added to the whitelist of our email service provider.
Besides this, we need to take into account that the 3rd party offers no authentication mechanism other than IP whitelisting combined with a custom URL provided for each client.

There are multiple solutions available. Let’s take a look at some of the options we have. The last one is the one that I prefer, and I think it is the closest to production ready.

Option 1: Whitelist the IP ranges of the Azure region
The public documentation gives us the IP ranges used in each Azure region, as well as the IP ranges for each Azure service. The list of IPs is updated each time something changes, and we can subscribe to these notifications.
Inside DXC our applications run on top of Azure Web Apps, which allows us to provide our email service provider with only the range of IPs used by Azure Web Apps in that Azure region.
Even if the solution is simple, there are two risks that we need to take into account and mitigate. The first is defining a process that ensures we provide the new range of IPs whenever Microsoft updates the ranges. The second risk is related to who can use the service. Because the whitelisted range covers all the Azure Web Apps in that Azure region, any web application hosted as a Web App in that region can send emails if the email service URL is known.
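To make the first risk concrete, the published ranges are just CIDR blocks, so checking whether a caller falls inside them is straightforward. Below is a minimal sketch using Python’s standard ipaddress module; the two sample CIDR ranges are invented for illustration, the real ones come from the downloadable Azure IP ranges file.

```python
import ipaddress

# Hypothetical sample of published CIDR ranges for a region's Web Apps;
# the real list comes from the Azure IP ranges / service tags download.
AZURE_WEBAPP_RANGES = [
    "13.69.0.0/17",
    "40.68.0.0/16",
]

def is_whitelisted(ip: str, cidr_ranges) -> bool:
    """Return True if the given IP falls inside any whitelisted CIDR range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in cidr_ranges)
```

This is the same check the provider performs on its side, which is exactly why the whole region’s range being whitelisted is so permissive.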

Option 2: Expose a REST API
The second option involves creating an external REST API that can be called by our web application and that forwards the calls to the email service. An Azure VM is already planned for some other functionality outside DXC. This means that we can use this VM to host our REST API inside IIS and assign a static IP to the machine. The API would forward the calls to the email service.
The downside of this solution is primarily on the security side; we need to design an authentication and authorization system for our REST API. Besides this, we need to handle the cases when the Azure VM is not available; we don’t want clients that never receive their emails.
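The shape of the authentication problem can be sketched without any framework. The snippet below shows the core of such a forwarding endpoint as a plain function: a shared API key (a name and value invented for illustration) is checked before the payload is handed to whatever actually calls the email service.

```python
import hmac

# Hypothetical shared secret issued to the DXC application; in a real
# deployment this would live in configuration, not in source code.
API_KEY = "dxc-app-secret"

def handle_send_request(provided_key: str, email_payload: dict, forward):
    """Validate the caller's key, then hand the payload to the forwarder.

    `forward` is whatever function actually calls the email service;
    it is injected here so the sketch stays self-contained.
    """
    # Constant-time comparison avoids leaking the key via timing.
    if not hmac.compare_digest(provided_key, API_KEY):
        return {"status": 401, "body": "unauthorized"}
    forward(email_payload)
    return {"status": 202, "body": "accepted"}
```

Even in this reduced form you can see the second downside: if `forward` fails because the VM or the provider is down, the request is simply lost unless we add buffering, which is what the next options address.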

Option 3: Windows Service and Service Bus Queue
This option is based on option 2 and involves moving the forwarding capability from the REST API to a Windows Service. We are in a context where the Azure VM already has other Windows Services deployed, and the simplest thing we can do is add another one that forwards the requests to the email service.
To avoid losing messages when the Windows Service is not available or when the load is too high, we can add a queue used to communicate between our application hosted inside DXC and our Windows Service.
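The pattern above can be sketched in a few lines. In production the queue would be an Azure Service Bus queue with peek-lock semantics; here a standard in-memory queue stands in so the sketch runs anywhere, and a failed send simply puts the message back for a later retry.

```python
from queue import Queue, Empty

# Stand-in for the Service Bus queue between DXC and the Windows Service.
email_queue = Queue()

def enqueue_email(message: dict):
    """DXC application side: fire-and-forget into the queue."""
    email_queue.put(message)

def drain_queue(send, batch_limit: int = 100):
    """Windows Service side: pull messages and forward them.

    If `send` raises (provider or VM down), the message is re-queued so
    it is not lost; real code would rely on the broker's peek-lock.
    """
    processed = 0
    while processed < batch_limit:
        try:
            msg = email_queue.get_nowait()
        except Empty:
            break
        try:
            send(msg)
            processed += 1
        except Exception:
            email_queue.put(msg)  # keep the message for a later retry
            break
    return processed
```

The key property is that the producer never blocks on the email provider: messages accumulate in the queue while the consumer is down and are drained when it comes back.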

Option 4: Azure Function and Service Bus Queue
The downside of the previous solutions is that we keep the logic inside a traditional on-premises-style solution. For better results, and to simplify things further, we can put our logic inside Azure Functions. Azure Functions are perfect for this case because they offer a serverless environment where we can run the logic that forwards calls from our system to the email service.
In theory, the IP should not change too often, as long as we don’t delete our function or change the tier. For a static IP on Azure Functions, we need an App Service Environment, where we can have a clear list of static IPs.
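The body of such a queue-triggered function is small. The sketch below shows only the core logic, with the trigger bindings omitted and the HTTP call to the provider injected as a parameter; none of the names refer to a real SDK.

```python
import json

def process_queue_message(raw_message: str, send_email) -> bool:
    """Core logic of a queue-triggered function (bindings omitted).

    `send_email` stands in for the HTTP call to the whitelisted
    provider endpoint; the names here are illustrative.
    """
    email = json.loads(raw_message)
    # Minimal validation before spending a call on the provider;
    # invalid messages would be dead-lettered in a real setup.
    if not email.get("to") or not email.get("body"):
        return False
    send_email(email)
    return True
```

Everything else — scaling, retries, the trigger itself — is handled by the platform, which is what makes this option simpler than running our own Windows Service.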

Option 5: Azure Functions, Service Bus Queue, and Azure Blob Storage
The previous option is almost perfect, except in one case: when the email that we want to send is bigger than the maximum size of a message in the queue (256 KB). Even though there is support for sessions, where multiple messages can be consumed together by the same consumer as one logical message, we still need to mitigate the case when the email is bigger than the maximum message size.
A possible solution is to write the email content (body) directly to Azure Blob Storage and add only the URL of the blob inside the message. Access can be controlled using Storage Account Keys or a Shared Access Signature. Depending on how often an email is bigger than the maximum message size, you can decide what the default behavior should be – email body in the message or inside Azure Blob Storage.

You can add extra logic that calculates the email body size and decides whether the body should be stored in Blob Storage or inside the message.
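That size-based decision can be sketched as follows. The blob store is simulated with a dictionary and the blob URL is a made-up example; a real implementation would upload via the storage SDK and attach a SAS token to the URL.

```python
# Service Bus standard-tier message size limit referenced above.
MAX_MESSAGE_BYTES = 256 * 1024

# Stand-in blob store: maps hypothetical blob URLs to their content.
blob_store = {}

def build_queue_message(email_id: str, to: str, body: str) -> dict:
    """Inline the body when it fits in a queue message; otherwise park it
    in 'blob' storage and reference it by URL (a real system would also
    generate a Shared Access Signature for the reader)."""
    encoded = body.encode("utf-8")
    if len(encoded) <= MAX_MESSAGE_BYTES:
        return {"to": to, "body": body}
    blob_url = f"https://example.blob.core.windows.net/emails/{email_id}"
    blob_store[blob_url] = body
    return {"to": to, "body_url": blob_url}
```

The consumer then checks for `body_url` and fetches the blob before calling the email service, so both small and oversized emails travel through the same queue.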

As you can see, there are multiple solutions to this problem. Taking into consideration time constraints, environments, the existing solution, and many other factors, you can decide which one suits you best.

The cleanest one is Option no. 5, but you might prefer Option no. 3 if you need to deliver a solution fast and you don’t have Azure Functions skills.

