
Using an external email provider with Episerver DXC environment inside Azure

In this post, we look at the options available when we need to integrate an email service with an eCommerce application developed using Episerver and hosted inside DXC.
As a side note, DXC is a SaaS offering on top of Microsoft Azure, where clients can host their web applications developed on top of Episerver. More about the DXC environment will be covered in another post.

What do we want to achieve?
We want to integrate an external email service provider so that we can send emails to our clients for marketing purposes. Our web application is already hosted inside DXC, and even though we could write custom code that runs inside DXC and communicates with the email service provider, there are some limitations that we need to be aware of.
The authentication and authorization mechanism offered by the email service provider is based on IP whitelisting. Only the IPs from that whitelist are allowed to make calls to the service and send emails.

Limitations
At this moment in time, Microsoft Azure offers the possibility to assign a static IP to your resources. Even so, because the DXC environment offers a high-availability SLA, the public endpoints for consumers and clients are based on CNAMEs and not on IPs. In addition, resources might be shared between different deployments and customers.
This means that there is no way to get a static IP for our DXC environment that can be added to the whitelist of our email service provider.
Besides this, we need to take into account that the 3rd party offers no authentication mechanism other than IP whitelisting plus a custom URL provided for each client.


There are multiple solutions available. Let's take a look at some of the options that we have. The last one is the one that I prefer, and I think it is the closest to production ready.

Option 1: Whitelist the IP ranges of the Azure Region
The public documentation gives us the IP ranges used in each Azure Region, and on top of that we know the IP ranges for each Azure service. The list of IPs is updated each time something changes, and we can subscribe to notifications about these updates.
Inside DXC, our applications run on top of Azure Web Apps, so we could provide to our email service provider only the range of IPs used by Azure Web Apps inside that Azure Region.
Even if the solution is simple, there are two risks that we need to take into account and mitigate. The first one is related to defining a process that ensures that we provide the new range of IPs at the moment when Microsoft updates them. The second risk is related to who can use the service. Because the whitelisted range of IPs covers all the Azure Web Apps inside that Azure Region, any web application hosted as a Web App inside that region can send emails if the email service URL is known.
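As a minimal sketch of the first step, the snippet below extracts the subnets of one region from the XML file that Microsoft publishes for download. The file name, the region name, and the XML schema (Region and IpRange elements with Name and Subnet attributes) are assumptions based on the downloadable file, so verify them against the current format before relying on this.

// Sketch: extract the CIDR blocks published for one Azure region from the
// downloaded XML file (assumed schema: AzurePublicIpAddresses > Region > IpRange).
// The file name and region name below are placeholders.
using System;
using System.Linq;
using System.Xml.Linq;

class AzureIpRanges
{
    static void Main()
    {
        var doc = XDocument.Load("PublicIPs.xml");

        var subnets = doc.Root
            .Elements("Region")
            .Where(r => (string)r.Attribute("Name") == "europewest")
            .Elements("IpRange")
            .Select(ip => (string)ip.Attribute("Subnet"));

        // These are the CIDR blocks we would hand over to the email
        // service provider for whitelisting.
        foreach (var subnet in subnets)
        {
            Console.WriteLine(subnet);
        }
    }
}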

Option 2: Expose a REST API
The second option involves creating an external REST API that can be called by our web application and forwards the calls to the email service. An Azure VM is already planned for some other functionality, outside DXC. This means we can use this VM to host our REST API inside IIS and assign a static IP to the machine. The API would forward the calls to the email service.
The downside of this solution is primarily on the security side: we need to design an authentication and authorization system for our REST API. Besides this, we need to handle the cases when the Azure VM is not available; we don't want clients that did not receive their emails.
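A minimal sketch of such a forwarding endpoint, assuming ASP.NET Web API hosted in IIS. The provider URL, the payload shape, and the shared-secret check are placeholders for illustration, not the real provider contract; a production version needs a proper authentication scheme and retry handling.

// Sketch of the forwarding endpoint hosted in IIS on the Azure VM.
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class EmailForwardController : ApiController
{
    private static readonly HttpClient Client = new HttpClient();

    // Hypothetical endpoint URL provided by the email service for our account.
    private const string ProviderUrl = "https://emails.example.com/client-xyz/send";

    [HttpPost]
    [Route("api/emails")]
    public async Task<IHttpActionResult> Forward([FromBody] EmailRequest request)
    {
        // Minimal shared-secret check; replace with a real auth scheme.
        if (Request.Headers.Authorization?.Parameter != "our-shared-secret")
        {
            return Unauthorized();
        }

        // The VM has a static IP, so this outbound call passes the whitelist.
        var response = await Client.PostAsJsonAsync(ProviderUrl, request);
        return StatusCode(response.StatusCode);
    }
}

public class EmailRequest
{
    public string To { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}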

Option 3: Windows Service and Service Bus Queue
This option is based on option 2 and involves moving the forwarding capability from the REST API to a Windows Service. We are in a context where the Azure VM already has other Windows Services deployed, and the simplest thing that we can do is to add another one that forwards the requests to the email service.
To avoid losing messages when the Windows Service is not available or when the load is too high, we can add a queue used to communicate between our application hosted inside DXC and our Windows Service.
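A rough sketch of both sides of that queue, using the Microsoft.Azure.ServiceBus package; the connection string, queue name, and message shape are assumptions for illustration.

// Sketch of the queue-based hand-off between DXC and the Windows Service.
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Newtonsoft.Json;

class EmailQueue
{
    const string ConnectionString = "<service-bus-connection-string>";
    const string QueueName = "emails";

    // Called from the web application inside DXC: enqueue the email request.
    public static async Task EnqueueAsync(object emailRequest)
    {
        var client = new QueueClient(ConnectionString, QueueName);
        var body = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(emailRequest));
        await client.SendAsync(new Message(body));
        await client.CloseAsync();
    }

    // Called at startup by the Windows Service on the VM: pump messages
    // off the queue and forward each one to the email service.
    public static void StartPump(Func<string, Task> forwardToEmailService)
    {
        var client = new QueueClient(ConnectionString, QueueName);
        client.RegisterMessageHandler(
            async (message, token) =>
            {
                await forwardToEmailService(Encoding.UTF8.GetString(message.Body));
                await client.CompleteAsync(message.SystemProperties.LockToken);
            },
            new MessageHandlerOptions(args => Task.CompletedTask)
            {
                AutoComplete = false // complete only after a successful forward
            });
    }
}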

Option 4: Azure Function and Service Bus Queue
The downside of the previous solutions is that we keep the logic inside a traditional on-premises-style solution. For better results and to simplify things further, we can put our logic inside Azure Functions. For this case, Azure Functions are perfect because they offer us a serverless environment where we can run the logic that forwards calls from our system to the email service.
In theory, the outbound IP shall not change too often as long as we don't delete our function or change the tier. For a true static IP on Azure Functions, we need an App Service Environment, where we can have a clear list of static IPs.
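A minimal sketch of such a function, using a Service Bus trigger so it drains the same queue; the queue name, connection setting name, and provider URL are placeholders.

// Sketch of the Azure Function that forwards queued emails to the provider.
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ForwardEmailFunction
{
    private static readonly HttpClient Client = new HttpClient();

    [FunctionName("ForwardEmail")]
    public static async Task Run(
        [ServiceBusTrigger("emails", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        // The message carries the serialized email request; forward it as-is.
        var response = await Client.PostAsync(
            "https://emails.example.com/client-xyz/send",
            new StringContent(message, System.Text.Encoding.UTF8, "application/json"));

        log.LogInformation("Email service responded with {StatusCode}", response.StatusCode);
    }
}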

Option 5: Azure Functions, Service Bus Queue, and Azure Blob Storage
The previous option is almost perfect, except in one case: when the email that we want to send is bigger than the maximum size of a message in the queue (256 KB). Even if there is support for sessions, where multiple messages are consumed together by the same consumer as one logical message, we still need to mitigate the case when the email is bigger than the queue's message limit.
A possible solution is to write the email content (body) directly to Azure Blob Storage and add only the URL of the blob inside the message. Access can be controlled using Storage Account Keys or a Shared Access Signature. Depending on how often an email is bigger than the maximum message size, you can decide what the default behavior would be: email body in the message or inside Azure Blob Storage.

You can add extra logic that calculates the email body size and decides whether the body shall be stored in Blob Storage or carried inside the message.
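A sketch of that decision logic could look like the following, using the WindowsAzure.Storage client library. The 192 KB threshold (headroom below the 256 KB queue limit), the container name, the connection string, and the 7-day SAS lifetime are arbitrary choices for illustration.

// Sketch: small bodies travel inside the queue message, large ones go to
// Blob Storage and only a read-only SAS URL is enqueued.
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class EmailBodyStore
{
    const int MaxInlineBodyBytes = 192 * 1024; // headroom below the 256 KB limit

    public static async Task<(string inlineBody, string blobUrl)> PrepareAsync(string body)
    {
        if (Encoding.UTF8.GetByteCount(body) <= MaxInlineBodyBytes)
        {
            return (body, null); // small enough to ship inside the message
        }

        var account = CloudStorageAccount.Parse("<storage-connection-string>");
        var container = account.CreateCloudBlobClient().GetContainerReference("email-bodies");
        await container.CreateIfNotExistsAsync();

        var blob = container.GetBlockBlobReference(Guid.NewGuid() + ".html");
        await blob.UploadTextAsync(body);

        // Time-limited, read-only access for the consumer of the queue message.
        var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddDays(7)
        });

        return (null, blob.Uri.AbsoluteUri + sas);
    }
}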

Conclusion
As you can see, there are multiple solutions to this problem. Taking into consideration time constraints, environments, the existing solution, and many other factors, you can decide which one suits you best.

The cleanest one is Option no. 5, but you might prefer Option no. 3 if you need to deliver a solution fast and you don't have skills related to Azure Functions.
