
Using an external email provider with the Episerver DXC environment inside Azure

In this post, we look at the options we have when we need to integrate an email service with an eCommerce application developed using Episerver and hosted inside DXC.
As a side note, DXC is a SaaS offering built on top of Microsoft Azure, where clients can host their web applications developed on top of Episerver. More about the DXC environment will be covered in another post.

What do we want to achieve?
We want to integrate an external email service provider so that we can send emails to our clients for marketing purposes. Our web application is already hosted inside DXC, and even though we can write custom code that runs inside DXC and communicates with the email service provider, there are some limitations that we need to be aware of.
The authentication and authorization mechanism offered by the email service provider is based on IP whitelisting. Only the IPs from that whitelist are allowed to call the service and send emails.

Limitations
At this moment, Microsoft Azure offers the possibility to assign a static IP to your resources. Even so, because the DXC environment offers a high-availability SLA, the public endpoints for consumers and clients are based on CNAMEs, not on IPs. In addition, resources might be shared between different deployments and customers.
This means that there is no way to get a static IP for our DXC environment that can be added to the whitelist of our email service provider.
Besides this, we need to take into account that the 3rd party offers no authentication mechanism other than IP whitelisting combined with a custom URL provided for each client.


There are multiple solutions available. Let’s take a look at some of the options that we have. The last one is the one that I prefer, and I think it is the closest to production-ready.

Option 1: Whitelist the IP ranges of the Azure Region
The public documentation gives us the IP ranges used in each Azure Region, and additionally the IP ranges used by each Azure service. The list of IPs is updated each time something changes, and we can subscribe to notifications for these updates.
Inside DXC, our applications run on top of Azure Web Apps, which allows us to provide our email service provider with only the range of IPs used by Azure Web Apps inside that Azure Region.
Even if the solution is simple, there are two risks that we need to take into account and mitigate. The first one is related to defining a process that ensures we provide the new range of IPs the moment Microsoft updates it. The second risk is related to who can use the service: because the whitelisted IP ranges cover all the Azure Web Apps inside that Azure Region, any web application hosted as a Web App inside that region can send emails if the email service URL is known.
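
As a rough illustration, the sketch below extracts the App Service prefixes for one region from the downloadable "Azure IP Ranges and Service Tags" JSON file. The file name, the tag name AppService.WestEurope, and the schema are assumptions based on the public download and may change between publications.

```csharp
using System;
using System.IO;
using System.Linq;
using System.Text.Json;

// A minimal sketch, assuming the Service Tags JSON file was already downloaded.
class ServiceTagExtractor
{
    static void Main()
    {
        using JsonDocument doc =
            JsonDocument.Parse(File.ReadAllText("ServiceTags_Public.json"));

        var prefixes = doc.RootElement
            .GetProperty("values")
            .EnumerateArray()
            // Hypothetical tag; pick the one matching the region where DXC runs.
            .Where(tag => tag.GetProperty("name").GetString() == "AppService.WestEurope")
            .SelectMany(tag => tag.GetProperty("properties")
                                  .GetProperty("addressPrefixes")
                                  .EnumerateArray()
                                  .Select(p => p.GetString()));

        // These CIDR ranges are what we would hand over for whitelisting.
        foreach (var prefix in prefixes)
        {
            Console.WriteLine(prefix);
        }
    }
}
```

Whatever tooling you use, the important part is running it on a schedule, so the whitelist is refreshed as soon as Microsoft publishes a new list.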

Option 2: Expose a REST API
The second option involves creating an external REST API that can be called by our web application and that forwards the calls to the email service. An Azure VM is already planned for some other functionality outside DXC, which means we can use this VM to host our REST API inside IIS and assign a static IP to the machine. The API would forward the calls to the email service.
The downside of this solution is primarily on the security side: we need to design an authentication and authorization system for our REST API. Besides this, we need to handle the cases when the Azure VM is not available; we don’t want clients that never receive their emails.
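
A minimal sketch of such a pass-through endpoint is shown below, assuming ASP.NET Core; the route, the payload shape, and the email service URL are hypothetical, and the missing authentication layer is exactly the downside mentioned above.

```csharp
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// A minimal sketch of a pass-through endpoint, not a production design.
[ApiController]
[Route("api/emails")]
public class EmailForwardController : ControllerBase
{
    private static readonly HttpClient Client = new HttpClient();

    // Hypothetical client-specific URL provided by the email service provider.
    private const string EmailServiceUrl = "https://email-provider.example.com/client-xyz/send";

    [HttpPost]
    public async Task<IActionResult> Forward([FromBody] JsonElement emailPayload)
    {
        var content = new StringContent(emailPayload.GetRawText(),
                                        Encoding.UTF8, "application/json");
        HttpResponseMessage response = await Client.PostAsync(EmailServiceUrl, content);

        // Mirror the provider's status code back to the caller inside DXC.
        return StatusCode((int)response.StatusCode);
    }
}
```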

Option 3: Windows Service and Service Bus Queue
This option is based on Option 2 and involves moving the forwarding capability from the REST API to a Windows Service. We are in a context where the Azure VM already has other Windows Services deployed, and the simplest thing we can do is add another one that forwards the requests to the email service.
To avoid losing messages when the Windows Service is not available or when the load is too high, we can add a queue used to communicate between our application hosted inside DXC and our Windows Service.
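
On the DXC side, pushing an email request to the queue could look like the sketch below, assuming the Azure.Messaging.ServiceBus client; the connection string, the queue name, and the EmailRequest shape are assumptions.

```csharp
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// A minimal sketch of the producer side running inside DXC;
// the queue name and the message shape are assumptions.
public record EmailRequest(string To, string Subject, string Body);

public class EmailQueuePublisher
{
    private readonly ServiceBusSender _sender;

    public EmailQueuePublisher(string connectionString)
    {
        var client = new ServiceBusClient(connectionString);
        _sender = client.CreateSender("email-requests"); // hypothetical queue name
    }

    public async Task PublishAsync(EmailRequest request)
    {
        // The Windows Service (or, later, the Azure Function) consumes these messages.
        var message = new ServiceBusMessage(JsonSerializer.Serialize(request));
        await _sender.SendMessageAsync(message);
    }
}
```

Because the queue persists the messages, a Windows Service restart only delays delivery instead of losing emails.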

Option 4: Azure Function and Service Bus Queue
The downside of the previous solutions is that we keep the logic inside a traditional, VM-based solution. For a better result, and to simplify things even more, we can put our logic inside Azure Functions. For this case, Azure Functions are perfect because they offer us a serverless environment where we can run the logic that forwards calls from our system to the email service.
In theory, the outbound IP should not change too often as long as we don’t delete our Function App or change the tier. For a true static IP on Azure Functions, we need an App Service Environment, where we can have a clear list of static IPs.
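
The forwarding function itself could be as small as the sketch below, assuming the in-process C# programming model; the queue name, the connection setting name, and the email service URL are assumptions.

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

// A minimal sketch of the consumer side as an Azure Function;
// names and URLs are hypothetical.
public static class ForwardEmailFunction
{
    private static readonly HttpClient Client = new HttpClient();
    private const string EmailServiceUrl = "https://email-provider.example.com/client-xyz/send";

    [FunctionName("ForwardEmail")]
    public static async Task Run(
        [ServiceBusTrigger("email-requests", Connection = "ServiceBusConnection")] string queueMessage,
        ILogger log)
    {
        var content = new StringContent(queueMessage, Encoding.UTF8, "application/json");
        var response = await Client.PostAsync(EmailServiceUrl, content);

        // Throwing here lets Service Bus retry the message and eventually dead-letter it.
        response.EnsureSuccessStatusCode();
        log.LogInformation("Email request forwarded to the provider.");
    }
}
```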

Option 5: Azure Functions, Service Bus Queue, and Azure Blob Storage
The previous option is almost perfect, except for one case: when the email that we want to send is bigger than the maximum size of a message in the queue (256 KB). Even though there is support for message sessions, where multiple messages can be consumed together by the same consumer as one logical message, we still need to mitigate the case when the email is bigger than the maximum message size.
A possible solution is to write the email content (body) directly to Azure Blob Storage and add only the URL of the blob inside the message. Access to the blob can be controlled using Storage Account Keys or a Shared Access Signature. Depending on how often an email is bigger than the maximum message size, you can decide what the default behavior should be – email body in the message or inside Azure Blob Storage.

You can add extra logic that calculates the email body size and decides whether the body should be stored in Blob Storage or inside the message.
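
A minimal sketch of that decision logic follows; the 200 KB threshold (leaving headroom under the 256 KB limit), the container and queue names, and the message shape are all assumptions.

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Azure.Storage.Blobs;

// A minimal sketch; thresholds and names are assumptions, and access to the
// blobs would still need to be secured with account keys or SAS tokens.
public class LargeEmailPublisher
{
    private const int MaxInlineBodyBytes = 200 * 1024; // headroom under 256 KB

    private readonly ServiceBusSender _sender;
    private readonly BlobContainerClient _container;

    public LargeEmailPublisher(string serviceBusConnection, string storageConnection)
    {
        _sender = new ServiceBusClient(serviceBusConnection).CreateSender("email-requests");
        _container = new BlobContainerClient(storageConnection, "email-bodies");
    }

    public async Task PublishAsync(string to, string subject, string body)
    {
        string bodyOrUrl = body;
        bool bodyInBlob = Encoding.UTF8.GetByteCount(body) > MaxInlineBodyBytes;

        if (bodyInBlob)
        {
            // Store the oversized body in Blob Storage and ship only its URL.
            BlobClient blob = _container.GetBlobClient($"{Guid.NewGuid()}.html");
            using var stream = new MemoryStream(Encoding.UTF8.GetBytes(body));
            await blob.UploadAsync(stream);
            bodyOrUrl = blob.Uri.ToString();
        }

        var message = new ServiceBusMessage(bodyOrUrl)
        {
            Subject = subject,
            ApplicationProperties =
            {
                ["to"] = to,
                ["bodyInBlob"] = bodyInBlob // tells the consumer where to read the body from
            }
        };
        await _sender.SendMessageAsync(message);
    }
}
```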

Conclusion
As you can see, there are multiple solutions to this problem. Taking into consideration time constraints, environments, the existing solution, and many other factors, you can decide which one suits you best.

The cleanest one is Option no. 5, but you might prefer Option no. 3 if you need to deliver a solution fast and you don’t have skills related to Azure Functions.
