
Azure Spring Cloud and DMZ

Starting in 2019, we can find a new service in the Azure portfolio - Azure Spring Cloud. Microsoft and Pivotal joined forces to offer the capability of running Spring Boot applications inside Azure seamlessly.
Azure Spring Cloud is a SaaS managed by Microsoft together with Pivotal, offering 100% compatibility with any type of Java application built for Spring Boot. It might not sound like a big WOW, but having the ability to migrate line-of-business applications that run inside on-premises systems to a fully managed Spring Boot environment is awesome.
If you want to find out more about this service, I invite you to check the service page.

A common discussion that arises when you need to take on-premises applications and put them inside Azure Spring Cloud is related to network security - more exactly, the DMZ. These are two different worlds that usually collide, and it is important to understand the concerns and limitations on both sides.

Azure Spring Cloud runs on top of Azure Kubernetes Service and uses a Service Registry and a Spring Cloud Config Service to offer high availability and the same experience that you have in Pivotal. The nice thing is that you keep the same capability as in Pivotal to automatically patch your code inside containers and fully manage them using the Pivotal Build Service.
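To make this more concrete, below is a minimal sketch of how a Spring Boot application deployed to Azure Spring Cloud could consume the managed Config Service and Service Registry. It assumes the application includes the spring-cloud-starter-config and spring-cloud-starter-netflix-eureka-client dependencies; the registry endpoint and credentials are injected by the platform, and the property key app.greeting and the service name billing are placeholders, not names from the original article.

import java.util.List;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Minimal sketch: reads a value served by the managed Config Service and
// looks up sibling application instances through the managed Service Registry.
@RestController
public class RegistryProbeController {

    private final DiscoveryClient discoveryClient;

    // "app.greeting" is a placeholder key stored in the Config Service backend
    @Value("${app.greeting:hello}")
    private String greeting;

    public RegistryProbeController(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    @GetMapping("/greeting")
    public String greeting() {
        return greeting;
    }

    // "billing" is a placeholder name of another app registered in the same instance
    @GetMapping("/peers")
    public List<ServiceInstance> peers() {
        return discoveryClient.getInstances("billing");
    }
}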

From an infrastructure and security point of view, things can be a little confusing at this step when you look at the DMZ. In general, when you work with Azure and want to build a DMZ landing zone, you can successfully use the Azure networking capabilities like VNETs, NVAs (Network Virtual Appliances) and NSGs (Network Security Groups).
Unfortunately, at this moment you cannot use these Azure network features to build a DMZ inside Azure Spring Cloud. Things might change in the future, but for now we cannot rely on network integration.

Even so, if you have used Pivotal in the past, you know about the capability to define a DMZ zone at the application layer. It is not a DMZ at the network layer, but it can work well if you don't want to put additional services in front of Azure Spring Cloud.
A solution to limit container-to-container communication is to configure the Diego Brain in such a way that you restrict which application can talk to which application (across the so-called Diego Cells). Using this approach you define and isolate the applications that are part of the DMZ and the channels that can be used to talk to the rest of the system. Unfortunately, Diego is not available inside Azure Spring Cloud, so you will not be able to use it.

The third option available is to rely on Azure AD and add to each application the capability to allow only requests from users with specific roles. In this way, you can virtually map and isolate a collection of applications that are part of the DMZ.
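As an illustration only, here is a sketch of what such a role check could look like in a Spring Boot application that validates Azure AD tokens using spring-boot-starter-oauth2-resource-server. The role name DMZ_APP is a hypothetical app role defined in the Azure AD app registration, and the exact authority string depends on how the roles claim is mapped to Spring authorities in your setup.

import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

// Minimal sketch: only callers whose Azure AD token carries the (hypothetical)
// DMZ_APP role are allowed to reach this application.
@EnableWebSecurity
public class DmzSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            // every request must carry the DMZ role; the "APPROLE_" prefix is an
            // assumption and depends on the configured claim-to-authority mapping
            .authorizeRequests(authorize -> authorize
                .anyRequest().hasAuthority("APPROLE_DMZ_APP"))
            // validate the incoming bearer token as a JWT issued by the Azure AD tenant
            .oauth2ResourceServer(oauth2 -> oauth2.jwt());
    }
}

The Azure AD tenant would typically be referenced through the spring.security.oauth2.resourceserver.jwt.issuer-uri property in the application configuration, so the tokens can be validated against the right issuer.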

This is a good trade-off until we have the capability to use VNET features inside Azure Spring Cloud as well. Nevertheless, I'm sure that we will see the VNET integration pretty soon - it is just a matter of time.
