Azure Private Link advantages over Azure Service Endpoint

People often ask what they should use to secure the connection between Azure PaaS services and VNETs. Today's article talks about the key differences between Azure Private Link and Azure Service Endpoints and when you should use each of them.

What are they?

Azure Service Endpoint provides a direct and secure connection to Azure PaaS services over the Azure backbone network. Even though the traffic leaves your VNET and hits the public endpoint of the Azure PaaS service, it stays on the Azure backbone.

Azure Private Link enables you to have a private IP inside your VNET that is used to reach the endpoint of your Azure PaaS service. The assigned private IP is part of your VNET and ensures that all traffic stays within your VNET.

What about Azure Private Endpoint? It is part of Azure Private Link, and it is where you configure the private IP address and how the service is reached over VNET peering or VPN.
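
To make this more concrete, here is a minimal sketch of creating a Private Endpoint for an existing Storage account using the azure-mgmt-network Python SDK. This is not code from the article; all resource names, IDs and the resource group below are hypothetical placeholders.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
resource_group = "demo-rg"  # hypothetical resource group
network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# The subnet that will host the private IP and the target PaaS resource.
subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/demo-rg"
    "/providers/Microsoft.Network/virtualNetworks/demo-vnet/subnets/pe-subnet"
)
storage_account_id = (
    "/subscriptions/<subscription-id>/resourceGroups/demo-rg"
    "/providers/Microsoft.Storage/storageAccounts/demostorage"
)

# Create the Private Endpoint; "blob" is the sub-resource (group ID) we connect to.
poller = network_client.private_endpoints.begin_create_or_update(
    resource_group,
    "demo-storage-pe",
    {
        "location": "westeurope",
        "subnet": {"id": subnet_id},
        "private_link_service_connections": [
            {
                "name": "demo-storage-pe-connection",
                "private_link_service_id": storage_account_id,
                "group_ids": ["blob"],
            }
        ],
    },
)
print(poller.result().provisioning_state)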

If you want to expose your own service over Private Link, you can do this by using Azure Private Link Service, but you need a Standard Load Balancer (ILB/PLB) to create the Private Link. It's an excellent service if you want to share your service with other consumers in a secure manner.
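
As a hedged illustration (again, not the article's code), the sketch below creates a Private Link Service in front of an existing Standard internal Load Balancer frontend with azure-mgmt-network; the load balancer, subnet and all names are assumed to exist and are hypothetical.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
resource_group = "demo-rg"
network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

ilb_frontend_id = (
    "/subscriptions/<subscription-id>/resourceGroups/demo-rg"
    "/providers/Microsoft.Network/loadBalancers/demo-ilb"
    "/frontendIPConfigurations/demo-ilb-frontend"
)
nat_subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/demo-rg"
    "/providers/Microsoft.Network/virtualNetworks/demo-vnet/subnets/pls-subnet"
)

poller = network_client.private_link_services.begin_create_or_update(
    resource_group,
    "demo-pls",
    {
        "location": "westeurope",
        # The Standard Load Balancer frontend that the Private Link Service exposes.
        "load_balancer_frontend_ip_configurations": [{"id": ilb_frontend_id}],
        # NAT IP configuration used for traffic coming from consumers.
        "ip_configurations": [
            {
                "name": "demo-pls-nat",
                "subnet": {"id": nat_subnet_id},
                "private_ip_allocation_method": "Dynamic",
            }
        ],
    },
)
print(poller.result().provisioning_state)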

Keep in mind that some Azure resources support only Private Link or only Service Endpoints, not both, so you should not design a solution that relies exclusively on one of them.


Concerns

This section covers the most important infrastructure and security concerns to consider when deciding which approach to use.

Connectivity

Private Link - The Azure PaaS service receives a private IP address from your VNET's address space that is used for communication with your VNET

Service Endpoint - The public IP of the Azure PaaS service is still used; the traffic between the VNET and that public IP goes over the Azure backbone network (see the sketch below)
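
For reference, a Service Endpoint is enabled at the subnet level. Below is a minimal sketch, using azure-mgmt-network, of turning on the Microsoft.Storage Service Endpoint for an existing subnet; all names are hypothetical, and a real script should also preserve the subnet's other settings (NSG, route table, delegations) in the update.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Read the current subnet so the existing address prefix can be kept in the update.
subnet = network_client.subnets.get("demo-rg", "demo-vnet", "app-subnet")

poller = network_client.subnets.begin_create_or_update(
    "demo-rg",
    "demo-vnet",
    "app-subnet",
    {
        "address_prefix": subnet.address_prefix,  # keep the existing prefix
        "service_endpoints": [
            {"service": "Microsoft.Storage", "locations": ["westeurope"]}
        ],
    },
)
print(poller.result().service_endpoints)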

OnPrem Connectivity

Private Link - ExpressRoute and VPN tunnels can be used to extend the private connectivity to Azure PaaS services to your OnPrem networks

Service Endpoint - No native support for OnPrem integrations. Built mainly for Azure VNETs.

Cost

Private Link - The cost is based on inbound traffic, outbound traffic and the number of endpoints. Depending on the total traffic, the total cost can grow easily

Service Endpoint - No additional cost (free of charge)

Data protection

Private Link - Built-in data protection

Service Endpoint - Needs to be integrated with a Network Virtual Appliance/Firewall if exfiltration protection is required

Availability

Private Link - The number of Azure PaaS services supported by Private Link is high and grows each month - the full list of Azure PaaS services is available here.

Service Endpoint - Well supported by the core Azure PaaS services - the full list of Azure PaaS services is available here.

UDRs and NSGs 

Private Link - The traffic can bypass the Private Endpoint if you use UDRs and NSGs. Special configuration might be required (see the sketch below)

Service Endpoint - No specific overlaps exist
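
As a hedged example of that special configuration: by default, NSG and UDR rules are not applied to Private Endpoint traffic, and the subnet-level network-policies setting has to be enabled explicitly. The sketch below uses azure-mgmt-network with hypothetical names; as above, a real update should also preserve the subnet's other settings.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

subnet = network_client.subnets.get("demo-rg", "demo-vnet", "pe-subnet")

poller = network_client.subnets.begin_create_or_update(
    "demo-rg",
    "demo-vnet",
    "pe-subnet",
    {
        "address_prefix": subnet.address_prefix,  # keep the existing prefix
        # Apply NSGs and UDRs to traffic going through Private Endpoints in this subnet.
        "private_endpoint_network_policies": "Enabled",
    },
)
print(poller.result().private_endpoint_network_policies)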

Complexity

Private Link - Involves DNS updates (Azure Private DNS) and deciding where the Private Endpoint attaches to your VNET (a DNS sketch follows below)

Service Endpoint - Easy to configure and set up from the Azure Portal
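
To give an idea of the DNS part, here is a hedged sketch that creates the privatelink.blob.core.windows.net zone and links it to a VNET so that the storage account name resolves to the Private Endpoint's private IP. It assumes a recent (track 2) azure-mgmt-privatedns package; the resource group, VNET and link names are hypothetical.

from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient

dns_client = PrivateDnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create the private DNS zone used by Private Link for blob storage.
dns_client.private_zones.begin_create_or_update(
    "demo-rg",
    "privatelink.blob.core.windows.net",
    {"location": "global"},
).result()

# Link the zone to the VNET so resources inside it resolve to the private IP.
dns_client.virtual_network_links.begin_create_or_update(
    "demo-rg",
    "privatelink.blob.core.windows.net",
    "demo-vnet-link",
    {
        "location": "global",
        "virtual_network": {
            "id": "/subscriptions/<subscription-id>/resourceGroups/demo-rg"
                  "/providers/Microsoft.Network/virtualNetworks/demo-vnet"
        },
        "registration_enabled": False,
    },
).result()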

Cross-region support

Private Link - Has full support for accessing resources across regions and across Azure AD tenants

Service Endpoint - No native cross-region support


Conclusion

When security and data restrictions are your main concerns, Azure Private Link should be your first choice. It is superior to Azure Service Endpoint, even if the setup complexity is higher.
