
Secure tunnel using Hybrid Connections

I have observed an interesting trend in the last few years. Even as more and more companies migrate to cloud providers like Azure or move into the IoT era, a common requirement that still pops up for almost every client is the need for a secure tunnel - remote screen sharing, FTP, remote access and so on.

In the end the requirement is simple but very hard to fulfill: a secure tunnel (like a VPN) between their devices or systems and their backends. When you combine this with scalability and integration with different systems, it can become an expensive and complicated nightmare.
If you look at my past posts, you'll see that I have played with different solutions like Service Bus Relay and OpenSSL.

Hybrid Connections
But now, it seems that Microsoft has a surprise for us – Hybrid Connections. This is a new feature of Azure Service Bus Relay that allows us to create a point-to-point connection in a secure and reliable way.
Before jumping into code, let's see why I am so excited about it. The latency is extremely low: I established an RDP connection from the USA to a computer in Europe and was impressed that there was no noticeable difference between TeamViewer and Hybrid Connections. This bidirectional communication can pass through firewalls as long as both clients have an open port for outbound requests – ports like 443 or 80 can be used with success.

Old Port Bridge
If you remember, there was an implementation of Port Bridge done some years ago by Clemens Vasters on top of Service Bus Relay. Using that implementation, we were able to establish a secure tunnel between two endpoints.
The downside of that solution, from the implementation point of view, was its dependencies and some magic inside the source code. Some WCF plumbing was needed to support this kind of scenario. Migrating the solution to Linux was not impossible, but it would have been very expensive.

The NEW Port Bridge – no WCF dependency 
Now, the new solution implemented over Hybrid Connections no longer has this dependency. This means that we can run it successfully not only on Windows systems, but also on Linux and, why not, possibly on .NET Core as well.
Why am I so excited that there is no WCF dependency? It means that we can write a C++ or JavaScript application for this, enabling me to create secure tunnels on low-end devices that run custom Linux distributions.

Another cool thing is that the new implementation supports a list of ports that can be mapped through the port bridge. This means that we can tunnel applications that require multiple ports, not only one. With some tweaks, I think that even apps with dynamic ports might work, but it would require some extra effort.
The full sample and instructions on how to use it can be found on GitHub: https://github.com/Azure/azure-relay-dotnet/tree/master/samples/portbridge
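To make the multi-port idea concrete, here is a small hypothetical sketch (the function name and the "from:to" mapping format are my own illustration, not the sample's actual configuration) of how a list of port mappings could be parsed into a lookup table:

```javascript
// Hypothetical sketch: parse a port list such as "3389,1433,8080:80"
// into { listenPort: targetPort } pairs. A bare entry maps a port to
// itself; "from:to" maps a local listen port to a different target port.
function parsePortMap(spec) {
  const map = {};
  for (const entry of spec.split(',')) {
    const [from, to] = entry.split(':');
    const listenPort = parseInt(from, 10);
    const targetPort = to === undefined ? listenPort : parseInt(to, 10);
    if (Number.isNaN(listenPort) || Number.isNaN(targetPort)) {
      throw new Error('Invalid port entry: ' + entry);
    }
    map[listenPort] = targetPort;
  }
  return map;
}

console.log(parsePortMap('3389,1433,8080:80'));
```

Each local listener would then forward its TCP traffic over the shared Hybrid Connection to the matching target port on the other side.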

Playing with Hybrid Connections
Hybrid Connections allows us to do pretty cool stuff. On top of having a secure tunnel, we can send any stream of data from one location to another, with just a few lines of code.

A simple and nice sample of the client app can be found on GitHub (https://github.com/Azure/azure-relay-dotnet/blob/master/samples/simple/Client/Program.cs). As we can see, we just need to open a Hybrid Connection and read or write content. Using HybridConnectionClient we can open one stream to read content and another one to write. It is that simple to establish stream communication between two endpoints over the internet.
On the server side, the code is very similar. The difference is that the server needs to listen for and accept a connection over the Hybrid Connection. An important step here is not to forget to close the relay connection once you have finished what you have to do.
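Under the hood, both roles rendezvous over WebSockets at the relay. As a rough sketch, assuming the publicly documented Hybrid Connections wire protocol (the `$hc` path segment, with `sb-hc-action` set to `listen` for the server role and `connect` for the client role), the two endpoint URLs are built like this; the namespace, path and token are placeholders:

```javascript
// Sketch of the rendezvous URLs defined by the Hybrid Connections
// wire protocol. Namespace, path and token below are placeholders.
function relayUrl(namespace, path, role, token) {
  // role is 'listen' for the server side, 'connect' for the client side
  return 'wss://' + namespace + '/$hc/' + path +
         '?sb-hc-action=' + role +
         '&sb-hc-token=' + encodeURIComponent(token);
}

var listenerUrl = relayUrl('contoso.servicebus.windows.net', 'myhc', 'listen', '<sas-token>');
var senderUrl = relayUrl('contoso.servicebus.windows.net', 'myhc', 'connect', '<sas-token>');
console.log(listenerUrl);
console.log(senderUrl);
```

The HybridConnectionListener and HybridConnectionClient classes hide these details, but knowing the URL shape helps when debugging or when no SDK is available.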

JavaScript and Hybrid Connection
The super-duper thing, which we will talk more about in the next post, is the native support for WebSockets. This means that we can write code that talks directly to a Hybrid Connection, without the need for a custom library.
Just a copy-paste sample from Clemens' GitHub:
      var host = window.document.location.host.replace(/:.*/, '');
      var ws = new WebSocket('wss://cvrelaywus.servicebus.windows.net:443/$hc/public?sb-hc-action=connect');// &sb-hc-token={{token}}');
      ws.onmessage = function (event) {
        updateStats(JSON.parse(event.data));
      };

Conclusion
Things look a lot better for situations where you need to establish a tunnel between two endpoints. Even if the use cases covered are edge cases, the added value is enormous.
